“Memory Overcommit? Isn’t that a VMware capability?”
Before we talk about it, let’s take a look at the word “Overcommit” (from The Free Dictionary):
v. o·ver·com·mit·ted, o·ver·com·mit·ting, o·ver·com·mits
- To bind or obligate (oneself, for example) beyond the capacity for realization.
- To allocate or apportion (money, goods, or resources) in amounts incapable of replacement.
- To be or become overcommitted.
That phrase “beyond the capacity for realization” is important. To overcommit memory means to promise virtual machines more memory than the physical capacity we actually have.
“Is that a good thing?”
It can be, if driving up consolidation ratios (packing more virtual machines onto a physical host) matters more to you than getting decent performance out of those virtual machines.
“Dynamic Memory” is Microsoft’s solution (in Hyper-V) for doing something similar. But in this case, Microsoft does not overcommit. Instead, it reallocates memory among the VMs sharing a virtualization host based on each VM’s current memory demand.
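To make the distinction concrete, here is a minimal sketch of the two ideas. This is a hypothetical toy model, not Hyper-V’s or VMware’s actual algorithm: overcommit promises each VM its full configured size even when the total exceeds physical RAM, while demand-based allocation divides only the physical RAM among VMs in proportion to what they currently need.

```python
# Toy model (hypothetical): contrast overcommit with demand-based allocation.
PHYSICAL_MB = 8192  # physical RAM on the host

def overcommit(configured):
    """Grant every VM its full configured size, ignoring physical capacity."""
    return dict(configured)  # the total may exceed PHYSICAL_MB

def dynamic_allocate(demand):
    """Split physical RAM among VMs in proportion to current demand."""
    total = sum(demand.values())
    scale = min(1.0, PHYSICAL_MB / total) if total else 0.0
    return {vm: int(mb * scale) for vm, mb in demand.items()}

configured = {"vm1": 4096, "vm2": 4096, "vm3": 4096}  # 12 GB promised
demand     = {"vm1": 1024, "vm2": 2048, "vm3": 6144}  # 9 GB actually wanted

print(sum(overcommit(configured).values()))    # 12288 > 8192: overcommitted
print(sum(dynamic_allocate(demand).values()))  # never exceeds PHYSICAL_MB
```

In the overcommit case the host has promised 12 GB against 8 GB of RAM and must cover the gap (with paging, ballooning, and so on) when VMs actually use it; the demand-based model simply never hands out more than the host physically has.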
Today in Part 3, my friend Dan Stolts expands on this definition and these technologies for us.
Do you have Dynamic Memory? Do I? I can’t recall.
But I do remember that you can evaluate Windows Server 2012 for free and try out Hyper-V Dynamic Memory. Microsoft Hyper-V Server 2012, a FREE standalone virtualization platform, also supports Dynamic Memory.
So at least that’s something.