Right Sizing Virtual Machines – Is it really important?

I have talked about virtualization and how to configure virtual machines (VMs) for quite some time now, and I’m sure every IT professional on the planet has played with some form of virtualization by now as well. I think we all understand the basics of virtualizing physical workloads and how to assign virtual processors and memory to our VMs. One thing I still see folks struggle with, though, is the concept of “right sizing” VM configurations. What exactly do I mean by this? Let’s take a look at a simple example of some physical server workloads that we might virtualize for a fictitious company named Contoso.

Contoso is not currently virtualizing anything. They are looking to consolidate 80% of their physical servers onto 4 Hyper-V hosts (yes, I’m using Hyper-V since I work for Microsoft :)). Each of the Hyper-V host servers is configured with the following:

  • Single Intel processor with 4 cores and Hyper-Threading (a total of 8 logical processors)
  • 256 GB RAM
  • Gobs of disk space

*I’m only focusing on CPU and Memory in this article so I won’t go into details around networking here (perhaps a future article).

On one of the Hyper-V hosts, Contoso wants to host the following VMs:

  • 2 Domain Controllers
  • 2 Exchange 2010 CAS Servers
  • 2 Exchange 2010 HT Servers
  • 2 Exchange 2010 MBX Servers
  • 2 File Servers
  • 5 Web Servers
  • 1 SharePoint 2010 Server
  • 1 SQL 2008 Server

Based on the existing configurations of the physical servers, the IT administrator configures the VMs with the following CPU and memory resources:

  • Domain Controllers – 4 virtual processors and 8 GB RAM
  • Exchange 2010 CAS Servers – 4 virtual processors and 12 GB RAM
  • Exchange 2010 HT Servers – 4 virtual processors and 12 GB RAM
  • Exchange 2010 MBX Servers – 4 virtual processors and 16 GB RAM
  • File Servers – 4 virtual processors and 16 GB RAM
  • Web Servers – 4 virtual processors and 8 GB RAM
  • SharePoint 2010 Server – 4 virtual processors and 24 GB RAM
  • SQL 2008 Server – 4 virtual processors and 32 GB RAM

With the above configuration, a total of 224 GB of RAM is needed and we are “consuming” 68 virtual processors. At first glance this seems doable, but does it really make sense? When each of these servers ran on its own physical hardware, the application had full access to all of the CPU resources. In the Hyper-V world, those 68 virtual processors are all sharing the processing power of the host’s 8 logical processors. If we follow the guidance of no more than 8 virtual processors per logical processor, we have already exceeded it (68 to 8 works out to 8.5:1). That by itself is not my true concern, since 8:1 is guidance, not a hard technical limit.
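If you want to double-check the math yourself, here is a quick back-of-the-napkin sketch in Python. The VM counts, virtual processor counts, and RAM sizes come straight from the list above; nothing here is Hyper-V specific, it just totals the allocations.

```python
host_logical_processors = 8   # 4 cores with Hyper-Threading
host_ram_gb = 256

# (workload, number of VMs, vCPUs per VM, RAM per VM in GB) -- from the list above
vms = [
    ("Domain Controller", 2, 4, 8),
    ("Exchange 2010 CAS", 2, 4, 12),
    ("Exchange 2010 HT",  2, 4, 12),
    ("Exchange 2010 MBX", 2, 4, 16),
    ("File Server",       2, 4, 16),
    ("Web Server",        5, 4, 8),
    ("SharePoint 2010",   1, 4, 24),
    ("SQL 2008",          1, 4, 32),
]

total_vcpus = sum(count * vcpus for _, count, vcpus, _ in vms)
total_ram_gb = sum(count * ram for _, count, _, ram in vms)

print(f"Assigned virtual processors: {total_vcpus}")           # 68
print(f"Assigned RAM: {total_ram_gb} GB of {host_ram_gb} GB")  # 224 GB
print(f"vCPU-to-LP ratio: {total_vcpus / host_logical_processors}:1")  # 8.5:1
```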

In reality, what concerns me is that no analysis was done of the resources actually needed when moving from the physical world to the virtual world.

Let’s examine each of the workloads listed above, starting with the Domain Controllers. Does a DC really need 4 virtual processors and 8 GB of RAM? Probably not. This is a case where I would configure only 1 virtual processor for the DC, then monitor performance and adjust as necessary. From a memory perspective, I would take advantage of Dynamic Memory in Windows Server 2008 R2 SP1 and set the Startup RAM to 1 GB and the Maximum RAM to 8 GB. Once again, I would monitor the VM’s memory demand and adjust if necessary. Quite honestly, I doubt the DC would ever need 8 GB as the maximum, but we’ll go with that number since we are using Dynamic Memory.
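Just to illustrate the “monitor and adjust” idea, here is a toy heuristic I made up for this sketch; the 25% headroom figure is purely an assumption on my part, not official guidance.

```python
# Toy heuristic only: pick a Dynamic Memory Maximum from the peak demand actually
# observed while monitoring, plus some headroom. The 25% headroom is an assumption
# made for this sketch, not official guidance.

def suggest_max_ram_gb(observed_peak_demand_gb, headroom=0.25):
    return round(observed_peak_demand_gb * (1 + headroom), 1)

# Example: a DC whose memory demand never exceeded 2.5 GB during the monitoring
# window doesn't really need an 8 GB ceiling.
print(suggest_max_ram_gb(2.5))   # 3.1
```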

For the Exchange servers, the virtual processor count should be based on the Best Practices for Virtualizing Exchange Server 2010 white paper. This one is a lot harder to truly analyze without knowing the entire Exchange 2010 architecture, so let’s just say that, following the guidance in the aforementioned white paper, each of the Exchange VMs gets 2 virtual processors. From a memory perspective, Exchange is not an ideal workload for Dynamic Memory, so we need to “right size” a static allocation. In my example, I’ll assume the memory allocation is correct.

For a file server, CPU and memory are probably not as critical as disk I/O and network bandwidth. For that reason, I would likely reduce the virtual processor count to 2 and use Dynamic Memory with a Startup value of 4 GB and a Maximum of 16 GB.

Depending on what the web servers are hosting and whether we are using a scale-out approach, I may also reduce the web servers to 2 virtual processors each and use Dynamic Memory with a Startup value of 2 GB and a Maximum of 8 GB. I realize this seems somewhat arbitrary, but I would be monitoring the performance and resource utilization of the VMs to ensure resources are adequate and adjusting as necessary.
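Along the same lines, here is a rough, made-up rule of thumb for trimming virtual processors based on monitoring data; the 60% peak-utilization target is an assumption for the sketch, not Hyper-V guidance.

```python
import math

# Toy heuristic only: estimate how many virtual processors the observed load needs,
# aiming to keep peak utilization under a target. The 60% target is an assumption
# made for this sketch, not Hyper-V guidance.

def suggest_vcpu_count(current_vcpus, peak_cpu_pct, target_peak_pct=60.0):
    needed = math.ceil(current_vcpus * peak_cpu_pct / target_peak_pct)
    return max(1, min(current_vcpus, needed))

# Example: a web server VM with 4 vCPUs that peaks at 25% total CPU could run on
# 2 vCPUs under this rule -- configure 2, then keep watching it.
print(suggest_vcpu_count(4, peak_cpu_pct=25))   # 2
```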

For simplicity’s sake, I will assume the SharePoint and SQL Server configurations are already optimized. I would not recommend using Dynamic Memory for SharePoint or SQL either.

After my resizing exercise, I now have 36 virtual processors assigned and a starting memory footprint of 156 GB. Remember, there is a very good chance the physical hardware was averaging something like 10% utilization, so why would we virtualize only to assign the same amount of “overage” to the VMs? In reality, I would have been monitoring the performance of the physical servers so that I truly understood the resource utilization of each workload, and then used that data to do the “right sizing” of the VM configurations.
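Re-running the earlier arithmetic with the resized values (and treating the Dynamic Memory Startup values as the boot-time footprint) looks like this:

```python
host_logical_processors = 8

# (workload, number of VMs, vCPUs per VM, startup RAM per VM in GB) after right sizing
resized = [
    ("Domain Controller", 2, 1, 1),    # Dynamic Memory: 1 GB startup / 8 GB max
    ("Exchange 2010 CAS", 2, 2, 12),   # static memory, unchanged
    ("Exchange 2010 HT",  2, 2, 12),
    ("Exchange 2010 MBX", 2, 2, 16),
    ("File Server",       2, 2, 4),    # Dynamic Memory: 4 GB startup / 16 GB max
    ("Web Server",        5, 2, 2),    # Dynamic Memory: 2 GB startup / 8 GB max
    ("SharePoint 2010",   1, 4, 24),   # left as-is
    ("SQL 2008",          1, 4, 32),   # left as-is
]

total_vcpus = sum(n * v for _, n, v, _ in resized)
startup_ram_gb = sum(n * r for _, n, _, r in resized)

print(f"Assigned virtual processors: {total_vcpus}")   # 36
print(f"Startup RAM footprint: {startup_ram_gb} GB")   # 156 GB
print(f"vCPU-to-LP ratio: {total_vcpus / host_logical_processors}:1")  # 4.5:1
```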

Another thing to consider is high availability. If the four Hyper-V hosts are clustered to provide failover and load-balancing capabilities, then each host needs to be sized to accommodate the additional VM workloads that land on it when VMs are migrated from another host.
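As a rough illustration (assuming, purely for this sketch, that all four hosts are identical, each carries a similar VM mix, and a failed host’s VMs are spread evenly across the survivors), you can sanity-check the N+1 math like this:

```python
# Rough N+1 sanity check: if one of the four identical hosts fails, can each of the
# three survivors absorb a third of its load? Identical hosts, a similar VM mix on
# each, and an even spread of the failed host's VMs are all assumptions for this sketch.

hosts = 4
host_ram_gb = 256
host_logical_processors = 8

per_host_vcpus = 36         # from the resized example above
per_host_startup_ram = 156  # GB

survivors = hosts - 1
ram_after_failover = per_host_startup_ram + per_host_startup_ram / survivors
vcpus_after_failover = per_host_vcpus + per_host_vcpus / survivors

print(f"RAM per surviving host: {ram_after_failover:.0f} GB of {host_ram_gb} GB")   # 208 GB
print(f"vCPU-to-LP ratio per surviving host: "
      f"{vcpus_after_failover / host_logical_processors:.1f}:1")                    # 6.0:1
```

With the right-sized numbers, a failover still fits under the host’s 256 GB of RAM and stays under the 8:1 guidance; with the original 224 GB allocation per host, it would not have fit at all.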

In conclusion, it is VERY important to properly size your VMs from a CPU and memory perspective to get the most out of your virtualization infrastructure. Regular monitoring of the VMs is also important to ensure performance stays optimized.

Harold Wong