As customers started to test the waters of virtualization within their organizations, most began with non-mission-critical apps “just to be safe”. As comfort levels increased, the majority (if not all) of those non-mission-critical apps were moved into the virtual environment. There are still quite a few customers who do not feel comfortable virtualizing their Tier 1 business-critical applications, yet these are the applications that would probably have the biggest impact on overall ROI. The most common reason I hear is that the performance of these high-resource (CPU, memory, IO) workloads does not translate well into a virtualized environment. That may have been true 4 years ago, but today I would argue that, with proper planning, 99% of all workloads can indeed be virtualized.
When a SQL Server is running on a physical server with a single-socket, 4-core CPU and 64 GB of RAM, it has full access to EVERYTHING. This means the CPU, memory and even networking resources on that server are dedicated to servicing SQL Server. As I look to virtualize this workload, I need to take into account how many logical processors are in the hypervisor host (in my case, Hyper-V) and how the networking is configured for the host. If I am running other Virtual Machines (VMs) on this host, I need to be cognizant of the overall usage of logical processors, as they are now time-slicing resources for all running VMs. Also, the network traffic for all the VMs on this host is being pushed out on the same physical network adapter(s).
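As a quick sanity check before placing the VM, you can inspect both of these from the Hyper-V PowerShell module on the host. This is a minimal sketch; the exact output columns will vary with your host configuration:

```powershell
# How many logical processors the host has available to schedule VMs on
(Get-VMHost).LogicalProcessorCount

# Which physical adapter(s) back the virtual switch(es) that all VM traffic shares
Get-VMSwitch | Select-Object Name, SwitchType, NetAdapterInterfaceDescription
```

If the logical processor count is already heavily oversubscribed by existing VMs, or every virtual switch funnels into a single physical NIC, that host is a poor candidate for a busy SQL Server VM.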
In this scenario, I would avoid running my SQL Server VM on the same host as other resource-intensive VMs if at all possible. Instead, I would place some of my lower-utilization VMs on that host so that SQL Server gets more of the overall resources. I would also raise the scheduling priority of the SQL VM so it ranks higher than its neighbors when CPU is contended. I think you get the picture.
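In Hyper-V, that priority is expressed as the VM's CPU relative weight (the default is 100), and you can also reserve a share of host CPU outright. A hedged sketch, assuming a VM named "SQL01" (a placeholder name):

```powershell
# Give the SQL VM double the default scheduling weight (default RelativeWeight is 100)
Set-VMProcessor -VMName "SQL01" -RelativeWeight 200

# Optionally reserve 50% of the VM's allotted CPU capacity so neighbors cannot starve it
Set-VMProcessor -VMName "SQL01" -Reserve 50
```

Relative weight only matters under contention; the reserve is a hard guarantee, so use it sparingly or you reduce the consolidation headroom of the host.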
Another thing to take into account is the actual CPU architecture used in the hypervisor hosts. Not all quad-core processors are equal; you have to look at the maximum performance of each core. I point this out because in many organizations, the team that manages the SQL Servers is usually the one that provides the specs for those physical servers, and these may not be the same types of servers used by the team that manages the virtualization hosts.
The last item I want to point out is memory. As you start working with higher-memory-capacity servers, the NUMA architecture really comes into play. If a VM's memory is allocated across NUMA node boundaries, performance will degrade. Since the OS inside the VM is not managing the physical memory, it has no control over how memory is accessed from a NUMA perspective; it is the Hyper-V host that manages this for all VMs on that host. For these high-memory servers, care must be taken to configure NUMA settings so this does not become a negative factor.
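Two NUMA-related checks are worth doing on the host. This is a sketch of the relevant Hyper-V cmdlets; whether disabling NUMA spanning is right for you depends on how your VM memory sizes line up with the node sizes:

```powershell
# Inspect the host's NUMA topology: processors and memory per node
Get-VMHostNumaNode

# Prevent VMs from being allocated memory across NUMA nodes
# (host-wide setting; takes effect after the Hyper-V management service restarts)
Set-VMHost -NumaSpanningEnabled $false
```

With spanning disabled, a VM whose memory fits within one node gets consistent local-memory performance, but a VM sized larger than a single node may fail to start, so size the SQL VM's memory with the node boundaries in mind.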
We have done our own testing of these Tier 1 applications in a Hyper-V environment and have published the results here. Please download and read them so you can be prepared to start moving these workloads to Hyper-V if you haven’t already done so.