I thought long and hard about what I was going to write for this article. My first thought was to go into detail on monitoring memory using Performance Monitor. Then I remembered that my teammate, Chris Avis, wrote a blog post about Dynamic Memory monitoring on March 6, and I followed up on March 8 with an article on how much memory a VM thinks it has versus how much it actually has. I could go into more detail on monitoring memory on the Hyper-V host itself, watching available memory or the Pages/sec counter to see whether physical memory is running short.
I’ve decided that it does make sense to keep an eye on the following counters in Performance Monitor, as they show how much memory pressure the host is under.
Available MBytes – The amount of physical memory, in megabytes, immediately available for allocation
Page Faults/sec – The rate at which pages had to be retrieved from elsewhere (another location in memory, or disk) because they were not found in the working set
Cache Bytes Peak – The maximum amount of memory the system has used for the file system cache since startup (memory that, at its peak, was not available for other things)
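On a real host you would sample these counters with a tool such as Performance Monitor or PowerShell’s Get-Counter cmdlet. As a minimal sketch of what “keeping an eye on them” might look like, here is a small Python check over sampled values; the counter readings and thresholds below are illustrative assumptions, not recommended limits.

```python
# Minimal sketch of a memory-pressure check over sampled Perfmon values.
# The sample values and thresholds are hypothetical; tune them per host.

def memory_pressure_warnings(samples, min_available_mb=1024,
                             max_page_faults_per_sec=5000):
    """Return warnings for counter samples that suggest memory pressure.

    samples: dict mapping counter names to their latest sampled values.
    """
    warnings = []
    available = samples.get("Available MBytes")
    if available is not None and available < min_available_mb:
        warnings.append(f"Low available memory: {available} MB")
    faults = samples.get("Page Faults/sec")
    if faults is not None and faults > max_page_faults_per_sec:
        warnings.append(f"High page fault rate: {faults}/sec")
    return warnings

# Example with made-up sample values:
print(memory_pressure_warnings({"Available MBytes": 512,
                                "Page Faults/sec": 8000}))
```

The thresholds that count as “pressure” depend entirely on the workload, which is part of why watching these counters over time matters more than any single reading.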
Of course, I thought about this a bit more and realized I was overcomplicating things.
When configuring a Hyper-V server, the most common resource to become a bottleneck is memory. Even if I do everything possible with Dynamic Memory settings and assign appropriate memory weights to each VM, I am still ultimately limited by how much physical memory the server has to hand out. I can even get very granular with the NUMA settings to try to optimize memory assignment for the VMs (which will probably do more harm than good if I get it wrong), and I will still be limited by the amount of physical memory on the server.
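That ceiling is just arithmetic: the VMs’ combined memory maximums are bounded by what the host physically has, minus what the parent partition needs for itself. A back-of-the-envelope sketch (all host sizes, VM names, and per-VM figures below are hypothetical):

```python
# Sketch of the physical-memory ceiling: no matter how Dynamic Memory
# weights are tuned, combined VM demand cannot exceed host RAM.
# All figures are hypothetical examples.

HOST_PHYSICAL_MB = 65536          # a 64 GB host
HOST_RESERVED_MB = 4096           # rough allowance for the parent partition

vm_max_memory_mb = {              # per-VM Dynamic Memory "Maximum RAM"
    "web01": 16384,
    "sql01": 32768,
    "dc01": 8192,
    "build01": 16384,
}

assignable = HOST_PHYSICAL_MB - HOST_RESERVED_MB
total_max = sum(vm_max_memory_mb.values())
print(f"Assignable: {assignable} MB, VM maximums total: {total_max} MB")
if total_max > assignable:
    print("Overcommitted: under full load, some VMs will not get their maximum.")
```

In this made-up example the maximums total 72 GB against roughly 60 GB of assignable RAM, so under simultaneous load Dynamic Memory would be rationing, not granting.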
So, I’ve decided the best performance management task I can do as it relates to memory is…
Add GOBS of memory to the server so I don’t run out.
I must say, that has got to be the easiest conclusion and the smartest advice I can give anyone about optimizing memory for a Hyper-V host.