The Journey of a Thousand VMs Begins with a Few Steps





As PFEs, one of our major roles and responsibilities is to relay best practices and lessons from the field. In this post, I'll present a method – not the only method – of progressing through a Hyper-V infrastructure design. This is a high-level post, and the content should not be considered "enough" to arrive at a suitable end-result design, but hopefully it helps the reader along the virtualization path and stimulates some thoughts and discussions.

NOTE: The focus here is on server virtualization only and does not include desktop/VDI or application virtualization (such as RemoteFX or App-V). Nor does this discussion address all aspects of a Private Cloud solution.

High-Level Design Steps for a Hyper-V Deployment

  1. Benchmark your dev/test/prod server fleet and establish your candidates for virtualization

     a. Microsoft offers the free Microsoft Assessment and Planning (MAP) Toolkit, which can inventory an environment and produce very detailed reports and information to help with this (MAP is a very useful tool beyond just virtualization efforts, too)

     b. You may have your own tool(s) or may already have an established list of virtualization candidates
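Once an inventory exists, the first-pass screening often reduces to simple threshold checks. The sketch below is a minimal, hypothetical example of that idea – the server names, utilization figures, and thresholds are all illustrative assumptions, not output from any real tool:

```python
# Illustrative candidate screening: filter an inventory of physical servers
# by average utilization. All names, numbers, and thresholds are made up.

servers = [
    # (name, avg_cpu_pct, avg_ram_gb_used, ram_gb_installed)
    ("FILESRV01", 5, 1.2, 4),
    ("SQLPROD01", 70, 28.0, 32),
    ("APPSRV02", 12, 2.5, 8),
]

CPU_LIMIT_PCT = 30   # servers busier than this deserve closer analysis
RAM_LIMIT_GB = 4     # servers using more RAM than this deserve closer analysis

candidates = [
    name for name, cpu, ram_used, _ in servers
    if cpu <= CPU_LIMIT_PCT and ram_used <= RAM_LIMIT_GB
]
print(candidates)  # the lightly loaded servers make the easiest first wave
```

In practice the MAP Toolkit's reports cover this ground in far more depth; the point here is only that lightly loaded machines tend to surface first as candidates.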

  2. Determine the availability requirements of the applications/workloads/VMs

     a. Do the service levels of the applications/workloads allow for routine maintenance of the system? Example: a departmental application that is typically used during business hours only

     b. Are there requirements for the app to sustain high levels of availability? Example: a mission-critical line-of-business application that is used 24×7

  3. Consider the deployment location/environment where the VM guests will be hosted

     a. Branch office – often a single-node host deployed on fault-tolerant server hardware

     b. HA branch office – often a two-node Failover Cluster deployed on fault-tolerant server hardware

     c. Centralized data center – often one or more multi-node Failover Cluster 'farms'

  4. Determine the desired VM guest 'hardware' profile(s) – vProc, vRAM, VHD(s), vNIC(s)

     a. One approach is to create typical use-case profiles so that the number of VMs per physical host can be easily predicted/budgeted

        - Low Utilization VM – 1 vProc; 768 MB RAM; 20 GB C:\
        - Standard Utilization VM – 1 vProc; 1024 MB RAM; 40 GB C:\
        - High Utilization VM – 2 vProc; 2048 MB RAM; 60 GB C:\

     b. Another approach is to spec each VM based on detailed measurements/requirements for each particular workload. This can make more optimal use of physical host resources, but it can be more difficult to accomplish due to variations in server workloads and the additional time needed to benchmark/perfmon each application

        - Application XYZ measured out at 1 GB RAM and two vProcs
        - Application ABC measured out at 768 MB RAM and one vProc

     c. SCOM/SCVMM and the Dynamic Memory feature can help facilitate this effort
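The profile approach makes the density math trivial: multiply each profile by its planned count and compare against the host's RAM after a parent-partition reserve. A rough sketch, where the host size, reserve, and VM mix are all illustrative assumptions:

```python
# Rough VM-density estimate from the use-case profiles above.
# Host specs, reserve, and the planned mix are example values only.

profiles = {
    "low":      {"vprocs": 1, "ram_mb": 768,  "vhd_gb": 20},
    "standard": {"vprocs": 1, "ram_mb": 1024, "vhd_gb": 40},
    "high":     {"vprocs": 2, "ram_mb": 2048, "vhd_gb": 60},
}

host_ram_mb = 32 * 1024      # example host with 32 GB RAM
host_reserve_mb = 4 * 1024   # keep some RAM back for the parent partition

planned = {"low": 10, "standard": 8, "high": 4}  # example VM mix

needed_mb = sum(profiles[p]["ram_mb"] * n for p, n in planned.items())
available_mb = host_ram_mb - host_reserve_mb
print(needed_mb, available_mb, needed_mb <= available_mb)
```

The same arithmetic extends to vProc-to-core ratios and VHD storage; RAM is shown because it is usually the first resource exhausted on a Hyper-V host.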

  5. Determine the number of planned VM guests and consider future capacity needs

  6. Determine the OS for the VM hosts

     a. Microsoft Hyper-V Server 2008 R2 SP1

        - Free download
        - Command-line-only interface
        - Hyper-V role only

     b. Microsoft Windows Server 2008 R2 SP1 Core install + Hyper-V role

        - Full-featured, for-cost OS
        - Command-line-only interface
        - Hyper-V role (additional roles available/supported)

     c. Microsoft Windows Server 2008 R2 SP1 GUI install + Hyper-V role

        - Full-featured, for-cost OS
        - Full 'typical' Windows GUI interface
        - Hyper-V role (additional roles available/supported)

     d. Compare the features and limitations of the free/Standard/Enterprise/Datacenter versions of Hyper-V

     e. A few pros/cons of Core vs. GUI OS versions

        Core – pros:

        - Fewer patches than GUI = fewer reboots due to maintenance
        - Smaller attack surface than GUI
        - Fewer 'casual' logons/administration due to the lack of the typical tools/consoles available on the GUI versions of the OS

        Core – cons:

        - A separate/additional build to maintain alongside the GUI version of the OS
        - The admin skillset for managing a command-line OS is not as prevalent as for GUI versions of the OS
        - Some 3rd-party apps/agents/tools require GUI elements that Core lacks

  7. Determine the VM host storage architecture/model

     a. Single-node host: direct-attached storage (DAS) – predominantly SAS, but increasingly SSD

     b. Two-node Failover Cluster: DAS (predominantly SAS) or SAN (predominantly iSCSI or Fibre Channel)

     c. Multi-node Failover Cluster 'farm': SAN – predominantly iSCSI or Fibre Channel

  8. Determine the storage architecture details for the VM host(s)

     a. RAID requirements

     b. Controller redundancy requirements

     c. Controller cache requirements
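When weighing RAID requirements, a quick usable-capacity estimate often frames the trade-off between redundancy and cost. A minimal sketch, using the standard capacity formulas for common RAID levels (disk counts and sizes are illustrative):

```python
# Back-of-envelope usable capacity for common RAID levels.
# Disk counts and sizes below are example values only.

def usable_gb(level, disks, disk_gb):
    if level in ("raid1", "raid10"):
        return disks * disk_gb // 2       # mirrored: half the raw capacity
    if level == "raid5":
        return (disks - 1) * disk_gb      # one disk's worth of parity
    if level == "raid6":
        return (disks - 2) * disk_gb      # two disks' worth of parity
    raise ValueError(f"unknown RAID level: {level}")

print(usable_gb("raid10", 8, 300))  # 1200
print(usable_gb("raid5", 8, 300))   # 2100
```

Capacity is only half the picture – RAID 5/6 also carry a write penalty that matters for VHD-heavy workloads, which is why controller cache requirements appear alongside RAID in the checklist above.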

  9. Determine the hardware requirements of the VM host servers

     a. CPU(s)

     b. RAM

     c. Local storage

     d. NIC(s)

     e. SAN connectivity

     f. Out-of-band management of the host server(s)

     g. Consider the additional overhead of one or more failover events and the additional load from the VM guests when they are migrated onto the remaining node(s)
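That last point is essentially an N+1 sizing check: if one cluster node fails, the surviving nodes must absorb the full VM load. A minimal sketch of the check, with all node counts and memory figures as illustrative assumptions:

```python
# N+1 failover headroom check: after one node fails, can the surviving
# nodes absorb the total VM RAM? All numbers are example values only.

nodes = 4
host_ram_gb = 64          # RAM per cluster node
parent_reserve_gb = 4     # per-node reserve for the parent partition
total_vm_ram_gb = 150     # sum of RAM assigned to all VMs in the cluster

surviving_capacity_gb = (nodes - 1) * (host_ram_gb - parent_reserve_gb)
print(surviving_capacity_gb, total_vm_ram_gb <= surviving_capacity_gb)
```

If the check fails, the options are the usual ones: add nodes, add RAM per node, or trim the VM profiles from step 4.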

Finally, a few great resources for some specific Hyper-V details:



Hopefully, the information presented here provides some food for thought regarding your Hyper-V deployments.