** Newly updated to include 2012 R2 Best Practices. See 11/03/2013 blog regarding R2 updates by clicking here **
Windows Server 2012 and Windows Server 2012 R2 provided major improvements to the Hyper-V role, including increased consolidation of server workloads, Hyper-V Replica, Cluster Aware Updating (CAU), network virtualization and the Hyper-V extensible switch, just to name a few! Hyper-V 3.0, as some call it, helps organizations improve server utilization while reducing costs.
The following is a checklist I initially developed for Windows Server 2008 R2 SP1 (which can be found here: http://blogs.technet.com/b/askpfeplat/archive/2012/11/19/hyper-v-2008-r2-sp1-best-practices-in-easy-checklist-form.aspx) and overhauled with the latest release of Server 2012 and Server 2012 R2. Those of you who have used my previous checklist will notice quite a few items remaining; that’s because many of the best practices still apply to Hyper-V in Server 2012 and Server 2012 R2!!
I find having a checklist can be a great tool to use not only when reviewing an existing Hyper-V implementation, but one which can be leveraged as part of pre-planning stages, to ensure best practices are implemented from the start.
It’s important to note this is not an exhaustive compilation, rather a grouping of features/options commonly used in businesses I’ve had the pleasure of assisting.
A special thanks to Ted Teknos, Ryan Zoeller and Rob Hefner for their input/suggestions/corrections as I put this together!
So, without further ado, here’s the newly updated Hyper-V 2012 and Hyper-V 2012 R2 Best Practice Checklist!
Disclaimer: As with all Best Practices, not every recommendation can – or should – be applied. Best Practices are general guidelines, not hard, fast rules that must be followed. As such, you should carefully review each item to determine if it makes sense in your environment. If implementing one (or more) of these Best Practices seems sensible, great; if it doesn't, simply ignore it. In other words, it's up to you to decide if you should apply these in your setting.
GENERAL (HOST):
⎕ Consider using Server Core, or the Windows Minimal Interface, to reduce OS overhead, reduce the potential attack surface, and to minimize reboots (due to fewer software updates).
- Server Core information: http://msdn.microsoft.com/en-us/library/windows/desktop/hh846313(v=vs.85).aspx
- Windows Minimal Interface information: http://msdn.microsoft.com/en-us/library/windows/desktop/hh846317(v=vs.85).aspx
⎕ Ensure hosts are up-to-date with recommended Microsoft updates, to ensure critical patches and updates – addressing security concerns or fixes to the core OS – are applied.
⎕ Ensure all applicable Hyper-V hotfixes and Cluster hotfixes (if applicable) have been applied. Review the following sites and compare them to your environment, since not all hotfixes will be applicable:
Windows Server 2012:
Update List for Windows Server 2012 Hyper-V: http://social.technet.microsoft.com/wiki/contents/articles/15576.hyper-v-update-list-for-windows-server-2012.aspx
Recommended hotfixes and updates for Windows Server 2012-based failover clusters: http://support.microsoft.com/kb/2784261
A fellow Microsoft employee, Cristian Edwards, has recently posted a PowerShell script that detects which Hyper-V and Failover Clustering 2012 updates you are missing based on the list updated by the Microsoft Product Group! Check it out here: http://blogs.technet.com/b/cedward/archive/2013/05/24/validating-hyper-v-2012-and-failover-clustering-2012-hotfixes-and-updates-with-powershell.aspx
Windows Server 2012 R2:
Update List for Windows Server 2012 R2 Hyper-V: http://social.technet.microsoft.com/wiki/contents/articles/20885.hyper-v-update-list-for-windows-server-2012-r2.aspx (added 01/05/2014)
Recommended hotfixes and updates for Windows Server 2012 R2-based failover clusters: http://support.microsoft.com/kb/2920151 (added 01/05/2014)
⎕ Ensure hosts have the latest BIOS version, as well as the latest firmware for other hardware devices (such as Fibre Channel HBAs, NICs, etc.), to address any known issues and supportability concerns.
⎕ Host should be domain joined, unless security standards dictate otherwise. Doing so makes it possible to centralize the management of policies for identity, security, and auditing. Additionally, hosts must be domain joined before you can create a Hyper-V High-Availability Cluster.
· For more information: http://technet.microsoft.com/en-us/library/ee941123(v=WS.10).aspx
⎕ RDP Printer Mapping should be disabled on hosts, to remove any chance of a printer driver causing instability issues on the host machine.
- Preferred method: Use Group Policy with host servers in their own separate OU
- Computer Configuration –> Policies –> Administrative Templates –> Windows Components –> Remote Desktop Services –> Remote Desktop Session Host –> Printer Redirection –> Do not allow client printer redirection –> Set to "Enabled"
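If the hosts are not in their own OU and you want to set this locally instead, a hedged PowerShell sketch is shown below (it assumes the standard fDisableCpm policy value under the Terminal Services policy key; verify it against your own policy baseline before relying on it):
# Disable client printer redirection locally (equivalent of the GPO setting above)
$key = "HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services"
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name "fDisableCpm" -Value 1 -Type DWord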
⎕ Do not install any other Roles on a host besides the Hyper-V role and the Remote Desktop Services roles (if VDI will be used on the host).
- When the Hyper-V role is installed, the host OS becomes the "Parent Partition" (a quasi-virtual machine), and the hypervisor is placed between the parent partition and the hardware. As a result, it is not recommended to install additional (non-Hyper-V and/or VDI-related) roles.
⎕ The only Features that should be installed on the host are: Failover Clustering (if the host will become part of a cluster) and Multipath I/O (if the host will be connecting to iSCSI, Storage Spaces and/or Fibre Channel storage). (See the explanation above for why installing additional features is not recommended.)
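As a sketch, a fresh host could be built with only the role and features called out above from PowerShell (trim the feature list to what your host actually needs):
# Install the Hyper-V role plus only the features listed above, then restart
Install-WindowsFeature -Name Hyper-V, Failover-Clustering, Multipath-IO -IncludeManagementTools -Restart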
⎕ Anti-virus software should exclude Hyper-V specific files using the Hyper-V: Antivirus Exclusions for Hyper-V Hosts article, namely:
- All folders containing VHD, VHDX, AVHD, VSV and ISO files
- Default virtual machine configuration directory, if used (C:\ProgramData\Microsoft\Windows\Hyper-V)
- Default snapshot files directory, if used (%systemdrive%\ProgramData\Microsoft\Windows\Hyper-V\Snapshots)
- Custom virtual machine configuration directories, if applicable
- Default virtual hard disk drive directory
- Custom virtual hard disk drive directories
- Snapshot directories
- Vmms.exe (Note: May need to be configured as process exclusions within the antivirus software)
- Vmwp.exe (Note: May need to be configured as process exclusions within the antivirus software)
- Additionally, when you use Cluster Shared Volumes, exclude the CSV path "C:\ClusterStorage" and all its subdirectories.
- For more information: http://social.technet.microsoft.com/wiki/contents/articles/2179.hyper-v-anti-virus-exclusions-for-hyper-v-hosts.aspx
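Purely as an illustration, the exclusions can be scripted when your antivirus product exposes a PowerShell interface; the sketch below uses Windows Defender's Add-MpPreference cmdlet and hypothetical paths, so adapt it to your actual product and real VM, VHD and snapshot locations:
# Hypothetical paths - substitute your real VM, VHD and snapshot directories
Add-MpPreference -ExclusionPath "C:\ProgramData\Microsoft\Windows\Hyper-V", "D:\VMs", "C:\ClusterStorage"
Add-MpPreference -ExclusionProcess "Vmms.exe", "Vmwp.exe"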
⎕ The default path for Virtual Hard Disks (VHD/VHDX) should be set to a non-system drive, because storing VHDs on the system drive can cause disk latency and creates the potential for the host to run out of disk space.
⎕ If you choose to save the VM state as the Automatic Stop Action, the default virtual machine path should be set to a non-system drive, because a .bin file is created that matches the size of memory reserved for the virtual machine. A .vsv file may also be created in the same location as the .bin file, adding to the disk space used for each VM. (The default path is: C:\ProgramData\Microsoft\Windows\Hyper-V.)
- Note: Hyper-V in Server 2012 will now only use the .bin file if you choose to save the VM state as the Automatic Stop Action.
- Change to .bin usage in Server 2012: http://blogs.msdn.com/b/virtual_pc_guy/archive/2012/03/26/option-to-remove-bin-files-with-hyper-v-in-windows-8.aspx
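A quick sketch of moving both default paths off the system drive with PowerShell (the D:\ locations below are placeholders):
# Point the default VHD/VHDX and VM configuration paths at a non-system drive
Set-VMHost -VirtualHardDiskPath "D:\Hyper-V\Virtual Hard Disks" -VirtualMachinePath "D:\Hyper-V"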
⎕ If you are using iSCSI: In Windows Firewall with Advanced Security, enable iSCSI Service (TCP-In) for Inbound and iSCSI Service (TCP-Out) for outbound in Firewall settings on each host, to allow iSCSI traffic to pass to and from host and SAN device. Not enabling these rules will prevent iSCSI communication.
To set the iSCSI firewall rules via netsh, you can use the following command:
netsh advfirewall firewall set rule group="iSCSI Service" new enable=yes
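If you prefer PowerShell over netsh, the equivalent (assuming the built-in "iSCSI Service" rule group name shown above) would be along these lines:
Enable-NetFirewallRule -DisplayGroup "iSCSI Service"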
⎕ Periodically run performance counters against the host, to ensure optimal performance.
- Recommend using the Hyper-V performance counter template that can be extracted from the (free) Codeplex PAL application:
- Install PAL on a workstation and open it, then click on the Threshold File tab.
- Select "Microsoft Windows Server 2012 Hyper-V" from the Threshold file title, then choose Export to Perfmon template file. Save the XML file to a location accessible to the Hyper-V host.
- Next, on the host, open Server Manager –> Tools –> Performance Monitor
- In Performance Monitor, click on Data Collector Sets –> User Defined. Right-click on User Defined and choose New –> Data Collector Set. Name the collector set "Hyper-V Performance Counter Set" and select Create from a template (Recommended), then choose Next. On the next screen, select Browse and then locate the XML file you exported from the PAL application. Once done, this will show up in your User Defined Data Collector Sets.
- Run these counters in Performance Monitor for 30 minutes to 1 hour (during high-usage times) and look for disk latency, memory and CPU issues, etc.
GENERAL (VMs):
⎕ Ensure you are running only supported guests in your environment. Windows Server 2012/2012 R2 Hyper-V Supported Guest Operating Systems (updated 07/25/2014):
- Windows Server 2012 R2 supported guest operating systems – http://technet.microsoft.com/library/dn792027.aspx
- Windows Server 2012 supported guest operating systems – http://technet.microsoft.com/library/dn792028.aspx
⎕ If you are using (or plan to use) Linux in your Hyper-V environment, check out Ben Armstrong’s awesome blog, titled “What version of Linux Supports what in Hyper-V?” to see features/functionality in Hyper-V with various Linux flavors (added 06/17/2014):
GENERAL (PERFORMANCE TUNING):
⎕ (For R2) Review the “Performance Tuning Guidelines for Windows Server 2012 R2” document:
⎕ Review the "Performance Tuning Guidelines for Windows Server 2012" document: http://msdn.microsoft.com/en-us/library/windows/hardware/jj248719.aspx
PHYSICAL NICs:
⎕ Ensure NICs have the latest firmware, which often addresses known issues with the hardware.
⎕ Ensure latest NIC drivers have been installed on the host, which resolve known issues and/or increase performance.
⎕ NICs should not use APIPA (Automatic Private IP Addressing). APIPA is non-routable and not registered in DNS.
⎕ VMQ should be enabled on VMQ-capable physical network adapters bound to an external virtual switch.
- For more information:
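To check and enable VMQ from PowerShell, a sketch is below (the adapter name is a placeholder for the NIC bound to your external virtual switch):
# Show which physical adapters are VMQ-capable and whether it is enabled
Get-NetAdapterVmq
# Enable VMQ on an adapter bound to your external virtual switch
Enable-NetAdapterVmq -Name "Ethernet 2"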
⎕ TCP Chimney Offload is not supported with Server 2012 software-based NIC teaming, because TCP Chimney offloads the entire networking stack to the NIC. If software-based NIC teaming is not used, however, you can leave it enabled.
- TO SHOW STATUS:
- From an elevated command-prompt, type the following:
- netsh int tcp show global
- (The output should show Chimney Offload State disabled)
- TO DISABLE TCP Chimney Offload:
- From an elevated command-prompt, type the following:
- netsh int tcp set global chimney=disabled
⎕ Jumbo frames should be turned on and set for 9000 or 9014 (depending on your hardware) for CSV, iSCSI and Live Migration networks. This can significantly increase throughput while also reducing CPU cycles.
- End-to-end configuration must take place – NIC, SAN and switch must all support Jumbo Frames.
- You can enable Jumbo frames when using crossover cables (for Live Migration and/or Heartbeat) in a two-node cluster.
- To verify Jumbo frames have been successfully configured, run the following command from all your Hyper-V host(s) to your iSCSI SAN:
- ping 10.50.2.35 -f -l 8000
- This command will ping the SAN (e.g. 10.50.2.35) with an 8K packet from the host. If replies are received, Jumbo frames are properly configured.
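Jumbo frames are normally set per adapter through the driver's advanced properties. A hedged example follows; the adapter name is a placeholder, and the exact keyword and value (9000 vs. 9014) vary by NIC vendor, so confirm with the Get cmdlet first:
# List the advanced properties to find the exact jumbo packet keyword for your driver
Get-NetAdapterAdvancedProperty -Name "iSCSI1" -DisplayName "*Jumbo*"
# Set the jumbo packet size (keyword and value vary by vendor)
Set-NetAdapterAdvancedProperty -Name "iSCSI1" -RegistryKeyword "*JumboPacket" -RegistryValue 9014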
⎕ NICs used for iSCSI communication should have all Networking protocols (on the Local Area Connection Properties) unchecked, with the exception of:
- Manufacturer's protocol (if applicable)
- Internet Protocol Version 4
- Internet Protocol Version 6
- Unbinding other protocols (not listed above) helps eliminate non-iSCSI traffic/chatter on these NICs.
⎕ Management NIC should be at the top (1st) in NIC Binding Order. To set the NIC binding order: Control Panel –> Network and Internet –> Network Connections. Next, select the Advanced menu item, and select Advanced Settings. In the Advanced Settings window, select your management network under Connections and use the arrows on the right to move it to the top of the list.
⎕ NIC Teaming should not be used on iSCSI NIC’s. MPIO is the best method. NIC teaming can be used on the Management, Production (VM traffic), CSV Heartbeat and Live Migration networks.
- For more information on NIC Teaming:
- For more information on MPIO:
- Microsoft Multipath I/O (MPIO) Users Guide for Windows Server 2012:
- Managing MPIO with Windows PowerShell on Windows Server 2012:
⎕ If you are using NIC teaming for Management, CSV Heartbeat and/or Live Migration, create the team(s) before you begin assigning Networks.
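A minimal sketch of creating a host team from PowerShell before you start carving up networks (the team name, member NIC names and teaming mode are assumptions; pick the mode your switches actually support):
# Create a switch-independent team from two physical NICs, e.g. for Management traffic
New-NetLbfoTeam -Name "MgmtTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort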
⎕ If using aggregate (switch-dependent) NIC teaming in a guest VM, only SR-IOV NICs should be used on guest.
- For more information: http://www.microsoft.com/en-us/download/details.aspx?id=30450
⎕ If using NIC teaming inside a guest VM, follow this order:
METHOD #1:
- Open the settings of the Virtual Machine
- Under Network Adapter, select Advanced Features.
- In the right pane, under NIC Teaming, tick "Enable this network adapter to be part of a team in the guest operating system."
- Once inside the VM, open Server Manager. In the All Servers view, enable NIC Teaming from Server Manager.
METHOD #2:
- Use the following PowerShell command (Run as Administrator) on the Hyper-V host where the VM currently resides:
- Set-VMNetworkAdapter -VMName contoso-vm1 -AllowTeaming On
- This PowerShell command turns on resiliency if one or more of the teamed NICs goes offline.
- Once inside the VM, open Server Manager. In the All Servers view, enable NIC Teaming from Server Manager.
⎕ When creating virtual switches, it is best practice to uncheck Allow management operating system to share this network adapter, in order to create a dedicated network for your VM(s) to communicate with other computers on the physical network. (If the management adapter is shared, do not modify protocols on the NIC.)
Please note: we fully support and even recommend (in some cases) using the virtual switch to separate networks for Management, Live Migration, CSV/Heartbeat and even iSCSI. For example, two 10Gb NICs that are split out using VLANs and QoS.
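As a sketch of both points (switch names, adapter names, VLAN IDs and weights below are placeholders): first a dedicated external switch that is not shared with the management OS, then the converged alternative where host vNICs on a teamed 10Gb adapter are split out with VLANs and QoS weights:
# Dedicated VM switch - the management OS does not share the adapter
New-VMSwitch -Name "VM-Switch" -NetAdapterName "NIC3" -AllowManagementOS $false
# Converged example: one switch on a teamed adapter, host vNICs separated by VLAN and QoS weight
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" -AllowManagementOS $false -MinimumBandwidthMode Weight
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 20
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30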
⎕ Recommended network configuration when clustering:
Min # of Networks on Host | Host Management | VM Network Access | CSV/Heartbeat | Live Migration | iSCSI
5 | “Management” | “Production” | “CSV/Heartbeat” | “Live Migration” | “iSCSI”
** CSV/Heartbeat & Live Migration Networks can be crossover cables connecting the nodes, but only if you are building a two (2) node cluster. Anything above two (2) nodes requires a switch. **
⎕ Turn off cluster communication on the iSCSI network.
- In Failover Cluster Manager, under Networks, the iSCSI network properties should be set to “Do not allow cluster network communication on this network.” This prevents internal cluster communications as well as CSV traffic from flowing over the same network.
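The same setting can be applied with PowerShell (this assumes your iSCSI cluster network is literally named "iSCSI"; adjust to your network's name):
# Role 0 = do not allow cluster network communication on this network
(Get-ClusterNetwork "iSCSI").Role = 0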
⎕ Redundant network paths are strongly encouraged (multiple switches) – especially for your Live Migration and iSCSI network – as it provides resiliency and quality of service (QoS).
NETWORK ARCHITECTURE:
Cristian Edwards Sabathe, a fellow Microsoft employee, has put together a terrific series of blogs on network architecture. I would strongly encourage you to spend some time going through these. (added 03/07/2014)
1. Hyper-V 2012 R2 Network Architectures Series (Part 1 of ) – Introduction
5. Hyper-V 2012 R2 Network Architectures Series (Part 5 of ) – Converged Networks using Dynamic QoS
6. Hyper-V 2012 R2 Network Architectures Series (Part 6 of ) – Converged Network using CNAs
7. Hyper-V 2012 R2 Network Architectures Series (Part 7 of ) – Conclusions and Summary
VLANS:
⎕ If aggregate NIC Teaming is enabled for Management and/or Live Migration networks, the physical switch ports the host is connected to should be set to trunk (promiscuous) mode. The physical switch should pass all traffic to the host for filtering.
⎕ Turn off VLAN filters on teamed NICs. Let the teaming software or the Hyper-V switch (if present) do the filtering.
- For more information, download the Windows Server 2012 NIC Teaming (LBFO) Deployment and Management document: http://www.microsoft.com/en-us/download/details.aspx?id=30160
VIRTUAL NETWORK ADAPTERS (NICs):
⎕ Legacy Network Adapters (a.k.a. Emulated NIC drivers) should only be used for PXE booting a VM or when installing non-Hyper-V aware Guest operating systems. Hyper-V's synthetic NICs (the default NIC selection; a.k.a. Synthetic NIC drivers) are far more efficient, due to using a dedicated VMBus to communicate between the virtual NIC and the physical NIC; as a result, there are reduced CPU cycles, as well as much lower hypervisor/guest transitions per operation.
HOST DISKS:
⎕ Disks used for CSVs must be partitioned with NTFS. You cannot use a disk for a CSV that is formatted with FAT, FAT32, or Resilient File System (ReFS).
- For more information: http://technet.microsoft.com/en-us/library/jj612868.aspx#BKMK_storage
⎕ Unless your storage vendor has a different recommendation, disks used for VHD/VHDx files, including CSV’s, should use 64K formatting (Allocation Unit Size). (added 09/02/2014)
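For example, formatting a new volume destined for VHDX files with a 64K allocation unit size might look like this (the drive letter is a placeholder; follow your storage vendor's guidance if it differs):
# Format the volume with NTFS and a 64 KB allocation unit size
Format-Volume -DriveLetter E -FileSystem NTFS -AllocationUnitSize 65536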
⎕ Use caution when using snapshots. If not properly managed, snapshots can cause disk space issues, as well as additional physical I/O overhead. Additionally, if you are hosting 2008 R2 (or earlier) Domain Controllers, reverting to an earlier snapshot can cause USN rollbacks. Windows Server 2012 has been updated to help better protect Domain Controllers from USN rollbacks; however, you should still limit usage.
- For more information: http://blogs.technet.com/b/reference_point/archive/2012/12/10/usn-rollback-virtualized-dcs-and-improvements-on-windows-server-2012.aspx
- If creating an IT policy alone is not effective, you can set the snapshot path for each VM to a non-existent location, so the user gets an error if they attempt to create a snapshot.
- If snapshots are mandatory, the snapshot location should not be the host OS drive.
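A sketch of pointing a VM's snapshot files at a non-system drive (or at a deliberately non-existent path, per the note above); the VM name and path are placeholders:
# Redirect snapshot (checkpoint) files away from the host OS drive
Set-VM -Name "contoso-vm1" -SnapshotFileLocation "D:\Hyper-V\Snapshots"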
⎕ The recommended minimum free space on CSV volumes containing Hyper-V virtual machine VHD and/or VHDX files:
- 15% free space, if the partition size is less than 1TB
- 10% free space, if the partition size is between 1TB and 5TB
- 5% free space, if the partition size is greater than 5TB
- To enumerate current volume information, including the percentage free, you can use the following PowerShell command:
- Get-ClusterSharedVolume "Cluster Disk 1" | fc *
- Review the "PercentFree" output
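To sweep all CSVs at once rather than querying one disk at a time, a hedged one-liner built on the same cluster objects (property names as exposed by the FailoverClusters module):
# Percentage free for every Cluster Shared Volume in the cluster
Get-ClusterSharedVolume | ForEach-Object { $_.SharedVolumeInfo } | Format-Table FriendlyVolumeName, @{Label="PercentFree"; Expression={$_.Partition.PercentFree}}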
⎕ It is not supported to create a storage pool using Fibre Channel or iSCSI LUNs.
- For more information see:
⎕ The page file on the Hyper-V host should be managed by the OS and not configured manually.
VM DISKS:
⎕ New disks should use the VHDX format. Disks created in earlier Hyper-V iterations should be converted to VHDX, unless there is a need to move the VHD back to a 2008 Hyper-V host.
- The VHDX format supports virtual hard disk storage capacity of up to 64 TB, improved protection against data corruption during power failures (by logging updates to the VHDX metadata structures), and improved alignment of the virtual hard disk format to work well on large sector disks.
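Conversion can be done from the Hyper-V Manager Edit Disk wizard or from PowerShell; a minimal sketch with placeholder paths (the disk must not be in use, so shut the VM down or detach the disk first):
# Convert a legacy VHD to the VHDX format
Convert-VHD -Path "D:\VMs\server1.vhd" -DestinationPath "D:\VMs\server1.vhdx"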
⎕ (New R2 Feature) Shared Virtual Hard Disk: Do not use a shared VHDx file for the operating system disk. Servers should have a unique VHDx (for the OS) that only they can access. Shared Virtual Hard Disks are better used as data disks and for the disk witness.
For more information:
MEMORY:
⎕ Use Dynamic Memory on all VMs (unless not supported).
- Dynamic Memory adjusts the amount of memory available to a virtual machine, based on changes in memory demand, using a memory balloon driver, which helps use memory resources more efficiently.
- For more information:
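Enabling Dynamic Memory on an existing VM from PowerShell might look like the sketch below (the VM name and sizes are placeholders, and the VM must be powered off to change these settings):
# Enable Dynamic Memory with a 2 GB startup value and a 512 MB - 4 GB range
Set-VMMemory -VMName "contoso-vm1" -DynamicMemoryEnabled $true -MinimumBytes 512MB -StartupBytes 2GB -MaximumBytes 4GB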
⎕ Guest OS should be configured with the (minimum) recommended memory:
- 2048MB is recommended for Windows Server 2012 (e.g. 2048 – 4096 Dynamic Memory). (The minimum supported is 512 MB.)
- 2048MB is recommended for Windows Server 2008, including R2 (e.g. 2048 – 4096 Dynamic Memory). (The minimum supported is 512 MB.)
- 1024MB is recommended for Windows 7 (e.g. 1024 – 2048 Dynamic Memory). (The minimum supported is 512 MB.)
- 1024MB is recommended for Windows Vista (e.g. 1024 – 2048 Dynamic Memory). (The minimum supported is 512 MB.)
- 512MB is recommended for Windows Server 2003 R2 w/SP2 (e.g. 256 – 2048 Dynamic Memory). (The minimum supported is 128 MB.)
- 512MB is recommended for Windows Server 2003 w/SP2 (e.g. 256 – 2048 Dynamic Memory). (The minimum supported is 128 MB.)
- 512MB is recommended for Windows XP. Important: XP does not support Dynamic Memory. (The minimum supported is 64 MB.) Note: Support for Windows XP ends April 2014!
CLUSTER:
⎕ Set preferred network for CSV communication, to ensure the correct network is used for this traffic. (Note: This will only need to be run on one of your Hyper-V nodes.).
(NOTE: In order to configure CSV Redirected Access over one particular cluster network, SMB Multichannel will have to be disabled. See http://blogs.technet.com/b/cedward/archive/2013/11/06/windows-server-2012-smb-multichannel-and-csv-redirected-traffic-caveats.aspx). (added 04/15/2014.)
- The lowest metric in the output generated by the following PowerShell command will be used for CSV traffic.
- Open a PowerShell command-prompt (using “Run as administrator”)
- First, you’ll need to import the “FailoverClusters” module. Type the following at the PS command-prompt:
- Import-Module FailoverClusters
- Next, we’ll request a listing of networks used by the host, as well as the metric assigned. Type the following:
- Get-ClusterNetwork | ft Name, Metric, AutoMetric, Role
- In order to change which network interface is used for CSV traffic, use the following PowerShell command:
- (Get-ClusterNetwork "CSV Network").Metric=900
- This will set the metric of the network named "CSV Network" to 900.
⎕ Set preferred network for Live Migration, to ensure the correct network(s) are used for this traffic:
- Open Failover Cluster Manager and expand the Cluster
- Next, right-click on Networks and select Live Migration Settings
- Use the Up / Down buttons to list the networks in order from most preferred (at the top) to least preferred (at the bottom)
- Uncheck any networks you do not want used for Live Migration traffic
- Select Apply and then press OK
- Once you have made this change, it will be used for all VMs in the cluster
⎕ The Host Shutdown Time (ShutdownTimeoutInMinutes registry entry) can be increased from the default time, if additional time is needed to be certain local VMs have had enough time to shut down before the host reboots.
- Registry Key: HKLM\Cluster\ShutdownTimeoutInMinutes
- Enter minutes in Decimal value.
- Note: Requires a reboot to take effect
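A sketch of setting it from PowerShell on a cluster node (the cluster hive is loaded under HKLM\Cluster on cluster members; 30 minutes is just an example value):
# Increase the host shutdown timeout; a reboot is required for the change to take effect
Set-ItemProperty -Path "HKLM:\Cluster" -Name "ShutdownTimeoutInMinutes" -Value 30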
⎕ Run the Cluster Validation periodically to remediate any issues
- NOTE: If all LUNs are part of the cluster, the validation test will skip all disk checks. It is recommended to set up a small test-only LUN and share it on all nodes, so full validation testing can be completed.
- If you need to test a LUN running virtual machines, the LUN will need to be taken offline.
- For more information: http://technet.microsoft.com/en-us/library/cc732035(WS.10).aspx#BKMK_how_to_run
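Validation can also be kicked off from PowerShell; a minimal sketch follows (skipping the Storage category is just one way to avoid disturbing CSVs that host running VMs when a test LUN is not available):
# Validate the cluster but skip the storage tests
Test-Cluster -Ignore "Storage"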
⎕ Consider enabling CSV Cache if you have VMs that are used primarily for read requests and are less write-intensive, such as pooled VDI VMs; it can also be leveraged to reduce VM boot storms.
- For more information: http://blogs.msdn.com/b/clustering/archive/2012/03/22/10286676.aspx
HYPER-V REPLICA:
⎕ Run the Hyper-V Replica Capacity Planner. The Capacity Planner for Hyper-V Replica allows you to plan your Hyper-V Replica deployment based on the workload, storage, network and server characteristics. This tool will help you determine:
- How much network bandwidth is required between the primary and replica site?
- How much storage is required on the primary and replica site?
- What is the storage impact of enabling multiple recovery points?
⎕ Update inbound traffic on the firewall to allow TCP port 80 and/or port 443 traffic. (In Windows Firewall, enable the “Hyper-V Replica HTTP Listener (TCP-In)” rule on each node of the cluster.)
To enable HTTP (port 80) replica traffic, you can run the following from an elevated command-prompt:
netsh advfirewall firewall set rule group="Hyper-V Replica HTTP" new enable=yes
To enable HTTPS (port 443) replica traffic, you can run the following from an elevated command-prompt:
netsh advfirewall firewall set rule group="Hyper-V Replica HTTPS" new enable=yes
- For more information: http://blogs.technet.com/b/meamcs/archive/2012/08/03/windows-server-2012-part3-virtualization-enhancements-mobility-hyper-v-replica.aspx
⎕ Compression is recommended for replication traffic, to reduce bandwidth requirements.
- For more information: http://social.technet.microsoft.com/wiki/contents/articles/12794.hyper-v-compression-is-recommended-for-replication-traffic.aspx
⎕ Configure guest operating systems for VSS-based backups to enable application-consistent snapshots for Hyper-V Replica.
- For more information: http://social.technet.microsoft.com/wiki/contents/articles/12795.hyper-v-configure-guest-operating-systems-for-vss-based-backups-to-enable-application-consistent-snapshots-for-hyper-v-replica.aspx
⎕ Integration services must be installed before primary or Replica virtual machines can use an alternate IP address after a failover
⎕ Virtual hard disks with paging files should be excluded from replication, unless the page file is on the OS disk.
- For more information: http://social.technet.microsoft.com/wiki/contents/articles/12800.hyper-v-virtual-hard-disks-with-paging-files-should-be-excluded-from-replication-en-us.aspx
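A sketch of enabling replication while excluding a dedicated page-file disk (the server names, port, authentication type and paths are placeholders to adapt to your environment):
# Replicate the VM but leave the page-file VHDX out of the replica
Enable-VMReplication -VMName "contoso-vm1" -ReplicaServerName "replicahost.contoso.com" -ReplicaServerPort 80 -AuthenticationType Kerberos -ExcludedVhdPath "D:\VMs\contoso-vm1\pagefile.vhdx"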
⎕ Test failovers should be performed monthly, at a minimum, to verify that failover will succeed and that virtual machine workloads will operate as expected after failover
⎕ Hyper-V Replica requires the Failover Clustering Hyper-V Replica Broker role be configured if either the primary or the replica server is part of a cluster.
- For more information: http://blogs.technet.com/b/virtualization/archive/2012/03/27/why-is-the-quot-hyper-v-replica-broker-quot-required.aspx
⎕ Feature and performance optimization of Hyper-V Replica can be further tuned by using the registry keys mentioned in the article below:
LIVE MIGRATION:
⎕ Compression on Windows Server 2012 R2: If your Live Migration NICs are 10Gbps or less, use live migration with compression; if more than 10Gbps, use live migration with RDMA. (added 06/16/2014)
CLUSTER-AWARE UPDATING:
⎕ Place all Cluster-Aware Updating (CAU) Run Profiles on a single File Share accessible to all potential CAU Update Coordinators. (Run Profiles are configuration settings that can be saved as an XML file called an Updating Run Profile and reused for later Updating Runs. http://technet.microsoft.com/en-us/library/jj134224.aspx)
SMB 3.0 FILE SHARES:
⎕ An Active Directory infrastructure is required, so you can grant permissions to the computer account of the Hyper-V hosts.
- For more information: http://technet.microsoft.com/en-us/library/jj134187.aspx
⎕ Loopback configurations (where the computer that is running Hyper-V is used as the file server for virtual machine storage) are not supported. Similarly, running the file share in VMs that are hosted on compute nodes that will serve other VMs is not supported.
VIRTUAL DOMAIN CONTROLLERS (DCs):
⎕ Domain Controller VMs should have “Shut down the guest operating system” in the Automatic Stop Action setting applied (in the virtual machine settings on the Hyper-V Host)
⎕ Important: See “Use caution when using snapshots” under the Disk section for more information regarding snapshots.
- For more information: http://technet.microsoft.com/en-us/library/hh831734.aspx
⎕ Important: Be certain KB2855336 (released July 2013) has been installed. Note: This update rollup was reoffered on July 12, 2013 to fix an issue. For more information about this issue, see the "Known issue about this update rollup" section of the KB article.
- Domain Controller VHDs that contain the Active Directory database should have write caching disabled, to reduce the chance of AD corruption (if the database is stored on a vIDE drive). This update rollup addresses this issue for all supported Windows operating systems.
- You do not need to do anything for your virtual AD Domain Controller on a vIDE deployment once the patch is applied. AD sends down a request to see if it can disable disk caching, and when that fails it issues IOs with the FUA (Force Unit Access) bit, which is required for the integrity guarantees to work.
- For more information: http://support.microsoft.com/kb/2853952
INTEGRATION SERVICES:
⎕ Ensure Integration Services (IS) have been installed on all VMs. Integration Services significantly improve interaction between the VM and the physical host.
⎕ Be certain you are running the latest version of integration services – the same version as the host(s) – in all guest operating systems, as some Microsoft updates make changes/improvements to the Integration Services software. (When a new Integration Services version is updated on the host(s) it does not automatically update the guest operating systems.)
- Note: If Integration Services are out of date, you will see 4010 events logged in the event viewer.
- You can discover the version for each of your VMs on a host by running the following PowerShell command:
- Get-VM | ft Name, IntegrationServicesVersion, State
- NOTE: If the VM is offline, the Integration Services version will show Offline.
- If you’d like a PowerShell method to update Integration Services on VMs, check out this blog: http://gallery.technet.microsoft.com/scriptcenter/Automated-Install-of-Hyper-edc278ef
OFFLOADED DATA TRANSFER (ODX) Usage:
⎕ If your SAN supports ODX (see this post for help; also check with your hardware vendor), you should strongly consider enabling ODX on your Hyper-V hosts, as well as any VMs that connect directly to SAN storage LUNs.
IMPORTANT: VHD-based or VHDX-based virtual disks attached to a virtual IDE controller do not support this optimization, because IDE devices lack support for Offloaded Data Transfer. The virtual hard drive needs to be connected to the virtual machine via a virtual SCSI controller. (added 04/22/2014)
- To enable ODX, open PowerShell (using Run as Administrator) and type the following:
- Set-ItemProperty hklm:\system\currentcontrolset\control\filesystem -Name "FilterSupportedFeaturesMode" -Value 0
- Be sure to run this command on every Hyper-V host that connects to the SAN, as well as any VM that connects directly to the SAN.
- For more information: http://technet.microsoft.com/en-us/library/jj200627.aspx
- For more information: http://technet.microsoft.com/en-us/library/hh831375.aspx
CONVERTING VIRTUAL MACHINES:
⎕ If you are converting VMware virtual machines to Hyper-V, consider using MVMC (a free, stand-alone tool offered by Microsoft).
- Note: If you use the MVMC tool, you can also make use of the free Migration Automation Toolkit (MAT), which can help automate the conversion process.
I sincerely hope you find this blog posting useful! If you do, please forward the link on to others who may benefit!
Until next time,
Roger Osborne, Sr. PFE
@Ken – Sorry to hear you are finding issues with your current setup.
1.) You didn't mention the number of vCPUs you have assigned the VMs, but, at a minimum I'd recommend 2 vCPUs for the FS.
2.) Ensure you have installed the latest Integration Components on all VMs.
3.) Next, ensure you are using the default (synthetic) virtual NIC, and not a legacy NIC, as that will cause a great deal of overhead.
4.) You mention having 4 ea 500 GB SATA drives. What is the speed of these drives? 7,200RPM? How are those configured? RAID-1, RAID-5, RAID-10? Or are they standalone drives? Are these drives being used to host the VMs? If they are being used as standalone drives, you could potentially experience latency depending on read/write operations/sec. RAID is an excellent choice but bottlenecks can arise depending on if it's hardware RAID vs. software RAID. Hardware RAID would be considered best practice.
5.) If you are using a SAN and presenting LUNs which are hosting your VMs, be certain Jumbo Frames have been enabled on the host, switch, router, etc. — anything along the path as the data travels — to ensure you are getting maximum packet throughput. Switching from standard packet sizes (~1500) to jumbo frames (~9000) gives you an additional 6 times throughput!
6.) If you're still not seeing any gains in speed, I'd recommend running the PAL tool against your hosts to see if you can find any other bottlenecks.
7.) If you're a Premier customer, feel free to reach out to your Premier team and get some assistance in the works. Otherwise, consider opening a case with Microsoft and someone will be happy to assist you!!
Side Note: Using a virtualized File Server is perfectly acceptable in production use.
I hope this helps!!
Take care,
Roger
@ Teachmepls – I recommend looking at System Center Data Protection Manager (DPM) 2012. technet.microsoft.com/…/hh758173.aspx
@Miroslav – Yes, that was a typo. Thanks, I have corrected that on the blog post! 🙂
Hi Roger,
How about Hyperthreading? Is it recommended to enable it or disable it?
Does it affect the vCPU ratio, I mean do we need to configure 2 vCPU (1 core) when hyperthreading is enabled?
@ Jugal Piplani – Thank you for reading, and for your kind words. Be sure to stop back periodically, as information is updated from time-to-time.
Roger, regarding the use of Hyper-V Server 2012. You are correct that Hyper-V server comes with no licenses. Therefore you would use a datacenter license. However as I understood you actually can buy a Windows Server 2012 datacenter or standard license and still use the Hyper-V Server 2012 install media for deployment. Last year at MMS I also discussed this with one of the Microsoft speakers and he confirmed that this is a valid option in terms of licensing and support. In competitive virtualization environments you would also buy a datacenter license and "on paper" bind it to a host so licensing is valid, however on the host itself the Windows OS is not installed. Not that much different you would say.
Thank you for the very helpful post.
I did have a question though. If one uses pass-through disks (because of the limitation of having to take a volume offline to expand it) how does Hyper-V communicate with it? We are building a Hyper-V cluster and then a number of guest clusters within it. Our normal network is on one class B subnet and our storage network is on another – 10.2.x.x for client traffic (and this is also the network the actual hosts will be on, and the guest VM IPs as well) and then 10.5.x.x for the storage network (the one the pass-through volumes within the guest cluster nodes will be attached to through MS iSCSI on the VM, as well as dedicated 10GbE NICs on the host servers). So we want all iSCSI traffic to only flow over the iSCSI NICs. The SAN is an iSCSI SAN.
So is Hyper-V smart enough to route any data reads or writes to the file shares (which will be on the 10.5.x.x net) through the 10.5.x.x net, or will it route all of the data through the normal 10.2.x.x net in the process? I ask because that net is 1GbE instead of 10GbE.
The fact that the volumes would be offline on the host also leads to this question.
Please let me know if you need further clarification on the setup.
Thank you,
Kurt
Is there any network setting that I should check before?
Roger,
Thanks for the article.
I have a couple questions though.
1) My company is migrating from 1u dell rack servers to dell blade servers. In doing so I am going from having 8 physical 1GB nics to 2 10GB nics attached to the backbone. With only having these 2 10GB nics how would you go about setting them up. I know I can do teaming and create sub-nics more or less but from your article you suggest not using nic teaming for iscsi which is what we will be using. Also we have 3 vlan's setup for this, 1-prod, 2-iscsi, 3-LM. Any suggestions on how to go about breaking all these out to work with the 2 nics and the 3 vlans?
2) With this setup I will be running 9 hyper-v hosts with 32 cores at 2.7ghz and 256gb of ram and 2ssd's in raid 1. Would you still recommend running this setup with Hyper-V Server 2012 or should I use Server 2012 Datacenter instead? We are licensed for Datacenter so that wouldn't be an issue if it would be better. Also we will be using SCVMM for management and I am wondering if its better to setup the cluster in failover cluster manager and then add the cluster to SCVMM or to create everything in SCVMM and not use the FCM? Can you use the FCM in Hyper-V server?
Sorry for so many questions. I feel like I am the only one running Hyper-V on blades with 10GB nics cause I am not finding any info out there on that. I am also not seeing much in the way of setting up Hyper-V Server 2012 in a cluster with SCVMM.
Thanks for the help.
For the Hyper-V host OS recommendation, why not the recommendation to use Hyper-V server 2012 in case the server is dedicated as virtualization host?
With Hyper-V Server 2012 you will have better results in reducing OS overhead, reducing potential attack surface, and to minimize reboots (due to fewer software updates). Hyper-V Servers 2012 can also be used in combination with Windows Server Standard and Datacenter Licenses.
@Kgee —
1.) Yes, it is possible to use NIC teaming and separate traffic using VLANs in Windows Server 2012/Hyper-V.
2.) Your list of networks did not seem to include "Management," "CSV/Heartbeat" or "Live Migration" networks. CSV/Heartbeat and Live Migration networks will be required if you plan on testing clustering. (You may want to revisit the "Recommended network configuration when clustering" section of this blog, for more details.)
3.) I would not recommend NIC teaming your iSCSI network, as typical production implementations use SAN hardware, which works best with MPIO. (The SAN hardware/software will optimize the traffic going from the host to their device using MPIO.) Whenever possible, I recommend mimicking your production environment.
Thank you for your post, and thank you for reading!
Roger
@Dave –
For some reason the links I provided in my last post didn’t post correctly. Here are the corrected links:
Here's a couple links for you to review on creating CSVs and Highly-Available VMs with failover clustering:
blogs.technet.com/…/microsoft-virtual-academy-setting-up-clustered-shared-volumes-technet-video.aspx
blogs.technet.com/…/creating-ha-vms-for-hyper-v-with-failover-clustering-using-free-microsoft-iscsi-target-3-3.aspx
Here's a link to a great resource for setting up scale-out file servers:
blogs.technet.com/…/follow-me-and-learn-windows-server-2012-scale-out-file-servers.aspx
@Shannon – I'm pretty certain you are correct. However, for licensing related questions, I highly recommend you contact your vendor or speak to a Microsoft Volume Licensing specialist for help (http://www.microsoft.com/…/how-to-buy.aspx)
@Jason Hillman – You can certainly do what you’ve outlined; however, it’s important to note it’s not best practice to run VMs on the same drive as the host. Additionally, as this overall solution is geared towards providing remote access to users, the ideal solution would be highly-available (cluster).
I understand budgets don’t always allow for "ideal" solutions, however. 🙂 If I were in your position, and there were absolutely no other alternatives, I would probably run the VMs, along with the data VHDx drives, on the RAID-10 disks. Of course I don’t know how much the SQL server is going to be thrashing those disks…
Thank you for reading!!
@Jess — Thank you very much for your kind words! It's great hearing feedback from our visitors, and knowing the work we do has a positive impact!
Regarding the disclaimer: Understanding the content, and what's appropriate in your infrastructure, are definitely important distinctions to make, for sure! 🙂
Thanks for stopping by!
@Bruce – If you're in a Windows Server 2012 Hyper-V environment, there are honestly no good reasons to use pass-through disks. Pass-through disks create complexity, they don't allow for VM snapshots, don't work with Hyper-V Replica, etc., etc. Additionally, the performance gap between VHDX and pass-through is virtually unnoticeable.
Presenting iSCSI LUNs to a VM is perfectly acceptable.
@Jon – As to recommendations on the amounts of VM's per CSV volume, the answer is, It depends. The reality is you'll need to look at several areas:
Whether you're using iSCSI or Fibre, you have a finite maximum bandwidth. If you're using iSCSI, the bandwidth between your hosts and your SAN will vary greatly depending on the NICs you're using (1Gbps NICs vs. 10Gbps NICs).
What's happening within your VMs? If you have disk-intensive applications running (SQL, for example), you'll need to keep a close eye on IOPS. (Search Bing for IOPS calculators that are made freely available, or use your SAN manufacturer's recommended calculation, if available)
If you have VMs with large memory/CPU needs, this will reduce the overall acceptable number of VMs, due to constraints defined on the hosts (current physical memory & CPUs). You need to periodically monitor how much pressure is being put on your hosts as time goes by, or when additional VMs are added.
Great article. Love it!
@Jon Eakins – Thank you for reading! Yes, Jumbo Frames on teamed NICs work. You mention configuring jumbo frames on the NICs, but don’t forget you must also set jumbo frames on the ports of your switch (being used for LM traffic), as well. All end-to-end hardware must support and be configured to use Jumbo Frames.
@Anand –
Take a look at Set-VMNetworkAdapterVlan PowerShell command.
technet.microsoft.com/…/hh848475(v=wps.620).aspx
Here’s another that might help, as well:
technet.microsoft.com/…/hh848475
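For instance, tagging a VM's virtual NIC with an access VLAN might look like this (the VM name and VLAN ID are placeholders):
Set-VMNetworkAdapterVlan -VMName "contoso-vm1" -Access -VlanId 20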
@MohsenKian – That’s not something I’ve seen or heard of before, so I would strongly recommend opening a case with Microsoft.
@JSB_1975 – Thank you for your comment! I’m glad you found this helpful!!
@David – Thank you for your comment! As for unbinding physical NICs from virtual switches before updating the NIC drivers, that is not currently a Microsoft best practice recommendation. I would recommend asking your NIC vendor if they have a specific recommendation.
I’ve just been tasked with this and this is an EXCELLENT blog and read.
@ Dave C – Yes, I would only recommend enabling those if your SAN supports them, as you may have unexpected results. Thanks for your positive comment, and for reading!!
@Shishir Kumar – Thank you very much for your kind words, Shishir!! It’s great to hear the time and effort spent creating and maintaining this blog has helped you!
It’s nice to see some discussion around Jumbo Packets. Completely underutilized in most environments. More admins need to learn the benefits and take advantage of the increased throughput.
@Roger
Thanks for reply. How can I open this case with Microsoft?
Hi Roger
I have a HP DL380G8 Server (32 cpu+ 192GB of RAM)
Windows server 2012 R2 on base and installed 7 Hyper-V machines with windows server 2012 R2 too.
I set all my business on this solution on 7 different Hyper-v servers.
My problem is after running this solution I have network issues. After about 2 days, some of my other network devices such as access points stop working!!! or become so slow, and when I reset 3-4 of my Hyper-V clients (or unplug network cables of the server) they start working again (until 2 days after).
– I added Hardware of any Hyper-v (4 CPU and 8GB of RAM each one)
– I changed network adapters to Legacy NA
and still problem remains.
Please Help me. I’m getting mad
Regards
Mohsen
@Ben – No, I haven’t personally seen the issue you’re describing. If after updating the firmware on your NICs, as HP has suggested, you are still experiencing the issue, I’d recommend opening a case with Microsoft. I would also recommend you confirm your NIC drivers are up to date with the latest stable release.
Thanks for reading!
@David Moore – Typical Microsoft answer: It depends! 🙂 Although I’ve never witnessed an issue with enabling it, there may be circumstances where you are hosting an app that performs better when all the vCPUs carry equal I/O. In that instance, turning off Hyper-threading might provide better performance; otherwise, I generally recommend enabling it.
great article, thank you for sharing.
@ Dennis
I would recommend reviewing the Windows Server 2012 licensing here: http://www.microsoft.com/…/buy.aspx (see Get Licensing Details) to ensure you're following the Processor CAL agreement. I am in no way, shape or form a licensing guru! 🙂
@Markus – You are indeed correct. Thank you for catching that error! I have fixed it on the blog!
Roger,
Thank you so much for this list – it is really great.
I just wonder about two things.
Hyper-V Replica and page file – shouldn’t be replicated unless on the system drive. Does it mean this is best practice to replicate the system drive with the page file on it, or should I create a separate drive for the page file and not replicate it?
Once again, thank you very much.
@ vnavna01
Regarding NIC teaming within BIOS:
I would not have a recommendation that would select one over the other; both should provide a solid solution. Keep in mind, however, that in order to have true NIC teaming failover redundancy, you should ensure each NIC is plugged into separate switches.
Regarding the number of NICs for iSCSI traffic:
The answer is, It depends. The biggest factor is going to be the available bandwidth between your host and the iSCSI device. If your SAN has two 10Gbps NICs, you should consider dedicating several 1Gbps NICs (using MPIO). If there are only two 1Gbps NICs, you might want to reduce that number to two.
Regarding the number of NICs for VMs:
Again, the answer is, It depends. 🙂 How many VMs will be running on the host at any given time? More VMs (or a few with high traffic) will require more bandwidth.
Don't forget that, in addition to iSCSI traffic and VM traffic, you should account for your other networks, as well: Live Migration, Management and CSV/Heartbeat.
@Kgee – Thank you for your comment! As to your question: Yes, what you have outlined should work fine; however, I would encourage you to bring blade #1 up to 6 NICs, as well, so you can have redundancy for all your connections.
@Dave – A lot of the items in this Best Practice blog were also covered in the Hyper-V RAP, for sure. Microsoft has begun offering RAPs via RaaS (RAP as a Service).
Quick question, I have 2 Identical Servers that both have Dual 10GB SFP+ and Dual 1GB Ethernet cards. What would be the best way to set this up to handle the different duties for the NICS. I obviously want to do SMB3.0 clustering.
Thanks!
Actually, after reading more TechNet, I see that there is a way to use the pass-through disk as a SCSI disk in the VM instead of using the iSCSI initiator, so basically a third option for connecting them. I thought that pass-through and the iSCSI in the guest were part of the same solution, but now I see there are two different ways, and what I was referring to before wasn’t pass-through but rather iSCSI within the guest.
Knowing this, my guess is that using pass-through would have Hyper-V routing the storage traffic through the correct NIC. But it would be nice to get that confirmed.
Also I am thinking of doing some of the volumes (that hardly ever have to be expanded) as virtual disks in the guests. Are there issues using them alongside pass-through disks?
Thank you,
Kurt
@Fillogio – The biggest benefit of running VMs on a CSV — with 2 (or more) Hyper-V hosts — is this configuration will give you failover capabilities, in the event a Hyper-V host goes offline.
And if you're running SCVMM & SCOM, you can implement PRO tips to help manage your environment. Should a host become heavily loaded, and other hosts have resources to spare, PRO tips can help automate moving VMs off the heavily loaded host to other hosts.
Cheers!
Thanks for the post. Regarding the Guest OS and (minimum) recommended memory – the values should be in MBs, not in GBs so it's a typo, am I right? 🙂
Great Job!
Thanks for this!
@ Shane M – Thank you for reading! Glad you found the information useful!!
@NA I see this as well we will look into this.
@Maurice,
Thank you for your comment!
Regarding your questions:
1.) I'm not saying never use it in production. 🙂 I just think you need to weigh the pros & cons. Our Hyper-V team did AMAZING work on the new VHDX format; however, when using dynamically expanding VHDX files, I see the possibility for file fragmentation as the VHDX file increases in size over time, which can lead to overall reduced speed when compared to a VHDX that has been fully allocated at creation.
The second reason for recommending fixed over dynamic is to prevent your VM storage volumes from running out of disk space, due to having dynamically expanding VHDX's (suddenly and unexpectedly) maxing out the volume they're sitting on (SAN, local or otherwise), leading to VMs suddenly pausing, and creating outages. Careful planning and oversight should be adhered to when you decide to use dynamic disks.
2.) I’m not sure I’m following your point regarding turning off the firewall on the iSCSI NICs via the registry entry, as there wouldn’t be any appreciable performance gains, in my opinion. If your goal is to ensure uninhibited iSCSI traffic, my recommendation is to enable (allow) the built-in iSCSI rule within Windows Firewall, and not change the registry.
Thanks for reading!
@Dave –
It sounds like your servers will work very well as hosts. Your next step is putting together a great storage solution.
My recommendation would to implement Cluster Shared Volumes (CSVs), as this will make your virtual machines highly-available. In order to set up CSVs, however, you need to have shared storage.
To set up shared storage, you need to have a device (SAN, Scale-Out-File Servers or a NAS device that supports SMB 3.0) to share the storage you will be using in your cluster environment. Each host in a CSV environment will need to have access to the shared storage, therefore you cannot use the storage on either server (as that would give you a single source of failure).
Here's a couple links for you to review on creating CSVs and Highly-Available VMs with failover clustering:
blogs.technet.com/microsoft-virtual-academy-setting-up-clustered-shared-volumes-technet-video.aspx
blogs.technet.com/creating-ha-vms-for-hyper-v-with-failover-clustering-using-free-microsoft-iscsi-target-3-3.aspx
Here's a link to a great resource for setting up scale-out file servers:
blogs.technet.com/follow-me-and-learn-windows-server-2012-scale-out-file-servers.aspx
@Ameer –
1.) Hyper-V does a great job managing CPU workloads; as such, you shouldn't need to dedicate CPUs for the host. CPU resources, and the tuning of those resources, should only be done if you have a very specific reason for it.
2.) The answer for that is, "It depends." 🙂 How many exchange mailboxes will you be hosting, how hard will your SQL server database(s) be hit, etc. Answering those questions will help you get a general idea as to how many cores, memory, etc. you'll need in order to adequately supply the needed resources.
3.) Hyper-V in Server 2012 is excellent in reserving the necessary memory for the host server, so no manual changes are recommended. (In Server 2008 R2 there was a recommendation to change the host memory reserve, but that no longer applies to Hyper-V in Server 2012.)
@ Mark – Yes, I would consider this a recommendation that would apply to Server 2008 R2.
@ Dennis – There are certainly times when using Hyper-V Server 2012 is desirable, such as a lab environment; however, I most often recommend using Windows 2012 Datacenter (or Enterprise), as these products almost always end up being the most cost-effective, due to licensing. Hyper-V Server 2012 does not come with any licenses. http://www.microsoft.com/…/buy.aspx
@cnc – Teaming each of the four networks you outlined in your post, and plugging them into a separate switch, will definitely give you the redundancy you're after.
Switch Independent teaming is supported.
Thanks for posting!
Roger
@Greg – In this blog I state that "NICs should not use APIPA (Automatic Private IP Addressing). APIPA is non-routable and not registered in DNS." So, just to be clear, I'm not recommending disabling APIPA; I'm just stating you should not use APIPA on any host networks. 🙂
APIPA will only come into play if there is not a static entry and there is no DHCP server offering addresses on the network in which the NIC is plugged in. Therefore, my recommendation for your Hyper-V hosts would be to set static entries for each NIC, rather than rely on DHCP, etc.
I've never been a proponent of disabling features like this (or IPv6, as another example) through registry hacks, as they have a way of coming back to bite you when you least expect it. Just my $0.02 🙂
Thanks for reading, and thanks for posting!
@Marc – Create a temporary folder on the host; next set the snapshot path for each VM to the temporary folder you just created. Once complete, simply delete the folder.
I thank you for all of the information. I have two quick questions. I have a server that has the ability to do NIC teaming within the BIOS. Would that be recommended or should I stay with Windows to complete the task? Also I have 12 nics within each server how many should I put towards the ISCSI connection and how many for the virtual networks?
Thank you
Dear Roger,
Thanks for your handy checklist.
A quick question: any advantage in storing VM VHD in a smb share provided by a HA clustered Scale-Out File Server role compared to a CSV?
Cheers
@JB – The problem can also occur on W2K8 R2 but is much less common. However, Microsoft intends to ship a preventative fix for 2008 R2 Hyper-V hosts and VM guests. The current plan is to ship this fix on Patch Tuesday, August 2013.
Thanks for reading, and thanks for posting!
@NM-BG – I would recommend looking at converged networking options, such as dual (teamed) 10gb NICs.
Thanks, Benedict!
@Roger Thank you very much 🙂
Very nice article, love the detail.
@MohsenKian – If you have a Premier agreement, contact your Technical Account Manager; they can assist with getting a case opened. If you don’t have a Premier agreement, you can go here:
http://support.microsoft.com/gp/profsup/en-au
@d7krupa –
I would recommend teaming the dual 1Gbps NICs and using that for Management. Next, I would team the dual 10Gbps NICs and use those for Live Migration, iSCSI, Production and CSV/Heartbeat. Not only will you have redundancy, you'll also have excellent throughput with those 10Gbps NICs! 🙂
Good list, where is Power Configuration in BIOS and Host OS though?
http://www.wservernews.com/archives/2014-11-03.htm might help too
http://support.citrix.com/article/CTX127395 is a good write up of what I’m talking about.
This page has some weird facebook related warnings floating on it. Only showing when viewed w/ IE. Chrome etc. doesn’t show them.
SECURITY WARNING: Please treat the URL above as you would your password and do not share it with anyone.
s-static.ak.facebook.com/…/xd_arbiter.php
Roger, really a great resource! Two questions: (1) Dynamic VHDX: the "performance-tuning-guidelines-windows-server-2012.docx" from http://download.microsoft.com states on page 125 "When using the VHDX format, we recommend that you use the dynamic type because it offers resiliency guarantees in addition to space savings that are associated with allocating space only when there is a need to do so. ". You still recommend fixed size. Why? (2) Is it a bad / neutral / good suggestion to set *NdisDeviceType in the registry to value 1 (NDIS_DEVICE_TYPE_ENDPOINT) on dedicated iSCSI interfaces in order to turn off Windows Firewall on this device ("Public Network" problem)?
Is the free space recommendation for the CSV also applies for Windows 2008 R2 SP1 Hyper-V?
What software for backup vm in hyper-v ?!
Great article! Thank you for putting this together. Definitely adding this to my Hyper-V deployment bible.
Hyper-V Server is the ideal virtualization host. You obviously need to be covered for your licenses, but you can then deploy Hyper-V Server, which is lean and mean.
That's just great!!
Thank you for the great info! I wonder, for a SaaS hosting environment, can the host be configured without being part of any domain?
We have 6+"Dell R820" with Windows Server 2012 Datacenter and each we plane to configure as a Host Server.
Great post!
One question – are there any recommendations on the number of VMs on a CSV volume?
Is it, for example, acceptable to have a 5-node cluster with 100 VMs on a single CSV for this cluster?
I haven't found any such info anywhere. It would be greatly appreciated to know if there are any limitations.
Outstanding Article! This is my first time here and I'm about to check out what else you've written….but first,
Bless you for your disclaimer!
It's sometimes been exasperating training a succession of bosses and coworkers that Best Practices are meant to be "rules of thumb"; they are not necessary (nor sufficient) conditions for success.
Very nice collection of BP's Roger, thanks for sharing this.
I have one question with regards to BP's around security configuration, especially in situations where delegation is needed (Live Migration, access to shares for VMs, and remote management through PowerShell come to mind). I understand that you basically have two options:
- Kerberos delegation: seems to be the easiest, but carries with it a potential security risk if the first hop is compromised.
- Kerberos constrained delegation: although more secure, it carries the burden of management overhead, in the sense that you have to set up (and manage) constrained delegation in AD for the computer accounts that require delegation (like Hyper-V hosts and library servers, for instance). This could be quite the chore in a large environment of >100 hosts (yes, I know, it can be automated, but still).
Any thoughts on this?
BTW, it would be fantastic if SCVMM would take care of the management of this in my opinion.
Thanks,
Serge
Very Good document
For the non-OS iSCSI disks, I have been using (in Server 2008 Hyper-V) a mixture of both VHDs (on an iSCSI LUN mounted to the host) and pass-through iSCSI LUNs connected directly to the VM. Do you have any recommendations or best practices for the use of these mechanisms in Server 2012 Hyper-V?
Hey Roger,
If we don't have a dedicated GPO for our servers, is there a way to modify this via the GUI / PowerShell, as opposed to modifying the local GPO for each and every server? That would be killer. Also, as per this thread, social.technet.microsoft.com/…/630cc818-69b0-4e1c-8d65-1b895b20e203, it doesn't seem like there is an RDS standalone GUI anymore in 2012 like there was previously, right?
Thanks.
Reuvy
Thanks for the great article. This is very helpful in setting up a new cluster.
Let's say I want full Ethernet switch (and NIC, cable, etc.) redundancy for my Hyper-V cluster. I'm running shared SAS, so iSCSI connections are not an issue. I also want to dedicate NICs to my "Management", "Production", "CSV/Heartbeat", and "Live Migration" networks.
Would I go about this by:
Teaming two NICs for each of the four Hyper-V networks listed above.
Plugging each of the two NICs into a separate switch.
Connecting the various networks between the two switches.
Is it this simple, or am I missing something?
Also, I read a non-Microsoft article that indicated only Switch Dependent teaming was supported on Hyper-V clusters. Is this true, or can I use Switch Independent teaming for all four Hyper-V networks?
Thanks again.
Hi Roger
How do I set the VLAN ID on a host network adapter in Windows 2012 Core mode?
Please advise.
Regards,
Anand
We replaced a Server 2000 DC (1 GB RAM) and a Server 2003 file server (1 GB RAM) with a new Hyper-V 2012 server. The server has 16 GB RAM (4 GB allocated to each of 2 VMs); one of the VMs is the DC and the other the file server.
File server performance is horrendous. Your note advises against using a VM as a file server for compute nodes, but we are only using it for the client shares.
Client shares are used only for administrative purposes (Word, Excel, Access), though users will want to open the occasional CAD drawing with an AutoCAD reader.
Dell R520, 16 GB RAM, 4 x 500 GB SATA drives.
Max 20 users, of which 5 are regular users. Performance is bad even with no load: accessing large files, a very small Access DB, etc.
Any ideas?
Hi Roger, thanks for the great article!
One detail seems to need correction:
*128MB is recommended for Windows XP (e.g. 128 – 2048 Dynamic Memory). (The minimum supported is 64 MB) *
Windows XP unfortunately does not support Dynamic Memory, so the VM would remain at 128 MB if configured this way, which would lead to heavy paging.
Best,
Markus
Hi!
Thanks for a great article. I tried to implement the shutdown timeout (HKLM\Cluster\ShutdownTimeoutInMinutes) on some of my 2012 Datacenter servers. I set the value in the registry and rebooted the server; when the server came back up, the value had reverted to the default. Is this behaviour seen in other environments?
Cheers
What a great resource, thanks for putting this together Roger!!
Hi
Thanks for this article; it was very useful in many ways.
I have the same issue as Olle: if I modify the ShutdownTimeoutInMinutes registry key to 30, when I reboot the server it goes back to 149 (the default value).
I've tried in PowerShell too:
(Get-Cluster CLUSTER_NAME).ShutdownTimeoutInMinutes = 30
–> the value is set on each member of the cluster, but if one member reboots: same result.
Are you familiar with this behavior?
Hi There,
Great article – thanks.
You say:
"If creating an IT policy, alone, is not effective, you can set the snapshot path for each VM to a non-existent location, so user gets an error if they attempt to create a snapshot. "
I would very much like to disable snapshotting; however, if I try to set the path to a location that doesn't exist, either via the GUI or PowerShell, it errors that the path cannot be found. How do you propose setting the path to a non-existent location?
Thanks
Perfect article for me. BTW, I read somewhere that you should always install the guest OS to be the same as the host OS. Am I making that up? It doesn't make sense, but I'm about to deploy two large Hyper-V environments using 2012 Datacenter and do have the opportunity to do that. Datacenter on the guests too?
I have two testing hosts, one with 6 NICs and the other with 4 NICs. What NIC configuration would you recommend if I also wanted to deploy clustering and use iSCSI for storage? I come from the VMware world, where we could team NICs and run different services off the team, separated by different VLANs, e.g.:
6NIC host
vMotion + Service Console= Team 1 (vSwitch0)
iSCSI_02 + iSCSI_01 = Team 2 (vSwitch1)
vMware guest network = Team 3 (vSwitch2)
4NIC host
vMotion + Service Console + iSCSI_02 + iSCSI_01 = Team 1 (vSwitch0)
vMware guest network = Team 3 (vSwitch1)
Hey Roger,
KB2855336 – is this only for 2012 virtualised DC's, not 2008 R2?
Cheers,
You mentioned disabling APIPA… Is there an easy way to do this if you have 4 nodes with 8 NICs? Everything I've read says that you need to add a registry key to each NIC to disable it!
APIPA can be disabled by adding the "IPAutoconfigurationEnabled" DWORD registry entry with a value of 0x0 to the following registry key:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\<Adapter GUID>
Note: The Adapter GUID subkey is a globally unique identifier (GUID) for the computer's LAN adapter. (A PowerShell sketch for applying this across all interfaces follows below.)
Thanks,
Greg
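For what it's worth, here is a rough PowerShell sketch that applies that value to every TCP/IP interface on a host. Treat it as an assumption-laden convenience to run per node and test first, not an official procedure:
# Disable APIPA on every TCP/IP interface by setting IPAutoconfigurationEnabled = 0
$base = 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces'
Get-ChildItem $base | ForEach-Object {
    New-ItemProperty -Path $_.PSPath -Name IPAutoconfigurationEnabled -Value 0 -PropertyType DWord -Force | Out-Null
}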
Roger,
I am a bit perplexed by the many options available. What I have to deal with is two new Dell servers that are exactly the same build. No SAN and no file server, just two nodes on a network connected to AD.
As I mentioned earlier, 4 NICs: two 10 Gb SFP+ and two 1 Gb. For storage I have 8 drives at 2 TB apiece, RAID 50 through a PERC 710 card.
My question is how I should set up the storage. Should I use clustering? Should I create iSCSI virtual disks? What would be the best possible config?
Thank you.
Hi Roger,
Thanks for the great article, Roger. I have a couple of questions regarding Windows 2012 Hyper-V:
1) Is there any recommendation on how many cores should be dedicated to the host server and excluded from the vCPU-to-logical-core calculation?
2) Is there any recommendation on the vCPU ratio when we calculate the logical cores needed for virtual servers (Exchange, SQL, …)?
3) My last question: is there any recommendation for the memory that should be dedicated to the host server (used for the Hyper-V OS and Hyper-V services)?
Thanks in advance.
Thanks Roger… This is definitely a great article and very handy.
Roger,
Great BP! Has been immensely helpful. Do you plan on updating this for 2012 R2?
@ Patrick – Yes, I will provide an update to show any changes in Best Practices as a result of R2. I plan on doing this in the next couple of weeks. Thank you for your comment!
@patrik996 –
I would recommend reviewing the following blog regarding Hyper-Threading: blogs.technet.com/…/to-hyper-thread-or-not-that-is-the-question.aspx
Thank you for your comment!
I am a late bloomer to this thread. I think the term for what I am wanting to do is nested virtualization?
I want to set up a POC for my company, but unfortunately all I can use are VMware VMs for my hosting environment. I am also running System Center 2012 and want to get Azure Pack running with VMM utilizing Hyper-V. The question is: although unsupported, is it possible to get Hyper-V running on a VMware guest OS?
@Jason – I've heard that is possible; however, I have not tried doing that. And as you stated, if it is possible, it would not be a supported configuration. 🙂
Well done!!
@ Fabio – Thank you! It's always great to hear from readers who find this blog helpful!
Great article Roger, any chance to have a Server 2012 R2 update to this very useful check-list?
Thanks
@Marco – I am working on a blog to address the Best Practices changes/updates question related to R2, which I hope to have published in the very near future. Thank you for your comment!
Regarding Dynamic Memory: we have several small business customers running Windows 2012 Hyper-V with maybe 4-8 VMs using Dynamic Memory. Do you suggest setting "Maximum Memory" for all VMs so you don't overcommit your physical memory? For example, if you have 32 GB of physical RAM and 4 VMs, do you set each to an 8 GB maximum? Or would you rather recommend not setting a maximum and instead monitoring the VMs and the physical host to see if you ever reach 32 GB used, and taking action after that (adding more physical memory, etc.)?
Great Article, is my current multi-tenant cloud baseline. Kudos!
regarding CSV, I have 3 questions both for Win2012 and Win2012R2, can you help?
1) What's the maximum recommended size for a CSV? Should I split CSVs every 2 TB, or can I go higher?
2) I'm formatting CSV LUNs with 64K allocation units instead of the default; is this OK?
3) I'm separating every "customer" with a CSV, so CSVs go from 300 GB to 1.5 TB depending on the customer's size (number of VMs and VHDXs); I currently have over 20 CSVs per cluster. Would it be better to mix them all into standardized-size CSVs?
I am sure I have read on another TechNet blog some advice about disabling NetBIOS and DNS registration on the various NICs, such as iSCSI, CSV, etc., and having those enabled only on the management NIC.
Is this still best practice? I think you sometimes get strange things happening if NetBIOS is enabled on all networks rather than just management.
@Jones – I think it's a good idea to set max memory whenever running resource intensive applications, such as SQL (which tends to take every MB you offer it). However, the big benefit to Dynamic Memory is that memory resources can also be reduced from VMs that are no longer consuming more memory than the minimum allocated amount, so your host can reuse that memory, as needed.
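To illustrate that kind of cap, here is a minimal Set-VMMemory sketch; the VM name and sizes are example values only:
# Enable Dynamic Memory with an 8 GB ceiling on a memory-hungry VM (example values)
Set-VMMemory -VMName "SQL01" -DynamicMemoryEnabled $true -StartupBytes 1GB -MinimumBytes 512MB -MaximumBytes 8GB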
@juan quesada – Thank you for your kind words!
Regarding your questions: I would recommend you consult with your storage vendor for all three. (e.g. Your storage vendor may recommend you configure each LUN with one partition and place one Cluster Shared Volume on it.)
I will tell you, however, what I've generally seen has been 2TB CSVs. You can go higher using GPT, however.
Based on what you've described, I think keeping your current practice (separating every customer with a CSV) makes good sense. You're able to grow each CSV as needed and aren't allocating 1.5TB to a customer who you expect to only consume <300GB, which gives you room to grow your multi-tenant solution! 🙂
Thanks for reading!
@david hood – Thanks for reading, David!
If all your networks are on separate VLANs, NICs, such as iSCSI, aren't able to register their DNS information; however, unbinding the manufacturers protocol (if applicable), IPv4 and IPv6 protocols (on the iSCSI NICs) will help eliminate non-iSCSI traffic/chatter.
Best practice for NICs used for VM traffic is to uncheck "Allow management operating system to share this network adapter" in the Virtual Switch Manager. When that is done, all protocols are automatically stripped from the NIC except for the manufacturer's protocol (if applicable) and the Hyper-V Extensible Virtual Switch (a PowerShell equivalent is sketched below).
The CSV network requires Microsoft networking protocols remain bound.
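As a sketch, this is the PowerShell equivalent of unchecking that box when creating the switch; the switch and adapter names are assumptions:
# Create an external virtual switch dedicated to VM traffic; the host keeps no vNIC on it
New-VMSwitch -Name "VM-Switch" -NetAdapterName "ConvergedTeam" -AllowManagementOS $false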
Just found this for the first time – thanks for the best practices checklist! I was pleased to see that I have most of this already done, but will be making some small tweaks as I agree with a few things you've pointed out that I hadn't thought of.
One thing I am not sure about is to use fixed disks for all VMs. I recently attended a Hyper-V 2012 IT Camp led by @Tommy_Patterson and he shared that there is a negligible difference in performance with fixed vs. dynamic disks in Hyper-V 2012. So I am continuing to use dynamic disks for my VHDXs.
My only question is regarding CSV/Heartbeat network. Currently I do not have a separate network for CSV/Heartbeat. Do you have any links to share regarding this specific network?
Thanks again!
@eric szymczyk – There might be a negligible difference in performance between fixed and dynamic disks as far as the .vhd/x itself, but performance can still suffer because of misalignment. Consult your storage vendor:
communities.netapp.com/…/performance-and-dynamic-vhd-s
eric szymczyk – Thank you for the positive feedback!
Microsoft has done some AMAZING work with the new VHDx format, for sure. That being said, however, I always try to lean towards reducing potential pitfalls/outages, which come back to bite you when you least expect it, and that leads me to why I don't recommend dynamic VHDx files in production:
If LUN/Volume/VHDx sizing isn't watched closely and the dynamic VHDx files' total maximum size exceeds that of the LUN/Volume, you can quickly run out of space and cause VMs to go offline. Before I would feel comfortable recommending dynamic over fixed, there would need to be a very rigid process in place to mitigate that risk (see the sketch below for one way to check it). Fixed disks don't introduce this risk, so that's why I recommend fixed over dynamic! 🙂
Thanks for the comment!
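If you do run dynamic disks, a rough sketch of that kind of check is below: it compares the fully expanded size of every VHDX on a volume with the volume's capacity. The paths and drive letter are placeholders.
# Sum the maximum (fully expanded) size of every VHDX on a volume and compare to its capacity
$vhds  = Get-ChildItem D:\VMs -Recurse -Filter *.vhdx | ForEach-Object { Get-VHD -Path $_.FullName }
$maxGB = ($vhds | Measure-Object -Property Size -Sum).Sum / 1GB
$capGB = (Get-Volume -DriveLetter D).Size / 1GB
"If every VHDX filled up: {0:N0} GB needed vs {1:N0} GB on the volume" -f $maxGB, $capGB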
Thanks so much for taking the time and sharing this valuable knowledge.
Is there a way to quickly create fixed VHDX files? The time commitment to create large fixed VHDX files is what forced us to try using dynamic disks in production. The old VHD tool hasn't been updated to support VHDX files: blogs.msdn.com/…/quick-fixed-vhd-creation-tool.aspx.
@glenn
To create VHD(X) files quickly, it's best to offload that to an ODX-enabled SAN. Most major manufacturers have ODX-enabled SANs (Dell Compellent, HP 3PAR, etc.). SCVMM 2012 R2 can now use ODX for VM deployment too. Essentially, ODX offloads the task of VHDX creation to the SAN, which is faster than creating it directly on a host.
@glenn – Ryan is exactly right! ODX will definitely be your friend! A large number of SAN vendors offer support for this technology. To give you an example: With ODX, I was able to create a 100GB fixed VHDx in less than 3 seconds!
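For reference, a minimal example of creating a fixed VHDX (the path and size are placeholders); on an ODX-capable array the heavy lifting is offloaded transparently, so the same command simply completes much faster:
# Create a 100 GB fixed VHDX; with ODX the creation work is offloaded to the SAN
New-VHD -Path "C:\ClusterStorage\Volume1\VMs\Data01.vhdx" -SizeBytes 100GB -Fixed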
Could you expand on VHDx location, RAID levels, multiple arrays, spindles, and servers with multiple VMs for the typical single-server or standard multi-server configurations where the host OS and guests are all located on the same box and no network storage is used? In the SMB space we have budgets and space requirements where the likely deployment is a standard tower server with a set of internal drives, usually on a single RAID controller; however, it's very common to need to deploy multiple VMs on those boxes. A perfect example: since MSFT decided we don't need SBS in the small business space (under 75 users), we are left needing to deploy an AD server and an Exchange server. That's two VMs, same box, both needing a pretty good amount of I/O.
Hi Roger, thank you for the helpful blog.
In addition, I would like to ask about a problem: connecting one of my network adapters to one or more virtual machines blocked me from connecting my host to the logical network. To give a brief hint about my network adapters: there are 2 NICs in a Microsoft team, connected to an external virtual network to allow management and to a logical switch (virtual switch). Could this connectivity be the cause?
Thank you,
Alex
Pingback from Microsoft – Preparing for free exam 74-409 Server Virtualization with Windows Server Hyper-V and System Center | blog.bjornhouben.com
Pingback from The Most Popular Posts of 2013 and Belated Birthday – Ask Premier Field Engineering (PFE) Platforms – Site Home – TechNet Blogs
Great post. As auditors we have been searching for material like this to help us when it comes to assessing our clients using Hyper-V. Thank you for making my job a little easier!
Thanks For Post. Very helpfull Information.
Hi Roger,
Have you come across issues using Windows Network Teaming and a Virtual Switch, where some sort of LACP failure on one adapter starts affecting network communications to and from VMs? Instead of recovering and utilising the other teamed adapters properly, the issues continue until some manual intervention takes place. This seems to be the case with both Switch Dependent and Independent modes with HP Proliant Servers in my experience so far. HP have recommended a firmware update for the network cards in question (331FLR and 331T). Was just hoping to get your opinion.
If I’m not wrong these best practice items were extracted from the Hyper-V-RAP program? Which tool do we use now for HVRAP?
Thanks!
Dave
Apologies if this is covered and I missed it. What are the recommendations on the page file for the GUESTS? Should there be one? Should it be on a dedicated VHDX? What size should it be?
thanks,
-Ravi
Does anyone have any reference material that explains the 2012 Hyper-V Host page file, so I can get an understanding as to how it is different from 2008 r2? This article recommends keeping page file as ‘auto’. In 2008 R2 traditionally I would set a page file maximum (i.e. 4GB) on hosts with large amount of memory allocated.
Excellent checklist Roger, many thanks! One tip that I’d like to see added; unbind virtual switches before updating Hyper-V host NIC drivers (and reattach after updating). We were an early Hyper-V shop and this one bit us in the donkey a few times, I believe it’s still a current issue with the 2012 products.
Nice article; I will be sure to refer back to this as we manage many Hyper-V sites. Can you comment on CPU Hyper-Threading: should it be turned on or off?
Great post! On our 2012 cluster we were told by MS Support to physically disable ODX and TRIM if your SAN does not support it.
Great post, Roger. I wish I had seen this last year when we were consolidating our Hyper-V environment.
Pingback from Windows Server 2012 Hyper-V Best Practices (In Easy Checklist Form) | Windows Vmware Topics
Roger great post.
I have a question on NICs and MS teaming. We use eight 1 Gb NICs:
1 Mgmt, 2 HV Guest (2012 R2 teaming), 1 CSV, 2 iSCSI (not teamed), 2 Live Migration (2012 R2 teaming).
My question is around the Live Migration network. Those NICs are teamed to get a little more throughput. Do you know if MS teaming set to LACP/Dynamic supports jumbo frames? I enabled jumbo frames on both NICs and then added them to a team. When I ping with "ping -S server1_LivMig_NIC -f -l 8000 server2_LivMig_NIC" I receive "Packet needs to be fragmented but DF set."
Thanks for your reply, Roger. Jumbo frames are enabled on the switch and on the physical NICs that compose the Microsoft teamed adapter. I did not see an option on the teamed adapter to change the frame size. Jumbo frames work on non-teamed adapters. The switch is all jumbo or none, so it is not set per VLAN or switchport. Thanks again for your help!
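In case it's useful, a hedged sketch of how jumbo frames are typically set on the physical team members and then verified; the adapter names and addresses are placeholders, and the exact registry keyword and value depend on the NIC driver:
# Enable ~9K jumbo frames on both physical NICs in the team, then verify end to end
Set-NetAdapterAdvancedProperty -Name "LM-NIC1","LM-NIC2" -RegistryKeyword "*JumboPacket" -RegistryValue 9014
ping -S 10.0.1.11 -f -l 8000 10.0.1.12   # should now succeed without "needs to be fragmented"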
Roger, can’t describe in words the help I have received from this blog as well as your timely personal responses. Thanks a lot. May God Bless you and you keep helping people like me.
Thanks for the great information!
I do have a question regarding storage as a result of my reading though.
I am creating a host that will house two Windows Server 2012 R2 VMs on it. One will be a SQL Server and the other will be Remote Desktop Server. The software vendor has provided me the following specifications:
DB Server (15K Drives):
RAID Channel 1 (320GB x 2) – RAID 1 – (C) OS + SQL Server App – (D) SQL Server Log Files
RAID Channel 2 (500GB x 3) – RAID 5 – (E) SQL Server Data Files
RDS Server (10K Drives):
(500GB x 2) – RAID 1 – (C) OS + Application Files
My trouble is in converting the physical requirements into a highly performing solution on the host and VMs. My host has a SAS RAID controller and (8) 15K 600 GB SAS drives. I had planned on using (2) in a RAID 1 mirror (600 GB) and (6) in a RAID 10 (1800 GB).
I was thinking I could create a (100 GB) virtual drive on the RAID 1 array for the host, then a second (500 GB) virtual drive on the RAID 1 array to house the VHDX and VM configuration files for the RDS server. The RAID 10 array could be used for all of the partitions for the DB server.
I have no access or budget for external storage and am supposed to do the best I can with what I have. I would love input and I apologize if this is posted in the wrong location.
Hi,
Thanks for the good article.
I have a question: you mention 5 networks, but if I want all of these networks, the total number of NICs would be 10.
In a rack-mount server it's difficult to get 10 NICs.
So what would be the best practice for sharing the NICs across all the traffic? Please reply.
First off, great checklist. I'm currently trying to deploy Hyper-V myself and wanted some advice. I am creating a test / soon-to-be production environment with 2 blade servers. One blade has 4 x 1 Gb NICs and the other has 6 x 1 Gb NICs. What do you recommend for the Hyper-V network?
Blade 1
Nic1 – Management, VMNET
Nic2 – Live Migration, CSV
Nic3 – ISCSI MPIO
Nic4 – ISCSI MPIO
Blade 2
Nic1 – NIC team1: Management, VMNET
Nic2 – NIC team1: Management, VMNET
Nic3 – NIC team2: Live Migration, CSV
Nic4 – NIC team2: Live Migration, CSV
Nic5 – ISCSI MPIO
Nic6 – ISCSI MPIO
Thank you, Roger, for the response. Due to the chassis configuration and cost restrictions on blade 1, we can't add NICs, so unfortunately I am stuck with 4 NICs. Would you recommend teaming NIC1 and NIC2 so that Management, VMNET, Live Migration, and CSV are at least redundant, or is it not recommended to have CSV and Live Migration on a NIC team?
Great article. Thanks for sharing.
For NIC priority and usage, I saw some other articles say a dedicated cluster heartbeat network should have all protocols other than IPv4 disabled for enhanced responsiveness. In that case, since the CSV network requires the "File and Printer Sharing" and "Client for Microsoft Networks" features to function properly, is it better to separate these two networks/virtual interfaces?
In some very critical environments, we might push the cluster heartbeat to a higher priority than the management network. Is that not acceptable, or just not recommended?
Our practice for implementing a Hyper-V cluster farm is to have "team A" (4 x GbE or 2 x 10 GbE) for all Mgmt, Heartbeat, CSV and LM traffic, and another 2 x or even 4 x 10 GbE "team B" for VM virtual switches. I would also like to share that we do not build "team B" with the Windows LBFO admin; we use SCVMM to handle it, or else SCVMM will treat the teamed interface as a physical NIC, making it impossible to use a great deal of powerful features, like defining virtual switches, etc.
My suggestion is to promote SCVMM. A Hyper-V farm with SCVMM and one without are in two different worlds!
Thanks,
Lawrence
Roger, is there any way in a stock 2012r2 cluster (2, 3 or 4 nodes) to monitor disk space and send/receive email alerts when approaching a specific limit on external storage?
I think that the term CSV/Heartbeat should be changed to CSV/intracluster, since heartbeat traffic is sent over all networks enabled for cluster communication. It can make people think that heartbeat traffic is sent only through one NIC, which is not correct. The intracluster traffic is also sent through the CSV network (the one with the lowest cluster metric).
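To see which network the cluster actually prefers for CSV/intracluster traffic, a quick sketch you can run on any cluster node (the lowest metric wins):
# Lower Metric = preferred for CSV/intracluster traffic; Role 1 = cluster only, 3 = cluster and client
Get-ClusterNetwork | Format-Table Name, Role, Metric, AutoMetric -AutoSize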
I’m glad this article is here. It’s a great article, and we borrow heavily from it in our build documentation. It’s seriously invaluable.
Two things would make it easier to use. If the headings were anchors, that would make referring to them directly way easier than linking to the page and describing the heading.
And it would help if when I went to bookmark it, the name were "Windows Server 2012 Hyper-V Best Practices (In Easy Checklist Form)", and not http–blogs.technet.com-b-askpfeplat-archive-2013-03-10-windows-server-2012-hyper-v-best-practices-in-easy-checklist-form.aspx.
Likewise, if friendly names for hyperlinks were used throughout the page, "Hyper-V 2008 R2 SP1 Best Practices In Easy Checklist Form" is much easier to read and more immediately helpful than
http://blogs.technet.com/b/askpfeplat/archive/2012/11/19/hyper-v-2008-r2-sp1-best-practices-in-easy-checklist-form.aspx.
It could make sense to have a new article that combines this and Windows Server 2012 R2 Hyper-V Best Practices (In Easy Checklist Form). Or maybe a new article when the time comes for Windows Server vNext?
@Gregory.Stigers – Thank you for the AWESOME feedback!
Thanks for the post. This gives good insight when preparing to design a Hyper-V failover cluster.
I went through the recommendations and most of them seem to be in place, but I'm having issues with a bunch of VMs going to the Hyper-V boot failure screen ("boot failure: reboot and select proper device or please insert boot…") during the CAU run.
It would be of great help if you could give a bit of insight on this behaviour. Is it in some way related to the VM's Integration Services? Please advise on how to resolve this.
Thanks for the post. Do you have a best practice guide for templates on CSVs? I am having an issue where I create a template and store it in the Library, but when I go to deploy the template, it will not see the CSV of either host as an option for deployment. Instead, it wants to deploy on the Library server, which is also a Hyper-V host managed by SCVMM.
"integrated development environment (IDE) devices" should read "integrated drive electronics (IDE) devices"