Windows Server 2012 Hyper-V Best Practices (In Easy Checklist Form)



AskPFEPlat is in the process of a transformation to the new Core Infrastructure and Security TechCommunity, and will be moving to its new home by the end of March 2019. Please bear with us while we are still under construction!

We will continue bringing you the same great content, from the same great contributors, on the new platform, so please feel free to update your bookmarks accordingly. As always, thank you for continuing to read the Core Infrastructure and Security (AskPFEPlat) blog, and we look forward to providing you more great content well into the future!



** Newly updated to include 2012 R2 Best Practices. See 11/03/2013 blog regarding R2 updates by clicking here **

Windows Server 2012 and Windows Server 2012 R2 provided major improvements to the Hyper-V role, including increased consolidation of server workloads, Hyper-V Replica, Cluster Aware Updating (CAU), network virtualization and the Hyper-V extensible switch, just to name a few! Hyper-V 3.0, as some call it, helps organizations improve server utilization while reducing costs.

The following is a checklist I initially developed for Windows Server 2008 R2 SP1 (which can be found here) and overhauled with the latest release of Server 2012 and Server 2012 R2. Those of you who have used my previous checklist will notice quite a few items remaining; that’s because many of the best practices still apply to Hyper-V in Server 2012 and Server 2012 R2!

I find a checklist can be a great tool not only when reviewing an existing Hyper-V implementation, but also during the pre-planning stages, to ensure best practices are implemented from the start.

It’s important to note this is not an exhaustive compilation, rather a grouping of features/options commonly used in businesses I’ve had the pleasure of assisting.

A special thanks to Ted Teknos, Ryan Zoeller and Rob Hefner for their input/suggestions/corrections as I put this together!

So, without further ado, here’s the newly updated Hyper-V 2012 and Hyper-V 2012 R2 Best Practice Checklist!

Disclaimer: As with all Best Practices, not every recommendation can – or should – be applied. Best Practices are general guidelines, not hard-and-fast rules that must be followed. As such, you should carefully review each item to determine if it makes sense in your environment. If implementing one (or more) of these Best Practices seems sensible, great; if not, simply ignore it. In other words, it's up to you to decide if you should apply these in your setting.


⎕ Consider using Server Core, or the Windows Minimal Interface, to reduce OS overhead, reduce the potential attack surface, and to minimize reboots (due to fewer software updates).


⎕ Ensure hosts are up-to-date with recommended Microsoft updates, to ensure critical patches and updates – addressing security concerns or fixes to the core OS – are applied.

⎕ Ensure all applicable Hyper-V hotfixes and Cluster hotfixes (if applicable) have been applied. Review the following sites and compare it to your environment, since not all hotfixes will be applicable:

Windows Server 2012:

Windows Server 2012 R2

⎕ Ensure hosts have the latest BIOS version, as well as the latest firmware for other hardware devices (such as synthetic Fibre Channel adapters, NICs, etc.), to address any known issues and supportability concerns.

⎕ Hosts should be domain joined, unless security standards dictate otherwise. Doing so makes it possible to centralize the management of policies for identity, security, and auditing. Additionally, hosts must be domain joined before you can create a Hyper-V High-Availability Cluster.

· For more information:

⎕ RDP Printer Mapping should be disabled on hosts, to remove any chance of a printer driver causing instability issues on the host machine.

  • Preferred method: Use Group Policy with host servers in their own separate OU

  • Computer Configuration –> Policies –> Administrative Templates –> Windows Components –> Remote Desktop Services –> Remote Desktop Session Host –> Printer Redirection –> Do not allow client printer redirection –> Set to "Enabled"
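For hosts that are not managed by Group Policy, the same policy can be set locally through the registry. A minimal sketch, assuming the standard Terminal Services policy key (a domain GPO, if present, will override this local value):

```powershell
# Local equivalent of the "Do not allow client printer redirection" GPO setting.
# fDisableCpm = 1 disables RDP client printer mapping on this host.
$key = "HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services"
New-Item -Path $key -Force | Out-Null          # create the policy key if missing
Set-ItemProperty -Path $key -Name "fDisableCpm" -Type DWord -Value 1
```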


⎕ Do not install any other Roles on a host besides the Hyper-V role and the Remote Desktop Services roles (if VDI will be used on the host).

  • When the Hyper-V role is installed, the host OS becomes the "Parent Partition" (a quasi-virtual machine), and the Hypervisor partition is placed between the parent partition and the hardware. As a result, it is not recommended to install additional (non-Hyper-V and/or VDI related) roles.

⎕ The only Features that should be installed on the host are: Failover Cluster Manager (if host will become part of a cluster), Multipath I/O (if host will be connecting to an iSCSI SAN, Spaces and/or Fibre Channel). (See explanation above for reasons why installing additional features is not recommended.)

⎕ Anti-virus software should exclude Hyper-V-specific files, per the "Antivirus Exclusions for Hyper-V Hosts" article, namely:

    • All folders containing VHD, VHDX, AVHD, VSV and ISO files

    • Default virtual machine configuration directory, if used (C:\ProgramData\Microsoft\Windows\Hyper-V)

    • Default snapshot files directory, if used (%systemdrive%\ProgramData\Microsoft\Windows\Hyper-V\Snapshots)

    • Custom virtual machine configuration directories, if applicable

    • Default virtual hard disk drive directory

    • Custom virtual hard disk drive directories

    • Snapshot directories

    • Vmms.exe (Note: May need to be configured as process exclusions within the antivirus software)

    • Vmwp.exe (Note: May need to be configured as process exclusions within the antivirus software)

    • Additionally, when you use Cluster Shared Volumes, exclude the CSV path "C:\ClusterStorage" and all its subdirectories.

  • For more information:

⎕ The default path for Virtual Hard Disks (VHD/VHDX) should be set to a non-system drive, because using the system drive can cause disk latency and creates the potential for the host to run out of disk space.

⎕ If you choose to save the VM state as the Automatic Stop Action, the default virtual machine path should be set to a non-system drive, because a .bin file is created that matches the size of the memory reserved for the virtual machine.  A .vsv file may also be created in the same location as the .bin file, adding to the disk space used by each VM. (The default path is: C:\ProgramData\Microsoft\Windows\Hyper-V.)
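Both default paths can be redirected to a non-system drive in one step with Set-VMHost. A sketch, assuming D: is a suitable data drive (the paths shown are examples):

```powershell
# Move the default VHD/VHDX and VM configuration locations off the system drive
Set-VMHost -VirtualHardDiskPath "D:\Hyper-V\Virtual Hard Disks" `
           -VirtualMachinePath  "D:\Hyper-V"

# Verify the new defaults
Get-VMHost | Format-List VirtualHardDiskPath, VirtualMachinePath
```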

⎕ If you are using iSCSI: In Windows Firewall with Advanced Security, enable iSCSI Service (TCP-In) for Inbound and iSCSI Service (TCP-Out) for outbound in Firewall settings on each host, to allow iSCSI traffic to pass to and from host and SAN device. Not enabling these rules will prevent iSCSI communication.

To set the iSCSI firewall rules via netsh, you can use the following command:

netsh advfirewall firewall set rule group="iSCSI Service" new enable=yes

⎕ Periodically run performance counters against the host, to ensure optimal performance.

  • Recommend using the Hyper-V performance counter that can be extracted from the (free) Codeplex PAL application:

  • Install PAL on a workstation and open it, then click on the Threshold File tab.

  • Select "Microsoft Windows Server 2012 Hyper-V" from the Threshold file title, then choose Export to Perfmon template file. Save the XML file to a location accessible to the Hyper-V host.

  • Next, on the host, open Server Manager –> Tools –> Performance Monitor

  • In Performance Monitor, click on Data Collector Sets –> User Defined. Right click on User Defined and choose New –> Data Collector Set. Name the collector set "Hyper-V Performance Counter Set" and select Create from a template (Recommended) then choose Next. On the next screen, select Browse and then locate the XML file you exported from the PAL application. Once done, this will show up in your User Defined Data Collector Sets.

  • Run these counters in Performance Monitor for 30 minutes to 1 hour (during high usage times) and look for disk latency, memory and CPU issues, etc.


⎕ Ensure you are running only supported guests in your environment. Windows Server 2012/2012 R2 Hyper-V Supported Guest Operating Systems (updated 07/25/2014):

If you are using (or plan to use) Linux in your Hyper-V environment, check out Ben Armstrong’s awesome blog, titled “What version of Linux Supports what in Hyper-V?” to see features/functionality in Hyper-V with various Linux flavors (added 06/17/2014):


⎕ (For R2) Review the "Performance Tuning Guidelines for Windows Server 2012 R2" document:

⎕ Review the "Performance Tuning Guidelines for Windows Server 2012" document:



⎕ Ensure NICs have the latest firmware, which often address known issues with hardware.

⎕ Ensure latest NIC drivers have been installed on the host, which resolve known issues and/or increase performance.

⎕ NICs should not use APIPA (Automatic Private IP Addressing). APIPA is non-routable and not registered in DNS.

⎕ VMQ should be enabled on VMQ-capable physical network adapters bound to an external virtual switch.

  • For more information:
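VMQ capability and state can be checked and enabled from PowerShell. A sketch; "Ethernet 2" is an example adapter name, and your NIC driver must support VMQ:

```powershell
# Show which physical adapters support VMQ, and whether it is enabled
Get-NetAdapterVmq

# Enable VMQ on a capable adapter bound to an external virtual switch
Enable-NetAdapterVmq -Name "Ethernet 2"
```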

⎕ TCP Chimney Offload is not supported with Server 2012 software-based NIC teaming, because TCP Chimney offloads the entire networking stack to the NIC. If software-based NIC teaming is not used, however, you can leave it enabled.


  • TO VERIFY the TCP Chimney Offload state:

  • From an elevated command-prompt, type the following:

  • netsh int tcp show global

  • (The output should show the Chimney Offload State as disabled)

  • TO DISABLE TCP Chimney Offload:

  • From an elevated command-prompt, type the following:

  • netsh int tcp set global chimney=disabled

⎕ Jumbo frames should be turned on and set for 9000 or 9014 (depending on your hardware) for CSV, iSCSI and Live Migration networks. This can significantly increase throughput while also reducing CPU cycles.

  • End-to-End configuration must take place – NIC, SAN, Switch must all support Jumbo Frames.

  • You can enable Jumbo frames when using crossover cables (for Live Migration and/or Heartbeat), in a two node cluster.

  • To verify Jumbo frames have been successfully configured, run the following command from each Hyper-V host to your iSCSI SAN:

  • ping -f -l 8000 <SAN IP address>

  • This command pings the SAN with an 8K packet from the host, with fragmentation disallowed. If replies are received, Jumbo frames are properly configured.


⎕ NICs used for iSCSI communication should have all Networking protocols (on the Local Area Connection Properties) unchecked, with the exception of:

  • Manufacturer's protocol (if applicable)

  • Internet Protocol Version 4

  • Internet Protocol Version 6.

  • Unbinding other protocols (not listed above) helps eliminate non-iSCSI traffic/chatter on these NICs.

⎕ The Management NIC should be at the top (1st) in the NIC Binding Order. To set the NIC binding order: Control Panel –> Network and Internet –> Network Connections. Next, select the Advanced menu item, and select Advanced Settings. In the Advanced Settings window, select your management network under Connections and use the arrows on the right to move it to the top of the list.

⎕ NIC Teaming should not be used on iSCSI NICs; MPIO is the best method. NIC teaming can be used on the Management, Production (VM traffic), CSV Heartbeat and Live Migration networks.

  • For more information on NIC Teaming:

  • For more information on MPIO:

  • Microsoft Multipath I/O (MPIO) Users Guide for Windows Server 2012:

  • Managing MPIO with Windows PowerShell on Windows Server 2012:
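A team for the Management (or Live Migration) network can be created with one PowerShell command. A sketch; the team and member names are examples, and the teaming mode should match your switch configuration:

```powershell
# Switch-independent team of two adapters for the Management network
New-NetLbfoTeam -Name "MgmtTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts
```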

⎕ If you are using NIC teaming for Management, CSV Heartbeat and/or Live Migration, create the team(s) before you begin assigning Networks.

⎕ If using aggregate (switch-dependent) NIC teaming in a guest VM, only SR-IOV NICs should be used on guest.

⎕ If using NIC teaming inside a guest VM, follow this order:


  • Open the settings of the Virtual Machine

  • Under Network Adapter, select Advanced Features.

  • In the right pane, under Network Teaming, tick the "Enable this network adapter to be part of a team in the guest operating system" checkbox.

  • Once inside the VM, open Server Manager. In the All Servers view, enable NIC Teaming from Server Manager.



  • Alternatively, use the following PowerShell command (Run as Administrator) on the Hyper-V host where the VM currently resides:

  • Set-VMNetworkAdapter -VMName contoso-vm1 -AllowTeaming On

  • This PowerShell command turns on resiliency if one or more of the teamed NICs goes offline.


⎕ When creating virtual switches, it is best practice to uncheck Allow management operating system to share this network adapter, in order to create a dedicated network for your VM(s) to communicate with other computers on the physical network. (If the management adapter is shared, do not modify protocols on the NIC.)

Please note: we fully support and even recommend (in some cases) using the virtual switch to separate networks for Management, Live Migration, CSV/Heartbeat and even iSCSI.  For example, two 10 Gb NICs that are split out using VLANs and QoS.


⎕ Recommended network configuration when clustering: at a minimum, provide separate networks on the host for Host Management, VM Network Access, CSV/Heartbeat and Live Migration.

** CSV/Heartbeat & Live Migration Networks can be crossover cables connecting the nodes, but only if you are building a two (2) node cluster. Anything above two (2) nodes requires a switch. **

⎕ Turn off cluster communication on the iSCSI network.

  • In Failover Cluster Manager, under Networks, the iSCSI network properties should be set to “Do not allow cluster network communication on this network.” This prevents internal cluster communications as well as CSV traffic from flowing over the same network.

⎕ Redundant network paths are strongly encouraged (multiple switches) – especially for your Live Migration and iSCSI network – as it provides resiliency and quality of service (QoS).


Cristian Edwards Sabathe, a fellow Microsoft employee, has put together a terrific series of blog posts on network architecture. I would strongly encourage you to spend some time going through them.  (added 03/07/2014)

1. Hyper-V 2012 R2 Network Architectures Series (Part 1 of 7) – Introduction

2. Hyper-V 2012 R2 Network Architectures Series (Part 2 of 7) – Non-Converged Networks, the classical but robust approach

3. Hyper-V 2012 R2 Network Architectures Series (Part 3 of 7) – Converged Networks Managed by SCVMM and PowerShell

4. Hyper-V 2012 R2 Network Architectures Series (Part 4 of 7) – Converged Networks using Static Backend QoS

5. Hyper-V 2012 R2 Network Architectures Series (Part 5 of 7) – Converged Networks using Dynamic QoS

6. Hyper-V 2012 R2 Network Architectures Series (Part 6 of 7) – Converged Network using CNAs

7. Hyper-V 2012 R2 Network Architectures Series (Part 7 of 7) – Conclusions and Summary


⎕ If aggregate NIC Teaming is enabled for Management and/or Live Migration networks, the physical switch ports the host is connected to should be set to trunk (promiscuous) mode. The physical switch should pass all traffic to the host for filtering.

⎕ Turn off VLAN filters on teamed NICs. Let the teaming software or the Hyper-V switch (if present) do the filtering.



⎕ Legacy Network Adapters (a.k.a. Emulated NIC drivers) should only be used for PXE booting a VM or when installing non-Hyper-V aware Guest operating systems. Hyper-V's synthetic NICs (the default NIC selection; a.k.a. Synthetic NIC drivers) are far more efficient, due to using a dedicated VMBus to communicate between the virtual NIC and the physical NIC; as a result, there are reduced CPU cycles, as well as much lower hypervisor/guest transitions per operation.



⎕ Disks used for CSV must be partitioned with NTFS. You cannot use a disk for a CSV that is formatted with FAT, FAT32, or Resilient File System (ReFS).

⎕ Unless your storage vendor has a different recommendation, disks used for VHD/VHDX files, including CSVs, should be formatted with a 64K Allocation Unit Size.   (added 09/02/2014)

⎕ Use caution when using snapshots. If not properly managed, snapshots can cause disk space issues, as well as additional physical I/O overhead. Additionally, if you are hosting 2008 R2 (or earlier) Domain Controllers, reverting to an earlier snapshot can cause USN rollbacks. Windows Server 2012 has been updated to help better protect Domain Controllers from USN rollbacks; however, you should still limit usage.

⎕ The recommended minimum free space on CSV volumes containing Hyper-V virtual machine VHD and/or VHDX files:

  • 15% free space, if the partition size is less than 1TB

  • 10% free space, if the partition size is between 1TB and 5TB

  • 5% free space, if the partition size is greater than 5TB

  • To enumerate current volume information, including the percentage free, you can use the following PowerShell command:

  • Get-ClusterSharedVolume "Cluster Disk 1" | fc *

    • Review the "PercentageFree" output

⎕ It is not supported to create a storage pool using Fibre Channel or iSCSI LUNs.

  • For more information see:

⎕ The page file on the Hyper-V host should be managed by the OS and not configured manually.


⎕ New disks should use the VHDX format. Disks created in earlier Hyper-V iterations should be converted to VHDX, unless there is a need to move the VHD back to a 2008 Hyper-V host.

  • The VHDX format supports virtual hard disk storage capacity of up to 64 TB, improved protection against data corruption during power failures (by logging updates to the VHDX metadata structures), and improved alignment of the virtual hard disk format to work well on large sector disks.
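Existing VHDs can be converted with Convert-VHD (the VM must be shut down first, and any snapshots merged). The paths below are examples:

```powershell
# Convert a legacy VHD to the VHDX format
Convert-VHD -Path "D:\VMs\server1.vhd" -DestinationPath "D:\VMs\server1.vhdx"
```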

⎕ (New R2 Feature) Shared Virtual Hard Disk: Do not use a shared VHDx file for the operating system disk. Servers should have a unique VHDx (for the OS) that only they can access. Shared Virtual Hard Disks are better used as data disks and for the disk witness.

For more information:



⎕ Use Dynamic Memory on all VMs (unless not supported).


  • Dynamic Memory adjusts the amount of memory available to a virtual machine, based on changes in memory demand using a memory balloon driver, which helps use memory resources more efficiently.

  • For more information:
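Dynamic Memory can be enabled per VM with Set-VMMemory while the VM is off. A sketch; the VM name and the sizes are examples:

```powershell
# Enable Dynamic Memory with a 512 MB floor, 1 GB startup and 4 GB ceiling
Set-VMMemory -VMName "contoso-vm1" -DynamicMemoryEnabled $true `
    -MinimumBytes 512MB -StartupBytes 1GB -MaximumBytes 4GB
```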

⎕ Guest OS should be configured with (minimum) recommended memory

  • 2048MB is recommended for Windows Server 2012 (e.g. 2048 – 4096 Dynamic Memory). (The minimum supported is 512 MB)

  • 2048MB is recommended for Windows Server 2008, including R2 (e.g. 2048 – 4096 Dynamic Memory). (The minimum supported is 512 MB)

  • 1024MB is recommended for Windows 7 (e.g. 1024 – 2048 Dynamic Memory). (The minimum supported is 512 MB)

  • 1024MB is recommended for Windows Vista (e.g. 1024 – 2048 Dynamic Memory). (The minimum supported is 512 MB)

  • 512MB is recommended for Windows Server 2003 R2 w/SP2 (e.g. 256 – 2048 Dynamic Memory). (The minimum supported is 128 MB.)

  • 512MB is recommended for Windows Server 2003 w/SP2 (e.g. 256 – 2048 Dynamic Memory). (The minimum supported is 128 MB).



⎕ Set the preferred network for CSV communication, to ensure the correct network is used for this traffic. (Note: This will only need to be run on one of your Hyper-V nodes.)

(NOTE: In order to configure CSV Redirected Access over one particular cluster network, SMB Multichannel will have to be disabled. (added 04/15/2014))

  • The network with the lowest metric in the output generated by the following PowerShell command will be used for CSV traffic.

  • Open a PowerShell command-prompt (using “Run as administrator”)

  • First, you’ll need to import the “FailoverClusters” module. Type the following at the PS command-prompt:

  • Import-Module FailoverClusters

  • Next, we’ll request a listing of networks used by the host, as well as the metric assigned. Type the following:

  • Get-ClusterNetwork | ft Name, Metric, AutoMetric, Role

  • In order to change which network interface is used for CSV traffic, use the following PowerShell command:

    • (Get-ClusterNetwork "CSV Network").Metric=900

    • This will set the metric of the network named "CSV Network" to 900


⎕ Set the preferred network for Live Migration, to ensure the correct network(s) are used for this traffic:

  • Open Failover Cluster Manager, Expand the Cluster

  • Next, right click on Networks and select Live Migration Settings

  • Use the Up / Down buttons to list the networks in order from most preferred (at the top) to least preferred (at the bottom)

  • Uncheck any networks you do not want used for Live Migration traffic

  • Select Apply and then press OK

  • Once you have made this change, it will be used for all VMs in the cluster

⎕ The Host Shutdown Time (ShutdownTimeoutInMinutes registry entry) can be increased from the default time, if additional time is needed to be certain local VMs have had enough time to shut down before the host reboots.

    • Registry Key: HKLM\Cluster\ShutdownTimeoutInMinutes

    • Enter minutes in Decimal value.

    • Note: Requires a reboot to take effect
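Assuming the host is a cluster node (the HKLM\Cluster key only exists once Failover Clustering is configured), the timeout can be raised via PowerShell; the 30-minute value is an example:

```powershell
# Give local VMs up to 30 minutes to shut down before the host reboots
Set-ItemProperty -Path "HKLM:\Cluster" -Name "ShutdownTimeoutInMinutes" `
    -Type DWord -Value 30
# Remember: the host must be rebooted for the change to take effect
```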


⎕ Run the Cluster Validation periodically to remediate any issues

  • NOTE: If all LUNs are part of the cluster, the validation test will skip all disk checks. It is recommended to set up a small test-only LUN and share it on all nodes, so full validation testing can be completed.

  • If you need to test a LUN running virtual machines, the LUN will need to be taken offline.

  • For more information:

⎕ Consider enabling CSV Cache if you have VMs that are used primarily for read requests, and are less write intensive. Scenarios such as Pooled VDI VMs; also can be leveraged for reducing VM boot storms.
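As a sketch, CSV Cache is configured slightly differently on 2012 and 2012 R2 (the 512 MB size and disk name are examples):

```powershell
# Windows Server 2012: reserve RAM for the cache, then enable it per CSV disk
(Get-Cluster).SharedVolumeBlockCacheSizeInMB = 512
Get-ClusterSharedVolume "Cluster Disk 1" |
    Set-ClusterParameter -Name CsvEnableBlockCache -Value 1

# Windows Server 2012 R2: the cache is enabled by default; just set its size
(Get-Cluster).BlockCacheSize = 512
```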



⎕ Run the Hyper-V Replica Capacity Planner. The Capacity Planner for Hyper-V Replica allows you to plan your Hyper-V Replica deployment based on the workload, storage, network and server characteristics. This tool will help you determine:

    • How much network bandwidth is required between the primary and replica site?

    • How much storage is required on the primary and replica site?

    • What is the storage impact by enabling multiple recovery points?

⎕ Update inbound traffic on the firewall to allow TCP port 80 and/or port 443 traffic. (In Windows Firewall, enable the "Hyper-V Replica HTTP Listener (TCP-In)" rule on each node of the cluster.)

To enable HTTP (port 80) replica traffic, you can run the following from an elevated command-prompt:

netsh advfirewall firewall set rule group="Hyper-V Replica HTTP" new enable=yes

To enable HTTPS (port 443) replica traffic, you can run the following from an elevated command-prompt:

netsh advfirewall firewall set rule group="Hyper-V Replica HTTPS" new enable=yes

⎕ Compression is recommended for replication traffic, to reduce bandwidth requirements.
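For VMs that are already replicating, compression can be toggled with Set-VMReplication; the VM name is an example:

```powershell
# Enable compression of replication traffic for a VM on the primary server
Set-VMReplication -VMName "contoso-vm1" -CompressionEnabled $true
```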

⎕ Configure guest operating systems for VSS-based backups to enable application-consistent snapshots for Hyper-V Replica.

⎕ Integration Services must be installed before primary or Replica virtual machines can use an alternate IP address after a failover.

⎕ Virtual hard disks with paging files should be excluded from replication, unless the page file is on the OS disk.

⎕ Test failovers should be performed monthly, at a minimum, to verify that failover will succeed and that virtual machine workloads will operate as expected after failover.
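Test failovers can be scripted from the Replica server; the VM name is an example. Start-VMFailover with -AsTest creates a disconnected test copy of the VM without disturbing ongoing replication:

```powershell
# On the Replica server: start a test failover (creates "<VM name> - Test")
Start-VMFailover -VMName "contoso-vm1" -AsTest

# When testing is complete, remove the test VM and clean up
Stop-VMFailover -VMName "contoso-vm1"
```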

⎕ Hyper-V Replica requires the Failover Clustering Hyper-V Replica Broker role be configured if either the primary or the replica server is part of a cluster.

⎕ Feature and performance optimization of Hyper-V Replica can be further tuned by using the registry keys mentioned in the article below:



⎕ Compression on Windows Server 2012 R2:  If your Live Migration NICs are 10Gbps or less, use live migration with compression; if more than 10Gbps, use live migration with RDMA.  (added 06/16/2014)
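On 2012 R2, the live migration performance option is selected per host with Set-VMHost; run this on every host participating in live migration:

```powershell
# NICs of 10 Gbps or less: compress live-migration traffic
Set-VMHost -VirtualMachineMigrationPerformanceOption Compression

# Faster, RDMA-capable NICs: use SMB (SMB Direct) instead
# Set-VMHost -VirtualMachineMigrationPerformanceOption SMB
```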


⎕ Place all Cluster-Aware Updating (CAU) Run Profiles on a single file share accessible to all potential CAU Update Coordinators. (Run Profiles are configuration settings that can be saved as an XML file, called an Updating Run Profile, and reused for later Updating Runs.)



⎕ An Active Directory infrastructure is required, so you can grant permissions to the computer account of the Hyper-V hosts.

⎕ Loopback configurations (where the computer that is running Hyper-V is used as the file server for virtual machine storage) are not supported. Similarly, running the file share in VMs that are hosted on compute nodes that will serve other VMs is not supported.



⎕ Domain Controller VMs should have “Shut down the guest operating system” in the Automatic Stop Action setting applied (in the virtual machine settings on the Hyper-V Host)
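This setting can be applied (while the VM is off) with Set-VM; the VM name is an example:

```powershell
# Shut down the guest OS cleanly instead of saving state when the host shuts down
Set-VM -Name "contoso-dc1" -AutomaticStopAction ShutDown
```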

Important: See “Use caution when using snapshots” under the Disk section for more information regarding snapshots.

Important: Be certain KB2855336 (released July 2013) has been installed. Note: This update rollup was reoffered on July 12, 2013 to fix an issue; for more information, see the "Known issues" section of the KB article.

  • Domain Controller VHDs that contain the Active Directory database should have write caching disabled, to reduce the chance of AD corruption (if the database is stored on a virtual IDE drive). This update rollup addresses the issue for all supported Windows operating systems.

  • You do not need to do anything for your virtual AD Domain Controller in a vIDE deployment once the patch is applied. AD sends a request to see if it can disable disk caching, and when that fails, it issues IOs with the FUA (Force Unit Access) bit, which is required for the integrity guarantees to work.

  • For more information:


⎕ Ensure Integration Services (IS) have been installed on all VMs. Integration Services significantly improve interaction between the VM and the physical host.

⎕ Be certain you are running the latest version of integration services – the same version as the host(s) – in all guest operating systems, as some Microsoft updates make changes/improvements to the Integration Services software. (When a new Integration Services version is updated on the host(s) it does not automatically update the guest operating systems.)

  • Note: If Integration Services are out of date, you will see 4010 events logged in the event viewer.

  • You can discover the version for each of your VMs on a host by running the following PowerShell command:

    • Get-VM | ft Name, IntegrationServicesVersion, State

      NOTE: If VM is offline, the Integration Services version will show Offline.



⎕ If your SAN supports ODX (see this post for help; also check with your hardware vendor), you should strongly consider enabling ODX on your Hyper-V hosts, as well as any VMs that connect directly to SAN storage LUNs.

IMPORTANT: VHD-based or VHDX-based virtual disks attached to a virtual IDE controller do not support this optimization, because IDE devices lack support for Offloaded Data Transfer. The virtual hard drive needs to be connected to the virtual machine via a virtual SCSI controller. (added 04/22/2014)

  • To enable ODX, open PowerShell (using Run as Administrator) and type the following:

  • Set-ItemProperty hklm:\system\currentcontrolset\control\filesystem -Name "FilterSupportedFeaturesMode" -Value 0

  • Be sure to run this command on every Hyper-V host that connects to the SAN, as well as any VM that connects directly to the SAN.



⎕ If you are converting VMware virtual machines to Hyper-V, consider using MVMC (a free, stand-alone tool offered by Microsoft).


I sincerely hope you find this blog posting useful! If you do, please forward the link on to others who may benefit!

Until next time,


Roger Osborne, Sr. PFE