Networking configurations for Hyper-V over SMB in Windows Server 2012 and Windows Server 2012 R2

One of the questions regarding Hyper-V over SMB that I get the most relates to how the network should be configured. Networking is key to several aspects of the scenario, including performance, availability and scalability.

The main challenge is to provide a fault-tolerant and high-performance network for the two clusters typically involved: the Hyper-V cluster (also referred to as the Compute Cluster) and the Scale-out File Server Cluster (also referred to as the Storage Cluster).

Not too long ago, the typical configuration for virtualization deployments would call for up to 6 distinct networks for these two clusters:

  • Client (traffic between the outside and VMs running in the Compute Cluster)
  • Storage (main communications between the Compute and Storage clusters)
  • Cluster (communication between nodes in both clusters, including heartbeat)
  • Migration (used for moving VMs between nodes in the Compute Cluster)
  • Replication (used by Hyper-V replica to send changes to another site)
  • Management (used for configuring and monitoring the systems, typically also including DC and DNS traffic)

These days, it’s common to consolidate these different types of traffic, with the proper fault tolerance and Quality of Service (QoS) guarantees.

There are certainly many different ways to configure the network for your Hyper-V over SMB deployment, but this blog post will focus on two of them:

  • A basic fault-tolerant solution using just two physical network ports per node
  • A high-end solution using RDMA networking for the highest throughput, highest density, lowest latency and low CPU utilization.

Both configurations presented here work with Windows Server 2012 and Windows Server 2012 R2, the two versions of Windows Server that support the Hyper-V over SMB scenario.

Configuration 1 – Basic fault-tolerant Hyper-V over SMB configuration with two non-RDMA ports

 

The solution below uses two network ports on each node of both the Compute Cluster and the Storage Cluster. NIC teaming is the main technology used for fault tolerance and load balancing.

[Diagram: Configuration 1]
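To make the diagram above more concrete, here is a minimal PowerShell sketch of the plumbing on a Hyper-V node: teaming the two ports, creating the virtual switch and adding the two VNICs used for SMB. The adapter, team, switch and VNIC names (NIC1, NIC2, Team1, VSwitch1, SMB1, SMB2) are placeholders for this sketch, and the teaming mode and load balancing algorithm are just example choices; adjust everything to your environment.

    # Team the two physical ports (adapter names are placeholders)
    New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

    # Create the virtual switch on top of the team, with weight-based minimum bandwidth
    New-VMSwitch -Name "VSwitch1" -NetAdapterName "Team1" `
        -MinimumBandwidthMode Weight -AllowManagementOS $false

    # Add two host (management OS) VNICs for SMB, so SMB Multichannel can open
    # multiple flows even though a VNIC does not support RSS
    Add-VMNetworkAdapter -ManagementOS -SwitchName "VSwitch1" -Name "SMB1"
    Add-VMNetworkAdapter -ManagementOS -SwitchName "VSwitch1" -Name "SMB2"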

Notes:

  • A single dual-port network adapter per host can be used. Network failures are usually related to cables and switches, not to the NIC itself. If the NIC does fail, failover clustering on the Hyper-V or Storage side would kick in. Two network adapters, each with one port, are also an option.
  • The 2 VNICs on the Hyper-V host are used to provide additional throughput for the SMB client via SMB Multichannel, since a VNIC does not support RSS (Receive Side Scaling, which spreads the CPU load of networking activity across multiple cores). Depending on the configuration, increasing this to 4 VNICs per Hyper-V host might be beneficial for additional throughput.
  • You can use additional VNICs dedicated to other kinds of traffic, such as migration, replication, cluster and management. In that case, you can optionally configure SMB Multichannel constraints to limit the SMB client to a specific subset of the VNICs, as shown in the sketch after this list. More details can be found in item 7 of the following article: The basics of SMB Multichannel, a feature of Windows Server 2012 and SMB 3.0
  • If RDMA NICs are used in this configuration, their RDMA capability will not be leveraged, since the physical port capabilities are hidden behind NIC teaming and the virtual switch.
  • Network QoS should be used to tame each individual type of traffic on the Hyper-V host. In this configuration, it’s recommended to implement network QoS at the virtual switch level, as shown in the sketch after this list. See https://technet.microsoft.com/en-us/library/jj735302.aspx for details (the above configuration matches the second one described in the linked article).
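To put rough numbers on the QoS and Multichannel constraint notes above, here is a hedged sketch that continues from the previous one. The bandwidth weights are only example values, and FS1 stands in for the name of your Scale-Out File Server; both are assumptions, not recommendations of specific values.

    # Weight-based QoS at the virtual switch level (weights are example values)
    Set-VMSwitch -Name "VSwitch1" -DefaultFlowMinimumBandwidthWeight 30
    Set-VMNetworkAdapter -ManagementOS -Name "SMB1" -MinimumBandwidthWeight 20
    Set-VMNetworkAdapter -ManagementOS -Name "SMB2" -MinimumBandwidthWeight 20

    # Optional: if other VNICs carry migration, replication, cluster or management
    # traffic, constrain SMB traffic to the file server (FS1 is a placeholder name)
    # to the two SMB VNICs
    New-SmbMultichannelConstraint -ServerName "FS1" `
        -InterfaceAlias "vEthernet (SMB1)","vEthernet (SMB2)"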

Configuration 2 – High-performance fault-tolerant Hyper-V over SMB configuration with two RDMA ports and two non-RDMA ports

 

The solution below requires four network ports for each node of both the Compute Cluster and the Storage Cluster, two of them RDMA-capable. NIC teaming is the main technology used for fault tolerance and load balancing on the two non-RDMA ports, while SMB Multichannel provides those capabilities for the two RDMA ports.

[Diagram: Configuration 2]
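Before digging into the notes below, it is worth sanity-checking that the RDMA path is actually in use. A quick sketch, assuming the SMB and network adapter cmdlets available in Windows Server 2012 and later:

    # Confirm the physical adapters report RDMA capability
    Get-NetAdapterRdma

    # Check which client-side interfaces SMB sees as RSS- or RDMA-capable
    Get-SmbClientNetworkInterface

    # After generating some traffic to the file server, confirm that the active
    # SMB Multichannel connections are using the RDMA-capable interfaces
    Get-SmbMultichannelConnection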

Notes:

  • Two dual-port network adapters per host can be used: one RDMA and one non-RDMA.
  • In this configuration, Storage, Migration and Clustering traffic should leverage the RDMA path. The client, replication and management traffic should use the teamed NIC path.
  • In this configuration, if using Windows Server 2012 R2, Hyper-V should be configured to use SMB for Live Migration, as shown in the sketch after this list. This is not the default setting.
  • The SMB client will naturally prefer the RDMA paths, so there is no need to specifically configure that preference via SMB Multichannel constraints.
  • There are three different types of RDMA NICs that can be used: iWARP, RoCE and InfiniBand. Step-by-step configuration instructions are available for each of them.
  • Network QoS should be used to tame traffic flowing through the virtual switch on the Hyper-V host. If your NICs and switches support Data Center Bridging (DCB) and Priority Flow Control (PFC), there are additional options available as well, as shown in the sketch after this list. See https://technet.microsoft.com/en-us/library/jj735302.aspx for details (the above configuration matches the fourth one described in the linked article).
  • In most environments, RDMA provides enough bandwidth without the need for any traffic shaping. If using Windows Server 2012 R2, SMB Bandwidth Limits can optionally be used to shape the Storage and Live Migration traffic (see the sketch after this list). More details can be found in item 4 of the following article: What’s new in SMB PowerShell in Windows Server 2012 R2. SMB Bandwidth Limits can also be used with Configuration 1, but they are more common here.
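As a hedged, end-to-end sketch of the items called out in the list above (Live Migration over SMB, DCB/PFC for the RDMA ports, and an optional SMB bandwidth limit), the following assumes Windows Server 2012 R2 and RDMA NICs that require DCB (as RoCE typically does). The adapter names (RDMA1, RDMA2), the priority value, the bandwidth percentage and the 2GB limit are illustrative only and depend on your hardware and switch configuration.

    # Use SMB for Live Migration (Windows Server 2012 R2; not the default setting)
    Set-VMHost -VirtualMachineMigrationPerformanceOption SMB

    # DCB/PFC for SMB Direct traffic on the RDMA ports (typically required for RoCE;
    # priority 3 and the 50% reservation are example values)
    Install-WindowsFeature Data-Center-Bridging
    New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
    Enable-NetQosFlowControl -Priority 3
    New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
    Enable-NetAdapterQos -Name "RDMA1","RDMA2"

    # Optional: SMB Bandwidth Limits (requires the FS-SMBBW feature; the value is an example)
    Install-WindowsFeature FS-SMBBW
    Set-SmbBandwidthLimit -Category LiveMigration -BytesPerSecond 2GB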

 

I hope this blog post helps with the network planning for your Private Cloud deployment. Feel free to ask questions via the comments below.