Hello, my name is Michael Godfrey and I am a Platforms Premier Field Engineer (PFE) at Microsoft. I have been a Fabric Administrator for the past few years and have made a habit of building quite a few Hyper-V clusters with System Center Virtual Machine Manager. Over the past year I have helped many customers deploy Switch Embedded Teams in SCVMM 2016, and like every good engineer, I decided it was time to share that knowledge with the world.
So, in this post, I will walk you through a deployment of a Switch Embedded Team in SCVMM 2016 or the new SCVMM 1801 edition; the steps are the same in both. If you are not familiar with our Semi-Annual Channel release of System Center, you can read more about it here.
If you are not familiar with it, a Switch Embedded Team, or SET, is a new feature in Server 2016 and SCVMM 2016/1801 that allows converging of multiple network adapters. Teaming itself is not new since 2012 R2, but SET simplifies the deployment of our teams while adding the combined benefits of hardware-accelerated networking features like RDMA and RSS. A SET is managed at the Hyper-V switch level rather than the network team (LBFO) level, ensuring that we can build multiple vSwitches inside the team while preserving our QoS.
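Under the hood, SCVMM drives the same capability you can reach directly with the Hyper-V PowerShell module. As a minimal sketch of a standalone SET (the adapter names, switch name, and VLAN ID are placeholders for your environment):

```powershell
# Create a Switch Embedded Team directly on a Hyper-V host.
# "NIC1" and "NIC2" are placeholder adapter names - substitute your own.
New-VMSwitch -Name "SETswitch" `
    -NetAdapterName "NIC1", "NIC2" `
    -EnableEmbeddedTeaming $true

# Add a host vNIC on top of the team and tag its VLAN.
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "SETswitch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" `
    -Access -VlanId 10
```

In this post we will let SCVMM build all of this for us; the direct cmdlets are useful in a lab or for verifying what VMM deployed.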
As with every network deployment, it is wise to understand your available networks before you start deploying. In this example, I am using VLANs presented to me by my network team that are already created and deployed. I will take these networks and create a matching virtual network in SCVMM and Hyper-V. In the example I have the following networks.
These are just example networks for this demo; you will need subnets with enough range for all your hosts. I would also include other networks like SMB, guest VLANs for the virtual machines, and backup networks, but for the sake of this post I wanted to keep things simple.
I am also including a high-level overview to help you understand what a completed design looks like:
The first thing you need to do is create a Logical Network. You can think of the Logical Network as the definition of all your Hyper-V hosts' networks for your entire organization; it is the central place to manage our "Distributed Networking," if you will, in VMM. In it, we will deploy several Network Sites. Network Sites are the boundaries for network segments, and I like to describe them as datacenters. You can use them however you like, as a DMZ, a lower lifecycle, or any other network boundary, but I have found datacenters work best for me.
You will need to visit the Fabric workspace of VMM to get started with Logical Networks; you can find them in the Networking section. Start by creating a new logical network, giving it a name and a description. You will then have a choice between three options for the type of logical network. This is a crossroads: you will not be able to change the type later, so choose carefully. If you need more than one type, you can create multiple logical networks.
The first option is One Connected Network. This is a great option if you are planning on using the same virtual network for all your VMs, or if you are planning on implementing Software Defined Networking v2 in Server 2016. It allows you to create your own network segmentation at a virtual level but requires the deployment of Network Controllers in your environment.
The most popular option I see is the second, VLAN-based independent networks. This option provides VLAN-based segmentation for our VMs and the infrastructure networks. It requires you to add each VLAN to the assigned Network Site in VMM and then create a VM Network. Once the Logical Network is deployed to a host, any change you make, like adding a VM Network and subnet, is automatically associated with the host(s), essentially working in a distributed-switch model.
The third option is a Private Network. This is great in a lab scenario: all the VMs in the same VM Network can communicate with each other, but they cannot reach resources outside their VM Network or outside the cluster.
Once you select the logical network type, you will need to create the first of many Network Sites. Remember, Network Sites can be any form of network isolation you need; I prefer to separate my sites as datacenter locations. Give your Network Site a name and then scope it to your Host Groups; this ensures the network can only be deployed to hosts in that Network Site. It prevents accidental deployments and helps create my favorite word in virtualization: Consistency.
You will then need to add the VLAN ID, the subnet, or both. No one will ever fault you for providing both, so I suggest adding both; the more information you present, the better the design.
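The steps above can also be scripted with the VirtualMachineManager module. This is a sketch, not the exact script from my repository; the names, VLAN ID, and subnet are example values, and you should verify the parameters with Get-Help against your VMM server:

```powershell
# Sketch: a VLAN-based logical network with one network site.
$hostGroup = Get-SCVMHostGroup -Name "Datacenter1"

# -LogicalNetworkDefinitionIsolation selects "VLAN-based independent networks".
$logicalNet = New-SCLogicalNetwork -Name "Datacenter1-LN" `
    -LogicalNetworkDefinitionIsolation $true

# Provide both the subnet and the VLAN ID, as recommended above.
$subnetVlan = New-SCSubnetVLan -Subnet "10.0.10.0/24" -VLanID 10

# The network site, scoped to one host group to prevent accidental deployments.
New-SCLogicalNetworkDefinition -Name "Datacenter1-Management" `
    -LogicalNetwork $logicalNet `
    -VMHostGroup $hostGroup `
    -SubnetVLan $subnetVlan
```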
The next step in our journey toward a consistent and highly available Switch Embedded Team is to provide a Port Profile. There are two types of Port Profiles: Uplink and Virtual. We will be using Virtual Port Profiles in Logical Switching, but first we need to define an Uplink Port Profile for the physical adapters to use in our virtual networks. The Uplink Port Profile also defines the teaming mode and load-balancing algorithm our physical adapters use. You have a few choices, but with Switch Embedded Teaming we are restricted to Switch Independent connections for our physical adapters. This means that each of our NICs is connected to a separate physical switch; most admins connect NICs 1 & 3 to Switch A and NICs 2 & 4 to Switch B to provide fault tolerance. This is widely accepted as good design.
You will see that LACP is another option; while it is great if you can configure your switches with aggregate ports, it is not supported with SET, so we will not use it.
You will also pick a load-balancing option. For SET we choose Host Default, which provides load balancing for all network traffic in our team across all NICs. This works best when we utilize features like SMB Multichannel and RDMA (Remote Direct Memory Access) to use the full bandwidth available to our NICs.
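These two choices map to parameters on the uplink port profile cmdlet. A hedged sketch (the profile name is an example, and the parameter names should be confirmed with Get-Help New-SCNativeUplinkPortProfile in your environment):

```powershell
# Sketch: uplink port profile for SET - Switch Independent teaming with the
# Host Default algorithm, bound to the network site created earlier.
$siteDef = Get-SCLogicalNetworkDefinition -Name "Datacenter1-Management"

$params = @{
    Name                       = "Datacenter1-Uplink"
    LBFOTeamMode               = "SwitchIndependent"  # required for SET
    LBFOLoadBalancingAlgorithm = "HostDefault"        # balance across all NICs
    LogicalNetworkDefinition   = $siteDef
}
New-SCNativeUplinkPortProfile @params
```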
The last option in the Port Profile is selecting a Host Group that can use it. The great thing about Port Profiles is that they are Logical Network dependent, not Site dependent, so you can use just one or make several; the choice is yours and depends on the type of network traffic you expect.
The virtual machines and virtual switches need something to connect to in order to provide their network isolation; this is known as a VM Network. These networks provide the VLAN and subnet separation in VMM and should be a virtual representation of your physical networks. You will need them in the Uplinks section of Logical Switches, and you can create them in the Fabric workspace. When creating them, give them a name so that when your administrators assign them, they can be confident they chose the right network. Also, be sure to select the correct Logical Network associated with the subnet/VLAN you are creating the VM Network for. In the isolation options, you can select the Network Site and the IPv4 or IPv6 subnet for the VM Network. This ensures that VMs or virtual network adapters placed in this VM Network are isolated to that VLAN/subnet. If you provided a VLAN ID of 0 in the Network Sites section of the Logical Network, the VLAN will be untagged for the VMs in that VM Network.
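A VM Network that mirrors the Management site might be scripted roughly like this. The names are examples, and the New-SCVMSubnet parameter set in particular should be checked against your VMM version before use:

```powershell
# Sketch: VM network + subnet bound to the Management site's VLAN.
$logicalNet = Get-SCLogicalNetwork -Name "Datacenter1-LN"
$siteDef    = Get-SCLogicalNetworkDefinition -Name "Datacenter1-Management"

$vmNet = New-SCVMNetwork -Name "Management" `
    -LogicalNetwork $logicalNet `
    -IsolationType "VLANNetwork"

# Reuse the site's subnet/VLAN so the VM network is a true virtual
# representation of the physical network.
New-SCVMSubnet -Name "Management" `
    -VMNetwork $vmNet `
    -LogicalNetworkDefinition $siteDef `
    -SubnetVLan $siteDef.SubnetVLans[0]
```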
When creating a custom Port Profile, or customizing the ones Microsoft provides, you have several options, including security, offload, and bandwidth settings.
In the offload settings you can enable features like VMMQ, SR-IOV, RSS, and RDMA. Virtual Machine Multi-Queue (VMMQ) spreads the packet processing for a virtual NIC across multiple host processors. The SR-IOV and RDMA options require network cards that support them, and SR-IOV cannot be used in a team, so keep that in mind.
The security settings allow you to block things like MAC address spoofing or DHCP broadcasts from your VMs. They also let you allow NIC teaming in your VM guests, which is handy if you want to deploy virtual SQL clusters.
The bandwidth settings allow you to set network QoS. This is the section where you set "speed limits" on your virtual networks and even provide lanes for higher-priority traffic, like live migration or storage.
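A custom virtual port profile combining these three groups of settings could look like the following sketch. The profile name and values are made up for illustration, and the parameter names come from the VirtualMachineManager module; verify them with Get-Help New-SCVirtualNetworkAdapterNativePortProfile:

```powershell
# Sketch: custom virtual port profile - offload, security, and QoS weight.
$params = @{
    Name                    = "LiveMigrationProfile"
    EnableVmq               = $true      # offload setting
    AllowMacAddressSpoofing = $false     # security setting
    AllowTeaming            = $false     # guest NIC teaming disabled
    MinimumBandwidthWeight  = 40         # QoS "lane" weight for this traffic
}
New-SCVirtualNetworkAdapterNativePortProfile @params
```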
The Logical Switch is where we begin to build our Switch Embedded Team. This is a Network Site dependent feature, so you will need one per datacenter in my example, or one per Network Site. The Logical Switch is where we create a network team, create several vSwitches, and then set QoS Port Profiles to define the traffic on those vSwitches.
In the Logical Switches section of the Fabric workspace, create a new Logical Switch and give it a name and description. The next step is important: in the Uplink Mode you have the options Team and Embedded Team. Since we are doing Switch Embedded Teaming, select Embedded Team. The Team option deploys a Load Balancing and Failover (LBFO) team, the method we used in Server 2012 and 2012 R2; with 2016 we are using Switch Embedded Teaming, so we want Embedded Team.
The next settings are the bandwidth settings, which come into play with our Virtual Port Profiles, or network QoS. You can select Weight or Default, which use the Weight setting in our Virtual Port Profiles (they are the same setting). The other options are Absolute and None; the best-practice guidance is to use Weight or Default.
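These two wizard choices correspond to two parameters when scripting the switch. A hedged sketch, with the switch name as an example value:

```powershell
# Sketch: logical switch in Embedded Team mode with Weight-based QoS.
$params = @{
    Name                 = "Datacenter1-LS"
    UplinkMode           = "EmbeddedTeam"   # SET rather than an LBFO team
    MinimumBandwidthMode = "Weight"         # the Weight/Default guidance above
}
New-SCLogicalSwitch @params
```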
The extensions are used for network filtering; they are not recommended for Switch Embedded Teaming, so we will clear these selections.
The next setting in Logical Switches is where we start to define what type of traffic we will assign to our virtual switches. For this we use Virtual Port Profiles: pre-defined network settings for a NIC that make network QoS and security settings consistent to deploy. The System Center team has included several pre-built Port Profiles for you, but you can always customize them from the Logical Switch wizard or the Port Profiles section of the Fabric workspace. We will add a Port Classification for each type of network traffic we expect to use. Each Port Classification is attached to a Port Profile that defines things like minimum/maximum bandwidth, bandwidth weight, and network security options. Add each classification you plan to use, and then connect it to the Virtual Port Profile that matches that classification. In this example, pre-defined Port Profiles are being used, but remember you can always customize.
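Scripted, the classification-to-profile pairing looks roughly like this. The names are examples ("Live migration workload" stands in for whichever built-in or custom profile you use), and the port profile set cmdlet's parameters should be confirmed in your environment:

```powershell
# Sketch: a port classification bound to a virtual port profile on the switch.
$ls      = Get-SCLogicalSwitch -Name "Datacenter1-LS"
$profile = Get-SCVirtualNetworkAdapterNativePortProfile -Name "Live migration workload"

$class = New-SCPortClassification -Name "Live Migration"

# Connect classification + profile to the logical switch.
New-SCVirtualNetworkAdapterPortProfileSet -Name "LiveMigrationSet" `
    -LogicalSwitch $ls `
    -PortClassification $class `
    -VirtualNetworkAdapterNativePortProfile $profile
```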
The last step is to define the Switch Embedded Team. This is what you have been waiting for, so let's do it. First, add the Uplink Port Profile we created earlier; if you forgot, don't worry, you can select New Port Profile and create it here as well. You will see the Port Profile's load-balancing settings for the team and the assigned Network Site. Then select New Virtual Network Adapter and create the first of many virtual switches. In the example I create a virtual switch each for Management (OS), Cluster, and Live Migration traffic. Each virtual network adapter has a VM Network assigned to it, which we created previously. Then you can pick the IP Pool settings to let VMM assign the IP address for you, or assign it statically. The last option is to set the Port Classification, which we defined in the previous step of the Logical Switch wizard.
One important note: the Management network needs two additional checkboxes enabled, "This virtual network adapter will be used for host management" and "Inherit network connection settings from host network adapter." These allow our Management virtual adapter to function as the primary management interface and adopt the static IP address of our Hyper-V host.
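The Management vNIC definition, including those two checkboxes, can be sketched like this. Names are examples, and the two boolean parameters should be verified with Get-Help New-SCLogicalSwitchVirtualNetworkAdapter:

```powershell
# Sketch: define the Management vNIC inside the logical switch's uplink.
$ls   = Get-SCLogicalSwitch -Name "Datacenter1-LS"
$upps = Get-SCUplinkPortProfileSet -LogicalSwitch $ls
$mgmt = Get-SCVMNetwork -Name "Management"

$params = @{
    Name                                      = "Management"
    UplinkPortProfileSet                      = $upps
    VMNetwork                                 = $mgmt
    IsUsedForHostManagement                   = $true   # first checkbox
    InheritsAddressFromPhysicalNetworkAdapter = $true   # second checkbox
}
New-SCLogicalSwitchVirtualNetworkAdapter @params
```

The Cluster and Live Migration vNICs are defined the same way, with both flags left at their defaults.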
Once you place all these uplinks in the Logical Switch, it will be deployed to every host with these settings when assigned. This is the key to cluster network consistency; I told you that word would be used a lot.
Congratulations, the Logical Network is designed. That was the hard part. The next step is to assign it to our hosts.
Configure the Hosts
The last step of setting up a Switch Embedded Team is to deploy it to the hosts. This is much easier now in SCVMM 2016 and leads again to my favorite word in virtualization: Consistency. This method ensures that every host at the Network Site is deployed with the same configuration, every time.
Start in the Fabric workspace and navigate to the Host Group you will be working with. Select a host and go to its properties. Then select Virtual Switches from the navigation bar and begin applying your Logical Network.
Select the New Virtual Switch option, then select New Logical Switch. Choose the Logical Switch we created for that site, and in Physical Adapters select the physical NICs you will assign to the team. Here is a hint I have picked up over the years: name the NICs something that describes their purpose, and if you can, append the switch name and switch port for easier troubleshooting later.
The rest is done for you, since we defined the uplinks in the Logical Switch configuration, so you see all the virtual networks that will be deployed to the host; in this case the Management, Live Migration, and Cluster networks will be added as vSwitches on this host. I also created IP Pools to handle IP address assignment for my Live Migration and Cluster networks, and because we told the Management virtual network to inherit the IP settings, it set our static IP as the management address.
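The same host-side step can be scripted. This sketch assumes a host named "HV01" and adapters named "NIC1"/"NIC2" (all placeholders), and the New-SCVirtualNetwork parameter set for logical switch deployment should be confirmed against your VMM version:

```powershell
# Sketch: apply the logical switch (and its embedded team) to a host.
$vmHost = Get-SCVMHost -ComputerName "HV01"
$ls     = Get-SCLogicalSwitch -Name "Datacenter1-LS"
$upps   = Get-SCUplinkPortProfileSet -LogicalSwitch $ls

# Pick the physical NICs that will form the team.
$adapters = Get-SCVMHostNetworkAdapter -VMHost $vmHost |
    Where-Object ConnectionName -in "NIC1", "NIC2"

# Assign the uplink port profile set to each adapter.
foreach ($a in $adapters) {
    Set-SCVMHostNetworkAdapter -VMHostNetworkAdapter $a -UplinkPortProfileSet $upps
}

# Deploy the switch and its defined virtual network adapters to the host.
New-SCVirtualNetwork -VMHost $vmHost `
    -LogicalSwitch $ls `
    -VMHostNetworkAdapters $adapters `
    -DeployVirtualNetworkAdapters
```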
Please visit my GitHub repository here to download the PowerShell to build the Switch Embedded Team via SCVMM. You will need to update some variables and the subnets to your own, but I hope you find it useful.
That's it: we have deployed Switch Embedded Teaming in a scalable, consistent manner. I hope you enjoyed this post, and I look forward to sharing more.
Premier Field Engineer