This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog. Today’s blog post covers Hyper-V Extensible Switch Enhancements and how it applies to the larger topic of “Transform the Datacenter.” To read that post and see the other technologies discussed, read today’s post: “What’s New in 2012 R2: IaaS Innovations.”
Windows Server 2012 introduced two great features for virtualization users: Hyper-V Network Virtualization and the Hyper-V Extensible Switch. These features are part of Microsoft’s Software Defined Networking (SDN) strategy outlined in Transforming your datacenter – networking.
Hyper-V Network Virtualization (HNV) is an encapsulation technology that enables datacenters to use a flat network fabric that can be configured to have multiple isolated tenant virtual networks on the same physical network. This allows service providers to reconfigure their network infrastructure without reconfiguring physical equipment or cables.
The Hyper-V Virtual Switch gained a rich set of native features in Windows Server 2012, including the ability for third parties to extend the virtual switch; hence the name Hyper-V Extensible Switch. The Hyper-V Extensible Switch enables software partners to provide added switch functionality through extensions that plug into the virtual switch. Partners can add extensions that monitor or filter network traffic, as well as replace the switch forwarding logic. Through close collaboration, Cisco delivered the Nexus 1000V for Hyper-V, a forwarding extension that turns the Hyper-V virtual switch into a Cisco virtual switch.
Other third-party extensions available for Windows Server 2012 include NEC’s OpenFlow extension, 5Nine’s firewall extension, and InMon’s sFlow monitoring extension. With partners providing such great enhancements to the Hyper-V platform, what more could Microsoft do for them? Well, one thing is to expose both the HNV fabric and tenant-specific IP address spaces to extensions. Extensions could monitor HNV network traffic more efficiently and make more intelligent decisions if they could view both address spaces.
Windows Server 2012
Hyper-V Network Virtualization provides multi-tenant isolation by virtualizing a tenant’s IP addresses, which we call customer addresses (CA). These customer addresses are encapsulated, using NVGRE, in the provider addresses (PA) of the datacenter’s flat network fabric. The benefits of HNV mean multiple tenants can exist securely on the same datacenter network, you can bring your own IP addresses to the cloud, and “Any Service, Any Server, Any Cloud” is possible (e.g., cross-subnet live migration). If you’re interested in more background, read the HNV introduction blog.
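To make the encapsulation concrete, here is a minimal Python sketch of the fixed GRE header that NVGRE uses, as later documented in RFC 7637. It uses only the standard library; the function name is ours, purely for illustration.

```python
import struct

def build_nvgre_header(vsid: int, flow_id: int = 0) -> bytes:
    """Build the fixed 8-byte GRE header used by NVGRE (RFC 7637).

    Flags word 0x2000 sets the Key Present bit (GRE version 0);
    protocol type 0x6558 means Transparent Ethernet Bridging. The
    32-bit key packs the 24-bit Virtual Subnet ID (VSID) above an
    8-bit FlowID.
    """
    if not 0 <= vsid < (1 << 24):
        raise ValueError("VSID must fit in 24 bits")
    key = (vsid << 8) | (flow_id & 0xFF)
    return struct.pack("!HHI", 0x2000, 0x6558, key)

# The inner, CA-addressed Ethernet frame follows this header, and the
# whole payload rides inside an outer, PA-addressed IP packet.
```

For example, VSID 5001 (0x1389) yields the bytes `20 00 65 58 00 13 89 00`.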
HNV in Windows Server 2012 was realized in a network filter below the virtual switch. See Figure 1. All extensions operating on HNV traffic would only see the customer address space because any encapsulation/decapsulation was happening before the traffic arrived at the virtual switch. Therefore, an extension had no knowledge of whether the traffic was addressed for the physical/provider or customer address space. All traffic addresses looked the same.
Windows Server 2012 R2
The goal in Windows Server 2012 R2 was to give third-party extensions visibility into both Hyper-V Network Virtualization address spaces. Two key changes made this happen. First, the HNV module was moved inside the virtual switch so that extensions could see both the provider (PA) and customer (CA) IP address spaces. See Figure 2. This allows forwarding and other types of extensions to make decisions with knowledge of both address spaces.
All traffic through the virtual switch traverses the extensions on both the ingress path (inbound through each extension) and the egress path (outbound through each extension in the reverse order). The HNV module is invoked between the ingress and egress paths, so all extensions see traffic both before and after NVGRE encapsulation/decapsulation. Depending on the traffic type and direction, the flow through the virtual switch differs:
Non-HNV traffic flows from either the physical NIC or a virtual machine down ingress and out egress. There is no NVGRE encapsulation/decapsulation.
HNV traffic from an external host arrives encapsulated and flows down the ingress path with PA space addresses on the packet. The HNV module decapsulates the NVGRE traffic, and it flows up the egress path as CA space addressed traffic.
HNV destined traffic from the host or a virtual machine flows down the ingress path with CA space addresses. The HNV module performs NVGRE encapsulation and the traffic flows up egress with PA space addresses.
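The address spaces that extensions observe in each of these flows can be sketched in a few lines of Python. This is purely illustrative; the stage names and the “PA”/“CA”/“native” labels are our own shorthand, not part of any Microsoft API.

```python
def address_space_seen(hnv_traffic: bool, inbound: bool) -> dict:
    """Which address space extensions observe on each path.

    Illustrative sketch: the HNV module runs between the ingress and
    egress paths, so extensions see pre-NVGRE addresses on ingress
    and post-NVGRE addresses on egress.
    """
    if not hnv_traffic:
        # Non-HNV traffic: no encapsulation, same addresses throughout.
        return {"ingress": "native", "egress": "native"}
    if inbound:
        # Arrives NVGRE-encapsulated; HNV decapsulates between paths.
        return {"ingress": "PA", "egress": "CA"}
    # Leaves a VM or the host in the customer space; HNV encapsulates.
    return {"ingress": "CA", "egress": "PA"}
```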
The second change to the Hyper-V Extensible Switch was to implement hybrid forwarding. Hybrid forwarding directs packets to different forwarding agents based on the packet type. In the Windows Server 2012 R2 implementation, an NVGRE packet is forwarded by the HNV module, while a non-NVGRE packet is forwarded as normal by the forwarding extension. Regardless of which agent performs the forwarding computation, the forwarding extension still has the opportunity to apply additional policies to the packet. If there is no forwarding extension, the Microsoft forwarding logic takes over for non-NVGRE packets.
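The hybrid forwarding dispatch rule boils down to a short decision. Here is a hedged Python sketch; the function name and the string labels are hypothetical stand-ins for logic that actually lives inside the virtual switch.

```python
def forwarding_owner(is_nvgre, extension=None):
    """Which agent computes the destination table under hybrid forwarding.

    'extension' is the name of an installed forwarding extension
    (e.g. "Nexus 1000V"); None means no forwarding extension is present.
    """
    if is_nvgre:
        return "HNV module"           # NVGRE packets are always HNV's job
    if extension is not None:
        return extension              # extension forwards non-NVGRE packets
    return "Microsoft forwarding logic"   # built-in default

# Whichever agent forwards, an installed forwarding extension can still
# apply its own policies to every packet that traverses the switch.
```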
Cisco Nexus 1000V and hybrid forwarding
Many Microsoft customers use Cisco networking equipment and were excited about Cisco’s Nexus 1000V for Hyper-V. Similarly, many saw significant benefit to implementing HNV in their environments. So it was a given that Microsoft and Cisco would continue to work closely together in Windows Server 2012 R2 and ensure that both HNV and the Nexus 1000V could be used simultaneously in the same switch.
Let’s briefly talk about how packets are handled with both HNV and the Nexus 1000V forwarding extension in the virtual switch. When a packet arrives at the virtual switch, it is passed through all non-forwarding extensions on the ingress path. The virtual switch then detects whether the packet is an HNV packet or not. If the packet is identified as HNV, it is processed by the HNV module. The HNV module will perform either NVGRE encapsulation or decapsulation, depending on whether the packet destination is external to the host or a local virtual machine. The HNV module will compute the destination table for the packet. The packet is then passed along the egress path, the first extension being the Nexus 1000V. Since it is an HNV packet, the Nexus 1000V does not attempt to compute the destination table. However, the Nexus 1000V is free to apply all other policies and security filtering to the packet. The packet then continues up the egress path through all other extensions.
If the packet is not HNV, the packet bypasses the HNV module and is passed to the Nexus 1000V on the ingress path instead. The Nexus 1000V controls the destination table computation for all such non-HNV packets. On egress the packet is passed through the Nexus 1000V again so it can apply policy and security filtering. The packet then continues up the egress path through all other extensions.
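These two code paths can be summarized as a trace. The following Python sketch is hypothetical (the step strings are ours, not vmswitch internals); it assumes a switch with both the HNV module and the Nexus 1000V installed.

```python
def packet_walk(is_hnv: bool, inbound: bool = True) -> list:
    """Hypothetical trace of one packet through a virtual switch with
    both the HNV module and the Nexus 1000V extension installed."""
    steps = ["ingress: non-forwarding extensions inspect the packet"]
    if is_hnv:
        op = "decapsulates (PA -> CA)" if inbound else "encapsulates (CA -> PA)"
        steps.append(f"HNV module: {op} and computes the destination table")
        steps.append("egress: Nexus 1000V applies policy/filtering only")
    else:
        # Non-HNV packets bypass the HNV module entirely.
        steps.append("ingress: Nexus 1000V computes the destination table")
        steps.append("egress: Nexus 1000V applies policy/filtering")
    steps.append("egress: remaining extensions inspect the packet")
    return steps
```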
The key thing to note here is that forwarding depends on whether the traffic is HNV or non-HNV, and that regardless of which module computes the destination table, the forwarding extension enforces its policies and security on all packets traveling through the virtual switch.
To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.
Bob Combs, Program Manager, Windows Core Networking team