Hi there! In the previous blog post, we showcased the work we drive with customers in the TAP program and the Enterprise Engineering Center. In this post, we’ll introduce some of the work my team drives for partner adoption and readiness on pre-release server operating systems. The Windows Server Partner Engineering program is focused on enabling partners during early development cycles: the team drives partner adoption of key new features and support readiness of partner products on new Windows Server operating systems. In today’s post we’ll cover new networking features in Windows Server 2012, with subsequent posts covering specific implementations of these features in partner products.
Windows Server 2008 R2 introduced several new networking-related features, including Virtual Desktop Infrastructure (VDI) deployments; Windows Server 2008 R2 RemoteApp publishing; peer-to-peer caching optimizations; improvements for optimizing network traffic performance; enhanced security and management; and measures to avoid data duplication.
Windows Server 2012 builds on these advances with an array of new and enhanced features that help reduce networking complexity while lowering costs and simplifying management tasks. With Windows Server 2012, IT administrators have tools that can automate and consolidate networking processes and resources.
Let’s take a closer look at the features of Windows Server 2012 that facilitate more efficient and cost-effective networking.
Support for native NIC-teaming Load Balancing and Failover (LBFO)
Windows Server 2012 helps you provide fault tolerance on your network adapters without having to buy additional hardware and software. Windows Server 2012 includes NIC Teaming as a new feature, which allows multiple network interfaces to work together as a team, preventing connectivity loss if one network adapter fails. It allows a server to tolerate network adapter and port failure up to the first switch segment. NIC Teaming also allows you to aggregate bandwidth from multiple network adapters; for example, four 1-gigabit (Gb) network adapters can provide an aggregate throughput of 4 gigabits per second (Gbps).
The advantages of a Windows teaming solution are that it works with all network adapter vendors, spares you from most potential problems that proprietary solutions cause, provides a common set of management tools for all adapter types, and is fully supported by Microsoft.
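As a quick sketch, creating a team with the in-box NetLbfo cmdlets might look like the following (the adapter and team names are illustrative, not from the post):

```powershell
# Create a switch-independent team from two physical adapters,
# hashing on TCP/UDP ports to spread traffic across members.
New-NetLbfoTeam -Name "Team1" `
                -TeamMembers "Ethernet 1","Ethernet 2" `
                -TeamingMode SwitchIndependent `
                -LoadBalancingAlgorithm TransportPorts

# Inspect team and member status.
Get-NetLbfoTeam -Name "Team1"
Get-NetLbfoTeamMember -Team "Team1"
```

Switch-independent mode needs no switch configuration; LACP or static teaming modes are alternatives when the switch participates in the team.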
Hyper-V Network Virtualization
With the success of virtualized data centers, IT organizations and hosting providers (providers who offer colocation or physical server rentals) are offering flexible virtualized infrastructures that make it easier to offer on-demand server instances to their customers. This new class of service is referred to as infrastructure as a service (IaaS). Windows Server 2012 provides all the required platform capabilities to enable enterprise customers to build private clouds and transition to an IaaS operational model, and also to enable hosting providers to build public clouds and offer IaaS solutions to their customers.
Network virtualization in Windows Server 2012 provides policy-based, software-controlled network virtualization, which reduces the management overhead that is faced by enterprises when they are expanding dedicated IaaS clouds. Network virtualization also provides better flexibility for cloud hosting providers, scalability for managing virtual machines, and higher resource utilization.
In an IaaS scenario, multiple virtual machines from different divisions (dedicated cloud) or different customers (hosted cloud) require secure isolation. Currently, virtual LANs (VLANs) are the mechanism most organizations use to provide address space reuse and tenant isolation. A VLAN uses explicit tagging in the Ethernet frames, and it relies on Ethernet switches to enforce isolation and restrict traffic to network nodes with the same tag. The main drawbacks of VLANs are:
· Increased risk of an inadvertent outage due to cumbersome reconfiguration of production switches whenever virtual machines or isolation boundaries move in the dynamic data center.
· Limited scalability because typical switches support no more than 1,000 VLAN IDs (maximum of 4,094).
· VLANs cannot span multiple logical subnets, which limits the number of nodes within a single VLAN and restricts the placement of virtual machines based on physical location. Even though VLANs can be enhanced or stretched across physical intranet locations, the stretched VLAN must be all on the same subnet.
In addition to the drawbacks presented by VLANs, virtual machine IP address assignment presents other significant issues, including:
· Moving to a cloud platform typically requires reassigning IP addresses for the service workloads.
· Policies (security, management, and other) are tied to IP addresses.
· Physical locations determine the virtual machine IP address.
· Virtual machine deployment and traffic isolation are dependent on the network topology.
The IP address is the fundamental address that is used for layer 3 network communication. In addition to being an address, there is semantic information associated with an IP address. For example, one subnet might contain specific services or be in a distinct physical location. Firewall rules, access control policies, and Internet Protocol security (IPsec) security associations are commonly linked to IP addresses. Unfortunately, when moving to the cloud, the IP addresses must be changed to accommodate the physical and topological restrictions of the data center. This renumbering of IP addresses is burdensome because all of the associated policies based on IP addresses must also be updated.
When data center network administrators plan the physical layout of the data center, they must make decisions about where subnets will be physically placed and routed in the data center. These decisions are based on IP and Ethernet technologies that are 30 years old. These technologies influence the potential IP addresses that are allowed for virtual machines running on a specific server or server blade that is connected to a specific rack in the data center. When a virtual machine is provisioned and placed in the data center, it must adhere to the choices and restrictions regarding the IP address. Therefore, the typical result is that data center administrators assign IP addresses to the virtual machines, forcing the virtual machine owners to adjust all their policies that were based on the original IP address. This renumbering overhead is so high that many enterprises deploy only new services into their cloud platform, leaving legacy applications alone.
Network virtualization in Windows Server 2012 removes the constraints of VLAN and hierarchical IP address assignment for virtual machine provisioning. It makes IaaS cloud computing easy for customers to implement, and easy for hosting providers and data center administrators to manage. In addition, network virtualization maintains the necessary multitenant isolation and security requirements. The following list summarizes the key benefits and capabilities of network virtualization in Windows Server 2012:
- Uncouples workloads from internal IP addresses. Enables customers to keep their internal IP addresses while moving workloads to shared IaaS cloud platforms. Uncoupling workloads from internal IP addresses minimizes the configuration changes that are needed for IP addresses, DNS names, security policies, and virtual machine configurations.
- Decouples server and network administration. Server workload placement is simplified because migration and placement of workloads are independent of the underlying physical network configurations. Server administrators can focus on managing services and servers, while network administrators can focus on overall network infrastructure and traffic management.
- Removes the tenant isolation dependency on VLANs. In the software-defined, policy-based data center networks, network traffic isolation is no longer dependent on VLANs, but enforced within host computers running Hyper-V based on the multitenant isolation policy. Network administrators can still use VLANs to manage traffic for the physical infrastructure where the topology is primarily static.
- Enables flexible workload placement. Allows services and workloads to be placed or migrated to any server in the data center. They can keep their IP addresses, and they are not limited to a physical IP subnet hierarchy or VLAN configurations.
- Simplifies the network and improves server and network resource utilization. The rigidity of VLANs and dependency of virtual machine placement on physical network infrastructure results in overprovisioning and underutilization. By breaking the dependency, the increased flexibility of virtual machine workload placement can simplify network management and improve server and network resource utilization.
- Works with existing infrastructure and emerging technology. Network virtualization can be deployed in current data center environments, and it is compatible with the emerging data center “flat network” technologies, such as the Transparent Interconnection of Lots of Links (TRILL) architecture that is intended to expand Ethernet topologies.
- Supports configuration by using Windows PowerShell and WMI. You can use Windows PowerShell to enable scripting and automation of administrative tasks.
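As an illustration of that PowerShell support, the NetWNV cmdlets can define the customer-to-provider address mappings that drive the isolation policy. The addresses, virtual subnet ID, and MAC address below are purely hypothetical:

```powershell
# Map a tenant VM's customer address (CA) to the physical host's
# provider address (PA) for virtual subnet 5001 (all values are examples).
New-NetVirtualizationLookupRecord -CustomerAddress "10.0.0.5" `
                                  -ProviderAddress "192.168.1.10" `
                                  -VirtualSubnetID 5001 `
                                  -MACAddress "00155D30A000" `
                                  -Rule TranslationMethodEncap

# Review the current virtualization policy on this host.
Get-NetVirtualizationLookupRecord
```

In practice these records are typically pushed by a management layer rather than entered by hand, but the same policy surface is fully scriptable.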
The Hyper-V Extensible Switch
Windows Server 2012 provides improved multitenant security for customers on a shared infrastructure as a service (IaaS) cloud through the new Hyper-V Extensible Switch. The Hyper-V Extensible Switch is a layer-2 virtual network switch that provides programmatically managed and extensible capabilities to connect virtual machines to the physical network.
Management features are built into the Hyper-V Extensible Switch that allow you to troubleshoot and resolve problems on Hyper-V Extensible Switch networks:
• Windows PowerShell and scripting support.
• Unified tracing and enhanced diagnostics.
• Open framework
The Hyper-V Extensible Switch architecture in Windows Server 2012 is an open framework that allows third parties to add new functionality such as monitoring, forwarding, and filtering into the virtual switch. Extensions are implemented by using Network Device Interface Specification (NDIS) filter drivers and Windows Filtering Platform (WFP) callout drivers. These two public Windows platforms for extending Windows networking functionality are used as follows:
- NDIS filter drivers are used to monitor or modify network packets in Windows.
- WFP callout drivers allow independent software vendors (ISVs) to create drivers to filter and modify TCP/IP packets, monitor or authorize connections, filter IP security (IPsec)–protected traffic, and filter remote procedure calls (RPCs).
Several partners announced extensions alongside the unveiling of the Hyper-V Extensible Switch, so Hyper-V is not limited to a “one switch only” solution. The benefits of the extension model include:
- Windows reliability and quality. Extensions benefit from the strength of the Windows platform and Windows Logo Program certification, which sets a high bar for extension quality.
- Unified management. The management of extensions is integrated into Windows management through Windows PowerShell cmdlets and WMI scripting, providing one management story for all.
- Easier support. Unified tracing means it’s quicker and easier to diagnose issues when they arise, and less downtime increases the availability of services.
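Because extensions surface through the standard Hyper-V cmdlets, listing and enabling them is scriptable. A sketch (the switch and extension names are illustrative):

```powershell
# List the extensions bound to a virtual switch.
Get-VMSwitchExtension -VMSwitchName "ExternalSwitch"

# Enable a third-party forwarding or filtering extension by name.
Enable-VMSwitchExtension -VMSwitchName "ExternalSwitch" `
                         -Name "Contoso Monitoring Extension"
```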
Support for SR-IOV networking devices
Single Root I/O Virtualization (SR-IOV) is a standard introduced by the PCI-SIG, the special-interest group that owns and manages Peripheral Component Interconnect (PCI) specifications as open industry standards. SR-IOV works in conjunction with system chipset support for virtualization technologies that provide remapping of interrupts and Direct Memory Access, and allows SR-IOV–capable devices to be assigned directly to a virtual machine.
Hyper-V in Windows Server 2012 enables support for SR-IOV–capable network devices and allows an SR-IOV virtual function of a physical network adapter to be assigned directly to a virtual machine. This increases network throughput and reduces network latency while also reducing the host CPU overhead required for processing network traffic.
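A minimal sketch of enabling SR-IOV with the Hyper-V cmdlets, assuming an SR-IOV-capable adapter and chipset (the switch, adapter, and VM names are illustrative):

```powershell
# Create an external switch with IOV enabled (this must be chosen at creation time).
New-VMSwitch -Name "IovSwitch" -NetAdapterName "Ethernet 1" -EnableIov $true

# Give a VM's network adapter an IOV weight so it is assigned a virtual function.
Set-VMNetworkAdapter -VMName "VM1" -IovWeight 100

# Inspect the VM's network adapter to confirm the assignment.
Get-VMNetworkAdapter -VMName "VM1"
```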
RSS (Receive Side Scaling) improvements
RSS spreads network receive interrupts over multiple processors, so that a single processor isn’t required to handle all I/O interrupts, which was common with earlier versions of Windows Server. Active load balancing tracks the load on the different CPUs and transfers the interrupts as needed.
You can select which processors will be used for handling RSS requests, including processors beyond the first 64 (in additional processor groups), which allows you to take advantage of very high-end computers that have a large number of logical processors.
RSS works with in-box NIC Teaming, or Load Balancing and Failover (LBFO), addressing an issue in previous versions of Windows Server where a choice had to be made between NIC teaming and RSS. RSS also works for User Datagram Protocol (UDP) traffic, and it can be managed and debugged by using Windows Management Instrumentation (WMI) and Windows PowerShell.
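Managing RSS from PowerShell might look like this sketch (the adapter name and processor numbers are illustrative):

```powershell
# View the current RSS configuration, including the indirection table.
Get-NetAdapterRss -Name "Ethernet 1"

# Constrain RSS for this adapter to a specific processor range.
Set-NetAdapterRss -Name "Ethernet 1" -BaseProcessorNumber 2 -MaxProcessors 4

# RSS can also be toggled per adapter.
Disable-NetAdapterRss -Name "Ethernet 1"
Enable-NetAdapterRss -Name "Ethernet 1"
```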
RSC (Receive Side Coalescing) improvements
RSC improves server scalability by reducing the overhead of processing a large amount of network I/O traffic, offloading some of the work to RSC-capable network adapters. The feature applies to receive-intensive workloads in which the server receives a large number of small packets that must be combined before they can be processed. If the server’s CPU had to handle every one of these packets, the overhead would reduce the server’s performance and scale. RSC-capable NICs instead collect these small packets and coalesce them into a single larger packet before passing it to the CPU for processing, so the CPU overhead required to process the data is significantly reduced. In early testing, RSC has reduced CPU usage by up to 20 percent.
This feature can also be managed by using Windows PowerShell and WMI. The technology is generally referred to as Large Receive Offload (LRO) or Generic Receive Offload (GRO) on other operating systems and by partners.
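Managing RSC via the NetAdapter cmdlets, as a sketch (the adapter name is illustrative):

```powershell
# Check RSC capability and current state (IPv4 and IPv6 are tracked separately).
Get-NetAdapterRsc -Name "Ethernet 1"

# Turn RSC on (or off) for the adapter.
Enable-NetAdapterRsc -Name "Ethernet 1"
# Disable-NetAdapterRsc -Name "Ethernet 1"
```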
Dynamic Virtual Machine Queues (D-VMQs)
The Virtual Machine Queue (VMQ) is a hardware virtualization technology for the efficient transfer of network traffic to a virtualized host OS. A VMQ-capable NIC classifies incoming frames to be routed to a receive queue based on filters that associate the queue with a virtual machine’s virtual NIC. Each virtual machine device buffer is assigned a VMQ, which avoids needless packet copies and route lookups in the virtual switch.
Essentially, VMQ allows the host’s single network adapter to appear as multiple network adapters to the virtual machines, allowing each virtual machine its own dedicated network adapter. The result is less data in the host’s buffers and an overall performance improvement to I/O operations.
These hardware queues may be affinitized to different CPUs thus allowing for receive scaling on a per-VM NIC basis. Windows Server 2008 R2 allowed administrators to statically configure the number of processors available to process interrupts for VMQ. Without VMQ, CPU 0 would run hot with increased network traffic. With VMQ, the interrupts were spread across more processors. However, network load may vary over time. A fixed number of processors may not be suitable in all traffic regimes.
New in Windows Server 2012, Dynamic VMQ distributes incoming network traffic processing across host processors based on processor usage and network load. In times of heavy network load, Dynamic VMQ automatically recruits more processors; in times of light network load, it relinquishes those same processors. An adaptive algorithm modifies the CPU affinity of queues without removing and re-creating them, producing a better match of network load to processor use and therefore increased network performance.
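Checking and tuning VMQ from PowerShell, as a sketch (the VM name is illustrative):

```powershell
# See which adapters support VMQ and how queues are allocated.
Get-NetAdapterVmq
Get-NetAdapterVmqQueue

# Weight a VM's virtual NIC for VMQ (a weight of 0 disables VMQ for that vNIC).
Set-VMNetworkAdapter -VMName "VM1" -VmqWeight 100
```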
Quality of Service (QoS)
Windows Server 2012 includes new Quality of Service (QoS) bandwidth management features that enable hosting providers and enterprises to provide services with predictable network performance to virtual machines on a server running Hyper-V.
Windows Server 2012 supports bandwidth floors and bandwidth caps.
Windows Server 2012 also takes advantage of Data Center Bridging (DCB)–capable hardware to converge multiple types of network traffic on a single network adapter with a better level of service to each type.
With Windows PowerShell, you can configure all these new features manually or automate them in a script to manage a group of servers, regardless of whether they’re domain-joined or stand-alone, with no dependencies.
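A sketch of the weight-based Hyper-V QoS model (the switch and VM names, and the specific numbers, are illustrative):

```powershell
# Create a switch whose minimum-bandwidth mode is weight-based
# (the mode must be chosen when the switch is created).
New-VMSwitch -Name "TenantSwitch" -NetAdapterName "Ethernet 1" `
             -MinimumBandwidthMode Weight

# Guarantee a bandwidth floor (relative weight) and impose an absolute cap
# on one tenant VM's virtual network adapter.
Set-VMNetworkAdapter -VMName "VM1" -MinimumBandwidthWeight 50 `
                     -MaximumBandwidth 500MB
```

The floor is relative (weights across all vNICs on the switch), while the cap is absolute, which is why the two are expressed differently.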
Dynamic Host Configuration Protocol (DHCP) Server Failover
DHCP failover allows two DHCP servers to synchronize lease information almost instantaneously and to provide high availability of DHCP service. If one of the servers becomes unavailable, the other server assumes responsibility for servicing clients for the same subnet. Now you can also configure failover with load balancing, with client requests distributed between the two DHCP servers. DHCP Server Failover in Windows Server 2012 provides support for two DHCPv4 servers.
Administrators can deploy Windows Server 2012 DHCP servers as failover partners in either hot standby mode or load-sharing mode.
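Configuring a load-balanced failover relationship with the DhcpServer cmdlets might look like this sketch (the server names, scope, and shared secret are illustrative):

```powershell
# Create a 50/50 load-balanced failover relationship for one scope.
Add-DhcpServerv4Failover -ComputerName "dhcp1.contoso.com" `
                         -PartnerServer "dhcp2.contoso.com" `
                         -Name "dhcp1-dhcp2-failover" `
                         -ScopeId 10.10.10.0 `
                         -LoadBalancePercent 50 `
                         -SharedSecret "Secret123"

# Review the relationship.
Get-DhcpServerv4Failover -ComputerName "dhcp1.contoso.com"
```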
Domain Name System Security Extensions (DNSSEC)
Domain Name System Security Extensions (DNSSEC) is a suite of additions to Domain Name Systems (DNS) that helps protect DNS traffic from attack. By validating a digital signature attached to each DNS response, the resolver can verify the authenticity of DNS data, even from an untrusted DNS server. Specifically, DNSSEC provides origin authority, data integrity, and authenticated denial of existence.
Windows Server 2012 extends and simplifies your implementation of DNSSEC by providing a “sign and forget” operational experience.
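Signing a zone from PowerShell might look like the following sketch (the zone name is illustrative; the cmdlets are part of the Windows Server 2012 DnsServer module):

```powershell
# Sign a primary zone using the default signing settings
# ("sign and forget": key management and re-signing are automated).
Invoke-DnsServerZoneSign -ZoneName "contoso.com" -SignWithDefault -Force

# Inspect the DNSSEC settings for the zone afterwards.
Get-DnsServerDnsSecZoneSetting -ZoneName "contoso.com"
```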
IP Address Management (IPAM)
Windows Server 2012 introduces IP Address Management (IPAM), a framework for discovering, monitoring, auditing, and managing the IP address space and the associated infrastructure servers on a corporate network.
IPAM provides the following:
· Automatic IP address infrastructure discovery.
· Migration of IP address data from spreadsheets or other tools.
· Custom IP address space display, reporting, and management.
· Audit of server configuration changes and tracking of IP address usage.
· Monitoring and specific scenario-based management of Dynamic Host Configuration Protocol (DHCP) and Domain Name System (DNS) services.
Other salient aspects and features include:
· Agentless architecture.
· Distributed deployment.
· Remote management, where the IPAM console can be remote and manage any instance of IPAM server in the network.
· Support for infrastructure servers running Windows Server 2008, Windows Server 2008 R2, Windows Server 2008 R2 SP1, and Windows Server 2012.
· Backup and restore, and disaster recovery.
· Active Directory integration.
· In-box availability.
IPAM gives you a choice of two main architectures:
· Distributed, where an IPAM server is deployed at every site in an enterprise. This mode is preferred when it is important to reduce the network latency involved in managing infrastructure servers from a central location.
· Centralized, where one IPAM server is deployed for the entire enterprise. A centralized server can also be deployed alongside distributed servers, giving administrators a single console from which to visualize, monitor, and manage the entire IP address space of the network and the associated infrastructure servers.
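Because the architecture is agentless, managed servers are provisioned through Group Policy; a sketch using the in-box cmdlet (the domain, GPO prefix, and server name are illustrative):

```powershell
# Create the Group Policy Objects that let the IPAM server manage
# DHCP, DNS, and domain controller servers in the target domain.
Invoke-IpamGpoProvisioning -Domain "contoso.com" `
                           -GpoPrefixName "IPAM1" `
                           -IpamServerFqdn "ipam1.contoso.com"
```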
DirectAccess – more secure and efficient remote access
Increasing numbers of employees are working remotely, but they are still expected to maintain a high level of productivity. This expectation increases the need for remote users to have more secure remote access to corporate networks.
Introduced in Windows 7 and Windows Server 2008 R2, DirectAccess enables remote users to securely access shared resources, websites, and applications on an internal network without connecting to a virtual private network (VPN). DirectAccess establishes bidirectional connectivity with an internal network every time a DirectAccess-enabled computer is connected to the Internet. Users never have to think about connecting to the internal network, and IT administrators can manage remote computers outside the office, even when the computers are not connected to the VPN.
Unified remote access
With Windows Server 2012, DirectAccess and VPN can be configured together in the Remote Access Management console by using a single wizard. This allows the enterprise to deploy a single solution that meets the needs of a heterogeneous client environment encompassing Windows and non-Windows devices, both managed and unmanaged. The new role allows easier migration of existing RRAS and DirectAccess deployments and provides several new features and improvements.
Some of the notable improvements to DirectAccess include:
- Simplified DirectAccess deployment for small and medium organizations: a new wizard-based deployment experience, targeted at IT generalists, allows DirectAccess to be deployed in just a few clicks.
- Removal of PKI deployment as a DirectAccess prerequisite.
- Built-in NAT64 and DNS64 support for accessing IPv4-only resources: DirectAccess can now be deployed without any changes needed to enterprise infrastructure. IPv6 is not required for DirectAccess.
- Support for new deployment topologies: enterprises can now deploy the DirectAccess server behind a NAT device or on a server with a single network adapter.
- Support for additional two-factor authentication mechanisms: built-in support for one-time passwords and TPM-based virtual smart card authentication.
- Multisite support: allows a roaming client to connect through the closest server to gain access to corporate resources. Clients also automatically fail over to available entry points in failure cases.
- Monitoring and reporting: a rich set of experiences that allows an administrator to monitor user activity as well as server health. Historical reporting capabilities are also included.
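A minimal sketch of a simplified DirectAccess deployment with the RemoteAccess cmdlets (the public name is illustrative):

```powershell
# Deploy DirectAccess with default settings behind a single public name
# (the simplified, wizard-equivalent path; no PKI deployment required).
Install-RemoteAccess -DAInstallType FullInstall `
                     -ConnectToAddress "da.contoso.com"

# Check the overall configuration and health afterwards.
Get-RemoteAccess
Get-RemoteAccessHealth
```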
BranchCache
BranchCache is improved in Windows 8 and Windows Server 2012, with a streamlined deployment process and an improved ability to optimize bandwidth over WAN connections between BranchCache-enabled content servers and remote client computers. This functionality lets remote client computers access data and run applications in a more secure, efficient, and scalable way.
To optimize WAN bandwidth, BranchCache downloads content from your content servers and caches it at branch office locations, letting client computers at the branch office access it locally.
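Setting up BranchCache from PowerShell might look like this sketch (the choice between hosted and distributed mode depends on the branch topology):

```powershell
# Install the BranchCache feature on the content server.
Install-WindowsFeature BranchCache

# On a branch-office server, enable hosted cache mode; smaller branches
# without a local server use distributed cache mode on the clients instead.
Enable-BCHostedServer
# Enable-BCDistributed   # client-side alternative

# Verify the current BranchCache status.
Get-BCStatus
```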
Windows Server 2012 offers higher scalability and improved performance to compensate for high latency and variable network conditions in both physical and virtual environments, resulting in the following:
• Predictability. Windows Server 2012 responds in a predictable manner to changing I/O.
• Scalability. Windows Server 2012 reduces latency to handle applications with high I/O operations per second (IOPS) requirements.
• Low latency. Windows Server 2012 decreases the amount of end-to-end transaction processing that’s necessary for critical applications.
• Reliability. Windows Server 2012 offers new ways to reduce network downtime while offering enhanced performance.
Look to upcoming blogs for a discussion of various Partner implementations of these new and improved Networking features in Windows Server 2012.
PaCE Partner Readiness Team