In today’s installment of the Demystifying… series, we look at:
Part 5: Datacenter Hardware Technologies
Terms in this tip include:
- Converged Network Adapters (CNA)
- Remote Direct Memory Access (RDMA)
- RDMA over Converged Ethernet (RoCE)
- Datacenter Bridging
- Virtual Machine Queue (VMQ)
- Load Balancing and Failover (LBFO)
- Switch Embedded Teaming (SET)
Converged Network Adapters – A Converged Network Adapter (CNA), also called a converged network interface controller (C-NIC), combines two different functions in a single piece of hardware: the functionality of a Fibre Channel host bus adapter (HBA) for connecting a server to a storage area network (SAN), and an Ethernet interface for connecting the same server to a local area network (LAN). In other words, it “converges” access to a storage network and a general-purpose computer network in one device.
Remote Direct Memory Access (RDMA) – Implemented in hardware, Remote Direct Memory Access (RDMA) supports zero-copy networking by enabling the network adapter to transfer data directly to or from application memory, eliminating the need to copy data between application memory and the data buffers in the operating system. Such transfers require no work from the CPU, no cache involvement, and no context switches, and they proceed in parallel with other system operations. When an application performs an RDMA Read or Write request, the application data is delivered directly to the network, reducing latency and enabling fast message transfer.
The high-throughput, low-latency communication provided by RDMA makes it ideal for network-intensive applications like networked storage or cluster computing.
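On Windows Server, you can check whether your adapters expose RDMA with a couple of built-in cmdlets. A minimal sketch (the adapter name "NIC1" is a placeholder for one of your own adapters):

```powershell
# List adapters and whether RDMA is enabled on each
Get-NetAdapterRdma

# Enable RDMA on a specific RDMA-capable adapter
Enable-NetAdapterRdma -Name "NIC1"
```

If `Get-NetAdapterRdma` shows no adapters, the installed NICs (or their drivers) do not support RDMA.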
RDMA over Converged Ethernet (RoCE) – RDMA over Converged Ethernet (RoCE) is a network protocol that allows remote direct memory access (RDMA) over an Ethernet network. Two RoCE versions exist: RoCE v1 and RoCE v2. RoCE v1 uses Ethernet as its link-layer protocol, so it only allows communication between hosts in the same Ethernet broadcast domain, while RoCE v2 runs RDMA on top of UDP/IP and can therefore be routed between subnets.
Datacenter Bridging (DCB) – Data Center Bridging (DCB) is a suite of Institute of Electrical and Electronics Engineers (IEEE) standards that enable converged fabrics in the data center, where storage, data networking, cluster IPC, and management traffic all share the same Ethernet network infrastructure. DCB provides hardware-based bandwidth allocation to a specific type of traffic and enhances Ethernet transport reliability with the use of priority-based flow control. Hardware-based bandwidth allocation is essential if traffic bypasses the operating system and is offloaded to a converged network adapter, which might support Internet Small Computer System Interface (iSCSI), Remote Direct Memory Access (RDMA) over Converged Ethernet, or Fibre Channel over Ethernet (FCoE). Priority-based flow control is essential if the upper-layer protocol, such as Fibre Channel, assumes a lossless underlying transport.
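As a sketch of what DCB configuration looks like in practice on Windows Server: tag one class of traffic with an 802.1p priority, turn on priority-based flow control for that priority, and reserve a slice of bandwidth for it. The example below classifies SMB traffic (TCP port 445) on priority 3 with a 50% bandwidth reservation — the policy name, priority value, and percentage are illustrative choices, not requirements:

```powershell
# Install the DCB feature
Install-WindowsFeature -Name Data-Center-Bridging

# Tag SMB (port 445) traffic with 802.1p priority 3
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Enable priority-based flow control (lossless behavior) for priority 3
Enable-NetQosFlowControl -Priority 3

# Reserve 50% of bandwidth for priority 3 using Enhanced Transmission Selection
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

# Apply the DCB settings to a specific adapter ("NIC1" is a placeholder)
Enable-NetAdapterQos -Name "NIC1"
```

The switch ports the adapter connects to must be configured with matching PFC and ETS settings for this to take effect end to end.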
Virtual Machine Queue (VMQ) – Virtual Machine Queue (VMQ) is a feature available on computers running Windows Server with the Hyper-V server role installed and VMQ-capable network hardware. VMQ uses hardware packet filtering to deliver packet data from an external virtual machine network directly to virtual machines, which reduces the overhead of routing packets and copying them from the management operating system to the virtual machine.
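A quick way to see whether your NICs support VMQ, sketched with the built-in cmdlets (the adapter name "NIC1" is a placeholder):

```powershell
# Show VMQ capability and status for each adapter
Get-NetAdapterVmq

# Enable VMQ on a capable adapter
Enable-NetAdapterVmq -Name "NIC1"
```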
Load Balancing and Failover (LBFO) – Load Balancing and Failover logically combines multiple network adapters to provide bandwidth aggregation and traffic failover, preventing connectivity loss in the event of a network component failure. Load Balancing and Failover is also known as NIC Teaming in Windows Server 2012 / 2016.
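Creating an LBFO team takes a single cmdlet. A minimal sketch — the team and adapter names are placeholders, and the teaming mode and load-balancing algorithm shown are just one common combination:

```powershell
# Create a switch-independent NIC team from two physical adapters
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Inspect the resulting team
Get-NetLbfoTeam -Name "Team1"
```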
Switch Embedded Teaming (SET) – Switch Embedded Teaming (SET) is an alternative NIC Teaming solution that you can use in environments that include Hyper-V and the Software Defined Networking (SDN) stack in Windows Server 2016. SET integrates some NIC Teaming functionality into the Hyper-V Virtual Switch.
SET allows you to group between one and eight physical Ethernet network adapters into one or more software-based virtual network adapters. These virtual network adapters provide fast performance and fault tolerance in the event of a network adapter failure.
SET member network adapters must all be installed in the same physical Hyper-V host to be placed in a team.
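Unlike LBFO, SET has no separate team object; the team is created as part of the Hyper-V virtual switch itself. A minimal sketch (switch and adapter names are placeholders):

```powershell
# Create a Hyper-V virtual switch with Switch Embedded Teaming
# across two physical adapters
New-VMSwitch -Name "SETSwitch" -NetAdapterName "NIC1","NIC2" `
    -EnableEmbeddedTeaming $true

# Inspect the embedded team
Get-VMSwitchTeam -Name "SETSwitch"
```

Passing more than one adapter to `-NetAdapterName` enables embedded teaming implicitly; the explicit flag makes the intent clear even with a single member.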