The Interop conference happened this week in Las Vegas (see http://www.interop.com/lasvegas), and Mellanox showcased their high-speed ConnectX-3 network adapters at the event. One interesting setup combined Windows Server 2012 Beta and SMB 3.0 to demonstrate amazing remote file performance using SMB Direct (SMB over RDMA). The short story? 5.8 Gbytes per second from a single network port. Yes, that’s gigabytes, not gigabits. Roughly one DVD per second. Crazy, right?
It’s really not a complicated setup: a single SMB Server and a single SMB client connected over one network port. The unique thing here is the combination of Intel Romley motherboards (each with two 8-core CPUs), the faster PCIe Gen3 bus, four FusionIO ioDrive 2 drives rated at 1.5 Gbytes/sec each, and the latest Mellanox InfiniBand ConnectX-3 network adapters. Here’s what the different configurations look like:
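A quick back-of-the-envelope check on the local storage side, using the per-card rating quoted above (this is illustrative arithmetic, not a vendor benchmark):

```python
# Rough ceiling on local storage throughput for this setup:
# four FusionIO ioDrive 2 cards, each rated at 1.5 Gbytes/sec.
drives = 4
rate_per_drive = 1.5  # Gbytes/sec per card (vendor rating quoted in the post)

local_ceiling = drives * rate_per_drive
print(f"Aggregate local ceiling: {local_ceiling} Gbytes/sec")  # 6.0 Gbytes/sec
```

That puts the theoretical local limit around 6 Gbytes/sec, which is why the 5.8 Gbytes/sec remote number is so striking.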
To better compare the different networking technologies, I worked with Mellanox to gather information on traditional (non-RDMA) 10Gbps Ethernet, QDR InfiniBand (32 Gbps data rate) and FDR InfiniBand (54 Gbps data rate). All were done with the same network adapter, just using different cables. You can see in the picture below the back of the server, showing the four FusionIO cards and the ConnectX-3 card with two cables connected (the top connector uses a QSFP to SFP+ adapter for the 10GbE SFP+ cable and the bottom one uses the InfiniBand FDR cable with a QSFP connector). Both are passive copper cables, but fiber optic versions are also available.
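For context, the link data rates above convert to peak byte rates as follows (a simple sketch using the usable data rates quoted in this post, ignoring protocol overhead):

```python
# Convert link data rates (Gbps) to approximate peak byte rates (Gbytes/sec).
# Rates are the usable data rates quoted above; protocol overhead is ignored.

def gbps_to_gbytes(gbps):
    """Gigabits/sec to Gbytes/sec (8 bits per byte)."""
    return gbps / 8

for name, rate in [("10GbE", 10), ("IB QDR", 32), ("IB FDR", 54)]:
    print(f"{name}: {gbps_to_gbytes(rate):.2f} Gbytes/sec peak")
# 10GbE: 1.25, IB QDR: 4.00, IB FDR: 6.75
```

So only the FDR link has enough raw headroom to carry what the four local drives can deliver.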
The results in the table below speak for themselves. The remote throughput is nearly identical to the local throughput for 512KB IOs, at 5,792 Mbytes/sec. The results for smaller 8KB IOs are also impressive, showing over 340,000 IOPS on the remote system. Note that these are 8KB IOs, typically used by real workloads like OLTP systems. These are not the tiny 512-byte IOs so commonly used to produce large IOPS numbers but which do not match common workloads. You also can’t miss how RDMA improves the numbers for % Privileged CPU utilization, fulfilling the promise of low CPU utilization and a low number of cycles per byte. The comparison between traditional, non-RDMA 10GbE and InfiniBand FDR for the first workload shows the most impressive contrast: over 5 times the throughput with about half the CPU utilization.
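The throughput and IOPS figures are two views of the same relationship (throughput = IOPS × IO size). A quick sanity check on the numbers above, purely illustrative:

```python
# Back-of-the-envelope check relating IOPS, IO size, and throughput.

def throughput_mbytes(iops, io_size_kb):
    """Throughput in Mbytes/sec (base 2) for a given IOPS rate and IO size."""
    return iops * io_size_kb / 1024

# 340,000 IOPS at 8KB per IO:
small_io_throughput = throughput_mbytes(340_000, 8)  # ~2,656 Mbytes/sec

# 5,792 Mbytes/sec at 512KB per IO implies roughly this many IOs per second:
large_io_iops = 5_792 * 1024 / 512  # ~11,584 IOPS

print(f"8KB workload throughput: {small_io_throughput:.0f} Mbytes/sec")
print(f"512KB workload IOPS: {large_io_iops:.0f}")
```

This is why the 8KB workload is reported in IOPS and the 512KB workload in Mbytes/sec: each metric is the interesting one at that IO size.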
Here is some of the Performance Monitor output for each configuration, for anyone looking for the nitty-gritty details (you can click on the pictures to see a larger version).
| Configuration \ Workload | 512KB IOs, 8 threads, 8 outstanding | 8KB IOs, 16 threads, 16 outstanding |
| --- | --- | --- |
| Non-RDMA (Ethernet, 10 Gbps) | (screenshot) | (screenshot) |
| RDMA (InfiniBand QDR, 32 Gbps) | (screenshot) | (screenshot) |
| RDMA (InfiniBand FDR, 54 Gbps) | (screenshot) | (screenshot) |
Note: You’ll find slight differences in the bandwidth and IOPS numbers between the two tables. The first table (with the blue background) is more accurate, since it shows a 60-second average and uses base 2 for the bandwidth (multiples of 1024). The second table (with the Performance Monitor screenshots) shows instantaneous values in base 10 (multiples of 1000).
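The base-2 vs base-10 difference alone accounts for a visible gap between the tables. A small sketch of the same byte rate expressed both ways:

```python
# The same byte rate expressed in base-2 units (multiples of 1024)
# vs base-10 units (multiples of 1000), as used by the two tables.

def to_base2_mbytes(rate_bytes):
    """Bytes/sec to Mbytes/sec using multiples of 1024."""
    return rate_bytes / (1024 * 1024)

def to_base10_mbytes(rate_bytes):
    """Bytes/sec to Mbytes/sec using multiples of 1000."""
    return rate_bytes / (1000 * 1000)

rate = 5_792 * 1024 * 1024  # 5,792 Mbytes/sec (base 2), expressed in bytes/sec

print(to_base2_mbytes(rate))   # 5792.0
print(to_base10_mbytes(rate))  # ~6073.4 — the base-10 figure looks larger
```

So a base-10 counter reading can sit roughly 5% above the base-2 figure for the exact same transfer rate, before any averaging differences.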
If you want to try this scenario in your own lab, all you need are similarly configured machines and Windows Server 2012 Beta (available as a free download). For a complete list of required hardware for the InfiniBand configuration and step-by-step instructions on how to make this happen, see this blog post on Deploying Windows Server 2012 Beta with SMB Direct (SMB over RDMA) and the Mellanox ConnectX-2/ConnectX-3 using InfiniBand – Step by Step.
I also delivered a short presentation at the conference covering this demo. The presentation is attached to this blog post (see link to the PDF file below).
Update on 5/7: Added picture of the server.
Update on 5/11: Presentation attached to this blog post.