Faking Networks

On a Windows HPC Server 2008 head node, that is...

1. No InfiniBand on the head node

In many cases people want to save themselves some money by not installing an InfiniBand adapter on the head node, thereby also sparing a port on that expensive InfiniBand switch. It makes a lot of sense, especially when you do not plan to run any calculations on that machine. So, how do we make the software believe it has an InfiniBand adapter?

The HPC management tools do not care too much about the type of connection you have, as long as they can get an IP address to communicate with. So, you can install a "loopback adapter", give it a fixed IP address and pretend it is a real network card. Of course, you will not be able to use it to communicate with the compute nodes, but if all you want to carry over IB is MPI traffic among the compute nodes, the trick will work.
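
If you prefer not to click through the Add Hardware wizard to install the Microsoft Loopback Adapter, one option, assuming you have the devcon utility from the Windows Driver Kit available on the head node, is to add it from the command line:

devcon install %windir%\inf\netloop.inf *msloop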

The only caveat is that you lose the ability to use DHCP on the InfiniBand network, hence you will have to provide a mechanism to assign fixed IP addresses for IPoIB communication. Of course, the subnet you use on the "fake" IB network and the real one must be the same.

Probably the easiest way is to write a small script that uses the netsh command and run it on all the compute nodes. You will need at least one private Ethernet network for management traffic across the cluster.

For instance, the command below will assign the IP address 192.168.3.100 and a 24-bit mask to the network connection called "Application":

netsh int ip set address "Application" static 192.168.3.100 255.255.255.0
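
A minimal sketch of such a script could look like the one below; the connection name "Application", the 192.168.3.x subnet and the way the last octet is passed in are all assumptions you will want to adapt to your own setup.

@echo off
rem setipoib.cmd - assigns a fixed IPoIB address to this compute node
rem usage: setipoib.cmd last-octet    (for example: setipoib.cmd 101)
if "%1"=="" (echo usage: setipoib.cmd last-octet & exit /b 1)
netsh int ip set address "Application" static 192.168.3.%1 255.255.255.0

Since every node needs a different address, you would run it on each compute node with its own last octet, for instance as part of a node-specific setup step.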

2. No public Ethernet

In several cases I found that the head node has only one Ethernet card. Out of the box, our HPC software prevents the use of Windows Deployment Services and DHCP unless you have at least two adapters, in order to avoid conflicts with existing deployment solutions. You may choose to install a fake "public" network on a loopback adapter and thus enable WDS on the real "private" network.
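
For instance, assuming the loopback connection is called "Local Area Connection 2" and that the 10.10.10.0/24 subnet is not used anywhere else on your network (both are placeholders), you could rename it and give it a static address like this:

netsh interface set interface name="Local Area Connection 2" newname="Public"
netsh int ip set address "Public" static 10.10.10.1 255.255.255.0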

3. No private Ethernet

Another interesting case is the one you get with many pre-built clusters, which provide one Ethernet and one InfiniBand network in the box.

Note that when you install an InfiniBand stack (e.g. WinOF 2.0), you typically get an IP-over-IB protocol provider. Thus, it is possible to use the InfiniBand network to route private cluster traffic, with the exception of deployment (no PXE boot over IB). For "heavy" MPI applications, you will want to keep the two networks separate anyway.
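
For example, assuming the IP-over-IB connection shows up as "Local Area Connection 3" and that you want the private network on 10.0.0.0/24 (again, both are placeholders), you could rename it and assign it a fixed address on each node:

netsh interface set interface name="Local Area Connection 3" newname="Private"
netsh int ip set address "Private" static 10.0.0.10 255.255.255.0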
