Now that our storage is straightened out, it’s time to set our sights on network gear. As was mentioned earlier, this entire project is operating as an independent entity from Microsoft’s normal IT operations. There’s no shared network, authentication, or any kind of IT services at all. We’ll simply be another waypoint on the Internet, which just happens to be hosted in a datacenter. That means we need to own all of our network equipment.
Unlike our storage clusters, we considered it highly desirable to make the network a single-vendor solution, both for maximum compatibility and to avoid the overhead of learning multiple CLIs and management tools. After all, while many of us are fabric specialists with plenty of experience configuring hardware and networks, we're not network engineers in the traditional sense. That means we will occasionally need to rely on others to assist with our network, so we had to consider where that expertise lies.
In addition, it was important to keep an eye on the direction of the industry and the direction Microsoft, and the WSSC org in general, is moving. An example is the Open Management Infrastructure (OMI) initiative. Microsoft strongly believes a common management interface is critical to the success of SDN in the market. Using devices with OMI support lets us leverage features in Windows Server and System Center for configuration and reporting, and makes our environment a good test bed for future development of our products.
Taken together, those considerations led to one network vendor: Cisco. Besides being a long-time leader in networking, Cisco has a large base of expertise inside and outside Microsoft for us to draw on. On top of that, its excellent Nexus line supports OMI in the particular devices we were considering, making this an ideal marriage.
In the next few posts, we'll cover the particular models we invested in and explain why they were chosen for their roles.