Enough introduction, on to the design! Our first step was to decide on the configuration of our fabric: the network, storage, and I/O paths needed for cloud computing. You’ll find we made some interesting decisions here, but then this is a real project with a real budget and timeline, not an ideal reference architecture. Like many of you, we had to make tradeoffs for the sake of schedule and budget while staying true to our business goals.
Thankfully, in our data center many basic services are managed for us by the on-site facilities team and other IT service groups. Power, cooling, cabling, authentication, and other staples are a given. While we can request changes for our special needs, we don’t have to worry about the lowest levels of functionality. That leaves a few critical decisions in our hands:
Virtualization: We start with a given. Our service would be built using virtual machines running on Windows Server 2012 Hyper-V. Near-bare-metal performance and the operational flexibility of Shared-Nothing Live Migration made this decision a no-brainer; building a dynamic service directly on the hardware would have been foolish in our situation. Our hardware consists of existing assets that gain new life as hosts for this environment.
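As a taste of that flexibility, here is a minimal sketch of moving a running VM (and its storage) to another host with no shared storage between them. Host, VM, and path names are hypothetical; it assumes the Hyper-V PowerShell module and that both hosts have live migration enabled and the right delegation in place.

```powershell
# Allow this host to send/receive live migrations (one-time setup).
Enable-VMMigration
Set-VMHost -UseAnyNetworkForMigration $true

# Shared-Nothing Live Migration: move the VM and its virtual disks
# to HOST02 while the VM keeps running. Names here are hypothetical.
Move-VM -Name "web01" -DestinationHost "HOST02" `
    -IncludeStorage -DestinationStoragePath "D:\VMs\web01"
```

The `-IncludeStorage` switch is what makes this “shared-nothing”: the VHDs stream to the destination over the network along with the VM’s state.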
Storage: This decision was a little more challenging. We had multiple options that could satisfy the needs of the project, each with its own pros and cons. Traditional SAN configurations are reliable and high-performing, but come with a steep price tag. iSCSI brings commodity connections like Gigabit Ethernet to the table, keeping costs down, but often at the cost of performance and with additional configuration headaches. Finally, with the introduction of SMB 3.0 we now have a scalable, reliable file protocol for running VMs and SQL databases on a wide variety of storage hardware. My glowing adjectives probably give away which way we were leaning, but we’ll spend a lot more time on the “why” and the benefits later.
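To make the SMB 3.0 option concrete, here is a rough sketch of the two halves involved: publishing a share on a file server, then pointing a new VM’s files at it. Server, share, and account names are all hypothetical, and in practice the NTFS permissions on the folder must also grant the Hyper-V host computer accounts access.

```powershell
# On the file server: create a share for VM storage, granting the
# (hypothetical) Hyper-V host and admin groups full access.
New-SmbShare -Name "VMStore" -Path "E:\Shares\VMStore" `
    -FullAccess "CONTOSO\HyperVHosts", "CONTOSO\HyperVAdmins"

# On a Hyper-V host: place a new VM's configuration and disks
# directly on the SMB 3.0 share via its UNC path.
New-VM -Name "web01" -MemoryStartupBytes 2GB -Path "\\FS01\VMStore"
```

The notable part is that `-Path` takes a plain UNC path; no iSCSI initiator or SAN zoning is involved.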
Networking: While the network plant (wires, switches, routers) is taken care of for us, the decision about how best to configure those connections lands on us. Networks for our VMs, host management, storage traffic, reliability, scale: all of these were factors we had to weigh when designing the host environment.
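One common way to juggle those competing traffic types on a Windows Server 2012 host is a converged design: team the physical NICs, attach a virtual switch, and carve out host-side virtual adapters per traffic class. This is only an illustrative sketch, not our final design; adapter, team, and switch names are hypothetical, and bandwidth weights are placeholder values.

```powershell
# Team two (hypothetical) physical NICs for redundancy and throughput.
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1", "NIC2"

# Bind a Hyper-V virtual switch to the team; weight-based QoS mode
# lets us share bandwidth among traffic classes.
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "Team1" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Host-side virtual NICs for management and live migration traffic.
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"

# Reserve a relative share of bandwidth for live migration (placeholder weight).
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40
```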
The next few posts will detail the decisions we made in each area, and how we ended up configuring them for our environment.