Planning for a Private Cloud – Part 1

There are pros and cons to every job in every company. In my role as an IT Pro Evangelist (ITE) here at Microsoft, I get to use the latest and greatest versions of our software. Since my role is a technical one, I also get paid to become technically adept with upcoming products, not just the currently released versions. I get hardware that allows me to create simple, small test environments to learn the technologies. I am also not on call for technical support issues and am not responsible for production systems.

Since I’m not dealing with production environments, I no longer deal with physical servers or have to worry about production configurations for 99.9+% uptime. I stopped following “best practices” when configuring servers because, in my learning lab and for demos, it is a lot easier to use the built-in Administrator account for all my work. And since I’m lazy, I keep my password simple (I’m not worried about hackers) and don’t worry about expiration.

The Private Cloud (and Cloud in general) is becoming more and more mainstream, so I really needed to go beyond the basics myself. I quickly realized that the laptops I have are not sufficient to build out anything beyond a basic private cloud environment. I thought I would share my experiences over the course of a few blog posts. I will start with the planning phase (non-technical), followed by our final solution (technical details), and finally what we actually did to build it out.

In this post, I will walk through my thought process around the options. I know it is not truly technical in nature, but it is still something all IT Pros encounter at some point.

The first option I considered was to implement only the components needed for the heart of a private cloud (virtualization, self-service, elasticity, etc.). For this, I would only need a few servers and storage to install and configure Windows Server 2008 R2 with Hyper-V, System Center Virtual Machine Manager 2008 R2, the Self-Service Portal 2.0, and System Center Operations Manager 2007 R2. This would have been the cheapest option, but it would not give us the flexibility to do other things.

The second option was to implement a full network infrastructure for a fictitious company and implement a private cloud solution as well. With this option, I envisioned a fully functional infrastructure that any mid-size company would need to do business (AD, email, file/print services, remote desktop, collaboration technologies like SharePoint, SQL Server, etc.). Since we would be using our work laptops to connect to this environment, we would be limited in what we could demo. I decided a VDI solution would be a great fit, since it would give us many domain-joined desktops in the environment that we could connect to without having to dual boot our laptops. This is the option I decided to pursue.

The next step was to decide whether to rent/lease or purchase the hardware. Within Microsoft, there are options for renting servers and/or rack space in a datacenter for our purposes. Needless to say, the costs were a lot higher than I expected. I happened to discuss this with Ed Horley, an MVP out of the Bay Area. He spoke with his company, and they offered to host our servers free of charge. Now, that is something that piqued my interest. Since I had already done the research on the cost of purchasing our own hardware versus renting, I knew it would be much cheaper for us to purchase and accept the offer from Ed’s company (GroupWare Technology) to host our hardware.

Despite what people may think about Microsoft, we do not have unlimited funds, and I had to maximize the hardware within my budget. I went ahead and purchased five Dell R710 servers, each with 48 GB of RAM and dual Intel L5640 six-core processors. I also went with SATA drives, since that allowed me to maximize storage capacity and stay within my budget. We had numerous discussions among the 12 ITEs about storage configurations and options. I made the executive decision (cost related) to use one of the servers as our iSCSI target for storage, which also gives us another server we can put into the mix if the need arises. I would have loved to create a fully production-ready environment, but given what we wanted to do and the cost constraints, I had to decide not to follow best practices for our environment.

We will run most of the infrastructure as virtual machines, but some components will run on bare metal. Some of the bare metal servers will also be dual-purpose (for example, serving as both a Hyper-V host and an iSCSI target). We also will not have a backup solution, even though I would never skip one in true production.

We have already started building out our environment and documenting the process. My peer in the Central Region, Kevin Remde, has already written a blog post about our initial experience, and I will build on that in future posts.

Harold Wong