Guest Post: Supercharging private cloud efficiencies by Andy Hawkins

by Andy Hawkins, Product Manager at 1E

Organizations are continuing to face challenges in the data center. It’s true that virtualization has succeeded in addressing power and space concerns, but at most it has only reduced spend marginally. Indeed, many companies are finding it hard to recognize ROI because sprawl is generating new VMs all the time; some are sanctioned, others rogue, and telling the difference is hard.

Most organizations have tried throwing people at the problem, spinning up teams from staff who were already busy just keeping IT running. Tidying up the mess and removing unused, unlicensed, and unauthorized VMs typically gets deprioritized; it is an easy problem to hide with virtualization.

Setting up a private cloud, or aspects of it, means IT departments need to implement an agile approach to provisioning services for users. Self-service is key: it has become too easy for end users to turn to public cloud solutions when they can’t find what they need internally, and that creates a whole other set of issues.

System Center 2012 provides much of this agility and automation with products such as Orchestrator and Virtual Machine Manager. These enable businesses to build, deploy, and maintain a private cloud, and the soon-to-be-released Hyper-V 3.0 in Windows Server 2012 makes the Microsoft private cloud solution particularly compelling.

Without monitoring and smart analytics, it is easy for self-service provisioning to spiral out of control and become a costly problem, particularly from a licensing perspective. The dilemma is how to stop that sprawl. System Center Operations Manager, with its ability to monitor availability (is it working?) and performance (is it working well?), is useful but not geared towards this problem specifically. We propose extending its functionality with an extra dimension to achieve greater efficiency and tangible savings.

We see best practice as providing intelligence about actual server usage. There’s no point pestering a user about reclaiming a license they are actively using; if they are not using it, however, you want to spot where such inefficiencies lie, and from there you can optimize and reclaim licenses.

In some cases potential license liabilities are caused by users creating their own servers, installing software for a one-off or short-term purpose, and then leaving the servers running without telling the IT department they can be decommissioned or repurposed. Usage monitoring needs to run as a continuous process, because usage rates change whenever users provision a new virtual machine or, having completed a specific project, no longer need one.
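As a minimal sketch of what such a continuous process might look like, assuming hypothetical `list_vms` and `sample_activity` helpers and an arbitrary polling interval (none of which come from any 1E or Microsoft product API):

```python
import time
from datetime import datetime, timezone

POLL_INTERVAL_SECONDS = 15 * 60  # hypothetical: sample every 15 minutes

def list_vms():
    """Hypothetical helper: enumerate the VMs currently provisioned."""
    return ["vm-finance-01", "vm-build-07"]  # placeholder inventory

def sample_activity(vm_name):
    """Hypothetical helper: snapshot what the VM is doing right now."""
    return {
        "vm": vm_name,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "processes": [],  # names of the non-idle processes observed
    }

def collect_usage_snapshots(store):
    """Poll every VM on a fixed interval so the usage history stays
    current as machines are provisioned and retired."""
    while True:
        for vm in list_vms():  # re-enumerate each cycle: the estate changes
            store.append(sample_activity(vm))
        time.sleep(POLL_INTERVAL_SECONDS)
```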

Software that monitors Useful Work tells you exactly which virtual machines are safe to repurpose or decommission. It detects when a system only ever runs maintenance or systems management tasks, and when a system is doing what it was provisioned to do.

This is significantly more sophisticated than looking at average CPU utilization, which only shows that a system is active. CPU figures won’t show whether that activity is value-producing business computing or merely background administrative, errant, or stuck processes. Useful Work identifies the servers that continuously fail to do productive work, which feeds a reclaim-and-decommission process for any redundant capacity.
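To make the contrast concrete, here is a minimal sketch of one way such a classification could work, building on the snapshots collected above; the background-process list and the 5% threshold are illustrative assumptions, not 1E’s actual algorithm:

```python
# Hypothetical deny-list: processes that represent maintenance or systems
# management work rather than what the VM was provisioned to do.
BACKGROUND_PROCESSES = {"backup_agent", "av_scanner", "patch_service",
                        "monitoring_agent"}

def does_useful_work(snapshots, min_useful_fraction=0.05):
    """Classify a VM from its usage history: count a snapshot as 'useful'
    only if at least one observed process is not on the background list.
    The 5% threshold is an illustrative assumption."""
    active = [s for s in snapshots if s["processes"]]
    if not active:
        return False  # no activity ever observed: a reclaim candidate
    useful = sum(
        1 for s in active
        if any(p not in BACKGROUND_PROCESSES for p in s["processes"])
    )
    return useful / len(active) >= min_useful_fraction

# VMs that continuously fail this test feed the reclaim-and-decommission queue.
```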

For the servers that do useful work, it is important to avoid other inefficiencies: for example, a Hyper-V host running only one or two virtual machines.
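A simple check along those lines might look like the following sketch, assuming a hypothetical inventory that maps each Hyper-V host to the VMs it runs; the consolidation threshold is an illustrative assumption:

```python
def underloaded_hosts(host_inventory, min_vms=3):
    """Flag hosts running fewer VMs than a consolidation threshold.
    A threshold of 3 is an illustrative assumption; the right value
    depends on host capacity and workload size."""
    return [host for host, vms in host_inventory.items() if len(vms) < min_vms]

# Example: two hosts each running a single VM are candidates for
# consolidation onto one host, freeing the other for reuse.
inventory = {
    "hyperv-host-a": ["vm-finance-01"],
    "hyperv-host-b": ["vm-build-07"],
    "hyperv-host-c": ["vm-01", "vm-02", "vm-03", "vm-04"],
}
print(underloaded_hosts(inventory))  # -> ['hyperv-host-a', 'hyperv-host-b']
```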

This visibility provides the certainty to make changes in an environment that has traditionally been static and resistant to change. Interestingly, the one thing private cloud is really influencing is the rate of change, and virtualization in particular has been instrumental in enabling it.

Speed and agility are good, but not without appropriate controls.