Security in the Private Cloud

Shelly Bird, Deployment Solution Architect, Microsoft Public Sector

Dispatch from the NIST Security Conference in Baltimore, MD: on September 28th, a panel of experts discussed “Security in the Private Cloud” with the audience. It was a pretty interesting exchange, as the packed room directly challenged the experts, asking about the dangers of consolidation and what it will take to create Private Clouds responsibly.


The Panel:

Mark Ryland, Microsoft
Mischel Kwon, RSA
Jen Nowell, Symantec
Steve Orrin, Intel

Mark kicked off the discussion with a description of how he sees the landscape today: a convergence of the massive scale-out of large, non-stop web services like search engines with the wave of server virtualization in our customers’ Data Centers. Although the technologies and purposes of scale-out web services and datacenter virtualization were quite different, Amazon hit a sweet spot and pioneered the public cloud business by combining the two concepts in Amazon Web Services. Essentially, Amazon’s compute model is based upon self-provisioned VMs, while its storage model is based on scaled-out web technologies.

He described how this is driving data centers toward a set of portals and a model that could be described as “Provision Computing”. In fact, he argued that the difference between a virtualized datacenter and a private cloud is whether or not “users” (think: departmental administrators) can self-provision and de-provision computing resources, with automated charge-back to their departments. We’ve all heard the phrase “Re-provision, not Repair”, and I’ve even heard “disposable computing”.
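To make that distinction concrete, here is a minimal Python sketch of self-provisioning with automated charge-back, the kind of thing a departmental portal would do behind the scenes. The `CloudPortal` class, its method names, and the flat hourly rate are all hypothetical, invented for illustration; real private cloud stacks expose this through their own management APIs.

```python
import uuid
from datetime import datetime, timezone

# Hypothetical sketch of the self-service model described above:
# departmental admins provision and de-provision VMs themselves,
# and every hour of runtime is charged back to their department.

HOURLY_RATE = 0.12  # illustrative flat rate per VM-hour, not a real price

class CloudPortal:
    def __init__(self):
        self.vms = {}      # vm_id -> (department, started_at)
        self.charges = {}  # department -> accumulated cost

    def provision(self, department, template="standard-web"):
        """Spin up a VM from a standard template; no ticket, no operator."""
        vm_id = str(uuid.uuid4())
        self.vms[vm_id] = (department, datetime.now(timezone.utc))
        print(f"{department}: provisioned {template} VM {vm_id}")
        return vm_id

    def deprovision(self, vm_id):
        """Tear the VM down and charge the owning department for its runtime."""
        department, started_at = self.vms.pop(vm_id)
        hours = (datetime.now(timezone.utc) - started_at).total_seconds() / 3600
        cost = hours * HOURLY_RATE
        self.charges[department] = self.charges.get(department, 0.0) + cost
        print(f"{department}: de-provisioned {vm_id}, charged ${cost:.4f}")

portal = CloudPortal()
vm = portal.provision("Finance")
portal.deprovision(vm)  # "Re-provision, not Repair": disposal is routine
```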

In short: dynamic build-up and tear-down is where the real savings and economics for doing all this (and affording it) will be found. One interesting statistic from Microsoft’s Data Center studies: we figure we can build and support 200,000 servers at roughly one-tenth the per-server cost of supporting 2,000. All the panelists agreed that the cost savings are undeniable and that, like it or not, systems are going to move into the cloud.

Finally, Mark noted that attacks on the hypervisor and other host support components are relatively new and important to future security plans. It made me realize we all still have a lot to do to support the Provision Computing portion of the equation: automating and speeding the delivery of the master systems, and making access to them smooth and easy. Automation was mentioned by several of the panelists as key to doing any of this effectively. However, my experience has been that a lot of servers are still hand-built, custom-crafted in our customers’ Data Centers. It will be a big cultural change for a lot of server administrators out there.

We have long pushed the benefits of automating builds. Microsoft Consulting Services (MCS) uses “Hydration” engines to build our Proofs of Concept, often targeting virtual environments, and is now using an automation tool called Opalis to build up whole series of servers in Hyper-V. Next stop: MCS Public Sector is working on injecting important government mandates for security settings and configurations into this process. However, it is clear the Private Cloud is forcing all of us to take this to a new level.
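To give a flavor of what injecting those mandates into an automated build might look like, here is a minimal Python sketch of a build step that applies a security baseline and refuses to finish if anything was skipped. The baseline entries and the `apply_setting` helper are stand-ins I made up; the real work happens inside tools like Opalis and the Hyper-V management stack, not a script like this.

```python
# Hypothetical sketch: folding mandated security settings into an
# automated build, so every server comes off the line pre-hardened.
# The baseline below is illustrative, not an actual FDCC/USGCB policy.

SECURITY_BASELINE = {
    "PasswordComplexity": "enabled",
    "MinimumPasswordLength": 12,
    "FirewallDomainProfile": "on",
    "RemoteRegistry": "disabled",
}

def apply_setting(server, name, value):
    # Stand-in for whatever the orchestration tool actually invokes
    # (an Opalis workflow activity, WMI, Group Policy, and so on).
    print(f"[{server}] set {name} = {value}")

def hydrate(server, baseline=SECURITY_BASELINE):
    """Build step: apply every mandated setting, then verify none were skipped."""
    applied = set()
    for name, value in baseline.items():
        apply_setting(server, name, value)
        applied.add(name)
    missing = set(baseline) - applied
    assert not missing, f"build of {server} is non-compliant: {missing}"
    print(f"[{server}] baseline applied: {len(applied)} settings")

for server in ("web-01", "web-02", "sql-01"):
    hydrate(server)
```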

Mischel was up next, and she was extremely direct in her introduction, describing how automation in the Private Cloud may not prove easy, although it could reduce the number of images one has to manage, which is always a good cost cutter. She noted that in the Cloud, all systems will require continuous monitoring; the good news is that this is much easier to do in the Cloud than in the scattered environments that exist today. Interestingly, she spoke about the contractual problems of building a Private Cloud: for instance, inserting the proper legal text defining SLAs and policies to force compliance. Inclusion of a dashboard with continual incident reporting and tracking would be critical to success, she added.
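Her dashboard point is easy to make concrete. Here is a minimal Python sketch of continual incident tracking against a contractual SLA; the 30-minute response window and the `Incident` record are invented for illustration, not drawn from any real contract or product.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical sketch of continual incident tracking against an SLA.
# The 30-minute response target is illustrative, not a real contract term.

SLA_RESPONSE = timedelta(minutes=30)

@dataclass
class Incident:
    id: str
    opened: datetime
    acknowledged: Optional[datetime] = None  # None means nobody has responded yet

def sla_breaches(incidents, now):
    """Yield IDs of incidents not acknowledged within the response window."""
    for inc in incidents:
        responded = inc.acknowledged or now
        if responded - inc.opened > SLA_RESPONSE:
            yield inc.id

now = datetime(2010, 9, 28, 12, 0)
log = [
    Incident("INC-001", datetime(2010, 9, 28, 10, 0),
             acknowledged=datetime(2010, 9, 28, 10, 10)),  # handled within SLA
    Incident("INC-002", datetime(2010, 9, 28, 11, 0)),     # open for an hour
]
print(list(sla_breaches(log, now)))  # -> ['INC-002']
```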

Mischel was basically asking the audience at one point: “Would you know if you got attacked?” This is a great question. I have seen a lot of customers boast that they have had no attacks, and this is always a red flag. I personally prefer my Security Team to be nervous and jumpy, unable to sleep at night. If they are complacent and constantly claiming victory, they probably aren’t doing the job.

And while Mischel did not directly say that doing P2V (physical-to-virtual) migrations wholesale is unwise, I believe it was implied when she answered a question from the audience about the potential dangers of having all your eggs in one basket. All the panelists acknowledged it could be a danger, but Mischel pointed out that the situation as it stands in current environments is not good either, and that this is “an opportunity”. An opportunity to get it right.

From my admittedly narrow deployment-oriented frame of mind (all I do is mass deployments of desktops and servers to large Public Sector customers), I immediately thought of the implications of doing it wrong. Doing it right means getting it right from the get-go, and nailing down the original build process for the images and the virtual templates is absolutely critical to that. As we well know from monitoring physical systems, it is hard to monitor what you don’t understand: if you don’t know the normal state of a system, you have little idea what to look for or what should raise an alert. Why should it be different with virtuals? Look at how Amazon, Google, and Microsoft run their Data Centers: it is all about standards. Mischel is spot on; this is a critical moment.
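To illustrate that point, here is a minimal Python sketch of drift monitoring: record the template’s expected state at build time, then diff the running VM against it. The baseline values and the `observe` stand-in are invented for illustration; a real agent or scanner would supply the actual state.

```python
# Hypothetical sketch: if the image build is nailed down, "normal"
# is a recorded artifact, and monitoring reduces to diffing against it.

TEMPLATE_BASELINE = {      # captured when the virtual template was built
    "open_ports": {80, 443},
    "services": {"w3svc", "eventlog"},
    "admin_accounts": {"deploy-svc"},
}

def observe(vm_id):
    # Stand-in for a real agent or scanner query; one port has drifted here.
    return {
        "open_ports": {80, 443, 3389},
        "services": {"w3svc", "eventlog"},
        "admin_accounts": {"deploy-svc"},
    }

def drift_alerts(vm_id):
    """Yield an alert for anything the running VM has that the template didn't."""
    actual = observe(vm_id)
    for key, expected in TEMPLATE_BASELINE.items():
        unexpected = actual[key] - expected
        if unexpected:
            yield f"ALERT {vm_id}: unexpected {key}: {sorted(unexpected)}"

for alert in drift_alerts("web-01"):
    print(alert)  # -> ALERT web-01: unexpected open_ports: [3389]
```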

Jen Nowell of Symantec agreed that monitoring is the area where the Cloud could really pay off for security, noting that for the first time one can really watch and study what is happening; Symantec is excited at the prospect of getting better data to improve defenses (look at our Security Intelligence Reports here for great information). Again, the emphasis was on monitoring tools that continually pull systems back into their intended line of operation.

Steve Orrin of Intel reminded us that the Private Cloud is a matter of trust, and establishing that trust is a tricky thing. He drew out the fact that, above all, we need to be able to trust the portability of data and applications: scaling is limited unless we can port workloads across boundaries. It appears everyone on the panel accepts that a hybrid model is inevitable for a period. No giant leaps were predicted here, although the panel was unanimous that the drive towards the Private Cloud is unrelenting and inevitable.

Orrin sees standards such as SCAP as essential for instrumenting across these boundaries. One of the things we saw with SCAP’s first large implementation, when SCAP data was issued with the Federal Desktop Core Configuration (FDCC), was how it helped security scanner vendors focus on improving their engines instead of interpreting what NIST, NSA, or DISA wanted. What I’ve loved about SCAP (not sure people often talk about loving SCAP, but there you go) is that it places the management of the benchmarks where they belong: with the standards bodies, such as NIST. Before SCAP, there was a lot of trouble with the interpretation of paper policies. And I’ve watched it enhance, in a healthy way, the competition among security scanning software companies.
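To show why machine-readable benchmarks matter, here is a drastically simplified Python sketch of the SCAP idea: the standards body ships the checks as data, and every scanner evaluates the same content. The XML below is a made-up miniature, not the real XCCDF/OVAL schema, and the scan results are fabricated for the example.

```python
import xml.etree.ElementTree as ET

# Drastically simplified stand-in for SCAP content; real XCCDF/OVAL
# schemas are far richer, but the principle is the same: the standards
# body ships the checks, and every scanner evaluates identical content.

BENCHMARK = """
<benchmark id="example-baseline">
  <rule id="min-pwd-len" setting="MinimumPasswordLength" expect="12"/>
  <rule id="guest-off"   setting="GuestAccount"          expect="disabled"/>
</benchmark>
"""

ACTUAL = {"MinimumPasswordLength": "8", "GuestAccount": "disabled"}  # fake scan data

def evaluate(benchmark_xml, actual):
    """Yield (rule id, pass/fail) for each rule in the benchmark."""
    root = ET.fromstring(benchmark_xml)
    for rule in root.findall("rule"):
        setting, expect = rule.get("setting"), rule.get("expect")
        ok = actual.get(setting) == expect
        yield rule.get("id"), "pass" if ok else "fail"

for rule_id, result in evaluate(BENCHMARK, ACTUAL):
    print(f"{rule_id}: {result}")  # -> min-pwd-len: fail, guest-off: pass
```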

The first question from the audience made everyone laugh: what about the potential for “VM Sprawl” simply making our current situation worse, with more of the same problems we have today? This is when Mischel talked about the opportunity to do it right. Orrin cited a statistic that studies show most organizations know about only 66% of their physical assets today. Mark called out the fact that provisioning and re-provisioning will be key to managing all of this: not only doing it rapidly, automatically, and remotely, but doing it across every device or asset out there, whether that is re-loading Droid phones and iPhones, or allowing an untrusted laptop temporary access to required files on your intranet via IPsec. Message: it is time to reset.

Other questions forced the panel to admit that Security in the Cloud is evolving, and that we are likely to stumble several times. In the end, though, nobody is saying it can’t be done. It has to be done, and done well. Automation will be key to this.
