Why Virtualization Is So Darn Popular

by MichaelF on August 22, 2006 03:35pm

Just returning from LinuxWorld in San Francisco, and virtualization was once again the topic du jour.  A lot of you outside the technology vendor-sphere (where we like to speak in weird acronyms and corporate buzzwords) might wonder why Microsoft and many others can't stop talking about virtualization.  Go to any IT conference today and it's highly likely there will be at least some sessions, if not a bevy of keynote speeches, on the topic.  These are usually accompanied by marketecture diagrams of Lego-block-like pictures showing different operating systems all running in some combination on top of a single physical server.  Having once worked at IBM, I'm long familiar with the idea of virtualization, often called 'logical partitioning' in IBM mainframe speak.  The reason there's so much discussion around virtualization today, however, is that it is becoming much more widely available, at a much better value, than it has been in the past.  Intel and AMD have improved their microprocessors to make them virtualization-aware (in the past, virtual machine managers had to do all sorts of silliness to work around the very virtualization-unaware x86 instruction set).  This has allowed virtual machine software developers to build powerful technologies, often called hypervisors, that can reside in the operating system itself, allowing much more efficient, reliable, and seamless virtualization of one or more guest operating systems on top of the 'host' operating system.
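
If you're curious whether a given machine has these new extensions, here's a minimal sketch (Linux-only, and assuming a kernel that reports CPU flags in /proc/cpuinfo) that checks for the Intel (vmx) and AMD (svm) virtualization flags:

```python
#!/usr/bin/env python
# Minimal sketch: detect hardware virtualization support on Linux by
# scanning /proc/cpuinfo for the Intel (vmx) or AMD (svm) CPU flags.
# Assumes a Linux host; other platforms expose this information differently.

def hw_virt_support(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                if "vmx" in flags:
                    return "Intel VT-x"
                if "svm" in flags:
                    return "AMD-V"
    return None

if __name__ == "__main__":
    print(hw_virt_support() or "no hardware virtualization extensions found")
```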

Cool science project, or is there any real use for this stuff?  Let me give you a simple example of how we've used it here in our Open Source Labs.  We provide quite a few different Linux distributions, at various version levels and on various hardware architectures, for testing and analysis – probably fifty or more all told.  Typically you would use a single server (or even a single PC) for each operating system, which would mean about fifty different machines.  Each of those machines requires power, cooling, new parts, maintenance, and so on.  The costs add up quickly; in some data centers I've seen, power and cooling can be over half of the total operations cost year over year.  In our lab, we can run almost all of these Linux distributions on one server: a four-way Opteron-based HP server with eight gigs of memory and a lot of disk.  This is for testing, so I wouldn't run this many virtual guest images on anything with heavy production workloads, but you get the idea.  Bottom line, I save money and time (particularly in systems management).
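
To make the power-and-cooling point concrete, here's a back-of-the-envelope sketch.  Every number in it is an assumption for illustration – not a measurement from our lab – but it shows why the consolidation math gets compelling fast:

```python
# Back-of-the-envelope consolidation math.  All figures below are
# assumptions for illustration, not measurements from our lab.

SERVERS_BEFORE = 50     # one physical box per Linux distribution
WATTS_PER_BOX = 300     # assumed draw per standalone test machine
WATTS_BIG_HOST = 1000   # assumed draw for one four-way Opteron host
COST_PER_KWH = 0.10     # assumed electricity rate in $/kWh
HOURS_PER_YEAR = 24 * 365

def annual_cost(watts):
    """Yearly electricity cost for a constant draw of `watts`."""
    return watts / 1000.0 * HOURS_PER_YEAR * COST_PER_KWH

before = SERVERS_BEFORE * annual_cost(WATTS_PER_BOX)
after = annual_cost(WATTS_BIG_HOST)
print("Before: $%.0f/yr   After: $%.0f/yr   Saved: $%.0f/yr"
      % (before, after, before - after))
```

And that's just electricity; the same multiplier applies to spare parts, rack space, and administrator hours.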

I've also spoken with customers who are using virtualization for disaster recovery and backup scenarios; for new deployment scenarios where a call center or branch office can be 'installed' from virtualized images in a fraction of the time of a traditional server install; and for scenarios where testing and quality assurance groups can run large, diverse, automated tests of hardware and software across dozens of operating system configurations.  IDC has forecast that 45 percent of new servers purchased this year will be virtualized.

Virtualization is a critical part of Microsoft's strategy, and we have been in this business for a while with our Virtual PC and Virtual Server 2005 products.  Today, Virtual Server 2005 R2 is available as a free download.  We've also opened up the specification of the Virtual Hard Disk (VHD) image format used by Virtual Server 2005.  You can use this specification to learn how to access (read and modify) the data stored in a Virtual PC or Virtual Server virtual hard disk.  The VHD format spec is available under a royalty-free license.
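
To give you a feel for how approachable the format is, here's a minimal sketch that reads the 512-byte footer a VHD file carries at its end and decodes a couple of fields.  The offsets follow the published spec (all multi-byte fields are big-endian); treat this as a reader's sketch, not production code:

```python
import struct

# Disk type values from the published VHD footer layout.
DISK_TYPES = {2: "fixed", 3: "dynamic", 4: "differencing"}

def read_vhd_footer(path):
    """Decode a few fields from the 512-byte footer at the end of a VHD."""
    with open(path, "rb") as f:
        f.seek(-512, 2)              # footer is the last 512 bytes of the file
        footer = f.read(512)
    if footer[0:8] != b"conectix":   # every VHD footer starts with this cookie
        raise ValueError("not a VHD file (bad footer cookie)")
    current_size = struct.unpack(">Q", footer[48:56])[0]  # virtual size in bytes
    disk_type = struct.unpack(">I", footer[60:64])[0]
    return {
        "current_size_bytes": current_size,
        "disk_type": DISK_TYPES.get(disk_type, "unknown (%d)" % disk_type),
    }

if __name__ == "__main__":
    import sys
    print(read_vhd_footer(sys.argv[1]))
```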

We are making even larger investments with our 'Viridian' hypervisor and System Center Virtual Machine Manager (code-named 'Carmine') projects.  These are, respectively, the Windows Server 'Longhorn' virtualization hypervisor and our virtualization management product.  You can download the beta of System Center Virtual Machine Manager today.  From what I've seen thus far in the development of these products, you can expect some great software from us in this area.  You may want to check out Mike Neil's post about how we announced and demoed much of this at WinHEC this year – Mike also has a link to a video of the WinHEC virtualization demos from Bill Gates' keynote.

Related to this, we recently announced an important partnership between Microsoft and XenSource.  XenSource is the company behind the open source Xen project – the leading virtualization technology on Linux.  Peter Levine, CEO of XenSource, discussed our partnership in his LinuxWorld San Francisco keynote.  Together with XenSource, we will be working on enabling great virtualization between Windows and Linux, which is significant for customers running heterogeneous environments who want to consolidate servers and take advantage of new deployment scenarios like the ones I described above.  This work will be part of our Longhorn server plans, building on our Viridian virtualization technology.  I'm personally very excited by this partnership, and it's an indicator of how we think about our long-term product roadmaps vis-à-vis interoperability.

There is a lot happening in virtualization, and I think it's one of the most important change agents in our industry.  Sure, there will be all sorts of hype – that's typical of this point in the adoption curve – but I've seen how it can save money and time in my own labs, and I've talked with customers who are finding similar advantages.  Exciting times indeed.
-bill