More virtualization goes free: more help for the environment

We've announced Windows Server 2003 DataCenter Edition R2 - which is quite a mouthful.

There are two big changes here. One is which systems can run DataCenter Edition: previously, the DataCenter High Availability programme meant the OS was only available pre-installed on qualified hardware systems, with special support arrangements. That route still exists, but now system builders and volume licence customers will be able to install it outside of the high availability programme.

Number two is that it includes licences for unlimited Windows VMs ...

Now... I was watching one of the WinHEC sessions, "Windows Server Virtualization Scenarios and Features", and I was struck by stories of customers who just can't have any more servers. There isn't space, or the power grid can't cope. That's a pretty horrendous state of affairs - not only do all those servers create a lot of CO2 emissions, but the air-con to dump the heat they create causes even more. In much of the world, for much of the year, still more fossil fuel is being burnt to heat the building.

When you check, many servers run at 10-15% utilization, and those not using SANs typically have multiple disks to get spindles for throughput and redundancy... This is a problem crying out for virtualization. The VHD files for many virtual machines can be stored on a SAN using multiple spindles but requiring fewer drives overall, so less power and cooling. A system with 4 or 8 processors doesn't use 4 or 8 times the power of a single-processor system, so even if virtualization meant one VM per processor there would be a saving. But the ratio is more likely to be 3-5 VMs per processor... or rather per core - with quad-core 64-bit processors you could be looking at 12-20 VMs per processor, and 100+ VMs on an 8-proc system. And with DataCenter supporting 1TB of RAM you could have 250 VMs with 4GB each. Note that Longhorn server's virtualization is going to be 64-bit only - so if you're planning large systems you need to look at 64-bit.

Of course, if you ran that many VMs and a node failed, the impact would be huge - you'd really need clustering; and with 8-way clustering in DataCenter you could have 1000 VMs on a cluster. You could get rid of a lot of servers that way... now if only someone would find a way of using that waste heat.
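The back-of-envelope arithmetic above can be sketched as a small calculation. All the figures here (VMs per core, RAM per VM, host sizes) are the illustrative assumptions from the post, not measurements, and real consolidation ratios will vary with workload:

```python
# Rough VM-consolidation estimate: capacity is whichever runs out
# first, CPU headroom or RAM. Figures are illustrative, per the post.

def vm_capacity(processors, cores_per_proc, vms_per_core, ram_gb, ram_per_vm_gb):
    """Return an estimated VM count for one host, limited by CPU or RAM."""
    cpu_limit = processors * cores_per_proc * vms_per_core
    ram_limit = ram_gb // ram_per_vm_gb
    return min(cpu_limit, ram_limit)

# An 8-processor, quad-core host with 1TB of RAM, assuming 4 VMs per
# core and 4GB per VM - roughly the "100+ VMs per box" case above.
per_host = vm_capacity(8, 4, 4, 1024, 4)
print(per_host)              # CPU-limited: 8 * 4 * 4 = 128 VMs

# An 8-node cluster of such hosts lands in the ~1000-VM territory
# the post describes (ignoring failover headroom for simplicity).
print(8 * per_host)
```

At 4GB per VM the RAM limit (256 VMs) is higher than the CPU limit (128 VMs), which is why the post's "250 VMs with 4GB each" figure only becomes reachable as per-core density improves.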

Comments (2)
  1. Mark Wilson says:


    I’m a great fan of Microsoft’s virtualisation technologies and am looking at just the sort of server consolidation exercise that you suggest right now.

    Unfortunately, Microsoft Virtual Server 2005 R2 – good as it is as a virtualisation platform – just doesn’t cover the management side well enough and is more akin to mid-range scenarios (competing with VMware Server) than to the high end.  That’s why it’s with a heavy heart that I’m helping my customer move from Virtual Server to VMware Infrastructure 3 (which not only gives me ESX Server, but also high availability, consolidated backup, the ability to fail VMs over between hosts, dynamic resource scheduling, alerting, etc., etc.).

    Like I said, Virtual Server 2005 R2 is good.  But before Microsoft can compete in the high end virtualisation space you’ll need Viridian to be released, including all the management features that are required to look after hundreds of VMs on a few boxes.

    You’re right about all that heat and power though!


  2. David Holmes says:

    There’s a fascinating analogy here between VMWare/Virtual Server and Citrix MetaFrame/Terminal Server. Below is a long rambling essay on the subject, but the short version is: it’s all good news; if you’re serious about enterprise server virtualisation, VMWare is the only serious solution, while if you want to run a couple of Microsoft-only VMs on a couple of servers then Microsoft Virtual Server will do just fine. If you’re prepared to give up ten minutes of your life for the long version, read on…

    I remember back to the heady days when Microsoft announced the release of Windows Terminal Server Edition; many of us wondered what exactly the future held for Citrix. If you look back at Citrix’s stock price you’ll see that the shareholders had the same concern. Over the years a number of vendors who provided value-add software on top of Microsoft products either got acquired by Microsoft or destroyed by Microsoft incorporating the functionality for free into their products (HIMEM.SYS anybody?).

    The original deal that Microsoft had with Citrix was unique: on the one hand, Microsoft took a huge risk by releasing their source code; on the other, they reserved the right to develop their own thin client technology, potentially wiping out Citrix at some unspecified point in the future. It was a brave decision on the part of Citrix to go down this route.

    Furthermore, when Citrix WinFrame was first released it had its own kernel, and you couldn’t apply standard Windows service packs. As technical architects, we were afraid, very afraid.

    But after some testing it was realised that for deploying thick client applications over a wide area network, WinFrame did the job and offered enormous cost savings. A few people used WinFrame for remote desktops, but this was really what could be described as a ‘specialist sport’.

    When Microsoft decided to get into thin client with Terminal Server Edition, it wasn’t clear how Citrix would carry on. But then we started testing TSE against Citrix and it all became obvious; the differences were as follows:

    – Citrix could provide applications in ‘seamless windows’ so to the user a thin client app looked exactly the same as a desktop app.

    – Citrix had load balancing, clustering and scalability, with the ability to finely distribute applications across a pool of servers.

    – Citrix had the much more efficient ICA protocol.

    – Citrix had enterprise deployment credibility.

    And so we find ourselves in the situation that for the last eight years, Microsoft has been playing catch-up to Citrix, which has worked brilliantly. Citrix have been driven to innovate and deliver new functionality to ensure that they are not consumed by the chasing shark of Microsoft. At the same time the underlying thin client technology has become pervasive, even to the point of being the underlying architecture of Windows XP Home Edition… Who’d have thought that we’d be deploying thin client servers in our homes?

    So how does this all fit in with the VMWare story? Well, I first installed VMWare in 1999; at that time I thought it was a neat solution for technogeeks who wanted to have a corporate desktop but with their own Active Directory, Linux server and other weirdness without offending the IT Security Police. In no way (at the time) did I think "here’s a tool that will be deployed in the Enterprise for production services".

    Time rolls on…

    Now here we are in 2006 with VMWare having established credibility in the enterprise. ESX with its associated components makes for a highly compelling solution. Coupled with, say, a pair of IBM x460s each with four dual-core Xeon processors, you end up with a two-node, 16-core cluster which, as James observes, will support an enormous amount of capacity. In addition, you get very high levels of availability, the capability to move machines across your cluster while they continue to run (you will never tire of seeing that), load balancing and some very cool deployment tools. Of course, using ESX allows you to be flexible with your O/S choices, so for those of us with a mixed Windows/Linux environment these can live side by side on the same server.

    And so the race begins: Microsoft will drive the commoditisation of virtualisation, VMWare will be driven to innovate and provide new functionality to keep ahead, and as consumers of the technology we’re all winners.

    Just one closing thought, somebody somewhere is going to have to reconcile thin client technology with server virtualisation, because right now it’s broken. There’s an opportunity there somewhere for someone to do something really clever.

Comments are closed.
