Insufficient data from Andrew Fryer

The place where I page to when my brain is full up of stuff about the Microsoft platform

Virtual machine density in your data centre

I can only run 11 server-based virtual machines on my laptop, but all bar three of them are running SQL Server:

  • 3 x VMs running the SQL Server 2012 beta and the new AlwaysOn availability features. Note one of these is running SQL Server 2012 on Windows Server Core
  • 1 x VM running the SQL Server 2012 database engine plus 3 x instances for Analysis Services, Master Data Services and Data Quality Services, not to mention SharePoint with Office Web Apps enabled
  • 1 x Windows 7 VM with Office 2010, Visual Studio 2010 Ultimate, all the System Center client tools and the Remote Server Administration Tools
  • 2 x System Center Service Manager 2012 beta VMs, one for the service and one for the data warehouse
  • 1 x VM for System Center Orchestrator 2012 RC
  • 1 x VM for System Center Virtual Machine Manager 2012 RC together with the new System Center App Controller
  • 1 x VM for System Center Operations Manager 2012 beta
  • 1 x VM for Red Hat Linux
  • 1 x VM as my domain controller and DHCP server

The limiting factor I face is RAM - the minimum memory requirements of many of the System Center tools limit what I can cram into 16GB, but dynamic memory is a great help here.  Anyway, it's a fair increase over the four-VMs-per-server density that was discussed when Hyper-V came out.  That ratio of virtual to physical can of course be pushed much harder on 'proper servers' designed for Hyper-V rather than my laptop mash-up.  A good example of this was the labs run at various big events like the Microsoft Management Summit in May, where they were able to run 225 VMs per host - although with 128GB of RAM those VMs would only be getting a basic 512MB per machine.
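As a rough sanity check on those density figures, here's a back-of-the-envelope sketch. The 2GB parent-partition reserve is my own illustrative assumption, not a measured value:

```python
# Back-of-the-envelope check on VM density figures.
# The host_reserve_gb figure is an assumption, not a measured value.

def avg_memory_per_vm(total_ram_gb, host_reserve_gb, vm_count):
    """Average RAM in MB left for each VM once the host's own reserve is set aside."""
    usable_gb = total_ram_gb - host_reserve_gb
    return usable_gb * 1024 / vm_count

# The lab scenario: a 128GB host running 225 VMs
# (assuming roughly 2GB kept back for the parent partition).
print(round(avg_memory_per_vm(128, 2, 225)))  # 573 - i.e. a basic ~512MB class allocation

# My laptop: 16GB shared across a dozen VMs.
print(round(avg_memory_per_vm(16, 2, 12)))  # 1195 - dynamic memory makes this workable
```

In practice dynamic memory shifts RAM between VMs on demand, so the averages above are a ceiling rather than what each guest actually sees at any moment.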

However there is another way, and that's what Microsoft does in its newer data centres, like the one I visited last week.  The whole data centre runs on a modified Hyper-V, but what's different is that there are thousands of low-cost basic servers rather than hundreds of huge monsters.  Blogging in more detail about how these work is more than my job's worth, so if you want to know more then the Global Infrastructure Services site is the place to go (there's a video tour of one of the data centres here).  However, what I can say is that all the lessons learnt from operating at this scale are then put into the next releases of Hyper-V and System Center, for example:

  • the bare-metal host provisioning in Virtual Machine Manager 2012
  • the separation of duties in Virtual Machine Manager 2012, where the team who look after the physical servers don't control the services that run on those servers; that's down to the application teams.
  • the integration of AVIcode into Operations Manager 2012 to understand what problems are affecting the applications themselves.

So if you want to get an idea of how to run a data centre at scale, then you'll want to spend your downtime over Christmas learning Virtual Machine Manager, either by watching the new content on the Microsoft Virtual Academy or by pulling down the release candidate (which you can install, or use as a preconfigured Hyper-V virtual machine).