Over the course of the last two years, the number of questions I receive around virtualization has been consistently on the rise. Many of the questions center on how to do things with virtualization. "How do I" questions are great, and I love answering them; however, there is another question that should really be answered first.
What is Virtualization?
This is the perfect place to begin a long discussion of Virtualization. And so we begin.
Wiktionary is not a lot of help.
Merriam-Webster doesn’t even have a listing for Virtualization. Bummer!
One of the key obstacles to a good solid understanding of virtualization is the lack of actual definitions around the terms. Unfortunately I do not get to write the definitions for Merriam-Webster or Wiktionary or anyone else for that matter; however, I have been close enough to this topic to at least give you some good (and very jaded) ideas about what virtualization is and what it isn't.
Virtualization: Using one or more hardware and/or software technologies that allow a single set of hardware to run multiple applications or operating systems independently and simultaneously.
That's the Chris Henley definition. It's pretty broad, and I am reserving the right to change it on a whim as I feel it needs it, but right now it's a good place for us to begin our discussion.
Virtualization is all about maximizing the potential of a hardware platform and balancing that potential with the application and operating system needs of your organization. During the 1980s, 1990s, and early 2000s, the IT world (thanks in large part to the efforts of Microsoft to write operating systems and applications that would use the maximum amount of hardware resources available in a single box) established a paradigm of one operating system per box. This held consistently for the better part of three decades because operating systems continued to evolve based on available hardware resources. Windows 95, Windows 98, Windows ME, Windows XP, Windows Vista, and even Windows 7 have all pushed the envelope of hardware capabilities. We take it for granted that a new operating system will require additional hardware capabilities. It is the way things have always worked.
In the world of server operating systems, the '80s, '90s, and early 2000s were pretty much the same as on the client side of the house. Windows NT ran on a single physical server and used as many resources as it could get. Windows 2000 upped the hardware ante. Windows Server 2003 and Windows Server 2008 pushed it further still. If you wanted to run additional server functionality in your network, like Exchange, SQL, SharePoint, or a host of other applications or services, Microsoft's recommendation was generally to buy another server; and WOW did we buy servers! Even small networks were often server heavy, with multiple servers for everything from directory services to email and everything in between. Microsoft created some awesome tools, and each one seemed to require the purchase of a new server. There was even a term for all of those servers: server sprawl. It's one of my favorites.
We don't often stop to think about what would happen if the hardware capabilities of a box went beyond the needs of the operating system and its applications. Thankfully, a long time ago (the 1960s, to be a little more exact) someone did think about this situation. You won't believe who it was.
And that is where we will pick up our discussion next time…
Homework: Read the Wikipedia article on virtualization. Don't worry about going too deep at this point. Just familiarize yourself with some of the terms and get a feel for what's out there.