[Guest Blogpost] When is virtualization a BAD idea?


Steven Syfuhs (Ottawa, Ontario, IT Pro)

Rod and I wrapped up the Future of Your Server Room tour last week in Toronto, where the second session on virtualization technologies was still quite a hit. Both Rod and I are big believers in virtualization and are quite passionate about all the offerings on the table - both ours and others'. I got this email from a new friend of mine who took in the first stop of the tour in Ottawa - he raises a few questions that I thought the community as a whole might like to bat around and bring to the surface.

Off the top of my head - virtualization does not make sense when you need access to hardware that is not virtualized by the host, whether due to performance requirements (teamed NICs or SSL accelerators) or legacy compatibility. What can you come up with?


From: Steven Syfuhs
Sent: May-25-07 12:30 AM
To: Rick Claus
Subject: FOSR and virtualization question

Great presentation in Ottawa - many take-aways that have me asking a lot of questions overall.  One question that I find to be an interesting topic for debate is "When is virtualization not a good idea?"  Everyone has their own answer to this one, and it's really turning into a highly philosophical debate.  Being in an entrepreneurial position (in many parts of my life) has given me a whole whack of perspectives on this topic.  From K-12 education infrastructures to SMBs, government offices of many sizes, to large corporate infrastructures, they all have common beliefs as to the benefits of virtualization – which happened to be covered in the FOSR tour.

However, most IT shops within these organizations couldn't find many places where introducing virtualization was bad.  That's not to say they supported it either.  Some took the dot-com-bubble approach and started integrating it as soon as possible without considering the long-term effects; there were those that decided maybe it was time to upgrade their VAX systems to something that was created AFTER I was born (i.e. Windows); and there were those that took a reasonable approach and upgraded periodically when hardware limitations hampered business productivity.  The early adopters did it simply to be on the bleeding edge, as they say; the VAX people (who shall, for humility's sake, remain nameless) did it because it was VAX, 'nuff said; and the others simply because it was good practice.

The commonality between them all was their reasons to switch over – or not to.  The pros are not of consequence in this case, but the cons are odd.  The main argument wasn't really a reason – they simply didn't want to do it.  There was no substantial excuse.  You'll see it with most technologies, and the excuses are generally cost of implementation and laziness.  Now, virtualization is not that expensive (understatement) and proves to be extremely useful.  That leaves laziness.  Yes, it takes some work to get it running, but the cost savings are enormous.  It's a weak argument.  Yet people are still hazy on implementing it.

So the question is asked again, for anyone: are there any real problems with virtualization such that it shouldn't be used?  Obviously there are certain situations where virtualization is inefficient, but as a whole, as a building block in the infrastructure, are there problems with it?  I'm not looking for an absolute answer, but a 360-degree view of this technology.  A lot of people are saying it's the next generation in computing.  Sure, okay.  I'm not doubting its potential; I'm just playing devil's advocate.  I personally like it, and am 100% willing to take a risk with it, given that I haven't seen any major downsides that impede business productivity.

It’s probably a little late to raise this question on tour given how many stops are left, but I’m curious as to what the community has to say about it.  It makes a great topic of debate.


Steven Syfuhs
Chief Network Architect
Doublenatural IT Solutions



Steve Syfuhs is an enterprising young IT Pro in the business of everything technology.  He currently owns his own consulting firm, Doublenatural ITS, is developing a new web home for the company, and is in the process of helping bring a new line of servers into Canada via Testworx and SuperMicro Computer Inc.  He doesn't sleep much.

Comments (2)

  1. ye110wbeard says:

    Ummm I think if you were to Virtualize your Virtual machine.

    On a "Geek" factor 10 out of 10.




  2. ye110wbeard says:

    Actually, here was an idea that should have worked great but bombed.  (Bad hardware base, no hardware acceleration.)

    We had all these perfectly useful Windows XP licenses lying about from the Action Pack.  No workstations to speak of, as we all work remotely.

    Boss wanted to try something.  In theory it went off beautifully.  In practice (since the source server was a P4 2.8) it really dragged.

    Microsoft Virtual Server 2005.  Did a base install to make a VHD.  Then used that VHD as the base to make 5 differencing VHDs.  Then backed up the VHDs and sources somewhere else.

    The end result was an environment that could be used and abused on the fly by techs.  If they "blew up the PC", it was just a matter of throwing the clean differencing disk back.

    I was soooo pleased with myself.  Until we found out how slow it was (because of the base hardware: 5 virtual Windows XP machines running at the same time on a P4 2.8).  Bummer.

    Still trying to convince the boss to let me use the Dual Xeon 2.8 to sneak in some time to run a Virtual Terminal Server and free up space in the office…. 😉
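[Editor's note] The differencing-disk scheme in the comment above — one read-only base image shared by several VMs, each writing only to its own overlay that can be thrown away to "restore clean" — can be sketched as a toy copy-on-write store in Python. This is purely illustrative (all class and variable names are hypothetical); real differencing VHDs track modified disk blocks against the parent VHD, not key-value pairs:

```python
# Toy model of differencing disks: a shared read-only base plus a
# per-VM overlay. Reads fall through to the base; writes land only
# in the overlay; a "reset" discards the overlay, which is the
# equivalent of swapping the clean differencing VHD back in.

class DifferencingDisk:
    def __init__(self, base):
        self.base = base           # shared parent image, never modified
        self.overlay = {}          # this VM's writes only

    def read(self, key):
        # The overlay wins; otherwise fall through to the base image.
        return self.overlay.get(key, self.base.get(key))

    def write(self, key, value):
        self.overlay[key] = value  # base stays pristine

    def reset(self):
        self.overlay.clear()       # "throw the clean differential back"


base_image = {"os": "Windows XP", "state": "clean"}
vms = [DifferencingDisk(base_image) for _ in range(5)]  # 5 VMs, 1 base

vms[0].write("state", "blown up")   # a tech breaks one VM
print(vms[0].read("state"))         # blown up
print(vms[1].read("state"))         # clean -- other VMs unaffected

vms[0].reset()                      # restore from the shared base
print(vms[0].read("state"))         # clean
```

The appeal (and the trap) the commenter hit is visible here: the base is stored once and shared, which saves disk, but every VM still competes for the same CPU and I/O underneath — hence five XP guests crawling on a single P4 2.8.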
