One fact strikes me pretty often: companies have legacy software running on legacy operating systems (e.g. NT4) running on legacy hardware. This is a severe problem, as you all know. Now, these companies look into virtualization to solve this problem. Of the three “legacy” layers above, only the hardware problem can be addressed by the use of virtualization – definitely not the OS and the application piece (obviously). Still, a lot of people think that if they embed the legacy machine in a state-of-the-art virtual environment, the machine itself might become more secure. This can be true – if you do not connect it to the network. Otherwise, the OS and the application are exactly as vulnerable as before.
This is all clear and by now known to a lot of people. Virtualization gives us a lot. I think it is a great technology to address quite a few challenges (especially the challenge of servers that are mostly idling in the computer room) – but it does not address the challenge above. On the contrary, it adds additional risks. A few points to keep in mind:
- All existing OS-level attacks work in the exact same way.
- The hypervisor attack surface is additive to a system’s risk profile.
- Separating functionality and/or content into virtual machines (VMs) will reduce risk.
- Aggregating functions and resources onto a physical platform will increase risk.
- A system containing a “trusted” VM on an “untrusted” host has a higher risk level than a system containing a “trusted” host with an “untrusted” VM.
So, manage the risks and have fun with virtualization!