Finding the Hidden Costs of VDI

Brian Madden has an excellent post up today called The hidden costs of VDI. I’ve been working nearly full time for the last two months helping to put together a Microsoft Services offering around desktop virtualization in general and VDI in particular, so I have spent a lot of time looking into both the technical and business considerations that must be taken into account. I’d summarize his post in three points:

  1. TCO models, like statistics, can be made to tell any story you or a vendor wants
  2. Cost models typically assume full replacement of legacy systems to show maximum benefit, but this rarely occurs due to technical, political, or other unforeseen reasons
  3. Since VDI is relatively new (compared to traditional desktops and Terminal Services/Citrix Server-based Computing), there are a lot of technical and compatibility issues and considerations that are not well understood outside a small group of experts

As a well-known fan of and expert on Server Based Computing (SBC), i.e., Terminal Services or Citrix Presentation Server/XenApp, Brian prefaced the article by saying that he likes VDI “where it makes sense.” He correctly points out that nearly all vendors and TCO models show that Server Based Computing still provides the lowest TCO due to its high user density, but that there are limitations which make other approaches such as VDI relevant.

That is where I’ll jump in with my thoughts, because I completely agree with those statements; they have been the foundation of the offering I have been working on. It starts with the notion of flexible desktop computing and desktop optimization that Microsoft has been talking about for some time now. An overview of this approach is presented in this whitepaper. To summarize, there are a variety of ways that a desktop computing environment can be delivered to users, ranging from traditional desktops, to server based computing, to VDI, with a multitude of variations in between enabled by adding virtualization at the layers illustrated below:

[Figure: desktop delivery approaches and the virtualization layers at which they differ]

Rather than selecting a one-size-fits-all solution, virtualization provides architects with a new, more flexible set of choices that can be combined to optimize the cost and user experience of the desktop infrastructure. The following four steps lead to an optimized solution:

Define User Types: Analyze your user base and define categories such as Mobile Workers, Information Workers, Task Workers, etc., along with the percentage distribution of users among them. The requirements of these user types will be used to select the appropriate mix of enabling technologies.
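
To make this concrete, here is a minimal sketch in Python of what such an inventory might look like. The category names, percentage distribution, and requirement flags are hypothetical placeholders, not recommendations; a real analysis would draw them from your own user base.

```python
# A minimal sketch of a user-type inventory. The categories, percentages,
# and requirement flags are illustrative placeholders, not recommendations.
user_types = {
    "Mobile Worker":      {"share": 0.20, "needs_offline": True,  "power_user": False},
    "Information Worker": {"share": 0.50, "needs_offline": False, "power_user": False},
    "Task Worker":        {"share": 0.25, "needs_offline": False, "power_user": False},
    "Power User":         {"share": 0.05, "needs_offline": False, "power_user": True},
}

# The distribution should cover the whole user base.
assert abs(sum(t["share"] for t in user_types.values()) - 1.0) < 1e-9
```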

Define Desktop Architecture Patterns: Each architecture pattern should consist of a device type (thin client, PC, etc.) and a choice of:

  • OS execution (Local, Desktop Virtualization, or Server Based Computing)
  • App execution (Local, Application Virtualization, or Application Remoting)
  • Display (Local or Presentation Virtualization)

For each pattern, determine which user types it can be applied to. For example, with mobile or potentially disconnected users, presentation virtualization alone would not be applicable because it requires a network connection. Power users may require a full workstation environment for resource-intensive applications but may be able to leverage application virtualization for others. These are just a few examples where different user groups have different requirements.
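
Continuing the sketch above, each pattern can be captured as a device type plus a choice at each of the three layers, along with the attributes used to test applicability. The pattern names and the two requirement flags (offline capability, full workstation) are simplifying assumptions; a real catalog would carry more.

```python
# Continuing the sketch: each architecture pattern pairs a device type with
# choices at the OS, application, and display layers. The two attributes used
# to test applicability are simplifying assumptions.
patterns = {
    "Traditional PC": {
        "device": "PC", "os": "Local", "apps": "Local", "display": "Local",
        "works_offline": True, "full_workstation": True,
    },
    "PC with App Virtualization": {
        "device": "PC", "os": "Local", "apps": "Application Virtualization",
        "display": "Local", "works_offline": True, "full_workstation": True,
    },
    "VDI": {
        "device": "Thin client", "os": "Desktop Virtualization", "apps": "Local",
        "display": "Presentation Virtualization",
        "works_offline": False, "full_workstation": False,
    },
    "Server Based Computing": {
        "device": "Thin client", "os": "Server Based Computing",
        "apps": "Application Remoting", "display": "Presentation Virtualization",
        "works_offline": False, "full_workstation": False,
    },
}

def applicable(pattern, user):
    """A pattern fits a user type only if it meets that type's hard requirements."""
    if user["needs_offline"] and not pattern["works_offline"]:
        return False  # disconnected users cannot rely on a network-bound desktop
    if user["power_user"] and not pattern["full_workstation"]:
        return False  # resource-intensive workloads need a full workstation
    return True
```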

Determine TCO for each Architecture Pattern: Use a recognized TCO model to determine the TCO for each pattern. Minor adjustments to these models can be made to account for specific technology differences, but most include TCO values for PCs, PCs with virtualized apps, VDI, and TS/Citrix thin client scenarios. Be wary of vendor-provided TCO models. To Brian’s points, be sure to gain a full understanding of the chosen TCO model and what it does and does not include. Consistent application of the model across the different architecture patterns is critical for relevant comparisons.
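
To keep the sketch going, the per-pattern TCO can be captured as a simple lookup. The figures below are placeholder numbers, not taken from any published model; they exist only so the optimization step has something to work with, and they follow the post’s observation that Server Based Computing typically comes in lowest.

```python
# Placeholder per-seat annual TCO figures. These numbers are NOT from any
# published model; substitute values from whichever recognized TCO model you
# adopt, applied consistently across every pattern.
tco_per_seat = {
    "Traditional PC": 1200,
    "PC with App Virtualization": 1050,
    "VDI": 950,
    "Server Based Computing": 800,
}
```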

Model Desktop Optimization Scenarios: With the above data, appropriate architecture patterns can be selected for each user type by choosing the lowest-TCO architecture pattern that still meets user requirements. By varying the user distribution and selected architecture patterns, an optimized mix can be determined. It is tempting to simply choose the lowest-TCO architecture pattern for all users, but this is risky: it typically impacts your high-value power users the most if their requirements are not accounted for.
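
Putting the pieces of the sketch together, the selection logic is a small loop: for each user type, keep only the applicable patterns and take the cheapest, then weight by the user distribution to get a blended per-seat TCO.

```python
# Continuing the sketch: for each user type, choose the lowest-TCO pattern
# that still meets its requirements, then compute the blended per-seat TCO
# weighted by the user distribution.
def optimize(user_types, patterns, tco_per_seat):
    plan = {}
    for name, user in user_types.items():
        candidates = [p for p in patterns if applicable(patterns[p], user)]
        plan[name] = min(candidates, key=lambda p: tco_per_seat[p])
    blended = sum(user_types[u]["share"] * tco_per_seat[p] for u, p in plan.items())
    return plan, blended

plan, blended = optimize(user_types, patterns, tco_per_seat)
for user, pattern in plan.items():
    print(f"{user}: {pattern}")
print(f"Blended per-seat TCO: ${blended:,.2f}")
```

With the placeholder figures above, mobile and power users land on a PC-based pattern while information and task workers land on Server Based Computing, which is exactly the kind of mixed outcome a one-size-fits-all approach misses.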

A one-size-fits-all approach would result in either a large number of PCs if not using virtualization, a large number of servers if virtualizing everything, or failure to meet power user needs if using only server based computing. An optimized solution is one that uses the right mix of technologies to provide the required functionality for each user type at the lowest average TCO. When combined with a unified management system that handles physical and virtual resources across devices, operating systems, and applications, this approach can realize substantial cost savings.

As I mentioned at the top, a lot of these concepts, along with very detailed architecture and implementation guidance, are part of the Microsoft Services Core IO offerings. For the last two years, in addition to my customer work, I have been deeply involved in the creation of the Server Virtualization with Advanced Management (SVAM) offering. The work I mentioned above around VDI architecture will complement that and be available later this summer. Finally, specific to desktop imaging, deployment, and optimization, there is also the Desktop Optimization using Windows Vista and 2007 Microsoft Office System (DOVO) offering. Taken together with the underlying product suites, these offerings illustrate Microsoft’s “desktop to datacenter” solutions and how to plan, design, and implement them.
