Microsoft-Novell Interoperability Lab – Sneak Peek

by Sam Ramji on February 14, 2007 11:06am

Based on the email I received, I would say that many Port 25 readers noticed my post last week on job openings in my new lab. Thank you for your positive responses (and especially the resume submissions)! Brad Cutler, my counterpart at Novell, has been overwhelmed with responses as well, so thank you on his behalf. 

I’ve called this a sneak peek because there is much work ahead of us, but it’s time to talk in a little more detail about what the lab will be doing. My colleagues at Novell and within Microsoft and I have been putting in long hours for the last several weeks – nights and weekends included – detailing the plans for our work together. As you may have seen covered in the news this week (“Microsoft and Novell Announce Collaboration for Customers”), we’ve got a solid long-term plan that covers our cooperation in the following areas:

  • Virtualization: We will rigorously test the functionality and reliability of SUSE Linux Enterprise Server on the next generation of Windows Server virtualization as well as Longhorn Server on Xen. This focuses on paravirtualization and enlightenments. Here’s a great discussion of Xen paravirtualization and a set of presentations on Windows Server enlightenments. Part of the challenge in delivering enterprise-grade heterogeneous virtualization is in ensuring correct behavior and performance across a broad range of hardware – AMD and Intel, single/dual/quad socket, and single through multi-core CPUs.
  • Directory and Identity: Directory interoperability is the basis of identity interoperability - directories contain the structure and content that provides the raw material for identity. Through our ongoing testing in the lab, Microsoft and Novell will improve directory and identity interop between Active Directory and eDirectory, using open specifications such as WS-Federation and WS-Security.
  • Management: We’ll test WS-Management for interop between Microsoft System Center and Novell’s WS-Management implementation, which Novell is developing in the open source community under the openwsman project.
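
To make the WS-Management testing concrete, the simplest interop check between two implementations is the unauthenticated Identify exchange, which reports the endpoint’s protocol version and vendor. Here’s a minimal sketch in Python; the endpoint URL is a placeholder assumption, not a lab address:

```python
# Sketch of a WS-Management "Identify" request -- the basic liveness and
# version-discovery exchange between a console and a WS-Man endpoint
# (e.g. an openwsman or WinRM listener). Endpoint URL is illustrative.
import urllib.request

SOAP_NS = "http://www.w3.org/2003/05/soap-envelope"
WSMID_NS = "http://schemas.dmtf.org/wbem/wsman/identity/1/wsmanidentity.xsd"

IDENTIFY_REQUEST = f"""<?xml version="1.0" encoding="UTF-8"?>
<s:Envelope xmlns:s="{SOAP_NS}" xmlns:wsmid="{WSMID_NS}">
  <s:Header/>
  <s:Body>
    <wsmid:Identify/>
  </s:Body>
</s:Envelope>"""

def identify(endpoint: str) -> bytes:
    """POST the Identify envelope to a WS-Man endpoint; return the raw reply."""
    req = urllib.request.Request(
        endpoint,
        data=IDENTIFY_REQUEST.encode("utf-8"),
        headers={"Content-Type": "application/soap+xml;charset=UTF-8"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Usage against a hypothetical SLES host running openwsman:
#   reply = identify("http://sles-host:5985/wsman")
```

If both sides answer Identify correctly, you then move up to the enumerate/get/put operations where the real interop differences tend to surface.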

Why are these the most important areas for us to work on?

As part of the Interoperability Customer Executive Council, I heard from the heads of IT from Goldman Sachs, UNICEF, American Express, NATO, and 25 more global organizations that server consolidation is essential in allowing them to reduce costs. In order to fully achieve server consolidation, they need to be able to move their existing workloads – both Windows and Linux – to a common set of server hardware. Without interoperable hypervisors, IT shops would be forced to support two separate sets of hardware, software, and personnel in order to consolidate their servers: one set for Windows and another for Linux. We don’t think that’s good enough.

Hypervisor interoperability is critical, but for this scenario it isn’t enough to deliver the full benefits of virtualization for an enterprise. Once the workloads are running on the same server and the same hypervisor, access control and authorization need to work consistently across the entire environment – otherwise you’re just shifting the interop problem up the stack, only to suffer later. This is where WS-Federation is essential – implementing an open specification to federate identity between existing directory servers enables you to have consistent security policies across your heterogeneous workloads. This is a continuation of work we’ve done with IBM, Apache, Ping Identity and SXIP Identity.
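For a flavor of what federation looks like on the wire, here’s a sketch of the WS-Federation passive requestor sign-in step: the relying party redirects the browser to the identity provider’s security token service with a small set of query parameters. Host names and realm values below are illustrative assumptions, not lab configuration:

```python
# Sketch of the WS-Federation passive requestor profile sign-in redirect.
# The relying party sends the browser to the STS with "wa" (action),
# "wtrealm" (relying party's realm), and "wreply" (return address).
# All endpoints/realms here are hypothetical.
from urllib.parse import urlencode

def wsfed_signin_url(sts_endpoint: str, realm: str, reply_to: str) -> str:
    """Build the passive sign-in URL that starts a federation exchange."""
    params = {
        "wa": "wsignin1.0",   # the sign-in action defined by WS-Federation
        "wtrealm": realm,     # identifies the relying party to the STS
        "wreply": reply_to,   # where the STS posts the issued token
    }
    return sts_endpoint + "?" + urlencode(params)

# Hypothetical relying party redirecting to a hypothetical STS:
url = wsfed_signin_url(
    "https://sts.example.com/federation",
    "urn:app:example",
    "https://app.example.com/signin",
)
```

The STS authenticates the user against its own directory (Active Directory on one side, eDirectory on the other) and posts back a signed token, so neither application needs direct access to the other directory.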

Operations relies on strong management tools to provide availability and reliability across a broad server environment. Ops teams typically have training on specific toolsets to monitor, administer, and manage their infrastructure. Realistically, moving Windows and Linux workloads onto the same set of servers requires that existing management tools be extended to the new environment. We believe (as do HP, IBM, BMC, CA, and many others) that WS-Management is the solution. Implementing this open specification will enable servers, applications, and services to communicate with management consoles from multiple vendors.

I’ve had a few people approach me about this project who pontificated “If you [Microsoft] would just implement the specifications as they’re written, you wouldn’t have to do all this work!” In fact, this is an incorrect understanding of software engineering and interoperability. Making protocols truly interoperate in every realistic circumstance is one of the great challenges in engineering. In real life, you have to implement the specification correctly – and then the work begins. Were there platform-specific assumptions in the code (as basic as big-endian vs. little-endian format)? Were there parts of the spec that were subject to interpretation? Due to the extensive development and testing embedded in technologies like TCP/IP and HTTP, it’s easy to forget that it took years of work by many parties to deliver what we now take for granted.
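The endianness question above is about as concrete as interop bugs get: two implementations can each be “correct” in isolation and still disagree on the wire. A quick sketch of the failure mode:

```python
# The same 32-bit integer laid out under the two byte-order conventions.
# A spec that says only "a 4-byte integer", without fixing byte order,
# permits two correct-looking but mutually incompatible implementations.
import struct

value = 0x01020304
big    = struct.pack(">I", value)  # network / big-endian byte order
little = struct.pack("<I", value)  # x86 / little-endian byte order

assert big != little               # same number, different bytes on the wire

# Reading little-endian bytes under a big-endian assumption doesn't fail
# loudly -- it silently produces the wrong number:
misread = struct.unpack(">I", little)[0]
assert misread == 0x04030201
```

Multiply that by every field in every message of a protocol, plus every clause in the spec that was “subject to interpretation,” and the scale of the testing work becomes clear.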

This work across virtualization, identity, and management is a pretty awesome undertaking, and I expect that as we progress we’ll discover new things we need to do in order to deliver interoperable computing. I’m looking forward to reporting on it here, and have submitted a presentation abstract for OSCON ’07 to walk through the Joint Interoperability Lab’s operations in detail. Hopefully I can shed a little light on what makes interoperability so challenging, even in an age of open specifications.

Cheers,

Sam