by MichaelF on October 06, 2006 12:57pm
In the IT industry it is axiomatic that whatever is new will be old, and will then be new again! Consider the “Service Bureau” approach of the mainframe days, in which an organization’s computing needs were handled by a Service Bureau that maintained the infrastructure, served up the applications and provided support for the users. The Service Bureau typically served many organizations’ needs, had to keep its customers’ data separate, and provided an SLA to each of its customers. Sounds suspiciously like SaaS (Software as a Service), doesn’t it? Service bureaus were never as sexy as SaaS, but they never completely went away either.

With the advent of PCs, computing became available at the individual level – which meant that small teams could manage their own IT infrastructure, applications and user support. This freed them from constraints (and, some cynics say, from discipline!) in adopting and adapting new software technology. Allied with the raw compute power of networked, distributed PCs, the software on PCs grew capable of handling mission-critical applications. The small teams began to feel the pain of managing and maintaining such infrastructure, since the PCs were serving not just these small teams but entire enterprises. With the commoditization of hardware and the adoption of common software standards, the Service Bureau idea re-emerged as SaaS, and it was immediately attractive to customers. Of course I am oversimplifying, but I think I can use my blog writer’s license here. The new avatar (SaaS) is better for users because it doesn’t force on its customers the compromises the old avatar (Service Bureau) did. The idea is the same, but the implementation has improved immeasurably!
I think I have identified a similar new, old, new cycle in OS technology (ladies and gentlemen, please save the standing ovation for later!). Of course, I have to insert the “don’t-try-this-at-home-kids” warning. I am not an operating systems expert – I just play one on blogs!
Heard about microkernels? They were all the rage back in the late 80s/early 90s (that’s in the nineteen hundreds). This is how Wikipedia defines them: “A microkernel is a minimal computer operating system kernel providing only basic operating system services (system calls), while other services (commonly provided by kernels) are provided by user-space programs called servers.”
Microkernels were a reaction to the bloat introduced into operating systems, which started out lean but then absorbed all kinds of services. This meant that operating systems were not as portable as they used to be, because every one of those services had to be ported over – whether to a new processor or to a new board. So there were attempts to build minimal operating systems, which had the side effect of making them portable – because to port the operating system, all you had to do was port the teensy-weensy kernel. I know this because one summer, as a mere stripling, I was working at the Indian Institute of Technology, Bombay, which had just received a bunch of tapes of one of the first microkernel-based operating systems, called Mach. Some highly talented academic and industry folks there were building a Unix workstation (from the hardware up) and wanted to do the least amount of work possible to get there! To someone using the workstation it was hard to distinguish that system from a vanilla (think Sun/DEC) Unix workstation. But the speed with which the port could be done was astounding – or so the people working there assured me.
Mach, which came out of Carnegie Mellon University, was much more than the portable-operating-system thing I made it out to be – it was a radical new way to look at OS’s. The idea was to abstract away the non-essentials of an OS and leave a very small microkernel to be dealt with. This had far-reaching impact on how portable both the kernel and the services written on top of it could be from hardware platform to hardware platform. An interesting fact: Rick Rashid, the professor who led the Mach team, came to Microsoft and is now the SVP responsible for MS Research. Hmm, worked on Mach, was at Carnegie Mellon and then came to Microsoft to become SVP! Looks like my career is on track here!
Then two things happened:
- suddenly there was only one hardware platform that mattered (the one that Linux and Windows run on), and
- Linux and Windows were certainly not microkernel based – think MACROkernel. They were both monolithic operating systems, at least as far as OS researchers are concerned. Ok, some people will argue that they are really “hybrid” kernels, with a kernel/user-mode division, but microkernels they certainly were not!
So the old (microkernel) went out, and the new came in (macrokernel – it’s not a term, just something I invented!).
Ah ha! You say. When is the old going to come back?
I think it has already sneaked in by another name – virtual machines. I don’t mean Virtual Machines as they have been defined by Xen, Microsoft and VMware. I mean virtual machines in the computer-science sense of the term: machines that implement “virtualization”. According to Wikipedia (what would I do without it!), “In computing, virtualization is the process of presenting a logical grouping or subset of computing resources so that they can be accessed in ways that give benefits over the original configuration”. Hardware abstraction layers (think Xen) and virtualization engines (think network-seen-as-one-computer) are just different aspects of virtualization.
This probably needs some more explanation, and a concrete example would be even better. One dropped into my lap as I was writing this blog: my attention was guided towards a company called 3Tera, which claims to be building a “Grid operating system”. They claim they can take an existing web application and, without changes, drop it onto a grid, so that the web application can be scaled by provisioning resources on demand as consumer demand dictates. They are platform independent, i.e. it doesn’t matter to them whether the application runs on Windows or Linux.
They are able to do this because they have redefined the conventional OS using the concept of virtualization. That gave me the idea of how I could diagrammatically show you what this means.
The new microkernel does very few things: it just manages the allocation of today’s OS’s (which are just applications to the new microkernel) onto a grid of commodity hardware. This grid could be a single computer or, more likely, a widely distributed network of computers. All this is possible because the two dominant OS’s, Windows and Linux, rely on the same commodity hardware. This allows for the capability of a mainframe (in terms of manageability, security & protection, and partitioning) while retaining the advantages of using cheaper distributed computers.
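To make that role concrete, here is a minimal toy sketch of such a “grid microkernel” whose only job is to place guest OS instances onto commodity nodes with spare capacity. Every class and node name here is invented for illustration – this is not 3Tera’s actual design, just the shape of the idea:

```python
# Toy sketch: a "grid microkernel" that treats whole OS instances as
# applications and places them onto a grid of commodity nodes.
# All names (Node, GridKernel, "n1", "n2") are invented for illustration.

from dataclasses import dataclass


@dataclass
class Node:
    """One commodity machine in the grid."""
    name: str
    cpus: int
    used: int = 0

    def free(self) -> int:
        return self.cpus - self.used


@dataclass
class GridKernel:
    """The new 'microkernel': its only job is placement, not OS services."""
    nodes: list

    def place(self, guest_os: str, cpus_needed: int):
        # Greedy placement: pick the node with the most spare capacity.
        best = max(self.nodes, key=lambda n: n.free())
        if best.free() < cpus_needed:
            raise RuntimeError("grid exhausted: no node has enough free CPUs")
        best.used += cpus_needed
        return (guest_os, best.name)


grid = GridKernel([Node("n1", 8), Node("n2", 4)])
print(grid.place("Linux", 4))    # guest OS lands on the node with most headroom
print(grid.place("Windows", 4))  # a second guest OS, placed the same way
```

The point is how little the “kernel” does: Windows and Linux keep doing everything they do today, and the grid layer only decides where they run.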
The dream of “a view of computing resources that is not restricted by the implementation, geographic location or the physical configuration of underlying resources” will be realized. And it will be realized while utilizing in full the investment made in today’s OS’s – which do their thang much as they do now!
So what? You ask. Patience, young Jedi!
(Peering into my trusty crystal ball)
Operating systems will be sets of services. OS’s will be chosen based on some favorite service, without giving thought to “platform lock-in”.
Operating system services will be componentized, so that a Linux daemon will (gasp!) be able to use a Windows security component via standardized protocols – without even knowing that it is.
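A tiny sketch of what that decoupling looks like – every function name here is invented, and the “components” are just stand-ins – is a daemon that speaks only a wire protocol, never a platform:

```python
# Hypothetical sketch of protocol-level componentization: the daemon depends
# only on a standardized request/response protocol (modeled here as plain
# dicts), not on which OS's component answers it. All names are invented.

def windows_auth_backend(request: dict) -> dict:
    # Stand-in for a Windows security component answering the protocol.
    return {"ok": request["user"] == "admin", "served_by": "windows"}


def linux_auth_backend(request: dict) -> dict:
    # Stand-in for a native Linux component speaking the same protocol.
    return {"ok": request["user"] == "admin", "served_by": "linux"}


def daemon_check_login(user: str, auth_service) -> bool:
    # The "Linux daemon": it formats a protocol request and reads the reply.
    # It has no idea what OS is on the other end.
    response = auth_service({"op": "authenticate", "user": user})
    return response["ok"]


# Either backend can be plugged in without the daemon changing at all:
print(daemon_check_login("admin", windows_auth_backend))
print(daemon_check_login("guest", linux_auth_backend))
```

Swap in a real standardized protocol (in 2006 terms, think SOAP/web services) for the dicts and you get the cross-OS componentization I’m predicting.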
Clustering and failover will not be high-end luxuries but will be baked into all OS’s – and schedulers will schedule multiple OS’s across Internet-scale networks.
……….VB programmers will be able to make the equivalent of the salaries they were making during the dot com boom. Ok – so that will never happen!
I think these predictions are probably going to raise some level of discussion. (Understatement, understatement!).