Virtualization - never knowing where your next core comes from?

Returning from TechReady4, I've noticed some trends in how we are going to think about datacenters over the next few years.

Moore's law just keeps on going (to everyone's surprise, including Gordon Moore*). When I was a student in the mid-1980s I had to learn Occam, the programming language for the Transputer, because my professors thought that parallelism was the way things were going to go. RISC processors were also the future. In the 90s I remember my managing director saying "RISC doesn't buy you anything you can't get more easily with complex instruction sets if you wait a few months". We've had multi-processor support since Windows NT hit the streets in 1993, and 14 years on multiple chips are still rare on the desktop. Even in the datacenter, machines with core counts in double figures are rare - High Performance Computing uses clusters of many "commodity" machines.

But the world is changing. One of the things I took away from TechReady4 last week was that Moore's law won't bring faster and faster clock speeds for much longer - problems like dissipating heat see to that. So if we can get more on a chip... what else can we do with it? As anyone buying computers recently will have noticed, parallelism in the form of more cores has hit the mainstream. It's not just in servers or even in games consoles - my new Dell laptop isn't anything exotic, but it has two 64-bit cores. (64-bit parallel processing? In a laptop? Which runs cooler and gets more battery life from a smaller battery than its predecessor? Woo!)

The BBC had the story that Intel have shown an experimental 80-core, teraflop processor. I'll say it again: 80 cores. 1 teraflop (supercomputing territory). One of Intel's documents points out: "A hummingbird beats its wings about 75 times per second in normal flight. It would take a hummingbird about 423 years to beat its wings a trillion times. You could call that a 'teraflap'!" Don't give up the day job, guys - and you can do your own search for the engineering conversions document which has "1 trillion Pins = 1 Terrapin".
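
Intel's figure checks out, as a quick back-of-the-envelope calculation shows (taking their 75 beats per second at face value):

```python
# Sanity-check Intel's "teraflap": years for a hummingbird to beat
# its wings a trillion times at ~75 beats per second.
beats_per_second = 75
seconds_per_year = 365.25 * 24 * 60 * 60              # ~31.6 million seconds
beats_per_year = beats_per_second * seconds_per_year  # ~2.37 billion beats

print(f"{1e12 / beats_per_year:.0f} years")           # -> 423 years
```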

Both the BBC and Intel point out that the first teraflop computer used 500,000 Watts; the experimental chip uses only 62 Watts - no, that's not a typo: sixty-two Watts. Intel published a paper in June 2006 which talked about the power savings offered by the dual-core Xeon 5100 (Woodcrest) chip - the variant for mainstream servers uses 65 Watts. Multi-core is key to greener datacenters: at present they convert a huge amount of electrical power into heat, and ironically that heat isn't put to use in buildings - more power is used to get rid of it.
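
In performance-per-watt terms the gap is startling; here's the arithmetic (my own, using just the figures quoted above):

```python
# Performance per Watt, comparing the first teraflop machine with
# Intel's experimental 80-core chip. Both deliver ~1 teraflop.
flops = 1e12

watts_then = 500_000   # the first teraflop computer
watts_now = 62         # the experimental 80-core chip

print(f"Then: {flops / watts_then / 1e6:,.0f} MFLOPS per Watt")   # 2
print(f"Now:  {flops / watts_now / 1e6:,.0f} MFLOPS per Watt")    # ~16,129
print(f"Improvement: ~{watts_then / watts_now:,.0f}x")            # ~8,065x
```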
BUT... do we see a need for 80-core Domain Controllers? 80-core DNS servers? 80-core Exchange servers, web servers, file servers... in fact, do we see a need for more than a handful of cores anywhere but scientific computing?

Here's another takeaway from TechReady4: there has been a change of language - machines in the datacenter run "workloads", not "services". The idea is simple even if the implementation is still in development: we're moving to a model where processes/services aren't tightly coupled to processors/servers, but are floating tasks which can be assigned to machines dynamically. A session I went to on Compute Cluster Server mentioned using it as a pool of servers with a job control language, to do things like reporting on different processors as required. CCS workloads are tasks which run to completion on a processor - but they don't have to be scientific/high-performance computing. Other workloads - traditional services - don't complete in the same way, and the way to partition a server with many cores between those workloads is with virtualization.

This is why Windows Server Virtualization is a core part of Longhorn Server, and why it provides critical things - like support for multiple cores per processor and 64-bit guest operating systems - which are missing from today's Virtual Server (which isn't part of the OS, although it is free). Virtual Server 2005 isn't a dead end, though: virtual machines will transfer to Longhorn, and System Center Virtual Machine Manager will be used to manage Longhorn VMs as well as VS2005 ones.
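
To make "floating workloads" concrete, here's a toy sketch - entirely my own illustration, not how CCS or Windows Server Virtualization actually schedule anything; the host names, core counts and greedy placement rule are all invented:

```python
# Toy illustration of "workloads, not services": tasks carry their
# requirements with them and are placed on whichever host currently
# has free cores, instead of being tied to one server for life.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cores: int   # total cores in the box
    free: int    # cores not yet allocated

@dataclass
class Workload:
    name: str
    cores_needed: int

def schedule(workloads, hosts):
    """Greedily place each workload on the host with the most free cores."""
    placement = {}
    for w in workloads:
        best = max(hosts, key=lambda h: h.free)
        if best.free >= w.cores_needed:
            best.free -= w.cores_needed
            placement[w.name] = best.name
        else:
            placement[w.name] = "queued"   # wait until capacity frees up
    return placement

hosts = [Host("node1", 8, 8), Host("node2", 8, 8)]
work = [Workload("DNS", 1), Workload("Exchange", 4),
        Workload("reporting-job", 8), Workload("web", 2)]

for task, host in schedule(work, hosts).items():
    print(f"{task:13} -> {host}")
```

The point isn't the algorithm - it's that the binding between a workload and a physical machine happens late, and can happen again.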

Technorati tags: Microsoft, Virtualization, Longhorn, Intel, Multi-core

* According to Wikipedia:

Gordon Moore's observation was not named a "law" by Moore himself, but by the Caltech professor, VLSI pioneer, and entrepreneur Carver Mead. Moore's original statement can be found in his publication "Cramming more components onto integrated circuits", Electronics Magazine, 19 April 1965:

“The complexity for minimum component costs has increased at a rate of roughly a factor of two per year ... Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years.”