Windows Azure Benchmarks Show Top Performance for Big Compute

by STB Blogger

Today, Microsoft announced a new set of Windows Azure capabilities for Big Compute, a term that applies to applications requiring large amounts of compute power. Examples of Big Compute applications include modeling complex engineering problems, understanding financial risk, researching disease, simulating weather, transcoding media, and analyzing large data sets.

Customers who rely on Big Compute applications are increasingly turning to the cloud as an economical and scalable platform for their ever-growing compute needs. Windows Azure now offers customers a cloud platform that can handle Big Compute workloads cost-effectively and reliably.

These new capabilities include:

  • Hardware for Big Compute: New Windows Azure infrastructure built for Big Compute workloads. The new Big Compute configurations in Windows Azure are in private preview now with partners and will be publicly available in 2013.
  • Microsoft HPC Pack 2012: The Microsoft HPC Pack 2012 makes it easy to run Big Compute workloads entirely on-premises, on Windows Azure, or a hybrid combination of both. The Microsoft HPC Pack 2012 will be available in December 2012.

To demonstrate the performance capabilities of the Big Compute hardware, we ran the LINPACK benchmark. The results were so impressive (151.3 TFlops on 8,065 cores with 90.2 percent efficiency) that we submitted them for certification on the Top 500 list of the world's largest supercomputers.
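For readers who want to check the arithmetic, LINPACK efficiency is the ratio of achieved performance (Rmax) to theoretical peak (Rpeak). The short sketch below derives the implied peak figures purely from the numbers quoted above; the derived values are estimates for illustration, not additional benchmark results.

```python
# Back-of-the-envelope check using only the figures quoted above.
rmax_tflops = 151.3   # achieved LINPACK performance (TFlops)
efficiency = 0.902    # quoted efficiency = Rmax / Rpeak
cores = 8065          # cores used in the run

rpeak_tflops = rmax_tflops / efficiency        # implied theoretical peak
per_core_gflops = rpeak_tflops * 1000 / cores  # implied peak per core

print(f"Implied Rpeak: {rpeak_tflops:.1f} TFlops")            # ~167.7 TFlops
print(f"Implied peak per core: {per_core_gflops:.1f} GFlops")  # ~20.8 GFlops
```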

With a massively powerful and scalable infrastructure, the new instance configurations, and the Microsoft HPC Pack 2012, Windows Azure is designed to be the best platform for your Big Compute applications.

For more information about Windows Azure and Big Compute, please visit the Windows Azure blog.