We are excited to announce the availability of new compute-intensive VM sizes for Cloud Services. The new A8 and A9 instances provide Intel processors with 8 and 16 cores respectively, 7 GB of RAM per core, and a 32 Gbps InfiniBand backend network for low-latency, high-throughput communication. A9 is our largest instance size, with 16 cores and 112 GB of RAM.
These instances are designed for compute-intensive workloads, particularly High Performance Computing (HPC) applications such as computational fluid dynamics, finite element analysis, and weather forecasting. Manufacturing, energy exploration, life sciences, and other industries all require HPC for innovation and will benefit from this new offering.
At GreenButton we work with a variety of organizations to enable their HPC workloads to run in the cloud. Many of these workloads weren’t suited to the current state of cloud hardware, specifically those with low-latency networking requirements. That is until now. We have a number of applications, including for Bioinformatics and Oil & Gas that would not scale in other cloud environments, but perform superbly in Microsoft’s new HPC capability in Windows Azure. We’ve been waiting a long time for this day and are super excited at the possibilities this opens for our customers and the wider HPC industry.
–Dave Fellows, CTO, GreenButton
What’s unique about A8 and A9 instances is the backend network that supports remote direct memory access (RDMA) communication between compute nodes. We have virtualized RDMA through Hyper-V with near-bare-metal performance: less than 3 microseconds of latency and greater than 3.5 gigabytes per second of bandwidth. RDMA capabilities are accessed through the Network Direct interface. Network Direct is supported in Microsoft MPI (MS-MPI), and we are working with partners to enable Network Direct in other MPI stacks, as well as bringing RDMA capabilities to Linux virtual machines.
MPI, or Message Passing Interface, is a standard programming model used for developing parallel applications. MPI is language-independent, and our customers run applications written in Fortran, C, and .NET. MPI is used in engineering applications to model stress in building or part designs, simulate impacts and falls, and run other simulations that help build and manufacture better products. MPI is also at the heart of sophisticated weather modeling. Customers have asked us to provide this capability in Windows Azure because their on-premises HPC clusters are at capacity, or because they need to scale models that are too large or complex to run on a workstation but don’t have access to a cluster.
Customers have asked us for high-performance VM instances for other applications as well, including financial risk analysis and media transcoding. A8 and A9 instances provide an option for customers who need more horsepower than the A5-A7 high-memory instances.
The availability of robust elastic HPC Azure capacity will be a great help to our clients especially when peak loads greatly exceed normal loads. Microsoft is offering attractive terms that may prove to be a game changer in the months and years to come.
–Phil Gold, VP R&D, GGY AXIS
We are introducing A8 and A9 instances for Cloud Services (PaaS) initially. Capacity is limited as we build out more regions. We will also be adding support for Virtual Machines (IaaS).
Important note: At this time, A8 or A9 instances cannot be deployed by using a cloud service that is part of an existing affinity group. Likewise, an affinity group with a cloud service containing A8 or A9 instances cannot be used for deployments of other instance sizes.
We are also announcing the availability of HPC Pack 2012 R2, our cluster management and job scheduling solution for Windows Server clusters and for “bursting” compute jobs to Windows Azure. HPC Pack makes it easy to take advantage of the cloud to add resources to an existing on-premises cluster, deploy a cluster entirely in the cloud, or start a proof of concept in Windows Azure to test performance and scalability.
HPC Pack 2012 R2 is optimized for Windows Server 2012 R2 head nodes. In this release, we also support head nodes on Windows Server 2012. Compute nodes are supported down to Windows Server 2008 R2.
We have added a number of new features to improve bursting HPC jobs to Windows Azure. Naturally, A8 and A9 instances are supported, and this is the easiest way to test applications that use Microsoft MPI and Network Direct for low-latency RDMA networking. Additionally, individual Windows Azure compute nodes in your deployments can now be stopped from HPC Pack. For example, if there is a long tail of tasks in your job, you can now turn off idle Windows Azure compute nodes to help reduce costs.
Microsoft MPI 2012 R2 has a new capability to dynamically create MPI communicators between independent processes through the standard MPI_Comm_connect and MPI_Comm_accept APIs. In addition, the MS-MPI 2012 R2 release introduces an interface enabling 3rd party job schedulers to start applications that use Microsoft MPI.
HPC Pack is a free add-on for Windows Server and is available for download here. An updated redistributable package for Microsoft MPI is also available for download here. Learn more about Windows HPC on TechNet and about Microsoft MPI in the MSDN Library.
Let us know about your experience doing compute-intensive work on Windows Azure. Email us at firstname.lastname@example.org with any feedback or suggestions.
–Alex Sutton, Group Program Manager, Windows Azure Big Compute