Today, we are pleased to announce that Update 3 for HPC Pack 2012 R2 is now available. With this latest update, HPC Pack moves into some exciting new areas: GPU support, support for on-premises Linux nodes, and burst to the Azure Batch service.
We know that GPGPU computing in HPC is becoming popular not just in academic research but also in enterprise engineering work, and HPC Pack customers have been asking for it for quite a while. This update introduces support for NVIDIA Tesla GPUs, so HPC Pack users can manage and monitor the supported GPU resources and schedule HPC jobs that fully leverage them. For more details, please see the follow-up blog post How to fully use GPU resources to run GPGPU jobs in Microsoft HPC Pack and the detailed GPU support article on TechNet.
About six months ago, we introduced Azure Linux VM support for HPC Pack in Update 2. With Update 3, HPC Pack now supports Linux on-premises compute nodes. Customers running high-performance computing clusters on Linux can now use HPC Pack's deployment, management, and scheduling capabilities, with a user experience very similar to that of Windows nodes. For more details, please see the description of on-premises Linux support on TechNet.
Microsoft introduced the Azure Batch service a few months ago; it provides highly scalable job scheduling and compute management. Starting with this update, HPC Pack can deploy Azure Batch pools from the head node and treat each compute pool as a “fat” node in the system, so batch jobs and tasks can be scheduled on those pool nodes. For more information about Azure Batch support in HPC Pack, please visit Burst to Azure Batch from Microsoft HPC Pack.
There are also a number of other notable improvements, for example:
SOA improvements – this release includes better mapping of broker worker logs to sessions, supports running Excel under the console session by default, and provides a SOA job view in the web portal.
Auto grow/shrink supports SOA workloads – the service can now grow nodes based on the number of outstanding calls in a SOA job instead of the number of tasks.
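The idea behind call-based growth can be sketched roughly as follows. This is a simplified Python illustration of the concept, not the actual HPC Pack implementation; the `calls_per_core` ratio and function name are hypothetical:

```python
# Simplified sketch of SOA-aware auto grow/shrink (hypothetical logic,
# not HPC Pack's actual algorithm).

def target_node_count(outstanding_calls, cores_per_node, calls_per_core=1):
    """Estimate how many nodes are needed to serve the outstanding
    SOA calls at a desired ratio of calls per core."""
    if outstanding_calls <= 0:
        return 0  # nothing queued: the pool can shrink
    cores_needed = -(-outstanding_calls // calls_per_core)  # ceiling division
    return -(-cores_needed // cores_per_node)               # ceiling division

# Sizing on outstanding calls rather than task count matters because a
# single SOA task can carry thousands of individual calls.
print(target_node_count(outstanding_calls=5000, cores_per_node=16))  # 313
```

Growing on calls rather than tasks lets the cluster scale to the real backlog of work inside a SOA session.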
Per-instance heat maps – previously only aggregated heat maps were available, but now you can view per-instance heat maps through the “Overlaying view”.
Customizable idle detection logic – previously, workstation nodes and unmanaged server nodes were treated as idle based on keyboard/mouse detection or the CPU usage of non-HPC processes. This update provides several customizable ways to detect whether these machines are idle.
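As a rough illustration, an idle-detection policy might combine signals such as time since the last user input and the CPU usage of non-HPC processes. The following minimal Python sketch shows the idea; the function name and thresholds are illustrative assumptions, not HPC Pack's actual interface:

```python
# Illustrative idle-detection check (hypothetical; HPC Pack exposes its
# own configurable detection options -- this only sketches the idea).

def is_idle(seconds_since_input, non_hpc_cpu_percent,
            input_threshold=600, cpu_threshold=20):
    """Treat a workstation as idle only if the user has been away long
    enough AND non-HPC processes are using little CPU."""
    return (seconds_since_input >= input_threshold
            and non_hpc_cpu_percent < cpu_threshold)

print(is_idle(900, 5))   # True: no input for 15 minutes, CPU quiet
print(is_idle(900, 55))  # False: a non-HPC workload is still busy
```

Making such thresholds configurable lets administrators tune when workstations become available to the cluster.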
Removed the limit on the number of parametric sweep tasks in a job – HPC Pack used to limit a parametric sweep job to 100 tasks; that limitation is now removed.
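For context, a parametric sweep expands one command template into many tasks by substituting a sweep index for a placeholder. The toy Python sketch below illustrates that expansion locally; in practice the scheduler performs it, and the template and function here are only for illustration:

```python
# Toy expansion of a parametric sweep: each index in the sweep range is
# substituted for the '*' placeholder in the command template.
# Illustrative only -- not how the HPC Pack scheduler is implemented.

def expand_sweep(template, start, end, step=1):
    return [template.replace("*", str(i)) for i in range(start, end + 1, step)]

tasks = expand_sweep("process.exe input*.dat", 1, 500)
print(len(tasks))  # 500 tasks -- well beyond the old 100-task cap
print(tasks[0])    # process.exe input1.dat
```

With the limit removed, sweeps of this size can now run as a single job.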
Microsoft MPI 7.0 is included.
To learn more about these features and how to get the new package, please visit the what’s new and release notes pages. As always, we look forward to hearing from you. You can get in touch with us via the Windows HPC forum, or you can email us directly.