Lessons learned from the MMS Labs solution

 

1. Using Windows Server 2008 R2 SP1 Dynamic Memory, the XB guys scaled well past 100 VMs (mixed sizes up to 8GB) and achieved 225 VMs per host with 128GB of RAM.
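
Back-of-the-envelope on that density, as a minimal sketch in Python (only the 128GB host RAM and the 225-VM count come from the lab; the parent-partition reserve is my assumption):

    # Rough Dynamic Memory density math for the MMS lab hosts.
    # From the lab: 128 GB of host RAM, 225 VMs achieved.
    # Assumption: ~2 GB held back for the parent partition.

    HOST_RAM_GB = 128          # lab hardware
    PARENT_RESERVE_GB = 2      # assumed host OS reserve
    VM_COUNT = 225             # density the XB team hit

    usable_gb = HOST_RAM_GB - PARENT_RESERVE_GB
    avg_ram_per_vm_gb = usable_gb / VM_COUNT

    print(f"Usable RAM for VMs: {usable_gb} GB")
    print(f"Average physical RAM per VM: {avg_ram_per_vm_gb:.2f} GB")
    # ~0.56 GB per VM on average, even though some VMs are sized up to 8GB -
    # only possible because Dynamic Memory reclaims RAM from idle VMs.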

2. You need A LOT of IOPS for that. They used the HP IO Accelerator cards, rated at 100,000 IOPS per server, and put the VM differencing disks on that card. We saw a single Hyper-V server hit 30K-40K IOPS (total) with 125 VMs running (roughly 240-320 IOPS each) with users logged in running heavy labs like SCOM, SCCM, SharePoint, and Lync. Rough IOPS budget math is sketched after 2.c below.

a. With the proper storage and hardware design, Hyper-V SP1 can handle ANY workload

b. The ONLY issue today is that CSV volumes are a shared IO resource. We need VM-to-LUN QoS on the storage paths so we can guarantee IOPS performance to each VM on shared disks

c. The IO Accelerator cards are a GREAT VDI story
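
To put numbers behind 2.a and 2.b, here is a minimal IOPS budget sketch (the card rating, VM count, and observed totals are from the lab above; the function names and headroom calculation are just illustrative):

    # IOPS budget for differencing disks on an HP IO Accelerator card.
    # From the lab: 100,000 IOPS per server, 125 running VMs,
    # 30K-40K total IOPS observed under heavy lab load.

    CARD_IOPS = 100_000
    RUNNING_VMS = 125
    OBSERVED_TOTALS = (30_000, 40_000)

    def per_vm_iops(total_iops, vm_count=RUNNING_VMS):
        """Average IOPS each VM is driving."""
        return total_iops / vm_count

    def card_headroom(total_iops, capacity=CARD_IOPS):
        """Fraction of the card's rated IOPS still unused."""
        return 1 - total_iops / capacity

    for total in OBSERVED_TOTALS:
        print(f"{total:>6} total IOPS -> "
              f"{per_vm_iops(total):.0f} IOPS/VM, "
              f"{card_headroom(total):.0%} headroom on the card")
    # 240-320 IOPS per VM with 60-70% of the card unused: the hardware is
    # not the limit, which is why the shared-CSV QoS gap in 2.b matters.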

3. Trunking of 1GbE networks is a very cost-effective alternative to 10GbE – which VMware and Cisco are pushing heavily.

a. Trunking allows you to consolidate, say, (4) 1GbE links into the blade enclosure switch and use virtual networking to distribute the load to the VMs.

b. At MMS they only used (2) 1GbE trunked links and saw less than 30% utilization with 100+ VMs running labs (rough bandwidth math is sketched after 3.c below).

c. HP’s FlexFabric is pretty amazing – it lets you split 10Gb of bandwidth between Fibre Channel and Ethernet. I’d say it’s a more flexible solution than the pure 10GbE switching that the Cisco/VMware vBlock is pushing.
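
A quick bandwidth sanity check on 3.a and 3.b (the per-VM average just divides the observed utilization evenly; the 4-link case is the hypothetical from 3.a):

    # Trunked 1GbE math from the MMS lab observation:
    # (2) 1GbE links trunked, <30% utilization with 100+ VMs running labs.

    LINK_GBPS = 1.0        # single 1GbE link
    TRUNKED_LINKS = 2      # what MMS actually used
    UTILIZATION = 0.30     # observed ceiling
    VM_COUNT = 100         # "100+ VMs"

    aggregate_gbps = LINK_GBPS * TRUNKED_LINKS
    used_gbps = aggregate_gbps * UTILIZATION
    avg_mbps_per_vm = used_gbps * 1000 / VM_COUNT

    print(f"Aggregate trunk bandwidth: {aggregate_gbps:.1f} Gb/s")
    print(f"Peak observed usage:       {used_gbps:.2f} Gb/s")
    print(f"Average per VM:            {avg_mbps_per_vm:.1f} Mb/s")
    # The same load on the (4)-link trunk from 3.a would sit at ~15%
    # utilization - a big part of why trunked 1GbE competes with 10GbE on cost.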