Hyper-V, Live Migration, and the upgrade to 10 gigabit Ethernet



Comments (21)

  1. Kevin Holman says:

    Thanks.  They are cheaper than you might think.  You can find the T7500s with nice dual Xeons on eBay for under $1000 or right at it.  I think I paid $1000 for one and $850 for the other.  Then RAM for these was about $650 for the 96GB… a couple of SSDs, and you have a sweet setup for under $2000 per box.  These were added over time… I try to buy one lab server per year or so.

  2. Kevin Holman says:

    When looking for lab servers – one thing to keep in mind is that memory expandability is often related to the number of CPUs.  For instance, the T7500 has the additional memory slots on the second CPU riser, and maps memory to that CPU.  So in order to maximize the memory, you have to ensure you get a dual-CPU config.  Adding an additional CPU later can cost more than the whole server.

  3. Kevin Holman says:

    @Merlus –

    My disks presented to the cluster in my lab are all on a single "storage server".  It is essentially composed of 3 SSD drives connected to the Intel motherboard SATA controller, and then a RAID0 array composed of 4 15k SAS disks.  Then I created an iSCSI disk on each of the 4 "drives" (3 SSDs, one spinning-disk array).  Each cluster node mounts the iSCSI volumes, which are presented as Cluster Shared Volumes.  I put my critical VMs on the SSDs, and all the ancillary VMs on the larger RAID0 array.  If I could afford some 512GB SSD drives, I'd do away with all the spinning disks.  Hyper-V storage migration makes adding/removing/changing/upgrading disks really easy.  If you didn't want to use iSCSI, you could easily create a single-node Scale-Out File Server and do the same thing without the complexity of iSCSI, which I am planning on transitioning to once I upgrade everything to WS 2012 R2.

    Yes, gigabit is FINE for the storage connection.  With 35 VMs I never come close to saturating it.  The latency on the network is near zero.  You'd have to start all 18 VMs at the same time to even see the network become a bottleneck.
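
    For readers who want to wire up something similar, here is a minimal PowerShell sketch of the iSCSI-plus-CSV approach described above, assuming the iSCSI Target Server role is installed on the storage box and the Failover Clustering and Hyper-V modules are on the nodes. All names, paths, IQNs, and the IP address below are hypothetical placeholders, not the actual lab configuration.

    ```powershell
    # --- On the storage server (iSCSI Target Server role installed) ---
    # Carve an iSCSI virtual disk out of one of the SSD volumes (path is a placeholder).
    New-IscsiVirtualDisk -Path "D:\iSCSI\SSD1.vhdx" -SizeBytes 200GB

    # Create a target and allow both cluster nodes to connect (IQNs are placeholders).
    New-IscsiServerTarget -TargetName "LabStorage" `
        -InitiatorIds "IQN:iqn.1991-05.com.microsoft:hv1.lab.local","IQN:iqn.1991-05.com.microsoft:hv2.lab.local"
    Add-IscsiVirtualDiskTargetMapping -TargetName "LabStorage" -Path "D:\iSCSI\SSD1.vhdx"

    # --- On each Hyper-V cluster node ---
    # Connect to the storage server and make the session persistent (IP is a placeholder).
    New-IscsiTargetPortal -TargetPortalAddress "192.168.1.50"
    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

    # Once the disk is online, initialized, and added to the cluster as available storage,
    # promote it to a Cluster Shared Volume so every node can run VMs from it.
    Add-ClusterSharedVolume -Name "Cluster Disk 1"

    # Storage migration then makes moving a VM between the SSD and spinning-disk CSVs trivial.
    Move-VMStorage -VMName "SCOM01" -DestinationStoragePath "C:\ClusterStorage\Volume1\SCOM01"
    ```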

  4. Ed (DareDevil57) says:

    thanks for sharing.

  5. Anonymous says:

    As always fantastic post, I think I now have test lab envy. Sudden urge to up the RAM in my IBM x3650 m2 and search ebay for a second box so I can play around with the live migration functionality.

  6. Kevin Holman says:

    I think Marnix summed it up pretty well.  Sorry…. don't have better news:

    thoughtsonopsmgr.blogspot.com/…/a-farewell-to-old-friend.html

  7. Anonymous says:

    Great content Kevin. I really need to switch to some Xeon lab servers, thanks for the heads-up on the T7500s. eBay search starts in 3, 2, 1…..

  8. Tracy says:

    Kevin:

    Now that MS has ditched the TechNet subscriptions, what options are there for testing out software?

    –Tracy

  9. Aidan Finn says:

    Thanks for the links, Kevin.  I look forward to the day when we get vRSS for management OS vNICs.  Then we might see the all-virtual converged network getting better Live Migration results.

  10. Merlus says:

    I too have lab envy. I am looking for the best option for the storage for a lab.

    How many disks and RAID volumes do you have in the third workstation?

    How are you connecting your hosts to the storage? Is gigabit enough for 18 VMs?

  11. Karthick kesavan says:

    Hi Kevin

    I have one query: if a particular service is running under a specific service account and that service fails, will SCOM be able to restart the service?

  12. karthick kesavan says:

    Will it start a service which is running under the Local System or Local Service account?

  13. ktaber says:

    @Kevin have you re-created this lab setup with 2012 R2? With vRSS, I'd like to hear how much of the link it can saturate now.

  14. Baatch says:

    @Kevin can you give some more specific info on the 10Gb Ethernet card that you bought?

  15. Interested says:

    Hi there. I’m just wondering how you got the network cards so cheap? Were they second hand?

  16. This is one more reason why I like hardware-based converged networking so much, like Cisco UCS. You then create converged network interfaces, each of which supports RSS (Receive Side Scaling). Of course not something for your lab, but good to know.

  17. strict says:

    Great walk-through! Do you know if any progress has been made to improve the ~3Gb/s speed limitation for Virtual Network Adapters on a 10Gb/s converged Virtual Switch? Thanks!

  18. Rajeev says:

    Hi Kevin,
    It has been a few years since you did this testing…. Is there a solution now? A quick Google search shows that vRSS is available and supported by MS now. Have you tried it, and more importantly, have you seen it working?
    If not, I think I may want to use the CNA hardware to present virtual NICs instead of using Hyper-V's converged networking.

    PS: I am experiencing the same problem in a new HV cluster I set up using converged networking. It pushes less than 1 Gbps while doing live migration.
    The main production HV cluster I built a few years back uses discrete 10Gb NICs and is able to push 7-8 Gbps easily while doing live migration.

    1. Kevin Holman says:

      Wow… haven’t thought about this in a long time. Unfortunately I am no longer in a role where I would work with this stuff, and no longer have a lab with Hyper-V clusters and 10GbE. So I am not current, nor is my equipment. Sorry.

      1. Rajeev says:

        Ok, no problem. I will continue your research and post on my own blog – rajdude.com

        By the way:
        1. Just checked: vRSS, pRSS, and VMQ are enabled by default on all my pNICs and vNICs on my servers.
        2. I noticed that the send and receive numbers are different:
        725Mbps sending LMs from HV1 to HV2
        350Mbps receiving LMs at HV1 from HV2

        So something is really off in my Hyper-V Converged Networking setup.
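
        For anyone troubleshooting a similar converged-networking setup, a few read-only cmdlets can confirm the kind of state described above, i.e. whether RSS/VMQ are enabled on the physical NICs and vRSS on the virtual adapters. The adapter name pattern is a placeholder, and the vRSS property names assume Windows Server 2012 R2 or later.

        ```powershell
        # Physical 10GbE NICs: are RSS and VMQ actually enabled, and how are queues/processors assigned?
        Get-NetAdapterRss -Name "10GbE*" | Format-Table Name, Enabled, BaseProcessorNumber, MaxProcessors
        Get-NetAdapterVmq -Name "10GbE*" | Format-Table Name, Enabled, NumberOfReceiveQueues

        # VM virtual adapters: VMQ weight and vRSS state (property names as of WS 2012 R2).
        Get-VMNetworkAdapter -VMName * |
            Format-Table VMName, Name, VmqWeight, VrssEnabled

        # Host (management OS) vNICs on the converged virtual switch: Live Migration, CSV, Management, etc.
        Get-VMNetworkAdapter -ManagementOS | Format-Table Name, SwitchName, VmqWeight
        ```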
