Storage Spaces Survival Guide (Links to presentations, articles, blogs, tools)


In this post, I’m sharing my favorite links related to Storage Spaces in Windows Server 2012 R2. This includes TechEd Presentations, TechNet articles, Blogs and tools related to Storage Spaces in general and more specifically about its deployment in a Failover Cluster or Scale-Out File Server configuration. It’s obviously not a complete reference (there are always new blogs and articles being posted), but hopefully this is a useful collection of links.


TechEd Presentations

TechNet Articles – Storage Spaces

TechNet Wiki – Storage Spaces

Microsoft Cloud Platform System (CPS) powered by Dell

TechNet Articles – Cost-Effective Storage for Hyper-V

Blogs – Storage Spaces

TechNet Download – Tools

Updates required for deployment

Windows Server Catalog

Partner Articles on Storage Spaces (alphabetical order, just a sample of the many partner solutions out there)


Thanks for the suggestions in the comments section (some of them already added to the list). Keep them coming…

Comments (13)

  1. JoseBarreto says:

    @Jens – Fixed it. Thanks for catching it.

  2. JoseBarreto says:

    @jaromirk – Thanks for the great suggestion. Added some partner articles. The SQLIO blog does provide some of the information you asked.

  3. Matt Garson[MSFT] says:

    @paul

    Paul, I work on Storage Spaces. We appreciate the feedback and I’m sorry that this has been challenging.

    I’d like to understand better what you’re seeing. Would you mind sending me an email? (first initial + last name at Microsoft.com)

  4. IT-Assistans says:

    Mattias Gustavsson, we have the same issues with tiering, so the jobs that do the tiering have been disabled until we have a resolution on this. When tiering was enabled, all our VMs froze and crashed.

    We are also waiting on an update to this.
    If you, like me, are based in Sweden and you feel like it, please get in touch with me: d a v e AT itassistans .se

  5. Jens Peter Christensen says:

    The link for "Step-by-Step for Mirrored Storage Spaces Resiliency using PowerShell" is wrong (identical to link just above).

  6. jaromirk says:

    And I would love to see some kind of SQLIO reference test script – an industry-wide performance test that professionals could use to assess their storage. It would cover the most common settings (reads, writes, 70/30 mixes, 8K block size, 64K block size, …), include templates simulating a typical SQL workload or a typical Hyper-V workload, and produce an HTML result. That way you could just "stamp" the SAN solution you’ve built before putting it into production. Thank you for the great work you are doing! And of course thank you for the great TechEd/TechReady sessions!
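
    [Editor’s note: for readers looking for a starting point along the lines jaromirk describes, a minimal SQLIO sketch follows. The file path, size, thread count, and durations are placeholders; adjust them to your environment before running.]

    ```powershell
    # param.txt tells SQLIO which test file(s) to use, one per line:
    #   <path> <threads> <mask> <file size in MB>
    # Example contents:  E:\testfile.dat 4 0x0 20480

    # Random 8 KB reads, 8 outstanding I/Os per thread, 5 minutes,
    # latency histogram (-LS), no buffering (-BN):
    sqlio -kR -frandom -b8 -o8 -s300 -LS -BN -Fparam.txt

    # Sequential 64 KB writes, same duration:
    sqlio -kW -fsequential -b64 -o8 -s300 -LS -BN -Fparam.txt
    ```

    Running the read and write passes separately and varying -b and -o across runs approximates the "70/30, 8K, 64K" matrix mentioned above; SQLIO itself only writes text output, so any HTML report would have to be scripted around it.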

  7. jaromirk says:

    I also miss this one:
    http://blogs.technet.com/b/josebda/archive/2013/08/16/3587652.aspx Windows Server 2012 R2 Storage: Step-by-step with Storage Spaces, SMB Scale-Out and Shared VHDX (Virtual)

  8. Paul Spontaneo says:

    Absolutely no documentation, official or unofficial, on setting up "Enclosure Fail-Over" properly. Yes, there is a quick blurb about the number of enclosures required to achieve fail-over at a given resiliency level (mirror versus parity, etc.), but no step-by-step guide for configuring and testing it properly. We had to experiment for months to get this working, and it’s buggy to say the least (e.g. disk configuration/count per enclosure, where the quorum disk should reside, whether a separate pool should be set up, etc.). We are thinking of writing a blog post to explain exactly how we got it all working (for mirrored spaces). We have not yet attempted an enclosure-awareness deployment with dual parity.

    Aside from getting "Enclosure Fail-over" working, we have had nothing but problems with the Intel JBODs, even though they are on the "Storage Spaces R2 Certified List". We have been working with Intel and Microsoft support for over a month, and tickets on both ends remain open. Intel had to release a firmware fix just to allow multiple JBODs to be identified correctly. How their initial release ever passed certification is beyond us; both Microsoft and Intel refuse to tell us how certification was achieved. Regardless, there are all sorts of other bugs/glitches still apparent, including the following:
    a) A single JBOD is identified as multiple JBODs in Storage Spaces when MPIO is enabled. This only happens with Intel, and only sometimes; other certified JBODs are detected and listed properly as a single unit when MPIO is enabled (Get-StorageEnclosure). This is cosmetic and does not seem to break anything, but it’s confusing and weird.
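
    [Editor’s note: for context, enclosure awareness in Windows Server 2012 R2 is requested per virtual disk rather than configured globally. A minimal sketch follows; the pool name, space name, and size are placeholders, and for a two-way mirror the platform expects at least three certified enclosures.]

    ```powershell
    # List enclosures and their reported health. The cosmetic issues
    # described above (duplicate or unhealthy enclosures) show up here:
    Get-StorageEnclosure | Format-Table FriendlyName, HealthStatus, NumberOfSlots

    # Create a mirrored space whose data copies are spread across
    # enclosures, so losing one JBOD does not take the space offline:
    New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Space1" `
        -ResiliencySettingName Mirror -Size 1TB -IsEnclosureAware $true
    ```

    This does not substitute for the missing step-by-step guide Paul asks for (quorum placement, per-enclosure disk counts, and failure testing still have to be worked out per deployment), but -IsEnclosureAware is the switch that turns the behavior on.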

  9. paul says:

    b) Sometimes, if a storage pool and multiple spaces are created on a single host (stand-alone), and that host is later added to a cluster (where the JBOD[s] is shared via SAS with other cluster hosts), it is impossible to bring the cluster disks online. The storage pool adds fine and comes online, but the cluster disks fail to come online with a "Device Not Ready" error (0x15). We spent days attempting to resolve this, with no luck. It seems to be an issue with Storage Spaces specifically: even if we remove all other nodes from the cluster, disconnect all nodes physically from the JBODs, and attempt to create a single-node cluster, the cluster disks still fail to come online. However, if we kill the cluster and simply mount the disks as local drives on a single server, they work fine (we can replicate this on any single node attached to the shared Intel JBOD). Some metadata seems to get corrupted, or incorrectly specified, in a way that prevents the volumes from coming online when added to a cluster. As stated, we have been unable to resolve this, but we have found some German posts that seem to validate our experience. There is not much out there, I am afraid; beyond the German blogs we found, nobody seems to know how to repair this issue. Blowing away the machines entirely and rebuilding the cluster and Storage Spaces from scratch seems to be the only solution.
    c) The Intel JBODs always report as unhealthy (even though they are healthy). This is a known issue and, again, cosmetic… …but still, how did this stuff pass certification?

  10. Hi. I can add to Paul’s experience. We also use Intel JBODs, and I’ve asked myself the same question: how did this get certified? The storage enclosure reports as unhealthy and shows up twice. The showing-twice bug is resolved by getting a PDB replacement from Intel and updating the firmware on the JBOD; this is documented on the Intel support page for the JBOD enclosure.

    We have not experienced problems with getting the cluster disks online, but maybe we just haven’t hit the bug.
    We do, however, have massive problems with the SOFS cluster becoming unresponsive and finally crashing. After months of troubleshooting, including a Microsoft support case (where they didn’t seem to know what a SOFS was and soon pointed the finger at Intel), it seems our error comes down to a bug in Storage Tier Optimization (so, not Intel). I’ve been told by Microsoft MVPs that there is a bug in Storage Tier Optimization and that the Microsoft storage team is developing a fix (it was supposed to be released on November 12 but was delayed), yet I cannot find any information about this on the internet, even now that I know what the problem is. (Maybe I must use Bing in order to find this secret Microsoft info.)
    This raises the question: does anyone actually use storage tiering, or is the bug only present in some systems? Since I can’t find any info, it is hard to know how to tackle the situation.
    The recommendation I get from the MVPs is to disable Storage Tier Optimization, but what is the use of storage tiering then?
    I feel a bit let down by Microsoft and their certification, since in real life it only looks like something to hide behind in order to NOT have to give support.

    /Mattias – waiting "patiently" for the MS fix
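
    [Editor’s note: for reference, the tiering job that IT-Assistans and Mattias describe disabling is a Windows scheduled task. A sketch of inspecting and disabling it (run from an elevated PowerShell prompt on each cluster node) follows; this stops only the scheduled data movement between tiers, it does not remove the tiers themselves.]

    ```powershell
    # The tiers optimization job lives under this scheduled-task path:
    $path = "\Microsoft\Windows\Storage Tiers Management\"

    # Inspect its current state and last run result:
    Get-ScheduledTask -TaskPath $path -TaskName "Storage Tiers Optimization" |
        Get-ScheduledTaskInfo

    # Disable it until a fix is available:
    Disable-ScheduledTask -TaskPath $path -TaskName "Storage Tiers Optimization"
    ```

    Re-enabling later is the mirror-image call, Enable-ScheduledTask with the same path and name.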
