Case Studies on Storage Spaces, Scale-Out File Servers with SMB3, or Both


Many customers have been using Storage Spaces and Scale-Out File Servers with SMB3 since their initial release in Windows Server 2012 a few years back.

Every once in a while, someone will ask me for details on how these technologies were deployed by customers. The best source for those examples is the Microsoft Case Studies site.

The list below includes case studies on how customers deployed solutions using Storage Spaces, SMB3 file servers, or both combined:

You should also note that the recently released Cloud Platform System (CPS) is another example of a solution that uses both Storage Spaces and Scale-Out File Servers with SMB3:

If you’re focused on gathering data about the performance of Storage Spaces and Scale-Out File Servers, there are a few interesting white papers available:

For more information about Storage Spaces or SMB, you can check these blog posts:

Comments (6)

  1. JoseBarreto says:

    @Wes

    For your first scenario, you should be able to use the same SSDs for both WBC and tiering. However, the column count and number of data copies apply to all tiers in your virtual disk. To use 6 columns and 2 copies, you would need at least 12 SSDs and 12 HDDs. With 12 HDDs but only 2 SSDs, you would need to use 1 column and 2 copies.

    On your second scenario, you would need to create the second HDD-only virtual disk without using tiers. You need to use the -Size parameter instead of the -StorageTierSizes parameter, and the -PhysicalDisks parameter (with only the list of HDDs) instead of the -StorageTiers parameter.
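
    For illustration, here is a minimal PowerShell sketch of both suggestions. It assumes the pool named "test" from Wes's commands; the tier names (SSDTier, HDDTier) and the sizes are placeholders of my own choosing, not recommendations.

    # Scenario 1: a single tiered, mirrored virtual disk. With only 2 SSDs
    # and 2 data copies, the column count is limited to 1 across all tiers.
    $tier_ssd = New-StorageTier -StoragePoolFriendlyName test -FriendlyName SSDTier -MediaType SSD
    $tier_hdd = New-StorageTier -StoragePoolFriendlyName test -FriendlyName HDDTier -MediaType HDD
    New-VirtualDisk -StoragePoolFriendlyName test -FriendlyName tiereddisk -StorageTiers $tier_ssd, $tier_hdd -StorageTierSizes 100GB, 1TB -ResiliencySettingName Mirror -ProvisioningType Fixed -NumberOfColumns 1 -NumberOfDataCopies 2 -WriteCacheSize 16GB

    # Scenario 2: a second, HDD-only virtual disk created without tiers,
    # using -Size instead of -StorageTiers/-StorageTierSizes. Per the reply
    # above, restrict it to the HDDs by passing only those in the physical
    # disk list (omitted here for brevity).
    New-VirtualDisk -StoragePoolFriendlyName test -FriendlyName hdddisk -Size 2TB -ResiliencySettingName Mirror -ProvisioningType Fixed -NumberOfColumns 6 -NumberOfDataCopies 2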

  2. Wes says:

    Hi Jose, thank you for all the fantastic information. I am setting up a new Hyper-V host and we have twelve 10K SAS drives and two 240 GB SSD drives. I'd like to create a sizeable WBC, and then set aside the rest of the SSD space as a separately addressed drive letter. Is this possible, or do I lose the extra space if I want to use these SSDs to cache with?

    This is successful after setting the SSDs to journal usage, but I can't use my extra SSD space:

    New-VirtualDisk -StoragePoolFriendlyName test -FriendlyName maindisk -UseMaximumSize -ResiliencySettingName mirror -ProvisioningType Fixed -NumberOfColumns 6 -NumberOfDataCopies 2 -WriteCacheSize 16GB

    If I don't set my SSDs to journal usage, and then try this:

    New-VirtualDisk -StoragePoolFriendlyName test -FriendlyName maindisk -StorageTiers ($tier_hdd) -StorageTierSizes (120GB) -ResiliencySettingName mirror -ProvisioningType Fixed -NumberOfColumns 6 -NumberOfDataCopies 2 -WriteCacheSize 16GB

    it keeps saying "You must specify the size info (either the Size or UseMaximumSize parameter) or the tier info (the StorageTiers and StorageTierSizes parameters), but not both size info and tier info." I don't understand why this error is coming up, since I am specifying -StorageTiers and -StorageTierSizes without any size parameter…

    thanks!!
    Wes

  3. Wes says:

    Although I successfully created a basic virtual disk on the SSD disks, every time I try to build something with the HDD tier it fails:

    New-VirtualDisk -StoragePoolFriendlyName test -FriendlyName hddisk -StorageTiers @($tier_hdd) -StorageTierSizes @(11gb) -ResiliencySettingName mirror -WriteCacheSize 5gb

    New-VirtualDisk : Failed to run CIM method CreateVirtualDisk on the MSFT_StoragePool (ObjectId =
    "{1}\SSTESTroot/Microsoft/Windows/Stor…) CIM object. CIM array cannot contain null elements.
    Parameter name: value
    At line:1 char:1
    + New-VirtualDisk -StoragePoolFriendlyName test -FriendlyName hddisk -StorageTiers …
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo : NotSpecified: (MSFT_StoragePoo…indows/Stor…):CimInstance) [New-VirtualDisk], CimJobException
    + FullyQualifiedErrorId : CimJob_ArgumentException,New-VirtualDisk
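
    One possible cause, offered here as an assumption rather than a confirmed diagnosis: the "CIM array cannot contain null elements" message can appear when $tier_hdd is empty, for example in a new PowerShell session where the variable was never assigned. A quick check before calling New-VirtualDisk:

    # Re-fetch the tier object and confirm it is not $null before using it
    # (the tier name "HDDTier" is an assumed example).
    $tier_hdd = Get-StorageTier -FriendlyName HDDTier
    if ($null -eq $tier_hdd) { Write-Error "Storage tier not found; create it with New-StorageTier first." }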

  4. Jeff Wilson says:

    Awesome stuff, Jose.

    It seems the Storage Spaces team wants some feedback as vNext no doubt grinds toward an interesting debut next year. I'd love to chat, but just in case we can't, here are some brief thoughts.

    I was skeptical that Storage Spaces was ready for primetime in 2012, and some nasty dedupe experiences reinforced that, but with the constant improvements through R2, R2 + Updates, and the ever-evolving cmdlets, my confidence in the storage product has grown to the point that I'm ready to deploy it in production scenarios, save for one small problem: the lack of off-site vDisk replication.

    I know that Server Technical Preview was going in that direction, and I hope it's still in next year's product. My sense is that virtual disk/pool replication to an off-site Storage Spaces array utilizing ODX, like a 3PAR, would be a game-changer. It'd be even more amazing if I could use ODX to replicate my vDisks to Azure and didn't have to stress over .vhd vs. .vhdx.

    Speaking of Azure, the 1023 GB .vhd limit stresses me, but I can speak from experience and say that D-series Azure VMs with 12 or more attached virtual disks work just fine with Storage Spaces, even if my brain starts to hurt thinking about 3-way mirrors in the context of geo-redundant sets. Cool stuff, especially the way one can use Azure-only cmdlets and commands to copy terabytes of Storage Spaces vDisk data across Azure regions. I don't know much about object storage, but if it's object storage that lets me do that, I say I want some more of it!

    Some other general comments:

    – SMB shares mapped to lettered drives on client PCs have changed from an annoying crutch into a top-10 attack vector in the age of increasingly sophisticated ransomware. This isn't Microsoft's fault, but if ever a band-aid needed tearing off…

    – I think Microsoft should put more effort into explaining the benefits of File Classification Infrastructure; I see so many disorganized, lazy, terribly insecure department file shares out there that it keeps me up at night.

  5. Cloud-Ras says:

    Nice to get a little insight into Microsoft's Windows Build Team 🙂

  6. Dave says:

    Hi Jose, we met back in August/September 2014 in Redmond, after a Dell-supported Clustered Storage Spaces deployment where we experienced performance issues. When we met with you and some of your colleagues, you described our issue as “excess disk cache flushes”. We’re now back on a clustered solution where we’re experiencing the same performance issues, with event log entries in the SMBServer | Operational log folder as we had back in 2014.

    Researching the events that occurred previously and looking for new resources, I found a paper concerning the PerfMon Cluster CSV File System Flushes counter. I’m looking for your opinion on what an acceptable value is. We’re seeing values on four volumes of 419,000.000; 293,530.000; 118,125.000; and 804,080. In Task Manager the response times (ms) are 500-1000 ms, which ramp up over a period of time, then drop down to 10-50 ms, and then repeat. Also in Task Manager, disk performance shows 10-20 MB/s of disk writes. Performance on VMs goes from OK to drastically poor. Can you shed any light on this? What would be acceptable values? We’ll be submitting a ticket with MS tomorrow.
    Thanks.
    Dave
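
    For readers who want to sample the same counter, here is a minimal PowerShell sketch. The exact counter path below is an assumption based on the counter set Dave names; confirm it on your own cluster with Get-Counter -ListSet first.

    # Discover the CSV file-system counter paths, then take a few samples
    # of the Flushes counter across all CSV volumes (5-second interval).
    Get-Counter -ListSet "*CSV*" | Select-Object -ExpandProperty Counter
    Get-Counter -Counter "\Cluster CSV File System(*)\Flushes" -SampleInterval 5 -MaxSamples 12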
