Storage Spaces, JBODs, and Failover Clustering – A Recipe for Cost-Effective, Highly Available Storage

Hi Folks –

I’m often asked how to deploy inexpensive, reliable, cluster-connected storage using Windows Storage Server 2012—without the cost of expensive RAID adapters, external RAID arrays, and SAN switch fabric. Fortunately, the answer is an easy one. Other than a couple of servers running Windows Storage Server 2012 Standard, all you need are SAS host bus adapters and a certified JBOD (just a bunch of disks) enclosure. Everything else is built into the OS—namely, Storage Spaces and Failover Clustering, which together let you implement highly available, clustered storage.

How it works:

  • The new Storage Spaces feature in Windows Server 2012 is a storage virtualization and abstraction layer that improves manageability and data protection.

  • Storage administrators can group inexpensive disks into Storage Pools, which enable storage aggregation, elastic expansion, and delegated administration. 

  • The resulting pools of storage can then be used to create virtual disks—with configurable levels of data protection, such as simple, mirrored, or parity. When a virtual disk is created, Storage Spaces will stripe the data across the physical disks.

With Storage Spaces, if you specify that you want mirrored copies of the data and enough disks are available, Windows Storage Server will spread the copies across the disks intelligently. If a drive fails, Storage Spaces automatically begins creating a new copy on the other available disks to ensure you still have the desired number of copies. When you replace the failed disk, Storage Spaces adds it to the pool and begins using it again.
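As a rough sketch of that workflow, here is how pooling disks and creating a mirrored space might look in Windows PowerShell, using the built-in Storage module (all friendly names here are hypothetical, and sizes are illustrative):

```powershell
# Find the physical disks that are eligible to be pooled.
$disks = Get-PhysicalDisk -CanPool $true

# Group them into a new storage pool on the local storage subsystem.
New-StoragePool -FriendlyName "Pool1" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks

# Create a two-way mirrored virtual disk from the pool; Storage Spaces
# stripes the mirrored copies across the physical disks for you.
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "MirrorSpace" `
    -ResiliencySettingName Mirror -Size 2TB -ProvisioningType Thin
```

From there, the virtual disk shows up like any other disk: you initialize it, create a volume, and format it as usual.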

Storage Spaces + Failover Clustering + CSV = Scale-Out File Server

Storage Spaces works well in a standalone system; however, you can also use it together with Failover Clustering, Cluster Shared Volumes (CSV), and one or more JBOD enclosures to create a Scale-Out File Server (SOFS). Here is a deployment view of application servers connecting to a highly available storage cluster.
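To sketch the SOFS side in PowerShell (cluster, node, address, and share names below are all hypothetical), the Failover Clustering and SMB cmdlets follow the same pattern:

```powershell
# Create the cluster from the nodes that share the JBOD.
New-Cluster -Name "StorCluster" -Node "Node1","Node2" -StaticAddress 192.168.1.50

# Add the Scale-Out File Server role on top of the cluster.
Add-ClusterScaleOutFileServerRole -Name "SOFS"

# Put a clustered disk into CSV so every node can serve the same volume...
Add-ClusterSharedVolume -Name "Cluster Disk 1"

# ...and expose a continuously available SMB share to application servers.
New-SmbShare -Name "VMStore" -Path "C:\ClusterStorage\Volume1\Shares\VMStore" `
    -ContinuouslyAvailable $true -FullAccess "DOMAIN\AppServers"
```

The continuously available share is what lets application servers keep their handles open when a node fails over.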


And here is an example of a 4-node cluster. Each node contains 4 SAS HBAs connecting to 4 separate 60-bay JBODs. With 4 TB drives, this configuration would have nearly a petabyte of raw disk space (4 JBODs × 60 bays × 4 TB = 960 TB)!


Hardware Requirements and Configuration

To implement clustered Storage Spaces, you’ll need some hardware:

  • Two or more systems running Windows Server 2012 (Standard or Datacenter) or Windows Storage Server 2012 Standard.

  • SAS Host Bus Adapters (HBAs). A simple SAS HBA is preferred over a RAID controller. You can use a RAID controller if it supports pass-through (non-RAID) mode; however, an expensive RAID controller is a waste of money if you are just connecting to a JBOD.

  • A Windows Server 2012-certified SAS JBOD enclosure. (You can find a list under the Storage Spaces Category in the Windows Server Catalog.)

  • A minimum of three physical drives; more may be required depending on the number of columns and the resiliency setting. Dual-parity storage spaces require a minimum of seven drives.

  • Optionally, SSDs for high-speed caching and data tiering. If you want, you can carve out a partition on an SSD for the operating system and dedicate the rest to Storage Spaces.

  • If you want to deploy a cluster with more nodes than your JBOD has SAS ports, you might also need a SAS switch so that all the nodes can connect to all the disks in the JBOD.
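The drive-count requirements above map directly to New-VirtualDisk parameters. A sketch, assuming a pool named Pool1 already exists (the sizes are illustrative, and note that dual parity requires Windows Server 2012 R2 or later):

```powershell
# Single parity: minimum 3 disks, tolerates 1 drive failure.
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "ParitySpace" `
    -ResiliencySettingName Parity -PhysicalDiskRedundancy 1 `
    -NumberOfColumns 3 -Size 4TB -ProvisioningType Fixed

# Dual parity: minimum 7 disks, tolerates 2 drive failures (2012 R2+).
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "DualParity" `
    -ResiliencySettingName Parity -PhysicalDiskRedundancy 2 `
    -NumberOfColumns 7 -Size 4TB -ProvisioningType Fixed
```

More columns generally mean better throughput, but each column consumes a physical disk, which is why the minimum drive counts scale with the resiliency level.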

After you have all the parts, you can find instructions on how to configure a Clustered Storage Space using Windows Server 2012 here. These instructions also tell you how to configure the storage using Windows PowerShell scripts. A great overview on storage pools and Storage Spaces can be found here and this FAQ highlights the new features found in Windows Storage Server 2012 R2.
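Once the clustered pool is configured, a quick PowerShell sanity check (the pool name is hypothetical) confirms that it is healthy and owned by the cluster before you start carving out virtual disks:

```powershell
# Verify the pool is clustered and healthy before creating virtual disks.
Get-StoragePool -FriendlyName "Pool1" |
    Select-Object FriendlyName, HealthStatus, OperationalStatus, IsClustered
```

IsClustered should report True; if it doesn't, the pool hasn't been added as a cluster resource and won't fail over with the nodes.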

Certified JBOD Enclosures

If you’re planning to deploy a clustered Storage Space, I can’t stress enough how important it is to use one of the Windows Server 2012-certified JBOD enclosures listed here, in the Windows Server Catalog. Using a certified enclosure ensures that:

  • Its backplane chipset supports the required features and delivers adequate performance.

  • The JBOD supports SCSI Enclosure Services (SES), which makes the correct lights blink to identify the enclosure, power and drive activity, hot spares, and, most importantly, a drive that needs replacement.
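SES is also what lets you locate a drive from software. For example, on Windows Server 2012 the Storage module can blink a drive's identification LED (a sketch, assuming an SES-capable certified enclosure):

```powershell
# Blink the locate LED on any drive that is no longer healthy,
# so a technician can find it in the enclosure and swap it out.
Get-PhysicalDisk | Where-Object { $_.HealthStatus -ne "Healthy" } |
    Enable-PhysicalDiskIndication
```

Without SES support, these cmdlets have nothing to talk to, which is one more reason to stick to the certified list.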

As of the posting of this blog article, the list of certified enclosures on the market included:

DataON Storage:
DNS-1640 2U 24 Bay 2.5" 6Gb/s SAS JBOD and DNS-1660 4U 60 Bay 3.5" 6Gb/s SAS JBOD


Fujitsu: ETERNUS JX40


Fujitsu: PRIMERGY CX420 S1 – a 2-node Cluster-in-a-box with built-in storage.




Quanta Computer: M4240H


Quanta Computer: M4600H




Super Micro Computer: SuperChassis 847E26-RJBOD1


Of course, new vendors and models are being added to the catalog as they become certified, so if you’re reading this post after its publication date, don’t assume that the above vendors and models are your only options. Check the Windows Server Catalog for the most recent list of products.

Useful Links

These other resources may also be useful:

  • Another good overview of Storage Spaces and its capabilities can be found here.
  • An excellent presentation from TechEd 2013 that covers Storage Spaces can be found here.

The speed and horsepower of modern CPUs, coupled with the continual drop in memory and hard disk drive costs, have made it possible to integrate storage into an industry-standard server without offloading the storage calculations to a dedicated processor that increases total cost.

When used together, Storage Spaces, Failover Clustering, and external JBODs are a great way to get reliable storage that is easy to manage and doesn’t require a mortgage to afford—inexpensive, reliable storage built using Windows Storage Server is really a great recipe for your data storage needs!

Scott M. Johnson
Senior Program Manager
Windows Storage Server

Comments (18)
  1. Janez Kranjski says:


    I have to design a large solution but currently don’t have sufficient equipment to test it first. Depending on your kind comment we might set up a test lab. The requirement is not a large IOPS number, but one single share as big as possible:

    A 2-node storage spaces cluster with 4 JBODs, each with 80 × 6 TB disks (mirror spaces), compliant with:
    •Up to 80 physical disks in a clustered storage pool (to allow time for the pool to fail over to other nodes)
    •Up to four storage pools per cluster
    •Up to 480 TB of capacity in a single storage pool

    That gives us roughly 960TB formatted space in one CSV?

    Next would be to set up a 2- or 4-node SOFS. And here follows the most crucial question:
    Is it possible to extend one single CSV over two storage spaces clusters? By that we would get a roughly 1.92 PB volume? Can we add even more storage spaces clusters? If yes, what would be the limit?

    Thank you very much for your help.


  2. Anonymous says:

    Regarding the second image in your blog post as an example of a 4-node cluster connecting to 4 JBOD enclosures. I’ve seen variations of this image in both Microsoft and non-Microsoft presentations, including a couple at TechEd. However, while adding 4 HBAs to a server is easy enough, I haven’t found a storage spaces enclosure that has dual controllers *and* accepts four connections per controller as pictured. Seems rather misleading, no?

  3. Anonymous says:

    Do you know how to create the cluster between two JBODs and two cluster nodes? Any ideas?

  4. Jonas says:

    What about the "SAS Host Bus Adapter (HBA)". Does it have to be the LSI Syncro CS to enable load balancing? Would any certified HBA do if you're only looking for Active/Passive?

  5. Jonas:

    Any normal SAS HBA should work. LSI’s standard SAS HBA adapters are the most popular ones.

  6. ArekD says:

    I love the idea of JBOD + Storage Spaces + SOFS, but the last time I checked, performance was terrible, especially for parity spaces. That disqualifies it compared to hardware RAID.

  7. BDJ says:

    ArekD – There's a write-back cache for SSD. Not entirely sure how this works compared to other vendors, but usually you'd mirror the SSD tier to minimise any write penalties while remaining resilient, and this layer becomes your working set. The lower-level RAID 5 etc… may be slow, but it's masked by the SSD up front. You may need a considerable amount of SSD to maintain amazing performance, mind. Also, don't forget about the RAM cache as well, though I suspect this will be write-through. So RAM = read cache, SSD = write cache, slowly dripped to spinning rust.

  8. Anonymous says:

    Hi Folks – If you’ve been following my blog, you know that I’m working my way through the list of my

  9. Anonymous says:

    Pingback from Network Steve

  10. JHBoricua says:

    Anonymous, regarding the 4-node cluster comment being misleading. You only had to pay a little attention to realize that the 4-node example uses the 60-bay DataON enclosures. Now click on the link provided for that DataON enclosure just below it and you will clearly see that each of those dual controllers has 4 SAS ports.

  11. gg says:

    What is the recommended h/w spec of the cluster nodes ?
    Do we need to maximize memory and processing ?

  12. jay says:

    I would also be interested in the recommended server CPU and RAM specs.

  13. Ken Wallewein says:

    You could significantly reduce cost and hardware count by dropping SOFS and just using Storage Spaces. What does SOFS add to this scenario?

  14. jamie says:

    Also interested in recommended CPU and RAM specs.

  15. Casper042 says:

    I have to agree with Ken here that SOFS doesn’t seem to be doing anything and certainly is NOT what I would consider a SOFS if everyone has to connect to the SAME JBOD.

    Overall, based on this post and how it explains WSS, I am very disappointed.
    I would have much rather seen a true SOFS based on an HDFS-type architecture, or how VMware has done VSAN.
    A high-speed 10+ Gbps interconnect between nodes, where each node uses its own storage but the cluster protects data by writing it across multiple nodes. That would be a true SOFS: with a DL380 or R720 using a SAS HBA instead of a RAID controller, you can just keep scaling until you run out of money or 10Gb ports.

  16. Ken says:

    Great article. Does anyone recommend a JBOD that accepts SATA and SSD drives, where both servers in the cluster see the same disks? I need something cheaper than 1500.

  17. Tonatiuh says:

    We have an issue: we need to add storage spaces to another failover cluster.
    We have a clustered Storage Space (Cluster 1, storage), and we have another failover cluster (Cluster 2, with several nodes).
    We want to present the clustered Storage Space (Cluster 1, storage)
    to the failover cluster with several nodes (Cluster 2).
    How can we do this?

  18. Tarkan Ateşer says:

    Does it make sense to use this system as a CCTV recording server?
    This link also lists compatible models, but should I use it as a JBOD or as a server? Which HBA cards are required for JBOD: IT mode or IR mode?
    Thank you.

Comments are closed.
