After spending the last few years spruiking the message that disaggregated, non-converged storage was the way to go for server virtualization, Microsoft recently changed tack with the introduction of Storage Spaces Direct (“S2D” and no, that’s not a typo).

In Windows Server 2016, it’ll be possible to have a cluster where each node serves as both a Hyper-V host and a storage host, or to have a storage cluster where adding more capacity simply involves plugging more disks into existing servers, or adding more nodes. Before seeing where this technology fits in, a short history lesson is important to understand where Hyper-V storage is today.

Software Defined Storage (SDS) in Windows Server 2012 R2

If you’re implementing a private cloud infrastructure on Hyper-V clusters today, you have a few choices for storing the VMs’ virtual disks. SANs, both iSCSI and FC, are supported, including piping FC storage directly into the VMs using Virtual Fibre Channel.

The recommended way if you don’t already have a SAN (or it’s at capacity) is to use Scale-Out File Server (SOFS), which is a cluster of two to four ordinary Windows Server 2012 R2 file servers. The Hyper-V hosts connect to this cluster using SMB 3.0 and the virtual disks are stored on normal file shares, making management easy.
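As a rough sketch of how simple the file-server side is (the cluster, share, and account names below are placeholders, not from any real deployment), the SOFS role and a VM share boil down to a couple of cmdlets:

```powershell
# Sketch only: cluster, share, and account names are placeholders.
# Add the Scale-Out File Server role to an existing file server cluster.
Add-ClusterScaleOutFileServerRole -Name "SOFS" -Cluster "FSCluster"

# Create a share on a Cluster Shared Volume and grant the Hyper-V hosts
# (computer accounts) and the Hyper-V admins full control.
New-SmbShare -Name "VMStore" -Path "C:\ClusterStorage\Volume1\Shares\VMStore" `
    -FullAccess "CONTOSO\HV01$", "CONTOSO\HV02$", "CONTOSO\Hyper-V Admins"

# The Hyper-V hosts then simply point their VMs at the UNC path, e.g.
# New-VM -Name "VM01" -Path "\\SOFS\VMStore" ...
```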

Behind the SOFS nodes are one to four JBOD chassis with HDD and SSD disks connected to each SOFS node using external SAS cables. Each SOFS node has access to each disk; therefore, if one of the SOFS nodes fails for some reason, the IO traffic from the Hyper-V hosts is redirected so fast that the VMs won’t notice. If you use more than one JBOD chassis, you can mirror data across them to survive a chassis failure.

Storage Spaces is used to aggregate the HDD and SSD into pools, and storage tiering takes care of performance by moving hot blocks to the SSD tier and colder, less frequently accessed blocks to the HDD tier.
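Under the hood this is plain Storage Spaces; a hedged sketch of pooling the disks and carving out a tiered, mirrored space (names and sizes are placeholders) looks like this:

```powershell
# Sketch: pool the JBOD disks and create a tiered, mirrored space.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool1" -PhysicalDisks $disks `
    -StorageSubSystemFriendlyName "Clustered Windows Storage*"

# Define the SSD and HDD tiers within the pool.
$ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD

# Create a two-way mirrored virtual disk spanning both tiers; the tiering
# engine then moves hot blocks to SSD on its schedule.
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "VDisk1" `
    -ResiliencySettingName Mirror -StorageTiers $ssd, $hdd `
    -StorageTierSizes 200GB, 2TB -WriteCacheSize 1GB
```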

Storage Spaces with shared JBODs

All well and good, but this solution has limitations. Today, the maximum size of the SOFS cluster and attached JBODs is limited by the number of SAS cables that can be wired to each SOFS node. And the disks (both HDD and SSD) have to be SAS; no other technology will work. There’s also a maximum of 80 disks per pool, which puts a ceiling on scale for larger implementations.

On the other end of the scale, it’s still quite costly (although far more cost effective than a SAN) for smaller implementations. You need separate physical SOFS nodes and (several) JBOD chassis, and the disks have to be SAS. That makes the technology unsuitable for anything smaller than, say, four Hyper-V nodes with maybe 40-60 VMs. Expanding capacity is also tricky: you can’t just add another JBOD chassis, nor is it advisable to leave empty slots and add more HDDs and SSDs later.

SDS in Windows Server 2016 TP2

Although all the above holds true for Windows Server 2016, an additional deployment option is coming in the form of S2D. This takes local storage in each storage node and pools it together using Storage Spaces for data protection (two- or three-way mirroring as well as parity). That local storage can be SAS or SATA (SATA SSDs are an order of magnitude cheaper than SAS SSDs) or NVMe. The latter is an exciting technology: essentially flash memory on PCI Express cards, making it wicked fast. Up until now, however, you could only use NVMe for speeding up applications on a single host.

A new Software Storage Bus transports data between all the disks on all the storage nodes and presents them to the Storage Spaces layer so they can be pooled together. Data resiliency is ensured through mirroring, with either two or three copies stored across the nodes.

In Technical Preview 2 (TP 2), the S2D cluster needs to have a minimum of four nodes, and they can’t also be Hyper-V nodes. According to presentations at Ignite, however, this will change. For the first time, using only Microsoft technologies (StarWind, for example, has offered this for years), you can run both storage and virtualization on the same nodes—commonly known as hyper-converged infrastructure.
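Microsoft hadn’t finalized the tooling at TP2, so take the following only as a rough sketch of what standing up an S2D cluster looks like; the node and pool names are placeholders, and the cmdlet names reflect what the preview documentation shows and may still change:

```powershell
# Sketch only: node/pool names are placeholders; cmdlets may change before RTM.
Test-Cluster -Node S2D-N1, S2D-N2, S2D-N3, S2D-N4
New-Cluster -Name S2DCluster -Node S2D-N1, S2D-N2, S2D-N3, S2D-N4 -NoStorage

# Claim the local SAS/SATA/NVMe disks in every node for Storage Spaces Direct.
Enable-ClusterStorageSpacesDirect

# Pool the claimed disks; mirrored volumes are then created on top of this pool.
New-StoragePool -FriendlyName "S2DPool" `
    -StorageSubSystemFriendlyName "Clustered Windows Storage*" `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
```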

Storage Spaces Direct with JBOD

Expansion of the capacity is also much easier in S2D: simply add more nodes to the cluster with more storage in them. Note that you can also add storage in the form of external JBOD trays, but in this case it’s only connected to a single node and shared through that storage node.
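As a hedged sketch (node and pool names are again placeholders), expansion would then look something like this:

```powershell
# Sketch: add a new node (placeholder name) with its local disks to the cluster.
Add-ClusterNode -Cluster S2DCluster -Name S2D-N5

# Check that the new node's disks are visible and eligible for pooling...
Get-PhysicalDisk -CanPool $true | Sort-Object FriendlyName

# ...and add them to the existing pool.
Add-PhysicalDisk -StoragePoolFriendlyName "S2DPool" `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

# Any rebuild or rebalance activity shows up as storage jobs.
Get-StorageJob
```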

Obviously, the connection between all the storage hosts needs to be fast. Thus, RDMA networking via SMB Direct will be recommended (perhaps even required). The current limit for S2D in TP2 is 240 disks and 12 servers, although the expectation is that these figures will be higher at RTM.
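A few read-only checks, run on a storage node, show whether RDMA is actually in play:

```powershell
# Which physical adapters have RDMA enabled?
Get-NetAdapterRdma | Where-Object Enabled

# Does the SMB client see those interfaces as RDMA capable?
Get-SmbClientNetworkInterface | Where-Object RdmaCapable

# Inspect the SMB Multichannel connections currently in use.
Get-SmbMultichannelConnection
```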

If a disk fails, the data is automatically recreated on other disks in the pool until you replace the failed disk; the new disk will then automatically have data moved onto it for rebalancing. If a whole server fails, however, rebuilding doesn’t happen automatically. Let’s say that the host had 50 TB of storage. Recreating all that data on other nodes could take a lot of time and bandwidth. If you know that all the server needs is two new power supplies, you may want to wait for the repair instead.
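If you want to drive or watch a disk replacement manually, the standard Storage cmdlets still apply; the pool name below is a placeholder:

```powershell
# Sketch: find a failed disk, retire it, and kick off a repair so its data
# is recreated on the remaining disks in the pool.
$bad = Get-PhysicalDisk | Where-Object OperationalStatus -ne "OK"
$bad | Set-PhysicalDisk -Usage Retired

# Repair the affected virtual disks and watch the rebuild progress.
Get-VirtualDisk | Where-Object OperationalStatus -ne "OK" | Repair-VirtualDisk
Get-StorageJob

# Once the physical disk has been swapped out, remove the retired one from the pool.
Remove-PhysicalDisk -StoragePoolFriendlyName "S2DPool" -PhysicalDisks $bad
```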

Resilient File System (ReFS) and S2D

ReFS was introduced in Windows Server 2012. Very few people used it, primarily because it was new (smart sysops don’t trust their data to brand-new file systems) but also because of a feature gap with NTFS. It looks like Windows Server 2016 is going to change all that with ReFS being the recommended file system for S2D and for Hyper-V in general.

The benefits for Hyper-V are numerous. Checkpoints (which are also used for VM backup in Server 2016) are now very fast because ReFS can copy portions of a file to another part of the same file, or to a different file, as a metadata operation. So merging checkpoints or creating large, fixed-size VHD(X) files is almost instantaneous.
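A simple way to see this for yourself once you have an NTFS volume and a ReFS volume side by side (the drive letters below are made up):

```powershell
# Time the creation of a large fixed VHDX on an NTFS volume...
Measure-Command {
    New-VHD -Path "N:\ntfs-test.vhdx" -SizeBytes 100GB -Fixed
}

# ...and on a ReFS volume, where the zeroing is a metadata operation,
# so this should return almost immediately.
Measure-Command {
    New-VHD -Path "R:\refs-test.vhdx" -SizeBytes 100GB -Fixed
}
```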

Storage monitoring

In earlier versions of Windows storage, monitoring systems (the Management Pack for Operations Manager, for instance) had to do a lot of complex correlation of the information provided by the different underlying components of Microsoft’s storage stack. Windows Server 2016 will offer a reworked monitoring solution where much more of the “smarts” lives in the system itself, making it easier for monitoring solutions to consume and extend.
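As a sketch of how this new health surface is exposed through PowerShell in the previews (I haven’t verified exactly which of these calls already work in TP2):

```powershell
# Pull diagnostic/fault information from the clustered storage subsystem.
Get-StorageSubSystem Cluster* | Debug-StorageSubSystem

# Aggregated health and performance metrics for the cluster and its volumes.
Get-StorageSubSystem Cluster* | Get-StorageHealthReport
```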

S2D in action

Because it’s still early for S2D, Microsoft hasn’t provided much guidance on actually setting up an S2D solution on physical hardware. However, a blog post and an MSDN article exist on testing it with four Hyper-V VMs. This is, of course, not a production system because the internal disks are just VHDX files, but it works as a proof of concept. Today, S2D is only managed through PowerShell. I followed the 15 steps outlined in the article and it worked as expected. I really hope that a GUI for managing S2D is forthcoming. VMM 2016 has also been demonstrated to be able to create S2D clusters and configure the storage and the file shares automatically.
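If you want to build the same lab, the gist of those steps is to create four VMs and attach a handful of extra VHDX files to each one to stand in for local disks. A compressed, hedged version (all names, paths, and sizes here are mine, not from the article) looks like this:

```powershell
# Sketch of the lab approach: four Hyper-V VMs, each with four data VHDXs
# attached to act as the "local" disks for S2D.
1..4 | ForEach-Object {
    $vm = "S2D-Node$_"
    New-VM -Name $vm -MemoryStartupBytes 4GB -Generation 2 -SwitchName "LabSwitch" `
        -NewVHDPath "D:\VMs\$vm\os.vhdx" -NewVHDSizeBytes 60GB

    1..4 | ForEach-Object {
        $vhd = "D:\VMs\$vm\data$_.vhdx"
        New-VHD -Path $vhd -SizeBytes 50GB -Dynamic | Out-Null
        Add-VMHardDiskDrive -VMName $vm -Path $vhd
    }
}
```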

Conclusion

S2D will be another tool in the tool box for planning private cloud implementations. I suspect that early customers will be hosting providers who will be very keen on providing this type of easily extensible VM storage. S2D will run on Nano Server, so there’s another option to limit the OS overhead.
