Cluster sets
The big headline feature is definitely cluster sets, the ability to group failover clusters together and move resources between them. The max number of nodes in a cluster has been 64 since Windows Server 2012, but it turns out that enterprises with large deployments don't create clusters that large.
Instead, they create several smaller clusters, because storage or networking (depending on your design) can be a single point of failure, as can the failover clustering software itself. While this works, it creates islands of resources: you can move a virtual machine (VM) from one cluster to another, for instance, but it isn't highly available during the move.
Cluster sets are an easy way to scale your infrastructure. Take a Storage Spaces Direct (S2D) hyperconverged cluster (there are 10,000+ of those in production now) with four nodes where you're running out of capacity. You can add more nodes (factoring in rebuild times), or you can add more storage or memory to the existing nodes, assuming you have spare slots and can find the right model and firmware.
The limit on the number of nodes that can be down simultaneously is two (except in two- and three-node clusters), so in a four-node cluster, losing a third node will take the cluster down. Expanding to eight nodes doesn't change that limit: more than two nodes down will still take the cluster down. So, in this scenario, two four-node clusters give you higher availability than one eight-node cluster, but prior to cluster sets, they were isolated from each other. Now you can aggregate the capacity and live migrate resources both within and between the clusters.
Cluster sets also bring Azure-like features such as fault domains and availability sets, along with the ability (in PowerShell) to calculate the best placement in any cluster for a new VM, given a specific size. You can have VMs registered with the cluster set running on different clusters, and you can also have VMs not registered with the cluster set.
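The placement calculation mentioned above is exposed through a cmdlet on the cluster set master. A minimal sketch, assuming a cluster set master named CSMASTER (the name and the VM sizes are examples for your environment):

```powershell
# Ask the cluster set master which node across all member clusters
# is the best fit for a new VM of a given size
Get-ClusterSetOptimalNodeForVM -CimSession CSMASTER `
    -VMMemory 4GB `
    -VMVirtualCoreCount 2 `
    -VMCpuReservation 10
```

The cmdlet returns the member cluster and node where the VM would fit best, which you can then feed into your provisioning script.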
Retiring a cluster also becomes an easy proposition—simply add a new cluster, seamlessly migrate resources to it, and then retire the old cluster. There's no overall limit on the number of clusters in a cluster set.
To use cluster sets, you create a management cluster, generally with a few nodes (about two to four). These can be VMs in a guest cluster, preferably spread across different physical clusters. The management cluster creates a unified namespace across the entire cluster set, on top of a referral Scale-Out File Server (SOFS).
Windows Server 2019 is the first version in which you can host a file server on a hyperconverged cluster, using a new feature called SMB loopback mode. This unified namespace hosts referrals to SMB shares hosted on the member clusters you add to the cluster set. The cluster set master is a highly available role that provides the management endpoint for all cluster set interactions, and each member cluster hosts a cluster set worker that helps with VM placement and other things.
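Wiring this up in PowerShell looks roughly like the following sketch, assuming a management cluster node named CSMASTER1 and two member clusters, CLUSTER1 and CLUSTER2 (all names are examples for your environment):

```powershell
# Create the cluster set master on the management cluster and
# define the unified namespace (the referral SOFS)
New-ClusterSet -Name CSMASTER -NamespaceRoot SOFS-CLUSTERSET -CimSession CSMASTER1

# Add member clusters; each gets its own infrastructure SOFS,
# whose shares the unified namespace refers to
Add-ClusterSetMember -ClusterName CLUSTER1 -CimSession CSMASTER -InfraSOFSName SOFS-CLUSTER1
Add-ClusterSetMember -ClusterName CLUSTER2 -CimSession CSMASTER -InfraSOFSName SOFS-CLUSTER2
```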
Administrators can (optionally) define fault domain (FD) boundaries based on rack placement, power supply, and networking single points of failure. You can then deploy VMs in availability sets (ASes), with VMs in an AS automatically distributed across the available FDs. Note that if your clusters have different generations of processors, you'll have to use the VM setting for processor compatibility, just like you would in a single cluster with different processors.
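Defining fault domains and an availability set in PowerShell looks something like this sketch; treat the exact parameter names as assumptions and check Get-Help on your build, and note that FD1, FD2, AS1, and the cluster names are examples:

```powershell
# Define a logical fault domain per member cluster
# (rack, power, or network boundaries)
New-ClusterSetFaultDomain -Name FD1 -FdType Logical -CimSession CSMASTER `
    -MemberCluster CLUSTER1 -FdDescription "Rack 1"
New-ClusterSetFaultDomain -Name FD2 -FdType Logical -CimSession CSMASTER `
    -MemberCluster CLUSTER2 -FdDescription "Rack 2"

# Create an availability set spanning those fault domains;
# VMs placed in it are automatically spread across FD1 and FD2
New-ClusterSetAvailabilitySet -Name AS1 -FdType Logical -CimSession CSMASTER `
    -ParticipantName FD1, FD2
```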
Other improvements
Another under-the-hood feature is that Windows Server 2019 clusters will automatically detect that they're running on Azure IaaS (guest clustering) and adapt to planned maintenance in Azure, something you could automate yourself, but it'd take more work.
The ability to rename clusters and move them between AD domains (without having to destroy the cluster and recreate it) is fantastic and will save many hours of work. Hardware manufacturers can also ship clusters preconfigured, which you can then add to the customer's AD domain, making for a better user experience.
Small two-node clusters get a boost in this version. Up until now, you've always needed a third server (or a cloud witness, a small file share in Azure, which isn't always practical where internet connectivity is spotty) to act as the tiebreaker and avoid split-brain situations in your cluster. Now a simple USB (2.0+) key inserted into the router in your branch office can serve as the file-share witness. Also, creating a file-share witness on a Distributed File System (DFS) share, which was never supported, is now explicitly blocked in the wizard.
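Configuring the USB witness is the same Set-ClusterQuorum call you'd use for any file-share witness; Windows Server 2019 adds the -Credential parameter so the share no longer has to be domain joined. A sketch (the share path and the credential prompt target are examples):

```powershell
# Point the quorum at a share on a USB key in the branch router;
# -Credential supplies the router's local account for the share
Set-ClusterQuorum -FileShareWitness \\192.168.1.1\witness `
    -Credential (Get-Credential)
```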
Cluster-Aware Updating (CAU) is now aware of S2D: after draining VMs from a node, installing patches, and restarting the node, it validates that the storage volumes have finished resynchronizing before it moves on to patch the next node.
Cluster communication over SMB for Cluster Shared Volumes (CSV) and S2D now uses certificates to protect the traffic, removing the last vestige of NTLM in failover clustering. Building on the work done in Server 2016 for workgroup and multi-domain clusters, you can now fairly easily move a cluster from one domain to another. Scenarios include preparing a cluster in one location and then shipping it to its final home, or company mergers.
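The domain move is driven by two cmdlets new in Windows Server 2019. A sketch of the flow, with example names (run Get-Help for the full sequence on your build):

```powershell
# On the old domain: strip the cluster's Active Directory accounts
Remove-ClusterNameAccount -Cluster CLUSTER1 -DeleteComputerObjects

# ...move each node to a workgroup, then join the new domain...

# On the new domain: recreate the cluster name account
# (-UpgradeVCOs also recreates the virtual computer objects for roles)
New-ClusterNameAccount -Name CLUSTER1 -Domain newcompany.com -UpgradeVCOs
```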
Overall, failover clustering in Windows Server has received a lot of love in this version, and cluster sets will be very useful for larger deployments. I just hope that support comes to Windows Admin Center (and System Center Virtual Machine Manager) rather than remaining PowerShell only. The other improvements aren't as revolutionary, but they'll contribute to a better user experience.