Since version 2016, Windows Server has included all the components needed not only to run virtual machines but also to provide the required storage for them. For this purpose, Storage Spaces Direct (S2D) combines the local drives of the Hyper-V cluster nodes into a storage pool.
New HCI platform ^
Last year, Microsoft introduced Azure Stack HCI, which is based on a modified Windows Server, with different licensing terms and a more rigorous certification program (see: Hyperconverged infrastructure: Azure Stack HCI versus Windows Server with Storage Spaces Direct). Servers that meet its requirements typically come preinstalled with Azure Stack HCI OS.
If you want to evaluate a hyperconverged system without immediately purchasing the necessary hardware, you can do so relatively easily using Hyper-V Nested Virtualization. This article describes the installation of such a lab environment (download the eval version).
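To prepare the Hyper-V host for such a lab, nested virtualization must be enabled on the VMs that will serve as nodes. The following sketch assumes two hypothetical VMs named HCI-Node1 and HCI-Node2; adjust the names to your environment:

```powershell
# Example VM names for the lab; adjust to your environment
$nodes = "HCI-Node1", "HCI-Node2"

foreach ($vm in $nodes) {
    # Expose the host CPU's virtualization extensions to the VM
    # so that Hyper-V can be installed inside it (nested virtualization)
    Set-VMProcessor -VMName $vm -ExposeVirtualizationExtensions $true

    # Allow the nested VMs to communicate through the virtual switch
    Get-VMNetworkAdapter -VMName $vm |
        Set-VMNetworkAdapter -MacAddressSpoofing On
}
```

Both settings require the VMs to be powered off.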
Preparing VMs for nodes ^
For VMs that serve as cluster nodes, Microsoft recommends four network adapters and at least two virtual hard disks, in addition to the system drive. The setup of such relatively complex virtual machines can be automated with the help of PowerShell (see my tutorial).
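The recommended layout (four network adapters, two data disks in addition to the system drive) can be scripted roughly as follows; the VM name, paths, switch name, and sizes are examples, not requirements:

```powershell
# Hypothetical names and paths; adjust to your lab
$vmName = "HCI-Node1"
$vmPath = "D:\Hyper-V"

# Create a Gen 2 VM with a system disk
New-VM -Name $vmName -Generation 2 -MemoryStartupBytes 16GB -Path $vmPath `
    -NewVHDPath "$vmPath\$vmName\System.vhdx" -NewVHDSizeBytes 60GB `
    -SwitchName "External"

# Microsoft recommends four network adapters per node;
# New-VM already created the first one
1..3 | ForEach-Object {
    Add-VMNetworkAdapter -VMName $vmName -SwitchName "External"
}

# Add two empty data disks that S2D will later claim for the pool
1..2 | ForEach-Object {
    $vhd = "$vmPath\$vmName\Data$_.vhdx"
    New-VHD -Path $vhd -SizeBytes 100GB -Dynamic
    Add-VMHardDiskDrive -VMName $vmName -Path $vhd
}
```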
In principle, you can switch to Windows Admin Center (WAC) after installing the guest OS on the nodes and start configuring the hyperconverged cluster. However, certain quirks of Azure Stack HCI and WAC can cause the domain join and the installation of the Hyper-V role in a VM to fail.
The hypervisor should therefore be enabled offline in the VHDX, and sconfig on the Azure Stack HCI console is a better option for the domain join.
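The offline activation can be done with Install-WindowsFeature against the node's virtual disk while the VM is powered off; the VHDX path below is an example:

```powershell
# Enable the Hyper-V role offline in the node's system disk
# (the VM must be shut down); the path is an example
Install-WindowsFeature -Name Hyper-V `
    -Vhd "D:\Hyper-V\HCI-Node1\System.vhdx"
```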
Starting the cluster wizard in WAC ^
Once these preliminary tasks are completed, you can launch WAC. On the Start page, click + Add under All connections, and then click Create new under Server clusters.
The following dialog box then asks you to choose between Windows Server and Azure Stack HCI. We opt for the latter and leave the setting All servers in one site under Select server locations.
After clicking Create, the actual wizard starts. The info page displayed at the beginning contains information about the system requirements, which you can verify again at this point.
The computer names are entered on the next page. To connect to the nodes, enter the credentials for an authorized account. Since the nodes are already members of the domain, use an AD account here.
After the wizard has found and verified the servers, you can skip the domain join page and proceed to the next step, since the nodes are already domain members.
The installation of the required roles and features is next. This should work without problems if you have already activated Hyper-V.
The next step is to install any available updates. This is not a cluster-aware update but an update of the individual nodes; a reboot may be required afterward. This function also turned out to be relatively unreliable in my lab, so you should consider updating the OS via sconfig right after its installation.
After the option to install hardware updates, which are not required in a virtual environment, the nodes must be rebooted.
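If you want to verify in advance that the required components are in place on all nodes, a quick check like the following can help; the node names are examples:

```powershell
# Check on all nodes whether the required roles and features are installed
$nodes = "HCI-Node1", "HCI-Node2"

Invoke-Command -ComputerName $nodes -ScriptBlock {
    Get-WindowsFeature -Name Hyper-V, Failover-Clustering |
        Select-Object Name, InstallState
} | Format-Table PSComputerName, Name, InstallState
```

Invoke-Command adds the PSComputerName property to the returned objects, so the output shows the state per node.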
Configuring the network ^
The next section of the wizard is for setting up the network. In the first dialog box, it checks the existing network adapters. This usually passes without complications.
The next step is to specify the network for managing the nodes. The wizard has already selected suitable adapters, but you can change them if necessary. Moreover, the admin network can be made redundant by assigning a second physical adapter for this purpose. The selected adapters will then be renamed Management after the dialog box has been confirmed.
The next step is to create one or more virtual switches. By default, the wizard proposes a single switch that serves both VM and storage traffic. It is given the name ConvergedSwitch and is sufficient for a lab environment.
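Such a converged switch corresponds to a Switch Embedded Teaming (SET) switch spanning several physical adapters. The equivalent PowerShell command looks roughly like this; the adapter names are examples:

```powershell
# Create a SET switch across two adapters, roughly what the wizard
# does for the converged setup; adapter names are examples
New-VMSwitch -Name "ConvergedSwitch" `
    -NetAdapterName "Ethernet 3", "Ethernet 4" `
    -EnableEmbeddedTeaming $true -AllowManagementOS $true
```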
After the optional configuration of RDMA, the next step is to define the networks for compute and storage. These should have the same name on all nodes. If this is not the case, the wizard will point this out and ask you to edit the names in the respective input fields.
Clicking Apply and test checks the connections and prompts you to confirm the activation of CredSSP. If successful, the wizard continues to the cluster configuration.
Creating the cluster ^
In the Clustering section, the first step is to validate the nodes. The validation checks whether they have a consistent configuration and are suitable for forming a server cluster.
After the validation is completed, the wizard shows the results for each criterion.
If the nodes have passed the validation, then the actual cluster formation comes next. To do this, specify the name for the CNO and decide whether the IP address should be assigned statically or dynamically. The default is DHCP.
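The validation and cluster creation can also be reproduced in PowerShell; the cluster name, node names, and static address below are examples, and omitting -StaticAddress leaves the CNO on DHCP, as in the wizard's default:

```powershell
$nodes = "HCI-Node1", "HCI-Node2"

# Validate the nodes, as the wizard does
Test-Cluster -Node $nodes `
    -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"

# Create the cluster without claiming any storage yet;
# the static address is an example
New-Cluster -Name "HCI-Cluster" -Node $nodes `
    -StaticAddress 192.168.1.50 -NoStorage
```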
Configuring S2D ^
The setup of the software-defined storage starts with the option of deleting all drives intended to be used for the storage pool. The volume that contains the operating system is, of course, not affected by this; it is purely a matter of the volumes for S2D.
In a configuration with already empty VHDXs, this is unnecessary, and you can proceed to check the drives. It essentially shows the disks found for the storage pool, allowing you to verify that all designated drives are available.
The drives aren't tested for their suitability for Storage Spaces Direct until the next step. For each criterion, the test shows whether the drives have passed the validation.
If the process is successful, the cluster wizard provisions the storage pool based on the selected drives.
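Behind the scenes, this step corresponds to enabling S2D on the cluster, after which volumes can be carved out of the resulting pool. A sketch, assuming the example cluster name HCI-Cluster:

```powershell
# Enable S2D on the new cluster; the wizard performs this step for you
Enable-ClusterStorageSpacesDirect -CimSession "HCI-Cluster"

# Create a resilient CSV volume in the pool created by S2D;
# name and size are examples
New-Volume -FriendlyName "Volume1" -FileSystem CSVFS_ReFS `
    -StoragePoolFriendlyName "S2D*" -Size 50GB -CimSession "HCI-Cluster"
```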
The last section is dedicated to the optional configuration of the software-defined network. This is usually not needed for evaluation in a lab environment and can be skipped.
This completes the configuration of a hyperconverged cluster based on Azure Stack HCI.
Microsoft offers a complete GUI-driven workflow for setting up a hyperconverged infrastructure for Azure Stack HCI. Compared to the manual configuration using MMC tools and PowerShell, this represents a big step forward. Unfortunately, the wizard still refuses to work if the nodes are running Windows Server.
As usual with the WAC, you must always expect unpleasant surprises during this multistage process. It is therefore advisable to complete certain tasks in advance, such as joining the domain, applying updates, or activating Hyper-V in a VM.