Azure Stack HCI is Microsoft's preferred platform for hyperconverged clusters. Windows Admin Center (WAC) includes a wizard that can set up the entire environment based on a bare OS installation on the nodes, including the software-defined storage with S2D.

Since version 2016, Windows Server has had all the components to not only run virtual machines but also to provide the required storage for them. For this purpose, Storage Spaces Direct (S2D) combines the local drives of the Hyper-V cluster nodes into a storage pool.

New HCI platform

Last year, Microsoft introduced Azure Stack HCI, which is based on a modified Windows Server, with different licensing terms and a more rigorous certification program (see: Hyperconverged infrastructure: Azure Stack HCI versus Windows Server with Storage Spaces Direct). Servers that meet its requirements typically come preinstalled with Azure Stack HCI OS.

If you want to evaluate a hyperconverged system without immediately purchasing the necessary hardware, you can do so relatively easily using Hyper-V Nested Virtualization. This article describes the installation of such a lab environment (download the eval version).

Preparing VMs for nodes

For VMs that serve as cluster nodes, Microsoft recommends four network adapters and at least two virtual hard disks, in addition to the system drive. The setup of such relatively complex virtual machines can be automated with the help of PowerShell (see my tutorial).
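
For a rough idea of what such automation involves, here is a minimal sketch of provisioning one nested node; all names, paths, and sizes are examples, and the virtual switch LabSwitch is assumed to exist:

```powershell
# Create the node VM with a new system disk (names and paths are examples)
$vmName = "HCI-Node1"
New-VM -Name $vmName -Generation 2 -MemoryStartupBytes 16GB `
    -NewVHDPath "C:\VMs\$vmName\system.vhdx" -NewVHDSizeBytes 60GB `
    -SwitchName "LabSwitch"

# Add three more NICs for a total of four, as recommended
1..3 | ForEach-Object { Add-VMNetworkAdapter -VMName $vmName -SwitchName "LabSwitch" }

# Attach two data disks for the future S2D pool
1..2 | ForEach-Object {
    $vhd = "C:\VMs\$vmName\data$($_).vhdx"
    New-VHD -Path $vhd -SizeBytes 128GB -Dynamic | Out-Null
    Add-VMHardDiskDrive -VMName $vmName -Path $vhd
}

# Nested virtualization so the node can run Hyper-V itself
Set-VMProcessor -VMName $vmName -Count 4 -ExposeVirtualizationExtensions $true
Get-VMNetworkAdapter -VMName $vmName | Set-VMNetworkAdapter -MacAddressSpoofing On
```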

In principle, you can switch to WAC after installing the guest OS on the nodes and start configuring the hyperconverged cluster. However, certain quirks of Azure Stack HCI and WAC can cause the domain join and the installation of the Hyper-V role in a VM to fail.

The hypervisor should therefore be enabled offline in the VHDX, and sconfig on the Azure Stack HCI console is a better option for the domain join.
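
For the former, one option, assuming the VM is still turned off, is to service the system disk directly with Install-WindowsFeature; the path is an example:

```powershell
# Enable the Hyper-V role offline in the node's system VHDX
Install-WindowsFeature -Name Hyper-V -Vhd "C:\VMs\HCI-Node1\system.vhdx"
```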

Starting the cluster wizard in WAC

Once these preliminary tasks are completed, you can start the WAC. There, click +Add under All connections on the Start page, and then click Create new under Server clusters.

Creating a new server cluster in the Windows Admin Center

The following dialog box then asks you to choose between Windows Server and Azure Stack HCI. We opt for the latter and leave the setting All servers in one site under Select server locations.

Selecting the cluster type (Windows Server versus Azure Stack HCI) and the location (single site versus stretched cluster)

After clicking Create, the actual wizard starts. The info page displayed at the beginning contains information about the system requirements, which you can verify again at this point.

The computer names are entered on the next page. To connect to the nodes, enter the credentials for an authorized account. Since the nodes are already members of the domain, use an AD account here.

Enter the names of the computers that are to become members of the cluster

After the wizard has found and verified the servers, you can skip to the next page for the domain join.

During the domain join, WAC insists on renaming the computers; otherwise, it shows an error message

The installation of the required roles and features is next. This should work without problems if you have already activated Hyper-V.
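
If you prefer to prepare this step yourself, something like the following should cover it; the exact feature list may vary between WAC versions, and the node names are examples:

```powershell
# Install the clustering and Hyper-V components on all nodes
$features = "Hyper-V", "Failover-Clustering", "Data-Center-Bridging",
            "RSAT-Clustering-PowerShell", "Hyper-V-PowerShell"
Invoke-Command -ComputerName "node1", "node2" -ScriptBlock {
    Install-WindowsFeature -Name $using:features -IncludeManagementTools
}
```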

The wizard installs all required roles and features. Hyper-V has already been activated here.

The next step is to install any available updates. This is not a cluster-aware update but an update of the individual nodes; a reboot may be required afterward. This function also turned out to be relatively unreliable in my lab, so you should consider updating the OS via sconfig right after its installation.

After the option to install hardware updates, which are not required in a virtual environment, the nodes must be rebooted.

Configuring the network

The next section of the wizard is for setting up the network. In the first dialog box, it checks the existing network adapters. This usually passes without complications.

Cluster wizard checking the network adapters

The next step is to specify the network for managing the nodes. The wizard has already selected suitable adapters, but you can change them if necessary. Moreover, the admin network can be made redundant by assigning a second physical adapter for this purpose. The selected adapters are renamed Management after the dialog box has been confirmed.
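
Behind the scenes, this renaming amounts to something like the following; the adapter and node names are examples:

```powershell
# Rename the chosen management adapter on a node to Management
Rename-NetAdapter -Name "Ethernet" -NewName "Management" -CimSession "node1"
```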

Configuration of the management network

The next step is to create one or more virtual switches. By default, the wizard creates just one, which serves both VM and storage traffic. It is given the name ConvergedSwitch and is sufficient for a lab environment.
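
The manual equivalent would be a switch with Switch Embedded Teaming (SET) across the physical adapters; the adapter names here are assumptions, and the command runs on each node:

```powershell
# Create a converged switch with embedded teaming across two adapters
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "pNIC1", "pNIC2" `
    -EnableEmbeddedTeaming $true -AllowManagementOS $true
```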

Creating a virtual switch on the cluster nodes

After the optional configuration of RDMA, the next step is to define the networks for compute and storage. These should have the same name on all nodes. If this is not the case, the wizard will point this out and ask you to edit the names in the respective input fields.

The adapters for VM and storage traffic should be named consistently on all nodes

Clicking Apply and test checks the connections and asks you to confirm the activation of CredSSP. If successful, the wizard continues to the cluster configuration.
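
WAC normally takes care of CredSSP itself once you confirm; set up manually, it would look roughly like this (node names are examples):

```powershell
# On the machine running WAC: allow credential delegation to the nodes
Enable-WSManCredSSP -Role Client -DelegateComputer "node1", "node2" -Force

# On each node: accept delegated credentials
Invoke-Command -ComputerName "node1", "node2" -ScriptBlock {
    Enable-WSManCredSSP -Role Server -Force
}
```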

Creating the cluster

In the Clustering section, the first task is to validate the nodes. The wizard checks whether they have a consistent configuration and are suitable for forming a server cluster.
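
The PowerShell counterpart of this step is Test-Cluster; the node names are examples:

```powershell
# Validate the prospective nodes, including the S2D-specific tests
Test-Cluster -Node "node1", "node2" `
    -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"
```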

Validation of the cluster

After the validation is completed, the wizard shows the results for each criterion.

Successful validation of the cluster nodes with some warnings

If the nodes have passed the validation, the actual cluster formation comes next. To do this, specify the name for the cluster name object (CNO) and decide whether the IP address should be assigned statically or dynamically. The default is DHCP.
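
In PowerShell, this step corresponds roughly to the following; the cluster name and address are examples:

```powershell
# Form the cluster with a static address; omit -StaticAddress to use DHCP
New-Cluster -Name "HCI-Lab" -Node "node1", "node2" `
    -StaticAddress 192.168.1.100 -NoStorage
```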

Specify a name for the cluster and configure the IP settings

Configuring S2D

The setup of the software-defined storage starts with the option of deleting all drives intended for the storage pool. The drive that contains the operating system is, of course, not affected by this; only the drives designated for S2D are wiped.
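
If you want to clean the drives yourself beforehand, a simplified sketch could look like this; it is destructive and would run on each node:

```powershell
# Wipe every disk that is not the boot or system disk and already carries
# a partition table (simplified; double-check before running!)
Get-Disk |
    Where-Object { -not $_.IsBoot -and -not $_.IsSystem -and $_.PartitionStyle -ne "RAW" } |
    Clear-Disk -RemoveData -RemoveOEM -Confirm:$false
```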

If necessary, the wizard deletes all data from the drives that are intended for S2D

In a configuration with already empty VHDXs, this is unnecessary, and you can proceed to checking the drives. This step essentially lists the disks found for the storage pool, allowing you to verify that all designated drives are available.
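
You can cross-check this list in PowerShell; disks eligible for the pool report CanPool as true (node names are examples):

```powershell
# List the disks on each node that qualify for the storage pool
Invoke-Command -ComputerName "node1", "node2" -ScriptBlock {
    Get-PhysicalDisk -CanPool $true | Select-Object FriendlyName, Size, MediaType
}
```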

The wizard searches the cluster nodes for drives that are available for S2D

The drives aren't tested for their suitability for Storage Spaces Direct until the next step. For each criterion, the test shows whether the drives have passed the validation.

Checking the drives for their suitability for Storage Spaces Direct

If the process is successful, the cluster wizard provisions the storage pool based on the selected drives.
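
The PowerShell equivalent of this final storage step is a single cmdlet; the cluster name is an example:

```powershell
# Enable Storage Spaces Direct on the newly created cluster
Enable-ClusterStorageSpacesDirect -CimSession "HCI-Lab"
```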

Activating S2D and setting up the software-defined storage

The last section is dedicated to the optional configuration of the software-defined network. This is usually not needed for evaluation in a lab environment and can be skipped.

Optional SDN configuration in the Windows Admin Center cluster wizard

This completes the configuration of a hyperconverged cluster based on Azure Stack HCI.

Conclusion

Microsoft offers a complete GUI-driven workflow for setting up a hyperconverged infrastructure for Azure Stack HCI. Compared to the manual configuration using MMC tools and PowerShell, this represents a big step forward. Unfortunately, the wizard still refuses to work if the nodes are running Windows Server.

As usual with the WAC, you must always expect unpleasant surprises during this multistage process. It is therefore advisable to complete certain tasks in advance, such as joining the domain, applying updates, or activating Hyper-V in a VM.
