VMware VSAN needs a minimum of three nodes contributing local storage in order to protect a VM's components. Each node must have at least one SAS or SATA solid-state drive (SSD) or PCIe flash drive for caching.
Each node must also have at least one SAS or SATA host bus adapter (HBA) or RAID controller set up in non-RAID (passthrough) or RAID 0 mode. Each node should be configured to boot from a flash device, such as a USB stick or SD card with at least 4 GB of space.
FTT – Failures to tolerate ^
Before we start the configuration wizard, I’d like to talk about the default VSAN policy.
The number of failures to tolerate (FTT) policy setting is an availability capability that can be applied to all virtual machines or individual Virtual Machine Disks (VMDKs). This policy plays an important role when planning and sizing storage capacity for Virtual SAN.
The default FTT is equal to one, which means the system can tolerate a single failure within the cluster, whether of a disk drive, a host, a processor, or another component. With a single failure tolerated, two copies of the data are needed. In general, for n failures tolerated, n+1 copies of the object are created, and 2n+1 hosts contributing storage are necessary. So, to tolerate two failures, VSAN creates 2+1 (3) copies of the object, and 2x2+1 (5) hosts contributing storage are necessary.
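The sizing rule above can be sketched in a few lines of Python. Note that `replicas_required` and `hosts_required` are hypothetical helper names used only for illustration; they are not part of any VMware tooling.

```python
# FTT sizing rules as described above:
#   n failures tolerated -> n + 1 copies of each object,
#   and 2n + 1 hosts contributing storage (the extra hosts
#   hold the witness components that break ties).

def replicas_required(ftt: int) -> int:
    """Number of data copies vSAN creates for a given FTT."""
    return ftt + 1

def hosts_required(ftt: int) -> int:
    """Minimum number of hosts contributing storage for a given FTT."""
    return 2 * ftt + 1

for ftt in (1, 2, 3):
    print(f"FTT={ftt}: {replicas_required(ftt)} copies, "
          f"{hosts_required(ftt)} hosts contributing storage")
```

For FTT=1 this yields two copies on three hosts, matching the default three-node layout described next.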
With the default failure policy of FTT=1, the three-node VSAN architecture keeps two copies of the data on two different hosts, plus a witness component on a third host.
If we want to tolerate two failures, we have to set up a VSAN cluster with at least five hosts contributing storage (2x2+1).
Now, how about a four-node cluster, in which the fourth host is there in case we have unexpected hardware problems?
The recommended number of hosts for a VSAN cluster is four. We will see this in the next illustration, where four identical hosts are configured with VSAN.
In case of a single host failure, the system recreates the objects from the failed host on the remaining host within the cluster: the fourth one. This is why four hosts are recommended.
Here's another reason to have four hosts. Let's say you're doing maintenance: one host is in maintenance mode (unavailable for running VMs) while you patch firmware or upgrade physical RAM, and another host in the cluster fails. The cluster survives, because the two remaining hosts still hold a full copy of the data, but no components can be recreated, as no spare host is available.
We can see a failed host below. The VM component is recreated on the remaining host (ESXi-04). After the resynchronization, the cluster can protect workloads against single failures again.
VMware VSAN configuration ^
There are not many VMware VSAN configuration steps once all requirements have been met. We have talked a lot about physical and networking requirements, hardware on VSAN Hardware Compatibility lists (HCLs), and the required number of SSDs and disk drives for each host.
I assume you have a vCenter server installed and running. I’ll demonstrate the configuration steps with the latest version of vSphere 6.0 and VSAN 6.2.
I also assume that you have already created a datacenter object and a cluster object within the UI. In the image below, the datacenter object is called ESX Virtualization, and the cluster object is called VSAN. For now, we won’t activate any of the cluster’s features, such as high availability (HA), a distributed resource scheduler (DRS), or VSAN.
The first thing to do after we’re done with the creation of the datacenter and cluster objects is to configure networking.
Networking wizard ^
Without first configuring the networking, we won't be able to finish the VSAN wizard, so we'll start with that.
It’s possible to use vSphere standard switches (vSS) or to configure a vSphere distributed switch (vDS). The walkthrough will show the configuration using vSS.
Select Host > Manage > Networking > VMkernel Adapters > Add. Then, choose the first radio button, VMkernel Network Adapter.
Next, we can use the existing standard switch. However, I like to separate everything related to VSAN. Choose the New standard switch radio button.
We can now add a physical adapter to be used for VSAN traffic. Click OK to validate.
If we are using 10 GbE switching and sharing the physical 10 GbE NICs for purposes other than VSAN traffic, we'll have to use VLANs.
Assign an IP address.
View the recap screen.
Next, we can see the new VMkernel adapter we have just created. Just check that there is a “green light” to move forward.
VMware VSAN wizard ^
If the requirements discussed earlier (at least one SSD and at least one HDD in a disk group) were fulfilled, the easiest part comes now.
To create a disk group on each host, select one SSD for caching and one or more HDDs or SSDs for capacity. There needs to be at least one disk group per host.
In our case, we'll use one SSD for caching and two other SSDs for capacity, so each disk group has three devices. We have only a single disk group per host.
Note that we could also use magnetic media; for example, a disk group with one SSD and two spinning HDDs. However, we'll use this lab to demonstrate some advanced settings only available in the all-flash version of VSAN.
Select the cluster object (in our case, VSAN) > Manage > Settings. Then, select General within the VSAN section and go to Configure to launch the VSAN assistant. Select Do not configure for the stretched cluster feature this time, as we're doing a standard VSAN configuration.
Next, just review the networking screen; we don’t have to touch anything. The system checks if each host has one VMkernel adapter with VSAN traffic activated.
On this screen, we claim the disks that will be used in VSAN, either for the cache tier or for the capacity tier. (My capacity tier disks are marked as Flash, which is normal, as they are SSDs as well.)
We can drop down the menu and group either by host or by disk model or size, which is useful if we’re using the same disk models for caching and the same disk models for capacity.
VSAN Configuration Assistant – Claim Disks
Grouping allows us to check which disks will be a part of the cache tier (high-speed SSDs) and which disks will be a part of the capacity tier (HDDs or commodity SSDs).
You may have to enter a license (a trial should be available by default). Without an assigned license, you will receive a message saying that VSAN cannot create a disk group.
Well, we’re pretty much done with the VSAN configuration of three nodes.
At the end, each host participating in the VSAN cluster sees a shared VSAN datastore. Each host can access the storage as if it were a traditional shared storage device, like a SAN/NAS attached via iSCSI or NFS. However, in the case of VMware VSAN, the storage is "built" from the local disks and SSDs in each host.
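As a rough sizing sketch for the resulting datastore: with the default mirroring policy, usable capacity is approximately the raw capacity of all capacity-tier devices divided by FTT + 1. This is a back-of-the-envelope estimate that ignores overheads such as slack space reserved for rebuilds; `usable_capacity_gb` and the disk sizes below are illustrative assumptions, not values from this lab.

```python
# Approximate usable capacity under the default mirroring policy:
# each object is stored FTT + 1 times, so raw capacity shrinks accordingly.

def usable_capacity_gb(raw_gb: float, ftt: int = 1) -> float:
    """Rough usable capacity, ignoring slack space and metadata overhead."""
    return raw_gb / (ftt + 1)

# Example: three hosts, two 400 GB capacity SSDs each = 2400 GB raw.
raw = 3 * 2 * 400
print(usable_capacity_gb(raw))          # FTT=1: half the raw capacity
print(usable_capacity_gb(raw, ftt=2))   # FTT=2: one third of the raw capacity
```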
As you can see from this post, we need to use the web-based vSphere client instead of the traditional Windows-based vSphere client, because VMware is transitioning from the Windows-based client to a web-based model. The plan for the upcoming release of VMware vSphere is an HTML5-based client, because we currently need to install Adobe Flash in order to manage the vSphere infrastructure.
When working with clusters of more than three nodes, we could use a vDS, because the configuration is fast and applied simultaneously to each host attached to the vDS. We would not need to jump to each host individually to create a VMkernel adapter configured for VSAN traffic. We could also take advantage of other advanced networking features that vDS offers, such as quality of service (QoS) for different traffic types via Network I/O Control (NIOC).
But standard switching is well known to all IT and virtualization engineers, so I thought showing the configuration steps via standard switching would be easier to follow.
The image below shows the 3-node VSAN configuration with a single host failure. We will talk about VSAN stretched clusters and fault domains in the next post.
The next post will explain the architecture requirements for stretched clusters with step-by-step configuration. Virtual SAN stretched clusters with witness hosts refer to a deployment in which a user sets up a Virtual SAN cluster with two active/active sites with an identical number of ESXi hosts distributed evenly between the two sites. Both sites are connected via a high-bandwidth, low-latency link. The third site hosts the Virtual SAN Witness Host and is connected to both of the active/active sites. This connectivity can be via low-bandwidth, high-latency links.