While it is quite easy to configure a VSAN cluster once all the hardware is installed, it is very important to choose the right hardware for your VMware VSAN implementation.
Each host that contributes capacity to a Virtual SAN datastore must include at least one flash device and one magnetic disk. To run a Virtual SAN, you need a minimum of three hosts in a vSphere cluster that contribute local storage.
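To illustrate, here is a minimal pyVmomi sketch that inventories the local disks of each host in a cluster and flags hosts that are missing a flash or a magnetic tier. The vCenter address, credentials, and the cluster name "VSAN-Cluster" are placeholders you would replace with your own.

```python
# Minimal pyVmomi sketch: verify each host in a cluster has at least one SSD
# and one magnetic disk before enabling VSAN. Hostnames, credentials, and the
# cluster name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use proper certs in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "VSAN-Cluster")
    for host in cluster.host:
        disks = [d for d in host.config.storageDevice.scsiLun
                 if isinstance(d, vim.host.ScsiDisk)]
        ssds = [d for d in disks if d.ssd]
        hdds = [d for d in disks if not d.ssd]
        print(f"{host.name}: {len(ssds)} SSD(s), {len(hdds)} magnetic disk(s) "
              f"-> {'OK' if ssds and hdds else 'missing a tier'}")
finally:
    Disconnect(si)
```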
There are two ways to build a Virtual SAN cluster:
- Build your own based on certified components
- Choose from a list of Virtual SAN Ready Nodes (easier)
VSAN Ready Nodes
A VSAN has special requirements for each piece of hardware in each host. Some VMware admins try to design their own configurations. While this can be successful, you might end up with an over-provisioned solution, or, if you work with low-end components, overall VSAN performance will suffer.
The vast majority of people are better off with pre-configured VSAN Ready Nodes for different workload profiles. You can either buy these as a SKU or use them as a starting point for your own configurations.
Numerous VMware partners have preconfigured VSAN Ready Nodes that comply with the various aspects of the vSphere and VSAN compatibility guides. With VSAN Ready Nodes, system vendors have already selected and tested the sizing, disk/SSD technology, network standards, etc.
In my honest opinion, the VSAN Ready Node approach is the best way to go if you take the VMware hardware compatibility list (HCL) seriously and don't want to make a sizing error. A Virtual SAN Ready Node is a validated server configuration in a tested, certified hardware form factor for Virtual SAN deployments, jointly recommended by the server OEM and VMware. The cache SSDs are sized according to the recommended workloads and the number of VMs on each host.
There are online tools that allow you to specify your requirements (number of VMs, number of VMs per host, etc.) and will help you choose nodes accordingly. One such tool is the VSAN total cost of ownership (TCO) and sizing calculator.
Another tool that you can use to select a particular node from a particular hardware vendor is VSAN Ready Nodes.
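For a rough idea of the kind of math such sizing tools perform, here is a back-of-the-envelope sketch. All the input numbers are made-up example values, not VMware recommendations; use the official calculators for a real design.

```python
# Back-of-the-envelope VSAN node-count estimate. All input numbers are
# example assumptions, not VMware sizing guidance.
total_vms = 200          # VMs you plan to run
vms_per_host = 25        # consolidation ratio you are comfortable with
vm_capacity_gb = 100     # average consumed capacity per VM
ftt = 1                  # failures to tolerate (mirroring doubles consumed capacity)

hosts_for_compute = -(-total_vms // vms_per_host)   # ceiling division
hosts = hosts_for_compute + 1                       # +1 so a failed host can be rebuilt
raw_capacity_gb = total_vms * vm_capacity_gb * (ftt + 1)

print(f"Hosts needed (incl. one spare): {hosts}")
print(f"Raw capacity across the cluster: {raw_capacity_gb} GB")
print(f"Raw capacity per host: {raw_capacity_gb / hosts:.0f} GB")
```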
VMware VSAN system requirements
- Hardware from the VSAN HCL: All hardware used for a Virtual SAN deployment must be on the VMware HCL. All I/O controllers, HDDs, and SSDs must be on the Virtual SAN HCL.
- Minimum number of ESXi hosts required: You'll need a minimum of three hosts, although four are recommended, because if one server fails, VSAN can rebuild the affected components on another host in the cluster. This is not possible in a three-host scenario: if one host fails, VSAN continues to run on the remaining two hosts, but there is no spare host on which to rebuild the components (see the sketch below).
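The host-count logic can be summed up in a few lines. The 2n + 1 rule below reflects the default mirroring policy; treat it as a sketch rather than a sizing tool.

```python
# Illustration of why three hosts is the minimum and four is more comfortable.
# With the default mirroring policy, tolerating n host failures needs
# 2n + 1 hosts (n + 1 data copies plus n witness components).
def hosts_required(failures_to_tolerate: int, spare_for_rebuild: bool = False) -> int:
    minimum = 2 * failures_to_tolerate + 1
    return minimum + (1 if spare_for_rebuild else 0)

print(hosts_required(1))                          # 3 -> cluster survives one failure
print(hosts_required(1, spare_for_rebuild=True))  # 4 -> and can also re-protect the data
```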
What is the witness?
A witness host is an ESXi hypervisor configured as a "judge" in a stretched cluster or remote office/branch office (ROBO) scenario. We'll cover the details in one of the next posts. For now, you should know that a ROBO scenario is a special case in which only two hosts contribute storage to the VSAN cluster, and the witness component runs on a third host (which can be smaller).
In this scenario, you have two-node clusters at the remote offices and a witness in your primary datacenter. The primary datacenter must have a host that isn't part of any cluster; that host is designated as the witness. You must have a single witness per VSAN cluster.
To lower costs, you can use a nested ESXi host as the witness. The witness does not need dedicated physical hardware, as it only stores witness objects; however, you still have to deploy the witness OVF on a physical host running ESXi.
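To see why the witness matters, here is a toy illustration of the voting logic: an object remains accessible as long as more than half of its components (two data copies plus the witness) are reachable. This is a simplified model, not the actual VSAN implementation.

```python
# Toy illustration of the witness's role in a 2-node (ROBO) cluster.
# Each object has a data component on each data node plus a witness
# component; it stays accessible while more than half of the votes are up.
components = {"node1-data": True, "node2-data": True, "witness": True}

def accessible(components: dict) -> bool:
    votes_up = sum(components.values())
    return votes_up > len(components) / 2

components["node1-data"] = False   # one data node fails
print(accessible(components))      # True: node2 + witness still form a majority

components["witness"] = False      # witness also unreachable
print(accessible(components))      # False: quorum lost
```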
I recommend going through the VSAN Ready Nodes online tools mentioned above; you'll see all the available nodes with detailed specs. You can also use the PDF document that lists all the VSAN Ready Nodes with their hardware specs. It is regularly updated, as hardware partners are adding new hardware all the time.
VMware VSAN network requirements
The hosts in a Virtual SAN cluster must be part of the Virtual SAN network and on the same subnet, regardless of whether they contribute storage. Virtual SAN requires a dedicated VMkernel port type and uses a proprietary transport protocol for traffic between the hosts. You can configure vSphere standard switches or use a vSphere distributed switch. For each network that you use for Virtual SAN, configure a VMkernel port group with the Virtual SAN traffic type enabled.
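If you want to check which VMkernel adapters are tagged for Virtual SAN traffic, a pyVmomi sketch along these lines could do it (same placeholder vCenter, credentials, and cluster name as in the earlier example):

```python
# Sketch: list which VMkernel adapters have Virtual SAN traffic enabled on
# each host of a cluster. Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "VSAN-Cluster")
    for host in cluster.host:
        net_config = host.configManager.virtualNicManager.QueryNetConfig("vsan")
        selected = set(net_config.selectedVnic or [])
        vsan_vmks = [v.device for v in (net_config.candidateVnic or [])
                     if v.key in selected]
        print(f"{host.name}: VSAN traffic on {vsan_vmks or 'no VMkernel adapter'}")
finally:
    Disconnect(si)
```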
We have to distinguish here between hybrid VSAN configurations (SSDs plus HDDs) and VSAN All-Flash.
VMware VSAN Hybrid: VMware supports 1 GbE or 10 GbE networking for VSAN traffic in hybrid configurations only (a flash cache tier plus spinning hard disks as the capacity tier); this applies to VSAN 5.5 and 6.0. You can work with up to five nodes, but 10 GbE is recommended even for smaller environments. Note that 10 GbE network interfaces can be shared between VSAN and other workloads.
VMware VSAN All-Flash: VMware supports only 10 GbE NICs or faster for Virtual SAN network traffic.
Multicast - VSAN requires multicast on the VSAN network. Low-end switches often do not provide good multicast performance; you can test multicast performance with the Virtual SAN Health Service. You must enable layer 2 multicast on the physical switch connecting all hosts in the VSAN cluster. IP multicast delivers source packets to multiple receivers as a group transmission. Packets are replicated in the network only at points of path divergence, normally switches or routers. This results in the most efficient delivery of data to a number of destinations with minimum network bandwidth consumption.
IGMP Snooping - You should connect all hosts participating in Virtual SAN to a single layer 2 network that has multicast (IGMP snooping) enabled. You can use IGMP snooping configured with an IGMP snooping querier to limit the physical switch ports participating in the multicast group to only Virtual SAN VMkernel port uplinks. The need to configure an IGMP snooping querier to support IGMP snooping varies by switch vendor.
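For readers less familiar with multicast, the following generic Python sketch shows what group membership looks like from an application's point of view: the receiver joins a group (which triggers an IGMP membership report that IGMP snooping switches track) and then receives every packet sent to the group address. The group address and port are arbitrary example values; this is not VSAN's own protocol.

```python
# Generic illustration of IP multicast group membership (not VSAN's protocol):
# a receiver joins a multicast group and gets every packet sent to that group.
import socket
import struct

GROUP, PORT = "239.1.2.3", 5001   # arbitrary example group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group sends an IGMP membership report, which is what
# IGMP snooping switches use to decide which ports receive the traffic.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, sender = sock.recvfrom(1500)
print(f"Received {len(data)} bytes from {sender}")
```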
For network redundancy, you can configure a team of NICs on a per-host basis. VMware considers this a best practice.
Unlike for disks and storage controllers, there is no special compatibility guide for network controllers. Therefore, you can take any 10GbE VMware vSphere–compatible network controller and use it with VSAN.
Memory requirements
VSAN runs as a kernel module and consumes memory depending on how many disk groups you have per host. (You can have several disk groups per host.) Each host should contain a minimum of 32 GB of memory to accommodate the maximum of 5 disk groups and 7 capacity devices per disk group. Obviously, if you have a host with a single disk group, the requirements are different.
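The following sketch only illustrates the shape of that relationship: memory overhead grows with the number of disk groups and the number of capacity devices per group. The constants are made-up placeholders, not VMware's published figures; check the official documentation for the real formula.

```python
# Illustrative only: VSAN host memory overhead grows with the number of disk
# groups and capacity devices per group. The constants below are made-up
# placeholders, NOT VMware's published values.
BASE_OVERHEAD_GB = 5.0            # hypothetical fixed overhead per host
PER_DISK_GROUP_GB = 4.0           # hypothetical overhead per disk group
PER_CAPACITY_DEVICE_GB = 0.5      # hypothetical overhead per capacity device

def vsan_memory_overhead_gb(disk_groups: int, capacity_devices_per_group: int) -> float:
    return (BASE_OVERHEAD_GB
            + disk_groups * (PER_DISK_GROUP_GB
                             + capacity_devices_per_group * PER_CAPACITY_DEVICE_GB))

# One disk group with two capacity disks vs. the maximum layout:
print(vsan_memory_overhead_gb(1, 2))   # small config -> small overhead
print(vsan_memory_overhead_gb(5, 7))   # 5 disk groups x 7 devices -> much more
```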
With that, you may ask: why would I want to configure several disk groups per host, and what is the advantage over a single disk group per host?
A disk group can contain only one cache SSD. If a vSphere admin wants to use multiple SSDs in an ESXi host that will become a member of a VSAN cluster, multiple disk groups have to be created, with one SSD in each group. More SSDs mean a larger cache, which means better performance.
Also, if a single disk fails within one disk group, VSAN can rebuild the affected objects on another disk group, which means better availability.
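Here is a small sketch of how a host's devices could be split into disk groups, with one cache SSD per group and at most seven capacity devices each (the device names are invented for illustration):

```python
# Sketch: split a host's devices into VSAN disk groups, one cache SSD per
# group and at most seven capacity devices per group. Device names are made up.
MAX_CAPACITY_DEVICES_PER_GROUP = 7

def plan_disk_groups(cache_ssds: list, capacity_disks: list) -> list:
    if not cache_ssds:
        raise ValueError("at least one cache SSD is required per disk group")
    groups = [{"cache": ssd, "capacity": []} for ssd in cache_ssds]
    for i, disk in enumerate(capacity_disks):
        group = groups[i % len(groups)]          # spread capacity disks evenly
        if len(group["capacity"]) >= MAX_CAPACITY_DEVICES_PER_GROUP:
            raise ValueError("too many capacity devices for the available cache SSDs")
        group["capacity"].append(disk)
    return groups

# Two cache SSDs and six HDDs -> two disk groups: more cache and better
# failure isolation than one big disk group.
for g in plan_disk_groups(["ssd0", "ssd1"], [f"hdd{i}" for i in range(6)]):
    print(g)
```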
So far, we have introduced the VMware hyper-converged architecture based on VSAN and talked about the system and network requirements. In the next posts, we'll do a deep dive into three-node configurations, stretched clusters, and two-node + witness configurations, so you can see all the configuration steps with screenshots. Stay tuned.
Hi Vladan,
I couldn’t access https://4sysops.com/archives/vmware-vsan-3-nodes-mode/, could you fix it?
Thank You
Can I install vSAN on magnetic disks only, with no SSD? If yes, what would be the consequences?
Hi Sakhie,
No, it is not possible to use spinning disks for the caching tier; only SSDs are supported. For lab testing, it's possible to override this (I think) by tagging the magnetic disk as an SSD, but it's not really worth it because the performance will simply be awful.