VMware HCI Mesh for compute-only nodes was introduced with vSphere 7.0 Update 2 and allows you to connect to a VMware vSAN datastore from another cluster. This cluster can be a compute-only cluster whose hosts have no internal storage contributing to vSAN capacity.

The previous release, vSphere 7.0 Update 1, introduced the HCI Mesh feature, but you could only create the mesh between vSAN clusters. Being able to run your workloads on a remote vSAN datastore from another cluster within your datacenter, much as you would with NFS or iSCSI storage, is very convenient.

VMware does not use iSCSI or NFS here; instead, it uses its proprietary vSAN protocol, Reliable Data Transport (RDT). What's great is that you don't need an additional vSAN license for the client cluster, even though you essentially activate vSAN on that cluster.

Advantages and limitations of VMware HCI Mesh

Possibility of using Storage Policy Based Management (SPBM)—VMware HCI Mesh allows you to choose which services you want to use on that datastore from your remote cluster. For example, you can choose whether to use deduplication and compression or which RAID level to apply.

Here is a screenshot from the new vSphere 7.0 Update 2 user interface for creating a new storage policy.

VMware storage policy enhancements

HCI Mesh uses the RDT protocol—The RDT protocol is used for communication between hosts over the vSAN VMkernel ports. It uses TCP at the transport layer and is optimized to send large files.

Licensing—While the compute-only cluster does not need an additional vSAN license, the cluster that provides the vSAN storage must have a vSAN Enterprise license.

Networking limitations—Your clusters need 10 Gbps NICs at a minimum; VMware recommends 25 Gbps. A short script to check your hosts' NIC link speeds is sketched below, after the last item.

Unsupported configurations—You cannot combine 2-node vSAN clusters or stretched clusters with HCI Mesh. You cannot mount the vSAN datastore from another datacenter; it must be another cluster in the same datacenter. Two vCenter Servers in linked mode are not supported, either: both clusters (the one providing vSAN storage and the compute-only one) must be managed by the same vCenter Server.

Maximums—A vSAN datastore can be mounted by a maximum of five vSAN client clusters. Conversely, a client cluster can mount a maximum of five remote vSAN datastores.
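If you want to quickly verify that all your hosts meet the 10 Gbps requirement mentioned above, a short script can read the link speed of every physical NIC. The following is only a minimal sketch using the open-source pyVmomi library; the vCenter address, credentials, and the 10,000 Mb/s threshold are placeholders you would adapt to your environment.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Minimal sketch: list the link speed of every physical NIC of every ESXi host.
# The vCenter address and credentials are placeholders for your own environment.
ctx = ssl._create_unverified_context()   # lab only; use valid certificates in production
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)   # all hosts in the inventory

for host in view.view:
    for pnic in host.config.network.pnic:
        speed = pnic.linkSpeed.speedMb if pnic.linkSpeed else 0   # 0 = link down
        status = "OK" if speed >= 10000 else "below 10 Gbps"
        print(f"{host.name} {pnic.device}: {speed} Mb/s ({status})")

view.Destroy()
Disconnect(si)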

VMware HCI Mesh architecture

VMware HCI clusters share their datastores remotely. Starting with vSphere 7.0 Update 2, the client cluster no longer has to be a vSAN cluster itself: you can mix clusters whose hosts contribute storage to vSAN with clusters whose hosts do not (they are compute-only).

What's interesting is that vMotioning a virtual machine (VM) from one cluster to the other is a compute-only migration, not a Storage vMotion. This means that the VM's files stay where they are. If the VM is already stored on the vSAN datastore, a simple cross-cluster vMotion moves it to the client cluster.
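To make the difference tangible, here is a minimal pyVmomi sketch of such a compute-only migration. The relocate spec only changes the destination host and resource pool and deliberately leaves the datastore untouched, so the VM's files stay on the shared vSAN datastore. The vm, dest_host, and dest_pool objects are assumed to have been looked up already; this is an illustration, not necessarily the exact mechanism the vSphere Client uses under the hood.

from pyVmomi import vim

def cross_cluster_vmotion(vm, dest_host, dest_pool):
    """Compute-only cross-cluster vMotion sketch: vm, dest_host, and dest_pool are
    pyVmomi managed objects you have already looked up (e.g., via a ContainerView)."""
    spec = vim.vm.RelocateSpec()
    spec.host = dest_host      # ESXi host in the compute-only (client) cluster
    spec.pool = dest_pool      # root resource pool of the destination cluster
    # spec.datastore is intentionally left unset: no Storage vMotion,
    # the VM's files stay on the (remote) vSAN datastore.
    return vm.RelocateVM_Task(spec=spec,
                              priority=vim.VirtualMachine.MovePriority.defaultPriority)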

Here is a diagram from VMware explaining HCI Mesh. As you can see, there are three clusters with local and remote vSAN datastores.

VMware HCI Mesh architecture

VMware HCI Mesh configuration

Let's have a look at the steps. First, we have to disable VMware High Availability (HA) on the cluster. Next, we configure vSAN VMkernel ports that can talk to the VMkernel ports of the remote vSAN cluster.

Configure VMkernel adapter for vSAN traffic
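If you prefer to script these two preparation steps, here is a minimal pyVmomi sketch that disables HA on a cluster and then tags an existing VMkernel adapter for vSAN traffic. It assumes an already connected ServiceInstance named si; the cluster name "Compute-Cluster" and the adapter "vmk1" are placeholders, and the adapter itself, with an IP address that can reach the remote vSAN cluster, must already exist.

from pyVmomi import vim

# Minimal sketch, assuming a connected ServiceInstance "si";
# "Compute-Cluster" and "vmk1" are placeholders for your environment.
def find_cluster(si, name):
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.ClusterComputeResource], True)
    try:
        return next(c for c in view.view if c.name == name)
    finally:
        view.Destroy()

cluster = find_cluster(si, "Compute-Cluster")

# Step 1: disable vSphere HA on the cluster
ha_off = vim.cluster.ConfigSpecEx(dasConfig=vim.cluster.DasConfigInfo(enabled=False))
cluster.ReconfigureComputeResource_Task(spec=ha_off, modify=True)

# Step 2: tag the vmk1 adapter on every host in the cluster for vSAN traffic
for host in cluster.host:
    host.configManager.virtualNicManager.SelectVnicForNicType("vsan", "vmk1")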

Now, let's look at the case of a compute-only cluster. Clusters that will only consume remote vSAN datastores need to be initialized as vSAN Compute Clusters.

So, we first create a regular cluster with HA and vSAN disabled. Once that is done, simply select this cluster, go to Configure > vSAN, and click the Configure vSAN button.

Configure the vSAN cluster

You'll see a new wizard starting. Select the vSAN HCI Mesh compute cluster option via the radio button. There are some other cluster options (stretched cluster, 2-node, and custom fault domains if hosts are already in the cluster), but it's only HCI Mesh that interests us this time.

vSAN HCI Mesh Compute Cluster

You'll see another screen that tells you that after the wizard finishes, you'll be able to mount remote vSAN datastores from the Remote Datastore view under Configure.

Configure vSAN compute cluster wizard

After a couple of seconds, you'll see another button appear—Mount remote datastores. Just click this button to start another wizard.

Mount remote vSAN datastore

You'll change to the Remote datastores menu, where you can click the Mount remote datastore button.

Mount Remote Datastore wizard

When you click the Next button, a series of compatibility checks runs. In our case, everything is green, so we are ready to go.

All checks are green

Click Finish to close the wizard. The remote vSAN datastore is now mounted on our compute-only cluster.

The remote vSAN datastore is now connected to our compute cluster

This is the end of the configuration. We now have two clusters in the lab. In the first cluster, the hosts participate in vSAN and contribute storage; their local disks are configured as vSAN disks. The second cluster is compute-only and has no storage, but it now has a connection to the remote vSAN datastore, so we can create VMs there whose files will be stored on that datastore.
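As a quick sanity check from the scripting side, the short pyVmomi sketch below lists the datastores visible to the compute-only cluster and flags the vSAN ones; after the mount, the remote vSAN datastore should appear in that list. Again, a connected ServiceInstance si is assumed, and "Compute-Cluster" is a placeholder name.

from pyVmomi import vim

# Minimal sketch, assuming a connected ServiceInstance "si";
# "Compute-Cluster" is a placeholder for your compute-only cluster.
view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Compute-Cluster")
view.Destroy()

for ds in cluster.datastore:
    s = ds.summary
    free_gb = s.freeSpace / (1024 ** 3)
    # s.type is "vsan" for a vSAN datastore, including a remotely mounted one
    print(f"{s.name:25} type={s.type:6} free={free_gb:.0f} GB")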

Final words

VMware HCI Mesh allows vSAN clusters to remotely mount the vSAN datastore of another vSAN cluster. It also lets you create a cluster of diskless servers that connect to a remote vSAN datastore.

vSphere 7.0 Update 2 also increased the limit for vSAN clusters. Now you can connect up to 128 hosts to a vSAN datastore.

VMware continues to enhance vSAN with each release, and monitoring the performance of a single datastore instead of dozens or hundreds of SAN devices is definitely preferable. With a vSAN cluster, you can now mix in diskless servers for additional compute capacity if needed.
