The recently released VMware vSphere 7 is not a revolution; most of its features build on previous releases. I'd call it an evolution. True, it is possible to use Kubernetes along with vSphere, configure file shares and quotas, and use scalable shares for our resource pools. vSphere also makes it possible to configure clustered virtual disks (VMDKs), which is necessary when you want to deploy a Windows Server Failover Cluster (WSFC).

Previously, you had to use raw device mappings (RDMs) to configure and use WSFC. Now, with vSphere 7.0, you can enable support for clustered VMDKs on a datastore. This provides some advantages, but it also entails many requirements.

Clustered VMDK support covers Microsoft Cluster Service (MSCS), WSFC, and Microsoft SQL Server Always On Failover Cluster Instances.

Note that SQL Server Always On Availability Groups and Exchange Database Availability Groups do not require special storage configurations, such as clustered VMDK support, so these solutions can run on traditional VMFS or NFS datastores.

This configuration is supported by VMware from the vSphere perspective, and also by Microsoft on the guest side, provided the VMs run Windows Server 2012 or later.

We'll look at the requirements below. One of them is that you can use shared VMDKs only with an array connected via Fibre Channel (FC). This requirement eliminates many arrays connected via Ethernet.

Shared VMDK requirements

  • A Fibre Channel (FC) array only.
  • The array must support ATS and SCSI-3 persistent reservations of type Write Exclusive – All Registrants (WEAR).
  • The datastore must be formatted with VMFS 6 (VMFS 5 is not supported).
  • VMDKs must be Eager Zeroed Thick (thin-provisioned VMDKs are not supported).
  • If you have DRS configured in your environment, you must create an anti-affinity rule so that the cluster VMs run on separate hosts.
  • vCenter Server 7.0 or higher.
  • Snapshots, cloning, and Storage vMotion are not supported (no backup of the nodes is possible, because backup software uses snapshots).
  • Fault Tolerance (FT), hot changes to the VM's virtual hardware, and hot expansion of clustered disks are not supported.
  • vMotion is supported, but only between hosts that meet the same requirements.
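The requirements above are essentially properties you can verify per datastore before enabling the feature. Here is a minimal sketch in plain Python (no vSphere SDK involved); the dict keys and the check_clustered_vmdk_support() helper are my own illustration, not a VMware API:

```python
# Sketch: validate a datastore description against the clustered-VMDK
# requirements. All keys below are illustrative, not VMware API names.

REQUIRED_VMFS_VERSION = 6

def check_clustered_vmdk_support(datastore: dict) -> list:
    """Return a list of human-readable problems; an empty list means OK."""
    problems = []
    if datastore.get("transport") != "FC":
        problems.append("array must be attached via Fibre Channel")
    if not datastore.get("supports_ats"):
        problems.append("array must support ATS")
    if not datastore.get("supports_scsi3_wear"):
        problems.append("array must support SCSI-3 PR type WEAR")
    if datastore.get("vmfs_version") != REQUIRED_VMFS_VERSION:
        problems.append("datastore must be formatted with VMFS 6")
    return problems

# Example: an iSCSI-attached VMFS 5 datastore fails two of the checks.
bad = {"transport": "iSCSI", "supports_ats": True,
       "supports_scsi3_wear": True, "vmfs_version": 5}
print(check_clustered_vmdk_support(bad))
```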

Enable Clustered VMDK support on a datastore

Clustered VMDK support is configured at the per-datastore level. So if you have different arrays connected to your cluster, be sure to create and configure only the correct datastore. As of this writing, only FC storage arrays are supported, but VMware is working on supporting other types of arrays as well.

Here are the steps:

Open the vSphere client and go to the Storage view.

Right-click the parent object and select Create a New Datastore.

Create a new datastore with Clustered VMDK support

Follow the wizard to create the datastore, selecting the VMFS 6 format and the default partition settings.

Once you have created the datastore, go to the Storage view, select the datastore, and navigate to Configure > General > Datastore Capabilities > Clustered VMDK, then click Enable.

Enable Clustered VMDK support for a datastore

VM-creation workflow

The VM-creation workflow consists of creating a new VM and changing the SCSI bus-sharing option of its controller to Physical.

Change SCSI bus sharing to physical

Add a new SCSI controller. Choose the recommended one, VMware Paravirtual. In our case, SCSI Controller 1 is created and added to the VM.

Add a new VMDK provisioned as Eager Zeroed Thick (EZT) and store it on the new datastore that you have just created. Then assign the new VMDK to the new SCSI controller.

The overview of the virtual hardware looks like this. Note that we do not use the Multi-writer sharing mode for disks on a clustered VMDK datastore.

Make sure to select No sharing for the VMDK itself

Note: The boot disk (and all unshared disks) should be attached to a separate virtual SCSI controller with bus sharing set to None.
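The first node's virtual hardware can be summarized in a short sketch. The plain dicts below only borrow some vSphere API field names (busNumber, sharedBus, eagerlyScrub) for illustration; this is not runnable pyVmomi code:

```python
# Sketch: device settings for the first WSFC node. Keys loosely mirror
# vSphere API names, but the dicts themselves are illustrative only.

def pvscsi_controller(bus_number: int, shared: bool) -> dict:
    """VMware Paravirtual SCSI controller; Physical bus sharing for clustered disks."""
    return {
        "type": "ParaVirtualSCSIController",
        "busNumber": bus_number,
        "sharedBus": "physicalSharing" if shared else "noSharing",
    }

def clustered_vmdk(datastore: str, unit_number: int) -> dict:
    """Eager Zeroed Thick disk with the sharing mode left at 'No sharing'."""
    return {
        "datastore": datastore,
        "unitNumber": unit_number,   # virtual device node on the controller
        "eagerlyScrub": True,        # Eager Zeroed Thick
        "thinProvisioned": False,
        "sharing": "sharingNone",    # do NOT use multi-writer
    }

# Boot disk stays on controller 0 (no sharing); the clustered disk goes
# on the new controller 1 with Physical bus sharing.
boot_ctrl = pvscsi_controller(0, shared=False)
shared_ctrl = pvscsi_controller(1, shared=True)
disk = clustered_vmdk("clustered-ds01", unit_number=0)
```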

Create additional VM nodes

Here are the steps for the second VM, which will be the second node of the cluster and will use the existing disk that we created for the first node (remember, you can have up to five nodes):

Create a second VM > Assign resources, CPU and memory > Select ESXi host > Click OK to finish the wizard.

Add a new virtual SCSI controller (again using the recommended VMware Paravirtual).

Set the SCSI bus-sharing option of that controller to Physical > Point to an existing VMDK from the first node (you will use the Add an Existing Hard Disk option).

Add an existing VMDK to the second node

Now assign the VMDK to the new vSCSI controller that you created before. You should use the same SCSI IDs (virtual device node) when assigning disks to all additional nodes.
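The "same SCSI IDs" rule can be illustrated with a tiny sketch (the scsi_id() helper is my own, purely for illustration):

```python
# Sketch: every node must attach the shared VMDK at the same virtual
# device node (the "SCSI(bus:unit)" notation shown in the vSphere client).

def scsi_id(bus: int, unit: int) -> str:
    return f"SCSI({bus}:{unit})"

# The first node attached the clustered disk at SCSI(1:0); every
# additional node must attach the same VMDK at the same device node.
first_node = scsi_id(1, 0)
second_node = scsi_id(1, 0)
print(first_node, second_node)
```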

VMware KB article 2147661 provides further help for creating and configuring a Microsoft WSFC on VMware vSphere.

Other requirements and limitations:

  • Mixing clustered and unshared disks on a single virtual SCSI controller is not supported.
  • Mixing clustered VMDKs and other types of shared disks (vVols, pRDMs) is not supported either.
  • A maximum of five nodes are supported.
  • The maximum number of clustered VMDKs per host is 128.
  • Cluster-across-Boxes (CAB) is supported, but Cluster-in-a-Box (CIB) is not.

Final words

VMware has added many small and big features in vSphere 7.0. Although most architectural designs were already possible in previous releases of vSphere, for some use cases, options were missing or unsupported. Clustered VMDK closes one of those gaps by providing WSFC shared-disk support at the datastore level.

Another example is Virtual Hardware 17, new in vSphere 7.0, which brings support for a virtual watchdog timer, enabling guest OS monitoring of multi-VM applications.

VMware does not recommend using a cluster-enabled VMDK datastore for storing unclustered VMDKs; nevertheless, it is a supported configuration.


Clustered VMDK datastores with vSphere 7.0 are an evolution of the configuration that gives enterprise architects more ways of building Microsoft WSFC environments.

5 Comments
  1. Wirus (Rank 1) 3 years ago

    Doesn't this work the same for a Linux host that wants to share the same disk? Second, what about backing up such a disk? Veeam won't back up a VM with a multi-writer disk; would it back up a VM in this configuration?

    • david 8 months ago

      Old comment, but answering anyway.
      It works the same way for Linux: add a new paravirtual SCSI controller with physical bus sharing, add the disk, leave the sharing mode unset (don't set it to multi-writer), then add the existing disk to the other VMs on the second SCSI controller you added.
      For backups, you need to treat it the same way as a physical server; it's the same on the storage side. It's a LUN on the array, and you could export it to a physical server too. VMware has no control over it (the VM does), so Veeam will not back it up, as it's technically the same as a physical server.

  2. David 8 months ago

    I noticed it was a VMDK, not an RDM, so the last part about it being the same as a physical server, and exporting it to a physical server, isn't correct for this case.

  3. E_r_!_K 3 months ago

    Hi, just curious, but what if you use array-based snapshots to back up the data on these volumes? That should be an option, right?

    • Author
      Vladan 3 months ago

      Array-based snapshots go before anything else. So yes, it's a good option.
