Note that you can use the free version of VMware ESXi to connect to iSCSI or NFS storage. I'll demonstrate this below. In fact, any version of the VMware ESXi hypervisor, no matter the license, can connect to shared storage.
You can then have several free ESXi hosts accessing the same datastore. However, because you don't have a central management server (VMware vCenter Server), you can only manage your VMs individually, host by host. Nor can you move VMs from one host to another with vMotion.
- At least one ESXi host with two physical NICs (management and storage traffic)
- Shared storage offering the iSCSI protocol
- Network switch between shared storage and the ESXi host
- vSphere client software
We'll be using the new HTML5 host client, which is different from the old Windows client. One advantage of the HTML5 host client is that you don't need to install any software on your management computer because you connect via a web browser. You don't need plugins, and you can just connect through the URL https://ip_of_ESXi/ui (replace "ip_of_ESXi" with your installation's IP address). Note that if you're managing a vCenter Server-based VMware infrastructure, you'll still need an Adobe Flash plugin.
First, we need to add a VMkernel network interface card (NIC) to our system so the ESXi host can connect to the remote storage device. To do so, select Networking > VMkernel NICs > Add VMkernel NIC.
We will also add a new port group. Give the port group a descriptive name; in my example, I named it simply iscsi. Note that our storage system is on another subnet.
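For readers who prefer the command line, the same step can be sketched with esxcli from an SSH session on the host. The names vSwitch0, iscsi, and vmk1, as well as the IP address, are assumptions for this example; adjust them to match your environment.

```shell
# Create the iscsi port group on the existing standard vSwitch
esxcli network vswitch standard portgroup add --portgroup-name=iscsi --vswitch-name=vSwitch0

# Add a VMkernel NIC attached to that port group
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iscsi

# Assign a static IP on the storage subnet (example address; use your own)
esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static \
    --ipv4=192.168.100.10 --netmask=255.255.255.0
```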
The result should look like the screenshot below:
But that's not all. Before we go further, we need to make sure that:
- At the vSwitch level: Both physical NICs connected to the ESXi hosts are active.
- At the port group level: A single NIC is set as active. The other one is set as unused.
Navigate to Networking > Virtual Switches, select vSwitch0 > Edit settings, expand the NIC teaming section, and make sure both NICs are marked as active. If for some reason one of the NICs isn't active, select it and click the Mark as active button.
We also need to override the failover order at the port group level because, by default, the port group inherits the setting from our vSwitch. First, navigate to the Networking > Port groups tab, select the iscsi port group > Edit settings, expand the NIC teaming section, and set Override failover order to Yes. Use the correct NIC for iSCSI storage traffic (vmnic1 in our case).
Next, do the same for the Management Network and VM Network port groups, but this time select only vmnic0.
Override failover order on the Management and VM network port groups
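The failover overrides can also be set from the command line. The following is a sketch assuming the port group names used above and the uplink names vmnic0/vmnic1 from this example:

```shell
# Pin the iscsi port group to the storage NIC only
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=iscsi --active-uplinks=vmnic1

# Pin management and VM traffic to the other NIC
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="Management Network" --active-uplinks=vmnic0
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="VM Network" --active-uplinks=vmnic0
```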
iSCSI adapter activation and configuration
After the process above, the next step is to enable the software iSCSI adapter, which is disabled by default. To do this, select Storage > Adapters tab > Configure iSCSI.
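The software iSCSI adapter can likewise be enabled from an ESXi shell:

```shell
# Enable the software iSCSI initiator (disabled by default)
esxcli iscsi software set --enabled=true

# Verify it is enabled and find the adapter name (typically vmhba64 or similar)
esxcli iscsi software get
esxcli iscsi adapter list
```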
Then just go to the Dynamic targets section and click the Add dynamic target button.
Now you can click the Save button. Next, go back to Storage > Adapters > Configure iSCSI. You should see the Static targets section populated.
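Adding the dynamic target and rescanning can be done with esxcli as well. The adapter name vmhba64 and the target address are assumptions here; use the adapter name reported by `esxcli iscsi adapter list` and your storage system's portal address.

```shell
# Add a dynamic (send targets) discovery address; 3260 is the default iSCSI port
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 \
    --address=192.168.100.50:3260

# Rescan the adapter so discovered targets and LUNs appear
esxcli storage core adapter rescan --adapter=vmhba64

# Confirm the configured discovery address
esxcli iscsi adapter discovery sendtarget list
```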
If this is not the case, something went wrong during your configuration. Check your settings and make sure that everything is as explained above.
Then, while still in the Storage section, go to Datastores and create a new datastore. Click New datastore and create a new Virtual Machine File System (VMFS) datastore.
Then enter a name to recognize this datastore later.
Select VMFS 6 from the drop-down menu. (By default, VMFS 5 is preselected.)
You'll see a summary page. Click the Finish button.
You should see this new datastore populated among all the other datastores.
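As a side note, a VMFS 6 datastore can also be created from the shell with vmkfstools, provided the device already has a VMFS partition (the host client wizard creates one for you). The device ID below is a placeholder; list your devices with `esxcli storage core device list`.

```shell
# Create a VMFS 6 datastore on partition 1 of the iSCSI LUN
# (naa.xxxxxxxxxxxxxxxx is a placeholder for your device's identifier)
vmkfstools -C vmfs6 -S my-iscsi-datastore /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1
```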
That's all. We have configured an ESXi 6.5 host, connected it to a remote storage device via iSCSI, and created a new VMFS 6 datastore.
If you plan to connect another ESXi host to the same storage device, you don't have to create the datastore again. However, you'll still need to configure the ESXi host in the same manner. Both hosts will be able to use the shared datastore for running virtual machines.
VMFS is a clustered file system that can manage multiple concurrent connections and implements per-file locking. The latest VMFS 6 can handle up to 2048 different paths, and you can run up to 2048 VMs on a single datastore. The maximum file size is 62 TB, and the maximum volume size is 64 TB.