When you select the guest OS for a new virtual machine, the controller you'll use is automatically selected based on the drivers that are available in the OS distribution. Since VMware has over 600 different operating systems in its database, be sure to select the correct OS when you are deploying a new VM.
Before we get started, let's have a look at how the storage technology works. Storage controllers are presented to a VM as block devices. The vast majority of those devices are SCSI or SAS, and include BusLogic Parallel, LSI Logic Parallel, and LSI Logic SAS. The newer VMware Paravirtual SCSI controller was added with vSphere 4.x.
Other controllers that can be used are AHCI, SATA, and NVMe, and they suit different workloads. AHCI and SATA controllers are generally used for CD-ROM devices and hard disks, whereas the newer NVMe controllers are used only for hard disks.
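The controller families and the device types they typically serve can be summarized in a short sketch. The mapping below is my own illustrative summary based on the text above, not an exhaustive VMware compatibility matrix:

```python
# Illustrative summary of virtual storage controller families and the
# device types they typically serve (not an official VMware matrix).
CONTROLLER_DEVICES = {
    "BusLogic Parallel": {"hard disk"},
    "LSI Logic Parallel": {"hard disk"},
    "LSI Logic SAS": {"hard disk"},
    "VMware Paravirtual SCSI": {"hard disk"},
    "AHCI/SATA": {"hard disk", "cd-rom"},
    "NVMe": {"hard disk"},  # NVMe controllers serve hard disks only
}

def controllers_for(device: str) -> list[str]:
    """Return the controller families that can host the given device type."""
    return [c for c, devs in CONTROLLER_DEVICES.items() if device in devs]
```

For example, `controllers_for("cd-rom")` returns only the AHCI/SATA family, while every family in the table can host a hard disk.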
Each virtual disk can have its own storage controller, which usually improves the performance of the VM (or of the individual virtual disk). That's why separating the SQL database disk from the OS disk is one of the first steps when preparing a new VM for such a workload.
Each VM can be configured with a maximum of two IDE controllers (used for older operating systems). In addition, each VM can have up to four SCSI controllers, four SATA controllers, and four NVMe controllers.
When you first create a new VM, the default controller of each type is numbered 0. The first virtual hard disk is therefore assigned to default controller 0 at bus node (0:0).
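As a minimal sketch, this (bus:unit) numbering can be modeled in a few lines. Note two assumptions not stated above but standard for vSphere vSCSI controllers: each SCSI controller supports up to 15 disks, and unit number 7 is reserved for the controller itself:

```python
MAX_SCSI_CONTROLLERS = 4   # per-VM limit mentioned above
RESERVED_UNIT = 7          # the SCSI controller itself occupies unit 7

def next_scsi_address(used: set[tuple[int, int]]) -> tuple[int, int]:
    """Return the next free (bus, unit) SCSI address; the very first
    disk on a new VM lands at (0, 0) on default controller 0."""
    for bus in range(MAX_SCSI_CONTROLLERS):
        for unit in range(16):
            if unit == RESERVED_UNIT:
                continue  # skip the controller's own unit number
            if (bus, unit) not in used:
                return (bus, unit)
    raise RuntimeError("no free SCSI address left on this VM")
```

With no disks present, the function returns `(0, 0)`; once bus 0 is full, the next disk moves to `(1, 0)` on the second controller.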
The workflow ^
To assign a separate storage controller to a new virtual hard disk, follow the steps below:
Step 1: Add a second storage controller to your VM and click OK.
When you add storage controllers to your VM, they are numbered 1, 2, 3, etc.
Step 2: Add a new virtual disk to your VM and choose storage controller 1 as the device. If you simply add a second disk and validate right away, the disk is connected to storage controller 0, and the VM ends up with both disks on the same controller. Connect the disk to storage controller 1 instead so that each disk has its own storage controller.
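The two steps above can be sketched as a tiny model, with plain Python standing in for the vSphere UI actions (the class and method names are mine, chosen for illustration):

```python
# Tiny model of the workflow: add a second controller (step 1), then
# attach the new disk to controller 1 instead of the default 0 (step 2).
class VM:
    def __init__(self) -> None:
        self.controllers = [0]   # every new VM starts with controller 0
        self.disks = {}          # disk name -> controller number

    def add_controller(self) -> int:
        """Step 1: additional controllers are numbered 1, 2, 3, ..."""
        num = len(self.controllers)
        self.controllers.append(num)
        return num

    def add_disk(self, name: str, controller: int = 0) -> None:
        """Step 2: without an explicit choice, disks land on controller 0."""
        if controller not in self.controllers:
            raise ValueError(f"controller {controller} does not exist")
        self.disks[name] = controller

vm = VM()
vm.add_disk("os.vmdk")              # OS disk stays on controller 0
ctrl = vm.add_controller()          # step 1 -> controller 1
vm.add_disk("sql_data.vmdk", ctrl)  # step 2 -> explicitly on controller 1
```

Forgetting the `controller` argument in the last call would reproduce the mistake described above: both disks on controller 0.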
There are many options today for configuration of storage controllers in a VMware environment. Let's look at some of them.
Let's start with the most basic one, BusLogic. This was one of the first controllers and was natively supported by most Windows Server and client OS versions.
If you choose to use the BusLogic Parallel virtual SCSI adapter, and are using a Windows guest operating system, you should use the custom BusLogic driver included in the VMware Tools package. Be sure to install the latest version of VMware Tools in your VM.
Then there is LSI Logic Parallel, also one of the first controllers available, and LSI Logic SAS, which has been popular with Windows Server 2008 and later. These controllers date back to ESX 3.5 and vSphere 4.0, and the majority of VMs migrated from those environments still use them. You can gain some performance by switching such VMs to a more modern storage controller, such as VMware Paravirtual SCSI.
VMware Paravirtual SCSI (or PVSCSI) supports high throughput with minimal overhead and minimal CPU utilization. It is one of the better controllers in terms of performance. However, when configuring new VMs, make sure the VM can actually boot from it; this was a problem for Windows Server 2008 and Windows Server 2012, because when those systems were released, the installation ISO did not include drivers that allowed booting directly from this controller.
As a workaround, you had to add a small second disk attached to a PVSCSI controller so that Windows would install the driver within the OS, reboot the VM, and then shut down the VM and change the boot disk's controller to PVSCSI. Once done, the VM was able to boot. This fix, documented in a VMware KB article, is no longer necessary for Windows Server 2016 and Windows Server 2019 systems, as far as I know.
The PVSCSI adapter offers a great reduction in CPU utilization and increased throughput compared to the default virtual storage adapters. Thus, it is one of the best choices. In one paper, VMware reported that the PVSCSI adapter provides 8% better throughput at a 10% lower CPU cost.
vSphere 6.5 introduced a Non-Volatile Memory Express (NVMe) virtual storage adapter (virtual NVMe, or vNVMe). Recent OSs that include a native NVMe driver can use that driver to access storage through ESXi, regardless of whether ESXi itself uses NVMe storage.
The vNVMe virtual storage adapter has been designed for extremely low latency flash and non-volatile memory-based storage. As such, it is not recommended to use it when the underlying storage is based on spinning media.
VMware compared this adapter with PVSCSI and found that the vNVMe virtual storage adapter delivers similar or better IOPS at a similar or lower CPU cost per I/O.
Compared to virtual SATA devices, however, the vNVMe adapter accesses local PCIe SSD devices at a much lower CPU cost per I/O and with significantly higher IOPS.
Limits per VM? ^
Are there any limits when it comes to configuration? How many vSCSI adapters are supported per virtual machine?
You can configure up to four vSCSI adapters per VM. Remember to spread your virtual disks across all your virtual adapters; this increases the number of I/O queues and allows the VM to process more I/O in parallel.
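Spreading disks across the four adapters can be sketched as a simple round-robin assignment (an illustrative model, not a VMware tool):

```python
def spread_disks(disks: list[str], num_adapters: int = 4) -> dict[str, int]:
    """Assign disks round-robin across vSCSI adapters so that the I/O
    load is shared among all adapter queues."""
    return {disk: i % num_adapters for i, disk in enumerate(disks)}

layout = spread_disks(["os", "sql_data", "sql_log", "tempdb", "backup"])
# The first four disks each land on their own adapter; the fifth
# wraps around to adapter 0.
```

For a SQL Server VM, this kind of layout keeps the data, log, and tempdb disks on separate adapters instead of queuing all I/O behind controller 0.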
When preparing a VM for a specific workload, such as a SQL Server database workload, be sure to follow VMware best practices for configuring storage, CPU, and memory. Enterprise-class applications are very sensitive to a misconfigured VM. In the end, it's all about performance.