There is a new core storage feature in VMware vSphere 7 that virtually eliminates the first-write performance penalty of thin provisioning. Affinity Manager 2.0 improves write performance to your VMFS storage. Let's take a look at this new feature.
By Brandon Lee

There have been many foundational improvements in VMware vSphere 7. Across the board, VMware has improved the underlying architecture of vSphere with version 7. In this version, core storage is one area that has seen tremendous development and many new features.

Storage performance gets a boost in vSphere 7 due to the new way VMFS resource clusters are managed in this release. VMware's storage affinity manager has been updated and, thanks to these improvements, is now called Affinity Manager 2.0. What is VMware Affinity Manager 2.0 in vSphere 7? Let's first take a closer look at how the previous version of the Affinity Manager worked and then see what has changed.

What is Affinity Manager and how has it worked in previous releases?

First, let's understand how data is written to disk in vSphere. VMFS makes use of a component called Resource Clusters (RC). These are on-disk groups of resources used by VMFS. There are two decisions that have to be made when data is written to disk:

  1. Which resource cluster will be used?
  2. Which resources will be allocated from the RC?

To make this determination, there is a mechanism in vSphere called the Affinity Manager. The Affinity Manager's job, among others, is to allocate resource clusters. The process that maps a resource to a resource cluster is called "affinitizing."

We have to remember that VMFS6 is a shared storage file system. This means that multiple hosts can access the file system at the same time. Resource allocations must be coordinated with the VMFS file layer. The resource manager requests resource clusters from the Affinity Manager.
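The two allocation decisions described above can be sketched as a small Python model. Note that `ResourceCluster` and `affinitize` are illustrative names invented for this example, not actual VMFS internals; this is a minimal sketch of how a per-request search for a suitable resource cluster might look in releases prior to vSphere 7.

```python
from dataclasses import dataclass, field


@dataclass
class ResourceCluster:
    """An illustrative stand-in for an on-disk VMFS resource cluster."""
    rc_id: int
    total: int                                 # total resources in this RC
    allocated: set = field(default_factory=set)

    def free_count(self):
        return self.total - len(self.allocated)


def affinitize(clusters, needed):
    """Decision 1: pick a resource cluster; decision 2: pick resources in it.

    In the sketch, every request walks the cluster list again, which models
    the repeated coordination with the VMFS file layer on each allocation.
    """
    for rc in clusters:                        # decision 1: which RC?
        if rc.free_count() >= needed:
            free = [i for i in range(rc.total) if i not in rc.allocated]
            chosen = free[:needed]             # decision 2: which resources?
            rc.allocated.update(chosen)
            return rc.rc_id, chosen
    raise RuntimeError("no resource cluster with enough free resources")


clusters = [ResourceCluster(0, 4), ResourceCluster(1, 8)]
clusters[0].allocated = {0, 1, 2}              # RC 0 has only 1 resource free
print(affinitize(clusters, 3))                 # skips RC 0, allocates from RC 1
```

The point of the sketch is the search itself: without a precomputed view, each allocation must "find" a suitable resource cluster before anything can be handed to the resource manager.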

Now that we understand how data writes are coordinated in vSphere releases prior to vSphere 7, let's take a look at how things have changed moving forward.

Affinity Manager 2.0

With VMware vSphere 7, the Affinity Manager has been redesigned as Affinity Manager 2.0. VMware has greatly improved how it works, which leads to better performance: the process of allocating resources is now much more efficient.

With Affinity Manager 2.0, VMware minimizes the overhead between the I/O path and VMFS. In the new approach, a "cluster-wide view" of the resource clusters is created, so it is known how the RCs are currently allocated. Disk space is then divided into regions.

High level architecture of the new Affinity Manager 2.0 in vSphere 7 (image courtesy of VMware)

A mapping or "ranking" is created based on how many VMFS blocks are allocated for the current host as well as other hosts in the cluster. This mapping or ranking is referred to as a region map. With the new region map, Affinity Manager 2.0 can quickly obtain and supply the available resource clusters to the resource manager.

Real-world Affinity Manager 2.0 benefits in vSphere 7

As described, the creation of the region map of available resources allows a much more efficient path for writes to the VMFS layer. How does this help in real-world applications of storage in vSphere 7?

There is one area where this is hugely beneficial: thin provisioning. Thin provisioning has been debated seemingly for eons in the virtualization world as an approach to provisioning storage. As a refresher, when provisioning VMDKs for virtual machines, there are three options:

  • Thick-provisioned (lazy zeroed)
  • Thick-provisioned (eager zeroed)
  • Thin-provisioned

What are the differences?

Thick-provisioned (lazy zeroed) disk

With thick-provisioned (lazy zeroed) disks, the disk created assumes all the space at the time of its creation, but the "zeroing out" operation is left until data is written. Disk creation time is very fast since the zeroing process is saved for later; however, "first" writes to this type of disk are slower since the I/O must account for the zeroing operation.

Thick-provisioned (eager zeroed) disk

The thick-provisioned (eager zeroed) disk assumes all the space up front, the same as the lazy zeroed disk. However, it takes the additional step of zeroing out that space at creation time. An eager zeroed disk therefore takes much longer to create initially; however, the first write operation is faster and more efficient since the zeroing process has already been taken care of.


Thin provisioning differs from both thick provisioning types in that it only consumes the disk space it actually needs. So, a 100 GB VMDK with only 10 GB of data written consumes just 10 GB on the datastore.
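The space behavior of the three options can be illustrated with a toy Python model. The function name and the type labels are invented for this example; only the arithmetic reflects the behavior described above:

```python
# Toy model of datastore space consumed by each VMDK provisioning type.
def on_disk_usage(provision_type, vmdk_size_gb, data_written_gb):
    """Space consumed on the datastore for a disk of the given type."""
    if provision_type in ("thick-lazy", "thick-eager"):
        return vmdk_size_gb            # all space is claimed at creation
    if provision_type == "thin":
        return data_written_gb         # only what has actually been written
    raise ValueError(f"unknown provisioning type: {provision_type}")


# The 100 GB VMDK example from the text, with only 10 GB written so far:
print(on_disk_usage("thin", 100, 10))         # 10
print(on_disk_usage("thick-eager", 100, 10))  # 100
```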

There have been various debates about the performance impact of thin provisioning vs. thick provisioning. With modern flash and hybrid storage, the performance gap has been greatly reduced. However, aside from the new storage products on the market, Affinity Manager 2.0 helps level the playing field for thin provisioning as well as the other disk types across the board.

Thin disk provisioning a virtual machine traditionally comes with a first write penalty

How Affinity Manager 2.0 negates the performance impact of first writes

With Affinity Manager 2.0, when the first write I/O happens with the new region map functionality, the resource clusters are already mapped and ranked. This means the resource clusters do not have to be "found" first. With the new region map, Affinity Manager 2.0 knows which resources are available in each resource cluster.

Now, when the resource is requested from the VMFS file layer, it goes straight to the Affinity Manager. It relies on the region map to allocate resources. It places a lock on the resources, verifies they are free, and then hands them over to the resource manager.
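This fast path can be sketched as follows. The class name `AffinityManager2` and the locking shape are assumptions for illustration, not ESXi code; the point is that allocation consults a precomputed ranking rather than searching, with a lock only around the final verify-and-claim step:

```python
import threading

# Hedged sketch of the vSphere 7 fast path: the region map is consulted
# first, so no search over on-disk RCs is needed per request.
class AffinityManager2:
    def __init__(self, region_map, free_resources):
        self.region_map = region_map   # RC ids, best candidates first
        self.free = free_resources     # rc_id -> set of free resource ids
        self.lock = threading.Lock()

    def allocate(self, needed):
        for rc_id in self.region_map:  # lookup order comes from the ranking
            with self.lock:            # lock, verify still free, then claim
                if len(self.free[rc_id]) >= needed:
                    chosen = [self.free[rc_id].pop() for _ in range(needed)]
                    return rc_id, chosen   # hand over to the resource manager
        raise RuntimeError("no resource cluster has enough free resources")


mgr = AffinityManager2([1, 0], {0: {0, 1}, 1: {0, 1, 2, 3}})
rc, resources = mgr.allocate(2)
print(rc, len(resources))
```

The contrast with the earlier per-request search is that the loop here iterates over an already-ranked list, so the best candidate is normally found on the first try.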

This is a much more efficient path for file I/O writes. With this in place, the impact of the "first write," which is known to be a drawback of thin provisioning as well as lazy zero disks, is basically nullified. With vSphere 7, this may very well put to rest the notion of performance degradation for thin-provisioned disks on the first write operation.

Wrapping up

There are a lot of improvements in VMware vSphere 7 across the board. The new storage improvements will deliver a welcome performance boost for most organizations moving to the new vSphere 7 architecture. With Affinity Manager 2.0 and its new region map, a much more efficient way of handling file write operations is introduced.


The new capabilities of Affinity Manager 2.0 help to eliminate the performance-robbing "back and forth" file operations between the VMFS file system and the resource manager. This will no doubt change the perception of thin provisioning, since the first-write performance penalty is virtually eliminated by the new, efficient Affinity Manager 2.0.


© 4sysops 2006 - 2023