Faulty VMware infrastructure configuration is the main reason for bad VM performance. The wrong choice of drivers or incorrectly dimensioned hardware can also affect VMware performance. In this post, I'll discuss the major configuration mistakes to steer clear of if you want to avoid performance headaches.

Unsupported VMware ESXi hardware

Nothing is worse than installing VMware ESXi on unsupported hardware. While it might work, it will definitely affect stability and performance. Quite a few components just might not be suitable for running VMware ESXi and virtualized workloads, starting with storage controllers and network cards. An unsupported chipset can be another source of problems, causing slowdowns or purple screens of death (PSODs).

Here I have a single piece of advice. If you (still) want to run VMware ESXi on unsupported hardware, use it only for non-critical or non-production workloads such as monitoring software.

Before installing or upgrading to a version of ESX or ESXi, it's important to confirm that the hardware of the target system is compatible with the hypervisor. Usually the best place to find this out is to go to the VMware Hardware Compatibility List (HCL) web page.
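
If you prefer to collect that inventory with a script rather than clicking through each host, the sketch below pulls the PCI vendor and device IDs from an ESXi host over SSH so you can search for them in the compatibility guide. It's only a minimal example: the hostname and credentials are placeholders, SSH must be enabled on the host, and the field labels reflect typical esxcli hardware pci list output, so verify them on your own build.

    # Minimal sketch: list PCI vendor/device IDs from an ESXi host over SSH
    # so they can be looked up in the VMware Compatibility Guide.
    # Hostname and credentials are placeholders; SSH must be enabled on the
    # ESXi host, and the exact field labels printed by
    # "esxcli hardware pci list" should be verified on your build.
    import paramiko

    HOST, USER, PASSWORD = "esxi01.lab.local", "root", "********"

    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(HOST, username=USER, password=PASSWORD)

    _, stdout, _ = ssh.exec_command("esxcli hardware pci list")
    wanted = ("Vendor ID", "Device ID", "SubVendor ID",
              "SubDevice ID", "Vendor Name", "Device Name")
    for line in stdout:
        line = line.strip()
        if line.startswith(wanted):   # keep only the ID and name fields
            print(line)

    ssh.close()

The VID, DID, SVID, and SSID values are what the I/O device search on the compatibility guide asks for.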

VMware compatibility list web page

VMs with screensavers enabled

I know many environments where desktop VMs, part of virtual desktop infrastructure (VDI) workloads, have active screensavers. While a screensaver is convenient for the end user, it certainly consumes datacenter resources.

It's OK for a few VMs, but at scale, it can drain a lot of power from your physical CPUs. It's a best practice to turn off screensavers on VMs. You can create a global Group Policy Object (GPO) that enforces screensaver deactivation.
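
If you want to confirm inside a guest that the policy actually took effect, a small check like the one below reads the usual registry locations for the screensaver setting. The paths shown are the standard policy and per-user keys; treat them as assumptions and confirm them in your environment.

    # Quick check (run inside a Windows guest) of whether a screensaver is
    # active, either through the screensaver GPO or the per-user setting.
    # The registry paths are the usual locations for these values; confirm
    # them in your own environment.
    import winreg

    def screensaver_active() -> bool:
        paths = [
            r"Software\Policies\Microsoft\Windows\Control Panel\Desktop",  # GPO-enforced
            r"Control Panel\Desktop",                                      # per-user setting
        ]
        for path in paths:
            try:
                with winreg.OpenKey(winreg.HKEY_CURRENT_USER, path) as key:
                    value, _ = winreg.QueryValueEx(key, "ScreenSaveActive")
                    return value == "1"   # the policy key wins when it exists
            except OSError:
                continue                  # key or value missing, try the next path
        return False

    print("Screensaver active:", screensaver_active())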

VDI with unoptimized VMs

Another common mistake is running VDI desktops from an unoptimized base image. You can use the VMware OS Optimization Tool (OSOT) to optimize the base image of your VDI. This free tool makes the VDI golden image or the Remote Desktop Session Host (RDSH) server golden image "lighter" and faster, with the fewest possible resources active.

Thus, when you create your pool of VDI desktops by cloning the optimized base image, every desktop is already optimized. You can find the VMware OSOT at VMware Labs.

VMware OS Optimization Tool

Managing VMs from the console

If you're managing a larger infrastructure, use a central console to manage your servers, because opening the console on each server individually consumes valuable resources.

Logging in and opening sessions increases memory consumption on each server. You can avoid this by managing the resources and functions of the remote VM(s) through a central console installed on your management workstation.

Simply create a personalized Microsoft Management Console (MMC) and add the different snap-ins for different servers of your infrastructure into that single console.

You can also use Windows Admin Center. Windows Admin Center (previously called Project Honolulu) is a flexible, lightweight, browser-based, locally deployed platform for Windows Server management, used for troubleshooting, configuration, and maintenance.

Windows Admin Center

You should really stop remoting into your domain controller (DC) to manage Active Directory or logging in to your Exchange or SQL Server machines to manage them.

Wrong Windows power options configuration

Previous versions of Windows Server came preinstalled with the power plan set to "Balanced" instead of "High performance."

Enable the high performance power option

Make sure all of your servers have this option active. If not, you are losing some performance. You can easily correct this.

While you might save on your power bill, you'll definitely lose some overall performance if you leave the plan on Balanced.
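
If you want to check or correct this from a script on individual servers, the sketch below simply shells out to powercfg. The GUID used is the well-known "High performance" scheme; verify it on your own builds with powercfg /list.

    # Check the active Windows power plan and switch it to High performance
    # if needed. The GUID is the well-known High performance scheme
    # (SCHEME_MIN); verify it with "powercfg /list" on your own servers.
    import subprocess

    HIGH_PERFORMANCE = "8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c"

    def ensure_high_performance() -> None:
        current = subprocess.run(["powercfg", "/getactivescheme"],
                                 capture_output=True, text=True,
                                 check=True).stdout.lower()
        if HIGH_PERFORMANCE in current:
            print("High performance plan is already active.")
        else:
            print("Switching to the High performance plan...")
            subprocess.run(["powercfg", "/setactive", HIGH_PERFORMANCE],
                           check=True)

    if __name__ == "__main__":
        ensure_high_performance()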

Note: You can definitely control this via Group Policy Objects (GPO) through your domain. Simply create a new GPO and go to Computer Configuration > Policies > Administrative Templates > System > Power Management.

Once there, you should see a policy called "Select an active power plan." Make sure it's enabled and set to High performance.

Oversized VMs

A very common mistake in virtual environments is that some VMs have too many virtual CPUs (vCPUs) assigned.

But VMware's best practices say the exact opposite: a VM should have the fewest vCPUs necessary, because the fewer vCPUs it has, the easier it is for the ESXi CPU scheduler to find enough free physical cores to schedule the VM and give it CPU time.

Some guidelines:

  • Start with one vCPU per VM and increase as needed
  • Do not assign more vCPUs than needed to a VM as this can unnecessarily limit resource availability for other VMs running on the host and also increase the CPU Ready wait time.
One vCPU configuration on a VM

To check whether your VM has too many vCPUs and whether your ESXi host has trouble scheduling CPU resources, use esxtop and look at the %RDY values. The VMware CPU Ready metric shows the percentage of time the VM was ready to run but could not get scheduled on a physical CPU.
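
If you read CPU Ready from the vCenter performance charts instead of esxtop, it's reported as a summation in milliseconds per sample interval, so you have to convert it to a percentage yourself. Here is a small helper based on VMware's published conversion (real-time charts use a 20-second interval); as a common rule of thumb, sustained values above roughly 5% per vCPU deserve a closer look.

    # Convert a vCenter "CPU Ready" summation value (milliseconds per sample
    # interval) into a percentage, averaged across the VM's vCPUs.
    # Real-time charts use a 20 s interval; historical rollups use 300 s,
    # 1800 s, and so on -- pass the interval that matches the chart you read.
    def cpu_ready_percent(summation_ms: float,
                          interval_s: int = 20,
                          num_vcpus: int = 1) -> float:
        return summation_ms / (interval_s * 1000 * num_vcpus) * 100

    # Example: 2,000 ms of CPU Ready in a 20 s real-time sample on a 4-vCPU VM
    print(cpu_ready_percent(2000, interval_s=20, num_vcpus=4))   # 2.5 (%)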

A typical case is a VM that has more compute resources assigned than it requires, for example, a VM configured with 4 vCPUs that uses no more than 15 to 20% of its CPU. Such a VM is oversized: you could reduce the number of vCPUs to 2 and roughly double the per-vCPU utilization to around 40%. Or, even better, you could reduce the number of vCPUs to 1 and have the VM use about 80% of that vCPU.
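
To find such candidates across a whole vCenter, a short pyVmomi script can list powered-on VMs that have several vCPUs but low current CPU usage. This is only a sketch: the vCenter name and credentials are placeholders, it looks at a point-in-time value (quickStats), and the 4-vCPU/20% thresholds are arbitrary, so base real right-sizing decisions on historical statistics.

    # Sketch: flag powered-on VMs with several vCPUs but low current CPU usage.
    # vCenter name and credentials are placeholders; quickStats is a point-in-
    # time value, so check historical stats before actually resizing anything.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # lab only; use valid certs in production
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="********", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            if vm.runtime.powerState != "poweredOn":
                continue
            vcpus = vm.config.hardware.numCPU
            used_mhz = vm.summary.quickStats.overallCpuUsage or 0   # current usage in MHz
            max_mhz = vm.runtime.maxCpuUsage or 1                   # vCPUs x physical core speed
            pct = 100.0 * used_mhz / max_mhz
            if vcpus >= 4 and pct < 20:
                print(f"{vm.name}: {vcpus} vCPUs at ~{pct:.0f}% CPU -- right-sizing candidate")
    finally:
        Disconnect(si)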


I hope you find some of these tips useful and that they'll help you manage your VMware vSphere infrastructure.

2 Comments
  1. @Vladan, thank you for your excellent article. Just a clarification about vCPUs: I am interested to understand the impact of the number of cores per vCPU as opposed to increasing the number of vCPUs.

  2. Vladan Seget (Author)

    When you configure a vCPU on a VM, that vCPU is actually a virtual core, not a virtual socket. By default, vSphere presents each vCPU to the guest operating system as a single-core CPU in its own socket.

    VMware introduced multi-core virtual CPUs back in vSphere 4.1 (I think) in order to work around the socket restrictions imposed by certain operating systems.

    Example: Windows Server 2008 Standard is limited to 4 physical CPUs (sockets), so it cannot use more than 4 vCPUs presented as one core each. But as a workaround, you can configure the VM with one virtual socket and eight cores per socket.

    There is no performance difference between using virtual cores and virtual sockets.

