Unsupported VMware ESXi hardware ^
Nothing is worse than using unsupported hardware to install VMware ESXi. While it might work, this will definitely affect stability and performance. Quite a few components just might not be suitable for running VMware ESXi and virtualized workloads, starting with storage controllers and network cards. Unsupported chipset hardware can be another vector of problems causing slowdowns or purple screens of death (PSODs).
Here I have a single piece of advice. If you (still) want to run VMware ESXi on unsupported hardware, use it only for non-critical or non-production workloads such as monitoring software.
Before installing or upgrading to a version of ESX or ESXi, it's important to confirm that the hardware of the target system is compatible with the hypervisor. Usually the best place to find this out is to go to the VMware Hardware Compatibility List (HCL) web page.
VMs with screensavers enabled ^
I know many environments where desktop VMs, part of virtual desktop infrastructure (VDI) workloads, have active screensavers. While it's convenient for the end user, it's certainly a consumer of datacenter resources.
It's OK for a few VMs, but at scale, screensavers can drain a lot of cycles from your physical CPUs. It's a best practice to turn off screensavers on VMs. You can create a global Group Policy Object (GPO) that enforces screensaver deactivation.
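Before rolling out such a GPO, you can mirror its effect on a single test desktop. A hedged sketch, assuming an elevated PowerShell prompt on the test VM; the key shown is the standard Windows screensaver policy value that the corresponding GPO setting writes:

```shell
# Disable the screensaver via the policy value a GPO would enforce.
# Run in an elevated PowerShell prompt on a test VM first.
reg add "HKCU\Software\Policies\Microsoft\Windows\Control Panel\Desktop" /v ScreenSaveActive /t REG_SZ /d 0 /f
```

Once verified, apply the equivalent setting domain-wide through the GPO rather than per-machine registry edits.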
VDI with unoptimized VMs ^
Unoptimized desktop images waste resources across every clone in a VDI pool. To avoid this, use the VMware OS Optimization Tool (OSOT), which optimizes the base image of your VDI. This free tool makes the VDI golden image or the Remote Desktop Session Host (RDSH) server golden image "lighter" and faster, with the fewest possible resources active.
Thus, when you create your pool of VDI desktops by cloning the optimized base image, you already have each desktop optimized. You can find the VMware OSOT here at VMware Labs.
Managing VMs from the console ^
If you're managing a larger infrastructure, use a central console to manage your servers, because opening a console session on each server individually consumes valuable resources.
Every login and open session increases memory consumption on the server. You can avoid this by managing the remote VMs' resources and functions from a central console installed on your management workstation.
Simply create a personalized Microsoft Management Console (MMC) and add the different snap-ins for different servers of your infrastructure into that single console.
You can also use Windows Admin Center (previously called Project Honolulu), a lightweight, browser-based, locally deployed platform for Windows Server management that covers troubleshooting, configuration, and maintenance.
You should really stop remoting to your Domain Controller (DC) to manage Active Directory or logging into Exchange/SQL Server to manage the server.
Wrong Windows power options configuration ^
Windows Server comes preinstalled with the power options set to "Balanced" instead of "High Performance."
Make sure all of your servers have the "High Performance" plan active. If not, you are losing some performance, and you can easily correct this.
While the "Balanced" plan might save on your power bill, you'll definitely lose some overall performance if you leave it enabled.
Note: You can control this centrally via a Group Policy Object (GPO) in your domain. Simply create a new GPO and go to Computer Configuration > Policies > Administrative Templates > System > Power Management.
Once there, you should see a policy called "Select an active power plan." Enable it and set it to "High Performance."
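For a quick check or fix on an individual server, powercfg does the same job from an elevated prompt. A hedged sketch; the GUID shown is the built-in High Performance plan on default Windows installs, so verify it with powercfg /list on your own systems:

```shell
# Show the currently active power plan
powercfg /getactivescheme

# Switch to the built-in High Performance plan (default GUID on stock installs)
powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c
```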
Oversized VMs ^
A very common mistake in virtual environments is assigning VMs more virtual CPUs (vCPUs) than they need.
VMware's best practices say the opposite: a VM should have the fewest vCPUs necessary, because the fewer vCPUs a VM has, the easier it is for the ESXi CPU scheduler to find enough free physical cores to give the VM CPU time.
- Start with one vCPU per VM and increase as needed
- Do not assign more vCPUs than needed to a VM as this can unnecessarily limit resource availability for other VMs running on the host and also increase the CPU Ready wait time.
To check whether your VM has too many vCPUs and whether your ESXi host has trouble allocating CPU resources, use esxtop and look at the %RDY column. The CPU Ready metric shows the percentage of time the VM was ready to run but could not be scheduled on a physical CPU.
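In vCenter performance charts, CPU Ready is exposed as a summation value in milliseconds rather than the percentage esxtop shows. To convert it, divide the summation by the chart's refresh interval; a minimal sketch, assuming the 20-second interval that real-time charts use:

```shell
# Convert a CPU Ready summation value (ms) into a percentage.
# Real-time vCenter charts refresh every 20 seconds (20,000 ms).
summation_ms=1000   # example value read from the performance chart
interval_ms=20000
awk -v s="$summation_ms" -v i="$interval_ms" \
    'BEGIN { printf "CPU Ready: %.1f%%\n", (s / i) * 100 }'
# prints "CPU Ready: 5.0%"
```

As a rough rule, sustained CPU Ready values in the several-percent range per vCPU are worth investigating.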
A typical case is a VM with more compute resources assigned than it requires: for example, a VM configured with 4 vCPUs that uses no more than 15 to 20% of its CPU. The VM is oversized, so you could reduce the number of vCPUs to 2 and raise per-vCPU utilization to about 40%. Or, even better, reduce it to 1 vCPU and have the VM use about 80% of that vCPU.
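The right-sizing arithmetic above can be sketched in a couple of lines: the VM's total demand stays constant, so per-vCPU utilization scales inversely with the vCPU count (the 80% figure is the illustrative demand from the example, expressed as a percentage of one vCPU):

```shell
# Per-vCPU utilization for a fixed workload at different vCPU counts.
total_demand_pct=80   # total demand, as % of one vCPU's capacity
for vcpus in 4 2 1; do
  echo "${vcpus} vCPU(s): $(( total_demand_pct / vcpus ))% per vCPU"
done
# prints:
# 4 vCPU(s): 20% per vCPU
# 2 vCPU(s): 40% per vCPU
# 1 vCPU(s): 80% per vCPU
```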
I hope you find some of these tips useful and that they'll help you manage your VMware vSphere infrastructure.