Hyper-V, Microsoft’s answer to VMware ESXi/vSphere, has now been around for a few years and is (arguably) catching up on the feature front. Whether the lower price point and familiar management tools will win businesses over remains to be seen, but one thing is certain: when you’ve shelled out for that (or, more likely, those) beefy host servers, you want the best performance you can get.
In this article series we’ll first look at design ideas for a Hyper-V environment, then at some performance tips and gotchas, and finally at recommendations for measuring performance.
One of the first hurdles to overcome when you’re designing your Hyper-V infrastructure is working out how many VMs will fit on each host. Today the main limiting factor for VM density is memory, and this will still be true after Dynamic Memory arrives in SP1 for Windows Server 2008 R2. Another question is how to assign processor power to different VMs.
Virtual processors
When Microsoft’s documentation talks about “logical” processors (LPs), it’s referring to physical cores in a system; virtual processors (VPs) are, of course, what the VMs see. Different guest operating systems can take advantage of different numbers of virtual processors: Windows Server 2008 and 2008 R2 can use up to four, Windows Server 2003 can use two, and Red Hat Enterprise Linux and SUSE Linux Enterprise can use up to four. On the client side, Windows 7 can use four, while Vista and XP SP3 can each use two; for more details, see here. Hyper-Threading (HT) in Intel processors makes each core look like two. The performance penalty that early versions of HT carried is gone, but HT doesn’t double your CPU capacity, so when calculating your logical-to-virtual CPU ratio, count each physical core as an LP, not each HT thread.
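To see how many true cores (LPs) a host has versus how many threads HT exposes, you can query WMI. This is a minimal sketch using the standard Win32_Processor class; run it in an elevated PowerShell prompt on the host:

```powershell
# Sum cores and HT threads across all physical processor packages.
# Count the cores (not the HT threads) as your LPs.
$procs   = Get-WmiObject Win32_Processor
$cores   = ($procs | Measure-Object -Property NumberOfCores -Sum).Sum
$threads = ($procs | Measure-Object -Property NumberOfLogicalProcessors -Sum).Sum
Write-Host "Physical cores (LPs): $cores"
Write-Host "HT threads exposed:   $threads"
```

On an HT-capable Intel host the second number will typically be double the first; it’s the first number you should use in your VP:LP ratio.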
Should you assign one VP to each VM, or more – perhaps the maximum the OS supports? There is some overhead to having multiple VPs in a VM due to cross-processor communication, but the penalty shrinks with each version of Windows. For 2008 R2 VMs it’s therefore safe to go with four VPs, whereas for Windows 2003 VMs you should really test under normal workloads to see whether they need more than one VP.
According to Microsoft, as a general rule of thumb it’s best to have no more than four virtual processors per logical processor in the system; the maximum is eight. The question, of course, is how to find out the ratio on your hosts. You could manually check the number of allocated virtual CPUs for each VM and compare that against the number of cores in your physical processor(s), a process that doesn’t sound very palatable – especially not in a cluster, where VMs might move around a lot, changing your ratio all the time. The simpler answer is a one-line PowerShell command that calculates the ratio for you.
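A minimal sketch of such a one-liner (split here for readability), assuming the v1 Hyper-V WMI provider in the root\virtualization namespace that ships with Windows Server 2008/2008 R2:

```powershell
# VP:LP ratio = total virtual processors across all VMs on this host,
# divided by the host's physical core count.
# Assumes the root\virtualization (v1) Hyper-V WMI namespace.
$vps = @(Get-WmiObject -Namespace root\virtualization -Class Msvm_Processor).Count
$lps = (Get-WmiObject Win32_Processor | Measure-Object -Property NumberOfCores -Sum).Sum
Write-Host ("VP:LP ratio is {0}:1" -f ($vps / $lps))
```

In a cluster you’d run this per node (or remote it with `-ComputerName`), since live migrations change the ratio on each host.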
If you’re buying new metal and you have the choice, make sure your processors support Second Level Address Translation (SLAT); AMD refers to this as Rapid Virtualization Indexing (RVI), earlier called Nested Page Tables (NPT), while Intel calls it Extended Page Tables (EPT). Without SLAT, each VM takes up an extra 10–30 MB of memory and overall processor utilization rises by about 10%. SLAT can make a huge difference for some workloads; Remote Desktop Services / Terminal Services, for instance, can see up to 40% more sessions on a VM running on a SLAT-capable host. Also look for processors with large L2 and L3 caches; this helps VMs running applications with large working sets.
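If you’re not sure whether an existing server’s processors support SLAT, the Sysinternals Coreinfo tool can tell you; its -v switch reports virtualization-related features, including EPT on Intel and NPT on AMD:

```powershell
# Run from an elevated prompt on the host (Coreinfo is a free
# Sysinternals download, not built into Windows).
coreinfo -v
```

A "*" next to EPT (Intel) or NPT (AMD) in the output indicates the processor supports SLAT.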
In the next part of this series I will discuss memory, networking, and storage.