In part 2 of this series on Hyper-V performance tuning, we continue looking at the design of your Hyper-V farm and the need to balance budget against resource needs; you'll want to design with a good mix of memory, disk, and networking resources to maximize performance.
Optimizing memory for VMs is a challenge in today's Hyper-V because the memory you assign to each VM is fixed, whether the VM actually uses it or not. The good news is that this will become a whole lot easier when Microsoft releases SP1 for Windows Server 2008 R2 and Dynamic Memory comes into play. There's a registry key you can use to tweak the memory reserve for the host, although normally you shouldn't have to touch it; it's located at HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\MemoryReserve. This key is a REG_DWORD that you have to create yourself; the default decimal value is 32 (for 32 MB) and the maximum is 1024. If you follow best practice and run minimal services in the host, the default should work fine.
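If you do decide you need to change the reserve, creating the value from an elevated PowerShell prompt would look something like the sketch below; the path and value name are as described above, but the value of 64 is only an example:

```powershell
# Create the MemoryReserve value (REG_DWORD, value in MB; default 32, maximum 1024).
# Run from an elevated prompt; the change typically takes effect after a host reboot.
New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization" `
    -Name "MemoryReserve" -PropertyType DWord -Value 64
```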
Microsoft recommends setting the start-up RAM for Windows Server 2008 R2, Windows Server 2008, Windows 7, and Vista to 512 MB, whereas Windows Server 2003 and XP should start with 128 MB.
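As a sketch, using the Hyper-V PowerShell module that ships with later Windows Server releases (on 2008 R2 itself you would set this through Hyper-V Manager or WMI), and with hypothetical VM names:

```powershell
# Set 512 MB start-up RAM for a Windows Server 2008 R2 guest
# and 128 MB for a Windows Server 2003 guest (VM names are examples).
Set-VMMemory -VMName "Web2008R2" -StartupBytes 512MB
Set-VMMemory -VMName "App2003" -StartupBytes 128MB
```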
My testing indicates that the issues that sometimes caused problems in earlier versions of Hyper-V, where there wasn't enough memory reserved for the host on a heavily loaded system, are solved in SP1.
Storage is always a tricky part of server design, and no less so in the virtual world. The main issue with any virtualization platform (unless you're using pass-through disks) is that any disk optimization an application does is lost, because the "hard drive" the application sees is actually a huge file stored on a drive. This makes random IO latency increase. An example that comes to mind is Exchange 2010 databases, which are optimized for sequential IO; all that optimization is lost in the world of VHDs. The only way to work around this issue is to increase the speed and decrease the latency of the underlying storage system; the best solution (albeit not the cheapest) is to employ SSDs for VHD storage, especially if you have multiple VHDs on the same drive.
Another design decision is which type of VHD to use: fixed size or dynamic. The golden rule used to be to use fixed virtual disks, as they were much faster than dynamic disks. The performance gap is closing, however, and dynamic disks are very close to fixed disks in Hyper-V 2008 R2, as are differencing disks. There is still some overhead involved when a VHD has to grow, not to mention the possibility of running out of disk space due to oversubscribed storage, so sticking with fixed disks in production environments is still best practice.
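Creating a fixed disk can be sketched with the New-VHD cmdlet from the Hyper-V module in later Windows Server releases (on 2008 R2 you would use the New Virtual Hard Disk wizard instead); the path and size here are examples:

```powershell
# Create a 100 GB fixed-size virtual disk (path and size are examples).
# A fixed disk allocates all its space up front, avoiding grow operations
# and the risk of oversubscribed storage running out of space.
New-VHD -Path "D:\VHDs\sql-data.vhd" -SizeBytes 100GB -Fixed
```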
A mistake I sometimes see is looking mostly at memory and processing power when sizing a Hyper-V platform. Networking is very important, and allocating enough network capacity to VMs is vital, especially if your VMs are busy file or database servers. If you can afford it, 10 Gb Ethernet is a fantastic way of allocating enough pipe. If you're employing iSCSI for VHD storage, use jumbo frames and unbind unnecessary services such as File Sharing and DNS from the iSCSI network card(s). Another important recommendation for network performance is to make sure that the NICs you use are server class; cheaper NICs can cause many problems. Server-class NICs also enable the new NIC features offered in 2008 R2, such as Virtual Machine Queues (VMQ) and TCP Chimney Offload. Enable VMQ only for VMs with heavy inbound network traffic.
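The iSCSI NIC tuning above can be sketched as follows, using the NetAdapter and Hyper-V cmdlets from later Windows Server releases (on 2008 R2 you would use the adapter's property pages and netsh instead); the adapter and VM names are examples:

```powershell
# Enable jumbo frames on the dedicated iSCSI NIC (the property display name
# and supported values vary by driver - check Get-NetAdapterAdvancedProperty).
Set-NetAdapterAdvancedProperty -Name "iSCSI1" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"

# Unbind File and Printer Sharing and the client redirector from the iSCSI NIC.
Disable-NetAdapterBinding -Name "iSCSI1" -ComponentID ms_server
Disable-NetAdapterBinding -Name "iSCSI1" -ComponentID ms_msclient

# Give VMQ weight only to a VM with heavy inbound traffic (VM name is an example).
Set-VMNetworkAdapter -VMName "FileServer1" -VmqWeight 100
```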
In my next post I will list some Hyper-V performance tuning tips and tricks.