Host servers that will be clustered need to meet all of the same base requirements as Hyper-V hosts that will not be clustered. There are also some additional hardware and software considerations.
Windows Server 2008 R2 SP1 (Hyper-V version 2) introduced dynamic memory. Rather than assigning a static amount of memory to a virtual machine, an admin assigns a startup and a maximum allocation. When the VM starts, the startup amount of memory is allocated and reserved. Whenever the memory used by the VM encroaches on what is currently allocated, additional memory is added to the VM, up to the maximum. Later, unused memory can be returned to the pool for dynamic assignment to another VM.
As mentioned in my series about upgrading from Hyper-V version 1.0 to 2.0, one big advantage of dynamic memory is that all of a host's memory can be utilized under normal circumstances. If a host fails and its VMs fail over, memory can be reassigned on the surviving hosts. Before dynamic memory, host memory in a cluster environment would often sit unused so that VMs would have memory available in the case of failover.
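To make the grow-and-reclaim behavior concrete, here is a minimal Python sketch of the idea. This is a hypothetical simulation, not a real Hyper-V API: the class name, the buffer percentage, and the method names are all assumptions made for illustration.

```python
# Hypothetical sketch of dynamic memory behavior (not a real Hyper-V API):
# a VM starts at its startup allocation and grows toward the maximum as
# demand encroaches on the current assignment; unused memory can later be
# reclaimed so the host can hand it to another VM.

class DynamicMemoryVM:
    def __init__(self, startup_mb, maximum_mb, buffer_pct=20):
        self.startup_mb = startup_mb
        self.maximum_mb = maximum_mb
        self.buffer_pct = buffer_pct      # headroom kept above actual demand
        self.assigned_mb = startup_mb     # what the host has given the VM
        self.demand_mb = 0                # what the guest is actually using

    def report_demand(self, demand_mb):
        """Grow the assignment when demand plus buffer exceeds it."""
        self.demand_mb = demand_mb
        target = int(demand_mb * (1 + self.buffer_pct / 100))
        self.assigned_mb = max(self.startup_mb,
                               min(self.maximum_mb,
                                   max(self.assigned_mb, target)))

    def reclaim(self):
        """Return unused memory to the host pool; report how much was freed."""
        target = int(self.demand_mb * (1 + self.buffer_pct / 100))
        keep = max(self.startup_mb, min(self.maximum_mb, target))
        freed = max(0, self.assigned_mb - keep)
        self.assigned_mb -= freed
        return freed

vm = DynamicMemoryVM(startup_mb=1024, maximum_mb=8192)
vm.report_demand(3000)   # assignment grows to demand plus 20% buffer
print(vm.assigned_mb)    # 3600
vm.report_demand(1500)   # demand drops, but nothing shrinks until reclaim
print(vm.reclaim())      # 1800 MB returned to the host pool
```

The reclaim step is what matters for clustering: memory freed from one VM is what lets a failed-over VM from another host start up without pre-reserved idle RAM.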
At least three network cards should be in each host. One should be used to allow clients to connect and needs to be able to communicate with Active Directory. Another high-performance NIC, the SAN NIC, should be used exclusively for SAN traffic; it needs at least Gigabit speeds to perform well. A third NIC should be dedicated to heartbeat traffic.
Depending on your environment and expected server load, separate physical network cards may not be necessary. A small setup could get away with using onboard NICs and one additional two-port NIC.
Live Migration is also an important feature to keep in mind when determining the number of network cards (or ports) needed. Whether you need to plan for an extra NIC or port for Live Migration also depends on your setup. Fortunately, Microsoft has provided some guidelines for NIC configuration depending on the number available. See the Hyper-V: Live Migration Network Configuration Guide for more information.
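A quick way to sanity-check a design is to list each NIC or port and the role it will carry, then confirm every recommended role is covered. The following Python sketch is a hypothetical planning helper, not a Microsoft tool; the role names and the example port names are assumptions.

```python
# Hypothetical planning helper (not a Microsoft tool): verify that a host's
# NIC/port assignments cover the traffic roles recommended for a cluster.

REQUIRED_ROLES = {"client", "san", "heartbeat"}

def check_nic_plan(assignments):
    """assignments maps a NIC/port name to the traffic role it carries.
    Returns a list of warnings; an empty list means the plan looks OK."""
    roles = set(assignments.values())
    notes = []
    missing = sorted(REQUIRED_ROLES - roles)
    if missing:
        notes.append("missing dedicated roles: " + ", ".join(missing))
    if "live_migration" not in roles:
        notes.append("no port reserved for Live Migration; "
                     "it will share another role's port")
    return notes

# A small setup using two onboard NICs plus one additional two-port card:
plan = {"onboard1": "client", "onboard2": "heartbeat",
        "pcie_port1": "san", "pcie_port2": "live_migration"}
print(check_nic_plan(plan))   # [] -- all roles covered
```

Dropping the second PCIe port from the plan would flag Live Migration as sharing a port, which may be acceptable in a small environment but should be a deliberate choice.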
Special consideration should be taken to ensure you don’t have any bottlenecks at the hardware level. An immense amount of traffic will pass through the NIC between the hosts and the SAN. It’s important to make sure that the SAN traffic coming in through the NIC isn’t hampered by a motherboard that isn’t designed to accommodate large amounts of throughput between a PCI Express port and other hardware components. It might be helpful to obtain a diagram of the motherboard to verify that the pathways from the NIC are optimized.
In this post I covered some of the additional host server hardware considerations for clustering Hyper-V. My next post will cover how to begin setting up Hyper-V clustering now that all the pieces are in place.