In the last article, I discussed how a SAN is required as one of the pieces of the Hyper-V cluster environment. This article outlines the three main types of switching: SAN, heartbeat, and client access. Each of the three should be on its own subnet.

Gigabit or faster SAN switching

The choice of switch depends on which SAN solution is used, as well as the number of hosts and virtual machines the switching solution will support.

Hyper-V cluster - Switch diagram

Here are two scenarios:

Scenario 1: Small to Medium business with a small budget

  • A small business might have two clustered Hyper-V servers with a total of 4-8 virtual machines split between them. In this case, a relatively low-budget Gigabit switch could suffice to support SAN traffic. It might also double as a device to support heartbeat traffic on a different VLAN. Any solution should be tested as thoroughly as possible before it is used in production.

Scenario 2: Large business with a large budget

  • A larger business might have switching integrated into the SAN hardware. In this case, switching would likely be 10 Gb/s or faster to accommodate very large bandwidth needs. If switches are not integrated, take great care to ensure the switches meet the throughput requirements of the workload. Switch vendors offer products designed specifically for virtualization in the data center. Larger environments also introduce the possibility of virtual networking.

Switches that will carry SAN traffic should support jumbo frames, TCP/IP offload, and receive-side scaling. Jumbo frames allow the use of 9000-byte frames instead of the typical 1500 bytes, which can drastically reduce per-frame overhead for the SAN. TCP/IP offload features move some of the protocol processing from the host processor to the network adapter. Receive-side scaling allows the processing of incoming traffic to be spread across more than one processor core.
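To make the overhead reduction concrete, here is a rough back-of-the-envelope sketch in Python. The 78-byte per-frame figure (Ethernet header, FCS, preamble, and inter-frame gap) and the 40-byte IP/TCP header assumption are approximations for illustration, not exact values for any particular SAN:

```python
# Rough arithmetic showing why jumbo frames cut per-frame overhead.
# PER_FRAME_OVERHEAD approximates Ethernet header + FCS + preamble +
# inter-frame gap; ip_tcp_headers approximates IPv4 + TCP headers.

PER_FRAME_OVERHEAD = 78  # bytes on the wire per frame (approximate)

def frames_and_overhead(payload_bytes, mtu, ip_tcp_headers=40):
    """Return (frame_count, total_overhead_bytes) for a given MTU."""
    data_per_frame = mtu - ip_tcp_headers   # payload carried per frame
    frames = -(-payload_bytes // data_per_frame)  # ceiling division
    return frames, frames * PER_FRAME_OVERHEAD

one_gb = 10**9
std_frames, std_oh = frames_and_overhead(one_gb, 1500)
jumbo_frames, jumbo_oh = frames_and_overhead(one_gb, 9000)

print(f"1500-byte MTU: {std_frames} frames, ~{std_oh / 10**6:.0f} MB overhead")
print(f"9000-byte MTU: {jumbo_frames} frames, ~{jumbo_oh / 10**6:.0f} MB overhead")
```

Under these assumptions, moving 1 GB with a 9000-byte MTU takes roughly one sixth as many frames as with 1500 bytes, and the framing overhead shrinks proportionally.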

A word of caution: enabling these features does not always lead to better performance. As mentioned previously, test different combinations of these features until you find the one that performs best.

Heartbeat switching

Regardless of its size, a Windows cluster requires a reliable means of maintaining a heartbeat. Heartbeat traffic should be isolated from potential bottlenecks at all times. If the cluster cannot maintain a reliable heartbeat, the cluster cannot survive.

Switching for the heartbeat does not have to be Gigabit. The only major requirement for the heartbeat is very low latency, so even an older 10 Mb/s switch could do the job. For more information, see Recommended private heartbeat configuration on a cluster server. When cluster nodes are separated by a WAN, the heartbeat interval may need to be adjusted. By default, a heartbeat is sent every 1.2 seconds; when two beats are missed, clustering will attempt to fail over resources from the node it deems missing. If you have only two hosts, a crossover cable could be used instead of a switch.
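The failure-detection math above can be sketched in a few lines of Python. The 1.2-second interval and two-missed-beats threshold come from the defaults described here; the function names are illustrative and not part of any Windows clustering API:

```python
# Sketch of the heartbeat timing described above, using the article's
# defaults: a beat every 1.2 seconds, failover after two missed beats.

def seconds_until_failover(interval_s=1.2, missed_beats=2):
    """Worst-case time before the cluster declares a node missing."""
    return interval_s * missed_beats

def node_considered_missing(last_beat_s, now_s, interval_s=1.2, missed_beats=2):
    """True once no heartbeat has arrived for `missed_beats` intervals."""
    return (now_s - last_beat_s) > interval_s * missed_beats

print(seconds_until_failover())           # 2.4 seconds with the defaults
print(node_considered_missing(0.0, 1.3))  # one beat missed -> still alive
print(node_considered_missing(0.0, 2.5))  # two beats missed -> failover
```

This is why latency matters more than bandwidth here: a congested link that delays beats past the threshold looks exactly like a dead node and triggers an unnecessary failover.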

Client access switching

Client access to the virtual machines running on the hosts should be isolated from SAN and heartbeat traffic. This is important because you don’t want peaks in user activity to affect the operation of the virtual machines. Remember, the host sees the iSCSI-connected hard drive as a local hard drive and expects it to act like one. If client access traffic is competing with storage traffic, you could be asking for trouble. I’ve seen this cause problems in a test environment: the competition for bandwidth resulted in virtual machines that broke.

In the next article, we’ll start looking at requirements on the host servers and then the actual steps involved in getting the pieces all working together.


© 4sysops 2006 - 2023