One of the enhancements of vMotion is Multi-NIC vMotion, introduced in vSphere 5.0. Multi-NIC vMotion provides load balancing for vMotion network traffic over multiple network interface controllers (NICs). It balances even a single vMotion session across all available VMkernel adapters. Anyone with a vMotion license and at least two dedicated NICs in each server (host) can use Multi-NIC vMotion.
It is quite easy to configure conventional VMware vMotion. Whether it is vMotion with shared storage or shared-nothing vMotion, the configuration is the same. The configuration consists of four steps:
- Creating a new VMkernel adapter
- Activating the vMotion checkbox
- Assigning an IP address
- Assigning a physical NIC
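Under the assumption of a host with a standard switch named vSwitch0, the four steps above can be sketched in a single PowerCLI call. The server name, port group label, and IP address below are placeholders, not values from this walkthrough:

```powershell
# Connect to vCenter (or directly to the ESXi host) first
Connect-VIServer -Server vcenter.lab.local

# Create a VMkernel adapter on vSwitch0, assign it an IP address,
# and enable the vMotion service on it in one step
New-VMHostNetworkAdapter -VMHost (Get-VMHost esx01.lab.local) `
    -PortGroup "vmotion1" -VirtualSwitch "vSwitch0" `
    -IP 10.10.2.11 -SubnetMask 255.255.255.0 `
    -VMotionEnabled:$true
```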
Configuring Multi-NIC vMotion is a bit different. However, it is not complicated either.
Multi-NIC vMotion requirements
- A vMotion license (included in VMware licensing packages from vSphere Essentials Plus upward, or the even cheaper Essentials Plus Term).
- Two dedicated NICs per host for vMotion operations (you can theoretically work with a single NIC, but it is not recommended).
- Prepared IP addresses for the vMotion network, for each VMkernel adapter; also make sure that all VMkernel ports are on the same IP subnet/virtual LAN (vLAN) on all hosts.
Configuring Multi-NIC vMotion
This configuration assumes you have at least two physical NICs in each host, you're using a vSphere standard switch, and you have a vMotion license.
The configuration will create two VMkernel adapters, each with a different network label but within the same IP subnet/vLAN. It will then apply a "cross configuration" of active-standby assignments of physical NICs to both VMkernel adapters.
Connect to your host via the vSphere Web Client > select your host > create a new VMkernel adapter, and assign it a meaningful network label, such as vmotion1 or vmotion01.
After this, there are two ways to proceed: keep everything on the same vSwitch or add a new vSwitch.
In this example, we will create a new vSwitch, which by default will be named vSwitch1 (the first vSwitch added to a host is always vSwitch0, the second vSwitch1, etc.).
Note: Keep the VMkernel Network Adapter radio button (default) selected.
Next, select Create a Standard switch; its name will be vSwitch1. Then add both NICs you want to use for the vMotion traffic and set one of them as Active and the other as Standby.
After this, select each adapter and use the arrows to move it up or down between the Active and Standby groups.
Note: I usually pick the lower-numbered NIC for the first VMkernel port and the higher-numbered NIC for the second VMkernel port.
Then continue with the wizard to finish the VMkernel creation, assign a network label, and then select the vMotion service checkbox.
Next, enter the IP settings. I'm using the 10.10.2.x network (which also has a vLAN tag).
Finish the configuration and close the wizard.
Select vSwitch1 and click the Add Host Networking icon.
The wizard starts once again with similar values. In my case:
- vmotion2 as the network label
- 10.10.2.21 as the IPv4 address
After this, finish the wizard. Keep vSwitch1 selected > select the vmotion2 port group > click the pencil icon > edit the Teaming and failover section > check the Override check box > select the NIC that will be Active and move it with the arrow to the Active adapters position.
Do the same for the Standby NIC.
In the end, your configuration should look like this:
- You have vmotion1 with vmnic4 as active (and vmnic5 as standby)
- You have vmotion2 with vmnic5 as active (and vmnic4 as standby)
Basically, you have to "cross" the active and standby assignment of the NICs for the VMkernel adapters.
Now you'll need to do the same for the other hosts within your cluster. It won't cost you much time, but for larger clusters, it might make sense to script the process via PowerCLI. Alternatively, if you have a VMware Enterprise Plus license, you may consider using a VMware virtual distributed switch (vDS). This allows you to reuse the configuration for all hosts participating in the cluster.
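A hedged PowerCLI sketch of the whole procedure for every host in a cluster might look like the following. The cluster name, host inventory, NIC names, and IP scheme are example assumptions, not values prescribed by the article, so adjust them to your environment:

```powershell
# Hypothetical sketch: apply the Multi-NIC vMotion "cross" configuration
# to all hosts in a cluster. All names and addresses are placeholders.
$i = 10
foreach ($esx in Get-Cluster "Cluster01" | Get-VMHost) {
    # New standard switch carrying both dedicated vMotion uplinks
    $vs = New-VirtualSwitch -VMHost $esx -Name "vSwitch1" -Nic vmnic4,vmnic5

    # Two vMotion-enabled VMkernel ports on the same IP subnet
    New-VMHostNetworkAdapter -VMHost $esx -PortGroup "vmotion1" -VirtualSwitch $vs `
        -IP "10.10.2.$i" -SubnetMask 255.255.255.0 -VMotionEnabled:$true
    New-VMHostNetworkAdapter -VMHost $esx -PortGroup "vmotion2" -VirtualSwitch $vs `
        -IP "10.10.2.$($i + 1)" -SubnetMask 255.255.255.0 -VMotionEnabled:$true
    $i += 2

    # "Cross" the active/standby assignment per port group
    $nic4 = Get-VMHostNetworkAdapter -VMHost $esx -Physical -Name vmnic4
    $nic5 = Get-VMHostNetworkAdapter -VMHost $esx -Physical -Name vmnic5
    Get-VirtualPortGroup -VMHost $esx -Name "vmotion1" | Get-NicTeamingPolicy |
        Set-NicTeamingPolicy -MakeNicActive $nic4 -MakeNicStandby $nic5
    Get-VirtualPortGroup -VMHost $esx -Name "vmotion2" | Get-NicTeamingPolicy |
        Set-NicTeamingPolicy -MakeNicActive $nic5 -MakeNicStandby $nic4
}
```

The per-port-group teaming override at the end is the scripted equivalent of checking the Override box in the Teaming and failover section of the GUI.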
Speed up vMotion operations
For top speed, use 10GbE NICs in your environment; this gives you not only up to eight simultaneous vMotion operations per host, but also the possibility (if you have an Enterprise Plus license) to use Network I/O Control (NetIOC) to prioritize vMotion traffic.
If you only have two 10GbE NICs, you don't need to dedicate them solely to vMotion traffic, because you probably don't run vMotion operations around the clock. You certainly need other types of traffic within your cluster, so the 10GbE NICs can carry vMotion alongside other traffic types, such as Fault Tolerance, virtual storage area network (vSAN), and High Availability (HA) traffic.