One of the enhancements of vMotion is Multi-NIC vMotion, introduced in vSphere 5.0. Multi-NIC vMotion provides load balancing for vMotion network traffic over multiple network interface controllers (NICs). It can balance even a single vMotion session across all available VMkernel adapters. Anyone with a vMotion license and at least two dedicated NICs in each server (host) can use Multi-NIC vMotion.
It is quite easy to configure conventional VMware vMotion. Whether it is vMotion with shared storage or shared-nothing vMotion, the configuration is the same and consists of four steps (a PowerCLI equivalent follows the list):
- Creating a new VMkernel adapter
- Activating the vMotion checkbox
- Assigning an IP address
- Assigning a physical NIC
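For reference, these four steps can also be performed with PowerCLI. The following is a minimal sketch, assuming a vCenter named vcenter.lab.local, a host named esx01.lab.local, the default vSwitch0, and a placeholder IP address; adjust all of these to your environment.

```powershell
# Connect to vCenter (or directly to the ESXi host)
Connect-VIServer -Server vcenter.lab.local

$esx = Get-VMHost -Name esx01.lab.local

# Create the VMkernel adapter, assign the IP address, and
# activate the vMotion service in a single call
New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch vSwitch0 `
    -PortGroup "vmotion" -IP 10.10.2.10 -SubnetMask 255.255.255.0 `
    -VMotionEnabled $true
```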
Configuring Multi-NIC vMotion is a bit different. However, it is also not complicated.
Multi-NIC vMotion requirements
- A vMotion license (included in VMware licensing packages since vSphere Essentials Plus, or the even cheaper Essentials Plus Term).
- Two dedicated NICs per host for vMotion operations (you can theoretically work with a single NIC, but it is not recommended); a quick PowerCLI check of the available NICs follows this list.
- Prepared IP addresses for the vMotion network, one for each VMkernel adapter; also make sure that all VMkernel ports are on the same IP subnet/virtual LAN (VLAN) on all hosts.
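To quickly verify the second requirement, you can list the physical NICs of all hosts with PowerCLI. A small sketch, assuming a cluster named Production (BitRatePerSec is reported in Mbps):

```powershell
# List each host's physical NICs and their link speed (in Mbps)
foreach ($esx in Get-Cluster -Name "Production" | Get-VMHost) {
    Write-Host $esx.Name
    Get-VMHostNetworkAdapter -VMHost $esx -Physical |
        Select-Object Name, BitRatePerSec
}
```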
Configuring Multi-NIC vMotion
This configuration assumes you have at least two physical NICs in each host, you're using a vSphere standard switch, and you have a vMotion license.
The configuration will create two VMkernel adapters, each with a different network label but within the same IP subnet/VLAN. It will then apply a "cross" configuration, with the active/standby assignment of the physical NICs reversed between the two VMkernel adapters.
Connect to your host via the vSphere Web Client > select your host > create a new VMkernel adapter, and give it a meaningful network label, such as vmotion1 or vmotion01.
After this, there are two ways to proceed: Keep everything on the same vSwitch or add a new vSwitch.
In this example, we will create a new vSwitch, which by default will be named vSwitch1 (the first vSwitch added to a host is always vSwitch0, the second vSwitch1, etc.).
Note: Keep the VMkernel Network Adapter radio button (default) selected.
Next, select Create a Standard switch. Its name will be vSwitch1. Then add both NICs you want to use for the vMotion traffic, and set one of them as Active and the other one as Standby.
After adding the NICs, select an adapter and use the arrows to move it up or down between the Active and Standby positions.
Note: I usually pick the lower-numbered NIC for the first VMkernel port and the higher-numbered NIC for the second VMkernel port.
Then continue with the wizard to finish the VMkernel creation, assign a network label, and then select the vMotion service checkbox.
Next, enter the IP settings. I'm using the 10.10.2.x network (which also has a VLAN tag).
Finish the configuration and close the wizard.
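The steps up to this point can also be scripted. Here is a hedged PowerCLI equivalent, reusing the vmnic4/vmnic5 NIC names from this example; the host name and the 10.10.2.20 address are placeholders:

```powershell
$esx = Get-VMHost -Name esx01.lab.local

# Create the new standard switch with both vMotion uplinks
$vs = New-VirtualSwitch -VMHost $esx -Name vSwitch1 -Nic vmnic4, vmnic5

# First VMkernel port (vmotion1) with the vMotion service enabled
New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch $vs `
    -PortGroup "vmotion1" -IP 10.10.2.20 -SubnetMask 255.255.255.0 `
    -VMotionEnabled $true
```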
Select vSwitch1 and click the Add Host Networking icon.
The wizard starts once again with similar values. In my case:
- vMotion2 as the network label
- 10.10.2.21 as the IPv4 network address
After this, finish the wizard. Keep vSwitch1 selected > select the vmotion2 port group > click the pencil icon > edit the Teaming and failover section > check the Override checkbox > select the NIC that will be Active, and move it with the arrow to the Active adapters position.
Do the same for the Standby NIC.
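In PowerCLI, the second VMkernel port and the teaming override on both port groups could look like this (again a sketch; the IP address matches the example above):

```powershell
# Second VMkernel port (vmotion2)
New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch $vs `
    -PortGroup "vmotion2" -IP 10.10.2.21 -SubnetMask 255.255.255.0 `
    -VMotionEnabled $true

# "Cross" the active/standby assignment of the two uplinks
Get-VirtualPortGroup -VMHost $esx -Name vmotion1 | Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -MakeNicActive vmnic4 -MakeNicStandby vmnic5
Get-VirtualPortGroup -VMHost $esx -Name vmotion2 | Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -MakeNicActive vmnic5 -MakeNicStandby vmnic4
```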
In the end, your configuration should look like this:
- You have vmotion1 with vmnic4 as active (and vmnic5 as standby)
- You have vmotion2 with vmnic5 as active (and vmnic4 as standby)
Basically, you have to "cross" the active and standby assignment of the NICs for the VMkernel adapters.
Now you'll need to do the same for the other hosts within your cluster. It won't cost you much time, but for larger clusters, it might make sense to script the process via PowerCLI. Alternatively, if you have a VMware Enterprise Plus license, you may consider using a VMware virtual distributed switch (vDS). This allows you to reuse the configuration for all hosts participating in the cluster.
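Such a script could combine the snippets above into a loop over all hosts of the cluster. A sketch, assuming a cluster named Production, identical NIC names on every host, and consecutive addresses in the 10.10.2.x range:

```powershell
$octet = 20   # hypothetical starting host octet in the 10.10.2.x range

foreach ($esx in Get-Cluster -Name "Production" | Get-VMHost) {
    # New standard switch with both vMotion uplinks
    $vs = New-VirtualSwitch -VMHost $esx -Name vSwitch1 -Nic vmnic4, vmnic5

    # Two VMkernel ports with consecutive IP addresses
    New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch $vs -PortGroup vmotion1 `
        -IP "10.10.2.$octet" -SubnetMask 255.255.255.0 -VMotionEnabled $true
    New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch $vs -PortGroup vmotion2 `
        -IP "10.10.2.$($octet + 1)" -SubnetMask 255.255.255.0 -VMotionEnabled $true

    # Crossed active/standby assignment on both port groups
    Get-VirtualPortGroup -VMHost $esx -Name vmotion1 | Get-NicTeamingPolicy |
        Set-NicTeamingPolicy -MakeNicActive vmnic4 -MakeNicStandby vmnic5
    Get-VirtualPortGroup -VMHost $esx -Name vmotion2 | Get-NicTeamingPolicy |
        Set-NicTeamingPolicy -MakeNicActive vmnic5 -MakeNicStandby vmnic4

    $octet += 2
}
```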
Speed up vMotion operations
For top speed, use 10GbE NICs in your environment. They give you not only eight simultaneous vMotion operations per host, but also the possibility (if you have an Enterprise Plus license) to use Network I/O Control (NetIOC) to prioritize vMotion traffic.
If you only have two 10GbE NICs, you don't need to dedicate them solely to vMotion traffic, because you probably don't run vMotion operations around the clock. You certainly need other types of traffic within your cluster, so the 10GbE NICs can carry vMotion alongside other traffic types, such as Fault Tolerance, virtual storage area network (vSAN), High Availability (HA), and so on.
Why do you configure vmotion1 (with two NICs as active/standby) and vmotion2 (with the same two NICs as standby/active) instead of only one adapter (vmotion) with two NICs as active/active?
No, vmk1 is using a single physical NIC (the same goes for vmk2). The article was written based on this VMware KB.
So, in the end, you have two VMkernel ports (vmotion1 and vmotion2) inside your vSwitch1.
Both VMkernel ports are using vmnic4 and vmnic5, which are configured as active/standby on VMkernel port vmotion1, and as standby/active on VMkernel port vmotion2.
Why not just one VMkernel port inside vSwitch1, with vmnic4 and vmnic5 configured as active/active?
What are the advantages of your configuration?
OK, I found it. The vMotion load balancing doesn't work without multiple VMkernel ports. Only one active NIC is allowed per VMkernel port.
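To verify that rule on a configured host, you can dump the failover order of both port groups with PowerCLI; a small sketch using the names from the article:

```powershell
# Show the active and standby uplinks of both vMotion port groups
foreach ($pg in Get-VirtualPortGroup -VMHost $esx -Name vmotion1, vmotion2) {
    $policy = Get-NicTeamingPolicy -VirtualPortGroup $pg
    "{0}: active={1} standby={2}" -f $pg.Name,
        ($policy.ActiveNic -join ','), ($policy.StandbyNic -join ',')
}
```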
Question: Are two VMkernel ports with the same active NIC allowed, for example, in your configuration when one of the two NICs fails?
Addendum:
I haven't tried it, but I think it would work even if the two VMkernel ports were configured without the standby NICs. What do you think?
Addendum 2:
I believe the multiple VMkernel ports (one port per active physical NIC) are only required for the vMotion network.
On the other networks (e.g., HA, management, or VM network), we can use a single VMkernel port per network for all physical NICs dedicated to that network, and we can set all the physical NICs to active, right?
Are dedicated NICs required or not?
Not if you're separating the traffic with VLANs.
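For example, instead of dedicating uplinks, you could tag the vMotion port groups with their own VLAN; the VLAN ID 21 below is just an assumed value:

```powershell
# Tag both vMotion port groups with a dedicated VLAN
Get-VirtualPortGroup -VMHost $esx -Name vmotion1, vmotion2 |
    Set-VirtualPortGroup -VLanId 21
```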