Windows Docker networking – Part 2: Custom network types

In Part 1, we had a look at general networking details of Windows Container technology, and I explained how to configure NAT in Docker. In this part, we are going to talk about custom networks such as transparent networks, Layer 2 (L2) bridging, and L2 tunnelling in Docker for Windows.
Anil Erduran is a principal consultant and subject matter expert for Hitachi Data Systems EMEA, based in London, UK. He is also a dual-category Microsoft Most Valuable Professional in Cloud and Datacenter Management and Microsoft Azure. Anil can be found on Twitter @anil_erduran.

You may remember from the first part that we can easily change the IP prefix of the default NAT network by editing the daemon.json Docker configuration file. The Docker engine also lets us create fully custom networks (for NAT and other driver types) using the command line.

Configuring a custom Docker network

In order to create user-defined networks, we first need to modify the daemon.json file to disable the default NAT network creation. As a first step, the Docker service should be stopped and the existing container networks deleted.
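From an elevated PowerShell prompt, that boils down to something like the following sketch; the Get-ContainerNetwork and Remove-ContainerNetwork cmdlets ship with the Windows container feature:

Stop-Service docker
Get-ContainerNetwork | Remove-ContainerNetwork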

Removing container networks

Now you are ready to modify the Docker configuration file. Under the C:\ProgramData\Docker\config\ directory, open the daemon.json file or create it manually if you don't have it.

Disabling automatic NAT network creation is controlled by the "bridge" option in the configuration file. Adding the following section to the configuration file and saving it should be sufficient.
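For reference, this is the minimal daemon.json content that disables the default NAT network:

{
    "bridge" : "none"
}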

Once you start the Docker service again and check available networks, you will see that no network has been created for the container service.
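Assuming the service is registered under its default name, docker, a quick check looks like this:

Start-Service docker
docker network ls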

Now you are ready to create user-defined networks using the Docker CLI.

Creating new NAT network

The "docker network create" command can be used to create user-defined networks. The "-d" flag stands for DRIVER and specifies the network type you want to create. You can also provide the IP prefix and gateway address using the --subnet and --gateway flags.
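As a sketch, this is how I created the CustomNatNetwork used throughout this post; the 192.168.100.0/24 prefix and gateway are example values, not anything mandated by Docker:

docker network create -d nat --subnet=192.168.100.0/24 --gateway=192.168.100.1 CustomNatNetwork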

In the above picture, the "inspect" command also shows that this newly created network is not associated with any containers. In order to attach this network to a container you can use the --network flag along with the "docker run" command.
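The same output can be pulled up at any time:

docker network inspect CustomNatNetwork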

In the example below, I'm running my microsoft/windowsservercore image with the --network flag to attach it to the custom NAT network that I created earlier.
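A minimal example; running cmd as the interactive process is just my choice here:

docker run -it --network=CustomNatNetwork microsoft/windowsservercore cmd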

Let's check the IP configuration inside the container:

Network configuration of container

This new container picked an IP address from the IP range we specified in CustomNatNetwork.

The --network flag is only applicable to new containers. In order to modify the network settings of an existing container, we need to stop the container and then use the "docker network connect/disconnect" commands.

Then you can use the connect parameter to attach a custom network to your container:
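For example, with a hypothetical container named mycontainer:

docker stop mycontainer
docker network connect CustomNatNetwork mycontainer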

Now you can start the container and inspect its details:
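Continuing with the same hypothetical container name:

docker start mycontainer
docker inspect mycontainer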

Inspecting a container network

As you can see, my container picked up CustomNatNetwork as the default network and also chose an IP address from the range.

There are basically four networking options for Windows container hosts: NAT, transparent networking, L2 bridging/tunnelling, and multiple networks on a single host. These network drivers provide internal and external access to containers for different use cases. Let me go through each of these and discuss the details.

NAT (Network Address Translation)

As we previously discussed, this is the default networking option in Windows/Hyper-V containers. Each container will pick an IP address either from the default NAT prefix or from the custom NAT prefix you specified.

Port forwarding is also available with this network driver type, so you can easily forward ports from the host to a container to make an application or process reachable from an external network.
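For example, the -p flag maps a host port to a container port at run time; the port numbers and the microsoft/iis image below are purely illustrative:

docker run -d -p 8080:80 microsoft/iis

WinNAT will then forward traffic arriving on host port 8080 to port 80 inside the container.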

The NAT network provides both container-to-container and container-to-external communication through WinNAT and can be used for single-node as well as multi-node container deployments.

You also need to be careful when using the NAT network for Windows containers in production, as there are some key limitations in the WinNAT implementation as of today. Here are some known issues in container scenarios:

  • Multiple internal subnet prefixes are not supported
  • External/Internal prefixes must not overlap
  • No automatic networking configuration exists
  • You cannot access externally mapped NAT endpoint IP/ports directly from the host – you must use internal IP/ports assigned to endpoints

Transparent network

This is another type of network driver in Windows container environments, which allows you to connect your containers directly to the physical network. In this case, containers will pick up an IP address from an external DHCP server, or you can assign IP addresses statically.

You can follow the same procedure to create a custom network with the transparent driver, simply by specifying the network driver type with the "-d" flag:
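In its simplest form (DHCP-based addressing), that looks like this; the network name is a placeholder of my own:

docker network create -d transparent DHCPTransparentNetwork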

You can also provide subnet and gateway flags along with the same command if you are planning to use static IP addresses for your containers. Subnet and gateway values should be the same as the physical network details on the host.

docker network create -d transparent --subnet=172.16.10.0/24 --gateway=172.16.10.1 CustomTransparentNetwork

You can retrieve the details of the new transparent vNIC with the Get-NetAdapter command:
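The exact adapter name can vary, so a loose filter is the safest bet; on Windows Server 2016 hosts, the new vNIC typically shows up as a vEthernet adapter:

Get-NetAdapter | Where-Object { $_.Name -like "*vEthernet*" }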

Retrieving Transparent network

The Docker CLI can also list all available networks:

docker network ls

Listing all container networks

As we discussed, there are two options for assigning IP addresses to containers. The first option is to use your existing external DHCP server; however, if you are using a virtualized container host, you will need to enable MACAddressSpoofing on the VM's network adapter, as the Hyper-V host will otherwise block traffic coming from multiple MAC addresses.
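Enabling it from the Hyper-V host is a one-liner; ContainerHostVM is a placeholder for your VM's name:

Get-VMNetworkAdapter -VMName ContainerHostVM | Set-VMNetworkAdapter -MacAddressSpoofing On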

The second option is to assign static IP addresses to containers using the "--ip" flag.
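For instance, picking an arbitrary free address from the 172.16.10.0/24 prefix defined above:

docker run -it --network=CustomTransparentNetwork --ip=172.16.10.50 microsoft/windowsservercore cmd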

L2 bridging and L2 tunnelling

These network drivers can be used in private and public clouds and are a great fit for Software Defined Networking (SDN) deployments.

The L2 bridge network driver does layer-2 address translation, and your containers will be in the same IP subnet as the container host. Only static IP assignment is supported in this network mode. Each container under the L2 bridge network will have a unique static IP address but will share the same MAC address as the container host.

L2 bridge mode also allows containers to leverage overlay networks such as VXLAN in order to communicate across multi-host deployments. In order to deploy an L2 bridge or L2 tunnel network for SDN deployments, you need at least:

  • One network controller
  • One tenant virtual network
  • One tenant VM with the container, Docker, and Hyper-V features enabled

The following command can be used to create a custom L2 Bridge network on the container host:

docker network create -d l2bridge --subnet=192.168.1.0/24 --gateway=192.168.1.1 MyBridgeNetwork01
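An L2 tunnel network is created the same way with the l2tunnel driver; only the network name below is my own:

docker network create -d l2tunnel --subnet=192.168.1.0/24 --gateway=192.168.1.1 MyTunnelNetwork01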

Creating multiple networks

Due to limitations of the NAT network, the only way to create multiple NAT networks is to partition the physical host's NAT prefix. This means that you can create multiple NAT networks so long as each NAT network prefix falls under the host's NAT network internal prefix.
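As a sketch, if the host's internal NAT prefix were 172.16.0.0/16 (check yours with the Get-NetNat cmdlet), two non-overlapping /24 networks underneath it could be created like this:

docker network create -d nat --subnet=172.16.1.0/24 --gateway=172.16.1.1 CustomNat1
docker network create -d nat --subnet=172.16.2.0/24 --gateway=172.16.2.1 CustomNat2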

If you want to create multiple networks for the transparent, L2 bridge, or L2 tunnel drivers, you need to configure each network driver to use its own network adapter to connect to an external vSwitch. Binding a network to a specific interface can be done using the "-o" flag along with the "docker network create" command.
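On Windows, this binding is exposed through the windowsshim option; the adapter name "Ethernet 2" below is a placeholder for the NIC you want to dedicate to the network:

docker network create -d transparent -o com.docker.network.windowsshim.interface="Ethernet 2" TransparentNet2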

Conclusion

In this two-part series, we discussed the network options we have today for Windows containers in production.

The NAT network is the default option and covers most of the requirements you might have for dev/test environments. The Transparent driver is another way to connect containers directly to a physical network.

For Software Defined Networking deployments, for instance Azure Stack, the best way to provide networking across hosts and tenants is to create L2 tunnel or L2 bridge networks.
