In Part 1, we looked at the general networking details of Windows Container technology, and I explained how to configure NAT in Docker. In this part, we are going to talk about custom networks such as transparent networks, Layer 2 (L2) bridging, and L2 tunneling in Docker on Windows.

You may remember from the first part that we can easily change the IP prefix of the default NAT network by playing with the daemon.json Docker configuration file. The Docker engine also gives us a chance to create fully custom networks (for NAT and other types of drivers) using the command line.

Configuring a custom Docker network

However, in order to create user-defined networks, we need to configure the daemon.json file again to disable the default NAT network creation. As a first step, the Docker service should be stopped and existing networks should be deleted.

Stop-Service docker
Get-ContainerNetwork | Remove-ContainerNetwork
Removing container networks

Now you are ready to modify the Docker configuration file. Under the C:\ProgramData\Docker\config\ directory, open the daemon.json file or create it manually if you don't have it.

Disabling automatic NAT network creation is controlled by the "bridge" option in the configuration file. Adding the following section to the configuration file and saving it should be sufficient.

{ "bridge" : "none" }

Once you start the Docker service again and check available networks, you will see that no network has been created for the container service.
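For example, a quick check right after restarting the service looks like this; the network list should contain nothing but the column headers if the bridge was disabled successfully:

Start-Service docker
docker network ls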

Now you are ready to create user-defined networks using the Docker CLI.

docker network create -d nat --subnet=192.168.10.0/24 --gateway=192.168.10.1 CustomNatNetwork
Creating new NAT network

The "docker network create" command can be used to create user-defined networks. The "d" flag stands for DRIVER and specifies the network type you want to create. You can also provide the IP prefix and gateway address using -subnet and -gateway flags.

In the above picture, the "inspect" command also shows that this newly created network is not associated with any containers. In order to attach this network to a container you can use the --network flag along with the "docker run" command.

docker run -it --network=CustomNatNetwork <image> <cmd>

In the example below, I'm running my microsoft/windowsservercore image with the --network flag to attach it to the custom NAT network that I created earlier.

docker run -it --network=CustomNatNetwork microsoft/windowsservercore powershell

Let's check the IP configuration inside the container:
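Inside the container's PowerShell session, a quick way to list the assigned IPv4 addresses (a minimal sketch using built-in cmdlets) is:

ipconfig
# or, more selectively:
Get-NetIPAddress -AddressFamily IPv4 | Select-Object InterfaceAlias, IPAddress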

Network configuration of container

This new container picked an IP address from the IP range we specified in CustomNatNetwork.

The --network flag is only applicable to new containers. In order to modify the network settings of an existing container, we need to stop the container and then use the "docker network connect/disconnect" commands.

docker stop ac0722b09f2e

Then you can use the connect parameter to attach a custom network to your container:

docker network connect CustomNatNetwork ac0722b09f2e

Now you can start the container and inspect its details:

docker start ac0722b09f2e
docker inspect ac0722b09f2e
Inspecting a container network

As you can see, my container picked up CustomNatNetwork as the default network and also chose an IP address from the range.
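If you only need the address itself, for example in a script, docker inspect also accepts a Go template through the -f flag; a minimal sketch using the same container ID:

docker inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}" ac0722b09f2e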

There are basically four different networking types for Windows Container hosts: NAT, transparent networking, L2 bridging/tunneling, and multiple networks. These network drivers provide internal and external access to containers for different use cases. Let me go through each of these and discuss the details.

NAT (Network Address Translation)

As we previously discussed, this is the default networking option in Windows/Hyper-V containers. Each container will pick an IP address either from the default NAT prefix or from the custom NAT prefix you specified.

Port forwarding is also available with this network driver type, so you can easily forward ports from the host to a container to make an application or process reachable from an external network.
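As a quick sketch, assuming a containerized web application listening on port 80 (the microsoft/iis image and the container name are only examples), publishing a host port with the -p flag looks like this:

# Publish host port 8080 and forward it to container port 80 via WinNAT
docker run -d -p 8080:80 --name webtest microsoft/iis
# From a machine on the external network, browse to http://<container host IP>:8080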

The NAT network provides both internal–internal and internal–external communication through WinNAT and can also be used for single-node or multi-node container deployments.

You also need to be careful when using the NAT network for Windows containers in production, as there are some key limitations in the WinNAT implementation as of today. Here are some known issues in container scenarios:

  • Multiple internal subnet prefixes are not supported
  • External/Internal prefixes must not overlap
  • No automatic networking configuration exists
  • You cannot access externally mapped NAT endpoint IP/ports directly from the host; you must use the internal IP/ports assigned to the endpoints (see the example after this list)
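A minimal sketch of that workaround, assuming the web container published with -p 8080:80 in the earlier example, is to query the container's internal IP and talk to the internal port directly from the host:

# Look up the container's internal NAT IP address
$ip = docker inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" webtest
# From the container host, target the internal port (80), not the mapped port (8080)
Invoke-WebRequest -UseBasicParsing "http://$($ip):80"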

Transparent network

This is another type of network driver in Windows container environments, which allows you to connect your containers directly to the physical network. In this case, containers will pick up an IP address from an external DHCP server, or you can assign IP addresses statically.

You can follow the same procedure to create a custom network with the transparent driver just by specifying the network driver type with the "-d" flag:

docker network create -d transparent CustomTransparentNetwork

You can also provide subnet and gateway flags along with the same command if you are planning to use static IP addresses for your containers. Subnet and gateway values should be the same as the physical network details on the host.

docker network create -d transparent --subnet=172.16.10.0/24 --gateway=172.16.10.1 CustomTransparentNetwork

You can retrieve the details of the new Transparent vNIC with the Get-NetAdapter cmdlet:

Get-NetAdapter | where {$_.Name -match "Transparent"}
Retrieving Transparent network

The Docker CLI can also list all available networks:

docker network ls

Listing all container networks

As we discussed, there are two options for assigning IP addresses to containers. The first option is to use your existing external DHCP server; however, if you are using a virtualized container host, you will need to enable MAC address spoofing, because the Hyper-V host will otherwise block traffic from the additional MAC addresses the containers use.
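For example, if the container host is a VM named ContainerHost01 (the name is only a placeholder), you would enable the setting from the underlying Hyper-V host like this:

# Run on the Hyper-V host, not inside the container host VM
Get-VMNetworkAdapter -VMName "ContainerHost01" | Set-VMNetworkAdapter -MacAddressSpoofing On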

The second option is to assign static IP addresses to containers using the "--ip" flag.

docker run -it --network=CustomTransparentNetwork --ip 172.16.10.50 <image> <cmd>

L2 bridging and L2 tunneling

These network drivers can be used for private and public cloud deployments, and they are a great fit for Software Defined Networking (SDN) deployments.

The L2 bridge network driver does layer-2 address translation, and your containers will be on the same IP subnet as the container host. Only static IP assignment is supported in this network mode. Each container on the L2 bridge network will have a unique static IP address but will share the same MAC address as the container host.

L2 bridge mode also allows containers to leverage overlay networks such as VXLAN in order to communicate across multi-host deployments. In order to deploy an L2 bridge or L2 tunnel network for SDN deployments, you need at least:

  • One network controller
  • One tenant virtual network
  • One tenant VM with the Containers, Docker, and Hyper-V features enabled

The following command can be used to create a custom L2 Bridge network on the container host:

docker network create -d l2bridge --subnet=192.168.1.0/24 --gateway=192.168.1.1 MyBridgeNetwork01
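Attaching a container then follows the same pattern as the transparent example, using the --ip flag with a free address from the subnet (192.168.1.50 is just an illustrative value):

docker run -it --network=MyBridgeNetwork01 --ip 192.168.1.50 <image> <cmd>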

Creating multiple networks

Due to limitations of the NAT network, the only way to create multiple NAT networks is to partition the physical host's NAT prefix. This means that you can create multiple NAT networks so long as each NAT network prefix falls under the host's NAT network internal prefix.
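As a sketch, assuming the host's internal NAT prefix is 10.244.0.0/16 (you can check the actual value with docker network inspect nat or Get-NetNat), two smaller NAT networks could be carved out of it like this:

# Both /24 prefixes fall under the assumed 10.244.0.0/16 host NAT prefix
docker network create -d nat --subnet=10.244.1.0/24 --gateway=10.244.1.1 NatNetwork01
docker network create -d nat --subnet=10.244.2.0/24 --gateway=10.244.2.1 NatNetwork02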

If you want to create multiple networks for Transparent, L2 Bridge, or L2 Tunnel, you need to configure each network driver to use its own network adapter to connect to an external vSwitch. Binding a network to a specific interface can be done using the "-o" flag along with the "docker network create" command.

docker network create -d transparent -o com.docker.network.windowsshim.interface="Ethernet 1" MyCustomTransparent01
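For example, a second transparent network bound to another adapter (the adapter name "Ethernet 2" is only a placeholder; list the real names with Get-NetAdapter) would look like this:

docker network create -d transparent -o com.docker.network.windowsshim.interface="Ethernet 2" MyCustomTransparent02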

Conclusion

In this two-part series, we discussed the network options available today for Windows containers in production.

The NAT network is the default option and covers most of the requirements you might have for dev/test environments. The Transparent driver is another way to connect containers directly to a physical network.

For Software Defined Networking deployments – for instance, Azure Stack – the best way to provide networking across hosts and tenants is to create L2 Tunnel or L2 Bridge networks.
