Here we're going to look at Hyper-V networking enhancements, software-defined networking (SDN), and the SDN gateway.
Hyper-V: the need for speed ^
Networking in the Hyper-V world has two parts. On the host side, the incredible popularity of hyperconverged infrastructure and Storage Spaces Direct drive the adoption of 40 Gbps+ networking—often with remote direct memory access (RDMA)—to keep the volumes across each host in sync. But equally important in many deployments is the networking for virtual machines (VMs) themselves. Here Windows Server 2019 delivers in spades.
In Windows Server 2016, Microsoft demonstrated 40 Gbps in a single VM using Virtual Machine Multi-Queue (VMMQ), but that came at a high management cost with manual tuning and monitoring. Windows Server 2019 will provide that same performance with lower CPU consumption and much less configuration work.
Microsoft calls a preconfigured solution of this type a software-defined data center (SDDC), which comes in Standard and Premium flavors. Note that SDN, for instance, was only available in Premium in 2016; most SDN functionality has moved to the Standard tier in 2019.
There are two new features at play: Receive Side Coalescing (RSC) in the virtual switch (vSwitch) and Dynamic Virtual Machine Multi-Queue (d.VMMQ). RSC is a hardware offload that's been around for quite some time, but attaching a vSwitch to the NIC disabled it.
In Windows Server 2019, RSC works with the vSwitch and is software only. The basic technique is combining TCP segments (only—this doesn't work with other types of network traffic) into larger segments for each VM. Below you can see network traffic on a 40 Gbps network. On the left, the CPU usage is higher and the throughput is considerably lower; on the right, RSC is enabled, and the throughput is higher with less CPU usage. This feature is on by default in Windows Server 2019.
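To make the idea concrete, here is a minimal conceptual sketch of what coalescing does: consecutive, in-order TCP segments of the same flow are merged into one larger segment, so the receiving VM's stack processes fewer, bigger packets. This is an illustration only; the real vSwitch operates on raw frames, and the data structures below are assumptions.

```python
def coalesce(segments):
    """Merge adjacent segments whose TCP sequence numbers are contiguous.

    Each segment is a (flow_id, seq, payload) tuple; returns the
    coalesced list. Only back-to-back segments of the same flow merge,
    mirroring how RSC combines an in-order run of TCP segments.
    """
    merged = []
    for flow_id, seq, payload in segments:
        if merged:
            m_flow, m_seq, m_payload = merged[-1]
            # Contiguous follow-on segment of the same flow? Extend it.
            if m_flow == flow_id and m_seq + len(m_payload) == seq:
                merged[-1] = (m_flow, m_seq, m_payload + payload)
                continue
        merged.append((flow_id, seq, payload))
    return merged

segments = [("vm1", 0, b"aaaa"), ("vm1", 4, b"bbbb"),
            ("vm2", 0, b"xxxx"), ("vm1", 8, b"cccc")]
print(coalesce(segments))  # four segments in, three out
```

The win is the same as in the vSwitch: per-segment processing cost is paid once per coalesced segment instead of once per wire-size segment.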
It's cool to have 10 Gbps and faster networking, but if left unaided, keeping up with the TCP calculations burns CPU cores. This leads to Virtual Machine Queue (VMQ) and VMMQ, which spread the load among available CPU cores. As mentioned, however, VMMQ in 2016 requires a lot of manual tuning, and different hardware generations in your clusters mean different tuning for different hosts.
Dynamic VMMQ (d.VMMQ) in Windows Server 2019 (only available in 2019 SDDC Premium) automatically tunes hosts and uses a single core when network throughput is low, expanding to other cores when required. Note that you need updated drivers for your NICs for this to work—look for Receive Side Scaling v2 (RSSv2). Microsoft includes this in the logo requirement for SDDC Premium.
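The policy d.VMMQ automates can be sketched in a few lines: stay on one core at low load and spread receive queues across more cores as throughput grows. The per-core throughput figure and core cap below are made-up illustration values, not Microsoft's actual tuning thresholds.

```python
import math

def cores_needed(throughput_gbps, gbps_per_core=8.0, max_cores=16):
    """Return how many cores to spread receive queues over.

    gbps_per_core and max_cores are illustrative assumptions; the real
    d.VMMQ algorithm tunes itself from live host measurements.
    """
    if throughput_gbps <= gbps_per_core:
        return 1  # low load: keep everything on a single core
    return min(max_cores, math.ceil(throughput_gbps / gbps_per_core))

print(cores_needed(2.0))   # light traffic stays on 1 core
print(cores_needed(40.0))  # 40 Gbps spreads across 5 cores here
```

The point of the automation is exactly this elasticity: idle VMs don't pin cores for queue processing, and busy VMs fan out without an administrator retuning each host generation by hand.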
Microsoft has improved guest RDMA in Windows Server 2019 using single-root input/output virtualization (SR-IOV), which presents virtual functions of the physical NIC directly to VMs that need high-speed networking, such as file servers and virtual desktop infrastructure (VDI)/high-performance computing (HPC) workloads (only available in 2019 SDDC Premium).
Software-defined networking (SDN) ^
Predictably, Server 2019 has a lot of improvements in the SDN stack. As a quick overview, SDN is about virtualizing networks (vNets) on top of your physical network so you can create and delete them without having to touch your physical switches and routers. SDN is of course a requirement for public clouds, but it has many uses for on-premises deployments, particularly for security and isolation of workloads.
SDN has been part of Windows Server since the 2012 release, but in 2016 Microsoft switched to the Virtual Extensible LAN (VXLAN) protocol, removed the dependency on System Center Virtual Machine Manager (VMM), and imported a lot of code and concepts from the SDN stack in Azure.
Deployment is a challenge with many moving parts required, and Microsoft has approached this in several ways. First, Windows Admin Center (WAC) now has support for managing SDN deployed on Windows Server 2019. Second, they've improved the free SDN Express tools (for both 2016 and 2019) with more validation for inputs and a UI to configure the required settings.
If you haven't tried SDN Express, it has options for both VMM and plain Windows Server, and it'll deploy the required components (network controller, SDN gateway, and load balancers) and configure the hosts. In 2016 you needed three separate networks: a provider network, a management network, and a transition network. They've removed the last of these, so you only need the first two.
For more control there's also a PowerShell module for SDN Express that lets you customize the initial deployment in more detail than the wizard and also lets you add resources to scale out the SDN infrastructure.
WAC support for SDN is a work in progress, and today it'll only work for hyperconverged clusters and logical networks. Virtual gateways, user-defined routing, and access control lists (ACLs) are all promised for the end of 2018, with load balancers, quality of service (QoS), multi-rack, and SDN deployment coming later in 2019. Support for Server 2019 features such as IPv6, vNet peering, and flow logging is also coming later. Available today is monitoring of all the individual VMs that provide the SDN infrastructure, which is really handy.
Flow/firewall logging (consistent with Azure network watcher logs) is new, and you can enable it on a per-rule basis and store it locally in a comma-separated values (CSV) file or a Server Message Block (SMB) share. Each host will generate its own files and start a new file every hour.
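Since each host writes its own hourly CSV files, a little scripting goes a long way when you want to summarize them. The sketch below counts denied flows in one log; the column names are assumptions for illustration, as the article only specifies the CSV format and hourly rotation, not the on-disk schema.

```python
import csv
import io

# Hypothetical excerpt of one host's hourly firewall flow log.
# Real SDN flow logs will have their own (richer) set of columns.
sample = """timestamp,src_ip,dst_ip,dst_port,action
2019-01-07T10:00:01,10.0.1.4,10.0.2.5,443,Allow
2019-01-07T10:00:02,10.0.1.9,10.0.2.5,22,Deny
2019-01-07T10:00:03,10.0.1.4,10.0.2.8,443,Allow
"""

def count_denies(csv_text):
    """Count denied flows in one hourly log file's contents."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return sum(1 for row in reader if row["action"] == "Deny")

print(count_denies(sample))  # -> 1
```

In practice you'd point the same parser at the files on the local disk or SMB share, remembering that every host produces its own set and starts a fresh file each hour.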
Just like you can do in Azure, you can now peer two vNets (managed by the same network controller). Most SDN components support IPv6 except for site-to-site IPSec tunnels. Also, you can't assign both an IPv4 and an IPv6 address to a load balancer endpoint. Finally, the SDN stack has an application programming interface (API), so independent software vendors (ISVs) can build their own management technologies on top of it. The first company to have done so is 5nine, and their Cloud Manager can deploy and manage SDN networks.
Encryption for traffic on vNets is now easy to configure; it uses TLS, and once you've set up the required certificates, it's just a click to turn it on for all traffic on the vNet. This is independent of the applications in the VMs on that network—all traffic is TLS protected.
Getting network traffic in and out of virtual networks is important. The SDN gateway in 2016 had some serious limitations on performance, such as a cap of about 300 Mbps for a single IPSec tunnel (connecting workloads over the internet). 2019 ups that single-tunnel limit to about 1.8 Gbps and raises the aggregate limit across multiple tunnels from roughly 1.5 to 1.8 Gbps up to about 5 Gbps. Note that this performance work also extends to the Routing and Remote Access service in 2019.
Generic Routing Encapsulation (GRE) tunnels—connecting VMs in your vNets to physical resources in your data center or Multiprotocol Label Switching (MPLS) networks—also receive a speed boost, going from 2.5 Gbps in 2016 to 15 Gbps in 2019.
I hope you've enjoyed this tour of all the networking improvements "under the hood" in Windows Server 2019. I can't wait to see some of these in action on my clients' networks.