Continuing our series about Azure networking, in this article we're going to look at the inbuilt load balancer in Azure and how you would use it.

Azure has had a software load balancer (SLB) built in for a long time; in fact, it's the same code as the SLB provided in the software-defined networking (SDN) stack in Windows Server 2016. However, this SLB (now called "Basic") only supports a single availability set, a grouping of VMs for high availability (HA). And since an availability set supports only 100 nodes, so does the Basic SLB. It also has no concept of availability zones, which recently became generally available (GA) in Azure. These zones spread workloads across separate data centers in the same region, providing very high availability. The new SLB (the "Standard" tier) fully supports load balancing across zones, as well as being itself highly available across zones.

The Standard tier SLB does not require availability sets (and thus can work with standalone VMs). However, you can use it with multiple availability sets if you need to. It currently scales up to 1,000 backend instances. The health metrics are also much better for both the frontend and backend tiers, and the Standard SLB supports HA ports. The latter feature makes it possible to run third-party virtual appliances in Azure in a highly available manner, something that was very fiddly previously. Another change is that network security groups (NSGs), which were optional before, are now required for the SLB.

Before we set one up, let's establish some parameters for the SLB. The Standard SLB can serve as a public (externally facing) load balancer or an internal load balancer. We can connect a VM to one public and one internal SLB. An SLB resource always contains a frontend, a rule, a health probe, and a backend pool definition. As mentioned, the Basic SLB is scoped to a single availability set, whereas the Standard SLB is scoped to a virtual network (vNet). It's flow based, meaning it's aware of the end-to-end activity on the network.
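
To make that anatomy concrete, here's a minimal sketch of what an SLB resource boils down to when expressed as the request body used by the Azure REST API or the Azure SDK for Python. The names, region, and public IP reference are placeholders, and a deployable rule would also reference the frontend, pool, and probe by resource ID:

```python
# Conceptual sketch of the four pieces every SLB resource defines,
# written as the snake_case request body the Python SDK accepts.
# All names, the region, and the public IP reference are placeholders.
slb_definition = {
    "location": "westeurope",
    "sku": {"name": "Standard"},                       # "Basic" or "Standard"
    "frontend_ip_configurations": [{                   # the frontend
        "name": "frontend",
        "public_ip_address": {"id": "<public-ip-resource-id>"},
    }],
    "backend_address_pools": [{"name": "backendpool"}],  # the backend pool
    "probes": [{                                        # the health probe
        "name": "http-probe",
        "protocol": "Http",
        "port": 80,
        "request_path": "/",
    }],
    "load_balancing_rules": [{                          # the rule tying it together
        "name": "http-rule",
        "protocol": "Tcp",
        "frontend_port": 80,
        "backend_port": 80,
        # A deployable rule also references the frontend, pool, and probe by ID.
    }],
}
```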

The Standard SLB has a 99.99% service-level agreement (SLA) if you have at least two healthy VMs (the Basic tier has no SLA). And if you're really looking to scale out, be aware of the default limits, as well as what you can increase them to by contacting support. The most pertinent default might be 10 frontend IP configurations (out of a maximum of 600) for the Standard SLB.

If you're currently using the Basic SLB and you want to migrate to Standard, it's really easy: just create a Standard SLB and recreate all your rules manually. Then create or update the NSGs to whitelist the traffic you want to allow, remove the Basic SLB, and attach all resources to the new SLB. Really, there's no migration story at all beyond manually recreating every rule and setting. The Basic SLB is free, whereas the Standard SLB costs money.
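
If you want a head start on that manual recreation, a small script can at least inventory what the Basic SLB contains so you have a checklist to work from. This is a rough sketch using the Azure SDK for Python (azure-identity and azure-mgmt-network); the subscription, resource group, and load balancer names are placeholders you'd replace with your own:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Placeholder values -- substitute your own subscription and resource names.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "myResourceGroup"
BASIC_LB_NAME = "myBasicLB"

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
lb = client.load_balancers.get(RESOURCE_GROUP, BASIC_LB_NAME)

# Dump the settings you will have to recreate by hand on the Standard SLB.
print("SKU:", lb.sku.name if lb.sku else "Basic")
for fe in lb.frontend_ip_configurations or []:
    print("Frontend:", fe.name)
for pool in lb.backend_address_pools or []:
    print("Backend pool:", pool.name)
for probe in lb.probes or []:
    print(f"Probe: {probe.name} {probe.protocol} port {probe.port}")
for rule in lb.load_balancing_rules or []:
    print(f"Rule: {rule.name} {rule.protocol} "
          f"{rule.frontend_port} -> {rule.backend_port}")
```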

Note that if you're publishing web resources and want TLS termination (what we used to call SSL offloading), you should add an Application Gateway. And if you're deploying a global workload, Traffic Manager provides DNS load balancing. We'll look at these solutions in the next article—you can combine them with either flavor of SLB.

Setting up a Standard Load Balancer

To follow along, you'll need an Azure account. We're going to create two Windows VMs running IIS and then load balance them. Log in at portal.azure.com, click Create a resource, go to Networking, and pick Load Balancer.

Create a Standard SLB

Pick or create a public IP address for your selected region and create a new resource group (RG). In this case we're going for a Public SLB. After creating it, go back to Create a resource, Networking, and pick vNet. Select the RG you created for the load balancer (Use existing) and pick your address space for the vNet and the subnet.
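
If you'd rather script these supporting resources, the sketch below creates the resource group, a Standard-SKU public IP (which a Standard SLB's frontend requires), and the vNet with one subnet using the Azure SDK for Python. All names and address ranges are illustrative placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.resource import ResourceManagementClient

# Placeholder values -- adjust to your environment.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "slb-demo-rg"
LOCATION = "westeurope"

credential = DefaultAzureCredential()
resource_client = ResourceManagementClient(credential, SUBSCRIPTION_ID)
network_client = NetworkManagementClient(credential, SUBSCRIPTION_ID)

# Resource group to hold the SLB, vNet, and VMs.
resource_client.resource_groups.create_or_update(
    RESOURCE_GROUP, {"location": LOCATION})

# A Standard SLB needs a Standard-SKU (static) public IP for its frontend.
network_client.public_ip_addresses.begin_create_or_update(
    RESOURCE_GROUP, "slb-demo-pip",
    {
        "location": LOCATION,
        "sku": {"name": "Standard"},
        "public_ip_allocation_method": "Static",
    }).result()

# vNet with a single subnet for the backend VMs.
network_client.virtual_networks.begin_create_or_update(
    RESOURCE_GROUP, "slb-demo-vnet",
    {
        "location": LOCATION,
        "address_space": {"address_prefixes": ["10.10.0.0/16"]},
        "subnets": [{"name": "backend", "address_prefix": "10.10.1.0/24"}],
    }).result()
```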

Again, click Create a resource, Compute, Windows Server 2016 Datacenter, and create a VM in the same RG as before. Note that the A-series (which I would normally recommend for non-production or test workloads) doesn't support load balancing. The B-series (Burstable) does support load balancing, so that's an option; I picked D2s_v3 for this walkthrough.

Creating VM1 for the backend

On the third step, make sure you create a new availability set (AS) and select the correct vNet and subnet for the VM. It's also crucial that you pick None for the Public IP address for the VM (which is not the default). Repeat these steps to create a second identical VM (called "VM2") in the same AS, vNet, and subnet.

Creating VM1 step 3
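
The "no public IP on the VM" requirement is easy to miss in the portal; scripted, it simply means the NIC's IP configuration references only the subnet. Here's a rough sketch of the availability set and a NIC without a public IP, using azure-mgmt-compute and azure-mgmt-network; the VM itself would still be created separately (in the portal or in code), and all names are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "slb-demo-rg"
LOCATION = "westeurope"

credential = DefaultAzureCredential()
compute_client = ComputeManagementClient(credential, SUBSCRIPTION_ID)
network_client = NetworkManagementClient(credential, SUBSCRIPTION_ID)

# Availability set that both backend VMs will join.
compute_client.availability_sets.create_or_update(
    RESOURCE_GROUP, "slb-demo-as",
    {
        "location": LOCATION,
        "sku": {"name": "Aligned"},           # required for managed disks
        "platform_fault_domain_count": 2,
        "platform_update_domain_count": 5,
    })

subnet = network_client.subnets.get(RESOURCE_GROUP, "slb-demo-vnet", "backend")

# NIC for VM1: note there is no public_ip_address here -- that is the point.
network_client.network_interfaces.begin_create_or_update(
    RESOURCE_GROUP, "vm1-nic",
    {
        "location": LOCATION,
        "ip_configurations": [{
            "name": "ipconfig1",
            "subnet": {"id": subnet.id},
        }],
    }).result()
```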

Unlike a Basic SLB, the Standard SLB requires that you whitelist the traffic you want to allow using an NSG, in this case HTTP (in a production scenario you'd use HTTPS/port 443) and Remote Desktop Protocol (RDP). Click All resources, find your NSG, click on it, go to Settings -> Inbound security rules, and click Add. Select Service Tag as the source, which allows you to pick Internet as the tag; change the Destination port ranges to 80 and the Protocol to TCP, and click OK.

Adding an NSG rule for HTTP
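
The same rule can be added in code. The sketch below adds the HTTP rule to an existing NSG with the Azure SDK for Python; the NSG name and priority are placeholders (the portal usually creates one NSG per VM, so repeat for each), and you'd add a second, similar rule for RDP on TCP 3389 if you need it:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "slb-demo-rg"
NSG_NAME = "vm1-nsg"                    # placeholder: the NSG created with the VM

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Allow inbound HTTP from the Internet service tag to the backend VMs.
client.security_rules.begin_create_or_update(
    RESOURCE_GROUP, NSG_NAME, "Allow-HTTP-Inbound",
    {
        "protocol": "Tcp",
        "direction": "Inbound",
        "access": "Allow",
        "priority": 100,
        "source_address_prefix": "Internet",   # the Internet service tag
        "source_port_range": "*",
        "destination_address_prefix": "*",
        "destination_port_range": "80",
    }).result()
```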

Click on Microsoft Azure in the top left, select the first VM you created, and then click Connect in the dashboard for the VM. Log in through RDP, go to Server Manager, and install the Web Server (IIS) role. Repeat this for VM2. (In a production scenario you would install only the IIS components you need to minimize your attack surface.)
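
Logging in to each VM works fine, but you could also push the role installation from outside with the VM run command feature. A rough sketch with azure-mgmt-compute, assuming the VM names from this walkthrough and a placeholder subscription and resource group:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "slb-demo-rg"

compute_client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Install the Web Server (IIS) role on both backend VMs without an RDP session.
for vm_name in ("VM1", "VM2"):
    poller = compute_client.virtual_machines.begin_run_command(
        RESOURCE_GROUP, vm_name,
        {
            "command_id": "RunPowerShellScript",
            "script": ["Install-WindowsFeature -Name Web-Server"],
        })
    result = poller.result()
    print(vm_name, result.value[0].message if result.value else "done")
```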

As I was writing this article, I had trouble at this stage. The steps in the official documentation didn't work, so I contacted Azure support. After some back and forth, it turned out they had simply copied the steps for the Standard LB from the Basic LB. Note that they may have updated the documentation by the time you read this. The most important step is to make sure your VMs don't have a public IP address; if they do, you can't create the backend address pool.

Now the fun part starts: creating a backend address pool. Click All resources, find the load balancer, and click on it. Then, under Settings, click Backend pools and click Add. Give it a name, associate it with your AS, and click Add a target network IP configuration.

Adding a backend pool
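
Scripted, the same step means defining the pool on the load balancer and then pointing each VM's NIC IP configuration at it (the portal does the association for you via the availability set). A rough sketch with the Azure SDK for Python, assuming a recent azure-mgmt-network and placeholder resource names:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import BackendAddressPool

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "slb-demo-rg"
LB_NAME = "slb-demo-lb"

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Fetch the load balancer, add the backend pool, and write the resource back.
lb = client.load_balancers.get(RESOURCE_GROUP, LB_NAME)
lb.backend_address_pools = (lb.backend_address_pools or []) + [
    BackendAddressPool(name="backendpool")]
lb = client.load_balancers.begin_create_or_update(
    RESOURCE_GROUP, LB_NAME, lb).result()
pool_id = lb.backend_address_pools[0].id

# Point each VM's NIC IP configuration at the pool; this is what actually
# places the VMs in the backend.
for nic_name in ("vm1-nic", "vm2-nic"):     # placeholder NIC names
    nic = client.network_interfaces.get(RESOURCE_GROUP, nic_name)
    nic.ip_configurations[0].load_balancer_backend_address_pools = [
        BackendAddressPool(id=pool_id)]
    client.network_interfaces.begin_create_or_update(
        RESOURCE_GROUP, nic_name, nic).result()
```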

We then need to create a health probe in the list of settings in our load balancer. Click Add, give it a name, pick HTTP and port 80, then select the interval between each probe connection (perhaps 10 seconds for this lab) and the number of failures before the system considers the node down, in this case 2. Note that the probe supports HTTP but not HTTPS.

Creating a health probe
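
The equivalent step in code appends a probe to the load balancer resource and writes it back. A rough sketch, again with placeholder names and a recent azure-mgmt-network assumed:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import Probe

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "slb-demo-rg"
LB_NAME = "slb-demo-lb"

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Fetch the load balancer, append an HTTP probe, and update the resource.
lb = client.load_balancers.get(RESOURCE_GROUP, LB_NAME)
lb.probes = (lb.probes or []) + [Probe(
    name="http-probe",
    protocol="Http",
    port=80,
    request_path="/",          # the probe expects an HTTP 200 from this path
    interval_in_seconds=10,    # probe every 10 seconds
    number_of_probes=2,        # two consecutive failures mark the node down
)]
client.load_balancers.begin_create_or_update(RESOURCE_GROUP, LB_NAME, lb).result()
```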

Finally, we need to create a load balancer rule to control the distribution of traffic; this ties together the elements we just created. Click Load balancing rules -> Add and give it a name. Use TCP, with 80 for both the frontend port and the backend port, select your backend pool and health probe, and click OK.

Adding a load balancing rule
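
In code, the rule is where everything comes together: it references the frontend, backend pool, and probe by resource ID. A rough sketch, same assumptions and placeholder names as before:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import LoadBalancingRule, SubResource

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "slb-demo-rg"
LB_NAME = "slb-demo-lb"

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
lb = client.load_balancers.get(RESOURCE_GROUP, LB_NAME)

# The rule references the frontend, backend pool, and probe by resource ID,
# which is what ties the pieces together.
rule = LoadBalancingRule(
    name="http-rule",
    protocol="Tcp",
    frontend_port=80,
    backend_port=80,
    frontend_ip_configuration=SubResource(id=lb.frontend_ip_configurations[0].id),
    backend_address_pool=SubResource(id=lb.backend_address_pools[0].id),
    probe=SubResource(id=lb.probes[0].id),
)
lb.load_balancing_rules = (lb.load_balancing_rules or []) + [rule]
client.load_balancers.begin_create_or_update(RESOURCE_GROUP, LB_NAME, lb).result()
```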

Open another browser tab and head over to the public IP address of your SLB; you should see the default IIS page. Turn off one of the VMs and make sure the page still displays. Have a look at the monitoring statistics for the SLB.
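
If you want to watch the failover happen rather than just refreshing the browser, a small polling loop against the SLB's public IP does the trick. Only the Python standard library is needed, and the IP below is a placeholder:

```python
import time
import urllib.request

PUBLIC_IP = "203.0.113.10"   # placeholder -- use your SLB's public IP

# Poll the load-balanced endpoint once a second and report the result,
# so you can see what happens while you shut one of the VMs down.
while True:
    try:
        with urllib.request.urlopen(f"http://{PUBLIC_IP}/", timeout=3) as resp:
            print(time.strftime("%H:%M:%S"), "HTTP", resp.status)
    except Exception as exc:
        print(time.strftime("%H:%M:%S"), "request failed:", exc)
    time.sleep(1)
```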

Other improvements

You may be running third-party virtual appliance services in Azure (they're costly, but sometimes it makes sense to use the same security solutions in the cloud as on premises). With the Basic SLB, you had to build a custom solution with ZooKeeper to manage the traffic so that if a virtual appliance failed, a standby one could take over. Not anymore: the Standard SLB offers HA ports, which handle all of this for you, including the ability to run active-active configurations for scalability.
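
Under the hood, an HA ports setup is just a load balancing rule on an internal Standard SLB with protocol All and port 0 on both sides, which means "balance every flow on every port to the appliance pool." A hedged sketch of what that rule looks like as an SDK/REST request body, with placeholder resource IDs:

```python
# An HA-ports rule (internal Standard SLB only): protocol "All" and port 0
# on both sides load balances every TCP/UDP flow to the appliance pool.
ha_ports_rule = {
    "name": "ha-ports-rule",
    "protocol": "All",
    "frontend_port": 0,
    "backend_port": 0,
    "frontend_ip_configuration": {"id": "<internal-frontend-id>"},  # placeholder
    "backend_address_pool": {"id": "<nva-backend-pool-id>"},        # placeholder
    "probe": {"id": "<probe-id>"},                                  # placeholder
}
```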

You can either assign public IP addresses to your nodes, or you can use the SLB to funnel outbound traffic out from behind a public IP address. The latter case uses source network address translation (SNAT) and port address translation (PAT). If you have a lot of outgoing traffic (perhaps to another service in Azure), it can exhaust the ephemeral ports SNAT uses. Mitigations include connection reuse (on by default in HTTP/1.1) or connection pooling.
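
On the client side, connection reuse is the easiest of those mitigations to apply. As a simple illustration (using the popular requests library, nothing Azure-specific, with a hypothetical endpoint), a shared session keeps TCP connections alive across calls instead of opening a fresh one per request, so far fewer SNAT ports are consumed:

```python
import requests

# One Session reuses pooled keep-alive connections across requests,
# instead of opening (and SNAT-translating) a new connection every time.
session = requests.Session()

def call_backend_service(path: str) -> int:
    # Placeholder endpoint -- imagine another Azure service you call heavily.
    response = session.get(f"https://example.com{path}", timeout=5)
    return response.status_code

for _ in range(100):
    call_backend_service("/api/health")
```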

The metrics reported are comprehensive, showing data such as virtual IP address (VIP) and dynamic IP address (DIP) availability, SYN packets, SNAT connections, and byte/packet counters.

Conclusion

These sorts of inbuilt platform services in Azure provide capabilities beyond the reach of any small-to-medium business (SMB) running its own on-premises infrastructure. The fact that even large enterprises would be hard-pressed to implement these services themselves is (to me) the strongest sign that the cloud is the future. Implementing, testing, verifying, and maintaining an SLB solution on premises would be challenging for any network team. I know there are hardware solutions such as F5 BIG-IP, but they're not cheap. In the cloud, it's just a few clicks.

In my next post I will discuss the Azure Application Gateway and Web Application Firewall (WAF).
