Unless you’ve been totally offline for the last year, you’ll have heard of containers and Docker. Some think they’re going to replace virtual machines; others consider them a nice addition, but don’t see them displacing VMs. We’ve got many articles on both Windows and Linux containers here at 4Sysops: here, here, here, and here, for instance.
But just as when virtualization was starting to gain traction, once the initial “cool factor” of the technology wears off, managing it becomes the focus. After all, the ability to run a handful of VMs on a hypervisor isn’t the difficult part; the challenge is running hundreds of VMs across many hosts efficiently. The same applies to containers: one host with a few demo containers can be managed by hand, but thousands of containers created and scheduled across many hosts is the real test.
The open source community has stepped up to the plate: the Apache Software Foundation has Mesos, a distributed cluster manager. Mesosphere is a company that extended the base Mesos project, first into its own Mesosphere platform and more recently into the Datacenter Operating System (DC/OS). DC/OS is a collaboration of different players, including Microsoft, Autodesk, Cisco, and others. Docker itself has Swarm to manage a large number of distributed containers.
Of course, you can download, install, and configure each of these projects to your liking on your on-premises servers or in Azure IaaS VMs. But Microsoft knows that the future is in removing every roadblock to “container nirvana”: write your code in a local container, then just upload it to a cloud service that runs it for you with orchestration and scaling as required. We’re not there yet, but the recently released, generally available Azure Container Service is definitely a step in the right direction.
I should point out that I’m a systems administrator, not a developer. I don’t write code, but I know that in this cloudy future we’re heading for, I need to understand how containers work, what makes them tick, and how to deploy them at scale. At some stage, that’ll be part of my job description, and I want to be ready.
Since Windows Containers haven’t been released yet, ACS is all about Linux containers. Azure Container Service runs Ubuntu 16.04 LTS in the VMs today. I expect support for Microsoft’s own two flavors of containers (Windows and Hyper-V) will be added once Windows Server 2016 is released.
Azure Container Service ^
Many months ago, Azure Container Service started its life as a (very complex) Azure Resource Manager (ARM) template; today, it’s a full-fledged provider in the ARM framework.
Feel free to follow along as I create an ACS service. Click New, scroll down to Containers, and select Azure Container Service. You’ll need to define a username, location, and resource group name. To connect to and work with your ACS service from Windows, you’ll need PuTTY, and to generate SSH keys, you can use PuTTYgen. If you pick the MSI package on this page, it’ll install both. The tutorial here gives step-by-step instructions.
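If you’re working from Linux, macOS, or a Windows box with OpenSSH installed, you can skip PuTTYgen and generate the key pair with ssh-keygen instead. A minimal sketch; the file name and comment are placeholders of my choosing:

```shell
# Generate a 2048-bit RSA key pair with no passphrase (fine for a test
# cluster; use a passphrase for anything production-like).
# ./acs_key and the "acsadmin" comment are placeholder names.
ssh-keygen -t rsa -b 2048 -N "" -C "acsadmin" -f ./acs_key

# The public half is what you paste into the ACS creation wizard.
cat ./acs_key.pub
```

The private key (./acs_key) stays on your machine and is what you’d load into PuTTY (after converting it with PuTTYgen) or pass to ssh when connecting later.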
In step 2, you choose between Docker Swarm and DC/OS; each ACS cluster can only use one or the other. Swarm will appeal to people who’ve already invested time in learning Docker’s command line. Step 3 covers the size of your cluster: the number of agents, the VM size for the agents, and how many masters you need (1 for testing, 3 for a production deployment, and 5 if you really, really need it highly available). Note that the masters are always D2 VMs.
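Under the hood, the wizard is just filling in the parameters of the ACS ARM template, so the same choices can be captured in a parameters file for repeatable deployments. A hedged sketch only: the parameter names are modeled on the public ACS quickstart templates and may differ in your template version, and all values are placeholders:

```json
{
  "orchestratorType": { "value": "DCOS" },
  "masterCount": { "value": 3 },
  "agentCount": { "value": 2 },
  "agentVMSize": { "value": "Standard_D2" },
  "dnsNamePrefix": { "value": "sysopsacs" },
  "linuxAdminUsername": { "value": "4SysACSAdmin" },
  "sshRSAPublicKey": { "value": "ssh-rsa AAAA... acsadmin" }
}
```

Deploying the template with a file like this gives you the same cluster as the portal wizard, but in a form you can check into source control.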
Step 4 summarizes your settings, and step 5 is the final go button.
Managing your Azure Container Service cluster ^
Deployment takes a while. Once the cluster is up and running, you use PuTTY (or an equivalent tool) to connect (see the step-by-step instructions here). Your username is username@publicdns; mine, for instance, was 4SysACSAdmin@sysopsacsmgmt.australiaeast.cloudapp.azure.com. The public DNS name will vary depending on the location you picked in step 1. Once PuTTY has created the SSH tunnel from your PC to the cluster, you can open your favorite browser and connect to http://localhost/ for DC/OS, http://localhost/marathon for Marathon, and http://localhost/mesos for Mesos.
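If you prefer OpenSSH over PuTTY, the same tunnel can be opened with a single ssh command. The snippet below just composes that command from my deployment’s values (substitute your own admin user and master DNS name); the masters expose SSH on port 2200, per the ACS connection docs:

```shell
# Placeholder values from my walkthrough - replace with your own.
ACS_USER="4SysACSAdmin"
ACS_DNS="sysopsacsmgmt.australiaeast.cloudapp.azure.com"

# -f: go to background, -N: no remote command, -L: forward local
# port 80 to port 80 on the master, which serves the DC/OS web UI.
echo "ssh -fNL 80:localhost:80 -p 2200 ${ACS_USER}@${ACS_DNS}"
```

Run the printed command (with your private key loaded or passed via -i), and http://localhost/ in your browser then lands on the DC/OS UI exactly as with the PuTTY tunnel.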
In a nutshell, Mesos takes all the underlying hosts you give it and presents them as a single big server with lots of cores and memory. ACS also spins up Chronos: instead of scheduling jobs on a single instance (cron), Chronos schedules jobs across the entire cluster. Marathon is also included; it runs the application containers (as well as Chronos), and if a container fails, it’ll start another one somewhere in the cluster. Universe is a package repository with a GUI where you can install DC/OS services such as Cassandra, NGINX, OpenVPN, Storm, and others. All these components are configured and ready when your ACS cluster is up and running—hence the moniker Azure Container Service.
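To see Marathon in action, you can post an application definition to its REST API through the same tunnel. A minimal sketch: the app id and Docker image here are placeholders I picked, while id, container, cpus, mem, and instances are standard Marathon app-definition fields:

```json
{
  "id": "/demo-nginx",
  "container": {
    "type": "DOCKER",
    "docker": { "image": "nginx:latest", "network": "BRIDGE" }
  },
  "cpus": 0.1,
  "mem": 128,
  "instances": 2
}
```

Saved as app.json, this could be submitted with curl -X POST http://localhost/marathon/v2/apps -H "Content-Type: application/json" -d @app.json. Marathon then keeps two instances of the container running and, if an agent dies, reschedules them elsewhere in the cluster—exactly the failover behavior described above.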
There are several other projects that aim to do the same thing as the building blocks of ACS. The team has said that it’s too early to call a winner and that several different technologies suited for different approaches will co-exist for a long time to come.
With Service Fabric being the underlying manager for a lot of Azure’s own services, and with its availability both on-premises and in other clouds, a comparison is appropriate. Service Fabric is internal Microsoft technology, not built on open source components, and it’s battle-hardened at hyperscale. Whereas ACS relies on Linux containers today, requiring code that’ll run on Linux, Service Fabric can be used to run any executable, including ASP.NET Core, Node.js, Java VMs, etc.
If you’re looking to write a new microservices-based, containerized application or cloud service, do your due diligence in picking a platform, especially considering the skill sets of your operations and developer teams. And if you’re testing ACS, remember to delete the resource group when you’re done, because at a minimum you’ll have a D2 and an A0 VM running, possibly more, chewing up dollars. There’s no charge for ACS itself, only for the underlying storage and compute resources.
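That cleanup can be scripted as a guard against forgotten VMs. A sketch assuming the Azure CLI 2.0 (az) is installed and logged in; the resource group name is a placeholder, and the echo is a safety catch so nothing is deleted until you remove it:

```shell
# Placeholder resource group name - use the one from step 1.
RG="sysops-acs-rg"

# Deleting the resource group removes the masters, agents, storage,
# and networking in one operation. --yes skips the confirmation
# prompt; --no-wait returns immediately. Remove the leading 'echo'
# to actually run the deletion.
echo az group delete --name "${RG}" --yes --no-wait
```

Because ACS itself is free, once the resource group is gone, so is the entire bill for the cluster.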
No cloud technology has yet achieved the ultimate goal where you can simply write your code and upload it, and have it deployed, managed, monitored, and scaled elastically without further effort. With ACS and its involvement in DC/OS, Microsoft is definitely in the race toward that goal.