By Paul Schnackenburg
In this article, we’ll look further into Nano, why Microsoft is pursuing this path, and the scenarios where Nano might work in your business, along with some warnings.
Why Nano Server? ^
In many ways, Nano is a further evolution of Server Core. This installation option, which gives you a server with only a command-line interface, has been around since Windows Server 2008. In Windows Server 2012, the ability to remove the GUI from a full installation and turn an already installed OS into Server Core improved its appeal. By Windows Server 2012 R2, almost every role that can be installed on Windows Server is available on Core.
In contrast, Nano will be an install-time option. If you want the full server, you have to format and reinstall; there’s no way to “upgrade” from Nano to Server Core/GUI.
So, even though Microsoft has had Core as an option since 2008 and has been touting its advantages, what’s been the uptake? According to two recent surveys by Aidan Finn on his blog, close to 80% of IT pros put a full GUI on their Hyper-V servers when they have a choice (when they’re not using the free Hyper-V server, which doesn’t come with a GUI), and more than 75% use GUI versions for their application servers.
Perhaps the problem is that Core is trying to be everything to everyone? In contrast, Nano will only have two roles to play when it’s released in 2016: as an infrastructure server (Hyper-V hosts/clusters and Scale-Out File Server) and as an application platform for “born-in-the-cloud” applications (more on this below). Microsoft does say that the future of Windows Server is Nano, and more roles will be added going forward.
In contrast to Core, Nano doesn’t come with any roles built in. You have to add the roles/features, and the bits for them don’t live in the image; you have to supply them at install time, just like when you install applications.
No local login ^
Nano is completely headless. There’s no way to log on locally at all. You have to manage the server remotely using Server Manager (from a GUI installation of Windows Server 2016 TP2), PowerShell DSC, or a web-based interface (not available in TP2) based on the “Ibiza” Azure portal framework. Third-party DevOps toolchains such as Chef will also play nicely with Nano.
The Azure-flavored, web-based interface offers WAN-friendly remote replacements for tools such as Task Manager, Registry Editor, Event Viewer, Device Manager, Control Panel, Performance Monitor, Disk Management, and User/Group Management. Note that the Device Manager functionality relies on a new PowerShell module called PNP that allows you to use PowerShell to manage devices and drivers. These tools will also be available to manage Core and full GUI installations of Windows Server 2016 (and probably “legacy” Windows Server 2012/2012 R2 as well).
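As a rough sketch of what that remote device management could look like, assuming the module exposes the same cmdlets as the PnpDevice module in current Windows builds (the server name NanoSrv01 is a placeholder, and availability of these cmdlets on Nano TP2 is an assumption):

```powershell
# List problem devices on a remote Nano server over PowerShell remoting.
# NanoSrv01 is a placeholder name; Get-PnpDevice is from the PnpDevice
# module shipping in current Windows - treat this as a sketch for TP2.
$session = New-PSSession -ComputerName NanoSrv01
Invoke-Command -Session $session -ScriptBlock {
    Get-PnpDevice -Status Error |            # devices reporting a problem
        Select-Object Class, FriendlyName, InstanceId
}
Remove-PSSession $session
```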
Be aware that the version of PowerShell that runs on Nano is called “Core PowerShell” because it runs on the slimmed-down next version of the .NET Framework called CoreCLR. CoreCLR is lean (approximately 55 MB), composable (just pick the functionality you need), open source, and cross-platform. In practice, this means that some cmdlets aren’t available (notably, PowerShell Workflow), and, although a Nano Server can take part in a workflow, you can’t author one on Nano. There’s also no support for Get-WmiObject and the other WMI cmdlets, but that should be easy to deal with because Get-CimInstance works. You also can’t use Add-Type on Nano; however, the new class functionality introduced in PowerShell 5.0 should take care of that.
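For example, a query you’d normally run with Get-WmiObject maps directly onto Get-CimInstance, and a small helper type that would have needed Add-Type can be written as a PowerShell 5.0 class instead (a minimal sketch):

```powershell
# CIM replaces the WMI cmdlets on Nano: same classes, different cmdlet.
Get-CimInstance -ClassName Win32_OperatingSystem |
    Select-Object Caption, Version, LastBootUpTime

# Instead of Add-Type with inline C#, define a PowerShell 5.0 class:
class DiskReport {
    [string] $Drive
    [uint64] $FreeBytes
    [string] ToString() { return "$($this.Drive): $($this.FreeBytes) bytes free" }
}
```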
Nano is forcing Microsoft to make sure that all of its management tools work remotely. New setup and boot event logging functionality gathers ETL logs on a remote server that you configure through BCDEdit.
Although they’re not available in TP2, agents for Virtual Machine Manager (VMM), Operations Manager (OM), Azure Operational Insights, and the DSC Local Configuration Manager are already in the works for Nano.
Born-in-the-cloud platform ^
Part of the reason for Nano becomes clear when you consider Windows containers. These are a direct result of the meteoric rise of Docker and containers in general on the Linux platform.
Windows containers will come in two flavors: normal ones that will use a shared kernel (just like Linux containers), and Hyper-V containers that have a separate kernel for each container but more secure separation between each container. Either flavor of container can be based on Nano or Core (but not a full GUI server).
Developers will target either Nano or Server when writing their applications. Visual Studio 2015 already has provisions for informing code writers when they’re using an API that’s not available in Nano. Any server application written for Nano will run on both Nano and Server Core. Nano is 64-bit only and does not have the WOW64 translation layer for 32-bit applications.
At deployment time, the application can be targeted at a physical server, a VM, or a container (either flavor). My prediction is that the most popular OS inside containers on Windows will be Nano. And the “born-in-the-cloud application platform” part becomes clear when you see what Nano can run or be managed by:
- Python 3.5
- Java (OpenJDK)
- Ruby (2.1.5)
This is all due to the magic of Reverse Forwarders, a way to project the DLLs that an application is expecting into Nano.
Not nano in performance ^
Based on all we’ve covered so far, the whole point of Nano is to give you a very small and fast OS where you install just the bits you need for the particular role the server will play. Nano installed is about 410 MB, compared to more than 6 GB for Core; this much smaller image leads to faster deployment (40 seconds vs. 5 minutes) and boot times. These figures are from early performance testing and should improve even further as Nano nears release next year. The minimal number of bits also means fewer necessary patches and fewer reboots.
Adding packages to Nano Server
Trying Nano Server ^
Now that TP2 is out, I thought it’d be interesting to take Nano for a spin. Some official guidance exists here. Be aware that the Convert-WindowsImage.ps1 script won’t run on TP2 as supplied; it throws an error because its OS version check doesn’t recognize TP2 as Windows 8 or later. Find the line in the script (line 3044) that reads $isWin8 = (($os.Version -ge 6.2) -and ($os.BuildNumber -ge $lowestSupportedBuild)) and change it to $isWin8 = $True.
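In context, the change looks like this (the line number refers to the copy of the script on the TP2 media):

```powershell
# Convert-WindowsImage.ps1, line 3044 - the original OS version check:
#   $isWin8 = (($os.Version -ge 6.2) -and ($os.BuildNumber -ge $lowestSupportedBuild))
# Hard-code the result so the script also runs on TP2:
$isWin8 = $True
```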
The first step in preparing to run Nano on a physical server is to convert the WIM install file from the TP2 ISO, in the folder called NanoServer. Then, use DISM from the Sources folder to mount the VHD and install the required packages (remember, nothing is built into the base Nano image). If you need specific drivers, you can install the full version of TP2 on a server, inventory what drivers Device Manager requires, and use DISM to put those drivers into the VHD.
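The conversion and package-injection steps can be sketched like this (the paths, the image index, and the Compute package name are examples based on the TP2 media layout; check Get-Help on your copy of Convert-WindowsImage.ps1 for the exact parameter names):

```powershell
# Convert the Nano WIM from the TP2 ISO (mounted as D:) into a bootable VHD.
.\Convert-WindowsImage.ps1 -SourcePath D:\NanoServer\NanoServer.wim `
    -VHD C:\Nano\Nano.vhd -Edition 1

# Mount the VHD and add the roles you need - e.g., Hyper-V (Compute) -
# plus any OEM drivers for physical hardware. Nothing is in the base image.
dism /Mount-Image /ImageFile:C:\Nano\Nano.vhd /Index:1 /MountDir:C:\Nano\mount
dism /Add-Package /Image:C:\Nano\mount `
    /PackagePath:D:\NanoServer\Packages\Microsoft-NanoServer-Compute-Package.cab
dism /Add-Driver /Image:C:\Nano\mount /Driver:C:\Nano\drivers /Recurse
dism /Unmount-Image /MountDir:C:\Nano\mount /Commit
```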
You then customize the unattend.xml file and use it to give the VHD a server name and organization as well as specify the administrator password. Use BCDEdit to add another entry to the boot manager to be able to boot Nano from the VHD on a host. After the Nano Server is up and running, you can use Offline Domain Join on the command line to join it to a domain.
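The boot entry and the offline domain join can be sketched as follows (contoso.com, NanoSrv01, the VHD path, and the {<new-guid>} placeholder are all examples, not values from the source):

```powershell
# Add a boot-from-VHD entry so the host can boot into Nano.
bcdedit /copy '{current}' /d "Nano Server TP2"      # note the GUID it prints
bcdedit /set '{<new-guid>}' device "vhd=[C:]\Nano\Nano.vhd"
bcdedit /set '{<new-guid>}' osdevice "vhd=[C:]\Nano\Nano.vhd"

# Offline domain join: provision a blob on a domain-joined machine...
djoin /Provision /Domain contoso.com /Machine NanoSrv01 /SaveFile C:\odjblob
# ...copy the blob to the Nano server, then consume it there:
djoin /RequestODJ /LoadFile C:\odjblob /WindowsPath C:\Windows /LocalOS
# A reboot completes the join.
```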
I’ve spent quite a few hours trying to get Nano to run (physical installation) on one of my Hyper-V servers with no success (it gets stuck on running startup diagnostics) as well as setting up a VM running Nano. It kind of reminds me of my first experiences with getting Linux up and running more than 15 years ago. A script is available here that can help you get a Nano VM up and running. It’s very early days, though; I expect these difficulties will be mitigated in future builds.
Some good sessions exist on Channel 9 for Nano, available here.
I think there’s a disconnect in Microsoft’s thinking with respect to servers. Yes, from a large cloud provider’s point of view, Nano makes perfect sense (and competes nicely with Linux; after all, Amazon, Google, and others all run on flavors of Linux). If I’m installing 50 or 500 servers, taking the time to inject exactly the right drivers into a customized Hyper-V host OS and coming up with an ultra-efficient image is worthwhile. But I think many IT pros see their servers differently, even in today’s partly cloudy world.
And for Nano to really work well, Microsoft is going to have to step up the “certified for Windows Server” hardware certification program so that NIC drivers and other drivers from third-party vendors are rock solid—not the constant source of troubleshooting that they are today. Until then, most IT pros will want to have an easy way to fix those problems on the box itself, not remotely, especially if the NIC driver is causing the issue.