So far (I’m expecting more features and scale figures to be revealed before the final release), this preview brings us a new VM configuration file format and version, new production checkpoints, rolling upgrades of clusters, integration service updates through Windows Update, hot-added NICs and memory, and Linux Secure Boot. Best of all, Connected Standby now works on client Hyper-V.
Note that this article is written based on fairly early code in the Technical Preview. A lot can change between now and the final version.
VM configuration file format and version ^
VM information is now stored in the (not directly editable) file format VMCX, whereas the running state of each VM is stored in the new VMRS format. The point of these new file types is that they are more resilient to storage failures; how that will actually play out in real-life production scenarios remains to be seen. Microsoft does have some experience here; one of the main “selling points” of the VHDX virtual hard drive format (apart from the capacity increase) introduced in Windows Server 2012 was the exact same resiliency.
VMs created in Windows Server 2012 R2 have configuration files that are in version 5 format and are stored in an XML file. These VMs can run on both 2012 R2 and the Technical Preview. Once a VM has been moved to a Technical Preview Hyper-V server (and has been shut down), its configuration can be upgraded to version 6 using PowerShell, like this:
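The upgrade itself is a single cmdlet. A minimal sketch, using “SRV01” as a placeholder VM name:

```powershell
# The VM must be shut down before its configuration can be upgraded
Stop-VM -Name "SRV01"

# Upgrade the configuration from version 5 (2012 R2) to the new version
Update-VMConfigurationVersion -Name "SRV01"

# Confirm the result
Get-VM -Name "SRV01" | Select-Object Name, Version
```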
This is a one-way trip. You can’t downgrade a VM configuration and, once complete, this VM won’t run on Windows Server 2012 R2 Hyper-V. In my testing, the upgrade process only took a few seconds.
Upgrade a VM’s configuration version
Mixed versions in a cluster ^
Microsoft’s move from 2012 to 2012 R2 offered an interesting upgrade story: you could run a 2012 cluster alongside a new 2012 R2 cluster, and VMs could be live migrated across versions from the old environment to the new one with no downtime. It did, however, require adding new nodes or taking one or two hosts out of the source cluster to build the 2012 R2 cluster; the upgrade story in the Technical Preview is even better.
Taking a leaf from the Active Directory upgrade process, it’s possible to mix the old with the new in the same cluster. When you’ve upgraded all your DCs or, in this case, Hyper-V hosts, you can upgrade the functional version of the entire cluster. So, migrate all VMs off one host, upgrade or do a clean installation to Technical Preview, add it to the cluster again, and migrate VMs back onto the host. Rinse and repeat, at your own pace, until all nodes have been upgraded. All the VMs will operate with version 5 configuration versions. None of the new Hyper-V features will be available until you run Update-ClusterFunctionalLevel in PowerShell, just like you manually have to update the functional level of an AD domain to access the new features.
Once the cluster has been upgraded, you can’t add 2012 R2 nodes; however, the Update-VMConfigurationVersion cmdlet becomes available. You’ll still have to shut down each VM to upgrade the configuration, but you can schedule this for a convenient time since they’ll run fine with the old configuration files. While your cluster has both old and new hosts, the recommendation is to manage the cluster from Technical Preview servers.
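The rolling upgrade described above can be sketched in PowerShell. “Node1” is a placeholder node name, and the exact cmdlet behavior may still change in the preview:

```powershell
# Drain one node (live migrating its VMs away), evict it,
# rebuild it with the Technical Preview, then re-add it
Suspend-ClusterNode -Name "Node1" -Drain
Remove-ClusterNode -Name "Node1"
# ...clean installation of the Technical Preview on Node1...
Add-ClusterNode -Name "Node1"

# Once every node has been upgraded, raise the cluster functional level.
# This is one-way, just like raising an AD domain functional level,
# and it unlocks the new Hyper-V features (and Update-VMConfigurationVersion).
Update-ClusterFunctionalLevel
```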
Safer snapshotting – production checkpoints ^
Hyper-V has always had the ability to create a snapshot of the state of a running VM, but that snapshot has come with a dire warning: don’t use it in production unless you REALLY know what you’re doing. While these snapshots are fine in lab and learning situations, they can wreak havoc on production VMs. Examples include Domain Controllers, clustered SQL servers, and Exchange mailbox servers in a DAG; if any of these are snapshotted and, at some later point in time, the snapshot is applied, you have effectively transported the VM “back in time.” If it then replicates with other VMs, those other VMs won’t realize that this has happened and therefore won’t send changed data to bring the VM up to date. This can lead to data loss or other intermittent problems.
Production checkpoint using VSS
The Technical Preview brings an interesting twist to this scenario by using VSS technology inside a Windows VM and flushing the file system buffers inside a Linux VM to create production checkpoints, which will be the new default. This will be completely supported for all production workloads because it’s like a backup and, unlike traditional checkpoints, the VM is aware of it. You can optionally go back to the legacy type of checkpoints.
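Based on my testing of the preview, the checkpoint type is a per-VM setting controlled with Set-VM; “SQL01” here is a placeholder VM name:

```powershell
# Production checkpoints are the new default; switch a VM back to the
# legacy (standard) behavior if you need it
Set-VM -Name "SQL01" -CheckpointType Standard

# ProductionOnly fails the checkpoint rather than falling back to a
# standard one if a production checkpoint can't be taken
Set-VM -Name "SQL01" -CheckpointType ProductionOnly

# Taking the checkpoint itself is unchanged
Checkpoint-VM -Name "SQL01" -SnapshotName "Pre-patch"
```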
Production checkpoint settings for a VM
Keeping up to date ^
Once the final version of the Technical Preview is released (Windows Server 2015, anyone?), the integration services inside the VM will be kept up to date through the normal Windows update channels such as WSUS, Configuration Manager, or Microsoft Update. This will be a welcome addition, as it eliminates a separate process for keeping this particular software up to date.
For Generation 2 VMs (introduced in 2012 R2 and lacking legacy emulated hardware), you can now add or remove network adapters while the VM is running (this works for both Windows and Linux VMs), as well as change the amount of memory assigned to a running VM even if it is not configured for dynamic memory. Both of these changes will contribute to better uptime, as more alterations can be made without having to shut down the VM first.
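Both operations use familiar cmdlets against a running VM. A quick sketch, assuming a VM named “WEB01” and a virtual switch named “External” (both placeholders):

```powershell
# Hot-add a network adapter to a running Generation 2 VM
Add-VMNetworkAdapter -VMName "WEB01" -SwitchName "External"

# Resize the memory of a running VM, even without dynamic memory enabled
Set-VM -Name "WEB01" -MemoryStartupBytes 8GB
```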
Secure Boot for Linux ^
Anyone remember the uproar in the Linux community about Secure Boot in the days of the Windows 8 betas? This was meant to be evil Microsoft, locking poor Linux out from even being able to be installed on new PCs. Of course, it didn’t work out that way; you can disable Secure Boot in the BIOS on any modern PC and install an OS of your choice. You just won’t be as protected against rootkits and other low-level malware infections.
With the introduction of Generation 2 VMs in 2012 R2, Microsoft went with a virtual UEFI firmware instead of a legacy BIOS, which enabled Secure Boot for VMs. This support has now been extended to Linux with Ubuntu 14.04 and later, and SUSE Linux Enterprise Server 12 can also take advantage of it. You do have to enable Secure Boot before first boot using the following:
Set-VMFirmware vmname -SecureBootTemplate MicrosoftUEFICertificateAuthority
Something I haven’t tested yet is Connected Standby, which now works in client Hyper-V in Windows 10. In Windows 8.1, enabling Hyper-V meant you lost the convenience of your Surface Pro or similar device being always connected and “sleeping lightly.”
The improvements that have been revealed in Hyper-V again show that Microsoft is very serious about innovating in this space, and several of these enhancements are clear winners from a technical and business perspective. I can’t wait to upgrade my cluster to the new version to really take it through its paces. However, that will have to be done in conjunction with the improvements to storage, which we’ll cover in the next article.