It's been quite a long time since we looked at System Center Virtual Machine Manager (VMM) here at 4sysops—the last time was when 2016 was in Technical Preview. This is primarily because, for both 2012 R2 and 2016, Microsoft has progressively enhanced the releases through small quarterly fixes and feature-adding updates.
Secondarily, this is because System Center is really a backburner product, apart from Configuration Manager (CM), which is a separate beast. The cloud is where it's at for Microsoft, and really, "if you want an on-premises solution, we've got Azure Stack" (which doesn't use System Center). "Oh—you don't have a fit for Azure Stack but you still want on-premises software? Well, we have this product called System Center."
Some changes are afoot, however, and mid-last year Microsoft promised a faster release schedule for System Center, with two "major" releases a year to match the new cadence of Windows Server. Again, this doesn't involve CM, which has three releases a year.
Windows Server 2016 and 1709
As one would expect, this version adds support for features in Windows Server 2016 (and the 1709 Semi-Annual Channel version) that didn't make it into VMM 2016.
One such feature is nested virtualization. You can create a VM running 2016/1709, and then in the properties of the VM, you can enable the feature, followed by adding the VM as a host into VMM. I'm not sure where this would be useful in a production scenario unless you run a learning/lab environment, where nested virtualization is a fantastic feature.
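VMM 1711 surfaces nested virtualization in the VM's properties; under the hood, it corresponds to the standard Hyper-V host-level settings. A minimal sketch, run on the Hyper-V host against a stopped VM (the VM name "LabVM" is a placeholder):

```powershell
# The VM must be off before exposing virtualization extensions
Stop-VM -Name "LabVM"

# Expose the physical CPU's virtualization extensions to the guest
Set-VMProcessor -VMName "LabVM" -ExposeVirtualizationExtensions $true

# Allow traffic from nested guests out through the first-level VM's NIC
Get-VMNetworkAdapter -VMName "LabVM" | Set-VMNetworkAdapter -MacAddressSpoofing On

Start-VM -Name "LabVM"
```

After the guest boots, you can install the Hyper-V role inside it and add it to VMM as a host.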
Storage Quality of Service (QoS) was a huge feature in Windows Server 2016. However, VMM 2016 only supported this for VHD/VHDX files on Storage Spaces Direct (S2D) and Scale-Out File Server (SOFS) clusters. Also, you could only scope policies to storage arrays. VMM 1711 can apply Storage QoS on all managed clusters, including storage area networks (SANs), and you can also define a Storage QoS policy in a template. You can now also apply Storage QoS policies at the VMM cloud level.
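At the Windows Server layer, a Storage QoS policy is created on the cluster and then referenced by the VHD/VHDX flows; VMM layers its policy scoping on top of this. A sketch using the built-in Storage QoS cmdlets (the policy name and IOPS figures are illustrative):

```powershell
# Create a policy on the cluster; Dedicated gives each flow its own min/max
New-StorageQosPolicy -Name "Gold" -PolicyType Dedicated -MinimumIops 100 -MaximumIops 5000

# Review defined policies and the flows currently being measured
Get-StorageQosPolicy
Get-StorageQosFlow | Format-Table -AutoSize
```

In VMM 1711, the equivalent policy can be defined in the console (or a template) and applied at the cloud level, so individual VHDX files inherit it at deployment time.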
Windows Server 2016 introduced the ability to shield a Windows VM. Windows Server 1709 extends this to Linux VMs, and VMM 1711 supports this too. The usual caveats apply. You'll need to have a Host Guardian Service (HGS) infrastructure in place, you (or your customer if you're hosting VMs for other people) will need to create a shielding data file (PDK), and you'll need to set up the VM using a template.
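For reference, the shielding data (PDK) file is created with the Shielded VM tools (the RSAT-Shielded-VM-Tools feature). The paths and names below are illustrative, and the exact parameters may vary by build:

```powershell
# Owner guardian (the tenant's own key pair) and the hoster's HGS guardian metadata
$Owner    = New-HgsGuardian -Name "Owner" -GenerateCertificates
$Guardian = Import-HgsGuardian -Path 'C:\HGS\GuardianMetadata.xml' -Name 'HosterHGS'

# Volume signature catalog identifying the trusted template disk
$VolumeID = New-VolumeIDQualifier -VolumeSigningCatalogFilePath 'C:\HGS\Template.vsc' -VersionRule Equals

# Bundle everything into the PDK the tenant hands to the hoster
New-ShieldingDataFile -ShieldingDataFilePath 'C:\HGS\Tenant.pdk' `
    -Owner $Owner -Guardian $Guardian `
    -VolumeIDQualifier $VolumeID `
    -WindowsUnattendFile 'C:\HGS\Unattend.xml' `
    -Policy Shielded
```

The resulting PDK is what you supply when deploying the shielded VM from its template in VMM.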
If you're interested in diving deep into shielded VMs and HGS in VMM, I presented VMM 2016 at Ignite Australia 2017. Watch the session here, and here are the additional resources I created, including narrated video screen captures of all the steps involved in configuring VMM for HGS and software-defined networking (SDN). You can now also define VMs with an HGS fallback if the primary HGS server cluster isn't available, making dual-datacenter deployments easier.
You can now remote to a VM over enhanced session mode, which is handy when there are interruptions in network connections. Also, refreshing the properties of your hosts is now up to 10 times faster. I haven't tested that last bit extensively, but it certainly seemed snappier on my four-node cluster.
Many headline features in Windows Server 2016 and 1709 are in the networking area. Basically, it's a port of the SDN stack in Azure, and it includes the Network Controller (NC), the Software Load Balancer (SLB), and the Remote Access Server (RAS) Gateway.
VMM 1711 has extended support for the SLB to allow defining virtual IPs (VIPs) using service templates for multitier applications. In VMM 2016, you could only do this using PowerShell. Also, SLB deployments in VMM 1711 can now do internal load balancing, not just public, which matches the capabilities of the SLB in Azure. For guest clustering, VMM 1711 supports floating IPs through the SLB, essentially using Internal Load Balancer (ILB) VIPs so that the SLB knows which node is active. It thus routes traffic from the external VIP to the active node in the cluster.
You can also attach a Health Monitor to an SLB for HTTP or TCP replies, which it will probe at an interval you define.
Another useful new feature in VMM 1711 is the ability to encrypt a VM network with a single click. In earlier versions of Windows, you could of course use Internet Protocol Security (IPSec) or Transport Layer Security (TLS). But they're not easy technologies to deploy, nor are they easy to automate for scale. If you have deployed SDN on a Windows Server 2016/1709 fabric, you simply need to add a certificate (from an internal certificate authority or a self-signed certificate) on each host.
Then you tick the box to enable encryption when you create a new VM network and paste in the certificate thumbprint. Note that this protects against third parties and snooping network admins but not against fabric admins. Microsoft states that "Protection against fabric admins is in the pipeline and will be available soon." This is curious—it would bring networking in alignment with shielded VMs, which do protect against a rogue fabric administrator.
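Generating the per-host certificate and grabbing its thumbprint is a one-liner if you go the self-signed route (the subject name here is illustrative; use whatever common name suits your environment):

```powershell
# Create a self-signed certificate in the host's machine store
$cert = New-SelfSignedCertificate -Subject "CN=EncryptedVMNetwork" `
        -CertStoreLocation 'Cert:\LocalMachine\My'

# This thumbprint is the value VMM asks for when enabling encryption on the VM network
$cert.Thumbprint
```

In production, a certificate from an internal certificate authority is the better choice, but the thumbprint step is the same.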
They haven't forgotten VMware, and you can now convert an extensible firmware interface (EFI) VM to a Hyper-V Generation 2 VM, in addition to the BIOS-based VMs already supported, from vSphere (ESXi) 4.1, 5.0, 5.1, 5.5, and 6.0.
Finally, you can now manage Azure Resource Manager (ARM) VMs through the Azure plug-in. The earlier version supported only classic Azure Service Management (ASM) resources.
Overall, I like the approach of two System Center releases a year. We can test new features through a preview, and Microsoft can add genuinely useful features. Whether this will result in a surge in System Center deployments remains to be seen.
I do think the public cloud is where all the innovation is. I think Microsoft really just keeps System Center alive for all of those on-premises customers who "haven't seen the light yet." If Microsoft were really serious about VMM as a long-term product, they'd rewrite the service template engine to rely on ARM. You could thus use the same deployment templates in Azure, Azure Stack, and VMM.
The additions outlined above, however, are useful and raise VMM another notch. It's still my favorite System Center family member.