Synthetic Fibre Channel
In one way it’s great that Microsoft has now synchronized the shipping of interrelated products so that Windows Server 2012 and System Center 2012 ship at the same time; on the other hand, it means some features don’t make the cut due to time.
One such feature was Synthetic Fibre Channel support. Introduced in Windows Server 2012, this allowed VMs to communicate directly with your FC SAN, including the ability to Live Migrate a VM with such connectivity. Unfortunately, support for setting this up in VMM 2012 SP1 was missing, so it had to be done outside of VMM (with PowerShell or Hyper-V Manager), although VMM could otherwise manage such VMs.
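For reference, the out-of-band setup with the Hyper-V PowerShell module looks roughly like this; the SAN name, VM name, and WWNN/WWPN values below are placeholders for your own environment:

```powershell
# Define a virtual SAN on the Hyper-V host, backed by a physical FC HBA port
# (the WWNN/WWPN values are placeholders for your HBA's actual addresses)
New-VMSan -Name Production -WorldWideNodeName C003FF0000FFFF00 -WorldWidePortName C003FF5778E50002

# Add a synthetic FC adapter to the VM and connect it to that virtual SAN
Add-VMFibreChannelHba -VMName SQL01 -SanName Production
```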
In VMM 2012 R2 this has been fixed: it can now manage the SAN fabric, automatically handle zone management, and provision VMs with FC connectivity from templates.
Offloaded Data Transfer (ODX)
Another feature that VMM didn’t catch up with in 2012 SP1 was Offloaded Data Transfer (ODX), which promises to vastly increase the speed of data transfers on compatible SANs. VMM 2012 R2 will now take advantage of ODX, but only for VM deployment from a library (provided the library share is located on the SAN, of course); hopefully other ODX scenarios will follow.
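ODX itself is negotiated by Windows rather than by VMM, so if library deployments don’t seem any faster, it’s worth confirming it hasn’t been switched off on the host. One quick check is the documented FilterSupportedFeaturesMode registry value:

```powershell
# ODX is on by default in Windows Server 2012 and later;
# FilterSupportedFeaturesMode: 0 = offloaded transfers allowed, 1 = ODX disabled
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem' |
    Select-Object FilterSupportedFeaturesMode
```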
Setting up a file server cluster on those new servers that just got racked and wired is now as easy as setting up Hyper-V hosts in VMM.
Bare metal deployment
Arguably the biggest new feature in VMM 2012 R2 is the ability to deploy file server clusters to bare-metal servers, similar to how earlier versions could deploy Hyper-V servers to new physical machines. This is reflected in the new name for the profile used: “physical computer”. Once a Windows Server Scale-Out File Server has been deployed, VMM can now also manage Storage Spaces on the cluster, including correctly configuring the file share permissions for all cluster nodes. There’s no doubt that Microsoft sees the future of virtualized datacenter storage management as shared JBOD chassis with SMB 3.0 file shares for housing VMs, not SANs.
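To give an idea of what VMM is automating here, this is roughly the sequence you’d otherwise run by hand on the cluster. A minimal sketch only: the pool, share, and host names are made up, and steps such as formatting the volume and adding it to Cluster Shared Volumes are omitted:

```powershell
# Pool the poolable JBOD disks into a Storage Spaces pool
$disks = Get-PhysicalDisk -CanPool $true
$subsystem = Get-StorageSubSystem -FriendlyName '*Spaces*'
New-StoragePool -FriendlyName VMPool -StorageSubSystemFriendlyName $subsystem.FriendlyName -PhysicalDisks $disks

# Carve out a mirrored virtual disk from the pool
New-VirtualDisk -StoragePoolFriendlyName VMPool -FriendlyName VMDisk1 -ResiliencySettingName Mirror -UseMaximumSize

# Add the Scale-Out File Server role and publish an SMB 3.0 share,
# granting the Hyper-V hosts' computer accounts full access
Add-ClusterScaleOutFileServerRole -Name SOFS01
New-SmbShare -Name VMs -Path C:\ClusterStorage\Volume1\VMs -FullAccess 'CONTOSO\HV01$','CONTOSO\HV02$'
Set-SmbPathAcl -ShareName VMs   # mirror the share permissions onto the file system ACL
```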
Guest clustering
While host clustering provides high availability, and Live Migration lets you move VMs around, there’s downtime associated with an unplanned failure of a host. Yes, the cluster will detect that a host is unavailable, determine which VMs were running on it, and restart those VMs on other nodes. But users of those servers will notice an outage and lose unsaved data.
The answer is to combine host clustering with guest clustering, where several VMs are clustered so that if one VM goes away the others can compensate, resulting in near-zero downtime for users.
VMM 2012 could manage and provision guest clusters through service templates, but because each node was identical, it required extensive logic in scripts to make sure that the first node created the cluster and subsequent VMs joined it. In VMM 2012 R2, services are now cluster-aware, and you can run different scripts on the initial VM and on subsequent guest cluster nodes.
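The kind of branching the old templates had to embed looked something like the sketch below (the cluster name and address are made up); in R2 you can simply split the two branches into separate first-node and additional-node scripts:

```powershell
# Runs inside each guest node; only the first node creates the cluster
Import-Module FailoverClusters
$clusterName = 'GuestCL1'

if (Get-Cluster -Name $clusterName -ErrorAction SilentlyContinue) {
    # The cluster already exists: just join this node to it
    Add-ClusterNode -Cluster $clusterName -Name $env:COMPUTERNAME
} else {
    # First node: create the cluster
    New-Cluster -Name $clusterName -Node $env:COMPUTERNAME -StaticAddress 10.0.0.50
}
```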
Speaking of services, if you have Windows Server 2012 R2 hosts and guests, VMM can now inject files into a specific path on the VM before the first boot.
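If you’ve ever done this by hand, the underlying idea is simply mounting the VHDX while the VM is off and copying files in. Something like this sketch (all paths are placeholders) achieves the same effect with the Hyper-V module:

```powershell
# Mount the VM's disk offline, copy the payload in, then dismount
$vhdPath = 'C:\VMs\Web01\Web01.vhdx'
$vhd = Mount-VHD -Path $vhdPath -Passthru
$letter = ($vhd | Get-Disk | Get-Partition | Where-Object DriveLetter | Select-Object -First 1).DriveLetter

New-Item -ItemType Directory -Path "${letter}:\FirstBootFiles" -Force | Out-Null
Copy-Item -Path 'C:\Payload\*' -Destination "${letter}:\FirstBootFiles" -Recurse

Dismount-VHD -Path $vhdPath
```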
Virtual Hard Disk sharing
The other thing that clusters require is shared storage, something that prior to Windows Server 2012 R2 meant connecting your guest cluster nodes to either a Fibre Channel or iSCSI SAN. In Windows Server 2012 R2, a normal VHDX file can now be shared amongst VMs and act as the shared storage, and setting up such a shared drive is fully supported through VMM service templates.
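Outside of a service template, sharing a VHDX comes down to attaching the same file to each node with persistent reservations enabled. A minimal sketch (the VM names and share path are made up); note that the file has to live on a Cluster Shared Volume or a Scale-Out File Server share:

```powershell
# Create one fixed-size VHDX on shared storage...
New-VHD -Path \\SOFS01\VMs\Shared\Data1.vhdx -SizeBytes 100GB -Fixed

# ...and attach it to both guest cluster nodes; -SupportPersistentReservations
# is what marks the disk as shareable between VMs
Add-VMHardDiskDrive -VMName Node1 -Path \\SOFS01\VMs\Shared\Data1.vhdx -SupportPersistentReservations
Add-VMHardDiskDrive -VMName Node2 -Path \\SOFS01\VMs\Shared\Data1.vhdx -SupportPersistentReservations
```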
Conclusion
One of my favorite features in VMM since the first version has been the powerful Physical to Virtual (P2V) functionality for moving old servers into the new virtualized world. This feature has been inexplicably deprecated in VMM 2012 R2 and there’s even a blog post on how to mitigate the problem by using an older version of VMM. It seems to me that yet again Microsoft lives in a different world to us ordinary IT Pros, where all workloads are virtualized and there are no legacy servers to migrate.
Aside from this minor gripe, however, it’s clear that VMM continues to be a major piece of the SC suite, with more and more management capabilities: now for file servers, Storage Spaces, Network Virtualization gateway provisioning, and even physical switch management. It’s not hard to see VMM evolving into a full-fledged fabric management tool that really should be renamed Virtualized Data Centre Manager.