Virtual Machine Manager (VMM) also becomes more capable with each release, and it now manages your storage fabric (both Microsoft storage and SANs/NAS devices), your network fabric (virtual switches as well as physical top-of-rack switches through Open Management Infrastructure (OMI)), your compute fabric (VMware and Hyper-V), and of course all the VMs. VMM is a bit like Borg technology, taking control of everything; a more appropriate name would be Datacenter Manager.
When Microsoft released the Technical Preview of Windows Server (I’ll call it Server vNext), it also released Technical Preview versions of most of the System Center products. These are even more “alpha” code than Server vNext, and really the only scenarios that can be tested are compatibility with Server vNext and SQL Server 2014. The advantage of these early releases is that testers can actually have an impact on which features end up in the final version, unlike with the usual “almost finished” previews. Word on the street is that Microsoft is listening to users more than ever.
System Center Technical Preview
Overall in System Center we know more about what’s not coming than about what is: App Controller is no more (replaced by the infinitely more capable Windows Azure Pack), and Server App-V (a part of VMM that no one used) and the IT GRC Process Management Pack (again, something nobody used) are also gone. Notable on the VMM front is that Citrix XenServer is no longer supported; only VMware vCenter 5.5 and 5.8 are (4.1 and 5.1 support bit the dust too). Clearly Microsoft sees the virtualization race as a two-horse game at this stage. Full release notes can be found here.
The VMM Console
Storage enhancements
The Storage Management (SM) API is getting a makeover in the next version of VMM, and NAS devices are now supported natively (in 2012 R2, there was a special mode called Pass Through to allow VMM to manage NAS devices).
You can now use VMM to classify local storage in your Hyper-V hosts, just as you can create service classes of SAN/SOFS storage today (bronze, silver, and gold, for example). VMM will also manage Shared Nothing storage, the new take on file server clusters that uses internal disks in each host instead of a shared SAS fabric. Configuration of storage tiering and deduplication can now be controlled from VMM; previously, it had to be configured on the file server cluster side.
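Classifications are already scriptable in VMM 2012 R2, so the local-storage scenario should look much the same. As a rough sketch, assuming a live VMM connection and a hypothetical pool name (the Technical Preview cmdlets may differ):

```powershell
# Sketch only - requires the VMM console/PowerShell module and a VMM server
# connection; "LocalPool01" is a hypothetical storage pool name.
Import-Module virtualmachinemanager

# Create storage classifications (service tiers)
$gold   = New-SCStorageClassification -Name "Gold"   -Description "SSD-backed storage"
$bronze = New-SCStorageClassification -Name "Bronze" -Description "HDD-backed storage"

# Assign a classification to a managed storage pool
$pool = Get-SCStoragePool -Name "LocalPool01"
Set-SCStoragePool -StoragePool $pool -StorageClassification $gold
```

Once classified, VM placement and service templates can request a tier by name rather than a specific array or host.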
Adding Local Storage in VMM
I’m keen to see two blockbuster features that are coming to VMM but are not in the current TP: the central policy engine and GUI for managing Storage QoS and the GUI for the new Network Controller. The storage QoS policy engine is a clustered resource itself, so it can fail over between nodes.
Storage QoS policies can be set in tiers with parent-child policies for exception VMs; VMM will tag policies so that a Hyper-V host that’s moved from one cluster to another can pick up the right policies. Storage Replica, the new generic, block-level replication engine in Server vNext, can also be managed by VMM vNext.
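Since the Storage QoS policy engine isn’t in the current TP, the exact cmdlets aren’t final; a plausible sketch, based on the Storage QoS cmdlets Microsoft has shown for the Scale-Out File Server cluster (VM and policy names here are hypothetical), looks like this:

```powershell
# Sketch only - run against a Scale-Out File Server / Hyper-V cluster in
# Server vNext; cmdlet and parameter names may change before release.
# Define a policy with IOPS floor and ceiling
$policy = New-StorageQosPolicy -Name "Silver" -MinimumIops 100 -MaximumIops 1000

# Attach the policy to a VM's virtual hard disks (hypothetical VM name)
Get-VM -Name "AppVM01" | Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId
```

Because the policy lives on the cluster rather than on an individual node, a VM that fails over (or a host that moves clusters, via VMM’s tagging) keeps the right limits.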
Consistent Device Naming (CDN)
The only new feature that’s been demonstrated for VMM is Consistent Device Naming (CDN). CDN has been available in the physical world for some time from various server hardware vendors. Basically, it means you can identify a particular NIC by looking at the back of the box (NIC1, 2, 3, etc.), and the same name will be assigned in the OS. This makes it possible to automate deployment; before CDN, there was no way to know which name the OS would assign to a particular physical NIC.
CDN—Setting Network Name
Hyper-V in vNext takes CDN into the virtual world and allows you to define a name which is then passed into the VM so that scripts can assign the right settings to the right virtual NIC. Currently this is only supported on Generation 2 VMs, and it’s only applicable during the guest OS setup; you can’t change the NIC identifier afterwards. Once you have applied CDN through a VM template to a new VM, it can only run on vNext Hyper-V hosts; 2012 R2 and earlier don’t support it. You can use either a custom string or pass in the name of the virtual switch on your hosts.
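On the Hyper-V side, the building blocks look like the existing virtual NIC cmdlets plus a device-naming switch. A minimal sketch, assuming a Generation 2 VM named "WebVM01" (a hypothetical name, and parameter names may change before release):

```powershell
# Sketch only - run on a vNext Hyper-V host; "WebVM01" is hypothetical.
# Give the virtual NIC a meaningful name (here, matching its purpose)
Rename-VMNetworkAdapter -VMName "WebVM01" -NewName "Management"

# Turn on device naming so the adapter name is passed into the guest OS
Set-VMNetworkAdapter -VMName "WebVM01" -Name "Management" -DeviceNaming On
```

Inside the guest, setup scripts can then match on the "Management" identifier instead of guessing which of several identical virtual NICs is which.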
VMM management of SAN replication
Update Release 5 (UR5) for VMM 2012 R2, currently in public beta and due for final release in January 2015, contains an interesting new feature that will of course also make it into vNext: VMM management of SAN replication.
This is an extension to Azure Site Recovery that not only allows orchestration of Hyper-V Replica between two of your datacenters but also lets VMM manage the replication of data from one SAN to another across those datacenters. Currently, eight partners have been named: EMC (VMAX, VNX, and VNXe), NetApp (FAS), HP (3PAR), Hitachi Data Systems (VSP), Fujitsu (ETERNUS), Dell (Compellent), Huawei (OceanStor), and IBM (XIV), with the first three supported in the beta of UR5.
You can do test failover in a similar fashion to what Hyper-V Replica allows, but here it uses SAN snapshots/VM cloning to create the test VM in the replica datacenter.
SAN replication also allows guest clusters (two or more VMs) connected to a SAN (via Virtual Fibre Channel or iSCSI) to fail over between your two datacenters in an orchestrated fashion.
There’s no doubt that VMM will continue to be a very important part of Microsoft’s Cloud OS vision, and I’m looking forward to more complete releases in the new year to be able to test the Storage QoS Controller and the Network Controller.