VM Storage
Hyper-V hosts don’t live by themselves: to build clusters, they need shared storage for the VMs. While Hyper-V works well with both iSCSI and FC SANs, the alternative Microsoft offers for reduced cost and simplified configuration and management is to store VMs on file shares in Continuously Available Scale-Out File Server clusters. This option has been much improved by the addition of storage tiering: take a bunch of ordinary, large, slow spinning disks and combine them with a smaller number of very fast SSDs. The file server in 2012 R2 automatically moves blocks (not whole files, only parts) that are “hot,” or frequently accessed, to the SSD tier and moves them back when they’re accessed less often. You can also manually pin particular files to the SSD tier (think of the master image in a VDI deployment).
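The tiering idea above can be sketched in a few lines. This is only an illustration of the general hot-block promotion concept; the real tiering engine in 2012 R2 builds a heat map of sub-file extents and moves them on a schedule, and the names and structure here are assumptions, not Microsoft's implementation.

```python
from collections import Counter

def assign_tiers(access_counts: Counter, ssd_capacity_blocks: int):
    """Illustrative sketch: the hottest blocks (by access count) go to
    a fixed-size SSD tier; everything else stays on the HDD tier."""
    ranked = [blk for blk, _ in access_counts.most_common()]
    ssd = set(ranked[:ssd_capacity_blocks])
    hdd = set(ranked[ssd_capacity_blocks:])
    return ssd, hdd

# Four blocks with different access frequencies; SSD tier holds two.
counts = Counter({"blk0": 50, "blk1": 3, "blk2": 40, "blk3": 1})
ssd, hdd = assign_tiers(counts, ssd_capacity_blocks=2)
print(sorted(ssd))  # → ['blk0', 'blk2']
```

Manually pinning a file is then just forcing its blocks into the `ssd` set regardless of access count, which is what the pinning option described above effectively does at the file level.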
One area where Hyper-V lagged behind vSphere was storage QoS, which is used to limit the risk of a single VM doing a lot of storage IO and starving other VMs on the same host. This can now be configured for each VM, but only on a per-host basis. This means you can balance storage IO between VMs on a single host, but if the shared storage for a cluster isn’t keeping up, there’s no centralized controlling mechanism. IOPS are “normalized” to 8 KB for accounting purposes, so anything up to 8 KB counts as one IO, a 12 KB IO counts as two, and so on. You can set a minimum, a maximum, or both.
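The 8 KB normalization above is simple arithmetic: each IO is charged as the number of 8 KB units it spans, rounded up. A minimal sketch of that accounting rule:

```python
import math

NORMALIZATION_SIZE = 8 * 1024  # Hyper-V counts IOPS in 8 KB units

def normalized_iops(io_size_bytes: int) -> int:
    """Charge an IO as ceil(size / 8 KB) normalized IOPS; even a tiny
    IO counts as at least one."""
    return max(1, math.ceil(io_size_bytes / NORMALIZATION_SIZE))

print(normalized_iops(4 * 1024))   # → 1 (under 8 KB still counts as one)
print(normalized_iops(12 * 1024))  # → 2 (the example from the text)
print(normalized_iops(64 * 1024))  # → 8
```

This is why a VM doing large sequential IOs burns through a per-VM maximum-IOPS cap much faster than one doing small random IOs of the same count.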
The ability to control IO traffic on a per-VM basis is welcome, but for complete coverage we’d like to see a centralized system that takes all hosts in the cluster into account.
Windows Server 2012 offers built-in deduplication for files at rest, with spectacular results for VHD libraries, for instance (over 90 percent space savings due to the identical content in many VHD files). But this didn’t work for files in use; in 2012 R2, there’s now online dedupe of VHD files, for VDI scenarios only. Microsoft hasn’t had time to test across all Hyper-V scenarios and so has focused on VDI. It applies the same variable-size chunking algorithm that BranchCache uses, and because it does buffered IO operations (using the file system cache), as opposed to Hyper-V’s unbuffered IO, performance is actually better with dedupe on.
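Variable-size (content-defined) chunking is the key trick here: chunk boundaries are chosen from the content itself via a rolling hash, so identical regions produce identical chunks even when an insertion shifts everything to a different offset. The sketch below shows the general technique with a toy hash and illustrative parameters; the actual dedup/BranchCache chunking algorithm and its boundary conditions differ.

```python
def chunk(data: bytes, mask: int = 0x3F,
          min_size: int = 16, max_size: int = 256):
    """Cut a chunk boundary whenever the low bits of a toy rolling
    hash are zero (subject to min/max chunk sizes), so boundaries
    depend on content, not absolute offsets."""
    chunks, start, h = [], 0, 0
    for i, byte in enumerate(data):
        h = ((h << 1) + byte) & 0xFFFFFFFF  # toy rolling hash, not the real one
        length = i - start + 1
        if (length >= min_size and (h & mask) == 0) or length >= max_size:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])  # trailing partial chunk
    return chunks

data = bytes(range(256)) * 4
pieces = chunk(data)
assert b"".join(pieces) == data          # chunking is lossless
assert all(len(p) <= 256 for p in pieces)
```

Deduplication then stores each unique chunk once (keyed by a strong hash of its content) and rebuilds files from chunk references, which is where the 90 percent savings on near-identical VHDs comes from.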
Shared VHDX
Guest clustering is a belt-and-braces approach to application high availability: while host clustering restarts a VM on another host if the original host suffers an unexpected outage, having two or more VMs in a cluster means the application should continue to serve clients as long as the VMs are on different hosts. Guest clustering has been possible in Hyper-V since Windows Server 2008, but up until 2008 R2 the required shared storage was supported only on iSCSI SANs, because that was the only way for a VM to connect to the storage. Windows Server 2012 added support for virtual Fibre Channel, along with the ability to live migrate a VM connected to a Fibre Channel SAN. There’s a fair bit of overhead in configuring VM access to shared storage with either of these options, however, and it’s definitely not suitable for a hosting environment, as the tenant needs access to the underlying infrastructure to set up the cluster.
Windows Server 2012 R2 now offers a much simpler option with shared VHDX disks. This makes it easy to set up a guest cluster, with nodes simply accessing a shared file. This file shouldn’t reside on a stand-alone file server but on highly available storage, and you can use anti-affinity class names to ensure that nodes in the same guest cluster aren’t co-located on the same Hyper-V host. And yes, you can mix physical and virtual cluster nodes in the same cluster. 2012 R2 is required on both the Hyper-V host and the storage platform, with 2012 or 2012 R2 required as the guest OS (2008 and 2008 R2 should also work but are not officially supported yet). The shared VHDX can only be resized when all VMs accessing it are turned off, and it can be dynamic or fixed but not a differencing disk. It can only be backed up safely from within the VMs, and you can’t live storage migrate a VM with an attached shared disk (normal LM is fine), but you can shut down all the cluster nodes, move the disk, and reattach it to the nodes.
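What an anti-affinity class buys you can be sketched as a simple placement check: any two VMs that share a class name (for example, the nodes of one guest cluster) should never land on the same host. The structure and names below are illustrative assumptions, not the failover-cluster API itself, which applies this as a soft preference during placement and failover.

```python
from collections import defaultdict

def antiaffinity_violations(placements: dict, affinity_class: dict):
    """placements maps VM -> host; affinity_class maps VM -> class name.
    Returns VMs placed on a host that already holds a VM of the same class."""
    seen = defaultdict(set)  # (class, host) -> VMs already placed there
    bad = []
    for vm, host in placements.items():
        cls = affinity_class.get(vm)
        if cls is None:
            continue  # VM has no anti-affinity class; any host is fine
        if seen[(cls, host)]:
            bad.append((vm, host))
        seen[(cls, host)].add(vm)
    return bad

# Two nodes of the same guest cluster co-located on hostA is a violation.
placements = {"sql1": "hostA", "sql2": "hostA", "sql3": "hostB"}
classes = {"sql1": "sql-cluster", "sql2": "sql-cluster", "sql3": "sql-cluster"}
print(antiaffinity_violations(placements, classes))  # → [('sql2', 'hostA')]
```

In the real cluster, you set the same AntiAffinityClassNames value on each guest-cluster node’s VM role, and the placement engine avoids (rather than forbids) co-location like this.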
This new version of Hyper-V is more polished and builds on the great improvements offered in the 2012 version. There are now very few niche areas where Hyper-V doesn’t do as well as vSphere, and in most areas it exceeds vSphere’s capabilities at a much more cost-effective price point. We didn’t look at System Center Virtual Machine Manager 2012 R2 in this article, but of course it builds on these enhanced features to offer a better management solution for your virtualized datacenter.