One hassle with System Center Virtual Machine Manager 2008 and Hyper-V today is that networks have to be managed on a per-NIC basis; SCVMM 2012 comes to the rescue with new networking features.
Logical networks in System Center Virtual Machine Manager 2012 ^
A logical network with one or more logical network definitions groups together IP subnets and VLANs to simplify network management in SCVMM 2012. Typical networks would be backend, frontend, management or backup. When you provision a host or VM, you associate it with a logical network, and it automatically receives a fixed IP address and MAC address. Logical networks can span geographies, with one or more logical network definitions for each location. You can also use DHCP instead of controlling IP address allocation through SCVMM if you so desire. Each NIC on a host needs to be associated with a logical network in either trunk or access mode. In access mode only a single VLAN ID is allowed, whereas in trunk mode multiple VLAN IDs can be used by different VMs that share the NIC.
The logical network system allows assignment of addresses to Windows-based VMs running on all three supported hypervisor environments. Both IPv4 and IPv6 addresses are supported, but not in the same address pool.
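These objects are also exposed through SCVMM's PowerShell layer, so the whole chain — logical network, per-site definition and static IP pool — can be scripted. A rough sketch, assuming the beta's cmdlet names (which may change before release) and an illustrative back-end network on 10.0.10.0/24:

```powershell
# Create the logical network itself (e.g. the back-end network)
$ln = New-SCLogicalNetwork -Name "BackEnd"

# Pair an IP subnet with a VLAN ID for one location
$subnetVlan = New-SCSubnetVLan -Subnet "10.0.10.0/24" -VLanID 10

# A logical network definition groups subnet/VLAN pairs per site,
# scoped to a host group ("All Hosts" is the default root group)
$hg  = Get-SCVMHostGroup -Name "All Hosts"
$def = New-SCLogicalNetworkDefinition -Name "BackEnd - London" `
    -LogicalNetwork $ln -SubnetVLan $subnetVlan -VMHostGroup $hg

# A static IP pool lets SCVMM (rather than DHCP) hand out fixed addresses
New-SCStaticIPAddressPool -Name "BackEnd London Pool" `
    -LogicalNetworkDefinition $def -Subnet "10.0.10.0/24" `
    -IPAddressRangeStart "10.0.10.50" -IPAddressRangeEnd "10.0.10.99"
```

The names "BackEnd", "London" and the address range are made up for the example; the cmdlets run against a live SCVMM 2012 server.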
For anyone who’s had to manually manage networks in large Hyper-V installations, the new logical network features will be a godsend.
Hardware load balancers are now recognised by SCVMM, and by creating one or more virtual IP (VIP) templates, specific types of traffic can be controlled. A VIP template can control HTTP traffic behaviour on an F5 BIG-IP, for instance. In this beta the only load balancers recognised are F5’s BIG-IP and Citrix’s NetScaler, but expect this list to grow, including support for Microsoft’s own Network Load Balancing in Windows Server.
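VIP templates can likewise be created from PowerShell. A minimal sketch, assuming the beta's cmdlet and parameter names (these are likely to shift before release) and a hypothetical template for plain HTTP:

```powershell
# Protocol object describing the traffic type the VIP will carry
$http = New-SCLoadBalancerProtocol -Name "HTTP"

# Generic template: clients hit port 80 on the VIP,
# pool members are reached on port 80 as well
New-SCLoadBalancerVIPTemplate -Name "Web-HTTP" `
    -Description "HTTP balanced via hardware load balancer" `
    -LoadBalancerPort 80 -BackendPort 80 -LoadBalancerProtocol $http
```

A template created this way can then be consumed by service templates so that front-end tiers get a VIP on the BIG-IP or NetScaler automatically.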
Not only does SCVMM give you logical networks, it also integrates tightly with your hardware network load balancers.
If you’re integrating with VMware’s environment, be aware that SCVMM doesn’t automatically create port groups on ESX hosts; this has to be done in vCenter to match the SCVMM logical network definitions.
Storage Integration in SCVMM 2012 ^
SCVMM can discover and provision remote storage on arrays in the console, and available storage can be classified. An 8 Gb Fibre Channel SAN could be called “platinum”, whereas a slower iSCSI SAN could be known as “silver”. SCVMM uses the Storage Management Initiative Specification (SMI-S) to communicate with external arrays, and a provider from your SAN vendor needs to be installed on the server to unlock this functionality. Once communication is established, you can create logical units (MBR or GPT), allocate them to host groups, and then assign them to individual hosts or to clusters as Cluster Shared Volumes (CSVs). In the beta, EMC Symmetrix and CLARiiON CX, HP StorageWorks Enterprise Virtual Array (EVA) and NetApp FAS are supported, but many more are likely to follow.
Storage groups are a new concept; they bring together host initiators, target ports and logical units. The SAN array integration is only available for Hyper-V hosts; storage for VMware and XenServer hosts has to be provisioned outside of SCVMM.
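The classification workflow can also be driven from PowerShell once the SMI-S provider is registered. A rough sketch, assuming the beta's cmdlet names and a hypothetical discovered pool called "Pool0":

```powershell
# Define classification tiers for discovered storage
$platinum = New-SCStorageClassification -Name "Platinum" `
    -Description "8 Gb Fibre Channel SAN"
$silver   = New-SCStorageClassification -Name "Silver" `
    -Description "iSCSI SAN"

# Tag a discovered storage pool with a tier
# ("Pool0" is an illustrative name from a discovered array)
$pool = Get-SCStoragePool -Name "Pool0"
Set-SCStoragePool -StoragePool $pool -StorageClassification $platinum
```

Once pools carry a classification, administrators can request "platinum" or "silver" storage without needing to know which array it lives on.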
SCVMM 2012 also supports SAN technologies such as snapshots and clones for deploying VMs: simply create a template (from a new or existing VM) that’s SAN copy-capable, and the SAN will duplicate the LUN that contains the source .vhd file.