Good old Terminal Services has been around in Windows Server for a LONG time and is doing well. This venerable technology now powers Azure RemoteApp, and free Microsoft RDP clients exist for Android, iOS, and Windows.
MultiPoint Server functionality (which enables multiple desktops to connect to a single host server via USB connections), previously only available as a separate product, is now a server role that can be added to a Standard or Datacenter server.
Application compatibility ^
Traditional session-based RDS scenarios (200 to 300 sessions on a standard server, all sharing a single OS) still won’t be able to take advantage of hardware-based Graphics Processing Units (GPUs).
On the other hand, RDS-based Virtual Desktop Infrastructure (VDI) workloads (Hyper-V hosts with virtual machines, running Windows client OS) should now be able to support any program that runs on Windows. This covers traditional information worker applications, design workloads (Illustrator, Photoshop, and the like) and engineering programs (AutoCAD, Blender, LightWave 3D), given the right hardware.
DirectX and OpenGL support ^
Four ways exist to provide accelerated DX or OpenGL graphics inside a virtual machine; Windows Server uses the first of these and vGPU (see below). You can use software-based, emulated functions that rely on the host CPU for graphics calculations (commonly called WARP); although this method works, it severely limits the number of VMs because the host CPU has a lot of extra work to do.
There’s also device assignment, where a physical GPU on the server is “passed” to the VM. This leads to great performance but low VM density because you need a GPU for each VM. NVIDIA also offers device virtualization (the company calls it vGPU), which splits a GPU into virtual functions that are presented to each VM.
This is very similar to how SR-IOV network cards present virtual function NICs to VMs for low latency/low CPU load networking in Hyper-V. The problem with device assignment and device virtualization is that they are proprietary and create vendor lock-in. At this stage, only NVIDIA GPUs offer these two configurations, and Hyper-V does not support them.
vGPU improvements ^
Finally, there’s host-controlled GPU virtualization (vGPU), where GPUs from any vendor (NVIDIA, AMD, and Intel, as of today) present a synthetic GPU to each VM; Microsoft’s implementation is RemoteFX vGPU. Citrix, VMware, and Microsoft all use similar technologies in their VDI solutions.
The amount of video memory that can be dedicated to a VM has been increased from 256 MB in Windows Server 2012 R2 to 1 GB (although 32-bit clients can only have 512 MB) and is now configurable on a per-VM basis. Note that the memory is taken from host memory, not the video memory on the graphics card—something to take into account when planning the specifications for a host.
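Because that synthetic video memory is carved out of host RAM, it pays to sketch the arithmetic before speccing a host. A minimal PowerShell sketch; all the figures below are illustrative assumptions, not Microsoft sizing guidance:

```powershell
# Illustrative capacity planning: RemoteFX vGPU video memory comes from host RAM,
# not from the graphics card. All figures here are hypothetical examples.
$vmCount     = 50      # assumed number of vGPU-enabled VMs
$vramPerVM   = 1GB     # new per-VM maximum in vNext
$guestRAM    = 4GB     # assumed RAM assigned to each guest
$hostReserve = 8GB     # assumed RAM kept back for the host OS itself

$totalRAM = $vmCount * ($vramPerVM + $guestRAM) + $hostReserve
"Host needs roughly {0} GB of RAM" -f ($totalRAM / 1GB)
```

With these example numbers, the video memory alone adds 50 GB on top of the guest RAM allocations, which is easy to overlook if you size the host on guest RAM only.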
The maximum resolution is 4K, and, unlike the previous version, the amount of video memory for a VM has been decoupled from the number of monitors and their resolution. So, if a particular application on a specific VM requires a precise amount of video memory, this can be assigned. Windows 10 Standard and Enterprise are supported as VMs in the Technical Preview, whereas previous versions only supported Enterprise as clients.
Creating a pool of desktops for VDI
Current Microsoft testing indicates support for more than 100 VMs on a host with vGPU configured. The GPUs have to be workstation- or server-class cards that support DirectX 11.0+ and OpenGL 4.0+, with a Windows Display Driver Model (WDDM) 1.2+ driver. You can have multiple GPUs, and RemoteFX will balance the load across the available hardware. NVIDIA has a good range of graphics cards for VDI, including the GRID series.
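Before assigning any vGPUs, you can check which of the host’s physical GPUs RemoteFX considers usable. A hedged sketch using the Hyper-V module’s RemoteFX cmdlets (the property names shown match current module versions; verify on your preview build):

```powershell
# List the host's physical GPUs and whether RemoteFX can use them.
Get-VMRemoteFXPhysicalVideoAdapter |
    Format-Table Name, CompatibleForVirtualization, Enabled

# Enable any compatible adapter that isn't serving RemoteFX yet
# (assumes pipeline input is accepted, as in current module versions).
Get-VMRemoteFXPhysicalVideoAdapter |
    Where-Object { $_.CompatibleForVirtualization -and -not $_.Enabled } |
    Enable-VMRemoteFXPhysicalVideoAdapter
```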
In the Technical Preview, both the UI and PowerShell can be used to configure RemoteFX vGPUs. The command for allocating a vGPU to a VM (it has to be shut down to do this) is:
Set-VMRemoteFx3dVideoAdapter -VMName BlenderVM -MaximumResolution 1920x1200 -VRAMSizeBytes 1GB
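Putting it together, a typical flow shuts the VM down, adds the synthetic 3D adapter if the VM doesn’t have one yet, configures it, and restarts. A sketch; the VM name, monitor count, and sizes are examples, and -VRAMSizeBytes reflects the new per-VM memory option, so the exact parameter set may differ in the preview:

```powershell
# The VM must be off while its RemoteFX adapter is changed.
Stop-VM -Name BlenderVM

# Give the VM a synthetic 3D adapter if it doesn't already have one.
Add-VMRemoteFx3dVideoAdapter -VMName BlenderVM

# Example settings: two monitors at up to 1920x1200 with 1 GB of video memory.
Set-VMRemoteFx3dVideoAdapter -VMName BlenderVM -MonitorCount 2 `
    -MaximumResolution 1920x1200 -VRAMSizeBytes 1GB

Start-VM -Name BlenderVM
```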
MultiPoint Server as a role ^
MultiPoint Server is an interesting flavor of Windows Server that many IT professionals know little about. Primarily targeted at education scenarios with limited budgets (although retail has also adopted it), MultiPoint Server allows you to take a standard (fairly powerful) workstation and connect multiple monitors, keyboards, and mice and offer individual desktops to multiple users. A more advanced setup in the 2012 version uses zero clients that are connected via USB or Ethernet. Zero clients are monitors with USB ports for a mouse and keyboard that plug into the server.
Settings for Remote Desktop Virtualization Host
In Windows Server vNext, MultiPoint will be offered as a server role (similar to how Server Essentials for small businesses is a role in Windows Server 2012 R2), and the current limit of 20 connected stations will be removed.
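Once the role ships, installation should follow the usual Server Manager/PowerShell pattern. A sketch, assuming the preview exposes a MultiPoint feature name; check with Get-WindowsFeature first, since the exact name may change before release:

```powershell
# Discover the MultiPoint-related feature names on this build.
Get-WindowsFeature -Name *MultiPoint*

# Install the role plus management tools. The feature name here is an
# assumption; substitute whatever the query above returns.
Install-WindowsFeature -Name MultiPointServerRole -IncludeManagementTools -Restart
```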
Although the advancements in supporting OpenGL/OpenCL for design and engineering applications are welcome, I wonder how much these advancements will affect the market; most “serious” deployments of these types of applications for VDI that I see are using Citrix’s technology. The advancements do, however, open the door for Azure RemoteApp to host engineering and design programs (if Microsoft chooses to enable the technology in Azure), which could be another killer application for the public cloud. Offering MultiPoint to a larger audience by making it a role in Windows Server should also lead to wider adoption of this cool technology.