System Center (SC) 2016, now in Technical Preview 4 (TP4) at the time of writing, is coming along nicely. We looked at Configuration Manager in Joseph’s excellent article and at Virtual Machine Manager 2016 TP3. Over the next few weeks, we’ll be looking at the other parts of SC, such as Service Manager, Orchestrator, and Operations Manager. Today we will review Data Protection Manager (DPM) 2016.

If you’re interested in seeing how DPM has evolved, we’ve got you covered: 2012 R2, DPM 2012, and even older versions. With the regular Update Rollup cadence for System Center 2012 / 2012 R2, new features are added every few months, so the 2016 wave of System Center doesn’t come with a huge amount of new innovation. This is, in some ways, a blessing for a resource-strapped sysadmin: the delta is smaller, so upgrades should go more smoothly. To complete the “DPM picture,” you should also look at Microsoft Azure Backup Server (MABS). It’s basically a “free” DPM 2012 R2, without tape support, that lets you back up workloads to local disk and then to Azure.

Coming in DPM 2016 is support for mixed-mode Hyper-V clusters (Windows Server 2012 R2 and 2016 nodes in the same cluster) and for protecting workloads on Storage Spaces Direct (S2D). Also on the table are Resilient Change Tracking (RCT) and support for shielded virtual machines (VMs) that are protected by a virtual TPM chip and BitLocker.

Reports in the DPM Console

Mixed-mode clusters

The upgrade story for Hyper-V keeps improving with every version. Going from 2012 R2 to 2016, you’ll be able to evict one existing node (or add a new one), clean-install 2016, and then join it back into the cluster. Rinse and repeat until your whole cluster is upgraded, at which point you can “flip the switch” on the cluster functional level.
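
As a rough sketch, the whole rolling-upgrade loop looks something like this in PowerShell (the cluster and node names here are hypothetical):

```powershell
# Rolling cluster upgrade from 2012 R2 to 2016 - a minimal sketch.
# Drain and evict a node, clean-install Windows Server 2016 on it,
# then join it back. Repeat per node.
Suspend-ClusterNode -Cluster "HVC1" -Name "HV01" -Drain
Remove-ClusterNode -Cluster "HVC1" -Name "HV01"

# ...clean-install Windows Server 2016 on HV01, add the Hyper-V and
# Failover Clustering roles, then rejoin:
Add-ClusterNode -Cluster "HVC1" -Name "HV01"

# Once every node runs 2016, "flip the switch" (one-way operation):
Update-ClusterFunctionalLevel -Cluster "HVC1"
```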

DPM 2016 will be able to back up VMs across both versions and track which host they’re on. However, note that Nano Server isn’t supported (at least not in TP4). That’s going to be a showstopper for Microsoft’s recommendation of Nano Server as the preferred virtualization host platform. I hope Microsoft will fix this in a later TP.

Storage Spaces Direct

S2D is an evolution of Storage Spaces in Windows Server 2012 / 2012 R2. It can be used in a disaggregated fashion, in which local storage on several hosts (four is the minimum at the moment) is presented to a Hyper-V cluster as highly available storage. Alternatively, it can be hyper-converged, so that local storage in each Hyper-V host is used to store VMs.
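
To give a feel for the hyper-converged variant, here’s a minimal sketch of enabling S2D on an existing 2016 cluster and carving out a volume (the cluster and volume names are hypothetical):

```powershell
# Pool the local disks of all cluster nodes into one S2D pool.
Enable-ClusterStorageSpacesDirect -CimSession "S2D-CL01"

# Create a resilient CSV volume on the pool for VM storage.
New-Volume -CimSession "S2D-CL01" `
    -StoragePoolFriendlyName "S2D*" `
    -FriendlyName "VMVolume01" `
    -FileSystem CSVFS_ReFS `
    -Size 2TB
```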

DPM 2016 supports backing up VMs in this configuration, but again, not on Nano Server. I tested this with a few VMs on my four-node S2D (physical) cluster; it worked as expected.

Creating a Protection Group


Resilient Change Tracking

One of the big changes coming in Windows Server 2016 Hyper-V is the shift away from relying on backup vendors writing their own file system filter drivers to track changes in virtual disks. This functionality is moving into the core Hyper-V platform. Third-party backup vendors should be able to support Windows Server 2016 Hyper-V faster, since they no longer have to develop their own filter drivers. DPM 2016 will, of course, support this functionality, which means that changed blocks are tracked using both an on-disk map and an in-memory map. More details can be found in Taylor Brown’s excellent presentation at TechEd Europe 2014.
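
One practical note: RCT only applies to VMs whose configuration version has been upgraded to the Windows Server 2016 level, so a check like the following is worth running after a cluster upgrade (the VM name is hypothetical, and the version upgrade is one-way):

```powershell
# List VM configuration versions; VMs still on the older version
# rely on the legacy backup path until upgraded.
Get-VM | Select-Object Name, Version

# Upgrade a VM to the 2016 configuration version (VM must be off).
Stop-VM -Name "App01"
Update-VMVersion -Name "App01"
Start-VM -Name "App01"
```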

Backing up BitLocker-protected VMs

With shielded VMs and virtual TPM chips coming to Windows Server 2016, and Microsoft drawing a strong security boundary between the VM / workload administrators and the fabric administrators, backup naturally becomes a challenge. Sure, you can run an agent inside the VM and back it up on a per-VM basis, but most enterprises want a single backup solution that protects all the VMs from a host perspective. DPM 2016 will support the backup of virtual TPM/BitLocker-protected VMs.
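
For context, this is roughly what enabling a virtual TPM on a generation 2 VM looks like. The sketch below uses a local, untrusted guardian, which is fine for a lab; production shielded VMs would use a Host Guardian Service-backed key protector instead (the guardian and VM names are hypothetical):

```powershell
# Create a local guardian and key protector (lab use only).
$guardian = New-HgsGuardian -Name "LabGuardian" -GenerateCertificates
$kp = New-HgsKeyProtector -Owner $guardian -AllowUntrustedRoot

# Assign the key protector to the VM and switch on the virtual TPM,
# after which the guest can enable BitLocker on its volumes.
Set-VMKeyProtector -VMName "App01" -KeyProtector $kp.RawData
Enable-VMTPM -VMName "App01"
```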

Conclusion

Here’s a list of the supported workloads that DPM 2016 TP4 (and 2012 R2) can back up; it’s the usual suspects. Note that Exchange 2016 is not yet supported by either DPM 2016 TP4 or DPM 2012 R2. There’s a short survey that may be of interest to you.

Looking at UserVoice for DPM (and Azure Backup), the top requests are item-level recovery from Exchange (third-party backup solutions offer this, but not through supported methods), 5-minute (instead of 15-minute) intervals for SQL backup, support for protecting SQL Server 2014 databases stored on CSV volumes, support for protecting NAS and CIFS volumes, and automatically adding new VMs on a Hyper-V host. Even existing users seem mostly interested in incremental feature improvements rather than huge new features.

Interestingly, a planned feature is managing on-premises DPM servers that do “disk to disk to Azure” backup from the Azure Backup interface.


If you were looking for compelling features to throw out your current backup solution and invest in System Center, I think the forthcoming 2016 version will disappoint. It’s very much a gradual evolution of the current product, which is a solid backup solution for Microsoft workloads. To be truly competitive, it needs to add VMware backup support, as well as other enterprise workloads. I suspect this will happen in the future, but perhaps in Azure services instead of DPM.

35 Comments
  1. Joseph 7 years ago

    I’d love to use DPM, but the way it handles long-term storage and not being able to back up to an SMB share or NAS storage makes it near useless. Are there any changes with this version?

  2. Author

    Hi Joseph,

    I agree, only supporting local disk and SAN is hurting DPM.

    It’s an architectural limitation because of how DPM stores data on disk: it needs block access to the volume, which means local disk and SAN only. It’s a decision made a long time ago that now limits its capabilities. As Brad Anderson (at MS) is fond of saying, “architecture matters,” except he’s talking about the great architecture of Intune, not the questionable decisions of the DPM team.

    /Paul

  3. Cavemann 7 years ago

    You can effectively achieve this by deploying DPM as a VM on an HV host/cluster and using SOFS for storing its VHDX file(s).

    https://blogs.technet.microsoft.com/dpm/2015/01/05/announcing-deduplication-of-dpm-storage/

    • Author

      Hi Cavemann,

      Not sure what you mean here? Joseph asked about DPM storing backups on SMB file shares or NASs. Storing VHDX files on a SAN with DPM does indeed give you data deduplication but not the ability to store backups on file shares / NASs. The issue is that not everyone wants to use their expensive SAN for storing backups and would prefer less costly options.

      /Paul

  4. Ethan Smith 7 years ago

    Any idea when this product is going to be released as a final/non-preview product?  I need to upgrade to new DPM 2012 R2 hardware soon but may postpone that if DPM 2016 is going to be available relatively soon.

  5. Author

    Hi Ethan,

    Your guess is as good as mine, but as far as I understand, TP5 is the final preview, and with Ignite coming up in September, I think we’ll definitely see both Windows Server 2016 and SC 2016 RTM either before then or at Ignite.

    Hope that helps.

  6. Justin Severinsen 7 years ago

    I use Buffalo NAS boxes for my DPM server’s storage. They support the iSCSI protocol and are cheap storage. I purchased a cheap 32TB NAS and made it a RAID 10 for 16TB of space for my DPM server. So you can use NAS storage devices as long as they support the iSCSI protocol.
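
    A minimal sketch of what Justin describes, connecting a NAS-hosted iSCSI LUN so that DPM sees it as local block storage (the portal address and target name are hypothetical):

    ```powershell
    # Register the NAS as an iSCSI target portal and connect the LUN.
    New-IscsiTargetPortal -TargetPortalAddress "192.168.10.50"
    Get-IscsiTarget |
        Where-Object NodeAddress -like "*dpm-backup*" |
        Connect-IscsiTarget -IsPersistent $true

    # The LUN now shows up as a local disk that DPM can claim.
    Get-Disk | Where-Object BusType -eq "iSCSI"
    ```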

    • Author

      Hi Justin,

      Thanks for sharing; it helps everybody. Technically speaking, once you use the iSCSI protocol, your NASs are acting as a SAN, because you’re providing block-level access as opposed to standard NAS access, which is file / share access. But that’s semantics – I’m glad to hear that this works. What’s the model number of the Buffalo NASs you’re using?

      Paul Schnackenburg

  7. Steve Hohman 7 years ago

    I am doing a similar thing as Justin with the High-Rely systems and iSCSI. The entire disk is configured on the DPM server as an iSCSI LUN, then DPM handles the rest. The backups have been set-and-forget since day 1.

    https://www.high-rely.com/

    Steve Hohman

  8. Danny D 7 years ago

    +1 for the Synology product line. We have been doing DPM 2012 R2 backups to our Synology NAS over iSCSI for a few years and have never had a problem. It’s been very reliable for backups and restores, and surprisingly fast as well. We are staging our first DPM 2016 server in a couple of weeks, so we hope for similar success.

  9. Emely rogers 7 years ago

    Does anyone know why Disk Used is so much bigger than the available disk space?

    The report shows:

    Total Disk Space: 6TB
    Disk allocated: 6TB
    Disk Used: 19TB

    Thanks for your help

  10. Author

    Hi Emely,

    I’m not sure why you’re seeing that. Here’s the documentation on DPM storage, https://technet.microsoft.com/en-us/library/hh758026(v=sc.12).aspx; maybe you can find something in there. Otherwise, I’d recommend SystemCenterCentral.com; that’s a good place to ask any SC-related questions. And if you do find out, pop another post in here so we can all benefit.

    Paul Schnackenburg

  11. ThomasI 7 years ago

    @Joseph @Paul @Cavemann:

    You can use an SMB share for your storage pool. We do this.

    As Cavemann said, you have to run DPM in a Hyper-V virtual machine, but there is no requirement for iSCSI or SOFS. You can just put your vhdx on the SMB share and attach it in Hyper-V manager. Job done.

    The SMB share has to be SMB3 for Hyper-V to use it.

    It works brilliantly, and you can even use a dynamically expanding vhdx. This was a huge benefit for us, since DPM grossly overestimates the sizes it needs for our recovery point volumes. That doesn’t matter now, since Hyper-V knows which parts of the volumes inside the vhdx are actually used and only expands the vhdx accordingly.

    Highly recommended…
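
    A minimal sketch of what Thomas describes, with hypothetical share, file, and VM names:

    ```powershell
    # Create a dynamically expanding VHDX on an SMB3 share and attach
    # it to the DPM VM; inside the guest, DPM sees plain block storage.
    New-VHD -Path "\\filer01\dpmstore\dpm-pool01.vhdx" `
        -SizeBytes 10TB -Dynamic

    Add-VMHardDiskDrive -VMName "DPM01" `
        -Path "\\filer01\dpmstore\dpm-pool01.vhdx"
    ```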

    • Author

      Hi Thomas,

      Yes, that’s a good call. As far as DPM inside the VM is concerned, this is a disk where it has block-level access, so that’ll work just fine.

      Thanks for adding to the conversation!

      Have you tested DPM 2016 with ReFS yet?

      Paul Schnackenburg

  12. ThomasI 7 years ago

    Hi Paul

    I am building a test environment for DPM 2016 now. I was hoping that DPM 2016 with its modern storage could solve a performance problem I have on a protected SQL Server, but Mike Jacquet’s answer to my post on this leads me to believe that it most likely will not. I will probably carry out the test anyway.

  13. suresh 6 years ago

    Where can I download the DPM 2016 package? Can you please send me the download link?

  14. Daniel 6 years ago

    As Joseph already pointed out, long-term backup is available only on tape or Azure. Why long-term retention isn’t also available on disk is very strange. This issue comes from previous versions and is still present.
    Also, you can’t mix long-term with daily backup on the same data source disk. If a source is already part of one protection group, you can’t build another one using the same source. So DPM forces you to architect your data / disk partitioning by backup strategy, which is again very strange and very restrictive.

  15. Jacob 6 years ago

    Does DPM 2016 support WORM tapes?

  16. Author

    Hi Jacob,

    The full list of the drives and tapes that DPM supports can be found here https://social.technet.microsoft.com/wiki/contents/articles/17105.system-center-dpm-2012-and-2016-compatible-tape-libraries.aspx

    I haven’t personally used DPM with tape drives so I can’t speak from experience.

    Hope that helps,

    Paul Schnackenburg

  17. Dara 6 years ago

    Hi,

    We have DPM 2012 running on one VM (SQL + DPM on the one box) and are using 2TB RDMs on our VNX connected back to the VM (VMware, BTW). Many issues currently with 2012, mainly around the server freezing and being really slow in the management interface. Backups are OK, though. It is coping with over 2K clients.

    We are end of life on the VNX and looking at alternatives. I am thinking of the below scenario.

    I have set up a VM on a single ESXi host. This host will have local storage. SAS 10/15K probably in RAID5. Looking at a Dell R630 with direct attached storage.

    I have added a 500GB VMDK to the new DPM 2016 VM, which sits on the local storage, for a pilot group of client backups. We are only backing up 5GB of data per laptop/desktop. I hope to roll out to over 2K users if the pilot goes well.

    Does anyone have any thoughts on this, good or bad?

    Thanks

     

  18. Author

    Hi Dara,

    I have a few comments on your post:

    2000 protected machines is a lot (the limit in 2012 SP1 is 3000, though). I assume they’re all client machines? https://docs.microsoft.com/en-us/system-center/dpm/back-up-workstations
    Make sure you’ve looked into the new Modern Backup Storage in DPM 2016; you’ll need to run DPM 2016 on Windows Server 2016 and use ReFS (see the sketch after this list). https://blogs.technet.microsoft.com/dpm/2016/10/19/introducing-dpm-2016-modern-backup-storage/
    The SAS drives at 10 or 15K will be very expensive. That might be a bit of overkill for backup storage, depending on how long your retention is, etc.
    If you’re using RAID for performance, perhaps consider RAID 0; after all, it’s backup data, not live production data.
    Consider long-term retention using Azure (if you have that requirement; it’s not too common with client PCs).
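
    On the Modern Backup Storage point, adding a volume to DPM 2016 is roughly this simple; a minimal sketch, assuming a hypothetical drive letter, DPM server name, and friendly name:

    ```powershell
    # Format a volume with ReFS and hand it to DPM 2016's
    # Modern Backup Storage.
    Format-Volume -DriveLetter F -FileSystem ReFS

    Add-DPMDiskStorage -DPMServerName "DPM01" -Volume "F:" `
        -FriendlyName "MBS-Volume1"
    ```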

    Hope that helps,

    Paul Schnackenburg

     

  19. Dara 6 years ago

    Hi Paul,

    Thanks for the reply. Yes all client machines with the agent installed. No servers.

    I’ve had a look into the modern storage in DPM, and it looks good. I have presented a VMDK file to our DPM (Windows 2016) VM and created a volume on it for our pilot test group. Performance seems OK, but I’ll know more when I add more clients. Only have 5 clients backing up at the moment; I hope to get about 50 for the pilot. Latency does spike to 15-20ms during consistency checks, though.

    I will check out pricing for disk speeds and maybe consider NL-SAS. I would be using RAID 5, but maybe RAID 0 would be a better choice for performance.

    With DPM 2012, a colleague told me that due to how DPM does its backups, for every 1 backup I/O there are 3 I/Os (1 read and 2 writes), essentially hammering the disks.

    So for DPM 2012 you would need to make sure there are enough spindles in the array to spread the load: smaller disk capacities but more disks in the storage array.

    I’m not sure if this advice will still apply to DPM 2016 though seeing as modern storage backup seems different.

    Thanks for the advice.

     

  20. Author

    Hi again Dara,

    You’re welcome.

    Let us know how you go once you start adding more load onto DPM 2016 – I’m really curious to see how its modern storage / ReFS works out in the real world.

    Cheers,

    Paul

  21. Dara 6 years ago

    Hi Paul,

    I am thinking of changing this setup slightly now. I currently have the DPM 2016 server sitting on a Dell R630, which is using direct-attached storage. I am using two Dell MD1420s for the storage. This gives us 31TB usable across 28 x 1.2TB 10K SAS drives in a RAID 6 split over the two MD1420s, with some room for growth. I hope to have the same setup at DR and use vSphere Replication to get it out there. This is a requirement from the business.

    The original plan was to carve up a VMFS datastore and create multiple 1TB VMDKs for use in DPM, each VMDK having its own protection group.

    Now I am thinking of using the storage pool feature in 2016, so I can create one storage pool using multiple VMDK disks presented to the VM and then create a virtual disk on this pool. Then I create a volume and add it into DPM. I can then grow this as needed by adding more disks to the storage pool within Windows. I also get the benefit of deduplication, I believe.

    I am concerned about performance here, as we will have RAID 6 on the physical disk array, and with the storage pool in Windows I will probably use a RAID 5-like setup on the virtual disk. When setting it up in Windows, you are asked to choose between Simple, Mirror, and Parity. This is basically RAID 0, RAID 1, or RAID 5.

    The forum below seems to recommend this set up for DPM 2016.

    https://social.technet.microsoft.com/Forums/en-US/2bbc644b-d8d4-494d-ac9e-f18be7ddb068/dpm-2016-on-server-2016-storage-setup?forum=dpmstorage

    What are your thoughts on this?

    Thanks

  22. Author

    Hi again Dara,

    A couple of points for you to consider:

    If you’re going to use Storage Spaces Direct in Windows Server 2016, your servers will need to be running Datacenter, not Standard.

    As pointed out in the forum post you linked to, if you want file system dedupe, you’ll need to format with NTFS, not ReFS, and go with 1 TB virtual disks.

    But if you go with Modern Backup Storage and ReFS you’ll still get disk space savings.

    You could also go DPM-to-DPM for DR; remember that one DPM server can protect another DPM server in another location.

    As for your question about RAID 6 underlying the storage pool: I know that MS would recommend that Storage Spaces live on physical disks / SSDs / NVMe, not on virtual disks. I’m not sure how it’s going to go performance-wise.
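
    For reference, the pool Dara describes would be built something like this (the names are hypothetical, and Parity corresponds to the RAID 5-like option):

    ```powershell
    # Pool all eligible (unclaimed) disks presented to the VM.
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "DPMPool" `
        -StorageSubSystemFriendlyName "Windows Storage*" `
        -PhysicalDisks $disks

    # Create a Parity virtual disk across the pool.
    New-VirtualDisk -StoragePoolFriendlyName "DPMPool" `
        -FriendlyName "DPMData" `
        -ResiliencySettingName Parity `
        -UseMaximumSize

    # Initialize, partition, and format the new disk for DPM.
    Get-VirtualDisk -FriendlyName "DPMData" | Get-Disk |
        Initialize-Disk -PassThru |
        New-Partition -AssignDriveLetter -UseMaximumSize |
        Format-Volume -FileSystem ReFS
    ```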

    Really keen to hear back on your experiences with this when you try it out.

  23. markm75 6 years ago

    I’ve read through here and looked elsewhere, but I can’t seem to find the requirements for DPM 2016.

    We have an old DPM server with Woodcrest 2.0 GHz dual CPUs (I think 4 total cores or similar). I’m about to build a new DPM 2016 server, and I’m trying to custom build one (we have around 30TB of data as of now, with 10TB being backed up to tape).

    Does anyone know the CPU and/or RAM recommendations (I assume I’ll put SQL and DPM on the same box)?

    The old box is using RAID 1 for the OS and RAID 6 in a scattering of about 4 arrays with pretty large SATA II and SATA III drives (some 4TB each in most cases). This “gets the job done” but I guess could be better.

    Reading here, it sounds as if many recommend RAID 0, but for us that would be bad; in the past 5 years I’ve averaged about 3 failed disks per year, easily. So I’m thinking of sticking with RAID 6 (RAID 10 is a consideration, but it shoots the cost up) and maybe going to 7.2K SAS 6Gb or SAS 12Gb drives instead.

    I’m not familiar with “modern backup storage” and ReFS just yet; it appears I need to do some digging. I’m not sure if this affects how much space we should get on this new box.

    Thoughts?

    • Author

      Hi Mark,

      You can use this calculator, https://gallery.technet.microsoft.com/Virtual-machine-size-98673200, which is designed for Azure IaaS VMs, but it’ll give you a good idea of the size of machine you might need.

      Modern storage relies on ReFS and will save you disk space; read more here: https://docs.microsoft.com/en-us/system-center/dpm/add-storage. Here are the overall installation instructions: https://docs.microsoft.com/en-us/system-center/dpm/install-dpm. You can also use deduplication with DPM (but not with ReFS yet, although that support has been added in Windows Server 1709): https://docs.microsoft.com/en-us/system-center/dpm/deduplicate-dpm-storage. Here’s some more info on what’s supported and what’s not: https://docs.microsoft.com/en-us/system-center/dpm/dpm-support-issues.

      Basically, modern storage promises large disk space savings. It also uses Storage Spaces, which means you won’t be using any type of RAID at all, just a simple volume in Storage Spaces.

      Hope that helps,

      Paul Schnackenburg

      • markm75 6 years ago

        Thanks for that info. I’ll head in the direction of ReFS then. One thing I don’t quite get as of yet: with ReFS and modern storage with Storage Spaces (I assume Storage Spaces handles this), how is failure handled (i.e., without RAID 6 or similar underlying everything)? In spec’ing out the new server, I’ve all but decided to go the way we did for the old one: a chassis that holds 16 or 24 drives, with a SAS 12Gb expander to go along with a controller card that is 8-port expanded to the full set, which worked well for our Hyper-V host. Then for drives, I originally thought 7 x 6TB SAS 12Gb in RAID 6, along with two 1TB drives for a mirrored OS. Is all this now irrelevant, or should I still go down that path with a good controller and expander, etc.? It feels almost as if the controller card is overkill, though necessary for the max number of drives possible.

  24. Author

    Hi again Mark,

    Yes, that’s the mind shift to make: once you’re using Storage Spaces, you’re not relying on RAID. As a matter of fact, you should not have a RAID card in between Windows and the drives.

    Storage Spaces / Windows Server does data protection just as well as your RAID controller, probably better.

    So no controller card at all – go for what Dell calls an HBA (not sure what Lenovo / HP calls it) that just passes the disks straight up to the OS.
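
    A quick way to sanity-check an HBA setup is to confirm that Windows sees the raw disks as poolable:

    ```powershell
    # With an HBA (no RAID card), each physical disk should be visible
    # to Windows directly and be eligible for a storage pool.
    Get-PhysicalDisk |
        Select-Object FriendlyName, BusType, MediaType, CanPool, Size
    ```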

    Good luck,

    Paul

    • markm75 6 years ago

      Thanks again. I guess I can’t wrap my head around how fault tolerance is supposed to work on a single server using Storage Spaces. I also see a lot of people giving it high praise, but yet more people say that it’s really not tried and true and shouldn’t be used in a production environment. So I’m half on the fence with Storage Spaces; it depends, I think, on fault tolerance in particular. Otherwise, I may end up just going the route of RAID 6. More research, I guess 🙂 Thanks for all the input.
