Paul Schnackenburg talks to Jason Buffington, Senior Technical Product Manager at Microsoft, and Frederique Dennison, Product Marketing Manager - Security and Management at Microsoft, about the System Center suite, System Center Configuration Manager vNext, Data Protection Manager (DPM) 2010, disaster recovery and backup over the wire, small business backup, and the Tech Ed Australia 2010 experience.
PS Congratulations on becoming Technical Product Manager for System Center Operations Manager; you're now straddling three products. That must be a pretty big load?
JB The System Center teams at Microsoft have been hearing for a while that people want to deal with management and security as one. So internally we've reorganised our teams to better align with how customers consume those technologies. The management team, which you saw in System Center, and the security team, which you saw in the Forefront line, formally merged. Different people picked up different products, to organise more around how customers leverage those products in data centre scenarios and client scenarios. The integration you see between System Center Configuration Manager and Forefront as a way to deliver antivirus and malware protection to the end user; that's a client scenario. There's System Center, a bit of Forefront, but it's there to meet the client's needs. As opposed to that, I'm a datacentre guy, I've always been a raised-floor guy, so in my space I think about servers. Monitoring your infrastructure and protecting it are really two sides of the same coin. So yes, I picked up Operations Manager. I'm really, really excited because we're at 2007 R2 right now and we've been there for a while, so you can imagine this is going to be a very exciting year.
PS vNext is coming
JB Yes, vNext is coming
PS I'll be writing a review on that. I have already written the reviews on SCE and DPM 2010 so I don't have any deeply technical questions as such. I sat in on your session yesterday afternoon and I also sat in on most of your disaster recovery session today. I picked up quite a few things I didn't know.
So the first question: will there be a DPM 2010 plug-in for SBS Aurora? You know how the Aurora console is extensible, and what we all want to add is backup? There needs to be a button in SBS Aurora saying yes, that's cool, I've got my local backup; now I want a cloud backup as well.
JB I want that button too. In general we're not ready to tell you what will be there as far as that feature set goes, but if you look at what we've done so far, DPM already has a story for SBS 2008. As for features for products that haven't been released yet, that's absolutely a priority, although it might not appear as a button. Let me unpack this for you a little. The primary goal of DPM wasn't being number one in the market, although certainly we want to be a trusted, market-leading backup solution. The core focus is the supportability and reliability of Exchange, SQL, SharePoint, Hyper-V, Windows and the client. Whenever there's a new platform coming out, one of the things we sometimes hear from our customers is "hey, I really want these features of this platform but my legacy backup solution doesn't support it yet, so I'm going to wait until my legacy stuff catches up". So one of the design goals of DPM is always making sure that our customers don't have to wait. They know that as soon as those platforms come out there's going to be a wholly supported and reliable backup and recovery solution for those workloads; there shouldn't be a blocker for adopting new technology. As you can imagine, with every new platform that comes out we're always looking at making sure it's got a reliable backup and recovery solution.
PS That sounds good. It's not committing to actually putting the button in there, but that's fair enough; you work for Microsoft, you can't talk about things that aren't there yet. So what's your role in the DPM and SCE teams now that you've taken on SCOM?
JB By definition I'm what's called a Technical Product Manager. In DPM I own basically "how we talk about the product": how do we translate the features and help customers understand the business value? To me it's not about the checkbox that says we have Live Migration support on Cluster Shared Volumes; it's about helping the customer understand that if you're going to leverage Microsoft's virtualization platform, and you really want to use its high availability components with features like site-to-site Live Migration, you need to know that there's a reliable backup and recovery solution for it.
So I do the webcasts, I'm privileged sometimes to be invited to do live speaking, and this is a blessing for me, I enjoy that part of the job. I also write a lot of whitepapers and I manage the blogs. I do pretty much all of the evangelism from the top down about the products, and because DPM and SCE have just finished their release cycles, their content is pretty much complete. Their product teams are now blogging about operational issues and best practices, those kinds of things, which gives me a chance to take a deep breath. It'll be a few months before we start revving up for DPM vNext and SCE vNext, and what the plan is, and then the TAP (Technology Adoption Program), the beta, and the RTM ship; so during this time, instead of just maintaining those products, now is the time to say, let me take on something else, and because Operations Manager is in a different part of its lifecycle I said yes. Operations Manager vNext will actually be the fifth System Center product launch I'll have been involved in; I've been with three generations of DPM, I managed file services in Windows Server 2008 and 2008 R2, components of certainly much bigger products within Microsoft, and now Operations Manager. The nice thing of course is that SCE has parts of Operations Manager within it, so it's leveraging the management packs, the auto discovery, the diagrams and the knowledge that's already embedded there. Those are all things I'm already familiar with from SCE, so getting up to speed on the rest of Operations Manager for the enterprise is going to be fine.
PS I don't know if you heard it in the session this morning when you were talking about disaster recovery and site resiliency. You said "we have a datacentre here and we have a datacentre there and we replicate from here to there; try to do that with tape", and somebody yelled out, "try to do that with the bandwidth in Australia". I laughed a bit because it's true here; some of those scenarios assume an amount of bandwidth that you can get at a reasonable cost elsewhere, and I think that in some parts of Australia (at least) they're going to have to stick with the courier guy with the van and the tapes, because the cost per gigabyte is simply that much cheaper.
JB They might. In the first design of DPM v1 (version 2006), its only job was centralised backup of branch offices. When we built it there were actually testers in Europe, which has a lot of bandwidth limitations, more constrained than the US but not quite as bad as Australia, somewhere in the middle. When DPM 2006 first came out its only goal was to centralise file backups, so we assumed very thin pipes between your remote sites and the central site. And when I say a thin pipe, I mean 56 Kbps, really next to nothing. Because we could build it from the ground up, because we didn't have a legacy architecture of taking tape, converting it to disk and calling that our replication, the fact that we were just plucking blocks as they changed really helped. Actually, in most cases customers were telling us that they were happy with the bandwidth requirements; you're normally changing a lot less data than you think you are. In some environments it won't work, you can't put a golf ball through a garden hose, you need some bigger pipes to get that done.
And it may be something where you say the majority of your data is going to go by courier. You put the whole datacentre on tape and use a courier to get it offsite, but there's an operational cost that comes along with that. You might translate that into some bandwidth and bolster your connection instead. Moreover, if you're looking at true disaster recovery it's not 100% of your data; I don't care about Fred's Word documents, except for the ones you'll be sending to my boss, so I don't need all that stuff on day one. If I truly had a site-level crisis, I would argue that if you reached for the tapes from last Monday's courier, missing some Word documents might be OK. So if you focus on the top 20% of your data which truly is mission critical, now I can afford real-time or near real-time replication of that portion of the data, and I'll tolerate up to five business days' loss of the data on tape or disk from the courier. That way the savings on the operational costs of a daily courier can be used elsewhere, and the stuff that I actually need to resume business I can afford to put on the wire.
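The "critical 20% on the wire, the rest by courier" trade-off is easy to sanity-check with arithmetic. Here's a minimal back-of-the-envelope sketch; the data sizes, daily change rate and replication window are invented figures for illustration, not DPM defaults:

```python
# Back-of-the-envelope check: can a slice of the data fit on the wire?
# All figures below are illustrative assumptions, not DPM defaults.

def required_kbps(data_gb, daily_change_rate, window_hours):
    """Bandwidth needed to replicate one day's changed blocks within a window."""
    changed_bytes = data_gb * 1024**3 * daily_change_rate
    seconds = window_hours * 3600
    return changed_bytes * 8 / seconds / 1000  # kilobits per second

total_gb = 500                  # total data at the site (assumed)
critical_gb = total_gb * 0.20   # the ~20% that must survive a disaster

# Replicating everything vs. only the critical slice, 5% daily change, 8 h window
print(required_kbps(total_gb, 0.05, 8))     # whole site over the WAN
print(required_kbps(critical_gb, 0.05, 8))  # just the mission-critical slice
```

With these made-up numbers the critical slice needs roughly a fifth of the bandwidth of the whole site, which is exactly why replicating only the top 20% can turn "you need bigger pipes" into something an ordinary WAN link can carry.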
It’s hard for me to sign up for “you can’t use the WAN”, I don’t subscribe to that.
PS I agree, you can choose what gets replicated. You outline in your book very clearly the difference between synchronous and asynchronous replication and the costs involved. I work in the small business space running my own business. None of my customers will ever come to me and say “we want a synchronous data replication and site resiliency solution”. That’s just not going to happen.
JB Unless they read it in a book and read it out loud; "I want a synchronous replication solution".
PS And I’d say, that’s no problem, here’s the price tag and they’ll just go no thank you, we didn’t want that actually. We’ll buy something else.
JB And that's what I talk about in the book: calculate the cost of lost data. If you're a small business and the value of the data that can't be humanly recreated is high enough, then the cost justifies a pipe. And if it isn't, the cost does not justify having a pipe, which was exactly the premise for that chapter: don't buy more than you need, but have an empirical way to figure out what ballpark you're in, then go out and buy a suitable solution.
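That "calculate the cost of lost data" exercise boils down to comparing the yearly cost of the pipe with the loss it prevents. A toy model of the idea follows; every figure is invented purely for illustration, and this is my sketch of the reasoning, not the formula from the book:

```python
# Toy cost-of-lost-data model in the spirit of the chapter JB describes.
# Every figure here is an invented example, not a recommendation.

def cost_of_loss(hours_lost, recreate_cost_per_hour):
    """Cost of re-creating the data lost since the last recovery point."""
    return hours_lost * recreate_cost_per_hour

def justified(link_cost_per_year, hours_lost_without_link, hours_lost_with_link,
              recreate_cost_per_hour, incidents_per_year=1):
    """Is the pipe cheaper than the data loss it prevents?"""
    saved = (cost_of_loss(hours_lost_without_link, recreate_cost_per_hour)
             - cost_of_loss(hours_lost_with_link, recreate_cost_per_hour))
    return saved * incidents_per_year > link_cost_per_year

# Courier-only: lose up to 5 business days (40 h) of data; with replication: 1 h.
print(justified(link_cost_per_year=6000, hours_lost_without_link=40,
                hours_lost_with_link=1, recreate_cost_per_hour=200))  # True here
```

Plug in your own recreate cost and the answer flips either way, which is the point: the decision becomes empirical rather than a matter of taste.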
PS I really like that approach, it’s good. It brings it down to the business level and that’s where you’ve got to talk about it.
JB I actually had that chapter as a hidden download PDF, so if you ever want your readers to get a taste of that, I’ll send you the PDF of just that chapter, if that’s something that they might find interesting.
PS Yes, I can think of at least one client of mine that would actually read that and understand it. I can think of another couple that wouldn't have a clue, but that's alright. This is a side question but I'll throw it in anyway: I use Cristalink Firestreamer at one site. What do you hear about that in production? It's running on a DPM 2007 box and I keep getting weird errors about tapes being cleaning tapes. These are virtual tapes on a hard drive, and I'm like: no, it's not a cleaning tape, I promise you. It can't be a cleaning tape because it's just a file on a hard drive. Have you ever seen something like that?
JB No I haven't, but recognise that DPM 2007 was our first entry into automated tape management. Automated tape management was never part of Windows, and it wasn't part of DPM 2006, so there were a couple of things that were really quirky in our tape handling in DPM 2007. Tape management was an investment scenario in DPM 2010, so do that in-place upgrade and you might find that experience goes away, along with the improvements in everything else.
PS Except I'd need Server 2008 to run it on, and they run it on Server 2003.
JB The big things in DPM 2010 weren’t really about widening what we protect. It was really about how we protect it: better tape handling, better scalability, experiences like the SQL self service restore. Stuff that I talked about in the last session, better Disaster Recovery scenarios, better handling of Hyper-V, it was really about improving the experience and performance, auto healing, auto grow, a lot of “auto”.
PS I like the “auto” features.
JB A lot of "fire and forget", which makes DPM truly an enterprise solution. What we saw in DPM 2007 was that customers were really excited about this whole "one throat to choke" experience, backing up Microsoft with Microsoft, but it didn't scale as well as some of our larger customers wanted it to, and in a couple of cases you had to continually go back and manage storage instead of DPM handling that itself. In DPM 2010 we wanted a truly exceptional, "fire and forget", lights-out experience that gave the confidence that I'm using Microsoft to back up Microsoft; those were the business drivers. Better tape handling is one of the things DPM 2010 takes care of. The reason we went to Windows Server 2008 only was partly to ensure that higher ability to scale; you just can't scale your back-end backup servers on Server 2003, especially the disk, it just doesn't scale as well. With Windows Server 2008 we also get some capabilities from its disk and storage management. In Windows Server 2003 you could grow a volume but you couldn't shrink it, so if you over-allocated your space you just had to wait until you had consumed it. In Windows Server 2008 shrinking a volume is possible, so now you can actually tune the storage, which is one of the operational benefits Windows Server 2008 brings to the table.
PS Yes, I think most people can move forward with 2008.
JB And having a back end backup server running Windows Server 2008 with production servers on Windows Server 2003 is entirely possible. We don’t expect everyone to upgrade their entire infrastructure we just need to have the backup server up to date.
PS That's probably not an issue for most small business customers. In this particular case all their servers run Windows Server 2003 and there was no budget to buy a server for DPM 2007, so it had to be installed on an existing Iomega storage server running 2003.
JB I used to manage Storage Server.
PS Yes I know. Here's something I'll throw out to both of you since you're working on System Center products, vNext and so on. I've written a couple of reviews on System Center as a suite and it's obviously a major focus for Microsoft. A lot of people come to you and say management is critical; we'll give the hypervisor away, that's great, the competition does as well, but it's the management where the crunch is. As soon as you've got more than five VMs on a single host you've got to start managing them, whether it's virtualization, physical boxes, whatever. So there's obviously a lot of focus on System Center, but one of the things I found when I wrote the review, and this was a while ago now, is that the System Center suite is still in most ways separate products. There's a bit of overlap, like PRO packs linking SCOM to System Center Virtual Machine Manager, and there are a few connectors, but essentially they're still more or less separate products. Is this something that's going to change in the future?
JB A couple of points. The connectors are a good example, and if you look at how DPM and SCE were developed, it was no accident that those products were released within days of each other: their betas were 24 hours apart, their release candidates were about a week apart and the RTMs came within a week of each other. Based on where those products were in their lifecycle we wanted to align their arrival to market because it made sense; when we synchronise the release cycles it makes it much easier to co-develop. Here's one thing; remember your hands-on experience with SCE? If you ever download the evaluation software for SCE 2010, it's about 5 GB. There's no other component in System Center that's 5 GB. The reason is that if you go into the directories for that install you'll actually find a build of Operations Manager 2007 R2 in it, most of Virtual Machine Manager 2008 R2 and all the installable bits of WSUS 3.x, and the fun part is that it's one UI. It's one UI that pulls the inventory information from WSUS and pulls the management packs from Operations Manager to show you, in a single pane of glass, how everything is doing. Your software distribution, which you might do with WSUS, your monitoring, which you might do with Operations Manager, and the overview of your virtual infrastructure, which is done through VMM, are all there for you in that single pane of glass. And that's there today for our mid-sized business customers.
So you can see our technologies coming more into alignment in that single UI framework we have in Essentials. I don't have any inside perspective on vNext, whether they all magically plug in or whether that's coming in the version after vNext, but we're telling one story in Essentials and, little by little, the products are talking more to each other. The management packs are also starting to be geared towards tasks; the DPM management pack has tasks embedded in it which let you use Operations Manager as the central console to manage DPM. In some cases a lot of behaviours are in management packs, including for Opalis by the way; you can use Operations Manager to aggregate the actions, or you can use Opalis to initiate new actions, without actually having to go to the DPM UI for them.
FD System Center is still a platform with a cohesive story, and sometimes you have to pick the information from different parts, but it still provides the information. I'm curious whether you've noticed how System Center has been received at Tech Ed this year. A few people mentioned it to me, but pretty much everywhere, regardless of the starting point of the conversation, everyone looks at what System Center means in terms of maximising the investment in your platform as a whole.
JB What's the most value of the platform? And you don't have to stop at System Center itself; take a look at Forefront. Forefront Endpoint Protection is deliberately integrated into Configuration Manager; it doesn't even have its own console per se. Configuration Manager is the back end: it's pushing out the updates, and it's pushing the configuration for how often the clients should scan and check, what jobs to do, how often to update. Configuration Manager, as part of System Center, is actually managing and controlling Forefront, whose job it is to protect the whole Windows infrastructure. So little by little you'll see more integration between the products.
PS I really like System Center; I really like System Center Virtual Machine Manager. I've always liked the fact that you can manage VMware with VMM. It's cool, it's quite competitive, and I'm just waiting for Citrix management in the next version.
Just to clarify: today, if I have a Small Business Server 2008 at a client and I'm sitting back in my office, I have to set up a VPN to do the backup over the wire?
JB Yes, that's correct, there has to be some form of connectivity. If you're only protecting the files on that Small Business Server we can actually do it in an untrusted way, because DPM 2010 can protect non-domain-joined machines. There are a couple of other scenarios to consider there. We have a channel partner who has lots of small business clients, and one of the things they're looking at is dropping a small appliance, a small white box, into their customers' environments. The white box is actually just a Hyper-V host with one VM, and that VM is a virtualized DPM server. The DPM server joins the domain of that client, so it has full, trusted communications with the Small Business Server and can back up all the workloads on that server as well as all the clients. So now we have all that data backed up to the DPM server. The host is not part of the client's domain; it's actually part of the reseller's domain, so now I have a Hyper-V host with a DPM agent on it. That gives me the capability, as a channel partner, to remotely control the backups of all my customers' sites as a service. More importantly, I'm protecting the DPM server without necessarily being able to see into it, which gives me the ability to protect their data without violating the privacy of that data.
But I can still restore that DPM server if they have a site crisis; if they're a small business they're not going to have a disaster recovery site, that's grandiose for a small business. More importantly, if you're a small business and something goes wrong, your trusted reseller is the first people you're going to call to get your business back up and running. Wouldn't it be nice if they already had your data? Then the only thing they have to do is drop in some kind of virtualisation platform, have DPM push the data back onto it and start reconstituting it. From a phone call, within 15 minutes they can have that DPM VM restored and be starting to put data back into that environment. If we've got a hotel or a conference room someplace, drop in some PCs, arrange data connectivity, and we can have you back up and running. That's not something a small business had access to three years ago, certainly not five years ago. You've got your trusted provider, and you've got software at a cost that's certainly not astronomical for a mainstream small business, especially if you buy it via volume licensing or as an educational customer. Now you actually get disk-to-disk backup and long-distance vaulting, and you get it for a fairly small total cost.
As a service provider, one of the other things Microsoft has is Services Provider License Agreement (SPLA) licensing, where the reseller doesn't have to purchase the software outright; they can, I think the term is amortise, the licence over time and resell it as a service, which makes it even more attractive for that local reseller to become the local DR provider for their small business clients. I'm excited about that because our large customers go to Iron Mountain, or they go to i365, and locally here in Australia there are other companies; so you've got options for disaster recovery to the cloud if you're a large company. For a small company, I wouldn't say you don't know who Iron Mountain is, but you don't have that same kind of intuitive recognition; you do know who "Paul's reseller" is, so let them provide that recovery service locally. There are some neat things starting to come out in this space this year.
PS I like that scenario; it's given me something to think about. A colleague of mine who also runs his own business is actually doing this: he's setting up a DPM server in his office to protect a DPM server at one of his clients, and then they don't have to worry about tapes or hard drives or anything else going offsite because it's already going over the wire.
JB This works in small offices. Let's say you have a collaborative relationship with another business. Using IPsec you set up a tunnel between the two sites; you own the DPM server that backs up his environment and he owns the DPM server that backs up your environment. As far as you're concerned it's 2U somewhere in the back closet, but now you have two self-standing environments that are symbiotic. In a true small business I wouldn't be opposed to going with a small server and putting it at the owner's house; in the US at least he can write off the expense of that WAN connection ("this is part of my business infrastructure"), and now your DR site is at the owner's house, so if something happens to the office, the owner is going to want the data. I've been in backup for 20 years, and to me the number one goal has always been: get the data out of there. That's my primary passion, the data has to survive. And these days there are a lot of options for how to set that up.
PS Well, if we get the National Broadband Network here, if they actually build it, that opens up a whole different set of scenarios. They're talking gigabit; they already have 100 Mbps, and that's both ways.
JB I have cable to my house in the states so I think I’m like 30 Mbps down and 5 up. Sometimes I’m actually thinking that webpages are changing on their own but boy it’s fast.
PS That's all the questions I had. It's been great to finally meet you after talking on the phone and over email for a couple of years. Thank you also for your book; it's a good book, clearly someone speaking from a lot of experience. I also work part time as a teacher, so I'll be bringing quite a few of the lessons in this book back to my students when I start teaching them about DPM 2010 in a couple of months.
FD Where do you teach?
PS Sunshine Coast TAFE, on the other side of Brisbane.
FD Has it been a good TechEd for you?
PS Yes, it's been a great Tech Ed. This is my fourth Tech Ed and it's a great way of learning, but more importantly it's a great way of networking with people; I meet a lot of really smart people here. The ability to create community and a communal buzz around products is not that easy to do, but once you've got it going it's one of your biggest assets.
JB I couldn't agree more. When I come down for events, I would come down just to do the speaking because I love Australia, but I don't come just for the speaking. I come for the connections, I come for the "let's grab coffee"; that's really great. We were talking earlier about my job description; I'm also empowered to be the voice of the community, so you'll see that I blog a lot and tweet a lot. But you're absolutely right, the best voice we can have is not the Microsoft one, not to mention it doesn't scale. What does work is this: there were a few gentlemen in the front row of the DR session you were in; one of them is Matt Marlor from AU Techheads and the other is Orin Thomas, who's written books for something like 20 different certifications, and I met with them to find out what I can do to accelerate them here. That's what I come for.
I can hit almost the same size audience as a webcast but I can’t connect that way.
PS Thanks guys, I really appreciate your time.
Paul started in IT back in the day of 286 PCs and DOS, he’s now a teacher, technical journalist and IT consultant in Australia. He’s an MCT, MCSE, MCSA, MCITP, and MCTS. Read more on his blog at http://tellITasITis.com.au.