- RTM now and then
- Generation 2 VM in Hyper-V in Windows Server 2012 R2
- Enhanced VM interaction
- USB redirection
- Hyper-V backup
- Hyper-V and Linux
- Live Migration over RDMA vs. vMotion
- Dynamic vs. fixed VHD
- SANs and IOPS
- Storage and memory capacity
- Virtual SAN
- vSphere VM replication vs. Hyper-V replica
- Hyper-V Recovery Manager and Azure
- VMware NSX vs Hyper-V network virtualisation
- The VMware vs. Hyper-V discussion
RTM now and then
PS: Thanks very much for spending some time with me again. It’s going to end up like this every year. So 12 months later, 2012 R2 is on the horizon. You guys have redefined the term RTM. It used to mean IT Pros got the bits and could play with them; now it means somebody else gets the bits and you guys get the politics until general availability.
BA: Thankfully, I am not involved with any of that. From the engineering side of things, the process is still the same.
PS: You guys sign off and that’s it.
BA: Yes. It’s always been one of the interesting things about RTM, because releasing software on a global scale like Microsoft does is such a huge undertaking. So it’s always been really interesting from the engineering side, because we’ve always had this: yes, we’re done, and then there’s this lag before the rest of the world gets their hands on it and gets to play with it. It’s always an interesting process.
PS: Jeff Alexander was telling me that years ago, when he started at Microsoft, one of his first jobs was when the master came over from the US and they were duplicating CDs, or even floppy disks, to get the software out.
BA: I started working at Microsoft in 2003 just before we signed off on XP SP 2 and back then it was still very much like the, like, okay, now we have to start pressing disks. It was fantastic. I actually once got to go to the mastering room. That was huge. It was spectacular.
PS: You don’t do that anymore.
BA: I know we’re going completely off in the weeds now. Have you actually seen Office 2013, where you buy it and it’s just an empty box?
PS: Yes, I’ve seen that.
BA: I love that.
PS: It’s like, why would you bother with the box?
BA: It should just be a card.
PS: They used to do that. Remember when you used to buy software, you got this huge box and you open it up, and it’s got one CD and a small card in it, the box was just so you felt you got something substantial.
BA: I’m a big IT history buff. I’m one of those people who keep all the old software on my display shelves. I actually have the original box for Microsoft Word – no number, just Microsoft Word. The thing that I love about that box is if you look on the box, it’s Microsoft Word. It includes a dictionary. They actually put a physical dictionary in the box.
PS: That’s software for you.
BA: I just love that. You’re going to need this because it’s not in the software.
Generation 2 VM in Hyper-V in Windows Server 2012 R2
PS: Those were the days. So what are the benefits of the Generation 2 VM in Hyper-V in Windows Server 2012 R2? If you look at the VMware side, they have generations, or hardware versions, as well that keep on changing.
BA: This is completely different. It’s not a hardware version. It’s a different type of virtual machine, and like I said in my breakout session, people get so excited about this, and it is something that we are planning to invest in a lot in the future. The mentality behind the Generation 2 virtual machine was taking a step back and taking a fresh look, because the reality is, with all the virtualisation software that’s out there, we have been doing this for over a decade. I hate to say this, but so much stuff that’s there doesn’t need to be. I get new people joining the team, and they ask, why’d you do it like that? And we say, because that’s the way it’s done. Because 10 years ago we did it that way and it’s worked. Generation 2 virtual machines is really just taking a step back and saying, okay, how would we build this today? So I think it’s fascinating.
PS: I think it’s interesting, too, especially when you look through the device manager, and I can see where it’s heading, because there is no point in having all of that old legacy stuff that only exists in the physical world.
BA: It’s a great time for us to do this because of the evolution of UEFI and because of the stuff that’s going on in the hardware space. The stage is really well set for us to be able to do something new here.
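Editor’s note: on a Windows Server 2012 R2 host, the VM generation is chosen at creation time and can’t be changed afterwards. A minimal PowerShell sketch (the VM name and VHDX path below are placeholders):

```powershell
# Generation 2 VMs boot from UEFI and drop the emulated legacy devices;
# they require a VHDX-format disk.
New-VM -Name "Gen2Test" -Generation 2 -MemoryStartupBytes 1GB `
    -NewVHDPath "D:\VHDs\Gen2Test.vhdx" -NewVHDSizeBytes 60GB
```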
Enhanced VM interaction
PS: So you demoed the Enhanced VM interaction in your session, which is a fantastic improvement.
BA: Have you used it?
PS: I’ve used it. Yes, absolutely.
BA: Because I love this. At Microsoft, we talk about self-hosting, which is where, as we’re developing the software, we’re actually running it ourselves, and I am a very big proponent of self-hosting (dogfooding). I’d say within a couple of months of us RTMing a release, I’m on the next version in my office, with both desktop and server. Because I run the latest at work but always run RTM at home, what I watch for is the point in time when I go home and I try to do something and it doesn’t work, and I’m like, God damn it.
And now I’ve got the enhanced VM connect. I have two Hyper-V servers at home with remote management, and the number of times I’ve gone to copy and paste in a file or text and it just didn’t work…
USB redirection
PS: So you can do USB redirection through that?
BA: We can. I don’t talk about it, and there’s a very good reason for that. Let’s just set the stage here. The enhanced VM connect is basically a remote desktop call over VMBus, and we can do everything that you can do from a remote desktop. The piece that we at Microsoft get really excited about is smart card redirection. We use smart cards for everything, so for us, it’s great. We can also do folder redirection, so if you want to map a folder from your host into a virtual machine and party on, that works fine. You can do USB redirection, and that’s in your remote desktop stack. You do have to set the group policy on the host in order to enable it.
The reason why I don’t mention it, though, is that I know from talking to customers who have come to me over the years and said, I’d like to have USB in a Hyper-V virtual machine, that every time I ask them, what do you want it for? Nine times out of 10 the answer is, I have a software licence dongle and I want to connect that to my server virtual machine. That is the scenario, and USB redirection only works while you’ve got the remote desktop connection open. As soon as you close it, the USB dongle is going to get “pulled out”. It doesn’t work for a server scenario. It’s great for desktop. It’s great if you’re running on your laptop and you want to plug some USB devices into a VM – fantastic. Honestly, not many people know about that yet.
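Editor’s note: enhanced session mode is off by default on server hosts and can be switched on per host; the guest must run Windows 8.1 or Windows Server 2012 R2. A minimal sketch:

```powershell
# Allow enhanced sessions (RDP over VMBus) on this Hyper-V host.
# Device redirection (smart cards, drives, USB) then follows the normal
# Remote Desktop redirection settings/group policy on the host.
Set-VMHost -EnableEnhancedSessionMode $true
```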
Hyper-V backup
PS: What about backup? One of the scenarios that I have is you’ve got a small business server, something running virtual on top of Hyper-V, and you want to do backup from within the small business server, doing an SBS backup. The only way to solve that today is with third-party software; there’s one called USB Redirector, which sort of goes over the network.
BA: As a Microsoft person, I’m not a big fan of putting other people’s software in the parent partition.
PS: Me neither, but give me an alternative solution.
BA: I’ll be honest, I’m going to share some ignorance here, so bear with me. The two things that pop to mind first are: one, why aren’t you just backing up the VM? Why aren’t you doing a host-level VM backup?
PS: Because if I need to restore an individual file or if I need to restore an individual email or a SharePoint document or something, the backup within the VM will work. If you have DPM or something similar you could do that from the host side but that’s enterprise scale.
BA: Every enterprise backup solution does the item level restore.
PS: No problem at all, but for a small business that has one physical server.
BA: That’s not going to happen.
Hyper-V and Linux
PS: So you guys have got a fair bit of work making sure that Linux is well supported in Hyper-V. Is this just because Azure runs on Hyper-V and you obviously have to support Linux in a public cloud, or is it a demand from your Enterprise customers that we need to run Linux on premises as well?
BA: The reality is it’s both; Linux servers are out there and our focus is we want to be the virtualisation platform of choice, and if that means we have to do a fantastic job of supporting Linux, then we’ll do it. I’m opening a can of worms here so I’m going to go on my little tirade that I have to go on. One of the things that really frustrates me, and I see this far too much, is people picking VMware over Microsoft because, “we’re virtualising Linux”.
Now, if I talk to someone who is virtualising Linux servers and they come to me and say, you know what, we’re going with KVM over Microsoft because we’re virtualising Linux – okay. I’m not going to argue with you there. Honestly, I think Hyper-V is a better platform, but it’s a really hard argument to say that Linux on Linux is a bad idea. But I want to be really clear, and VMware will back me up on this: VMware isn’t Linux. VMware is a proprietary operating system company just like us, so if you’re a Linux guy and you’re looking at VMware versus Microsoft, you’re picking between two proprietary operating systems. And another thing to point out: only one of these companies releases all their drivers under open source. It’s us. Isn’t that whacky?
PS: Fair enough.
BA: You know, once you get through your head that VMware and Microsoft both develop proprietary operating systems, and you then actually take a look at who’s doing more with open standards, who is releasing their stuff to the public, and who is documenting their stuff, we’re actually the better company. I mean, it’s fascinating.
PS: Did you see at TechEd US 2013, Mark Russinovich made a comment that the head of the Ubuntu distribution said they closed the first “bug” they ever had, because he now felt Linux had gained enough market share, since it’s in Android and all the other platforms. Then he also said, you should use Azure because it’s a great platform for Linux. So I thought that was pretty cool. Can you run Linux on a Generation 2 VM?
BA: So, today you can’t. Honestly, this isn’t an evil scheme from Microsoft. There is nothing inherent in a Generation 2 virtual machine that stops Linux from running. It’s just that it’s essentially a new piece of hardware, and there isn’t Linux support for it yet. Hopefully, at some time in the future we will see that come along. I can’t speculate on when that’ll happen.
PS: I know. Never talk about the future.
BA: I know there are going to be tin foil hat people who are like, this is Microsoft being evil and so on.
Live Migration over RDMA vs. vMotion
PS: In Windows Server 2012 you enabled unlimited Live Migrations, really based on how much your hardware could take, and that was a way to go past VMware compared to Windows Server 2008 R2 (which could only do one at a time). So technically, I think you caught up in 2012, but there were some issues around performance and speed of live migration versus vMotion, and I’ve read some articles that say vMotion is faster. Do you feel now, with the compression stuff and Live Migration over RDMA (editor’s note: Remote Direct Memory Access) that you introduced in 2012 R2, that you’ve overtaken them?
BA: Absolutely. We’re leaving them in the dust, absolutely. What just blows my mind is the live migration over RDMA.
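Editor’s note: in Windows Server 2012 R2 the live migration transport is a host-level setting. A minimal sketch:

```powershell
# Pick the live migration performance option: Compression (the 2012 R2
# default) or SMB, which uses SMB Direct (RDMA) when the NICs support it.
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB
```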
Dynamic vs. fixed VHD
PS: So should we use dynamic or fixed VHD in production?
BA: That’s a really good question.
PS: I know, because you kept changing your answer over the years.
BA: Let me kind of go into some depth here. You know, if you put a gun to my head, the answer that I’ll give you today is you should use fixed VHD. I really want to be able to tell you that you should use dynamic and I think there are some benefits to dynamic and it’s really where we want to go, and what happens is in every release that we do, we make dynamic VHD a little bit better. It’s our goal that one day we’re just going to go, hey, use dynamic for everything. So when we started out on this journey the primary reason for using fixed VHD was performance. And with the work that we did in 2008 R2 and 2012, the performance really isn’t an issue now, you know, and if all you are concerned about is, you know, are my systems going to perform, then fixed, dynamic, whatever. It’s all going to work.
The one reason that I have left that I would say use fixed VHD in production environments is most people that I talk to haven’t got their head around thin provisioning. I’ve shot myself in the foot here. Now, if you came to me and you said, like, Ben, I want to use dynamic VHD on a thin provision SAN or a thin provisioned storage space and I have my thin provisioning alerts set up. I’ve got all my monitoring in place. I’ve got the alert so that if we get to 70% utilization, I’m going to get the email and I’m going to know to go on and do things. And great, yes, my answer is use dynamic. It’s fantastic.
The concern that I have is that, on one hand, the people who have been in the storage industry for a while understand the risks of thin provisioning in general. They know you don’t fully utilise; you set your alerts at 70%, you manage yourselves. So for those guys, yes, dynamic’s the way to go.
The challenge I have – and it’s fascinating; we’ve had this for a while with networking and now we’re starting to see it with storage – is that with virtualisation, we get a lot of people for whom this is their first exposure to this problem space. In the networking example, one of the things that I find so hard is if you go to, like, our public Hyper-V support forums, I swear 60% of the questions on there are from people who can’t get their VMs to connect to the network, and they’ve never done this on physical hardware. This is their first time doing basic networking.
Two, in 10-plus years of working on virtualisation, we have never had a bug in the virtualisation layer that specifically caused networking to stop working. Luckily we have the Hyper-V MVPs, and God bless them, because they’re in the forums every day helping out, and I think I would go insane if I had to do that. I’ve tried so often to figure out how I could get onto one of these forums to say: look, trust me, this isn’t our fault. Go double-check your subnets, double-check your IP settings. There’s something wrong there.
In a lot of ways we have the same thing going on right now with storage in that we get people who haven’t had exposure to thin provisioning in a storage infrastructure before, and now they’re coming across it for the first time in virtualisation. That’s really the only reason why I would say fixed, you know, and it’s going to depend on the customer. If I have just the guy off the street who’s a rookie and he came to me and said, Ben, what should I use? I’d say, you know what? Use fixed. You’ll be happy, you won’t shoot yourself in the foot and I won’t have to deal with the support call. But if I have someone who, like, is a data centre admin who’s got tons of SANs and he comes to me and says, hey, Ben, do you guys have a thin provision storage story? I’m like, yes, absolutely, put dynamic disks on your SAN. You know, party on. That’s where the mixed message comes from.
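Editor’s note: the two VHD types Ben compares are chosen when the disk is created. A minimal sketch (the paths are placeholders):

```powershell
# Fixed: all 100 GB allocated up front. Dynamic: the file starts small
# and grows as the guest writes data (thin provisioning at the VHD level).
New-VHD -Path "D:\VHDs\Fixed.vhdx"   -SizeBytes 100GB -Fixed
New-VHD -Path "D:\VHDs\Dynamic.vhdx" -SizeBytes 100GB -Dynamic
```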
SANs and IOPS
PS: Okay, yes, that’s fair enough. It makes sense, because often when you do your presentations, you do talk about performance and you show that dynamic is on par, so why not use dynamic? It’s much easier. Okay, so now, storage: I think the big story in 2012 and 2012 R2 is that you guys are basically out to kill the SAN. I know you’re not saying that, but you are saying that.
BA: Are you recording that?
PS: It’s being recorded, yes.
BA: There’s no video feed.
PS: There’s no video feed so you can gesture. You don’t have to answer.
BA: Time for charades. Super.
PS: I think it’s pretty clear because it’s a trend in the industry, because if you look at the big public clouds like Azure, like Amazon, they do not use SANs. They couldn’t because there’s no way they could scale that to all those physical servers. You couldn’t build Azure and hook up all the physical machines you guys have for SANs. You’d price yourself out of the market. You couldn’t do it.
BA: There are a lot of people in the industry right now who are tackling this. The interesting trend that’s emerging is that everyone acknowledges that storage is too expensive. It just is. The really interesting thing I find coming to the forefront is the cost of IOPS (editor’s note: Input/Output Operations Per Second), not cost per gigabyte. And the thing that bugs me here – I take on my peers about this all the time, and I kind of rein people in all the time – is: yes, go and talk about the cost of IOPS.
But understand that half your audience has no idea what you’re talking about. Honestly, you get a roomful of system administrators and you ask them, what are your IOPS requirements? There’ll be some guys there who’ll be able to answer. There’ll be a lot of them who are like, “I don’t know.” It’s fascinating, because it’s something that the industry as a whole needs to mature on. It’s been really interesting for us as we’ve been digging into the space and starting to build technology to address this.
So something fascinating, as we were working on the storage cost and the IOPS metering and so on – and stuff like this points to real gaps in the industry – is that if you build a storage system today, there is no standards-based way to know the IOPS capacity of that system. Pretty much the only way to know is to run your workloads on it and then observe what performance and IOPS you get. So there’s this really interesting thing where everyone’s unhappy about how much they’re spending on storage, and our impression from our research – we go out and we talk to people about what’s the bottleneck in your data centre, what’s stopping you from getting more – is that the answer today is IOPS.
And so, yes, we’re really looking at this. I just figured out how to say this and not get in trouble: we’re really looking at what disruptive innovations we can bring to the storage marketplace. Everyone’s looking at this. You could just go down to the expo floor and you’ll find a half a dozen storage companies who are looking at different ways to tackle this problem, and I’m actually really excited. I think we’re going to see a huge amount of innovation and creativity in the storage space over the next period of time.
PS: I think so, too.
PS: So with the Scale-Out File Server and with storage tiering in 2012 R2, and you cap all this with the SMB 3.0 Direct/RDMA that you guys support, is there anything major missing from that story, to say that Windows file servers are now competitive with SANs?
BA: No. We’re in a really good place. Honestly, the biggest thing that we have to do right now is work with our hardware partners to get really compelling offerings out of them. Today, for most people who want to look at this, you’re going to end up doing this as a DIY, build-it-yourself thing, and that’s not where we want to be. Have you seen the hardware from DataON?
PS: Not physically.
BA: I’ve seen it physically. That is some sexy hardware. I’m not kidding. I’ve been talking to people this week; I’m trying to see if we can get some Australian distributors set up for that, because I would love to see that coming out here. That’s the sort of thing that we need to make happen, but once you’ve got the metal there, we’re in a really great place. One of the things I’ve been having fun with, in my overflowing list of things to do, is the storage provisioning. So for my lab environment that I showed on my blog, with multiple clusters and so on, the first thing – and I completely geeked out on this – is that I completely automated the deployment of that environment. One of the challenges I have, which I guess is an internal Microsoft challenge, is that I get new builds of Windows every day, and if I get a bug and I’m running a build of Windows that’s over two weeks old, the developers won’t talk to me because I’m running an old build.
So I actually scripted the entire deployment, including storage spaces, clustering… like, the whole nine yards… I’ve got it down to three hours. You can give me a build of Windows, I put in my credentials, and three hours later: two scale-out file servers, two Hyper-V clusters, VMs deployed, everything. The really nice thing for me – keeping in mind, as I’ve said, a lot of my hardware comes from dumpster diving – is I’ve got storage tiering in there, and I worked on the script so that I could just put random drives in there and it will optimally configure them. If there are SSDs in there, it configures tiering. If there aren’t, it doesn’t. It’s really nice.
PS: That’s great.
BA: Yes, I need to do a long series of blog posts on that.
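Editor’s note: the SSD-detection logic Ben describes might look roughly like this in a deployment script (the pool and tier names are placeholders):

```powershell
# Only configure storage tiers if the machine actually has SSDs.
$ssds = Get-PhysicalDisk | Where-Object MediaType -eq "SSD"
if ($ssds) {
    New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
    New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD
}
```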
PS: So the storage IOPS QoS that you have in 2012 R2 is per VHD.
BA: Yes, yes.
PS: Which really makes it a per host thing. If you have a cluster, you can control it per host, but how do you control it cluster wide?
BA: You’re a very clever man to spot that.
PS: I didn’t, actually. Somebody else did, but go on.
BA: I’ve been skirting that issue on stage because I just don’t want to get into this conversation. So let me lay out where we are today. First thing – you’re right. We have more work to do. It’s a version 1 feature. There are a couple of aspects to storage. The first aspect is we have the cap (max storage IOPS), which is what I demonstrated today. That always works. Even though you configure it per VHD, it’s sitting with the VM, it doesn’t matter whether it’s local storage or shared storage or whatever, you set the cap, and we don’t let the VM go over the cap. Fine.
Where things become really interesting is the reserve (minimum storage IOPS) because if you set the reserve and you’re running on local storage, yes, great, but, yes, if you set the reserve and they’re on shared storage and there’s a bunch of other VMs accessing that storage, how do we honour that? The reality is today we can’t, but what we did do is we built an infrastructure where one, as I mentioned, you can set a cap. That’s great. If you set the reserve and you’re on shared infrastructure, and we find that your reserve is not being honoured, we can detect that and we actually have an eventing system and on shared storage we will fire events to say, warning, you’ve set reserves. They’re not being met. Not a perfect story. Something that we know we need to work on.
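Editor’s note: both the cap and the reserve are set per virtual hard disk. A minimal sketch (the VM name and controller location are placeholders):

```powershell
# Cap this disk at 1000 normalized (8 KB) IOPS and request a 100-IOPS
# reserve; if the reserve can't be met on shared storage, an event fires.
Set-VMHardDiskDrive -VMName "TestVM" -ControllerType SCSI `
    -ControllerNumber 0 -ControllerLocation 0 `
    -MaximumIOPS 1000 -MinimumIOPS 100
```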
Storage and memory capacity
PS: So vSphere 5.5 was announced at VMworld. They’ve made the VMDK format 62 terabytes.
BA: They haven’t caught up to us. (editor’s note: VHDX supports 64TB)
PS: I was trying to find the figures on how big their clusters can be, and I didn’t find any figures.
BA: I don’t think they’ve changed their number.
PS: Okay. That’s a bit of a moot point though, isn’t it? I mean, most companies won’t have 32- or 64-node virtualisation clusters.
BA: First, I have to blow my own horn for a moment. I’m so happy that the last two VMworlds have been so boring. It’s been great. I love that. I look forward to VMworld every year; competition’s a great thing, and there’s part of me that sits there with bated breath, thinking, I’m going to see something crazy and new. The last two VMworlds have really been predictable. I can’t complain. But anyway, on the scale side, I’m so happy, because with 2012 we blew it out of the water, and yes, I’ve had conversations with CEOs where, when I brought up the scale, their response was, it’s just a pissing match. Okay, that’s fine. I won’t name names, because I would get in trouble if I did, but I actually found my first customer with a Hyper-V cluster with two terabytes of memory in each node. Yes.
PS: That’s pretty cool. They would have paid a lot of money for that.
BA: They would have paid a lot of money. I was really happy, because I’m like, yes, someone who’s actually going to get close to our limit, because we support four terabytes of memory.
Virtual SAN
PS: So one of the things they talked a lot about at VMworld was the virtual SAN. How do you see that versus the Scale-Out File Server and how you guys do storage?
BA: You know, honestly, right now, it’s not an apples-to-apples comparison. There are pros and cons to both approaches. Of everything that came out of VMworld, I think the virtual SAN was the most interesting thing, but once again, for me this was just a confirmation that everyone seems to think the IOPS-per-dollar issue is where we’re going. Honestly, I don’t talk futures, but that’s not keeping me up at night. We have far more expertise in storage here at Microsoft. I have high confidence in the guys working on that.
vSphere VM replication vs. Hyper-V replica
PS: Okay, vSphere VM replication versus Hyper-V Replica. vSphere replication can keep up to 24 snapshots, but they can’t do test failovers, and they don’t have automatic injection of IP addresses on the replica side. How do you see that? You only support four snapshots?
BA: We support… how many do we support? I believe we support 12.
PS: 12, okay. Has there been a change in R2?
BA: No, we support 12. Honestly, no one has complained to me about that limitation.
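Editor’s note: the number of recovery points kept on the replica is configured per VM. A minimal sketch (the VM name is a placeholder):

```powershell
# Keep 12 additional hourly recovery points on the replica server
# (0 keeps only the latest point).
Set-VMReplication -VMName "TestVM" -RecoveryHistory 12
```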
Hyper-V Recovery Manager and Azure
PS: No, I don’t think so either. How does the Azure service, the Recovery Manager, how does that fit in with the replication story?
BA: So first I have to say this because everyone misses this. It’s that your data’s not going through Azure. I have to be really clear because I have stood on stage and presented the architecture and said, “your data’s not going through Azure”, and then the first question from the audience is, I don’t like that my data is going through Azure. It’s not. It’s really not.
PS: I know you don’t talk in futures, actually, but that would be a fairly cool service.
BA: Not saying anything. Really, that’s an interesting idea.
PS: Yes, I’m sure you guys have never thought of that. It never crossed your mind. Nobody at Microsoft ever thought of that. Let’s make Azure a Hyper-V replica destination. Anyway, go on.
BA: Saying nothing. Just saying nothing. Keep in mind that when we first designed Hyper-V Replica, the real market we were going after was the small and medium business. We knew the larger businesses would be interested in it, but we very much constrained ourselves and said small and medium business. Where Hyper-V Recovery Manager comes in is in helping those larger businesses, because the small and medium business, frankly, we’re not expecting them to use the Azure service.
Where Hyper-V Recovery Manager comes in is when, okay, now I’ve got the medium or large business and I want to do serious replication. So the scenario we have here is: I’m a larger business. I have two data centres, a bunch of Hyper-V servers in each data centre, and SCVMM in each data centre. Hyper-V Recovery Manager comes in and provides a couple of key pieces of infrastructure. The first thing: we’ve seen a mix here. Some people have a unified AD/DNS space between these data centres, but a lot of people don’t. If you don’t have a unified space, you have to use certificates, and setting up a certificate for the small guy is fine. Setting up certificates on all the hosts in a large environment is not fine. That’s a lot of work. So that’s the first thing Hyper-V Recovery Manager does for you: it does all the certificate management and those things – the whole setup.
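Editor’s note: the certificate-based setup described here is used when there is no shared Kerberos realm between sites; it is configured per VM on each host. A minimal sketch (the server name, port, and thumbprint are placeholders):

```powershell
# Without a shared AD/DNS space, replication authenticates with
# certificates over HTTPS instead of Kerberos.
Enable-VMReplication -VMName "TestVM" `
    -ReplicaServerName "replica.example.com" -ReplicaServerPort 443 `
    -AuthenticationType Certificate -CertificateThumbprint "<thumbprint>"
```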
The second thing it does for you is it gives you a single console that runs outside of these data centres that you can come in and look at. This is a fun problem to discuss: if you look at the small guy, where’s this managed from? Well, that’s easy. It’s managed from the host, because the scenario you’re designing for is: this goes down, I call up the service provider and say, my server is down, can you fail over to my replica? If you look at this in a large business, though, trying to figure out where the infrastructure that provides the management experience resides is actually the thorny question.
So it’s actually really nice to have a third piece out there that gives you the unified view. No matter which data centre goes down, there’s your console for managing that. It also brings in a lot of the things that start making sense at scale, like being able to do batch operations and bulk operations and so on that you would want if you’re doing replication for a data centre.
The one other thing that I do like – part of me goes, I’d like to look at that for the small business case – is the concept Hyper-V Recovery Manager has of manual steps. Have you seen this?
BA: One of the awkward conversations to have – and we had this conversation around IT departments – is one of those things you feel uncomfortable saying, but once we say it, people are like, you know what? This is disaster recovery, and if a disaster happens, you might not be around. It may not be you executing the failover, because maybe you are trapped in your car. It’s not a pleasant topic, but it’s a reality.
So one of the things they actually built into Hyper-V Recovery Manager is that you can design a failover plan, and in the failover plan you can literally say: if we are shut down, first you fail over these VMs and then you fail over those VMs, as a step-by-step process. You can think of it as a runbook: you fail over these VMs, you fail over those VMs, now call Bob and get him to go and check the wiring, or you need to have someone log into this particular box. You can actually put that knowledge into Hyper-V Recovery Manager, which is a bit morbid, but, you know…
PS: That makes perfect sense. That’s one of the things you guys learned over in the U.S. with Katrina: people don’t come to work. Even if they want to, they often can’t, and most of the time they don’t want to anyway. They’re going to stay at home. So you can have the best failover system in the world, but if nobody presses the button, it’s not going to happen.
VMware NSX vs Hyper-V network virtualisation
PS: VMware NSX versus Hyper-V network virtualisation.
BA: Well, we are shipping something. I do have to highlight that. Firstly, yes, we shipped this last year. We did. And two, as Jeff was saying, the stuff that we’re shipping is based on our learnings in Azure, so even though last year was our first release, this is actually proven at cloud scale. VMware is doing some masterful marketing here, and I want to go slap some of our marketing people around. We’re ahead of the curve here. We are, and the software-defined networking stuff we have is really powerful.
The VMware vs. Hyper-V discussion
PS: So last year you were saying that you were having conversations with the CIOs, etc., that were VMware customers and you were coming into the discussion and saying that’s fine, but when it comes time to renew your contract, due diligence means you should have a look at Hyper-V because we’re now a serious competitor. So 12 months later, are you having those conversations? Is that happening?
BA: Honestly, the conversations I’m having today, it is hilarious. I’ve lost track of the number of conversations I’ve had with customers where their opening line is, “so we’re moving from VMware and…”. You know, at the CIO/CEO level, I’m not selling people on Hyper-V anymore. That’s a done deal. Where I’m having the Hyper-V versus VMware discussions is at the fanboy level, and that’s fine, you know? I love to have those discussions, because we have the better offering. I can’t remember the last time I lost a VMware versus Microsoft discussion except for people just going, you know what? I just like VMware better.
PS: I think it’s the other thing, which I’ve seen over 20 years of being in IT. It’s always what people have learned. You spent five years learning that thing. Yes, you’ve read all the books, you’ve done the certification exams, and you know the configuration steps in detail. Somebody comes along with something different and shiny, you’re going to go, I don’t like that because I know this – not that.
BA: Well, if you’ve been in IT 20 years, you know how that story ends.
PS: Yes, I know. Marketing speech aside, VMware still sees themselves very much as the best virtualisation provider. This is what they’ve done for a long time; they certainly created the market. What’s the fundamental difference between Microsoft’s approach and their approach to IT infrastructure for private platforms?
BA: Especially with 2012 R2, but also with 2012, I don’t know why you would choose VMware. They have this belief that they’re the best platform, and I don’t know where the evidence for that is. I really don’t. With 2008 R2 versus ESX 4, yes, there were some things. Today we’re looking at 2012 R2 versus ESX 5.5. Tell me, why are you choosing VMware? Because right now, at a pure virtualisation level, we’re racking up the features they don’t have. We’re showing the innovations before they do. Popping it up a level, once you start looking at cloud strategy, we’re miles ahead. We absolutely are, and we’re pushing ahead there.
I mean, if you’re a CIO today, first, if you’re not thinking about how am I going to do private and public cloud, I’m sorry, you need to be fired. You’re not earning your paycheck. If you’re still in the world of, yes, I can do this all on premises and I don’t need to think about external infrastructure – dude, what are you doing? Once you get into the world where it’s not public or private cloud, it’s public and private cloud where it makes sense for my business, and once you start looking for solutions for that, we are miles ahead.
PS: Thank you Ben for your time and insight – always a pleasure.
BA: Thank you.