A while ago, I discussed the seven disadvantages of server virtualization. Cost was one factor that I did not cover in that post. Questioning whether server virtualization can reduce costs sounds a bit like blasphemy these days. You will hardly find an article about server virtualization that does not claim it is the best way to save costs in the data center. According to this view, server consolidation is the main benefit of virtualizing servers.
The arguments supporting this claim are so obvious that most IT managers do not dare question them. Replacing ten physical servers with just one seems to be the best cost-saving measure one can think of, and since the Green IT hype began, even CEOs talk about power savings in the data center. The calculation seems quite easy, too: ten horses require ten times as much hay as one horse. Every CEO can understand that. So does it really make sense to challenge the cost-saving axiom of server virtualization? I think it does.
There are four different considerations to take into account when it comes to calculating the cost of server virtualization: hardware costs, power costs, software costs, and payroll costs. I will cover the first factor today, and the others in the next articles in this series.
The fact is that the claim that one big server costs less than ten small ones is not always true. This is not surprising, because the same applies to other goods, such as cars: ten Volkswagens can be cheaper than one Ferrari. The situation is much the same with server virtualization.
The price of a server is not proportional to its computational power. High-end servers are disproportionately more expensive than average servers. There are many reasons for this. For example, a 4 GB RAM module costs more than two 2 GB modules, and a server that can host ten virtual servers obviously needs a lot of RAM. The same applies to most other server components: CPUs, storage, and so on.
Another key factor is that server vendors produce far more small servers than high-end servers, so the latter lack economies of scale, which drives up their prices. This means that you cannot always save hardware costs by replacing ten servers with one server that is powerful enough to do the same job.
The reason many IT shops see cost savings when they move to server virtualization is that their previous physical servers were hopelessly oversized. It is certainly one of the advantages of server virtualization that it makes fine-tuning hardware utilization easier. However, the huge cost savings that people sometimes report often just mean that they did not pay much attention to server utilization before.
In fact, it is not necessary to host every backend application on its own server. The argument that dedicated servers avoid conflicts between applications can be countered with the observation that server virtualization can cause problems too, since it adds a new layer of complexity. In addition, let us not forget that virtualization always comes at the expense of performance, which in turn increases the required investment in server hardware.
Whether, and by how much, hardware costs can be reduced by means of server virtualization depends heavily on your organization's server infrastructure. If you have already optimized hardware utilization by other means, server virtualization will not help much in reducing hardware costs. One thing is for sure: I would not put my faith in the cost calculators of server virtualization vendors. In my view, such costs cannot be calculated with a general formula.
There is another thing you should consider. Why do hardware vendors such as Intel or IBM promote server virtualization? Why doesn't this technology scare them to death? If everyone could really save that much on hardware costs with server virtualization, server vendors would have a serious business problem. I suspect that vendors simply like the idea of selling more high-priced high-end servers. In fact, you can earn more by selling just one Ferrari, even if the ten Volkswagens together have the same price, because the margin on the Ferrari is higher.
In addition, even if you can reduce costs by replacing ten servers with one high-end server, you have to take into account that you also reduce redundancy this way. If your host goes down because of a hardware malfunction, everything stands still. Yes, you can add a second or a third server and work with cluster technology, but this will again raise hardware costs and add yet another layer of complexity. It will be quite difficult to save on hardware costs if you replace ten average servers with two or three high-end servers.
Articles in this series:
- Seven disadvantages of server virtualization
- Does server virtualization reduce costs? Part I - Hardware expenses
- Does server virtualization reduce costs? Part II - Power savings
- Does server virtualization reduce costs? Part III - Software and payroll costs
First, let me say how much I enjoy 4sysops; I find your contributions to the IT community both informative and helpful. I consult your blog on a daily basis.
When an IT manager has 10 end-of-life servers that are no longer under warranty and are hosting mission-critical applications, he is faced with a decision: does he replace the 10 servers with 10 newer servers, or with one large server capable of virtualizing all of them (or two for high availability)?
Another scenario is the need to stand up a new 10-server farm; again, does he go with 10 servers or with one?
Let's assume he goes with one server, that he has a four-hour support agreement with the hardware vendor, and that he has good backups, so he can get a failed part quickly and return to operational status in 4 to 6 hours, which is acceptable to his business.
The one-server configuration costs $10,000 (Dell PowerEdge 1950 with 32 GB RAM) plus $12,000 for a Windows Server 2008 Datacenter four-processor license ($22,000 total).
OK, so what would be the cost of 10 servers? Normally you would go with redundant power, network, fans, etc., so you're probably looking at $2,500 to $5,000 per server (perhaps overkill, as you stated). Add $1,000 per server for a Windows Server 2008 Standard license, and this would cost between $35,000 and $60,000.
Therefore, the one virtualization host costs $13,000 to $38,000 less than purchasing 10 individual servers.
When you add the power savings, reduced footprint, and improved backup capabilities (snapshot a VHD instead of backing up each guest OS internally), this really is a no-brainer decision.
If you want to add shared storage and a second server for redundancy in your virtual environment, the cost would be about the same as 10 servers at $2,500 each, or as much as $23,000 less than 10 servers at $5,000 each.
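Joe's comparison boils down to simple arithmetic. A back-of-the-envelope sketch using the figures from his comment (2008-era list prices, purely illustrative, not current quotes):

```python
# Illustrative version of Joe's cost comparison; all prices are the
# 2008-era figures quoted in the comment, not real quotes.

# One virtualization host: Dell PowerEdge 1950 with 32 GB RAM,
# plus a Windows Server 2008 Datacenter four-processor license.
host_hw = 10_000
host_license = 12_000
one_host_total = host_hw + host_license  # $22,000

# Ten physical servers with redundant power/network/fans,
# each with a Windows Server 2008 Standard license.
per_server_low, per_server_high = 2_500, 5_000
std_license = 1_000
ten_total_low = 10 * (per_server_low + std_license)    # $35,000
ten_total_high = 10 * (per_server_high + std_license)  # $60,000

# Claimed savings of the single virtualization host.
savings_low = ten_total_low - one_host_total    # $13,000
savings_high = ten_total_high - one_host_total  # $38,000
print(one_host_total, ten_total_low, ten_total_high, savings_low, savings_high)
```

The arithmetic checks out under these assumptions; as the article argues, the conclusion depends entirely on which per-server price you plug in.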
I agree with Joe. Virtualizing the ten servers offers the IT manager much more flexibility in managing those hosts as well as the "services" they provide. Using a single console to monitor the real-time health of each VM, such as Virtual Machine Manager 2005/2008 beta, extends manageability. VMM can also rate other servers on the network as candidates to host more VMs. Virtualizing (P2V) legacy servers as they reach end of life is very efficient and easy to do. Those hosts will actually see better performance as guests on sophisticated servers with high-speed redundant NICs, multiple spindles, and multi-core procs. This technology also extends your development environments, as you can provision multi-server farms in a matter of minutes, as opposed to the old days, when you would budget for your new server, order it, wait for delivery, stand it up, and finally host your one dedicated application. VSS snapshot technology also allows us to take multiple backups that can be reverted rather easily as we push through development.
I agree with both Joe and Rob, although Michael has some good points.
I believe the larger picture (other than simple cost) is in favor of virtualization, specifically fast provisioning and flexibility.
I work for one of the largest Fortune 1000 software companies; we have a massive R&D Virtual Center farm. I usually get two or three calls per day asking me to emulate an environment for a customer's problem or a version compatibility test, which might include clustering, special configuration, etc.
Last week I had four or five calls for this kind of complex environment, each taking me 5 to 6 hours to create and test. Show me how you can do this with physical servers. It might take days, if not weeks, and require special hardware purchases and large-scale preparations.
Joe, thanks for the compliment. I hope you don’t mind if I disagree with you, anyway. 😉
The kind of calculation you offered is typically found on the websites of server virtualization vendors. The main point of my article is that it is not possible to calculate costs if no specific information about the applications at stake is available. This is why I can easily refute your calculation: I can get a Dell PowerEdge R200 with 3 GB RAM for about $1,500, including Windows Server. That makes $15,000 for ten servers. You say you pay $22,000 for one high-end server with 32 GB. Since you need at least two such servers to get comparable redundancy, you have to pay $44,000. So your server costs are almost three times as high as mine, and we haven't even started talking about the cost of the virtualization software. You see how easy it is to manipulate numbers to prove one's point. This is exactly what server and virtualization vendors are doing. We shouldn't take their bait so easily.
Rob, I agree with everything you said. I am an absolute supporter of server virtualization. However, this article was only about hardware costs.
Sharon, my organization is much, much smaller, but compatibility testing is probably the most important benefit we get from server virtualization. I absolutely agree that provisioning speed and flexibility are key arguments for server virtualization. Usually, though, improved flexibility has its price.
Michael, I am not sure you have the right pricing for the Dell PowerEdge R200. On Dell's website, the starting price of an R200 is $2,000 with no OS and less than a gigabyte of memory. If you are assuming discounting, then you should do the same for the larger server, and probably also assume that you could get a bigger discount on the larger system, as margins tend to be higher, as you mentioned. I think you should use a more common server configuration with two CPUs and 4 GB of RAM.
I agree with you that people should carefully evaluate how much consolidation they can get from server virtualization. Not all hypervisors are born equal or allow for the same consolidation ratios. The other aspect to take into account is the type of application you are virtualizing, as workloads may limit the number of VMs you can put on a server.
Alberto, I just checked it again. I can get a Dell R200 with 3 GB RAM for $1,547, including Windows Server 2008 Standard. I chose 3 GB RAM because Joe picked a 32 GB machine in his example for a server that is supposed to replace ten average servers. But I think you missed the point of my argument. In my view, it doesn't make sense at all to talk about server prices without knowing the particular situation. This is the trick vendors use to make you believe you are saving hardware costs. It is very easy to fantasize about server costs in such a theoretical manner; you can get any result you want this way.