A while ago, I discussed the seven disadvantages of server virtualization. Cost was one factor that I did not cover in that post. Questioning whether one can reduce costs with server virtualization sounds a bit like blasphemy these days. You will hardly find an article about server virtualization that does not claim it is the best way to save costs in the data center. According to this view, server consolidation is the main benefit of virtualizing servers.
The arguments supporting this claim are so obvious that most IT managers do not dare question them. Replacing ten physical servers with just one seems to be the best cost-saving measure one can think of – and since the Green IT hype began, even CEOs talk about power saving in the data center. The calculation seems quite easy, too: ten horses require ten times as much hay as one horse. Every CEO can understand that. So does it really make sense to challenge the cost-saving axiom of server virtualization? I think it does.
There are four different considerations to take into account when it comes to calculating the cost of server virtualization: hardware costs, power costs, software costs, and payroll costs. I will cover the first factor today, and the others in the next articles in this series.
The claim that one server costs less than the ten servers it replaces is simply not always true. This is not surprising, because the same applies to other objects, such as cars: ten Volkswagens can be cheaper than one Ferrari. We have pretty much the same situation with server virtualization.
The price of a server is not proportional to its computational power. High-end servers are disproportionately more expensive than average servers. There are many reasons for this. For example, a 4 GB RAM module costs more than two 2 GB RAM modules, and a server that is supposed to host ten virtual servers obviously needs a lot of RAM. The same applies to most other server components: CPU power, storage, and so on.
Another key factor is that server vendors produce far more small servers than high-end machines; the smaller economies of scale drive up the prices of the latter. This means that you cannot always save on hardware costs if you replace ten servers with one server that is powerful enough to do the same job.
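The nonlinear pricing argument above can be sketched as a toy model. The base price and the exponent below are purely hypothetical assumptions chosen for illustration, not real vendor data:

```python
# Toy illustration of superlinear server pricing.
# All figures are made-up assumptions, not vendor quotes.

def price(capacity_units: int) -> float:
    """Hypothetical pricing model: cost per unit of capacity rises with size."""
    base_unit_price = 1000       # assumed price of a 1-unit "average" server
    exponent = 1.4               # >1 models disproportionate high-end pricing
    return base_unit_price * capacity_units ** exponent

ten_small = 10 * price(1)        # ten average servers
one_big = price(10)              # one server with ten times the capacity

print(f"Ten small servers: {ten_small:,.0f}")
print(f"One big server:    {one_big:,.0f}")
```

With these assumed numbers, the single high-end server comes out more expensive than the ten small ones it would replace – whenever the pricing exponent is above 1, consolidation alone does not guarantee lower hardware costs.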
The reason why many IT shops see cost savings when they move to server virtualization is that their previous physical servers were hopelessly oversized. It is certainly one of the advantages of server virtualization that hardware utilization is easier to fine-tune. However, the huge cost savings that people sometimes report often come about only because they did not pay much attention to server utilization before.
In fact, it is not necessary to host every backend application on its own physical server. The argument that dedicated servers avoid conflicts between applications can be countered with the observation that server virtualization can cause problems too, since it adds a new layer of complexity. In addition, let us not forget that virtualization always comes at the expense of performance, which in turn increases the required investment in server hardware.
Whether, and by how much, hardware costs can be reduced by means of server virtualization depends heavily on the server infrastructure of your organization. If you have already optimized hardware utilization by other means, then server virtualization will not help much in reducing hardware costs. One thing is for sure: I would not put my faith in the cost calculators of server virtualization vendors. In my view, such costs cannot be calculated with a general formula.
There is another thing you should consider. Why do hardware vendors such as Intel or IBM promote server virtualization? Why does this technology not scare them to death? I mean, if everyone could save so much in hardware costs with server virtualization, server vendors would have a serious business problem. I somehow suspect that vendors simply like the idea of selling more high-priced high-end servers. After all, you can earn more by selling one Ferrari than ten Volkswagens, even if they sell for the same total price.
In addition, even if you can reduce costs by replacing ten servers with one high-end server, you have to take into account that you also reduce redundancy this way. If your host goes down because of a hardware malfunction, everything stands still. Yes, you can add a second or a third server and work with cluster technology, but this will raise hardware costs again and add yet another layer of complexity. It will be quite difficult to save on hardware costs if you replace ten average servers with two or three high-end servers.
Articles in this series:
- Seven disadvantages of server virtualization
- Does server virtualization reduce costs? Part I – Hardware expenses
- Does server virtualization reduce costs? Part II – Power savings
- Does server virtualization reduce costs? Part III – Software and payroll costs