By now, most people in the technology field have had the good fortune to work with some sort of virtualization technology, such as Hyper-V, Xen, KVM, SolusVM, or OpenVZ. Whether you’re a Linux or Windows admin, one thing you can be certain of is that virtualization is making your job easier and saving your company money.
In my opinion, one of the clearest and most immediately realized benefits of implementing virtualization in your environment is the cost savings. For the longest time, business demands would increase and the only practical solution was to deploy more servers. It’s long been understood that while an increased number of physical machines delivers the necessary processing power, those machines also consume more power and produce more heat, which, in turn, requires additional cooling. In most deployments you can expect cooling costs to track energy costs at roughly a 1:1 ratio (for every watt a server draws, budget about another watt to cool it), so you can see how the cost quickly adds up. There’s also the glaring issue of the space consumed by the physical footprint of several cabinets. Long story short, a company faces the increased burdens of wasted space, higher energy costs, and the large administrative staff needed to maintain all of this equipment. And let’s not forget the cost of buying multiple physical machines in the first place.
After some time, it dawned on people that processing power and storage capacity had scaled almost exponentially, while the computing demands of a typical business had remained about the same. Simply put, there were suddenly a lot of underutilized machines sitting around wasting both space and power.
Here’s where virtualization comes to the rescue.
In most cases, a server with a large pool of resources (centralized, highly redundant storage; multiple processors with multiple cores; and a large amount of memory) is deployed as a hypervisor host. The hypervisor itself is a very thin, low-overhead operating system layer capable of creating, deploying, and managing “guest” operating systems, or virtual machines. With a lot of clever programming and design, certain hypervisors can allocate more resources to virtual machines as needed, scale those resources back down when they aren’t in immediate demand, and generally do a lot of clever balancing behind the scenes. This also makes server management and administration much easier from a technical standpoint. The traditional method of monitoring a server’s performance amounted to “keeping an eye on it”: a reactive approach to troubleshooting in which you waited until the server started experiencing issues, then dove in head first to diagnose the problem and hoped a solution presented itself.
But with a properly executed hypervisor deployment, the hypervisor itself consolidates much of the management, monitoring, and notification tooling into one package. Traditional monitoring agents eat up resources on your machine, increase your system’s attack surface, and are typically difficult to use; by contrast, all of the statistics about a virtual machine are gathered in the context of the hypervisor and place no load on the guest itself.
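On a KVM host managed through libvirt, for instance, that hypervisor-side view is available from the host’s command line; the guest runs no agent at all. A minimal sketch (the domain name `webserver01` is hypothetical and would match whatever your deployment uses):

```shell
# Query guest statistics from the hypervisor side; nothing runs inside
# the guest itself. The domain name "webserver01" is illustrative.
virsh domstats webserver01    # CPU, memory, block, and network counters
virsh cpu-stats webserver01   # per-vCPU time accounting on the host
```

Because these counters are maintained by the hypervisor anyway, reading them costs the guest nothing, which is exactly the advantage described above.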
Additionally, scaling a virtual machine’s resources is often no more complicated than a few mouse clicks. Compared to the traditional method of ordering the parts, unracking the server, installing the new hardware, burning it in, and so on, this is where virtual machines really shine.
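The same resize can be scripted as easily as it can be clicked. A hedged sketch on a libvirt/KVM host (the domain name and sizes are illustrative, and the new values must fit within the maximums defined in the domain’s configuration):

```shell
# Check the guest's current allocation.
virsh dominfo webserver01

# Grow the running guest to 4 vCPUs and 8 GiB of memory.
# (Illustrative values; capped by the domain's configured maximums.)
virsh setvcpus webserver01 4 --live
virsh setmem webserver01 8G --live
```

No parts ordered, no server unracked; the change takes effect while the guest keeps running.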
And of course, no discussion of virtual machines would be complete without mentioning backups. Staying on top of scheduled updates used to be the bane of every System Administrator’s existence. Now you simply take a snapshot, or similar backup, of the virtual machine before performing any sensitive work, and restore the image in minutes if necessary. As anyone who’s ever had the misfortune of performing a bare-metal disaster recovery can tell you, this is a very convenient feature to have at your disposal.
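That snapshot-before-maintenance workflow can be sketched with libvirt’s CLI as well (domain and snapshot names here are hypothetical):

```shell
# Take a snapshot before patching the guest.
virsh snapshot-create-as webserver01 pre-patch "before kernel update"

# ...perform the maintenance. If it goes badly, roll back in minutes:
virsh snapshot-revert webserver01 pre-patch

# Once the change has been verified, clean up the snapshot.
virsh snapshot-delete webserver01 pre-patch
```

Other hypervisors expose the same create/revert/delete cycle under their own tooling; the point is that the rollback path exists before the risky work begins.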