Virtualisation becoming the norm

A lot of work is being done on virtualisation, especially in the Open Source community. Which is probably somewhat ironic, since the first really viable implementations I saw were proprietary products like VMware and Virtual PC. A more recent arrival is Qemu, which is Open Source and has become quite widely used. Bochs does something related but different: emulation rather than virtualisation.

That, combined with servers getting more and more powerful, has made virtualisation not only desirable but also far more feasible. We use Xen a lot, which suits our needs well. Other people we know use Linux-VServer, OpenVZ or its commercial counterpart, Virtuozzo.
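To give an idea of how little it takes to define a guest under Xen: a domain is described in a small configuration file, which Xen's xm tool reads as Python syntax. The snippet below is only a sketch; the kernel path, disk volume and names are made up for illustration and will differ per setup.

    # /etc/xen/staging.cfg -- hypothetical example of a Xen domU config
    kernel = "/boot/vmlinuz-2.6-xen"        # Xen-enabled guest kernel (path is an assumption)
    name   = "staging"                      # name of the virtual machine
    memory = 512                            # RAM for this guest, in MB
    vcpus  = 1                              # number of virtual CPUs
    vif    = ["bridge=xenbr0"]              # one network interface on the xenbr0 bridge
    disk   = ["phy:vg0/staging,xvda,w"]     # LVM volume exposed to the guest as writable xvda
    root   = "/dev/xvda ro"                 # root device passed to the guest kernel

Booting the guest is then a matter of running xm create staging.cfg on the host.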

Virtualisation is what we use to create multiple virtual machines on one server. Ten years ago, this would have been weird, since you needed all the computing power and memory you could get just to run the things you wanted to run. But ten years before that, when mainframes were still being used heavily, this was commonplace! Not that any user would notice, mind you, and there probably wasn't any virtualisation as we know it today, but the fact remains that one machine with relatively large performance and capacity was used for several tasks in parallel (it was called time-sharing back then). And that's the essence of virtualisation: doing lots of stuff at the same time, on one machine.

Nowadays, when you buy a modern server, you'll get an insanely fast dual-core processor and more bytes of memory than the age of the Universe in years... Okay, maybe not quite that much memory (which would be 14+GB, for those interested, the Universe being roughly 14 billion years old), but 2GB is standard these days. And we prefer 4GB, honestly. What on earth would we, being a non-space-agency, non-mathematical-institution, non-financial-institution, do with so much capacity? Well, divide it into several servers, of course! And for that, we use virtualisation.

Having several virtual machines brings a lot of benefits, both security-wise and practically. For instance, one of our customers runs a production environment (which is critical to their business) and a staging environment (which is non-critical). We gave them two virtual machines, so they can upgrade and test their code in one without disturbing anything in the production environment. All on one single hardware box. We can 'reboot' or even reinstall the staging area without having to take down their production server, so it really feels like there are two machines.
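This kind of management can even be scripted. The sketch below uses the libvirt Python bindings (libvirt is just one way to talk to Xen, not something we rely on above, and the 'staging' domain name is our hypothetical example): it reboots the staging guest while the production guest on the same box keeps running untouched.

    # Sketch: reboot only the 'staging' guest on a Xen host via libvirt.
    # Assumes the libvirt Python bindings are installed and that the
    # domain name 'staging' matches the config shown earlier.
    import libvirt

    conn = libvirt.open("xen:///")          # connect to the local Xen hypervisor
    staging = conn.lookupByName("staging")  # look up the staging domain by name
    staging.reboot(0)                       # reboot it; production is unaffected
    conn.close()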

When thinking about security, there are several benefits. First of all, you don't have to run everything you need in one environment, which makes it more difficult to hack into a specific service. Think, for example, of the Apache website defacement of the 3rd of May, 2000. As you can read in their whitepaper, the attackers used flaws in (the configuration of) other services running on the Apache.org server to break into the webserver. This goes to show that even when you think your own program is very secure, the environment it runs in needs to be secure too! One relatively easy way to make a machine more secure is to run only the services that are strictly necessary: the fewer services running on your server, the smaller the chance of a break-in. So if we spread the services out over multiple virtual machines, the chance of any one of those machines getting hacked may stay the same, but the attackers are far better contained. Getting from one virtual machine to another is (about) as difficult as breaking into the other machine directly! Great news!

So even though there are alternatives to all the things I've described here (chroot or FreeBSD's jail, for instance), virtualisation is a really nice and clean solution. Great stuff, really interesting.

No idea why I wrote this post, though.
