Virtualization

The Long Life and Slow Death of the Virtual Server

As we continue to move wholesale into a world where virtual servers are the rule, we're starting to see just how different this new environment is. Server farms are evolving in unexpected ways, creating situations we didn't encounter prior to the widespread adoption of virtualization. One of these oddities is the seemingly eternal server. How do you manage the lifecycle of a machine that never dies?

Back before we spun up VMs on a whim to handle whatever application or platform we needed, every deployment was painstaking and time-consuming. These servers would be carefully built by installing the OS from the ground up, tweaking the BIOS, installing drivers, and laying the applications or frameworks over all of the above. We would back up that server to tape and hope the server would reach hardware obsolescence before it broke down.

In either case, the server that replaced this physical server would almost certainly be different, and the notion of restoring the bare-metal backup on a new physical server often meant more work than just starting fresh on the new hardware. This was especially true for Windows servers. Starting anew was a good way to clear out the cruft of years of operation and begin again with a blank slate.

In the world of server virtualization, the day for the organic refresh never arrives. Virtual servers don't break down. They don't become obsolete. They simply keep going, while the physical hardware cycles underneath them throughout their existence. In fact, the only reason to rebuild on a new VM is if the OS vendor has stopped supporting that version and there are no more security updates to be had. Even then, you'll find a great many instances where that VM will continue to run forever or until it becomes compromised.

The island of misfit servers

Looking through a collection of VMs on a midsize virtualization farm built five or so years ago, we find a wide variety of operating systems, whether we like it or not. There's a bunch of Windows Server 2003 boxes hanging around, some Windows Server 2008 systems, a plethora of Linux boxes of vastly different lineages, and entire development frameworks that sit mostly idle but are required for update testing. More than a few Windows XP systems are sitting there for various reasons, and even one or two Windows NT boxes are supporting a long-deceased application that somehow hasn't been phased out.

How does this happen, you ask? Unless there's an extremely strict (and likely impossible) corporate policy on the maintenance and update of various OS versions, virtual server farm entropy is inevitable -- if it ain't broke, after all. When a new version of your chosen Linux distribution is released, do you immediately purchase upgrade licenses and go through each box, disturbing applications and services that would otherwise continue to run problem-free forever? When Windows Server 2012 is released, how long will it take you to properly test and confirm compatibility with all the applications humming along on Windows Server 2003 R2 or 2008?

In short, lacking a natural server life span, how vigilant should we be in keeping up with large-scale OS upgrades? Essentially, that's up to the OS vendors themselves. Unless you're willing to run with significant security risks, you're forced to toss that stalwart VM in favor of a new one, though there's no functional reason to do so. If you have a perfectly functional refrigerator, do you go buy a new one just because the manufacturer no longer makes that model? Probably not.

What of that upgrade path? As we learned with physical servers, it's generally best to start from scratch. However, mainstream Linux distributions are perfectly capable of solid upgrades in situ. That fact, coupled with the ability to snapshot the current state before the attempted upgrade, makes a live upgrade a fairly safe bet -- at worst, it will cost you some time if a failure occurs. Windows is not as well suited to this method, but it's still possible, and the backup plan of rolling back the snapshot is especially compelling in this instance.
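That rollback safety net is also easy to script. What follows is a minimal sketch, assuming a KVM/libvirt host with the libvirt Python bindings installed; the domain name "legacy-web01" and the snapshot details are hypothetical stand-ins, not anything from the original farm described here.

```python
# A minimal sketch, assuming a KVM/libvirt host with the libvirt-python
# bindings. "legacy-web01" is a hypothetical VM name used for illustration.
import libvirt

SNAPSHOT_XML = """
<domainsnapshot>
  <name>pre-upgrade</name>
  <description>Safety snapshot taken before an in-place OS upgrade</description>
</domainsnapshot>
"""

conn = libvirt.open("qemu:///system")      # connect to the local hypervisor
dom = conn.lookupByName("legacy-web01")    # the VM about to be upgraded

# Take the snapshot; if the upgrade goes badly, revert instead of rebuilding.
snap = dom.snapshotCreateXML(SNAPSHOT_XML, 0)
print("Snapshot '%s' created; safe to attempt the upgrade." % snap.getName())

# To roll back after a failed upgrade, revert to the snapshot:
# dom.revertToSnapshot(snap)

conn.close()
```

The same pattern applies to any hypervisor with a snapshot API: take the checkpoint, run the in-place upgrade, and either delete the snapshot on success or revert on failure.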

Damned if you do

But as we know, the more you upgrade an OS on the same server instance (presuming that an upgrade is even supported), the crustier that instance gets. Odd things can happen. Errors caused by orphaned packages or ancient changes will start to crop up in future updates. Future application installations or upgrades may become problematic due to the vagaries of upgrading an instance from an OS released five or even eight years ago.

In short, upgrading becomes more of a gamble than just leaving things alone. In a time when IT budgets are lean and staffs are perpetually shorthanded, the planning and execution of a fresh install and service migration for all of these VMs becomes a Herculean task. In many cases, there they sit, running perfectly, functioning without issue, yet orphaned.

Clearly there are different levels of concern in these instances. Public-facing servers need more attention than purely internal boxes, but the latter should not be completely neglected. If you have an internal Linux VM instance that runs a handful of Web applications, security updates to fix BIND problems aren't a big concern. But if you run a public DNS server and a significant security issue rears its head, you better have those bases covered.

Otherwise, it comes down to budget and vigilance. If the budget for both time and licensing exists to maintain a strict upgrade regimen, then there will be plenty of wasted effort, but the VMs will be fresh. If the budget isn't there -- a more likely scenario -- then you may find yourself watching over a bunch of eternal servers.

This story, "The long life and slow death of the virtual server," was originally published at InfoWorld.com. Read more of Paul Venezia's The Deep End blog at InfoWorld.com.
