After finally getting the go-ahead to virtualize a small business infrastructure, it may seem that the hard part is actually making it all happen. In many cases, however, the hardest part is assembling the budget to acquire all the necessary hardware and software; actually making the switch is the easier task.
The most important part of migrating from a physical to a virtual infrastructure is making sure that you have all the pieces in place before you move a single server, before you put anything into production, and even before you start testing. Much like laying out all the tools necessary to put together a table from IKEA makes the task easier, ensuring that you have everything you need before you embark on this journey will make the process smoother and quicker, and will greatly improve the quality of the finished product.
One key consideration is exactly which features your virtualization licenses include. For instance, you may have licenses for live virtual server migrations between hosts, but not for automated load balancing or high availability, or you may have to forego advanced memory optimization and similar features.
In the former case, you’ll need to balance virtual servers across the physical hosts manually, and manually restart those servers on a surviving host should a physical host fail. In the latter case, you’ll need more memory per physical host than you would otherwise require, because advanced memory sharing isn’t available.
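To put the memory point in rough numbers, here is a minimal Python sketch of the sizing math; the VM sizes, hypervisor overhead, and savings percentages are illustrative assumptions, not vendor figures.

# Rough host-memory sizing; every figure below is an illustrative assumption.
vm_memory_gb = [8, 8, 4, 4, 16, 2]   # planned RAM for each virtual server
hypervisor_overhead_gb = 4           # assumed RAM reserved for the hypervisor itself
headroom = 0.20                      # assumed 20% safety margin for spikes and failover

total_vm_gb = sum(vm_memory_gb)

# Without advanced memory sharing, every VM's full allocation must fit in physical RAM.
no_sharing_gb = (total_vm_gb + hypervisor_overhead_gb) * (1 + headroom)

# With page sharing and ballooning available, some memory is typically reclaimed;
# the 25% figure here is only a placeholder for planning comparisons.
assumed_sharing_savings = 0.25
with_sharing_gb = (total_vm_gb * (1 - assumed_sharing_savings) + hypervisor_overhead_gb) * (1 + headroom)

print(f"RAM needed without memory sharing: {no_sharing_gb:.0f} GB")
print(f"RAM needed with assumed sharing:   {with_sharing_gb:.0f} GB")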
There are several other examples, but these are the most common. In smaller infrastructures, the lack of these features isn’t as critical as it might be otherwise, due to the smaller number of virtual servers and the general lack of unbalanced or highly-variable workloads. Either way, it’s important to understand what you have in your toolkit before you start.
Building the Network
It’s critically important that you have adequate physical server horsepower, Ethernet switching, and storage available. There are plenty of small, inexpensive storage devices on the market that can handle a virtualized workload, and multi-core servers are very reasonably priced.
If at all possible, make sure that you have a reasonable level of redundancy in whatever solution you choose, such as redundant power supplies and protective RAID levels, with a minimum of RAID5. If the infrastructure is small enough that shared storage isn’t planned, it’s absolutely critical that the physical host server or servers be outfitted with battery-backed RAID controllers, and ideally a RAID6 array internal to the server.
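To make the RAID trade-off concrete, this small Python sketch compares usable capacity and worst-case drive-failure tolerance for a hypothetical eight-disk array; the disk count and size are assumptions to swap for your own.

# Usable capacity and worst-case fault tolerance for common RAID levels (illustrative only).
disks = 8            # hypothetical number of drives in the array
disk_size_gb = 1000  # hypothetical capacity of each drive

layouts = {
    "RAID5":  {"usable_gb": (disks - 1) * disk_size_gb, "failures_tolerated": 1},
    "RAID6":  {"usable_gb": (disks - 2) * disk_size_gb, "failures_tolerated": 2},
    "RAID10": {"usable_gb": (disks // 2) * disk_size_gb, "failures_tolerated": 1},  # worst case: both halves of one mirror
}

for level, info in layouts.items():
    print(f"{level}: {info['usable_gb']} GB usable, survives {info['failures_tolerated']} drive failure(s) in the worst case")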
Also note that if you do forego shared storage, you won’t be able to take advantage of features such as live migrations, nor will you be able to quickly boot downed virtual servers that reside on the local storage of a failed physical host.
On the Ethernet switching side, ensure that you have a switch capable of link aggregation, and if you’re planning on using iSCSI storage, check the switch’s iSCSI support, specifically its support for jumbo frames. Not all gigabit switches are created equal, and some can hamper iSCSI performance. Seek out switches that explicitly state iSCSI compatibility; these should always include jumbo frame support.
Similarly, configure the storage array with multiple links to the network so that it can survive the failure of any single link.
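Once the storage network is cabled, one quick sanity check of end-to-end jumbo frame support is a large, unfragmented ping from a host to the array. The Python sketch below simply wraps the ping utility; the -M do and -s flags are Linux-specific, and 192.168.100.10 is a placeholder for your array’s address on the storage network.

import subprocess

# Send a ping with the "don't fragment" bit set and a payload sized for a 9000-byte MTU.
# 8972 = 9000 - 20 (IP header) - 8 (ICMP header). These flags are Linux ping syntax.
target = "192.168.100.10"   # placeholder: the iSCSI array on the storage network
payload = 8972

result = subprocess.run(
    ["ping", "-M", "do", "-s", str(payload), "-c", "3", target],
    capture_output=True, text=True,
)

if result.returncode == 0:
    print(f"Jumbo frames appear to pass end to end to {target}")
else:
    print("Large unfragmented ping failed; check MTU settings on the NICs, switch ports, and array")
    print(result.stdout or result.stderr)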
Once this network is built, you’re ready to install the virtualization software on the physical hosts, and link to your shared storage, if applicable.
Handling the Virtualization Migration
Every infrastructure is different, so there are no ready-made plans to move servers from the physical to virtual realm, but there are general rules that you can follow.
Physical-to-virtual (P2V) migration tools can copy a running physical server into a virtual machine. Some tools are better than others, but many servers can be successfully migrated in this manner, saving time and hassle up front. The exceptions are usually servers running niche software, servers requiring hardware keys, or servers with licenses bound to specific physical elements such as Ethernet MAC addresses. In some cases, using P2V on these servers is more problematic than simply rebuilding them as virtual servers, but there’s no clear-cut way to tell without trying.
The good news is that in most cases you can attempt a P2V migration of a server without causing problems on the physical server. If the migration fails, the physical server can simply be powered back on without data loss.
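Before attempting a P2V, it also helps to record the server’s identity so you can spot likely trouble, such as licenses tied to a MAC address or hostname. Here is a minimal, standard-library-only Python sketch; the license_risks field is a manual note, and uuid.getnode() reports only one of the host’s MAC addresses.

import json
import socket
import uuid

# Capture a basic identity snapshot of the server you plan to P2V.
hostname = socket.gethostname()
try:
    fqdn, _, addresses = socket.gethostbyname_ex(hostname)
except socket.gaierror:
    fqdn, addresses = hostname, []

# uuid.getnode() returns a 48-bit MAC address as an integer; format it as aa:bb:cc:dd:ee:ff.
mac = ":".join(f"{(uuid.getnode() >> shift) & 0xff:02x}" for shift in range(40, -8, -8))

snapshot = {
    "hostname": hostname,
    "fqdn": fqdn,
    "ip_addresses": addresses,
    "primary_mac": mac,
    # Manual field: note any hardware dongles or MAC/hostname-bound licenses you know about.
    "license_risks": "TODO",
}

print(json.dumps(snapshot, indent=2))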
That said, before any migrations take place, make sure to test your backups. Always have a backup plan in place should something go awry.
There are servers that shouldn’t be migrated using P2V tools. A common example is a Windows domain controller. It’s far simpler and less problematic to build a fresh domain controller on a virtual server, bring it up as a full domain controller, and then retire the physical domain controllers once all replication has taken place and the virtual servers are functioning normally.
It’s also a good idea to retain a separate physical server as a domain controller, so that not all of your domain controllers are virtualized. This isn’t a necessity, but if you lack the advanced high-availability features, it provides a substantial safety net down the road.
Other servers can either be migrated using P2V tools or simply rebuilt as virtual servers. In some cases, rebuilding the servers is a great way to clean out the cruft left behind on older physical servers, providing a clean slate as you transition into the virtual world. Remember, using P2V to migrate a physical server is unlikely to fix any existing problems, and may actually make them worse. If in doubt, you can always attempt a P2V and keep rebuilding the server as a fallback option.
It’s important to keep track of IP addressing and of which servers are physical and which are virtual. When you’re doing a P2V, take steps to ensure that the physical server and its virtual doppelganger are never running simultaneously. The P2V process retains the entire presence of the physical server, including its name, domain membership, and IP addressing, so having both on the network at once will cause significant problems. The best approach is to power down the physical server and remove the power cables following the migration, and only then power up the new virtual server.
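A simple way to keep track of which machines are live during the transition is a small inventory with a duplicate-IP check. The Python sketch below uses hard-coded example entries; every name, address, and state is hypothetical, and in practice the same data might come from a spreadsheet export.

from collections import defaultdict

# Hypothetical migration inventory: one row per physical box or virtual machine.
inventory = [
    {"name": "files01", "kind": "physical", "ip": "10.0.0.21", "state": "powered off"},
    {"name": "files01", "kind": "virtual",  "ip": "10.0.0.21", "state": "running"},
    {"name": "print01", "kind": "physical", "ip": "10.0.0.22", "state": "running"},
    {"name": "print01", "kind": "virtual",  "ip": "10.0.0.22", "state": "running"},  # conflict
]

by_ip = defaultdict(list)
for entry in inventory:
    by_ip[entry["ip"]].append(entry)

for ip, entries in by_ip.items():
    running = [e for e in entries if e["state"] == "running"]
    if len(running) > 1:
        names = ", ".join(f"{e['name']} ({e['kind']})" for e in running)
        print(f"WARNING: {ip} is live on more than one machine: {names}")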
The process of converting a physical server infrastructure to a virtual infrastructure doesn’t have to happen overnight. In fact, it shouldn’t. You can start small by choosing one or two physical servers to convert, and let them run for a period of time as virtual servers in order to determine their viability. You can convert one or two servers per day, or per week–there’s generally no need to attempt a full conversion all at once.
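If it helps to formalize that pacing, here is a small Python sketch that spreads a server list across weekly conversion windows; the server names, the two-per-week pace, and the start date are all assumptions to adjust to your environment.

from datetime import date, timedelta

# Hypothetical list of servers to convert, roughly in order of increasing risk.
servers = ["print01", "files01", "intranet01", "app01", "sql01", "mail01"]
per_week = 2               # assumed pace: two conversions per weekly window
start = date(2025, 1, 6)   # placeholder start date for the first window

for week, i in enumerate(range(0, len(servers), per_week)):
    batch = servers[i:i + per_week]
    window = start + timedelta(weeks=week)
    print(f"Week of {window}: convert {', '.join(batch)}, then let them run and settle before the next batch")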
Combined Efforts
A combined approach, migrating some servers with P2V and rebuilding others from scratch, also allows you to start clean with the new solution where you need to, which might be a breath of fresh air if you’ve been battling various Windows gremlins in the physical world.
Finally, as you progress through the migration, take your time at certain points to regroup and ensure that you’re following your original plan. Also make sure that plan includes implementation and testing of backups for your newly-minted virtual infrastructure.
Once you’re done with all your conversions and rebuilds, you’ll most likely wonder how you ever lived without virtualization, and the effort and any uncertainty of the migration process will have faded away.