First Look: VMware vSphere 4.1 Keeps the Virtualization Crown

VMware has never been a company to rest on its laurels, and the release of VMware vSphere 4.1 demonstrates the company's continued effort to extend the capabilities of virtualization. There are still a few bumps here and there, such as quirky HA configuration issues, but from my preliminary testing, it seems clear that the best virtualization solution available today just got better.

While significant features have been added to the new version, there is no sea change in core functionality or administrative interfaces like the one we saw with the initial move to VMware vSphere 4.0. VMware has built on that base and added several enterprise-level features that will likely prove extremely handy to midrange and large-scale VMware-based infrastructures.

[ Also on InfoWorld: The six-core Intel Westmere CPU and server blade systems from Dell, HP, and IBM are primed for virtualization. See "InfoWorld review: Intel's Westmere struts its stuff" and "Blade server review: Dell, HP, and IBM battle for the virtual data center." ]

Scaling vSphere hosts, clusters, and data centers

First off, some numbers: VMware vSphere 4.1 can support up to 3,000 virtual machines per cluster and 1,000 hosts per vCenter server, both roughly three times the limits in vSphere 4.0. The number of virtual machines per data center now maxes out at 5,000, which is twice the previous limit. Those are some big numbers that affect only the big deployments, but they will greatly simplify administration at that level.

On top of the augmented scalability, VMware has added a pile of new features aimed at simplifying VM management. Enhancements to the vCLI command-line interface add new virtual machine controls, while improvements to the host profile capability permit finer-grained control over ESX servers. Host lockdown mode and host power management have also been improved. Virtual machine serial ports can now be accessed over the network, and you can leverage Active Directory to control authentication to the ESX hosts themselves, rather than relying on local authentication or other methods.
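To give a sense of what the Active Directory integration looks like programmatically, here's a minimal sketch using the vSphere API, shown with the pyVmomi Python bindings. The vCenter address, host name, domain, and credentials are all placeholders, and this is an illustration of the relevant API call rather than VMware's recommended procedure.

```python
# Hedged illustration: joining an ESX host to an Active Directory domain
# through the vSphere API, shown with the pyVmomi Python bindings.
# The vCenter address, host name, domain, and credentials are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator", pwd="secret")
try:
    # Locate the target host in the vCenter inventory by DNS name.
    host = si.content.searchIndex.FindByDnsName(
        dnsName="esx01.example.com", vmSearch=False)

    # The Active Directory store hangs off the host's authentication manager.
    auth_mgr = host.configManager.authenticationManager
    ad_store = next(s for s in auth_mgr.supportedStore
                    if isinstance(s, vim.host.ActiveDirectoryAuthentication))

    # Join the domain; AD accounts can then authenticate to the host directly.
    WaitForTask(ad_store.JoinDomain_Task(domainName="corp.example.com",
                                         userName="joinacct",
                                         password="joinpw"))
finally:
    Disconnect(si)
```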

Plenty of other new features in 4.1 should have an immediate impact on VMware deployments. There's a new method of memory management that favors compression over swapping during low-memory situations. Someone figured out it's faster to compress memory pages than to swap those pages to disk. This carries some processing overhead, naturally, but with today's 6-, 8-, and 12-core CPUs, overhead is much less of an issue. By using memory compression, you can squeeze even more VMs onto a single host.
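As a back-of-the-envelope illustration of that trade-off (not VMware's implementation), the toy Python snippet below times compressing a 4KB "page" in memory against flushing the same page to disk; on most systems the in-memory compression path wins by a wide margin.

```python
# Toy illustration only: compare the time to compress a memory page in RAM
# against the time to write it out to disk, the trade-off behind vSphere 4.1's
# memory compression. Results will vary widely by system.
import os, time, zlib

page = os.urandom(2048) + b"\x00" * 2048   # a 4KB "page", half compressible

start = time.perf_counter()
compressed = zlib.compress(page)
compress_time = time.perf_counter() - start

start = time.perf_counter()
with open("swapfile.tmp", "wb") as f:
    f.write(page)
    f.flush()
    os.fsync(f.fileno())                   # force the write to reach the disk
swap_time = time.perf_counter() - start
os.remove("swapfile.tmp")

print(f"compress: {compress_time*1e6:.0f} us ({len(compressed)} bytes), "
      f"swap to disk: {swap_time*1e6:.0f} us")
```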

Also, vSphere 4.1 boasts vMotion speed improvements, enhancements to Distributed Resource Scheduler's VM affinity rules, and updates to the Enhanced vMotion Compatibility processor support, which now encompasses more CPUs. I can attest to the vMotion speedup, as I was able to conduct a wide variety of vMotion tasks that were faster on vSphere 4.1 than on vSphere 4.0.

The new host affinity rules in DRS might not be useful to everyone, but the ability to create rules governing which hosts a given virtual machine can be migrated to (or not to, as the case may be) can help in situations where not every host in a cluster is identical or connected to the same networks. For instance, if only a few hosts have connections to a DMZ network, you can create rules that ensure DMZ-connected virtual machines migrate only to those hosts.
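For the curious, here's a hedged pyVmomi sketch of what creating such a rule might look like through the vSphere API: a "DMZ VMs" group pinned to a "DMZ hosts" group. The cluster object, group names, and host and VM lists are placeholders for illustration.

```python
# Hedged sketch: creating a DRS VM-to-host affinity rule with pyVmomi so that
# VMs in a "DMZ VMs" group run only on hosts in a "DMZ hosts" group.
# The cluster, host list, and VM list are placeholders.
from pyVmomi import vim

def add_dmz_affinity_rule(cluster, dmz_hosts, dmz_vms):
    host_group = vim.cluster.HostGroup(name="DMZ hosts", host=dmz_hosts)
    vm_group = vim.cluster.VmGroup(name="DMZ VMs", vm=dmz_vms)
    rule = vim.cluster.VmHostRuleInfo(
        name="DMZ VMs stay on DMZ hosts",
        enabled=True,
        mandatory=True,                       # "must run on", not just "should"
        vmGroupName="DMZ VMs",
        affineHostGroupName="DMZ hosts")

    spec = vim.cluster.ConfigSpecEx(
        groupSpec=[vim.cluster.GroupSpec(info=host_group, operation="add"),
                   vim.cluster.GroupSpec(info=vm_group, operation="add")],
        rulesSpec=[vim.cluster.RuleSpec(info=rule, operation="add")])

    # Apply the change to the cluster; modify=True merges with existing config.
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)
```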

There have also been improvements in USB device mapping. It's now possible to map a USB device to a virtual machine and maintain that mapping even through a vMotion of the virtual machine. This is especially important for applications that require USB hardware license keys to operate.

Additionally, for those working with Intel's Nehalem-EX 8-core server processors, vSphere 4.1 officially supports that platform.

Network and storage I/O control

One of the main thrusts of vSphere 4.1 is two new I/O control frameworks. Storage I/O control is essentially QoS for storage, based on rules assigned to virtual machines. If there is congestion present on a storage link, higher-priority virtual machines will be given a larger share of the pipe than lower-priority virtual machines. While it's never a good idea to operate with a consistently congested storage pathway, this feature can ensure that critical virtual machines aren't choked during high-traffic periods or unexpected traffic surges.
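Under the covers, those priorities are expressed as per-disk shares on each virtual machine. The sketch below, again using the pyVmomi Python bindings, shows roughly how a VM's first virtual disk could be given a larger share; the VM object and share value are placeholders, and Storage I/O Control also has to be enabled on the datastore itself.

```python
# Hedged sketch (pyVmomi): raise the storage I/O shares on a VM's first
# virtual disk. Storage I/O Control uses these weights only when the
# datastore is congested. The VM object and share value are placeholders.
from pyVmomi import vim

def set_disk_io_shares(vm, shares=2000):
    for device in vm.config.hardware.device:
        if isinstance(device, vim.vm.device.VirtualDisk):
            # Assign a custom share value to this disk.
            device.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
                shares=vim.SharesInfo(level="custom", shares=shares))
            change = vim.vm.device.VirtualDeviceSpec(operation="edit",
                                                     device=device)
            spec = vim.vm.ConfigSpec(deviceChange=[change])
            # Apply the change; returns a task that can be waited on.
            return vm.ReconfigVM_Task(spec=spec)
```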

In a similar fashion, network I/O control can be used to dictate bandwidth allotments to particular virtual machines when a network link is at or near capacity. There are some server hardware offerings, such as HP's Virtual Connect, that offer similar functionality on the switching side, but this feature is now available within vSphere proper. It's really designed for high-density hosts with 10G links, but can be leveraged at just about any level.
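Conceptually, both I/O control features follow the same proportional-share model: when a link is saturated, each virtual machine receives bandwidth in proportion to its assigned shares. A toy calculation, with made-up VM names and share values, illustrates the arithmetic.

```python
# Toy illustration of the proportional-share model behind the I/O controls:
# under congestion, each workload gets bandwidth in proportion to its shares.
# VM names and share values are made up for the example.
def allocate_bandwidth(link_gbps, shares_by_vm):
    total = sum(shares_by_vm.values())
    return {vm: link_gbps * s / total for vm, s in shares_by_vm.items()}

# A congested 10G uplink shared by three VMs with high/normal/low priorities.
print(allocate_bandwidth(10, {"db-prod": 2000, "web": 1000, "test": 500}))
# roughly {'db-prod': 5.7, 'web': 2.9, 'test': 1.4} (Gbps)
```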

There are other enhancements at the host level too, such as support for iSCSI offloading NICs from Broadcom, NFS performance enhancements, and boot-from-SAN support for ESXi over iSCSI, FCoE, and Fibre Channel.

A few new features are found in the HA and DRS functions, mostly providing tighter integration with FT (Fault Tolerance) features. Virtual machines configured for FT can now play nice with DRS, for instance, allowing for load balancing of virtual machines that also require fault tolerance. In addition, Windows clustering services can now be integrated with VMware's HA functions, ostensibly providing a deeper level of failover functionality in Windows environments.

In the lab with vSphere 4.1

I tested a vSphere 4.1 release candidate on a variety of boxes ranging from a new Dell R810 2U server running two Intel Nehalem-EX CPUs to an old Sun X4150 1U server running two Intel E5440 CPUs, all linked to a Dell EqualLogic 3800XV iSCSI SAN array and a Snap Server NAS.

As with previous VMware clients, you'll run into some trouble trying to access older versions of vCenter with the newer client. This can be a problem in environments that are migrating between different versions or that have multiple versions running in production. A new twist is that client downloads are no longer available from the ESX hosts, but from a VMware-hosted client distribution site. Otherwise, the client installation on 64-bit Windows 7 was normal.

Little has changed in the vSphere client since the last revision, although error reporting seems better than in earlier incarnations. Previous error reporting was extremely opaque, which was aggravating at best when dealing with problems within the infrastructure. The error reports in vSphere 4.1 provide more data, which should assist in troubleshooting.

I set up a cluster with Enhanced vMotion Compatibility, and vMotions across the disparate CPUs in the lab servers worked well. vMotions are definitely faster in vSphere 4.1, but not amazingly so. A rough estimate might be a 25 or 30 percent improvement in some situations, but the speed of any vMotion is highly dependent on the host network I/O speeds, as well as storage speeds if a storage vMotion is attempted, and the load on the virtual machine itself.

I did run into a few errors related to HA configuration, which shouldn't come as a big surprise. HA can be relatively cranky to operate on occasion.

I had time to work with the storage and network I/O control features and found them simple enough. These features may not be necessary for typical midsize implementations, but for larger and more I/O-intensive workloads, they will make a significant difference in the day-to-day functioning of the overall environment. The ability to prioritize storage access in times of high contention opens many doors that were previously closed.

The network I/O controls will appeal to a broader audience. Although there are ways of achieving this goal external to the hypervisor, adding this capability within the scope of the virtual machine reduces cost and complexity.

Movin' on up

It's clear that VMware is continuing to push the boundaries of what's possible in server virtualization. The consistent progress shown by each version of VMware's flagship product simply hardens the cement of an already solid foundation. There are still oddities present in various aspects of the solution, but they do not generally impact the bread-and-butter hypervisor functionality; vSphere is as stable as an x86 hypervisor gets these days.

The main concerns with vSphere have historically been the high licensing cost and the squirrelly error reporting. The latter seems to be better in this revision, but it's still not where it should be. As for licensing costs, there's some relief for small businesses, if not for large ones. VMware has lowered the price of the Essentials edition, and added vMotion to the Essentials Plus and Standard editions. 

Nevertheless, the many goodies in vSphere 4.1 address gaps and problem points found in earlier releases. VMware not only has the most advanced virtualization solution available today, but it appears the company won't be slowing down any time soon.

Dive deep into virtualization:

Server Virtualization Deep Dive Report: Dramatic cost and agility benefits make server virtualization an essential part of any data center. Here's how to plan, deploy, and maintain a sound virtual infrastructure.

High-Availability Virtualization Deep Dive Report: Server virtualization can help deliver the high levels of reliability that today's enterprises demand, often at a lower cost and with less complexity than non-virtualized methods. Here's how to design one high-availability scheme and apply it to many situations.

VDI Deep Dive Report: If you have the hardware, VDI (virtual desktop infrastructure) can deliver a thin client experience similar to that of a desktop PC, but challenges remain.

Thin Client Deep Dive Report: Should you use the conventional Terminal Services approach, or opt for "real" desktop virtualization? Here's how to make sensible decisions about thin clients.

This article, "First look: VMware vSphere 4.1 keeps the virtualization crown," was originally published at InfoWorld.com. Follow the latest developments in virtualization and read Paul Venezia's Deep End blog at InfoWorld.com.

Read more about virtualization in InfoWorld's Virtualization Channel.
