vSphere Rounds Into Form

VMware's vSphere 4.0 cloud operating system, which we tested last June, ushered in new methods to manage virtual machines on internal and external hosts. The 4.1 version, which shipped last month, delivers some much needed polish. Additions to the product include an updated vCenter Configuration Manager (formerly EMC's Ionix Application Stack Manager and Server Configuration Manager) as well as vCenter Application Discovery Manager (formerly EMC's Ionix Application Discovery Manager).


Prices now range from the free basic hypervisor to Enterprise Plus at $3,495 per processor.

Of the features tested, we found vMotion to have the most immediate effect for administrators, although those trying to cram as many VMs as possible onto a physical server will find newly reduced memory overhead and memory compression options to be highly desirable. After all, virtualization is all about optimization.

vSphere 4.1 contains a sorely needed feature: the ability to use vMotion to move more than one VM at a time from host to host. Several VMs can now move concurrently, but with a small catch.

The catch is that the source and target machines still need to have similar processor types, and the link between them needs reasonable speed. The two Gigabit Ethernet jacks found on most servers will handle at least four concurrent VM moves, we found in testing. (See our test methodology.)

With a 10Gbps switch, enterprise customers can expect to move eight machines at once across a VMware cluster.
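The relationship between vMotion network bandwidth and concurrency can be sketched in code. The sketch below encodes the concurrency we observed (four moves over dual Gigabit links, eight over a 10Gbps link); the two-moves-per-Gbps ratio and the function itself are our own illustration, not a VMware formula.

```python
def vmotion_slots(link_gbps: float, nics: int = 1) -> int:
    """Rough estimate of concurrent vMotion migrations a host can
    sustain. Calibrated to what we saw in testing: four moves over a
    pair of Gigabit Ethernet links, eight over a single 10Gbps link.
    The 2-per-Gbps ratio is our extrapolation, capped at the eight
    concurrent migrations vSphere 4.1 supports."""
    aggregate_gbps = link_gbps * nics
    return min(8, max(1, int(aggregate_gbps * 2)))

print(vmotion_slots(1, nics=2))   # dual GigE
print(vmotion_slots(10))          # 10Gbps Ethernet
```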

These improvements address the problem of quickly getting production virtual machines off a failing hardware platform. When hardware starts sounding alarms, keeping production running means rapidly moving a dense collection of operating system instances to another platform, and eight VMs at a time seems a good number.

In the not too distant future, however, growing CPU socket counts and multi-core processors will lead organizations to cram even more instances into each server, raising the need to move still more instances quickly. And as servers fill with more instances, load balancing of CPUs and other resources becomes more critical. The ability to move groups of VMs from one server to another goes a long way toward maintaining peak performance from the multi-core servers now popular in network operations centers and data centers.

Upgrade drama

vSphere 4.1's core management application, vCenter, now runs only on 64-bit hosts. Upgrades on existing 32-bit platforms, like our Windows Server 2003 R2 host, aren't allowed, so some administrators will have to move to 64-bit versions of their vCenter host.

Or you can simply re-install, a more painful exercise. There are upgrade scripts to copy the database, but the instructions aren't clear, and we couldn't get them to work with our Microsoft SQL Server 2005 installation; the migration scripts failed. We gave up, created a new vCenter data center, and imported the ESX machines (switching them to the new vCenter server).

The VMware ESX core hypervisor has also been upgraded, and provided no upgrade drama at all; vCenter Update Manager brought our hypervisors up to the 4.1 level.

Memory management changes

VMware claims that memory use for VM instances is more efficient in 4.1 than in 4.0. We tested overhead and found that 4.1 does decrease memory overhead, but only for 64-bit instances. We couldn't find any real change in 32-bit virtual machine memory use over the same time periods (many minutes).

Memory can also be compressed so that VMs don't go to disk for virtual memory as often, which saves access time. We found that the time savings from compression can be substantial, and the feature is likely to be most useful in configurations where VMs are densely packed and somewhat over-subscribed, resource-wise.
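The idea is simple: before swapping a guest page to disk, the hypervisor tries to compress it and keep it in an in-RAM cache. Here's a minimal sketch of that decision, assuming a 50% compression threshold (ESX stores compressed pages in half-size slots); the code is illustrative, not VMware's implementation.

```python
import os
import zlib
from typing import Optional

PAGE_SIZE = 4096  # bytes, a typical x86 memory page

def try_compress(page: bytes) -> Optional[bytes]:
    """Sketch of memory compression: before swapping a page to disk,
    try to compress it. Keep it in the in-RAM compression cache only
    if it shrinks to half size or less; otherwise fall back to the
    much slower disk swap path."""
    compressed = zlib.compress(page)
    if len(compressed) <= PAGE_SIZE // 2:
        return compressed  # stays in RAM: fast to restore
    return None            # incompressible: swap to disk instead

zeros = bytes(PAGE_SIZE)           # highly compressible guest page
random_page = os.urandom(PAGE_SIZE)  # incompressible page
assert try_compress(zeros) is not None
assert try_compress(random_page) is None
```

Restoring a page from the compression cache costs a decompress in RAM rather than a disk read, which is where the access-time savings come from.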

CPU use changes

Unlike most other hypervisors, VMware 4.1's ESX uses processor sleep instructions to save power when idle. Most hypervisor kernels thwart processor sleep, keeping CPUs at full speed all the time in a quest to stay responsive to the random requests of hosted VMs.

Sadly, there's no hardware compatibility list (HCL) noting which servers can use this feature, and while we were able to set power management policies on our Dell 1950 VMware ESX 4.1 host, we weren't able to deploy them on our HP servers. And on all three mainstream machines we tested, we were unable to view power history, power consumption, or power cap information. According to a VMware support person, "the graphs are supported on systems that have built-in power meters that we know how to read." We hope more can be read, soon.

Guests can now see virtual CPUs as a number of sockets and a number of cores, where previously guests could see only sockets (with one socket equal to one core). It's now possible, via advanced settings, to have a VM detect two sockets with two cores each, for a total of four CPUs. Instructions on this aren't particularly clear. Sadly, there's still a limit of eight vCPUs per VM.
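For the record, the advanced settings in question live in the VM's .vmx file. A guest presented as two dual-core sockets (four vCPUs total) looks roughly like this; the `cpuid.coresPerSocket` key is the relevant setting, and the values shown are just an example:

```
numvcpus = "4"
cpuid.coresPerSocket = "2"
```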

Miscellaneous updates

VMware's vCenter also tracks more storage statistics. We found more console information and tracking of throughput and latency for both hosts and VMs. And to our delight, vCenter now tracks NFS statistics as well.

VMware's vNetwork Distributed Switch (available only in the vSphere Enterprise Plus edition) adds network control by throttling traffic per protocol in the switch. Storage I/O can similarly be throttled per VM, but this too requires the Enterprise Plus edition.

As an example, we could limit iSCSI, FT (fault tolerance mirroring), NFS, and vMotion traffic. Control over both networking and storage resources gives administrators a handle on each VM. With vCenter, each VM's resource behavior becomes more narrowly bounded, and therefore predictable, so hosts can be packed with VMs and still be tuned for peak performance.
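The classic mechanism behind this kind of per-VM rate limiting is a token bucket: traffic spends tokens that refill at the configured rate, with a burst allowance. Here's a generic illustration of the concept (not VMware's code; the rates and names are made up for the example):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter, the standard mechanism
    behind per-protocol and per-VM throttles like those in the
    vNetwork Distributed Switch. Purely illustrative."""
    def __init__(self, rate_bps: float, burst: float):
        self.rate = rate_bps        # sustained rate, bytes/sec
        self.capacity = burst       # maximum burst, bytes
        self.tokens = burst         # start with a full bucket
        self.last = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        # Refill tokens for the time elapsed since the last check.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False  # over budget: the switch delays or drops this I/O

limiter = TokenBucket(rate_bps=1_000_000, burst=64_000)
assert limiter.allow(32_000)       # within the burst allowance
assert not limiter.allow(64_000)   # bucket is nearly drained
```

Each throttled protocol or VM gets its own bucket, which is what makes the behavior per-VM predictable.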

VMware says that vCenter can now manage up to 1,000 hosts, and 10,000 powered on VMs. VMware also increased the number of VMs per cluster, from 1,280 to 3,000.

VMware also claims that vMotion migration is five times faster. We were unable to corroborate this, but we did find that vMotion no longer queues jobs one behind the other; it actually runs concurrent VM transfers from one host to another. We're therefore inclined to believe the claim.

VMware 4.1 adds support for 8Gbps Fibre Channel. There's also a vStorage API that supports more rapid array file movement through efficient space allocation. In turn, VMs are created and provisioned through their life cycles more rapidly.


VMware stressed in its pre-release briefing that many of the upgrades in 4.1 are incremental, related to extensibility and scalability. On most fronts, we found this to be true. If you had reasons to buy VMware 4.0, there are more reasons to buy 4.1 for its added scale and features. With Microsoft and Citrix breathing down its neck, VMware distinguishes itself through these incremental additions. Note, though, that many of the juicier updated features and specs are available only in the Enterprise Plus edition.

Henderson is principal researcher and Allen is a researcher for ExtremeLabs in Indianapolis. They can be reached at thenderson@extremelabs.com.

Read more about data center in Network World's Data Center section.

