Virtualization shoot-out: World's fastest hypervisor
The test plan was straightforward: measure Windows and Linux server performance on the physical hardware, then on an otherwise quiescent hypervisor, and then several more times on a hypervisor under increasing load. Metrics included CPU, RAM, network, and storage I/O performance; the time taken by, and interruption (if any) during, VM migrations; speed and agility in VM template creation and deployment; and overall handling of a few disaster scenarios, such as the abrupt loss of a host and failover to an alternate site.
The benchmarks themselves were based on synthetic and real-world tests. They provide a general picture of hypervisor performance, but as with many facets of virtualization, there's no good way to accurately forecast how any workload will perform under any virtualization solution apart from running the actual workload.
The Linux tests were drawn from my standard suite of homegrown tests. They are based on common tools and scenarios, and they're measured by the elapsed time to complete. They included converting a 150MB WAV file to MP3 using the LAME encoder, as well as using bzip2 and gzip to compress and decompress large files. These are single-threaded tests run in series, but with increasing concurrency, allowing performance to be measured with two, four, six, eight, and twelve concurrent test passes running. By running these tests on a virtual machine with four vCPUs (virtual CPUs), we could measure how well the hypervisor handled increasing workloads on the VM in terms of CPU, RAM, and I/O performance, as all files were read from and written to shared storage.
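The concurrency harness described above can be sketched in a few lines of shell. This is a hypothetical reconstruction, not the actual test suite: the file name, the 2MB stand-in file, and the use of gzip alone are assumptions (the real tests also ran LAME and bzip2 against much larger files), but the shape -- launch N single-threaded passes in parallel, wait for all of them, record elapsed time -- is the same.

```shell
#!/bin/sh
# Hypothetical sketch of the concurrency test harness. File names, sizes,
# and the gzip-only workload are assumptions for illustration.

# Create a small stand-in for the "large file" (2MB of random data).
dd if=/dev/urandom of=testfile.bin bs=1024 count=2048 2>/dev/null

run_pass() {
    # One single-threaded pass: compress, then decompress, the test file.
    gzip -c -9 testfile.bin > "pass_$1.gz"
    gunzip -c "pass_$1.gz" > /dev/null
    rm -f "pass_$1.gz"
}

: > results.txt
for n in 2 4 6 8 12; do
    start=$(date +%s)
    i=1
    while [ "$i" -le "$n" ]; do
        run_pass "$i" &          # launch each pass in the background
        i=$((i + 1))
    done
    wait                         # block until every concurrent pass completes
    elapsed=$(( $(date +%s) - start ))
    echo "concurrency=$n elapsed=${elapsed}s" | tee -a results.txt
done
rm -f testfile.bin
```

Because each pass is single-threaded, the elapsed time at each concurrency level shows how well the hypervisor schedules the VM's vCPUs as the run queue grows past the number of vCPUs available.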
The Windows tests were run with SiSoftware's Sandra. We chose to focus on a few specific benchmarks, primarily based on CPU and RAM performance, but also including AES cryptography, which plays a significant part in many production workloads.
Again, all tests were conducted on the same physical hardware, with the same EqualLogic PS6010XV iSCSI array for storage, and on identical virtual machines built under each solution. All the Windows tests were run on Windows Server 2008 R2, and all the Linux tests were run on Red Hat Enterprise Linux 6 -- with the exception of Microsoft Hyper-V. Because Hyper-V does not support Red Hat Enterprise Linux 6, we used RHEL 5.5, which may have had a minor impact on Hyper-V's Linux test results.
The performance test results show the four hypervisors to be closely matched, with no big winners or losers. The main differences emerged in the loaded hypervisor tests, where XenServer's Windows performance and Hyper-V's Linux performance both suffered. Overall, VMware vSphere and Microsoft Hyper-V turned in the best Windows results [see table], while vSphere, Red Hat Enterprise Virtualization, and Citrix XenServer all posted solid Linux numbers [see table]. The crypto bandwidth tests, where XenServer and vSphere proved three times faster than Hyper-V and RHEV, showed the advantages of supporting the Intel Westmere CPU's AES-NI instructions. Charts of a few of these test results are displayed below.
Next page: Understanding the spread