Blade Server Review: HP BladeSystem c7000
One look at the HP BladeSystem c7000 blade chassis and you understand why HP sells a lot of blades. The unit is aesthetically pleasing, extremely solid, and well appointed, with an LCD panel for chassis monitoring and control, eight half-width I/O slots in the back, six 2,400-watt power supplies, and 10 fans. As with the Dell, power is tightly controlled: the chassis can dynamically turn power supplies on and off to best match the electrical load, reducing energy consumption during lighter periods.
HP's c7000 is a strong blade platform with plenty of options, bells, and whistles. It offers high density at 16 blades per chassis; solid management tools, including multichassis management from the embedded management console; and a full range of available blades, along with storage and tape blade options. It's lacking a bit in some smaller management features (such as remote share mounting and chassis-wide BIOS and firmware upgrades) as compared to the other solutions, but overall, it squarely hits the mark. On pricing, it falls into the middle of the pack.
[ InfoWorld compares the leading blade server solutions: "Blade server shoot-out: Dell, HP, and IBM battle for the virtual data center." ]
Chassis and blades hardware
The HP c7000 came with nine BL460c compute blades, each running two 2.93GHz Intel X5670 Westmere-EP CPUs and 96GB of Samsung low-voltage RAM. In spec, these blades are essentially identical to the IBM and Dell blades we tested, with one important difference: While the Dell and IBM blades had dual embedded 1G NICs with a dual 10G mezzanine card, the HP blades had dual embedded 10G NICs with a dual 1G mezzanine card. It's clear that HP sees 10G as the rule now, not the exception.
The c7000 also leverages HP's Virtual Connect architecture, which represents a 10G interface as four independent Ethernet interfaces to the blade. These virtual interfaces can be tuned within the Virtual Connect module for specific tasks, such as allocating more bandwidth and priority to iSCSI traffic rather than normal traffic. The configuration of Virtual Connect is somewhat arcane, dispensing with traditional Ethernet switch configurations in favor of GUI port assignments and server profiles. If you want to dive in and quickly configure 802.1q trunking or bonding, you'll have to dig to get there. In fact, the configuration of these modules was opaque even to the HP techs.
In addition to the compute blades, HP included two SAS storage blades in the test chassis. Unique to HP, storage blades house six 2.5-inch SAS disks or SATA SSD drives and function as NAS, iSCSI, or Fibre Channel arrays to any or all of the blades in the chassis. Interestingly, these blades can run Windows Storage Server for generic file serving and iSCSI tasks, or they can run a supported OpenSolaris build that provides Fibre Channel targets. HP also offers tape drives in a blade form factor. With these storage options, a single c-class chassis can provide just about every feature required for that remote office in a single, easily managed 10U box.
Like IBM, HP offers a virtualization blade in the BL490c. This is a blade designed to run virtualization hypervisors, providing for more RAM and doing away with local 2.5-inch disk in favor of SSDs. For those looking to deploy a blade chassis for virtualization, that's a very attractive option.
Hung off the Fibre Channel I/O module on the rear was an HP MSA 2324 array running 24 2.5-inch 15K SAS drives. The MSA was used to house some virtual machines, but the actual testing used the internal storage blades instead.
HP's embedded Onboard Administrator offers detailed information on all chassis components from stem to stern. Where Dell couldn't show CPU temps, HP could show you exact temperatures of every chassis or blade component. OA is also nicely laid out, if not as attractively as the Dell CMC.
However, there are some missing pieces to the HP puzzle. For instance, where Dell easily automates ISO image and virtual drive mappings, as well as handles global BIOS and firmware updates with ease, HP can't perform these tasks through the GUI. Also missing from the GUI were other global options, such as the simple act of powering one or more blades up or down. The remote KVM console applications were also somewhat clunky, lagging even the low-end Supermicro entry in some features and functionality.
There's more to the HP tools than meets the eye, however, as the c7000 can be integrated into the HP Matrix solution to provide for a much more complete data center management picture. The Matrix tools aren't included for free, so we examined only the management functions available in the chassis itself.
One feature present on the HP c-class that isn't available elsewhere is the multichassis management built into the Onboard Administrator. By daisy-chaining several chassis together, you can log into any of them from the same screen and manage them somewhat centrally. It's even possible to restrict login rights so that certain admins can see and modify only certain aspects of specific chassis in the group -- a very handy feature indeed.
The HP Onboard Administrator app is not as streamlined or as simple as the others, but it offers more functionality than the other embedded management tools. It might take more digging and a few extra clicks, but you'll generally find the information or features you want in OA. On the other hand, a few features are oddly missing from OA -- notably multiblade power cycling and BIOS and firmware updates -- but present in the Dell embedded manager. Some of these functions can be found in HP's outboard management toolset, Insight Manager, but they aren't included in the Onboard Administrator.
The c7000 is replete with power controlling features. Like the Dell blade, it offers dynamic power saving options that will automatically turn off power supplies when the system energy requirements are low and fire them back up as required. It will also increase the airflow to only those blades that need it. If the first three blades are working hard, the fans behind those blades will run at higher RPM than the others, providing the necessary cooling without drawing unnecessary power.
We had some trouble measuring power consumption on the c7000 due to the three-phase power supplies provided with the chassis. Where the other vendors arrived with standard L6-30 power connectors, the HP came with L15-30 connectors that required a set of adapters and a different power meter, which caused some problems. Ultimately, we managed to get a set of numbers off the chassis with two blades running and the remainder of the blades removed, which showed power efficiency to be on par with Dell and better than IBM. The chassis consumed around 1kW with each blade drawing approximately 130 watts at idle, and around 1.25kW with the two blades under heavy load.
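Those approximate figures allow a rough separation of fixed chassis overhead (fans, power supplies, I/O modules, management) from per-blade draw. The sketch below is a back-of-envelope estimate only, using the rounded numbers measured above; the variable names are ours, not HP's.

```python
# Rough decomposition of the measured c7000 power figures (approximate values).
IDLE_TOTAL_W = 1000      # chassis total with two blades installed, blades idle
LOAD_TOTAL_W = 1250      # same two blades under heavy load
IDLE_PER_BLADE_W = 130   # approximate idle draw attributed to each blade
BLADES = 2

# Fixed chassis overhead: whatever the blades themselves don't account for.
chassis_overhead_w = IDLE_TOTAL_W - BLADES * IDLE_PER_BLADE_W

# Extra draw per blade when going from idle to heavy load.
load_delta_per_blade_w = (LOAD_TOTAL_W - IDLE_TOTAL_W) / BLADES

print(chassis_overhead_w)      # 740
print(load_delta_per_blade_w)  # 125.0
```

By this estimate, the chassis infrastructure accounts for roughly 740 watts, with each loaded blade adding about 125 watts over its idle draw; a fully populated 16-blade chassis would land well above these two-blade numbers.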
The HP c7000 isn't perfect, but it is a strong mix of reasonable price and high performance, and it easily has the most options among the blade systems we reviewed.
[ Return to "Blade server shoot-out: Dell, HP, and IBM battle for the virtual data center" | Read the review of the Dell PowerEdge M1000e, HP BladeSystem c7000, IBM BladeCenter H, or Supermicro SuperBlade. ]
- Tsunamis and falling crates: Behind the blade server review
- InfoWorld review: Intel's Westmere struts its stuff
- Modern multi-core and the next generation of IT
- Intel's Nehalem simply sizzles
- InfoWorld review: Dell, HP, and Lenovo rack servers
- InfoWorld review: Dell's virtualization servers surge ahead
- Nehalem tower servers: Dell, Fujitsu, HP square off
- Last of the red hot Sun servers
Read more about computer hardware in InfoWorld's Hardware Channel.
Paul Venezia is senior contributing editor of the InfoWorld Test Center and writes The Deep End blog.