Networking

Networking Gear Goes Green

Servers get most of the glory when it comes to energy management, but networking gear is about to catch up.

Over the past year, network equipment vendors have begun to emphasize energy-efficiency features, something that was never a top priority before, says Dale Cosgro, a product manager in Hewlett-Packard Co.'s ProCurve network products organization.

Data Center Energy Stats

How much power does data center gear consume?

  • Cisco Catalyst 6500 series switch, fully populated: 2kW to 3kW per rack
  • Cisco Nexus 7000 series switch, fully configured: 13kW per rack
  • Fully loaded rack of servers, average load: 4kW per rack

Sources: Cisco; HP Critical Facilities Services

Networking infrastructure isn't in the same class as servers or storage in terms of overall power consumption -- there are far more servers than switches -- but networking can account for up to 15% of the total power budget.

And unlike servers, which have sophisticated power management controls, networking equipment must always be on and ready to accept traffic.

Also, networking power use at the rack level is significant. A Cisco Catalyst 6500 series switch consumes as much as 2kW to 3kW per 42U-high rack. Cisco Systems Inc.'s largest enterprise-class switches, the Nexus 7000 series, can consume as much as 13kW per rack, according to Rob Aldrich, an architect in Cisco's advanced services group. A 13kW cabinet generates more heat than many server racks -- enough that it requires careful attention to cooling.

By way of comparison, most data centers top out at between 8kW and 10kW for server racks, says Rakesh Kumar, an analyst at Gartner Inc. The average cabinet consumes about 4kW, says Peter Gross, vice president and general manager of HP Critical Facilities Services.

Vendors have already adopted some energy-related features, such as high-efficiency power supplies and variable-speed cooling fans. But with switches, there's a limit to what can be done in the area of power management today. Most idle switches still consume 40% to 60% of maximum operating power. Anything less than 40% compromises performance, says Aldrich. "Unless users want to accept latency, you have to have the power," he adds.

But huge improvements are coming, says Cosgro.

More-Efficient Technology

Technology improvements that favor energy efficiency are gradually emerging in several areas. "As new generations of products hit the market, more of these kinds of features will be implemented," says Cosgro.

Examples include more modular application-specific integrated circuit (ASIC) designs that let switches power down components that aren't in use, from LED panel lights to lookup tables in memory.

Also, general advances in silicon technology will minimize current leakage and gradually boost energy efficiency with each new generation of chips. Eventually, says Cosgro, "we should be able to get networking equipment that uses 100 watts today down to 10 watts."

Improvements in other areas have also helped. Software, for example, is now more efficient, consuming fewer CPU cycles -- and less energy. And hardware is now designed to run at higher operating temperatures to reduce cooling costs.

For example, Cosgro claims that HP's current ProCurve equipment can run safely at temperatures up to 130 degrees -- higher than the specifications for most other data center equipment. "That's driven by requirements of IT managers who want to run data centers at higher temperatures," he says.

It may be possible to move to higher operating temperatures in a single-vendor wiring closet, but network equipment vendors will need to do a better job of testing in mixed environments before temperatures approaching 130 degrees can be sustained -- especially within racks in the data center. "No one knows how networking and other types of equipment will react when sitting next to servers that displace more BTUs," says Drue Reeves, an analyst at Burton Group. Today, each vendor tests with only its own equipment.

More sophisticated power-monitoring systems will also help save energy, as will management tools with more granular controls. Real-time power and temperature monitoring is key to any data center and is essential for managing growth. "If something is not right, you want to know about it before a catastrophe happens," says Rockwell Bonecutter, global lead of Accenture Ltd.'s green IT practice.

Management software could be configured to identify specific network equipment, such as voice-over-IP phones, by using the Link Layer Discovery Protocol. The software could then automatically shut off Power-over-Ethernet current for VoIP handsets at a specific time of day or when the associated PC on each desktop is turned off at day's end.
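To make that concrete, here is a minimal sketch of the shutoff step, assuming a managed switch that exposes the standard POWER-ETHERNET-MIB (RFC 3621) over SNMP and using the pysnmp library's high-level API. The switch address, community string and port number are placeholders, and the LLDP discovery step that identifies which ports actually have handsets attached is omitted.

    # Minimal sketch: disable or enable PoE on one switch port over SNMP,
    # using pethPsePortAdminEnable from the standard POWER-ETHERNET-MIB
    # (RFC 3621). Switch address, community string and port index are
    # placeholders, not values from the article.
    from pysnmp.hlapi import (
        SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
        ObjectType, ObjectIdentity, Integer, setCmd,
    )

    # pethPsePortAdminEnable: 1.3.6.1.2.1.105.1.1.1.3.<group>.<port>
    # true(1) turns PoE on, false(2) turns it off
    POE_ADMIN_OID = "1.3.6.1.2.1.105.1.1.1.3.1.{port}"

    def set_poe(switch, community, port, enable):
        error_indication, error_status, _, _ = next(setCmd(
            SnmpEngine(),
            CommunityData(community),
            UdpTransportTarget((switch, 161)),
            ContextData(),
            ObjectType(ObjectIdentity(POE_ADMIN_OID.format(port=port)),
                       Integer(1 if enable else 2)),
        ))
        if error_indication or error_status:
            raise RuntimeError(f"PoE set failed: {error_indication or error_status}")

    # Example: called from a nightly scheduler to power down the phone on port 12.
    # set_poe("192.0.2.10", "private", 12, enable=False)

A scheduler would call set_poe again with enable=True before the workday starts, or tie the calls to the desktop-PC shutdown event Cosgro describes.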

Another example: Edge switches are typically connected to two routers for redundancy during the day, but a network could be configured to have one router go into low-power sleep mode at night. The sleeping router would "wake up" only when or if it was needed.
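The scheduled half of that arrangement can be sketched the same way. Assuming the standby uplink can simply be administratively downed overnight via the standard IF-MIB ifAdminStatus object (true low-power sleep and on-demand wake-up depend on vendor support), a cron-driven script might look like the following; the router address, community string and interface index are placeholders.

    # Minimal sketch: take a redundant uplink down at night and bring it back
    # in the morning via IF-MIB ifAdminStatus (up(1)/down(2)). Real sleep modes
    # and on-demand wake-up need vendor-specific features; this covers only the
    # fixed schedule. Run from cron, e.g. "sleep" at 20:00 and "wake" at 06:00.
    import sys
    from pysnmp.hlapi import (
        SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
        ObjectType, ObjectIdentity, Integer, setCmd,
    )

    IF_ADMIN_STATUS = "1.3.6.1.2.1.2.2.1.7.{ifindex}"

    def set_uplink(router, community, ifindex, up):
        err_ind, err_status, _, _ = next(setCmd(
            SnmpEngine(), CommunityData(community),
            UdpTransportTarget((router, 161)), ContextData(),
            ObjectType(ObjectIdentity(IF_ADMIN_STATUS.format(ifindex=ifindex)),
                       Integer(1 if up else 2)),
        ))
        if err_ind or err_status:
            raise RuntimeError(f"ifAdminStatus set failed: {err_ind or err_status}")

    if __name__ == "__main__":
        set_uplink("192.0.2.1", "private", 24, up=(sys.argv[1] == "wake"))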

These types of applications represent "a huge opportunity for savings," says Cosgro.

Better Standards

Emerging standards could soon help save energy during periods when networks sit unused and will help IT compare the relative efficiency of competing products.

The new IEEE 802.3az Energy Efficient Ethernet (EEE) standard, approved on Sept. 30, may offer the biggest bang for the buck by cutting power consumption for network equipment when utilization is low.

Today, Ethernet devices keep their transmitters running continuously, drawing power even when network traffic is at a standstill. Equipment supporting the EEE standard will instead send a brief refresh signal periodically and stay quiet the rest of the time, cutting power use by more than 90% during idle periods.

In a large network, that's "a whole lot of energy" that could be saved, Cosgro says.

The standard will also allow "downshifting" in other modes of operation. In a 10Gbit/sec. switch, for example, individual ports carrying only a 1Gbit/sec. load will be able to drop from full 10Gbit/sec. operation to the power level needed to support 1Gbit/sec., saving energy until activity picks up again.

Products built to support the EEE standard should start appearing by 2011, says Aldrich.

Another emerging technology, the PCI-SIG's Multi-Root I/O Virtualization specification, gives servers within a rack access to a shared pool of network interface cards. This happens via a high-speed PCI Express bus -- essentially extending the PCIe bus outside of the server. "Instead of a [network interface card] in every server, you'll have access to a bank of NICs in a rack, and you can assign portions of the bandwidth of one of those NICs to a server," probably using tools provided by the server vendor, says Reeves.

Energy savings will come from increased utilization of the network -- achieved by splitting up the bandwidth in each "virtual NIC" -- and the need for fewer NICs and switch ports, he says. He expects to see standards-compliant products perhaps as early as 2012.

