Nexus 7000 Aims for Data Center Dominance

Throughput and Delay

Beyond slick features and high availability, performance -- moving packets to their destinations as fast as possible -- is often the main event when it comes to routing and switching. While it's tempting to think 256 10G Ethernet ports will offer virtually unlimited capacity, our results suggest that, at least with the line cards we tested, Cisco still has work to do when it comes to removing bandwidth bottlenecks.

We measured Nexus' performance with separate tests of throughput and delay for layer-2 unicast, layer-3 unicast, and layer-3 multicast traffic. As usual with such tests, we configured Spirent TestCenter to offer traffic in a fully meshed pattern among all 256 ports to find the throughput level.
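
A fully meshed pattern means every port offers traffic to every other port simultaneously, the worst case for a switch's forwarding and lookup engines. The sketch below illustrates the pattern only; the actual test traffic was defined in Spirent TestCenter, not generated in code.

```python
# Sketch of a fully meshed traffic pattern: every port offers traffic
# to every other port. Illustrative only -- the real configuration was
# done in Spirent TestCenter, not in Python.

NUM_PORTS = 256

def full_mesh(num_ports):
    """Yield (source, destination) port pairs for a full mesh."""
    for src in range(num_ports):
        for dst in range(num_ports):
            if src != dst:
                yield (src, dst)

pairs = list(full_mesh(NUM_PORTS))
print(len(pairs))  # 256 * 255 = 65,280 distinct port-to-port flows
```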

Throughput tests are already stressful by definition, but we added to the burden with extra monitoring and management functions in all tests. We configured 500-line QoS and 7,000-line security access control lists on each line card, and also enabled NetFlow on up to 512,000 flows, the maximum the Nexus supports.
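
Nobody writes a 7,000-line ACL by hand; test configurations that large are typically generated. The sketch below shows one way to do it; the list name and address ranges are hypothetical, invented for illustration rather than taken from our actual test configuration.

```python
# Hypothetical generator for a large security ACL of the kind used to
# load the switch during testing. The name and address ranges below are
# made up for illustration; they are not the values from the actual test.

def make_acl(name, num_lines):
    lines = [f"ip access-list {name}"]
    for i in range(num_lines):
        # Spread entries across many /24s so every line is distinct.
        octet2, octet3 = divmod(i, 256)
        lines.append(f"  permit ip 10.{octet2 % 256}.{octet3}.0/24 any")
    lines.append("  deny ip any any")
    return "\n".join(lines)

print(make_acl("TEST-SEC-ACL", 7000))
```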

Tests of layer-2 and layer-3 IPv4 unicast traffic produced virtually identical results, with the switch achieving throughput of up to 476 million frames per second (fps) across all 256 10G Ethernet interfaces.

With multicast traffic (50 sources sending traffic to each of 200 groups, resulting in 10,000 multicast routes), throughput was slightly lower, topping out at 353 million fps. Expressed in terms of bandwidth, the Nexus switch moved up to 79.52Gbps through each of its eight line cards in all tests (L2, L3 and multicast), for a total of around 636.16Gbps.
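
The arithmetic behind those figures is simple enough to check; the sketch below just makes it explicit.

```python
# Back-of-the-envelope check of the multicast and bandwidth figures.

sources = 50
groups = 200
print(f"{sources * groups:,} multicast routes")   # 10,000 routes

gbps_per_card = 79.52
line_cards = 8
print(f"{gbps_per_card * line_cards:.2f} Gbps")   # 636.16 Gbps aggregate
```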

These numbers are far below theoretical line rate, and also nowhere near the almost 1.7Tbps capacity mentioned earlier. The bottleneck is in the current-generation line cards, which top out at just less than 60 million lookups per second. Cisco says higher-capacity cards, slated for release in mid-2009, will be able to use the full fabric capacity.
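
That lookup rate is consistent with what we measured: eight line cards at just under 60 million lookups per second each imply a chassis-wide ceiling of roughly 480 million fps, and our best unicast result of 476 million fps sits right at it.

```python
# The per-card lookup rate explains the chassis-wide throughput ceiling.

lookups_per_card = 60e6   # "just less than 60 million" lookups/sec
line_cards = 8

ceiling_fps = lookups_per_card * line_cards
print(f"{ceiling_fps / 1e6:.0f} million fps ceiling")  # ~480 million fps

measured_fps = 476e6      # best unicast result from our tests
print(f"{measured_fps / ceiling_fps:.1%} of ceiling")  # ~99.2%
```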

Given that the fabric capacity vastly exceeds that of the current line cards, the throughput results are a bit like what you'd get from putting the wheels from a Toyota Prius onto a Mack truck: It's no longer efficient, and it won't carry anywhere near as much as it could.

To get a more complete picture of what the switch will be able to do when outfitted with faster line cards, we did some calculations to determine effective fabric capacity. In resiliency tests with a single fabric card, the switch forwarded traffic at around 338Gbps. Assuming results scale linearly as fabric cards are added, that means Nexus will offer up to 1.691Tbps of capacity -- once faster line cards are available to take advantage of it.
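
The extrapolation is straightforward; the 1.691Tbps figure implies the chassis's full complement of five fabric modules (1,691 / 338 is almost exactly 5).

```python
# Linear extrapolation from the single-fabric-card resiliency result.

gbps_single_fabric = 338
fabric_cards = 5          # full complement of fabric modules implied above

capacity_tbps = gbps_single_fabric * fabric_cards / 1000
print(f"{capacity_tbps:.3f} Tbps")   # ~1.690 Tbps, matching the figure above
```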

We also measured delay -- the amount of time the switch held onto each frame. We took these measurements at 10% of line rate, low enough that queuing delay shouldn't color the results.

With the exception of jumbo frames, both average and maximum delays for all frame sizes were less than 50 microsec. That kind of delay is unlikely to affect even delay-sensitive voice, video or storage applications. Jumbo frames took longer to process, with delays between 74 microsec (for L3 unicast) and 113 microsec (for L3 multicast). Bulk data-transfer applications usually aren't very sensitive to delay, so the elevated delays with jumbo frames may also be a nonissue.
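
Part of the gap is physics rather than processing. A quick calculation of serialization delay -- the time just to clock a frame onto a 10G link -- shows why jumbo-frame delays can never match those of small frames (assuming a 9,216-byte jumbo frame, a common maximum).

```python
# Serialization delay: time to clock a frame onto a 10G Ethernet link.

link_bps = 10e9

for size_bytes in (64, 1518, 9216):   # 9,216 bytes assumed as jumbo maximum
    usec = size_bytes * 8 / link_bps * 1e6
    print(f"{size_bytes:>5} bytes: {usec:6.2f} microsec on the wire")

# A store-and-forward switch must receive the whole frame before
# transmitting it, so each jumbo frame adds roughly 7.4 microsec per
# hop before any lookup or queuing time is counted.
```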

It's all too easy to dismiss the performance results from these tests as subpar, but that's oversimplifying a bit. The Nexus 7000 Series is a much faster switch than our throughput numbers suggest, but higher performance will have to wait until new line cards ship sometime next year. In the meantime, the new switch's modular design and high-availability and virtualization features make it very much worth considering for large data-center deployments.
