Bridging the Network Gap: Cisco Nexus 5000

How to Fill Up a 10G Line

Setting up that policy was easy; verifying that it actually worked was more complicated, and called for the powerful features of IP Performance Tester, a traffic generator system from Ixia. One of the problems I had to solve was how to create significant traffic on my 10G connections, and that's where IP Performance Tester, luckily already installed in my test system, came into action. This isn't the only test in which I've used IP Performance Tester, and I've found it to be a valuable tool.

For my PFC test, the Ixia system was set to generate enough traffic to cause a level of congestion that, without PFC, would have translated into dropped packets. The switch under test passed with aplomb and without losses, proving that not only FC but also Ethernet can be a reliable, lossless protocol.
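To make the mechanism concrete, here is a minimal conceptual sketch, in Python, of how per-priority pause keeps a queue lossless. This is not how the Nexus 5000 implements PFC internally; the queue depth and watermark values are made-up illustration numbers.

```python
# Toy model of IEEE 802.1Qbb Priority Flow Control (PFC).
# Depth and watermark values are hypothetical, for illustration only.

class PfcQueue:
    """A per-priority ingress queue that pauses the sender instead of dropping."""

    def __init__(self, depth=100, hi_watermark=80, lo_watermark=40):
        self.depth = depth       # total buffer slots (hypothetical)
        self.hi = hi_watermark   # ask the sender to PAUSE above this occupancy
        self.lo = lo_watermark   # ask the sender to RESUME below this occupancy
        self.occupancy = 0
        self.paused = False
        self.dropped = 0         # stays at zero if PFC does its job

    def enqueue(self):
        if self.occupancy >= self.depth:
            self.dropped += 1    # would only happen if the pause arrived too late
            return
        self.occupancy += 1
        if self.occupancy >= self.hi and not self.paused:
            self.paused = True   # emit a PAUSE frame for this priority only

    def dequeue(self):
        if self.occupancy > 0:
            self.occupancy -= 1
        if self.occupancy <= self.lo and self.paused:
            self.paused = False  # emit a resume (zero-quanta PAUSE) frame

# Offered load exceeds the drain rate: without PFC this queue would overflow;
# with PFC the sender backs off and nothing is dropped.
q = PfcQueue()
for tick in range(1000):
    if not q.paused:             # a PFC-aware sender honors the pause
        for _ in range(2):       # arrival rate: 2 frames per tick
            q.enqueue()
    q.dequeue()                  # drain rate: 1 frame per tick
print("frames dropped:", q.dropped)  # -> 0
```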

Of the many test scripts I ran on the Nexus 5000, this was without any doubt the most significant. The switch offers many powerful features, including guaranteed traffic rates, automatic bandwidth management, and automated traffic span.

However, PFC is what legitimizes FCoE as a viable convergence protocol that can bridge the gap between application servers and storage, and it makes the Nexus 5000 a much-needed component in datacenter consolidation projects.

One last question remained unanswered in my evaluation: The Nexus 5000 had proven to have the features needed to be the connection point between servers and storage in a unified environment, but did the machine have enough bandwidth and responsiveness for the job?

To answer that question, I moved the testing to a different setting, where the Nexus 5020 was connected to eight hosts running NetPipe.

NetPipe is a remarkable performance benchmark tool that works particularly well with switches: you can measure end-to-end (host-to-host) performance and record, in Excel-compatible format, how the results vary across different data transfer sizes.

A summary of what you can do with NetPipe is shown in the accompanying screen image.

In essence, you can set NetPipe to use one-way or bidirectional data transfers and increase the transfer size gradually within a range, recording the transfer rate in megabytes per second and the latency in microseconds.

I ran my tests with a data size range from 1 byte to 8,192 bytes, but for clarity I am not listing the whole range of results, only a selection following a power-of-two pattern.
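To make the methodology concrete, here is a stripped-down, hypothetical Python equivalent of a NetPipe-style ping-pong run over that same 1-byte-to-8,192-byte range. NetPipe itself is a separate C program with many more options; the host address, port, and trial count below are assumptions for illustration.

```python
# Hypothetical sketch of a NetPipe-style ping-pong test; not NetPipe itself.
# HOST, PORT, and TRIALS are assumed values.
import socket, sys, time

HOST, PORT, TRIALS = "192.0.2.10", 5002, 1000

def recv_exact(sock, n):
    """Read exactly n bytes from the socket, handling partial reads."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed")
        buf += chunk
    return buf

def run_client():
    with socket.create_connection((HOST, PORT)) as s:
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        size = 1
        while size <= 8192:                     # 1 byte to 8,192 bytes, doubling
            payload = b"x" * size
            start = time.perf_counter()
            for _ in range(TRIALS):             # ping-pong: send, wait for echo
                s.sendall(payload)
                recv_exact(s, size)
            rtt = (time.perf_counter() - start) / TRIALS
            latency_us = rtt / 2 * 1e6          # one-way latency estimate
            rate_mb_s = size / (rtt / 2) / 1e6  # one-way transfer rate, MB/s
            print(f"{size},{latency_us:.1f},{rate_mb_s:.2f}")  # CSV: size,us,MB/s
            size *= 2

def run_server():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
            size = 1
            while size <= 8192:
                for _ in range(TRIALS):
                    conn.sendall(recv_exact(conn, size))  # echo back
                size *= 2

if __name__ == "__main__":
    run_server() if sys.argv[1:] == ["server"] else run_client()
```

Run one copy with the argument server on the first host, then run the plain client on the second host to drive the measurement.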

Also, to mimic more realistic working conditions, I ran the same tests first without any other traffic on the switch, and then with one and two competing traffic flows added.

Finally, to get a better sense of how much the switch affects transfer rate and latency, I ran the same test back-to-back, in essence replacing the switch with a direct connection between the two hosts.

It's interesting to note how the transfer rate increases gradually with larger data sizes, reaching numbers very close to the theoretical capacity of 10G Ethernet.

The latency numbers, where lower is better, are obviously the most important proof of the switch's responsiveness. Even considering the best results with the Nexus 5020 in the path, the delta from the back-to-back runs stays between 3 and 3.5 microseconds, which is essentially the latency added by the switch.
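To spell out the arithmetic (with hypothetical absolute numbers, since only the delta is given here):

```python
# Hypothetical absolute values, for illustration only; the test measures the delta.
back_to_back_us = 10.0      # assumed host-to-host latency over a direct cable
through_switch_us = 13.3    # assumed latency for the same test through the Nexus 5020
switch_added_us = through_switch_us - back_to_back_us
print(f"latency added by the switch: {switch_added_us:.1f} microseconds")  # -> 3.3
```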

This number is not only very close to what Cisco suggests for the 5020, but is probably also the shortest latency you can put between your applications and your data.
