Traditionally, network transport has run on two separate technologies, FC (Fibre Channel) and Ethernet, which, like two railroads with different gauges, seemed bound to never meet.
Just about everybody agrees that having a unified network could bring significant financial and administrative benefits, but when exploring possible simplifications to the datacenter fabric, customers faced discouraging and costly options such as tearing down their FC investments or extending the FC network to reach every server and every application.
2008 started with industry signals that it would be the year when those two "railroads" would finally come together. We had a first glimpse that things were changing in that space when Brocade announced the DCX in January. Later that winter a new technology, FCoE (Fibre Channel over Ethernet) -- created by an offspring of Cisco, Nuova Systems -- came to maturity in the Nexus 5000 switches, promising to finally bring these two most critical networks under the same administrative banner.
This spring, about one year after first introducing the concept of FCoE, Cisco announced the Nexus 5000, a 10G Ethernet switch that supports the new protocol and promises to make consolidating FC and Ethernet traffic as easy and as reliable as bringing together Ethernet connections with different speeds on the same switch.
How do the approaches from Brocade and Cisco differ? I won't stretch the rail analogy much further, but it helps to think of the first as a converging point for different railroads, and of the second as a unified rail on which heterogeneous transports can roll.
In fact, FCoE seamlessly brings together the two protocols, potentially reaching any application server that mounts a new breed of adapter, aptly named the converged network adapter, or CNA. A CNA carries both protocols, Ethernet and FC, on a single 10G port, which cuts in half the number of server adapters needed and, just as important, significantly reduces the number of connections and switches needed south of the servers.
The other important component of the FCoE architecture is obviously the Nexus 5000 switch, a device that essentially bridges the FC and Ethernet networks using compatible ports for each technology. Moreover, adding an FCoE switch requires minimal modifications, if any, to the existing storage fabric, which should grab the interest of customers and other vendors.
For the first model released, the Nexus 5020, Cisco declares an aggregate throughput in excess of 1Tbit/sec and negligible latency. This, together with an impressive lineup of 10G ports, makes the switch a desirable machine to have when implementing server virtualization. To paraphrase a Cisco executive, perhaps a bit paradoxically, with FCoE you can burden a server with just about any traffic load.
Getting to the Nexus of the 5000
A switch that promises to deliver the services of Ethernet and FC over the same wire without packet losses and without appreciable latency is certainly worth reviewing, but it didn't take me long to realize that the evaluation required bringing together more equipment than it's convenient to ship, which is why I ran my battery of tests at the Nuova Systems premises in San Jose, Calif.
In addition to 10G Ethernet ports, my test unit mounted some native FC ports, which made it possible to run tests evaluating its behavior when emulating a native FC switch. Other items in my test plan were exploring the management features of the Nexus 5000 and running performance benchmarks to measure latency, I/O operations, and data rate.
The Nexus 5020 is a 2U rack-mounted unit that packs an astonishing number of sockets into that small space: 40, to be precise. Each socket can host an Ethernet port running at 10G. Using an optional expansion module (the switch has room for two), you can extend connectivity with six more 10G Ethernet ports, eight more FC ports, or a combo module with four FC and four 10G Ethernet ports.
However, those sockets don't all need to be filled. For example, my test unit had only 15 10G ports and 4 FC ports active. At review time the Nexus 5000 supported all FC connectivity speeds up to, but not including, 8G.
Typically, you would deploy the 5020 in the same rack where your app servers reside, or in an adjacent rack. Assuming a resilient configuration with two 10G connections per server, one to each switch, two Nexus 5000s can connect up to 40 servers and still have room for more ports via the expansion modules.
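The port budget behind that claim can be sketched in a few lines. This is only back-of-the-envelope math using the figures cited in this review (40 fixed 10G sockets, two expansion slots of up to six 10G ports each); the helper function is hypothetical, not any Cisco tool.

```python
# Port-budget sketch for a redundant two-switch deployment.
# Figures come from the review; the function itself is illustrative.
FIXED_10G_PORTS = 40       # fixed 10G Ethernet sockets on a Nexus 5020
MAX_EXPANSION_10G = 2 * 6  # two expansion slots, up to six 10G ports each

def max_servers(ports_per_switch, links_per_server=2, switches=2):
    """Each server terminates one 10G uplink on each of the two switches,
    so the server count is limited by the ports available on a single
    switch divided by the links each server lands there."""
    links_per_switch_per_server = links_per_server // switches
    return ports_per_switch // links_per_switch_per_server

print(max_servers(FIXED_10G_PORTS))                      # 40 servers on fixed ports
print(max_servers(FIXED_10G_PORTS + MAX_EXPANSION_10G))  # 52 with full 10G expansion
```

With the fixed ports alone, each switch of the pair terminates one link per server, hence the 40-server figure; fully populating both expansion slots with 10G modules would stretch that to 52.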
The front of the 5000 hosts five large, always-spinning, and rather noisy fans. With only one power supply (a dual-PSU configuration is also available), I measured around 465 watts absorbed by the switch. Interestingly, the Nexus kept running when I removed one of the fans but, as I had been warned, shut down automatically when I removed a second fan. However, the remaining three fans kept spinning to keep the internal electronics cool.
When reinserted, the two fans I had removed began spinning immediately, but the rest of the system was still a no-go, and I had to power cycle the switch to restart it. Taking advantage of this behavior (it's by design), I measured 243 watts with only the five fans spinning, which suggests that the other components of the switch draw the delta of roughly 222 watts, at least in my configuration.
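The arithmetic behind that estimate is simple subtraction; here it is spelled out using the two readings from my tests (a sketch, not lab tooling):

```python
# Power-draw delta from the two measurements taken during the review.
total_watts = 465      # switch fully running, single PSU, my test configuration
fans_only_watts = 243  # only the five fans spinning, switching electronics down

# Everything that isn't a fan accounts for the difference between the readings.
electronics_watts = total_watts - fans_only_watts
print(electronics_watts)  # 222 watts attributed to the non-fan components
```

Note that this attributes the entire 222-watt difference to the non-fan components, which is only as accurate as the assumption that the fans draw the same power in both states.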
Having more active connections would obviously push that number up, but the consumption I measured seems to be in the same ballpark as what I read in the specs of 20-port 10G switches from other vendors.