While almost every part of a modern datacenter can be considered mission-critical, the network is the absolute foundation of all communications. That's why it must be designed and built right the first time. After all, the best servers and storage in the world can't do anything without a solid network.
To that end, here are a variety of design points and best practices to help tighten up this foundational layer.
Core considerations

The term "network" applies to everything from LAN to SAN to WAN. All these variations require a network core, so let's start there.
The size of the organization will determine the size and capacity of the core. In most infrastructures, the datacenter core is constructed differently from the LAN core. Take a hypothetical network that has to serve a few hundred to a thousand users in a single building, with a datacenter in the middle: it's common to find big switches at the core and aggregation switches at the edges.
Ideally, the core is composed of two modular switching platforms that carry data from the edge over gigabit fiber, located in the same room as the server and storage infrastructure. Two gigabit fiber links to a closet of, say, 100 switch ports are sufficient for most business purposes. In the event that they're not, you're likely better off bonding multiple 1Gbit links rather than upgrading to 10G for those closets. As 10G drops in price, this will change, but for now, it's far cheaper to bond several 1Gbit ports than to add 10G capability to both the core and the edge.
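Bonding uplinks is typically done with LACP. The fragment below is a minimal sketch in a Cisco-style CLI; the interface names, channel-group number, and port range are illustrative, not prescriptive.

```
! Bond two 1Gbit uplinks into one logical link using LACP
! (interface names and group number are illustrative)
interface range GigabitEthernet1/0/49 - 50
 description Uplinks to core
 channel-group 1 mode active
!
interface Port-channel1
 switchport mode trunk
```

The same channel-group configuration is applied on the core side, so the bundle negotiates end to end and survives the loss of any single member link.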
In the likely event that VoIP will be deployed, it may be beneficial to implement small modular switches at the edge as well, allowing PoE (Power over Ethernet) modules to be installed in the same switch as the non-PoE ports. Alternatively, deploying trunked PoE ports to each user is also a possibility. This allows a single port to be used for VoIP and desktop access tasks.
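A trunked port for combined phone and desktop use is usually configured with a dedicated voice VLAN. A sketch, again assuming a Cisco-style CLI with illustrative interface and VLAN numbers:

```
! Single user-facing PoE port carrying both desktop and voice traffic
! (interface name and VLAN IDs are illustrative)
interface GigabitEthernet1/0/10
 switchport mode access
 switchport access vlan 10    ! untagged data VLAN for the desktop
 switchport voice vlan 20     ! tagged voice VLAN for the phone
 spanning-tree portfast
```

The phone passes the desktop's untagged traffic through its built-in switch while tagging its own traffic onto the voice VLAN, so one cable and one switch port serve both devices.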
In the familiar hub-and-spoke model, the core connects to the edge aggregation switches with at least two links, either connecting to the server infrastructure with direct copper runs or through server aggregation switches in each rack. This decision must be made site by site, due to the distance limitations of copper cabling.
Either way, it's cleaner to deploy server aggregation switches in each rack and run only a few fiber links back to the core than to try to shoehorn everything into a few huge switches. In addition, server aggregation switches allow redundant connections to redundant cores, which greatly reduces the chance of losing server communications if a core switch fails. If you can afford it and your layout permits it, use server aggregation switches.
Regardless of the physical layout method, the core switches need to be redundant in every possible way: redundant power, redundant interconnections, and redundant routing protocols. Ideally, they should have redundant control modules as well, but you can make do without them if you can't afford them.
Core switches will be responsible for switching nearly every packet in the infrastructure, so they need to be sized accordingly. It's a good idea to make ample use of HSRP (Hot Standby Router Protocol) or VRRP (Virtual Router Redundancy Protocol). These allow two discrete switches to effectively share a single IP and MAC address, which is used as the default gateway for a VLAN. In the event that one core fails, those VLANs will still be accessible.
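The "shared MAC" is not arbitrary: both protocols derive a well-known virtual MAC address from the group number, which is what lets the surviving switch answer for the failed one without hosts re-learning ARP entries. The sketch below shows how those addresses are composed (the OUI prefixes are defined by the protocols; the helper functions themselves are illustrative, not a vendor API).

```python
# Sketch: how HSRP and VRRP derive the shared virtual MAC address
# that both core switches answer for. Only the group/VRID byte varies;
# the prefixes are fixed by each protocol's specification.

def hsrp_virtual_mac(group: int) -> str:
    """HSRP version 1 virtual MAC: 00:00:0c:07:ac:XX (XX = group, 0-255)."""
    if not 0 <= group <= 255:
        raise ValueError("HSRPv1 group must be 0-255")
    return f"00:00:0c:07:ac:{group:02x}"

def vrrp_virtual_mac(vrid: int) -> str:
    """VRRP virtual MAC: 00:00:5e:00:01:XX (XX = VRID, 1-255)."""
    if not 1 <= vrid <= 255:
        raise ValueError("VRRP VRID must be 1-255")
    return f"00:00:5e:00:01:{vrid:02x}"

print(hsrp_virtual_mac(10))   # 00:00:0c:07:ac:0a
print(vrrp_virtual_mac(1))    # 00:00:5e:00:01:01
```

Because the virtual MAC follows the group to whichever switch is active, failover is invisible to the hosts using that address as their default gateway.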
Finally, proper use of STP (Spanning Tree Protocol) is essential to proper network operation. A full discussion of STP and the first-hop redundancy protocols mentioned above is beyond the scope of this guide, but correct configuration of both will have a significant effect on the resiliency and proper operation of any Layer-3 switched network.
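At a minimum, pin the spanning-tree root to the primary core so the topology is deterministic rather than elected by chance. A sketch in a Cisco-style CLI; the mode, VLAN range, and priority values are illustrative:

```
! Make spanning-tree topology deterministic: the primary core is root,
! the secondary core takes over if the primary fails.
! (mode, VLAN range, and priorities are illustrative)
spanning-tree mode rapid-pvst
!
! On the primary core switch:
spanning-tree vlan 1-4094 priority 4096
! On the secondary core switch:
spanning-tree vlan 1-4094 priority 8192
```

Lower priority wins the root election, so the secondary core only becomes root when the primary is down.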
Minding the storage

Once the core has been built, you can take on storage networking. Although other technologies are available, when you link servers to storage arrays, your practical choice will probably boil down to a familiar one: Fibre Channel or iSCSI?
Fibre Channel is generally faster and delivers lower latency than iSCSI, but it's not truly necessary for most applications. Fibre Channel requires specific FC switches and costly FC HBAs in each server -- ideally two for redundancy -- while iSCSI can perform quite well with standard gigabit copper ports. Unless you have transaction-oriented applications such as large databases with thousands of users, you can probably choose iSCSI without affecting performance and save a bundle.