How to Build Solid, Reliable Networks

Fibre Channel networks are completely separate from the rest of the network. They exist all on their own, linked to the main network only via management links that do not carry any transactional traffic. iSCSI networks can be built using the same Ethernet switches that handle normal network traffic -- although iSCSI traffic should be confined to its own VLAN at the least, and possibly carried on a dedicated set of Ethernet switches that separate it from ordinary traffic for performance reasons.

Make sure to choose the switches used for an iSCSI storage network carefully. Some vendors sell switches that perform well with a normal network load but bog down with iSCSI traffic due to the internal structure of the switch itself. Generally, if a switch claims to be "enhanced for iSCSI," it will perform well with an iSCSI load.

Either way, your storage network should mirror the main network and be as redundant as possible: redundant switches and redundant links from the servers (whether FC HBAs, standard Ethernet ports, or iSCSI accelerators). Servers do not appreciate having their storage suddenly disappear, so redundancy here is at least as important as it is for the network at large.

Going virtual

Speaking of storage networking, you're going to need some form of it if you plan on running enterprise-level virtualization. The ability of virtualization hosts to migrate virtual servers across a virtualization farm absolutely requires stable, fast central storage. This can be FC, iSCSI, or even NFS in most cases, but the key is that every host server can access a reliable central storage network.
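
As a quick illustration, a script along these lines could be run from every host in the farm to confirm that each one can at least reach the shared storage before you depend on it for live migration. This is a minimal sketch: the portal and server addresses are hypothetical, it assumes iSCSI on TCP 3260 and NFS on TCP 2049, and it tests basic reachability only, not multipath health, authentication, or throughput.

```python
#!/usr/bin/env python3
"""Quick reachability check for shared storage from a virtualization host.

The addresses below are hypothetical examples; substitute your own. This only
verifies that the TCP port answers -- it says nothing about multipathing,
latency, or authentication.
"""
import socket

# Assumed example targets: an iSCSI portal (TCP 3260) and an NFS server (TCP 2049).
STORAGE_TARGETS = [
    ("192.0.2.10", 3260),  # iSCSI portal (example address)
    ("192.0.2.20", 2049),  # NFS server (example address)
]

def reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host, port in STORAGE_TARGETS:
        status = "OK" if reachable(host, port) else "UNREACHABLE"
        print(f"{host}:{port} {status}")
```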

Networking virtualization hosts isn't like networking a normal server, however. While a server might have a front-end and a back-end link, a virtualization host might have six or more Ethernet interfaces. One reason is performance: A virtualization host pushes more traffic than a normal server for the simple reason that dozens of virtual machines may be running on a single host. The other reason is redundancy: With so many VMs on one physical machine, you don't want a single failed NIC to take a whole group of virtual servers offline at once.

To meet these demands, virtualization hosts should be built with at least two dedicated front-end links, two back-end links, and, ideally, a single management link. If this infrastructure will service virtual servers that live in semi-secure networks (such as a DMZ), it may be reasonable to add physical links for those networks as well, unless you're comfortable passing semi-trusted packets through the core as a VLAN. Physical separation is still the safest bet and less prone to human error. If you can physically separate that traffic by adding interfaces to the virtualization hosts, then do so.
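
To make that layout concrete, here is one way the interfaces on a single host might be mapped out, as a small Python sketch. The NIC names and role assignments are assumptions for illustration (the storage pair comes from the storage-network discussion earlier), and the tally at the end shows how quickly the switch-port count adds up.

```python
#!/usr/bin/env python3
"""Illustrative interface plan for one virtualization host.

The NIC names (eth0..eth6) and the role layout are assumptions made for this
sketch; adjust them to your hardware, and add a pair for DMZ traffic if you
choose physical separation over a VLAN through the core.
"""

# Role -> physical interfaces. Bonded roles get two NICs for redundancy;
# the management role is a single link.
INTERFACE_PLAN = {
    "front-end (bonded, trunked)": ["eth0", "eth1"],
    "back-end (bonded)": ["eth2", "eth3"],
    "storage": ["eth4", "eth5"],
    "management": ["eth6"],
}

def validate(plan):
    """Warn about any bonded role that lacks a redundant pair of NICs."""
    for role, nics in plan.items():
        if "bonded" in role and len(nics) < 2:
            print(f"WARNING: {role} has only {len(nics)} NIC(s)")

if __name__ == "__main__":
    validate(INTERFACE_PLAN)
    ports = sum(len(nics) for nics in INTERFACE_PLAN.values())
    print(f"Switch ports needed for this host: {ports}")
```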

Each pair of interfaces should be bonded using some form of link aggregation, ideally dynamic aggregation via LACP (Link Aggregation Control Protocol, defined in IEEE 802.3ad), though static aggregation will do if that's all your switch supports. Bonding these links provides load balancing as well as failover protection at the link level and is an absolute requirement, especially since you'd be hard-pressed to find a managed switch that doesn't support it.
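
On a Linux-based virtualization host (a KVM box, for instance), the bond might be brought up roughly as follows. This is a minimal sketch assuming root privileges, iproute2, and NIC names eth0 and eth1, with the matching switch ports already configured for LACP; in practice you'd make the configuration persistent through your distribution's own network tooling rather than ad hoc commands.

```python
#!/usr/bin/env python3
"""Minimal sketch: create an LACP (802.3ad) bond from two NICs with iproute2.

Assumes a Linux host, root privileges, and NIC names eth0/eth1 -- all
illustrative. The switch ports on the other end must be configured for LACP.
"""
import subprocess

BOND = "bond0"
SLAVES = ["eth0", "eth1"]  # assumed NIC names; adjust to your host

def ip(*args):
    """Run an ip(8) command and raise if it fails."""
    subprocess.run(["ip", *args], check=True)

if __name__ == "__main__":
    # Create the bond in 802.3ad (LACP) mode.
    ip("link", "add", BOND, "type", "bond", "mode", "802.3ad")
    for nic in SLAVES:
        ip("link", "set", nic, "down")        # a NIC must be down before enslaving
        ip("link", "set", nic, "master", BOND)
    ip("link", "set", BOND, "up")
```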

In addition to bonding these links, the front-end bundle should be trunked with 802.1q. This allows multiple VLANs to exist on a single logical interface and makes deploying and managing virtualization farms significantly simpler. You can then deploy virtual servers on any VLAN or mix of VLANs on any host without worrying about virtual interface configuration, and you don't need to add physical interfaces to the hosts just to connect to a different VLAN.
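
Continuing the same hypothetical Linux example, a tagged subinterface for each VLAN carried on the trunk might be created like this. The VLAN IDs are placeholders, the switch side must present the same VLANs as a tagged 802.1q trunk, and in a real deployment the hypervisor's virtual switch or bridge sits on top of these interfaces.

```python
#!/usr/bin/env python3
"""Minimal sketch: add 802.1q tagged subinterfaces to the bonded front-end link.

Builds on the bond0 interface from the previous sketch; the VLAN IDs are
placeholders, and the upstream switch port-channel must trunk the same VLANs.
"""
import subprocess

TRUNK = "bond0"
VLAN_IDS = [10, 20, 30]  # example VLAN IDs; use your own

def ip(*args):
    """Run an ip(8) command and raise if it fails."""
    subprocess.run(["ip", *args], check=True)

if __name__ == "__main__":
    for vid in VLAN_IDS:
        sub = f"{TRUNK}.{vid}"
        # Create a tagged subinterface for each VLAN carried on the trunk.
        ip("link", "add", "link", TRUNK, "name", sub, "type", "vlan", "id", str(vid))
        ip("link", "set", sub, "up")
```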

The virtualization host storage links don't necessarily need to be either bonded or trunked unless your virtual servers will be communicating with a variety of back-end storage arrays. In most cases, a single storage array will be used, and bonding these interfaces will not necessarily result in performance improvements on a per-server basis.

However, if you require significant back-end server-to-server communication (between front-end Web servers and back-end database servers, for example), it's advisable to dedicate that traffic to a specific set of bonded links. They will likely not need to be trunked, but bonding them will again provide load balancing and redundancy on a host-by-host basis.

While a dedicated management interface isn't truly a requirement, it can certainly make managing virtualization hosts far simpler, especially when modifying network parameters. Modifying links that also carry the management traffic can easily result in a loss of communication to the virtualization host.

So if you're keeping count, you can see how you might have seven or more interfaces in a busy virtualization host: two front-end, two back-end, two storage, and one management link. Obviously, this increases the number of switchports required for a virtualization implementation, so plan accordingly. The increasing popularity of 10G networking -- and the dropping cost of 10G interfaces -- may let you drastically reduce the cabling requirements, down to a pair of trunked and bonded 10G interfaces per host plus a management interface. If you can afford it, do it.

Read more about network setup and management in InfoWorld's free PDF report, "Networking Deep Dive," including:

  • Wide-area networking
  • Securing the network
  • Network monitoring
This article, "Everything you need to know about building solid, reliable networks," was originally published at InfoWorld.com. Follow the latest developments in networking at InfoWorld.com.

