10 Tips to Boost Your Company Network

Know Your Apps

Infrastructure performance monitoring will only get you so far. All the computing and storage resources that you are offering up on your network are being consumed by your applications. For too many of us, those applications form something akin to a black hole -- we can easily observe their effects on our infrastructure, but it's often difficult to see inside them to know what's going on.

Many IT shops are content to let software vendors install and implement the applications on their networks; after all, that's less work for IT. But be careful -- you're on the hook when the network later slows to a crawl.

Spend time testing your apps with an eye to uncovering their soft spots. Whether it's a particularly expensive stored database procedure that gets called when users log in or a massive performance slowdown during third shift when backups kick off, you need to know ahead of time where your likely performance drains reside.

To accomplish this, insist on testing new applications in your infrastructure before they are purchased. Pay close attention to the amount of resources used as you test and project how much performance the application will require under real-life production loads. This kind of testing can uncover severe architectural flaws in the application that may make it inappropriate for your environment. Better to know that in advance than to find yourself fending off users armed with torches and pitchforks.
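
One low-tech way to get started is to time the suspect operations yourself under concurrent load. The sketch below is a minimal Python harness that hits a hypothetical login URL with a pool of simulated users and reports latency percentiles; the endpoint, user count, and request counts are placeholders you'd swap for whatever the application under test actually exposes.

# Minimal load-test sketch: hammer a hypothetical login endpoint with
# concurrent users and report latency percentiles. The URL and the user
# and request counts are placeholders; adapt them to the app under test.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor
from urllib import request

LOGIN_URL = "http://test-app.example.local/login"   # hypothetical endpoint
CONCURRENT_USERS = 50
REQUESTS_PER_USER = 20

def timed_login(_):
    latencies = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        try:
            with request.urlopen(LOGIN_URL, timeout=30) as resp:
                resp.read()
        except Exception:
            pass  # a real harness would count failures separately
        latencies.append(time.perf_counter() - start)
    return latencies

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    results = sorted(t for batch in pool.map(timed_login, range(CONCURRENT_USERS)) for t in batch)

print(f"median login: {statistics.median(results) * 1000:.0f} ms")
print(f"95th pct:     {results[int(len(results) * 0.95)] * 1000:.0f} ms")

Run something like this against a test instance while you watch CPU, memory, and disk counters on the back end; the point is to see which resource saturates first.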

Terabytes and Spindle Counts, Oh My

The past few years have seen explosive growth in disk capacity. With the advent of 2TB SATA disks, it's now possible to jam more than 10TB into a single two-rack-unit server. And that's great -- because now you need fewer disks, right? Not so fast.

It's crucial to understand that today's SATA disks share an important trait with their smaller predecessors: They're slow. While it may be possible to fit 2TB of data onto a single 7,200-rpm SATA disk, you'll still be limited to an average randomized transactional throughput of perhaps 80 IOPS (I/O operations per second) per disk. Unless you're storing a mostly static data boneyard, be prepared to be thoroughly unhappy with the performance you'll get out of these new drives as compared to twice the number of 1TB disks.

If your applications require a lot of randomized reads and writes -- database and email servers commonly fit this bill -- you'll need a lot of individual disks to obtain the necessary transactional performance. While huge disks are great for storing less frequently used data, your most prized data must still sit on disk arrays made up of faster and smaller disks.
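
The spindle math is simple enough to put in a script. This sketch uses rough rule-of-thumb IOPS figures per disk type (estimates, not vendor specs) to show how quickly the required disk count climbs for a random workload; the 2,000-IOPS example is purely illustrative.

# Back-of-the-envelope spindle math: how many disks does a given random
# workload need? The per-disk IOPS figures are rough rules of thumb.
import math

IOPS_PER_DISK = {
    "7,200-rpm SATA": 80,
    "10,000-rpm SAS": 130,
    "15,000-rpm SAS/FC": 180,
}

def disks_needed(required_iops, disk_type):
    return math.ceil(required_iops / IOPS_PER_DISK[disk_type])

# A modest database server pushing 2,000 random IOPS:
for disk_type in IOPS_PER_DISK:
    print(f"{disk_type}: {disks_needed(2000, disk_type)} disks")

At 2,000 IOPS that works out to roughly 25 SATA spindles versus about a dozen 15,000-rpm disks, which is why capacity per drive tells you so little about transactional performance.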

Beware the 10-Pound Server in the 5-Pound Bag

Virtualization has to be just about the coolest thing to happen to the enterprise datacenter in a long time. It offers a multitude of manageability and monitoring benefits, scales cleanly, makes disaster recovery simpler than ever before, and dramatically decreases the number of physical servers you need chewing up power and spewing out heat.

If used incorrectly, however, virtualization technology can shoot you in the foot. Remember, virtualization isn't magic. It can't create CPU, memory, or disk IOPS out of thin air.

As you grow your virtualization infrastructure, it should be fairly easy to keep tabs on CPU and memory performance. Any virtualization hypervisor worth its salt will give you visibility into the headroom you have to work with. Disk performance, on the other hand, is tougher to track and more likely to get you into trouble as you push virtualization to its limits.

By way of example, let's say you have a hundred physical servers you'd like to virtualize. They're all essentially idling on three-year-old hardware and require 1GHz of CPU bandwidth, 1GB of memory, and 250 IOPS of transactional disk performance.

You might imagine that an eight-socket, six-core X5650 server with 128GB of RAM would be able to run this load comfortably. After all, you have more than 20 percent of CPU and memory overhead, right? Sure, but bear in mind that you're going to need the equivalent of about 140 15,000-rpm Fibre Channel or SAS disks attached to that server to be able to provide the transactional load you'll require. It's not just about compute performance.
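
To make the arithmetic explicit, here is the same example worked out in a few lines of Python. The 2.66GHz clock speed and the 180 IOPS per 15,000-rpm spindle are assumed rule-of-thumb figures used for illustration.

# Consolidation sanity check for the example above: 100 idle physical
# servers, each needing 1GHz of CPU, 1GB of RAM, and 250 IOPS.
SERVERS = 100
CPU_GHZ_EACH, RAM_GB_EACH, IOPS_EACH = 1.0, 1.0, 250

host_cpu_ghz = 8 * 6 * 2.66      # eight sockets x six cores x assumed 2.66GHz clock
host_ram_gb = 128
iops_per_15k_disk = 180          # rough rule of thumb for a 15,000-rpm spindle

print(f"CPU supplied vs. required: {host_cpu_ghz / (SERVERS * CPU_GHZ_EACH):.0%}")
print(f"RAM supplied vs. required: {host_ram_gb / (SERVERS * RAM_GB_EACH):.0%}")
print(f"15K disks needed for IOPS: {SERVERS * IOPS_EACH / iops_per_15k_disk:.0f}")

The CPU and memory lines come out comfortably above 100 percent, but the last line is the one that surprises people: roughly 140 fast spindles just to match the disk workload those "idle" servers were quietly generating.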

To Dedupe or Not to Dedupe

As your data grows exponentially, it's natural to seek out tools that curb the use of expensive storage capacity. One of the best such examples is data deduplication. Whether you're deduplicating in your backup and archiving tier or directly on primary storage, there are massive capacity benefits to be had by weeding out duplicate data and storing only what is unique.

Deduplication is great for the backup tier. Whether you implement it in your backup software or in an appliance such as a virtual tape library, you can potentially keep months of backups in a near-line state ready to restore at a moment's notice. That's a better deal than having to dig for tape every time you have a restore that's more than a day or two old.

Like most great ideas, however, deduplication has its drawbacks. Chief among these is that deduplication requires a lot of work. It should come as no surprise that NetApp, one of the few major SAN vendors to offer deduplication on primary storage, is also one of the few major SAN vendors to offer controller hardware performance upgrades through its Performance Acceleration Modules. Identifying and consolidating duplicated blocks on storage requires a lot of controller resources. In other words, saving capacity comes at a performance price.
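
If you want a feel for where that work goes, the toy Python sketch below does block-level deduplication the naive way: split the data into fixed-size blocks, hash each one, and keep only the blocks you haven't seen before. Real arrays are far more sophisticated, but the fundamental cost (hashing and index lookups on every block) is the same, and on primary storage it lands squarely on the controller.

# Toy block-level deduplication: hash each fixed-size block and store
# only the unique ones. Illustrative only; real arrays do this with far
# more sophistication (and far more controller horsepower).
import hashlib
import io

BLOCK_SIZE = 4096

def dedupe(stream):
    store = {}   # hash -> block data (the pool of unique blocks)
    index = []   # ordered hashes, enough to reconstruct the original stream
    while True:
        block = stream.read(BLOCK_SIZE)
        if not block:
            break
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
        index.append(digest)
    return store, index

# A stream full of repeated content collapses to a handful of unique blocks.
data = io.BytesIO(b"A" * BLOCK_SIZE * 100 + b"B" * BLOCK_SIZE * 100)
store, index = dedupe(data)
print(f"{len(index)} logical blocks stored as {len(store)} unique blocks")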

Accelerate Your Backups

Backups are almost always slower than you'd like them to be, and troubleshooting backup performance problems is often more art than science. But there is one common problem that nearly every backup administrator faces at one time or another.

If you are backing up direct to tape, it's likely you're underfeeding your tape drives. The current generation of LTO4 tape drives (soon to be supplanted by LTO5) is theoretically capable of more than 120MBps of data write throughput, but few ever see that in real life. Mostly this is because there are very few backup sources that can support sustained read rates to match the tape drive's write performance. For example, a backup source consisting of a pair of SAS disks in a RAID1 array may be capable of raw throughput well beyond 120MBps in a lab environment, but for standard Windows-based file copies over a network, you'll rarely see rates greater than 60MBps. Because many tape drives become significantly less efficient when their buffers are empty, this becomes the root cause of most backup performance problems.

In other words, the problem isn't your tape drive; it's the storage in the servers you're backing up. Though there may not be a great deal you can do about this without investing heavily in a large, high-performance intermediate disk-to-disk backup solution, you have more options if you have a SAN. Though it will depend largely on the kind of SAN you have and what backup software you run, utilizing host backups -- which read directly from the SAN rather than over the network -- can be a great solution to this particularly vexing problem.
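
Before blaming the tape drive, it's worth measuring how fast a backup source can actually be read. The sketch below (the source path is a placeholder) walks a directory tree and reports sustained read throughput, which you can compare against your drive's rated write speed, roughly 120MBps for LTO4.

# Rough check of how fast a backup source can feed a tape drive: read
# every file under a directory and report sustained throughput in MB/s.
import os
import time

SOURCE = r"D:\data"        # placeholder; point this at the backup source
CHUNK = 1024 * 1024        # read in 1MB chunks

total_bytes = 0
start = time.perf_counter()
for root, _, files in os.walk(SOURCE):
    for name in files:
        try:
            with open(os.path.join(root, name), "rb") as f:
                while chunk := f.read(CHUNK):
                    total_bytes += len(chunk)
        except OSError:
            continue   # skip files we can't open
elapsed = time.perf_counter() - start
print(f"sustained read: {total_bytes / elapsed / 1_000_000:.0f} MB/s over {elapsed:.0f} seconds")

If the number that comes back is half of what the drive can swallow, no amount of tuning on the tape side will close the gap.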

Matt Prigge is contributing editor to the InfoWorld Test Center, and the systems and network architect for the SymQuest Group, a consultancy based in Burlington, Vt.

Paul Venezia is senior contributing editor of the InfoWorld Test Center and writes The Deep End blog.
