Guide to Application Acceleration and Traffic Optimization

Speed Safely: Application Acceleration Best Practices

By David Newman, Network World Lab Alliance, 10/1/07

Five times, 10 times, 20 times or more: The performance benefits from application acceleration are real, provided you understand what the technology can and can't do for your network. What follows are selected best practices for deploying application acceleration devices in enterprise networks.

Application acceleration takes many different forms. There's no one definition for "making an application go faster."

For some users, reducing WAN bandwidth consumption and cutting monthly circuit costs may be the key goals. For others, it's speeding bulk data transfer, such as in backup, replication, or disaster recovery scenarios. For yet others, improving response times for interactive applications is most important, especially if those transaction-based applications carry an organization's revenue.

Where to deploy application acceleration is another key consideration. Different types of acceleration devices work in the data center; in pairs, with devices deployed on either end of a WAN link; and, increasingly, as client software installed on telecommuters' or road warriors' machines. Identifying the biggest bottlenecks in your network will help you decide which parts of it can benefit most from application acceleration.

It's also worth considering whether application acceleration can complement other enterprise IT initiatives. For example, many organizations already have server consolidation plans under way, moving many remote servers into centralized data centers. Symmetrical WAN-link application acceleration devices can help here by reducing response time and WAN bandwidth usage, and giving remote users LAN-like performance. In a similar vein, application acceleration may help enterprise VoIP or video rollouts by prioritizing key flows and keeping latency and jitter low.

 

Many acceleration vendors recommend initially deploying their products in "pass-through" mode, meaning devices can see and classify traffic but they don't accelerate it. This can be an eye-opening experience for network managers.

The adage "you can't manage what you can't see" definitely applies here. It's fairly common for an enterprise to deploy acceleration devices with the goal of improving performance for two or three key protocols, only to discover its network actually carries five or six other types of traffic that would also benefit from acceleration. On the downside, it's unfortunately also all too common to find applications no one realized were running on the network.

The reporting tools of acceleration devices can help here. Most devices show which applications are most common on the LAN and WAN, and many present the data in pie charts or graphs that non-technical managers can easily understand. Many devices also report LAN and WAN bandwidth consumption per application, and in some cases per flow.
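
As a rough, vendor-neutral illustration of what that reporting boils down to, the following Python sketch rolls up hypothetical flow records (application name, direction and byte count) into a per-application bandwidth summary. The record format and the numbers are invented for this example; real devices pull equivalent data from their own classification engines.

    from collections import defaultdict

    # Hypothetical flow records: (application, direction, bytes) tuples.
    # Real devices would derive these from their own classification engines
    # or from NetFlow/sFlow exports.
    flow_records = [
        ("CIFS", "wan", 1_200_000_000),
        ("HTTP", "wan", 800_000_000),
        ("MAPI", "wan", 450_000_000),
        ("CIFS", "lan", 3_600_000_000),
        ("HTTP", "lan", 950_000_000),
        ("unknown-p2p", "wan", 300_000_000),
    ]

    def per_app_report(records):
        """Sum bytes per application and direction, then rank by WAN usage."""
        totals = defaultdict(lambda: {"lan": 0, "wan": 0})
        for app, direction, nbytes in records:
            totals[app][direction] += nbytes
        # Sort applications by WAN consumption, heaviest first.
        ranked = sorted(totals.items(), key=lambda kv: kv[1]["wan"], reverse=True)
        for app, t in ranked:
            print(f"{app:15s} WAN {t['wan']/1e9:6.2f} GB   LAN {t['lan']/1e9:6.2f} GB")

    per_app_report(flow_records)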

Understanding existing traffic patterns is critical before enabling acceleration. Obtaining a baseline is a mandatory first step in measuring performance improvements from application acceleration.

For products that do some form of caching, a corollary to classification is understanding the size of the data set. Many acceleration devices have object or byte caches, or both, often with terabytes of storage capacity. Caching can deliver huge performance benefits, provided data actually gets served from a cache. If you regularly move, say, 3 Tbytes of repetitive data between sites but your acceleration devices have only 1 Tbyte of cache capacity, then obviously caching is of only limited benefit. Here again, measuring traffic before enabling acceleration is key.
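
To make that arithmetic concrete, here is a minimal Python sketch of the back-of-the-envelope cache-sizing check, using the example figures above. It's a deliberately crude upper bound that ignores eviction policy, compression and object overhead, all of which real devices must account for.

    def cache_coverage(working_set_tb, cache_capacity_tb):
        """Fraction of the repetitive data set that could possibly be served from cache.

        A crude upper bound: ignores eviction policy, compression and object overhead.
        """
        return min(cache_capacity_tb / working_set_tb, 1.0)

    # Example from the text: 3 Tbytes of repetitive data, 1 Tbyte of cache.
    coverage = cache_coverage(working_set_tb=3.0, cache_capacity_tb=1.0)
    print(f"At best, about {coverage:.0%} of the data set can be cache hits")
    # -> At best, about 33% of the data set can be cache hits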

Even without acceleration devices deployed, it's still possible (and highly recommended) to measure application performance. Flow-export tools such as Cisco NetFlow and the multivendor sFlow standard are widely implemented on routers and switches, and on some firewalls; many network management systems also classify application types.
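
Those measurements feed the baseline comparison described above. The Python sketch below, with made-up numbers, shows the before-and-after arithmetic you would do once acceleration is switched on: speedup is simply baseline response time divided by accelerated response time, and bandwidth reduction is the complement of the WAN byte ratio.

    # Hypothetical measurements for one application, taken the same way
    # before and after acceleration is enabled (for example, from flow
    # exports or a synthetic transaction test).
    baseline = {"response_time_s": 12.0, "wan_bytes": 250_000_000}
    accelerated = {"response_time_s": 1.5, "wan_bytes": 40_000_000}

    speedup = baseline["response_time_s"] / accelerated["response_time_s"]
    bandwidth_reduction = 1 - accelerated["wan_bytes"] / baseline["wan_bytes"]

    print(f"Response time improvement: {speedup:.1f}x")
    print(f"WAN bandwidth reduction:   {bandwidth_reduction:.0%}")
    # -> 8.0x faster, 84% less WAN traffic for this made-up example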

 

If forced to choose between high availability and high performance (even really high performance), network architects inevitably opt for better availability. This is understandable – networks don't go very fast when they're down – and it has implications when deciding which acceleration device type to select.

WAN acceleration devices use one of two designs: in-line and off-path. An in-line device sits in the traffic path and forwards packets between interfaces, just as a switch or router does, optimizing traffic as it passes through. An off-path device may also forward traffic between interfaces, or it may simply receive traffic from another device such as a router; in either case it sends traffic through a separate module for optimization. Because this module does not sit in the network path, it can be taken in and out of service without disrupting traffic flow.

There's no single right answer as to which design is better. For sites that put a premium on the highest possible uptime, off-path operation is preferable. On the other hand, passing traffic to and from an off-path module may introduce extra delay, which may or may not be significant depending on the application. If minimal delay is a key requirement, in-line operation is preferable.
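
One way to reason about that tradeoff is in terms of added per-transaction delay. The figures in this Python sketch are invented; the point is only that the detour to an off-path module adds a small increment per round trip, so its total cost scales with how chatty the application is.

    def effective_delay_ms(wan_rtt_ms, offpath_detour_ms, round_trips):
        """Total delay for a transaction needing 'round_trips' exchanges,
        assuming each exchange also traverses the off-path module once per direction."""
        return round_trips * (wan_rtt_ms + 2 * offpath_detour_ms)

    # Hypothetical numbers: 60 ms WAN RTT, 0.5 ms detour to the optimization module.
    for rt in (1, 50, 500):  # bulk transfer vs. moderately chatty vs. very chatty protocol
        inline = effective_delay_ms(60, 0.0, rt)
        offpath = effective_delay_ms(60, 0.5, rt)
        print(f"{rt:4d} round trips: in-line {inline:7.0f} ms, off-path {offpath:7.0f} ms")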

Some devices combine both modes; for example, Cisco's WAAS appliances perform off-path optimization of Windows file traffic but use in-line mode to speed up other applications.

Note that "pass-through" operation is different from in-line or off-path mode. In case of power loss, virtually all acceleration devices go into pass-through mode and simply bridge traffic between interfaces. Devices in pass-through mode won't optimize traffic, but then again they won't cause network downtime either.

 

One of the most contentious debates in WAN application acceleration is whether to set up encrypted tunnels between pairs of devices or whether traffic should remain visible to all other devices along the WAN path. The answer depends upon what other network devices, if any, need to inspect traffic between pairs of WAN acceleration boxes.

Some vendors claim tunneling as a security benefit because traffic can be authenticated, encrypted, and protected from alteration in flight. That's true as far as it goes, but encrypted traffic can't be inspected – and that could be a problem for any firewalls, bandwidth managers, QoS-enabled routers or other devices that sit between pairs of acceleration devices. If traffic transparency is an issue, then acceleration without tunneling is the way to go.

On the other hand, transparency is a requirement only if traffic actually requires inspection between pairs of WAN acceleration devices. If you don't have firewalls or other content-inspecting devices sitting in the acceleration path, this is a nonissue.
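
Here is a toy Python example of why transparency matters to devices in the path: a firewall or QoS router classifies on the original addresses and ports, and a tunnel replaces those headers with the tunnel endpoints' own. The addresses, ports and the single rule below are all hypothetical.

    def classify(packet, rules):
        """Return the first rule name whose destination port matches the packet."""
        for name, dst_port in rules:
            if packet["dst_port"] == dst_port:
                return name
        return "default"

    rules = [("database", 1433), ("web", 80)]

    # The original flow, as an intermediate firewall sees it without tunneling.
    clear_packet = {"src": "10.1.1.5", "dst": "10.2.2.9", "dst_port": 1433}

    # The same flow wrapped in an acceleration tunnel: the firewall now sees only
    # the tunnel endpoints and the tunnel port, not the real application.
    tunneled_packet = {"src": "10.1.1.2", "dst": "10.2.2.2", "dst_port": 4050}

    print(classify(clear_packet, rules))     # -> database
    print(classify(tunneled_packet, rules))  # -> default (application is now opaque)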

Application acceleration is a worthy addition to the networking arsenal, but it's not a silver bullet. It's important to distinguish between problems that acceleration can and can't solve.

For example, acceleration won't help WAN circuits already suffering from high packet loss. While the technology certainly can help in keeping congested WAN circuits from becoming even more overloaded, a far better approach here would be to address the root causes of packet loss before rolling out acceleration devices.
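
A rough way to see why loss is the thing to fix first is the widely cited Mathis et al. approximation for steady-state TCP throughput, roughly MSS/RTT multiplied by 1.22 over the square root of the loss rate. The Python sketch below plugs in example numbers; it's an estimate of standard TCP behavior, not a prediction of what any particular acceleration product will deliver.

    from math import sqrt

    def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate):
        """Approximate ceiling on standard TCP throughput (Mathis et al. approximation)."""
        return (mss_bytes * 8 / rtt_s) * (1.22 / sqrt(loss_rate))

    # Example: 1,460-byte MSS, 80 ms round-trip time.
    for loss in (0.0001, 0.001, 0.01):
        bps = tcp_throughput_bps(1460, 0.080, loss)
        print(f"loss {loss:.2%}: about {bps/1e6:5.1f} Mbit/s per flow")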

Further, not all protocols are good candidates for acceleration. Some devices don't accelerate UDP-based traffic such as NFS (network file system) or multimedia. And even devices that do optimize UDP may not handle VoIP based on SIP (session initiation protocol) due to that protocol's use of ephemeral port numbers (this problem isn't limited to acceleration devices; some firewalls also don't deal with SIP). SSL is another protocol with limited support; in a recent Network World test only two of four vendors' products sped up SSL traffic.
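
The SIP issue comes down to the media ports being negotiated inside call setup rather than fixed in advance. As a simplified, hypothetical illustration, the Python snippet below pulls the RTP audio port out of a made-up SDP body; a device that classifies only on static port numbers never sees this negotiation.

    # Simplified, made-up SDP fragment of the kind carried inside a SIP INVITE.
    sdp_body = """\
    v=0
    o=alice 2890844526 2890844526 IN IP4 192.0.2.10
    c=IN IP4 192.0.2.10
    m=audio 49172 RTP/AVP 0
    """

    def rtp_port_from_sdp(sdp):
        """Extract the negotiated audio port from the m= line of an SDP body."""
        for line in sdp.splitlines():
            if line.strip().startswith("m=audio"):
                return int(line.split()[1])
        return None

    print(rtp_port_from_sdp(sdp_body))  # -> 49172: chosen per call, not a fixed port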

Despite these limitations, application acceleration is still a technology very much worth considering. The performance benefits and cost savings can be significant, even taking into account the few caveats given here. Properly implemented, application acceleration can cut big bandwidth bills while simultaneously improving application performance.
