Guide to Application Acceleration and Traffic Optimization
Speed Safely: Application Acceleration Best Practices
By David Newman, Network World Lab Alliance, 10/1/07
Five times, 10 times, 20 times or more: The performance benefits from application acceleration are real, provided you understand what the technology can and can't do for your network. What follows are selected best practices for deploying application acceleration devices in enterprise networks.
Application acceleration takes many different forms. There's no one definition for "making an application go faster."
For some users, reducing WAN bandwidth consumption and cutting monthly circuit costs may be the key goals. For others, it's speeding bulk data transfer, such as in backup, replication, or disaster recovery scenarios. For yet others, improving response times for interactive applications is most important, especially if those transaction-based applications carry an organization's revenue.
Deciding where to deploy application acceleration is also a consideration. Different types of acceleration devices work in the data center; in pairs with devices deployed on either end of a WAN link; and, increasingly, as client software installed on telecommuters' or road warriors' machines. Identifying the biggest bottlenecks in your network will help you decide which parts of your network can benefit most from application acceleration.
It's also worth considering whether application acceleration can complement other enterprise IT initiatives. For example, many organizations already have server consolidation plans under way, moving many remote servers into centralized data centers. Symmetrical WAN-link application acceleration devices can help here by reducing response time and WAN bandwidth usage, and giving remote users LAN-like performance. In a similar vein, application acceleration may help enterprise VoIP or video rollouts by prioritizing key flows and keeping latency and jitter low.
Many acceleration vendors recommend initially deploying their products in "pass-through" mode, meaning devices can see and classify traffic but they don't accelerate it. This can be an eye-opening experience for network managers.
The adage "you can't manage what you can't see" definitely applies here. It's fairly common for enterprises to deploy acceleration devices with the goal of improving performance for two or three key protocols – only to discover the network actually carries five or six other types of traffic that would also benefit from acceleration. On the downside, it's unfortunately also all too common to find applications you didn't realize existed on your network.
The reporting tools of acceleration devices can help here. Most devices show which applications are most common in the LAN and WAN, and many present the data in pie charts or graphs that easily can be understood by non-technical management. Many devices also report on LAN and WAN bandwidth consumption per application, and in some cases per flow.
Understanding existing traffic patterns is critical before enabling acceleration. Obtaining a baseline is a mandatory first step in measuring performance improvements from application acceleration.
For products that do some form of caching, a corollary to classification is understanding the size of the data set. Many acceleration devices have object or byte caches, or both, often with terabytes of storage capacity. Caching can deliver huge performance benefits, provided data actually gets served from a cache. If you regularly move, say, 3 Tbytes of repetitive data between sites but your acceleration devices have only 1 Tbyte of cache capacity, then obviously caching is of only limited benefit. Here again, measuring traffic before enabling acceleration is key.
Even without acceleration devices deployed, it's still possible (and highly recommended) to measure application performance. Tools such as Cisco NetFlow or the open sFlow standard are widely implemented on routers, switches, and firewalls; many network management systems also classify application types.
If forced to choose between high availability and high performance (even really high performance), network architects inevitably opt for better availability. This is understandable – networks don't go very fast when they're down – and it has implications when deciding which acceleration device type to select.
WAN acceleration devices use one of two designs: in-line and off-path. An in-line device forwards traffic between interfaces, same as a switch or router would, optimizing traffic before forwarding it. An off-path device may also forward traffic between interfaces or it may simply receive traffic from some other device like a router, but in either case it sends traffic through a separate module for optimization. Because this module does not sit in the network path, it can be taken in and out of service without disrupting traffic flow.
There's no one right answer to which design is better. For sites that put a premium on the highest possible uptime, off-path operation is preferable. On the other hand, there may be a higher delay introduced by passing traffic to and from an off-path module. The extra delay may or may not be significant, depending on the application. If minimal delay is a key requirement, in-line operation is preferable.
Some devices combine both modes; for example, Cisco's WAAS appliances perform off-path optimization of Windows file traffic but use in-line mode to speed up other applications.
Note that "pass-through" operation is different than in-line or off-path mode. In case of power loss, virtually all acceleration devices will go into pass-through mode and simply bridge traffic between interfaces. Devices in pass-through mode won't optimize traffic, but then again they won't cause network downtime either.
One of the most contentious debates in WAN application acceleration is whether to set up encrypted tunnels between pairs of devices or whether traffic should remain visible to all other devices along the WAN path. The answer depends upon what other network devices, if any, need to inspect traffic between pairs of WAN acceleration boxes.
Some vendors claim tunneling as a security benefit because traffic can be authenticated, encrypted, and protected from alteration in flight. That's true as far as it goes, but encrypted traffic can't be inspected – and that could be a problem for any firewalls, bandwidth managers, QoS-enabled routers or other devices that sit between pairs of acceleration devices. If traffic transparency is an issue, then acceleration without tunneling is the way to go.
On the other hand, transparency is a requirement only if traffic actually requires inspection between pairs of WAN acceleration devices. If you don't have firewalls or other content-inspecting devices sitting in the acceleration path, this is a nonissue.
Application acceleration is a worthy addition to the networking arsenal, but it's not a silver bullet. It's important to distinguish between problems that acceleration can and can't solve.
For example, acceleration won't help WAN circuits already suffering from high packet loss. While the technology certainly can help in keeping congested WAN circuits from becoming even more overloaded, a far better approach here would be to address the root causes of packet loss before rolling out acceleration devices.
Further, not all protocols are good candidates for acceleration. Some devices don't accelerate UDP-based traffic such as NFS (network file system) or multimedia. And even devices that do optimize UDP may not handle VoIP based on SIP (session initiation protocol) due to that protocol's use of ephemeral port numbers (this problem isn't limited to acceleration devices; some firewalls also don't deal with SIP). SSL is another protocol with limited support; in a recent Network World test only two of four vendors' products sped up SSL traffic.
Despite these limitations, application acceleration is still a technology very much worth considering. The performance benefits and cost savings can be significant, even taking into account the few caveats given here. Properly implemented, application acceleration can cut big bandwidth bills while simultaneously improving application performance.
A buyer's checklist for application acceleration
6 tips on how to pick a WAN-optimization winner
Clear Choice Tests by David Newman, Network World, 08/13/07
Faced with big bandwidth bills every month, it's tempting simply to buy the application accelerator with the best performance. Tempting, but not necessarily correct.
Performance matters, but it's far from the only consideration. Numerous other issues should factor into any buying decision, including functionality, network design, security and application support. What follows are six key questions buyers should take into account while considering which application-acceleration system will best suit their own environment.
1. What are my goals for application acceleration? All accelerators reduce the number of bits on the wire, but they do so with different goals.
Most devices focus on WAN bandwidth reduction. That's a worthy goal when links are overloaded and the cost of adding more WAN capacity is an issue. But reducing bandwidth isn't the only thing application-acceleration devices do.
In other situations, enterprises may need to speed bulk data transfers or improve response times for interactive applications. Examples of the former include backups and disaster-recovery processes, both of which require moving a lot of data in a hurry. (Silver Peak, in particular, focuses on speeding high-bandwidth applications.) Examples of the latter include databases and other transaction-processing applications where there's revenue tied to every transaction.
And organizations may have yet other needs for application acceleration beyond bandwidth reduction or faster transfer times. For example, a company that routinely distributes large videos or databases might want to locate data closer to customers using "prepopulation" or "prepositioning" capabilities, intelligent forms of caching that place frequently requested data on remote-site appliances.
Our advice: Make sure vendors understand your main goal for application acceleration -- bandwidth reduction, faster bulk transfers or response-time improvement – and let them pinpoint which of their systems come closest to achieving that goal.
2. What's the difference between caching and application acceleration? Caching – getting data close to the user – is the oldest trick in performance tuning, and it's still a great idea. Application-acceleration devices use caching, but they do so in fundamentally different ways than conventional Web caches, and their optimization toolkits extend well beyond caching.
Conventional caches work at the file level. That's fine for static content, but it's no help when something changes. Consider a manufacturing company that routinely distributes a 10GB parts database to multiple sites. If just one record changes, caches would need to retrieve the whole database again.
Application-acceleration devices work smarter: They retrieve only the changes. As user data flows through a pair of devices, each one catalogs the blocks of data it sees and makes an index of those blocks. Note that a "block" is not the same as a file; it's just a fixed amount of data.
The next time users request data, the devices compare their indexes. If nothing changed, the device closest to the user serves up the data. If something's new, the remote device retrieves the changed data, and both devices put new blocks and new indexes into their data stores. Over time, application-acceleration devices build up "dictionaries" that are hundreds of gigabytes or terabytes in size.
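The block-and-index scheme can be sketched in a few lines of Python. This is a toy model only – the block size, the 8-byte index format and the dictionary policy below are invented for illustration, and real devices use larger, often variable-size blocks:

```python
import hashlib
import random

BLOCK = 500  # toy fixed block size; production devices use larger, variable blocks

def transfer(data, dictionary):
    """Return the bytes that cross the WAN: full blocks the far side hasn't
    seen, or short 8-byte indexes for blocks it already has."""
    wire = 0
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        index = hashlib.sha256(block).digest()[:8]  # catalog entry for this block
        if index in dictionary:
            wire += len(index)        # far side has it: send just the index
        else:
            wire += len(block)        # new block: send it, then remember it
            dictionary[index] = block
    return wire

shared_dict = {}
random.seed(1)
original = bytes(random.getrandbits(8) for _ in range(4000))
first = transfer(original, shared_dict)               # cold dictionary: full payload
edited = original[:1000] + b"XYZ!" + original[1004:]  # change 4 bytes mid-file
second = transfer(edited, shared_dict)                # warm: 1 block + 7 indexes
print(first, second)
```

The second transfer of the barely-changed 4,000-byte file costs only 556 bytes on the wire: one changed 500-byte block plus seven 8-byte indexes.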
Dictionaries have three advantages over conventional caching. First, they require transmission only of changes to an object, not the entire object. Second, they still save bandwidth if an object is changed and then later changed back, because the original data still exists in the dictionaries. Finally, dictionaries are application-agnostic. In contrast, caches typically work only with a single application. All the devices we tested use dictionaries. Blue Coat's devices are also Web caches, while the Cisco devices are CIFS caches.
Acceleration devices perform many other optimizations as well. All compress blocks of data flowing between pairs of devices, with big bandwidth savings shown in our tests. But compression won't help with near-random data patterns, such as video or encrypted data (an exception is when clients repeatedly request the same near-random data).
These devices also tweak TCP and CIFS parameters to speed data transfer. At the TCP level, they make use of many high-performance options missing from Microsoft's stack. Some devices multiplex many client-side connections onto fewer persistent ones, reducing connection setup overhead. The devices we tested also optimize CIFS, Microsoft's infamously chatty file-transfer protocol. For sites looking to optimize Windows traffic, CIFS-optimization efficiency is a top concern.
Our advice: Make certain vendors are not pushing the notion that application-acceleration devices are "just" file caches; they're smarter about storing data and employ other optimizations to boot.
3. How do application-acceleration devices operate with the rest of my network? Imagine the effect on the network if an intermediate device were to terminate TCP connections, alter IP addresses and port numbers, and possibly scramble packet payloads. That's one way of describing exactly what many acceleration devices do. While these effects aren't always harmful and may be desirable, network transparency may be a concern.
Altering or hiding packet contents can cripple devices that need to see those contents, such as firewalls, bandwidth managers, and QoS-enabled routers. All the devices we tested optionally can be configured to run in a transparent mode, but might lose optimization efficiency in doing so. Of course, if other devices don't examine traffic contents, this isn't an issue.
Another design concern is whether devices operate inline or in so-called off-path mode. Cisco and Riverbed devices can be configured for off-path operation, meaning traffic passes up through a separate software module, while the device simply bridges nonoptimized traffic.
All devices tested fall back to passthrough mode if acceleration is disabled, a useful feature in maintaining availability, and all offer failover capabilities. To further enhance availability, the Blue Coat and Cisco devices also support clustering of multiple application-acceleration devices.
Our advice: Grill vendors on whether or not their product will "blind" other devices, such as firewalls or bandwidth managers, that need to see packet contents.
4. What are the security implications for application acceleration? On the plus side, acceleration devices can improve data privacy by setting up encrypted tunnels between sites. Because these tunnels carry all data (or some user-defined portion of data that's sensitive), there's no need to set up authentication and encryption on a per-application basis.
But these devices also keep copies of all user data, creating disclosure concerns and possible compliance issues for industries that require end-to-end encryption. Network managers will need to revise security policies to cover data stored on acceleration devices, not only while it's in use but when it's retired (to ensure its disks are wiped clean before disposal or recycling). The Cisco and Silver Peak devices have an elegant solution: They encrypt data on disk, rendering it useless to an attacker.
Our advice: Push potential vendors to explain how you could revise security policies as appropriate to deal with use and disposal of sensitive data stored on their application-acceleration devices.
5. What's my application mix? Acceleration-device vendors differ in terms of the number and type of applications they can optimize.
Not all application-acceleration devices optimize UDP-based traffic, including the Blue Coat appliances in our tests. Given that voice, video, file sharing and some backup traffic may use UDP, support for UDP-based applications will likely become more important over time.
For many enterprises, the mission-critical, revenue-bearing application is something developed in-house. Even the longest list of supported standard applications won't help here, but the application may still be a good candidate for TCP or other generic optimizations. Testing support for custom applications is critical in such situations.
Our advice: Force any potential vendor to explain how its product will directly address speeding up your organization's particular application mix.
6. Where are my users? Most acceleration today is done between sites, with a symmetrical pair of devices at either end of a WAN link. However, client software for road warriors and telecommuters is beginning to appear.
Blue Coat recently released client software that performs most, but not all, of the same functions as its hardware appliances. The client code handles caching, compression, and L4/L7 optimization, but it doesn't perform WAN-data reduction. Riverbed also has announced an acceleration client, and other vendors are likely to follow.
Our advice: If you need to speed traffic going out to a mobile workforce, press vendors about their plans to provide application-acceleration clients as well as site-to-site appliances.
WAN optimization: Money for nothing, clicks for free
Optimization technology can slash the cost of WAN bandwidth and improve app performance
Feature by Tim Greene, Network World, 01/13/07
WAN optimization technology is a way to save money - take that to the bank. For instance, custom machinery manufacturer Curt G. Joa in Sheboygan Falls, Wis., avoided the cost of adding a server to its German office by installing WAN-acceleration gear from Silver Peak.
The initial plan was to install a server in the German office for $100,000 including time, hardware and licenses. Instead, the company spent $17,000 on a Silver Peak NX appliance that solved the problem by speeding application performance over the WAN.
In another example, Riverbed gear that cost $20,000 reduced the need for bandwidth between a Millard Lumber site in Omaha and a second site in Lincoln, Neb., from three T-1s to one, saving $3,000 per month. That's a payoff within seven months.
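The payback arithmetic behind that claim is simple enough to check:

```python
gear_cost = 20_000       # one-time cost of the Riverbed gear (from the article)
monthly_saving = 3_000   # two of the three T-1 circuits dropped
print(gear_cost / monthly_saving)  # just under 7 months to break even
```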
In addition, WAN-optimization devices can improve transaction times between sites by 90% or more, reducing user frustration and in some cases making it possible for applications to perform at all over long distances.
Despite their impressive results, these devices often are overlooked, mainly because potential users worry they might remove chunks of data, block visibility of traffic through firewalls, or fail and stall out networks altogether, says Eric Siegel, an analyst with the Burton Group.
These concerns are groundless, Siegel says. "They should do it now. You can really save money on these things. And there's soft benefits like happier customers and more cooperation," he says.
In the grand scheme of things, a paltry amount of WAN optimization gear has been sold, says Mattias Machowinski, an analyst with Infonetics. This year, vendors will sell about $300 million worth worldwide.
Even so, the optimization gear is catching on. Sales this year have increased 27% compared with last year, and Machowinski projects double-digit increases in each of the next three years.
These devices help save money in two ways. First, they reduce the need for WAN bandwidth, which translates into buying smaller WAN pipes or staving off the need to add more.
Second, they enable businesses to consolidate servers in data centers, saving money by reducing the number of servers in corporate networks that need to be purchased, installed, maintained and repaired.
Optimization appliances use a variety of means - compression, caching, boosting TCP efficiency, protocol optimization, imposing QoS - to reduce the amount of traffic that crosses WAN circuits, compress the traffic that is sent and make sure it does so efficiently to avoid congestion.
Vendors - Cisco, Citrix, Expand Networks, F5 Networks, Juniper, Packeteer, Riverbed and others - use varying blends of WAN-optimizing technology, so one vendor's gear might do a better job on a particular traffic mix than another's. Consequently, the single most important thing customers can do is test gear made by more than one vendor on live networks.
Making Sense of a Crowded Technology Market
By David Newman, Network World Lab Alliance, 10/1/07
Confused about application acceleration? You've got company.
Dozens of vendors have entered this hot area, using another dozen or so techniques to reduce response time, cut bandwidth consumption, or both. As with any market where multiple sellers all speak at once, it's easy to get lost amid the claims and counterclaims. It's harder still when the wares for sale are new and unfamiliar to many buyers.
As always, education is key. This article describes the major types of acceleration devices; introduces the players; explains the workings of acceleration mechanisms; and looks into what the future holds for this technology.
Application acceleration products generally fall into one of two groups: Data center devices and symmetrical appliances that sit on either end of a WAN link. A third category, acceleration client software, is emerging, but it is in relatively early stages.
Application acceleration may be a relatively new market niche, but the technology behind it has been around for some time. For close to a decade, companies such as Allot Communications and Packeteer have sold bandwidth optimization appliances that prioritize key applications and optimize TCP performance (Packeteer also offers a symmetrical WAN device.) Other acceleration technologies such as caches, compression devices, and server load balancers have been around even longer. For the most part, though, the application acceleration market today is split between data-center and WAN-based devices.
The two device types differ not just in their location in the network but also in the problems they address and the mechanisms they use to solve these problems.
Data centers have high-speed pipes and numerous servers. Some also have multi-tiered designs, with Web servers arrayed in front of application and database servers. In this context, improving performance means reducing WAN bandwidth usage for outgoing and incoming traffic, offloading TCP and SSL overhead from servers, or eliminating servers altogether.
Prominent vendors of data-center acceleration devices include Array Networks, Cisco Systems, Citrix Systems, Coyote Point Systems, Crescendo Networks, F5 Networks, Foundry Networks, and Juniper Networks.
Data-center acceleration devices use a variety of mechanisms to achieve these ends. Weapons in their acceleration arsenal include TCP connection multiplexing, HTTP compression, caching, content load balancing, and SSL offload. Though more of a security measure than a performance feature, some data-center accelerators also rewrite content on the fly.
Of these mechanisms, connection multiplexing and HTTP compression do the most to reduce WAN bandwidth usage. Connection multiplexing is helpful when server farms field requests from large numbers of users. Even with load balancers in place, TCP connection overhead can be very significant. Acceleration devices lighten the load by multiplexing a large number of client-side connections onto a much smaller number of server-side connections. Previous test results show reductions of 50:1 or higher are possible.
Note that 50:1 multiplexing doesn't translate into a 50-fold reduction in servers. Other factors such as server CPU and memory utilization come into play. Still, multiplexing can lower overhead and speed content delivery.
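A toy model shows why pooling back-end connections cuts setup overhead so sharply. The wave sizes and pool size below are invented for illustration; real devices also juggle idle timeouts, per-server limits and HTTP semantics:

```python
def setups_needed(waves, width, pool_size):
    """Count server-side TCP setups when `width` requests are in flight at
    once and idle back-end connections are kept in a pool for reuse."""
    pool = 0       # idle persistent connections available for reuse
    setups = 0
    for _ in range(waves):
        reused = min(pool, width)
        setups += width - reused                 # open only what the pool can't cover
        pool = min(max(pool, width), pool_size)  # keep up to pool_size idle
    return setups

# 100 waves of 50 simultaneous requests each (5,000 client requests total):
no_reuse = setups_needed(100, 50, pool_size=0)   # every request opens a connection
pooled = setups_needed(100, 50, pool_size=50)    # only the first wave does
print(no_reuse, pooled)
```

With no reuse, 5,000 requests cost 5,000 TCP setups; with a 50-connection pool, only the first wave pays the setup cost, and every later request rides an existing connection.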
As its name suggests, HTTP compression puts the squeeze on Web payloads. Most Web browsers can decompress content; usually the stumbling block is on the server side, where compression is often disabled to reduce delay and save CPU cycles. Offloading this function from the servers onto the acceleration devices makes compression feasible.
Obviously, results vary depending on the compressibility of content. Since most sites serve up a mix of compressible text and uncompressible images, HTTP compression offers at least some bandwidth reduction, and may even be able to reduce the number of Web servers needed. One caveat: Compression won't help at all with seemingly random data streams, such as encrypted SSL traffic, and could even hurt performance.
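Python's standard zlib module makes the compressibility point easy to demonstrate. The payloads below are stand-ins: repetitive markup for typical Web text, seeded pseudo-random bytes for encrypted or already-compressed content:

```python
import random
import zlib

# Repetitive markup: the kind of payload HTTP compression handles well.
html = b"<tr><td>part-number</td><td>in stock</td></tr>\n" * 500

# Seeded pseudo-random bytes stand in for encrypted or pre-compressed content.
random.seed(7)
noise = bytes(random.getrandbits(8) for _ in range(10_000))

for label, payload in (("html", html), ("noise", noise)):
    squeezed = zlib.compress(payload)
    print(label, len(payload), "->", len(squeezed))
# The markup shrinks dramatically; the noise barely changes and may even grow.
```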
The remaining data-center application acceleration mechanisms help lighten the load on servers. Caching is one of the oldest tricks in the book. The acceleration device acts as a "reverse proxy," caching oft-requested objects and eliminating the need to retrieve them from origin servers every time. Caching can deliver very real performance gains, but use it with care: Real-time content such as stock quotes must never be cached. Object caching also won't help when a small part of a large object changes, for example when a single byte in a large document is deleted.
Content load-balancing is conceptually similar to previous generations of layer-4 load balancing, but in this case the decision about where to send each request is based on layer-7 criteria. For example, devices run SQL queries and other "health checks" on back-end databases to decide which server will provide the lowest response time.
SSL offload also helps speed delivery of secure communications. In some cases, acceleration devices act as SSL proxies; the encrypted tunnel ends on the acceleration appliance, with cleartext traffic flowing between it and the origin servers. This frees up the server from computationally expensive SSL encryption, and in many cases it can dramatically reduce server count in the data center. It's also possible to achieve end-to-end encryption through proxying; the acceleration device terminates a client's SSL session and then begins a new session with the server. Some performance gain is still possible through TCP multiplexing.
Because data-center acceleration devices are application-aware, they can also rewrite URLs or even traffic contents on the fly. Citrix recently announced the ability to replace credit card numbers in data streams with Xs, preventing theft by interception. Similarly, it's possible to rewrite URLs, either to make them shorter or more recognizable or to hide possible security vulnerabilities. On this latter point, an attacker may be less likely to probe for Microsoft Active Server Page vulnerabilities if a URL ending in ".asp" gets rewritten to end with ".html".
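A hypothetical sketch of that kind of in-flight rewrite, using a regular expression to mask card-like numbers. This shows the general idea only, not Citrix's actual implementation:

```python
import re

# Match anything shaped like a 16-digit card number, with optional
# space or hyphen separators between the four-digit groups.
CARD = re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b")

def mask(payload: str) -> str:
    """Replace all but the last four digits before the payload leaves
    the data center."""
    return CARD.sub(lambda m: "XXXX-XXXX-XXXX-" + m.group()[-4:], payload)

print(mask("Order 17 paid with 4111 1111 1111 1234."))
# -> Order 17 paid with XXXX-XXXX-XXXX-1234.
```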
For many enterprises, the biggest bang for the acceleration buck comes not in the data center, but on the dozens or hundreds of WAN circuits linking remote sites to data centers. A recent Nemertes Research survey found that monthly WAN fees alone account, on average, for 31 percent of total enterprise IT spending. In that context, even a small performance improvement can mean big savings.
That's not to suggest that symmetrical WAN devices provide small improvements. Recent Network World test results show WAN bandwidth reduction of up to 80 times (not 80 percent) and 20- to 40-times improvements in file-transfer rates. Considering the huge bite of the IT budget that WAN circuits take every month, symmetrical WAN acceleration devices are very much worth considering.
The technology certainly has gotten vendors' attention, with numerous companies offering this type of acceleration device. Players in this crowded field include Blue Coat Systems, Cisco Systems, Citrix Systems, Exinda Networks, Juniper Networks, Riverbed Technology, Silver Peak Systems, and Streamcore.
All these vendors offer appliances and/or acceleration modules large and small, with size depending on WAN link capacity and the number of connected sites and users. Devices generally include disks for caching (though caching may have a different meaning than the caching capability of data-center devices; more on that later). All seek to address the number one bottleneck in enterprise WAN traffic: the sluggish performance of the Microsoft Windows TCP/IP stack across the WAN.
Beyond those common capabilities, these devices may offer at least some of the following mechanisms to reduce WAN bandwidth usage or to speed data transfer: application- and transport-layer optimizations; pre-positioning (a method of storing content closer to users); data compression; read-ahead/write-behind methods; and protocol prioritization.
Application-layer awareness is the most potent WAN acceleration technique. All vendors in this area can optimize the two most common application-layer protocols in enterprise networks – CIFS (common Internet file system), used in Windows file transfers, and MAPI (messaging application program interface), used by Exchange email servers and Outlook clients.
Because CIFS is notoriously chatty, it's a terrible performer in the WAN. Even a simple operation like opening a directory and listing files can involve the transfer of hundreds or even thousands of CIFS messages, each one adding delay. Acceleration devices streamline and reduce CIFS chatter using a variety of proprietary techniques. The results are impressive: CIFS performance in Network World tests of four offerings in this space was 30 to 40 times faster than a baseline test without acceleration.
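Back-of-the-envelope arithmetic shows why round trips dominate. The message count and round-trip time below are illustrative, not measured values:

```python
# Illustrative numbers only: why chatty protocols crawl across the WAN.
rtt_ms = 80          # round-trip time on a typical long-haul link
messages = 1_500     # CIFS exchanges for one directory-browse operation

print(messages * rtt_ms / 1000)   # 120.0 seconds spent purely in round trips

# If an accelerator satisfies 95% of that chatter from the local device:
survivors = messages // 20        # the 5% that still crosses the WAN
print(survivors * rtt_ms / 1000)  # 6.0 seconds
```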
All vendors can optimize CIFS, MAPI, and other popular applications such as HTTP, but there's a considerable amount of specsmanship about how many applications are supported beyond the basics. Some vendors' data sheets claim to optimize more than 100 different applications, but often this means simply classifying traffic by TCP or UDP port number, and not necessarily doing anything specific with application-layer headers or payloads. Network managers are well advised to quiz prospective vendors on what specific optimizations acceleration devices offer for their organization's particular application mix.
Pre-positioning, another big bandwidth saver, is essentially an automated form of caching. Say a large electronics distributor regularly distributes a 75-Mbyte parts catalog to all 15,000 of its employees. Rather than have employees retrieve the catalog from headquarters over and over again, a better option is to store the catalog on each remote site's acceleration device and distribute it locally. Most caches can do that, but pre-positioning goes further by automating the catalog's distribution to all acceleration devices at remote sites. Especially for organizations with many large sites, the bandwidth savings can be very substantial.
Caching can take two forms: object caching, as discussed in the data-center section, and byte caching (called "network memory" by some vendors). With byte caching, each appliance inspects and caches the stream of data going by, and creates an index for each block of data it sees. The index may contain some form of hash uniquely identifying that block. The first time a device forwards data, the byte cache will be empty. On each successive transfer, the devices won't move the data again; instead, the sender just transmits the indexes, in effect saying "serve up block X that you already have stored in your cache."
Byte caching has two benefits. First, like object caching, it greatly reduces the amount of data traversing the WAN. Second, unlike object caching, it chops the byte stream into relatively small blocks rather than dealing with potentially huge objects. If only a small part of a very large file changes, the acceleration device just sends the updated data, not the whole object. Some devices, such as those from Blue Coat and Cisco, employ both forms of caching (in Cisco's case, for Windows file traffic only). Others, such as those from Riverbed and Silver Peak, rely on byte caching alone.
WAN acceleration devices also use data compression to reduce WAN bandwidth usage. This isn't just the HTTP compression seen in data-center devices; instead, symmetrical WAN devices compress the entire payloads of all packets, regardless of application. Compression works best for data streams composed mainly of text or other repetitive data; for near-random byte patterns (such as images or encrypted data), it's not much help.
Cisco's WAAS acceleration devices use "read-ahead/write-behind" techniques to speed up file transfers. While these techniques aren't new (server and PC designers have employed them for years), they can speed file transfers. Both techniques take advantage of the fact that enterprise data tends to be repetitive. Over time, devices learn that if a user requests block A of data, requests for blocks B and C are likely to follow. With that knowledge, the device can line up the next blocks and serve them out of memory instead of a much slower retrieval from disk. And speaking of disk operations, it takes a relatively long time to write data to a disk. Write-behind operation defers write requests until several have accumulated and then does them all at once. From the user's perspective, read-ahead and write-behind both translate into faster response times.
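Write-behind coalescing can be sketched as a small buffer class. This is a simplified illustration of the idea only; real implementations must also preserve ordering and guarantee durability on failure:

```python
class WriteBehindBuffer:
    """Toy write-behind: acknowledge small writes immediately, then flush
    them to disk in one coalesced batch."""

    def __init__(self, flush_at=8):
        self.pending = []
        self.flush_at = flush_at
        self.disk_ops = 0          # each flush models one slow disk operation

    def write(self, data):
        self.pending.append(data)  # acknowledge now; defer the disk work
        if len(self.pending) >= self.flush_at:
            self.flush()

    def flush(self):
        if self.pending:
            self.disk_ops += 1     # one coalesced write instead of many
            self.pending.clear()

buf = WriteBehindBuffer(flush_at=8)
for i in range(64):
    buf.write(f"block-{i}")
buf.flush()
print(buf.disk_ops)  # 8 disk operations for 64 writes
```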
Many acceleration devices (with the notable exception of Cisco's) also use various QoS mechanisms to prioritize key applications or flows during periods of congestion. Cisco also has a prioritization story, but it involves communication with routers, which then perform the actual queuing. For enterprises that already have enabled QoS features on their routers, this is a useful approach; for others just getting started with QoS it may make sense to consider using the acceleration device for queuing. As with application support, there is considerable variation among products as to which types of traffic acceleration devices can prioritize.
Client software, security, and device consolidation are likely to be the next major trends in application acceleration. Acceleration client software is already available from Blue Coat, and other vendors have clients in development. These software packages give PC-toting road warriors and telecommuters some, if not all, of the techniques used in acceleration appliances.
Security is another hot-button issue, with acceleration vendors adding support for SSL optimization (supported by Blue Coat and Riverbed in the recent Network World test, with plans announced by Cisco and Silver Peak). Cisco and Silver Peak devices also encrypt all user data stored on appliances, a key consideration for regulatory compliance in some industries.
If history is any guide, it's also likely switch and router vendors will fold at least some acceleration features into their devices. However, the market for standalone devices is highly unlikely to disappear anytime soon. Most switches and routers aren't TCP-aware today, let alone application-aware, and getting there will take time. Moreover, the form factors and component costs of acceleration devices (many have beefy CPUs, memory, and disks) argue against rapid consolidation, especially into low-end branch office switches and routers. For fully featured acceleration, standalone pairs of devices are likely to be the platform of choice for at least a few more years.