Mixing methods for Web management

By Thomas A. Powell, Network World Lab Alliance
Web applications are composed of a multitude of hardware and software technologies, any of which may slow or break the entire system from the end user's perspective. Application owners struggle daily to understand the details of individual failures across an entire Web system well enough to spot, reproduce and correct such errors. Web management systems aim to solve this challenging puzzle by piecing together data from a wide range of server, client and network components to show what went wrong and help professionals figure out what to fix.
Interestingly, the approaches to Web management systems vary nearly as much as the monitored components themselves. Offerings differ clearly in philosophy, but they can be roughly divided between those oriented toward systems monitoring and those focused on the application, or the user.
Systems-focused Web management offerings, available as standalone products or as services (such as Gomez's) that mimic end-user interactions, initially focused on the basic availability and delivery rates of any Web-based application. Beyond watching simple availability parameters, more recent products can also monitor for HTTP response codes and expected response text.
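The core of such a check is simple: fetch a page, verify the HTTP status code, and confirm that expected text appears in the response body. A minimal sketch in Python follows; the URL and expected text would be supplied by whoever configures the monitor, and the function name is illustrative, not taken from any product mentioned here.

```python
from urllib.request import urlopen
from urllib.error import URLError

def check_page(url, expected_text, timeout=10):
    """Synthetic availability check: return (ok, detail).

    ok is True only when the page answers with HTTP 200 and the
    body contains expected_text; detail explains any failure.
    """
    try:
        with urlopen(url, timeout=timeout) as resp:
            # Alert on any non-200 response code.
            if resp.status != 200:
                return False, "unexpected status %d" % resp.status
            body = resp.read().decode("utf-8", errors="replace")
            # Alert when the expected content is missing, which can
            # catch application errors hidden behind a 200 response.
            if expected_text not in body:
                return False, "expected text missing"
            return True, "ok"
    except URLError as exc:
        # DNS failure, connection refused, timeout, etc.
        return False, "request failed: %s" % exc.reason
```

A monitoring service would run a check like this on a schedule from multiple vantage points and alert when it returns a failure; the limits the next paragraph describes follow directly from how little context such a probe captures.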
While older, more traditional Web monitors can alert on many application failures, they are unfortunately limited to the specific parameters they test for. And even when they alert administrators to the problems they do detect, they usually lack the detail necessary to pinpoint, or sometimes even reproduce, the problem. Modern Web management systems, such as offerings from firms like PremiTech (premitech.com), add more detailed monitoring of the various layers of a Web application, including the database, Web server, load balancers and at least local network conditions. To attain this extra detail, data from traditional SNMP facilities is aggregated across the various layers, but very often agents must be installed on monitored systems or devices, which makes deployment more difficult.
Agentless approaches also exist, often collecting data via network taps to reconstruct the user experience and measure overall performance. Players such as Coradiant (coradiant.com) do an admirable job rebuilding application activity and measuring performance by passively watching network traffic. However, without correlating system data such as might be collected via logs, SNMP or agents, a passive network-focused solution alone may tell only part of the story.
Obviously the ultimate goal of Web management is to provide an essentially TiVo-like replay of the customer experience, outlining both the server and network state quickly and clearly. Even early-to-market players like TeaLeaf (tealeaf.com) aimed at this lofty goal, but it isn't a trivial undertaking.
Even if a single approach, or a combination of available approaches, could fully address client, server and network data collection with perfect replay, significant challenges would remain. Total Web application awareness will certainly be obscured by an explosion of data, so the value of such management tools lies more in knowing what to monitor, how to filter down to what is interesting, how to diagnose that data, and then what to do to fix the related problem. Regardless of the bright future of Web management tools, no combination of speedometers, replay cams and dummy lights will replace the need for truly skilled Web management professionals at the helm, so invest in that first and foremost.
This story, "Guide to Web Site Application and Performance Management" was originally published by Network World.