This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.
Software quality, as Churchill might say, is a riddle wrapped in a mystery inside an enigma. Even seasoned IT pros struggle with how to define and measure it.
Should it depend on technology? Should it depend on the type of methodology? And how do you make quality measurement practical -- easy, inexpensive and reasonably accurate?
Not surprisingly, these difficulties have resulted in an undue focus on the process by which software is built rather than the product that is the result of this process. We think we can define these activities precisely and measure them accurately so what people see and focus on are the activities required to create, enhance and manage software. What they can't see and hence don't focus on is the quality of the product produced by these activities. Out of sight, out of mind.
But it's pointless to have a flawless process leading to a flawed product. Unfortunately, that's the kind of failure we risk when we're not able to measure software quality.
This lack of visibility into software quality is at the root of many software management problems. Business owners can't understand why software costs so much, takes so long to build and costs even more to change. CIOs can't understand why estimates always drift radically. CFOs and CEOs can't understand why investment in IT is so high.
What's more, these folks can't get their arms around the value of software. What VALUE am I getting for the money I sink into software?
It's as if Michael Phelps tracks his time in the gym, the time it takes him to eat his meals, the time he spends on his Xbox, time walking his dog ... but bizarrely, not the time it takes him to swim the 100 meter butterfly!
The only indication of software quality is what you see on the outside -- how the software behaves in the real world. But external indicators of performance are too little too late. It would be nice to have an early-warning system that saves us from being purely reactive.
At this point you're thinking, "Don't we take care of quality by testing software?" But testing is at best a partial solution. Testing is not really designed to measure the structural quality of software -- the quality of an application's design and the fidelity of its implementation to this design.
Well-designed, well-architected and well-executed software is high-quality software. It's easy to work with, maintain and enhance to satisfy a pressing business need. It's not possible to write test cases for this kind of quality.
So we know that measuring software quality -- the quality of the product itself -- is good. It's the key to making the software black box transparent. But can we define it reasonably well and can we measure it without losing our shirts (or our minds)?
Yes, because there are products out there that measure software product quality. And obviously, they must be able to define software quality, which is a prerequisite to measuring it.
The more sophisticated of these software quality measurement products account for the contextual nature of quality. In an application, the quality of any component depends on which other components it is connected to and how it's connected to them. The quality of an application as a whole is thus more than simply the sum of the quality of its component parts. The single biggest mistake in software engineering is to miss this fact that application quality is contextual.
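To make the contextual point concrete, here is a minimal, hypothetical sketch. The component names, flaw counts and weighting rule are all invented for illustration; the only claim is the structural one from the paragraph above: the same local flaw counts for more when more of the application depends on the flawed component, so an application's score is not the sum of its components' local scores.

```python
# Hypothetical illustration: a component's quality depends on its context.
# Invented dependency graph: component -> components it calls.
calls = {
    "gui": ["order_service"],
    "order_service": ["db_layer", "pricing"],
    "db_layer": [],
    "pricing": ["db_layer"],
}

# Invented local flaw counts found by inspecting each component in isolation.
local_flaws = {"gui": 1, "order_service": 2, "db_layer": 3, "pricing": 0}

def contextual_score(component):
    """Weight a component's local flaws by how many components depend on it."""
    dependents = sum(1 for c, deps in calls.items() if component in deps)
    return local_flaws[component] * (1 + dependents)

# db_layer has two dependents, so its 3 local flaws count as 9.
app_score = sum(contextual_score(c) for c in calls)
print(app_score)            # contextual total
print(sum(local_flaws.values()))  # naive sum of local flaws -- a smaller number
```

The naive sum of local flaws (6) understates the contextual total (14), which is the whole argument: an isolated scan of each part misses how the parts are wired together.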
Those who do grasp that application quality is contextual often conclude that measuring it is too difficult and simply give up. But we don't have to. Because application quality is contextual, any system that measures it must satisfy five requirements:
1. Breadth: It must be able to handle multiple technologies, all the way from the GUI to the database. Most modern applications contain multiple languages and systems that are hooked together in intricate ways.
2. Depth: It must be able to generate complete and detailed architectural maps of the entire software application from GUI to database. Without this detailed architectural meta-model, it would be impossible to take application contextuality into account.
3. Explicit engineering knowledge: It must be able to check the entire application against hundreds of implementation patterns that encode software engineering best practices.
4. Actionable metrics: The quality metrics must not just inform; they must guide the improvement of software quality by showing what to do first, how to do it and what to do next -- that is, they must both prioritize and guide action.
5. Automation: Finally, it must be able to do all this in an automated way. No human or team of humans can do this, let alone do it in a reasonable amount of time.
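Requirements 3 through 5 can be sketched in a few lines. This is a toy, not any vendor's method: the rule names, patterns and severities below are invented stand-ins for the hundreds of encoded best practices a real tool would carry, and real analysis works on an architectural model rather than regular expressions. It shows only the shape of the idea: codified rules, applied automatically, with findings sorted so the most severe come first.

```python
import re

# Hypothetical best-practice rules: (name, pattern, severity).
# A production tool encodes hundreds of such implementation patterns.
RULES = [
    ("empty-catch-block", re.compile(r"catch\s*\([^)]*\)\s*\{\s*\}"), 3),
    ("select-star", re.compile(r"SELECT\s+\*", re.IGNORECASE), 2),
    ("magic-sleep", re.compile(r"Thread\.sleep\("), 1),
]

def scan(source_files):
    """Check every file against every rule; return violations sorted so
    the highest-severity findings come first (actionable ordering)."""
    findings = []
    for path, text in source_files.items():
        for name, pattern, severity in RULES:
            for _ in pattern.finditer(text):
                findings.append((severity, name, path))
    return sorted(findings, reverse=True)

# Usage with invented file contents:
files = {
    "Dao.java": 'rs = stmt.executeQuery("SELECT * FROM orders");',
    "Worker.java": "try { run(); } catch (Exception e) {}",
}
for severity, rule, path in scan(files):
    print(severity, rule, path)
```

Because the scan is mechanical, it runs over an entire codebase in minutes, which is the automation argument: no team of reviewers could apply every rule to every file, in context, on every build.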
It's important to measure software quality -- the quality of the product itself. It's equally important to measure it correctly, taking the contextual nature of software quality into account. Measurement is very useful in software development, but often it's better to have no measurement than wrong measurement.
CAST provides IT and business executives at 650 companies worldwide with precise analytics and automated software measurement to transform application development into a fact-based discipline and to prevent business disruption while reducing hard IT costs.
This story, "5 Requirements for Measuring Application Quality" was originally published by Network World.