To come to fruition, software projects take investment, support, nurturing and a lot of hard work and dedication. Good release management practices ensure that when your software is built, it will be successfully deployed to the people who want to use it. You have the opportunity to satisfy existing customers and hopefully to win new ones.
A major U.K. telecommunications provider had a problem. It needed to implement a business-critical supplier switch, which required it to reengineer its billing and account management systems. These systems had to be in place within three months; otherwise the organization risked losing hundreds of millions of pounds and a decline in its stock value. But the telecom's development processes were poor, and its release management was extremely problematic and inconsistent.
The company brought us in to help deliver the software within the time constraints and to turn around a failing release management process. Within three months, we'd released both the pending releases and two scheduled releases of the reengineered applications. Most important, we established a straightforward and lightweight release management process to ensure that future releases would happen on time and to the required quality. Follow along as we show you how we did it, including the mistakes we made.
1. Understand the Current State of Release Management.
You can't begin to fix something without understanding what it is, and how and where it is broken. Our first step in improving our client's release management system was to form a detailed picture of the current release process. We began with a number of walk-through sessions with key individuals involved in the software process.
From these sessions we determined that our starting point was pretty bad. When we joined the project, there was software still waiting to be released two months after being completed.
Test environments were limited and not managed, so they were regularly out of date and could not be used. Worse still, it took a relatively long time to turn around new environments and to refresh existing ones.
When we arrived on the scene, regression testing was taking up to three months to execute manually. It was usually dropped, significantly reducing the quality of any software that made it to release.
Overall, morale and commitment were very low. These people had never been helped to deliver great software regularly, and it had worn them down.
2. Establish a Regular Release Cycle.
Once we got a picture of the current state of the process, we set about establishing a regular release cycle.
If the engineering team is the heart of the project, the release cycle is its heartbeat. In determining how often to release into production, we had to understand how much nonfunctional testing was needed and how long it would take. This project required regression, performance and integration testing.
Establishing a release cycle is vital because:
It creates an opportunity to meaningfully discuss nonfunctional testing that the software may need.
It announces a timetable for when stakeholders can expect to get some functionality. If they know that functionality will be regularly released, they can get on with agreeing what that functionality will be.
It creates a routine with which all teams can align (including marketing and engineering).
It gives customers confidence that they can order something and it will be delivered.
Your release cycle must be as accurate as you can make it, not some pie-in-the-sky number that you made up during lunch. Before you announce it, test it out. There is nothing worse for a failing release process than more unrealistic dates!
We started out by suggesting a weekly cycle. That plan proved unfeasible; the client's database environment could not be refreshed quickly enough. Then we tried two-week cycles. There were no immediate objections from the participants, but it failed the first two times! In the end, two weeks was an achievable cycle, once we overcame some environment turnaround bottlenecks and automated some of the tests.
Finally we established a cycle whereby, every two weeks, production-ready code from the engineering team was put into system test. Then two weeks later, we released that code into production.
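To make the staggered cadence concrete, here is a minimal sketch of the calendar it produces. The function name, the start date and the two-week constant are our own illustrations, not the client's actual planning tooling; the point is that production day for one build coincides with the system-test handover of the next.

```python
from datetime import date, timedelta

CYCLE = timedelta(weeks=2)  # the cadence we eventually settled on

def release_calendar(first_handover: date, cycles: int):
    """Hypothetical sketch: every two weeks a build from engineering
    enters system test, and that same build goes to production two
    weeks after that, so the stages overlap."""
    calendar = []
    for i in range(cycles):
        into_test = first_handover + i * CYCLE
        into_production = into_test + CYCLE
        calendar.append((into_test, into_production))
    return calendar

for test_day, prod_day in release_calendar(date(2024, 1, 1), 3):
    print(f"system test: {test_day}  ->  production: {prod_day}")
```

Run against any start date, the listing shows each build's production date lining up with the next build's entry into system test, which is why the environment turnaround bottlenecks mattered so much.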
Remember: Your release cycle is not about when your customer wants the release. It's about when you can deliver it to the desired level of quality. Our customers supported our release cycle because we engaged them in determining it, but their wishes are only one consideration in setting the release regularity.
3. Get Lightweight Processes in Place. Test Them Early and Review Them Regularly.
If there is one single guiding principle in engineering (or reengineering) a process, it is to do a little bit, review your results and then do some more. Repeat this cyclic approach until you get the results you want.
Lightweight processes are those that do not require lengthy bureaucratic approvals or endless meetings to get agreement. They usually require only the minimum acceptable level of inputs and outputs. What they lack in bulk and bureaucracy, they make up for in response to change and popular adoption!
Underpinning this approach is the thorny issue of documentation. You need to record what you did and how you did it. Otherwise, what do you review and how do you improve?
We don't mean the kind of documentation that endangers rain forests and puts its readers to sleep. We mean documentation that people (technical and otherwise) can read and act on.
The engineering team chose Confluence, a commercial tool, to collaboratively document their work. They used the software to create minimal but effective documentation of what they were agreeing to build in every cycle of work. They recorded what they built, how they built it and what was required to make it work. We saw the value in this approach and rolled it out (both the approach and the tool) to everyone else involved in the process.
Initially, we suggested a sequence of tasks to release the software we got from the engineering teams. It covered how we took delivery from the source control management system, what the packages would be called, and how and on which platforms each element (executable code, database scripts, etc.) would be run. Then we did a dry run, using dummy code for each element. We tested our sequence, documenting what we did as we did it. This formed the basis of the installation instructions.
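The dry-run sequence above can be sketched as a simple ordered walk through the steps. The step names and command strings below are purely illustrative stand-ins (the client's actual tools and package names are not ours to reproduce); what matters is that each step is executed in order and logged, and the log becomes the first draft of the installation instructions.

```python
# Hypothetical release sequence: names and commands are illustrative,
# not the client's actual tooling.
RELEASE_STEPS = [
    ("fetch from source control", "scm export --tag RELEASE_TAG"),
    ("package executables",       "package --kind exe"),
    ("package database scripts",  "package --kind sql"),
    ("deploy to target platform", "deploy --env system-test"),
]

def dry_run(steps):
    """Walk the sequence with dummy commands, recording what was done.
    The resulting record seeds the installation instructions."""
    log = []
    for name, command in steps:
        log.append(f"step: {name} | command: {command}")
    return log

for line in dry_run(RELEASE_STEPS):
    print(line)
```

In practice the deployers then reran this sequence from the written record alone, amending it as they went, which is what turned a suggested task list into shared, tested documentation.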
The next step was to get the people who would be deploying the real release to walk through another dry run, using only our documentation. They extended, amended and improved our instructions as they went through. The process became more inclusive, with everyone contributing to the documentation; because they had been part of its definition, it was adopted more widely and with better quality.
After each release, we reviewed the process. We examined the documentation and identified changes made during the release. Every time, we looked at how the documentation could be improved and fed the enhancements back into the process.