Neuma White Paper:
CM: THE NEXT GENERATION of Release Management
We build software as part of a
system or as its own, entire product.
The goal is to meet the requirements established by the Customer, the
Market and/or the Cost/Benefits analysis.
Product releases are meant to move us from some starting point to our
ultimate product over a period of time:
months, years or even decades.
Release Management starts, not with the delivery of software, but with
the identification of what we're planning to put into the product. Managing the timing and content of releases
ensures that upgrades are not too onerous on the customer, and
that we stay in a competitive position with our products. Good release management processes will ensure
that you know what is going to go into your product, what actually went into
the product, and what changes the customer will realize upon upgrading.
Planning and Tracking the Release
Release Management starts with
identification of what is being developed for a release. You may have an Agile
development team or a more traditional model.
Either way, you need to plan what's going into your releases. The individual features or problem fixes must
be identified and tracked, that is, linked to the requirements/requests which
were responsible for them, and referenced from the changes that implement them.
With Agile development, your
planning is in the form of identification of features/problems and
prioritization of these. Every week or
two, you re-visit your to-do lists and adjust priorities. Your Agile development is trying to keep your
development team focused for the next couple of weeks. Your release management is dealing with what
you're going to release 3 to 12 months from now, and with the releases that follow.
For successful release management,
your efforts must be carefully traced to the features that are completed, and
last minute specification changes, typical in a fast-feedback agile iteration,
must be adequately captured along the way.
In the end, you will have a product, but you need to know what is in the
product and what is not. If your ALM
tools don't allow you to accurately track this along the way, your agile
gains will be lost to additional delays and inaccuracies in your release
process. If the ALM tools are intrusive,
they'll interfere with your lean operation.
In a more traditional schedule-based
development environment, release planning is done as part of the requirements
specification effort. Requirements are
identified and ranked by priority/weight.
identified and ranked by priority/weight.
As the customer requirements are turned into a Functional Specification, an initial
feature-by-feature effort estimate allows you to plan your time frames. Here again, as the plan is executed, it's
critical that the actual development changes reference the features and
problems being addressed. Your CM/ALM
tools, along with your peer review process, can ensure that this happens.
One Release per Customer?
Early on in a product's lifetime,
there's a tendency to customize each release to a specific customer's
requirements. This is a good thing, not
a bad thing. But if it's not handled
properly, as I've seen time and time again, you end up managing multiple
releases, one per customer, instead of managing your product.
It's important to recognize that the
development team and the product design are crucial release management
factors. Customization of releases is
always going to give you a competitive edge. However, the goal is to customize
at the customer's site, not in the development shop. If your developers are creating custom builds
for your customers, you will rapidly discover that your resource requirements
grow linearly with the number of customers you have, and that your product
complexity grows exponentially.
The development team will invariably
have some experience on board. Explain
to them the requirement that customizations will need to be done post-delivery
- at the customer's site. There is
actually a whole range of customization capabilities that will need to be
delivered. Some need to be done prior to
delivery. For example, platform-specific
builds must be created before delivery. The development team needs to be in the
habit of identifying which customizations can be held off as late as possible,
and designing the software that way.
Consider whether the customization is:
- a coding-time customization
- a build-time customization
- a run-time customization
Your team must have the goal of
moving customizations as far down the chain as possible. Each level you are able to move down the
chain will save you significantly on your build and release administration and
complexity. Eager programmers might say
that they have a nifty way of managing conditional compilations to put the
right combination of features in place.
An astute product manager will insist, instead, that the feature
selection will be done at run-time, based on the license keys that are
currently installed at the customer site, or on the contents of a feature
specification data file.
Product design is crucial here. From the outset, you must have a way of
enabling and disabling features at run-time if you have any sort of user
interface to your software. Design a
mechanism that will allow you to select feature configurations. This is not difficult to do. It can be a command line capability, a
check-box capability, or perhaps a table in your product's database (for which
there is, presumably, a user interface).
Once it's there, your development team simply needs to be told: check the feature checklist and disable this
feature if it's not active. From a
programmer's perspective, this is a simple task, and it helps designers to
focus on a more structured architecture.
The key is designing this capability into your product up front.
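As a minimal sketch of the run-time feature selection described above, the checklist can be as simple as a data file of enabled feature names. The file format and the function names here are illustrative assumptions, not taken from any particular product:

```python
# Minimal sketch of run-time feature selection via a feature
# specification data file. Names are illustrative assumptions.

def parse_features(text):
    """Parse the enabled feature names, one per line, from the
    contents of a feature specification data file."""
    return {line.strip() for line in text.splitlines() if line.strip()}

def is_enabled(feature, enabled_features):
    """The single check each developer guards an optional feature with."""
    return feature in enabled_features

# At startup the product would read the file shipped (or licensed)
# for this customer; here the contents are inlined for illustration.
enabled = parse_features("advanced-reporting\naudit-log\n")

if is_enabled("advanced-reporting", enabled):
    print("advanced reporting: on")
```

The same `is_enabled` check works whether the set comes from a data file, a license key decoder, or a table in the product's database; only `parse_features` changes.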
Manage the Superset, Deliver the Subset
I've gone into companies where they
have 18 different builds because they have 3 variants of 2 different sizes with
3 optional components. And because there
was no design initiative, it took a different build for each combination. It was easy for the designers to push the
requirements downstream. But then they
started complaining that builds weren't being turned around fast enough. To move this organization to a single build
solution took less than a month.
A colleague of mine has more
recently told me of another project where there are 200 customers with
virtually every one having its own release definition. They're scrambling for help as their sales
continue to climb. Don't start out on
the wrong foot because you'll tend to delay the move to the right foot until
you seriously impact your organization.
The key, once your feature selection
architecture is in place, is to manage the product superset, and build and deliver
subsets. If you have different
platforms, make sure that your configuration management is done on the
superset. This again requires support
from design. Platform specific items
need to go into separate files which can be included in the appropriate builds
based on the build request. Don't create a nightmare for yourself by forcing
the same name on each of the objects.
Label them according to their platform:
WindowsDefs.h, etc. Then dynamically
create a single file that includes the appropriate file based on the build
options. In this way, the CM remains
simple - you won't have different branches per release, per platform, per
option, etc. The files are managed just like all the others.
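A hypothetical build-time step can sketch this: generate one fixed-name header that includes the platform-specific file chosen by the build request. WindowsDefs.h comes from the example above; the other file names and the mapping are illustrative assumptions:

```python
# Hypothetical build step: generate a single PlatformDefs.h that
# includes the right platform-specific definitions file based on
# the build request. Names beyond WindowsDefs.h are illustrative.

PLATFORM_HEADERS = {
    "windows": "WindowsDefs.h",
    "linux": "LinuxDefs.h",
}

def platform_header_text(platform):
    """Return the contents of the generated PlatformDefs.h."""
    header = PLATFORM_HEADERS[platform]
    return ("/* Generated by the build - do not edit. */\n"
            f'#include "{header}"\n')

# The build would write this text to PlatformDefs.h before compiling,
# so the rest of the source includes one fixed name on every platform.
```

Because each platform's definitions live in their own distinctly named, ordinary files, the CM tool manages them like any other file: no per-platform branches required.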
Perhaps you have optional features
that are to be packaged, such as language tables or documentation. First ask
the question: Why can't we deliver all of them all of the time? If the answer really is that you can't, even
with your run-time feature checking (e.g., for security reasons), package the
optional features into separate files (again named for their options - not
variants of the same file) and tag those files using your CM system. From a CM perspective, you'll manage all of
them as components of your product. But
from a build perspective, you can specify the tags you need to select the appropriate optional components for each delivery.
This is really not rocket science,
but I continue to be amazed at how much administration an organization is
willing to accept as compared to putting 1% of the cost into doing it the right
way. And that administration effort
extends from development, through CM and into release management and ultimately to the customer.
Are We Ready To Release?
Now that you're doing everything
right, how do you know if you're ready to release the product? You need to be tracking the builds that are
sent to verification, tracking the problems that result from verification, and
validating the set of tests being run by verification.
Test cases need to be linked to the
features/requirements they are addressing.
Your ALM tools should be able to tell you in a single click what
requirements don't have test case coverage or what requirements are covered by
a set of test cases. You should be able
to select a particular build, ask what verification sessions have been run
against it and ask for the results:
what/how many problems were raised?
how many test cases failed? what
percentage of test cases were run? Your
ALM tool should be able to provide you with this picture for each of the
verification builds - so that you can see the progress from build to
build. If you compare this progress
curve from one release to another, you'll notice similarities, with the
greatest variance due to changes in your process and methods. You'll be able to predict when this release
will reach the quality required, based on this curve.
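The single-click coverage query described above amounts to a set difference over the requirement/test-case links. The data shapes here are assumptions for illustration; a real ALM tool holds these links in its own database:

```python
# Sketch of the coverage query: which requirements have no linked
# test case? Link data shapes are illustrative assumptions.

def uncovered_requirements(requirements, test_case_links):
    """Return requirements not covered by any test case."""
    covered = set()
    for reqs in test_case_links.values():
        covered |= reqs
    return sorted(requirements - covered)

requirements = {"REQ-1", "REQ-2", "REQ-3"}
test_case_links = {          # test case -> requirements it exercises
    "TC-10": {"REQ-1"},
    "TC-11": {"REQ-1", "REQ-3"},
}
```

Here `uncovered_requirements(requirements, test_case_links)` flags REQ-2 as having no coverage; the reverse query (which test cases cover a requirement) is the same links read the other way.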
Generic results such as this are
helpful. But there are two more things
you need to do before being able to release your product. And your ALM tools must be front and center
here once again. First of all, you need
your CRB to be analyzing problem reports coming in and identifying which
problems must be fixed prior to release.
One approach is to try to fix them all.
And that's OK, as long as you realize that fixing them all is going to
introduce additional problems which may take you a few months to uncover. The "fix them all" approach is best
done at the beginning of a release cycle so that the side effects can be
discovered before you release. Closer to
release date, you need to be very specific about which problems you fix. I've seen some very, very trivial looking
issues cause great problems after being fixed incorrectly - even when the fix
was reviewed and appeared simple. I
recommend you get into a habit of planning for a "service pack"
release following your initial release, and placing all non-critical problems
into that service pack, rather than trying to address everything prior to
release. (We're talking software here - the same does not apply to hardware, or
at least the weights and balances are different.) Often, the list of must-fix problems is
referred to as the "Gating" problems (i.e. gating the release).
The second thing you need to do is
to get the product into your customer hands.
Plan alpha and beta releases.
Give away the software if you have to, but make sure plain ordinary
every-day users are going to exercise the product. You will never be able to test all
scenarios. If you think so, consider why
NASA, with all of its tight development, review and verification processes,
still hits problems in flight, or on the launch pad. It's not because of process problems. It's because they have a finite window and
budget to complete a task, just like every development project has. And it's
also because the test environment is different from the specific user
environments. Getting the product into
the user's hands is the real way to evaluate the readiness of a release. It's a key part of release management. Track the issues found specifically against
field trials. You'll likely find that users rarely hit the problems your test
cases are there to catch. It's usually
some more obscure case that never made it as a test scenario, at least not in
the same run-time environment. Develop
the same progress curve for field trials as you did for verification. Build confidence that you're ready to release.
What Are We Delivering?
Now that you're ready to release,
you need to tell the industry what features you're releasing with this
release. If your ALM tools are adequate,
this should be a push-button task, at least down to the details - the
formatting may require a technical writer or graphics designer. But having an accurate list of what has gone
into the release comes from knowing what build you're releasing, how that
build was built and from what source, by tracing your build back to the changes
that were made (and peer reviewed to ensure that the changes covered exactly
what they said they would - physical audits), by tracing the changes back to
the specifications and by verifying that your product conforms to these specs
(functional audit - basically, for software, a verification run record).
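With those traceability links in place, the release feature list falls out of a simple query: build to changes, changes to features. The link tables below are illustrative assumptions standing in for the ALM database:

```python
# Sketch of the traceability chain: build -> changes -> features.
# The link tables are illustrative assumptions.

build_changes = {"build-7.2.0": ["chg-101", "chg-102"]}       # build -> changes
change_feature = {"chg-101": "FEAT-9", "chg-102": "FEAT-12"}  # change -> feature

def features_in_build(build):
    """List the features that a build's changes trace back to."""
    return sorted({change_feature[c] for c in build_changes[build]})
```

Anything missing from these links shows up immediately as a change with no feature, or a feature with no verified change: exactly what the physical and functional audits are checking.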
Some ALM tools span several
databases and building the complete picture may be non-trivial and even error
prone. That's not good. Often this results from gluing things
together yourselves, but without the experience that shows you where the glue
doesn't really hold up. This is where
you really find out if your processes are bullet proof and simple enough to
completely and correctly capture the data.
If not, you'll find out from one or more of your customers.
Then your tools need to support you
in packaging your release. Whether delivery is
done over the internet, directly to your in-house customer base, or on a
DVD, your tools need to ensure
that what you intend to deliver gets properly packaged, and delivered.
What Does the Customer Have?
It's one thing to tell the customer
what's in a release. It's another thing
to tell the customer what changes they will see. Perhaps they're using an older release or a
specific service pack level. Maybe you've done custom builds for the customer
to get them the features early or to fix a pile of urgent problems.
Every build that goes out the door
needs to be tracked in your ALM tool.
Furthermore, you need to know which build (or builds) every customer is
using. It may be fine to have some basic
rules, but then you have to at least track both the rules and the exceptions. When you deliver to your customer, you need to say:
- This is what you currently have
- This is what we're delivering to you
- Here are the requests you've asked for, by problem and by feature request
- Here are the requests we're satisfying
- Here are the requests we're not satisfying, and their current disposition
- Here's how to upgrade from where you are to the new release (ideally this is fully automated, but that's not always the case)
- Here's the incremental training that will most benefit your users
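The "what you have" versus "what we're delivering" part of that summary is a comparison of two build manifests. The manifest contents here are illustrative assumptions:

```python
# Sketch of the customer-facing upgrade summary: compare what the
# customer has installed against what is shipping. Manifest
# contents are illustrative assumptions.

def upgrade_delta(installed, shipping):
    """What the customer gains (and loses) by upgrading."""
    return {
        "new": sorted(shipping - installed),
        "removed": sorted(installed - shipping),
    }

installed = {"FEAT-1", "FIX-20"}                      # customer's build
shipping = {"FEAT-1", "FEAT-9", "FIX-20", "FIX-31"}   # new release
```

This only works if the ALM tool records which build each customer actually has, including custom builds and service-pack levels; the rules-plus-exceptions tracking above is what keeps the `installed` side accurate.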
You'll gain customer
confidence. That will give you reference
customers and that will increase your sales.
Make sure you can go to your customer site and identify exactly what
they have installed. Maybe one release,
maybe several. Maybe old releases, maybe
new releases. What variants, etc. Don't
trust that what you sent them is what they're using. Your product should be able to report its
exact contents, preferably in terms of one or more build identifiers that you
inserted into the deliverables, which you can use to trace exactly which lines
of code they have.
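Embedding that identifier can be as simple as a constant the build stamps into the deliverable, which the product echoes back on request. The identifier value here is a placeholder, not a real build:

```python
# Sketch of a build identifier embedded in the deliverable so the
# product can report exactly what it is. The value is a placeholder;
# the build would stamp in the real one.

BUILD_ID = "product-7.2.0+build.1234"  # injected at build time

def version_report():
    """What a '--version' style query might return."""
    return f"Installed build: {BUILD_ID}"
```

Given the build identifier, the traceability links in your CM tool take you from the customer's installed build all the way back to the exact lines of code it contains.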
Don't fall into the trap of thinking
release management comes at the end of development. It starts before development does and it
persists after delivery. Think about it
up front and you'll design your development processes appropriately. You won't be scrambling to address the
complexities of an ad-hoc release process.
Joe Farah is the President and CEO of Neuma Technology . Prior to co-founding Neuma in 1990 and directing the development of CM+, Joe was Director of Software Architecture and Technology at Mitel, and in the 1970s a Development Manager at Nortel (Bell-Northern Research) where he developed the Program Library System (PLS) still heavily in use by Nortel's largest projects. A software developer since the late 1960s, Joe holds a B.A.Sc. degree in Engineering Science from the University of Toronto. You can contact Joe by email at email@example.com