Neuma White Paper:
CM: THE NEXT GENERATION of Quality Requirements
Quality requirements are an elusive goal for any complex product
development effort. A strong process and good tools can help
requirements to march towards higher quality over time, especially when
appropriate strategies are used in working with the customer on
refining them. The product development team plays an important role in
establishing quality requirements. In a well oiled customer/developer
relationship, frequent feedback will go both ways. Unknowns will be
explored. Change will occur. It's important that the CM/ALM tools can
clearly track requirements and their changes in a way that helps
capture increasingly improved requirement baselines.
There are some common-sense strategies that can be employed to help ensure good requirements. For example, the development team should clearly understand both the customer's business and the industry, along with their process requirements. This will involve working closely with the customer and with industry experts.
Another bit of common sense is never to take a customer's requirements at face value. The customer may lack expertise in some areas, resulting in loose or inadequate requirements. Or, more frequently, you may have more advanced expertise about the industry, or about the technology and how it is advancing. You may also have experience on your side from working with other customers on similar requirements. Passing this feedback on to your customers will help you establish trust, and may open the door to using more generic product components to fulfill what otherwise might have been a more difficult set of requirements.
A third bit of common sense is to realize that the customer's understanding of the problem space will change over time, for various reasons. By adopting a process which supports change, you'll remove a number of obstacles when your customer comes at you with requirements creep.
An Incomplete Requirements Spec
Requirements specification is an inexact art. It's difficult, and at best you'll specify only a partial set of requirements prior to the start of development. You may think you have them down exactly, but read through your requirements document and you'll find something missing, or something that could be more completely specified. Perhaps for the design of a bolt, a window, or even a bicycle you might be able to fully and completely specify the requirements up front. But move to an aircraft, a communications system or even a PDA, and things get more difficult. There are several reasons for this.
First of all, technology is moving forward rapidly. The availability of a new component may heavily influence requirements. Instead of asking for a low-cost last-mile copper or fibre connection, one might ask for wireless access for the last mile, dramatically reducing costs. Instead of a 100MB mini hard drive, one might instead look at a 64MB SD card or thumb drive, dramatically improving reliability while eliminating the need for some of the damping features that guard against sudden impact.
Second, product development has moved from a "build this" to a "release this" process, fully expecting that new releases will address the ever-changing set of requirements. In fact, one of the key advantages of software is that you can change your requirements and still meet them after delivery. For example, it's nice that the Hubble telescope was originally designed to work with 3 gyroscopes, but it was even nicer that in 2005 a software change allowed it to run with just 2 gyroscopes. And engineers have another change that will allow it to work with just a single working gyro. So one of the key requirements coming out of this demonstration is flexibility - you often want your products to be flexible enough to be adapted after production. We've seen this with modems, cell phones and other common products where change was rapid over the years. The focus moves from ensuring that you have all of the right features up front, to ensuring that over time you can deliver more features to the existing product.
Thirdly, systems are more interconnected these days. You want this PDA to work with that office software, that communications software to use a particular set of protocols, and so on. The result is an endless wish list of interface requirements, ordered by real market demand. And the reasons go on.
Quality Requirements
So
what makes a good requirement? Sometimes it's easy. For example, a
new C compiler might have a requirement that it compile all existing
GNU C programs. Or a new communications system must be able to
communicate with existing Bell trunk lines. An airplane's navigation
system must be able to work with existing Air Traffic Control systems.
These easy requirements typically owe their ease of specification to the fact that there is an accepted standard already in place with which the product must comply. Standards are really a type of requirement. A new product design may choose to have compliance with a specific standard as a requirement, or not. The standards themselves often go through years of multi-corporate evolution.
More generally, if you want a
quality requirement, you really need to look at two things: (1) Can it
be clearly and completely expressed? (2) Is it testable?
If you can take your requirements and write test cases for them, you're more than halfway there. In fact, one of the benefits of standards is that they often have full test suites associated with them. And even if they don't, plugging them into the real world provides a very good test bed, when that can be done safely! This is closely related to the problem-reporting axiom: most of the work in fixing a problem is in being able to reproduce it. When you can reproduce a problem, you have both a clear specification of the problem and a means of testing the fix. It's the same with requirements: express a requirement, write a test case for it, and you've usually done most of the work.
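To make that concrete, here is a minimal sketch in Python of the earlier GNU C compiler requirement expressed directly as an executable test case. The compiler name and corpus path are hypothetical stand-ins, not a real tool:

    # A sketch: "the new compiler shall compile every program in the
    # existing corpus" rewritten as a runnable test. "newcc" and the
    # corpus path are hypothetical.
    import subprocess
    from pathlib import Path

    CORPUS = Path("tests/corpus")   # hypothetical corpus of existing C programs
    COMPILER = "newcc"              # hypothetical compiler under test

    def test_compiles_entire_corpus():
        failures = []
        for source in sorted(CORPUS.glob("*.c")):
            # Compile each program; a non-zero exit code is a failure.
            result = subprocess.run([COMPILER, "-c", str(source)],
                                    capture_output=True)
            if result.returncode != 0:
                failures.append(source.name)
        # The requirement is met only if every program compiles.
        assert not failures, f"failed to compile: {failures}"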
How Can We Help?
So
how can the CM/ALM profession help with producing quality
requirements? By providing tools that manage change to requirements,
and that manage test case traceability.
Let's take a look at the whole picture. This is what a typical requirements flow looks like:
[Diagram: typical requirements flow]
Now generally there are two different ways of dealing with requirements. One is to call Customer and Product requirements "Requirements" and to call System- and Design-level requirements activities or design tasks. The input requirements from Product Management are known as the "Requirements", while those things that the Design Team has authority and control over are known as activities/tasks. In this scenario (shown in the diagram), "Requirements" denotes the set of requirements placed on the Product Development team.
The other way of dealing with requirements is simply to treat requirements at different levels, based on the consumer of the requirements. So a Customer Requirements tree is allocated to the next level as a Product Requirements tree, which is allocated to the next level as a System Requirements tree, which is allocated to the Design Requirements tree. Each level has a different "owner" and a different customer. The actual levels, and their names, may differ somewhat from shop to shop. But implementation traceability runs from level to level: each level must completely cover the requirements of the preceding level.
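As a sketch of what that level-to-level allocation looks like to a tool (Python, with invented names and a deliberately tiny model, not any real CM system's schema), each requirement can record which higher-level requirements it is allocated from, and a coverage query flags anything at one level that the next level fails to cover:

    # A sketch with invented names: each requirement traces to the level
    # above via "allocated_from", enabling a completeness check.
    from dataclasses import dataclass, field

    @dataclass
    class Requirement:
        rid: str
        level: str       # e.g. "Customer", "Product", "System", "Design"
        text: str
        allocated_from: list = field(default_factory=list)  # parent rids

    def uncovered(parents, children):
        """Return parent requirements that no child requirement traces to."""
        covered = {rid for child in children for rid in child.allocated_from}
        return [p for p in parents if p.rid not in covered]

    customer = [Requirement("C1", "Customer", "Support offline use")]
    product = [Requirement("P1", "Product", "Local data cache", ["C1"])]
    print(uncovered(customer, product))   # [] -- C1 is fully covered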
I don't view these as two different ways of working - just two different ways of identifying requirements and design tasks. Both require full traceability. Both track the same information with respect to requirements. I prefer the former, because the type of data object, and the authority exercised over Product Development Team tasks, is very different from that for Customer/Product requirements. While the Development Team may have some input and interaction with the Customer and Product Management in establishing the Customer/Product Requirements, it is there that a contractual agreement is made: the team commits to a set of requirements in a given timeframe. After that, the allocation of requirements to tasks is usually organized to suit the development methodology, Agile, Waterfall or otherwise. A series of work breakdown structures (WBS), or perhaps one large one, is used to manage the tasks required to meet the commitment. In my books, it's preferable to have the "System/Design" level requirements tracked as the same object that project management is going to use in its WBS. And there is a whole set of project management information tracked against these requirements that is not tracked against the Customer/Product requirements.
Either way works as long as the lines of authority and the required information are respected.
So,
as a CM/ALM provider, we want to be able to manage changes to the
requirements at all levels, and we also want to be able to track and
support full traceability, not only to the test cases, but to the very
test results themselves.
Managing Requirements
To manage requirements we need to look at the process in the small and in the large.
In
the small, requirements will change. We therefore would like to have
revision control exercised over each requirement. In that way we can
see how each requirement has changed. However, just like for software,
a series of requirements will change together based on some input. So
we also want to be able to track changes to requirements in change
packages/updates that package multiple requirement modifications into a
single Change. We may want traceability on such a change back to the
particular input which caused the change.
The CM/ALM provider must provide both of these functions, with as much ease-of-use as possible.
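Here is a minimal sketch of both functions together (Python; the structures and names are invented for illustration, not any particular tool's schema). Each requirement keeps its own revision history, and a change package bundles several requirement revisions into one Change that traces back to its cause:

    # A sketch with invented structures: per-requirement revision history,
    # plus change packages grouping revisions under a single Change.
    from dataclasses import dataclass, field

    @dataclass
    class RequirementHistory:
        rid: str
        revisions: list = field(default_factory=list)   # successive texts

        def revise(self, new_text):
            self.revisions.append(new_text)
            return len(self.revisions)                  # new revision number

    @dataclass
    class ChangePackage:
        change_id: str
        reason: str                                     # the input that caused it
        items: list = field(default_factory=list)       # (rid, revision) pairs

    def apply_change(package, edits, repo):
        """Revise several requirements as one logical Change."""
        for rid, text in edits.items():
            rev = repo[rid].revise(text)
            package.items.append((rid, rev))

    repo = {"R1": RequirementHistory("R1", ["Initial text"]),
            "R2": RequirementHistory("R2", ["Initial text"])}
    cp = ChangePackage("CH-42", "Customer review feedback")
    apply_change(cp, {"R1": "Clarified text", "R2": "Tightened text"}, repo)
    print(cp.items)   # [('R1', 2), ('R2', 2)]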
In
the large, we have to understand that change will continually happen.
However, we don't want to be jerking the development team around every
time a requirement changes. So instead, we deal with releases of a
requirements tree. Perhaps this is done once per significant product
release; or maybe it's done for every iteration or every other
iteration of an Agile process. A good rule of thumb is to look at
product releases in general. There are new features that you'll target
for the next release, and there are some things that really have to be
retrofitted into the current release.
A development team can
deal with occasional changes to their current project schedule, but
don't try to fit all of release 3 into release 2 - that's requirements
creep in the extreme. Create an initial requirements tree for release
2, and baseline it. Review it with the development team so that they
can size it and give you a realistic estimate on the effort required.
Go back and forth a couple of times until you reach a contract between
Product Management and the Development Team. Then baseline it again
and deliver the Release 2 spec to development. Everything else goes
into release 3. If it truly cannot wait, you then need to renegotiate
the contract, understanding that either some functionality will be
traded off, or deadlines modified. Don't expect that more resources
can be thrown at the project - it won't work. Your process should
allow only one or two of the "must change" negotiations in a release,
and they need to be kept small. If not, you will need to completely
renegotiate your release 2 content and contract, after agreeing to
first absorb costs to date. Remember, if you reopen the can of worms,
you may never get it closed again and the costs are already incurred
and will continue to be incurred until such time as you reach a new
agreement or call off the release.
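To illustrate just the mechanics of this (Python, invented names; the negotiation itself is the hard part), a baseline can be as simple as a frozen snapshot of which revision of each requirement is in the contract, so editing can continue toward release 3 without disturbing the release 2 spec:

    # A sketch: a baseline as a frozen snapshot of requirement revisions.
    from dataclasses import dataclass, field

    @dataclass
    class RequirementHistory:
        rid: str
        revisions: list = field(default_factory=list)   # successive texts

    repo = {"R1": RequirementHistory("R1", ["Initial"]),
            "R2": RequirementHistory("R2", ["Initial", "Revised after sizing"])}

    def baseline(repo, label):
        """Record the current revision number of every requirement."""
        return {"label": label,
                "contents": {rid: len(h.revisions) for rid, h in repo.items()}}

    rel2_draft = baseline(repo, "REL2-draft")   # before the sizing review
    # ... review with the team, negotiate, revise requirements ...
    rel2_spec = baseline(repo, "REL2-spec")     # the contracted Release 2 spec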
Now if you happen to have an integrated tool suite that can easily tell you where you are in the project, what requirements have already been addressed or partially addressed, and so on, it may give you some leverage in telling the customer that they should just wait for the next release for their change in requirements. This is especially the case if you have a rather short release cycle. An iterative Agile process may proceed differently, so as to stay flexible to the customer's feedback as development proceeds. However, this is really just dealing with smaller requirement trees in each iteration.
I would caution against dealing with one requirements tree per iteration, however, as much advance thinking is accomplished by taking a look at a larger set of requirements and, after letting them soak into the brain(s) sufficiently, coming up with an architecture that will support the larger set. If the soak time is insufficient, or the larger set is unknown, it's difficult to establish a good architecture and you may just end up with a patchwork design.
Whatever the case, if both customer and product
team can easily navigate the requirements and progress, both
relationships and risk management will benefit. Make sure that you
have tools and processes in place to adequately support requirements
management. It is the most critical part of your product development,
culminating in the marching orders.
Traceability to Test Cases and to Test Results
Test
cases are used to verify the product deliverables against the
requirements. When the requirements are Customer requirements or
Product requirements, the test cases are referred to as Black Box test
cases, that is, test cases that test the requirements without regard to
how the product was designed. Black Box test cases can, and should, be
produced directly from the product specification as indicated by the
Product Requirements.
When the requirements are System/Design
requirements, the test cases are referred to as White Box test cases,
because they are testing the design by looking inside the product.
Typically White Box test cases will include testing of all internal
APIs, message sequences, etc.
A CM/ALM tool must be able to
track test cases back to their requirements. Ideally, you should be
able to click anywhere on the requirements tree and ask which test
cases are required to verify that part of the tree. Even better, you
should be able to identify which requirements are missing test cases.
This doesn't seem like too onerous a capability for a CM/ALM tool, until you realize that the requirements tree itself is under continual revision/change control, as is the set of test cases. So these queries need to be context dependent.
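As a sketch of such context-dependent queries (Python; the schema is invented for illustration): test cases record which requirements they verify, and both queries run against the requirement ids of a specific baseline, so the answers stay consistent while both trees keep changing:

    # A sketch with an invented schema: queries against one baseline's
    # requirement ids, using the test-case-to-requirement links.
    def cases_for_subtree(subtree_rids, test_cases):
        """Test cases needed to verify any requirement in the subtree."""
        return [t for t in test_cases
                if set(t["verifies"]) & set(subtree_rids)]

    def requirements_missing_cases(baseline_rids, test_cases):
        """Requirements in the baseline that no test case verifies."""
        verified = {rid for t in test_cases for rid in t["verifies"]}
        return [rid for rid in baseline_rids if rid not in verified]

    baseline_rids = ["R1", "R2", "R3"]   # requirement ids in one baseline
    test_cases = [{"tid": "T1", "verifies": ["R1"]},
                  {"tid": "T2", "verifies": ["R1", "R3"]}]
    print(cases_for_subtree(["R3"], test_cases))                  # T2 only
    print(requirements_missing_cases(baseline_rids, test_cases))  # ['R2']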
Going one step further, if the CM/ALM tool is to help with the Functional Configuration Audit, the results of running the test cases against a particular set of deliverables (i.e. a build) need to be tracked as well. Ideally the tool should allow you to identify which requirements failed verification, based on the set of failed test cases from the test run. It should also be able to distinguish between test cases which have passed and those which have not been run.
More advanced
tools will allow you to ask questions such as: (1) Which test cases
have failed in some builds, but subsequently passed? (2) What is the
history of success/failure of a particular test case and/or requirement
across the history of builds?
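A sketch of the underlying bookkeeping (Python, invented schema, and assuming build ids sort chronologically): pass/fail verdicts are recorded per test case per build, anything unrecorded counts as "not run", and both the failed-requirements query and the history queries derive from the traceability links:

    # A sketch with an invented schema: per-build test verdicts, plus the
    # queries described above, derived through the "verifies" links.
    runs = {  # build id -> {test case id: "pass" | "fail"}
        "build-101": {"T1": "fail", "T2": "pass"},
        "build-102": {"T1": "pass", "T2": "pass"},
    }
    verifies = {"T1": ["R1"], "T2": ["R1", "R3"]}

    def failed_requirements(build):
        """Requirements that failed verification in a given build."""
        return {rid for tid, verdict in runs[build].items()
                if verdict == "fail" for rid in verifies[tid]}

    def status(build, tid):
        """Distinguish passed/failed test cases from those never run."""
        return runs[build].get(tid, "not run")

    def fixed_after_failing():
        """Test cases that failed in some build but passed in a later one."""
        ordered = sorted(runs)   # assumes build ids sort chronologically
        fixed = set()
        for i, build in enumerate(ordered):
            for tid, verdict in runs[build].items():
                if verdict == "fail" and any(runs[later].get(tid) == "pass"
                                             for later in ordered[i + 1:]):
                    fixed.add(tid)
        return fixed

    print(failed_requirements("build-101"))   # {'R1'}
    print(status("build-101", "T3"))          # 'not run'
    print(fixed_after_failing())              # {'T1'}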
With change control integrated with requirements management, it should be relatively straightforward to put together incremental test suites that run only those test cases corresponding to new or changed requirements. This is a useful capability for initially assessing new functionality introduced into a nightly or weekly build.
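A minimal sketch of that selection (Python, invented inputs): compare two requirement baselines, then pick only the test cases that verify what was added or revised:

    # A sketch: incremental test selection from a baseline-to-baseline diff.
    def incremental_suite(old_baseline, new_baseline, verifies):
        """Test cases covering requirements added or revised between baselines."""
        changed = {rid for rid, rev in new_baseline.items()
                   if old_baseline.get(rid) != rev}
        return sorted(tid for tid, rids in verifies.items()
                      if changed & set(rids))

    old = {"R1": 1, "R2": 1}              # rid -> revision in old baseline
    new = {"R1": 2, "R2": 1, "R3": 1}     # R1 revised, R3 added
    verifies = {"T1": ["R1"], "T2": ["R2"], "T3": ["R3"]}
    print(incremental_suite(old, new, verifies))   # ['T1', 'T3']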
The ability to manage test cases and test results effectively, and to tie them to requirements, will result in requirements which are of higher quality. The feedback loop will help to ensure testability and will uncover holes and ambiguities in the requirements.
Joe Farah is the President and CEO of Neuma Technology. Prior to co-founding Neuma in 1990 and directing the development of CM+, Joe was Director of Software Architecture and Technology at Mitel, and in the 1970s a Development Manager at Nortel (Bell-Northern Research) where he developed the Program Library System (PLS) still heavily in use by Nortel's largest projects. A software developer since the late 1960s, Joe holds a B.A.Sc. degree in Engineering Science from the University of Toronto. You can contact Joe by email at farah@neuma.com