
Agile 2009 eBOOK

Sample Excerpts from the Latest Titles on Agile Methods

Buy 2 from informIT and Save 35%. Visit informit.com/agile and enter the coupon code AGILE2009 during checkout.

Agile 2009 eSampler
eBOOK TABLE OF CONTENTS

• On Software Podcast Channel
• InformIT
• Safari Books Online
• Learn Agile Web Page

Clean Code
9780132350884
Robert C. Martin
CHAPTER 1: Clean Code

Stand Back and Deliver
9780321572882
Pollyanna Pixton, Niel Nickolaisen, Todd Little, Kent McDonald
CHAPTER 1: Introduction to Key Principles

Agile Testing
9780321534460
Lisa Crispin, Janet Gregory
CHAPTER 1: What Is Agile Testing, Anyway?

Agile Project Management
9780321658395
Jim Highsmith
CHAPTER 4: Adapting Over Conforming

The Economics of Iterative Software Development
9780321509352
Walker Royce, Kurt Bittner, Michael Perrow
CHAPTER 3: Trends in Software Economics

The Software Project Manager's Bridge to Agility
9780321502759
Michele Sliger, Stacia Broderick
CHAPTER 5: Scope Management

Agile Adoption Patterns
9780321514523
Amr Elssamadisy
CHAPTER 5: Adopting Agile Practices

Real-Time Agility
9780321545497
Bruce Powel Douglass
CHAPTER 1: Introduction to Agile and Real-Time Concepts

Scaling Lean & Agile Development
9780321480965
Craig Larman, Bas Vodde
CHAPTER 7: Feature Teams

THESE BOOKS ARE AVAILABLE AT BOOKSTORES.



Many of the designations used by manufacturers and sellers to distinguish their products are claimed
as trademarks. Where those designations appear in this book, and Pearson Education was aware of a
trademark claim, the designations have been printed with initial capital letters or in all capitals.

The authors and publisher have taken care in the preparation of this book, but make no expressed
or implied warranty of any kind and assume no responsibility for errors or omissions. No liability
is assumed for incidental or consequential damages in connection with or arising out of the use of
the information or programs contained herein.

Copyright © 2009 by Pearson Education, Inc.


UPPER SADDLE RIVER, NJ | BOSTON | INDIANAPOLIS | SAN FRANCISCO | NEW YORK | TORONTO | MONTREAL | LONDON | MUNICH
PARIS | MADRID | CAPETOWN | SYDNEY | TOKYO | SINGAPORE | MEXICO CITY



Robert C. Martin

Clean Code
A Handbook of Agile Software Craftsmanship

Even bad code can function. But if code isn't clean, it can bring a development organization to its knees. Every year, countless hours and significant resources are lost because of poorly written code. But it doesn't have to be that way.

Noted software expert Robert C. Martin presents a revolutionary paradigm with Clean Code: A Handbook of Agile Software Craftsmanship. Martin has teamed up with his colleagues from Object Mentor to distill their best agile practice of cleaning code "on the fly" into a book that will instill within you the values of a software craftsman and make you a better programmer—but only if you work at it.

What kind of work will you be doing? You'll be reading code—lots of code. And you will be challenged to think about what's right about that code, and what's wrong with it. More importantly, you will be challenged to reassess your professional values and your commitment to your craft.

Clean Code is divided into three parts. The first describes the principles, patterns, and practices of writing clean code. The second part consists of several case studies of increasing complexity. Each case study is an exercise in cleaning up code—of transforming a code base that has some problems into one that is sound and efficient. The third part is the payoff: a single chapter containing a list of heuristics and "smells" gathered while creating the case studies. The result is a knowledge base that describes the way we think when we write, read, and clean code.

Readers will come away from this book understanding
• How to tell the difference between good and bad code
• How to write good code and how to transform bad code into good code
• How to create good names, good functions, good objects, and good classes
• How to format code for maximum readability
• How to implement complete error handling without obscuring code logic
• How to unit test and practice test-driven development

This book is a must for any developer, software engineer, project manager, team lead, or systems analyst with an interest in producing better code.

Available
• Book: 9780132350884
• Safari Online
• EBOOK: 0136083226
• KINDLE: B001GSTOAM

About the Author
Robert C. "Uncle Bob" Martin has been a software professional since 1970 and an international software consultant since 1990. He is founder and president of Object Mentor, Inc., a team of experienced consultants who mentor their clients worldwide in the fields of C++, Java, C#, Ruby, OO, Design Patterns, UML, Agile Methodologies, and eXtreme Programming.

informit.com/ph

1
Clean Code

You are reading this book for two reasons. First, you are a programmer. Second, you want
to be a better programmer. Good. We need better programmers.


This is a book about good programming. It is filled with code. We are going to look at
code from every different direction. We’ll look down at it from the top, up at it from the
bottom, and through it from the inside out. By the time we are done, we’re going to know a
lot about code. What’s more, we’ll be able to tell the difference between good code and bad
code. We’ll know how to write good code. And we’ll know how to transform bad code into
good code.

There Will Be Code


One might argue that a book about code is somehow behind the times—that code is no
longer the issue; that we should be concerned about models and requirements instead.
Indeed some have suggested that we are close to the end of code. That soon all code will
be generated instead of written. That programmers simply won’t be needed because busi-
ness people will generate programs from specifications.
Nonsense! We will never be rid of code, because code represents the details of the
requirements. At some level those details cannot be ignored or abstracted; they have to be
specified. And specifying requirements in such detail that a machine can execute them is
programming. Such a specification is code.
I expect that the level of abstraction of our languages will continue to increase. I
also expect that the number of domain-specific languages will continue to grow. This
will be a good thing. But it will not eliminate code. Indeed, all the specifications written
in these higher level and domain-specific languages will be code! It will still need to
be rigorous, accurate, and so formal and detailed that a machine can understand and
execute it.
The folks who think that code will one day disappear are like mathematicians who
hope one day to discover a mathematics that does not have to be formal. They are hoping
that one day we will discover a way to create machines that can do what we want rather
than what we say. These machines will have to be able to understand us so well that they
can translate vaguely specified needs into perfectly executing programs that precisely meet
those needs.
This will never happen. Not even humans, with all their intuition and creativity,
have been able to create successful systems from the vague feelings of their customers.
Indeed, if the discipline of requirements specification has taught us anything, it is that
well-specified requirements are as formal as code and can act as executable tests of that
code!
Remember that code is really the language in which we ultimately express the require-
ments. We may create languages that are closer to the requirements. We may create tools
that help us parse and assemble those requirements into formal structures. But we will
never eliminate necessary precision—so there will always be code.

Bad Code
I was recently reading the preface to Kent Beck’s
book Implementation Patterns.1 He says, “. . . this
book is based on a rather fragile premise: that
good code matters. . . .” A fragile premise? I dis-
agree! I think that premise is one of the most
robust, supported, and overloaded of all the pre-
mises in our craft (and I think Kent knows it). We
know good code matters because we’ve had to
deal for so long with its lack.
I know of one company that, in the late 80s,
wrote a killer app. It was very popular, and lots of
professionals bought and used it. But then the
release cycles began to stretch. Bugs were not
repaired from one release to the next. Load times
grew and crashes increased. I remember the day I
shut the product down in frustration and never
used it again. The company went out of business
a short time after that.
Two decades later I met one of the early employees of that company and asked him
what had happened. The answer confirmed my fears. They had rushed the product to
market and had made a huge mess in the code. As they added more and more features, the
code got worse and worse until they simply could not manage it any longer. It was the bad
code that brought the company down.
Have you ever been significantly impeded by bad code? If you are a programmer of
any experience then you’ve felt this impediment many times. Indeed, we have a name for
it. We call it wading. We wade through bad code. We slog through a morass of tangled
brambles and hidden pitfalls. We struggle to find our way, hoping for some hint, some
clue, of what is going on; but all we see is more and more senseless code.
Of course you have been impeded by bad code. So then—why did you write it?
Were you trying to go fast? Were you in a rush? Probably so. Perhaps you felt that you
didn’t have time to do a good job; that your boss would be angry with you if you took the
time to clean up your code. Perhaps you were just tired of working on this program and
wanted it to be over. Or maybe you looked at the backlog of other stuff that you had prom-
ised to get done and realized that you needed to slam this module together so you could
move on to the next. We’ve all done it.
We’ve all looked at the mess we’ve just made and then have chosen to leave it for
another day. We've all felt the relief of seeing our messy program work and deciding that a working mess is better than nothing. We've all said we'd go back and clean it up later. Of course, in those days we didn't know LeBlanc's law: Later equals never.

1. [Beck07].

The Total Cost of Owning a Mess


If you have been a programmer for more than two or three years, you have probably been
significantly slowed down by someone else’s messy code. If you have been a programmer
for longer than two or three years, you have probably been slowed down by messy code.
The degree of the slowdown can be significant. Over the span of a year or two, teams that
were moving very fast at the beginning of a project can find themselves moving at a snail’s
pace. Every change they make to the code breaks two or three other parts of the code. No
change is trivial. Every addition or modification to the system requires that the tangles,
twists, and knots be “understood” so that more tangles, twists, and knots can be added.
Over time the mess becomes so big and so deep and so tall, they can not clean it up. There
is no way at all.
As the mess builds, the productivity of the team continues to decrease, asymptotically
approaching zero. As productivity decreases, management does the only thing they can;
they add more staff to the project in hopes of increasing productivity. But that new staff is
not versed in the design of the system. They don’t know the difference between a change
that matches the design intent and a change that thwarts the design intent. Furthermore,
they, and everyone else on the team, are under horrific pressure to increase productivity. So
they all make more and more messes, driving the productivity ever further toward zero.
(See Figure 1-1.)

Figure 1-1 Productivity vs. time

The Grand Redesign in the Sky


Eventually the team rebels. They inform management that they cannot continue to develop
in this odious code base. They demand a redesign. Management does not want to expend
the resources on a whole new redesign of the project, but they cannot deny that productiv-
ity is terrible. Eventually they bend to the demands of the developers and authorize the
grand redesign in the sky.
A new tiger team is selected. Everyone wants to be on this team because it’s a green-
field project. They get to start over and create something truly beautiful. But only the best
and brightest are chosen for the tiger team. Everyone else must continue to maintain the
current system.
Now the two teams are in a race. The tiger team must build a new system that does
everything that the old system does. Not only that, they have to keep up with the changes
that are continuously being made to the old system. Management will not replace the old
system until the new system can do everything that the old system does.
This race can go on for a very long time. I’ve seen it take 10 years. And by the time it’s
done, the original members of the tiger team are long gone, and the current members are
demanding that the new system be redesigned because it’s such a mess.
If you have experienced even one small part of the story I just told, then you already
know that spending time keeping your code clean is not just cost effective; it’s a matter of
professional survival.

Attitude
Have you ever waded through a mess so grave that it took weeks to do what should have
taken hours? Have you seen what should have been a one-line change, made instead in
hundreds of different modules? These symptoms are all too common.
Why does this happen to code? Why does good code rot so quickly into bad code? We
have lots of explanations for it. We complain that the requirements changed in ways that
thwart the original design. We bemoan the schedules that were too tight to do things right.
We blather about stupid managers and intolerant customers and useless marketing types
and telephone sanitizers. But the fault, dear Dilbert, is not in our stars, but in ourselves.
We are unprofessional.
This may be a bitter pill to swallow. How could this mess be our fault? What about the
requirements? What about the schedule? What about the stupid managers and the useless
marketing types? Don’t they bear some of the blame?
No. The managers and marketers look to us for the information they need to make
promises and commitments; and even when they don’t look to us, we should not be shy
about telling them what we think. The users look to us to validate the way the requirements
will fit into the system. The project managers look to us to help work out the schedule. We

are deeply complicit in the planning of the project and share a great deal of the responsi-
bility for any failures; especially if those failures have to do with bad code!
“But wait!” you say. “If I don’t do what my manager says, I’ll be fired.” Probably not.
Most managers want the truth, even when they don’t act like it. Most managers want good
code, even when they are obsessing about the schedule. They may defend the schedule and
requirements with passion; but that’s their job. It’s your job to defend the code with equal
passion.
To drive this point home, what if you were a doctor and had a patient who demanded
that you stop all the silly hand-washing in preparation for surgery because it was taking
too much time?2 Clearly the patient is the boss; and yet the doctor should absolutely refuse
to comply. Why? Because the doctor knows more than the patient about the risks of dis-
ease and infection. It would be unprofessional (never mind criminal) for the doctor to
comply with the patient.
So too it is unprofessional for programmers to bend to the will of managers who don’t
understand the risks of making messes.

The Primal Conundrum


Programmers face a conundrum of basic values. All developers with more than a few years
experience know that previous messes slow them down. And yet all developers feel
the pressure to make messes in order to meet deadlines. In short, they don’t take the time
to go fast!
True professionals know that the second part of the conundrum is wrong. You will not
make the deadline by making the mess. Indeed, the mess will slow you down instantly, and
will force you to miss the deadline. The only way to make the deadline—the only way to
go fast—is to keep the code as clean as possible at all times.

The Art of Clean Code?


Let’s say you believe that messy code is a significant impediment. Let’s say that you accept
that the only way to go fast is to keep your code clean. Then you must ask yourself: “How
do I write clean code?” It’s no good trying to write clean code if you don’t know what it
means for code to be clean!
The bad news is that writing clean code is a lot like painting a picture. Most of us
know when a picture is painted well or badly. But being able to recognize good art from
bad does not mean that we know how to paint. So too being able to recognize clean code
from dirty code does not mean that we know how to write clean code!

2. When hand-washing was first recommended to physicians by Ignaz Semmelweis in 1847, it was rejected on the basis that
doctors were too busy and wouldn’t have time to wash their hands between patient visits.

Writing clean code requires the disciplined use of a myriad little techniques applied
through a painstakingly acquired sense of “cleanliness.” This “code-sense” is the key.
Some of us are born with it. Some of us have to fight to acquire it. Not only does it let us
see whether code is good or bad, but it also shows us the strategy for applying our disci-
pline to transform bad code into clean code.
A programmer without “code-sense” can look at a messy module and recognize the
mess but will have no idea what to do about it. A programmer with “code-sense” will look
at a messy module and see options and variations. The “code-sense” will help that pro-
grammer choose the best variation and guide him or her to plot a sequence of behavior
preserving transformations to get from here to there.
In short, a programmer who writes clean code is an artist who can take a blank screen
through a series of transformations until it is an elegantly coded system.

What Is Clean Code?


There are probably as many definitions as there are programmers. So I asked some very
well-known and deeply experienced programmers what they thought.

Bjarne Stroustrup, inventor of C++ and author of The C++ Programming Language

I like my code to be elegant and efficient. The logic should be straightforward to make it hard
for bugs to hide, the dependencies minimal to
ease maintenance, error handling complete
according to an articulated strategy, and per-
formance close to optimal so as not to tempt
people to make the code messy with unprinci-
pled optimizations. Clean code does one thing
well.

Bjarne uses the word "elegant." That's quite a word! The dictionary in my MacBook®
provides the following definitions: pleasingly
graceful and stylish in appearance or manner; pleasingly ingenious and simple. Notice the
emphasis on the word “pleasing.” Apparently Bjarne thinks that clean code is pleasing to
read. Reading it should make you smile the way a well-crafted music box or well-designed
car would.
Bjarne also mentions efficiency—twice. Perhaps this should not surprise us coming
from the inventor of C++; but I think there’s more to it than the sheer desire for speed.
Wasted cycles are inelegant, they are not pleasing. And now note the word that Bjarne uses

to describe the consequence of that inelegance. He uses the word “tempt.” There is a deep
truth here. Bad code tempts the mess to grow! When others change bad code, they tend to
make it worse.
Pragmatic Dave Thomas and Andy Hunt said this a different way. They used the meta-
phor of broken windows.3 A building with broken windows looks like nobody cares about
it. So other people stop caring. They allow more windows to become broken. Eventually
they actively break them. They despoil the facade with graffiti and allow garbage to col-
lect. One broken window starts the process toward decay.
Bjarne also mentions that error handling should be complete. This goes to the disci-
pline of paying attention to details. Abbreviated error handling is just one way that pro-
grammers gloss over details. Memory leaks are another, race conditions still another.
Inconsistent naming yet another. The upshot is that clean code exhibits close attention to
detail.
Bjarne closes with the assertion that clean code does one thing well. It is no accident
that there are so many principles of software design that can be boiled down to this simple
admonition. Writer after writer has tried to communicate this thought. Bad code tries to do
too much, it has muddled intent and ambiguity of purpose. Clean code is focused. Each
function, each class, each module exposes a single-minded attitude that remains entirely
undistracted, and unpolluted, by the surrounding details.

Grady Booch, author of Object Oriented Analysis and Design with Applications

Clean code is simple and direct. Clean code reads like well-written prose. Clean code never
obscures the designer’s intent but rather is full
of crisp abstractions and straightforward lines
of control.

Grady makes some of the same points as Bjarne, but he takes a readability perspective. I
especially like his view that clean code should
read like well-written prose. Think back on a
really good book that you’ve read. Remember how the words disappeared to be replaced
by images! It was like watching a movie, wasn’t it? Better! You saw the characters, you
heard the sounds, you experienced the pathos and the humor.
Reading clean code will never be quite like reading Lord of the Rings. Still, the liter-
ary metaphor is not a bad one. Like a good novel, clean code should clearly expose the ten-
sions in the problem to be solved. It should build those tensions to a climax and then give the reader that "Aha! Of course!" as the issues and tensions are resolved in the revelation of an obvious solution.

3. http://www.pragmaticprogrammer.com/booksellers/2004-12.html
I find Grady’s use of the phrase “crisp abstraction” to be a fascinating oxymoron!
After all the word “crisp” is nearly a synonym for “concrete.” My MacBook’s dictionary
holds the following definition of “crisp”: briskly decisive and matter-of-fact, without hesi-
tation or unnecessary detail. Despite this seeming juxtaposition of meaning, the words
carry a powerful message. Our code should be matter-of-fact as opposed to speculative.
It should contain only what is necessary. Our readers should perceive us to have been
decisive.

"Big" Dave Thomas, founder of OTI, godfather of the Eclipse strategy

Clean code can be read, and enhanced by a developer other than its original author. It has
unit and acceptance tests. It has meaningful
names. It provides one way rather than many
ways for doing one thing. It has minimal depen-
dencies, which are explicitly defined, and pro-
vides a clear and minimal API. Code should be
literate since depending on the language, not all
necessary information can be expressed clearly
in code alone.

Big Dave shares Grady's desire for readability, but with an important twist. Dave asserts that
clean code makes it easy for other people to enhance it. This may seem obvious, but it can-
not be overemphasized. There is, after all, a difference between code that is easy to read
and code that is easy to change.
Dave ties cleanliness to tests! Ten years ago this would have raised a lot of eyebrows.
But the discipline of Test Driven Development has made a profound impact upon our
industry and has become one of our most fundamental disciplines. Dave is right. Code,
without tests, is not clean. No matter how elegant it is, no matter how readable and acces-
sible, if it hath not tests, it be unclean.
Dave uses the word minimal twice. Apparently he values code that is small, rather
than code that is large. Indeed, this has been a common refrain throughout software litera-
ture since its inception. Smaller is better.
Dave also says that code should be literate. This is a soft reference to Knuth’s literate
programming.4 The upshot is that the code should be composed in such a form as to make
it readable by humans.

4. [Knuth92].

Michael Feathers, author of Working Effectively with Legacy Code

I could list all of the qualities that I notice in clean code, but there is one overarching quality
that leads to all of them. Clean code always
looks like it was written by someone who cares.
There is nothing obvious that you can do to
make it better. All of those things were thought
about by the code’s author, and if you try to
imagine improvements, you’re led back to
where you are, sitting in appreciation of the
code someone left for you—code left by some-
one who cares deeply about the craft.

One word: care. That's really the topic of this book. Perhaps an appropriate subtitle
would be How to Care for Code.
Michael hit it on the head. Clean code is
code that has been taken care of. Someone has taken the time to keep it simple and orderly.
They have paid appropriate attention to details. They have cared.

Ron Jeffries, author of Extreme Programming Installed and Extreme Programming Adventures in C#

Ron began his career programming in Fortran at the Strategic Air Command and has written code in almost every language and on almost every machine. It pays to consider his words carefully.

In recent years I begin, and nearly end, with Beck's rules of simple code. In priority order, simple code:
• Runs all the tests;
• Contains no duplication;
• Expresses all the design ideas that are in the
system;
• Minimizes the number of entities such as classes,
methods, functions, and the like.
Of these, I focus mostly on duplication. When the same thing is done over and over,
it’s a sign that there is an idea in our mind that is not well represented in the code. I try to
figure out what it is. Then I try to express that idea more clearly.
Expressiveness to me includes meaningful names, and I am likely to change the
names of things several times before I settle in. With modern coding tools such as Eclipse,
renaming is quite inexpensive, so it doesn’t trouble me to change. Expressiveness goes

beyond names, however. I also look at whether an object or method is doing more than one
thing. If it’s an object, it probably needs to be broken into two or more objects. If it’s a
method, I will always use the Extract Method refactoring on it, resulting in one method
that says more clearly what it does, and some submethods saying how it is done.
Duplication and expressiveness take me a very long way into what I consider clean
code, and improving dirty code with just these two things in mind can make a huge differ-
ence. There is, however, one other thing that I’m aware of doing, which is a bit harder to
explain.
After years of doing this work, it seems to me that all programs are made up of very
similar elements. One example is “find things in a collection.” Whether we have a data-
base of employee records, or a hash map of keys and values, or an array of items of some
kind, we often find ourselves wanting a particular item from that collection. When I find
that happening, I will often wrap the particular implementation in a more abstract method
or class. That gives me a couple of interesting advantages.
I can implement the functionality now with something simple, say a hash map, but
since now all the references to that search are covered by my little abstraction, I can
change the implementation any time I want. I can go forward quickly while preserving my
ability to change later.
In addition, the collection abstraction often calls my attention to what’s “really”
going on, and keeps me from running down the path of implementing arbitrary collection
behavior when all I really need is a few fairly simple ways of finding what I want.
Reduced duplication, high expressiveness, and early building of simple abstractions.
That’s what makes clean code for me.

Here, in a few short paragraphs, Ron has summarized the contents of this book. No
duplication, one thing, expressiveness, tiny abstractions. Everything is there.
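To make Ron's point about wrapping collection access concrete, here is a minimal Java sketch under assumed names (Employee and EmployeeDirectory are hypothetical, not from the book): the lookup is hidden behind a small abstraction, so the HashMap inside could later give way to a database query without touching any caller.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Optional;

    // Hypothetical domain class, used only for this sketch.
    class Employee {
        private final String id;
        private final String name;

        Employee(String id, String name) {
            this.id = id;
            this.name = name;
        }

        String getId() { return id; }
        String getName() { return name; }
    }

    // A small abstraction over "find things in a collection."
    // Callers depend on this class, not on the HashMap behind it,
    // so the implementation can change later without touching them.
    class EmployeeDirectory {
        private final Map<String, Employee> employeesById = new HashMap<>();

        void add(Employee employee) {
            employeesById.put(employee.getId(), employee);
        }

        Optional<Employee> findById(String id) {
            return Optional.ofNullable(employeesById.get(id));
        }
    }

The point is not these particular classes but the seam they create: the simple implementation can ship now, and the little abstraction preserves the ability to change it later.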

Ward Cunningham, inventor of Wiki, inventor of Fit, coinventor of eXtreme Programming. Motive force behind Design Patterns. Smalltalk and OO thought leader. The godfather of all those who care about code.

You know you are working on clean code when each routine you read turns out to be pretty much what
you expected. You can call it beautiful code when
the code also makes it look like the language was
made for the problem.

Statements like this are characteristic of Ward. You read it, nod your head, and then go on to the
next topic. It sounds so reasonable, so obvious, that it barely registers as something
profound. You might think it was pretty much what you expected. But let’s take a closer
look.

“. . . pretty much what you expected.” When was the last time you saw a module that
was pretty much what you expected? Isn’t it more likely that the modules you look at will
be puzzling, complicated, tangled? Isn’t misdirection the rule? Aren’t you used to flailing
about trying to grab and hold the threads of reasoning that spew forth from the whole sys-
tem and weave their way through the module you are reading? When was the last time you
read through some code and nodded your head the way you might have nodded your head
at Ward’s statement?
Ward expects that when you read clean code you won’t be surprised at all. Indeed, you
won’t even expend much effort. You will read it, and it will be pretty much what you
expected. It will be obvious, simple, and compelling. Each module will set the stage for
the next. Each tells you how the next will be written. Programs that are that clean are so
profoundly well written that you don’t even notice it. The designer makes it look ridicu-
lously simple like all exceptional designs.
And what about Ward’s notion of beauty? We’ve all railed against the fact that our lan-
guages weren’t designed for our problems. But Ward’s statement puts the onus back on us.
He says that beautiful code makes the language look like it was made for the problem! So
it’s our responsibility to make the language look simple! Language bigots everywhere,
beware! It is not the language that makes programs appear simple. It is the programmer
that makes the language appear simple!

Schools of Thought
What about me (Uncle Bob)? What do I think
clean code is? This book will tell you, in hideous
detail, what I and my compatriots think about
clean code. We will tell you what we think makes
a clean variable name, a clean function, a clean
class, etc. We will present these opinions as abso-
lutes, and we will not apologize for our stridence.
To us, at this point in our careers, they are abso-
lutes. They are our school of thought about clean
code.
Martial artists do not all agree about the best
martial art, or the best technique within a martial
art. Often master martial artists will form their
own schools of thought and gather students to
learn from them. So we see Gracie Jiu Jitsu,
founded and taught by the Gracie family in Brazil. We see Hakkoryu Jiu Jitsu, founded
and taught by Okuyama Ryuho in Tokyo. We see Jeet Kune Do, founded and taught by
Bruce Lee in the United States.

Students of these approaches immerse themselves in the teachings of the founder. They dedicate themselves to learn what that particular master teaches, often to the exclu-
sion of any other master’s teaching. Later, as the students grow in their art, they may
become the student of a different master so they can broaden their knowledge and practice.
Some eventually go on to refine their skills, discovering new techniques and founding their
own schools.
None of these different schools is absolutely right. Yet within a particular school we
act as though the teachings and techniques are right. After all, there is a right way to prac-
tice Hakkoryu Jiu Jitsu, or Jeet Kune Do. But this rightness within a school does not inval-
idate the teachings of a different school.
Consider this book a description of the Object Mentor School of Clean Code. The
techniques and teachings within are the way that we practice our art. We are willing to
claim that if you follow these teachings, you will enjoy the benefits that we have enjoyed,
and you will learn to write code that is clean and professional. But don’t make the mistake
of thinking that we are somehow “right” in any absolute sense. There are other schools and
other masters that have just as much claim to professionalism as we. It would behoove you
to learn from them as well.
Indeed, many of the recommendations in this book are controversial. You will proba-
bly not agree with all of them. You might violently disagree with some of them. That’s fine.
We can’t claim final authority. On the other hand, the recommendations in this book are
things that we have thought long and hard about. We have learned them through decades of
experience and repeated trial and error. So whether you agree or disagree, it would be a
shame if you did not see, and respect, our point of view.

We Are Authors
The @author field of a Javadoc tells us who we are. We are authors. And one thing about
authors is that they have readers. Indeed, authors are responsible for communicating well
with their readers. The next time you write a line of code, remember you are an author,
writing for readers who will judge your effort.
You might ask: How much is code really read? Doesn’t most of the effort go into
writing it?
Have you ever played back an edit session? In the 80s and 90s we had editors like Emacs
that kept track of every keystroke. You could work for an hour and then play back your whole
edit session like a high-speed movie. When I did this, the results were fascinating.
The vast majority of the playback was scrolling and navigating to other modules!

Bob enters the module.
He scrolls down to the function needing change.
He pauses, considering his options.
Oh, he’s scrolling up to the top of the module to check the initialization of a variable.
Now he scrolls back down and begins to type.

Ooops, he's erasing what he typed!
He types it again.
He erases it again!
He types half of something else but then erases that!
He scrolls down to another function that calls the function he’s changing to see how it is
called.
He scrolls back up and types the same code he just erased.
He pauses.
He erases that code again!
He pops up another window and looks at a subclass. Is that function overridden?

...
You get the drift. Indeed, the ratio of time spent reading vs. writing is well over 10:1.
We are constantly reading old code as part of the effort to write new code.
Because this ratio is so high, we want the reading of code to be easy, even if it makes
the writing harder. Of course there’s no way to write code without reading it, so making it
easy to read actually makes it easier to write.
There is no escape from this logic. You cannot write code if you cannot read the sur-
rounding code. The code you are trying to write today will be hard or easy to write
depending on how hard or easy the surrounding code is to read. So if you want to go fast,
if you want to get done quickly, if you want your code to be easy to write, make it easy to
read.

The Boy Scout Rule


It’s not enough to write the code well. The code has to be kept clean over time. We’ve all
seen code rot and degrade as time passes. So we must take an active role in preventing this
degradation.
The Boy Scouts of America have a simple rule that we can apply to our profession.

Leave the campground cleaner than you found it.5

If we all checked-in our code a little cleaner than when we checked it out, the code
simply could not rot. The cleanup doesn’t have to be something big. Change one variable
name for the better, break up one function that’s a little too large, eliminate one small bit of
duplication, clean up one composite if statement.
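As a small, hypothetical Java illustration of that last kind of cleanup (the names are invented, not taken from the book), a composite if statement can be extracted into a method whose name states its intent:

    // Before: callers had to decode the composite condition inline:
    //   if (user.isLoggedIn() && user.hasRole("ANALYST") && !maintenanceMode) { ... }

    // After: the condition reads as a sentence, and the code is left a little cleaner than we found it.
    class AccessPolicy {
        boolean canViewReport(User user, boolean maintenanceMode) {
            return user.isLoggedIn()
                && user.hasRole("ANALYST")
                && !maintenanceMode;
        }
    }

    class User {
        private final boolean loggedIn;
        private final String role;

        User(boolean loggedIn, String role) {
            this.loggedIn = loggedIn;
            this.role = role;
        }

        boolean isLoggedIn() { return loggedIn; }
        boolean hasRole(String roleName) { return role.equals(roleName); }
    }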
Can you imagine working on a project where the code simply got better as time
passed? Do you believe that any other option is professional? Indeed, isn’t continuous
improvement an intrinsic part of professionalism?

5. This was adapted from Robert Stephenson Smyth Baden-Powell’s farewell message to the Scouts: “Try and leave this world a
little better than you found it . . .”

Prequel and Principles


In many ways this book is a “prequel” to a book I wrote in 2002 entitled Agile Software
Development: Principles, Patterns, and Practices (PPP). The PPP book concerns itself
with the principles of object-oriented design, and many of the practices used by profes-
sional developers. If you have not read PPP, then you may find that it continues the story
told by this book. If you have already read it, then you’ll find many of the sentiments of
that book echoed in this one at the level of code.
In this book you will find sporadic references to various principles of design. These
include the Single Responsibility Principle (SRP), the Open Closed Principle (OCP), and
the Dependency Inversion Principle (DIP) among others. These principles are described in
depth in PPP.

Conclusion
Books on art don’t promise to make you an artist. All they can do is give you some of the
tools, techniques, and thought processes that other artists have used. So too this book can-
not promise to make you a good programmer. It cannot promise to give you “code-sense.”
All it can do is show you the thought processes of good programmers and the tricks, tech-
niques, and tools that they use.
Just like a book on art, this book will be full of details. There will be lots of code.
You’ll see good code and you’ll see bad code. You’ll see bad code transformed into good
code. You’ll see lists of heuristics, disciplines, and techniques. You’ll see example after
example. After that, it’s up to you.
Remember the old joke about the concert violinist who got lost on his way to a perfor-
mance? He stopped an old man on the corner and asked him how to get to Carnegie Hall.
The old man looked at the violinist and the violin tucked under his arm, and said: “Prac-
tice, son. Practice!”

Bibliography
[Beck07]: Implementation Patterns, Kent Beck, Addison-Wesley, 2007.

[Knuth92]: Literate Programming, Donald E. Knuth, Center for the Study of Language
and Information, Leland Stanford Junior University, 1992.
Lisa Crispin
Janet Gregory

Agile Testing
A Practical Guide for Testers and Agile Teams

“As agile methods have entered the mainstream, we’ve learned a lot about how the
testing discipline fits into agile projects. Lisa and Janet give us a solid look at what to do, and what to avoid, in agile testing."
— Ron Jeffries, www.XProgramming.com

"An excellent introduction to agile and how it affects the software test community!"
— Gerard Meszaros, Agile Practice Lead and Chief Test Strategist at Solution Frameworks, Inc., an agile coaching and lean software development consultancy

"In sports and music, people know the importance of practicing technique until it becomes a part of the way they do things. This book is about some of the most fundamental techniques in software development—how to build quality into code—techniques that should become second nature to every development team. The book provides both broad and in-depth coverage of how to move testing to the front of the development process, along with a liberal sprinkling of real-life examples that bring the book to life."
— Mary Poppendieck, Author of Lean Software Development and Implementing Lean Software Development

Two of the industry's most experienced agile testing practitioners and consultants, Lisa Crispin and Janet Gregory, have teamed up to bring you the definitive answers to these questions and many others. In Agile Testing, Crispin and Gregory define agile testing and illustrate the tester's role with examples from real agile teams. They teach you how to use the agile testing quadrants to identify what testing is needed, who should do it, and what tools might help. The book chronicles an agile software development iteration from the viewpoint of a tester and explains the seven key success factors of agile testing.

Readers will come away from this book understanding
• How to get testers engaged in agile development
• Where testers and QA managers fit on an agile team
• What to look for when hiring an agile tester
• How to transition from a traditional cycle to agile development
• How to complete testing activities in short iterations
• How to use tests to successfully guide development
• How to overcome barriers to test automation

This book is a must for agile testers, agile teams, their managers, and their customers.

Available
• Book: 9780321534460
• Safari Online
• EBOOK: 0321616928
• KINDLE: B001QL5N4K

About the Authors
Lisa Crispin specializes in showing testers and agile teams how testers can add value and how to guide development with business-facing tests. Since 2003, she's been a tester on a Scrum/XP team at ePlan Services, Inc., and frequently leads tutorials and workshops on agile testing at conferences. Lisa regularly contributes articles about agile testing to publications such as Better Software magazine, IEEE Software, and Methods and Tools. Lisa also coauthored Testing Extreme Programming (Addison-Wesley, 2002) with Tip House.

Janet Gregory is the founder of DragonFire, Inc., an agile quality process consultancy and training firm. Her passion is helping teams build quality systems. Since 1998, she has worked as a coach and tester introducing agile practices into both large and small companies. Janet is a frequent speaker at agile and testing software conferences, and she is a major contributor to the North American agile testing community.

informit.com/aw

Chapter 1
WHAT IS AGILE TESTING, ANYWAY?

[Chapter opening mind map: What Is Agile Testing, Anyway? branches into Agile Values (Whole-Team Approach); What We Mean by "Agile Testing"; A Little Context for Roles and Activities (Customer Team, Developer Team, Interaction); and How Is Agile Testing Different? (Working on Traditional Teams, Working on Agile Teams, Traditional vs. Agile Teams).]

Like a lot of terminology, “agile development” and “agile testing” mean different
things to different people. In this chapter, we explain our view of agile, which
reflects the Agile Manifesto and general principles and values shared by different
agile methods. We want to share a common language with you, the reader, so
we’ll go over some of our vocabulary. We compare and contrast agile develop-
ment and testing with the more traditional phased approach. The “whole team”
approach promoted by agile development is central to our attitude toward qual-
ity and testing, so we also talk about that here.

AGILE VALUES
“Agile” is a buzzword that will probably fall out of use someday and make
this book seem obsolete. It’s loaded with different meanings that apply in dif-
ferent circumstances. One way to define “agile development” is to look at the
Agile Manifesto (see Figure 1-1).

Using the values from the Manifesto to guide us, we strive to deliver small
chunks of business value in extremely short release cycles.


Manifesto for Agile Software Development
We are uncovering better ways
of developing software by doing
it and helping others do it.
Through this work we have
come to value:
Individuals and interactions over
processes and tools
Working software over
comprehensive documentation
Customer collaboration over
contract negotiation
Responding to change over
following a plan
That is, while there is value
in the items on the right,
we value the items on the left more.

Figure 1-1 Agile Manifesto

We use the word "agile" in this book in a broad sense. Whether your team is practicing a particular agile method, such as Scrum, XP, Crystal, DSDM, or FDD, to name a few, or just adopting whatever principles and practices make sense for your situation, you should be able to apply the ideas in this book. If you're delivering value to the business in a timely manner with high-quality software, and your team continually strives to improve, you'll find useful information here. At the same time, there are particular agile practices we feel are crucial to any team's success. We'll talk about these throughout the book.

Chapter 21, "Key Success Factors," lists key success factors for agile testing.

WHAT DO WE MEAN BY "AGILE TESTING"?


You might have noticed that we use the term “tester” to describe a person
whose main activities revolve around testing and quality assurance. You’ll
also see that we often use the word “programmer” to describe a person whose
main activities revolve around writing production code. We don’t intend that
these terms sound narrow or insignificant. Programmers do more than turn
a specification into a program. We don't call them "developers," because everyone involved in delivering software is a developer. Testers do more than perform "testing tasks." Each agile team member is focused on delivering a
high-quality product that provides business value. Agile testers work to en-
sure that their team delivers the quality their customers need. We use the
terms “programmer” and “tester” for convenience.

Several core practices used by agile teams relate to testing. Agile program-
mers use test-driven development (TDD), also called test-driven design, to
write quality production code. With TDD, the programmer writes a test for a
tiny bit of functionality, sees it fail, writes the code that makes it pass, and
then moves on to the next tiny bit of functionality. Programmers also write
code integration tests to make sure the small units of code work together as
intended. This essential practice has been adopted by many teams, even those
that don’t call themselves “agile,” because it’s just a smart way to think
through your software design and prevent defects. Figure 1-2 shows a sample
unit test result that a programmer might see.
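As a rough sketch of that rhythm (the names are hypothetical and JUnit 4 is assumed; this is not the test shown in Figure 1-2), a programmer might first write a failing test for one tiny behavior and then write just enough code to make it pass:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class ShoppingCartTest {
        // Step 1: write a test for a tiny bit of functionality and watch it fail.
        @Test
        public void totalOfEmptyCartIsZero() {
            assertEquals(0, new ShoppingCart().total());
        }
    }

    // Step 2: write just enough production code to make the test pass, then move on
    // to the next tiny bit of functionality (a cart with one item, and so on).
    class ShoppingCart {
        int total() {
            return 0; // the simplest thing that makes the current test pass
        }
    }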

This book isn’t about unit-level or component-level testing, but these types
of tests are critical to a successful project. Brian Marick [2003] describes
these types of tests as “supporting the team,” helping the programmers know
what code to write next. Brian also coined the term “technology-facing tests,”
tests that fall into the programmer’s domain and are described using pro-
grammer terms and jargon. In Part II, we introduce the Agile Testing Quad-
rants and examine the different categories of agile testing. If you want to
learn more about writing unit and component tests, and TDD, the bibliogra-
phy will steer you to some good resources.

Figure 1-2 Sample unit test output



If you want to know how agile values, principles, and practices applied to test-
ing can help you, as a tester, do your best work, and help your team deliver
more business value, please keep reading. If you’ve bothered to pick up this
book, you’re probably the kind of professional who continually strives to grow
and learn. You’re likely to have the mind-set that a good agile team needs to
succeed. This book will show you ways to improve your organization’s prod-
uct, provide the most value possible to your team, and enjoy your job.

Lisa’s Story
During a break from working on this chapter, I talked to a friend who works in
quality assurance for a large company. It was a busy time of year, and management
expected everyone to work extra hours. He said, “If I thought working 100 extra
hours would solve our problems, I’d work ‘til 7 every night until that was done. But
the truth was, it might take 4,000 extra hours to solve our problems, so working
extra feels pointless.” Does this sound familiar?
—Lisa

If you’ve worked in the software industry long, you’ve probably had the op-
portunity to feel like Lisa’s friend. Working harder and longer doesn’t help
when your task is impossible to achieve. Agile development acknowledges
the reality that we only have so many good productive hours in a day or
week, and that we can’t plan away the inevitability of change.

Agile development encourages us to solve our problems as a team. Business people, programmers, testers, analysts—everyone involved in software devel-
opment—decides together how best to improve their product. Best of all, as
testers, we’re working together with a team of people who all feel responsible
for delivering the best possible quality, and who are all focused on testing. We
love doing this work, and you will too.

When we say “agile testing” in this book, we’re usually talking about business-
facing tests, tests that define the business experts’ desired features and func-
tionality. We consider “customer-facing” a synonym for “business-facing.”
“Testing” in this book also includes tests that critique the product and focus
on discovering what might be lacking in the finished product so that we can
improve it. It includes just about everything beyond unit and component
level testing: functional, system, load, performance, security, stress, usability,
exploratory, end-to-end, and user acceptance. All these types of tests might
be appropriate to any given project, whether it’s an agile project or one using
more traditional methodologies.

Agile testing doesn’t just mean testing on an agile project. Some testing ap-
proaches, such as exploratory testing, are inherently agile, whether it's done on an agile project or not. Testing an application with a plan to learn about it as
you go, and letting that information guide your testing, is in line with valuing
working software and responding to change. Later chapters discuss agile
forms of testing as well as “agile testing” practices.

A LITTLE CONTEXT FOR ROLES AND ACTIVITIES ON AN AGILE TEAM
We’ll talk a lot in this book about the “customer team” and the “developer
team.” The difference between them is the skills they bring to delivering a
product.

Customer Team
The customer team includes business experts, product owners, domain ex-
perts, product managers, business analysts, subject matter experts—every-
one on the “business” side of a project. The customer team writes the stories
or feature sets that the developer team delivers. They provide the examples
that will drive coding in the form of business-facing tests. They communi-
cate and collaborate with the developer team throughout each iteration, an-
swering questions, drawing examples on the whiteboard, and reviewing
finished stories or parts of stories.

Testers are integral members of the customer team, helping elicit require-
ments and examples and helping the customers express their requirements as
tests.
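For example (a hypothetical sketch, not an example from the book), a customer's statement such as "a $100 order with a 10% loyalty discount costs $90" can be captured as an executable, business-facing test that the whole team can run; it is written here with JUnit 4 purely for illustration:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class LoyaltyDiscountTest {
        // Given a $100 order and a 10% loyalty discount, the customer pays $90.
        @Test
        public void tenPercentLoyaltyDiscountIsAppliedToTheOrderTotal() {
            Order order = new Order(100.00);
            order.applyLoyaltyDiscount(0.10);
            assertEquals(90.00, order.total(), 0.001);
        }
    }

    // Hypothetical production class, shown only to make the sketch self-contained.
    class Order {
        private double total;

        Order(double total) { this.total = total; }

        void applyLoyaltyDiscount(double rate) { total = total * (1 - rate); }

        double total() { return total; }
    }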

Developer Team
Everyone involved with delivering code is a developer, and is part of the de-
veloper team. Agile principles encourage team members to take on multiple
activities; any team member can take on any type of task. Many agile practi-
tioners discourage specialized roles on teams and encourage all team mem-
bers to transfer their skills to others as much as possible. Nevertheless, each
team needs to decide what expertise their projects require. Programmers,
system administrators, architects, database administrators, technical writers,
security specialists, and people who wear more than one of these hats might
be part of the team, physically or virtually.

Testers are also on the developer team, because testing is a central compo-
nent of agile software development. Testers advocate for quality on behalf of
the customer and assist the development team in delivering the maximum
business value.

Interaction between Customer and Developer Teams


The customer and developer teams work closely together at all times. Ideally,
they’re just one team with a common goal. That goal is to deliver value to the
organization. Agile projects progress in iterations, which are small develop-
ment cycles that typically last from one to four weeks. The customer team,
with input from the developers, will prioritize stories to be developed, and
the developer team will determine how much work they can take on. They’ll
work together to define requirements with tests and examples, and write the
code that makes the tests pass. Testers have a foot in each world, understand-
ing the customer viewpoint as well as the complexities of the technical imple-
mentation (see Figure 1-3).

Some agile teams don’t have any members who define themselves as “testers.”
However, they all need someone to help the customer team write business-
facing tests for the iteration’s stories, make sure the tests pass, and make sure
that adequate regression tests are automated. Even if a team does have testers,
the entire agile team is responsible for these testing tasks. Our experience
with agile teams has shown that testing skills and experience are vital to
project success and that testers do add value to agile teams.

Figure 1-3 Interaction of roles (Programmer, Domain Expert, Tester)



HOW IS AGILE TESTING DIFFERENT?


We both started working on agile teams at the turn of the millennium. Like a
lot of testers who are new to agile, we didn’t know what to expect at first. To-
gether with our respective agile teams, we've learned a lot
about testing on agile projects. We’ve also implemented ideas and practices
suggested by other agile testers and teams. Over the years, we’ve shared our
experiences with other agile testers as well. We’ve facilitated workshops and
led tutorials at agile and testing conferences, talked with local user groups,
and joined countless discussions on agile testing mailing lists. Through these
experiences, we’ve identified differences between testing on agile teams and
testing on traditional waterfall development projects. Agile development has
transformed the testing profession in many ways.

Working on Traditional Teams


Neither working closely with programmers nor getting involved with a
project from the earliest phases was new to us. However, we were used to
strictly enforced gated phases of a narrowly defined software development
life cycle, starting with release planning and requirements definition and
usually ending with a rushed testing phase and a delayed release. In fact, we
often were thrust into a gatekeeper role, telling business managers, “Sorry,
the requirements are frozen; we can add that feature in the next release.”

As leaders of quality assurance teams, we were also often expected to act as gatekeepers of quality. We couldn't control how the code was written, or even
if any programmers tested their code, other than by our personal efforts at
collaboration. Our post-development testing phases were expected to boost
quality after code was complete. We had the illusion of control. We usually
had the keys to production, and sometimes we had the power to postpone
releases or stop them from going forward. Lisa even had the title of “Quality
Boss,” when in fact she was merely the manager of the QA team.

Our development cycles were generally long. Projects at a company that pro-
duced database software might last for a year. The six-month release cycles
Lisa experienced at an Internet start-up seemed short at the time, although it
was still a long time to have frozen requirements. In spite of much process
and discipline, diligently completing one phase before moving on to the
next, it was plenty of time for the competition to come out ahead, and the
applications were not always what the customers expected.

Traditional teams are focused on making sure all the specified requirements
are delivered in the final product. If everything isn’t ready by the original tar-
get release date, the release is usually postponed. The development teams
don’t usually have input about what features are in the release, or how they
should work. Individual programmers tend to specialize in a particular area
of the code. Testers study the requirements documents to write their test
plans, and then they wait for work to be delivered to them for testing.

Working on Agile Teams


Transitioning to the short iterations of an agile project might produce initial
shock and awe. How can we possibly define requirements and then test and
deliver production-ready code in one, two, three, or four weeks? This is par-
ticularly tough for larger organizations with separate teams for different func-
tions and even harder for teams that are geographically dispersed. Where do
all these various programmers, testers, analysts, project managers, and count-
less specialties fit in a new agile project? How can we possibly code and test so
quickly? Where would we find time for difficult efforts such as automating
tests? What control do we have over bad code getting delivered to production?

We’ll share our stories from our first agile experiences to show you that ev-
eryone has to start somewhere.

Lisa’s Story
My first agile team embraced Extreme Programming (XP), not without some “learn-
ing experiences.” Serving as the only professional tester on a team of eight pro-
grammers who hadn’t learned how to automate unit tests was disheartening. The
first two-week iteration felt like jumping off a cliff.
Fortunately, we had a good coach, excellent training, a supportive community of
agile practitioners with ideas to share, and time to learn. Together we figured out
some ins and outs of how to integrate testing into an agile project—indeed, how
to drive the project with tests. I learned how I could use my testing skills and
experience to add real value to an agile team.
The toughest thing for me (the former Quality Boss) to learn was that the custom-
ers, not I, decided on quality criteria for the product. I was horrified after the first
iteration to find that the code crashed easily when two users logged in concur-
rently. My coach patiently explained, over my strident objections, that our cus-
tomer, a start-up company, wanted to be able to show features to potential
customers. Reliability and robustness were not yet the issue.
I learned that my job was to help the customers tell us what was valuable to them
during each iteration, and to write tests to ensure that’s what they got.
—Lisa

Janet’s Story
My first foray into the agile world was also an Extreme Programming (XP) engage-
ment. I had just come from an organization that practiced waterfall with some
extremely bad practices, including giving the test team a day or so to test six
months of code. In my next job as QA manager, the development manager and I
were both learning what XP really meant. We successfully created a team that
worked well together and managed to automate most of the tests for the func-
tionality. When the organization downsized during the dot-com bust, I found
myself in a new position at another organization as the lone tester with about
ten developers on an XP project.
On my first day of the project, Jonathan Rasmusson, one of the developers, came
up to me and asked me why I was there. The team was practicing XP, and the pro-
grammers were practicing test-first and automating all their own tests. Participating
in that was a challenge I couldn’t resist. The team didn’t know what value I could
add, but I knew I had unique abilities that could help the team. That experience
changed my life forever, because I gained an understanding of the nuances of an
agile project and determined then that my life’s work was to make the tester role
a more fulfilling one.
—Janet

Read Jonathan’s Story


Jonathan Rasmusson, now an Agile Coach at Rasmusson Software Consulting
but at the time Janet's coworker on her second agile team, explains how he
learned how agile testers add value.
So there I was, a young hotshot J2EE developer excited and pumped to
be developing software the way it should be developed—using XP. Until
one day, in walks a new team member—a tester. It seems management
thought it would be good to have a QA resource on the team.
That’s fine. Then it occurred to me that this poor tester would have noth-
ing to do. I mean, as a developer on an XP project, I was writing the
tests. There was no role for QA here as far as I could see.
So of course I went up and introduced myself and asked quite pointedly
what she was going to do on the project, because the developers were
writing all the tests. While I can’t remember exactly how Janet
responded, the next six months made it very clear what testers can do
on agile projects.
With the automation of the tedious, low-level boundary condition test
cases, Janet as a tester was now free to focus on much greater value-
add areas like exploratory testing, usability, and testing the app in ways
developers hadn’t originally anticipated. She worked with the
customer to help write test cases that defined success for upcoming sto-
ries. She paired with developers looking for gaps in tests.
But perhaps most importantly, she helped reinforce an ethos of quality
and culture, dispensing happy-face stickers to those developers who
had done an exceptional job (these became much sought-after badges
of honor displayed prominently on laptops).
Working with Janet taught me a great deal about the role testers play on
agile projects, and their importance to the team.

Agile teams work closely with the business and have a detailed understanding
of the requirements. They’re focused on the value they can deliver, and they
might have a great deal of input into prioritizing features. Testers don’t sit
and wait for work; they get up and look for ways to contribute throughout
the development cycle and beyond.

If testing on an agile project felt just like testing on a traditional project, we
wouldn't feel the need to write a book. Let's compare and contrast these test-
ing methods.

Traditional vs. Agile Testing


It helps to start by looking at similarities between agile testing and testing in
traditional software development. Consider Figure 1-4.

In the phased approach diagram, it is clear that testing happens at the end,
right before release. The diagram is idealistic, because it gives the impression
there is as much time for testing as there is for coding. In many projects, this
is not the case. The testing gets “squished” because coding takes longer than
expected, and because teams get into a code-and-fix cycle at the end.

Agile is iterative and incremental. This means that the testers test each incre-
ment of coding as soon as it is finished. An iteration might be as short as one
week, or as long as a month. The team builds and tests a little bit of code,
making sure it works correctly, and then moves on to the next piece that needs to
be built. Programmers never get ahead of the testers, because a story is not
“done” until it has been tested. We’ll talk much more about this throughout
the book.

There's tremendous variety in the approaches to projects that agile teams take.
One team might be dedicated to a single project or might be part of another
bigger project. No matter how big your project is, you still have to start some-
where. Your team might take on an epic or feature, a set of related stories at an
estimating meeting, or you might meet to plan the release. Regardless of how
a project or subset of a project gets started, you'll need to get a high-level un-
derstanding of it. You might come up with a plan or strategy for testing as you
prepare for a release, but it will probably look quite different from any test
plan you've done before.

Figure 1-4 Traditional testing vs. agile testing. The phased or gated approach (for
example, waterfall) moves through requirements, specifications, code, testing, and
release in sequence over time. The agile approach is iterative and incremental: each
story is expanded, coded, and tested within an iteration (It 0 through It 4), with a
possible release after each iteration.

Every project, every team, and sometimes every iteration is different. How
your team solves problems should depend on the problem, the people, and
the tools you have available. As an agile team member, you will need to be
adaptive to the team’s needs.

Rather than creating tests from a requirements document that was created by
business analysts before anyone ever thought of writing a line of code, some-
one will need to write tests that illustrate the requirements for each story days
or hours before coding begins. This is often a collaborative effort between a
business or domain expert and a tester, analyst, or some other development
team member. Detailed functional test cases, ideally based on examples pro-
vided by business experts, flesh out the requirements. Testers will conduct
manual exploratory testing to find important bugs that defined test cases
might miss. Testers might pair with other developers to automate and exe-
cute test cases as coding on each story proceeds. Automated functional tests
are added to the regression test suite. When tests demonstrating minimum
functionality are complete, the team can consider the story finished.
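
To make the idea concrete, here is a minimal sketch in Python of what such a business-facing test might look like, runnable with pytest. The story, the free-shipping threshold, and the ShippingCalculator class are invented for illustration; they are not taken from any team described in this chapter.

# Hypothetical story: "Orders of $50 or more ship free; smaller orders pay a flat rate."
# The threshold, the flat rate, and the ShippingCalculator API are invented examples.

class ShippingCalculator:
    FREE_SHIPPING_THRESHOLD = 50.00
    FLAT_RATE = 4.95

    def cost_for(self, order_total: float) -> float:
        # Production code written to make the customer's examples (the tests) pass.
        if order_total >= self.FREE_SHIPPING_THRESHOLD:
            return 0.00
        return self.FLAT_RATE

# Business-facing tests captured from the customer's examples before coding begins;
# they fail until the code above exists and behaves as the story specifies.
def test_order_at_threshold_ships_free():
    assert ShippingCalculator().cost_for(50.00) == 0.00

def test_order_below_threshold_pays_flat_rate():
    assert ShippingCalculator().cost_for(49.99) == 4.95

Tests like these double as living documentation of the story and, once automated, join the regression suite described above.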

If you attended agile conferences and seminars in the early part of this de-
cade, you heard a lot about TDD and acceptance testing but not so much
about other critical types of testing, such as load, performance, security, us-
ability, and other “ility” testing. As testers, we thought that was a little weird,
because all these types of testing are just as vital on agile projects as they are
on projects using any other development methodology. The real difference is
that we like to do these tests as early in the development process as we can so
that they can also drive design and coding.

If the team actually releases each iteration, as Lisa’s team does, the last day or
two of each iteration is the “end game,” the time when user acceptance test-
ing, training, bug fixing, and deployments to staging environments can oc-
cur. Other teams, such as Janet’s, release every few iterations, and might even
have an entire iteration’s worth of “end game” activities to verify release
readiness. The difference here is that all the testing is not left until the end.

As a tester on an agile team, you're a key player in releasing code to produc-
tion, just as you might have been in a more traditional environment. You
might run scripts or do manual testing to verify all elements of a release, such
as database update scripts, are in place. All team members participate in ret-
rospectives or other process improvement activities that might occur for ev-
ery iteration or every release. The whole team brainstorms ways to solve
problems and improve processes and practices.

Agile projects have a variety of flavors. Is your team starting with a clean
slate, in a greenfield (new) development project? If so, you might have fewer
challenges than a team faced with rewriting or building on a legacy system
that has no automated regression suite. Working with a third party brings
additional testing challenges to any team.

Whatever flavor of development you’re using, pretty much the same ele-
ments of a software development life cycle need to happen. The difference
with agile is that time frames are greatly shortened, and activities happen
concurrently. Participants, tests, and tools need to be adaptive.

The most critical difference for testers in an agile project is the quick feed-
back from testing. It drives the project forward, and there are no gatekeepers
ready to block project progress if certain milestones aren’t met.

We've encountered testers who resist the transition to agile development,
fearing that "agile development" equates with chaos, lack of discipline, lack
of documentation, and an environment that is hostile to testers. While some
teams do seem to use the “agile” buzzword to justify simply doing whatever
they want, true agile teams are all about repeatable quality as well as effi-
ciency. In our experience, an agile team is a wonderful place to be a tester.

WHOLE-TEAM APPROACH


One of the biggest differences in agile development versus traditional devel-
opment is the agile “whole-team” approach. With agile, it’s not only the testers
or a quality assurance team who feel responsible for quality. We don’t think
of “departments,” we just think of the skills and resources we need to deliver
the best possible product. The focus of agile development is producing high-
quality software in a time frame that maximizes its value to the business. This
is the job of the whole team, not just testers or designated quality assurance
professionals. Everyone on an agile team gets “test-infected.” Tests, from the
unit level on up, drive the coding, help the team learn how the application
should work, and let us know when we’re “done” with a task or story.

An agile team must possess all the skills needed to produce quality code that
delivers the features required by the organization. While this might mean in-
cluding specialists on the team, such as expert testers, it doesn’t limit particu-
lar tasks to particular team members. Any task might be completed by any
team member, or a pair of team members. This means that the team takes re-
sponsibility for all kinds of testing tasks, such as automating tests and man-
ual exploratory testing. It also means that the whole team thinks constantly
about designing code for testability.
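
As a small, hypothetical illustration of designing for testability (not an example from either of our teams), consider a date-dependent business rule written in Python so that its clock is injected rather than read from the system. The HolidayDiscount class and the pytest test functions are invented names.

from datetime import date

class HolidayDiscount:
    """Hypothetical rule: the discount applies only on December 25."""

    def __init__(self, today_fn=date.today):
        self._today = today_fn  # the clock is a collaborator, so tests control the date

    def applies(self) -> bool:
        d = self._today()
        return d.month == 12 and d.day == 25

def test_discount_applies_on_christmas():
    assert HolidayDiscount(lambda: date(2009, 12, 25)).applies()

def test_discount_not_applied_on_other_days():
    assert not HolidayDiscount(lambda: date(2009, 7, 4)).applies()

Because the date is passed in, anyone on the team can test the rule on any day of the year without touching the machine clock.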

The whole-team approach involves constant collaboration. Testers collabo-
rate with programmers, the customer team, and other team specialists—and
not just for testing tasks, but other tasks related to testing, such as building
infrastructure and designing for testability. Figure 1-5 shows a developer re-
viewing reports with two customers and a tester (not pictured).

Figure 1-5 A developer discusses an issue with customers

The whole-team approach means everyone takes responsibility for testing
tasks. It means team members have a range of skill sets and experience to em-
ploy in attacking challenges such as designing for testability by turning ex-
amples into tests and into code to make those tests pass. These diverse
viewpoints can only mean better tests and test coverage.

Most importantly, on an agile team, anyone can ask for and receive help. The
team commits to providing the highest possible business value as a team, and
the team does whatever is needed to deliver it. Some folks who are new to ag-
ile perceive it as all about speed. The fact is, it’s all about quality—and if it’s
not, we question whether it’s really an “agile” team.

Your situation is unique. That’s why you need to be aware of the potential
testing obstacles your team might face and how you can apply agile values
and principles to overcome them.

SUMMARY
Understanding the activities that testers perform on agile teams helps you
show your own team the value that testers can add. Learning the core prac-
tices of agile testing will help your team deliver software that delights your
customers.

In this chapter, we’ve explained what we mean when we use the term “agile
testing."

• We showed how the Agile Manifesto relates to testing, with its empha-
sis on individuals and interactions, working software, customer col-
laboration, and responding to change.
• We provided some context for this book, including some other terms
we use such as "tester," "programmer," "customer," and related terms
so that we can speak a common language.
• We explained how agile testing, with its focus on business value and
delivering the quality customers require, is different from traditional
testing, which focuses on conformance to requirements.
• We introduced the "whole-team" approach to agile testing, which
means that everyone involved with delivering software is responsible
for delivering high-quality software.
• We advised taking a practical approach by applying agile values and
principles to overcome agile testing obstacles that arise in your
unique situation.
Walker Royce, Kurt Bittner, Michael Perrow

Buy 2 from informIT and Save 35%: enter the coupon code AGILE2009 during checkout.

The Economics of Iterative Software Development
Steering Toward Better Business Results

The authors begin by dispelling widespread myths about software costs, and
explaining why traditional, “engineering-based” software management introduces
unacceptable inefficiencies in today's development environments. Next, they show
business and technical managers how to combine the principles of economics and
iterative development to achieve optimal results with limited resources. Using their
techniques, you can build systems that enable maximum business innovation and
process improvement — and implement software processes that allow you to do so
repeatedly and consistently.

Available
• Book: 9780321509352
• Safari Online
• EBOOK: 0321637666
• KINDLE: B0023SDQYO

Highlights include:
• Why organizations must start managing software development as a core business competency
• How results-based management combines the best of iterative methods and economic principles
• How to repeatedly quantify the value your project is delivering and to quickly correct your course as needed
• How to reduce software project size, complexity, and other “project killers”
• How to identify and eliminate software development processes that don't work
• How to improve development processes, reduce rework, mitigate risk, and identify inefficiencies
• How to create more proficient teams by improving individual skills, team interactions, and organizational capability
• Where to use integrated, automated tools to improve effectiveness
• What to measure, and when: specific metrics for project inception, elaboration, construction, and transition
• How to measure individual projects embedded in broader programs or technical initiatives
• How to change your software development culture to make the most of this book's techniques

The Economics of Iterative Software Development will help both business
and technical managers make better decisions throughout the software
development process — and it will help team and project leaders keep any
project or initiative on track, so they can deliver more value faster.

About the Authors
Walker Royce, vice president of IBM's Rational Services, has managed many large
software projects, consulted with many software development organizations, and
developed innovative approaches to software management. He is author of Software
Project Management (Addison-Wesley).
Kurt Bittner, CTO for the Americas at Ivar Jacobson Consulting, has twenty-seven
years' software experience in roles ranging from developer and project manager to
architect and business leader. He coauthored Use-Case Modeling and Managing
Iterative Software Development Projects (both from Addison-Wesley).
Mike Perrow, writer and editor for IBM Software Group, is founding editor of The
Rational Edge online magazine.

informit.com/aw

3
• • •

TRENDS IN SOFTWARE ECONOMICS

Over the past two decades, the software industry has moved progres-
sively toward new methods for managing the ever-increasing com-
plexity of software projects. We have seen evolutions and
revolutions, with varying degrees of success and failure. Although
software technologies, processes, and methods have advanced
rapidly, software engineering remains a people-intensive process.
Consequently, techniques for managing people, technology,
resources, and risks have profound leverage.
The early software approaches of the 1960s and 1970s can
best be described as craftsmanship, with each project using custom
or ad-hoc processes and custom tools that were quite simple in
their scope. By the 1980s and 1990s, the software industry had
matured and was starting to exhibit signs of becoming more of an
engineering discipline. However, most software projects in this era
were still primarily exploring new technologies and approaches
that were largely unpredictable in their results and marked by dis-
economies of scale. In recent years, however, new techniques that
aggressively attack project risk, leverage automation to a greater
degree, and exhibit much-improved economies of scale have
begun to grow in acceptance. Much-improved software economics
are already being achieved by leading software organizations who
use these approaches.
Let’s take a look at one successful model for describing soft-
ware economics.

A SIMPLIFIED MODEL OF SOFTWARE ECONOMICS

There are several software cost models in use today. The most popu-
lar, open, and well-documented model is the COnstructive COst
MOdel (COCOMO), which has been widely used by the industry
for 20 years. The latest version, COCOMO II, is the result of a col-
laborative effort led by the University of Southern California (USC)
Center for Software Engineering, with the financial and technical
support of numerous industry affiliates. The objectives of this team
are threefold:

• To develop a software cost and schedule estimation model
for the lifecycle practices of the post-2000 era
• To develop a software project database and tool support for
improvement of the cost model
• To provide a quantitative analytic framework for evaluating
software technologies and their economic impacts

The accuracy of COCOMO II allows its users to estimate
cost within 30% of actuals, 74% of the time. This level of unpre-
dictability in the outcome of a software development process
should be truly frightening to any software project investor, espe-
cially in view of the fact that few projects ever perform better than
expected.
The COCOMO II cost model includes numerous parameters
and techniques for estimating a wide variety of software develop-
ment projects. For the purposes of this discussion, we will abstract
COCOMO II into a function of four basic parameters:

• Complexity. The complexity of the software solution is typ-
ically quantified in terms of the size of human-generated
components (the number of source instructions or the
number of function points) needed to develop the features
in a usable product.
• Process. This refers to the process used to produce the end
product, and in particular its effectiveness in helping devel-
opers avoid “overhead” activities.
• Team. This refers to the capabilities of the software engi-
neering team, and particularly their experience with both
the computer science issues and the application domain
issues for the project at hand.
• Tools. This refers to the software tools a team uses for devel-
opment—that is, the extent of process automation.

The relationships among these parameters in modeling the
estimated effort can be expressed as follows:

Effort = (Team) × (Tools) × (Complexity)^(Process)

Schedule estimates are computed directly from the effort esti-
mate and process parameters. Reductions in effort generally result
in reductions in schedule estimates. To simplify this discussion, we
can assume that the “cost” includes both effort and time. The com-
plete COCOMO II model includes several modes, numerous para-
meters, and several equations. This simplified model enables us to
focus the discussion on the more discriminating dimensions of
improvement.
What constitutes a good software cost estimate is a very tough
question. In our experience, a good estimate can be defined as one
that has the following attributes:

• It is conceived and supported by a team accountable for per-
forming the work, consisting of the project manager, the
architecture team, the development team, and the test team.

• It is accepted by all stakeholders as ambitious but realizable.


• It is based on a well-defined software cost model with a
credible basis and a database of relevant project experi-
ence that includes similar processes, similar technologies,
similar environments, similar quality requirements, and
similar people.
• It is defined in enough detail for both developers and man-
agers to objectively assess the probability of success and to
understand key risk areas.

Although several parametric models have been developed to
estimate software costs, they can all be generally abstracted into
the form given above. One very important aspect of software eco-
nomics (as represented within today’s software cost models) is that
the relationship between effort and size exhibits a diseconomy of
scale. The software development diseconomy of scale is a result of
the “process” exponent in the equation being greater than 1.0. In
contrast to the economics for most manufacturing processes, the
more software you build, the greater the cost per unit item. It is
desirable, therefore, to reduce the size and complexity of a project
whenever possible.
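
A brief numeric sketch in Python makes this diseconomy visible. The coefficients and the exponent of 1.10 below are assumed values chosen only for illustration; they are not calibrated COCOMO II constants.

# Simplified relationship from above: Effort = (Team) x (Tools) x (Complexity)^(Process).
# The multipliers and the 1.10 exponent are assumptions for this example only.

def effort(team: float, tools: float, complexity: float, process_exponent: float) -> float:
    return team * tools * complexity ** process_exponent

TEAM, TOOLS, PROCESS_EXPONENT = 1.0, 1.0, 1.10  # exponent > 1.0 yields a diseconomy of scale

small_project = effort(TEAM, TOOLS, 100_000, PROCESS_EXPONENT)  # 100,000 source lines
large_project = effort(TEAM, TOOLS, 200_000, PROCESS_EXPONENT)  # twice the size

print(f"Doubling size multiplies effort by {large_project / small_project:.2f}")    # about 2.14
print(f"Unit cost grows by a factor of {(large_project / 2) / small_project:.2f}")  # about 1.07

Under these assumed numbers, doubling the size of the system increases total effort by roughly 2.14 times, so the cost per unit of software rises by about 7 percent, which is precisely the diseconomy of scale described above.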

SOFTWARE ENGINEERING: A 40-YEAR HISTORY

Software engineering is dominated by intellectual activities focused
on solving problems with immense complexity and numerous
unknowns in competing perspectives. We can characterize three
generations of software development as follows:

1. 1960s and 1970s: Craftsmanship. Organizations used vir-
tually all custom tools, custom processes, and custom com-
ponents built in primitive languages. Project performance
was highly predictable but poor: Cost, schedule, and qual-
ity objectives were almost never met.
2. 1980s and 1990s: Early Software Engineering. Organiza-
tions used more repeatable processes, off-the-shelf tools,
and about 70% of their components were built in higher
level languages. About 30% of these components were
available as commercial products, including the operating
system, database management system, networking, and
graphical user interface. During the 1980s, some organiza-
tions began achieving economies of scale, but with the
growth in applications’ complexity (primarily in the move
to distributed systems), the existing languages, techniques,
and technologies were simply insufficient.
3. 2000 and later: Modern Software Engineering. Modern
practice is rooted in the use of managed and measured
processes, integrated automation environments, and mostly
(70%) off-the-shelf components. Typically, only about 30%
of components need to be custom built.

Figure 3.1 illustrates the economics associated with these
three generations of software development. The ordinate of the
graph refers to software unit costs (per source line of code
[SLOC], per function point, per component—take your pick)
realized by an organization. The abscissa represents the lifecycle
growth in the complexity of software applications developed by
the organization.
Technologies for achieving reductions in complexity/size,
process improvements, improvements in team effectiveness, and
tool automation are not independent of one another. In each new
generation, the key is complementary growth in all technologies.
For example, in modern approaches, process advances cannot
be used successfully without component technologies and tool
automation.

Craftsmanship (1960s–1970s): waterfall model, functional design, diseconomy of scale.
Complexity/size: 100% custom. Process: ad hoc. Team: predominantly untrained.
Environments/tools: proprietary, not integrated. Typical project performance:
predictable; always over budget and over schedule.

Early Software Engineering (1980s–1990s): process improvement, encapsulation based,
diseconomy of scale. Complexity/size: 30% component based, 70% custom. Process:
repeatable. Team: mix of trained and untrained. Environments/tools: mix of
proprietary and commercial, not integrated. Typical project performance:
unpredictable; infrequently on budget and on schedule.

Modern Software Engineering (2000 and on): iterative development, component based,
return on investment (software ROI). Complexity/size: 70% component based, 30%
custom. Process: managed/measured. Team: predominantly trained. Environments/tools:
commercial and integrated. Typical project performance: predictable; usually on
budget and on schedule.
FIGURE 3.1 Trends in software economics

KEYS TO IMPROVEMENT: A BALANCED APPROACH

Improvements in the economics of software development have been
not only difficult to achieve, but also difficult to measure and sub-
stantiate. In software textbooks, trade journals, and market literature,
the topic of software economics is plagued by inconsistent jargon,
inconsistent units of measure, disagreement among experts, and
unending hyperbole. If we examine only one aspect of improving
software economics, we are able to draw only narrow conclusions.
Likewise, if an organization focuses on improving only one aspect of
its software development process, it will not realize any significant
economic improvement—even though it may make spectacular
improvements in this single aspect of the process.
The key to substantial improvement in business performance
is a balanced attack across the four basic parameters of the simpli-
fied software cost model: complexity, process, team, and tools.
These parameters are in priority order for most software domains. In
our experience, the following discriminating approaches have made
a difference in improving the economics of software development
and integration:

1. Reduce the size or complexity of what needs to be developed.


• Reduce the amount of developed code by understand-
ing business needs and delivering only that which is
absolutely essential to satisfying those needs.
• Reduce the amount of human-generated code through
component-based technology and use of higher levels
of abstraction.
• Reuse existing functionality, whether through direct
code reuse or use of service-oriented architectures.
• Reduce the amount of functionality delivered in a sin-
gle release to shorten the release cycle and reduce com-
plexity; deliver increments of functionality in a series of
releases.
2. Improve the development process.
• Reduce scrap and rework by transitioning from a water-
fall process to a modern, iterative development process.
• Attack significant risks early through an architecture-
first focus.
• Evaluate areas of inefficiency and ineffectiveness and
improve practices in response.

3. Create more proficient teams.


• Improve individual skills.
• Improve team interactions.
• Improve organizational capability.
4. Use integrated tools that exploit more automation.
• Improve human productivity through advanced levels
of automation.
• Eliminate sources of human error.
• Support improvements in areas of process weakness.

Most software experts would also stress the significant depen-
dencies among these trends. For example, new tools enable complex-
ity reduction and process improvements; size-reduction approaches
lead to process changes; and process improvements drive tool
advances.
In addition, IT executives need to consider other trends in
software economics whose importance is increasing. These
include the lifecycle effects of commercial components-based
solutions and rapid development (often a source of maintenance
headaches); the effects of service-oriented architectures; the effects
of user priorities and value propositions (often keys to business case
analysis and to the management of scope and expectations); and
the effects of stakeholder/team collaboration and shared vision
achievement (often keys to rapid adaptation to changes in the IT
marketplace).

SUMMARY

The evolution of software project management since the 1960s has
moved through the stages of individual craftsmanship, through the
application of engineering principles, to the beginnings of repeat-
able, somewhat predictable processes based on a better understand-
ing of project risk coupled with the use of automation in the process.

This rise in predictability has allowed the emergence of cost estima-
tion techniques, the most popular of which is COCOMO II.
The cost of a software project can best be estimated in terms of
the four essential COCOMO II parameters: complexity, process,
teams, and tools. Cost improvements result when the following occur:

1. Complexity can be reduced, either in the finished product
or in the iterations produced during the project lifecycle.
2. The process can be improved by addressing risks first and
reducing human error through automation.
3. Teams can become more efficient through skill enhance-
ment and improved communication.
4. Automated tools can be used to heighten productivity and
strengthen areas of the process.

In the next sections, we will elaborate on the approaches listed
above for achieving improvements in each of the four dimensions.
These approaches represent patterns of success we have observed
among successful software development organizations that have
made dramatic leaps in improving the economics of their software
development efforts.
Bruce Powel Douglass

Buy 2 from informIT and Save 35%: enter the coupon code AGILE2009 during checkout.

Real-Time Agility
The Harmony/ESW Method for Real-Time and Embedded Systems Development

“Regardless of your perceptions of agile, this is a must read! Douglass's book is
a powerful and practical guide to a well-defined process that will enable engineers
to confidently navigate the complexity, risk, and variability of real-time and
embedded systems–including CMMI compliance. From requirements specification
to product delivery, whatever your modeling and development environment,
this is the instruction manual.”
–Mark Scoville, Software Architect

Real-time and embedded systems face the same challenges as traditional software
development: shrinking budgets and shorter timeframes. However, these systems
can be even more difficult to successfully develop due to additional requirements
for timeliness, safety, high reliability, minimal resource usage, and, in some cases,
the need to support rigorous industry standards.

In Real-Time Agility, leading embedded-systems consultant Bruce Powel Douglass
reveals how to leverage the best practices of agile development to address all
these challenges. Douglass introduces the Harmony process: a proven, start-to-
finish approach to software development that can reduce costs, save time, and
eliminate potential defects.

Replete with examples, this book provides an ideal tutorial in agile methods for
real-time and embedded-systems developers. It also serves as an invaluable
reference guide “in the heat of battle” for developers working to advance projects,
both large and small.

Coverage includes
• How Model-Driven Development (MDD) and agile methods work synergistically
• The Harmony process, including roles, workflows, tasks, and work products
• Phases in the Harmony microcycle and their implementation
• Initiating a real-time agile project, including the artifacts you may (or may not) need
• Agile analysis, including the iteration plan, clarifying requirements, and validation
• The three levels of agile design–architectural, mechanistic, and detailed
• Continuous integration strategies and end-of-the-microcycle validation testing
• How Harmony's agile process self-optimizes by identifying and managing issues related to schedule, architecture, risks, workflows, and the process itself

Available
• Book: 9780321545497
• Safari Online
• EBOOK: 0321617118

About the Author
Bruce Powel Douglass is chief evangelist for IBM Rational, a leading producer of
tools for real-time systems development. He contributed to the original specification
of the Unified Modeling Language and is former co-chair of the Object Management
Group's Real-Time Analysis and Design Working Group. He consults to many
companies and organizations on building both small- and large-scale, real-time,
safety-critical systems. He is the author of several books showing how to apply
software development best practices in real-time and embedded systems development,
including Doing Hard Time, Real-Time UML, Real-Time Design Patterns (all from
Addison-Wesley) and Real-Time UML Workshop for Embedded Systems (Elsevier).

informit.com/aw

Chapter 1

Introduction to Agile and Real-Time Concepts

Different people mean different things when they use the term agile. The term
was first used to describe a lightweight approach to performing project develop-
ment after the original term, Extreme Programming (XP),1 failed to inspire le-
gions of managers entrusted to oversee development projects. Basically, agile
refers to a loosely integrated set of principles and practices focused on getting
the software development job done in an economical and efficient fashion.
This chapter begins by considering why we need agile approaches to soft-
ware development and then discusses agile in the context of real-time and em-
bedded systems. It then turns to the advantages of agile development processes
as compared to more traditional approaches.

The Agile Manifesto


A good place to start to understand agile methods is with the agile manifesto.2
The manifesto is a public declaration of intent by the Agile Alliance, consisting
of 17 signatories including Kent Beck, Martin Fowler, Ron Jeffries, Robert
Martin, and others. Originally drafted in 2001, this manifesto is summed up in
four key priorities:

• Individuals and interactions over processes and tools

• Working software over comprehensive documentation

1. Note that important acronyms and terms are defined in the Glossary.
2. http://agilemanifesto.org. Martin Fowler gives an interesting history of the drafting
at http://martinfowler.com/articles/agileStory.html.


• Customer collaboration over contract negotiation

• Responding to change over following a plan

To support these statements, they give a set of 12 principles. I’ll state them
here to set the context of the following discussion:

• Our highest priority is to satisfy the customer through early and continu-
ous delivery of valuable software.

• Welcome changing requirements, even late in development. Agile processes
harness change for the customer's competitive advantage.

• Deliver working software frequently, from a couple of weeks to a couple
of months, with a preference to the shorter timescale.

• Business people and developers must work together daily throughout the
project.

• Build projects around motivated individuals. Give them the environment
and support they need, and trust them to get the job done.

• The most efficient and effective method of conveying information to and
within a development team is face-to-face conversation.

• Working software is the primary measure of progress.

• Agile processes promote sustainable development. The sponsors, develop-
ers, and users should be able to maintain a constant pace indefinitely.

• Continuous attention to technical excellence and good design enhances agility.

• Simplicity—the art of maximizing the amount of work not done—is essential.

• The best architectures, requirements, and designs emerge from self-organizing
teams.

• At regular intervals, the team reflects on how to become more effective,
then tunes and adjusts its behavior accordingly.

Agile methods have their roots in the XP (Extreme Programming3) move-
ment based largely on the work of Kent Beck and Ward Cunningham. Both

3. See www.xprogramming.com/what_is_xp.htm or Kent Beck's Extreme Programming
Explained (Boston: Addison-Wesley, 2000) for an overview.

agile and XP have been mostly concerned with IT systems and are heavily
code-based. In this book, I will focus on how to effectively harness the mani-
festo’s statements and principles in a different vertical market—namely, real-
time and embedded—and how to combine them with modeling to gain the
synergistic benefits of model-driven development (MDD) approaches.4

Note
Agility isn’t limited to small projects. Agile@Scale is an IBM initiative
to bring the benefits of agility to larger-scale systems and projects. This
initiative includes agile project tool environments such as Rational
Team Concert (RTC; based on the Jazz technology platform). Inter-
ested readers are referred to www-01.ibm.com/software/rational/agile
and www-01.ibm.com/software/rational/jazz/features.

Why Agile?
But why the need for a concept such as “agile” to describe software develop-
ment? Aren’t current software development processes good enough?
No, not really.
A process, in this context, can be defined as “a planned set of work tasks per-
formed by workers in specific roles resulting in changes of attributes, state, or
other characteristics of one or more work products.” The underlying assump-
tions are the following:

• The results of using the process are repeatable, resulting in a product with
expected properties (e.g., functionality and quality).

• The production of the goal state of the work products is highly predictable
when executing the process in terms of the project (e.g., cost, effort, calendar
time) and product (e.g., functionality, timeliness, and robustness) properties.

• People can be treated as anonymous, largely interchangeable resources.

• The problems of software development are infinitely scalable—that is,


doubling the resources will always result in halving the calendar time.

4. A good place for more information about agile modeling is Scott Ambler’s agile
modeling Web site, www.agilemodeling.com.

As it turns out, software is hard to develop. Most existing development
processes are most certainly not repeatable or predictable in the sense above.
There are many reasons proposed for why that is. For myself, I think software is
fundamentally complex—that is, it embodies the “stuff” of complexity. That’s
what software is best at—capturing how algorithms and state machines manipu-
late multitudes of data within vast ranges to achieve a set of computational re-
sults. It’s “thought stuff,” and that’s hard.
The best story I’ve heard about software predictability is from a blog on the
SlickEdit Web site by Scott Westfall called "The Parable of the Cave" (see
sidebar).5 Estimating software projects turns out to be remarkably similar to es-
timating how long it will take to explore an unknown cave, yet managers often
insist on being given highly precise estimates.

The Parable of the Cave

Two people stand before a cave. One is the sagely manager of a cave ex-
ploring company whose wisdom is only exceeded by his wit, charm, and
humility. Let’s call him, oh, “Scott.” The other is a cave explorer of inde-
terminate gender who bears no resemblance to any programmers past or
present that this author may have worked with and whose size may be big
or little. Let’s call him/her “Endian.”
“Endian,” said Scott in a sagely voice that was both commanding and
compassionate, “I need you to explore this cave. But before you do, I need
to know how long you think it will take, so that I may build a schedule
and tell the sales team when the cave will be ready.”
“Great Scott,” replied Endian using the title bestowed upon Scott by
his admiring employees, “how can I give you an answer when surely you
know I have never been in this cave before? The cave may be vast, with
deep chasms. It may contain a labyrinth of underwater passages. It may
contain fearsome creatures that must first be vanquished. How can I say
how long it will take to explore?”
Scott pondered Endian’s words and after a thorough analysis that
might have taken days for others but was completed in but a moment for
him, he replied, “Surely this is not the first cave you explored. Are there
no other caves in this district? Use your knowledge of those caves to form
an estimate.”

5. Used with permission of the author, Scott Westfall. The SlickEdit Web site can be
found at http://blog.slickedit.com/?p207.

Endian heard these words and still his doubt prevailed. “Your words
are truly wise,’’ said Endian, “but even within a district the caves may
vary, one from another. Surely, an estimate based on the size of another
cave cannot be deemed accurate.”
“You have spoken truly, good Endian,” replied Scott in a fatherly, sup-
porting tone that lacked any trace of being patronizing as certain cynical
readers may think. “Here, take from me this torch and this assortment of
cheeses that you may explore the cave briefly. Return ere the morrow and
report what you have learned.”
The parable continues like this for pages, as parables are known to do.
Let’s see, Endian enters the cave . . . something about a wretched beast of
surpassing foulness . . . he continues on . . . hmm, that’s what the assort-
ment of cheeses were for. Ah! Here we go.
Endian returns to Scott, his t-shirt ripped and his jeans covered in mud.
Being always concerned with the well-being of his employees, Scott offers
Endian a cool drink, then asks, “Endian, what news of the cave? Have you
an estimate that I can use for my schedule? What shall I tell the sales team?”
Endian considers all that he has seen and builds a decomposition con-
taining the many tasks necessary to explore the cave based on his earlier
reconnoitering. He factors in variables for risk and unknowns, and then
he responds, “Two weeks.”

In addition, the scope of software is increasing rapidly. Compared to the
scope of the software functionality in decades past, software these days does
orders of magnitude more. Back in the day,6 my first IBM PC had 64kB of
memory and ran a basic disk operating system called DOS. DOS fit on a single
360kB floppy disk. Windows XP weighs in at well over 30 million lines of code;
drives hundreds of different printers, disks, displays, and other peripherals; and
needs a gigabyte of memory to run comfortably. These software-intensive sys-
tems deliver far more functionality than the electronic-only devices they re-
place. Compare, for example, a traditional phone handset with a modern cell
phone. Or compare a traditional electrocardiogram (ECG) that drove a paper
recorder like the one I used in medical school with a modern ECG machine—
the difference is remarkable. The modern machine can do everything the old

6. I know I’m dating myself, but my IBM PC was my fifth computer. I still remember
fondly the days of my TRS-80 model I computer with its 4kB of memory . . .

machine did, plus detect a wide range of arrhythmias, track patient data,
produce reports, and measure noninvasive blood pressure, blood oxygen con-
centration, a variety of temperatures, and even cardiac output.
Last, software development is really invention, and invention is not a highly
predictable thing. In electronic and mechanical engineering, a great deal of the
work is conceptually simply putting pieces together to achieve a desired goal,
but in software those pieces are most often invented (or reinvented) for every
project. This is not to oversimplify the problems of electronic or mechanical
design but merely to point out that the underlying physics of those disciplines is
far more mature and well understood than that of software.
But it doesn’t really matter if you believe my explanations; the empirical re-
sults of decades of software development are available. Most products are late.7
Most products are delivered with numerous and often significant defects. Most
products don’t deliver all the planned functionality. We have become used to re-
booting our devices, but 30 years ago it would have been unthinkable that we
would have to turn our phones off, remove the batteries, count to 30, reinsert
the batteries, and reboot our phones.8 Unfortunately, that is the “state of the
art” today.
To this end, many bright people have proposed processes as a means of com-
bating the problem, reasoning that if people engineered software rather than
hacked away at it, the results would be better. And they have, to a large degree,
been better. Nevertheless, these approaches have been based on the premise that
software development can be treated the same as an industrial manufacturing
process and achieve the same results. Industrial automation problems are highly
predictable, and so this approach makes a great deal of sense when the underly-
ing mechanisms driving the process are very well understood and are inherently
linear (i.e., a small change in input results in an equally small change in output).
It makes less sense when the underlying mechanisms are not fully understood or
the process is highly nonlinear. Unfortunately, software development is neither
fully understood nor even remotely linear.
It is like the difference in the application of fuzzy logic and neural networks
to nonlinear control systems. Fuzzy logic systems work by applying the concept
of partial membership and using a centroid computation to determine outputs.

7. See, for example, Michiel van Genuchten, “Why Is Software Late? An Empirical
Study of Reasons for Delay in Software Development,” IEEE Transactions on Soft-
ware Engineering 17, no. 6 (June 1991).
8. Much as I love my BlackBerry, I was amazed that a customer service representative
recommended removing the battery to reboot the device daily.

The partial membership of different sets (mapping to different equations) is
defined by set membership rules, so fuzzy logic systems are best applied when
the rules are known and understood, such as in speed control systems.
Neural networks, on the other hand, don’t know or care about rules. They
work by training clusters of simple but deeply interconnected processing units
(neurons). The training involves applying known inputs (“exemplars”) and ad-
justing the weights of the connections until you get the expected outputs. Once
trained, the neural network can produce results from previously unseen data
input sets and produce control outputs. The neural network learns the effects of
the underlying mechanisms from actual data, but it doesn’t in any significant
way “understand” those mechanisms. Neural networks are best used when the
underlying mechanisms are not well understood because they can learn the
data transformations inherent in the mechanisms.
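
For readers unfamiliar with the mechanics mentioned above, the following toy Python sketch shows partial set membership combined with a centroid (weighted-average) computation for a speed controller. The membership functions, the rule outputs, and the numbers are invented purely for illustration and are not drawn from any real control system.

# Toy fuzzy-control sketch: the speed error belongs partially to labeled sets, and a
# weighted average (centroid) of the rule outputs produces the throttle adjustment.
# All membership functions and output values are invented for this example.

def too_slow(error):                  # error = target speed minus actual speed
    return max(0.0, min(1.0, error / 10.0))

def about_right(error):
    return max(0.0, 1.0 - abs(error) / 10.0)

def too_fast(error):
    return max(0.0, min(1.0, -error / 10.0))

def throttle_adjustment(error):
    # Rules: too slow -> +5, about right -> 0, too fast -> -5 (illustrative outputs)
    rules = [(too_slow(error), +5.0), (about_right(error), 0.0), (too_fast(error), -5.0)]
    weight = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / weight if weight else 0.0

print(throttle_adjustment(4.0))  # partially "too slow", so the centroid lands at +2.0

The sketch illustrates only the rule-based half of the comparison; a neural network would instead learn an equivalent mapping from example data without any explicit rules.
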
Rigorously planned processes are akin to fuzzy logic—they make a priori as-
sumptions about the underlying mechanisms. When they are right, a highly pre-
dictable scheme results. However, if those a priori assumptions are either wrong
or missing, then they yield less successful results. In this case, the approach
must be tuned with empirical data. To this end, most traditional processes do
“extra” work and produce “extra” products to help manage the process. These
typically include

• Schedules

• Management plans

• Metrics (e.g., source lines of code [SLOC] or defect density)


• Peer and management reviews and walk-throughs

• Progress reports

And so on.
The idea is that the execution of these tasks and the production of the work
products correlate closely with project timeliness and product functionality and
quality. However, many of the tasks and measures used don’t correlate very
well, even if they are easy to measure. Even when they do correlate well, they
incur extra cost and time.
Agile methods are a reaction in the developer community to the high cost and
effort of these industrial approaches to software development. The mechanisms by
which we invent software are not so well understood as to be highly predictable.
Further, small changes in requirements or architecture can result in huge differ-
ences in development approach and effort. Because of this, empiricism, discipline,
quality focus, and stakeholder focus must all be present in our development
processes. To this end, agile methods are not about hacking code but instead are
about focusing effort on the things that demonstrably add value and defocusing on
efforts that do not.

Properties of Real-Time Embedded Systems


Of course, software development is hard. Embedded software development is
harder. Real-time embedded software is even harder than that. This is not to
minimize the difficulty in reliably developing application software, but there are
a host of concerns with real-time and embedded systems that don’t appear in
the production of typical applications.
An embedded system is one that contains at least one CPU but does not pro-
vide general computing services to the end users. A cell phone is considered an
embedded computing platform because it contains one or more CPUs but pro-
vides a dedicated set of services (although the distinction is blurred in many
contemporary cell phones). Our modern society is filled with embedded com-
puting devices: clothes washers, air traffic control computers, laser printers, tel-
evisions, patient ventilators, cardiac pacemakers, missiles, global positioning
systems (GPS), and even automobiles—the list is virtually endless.
The issues that appear in real-time embedded systems manifest themselves on
four primary fronts. First, the optimization required to effectively run in highly
resource-constrained environments makes embedded systems more challenging
to create. It is true that embedded systems run the gamut from 8-bit processes in
dishwashers and similar machinery up to collaborating sets of 64-bit computers.
Nevertheless, most (but not all) embedded systems are constrained in terms of
processor speed, memory, and user interface (UI). This means that many of the
standard approaches to application development are inadequate alone and must
be optimized to fit into the computing environment and perform their tasks.
Thus embedded systems typically require far more optimization than standard
desktop applications. I remember writing a real-time operating system (RTOS)
for a cardiac pacemaker that had 32kB of static memory for what amounted to
an embedded 6502 processor.9 Now that’s an embedded system!
Along with the highly constrained environments, there is usually a need to
write more device-driver-level software for embedded systems than for standard
application development. This is because these systems are more likely to have
custom hardware for which drivers do not exist, but even when they do exist,
they often do not meet the platform constraints. This means that not only must
the primary functionality be developed, but the low-level device drivers must be
written as well.

9. It even had a small file system to manage different pacing and monitoring applications.
The real-time nature of many embedded systems means that predictability
and schedulability affect the correctness of the application. In addition, many
such systems have high reliability and safety requirements. These characteristics
require additional analyses, such as schedulability analysis (e.g., rate monotonic
analysis, or RMA), reliability analysis (e.g., failure modes and effects analysis, or
FMEA), and safety analysis (e.g., fault tree analysis, or FTA). In addition to “doing
the math,” effort must be made to ensure that these additional requirements
are met.
Last, a big difference between embedded and traditional applications is the
nature of the so-called target environment—that is, the computing platform on
which the application will run. Most desktop applications are “hosted” (writ-
ten) on the same standard desktop computer that serves as the target platform.
This means that a rich set of testing and debugging tools is available for verify-
ing and validating the application. In contrast, most embedded systems are
“cross-compiled” from a desktop host to an embedded target. The embedded
target lacks the visibility and control of the program execution found on the
host, and most of the desktop tools are useless for debugging or testing the ap-
plication on its embedded target. The debugging tools used in embedded sys-
tems development are almost always more primitive and less powerful than
their desktop counterparts. Not only are the embedded applications more com-
plex (due to the optimization), and not only do they have to drive low-level de-
vices, and not only must they meet additional sets of quality-of-service (QoS)
requirements, but the debugging tools are far less capable as well.
It should be noted that another difference exists between embedded and
“IT” software development. IT systems are often maintained systems that con-
stantly provide services, and software work, for the most part, consists of small
incremental efforts to remove defects and add functionality. Embedded systems
differ in that they are released at an instant in time and provide functionality at
that instant. It is a larger effort to update embedded systems, so that they are
often, in fact, replaced rather than being “maintained” in the IT sense. This
means that IT software can be maintained in smaller incremental pieces than
can embedded systems, and “releases” have more significance in embedded
software development.
A “real-time system” is one in which timeliness is important to correctness.
Many developers incorrectly assume that “real-time” means “real fast.” It clearly
does not. Real-time systems are “predictably fast enough” to perform their tasks.
If processing your eBay order takes an extra couple of seconds, the server
application can still perform its job. Such systems are not usually considered real-
time, although they may be optimized to handle thousands of transactions per
second, because if the system slows down, it doesn’t affect the system’s correct-
ness. Real-time systems are different. If a cardiac pacemaker fails to induce cur-
rent through the heart muscle at the right time, the patient’s heart can go into
fibrillation. If the missile guidance system fails to make timely corrections to its
attitude, it can hit the wrong target. If the GPS satellite doesn’t keep a highly pre-
cise measure of time, position calculations based on its signal will simply be
wrong.
Real-time systems are categorized in many ways. The most common is the
broad grouping into “hard” and “soft.” “Hard” real-time systems exhibit signif-
icant failure if every single action doesn’t execute within its time frame. The
measure of timeliness is called a deadline—the time after action initiation
by which the action must be complete. Not all deadlines must be in the microsec-
ond time frame to be real-time. The F2T2EA (Find, Fix, Track, Target, Engage,
Assess) Kill Chain is a fundamental aspect of almost all combat systems; the end-
to-end deadline for this compound action might be on the order of 10 minutes,
but pilots absolutely must achieve these deadlines for combat effectiveness.
The value of the completion of an action as a function of time is an important
concept in real-time systems and is expressed as a “utility function” as shown in
Figure 1.1. This figure expresses the value of the completion of an action to the
user of the system. In reality, utility functions are smooth curves but are most often
modeled as discontinuous step functions because this eases their mathematical
analysis. In the figure, the value of the completion of an action is high until an in-
stant in time, known as the deadline; at this point, the value of the completion of
the action is zero. The length of time from the current time to the deadline is a
measure of the urgency of the action. The height of the function is a measure of the
criticality or importance of the completion of the action. Criticality and urgency
are important orthogonal properties of actions in any real-time system. Different
scheduling schemas optimize urgency, others optimize importance, and still others
support a fairness (all actions move forward at about the same rate) doctrine.

Figure 1.1 Utility function (the value of completing an action versus time; the height of the step is the criticality, or importance, and the time remaining until the deadline is the urgency)
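To make this concrete, the short C sketch below is a minimal illustration of the step-function model of Figure 1.1; the type and function names are illustrative inventions, not part of any particular operating system or of the Harmony/ESW process.

/* Minimal sketch of the step-function utility model (illustrative names). */
typedef struct {
    double deadline;     /* absolute time by which the action must complete */
    double criticality;  /* importance: the height of the utility function  */
} Action;

/* Value of completing the action at time t: full criticality up to the
 * deadline, zero afterward (the discontinuous step of Figure 1.1).        */
double utility(const Action *a, double t) {
    return (t <= a->deadline) ? a->criticality : 0.0;
}

/* Urgency: how much time remains until the deadline. */
double urgency(const Action *a, double now) {
    return a->deadline - now;
}

A “soft” action would be modeled by letting the value fall off gradually after the deadline rather than dropping to zero.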
Actions are the primitive building blocks of concurrency units, such as tasks
or threads. A concurrency unit is a sequence of actions in which the order is
known; the concurrency unit may have branch points, but the sequence of ac-
tions within a set of branches is fully deterministic. This is not true for the ac-
tions between concurrency units. Between concurrency units, the sequence of
actions is not known, or cared about, except at explicit synchronization
points.
Figure 1.2 illustrates this point. The flow in each of the three tasks (shown
on a UML activity diagram) is fully specified. In Task 1, for example, the se-
quence is that Action A occurs first, followed by Action B and then either Ac-
tion C or Action D. Similarly, the sequence for the other two tasks is fully
defined. What is not defined is the sequence between the tasks. Does Action C
occur before or after Action W or Action Gamma? The answer is You don’t
know and you don’t care. However, we know that before Action F, Action X,
and Action Zeta can occur, Action E, Action Z, and Action Gamma have all
occurred. This is what is meant by a task synchronization point.
Because synchronization points, as well as resource sharing, are common in
real-time systems, they require special attention not often found in the
development of IT systems.
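A synchronization point can be sketched, for illustration only, with POSIX threads and a barrier: the three tasks below run their internal actions in a fixed order but interleave freely with one another until they all reach the barrier. The task bodies and names are placeholders, not code from any real system.

#include <pthread.h>
#include <stdio.h>

/* Hypothetical sketch: three concurrent tasks whose internal action order is
 * fixed, but whose interleaving with one another is unknown except at the
 * explicit synchronization point (a barrier) before their final actions.   */
static pthread_barrier_t sync_point;

static void *task(void *arg) {
    const char *name = (const char *)arg;
    printf("%s: actions before the synchronization point\n", name);
    pthread_barrier_wait(&sync_point);   /* wait until all three tasks arrive */
    printf("%s: actions after the synchronization point\n", name);
    return NULL;
}

int main(void) {
    pthread_t t[3];
    const char *names[3] = { "Task 1", "Task 2", "Task 3" };

    pthread_barrier_init(&sync_point, NULL, 3);
    for (int i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, task, (void *)names[i]);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    pthread_barrier_destroy(&sync_point);
    return 0;
}

The barrier plays the role of the synchronization point in Figure 1.2: nothing after it can run until every task’s preceding actions have completed.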
Within a task, several different properties are important and must be modeled
and understood for the task to operate correctly (see Figure 1.3). Tasks that are
time-based occur with a certain frequency, called the period. The period is the
time between invocations of the task. The variation around the period is called
jitter. For event-based task initiation, the time between task invocations is called
the interarrival time. For most schedulability analyses, the shortest such time,
called the minimum interarrival time, is used for analysis. The time from the ini-
tiation of the task to the point at which its set of actions must be complete is
known as the deadline. When tasks share resources, it is possible that a needed
resource isn’t available. When a necessary resource is locked by a lower-priority
task, the current task must block and allow the lower-priority task to complete
its use of the resource before the original task can run. The length of time the
higher-priority task is prevented from running is known as the blocking time.

Figure 1.2 Concurrency units (Task 1, Task 2, and Task 3 shown as UML activity diagrams; the action sequence within each task is defined, and the tasks join at a synchronization point before Actions F, X, and Zeta)


The fact that a lower-priority task must run even though a higher-priority task
is ready to run is known as priority inversion and is a property of all priority-
scheduled systems that share resources among task threads. Priority inversion
is unavoidable when tasks share resources, but when uncontrolled, it can lead
to missed deadlines. One of the things real-time systems must do is bound prior-
ity inversion (e.g., limit blocking to the depth of a single task) to ensure system
timeliness. The period of time that a task requires to perform its actions, includ-
ing any potential blocking time, is called the task execution time. For analysis,
it is common to use the longest such time period, the worst-case execution
time, to ensure that the system can always meet its deadlines. Finally, the time
between the end of the execution and the deadline is known as the slack time. In
real-time systems, it is important to capture, characterize, and manage all these
task properties.

Figure 1.3 Task time (the period and its jitter, the arriving event that initiates task execution, blocking time while a needed resource is locked, execution time, slack time, and the deadline)
Real-time systems are most often embedded systems as well and carry those
burdens of development. In addition, real-time systems have timeliness and
schedulability constraints. Real-time systems must be timely—that is, they must
meet their task completion time constraints. The entire set of tasks is said to
be schedulable if all the tasks are timely. Real-time systems are not necessarily
(or even usually) deterministic, but they must be predictably bounded in time.
Methods exist to mathematically analyze systems for schedulability,10 and there
are tools11 to support that analysis.

10. See Doing Hard Time: Developing Real-Time Systems with UML, Objects,
Frameworks, and Patterns and Real-Time UML: Advances in the UML for Real-
Time Systems, both written by me and published by Addison-Wesley (1999 and
2004, respectively).
11. For example, see www.tripac.com for information about the RapidRMA tool.
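As one minimal example of “doing the math,” the following sketch stores the timing properties just described and applies the classic Liu and Layland rate-monotonic utilization bound. It assumes independent periodic tasks whose deadlines equal their periods and ignores blocking; the task set itself is hypothetical.

#include <math.h>
#include <stdbool.h>
#include <stdio.h>

/* Timing properties of a periodic task (illustrative; deadlines are assumed
 * equal to periods for this particular test).                              */
typedef struct {
    double period;  /* time between invocations             */
    double wcet;    /* worst-case execution time per period */
} TaskTiming;

/* Liu & Layland sufficient schedulability test for rate-monotonic priority
 * assignment: n independent periodic tasks are schedulable if their total
 * utilization does not exceed n * (2^(1/n) - 1).                           */
bool rma_schedulable(const TaskTiming *tasks, int n) {
    double utilization = 0.0;
    for (int i = 0; i < n; i++)
        utilization += tasks[i].wcet / tasks[i].period;
    double bound = n * (pow(2.0, 1.0 / n) - 1.0);
    printf("utilization = %.3f, bound = %.3f\n", utilization, bound);
    return utilization <= bound;
}

int main(void) {
    TaskTiming set[] = {      /* hypothetical task set, times in ms */
        { 10.0, 2.0 },
        { 20.0, 4.0 },
        { 50.0, 10.0 },
    };
    return rma_schedulable(set, 3) ? 0 : 1;
}

For this hypothetical set the total utilization is 0.60 against a bound of about 0.78, so the sufficient test passes; a task set that fails the test would call for exact response-time analysis or a redesign.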

Safety-critical and high-reliability systems are special cases of real-time and
embedded systems. The term safety means “freedom from accidents or losses”12
and is usually concerned with safety in the absence of faults as well as in the
presence of single-point faults. Reliability is usually a stochastic measure of the
percentage of the time the system delivers services.
Safety-critical systems are real-time systems because safety analysis in-
cludes the property of fault tolerance time—the length of time a fault can be
tolerated before it leads to an accident. They are almost always embedded
systems as well and provide critical services such as life support, flight man-
agement for aircraft, medical monitoring, and so on. Safety and reliability are
assured through the use of additional analysis, such as FTA, FMEA, failure
mode, effects, and criticality analysis (FMECA), and often result in a docu-
ment called the hazard analysis that combines fault likelihood, fault severity,
risk (the product of the previous two), hazardous conditions, fault protection
means, fault tolerance time, fault detection time, and fault protection action
time together. Safety-critical and high-reliability systems require additional
analysis and documentation to achieve approval from regulatory agencies
such as the FAA and FDA.
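The kind of information a hazard analysis gathers can be sketched as a simple record; the field names and numbers below are invented for illustration and do not represent any regulatory format. The timing check reflects the basic constraint that fault detection plus the protective action must fit within the fault tolerance time.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative hazard analysis entry: risk is the product of fault severity
 * and likelihood, and the fault must be detected and handled within the
 * fault tolerance time for the protection means to be effective.           */
typedef struct {
    const char *hazard;
    double severity;               /* e.g., normalized 0..1 or a cost figure */
    double likelihood;             /* estimated probability of the fault     */
    double tolerance_time;         /* time a fault can be tolerated (s)      */
    double detection_time;         /* time to detect the fault (s)           */
    double protection_action_time; /* time for the protective action (s)     */
} HazardEntry;

double risk(const HazardEntry *h) {
    return h->severity * h->likelihood;
}

/* The fault protection scheme is only plausible if detection plus the
 * protective action complete within the fault tolerance time.              */
bool protection_is_timely(const HazardEntry *h) {
    return h->detection_time + h->protection_action_time <= h->tolerance_time;
}

int main(void) {
    HazardEntry loss_of_pacing = {
        "Loss of pacing", 1.0, 1e-6, 5.0, 1.0, 0.5   /* hypothetical values */
    };
    printf("risk = %g, timely = %d\n",
           risk(&loss_of_pacing), protection_is_timely(&loss_of_pacing));
    return 0;
}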
It is not at all uncommon for companies and projects to specify very heavy-
weight processes for the development of these kinds of systems—safety-critical,
high-reliability, real-time, or embedded—as a way of injecting quality into those
systems. And it works, to a degree. However, it works at a very high cost. Agile
methods provide an alternative perspective on the development of these kinds
of systems that is lighter-weight but does not sacrifice quality.

Benefits of Agile Methods


The primary goal of an agile project is to develop working software that
meets the needs of the stakeholders. It isn’t to produce documentation
(although documentation will be part of the delivered system). It isn’t to attend
meetings (although meetings will be held). It isn’t to create schedules (but a
schedule is a critical planning tool for all agile projects). It isn’t to create
productivity metrics (although they will help the team identify problems and
barriers to success).13 You may do all of these things during the pursuit of your

primary goal, but it is key to remember that those activities are secondary and
performed only as a means of achieving your primary goal. Too often, both
managers and developers forget this and lose focus. Many projects spend
significant effort without even bothering to assess whether that effort aids in the
pursuit of the development of the software.

12. Nancy Leveson, Safeware: System Safety and Computers (Reading, MA: Addison-Wesley, 1995).

13. See Scott Ambler’s discussion of acceleration metrics at www.ibm.com/developerworks/blogs/page/ambler?tag=Metrics.
The second most important goal of an agile project is to enable follow-on
software development. This means that the previously developed software must
have an architecture that enables the next set of features or extensions, docu-
mentation so that the follow-on team can understand and modify that software,
support to understand and manage the risks of the development, and an infra-
structure for change and configuration management (CM).
The benefits of agile methods usually discussed are:

• Rapid learning about the project requirements and technologies used to realize them

• Early return on investment (ROI)

• Satisfied stakeholders

• Increased control

• Responsiveness to change

• Earlier and greater reduction in project risk14

• Efficient high-quality development

These are real, if sometimes intangible, benefits that properly applied agile
methods bring to the project, the developers, their company, the customer, and
the ultimate user.

Rapid Learning
Rapid learning means that the development team learns about the project earlier
because they are paying attention. Specifically, agile methods focus on early feed-
back, enabling dynamic planning. This is in contrast to traditional approaches
that involve ballistic planning. Ballistic planning is all done up front with the
expectation that physics will guide the (silver) bullet unerringly to its target
(see Figure 1.4). Agile’s dynamic planning can be thought of as “planning to re-
plan.” It’s not that agile developers don’t make plans; it’s just that they don’t
believe their own marketing hype and are willing to improve their plans as more
information becomes available.

14. See, for example, www.agileadvice.com.

Figure 1.4 Ballistic versus dynamic planning
Since software development is relatively unpredictable, ballistic planning, for
all its popularity, is infeasible. The advantage of early feedback is that it enables
dynamic planning. A Law of Douglass15 is “The more you know, the more you
know.” This perhaps obvious syllogism means that as you work through the proj-
ect, you learn. This deeper understanding of the project enables more accurate
predictions about when the project will be complete and the effort the project will
require. As shown in Figure 1.5, the ongoing course corrections result in decreas-
ing the zone of uncertainty.

Early Return on Investment


Early return on investment means that with an agile approach, partial function-
ality is provided far sooner than in a traditional waterfall process. The latter
delivers all-or-none functionality at the end point, and the former delivers
incremental functionality frequently throughout the duration of the develop-
ment. As you can see in Figure 1.6, agile delivers high value early, with less
incremental value as the system becomes increasingly complete, whereas the
waterfall process delivers nothing until the end.

15. Unpublished work found in the Douglass crypt . . .

Figure 1.5 Reduction in uncertainty

Figure 1.6 Percent value returned over time (percent value returned versus time for an agile process and a waterfall process)
Another way to view this is by looking at incremental value over time, as
shown in Figure 1.7. We see that an agile process delivers increasing value over
time, whereas the waterfall process delivers no value until the end.
Delivering value early is good for a couple of reasons. First, if the funding is re-
moved or the project must end early, something of value exists at the point of ter-
mination. This is not true for the waterfall process, but it is a primary value in an
agile process. Additionally, delivering validated, if partial, functionality early
reduces risk, as we see in Figure 1.8. Exactly how early deliveries do this is a topic
we will discuss in more detail later, but let us say for now that because we validate
each incremental build and we tend to do high-risk things early, we significantly
and quickly reduce the project risk. The waterfall process reduces risk slowly at
first because you only really know about the quality and correctness of things that
you validate, and validation comes only at the end in a waterfall process.

Figure 1.7 Incremental value (incremental value returned over time for an agile process versus a waterfall process)

Figure 1.8 Risk over time (remaining project risk over time for an agile process versus a waterfall process)
How can we return incremental value for a system that is delivered exter-
nally, such as a cell phone or a missile? Every increment period (which the
Harmony/ESW16 process refers to as a microcycle), a system is designed,
implemented, and validated in accordance with its mission statement. This mis-
sion statement identifies the functionality, target platform, architectural intent,
and defect repairs to be included. The incremental functionality is organized

around a small set of use cases running on an identified (but not necessarily
final) target environment. Through the use of good engineering practice, we can
encapsulate away the platform details and ensure that the delivered functional-
ity is correct, given the current target. For example, for one tracking system,
our team originally targeted laptops with simulated radars and created actual
validated functionality on that environment. Over the course of the project, as
hardware became available, we migrated to target hardware of the actual mili-
tary systems. Through this approach, we had high-quality, testable software
earlier than expected.

16. Harmony/Embedded Software. This is one of the members of the IBM Rational Harmony family of processes and is the basis of the content of this book. The process basics will be discussed at some length in Chapter 3.

Satisfied Stakeholders
Stakeholders are simply people who have a stake in the successful outcome of a
project. Projects have all kinds of stakeholders. Customers and marketers are
focused on the functional benefits of the system and are willing to invest real
money to make it happen. Their focus is on specifying to the developers the
needs of the users in a realistic and effective fashion. Managers are stakeholders
who manage that (real or potential) investment for the company to achieve a
timely, cost-effective delivery of said functionality. Their job is to plan and
schedule the project so that it can be produced to satisfy the customer and meet
the users’ needs. The users are the ones who use the system in their work envi-
ronment and need high-quality functionality that enables their workflow to be
correct, accurate, and efficient. All these stakeholders care about the product
but differ in the focus of their concern. The customers care how much they pay
for the system and the degree to which it improves the users’ work. The man-
agers primarily care how much the system costs to develop and how long that
effort takes. The users primarily care about the (positive) impact the system
makes on their work.
Agile methods provide early visibility to validated functionality. This func-
tionality can be demonstrated to the stakeholders and even delivered. This is in
stark contrast to traditional preliminary design review (PDR) and critical design
review (CDR) milestones in which text is delivered that describes promised
functionality in technological terms. Customers can—and should—be involved
in reviewing the functionality of the validated incremental versions of the sys-
tem. Indeed, the functionality can be implemented using a number of different
strategies, depending on what the process optimization criterion is. Possible cri-
teria include the following:

• Highest-risk first

• Most critical first

• Infrastructure first

• Available information first

All other things being equal, we prefer to deliver high-risk first, because this
optimizes early risk reduction. However, if the users are to deploy early versions
of the system, then criticality-first makes more sense. In some cases, we deploy
architectural infrastructure early to enable more complex functionality or prod-
uct variations. And sometimes we don’t have all the necessary information at
our fingertips before we must begin, so the things we don’t know can be put off
until the necessary information becomes available.

Improved Control
Many, if not most, software projects are out of control, to some degree or
another. This is largely because although projects are planned in detail, they
aren’t tracked with any rigor. Even for those projects that are tracked, tracking
is usually done on the wrong things, such as SLOC delivered. Thus most proj-
ects are either not tracked or track the wrong project properties.
Project tracking requires the answers to three questions:

• Why track?

• What should be tracked?

• How should projects be tracked?

Why track? Project teams that don’t know exactly why they are tracking
project properties rarely do a good job. Only by identifying the goals of track-
ing can you decide what measures should be tracked and how to implement the
tracking procedures.
The biggest single reason for project tracking is that plans are always
made in the presence of incomplete knowledge and are therefore inaccurate
to some degree. Tracking enables the project deviance from plan to be identi-
fied early enough to effectively do something about it. Projects should be
tracked so that they can be effectively managed, replanned as appropriate,
and even scrapped if necessary. You can effectively replan only when you
know more than you did when the original plan was made, and that infor-
mation can come from tracking the right things. Put another way, the funda-
mental purpose of tracking is to reduce uncertainty and thereby improve
project control.
What should be tracked? Ideally, tracking should directly reduce uncertainty
in the key project characteristics that relate to the cost, time, effort, and quality
of the product; that is, tracking should directly measure cost, time to comple-
tion, effort to completion, and defect rates. The problem is that these quantities
are not directly measurable.
So projects typically evaluate metrics that are measurable with the expectation
that they correlate with the desired project quantities. Hence, people measure
properties such as lines of code or defects repaired. The flaw in those measures is
that they do not correlate strongly with the project criteria. If, at the end of the
project, you remove lines of code during optimization, are you performing nega-
tive work and reducing the time, cost, or effort? If I don’t know exactly how many
lines of code I’m going to end up with, what does writing another 10,000 lines of
code mean in terms of percent completeness? If I measure cyclomatic complexity,
am I demonstrating that the system is correct? The answer is an emphatic no.
The problem with many of the common metrics is that while they are easy to
measure, they don’t correlate well with the desired information. This is because
those metrics track against the project implementation rather than the project
goal. If you want to measure completeness, measure the number of require-
ments validated, not the number of lines of code written. If you want to
measure quality, measure defect rates, not cyclomatic complexity. The other
measures do add incremental value, but the project team needs to focus on
achievement of the ultimate goal, not weak correlates.
Agile methods provide the best metrics of all—working, validated function-
ality—and they provide those metrics early and often. Agile focuses on deliver-
ing correct functionality constantly, providing natural metrics as to the quality
and completeness of the system over time. This in turn provides improved proj-
ect control because true problems become visible much earlier and in a much
more precise fashion.

Responsiveness to Change
Life happens, often in ways that directly conflict with our opinions about how it
ought to happen. We make plans using the best available knowledge, but that
knowledge is imprecise and incomplete and in some cases just wrong. The impre-
cision means that small incremental errors due to fuzziness in the data can add up
to huge errors by the end of the project—the so-called butterfly effect in chaos
theory.17 Chaos theory is little more than the statement that most systems are

actually nonlinear; by nonlinear we mean that small causes generate effects that
are not proportional to their size. That sums up software development in a nut-
shell: a highly nonlinear transfer function of user needs into executable software.

17. See Edward N. Lorenz, The Essence of Chaos (The Jessie and John Danz Lecture Series) (Seattle: University of Washington Press, 1996).
The incompleteness problem means that not only do we not know things
very precisely, but some things we don’t know at all. I remember one project in
which I was working on a handheld pacemaker program meant to be used
by physicians to monitor and configure cardiac pacemakers. It was based on
a Z-80-based embedded microcomputer with a very nice form factor and
touch screen. The early devices from the Japanese manufacturer provided a
BIOS to form the basis of the computing environment. However, once the
project began and plans were all made, it became apparent that the BIOS
would have to be rewritten for a variety of technically inobvious reasons.
Documentation for the BIOS was available from the manufacturer—but only
in Japanese. The technical support staff was based in Tokyo and spoke only—
you guessed it—Japanese. This little bit of missing information put the project
months behind schedule because we had to reverse-engineer the documenta-
tion from decompiling the BIOS. It wouldn’t be so bad if that was the only
time issues like that came up, but such things seem to come up in every project.
There’s always something that wasn’t planned on—a manufacturer canceling a
design, a tool vendor going out of business, a key person being attracted away
by the competition, a change in company focus, defects in an existing product
sucking up all the development resources, . . . the list goes on and on.
Worst, in some way, is that knowledge you have about which you are both
convinced and incorrect. This can be as varied as delivery dates, effort to per-
form tasks, and availability of target platforms. We all make assumptions, and
the law of averages dictates that when we make 100 guesses, each of which is
90% certain, 10 are still likely to be wrong.
Despite these effects of nonlinearity, incompleteness, and incorrectness, we
still have to develop systems to meet the stakeholders’ needs at a cost they’re
willing to pay within the time frames that meet the company’s schedules. So in
spite of the nonlinearity, we do our best to plan projects as accurately as possi-
ble. And how well do we do that? The answer, from an industry standpoint, is
“not very well at all.”
The alternative to plan-and-pray is to plan-track-replan. Agile methods ac-
cept that development plans are wrong at some level and that you’ll need
to adjust them. Agile methods provide a framework in which you can capture
the change, adjust the plans, and redirect the project at a minimal cost and
effort. The particular agile approach outlined in this book, known as the
Harmony/ESW process, deals with work at three levels of abstraction, as
shown in Figure 1.9.
Figure 1.9 Harmony/ESW timescales (the macrocycle carries stakeholder focus and the project plan on a scale of months; the microcycle carries team focus and the iteration plan, producing a demo-able or shippable build, on a scale of weeks; the nanocycle carries personal focus and the revision of work items on a scale of hours)

The smallest timescale, known as the nanocycle, is about creation in the
hour-to-day time frame. In the nanocycle, the developer works off of the work
items list, performs small incremental tasks, and verifies that they were done
properly via execution. In this time frame, small changes with local scope can
be effectively dealt with in the context of a few minutes or hours.
The middle time frame is called the microcycle and focuses on the develop-
ment of a single integrated validated build of the system with specified function-
ality. The microcycle time frame is on the order of four to six weeks and
delivers formally validated, although perhaps limited, functionality. Changes
with medium scope are dealt with in the formal increment review18 and in
the prototype mission statement that identifies the scope for the microcycle
iteration.
The largest time frame is called the macrocycle. The macrocycle concerns it-
self with the beginning and end of the project and primary milestones within
that context. The macrocycle is usually 12 to 24 months long and represents a
final, or at least significant, customer delivery. At this scope, large-scale changes
are managed that may result in significant project replanning.

18. Also known as the “party phase” because it is not only a review, but also a “celebra-
tion of ongoing success”—as opposed to a postmortem, which is an analysis designed
to discover why the patient died.
Earlier and Greater Reduction in Project Risk


The last of the benefits we will discuss in this section has to do with reduction
of project risks. In my experience, the leading cause of project failure is
simply ignoring risk. Risk is unavoidable, and attempts to ignore it are
rarely successful. I am reminded of a company I consulted to that wanted
help. The development staff of this medical device company had been work-
ing 55 to 60 hours per week for 10 years and had never made a project dead-
line. They asked that I come and see if I could identify why they were having
such problems. As it happens, they did develop high-quality machines but at
a higher-than-desirable development cost and in a longer-than-desirable time
frame. They consistently ignored risks and had an (informal) policy of refusing
to learn from their mistakes. For example, they had a history of projects for
fairly similar devices, and it had always taken them five months to validate
the machines. However, they, just as always, scheduled one month for valida-
tion. They refused to look at why projects were late and adjust future plans to
be more reasonable.
In this context, risk means the same thing as it did in the earlier discussion of
safety. It is the product of the severity of an undesirable situation and its likeli-
hood. For a project, it is undesirable to be late or over budget or to have critical
defects. We can reduce project risks by managing them. We manage them by
identifying the key project risks and their properties so that we can reduce
them. Risks are managed in a document called either a risk list or a risk man-
agement plan. As we will learn later, this risk list contains an ordered list of
conditions, severities, likelihoods, and corrective actions known as risk mitiga-
tion activities (RMAs). These activities are scheduled into the iterations prima-
rily in order of degree of risk (highest-risk first).
For example, if the risk is that CORBA19 is too slow to handle the throughput
required, an early prototype20 should include some high-bandwidth data ex-
change and the performance can be measured. If it is found that CORBA does,
in fact, provide inadequate performance, other technical solutions can be ex-
plored. Because the problem was discovered early, the amount of rework in that

case will be less than in a traditional “Oh, God, I hope this works” development
approach. In agile methods this kind of an experiment is known as a spike.21

19. Common Object Request Broker Architecture, an OMG standard.

20. A prototype is a validated build of the system produced at the end of an iteration microcycle. It contains a subset (but usually not all) of the real code that will ship in the system. Unless specifically described as such, we do not mean a throwaway prototype, which is an executable produced to answer a specific set of questions but will not be shipped in the final product.
The risk list is a dynamic document that is reviewed at least every iteration
(during the party phase22). It is updated as risks are reduced, mitigated, or dis-
covered. Because we’re focusing attention on risk, we can head off an undesir-
able situation before it surprises us.
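A risk list can be as simple as the following sketch, in which each entry carries a severity, a likelihood, and its risk mitigation activity, and the list is ordered highest-risk first; all entries and numbers here are hypothetical.

#include <stdio.h>
#include <stdlib.h>

/* Illustrative risk list item: risk exposure is severity times likelihood,
 * and each item names the risk mitigation activity (RMA) to schedule.      */
typedef struct {
    const char *condition;
    double severity;      /* e.g., schedule slip in weeks if it occurs */
    double likelihood;    /* estimated probability, 0..1               */
    const char *rma;      /* spike or other mitigation activity        */
} Risk;

static double exposure(const Risk *r) { return r->severity * r->likelihood; }

static int by_exposure_desc(const void *a, const void *b) {
    double ea = exposure((const Risk *)a), eb = exposure((const Risk *)b);
    return (ea < eb) - (ea > eb);   /* sort in descending order of exposure */
}

int main(void) {
    Risk risks[] = {   /* hypothetical entries */
        { "Middleware too slow for required throughput", 8.0, 0.4,
          "Spike: measure a high-bandwidth exchange in an early prototype" },
        { "Target hardware arrives late",                 6.0, 0.3,
          "Develop and validate on a simulated platform first" },
        { "Key developer unavailable",                    4.0, 0.1,
          "Pair on critical subsystems to spread knowledge" },
    };
    size_t n = sizeof risks / sizeof risks[0];

    qsort(risks, n, sizeof risks[0], by_exposure_desc);   /* highest first */
    for (size_t i = 0; i < n; i++)
        printf("%.2f  %s -> %s\n",
               exposure(&risks[i]), risks[i].condition, risks[i].rma);
    return 0;
}

Reviewing and re-sorting such a list every iteration keeps the highest-exposure risks, and their spikes, at the front of the plan.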

Efficient High-Quality Development


High quality is achieved by the proper application of agile methods but in a dif-
ferent way from traditional industrial processes. This is again a dynamic, rather
than a ballistic, approach. Agile achieves high quality through continuous exe-
cution, continuous integration, and continuous testing—begun as early as pos-
sible. Agile holds that the best way not to have defects in a system is not to
systematically test them out but to not introduce them into the software in the
first place (a topic I will address in more detail in upcoming chapters).
Efficiency is why most people in my experience turn to agile methods. In
fact, agile methods have sometimes been thought to sacrifice quality and cor-
rectness in the pursuit of development efficiency. It is true that agile methods
are a response to so-called heavyweight processes that emphasize paper analysis
and ballistic planning over early execution and risk reduction. Nevertheless,
agile emphasizes efficiency because it is a universal truth that software costs too
much to develop and takes too long. A good agile process is as efficient as pos-
sible while achieving the necessary functionality and quality. Agile often recom-
mends lighter-weight approaches to achieve a process workflow.

Agile Methods and Traditional Processes


Agile methods differ from traditional industrial processes in a couple of
ways. Agile planning differs from traditional planning because agile plan-
ning is—to use the words of Captain Barbossa23—“more what you’d call a
guideline.” Agile development tends to follow a depth-first approach rather

than the breadth-first approach of traditional methods. Another key agile
practice is test-driven development (TDD), which pushes testing as far up
front in the process as possible. Finally, agile embraces change rather than
fearing it.

21. In agile-speak, a spike is a time-boxed experiment that enables developers to learn enough about an unknown to enable progress to continue. See www.extremeprogramming.org/rules/spike.html.

22. See Chapter 9.

23. Pirates of the Caribbean: The Curse of the Black Pearl (Walt Disney Pictures, 2003).

Planning
It is a common and well-known problem in numerical analysis that the preci-
sion of a computational result cannot be better than that of the elements
used within the computation.24 I have seen schedules for complex system de-
velopment projects that stretch on for years yet identify the completion time
to the minute. Clearly, the level of knowledge doesn’t support such a precise
conclusion. In addition (pun intended), errors accumulate during compu-
tations; that is, a long computation compounds the errors of its individual
terms.
If you are used to working in a traditional plan-based approach, agile meth-
ods may seem chaotic and intimidating. The problem with the standard water-
fall style is that although plans may be highly detailed and ostensibly more
complete, that detail is wrong and the computed costs and end dates are in error.
Further, not only is the information you have about estimates fuzzy at best, it
is also usually systematically biased toward the low end. This is often a result of
management pressure for a lower number, with the misguided intention of pro-
viding a “sense of urgency” to the developers. Sometimes this comes from engi-
neers with an overdeveloped sense of optimism. Maybe it comes from the
marketing staff who require a systematic reduction of the schedule by 20%, re-
gardless of the facts of the matter. In any event, a systematic but uncorrected
bias in the estimates doesn’t do anything but further degrade the accuracy of
the plan.
Beyond the lack of precision in the estimates and the systematic bias, there is
also the problem of stuff you don’t know and don’t know that you don’t know.
Things go wrong on projects—all projects. Not all things. Not even most things.
But you can bet money that something unexpected will go wrong. Perhaps a team
member will leave to work for a competitor. Perhaps a supplier will stop producing
a crucial part and you’ll have to search for a replacement. Maybe as-yet-unknown
errors in your compiler itself will cause you to waste precious weeks trying to find
the problem. Perhaps the office assistant is really a KGB25 agent carefully placed to

bring down the Western economy by single-handedly intercepting and losing your
office memo.

24. Assuming certain stochastic properties of the error distribution, of course.

25. Excuse me, that should be FSB now.
It is important to understand, deep within your hindbrain, that planning the
unknown entails inherent inaccuracy. This doesn’t mean that you shouldn’t plan
software development or that the plans you come up with shouldn’t be as accu-
rate as is needed. But it does mean that you need to be aware that they contain
errors.
Because software plans contain errors that cannot be entirely removed,
schedules need to be tracked and maintained frequently to take into account the
“facts on the ground.” This is what we mean by the term dynamic planning—it
is planning to track and replan when and as necessary.

Depth-First Development
If you look at a traditional waterfall approach, such as is shown in Figure 1.10,
the process can be viewed as a sequential movement through a set of layers. In the
traditional view, each layer (or “phase”) is worked to completion before moving
on. This is a “breadth-first” approach. It has the advantage that the phase and the
artifacts that it creates are complete before moving on. It has the significant disad-
vantage that the basic assumption of the waterfall approach—that the work
within a single phase can be completed without significant error—has been
shown to be incorrect. Most projects are late and/or over budget, and at least part
of the fault can be laid at the feet of the waterfall lifecycle.
An incremental approach is more “depth-first,” as shown in Figure 1.11.
This is a “depth-first” approach (also known as spiral development) because
only a small part of the overall requirements are dealt with at a time; these
are detailed, analyzed, designed, and validated before the next set of require-
ments is examined in detail.26 The result of this approach is that any defects
in the requirements, through their initial examination or their subsequent im-
plementation, are uncovered at a much earlier stage. Requirements can be se-
lected on the basis of risk (high-risk first), thus leading to an earlier reduction
in project risk. In essence, a large, complex project is sequenced into a series
of small, simple projects. The resulting incremental prototypes (also known
as builds) are validated and provide a robust starting point for the next set of
requirements.

26. The astute reader will notice that the “implementation” phase has gone away. This is
because code is produced throughout the analysis and design activities—a topic we
will discuss in much more detail in the coming chapters.
Figure 1.10 Waterfall lifecycle (requirements, analysis, design, implementation, and test performed in sequence, ending in the final build)

Put another way, we can “unroll” the spiral approach and show its progress
over linear time. The resulting figure is a sawtooth curve (see Figure 1.12) that
shows the flow of the phases within each spiral and the delivery at the end of each
iteration. This release contains “real code” that will be shipped to the customer.
The prototype becomes increasingly complete over time as more requirements
and functionality are added to it during each microcycle. This means not only
that some, presumably the high-risk or most critical requirements, are tested first,
but also that they are tested more often than low-risk or less crucial requirements.

Figure 1.11 Incremental spiral lifecycle (requirements, analysis, design, and test repeated for each increment, each pass ending in an incremental build)

Figure 1.12 Unrolling the spiral (the requirements-analysis-design-test sequence repeated over time, with a release at the end of each iteration)

Test-Driven Development
In agile approaches, testing is the “stuff of life.” Testing is not something done
at the end of the project to mark a check box, but an integral part of daily
work. In the best case, requirements are delivered as a set of executable test
cases, so it is clear whether or not the requirements are met. As development
proceeds, it is common for the developer to write the test cases before writing
the software. Certainly, before a function or class is complete, the test cases
exist and have been executed. As much as possible, we want to automate this
testing and use tools that can assist in creating coverage tests. Chapter 8 deals
with the concepts and techniques for agile testing.
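The test-first rhythm can be illustrated with a deliberately tiny example: the checks below are written before the function they constrain, and they run on every build. The function and its requirement are hypothetical, and a real project would typically use a unit-testing framework and tool-assisted coverage tests rather than bare asserts.

#include <assert.h>
#include <stdio.h>

/* Function under test: written only after the test cases below existed.
 * (A hypothetical example, not code from any shipping system.)             */
static int clamp(int value, int lo, int hi) {
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}

/* The test is written first and run continually; it encodes the requirement
 * as executable checks rather than as prose to be verified at the end.     */
static void test_clamp(void) {
    assert(clamp(5, 0, 10) == 5);    /* value already in range        */
    assert(clamp(-3, 0, 10) == 0);   /* below range clamps to minimum */
    assert(clamp(42, 0, 10) == 10);  /* above range clamps to maximum */
}

int main(void) {
    test_clamp();
    puts("all tests passed");
    return 0;
}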

Embracing Change
Unplanned change in a project can occur either because of the imprecision of
knowledge early in the project or because something, well, changed. Market con-
ditions change. Technology changes. Competitors’ products change. Develop-
ment tools change. We live in a churning sea of chaotic change, yet we cope.
Remember when real estate was a fantastic investment that could double your
money in a few months? If you counted on that being true forever and built long-
range inflexible plans based on that assumption, then you’re probably reading
this while pushing your shopping cart down Market Street in San Francisco look-
ing for sandwiches left on the curb. We cope in our daily lives because we know
that things will change and we adapt. This doesn’t mean that we don’t have goals
and plans but that we adjust those goals and plans to take change into account.
Embracing change isn’t just a slogan or a mantra. Specific practices enable
that embracement, such as making plans that specify a range of successful
states, means by which changing conditions can be identified, analyzed, and
adapted to, and methods for adapting what we do and how we do it to become
as nimble and, well, agile, as possible.
In the final analysis, if you can adapt to change better than your competitors,
then evolution favors you.27

27. As the saying goes, “Chance favors the prepared mind.” I forget who said it first, but
my first exposure to it was Eric Bogosian in Under Siege 2: Dark Territory (Warner
Bros., 1995).
Coming Up
This chapter provided some basic background information to prepare you for
the rest of the book. Agile approaches are important because of the increasing
burden of complexity and quality and the formidable constraint of a decreasing
time to market. Following the discussion of the need for agility, the context of
real-time systems was presented. The basic concepts of timeliness, such as exe-
cution time, deadline, blocking time, concurrency unit, criticality, and urgency,
are fundamental to real-time systems. Understanding them is a prerequisite to
understanding how agile methods can be applied to the development of such
systems. The discussion went on to the actual benefits of agile methods, such as
lowered cost of development, improved project control, better responsiveness
to change, and improved quality. Finally, agile and traditional methods were
briefly compared and contrasted. Agile methods provide a depth-first approach
that embraces change and provides a continual focus on product quality.
The next chapter will introduce the concepts of model-driven development.
Although not normally considered “agile,” MDD provides very real benefits in
terms of conceptualizing, developing, and validating systems. MDD and agile
methods work synergistically to create a state-of-the-art development environ-
ment far more powerful than either is alone.
Chapter 3 introduces the core principles and practices of the Harmony/ESW
process. These concepts form the repeated themes that help define and under-
stand the actual roles, workflows, tasks, and work products found in the
process.
Then, in Chapter 4, the Harmony/ESW process itself is elaborated, including
the roles, workflows, and work products. Subsequent chapters detail the phases
in the Harmony/ESW microcycle and provide detailed guidance on the imple-
mentation of those workflows.
Pollyanna Pixton, Niel Nickolaisen, Todd Little, Kent McDonald

Buy 2 from informIT and Save 35%. Enter the coupon code AGILE2009 during checkout.

Stand Back and Deliver


Accelerating Business Agility

Whether you’re leading an organization, a team, or a project, Stand Back and
Deliver gives you the agile leadership tools you’ll need to achieve breakthrough
levels of performance. It brings together immediately-usable frameworks and
step-by-step processes that help you focus all your efforts where they matter
most: delivering business value and building competitive advantage.

You’ll first discover how to use the authors’ Purpose-Based Alignment Model to
make better up-front decisions on where to invest limited resources — and how
to filter out activities that don’t drive market leadership. Next, you’ll learn how to
collaborate in new ways that unleash your organization’s full talents for innovation.
The authors offer practical tools for understanding the unique challenges of
any project and tailoring your leadership approach to reflect them. You’ll find a
full chapter on organizing information to promote more effective, value-driven
decision-making. Finally, drawing on decades of experience working with great
leaders, the authors focus on a critical issue you’ll face over and over again:
knowing when to step up and lead, and when to stand back and let your team work.

Coverage includes

• Effectively evaluating, planning, and implementing large system projects
• Reducing resistance to process improvements
• Bringing greater agility to the way you manage products, portfolios, and projects
• Identifying the tasks that don’t create enough value to be worth your time
• Developing the forms of collaboration that are crucial to sustaining innovation
• Mitigating project risks more effectively — especially those associated with complexity
• Refocusing all decision-making on delivering value to the organization and the marketplace
• Making decisions at the right time to leverage the best information without stifling progress

available
• Book: 9780321572882
• Safari Online
• EBOOK: 032161707X
• KINDLE: B002CT0TUS

About the Authors

Pollyanna Pixton is an internationally recognized expert on collaborative leadership who has worked and consulted with companies and organizations for thirty-eight years. She is president of Evolutionary Systems and director of the Institute of Collaborative Leadership.

Niel Nickolaisen is CIO and director of strategic planning at Headwaters, Inc.

Todd Little has held a number of senior leadership positions including director of software and technology at Landmark Graphics, a subsidiary of Halliburton. For more than thirty years he has developed–and led teams developing–commercial software applications, primarily for oil and gas exploration and production.

Kent McDonald has nearly fifteen years of experience as a project and program manager in industries ranging from automotive to financial services.

Pixton, Nickolaisen, Little, and McDonald are all partners at Accelinnova.

informit.com/aw
Accelinnova.

CHAPTER 1

INTRODUCTION TO KEY PRINCIPLES

In this chapter, we explain what we mean by “stand back and deliver” by
first presenting some situations that may seem alarmingly familiar to you.
We then cover some of our core concepts and beliefs that underlie the
tools we recommend you implement to get your organization going in the
right direction.

What Could Go Wrong?

Have you ever done things by the book, but the book was out of print?
Such was the case for a large, successful product company. This company
had developed what it considered to be a revolutionary new product. With
its heavy engineering background, the company did product development
by the book—the way it had worked many times before.
Through its research and development activities, the company had dis-
covered that it could use a waste material as the basic raw material for a
new product. Imagine the possibilities: Currently the company pays to dis-
pose of this material, but now it could use this “waste material” to make an
industrial product. The company did what it had always done. It selected
one of its best engineers to sort through the product design and manufac-
turing options. It set up the engineer with all of the development and test-
ing equipment he would need. It gave the engineer the time and flexibili-
ty he needed to design what he thought the market needed.
Within a year, the engineer had perfected a formulation that worked.
The resulting product had impressive characteristics. It had high worka-
bility and could be formed into various shapes and sizes. The engineer pro-
duced small batches of the product that the company used to generate
early customer interest. Using these small batches, the company formed
several industry joint ventures and alliances. The future looked bright.
The engineer next worked on the manufacturing processes needed to
produce the product. In parallel with this effort, the company issued press

releases and featured the new product in its annual report: “Coming soon,
a revolutionary, green product.” One year later, the engineer announced
that his work was done. He documented the product formulation. He
described in great detail how to scale the manufacturing process from the
small batches he had created to full-scale production. The company built
the first of what it planned would be multiple manufacturing plants and
hired a plant manager to follow the engineer’s scale-up process. In the
meantime, the market waited for the formal product release. The company
hired a dedicated sales force to start generating interest in the product. The
development engineer was promoted and assigned to a different project.
Five months later, the first manufacturing plant came on line. As the
first full-size batches of the product came out of processing and into pack-
aging, problems arose. When subjected to full-sized manufacturing, the
product had tiny cracks. In the small batches prepared by the development
engineer, there had never been any cracks. In expanding the product batch
size by a factor of 10, however, there they were—tiny cracks. At first, no
one gave the cracks much thought, because they did not affect the product
characteristics or performance. But then the company shipped its first
order. As the delivery truck rolled down the road, the cracks propagated
throughout the product. By the time the truck arrived at the customer’s
facility, some of the product had broken into pieces. After two years of
engineering and five months of manufacturing scale-up, the company had
a great product, so long as it did not have to ship the product to anyone!
In retrospect, it is easy to identify some of the mistakes this company
made. We have spent hours with groups dissecting this true story to learn
from—and to not repeat—the mistakes of the past. The development engi-
neer developed the product in isolation and did not think through scale-up
issues. The company did not produce any full-sized product samples until
after it had built the manufacturing plant and then discovered the propaga-
tion of the cracks. It is easy to mock management for the wasted investment.
Before being too critical, however, we should consider this point: If the
final product had not developed the tiny cracks that spread during ship-
ping, the product and the process would have been a success. In fact, many
times previously, the company had used a similar process and gotten good
results. Using what had worked before, those involved in this project were
clueless about the risks that lurked in the shadows of their process. What is
now obvious to us became obvious to them only after this product failure.
The most important lesson we can draw from this story is that we, too,
are brilliant but sometimes clueless. We live in an environment of increas-
ing global competition, an increasing pace of market changes, and a need
to develop solutions that are increasingly complex. In this environment, we
do not have the luxury of missteps and hidden risks. There is increasing
pressure to deliver complex solutions in less time and to get it “right the
first time.” If we don’t, we can completely miss our business value goals.
The good news is that we can make sure that our brilliance results in things
that work. To see how, we continue our story.

What Went Right

With the development engineer assigned to and buried by a new project,
it fell to the plant manager to sort out the issues with the propagating
cracks. Fortunately, the plant manager recognized that what had worked
before had not worked for the new product. Before he plunged into root
cause analysis on the cracking problem, he took a huge step back, all the
way back to the beginning.
Under pressure to immediately solve the problem, he asked to meet
with company management. At the meeting, he asked some fairly basic
questions: “How important is this product to the company? Is this a prod-
uct that will provide us with a competitive advantage in the marketplace?”
His rationale for asking these questions was to get a sense of the pur-
pose of the product. If the product would generate competitive advantage,
he and the company would treat the product differently than if it did not.
The plant manager got very clear answers from the management team.
This was a product that fit squarely in the company’s strategy. The compa-
ny’s claim to fame was using recycled, recovered raw materials to produce
industrial products. This product was a perfect example of the company’s
expertise and creativity.
In that case, the plant manager asked, could he treat this product as
what it was—something that would help differentiate the company in the
marketplace? Recognizing, in retrospect, the problems with the initial
development, the management team gave the plant manager free rein.
The plant manager started by convening the right people. The right
people included design engineers, production workers, manufacturing
engineers, a sales team, and, in a surprise to everyone, one of the early-
adopter customers. To make sure that the team understood all that had
happened and all that needed to happen, the plant manager gave a painful-
ly honest review of the development of the product and the issues the com-
pany had encountered in moving to full-scale production. After getting the
team up to speed, the plant manager asked a gut-wrenching question: “Can
we fix the problems or would the company benefit more if we halted pro-
duction?” This sparked a lively discussion that ranged from necessary
changes to the product development process to the humiliation of now
shutting down the product line.
The plant manager let this “airing” continue for some time but then
refocused the discussion on his question: “Let me ask my question anoth-
er way: What made this product so critical to us when we launched the ini-
tiative?” The answers again ranged from the product’s revenue potential to
the commitments that had been made. The plant manager then asked a
more generic question: “How do we differentiate ourselves in the market-
place?” The company had developed proprietary ways to use recycled
products to make new materials. This capability had propelled the compa-
ny to market-leader status. An engineer asked, “Why does that matter?”
The plant manager responded, “It seems to me that if we can solve the
issues, this product aligns perfectly with what makes our company unique.
With this product, we have once again taken a waste material and pro-
duced something of value. For that reason, it seems we should do our best
to fix the problems and get this product to market. This product exempli-
fies what we do. If you agree, let's move on to how we can approach the
product. Ignore how we have developed the product to date. As a strategic
initiative, what should we do?”
The team then sorted through the specific product features that made
the product different. Only one was apparent—the use of waste material to
make a usable product. With that as the principal requirement of the prod-
uct, the team identified design options that could either eliminate or miti-
gate the full-scale production issues. Would a different form factor reduce
the cracking? Or was changing the manufacturing process the only option?
As the team discussed these alternatives, they associated complexity and
uncertainty with each alternative. In terms of uncertainty, was there a spe-
cific market need that their product could meet? With what certainty did
they understand these needs? Which form factor would the market accept,
and did they know which forms were acceptable? How much did they know
about the reactions taking place in the manufacturing process? How well
could they link cause and effect? In terms of complexity, which options did
they have to simplify the process? How could they simplify the product?
After the team mapped out the options and associated information, the
plant manager asked the team which decisions they needed to make now,
which decisions they could delay, and what they needed to know prior to
making the decisions. All of this information was combined to provide a
logical, rational approach for making the product go/kill decision and, if
possible, fixing the product problems.
For example, the team could delay the go/kill decision until after the
members had determined whether there was a form factor the market was
dying to have. Likewise, the team could delay research into the cause of
the large-batch cracking if the market would accept form factors that could
be made with small batch sizes. To explore these issues further, the team
members signed up for the assignments that best matched their interests
and capabilities.
Over the next few weeks, the team worked through the assignments
and options. Based on the work of the sales team and the early-adopter cus-
tomer, the team revised the form factor. The new form factor actually met
a previously unknown—at least to the company—market need. Revising
the form factor enabled the company to manufacture the product in the
small batch sizes it could produce without cracks. Taking this approach let
the company retain most of its current investment in the manufacturing
plant; it just needed to redesign its consumable molds. The team members
took a more measured approach by eliminating uncertainty and complexi-
ty at each step of the process. They solved the problems they could when
they could and postponed work on the most uncertain and complex issues.
Because the product was not “right the first time,” the expected rev-
enue was delayed. Also, because of the initial problems, the revenue
stream grew more slowly than projected. Nevertheless, the company
learned the value of the foundation tools of agile leadership.

Why Do We Do This to Ourselves?

We were all sitting around one afternoon talking about this story. Each of
us found ourselves identifying a different aspect of the story that we
thought was the cause of all the travails of the organization.
We identified the initial failure of the organization to properly align its
approach to the project with its true strategic nature.
We also determined that, initially, the organization did not properly
lead collaboration; in fact, it did not initially have any collaboration on this
particular project.
We found that the organization chose the wrong approach for the
project, tackling a very complex project filled with uncertainty with a
process more suited for a low-complexity and low-uncertainty project. It
also assigned a leader who did not recognize the uncertainties and the
complexities.
We realized that the organization did not gather all of the information
it needed to make proper decisions about how to market the product and
in which markets to sell the product. We also identified several cases where
the company made commitments earlier than it needed to, especially with
potential customers and industry partners.
As we talked about this case more, we realized that there was no one
cause for the project’s initial problems, but rather several contributing fac-
tors. When we discussed the various tools we would have used to help the
company, we realized that while each tool was powerful in its own right, when
put together the entire toolset could really help an organization succeed.

A Framework of Effective Tools

What are the tools we would use to address the situation described earlier
in this chapter? Through our experiences and sharing stories, we found
that a collection of tools applies to how organizations approach their work,
especially work that involves change and innovation; when used in moder-
ation and in conjunction with each other, these tools can have a dramatic
impact on the success of the organization. We drew the “napkin drawing”
shown in Figure 1.1 to capture our thoughts, and we chose to organize this
book around four main applications of those tools.

Purpose
The Purpose Alignment Model, described in Chapter 2, generates imme-
diately usable decision filters that leaders and teams can use to improve
design. This tool evaluates business activities and options in terms of their
capability to differentiate an organization’s products and services in the
marketplace and their mission criticality. This tool helps teams identify
areas to focus their creativity and those activities and features for which
“good enough” is good enough. This approach lowers direct and opportu-
nity costs and accelerates market leadership. This simple, yet powerful,
concept recognizes that not all activities should be treated in the same way.
Some activities will help the organization win in the marketplace; others
will help keep it in the game. We risk under- and over-investing in activi-
ties if we treat all of them as if they were identical.

FIGURE 1.1 Leadership tools for use in today's complex marketplace

Here are the big ideas of Chapter 2:

■ Aligning on process purpose is a smart, simple way to improve decision making.
■ Designing our work around process purpose helps us quickly iden-
tify how to achieve optimal business value.
■ Strategic decision filters can be cascaded throughout the organiza-
tion to dramatically improve organizational alignment.

Collaboration
As the proverb states, “No one of us is as smart as all of us.” The proper use of
the tools described in this book is dependent on a culture of collaboration. In
the story presented earlier in this chapter, when a single person developed a
product, it took more than two years to produce something that did not
work. When a leader considered purpose, business value, uncertainty, and
complexity in a culture of collaboration, the team made better decisions and
started to generate results. Developing collaboration skills and capabilities is
essential in today’s dynamic marketplace. Sustainable innovation comes
through collaboration. Sustainable innovation is a prerequisite to change from
market follower to market leader. Today, it hinges on collaboration.
Here are the big ideas of Chapter 3:

■ To develop a sustainable competitive advantage, unleash the talent in
your organization to deliver innovative ideas to the marketplace and
to improve the throughput and productivity in your organizations.
■ The answers are in your organization.

Delivery
Delivery is the ultimate measure of success. Any experienced leader knows
that all projects are not created equal and no single approach is applicable
to every project. The tool described in Chapter 4 provides a practical
model for evaluating uncertainty and complexity as well as guidance for tai-
loring an appropriate leadership approach. The characterization of uncer-
tainty and complexity also correlates to project risk, and we provide a
roadmap for potentially reducing risk. For example, it is possible to break
projects that are both highly complex and uncertain into components with
lower uncertainty and risk. This process reduces the overall project risk.
An understanding of complexity and risk also allows leadership to match
the skills of project leaders to the needs of the project.
Here are the big ideas of Chapter 4:

■ By understanding the uncertainty and complexity characteristics of
your projects, you can identify better ways to lead those projects.
■ High complexity or uncertainty correlates to higher risk. Reduce
these factors, and you reduce your level of risk. Project decomposi-
tion can reduce complexity, while incremental delivery helps lead a
project through uncertainty.
■ Some leaders are natural managers of complexity, while others are
experts at uncertainty. Match leadership styles to project character-
istics, and develop leaders’ skills to broaden their capabilities.

Decisions
The tools we describe in this book will help you to make the key decisions
you face on a regular basis, but we felt it important to discuss the actual
approach to decision making. Knowing when to make your decisions and
which information you need to make those decisions is very important.
Chapter 5 introduces the value model tool, which provides a structure for
organizing information—such as purpose, considerations, costs, and bene-
fits—that you can use to aid your decision making.
Here are the big ideas of Chapter 5:

■ Business decisions focus on delivering value to the organization and
to the marketplace. Life is much better if everyone in the organiza-
tion understands what generates value and makes decisions that
improve value.
■ You can develop a value model that helps you make better decisions,
but this model is not just a calculation that generates a numerical
value. Instead, it is a conversation that you should revisit often,
especially when conditions change.

The Leadership Tipping Point


While we describe each tool on its own and provide plenty of examples of
when those tools are useful, we knew this treatment would not be com-
plete without describing how you can put our tools to work as a leader,
addressing the issues of how and when to step back and how and when to
step up without rescuing your teams. This is the big idea of Chapter 6,
which we call the leadership “tipping point.”
Leaders can stifle progress when they interfere with team processes.
At the same time, as a leader, you don’t want to go over the cliff and deliv-
er the wrong results. Sometimes leaders should stand back and let the
team work—and sometimes leaders should step up and lead. In Chapter 6,
we discuss how you can decide which situation you face.

Summary

This chapter introduced the issues involved in engaging the right players
in your organization to gain a competitive advantage. It also introduced the
framework of concepts and tools for doing so.
Jim Highsmith

Agile Project Management


Creating Innovative Products, Second Edition

Today, the pace of project management moves faster. Project management needs
to become more flexible and far more responsive to customers. Using Agile
Project Management (APM), project managers can achieve all these goals without
compromising cost, quality, or business discipline. In Agile Project Management,
Second Edition, renowned Agile pioneer Jim Highsmith thoroughly updates his
classic guide to APM, extending and refining it to support even the largest projects
and organizations.

Writing for project leaders, managers, and executives at all levels, Highsmith
integrates the best project management, manufacturing, and software development
practices into an overall framework designed to support unprecedented speed
and mobility. The many topics added in this new edition include incorporating
agile values, scaling Agile projects, release planning, portfolio governance, and
enhancing organizational agility. Project and business leaders will especially
appreciate Highsmith's new coverage of promoting agility through performance
measurements based on value, quality, and constraints.

This edition's coverage includes:
• Understanding the Agile revolution's impact on product development
• Recognizing when Agile methods will work in project management, and when they won't
• Setting realistic business objectives for Agile Project Management
• Promoting Agile values and principles across the organization
• Utilizing a proven Agile Enterprise Framework that encompasses governance, project and iteration management, and technical practices
• Optimizing all five stages of the Agile project: Envision, Speculate, Explore, Adapt, and Close
• Organizational and product-related processes for scaling Agile to the largest projects and teams
• Agile project governance solutions for executives and management
• The "Agile Triangle": measuring performance in ways that encourage agility instead of discouraging it
• The changing role of the Agile project leader

available
• Book: 9780321658395
• Safari Online
• EBOOK: 0321659228

About the Author
Jim Highsmith is a founding member of the AgileAlliance, co-author of the Agile
Manifesto, and director of the Agile Project Management Advisory Service for the
Cutter Consortium. He consults with development organizations throughout the
U.S., Europe, Canada, South Africa, Australia, Japan, India, and New Zealand on
accelerating development in today's increasingly complex, uncertain environments.
Highsmith is author of Adaptive Software Development, winner of the 2000 Jolt
Award, and (with Alistair Cockburn) co-editor of The Agile Software Development
Series. He has more than 25 years' experience as an IT manager, product manager,
project manager, consultant, and software developer.

informit.com/aw

Chapter 4
Adapting over
Conforming

A traditional project manager focuses on following the plan with
minimal changes, whereas an agile leader focuses on adapting
successfully to inevitable changes.
Traditional managers view the plan as the goal, whereas agile leaders view
customer value as the goal. If you doubt the former, just look at the defini-
tion of “success” from the Standish Group, who has published success (and
failure) rates of software projects over a long period of time. Success, per the
Standish Group is “the project is completed on time and on budget, with all
the features and functions originally specified.”1 This is not a value-based
definition but a constraint-based one. Using this definition, then, managers
focus on following the plan with minimal changes. Colleague Rob Austin
would classify this as a dysfunctional measurement (Austin 1996)—one that
leads to the opposite behavior of what was intended.

When customer value and quality are the goals, then a plan becomes a
means to achieve those goals, not the goal itself. The constraints embedded
in those plans are still important; they still guide the project; we still want to
understand variations from the plans, but—and this is a big but—plans are
not sacrosanct; they are meant to be flexible; they are meant to be guides,
not straightjackets.

1 Standish Group. Chaos Reports
(http://www.standishgroup.com/chaos_resources/chronicles.php).


Both traditional and agile leaders plan, and both spend a fair amount of
time planning. But they view plans in radically different ways. They both
believe in plans as baselines, but traditional managers are constantly trying
to “correct” actual results to that baseline. In the PMBOK2, for example,
the relevant activity is described as “corrective action” to guide the team
back to the plan. In agile project management, we use “adaptive action” to
describe which course of action to take (and one of those actions may be to
correct to the plan).
The agile principles documents—the Agile Manifesto and the Declara-
tion of Interdependence—contain five principle statements about adapta-
tion, as shown in Figure 4-1.

Figure 4-1 Adaptive Principle Statements

(DOI) We expect uncertainty and manage for it through iterations, anticipation, and adaptation.
(DOI) We improve effectiveness and reliability through situationally specific strategies, processes, and practices.
(AM) Responding to change over following a plan.
(AM) Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
(AM) At regular intervals, the team reflects on how to become more effective, and then tunes and adjusts its behavior accordingly.

DOI = Declaration of Interdependence. AM = Agile Manifesto

These principles could be summarized as follows:

• We expect change (uncertainty) and respond accordingly rather than
follow outdated plans.
• We adapt our processes and practices as necessary.

The ability to respond to change drives competitive advantage. Think of the
possibilities (not the problems) of being able to release a new version of a
product weekly. Think of the competitive advantage of being able to pack-
age features so customers feel they have software specifically customized for
them (and the cost to maintain the software remains low).

2 The Project Management Institute's Project Management Body of Knowledge, known as
the PMBOK.
Teams must adapt, but they can’t lose track of the ultimate goals of the
project. Teams should constantly evaluate progress, whether adapting or
anticipating, by asking these four questions:

• Is value, in the form of a releasable product, being delivered?


• Is the quality goal of building a reliable, adaptable product being
met?
• Is the project progressing satisfactorily within acceptable con-
straints?
• Is the team adapting effectively to changes imposed by management,
customers, or technology?

The dictionary defines change as: “To cause to be different, to give a com-
pletely different form or appearance to.” It defines adapt as: “To make suit-
able to or fit for a specific use or situation.” Changing and adapting are not
the same and the difference between them is important. There is no goal
inherent in change—as the quip says, “stuff happens.” Adaptation, on the
other hand, is directed towards a goal (suitability). Change is mindless;
adaptation is mindful.

Adaptation can be considered a mindful response to change.

The Science of Adaptation


Former Visa International CEO Dee Hock (1999) coined the word
“chaordic” to describe both the world around us and his approach to man-
aging a far-flung enterprise—balanced on the precipice between chaos and
order. Our sense of the world dictates management style. If the world is per-
ceived as static, then production-style management practices will dominate.
If the world is perceived as dynamic, however, then exploration-style man-
agement practices will come to the fore. Of course, it’s not that simple—
there is always a blend of static and dynamic, which means that managers
must always perform a delicate balancing act.


In the last two decades a vanguard of scientists and managers have artic-
ulated a profound shift in their view about how organisms and organizations
evolve, respond to change, and manage their growth. Scientists’ findings
about the tipping points of chemical reactions and the “swarm” behavior of
ants have given organizational researchers insights into what makes success-
ful companies and successful managers. Practitioners have studied how
innovative groups work most effectively.
As quantum physics changed our notions of predictability and Darwin
changed our perspective on evolution, complex adaptive systems (CAS) the-
ory has reshaped scientific and management thinking. In an era of rapid
change, we need better ways of making sense of the world around us. Just as
biologists now study ecosystems as well as species, executives and managers
need to understand the global economic and political ecosystems in which
their companies compete.

“A complex adaptive system, be it biologic or economic below, is an


ensemble of independent agents

• Who interact to create an ecosystem

• Whose interaction is defined by the exchange of information

• Whose individual actions are based on some system of internal
rules

• Who self-organize in nonlinear ways to produce emergent results

• Who exhibit characteristics of both order and chaos

• Who evolve over time” (Highsmith 2000).


For an agile project, the ensemble includes core team members, customers,
suppliers, executives, and other participants who interact with each other in
various ways. It is these interactions, and the tacit and explicit information
exchanges that occur within them, that project management practices need
to facilitate.
The individual agent’s actions are driven by a set of internal rules—the
core ideology and values of APM, for example. Both scientific and manage-
ment researchers have shown that a simple set of rules can generate complex
behaviors and outcomes, whether in ant colonies or project teams. Complex
rules, on the other hand, often become bureaucratic. How these rules are
formulated has a significant impact on how the complex system operates.
Newtonian approaches predict results. CAS approaches create emer-
gent results. “Emergence is a property of complex adaptive systems that cre-
ates some greater property of the whole (system behavior) from the
interaction of the parts (self-organizing agent behavior). Emergent results
cannot be predicted in the normal sense of cause and effect relationships,
but they can be anticipated by creating patterns that have previously pro-
duced similar results” (Highsmith 2000). Creativity and innovation are the
emergent results of well functioning agile teams.
An adaptive development process has a different character from an
optimizing one. Optimizing reflects a basic prescriptive Plan-Design-Build
lifecycle. Adapting reflects an organic, evolutionary Envision-Explore-
Adapt lifecycle. An adaptive approach begins not with a single solution, but
with multiple potential solutions (experiments). It explores and selects the
best by applying a series of fitness tests (actual product features or simula-
tions subjected to acceptance tests) and then adapting to feedback. When
uncertainty is low, adaptive approaches run the risk of higher costs. When
uncertainty is high, optimizing approaches run the risk of settling too early
on a particular solution and stifling innovation. The salient point is that
these two fundamental approaches to development are very different, and
they require different processes, different management approaches, and dif-
ferent measurements of success.
Newtonian versus quantum, predictability versus flexibility, optimiza-
tion versus adaptation, efficiency versus innovation—all these dichotomies
reflect a fundamentally different way of making sense about the world and
how to manage effectively within it. Because of high iteration costs, the tra-
ditional perspective was predictive and change averse, and deterministic
processes arose to support this traditional viewpoint. But our viewpoint
needs to change. Executives, project leaders, and development teams must
embrace a different view of the new product development world, one that
not only recognizes change in the business world, but also understands the
power of driving down iteration costs to enable experimentation and emer-
gent processes. Understanding these differences and how they affect prod-
uct development is key to understanding APM.


Exploring
Agility is the ability to both create and respond to change in order to profit
in a turbulent business environment (from Chapter 1). The ability to
respond to change is good. The ability to create change for competitors is
even better. When you create change you are on the competitive offensive.
When you respond to competitors’ changes you are on the defensive. When
you can respond to change at any point in the development lifecycle, even
late, then you have a distinct advantage.

Adaptation needs to exceed the rate of market changes.


But change is hard. Although agile values tell us that responding to change is
more important than following a plan, and that embracing rather than
resisting change leads to better products, working in a high-change environ-
ment can be nerve-wracking for team members. Exploration is difficult; it
raises anxiety, trepidation, and sometimes even a little fear. Agile project
leaders need to encourage and inspire team members to work through the
difficulties of a high-change environment. Remaining calm themselves,
encouraging experimentation, learning through both successes and mis-
takes, and helping team members understand the vision are all part of this
encouragement. Good leaders create a safe environment in which people
can voice outlandish ideas, some of which turn out not to be so outlandish
after all. External encouragement and inspiration help teams build internal
motivation.
Great explorations flow from inspirational leaders. Cook, Magellan,
Shackleton, and Columbus were inspirational leaders with vision. They per-
severed in the face of monumental obstacles, not the least of which was fear
of the unknown. Magellan, after years of dealing with the entrenched Span-
ish bureaucracy trying to scuttle his plans, launched his five-ship fleet on
October 3, 1519. On September 6, 1522, the Victoria, last of the ships,
sailed into port without Magellan, who had died in the Philippines after
completing the most treacherous part of the journey. The expedition estab-
lished a route around Cape Horn and sailed across the vast Pacific Ocean
for the first time (Joyner 1992).
Great explorers articulate goals that inspire people—goals that get peo-
ple excited such that they inspire themselves. These goals or visions serve as
a unifying focal point of effort, galvanizing people and creating an esprit de
corps among the team. Inspirational goals need to be energizing, com-
pelling, clear, and feasible, but just barely. Inspirational goals tap into a
team’s passion.
Encouraging leaders also know the difference between good goals and
bad ones. We all know of egocentric managers who point to some mountain
and say, “Let’s get up there, team,” when everyone else is thinking, “Who is
he kidding? There’s not a snowball’s chance in the hot place that we can
carry that off.” “Bad BHAGs [Big Hairy Audacious Goals], it turns out, are
set with bravado; good BHAGs are set with understanding,” says Jim Collins
(2001). Inspirational leaders know that setting a vision for the product is a
team effort, one based on analysis, understanding, and realistic risk assess-
ment, combined with a sprinkle of adventure.
Innovative product development teams are led, not managed. They
allow their leaders to be inspirational. They internalize the leader’s encour-
agement. Great new products, outstanding enhancements to existing prod-
ucts, and creative new business initiatives are driven by passion and
inspiration. Project managers who focus on network diagrams, cost budgets,
and resource histograms are dooming their teams to mediocrity.3, 4
Leaders help articulate the goals; teams internalize them and motivate
themselves. This internal motivation enables exploration. We don’t arrive at
something new, better, and different without trial and error, launching off in
multiple new directions to find the one that seems promising. Magellan and
his ships spent 38 days covering the 334 miles of the straits that bear his
name. In the vast expanse of islands and peninsulas, they explored many
dead ends before finding the correct passages (Kelley 2001).
Magellan’s ship Victoria sailed nearly 1,000 miles, back and forth—up
estuaries that dead-ended and back out—time and time again. Magellan (his
crew, actually) was the first to circumnavigate the globe. But Magellan

3 This
sentence should not be interpreted as saying these things are unimportant,
because properly used, each can be useful to the project leader. It is when they
become the focal point that trouble ensues.
4 Ken Delcol observes, “Most PMs are not selected for their ability to inspire peo-
ple! Leadership and business influencing skills are hard to establish in an inter-
view. Most managers have a difficult time identifying and evaluating people with
these skill sets.”

Exploring • 69 •
05_0321502752_ch04.qxd 6/17/09 12:39 PM Page 70

would probably have driven a production-style project manager or execu-


tive a little crazy, because he surely didn’t follow a plan. But then, any
detailed plan would have been foolish—no one even knew whether ships
could get around Cape Horn; none had found the way when Magellan
launched. No one knew how large the Pacific Ocean was, and even the best
guestimates turned out to be thousands of miles short. His vision never
changed, but his “execution” changed every day based on new information.
Teams need a shared purpose and goal, but they also need encourage-
ment to adapt—to experiment, explore, make mistakes, regroup, and forge
ahead again.

Responding to Change
We expect change (uncertainty) and respond accordingly rather than
follow outdated plans.
This statement reflects the agile viewpoint characterized further by

• Envision-Explore versus Plan-Do


• Adapting versus anticipating

In Artful Making, Harvard Business School professor and colleague Rob
Austin and his coauthor Lee Devin (2003) discuss a $125 million IT project
disaster in which the company refused to improvise and change from the
detailed plan set down prior to the project’s start. “ ‘Plan the work and work
the plan’ was their implicit mantra,” they write. “And it led them directly to
a costly and destructive course of action.…We’d all like to believe that this
kind of problem is rare in business. It’s not.”
Every project has knowns and unknowns, certainties and uncertainties,
and therefore every project has to balance planning and adapting. Balancing
is required because projects also run the gamut from production-style ones
in which uncertainty is low, to exploration-style ones in which uncertainty is
high. Exploration-style projects, similar to development of the Sketchbook
Pro© product introduced in Chapter 1, require a process that emphasizes

envisioning and then exploring into that vision rather than detailed planning
and relatively strict execution of tasks. It’s not that one is right and the other
wrong, but that each style is more or less applicable to a particular project
type.
Another factor that impacts project management style is the cost of an
iteration; that is, the cost of experimenting. Even if the need for innovation
is great, high iteration costs may dictate a process with greater anticipatory
work. Low-cost iterations, like those mentioned earlier, enable an adaptive
style of development in which plans, architectures, and designs evolve con-
currently with the actual product.

Product, Process, People


Faced with major redirection in release 2.0 of their product, the Sketchbook
Pro© team introduced in Chapter 1 delivered the revised product in 42 days.
As I quip in workshops, "I know teams who would have complained for 42
days with comments such as: 'They don't know what they want. They are
always changing their minds.'" Adaptability has three components—prod-
uct, process, and people. You need to have a gung-ho agile team with the
right attitude about change. You need processes and practices that allow the
team to adapt to circumstances. And you need high quality code with auto-
mated tests. You can have pristine code and a non-agile team and change
will be difficult. All three are required to have an agile, adaptable environ-
ment.
The barrier to agility in many software organizations is their failure to
deal with the technical debt in legacy code. The failure is understandable
because the solution can be costly and time consuming. However, failure to
address this significant barrier keeps many organizations from realizing their
agile potential. It took years for legacy code to degenerate; it will take signif-
icant time to revitalize the code. It requires a systematic investment in refac-
toring and automated testing—over several release cycles—to begin to solve
the problems of years of neglect.
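As a minimal sketch of what that investment looks like at the level of a single function (the invoice routine, its inputs, and the expected total below are invented for illustration, not drawn from this book), a team might first pin the current behavior in place with an automated characterization test so that later refactoring can be verified:

import unittest


def legacy_invoice_total(items):
    """Legacy code: duplicated logic and magic numbers, but currently 'working'."""
    total = 0.0
    for item in items:
        if item["type"] == "standard":
            total = total + item["price"] * item["qty"]
        if item["type"] == "discounted":
            total = total + item["price"] * item["qty"] * 0.9
    return round(total, 2)


class CharacterizationTest(unittest.TestCase):
    """Pins down current behavior so a later refactoring can be verified."""

    def test_mixed_invoice(self):
        items = [
            {"type": "standard", "price": 10.0, "qty": 2},
            {"type": "discounted", "price": 20.0, "qty": 1},
        ]
        self.assertEqual(legacy_invoice_total(items), 38.0)


if __name__ == "__main__":
    unittest.main()

Once a test like this passes against the legacy code, the duplication and magic numbers inside it can be cleaned up incrementally, with the test guarding against any change in observable behavior.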


Barriers or Opportunities
One of the constant excuses, complaints, or rationalizations about some
agile practices is "They would take too much time," or "They would cost
too much.” This has been said about short iterations, frequent database
updates, continuous integration, automated testing, and a host of other agile
practices. All too often companies succumb to what colleague Israel Gat
calls the “new toy” syndrome—placing all their emphasis on new develop-
ment and ignoring legacy code. Things like messy old code then become
excuses and barriers to change. Some activities certainly are cost-prohibi-
tive, but many of these are artificial barriers that people voice. Experienced
agilists turn these barriers into opportunities. They ask, “What would be the
benefit if we could do this?”
Several years ago, working with a large company—and a very large program
team (multiple projects, one integrated product suite) of something over 500
people—we wanted them to do a complete multi-project code integration at
the end of every couple of iterations. The reply was, "We can't
do that, it would take multiple people and several weeks of time out of
development.” This was from a group who had experienced severe prob-
lems in prior releases when they integrated products very late in the release
cycle. Our response was, “What would the benefit be if you could do the
integration quickly and at low cost?” and, “You don’t have a choice; if you
want to be agile you must integrate across the entire product suite early and
often.” Grumbling, they managed the first integration with significant
effort, but with far less time than they anticipated. By the time 3–4 integra-
tions had occurred, they had figured out how to do them in a few days with
limited personnel. The benefits from frequent integration were significant,
eliminating many problems that previously would have lingered, unfound,
until close to release date.
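The book does not describe how that team automated its integrations, so the sketch below is only a rough, hypothetical illustration of the idea; the project names, directory layout, and make targets are placeholders rather than the company's actual toolchain. The point is simply that scripting the repetitive steps lets the whole suite be integrated on demand:

import subprocess
import sys

# Hypothetical projects making up the integrated product suite.
PROJECTS = ["billing", "ordering", "inventory", "reporting"]


def run(cmd, cwd):
    """Echo and run one shell command; abort the integration on the first failure."""
    print(f"$ {cmd}  (in {cwd})")
    result = subprocess.run(cmd, shell=True, cwd=cwd)
    if result.returncode != 0:
        sys.exit(f"Integration stopped: '{cmd}' failed in {cwd}")


def integrate():
    """Build and test each project, then run the suite-wide integration tests."""
    for project in PROJECTS:
        run("make build test", cwd=project)
    run("make integration-test", cwd="suite-tests")


if __name__ == "__main__":
    integrate()

Once the steps are scripted, the marginal cost of running them again approaches zero, which is what makes integrating the full product suite every iteration or two practical.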
Most, but not all, of the time perceived barriers to change (it costs too
much) really point out inefficiencies—opportunities to streamline the
process and enhance the organization’s ability to adapt. Agile development
demands short-cycle iterations. Doing short-cycle iterations demands find-
ing ways to do repetitive things quickly and inexpensively. Doing things
quickly and inexpensively enables teams to respond to changes in ways they
never anticipated previously. Doing things quickly and inexpensively fosters
innovation because it encourages teams to experiment. These innovations
ripple out into other parts of the organization. Lowering the cost of change
enables companies to rethink their business models.

Reliable, Not Repeatable


Note that the word “repeatable” isn’t in the agile lexicon. Implementing
repeatable processes has been the goal of many companies, but in product
development, repeatability turns out to be the wrong goal; in fact, it turns
out to be an extremely counterproductive goal. Repeatable means doing the
same thing in the same way to produce the same results. Reliable means
meeting targets regardless of the impediments thrown in your way—it
means constantly adapting to meet a goal.
Repeatable processes reduce variability through measurement and con-
stant process correction. The term originated in manufacturing, where
results were well defined and repeatability meant that if a process had con-
sistent inputs, then defined outputs would be produced. Repeatable means
that the conversion of inputs to outputs can be replicated with little varia-
tion. It implies that no new information can be generated during the process
because we have to know all the information in advance to predict the out-
put results accurately. Repeatable processes are not effective for product
development projects because precise results are rarely predictable, inputs
vary considerably from project to project, and the input-to-output conver-
sions themselves are highly variable.
Reliable processes focus on outputs, not inputs. Using a reliable
process, team members figure out ways to consistently achieve a given goal
even though the inputs vary dramatically. Because of the input variations,
the team may not use the same processes or practices from one project, or
even one iteration, to the next. Reliability is results driven. Repeatability is
input driven. The irony is that if every project process was somehow made
repeatable, the project would be extremely unstable because of input and
transformation variations. Even those organizations that purport to have
repeatable processes are often successful not because of those processes, but
because of the adaptability of the people who are using those processes.


At best, a repeatable process can deliver only what was specified in
the beginning. A reliable, emergent process can actually deliver a
better result than anyone ever conceived in the beginning. An
emergent process can produce what you wish you had thought about
at the start if only you had been smart and prescient enough.
Herein lies a definitional issue with project scope. With production-style
projects, those amenable to repeatable processes, scope is considered to be
the defined requirements. But in product development, requirements evolve
and change over the life of the project, so “scope” can never be precisely
defined in the beginning. Therefore, the correct scope to consider for agile
projects isn’t defined requirements but the articulated product vision—a
releasable product. Product managers may be worried about specific
requirements, but executives are concerned about the product as a whole—
Does it meet the vision of the customer? When management asks the ever-
popular question, “Did the project meet its scope, schedule, and cost
targets?” The answer should be evaluated according to the vision, value, and
totality of the capabilities delivered. That is, the evaluation of success can be
encapsulated in the question “Do we have a releasable product?” not on
whether the set of specific features defined at the start of the project was
produced.
Agile Project Management is both reliable and predictable: It can
deliver products that meet the customer’s evolving needs within the bound-
ary constraints better than any other process for a given level of uncertainty.
Why does this happen? Not because some project manager specified
detailed tasks and micromanaged them, but because an agile leader estab-
lished an environment in which people wanted to excel and meet their goals.
Although APM is reliable, it is not infallible, and it cannot eliminate the
vagaries of uncertainty, nor the surprises encountered while exploring. APM
can shift the odds toward success. If executives expect projects to deliver on
the product vision, within the constraints of specified time and cost, every
time, without fail, then they should be running an assembly line, not devel-
oping products.


Reflection and Retrospective


Adapting requires both a certain mindset and a set of skills. If we are to be
adaptable, then we must be willing to seriously and critically evaluate our
performance as individual contributors and as teams. Effective teams cover
four key subject areas in their retrospectives: product from both the cus-
tomer’s perspective and a technical quality perspective; process, as in how
well the processes and practices being used by the team are working; team,
as in how well the group is working as a high-performance team; and project,
as in how the project is progressing according to plan. Feedback in each of
these areas—at the end of each iteration and at the end of the project—leads
to adaptations that improve performance. The “how to” of retrospectives
and reflection is covered in Chapter 10, “The Adapt and Close Phases.”

Principles to Practices
We adapt our processes and practices as necessary.
Ultimately, what people do, how they behave, is what creates great products.
Principles and practices are guides; they help identify and reinforce certain
behaviors.
Although principles guide agile teams, specific practices are necessary
to actually accomplish work. A process structure and specific practices form
a minimal, flexible framework for self-organizing teams. In an agile project
there must be both anticipatory and adaptive processes and practices.
Release planning uses known information to “anticipate” the future. Refac-
toring uses information found later in the project to “adapt” code. Ron Jef-
fries once said, “I have more confidence in my ability to adapt than in my
ability to plan.” Agilists do anticipate, but they always try to understand the
limits of anticipation and try to err on the side of less of it.


Final Thoughts
Developing great products requires exploration, not tracking against a plan.
Exploring and adapting are two behavioral traits required to innovate—
having the courage to explore into the unknown and having the humility to
recognize mistakes and adapt to the situation. Magellan had a vision, a goal,
and some general ideas about sailing from Spain down the coast of South
America, avoiding Portuguese ships if at all possible, exploring dead end
after dead end to find a way around Cape Horn, then tracking across the
Pacific to once-again known territory in the Southeast Asia archipelagoes.
Great new products come from similarly audacious goals and rough plans
that often include large gaps in which “miracles happen,” much like the mir-
acle of finding the Straits of Magellan.

Michele Sliger
Stacia Broderick

The Software Project Manager's Bridge to Agility

When software development teams move to agile methods, experienced project
managers often struggle — doubtful about the new approach and uncertain about
their new roles and responsibilities. In this book, two long-time certified Project
Management Professionals (PMP®s) and Scrum trainers have built a bridge to
this dynamic new paradigm. They show experienced project managers how to
successfully transition to agile by refocusing on facilitation and collaboration, not
"command and control."

The authors begin by explaining how agile works: how it differs from traditional
"plan-driven" methodologies, the benefits it promises, and the real-world results
it delivers. Next, they systematically map the Project Management Institute's
classic, methodology-independent techniques and terminology to agile practices.
They cover both process and project lifecycles and carefully address vital issues
ranging from scope and time to cost management and stakeholder communication.
Finally, drawing on their own extensive personal experience, they put a human
face on your personal transition to agile — covering the emotional challenges,
personal values, and key leadership traits you'll need to succeed.

Coverage includes
• Relating the PMBOK® Guide ideals to agile practices: similarities, overlaps, and differences
• Understanding the role and value of agile techniques such as iteration/release planning and retrospectives
• Using agile techniques to systematically and continually reduce risk
• Implementing quality assurance (QA) where it belongs: in analysis, design, defect prevention, and continuous improvement
• Learning to trust your teams and listen for their discoveries
• Procuring, purchasing, and contracting for software in agile, collaborative environments
• Avoiding the common mistakes software teams make in transitioning to agile
• Coordinating with project management offices and non-agile teams
• "Selling" agile within your teams and throughout your organization

available
• Book: 9780321502759
• Safari Online
• EBOOK: 0321572742
• KINDLE: B001ADIWMO

About the Authors
Michele Sliger has extensive experience in agile software development, having
transitioned to Scrum and XP practices in 2000 after starting her career following
the traditional waterfall approach. Michele is the owner of Sliger Consulting Inc.,
where she consults with businesses ranging from small startups to Fortune 500
companies, helping teams with their agile adoption, and helping organizations
prepare for the changes that agile adoption brings. She is a certified Project
Management Professional (PMP®) and a Certified Scrum Trainer (CST).

Stacia Broderick has worked as a project manager for fifteen years, the last eight
in software development. She was fortunate to be helped across the bridge under
the mentorship of Ken Schwaber while working for Primavera Systems in 2003
and ever since has helped hundreds of teams the world over embrace the principles
of and transition to an agile way of creating products. Stacia founded her company,
AgileEvolution, Inc., in 2006 and is a Certified Scrum Trainer as well as a PMP®.

informit.com/aw

Chapter 5
Scope Management

Project Scope Management includes the processes required to ensure
that the project includes all the work required, and only the work
required, to complete the project successfully.
—PMBOK® Guide

It is not the strongest of the species that survive, nor the most
intelligent, but the ones most responsive to change.
—Charles Darwin, The Origin of Species

Next week there can’t be any crisis. My schedule is already full.


—Henry Kissinger

“Scope creep” has always been the bane of traditional project managers, as
requirements continue to change in response to customer business needs,
changes in the industry, changes in technology, and things that were learned
during the development process. Scope planning, scope definition, scope
verification, and scope control are all processes that are defined in the
PMBOK® Guide to prevent scope creep, and these areas earn great atten-
tion from project managers. Those who use agile methods believe these
deserve great attention as well, but their philosophy on managing scope is
completely different. Plan-driven approaches work hard to prevent changes
in scope, whereas agile approaches expect and embrace scope change. The
agile strategy is to fix resources and schedule, and then work to implement
the highest value features as defined by the customer. Thus, the scope
remains flexible. This is in contrast to a typical waterfall approach, as shown
in Figure 5-1, where features (scope) are first defined in detail, driving the
cost and schedule estimates. Agile has simply flipped the triangle.

Figure 5-1 Waterfall vs. Agile: The paradigm shift (original concept courtesy of the
DSDM Consortium). Agile flips the triangle: the traditional, plan-driven approach fixes
features and lets resources and schedule vary, while the agile, value/vision-driven
approach fixes resources and schedule and lets features vary.

Scope Planning
The PMBOK® Guide defines the Project Scope Management Plan as the out-
put of the scope planning process.1 This document defines the processes that
will be followed in defining scope, documenting scope, verifying and accept-
ing scope and completed deliverables, and controlling and managing
requests for changes to the scope. In agile, the iterative and incremental
process itself is what manages scope. Unless documentation is required for
auditing purposes, no additional document outlining procedures for scope
management is needed. Scope is defined and redefined constantly in agile, as
part of the planning meetings—in particular, release planning and iteration
planning—and by the management of the product backlog. Remember,
resources and time are typically fixed in agile approaches, and it’s the scope
that is allowed to change. However, when fixed-scope projects are required,
it is the number of iterations that will change, in order to accommodate the
need for a full feature set prior to release. Additionally, one of the success cri-
teria in traditional projects is the extent to which we can “stick to the scope”;
in agile, it is more important to be able to efficiently and effectively respond
to change. The success criterion in agile thus changes to "Are we providing
value to our customer?” The primary measure of progress is working code.


Table 5-1 provides a summary comparison of scope planning from the
traditional and agile perspectives. In agile projects, scope planning is
referred to as “managing the product backlog.”

Table 5-1 Scope Planning

Traditional: Prepare a Project Scope Management Plan document.
Agile: Commit to following the framework as outlined in the chosen agile process.

Scope Definition

The PMBOK® Guide practices of scope definition, work breakdown struc-
ture (WBS) creation, and scope verification occur iteratively in agile. A tra-
ditional WBS for software projects is usually divided at its highest level into
phases of analysis, design, coding, testing, and deployment activities. Each
of these phases is then decomposed into tasks or groups of tasks, referred to
as work packages in the PMBOK® Guide. Traditional project planning
begins top-down and relies on the elaboration of detailed tasks with esti-
mates and dependencies to drive the project schedule via use of critical path
analysis. Even though the PMBOK® Guide goes into great detail about
scope decomposition by way of WBS (work breakdown structure), it also
warns that “excessive decomposition can lead to nonproductive manage-
ment effort, inefficient use of resources, and decreased efficiency in per-
forming the work.”2
In agile, we approach these practices differently in that we define fea-
tures at a high level in the product backlog and then place features into iter-
ations during release planning. One can think of the iteration—or even the
feature itself—as the agile equivalent of work packages. The features are
estimated at a gross level in the product backlog—no detailed tasks or
resources are defined at this point in time. Once the iteration begins, the fea-
tures slated for that iteration—and only that iteration—are then elaborated
into tasks that represent a development plan for the feature. Think of it as
just-in-time elaboration, preventing a wasteful buildup of requirements
inventory that may never be processed. The PMBOK® Guide supports this
idea of "rolling wave planning":3 As the work is decomposed to lower levels
of detail, the ability to plan, manage, and control the work is enhanced
because the short timeframe of the iteration reduces the amount of detail
and the complexity of estimating. The agile approach assumes that because
things change so often, you shouldn’t spend the time doing “excessive
decomposition” until you’re ready to do the work.
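One way to picture the difference between gross backlog estimates and just-in-time task elaboration is the sketch below. The story-point values and task names are invented for illustration (the feature names echo the product vision examples later in this chapter), and real teams would track this in a backlog tool rather than in code:

from dataclasses import dataclass, field
from typing import List


@dataclass
class Feature:
    """A product backlog item, estimated only at a gross level."""
    name: str
    story_points: int                               # gross, relative estimate
    tasks: List[str] = field(default_factory=list)  # empty until its iteration begins


# The product backlog holds coarse-grained features in priority order.
backlog = [
    Feature("Provide online order capabilities", 13),
    Feature("Enable international ordering and delivery", 20),
    Feature("Integrate with brick-and-mortar inventory system", 8),
]


def elaborate_for_iteration(feature: Feature, tasks: List[str]) -> None:
    """Just-in-time elaboration: break a feature into tasks only when its iteration starts."""
    feature.tasks.extend(tasks)


# Only the feature pulled into the current iteration gets a detailed task plan.
elaborate_for_iteration(
    backlog[0],
    ["Design order form", "Implement order API", "Write acceptance tests"],
)

for feature in backlog:
    detail = ", ".join(feature.tasks) if feature.tasks else "not yet elaborated"
    print(f"{feature.name} ({feature.story_points} pts): {detail}")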
Let’s look at how scope is defined throughout an agile project by exam-
ining five levels of planning common to most agile projects: the product
vision, the product roadmap, the release plan, the iteration plan, and the
daily plan.4

Product Vision
At the outset of a project, it is typical to hold a kickoff meeting. Agile is no
different; however, the way the agile vision meeting is conducted is unlike
what a traditional project manager might be accustomed to. Although the
vision is defined and presented by the customer or business representative,
it is the team that clarifies the vision during the discussions and subsequent
exercises. Therefore, the team is heavily involved, and group exercises are a
big part of determining the final outcomes. See Chapter 4, “Integration
Management,” for more detail on vision meetings.
The vision meeting is designed to present the big picture, get all team
members on the same page, and ensure a clear understanding of what it is
that they’ve been brought together to do. The vision defines the mission of
the project team and the boundaries within which they will work to achieve
the desired results. The project’s goal should be directly traceable to a cor-
porate strategic objective.
Here the scope is defined at a very high level. It is not uncommon to
leave the vision meeting with only a dozen or so features identified, such as
“provide online order capabilities,” “enable international ordering and
delivery,” “create data warehouse of customer orders to use for marketing
purposes,” and “integrate with our current brick-and-mortar inventory sys-
tem.” Clearly these are all very large pieces of functionality with little-to-no
detail—and this is what is appropriate at this stage of the project. The far-
ther away the delivery date, the broader the stroke given to feature details.

Product Roadmap
A product roadmap shows how the product will evolve over the next three to four releases or some period of calendar time, typically quarters. The product roadmap is a high-level representation of what features or themes are to be delivered in each release, the customer targeted, the architecture needed to support the features, and the business value the release is expected to meet. The customer or product manager, agile project manager, architect, and executive management should meet on average two to three times a year to collaborate on the development and revision of the product roadmap. Figure 5-2 shows a sample roadmap template made popular by Luke Hohmann in his book Beyond Software Architecture.5

Note: In agile, the word "release" does not solely mean a product release to the end customer—it can also mean an internal release to fulfill integration milestones and continue to confirm that the product is "potentially shippable."

Figure 5-2  Product roadmap template, courtesy of Enthiosys and Luke Hohmann, from his book Beyond Software Architecture. The template plots a time horizon (quarters work well) against rows for the market map (target market demographics), the features/benefit map, the technology/architecture roadmap, and market events/rhythms.

Because the customer is responsible for maintaining and prioritizing the backlog of work, the customer also owns the product roadmap. In large corporations or on projects with multiple customers or product owners, the customer assigned to the project will often first work with others in his business unit to create a roadmap straw man as part of working out the priorities of deliverables with the business. Then this straw man is presented to key project team members (agile project manager, architect, and so on) for further revision. Finally, the roadmap is presented to the entire team and interested stakeholders, usually as part of the vision meeting and/or release planning meeting. Feedback is encouraged at all sessions because it helps to better define a reasonable approach to product deliverables.

In addition to the vision plan and the product roadmap themselves, the product vision and roadmap discussions should also produce a prioritized product backlog. These are all inputs into the next level of planning: release (or quarterly) planning.

Release (or Quarterly) Planning


In a release planning meeting, the team reviews the strategies and vision
shared by the customer and determines how to map the work from the pri-
oritized backlog into the iterations that make up a release or that make up a
period of time such as a quarter. Figure 5-3 shows a typical release plan
agenda, and Figure 5-4 shows the release plan done using a whiteboard and
sticky notes, as is common in agile meetings when the team is co-located.
The release plan is divided up into iterations (usually one flipchart page per
iteration), with associated high-level features. The release plan also includes
any assumptions, dependencies, constraints, decisions made, concerns,
risks, or other issues that may affect the release. Again, documentation of
these additional items can be as simple as posting the flipchart that they
were originally recorded on or taking a picture of it and posting it on a
shared website.
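
The agenda question "Can we move the features into the iterations?" boils down to simple capacity arithmetic against the team's velocity. The sketch below is only an illustration of that arithmetic (a greedy first-fit in Python, with made-up feature names and numbers); in the meeting itself the team makes these calls collaboratively and may split features that do not fit.

def plan_release(prioritized_features, velocity_points, num_iterations):
    # Greedy first-fit: walk the prioritized backlog and drop each feature
    # into the first iteration with enough remaining capacity; whatever
    # does not fit stays in the backlog for a later release.
    iterations = [[] for _ in range(num_iterations)]
    remaining = [velocity_points] * num_iterations
    leftover = []
    for name, points in prioritized_features:
        for i in range(num_iterations):
            if points <= remaining[i]:
                iterations[i].append(name)
                remaining[i] -= points
                break
        else:
            leftover.append(name)
    return iterations, leftover

features = [("Order Entry", 13), ("Customer Billing", 8),
            ("Inventory Updates", 8), ("Trend Reporting", 5)]
plan, leftover = plan_release(features, velocity_points=15, num_iterations=2)
# plan -> [['Order Entry'], ['Customer Billing', 'Trend Reporting']]
# leftover -> ['Inventory Updates'] (too big to fit; split it or defer it)
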

Last Responsible Moment Decision Points


Note that one of the items on the release planning meeting agenda is the identifica-
tion of “Last Responsible Moment (LRM) decision points.” LRM decision points
identify points in the release where a decision must be made on an issue so as not to
allow a default decision to occur. In other words, they identify “the moment at which
failing to make a decision eliminates an important alternative”.6 Up until this point,
the team can continue its momentum and gather additional information that will help
in the decision-making. For example, one team knew it would have to make a deci-
sion between going with a Sybase database and an Oracle database. But the team did
not have to decide this before they could start on the project—indeed, the team real-
ized that it could develop code that was database-independent until the third iteration,
when integration and reporting were required. Therefore, the team set the end of the
second iteration as its LRM on the database decision, giving the architect and the
DBA time to experiment with the work being developed concurrently.

Figure 5-3  Release planning meeting agenda

Release Planning Meeting Agenda
Introductions, ground rules, review of purpose and agenda (Project manager)
Do we need to review our current situation and/or existing product roadmap? (Project manager, architect, customer/product owner)
Do we remember the product vision? Has it changed? (Customer/product owner)
What is the release date? How many iterations make up this release? (Project manager)
What is the theme for this release? (Customer/product owner)
What are the features we need for this release? (Customer/product owner)
What assumptions are we making? What constraints are we dealing with? (Team)
What are the milestones/deliverables expected? Do we have any LRM decision points? (Team)
What is the capacity of the team (iteration velocity)? (Team)
Can we move the features into the iterations? Do we need to break them into smaller features so that they can be completed in a single iteration? (Team)
What issues/concerns do we have? (Team)
Can we commit to this release as a team, given what we know today? (Team)
Close: empty parking lot, action items, next steps (Project manager)

Figure 5-4  Release plan

Coordinated Release Planning


A colleague of ours once ran a release planning meeting with teams located in the
U.S. and in London. Because of the size of the team and the budget constraints, not
everyone could attend the day-long event. So the meeting was broken out into three
days. Day 1 was focused on the U.S. team’s release plan and all its assumptions
about and dependencies on the London team. Due to time zone issues, the London
team listened in on the phone for the first part of the meeting as the vision and the
high-level detail and expectations around the features were discussed, then dropped
off the call once the U.S. team started on the work of moving the features into the
iterations. On Day 2, the London team did its work of moving the features into the
iterations after reviewing the results of the U.S. team’s release plan (photos and notes
were made available on their shared wiki). At the end of Day 2, the London team
posted its release plan. Day 3 was devoted to the coordination of the two plans, mak-
ing sure all assumptions had been addressed and understood, all dependencies
accounted for, and proper prioritizations had been made reflecting the teams’ con-
straints. Both groups committed to the release plan on the third day after some final
tweaking.

Teams that are not co-located should make every effort to bring every-
one together for this meeting. Agile emphasizes face-to-face communication
because of its benefits. However, the realities of geographically dispersed teams and budget constraints force teams to be selective about when they can gather together as a group. The vision and
release planning meetings should receive high priority, because the informa-
tion shared and decisions made in these meetings guide the team through-
out the remainder of the release.

Iteration Planning
Traditional scope definition and many of the practices defined in the PMBOK® Guide knowledge area of Project Time Management are done as part of iteration planning. Here, features are elaborated (creating the equivalent of PMBOK® Guide work packages), tasks are identified, and the time needed to accomplish the tasks is estimated (see Figures 5-5 and 5-8). At the beginning of each iteration, the team should hold an iteration planning meeting to conduct this work. The team reviews the release plan and the prioritized items in the backlog, reviews the features requested for the current iteration, and tasks out and estimates those features. See Figure 5-6 for a typical iteration planning meeting agenda. In keeping with the agile practice of just-in-time design, it is here that the details of the features are discussed and negotiated.

Figure 5-5  Iteration plan

Figure 5-6  Iteration planning meeting agenda

Iteration Planning Meeting Agenda
Introductions, ground rules, review of purpose and agenda (Project manager)
Do we know our iteration start and end dates? (Project manager)
Do we know the team's velocity? (Team)
Do we know what "done" means? (Team)
What are the features we need for this iteration? What are the acceptance criteria for each feature? (Customer/product owner)
Do we have enough information about the features so that we can task them out? (Team)
Can we estimate the time it takes to complete the tasks? (Team)
What assumptions are we making? What constraints are we dealing with? Are there dependencies that affect our prioritization? (Team)
Are we within our velocity limits? (Team)
What issues/concerns do we have? (Team)
Can we commit to this iteration as a team, given what we know today? (Team)
Close: empty parking lot, action items, next steps (Project manager)

Again, planning and design work is done only for the pieces that are
being readied to code in that iteration, not for the entire system. It’s often
discovered during iteration planning that the sum of the task efforts exceeds
the size of the iteration timebox. When this occurs, some of the work needs
to be shifted either into the next iteration or back into the backlog. Similarly,
if a team discovers that it has chosen too little work for the iteration, it will
consult with the customer, who can then give the team an additional feature
or two to make up the difference. This allows the team to make a realistic
commitment to the scope of the work being defined.
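
A minimal sketch of that timebox check, assuming task estimates in hours and a hypothetical capacity figure; in practice the team and customer negotiate which features move, rather than letting a script decide.

def check_iteration_commitment(candidate_features, capacity_hours):
    # Sum the task estimates feature by feature, in priority order; features
    # that overflow the timebox are flagged to move to the next iteration
    # or back into the product backlog.
    committed, overflow, used = [], [], 0
    for feature_name, task_hours in candidate_features:
        total = sum(task_hours)
        if used + total <= capacity_hours:
            committed.append(feature_name)
            used += total
        else:
            overflow.append(feature_name)
    return committed, overflow, capacity_hours - used

committed, overflow, spare = check_iteration_commitment(
    [("Place Order Using Credit Card", [5, 13, 8, 2, 2]),
     ("Place Order Using PayPal", [8, 6, 4])],
    capacity_hours=40,
)
# committed -> ['Place Order Using Credit Card'], overflow -> ['Place Order Using PayPal']
# spare -> 10: enough room to ask the customer for one more small feature.
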

Daily Stand-Up
One of the key heartbeats of agile development involves the practice of daily
stand-up meetings. It is just what it sounds like: a daily meeting, where all
team members attend, and while remaining standing, they each relate their
status to the other team members and their plan for the day based on the
progress that they’ve made. Standing helps keep the meetings short—stand-
ups should run only 5 to 15 minutes. The stand-up's primary purpose is for the team to inspect and adapt its work plan (iteration backlog) by quickly sharing information about the progress (or lack thereof) being made by each
individual regarding the tasks that were committed to during the iteration
planning meeting. These stand-ups help the team to remain focused on the
agreed-to scope and goals of the iteration.

Summary Comparison
Table 5-2 provides a summary comparison of traditional and agile
approaches to scope definition. In agile projects this is called “multilevel
planning.”

Table 5-2  Scope Definition

Traditional: Prepare a Project Scope Statement document that includes items such as the following: project boundaries and objectives, product scope description…
Agile: Conduct a vision meeting to share the product vision; confirm and clarify the boundaries, objectives, and product scope description using exercises such as the elevator statement and design the box.

Traditional: And major milestones and project deliverables…
Agile: Conduct a planning meeting to prepare the product roadmap, as well as release or quarterly planning meetings that also include milestones and deliverables at an iteration level.

Traditional: And product specifications and acceptance criteria…
Agile: Conduct an iteration planning meeting that results in the detail around each feature, and the tasks needed to complete the feature according to the team's definition of "done" and the acceptance criteria defined by the customer.

Traditional: And assumptions and constraints.
Agile: All planning meetings identify and/or review assumptions and constraints.

Create a WBS

Agile teams do not tend to create formal WBSs (work breakdown struc-
tures). Instead, flipcharts and whiteboards are used to capture the break-
down of work. You’ve seen examples of these in Figures 5-4 and 5-5. So at
the end of release planning, the agile equivalent of a WBS—a feature break-
down structure—would look like the sample release plan feature break-
down structure in Figure 5-7. If having iterations as work packages is not
sufficient for your organization/billing needs, then breaking the work down
further into smaller work packages would look like the results of an iteration
planning meeting, as illustrated in Figure 5-8.
Table 5-3 compares the traditional and agile approaches to work break-
down. In agile projects, the work breakdown structure is captured in the
release plan and the iteration plan.

Table 5-3  WBS Creation

Traditional: Create a work breakdown structure diagram.
Agile: Conduct planning meetings and give the team the responsibility for breaking down the work into smaller work packages (features and tasks), displayed as the release plan at the high level, and the iteration plan at the more detailed level.

Figure 5-7  Release plan feature breakdown structure

Software Product Release, broken down into Iteration 1 (Order Entry), Iteration 2 (Customer Billing), Iteration 3 (Inventory Updates), and Iteration 4 (Trend Reporting), with additional features (Customer Profile, Inventory Report, Security Options) allocated across the iterations.

Figure 5-8  Iteration plan (partial)

Iteration 1 > Order Entry

Place Order Using Credit Card
  Tasks:                                Estimate (hours):  Who:
  Confirm available inventory.          5                  Sue
  Capture customer info.                13                 Sue
  Capture shipping options.             8                  Rob
  Validate credit card.                 2                  Stu
  Provide status to user (pass, fail).  2                  Stu

Place Order Using PayPal (etc.)
Access/Edit Shopping Cart
Cancel Order

Scope Verification

Scope verification is accomplished within the iteration, as the customer gets to review, test, and accept the implemented features. Ideally this happens throughout the iteration, but it can also happen at the end of the iteration, during the demo of the working code. Those features that were not accepted (either because they weren't ready or weren't right) move back into the backlog or into the next iteration at the discretion of the customer. Scope change control is handled by the management of this backlog, as discussed in the previous chapter on integration.

Table 5-4 makes the comparison between the traditional and agile
approaches to scope verification. Scope verification is captured by the agile
practices of acceptance testing and customer acceptance.
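
Where a team automates part of this acceptance testing, the customer's acceptance criteria can be captured as executable checks. The following is a hypothetical sketch in Python; the feature, function, and criteria are invented for illustration and are not from this chapter.

# Hypothetical feature under test and its customer-defined acceptance criteria:
# a valid credit-card order reserves inventory and reports success to the user.

def place_order(item, quantity, card_number, inventory):
    # Toy stand-in for the real "Place Order Using Credit Card" feature.
    if inventory.get(item, 0) < quantity:
        return {"status": "fail", "reason": "insufficient inventory"}
    if len(card_number) != 16 or not card_number.isdigit():
        return {"status": "fail", "reason": "invalid card"}
    inventory[item] -= quantity
    return {"status": "pass"}

def test_customer_accepts_credit_card_order():
    inventory = {"widget": 10}
    result = place_order("widget", 2, "4111111111111111", inventory)
    assert result["status"] == "pass"   # criterion: order succeeds
    assert inventory["widget"] == 8     # criterion: inventory was reserved

test_customer_accepts_credit_card_order()
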

Table 5-4  Scope Verification

Traditional: Document those completed deliverables that have been accepted and those that have not been accepted, along with the reason.
Agile: Documentation of accepted features may be done informally (by moving the sticky notes to the "done" pile) or formally.

Traditional: Document change requests.
Agile: Customer updates the backlog.

Scope Control

Controlling scope in agile projects consists of two things: managing the product backlog and protecting the iteration. Whereas the customer maintains the backlog, it is the agile project manager who protects the team and helps prevent scope changes from occurring during the iteration.

When a team commits to the iteration at the end of the iteration planning meeting, the delivery team is effectively saying, "Given what we know today, we believe we can deliver this work using our definition of 'done' within this iteration," and the customer is effectively saying, "Given what I know today, this is the work that I am expecting by the end of the iteration, and during that time I will not mess with the iteration backlog" (that is, scope). The iteration backlog is thus locked in.
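
As an illustration of that protection, and not a mechanism the book prescribes, the locked iteration backlog can be pictured as a simple object that routes mid-iteration change requests back to the product backlog; the names and features below are hypothetical.

class IterationBacklog:
    # Locked at the end of iteration planning; the agile project manager
    # protects the team by routing mid-iteration change requests to the
    # product backlog rather than into the running iteration.
    def __init__(self, committed_features, product_backlog):
        self.committed = list(committed_features)
        self.product_backlog = product_backlog

    def request_change(self, feature):
        self.product_backlog.append(feature)
        return f"'{feature}' goes to the product backlog for a later iteration"

product_backlog = ["Trend Reporting"]
iteration = IterationBacklog(["Order Entry", "Customer Billing"], product_backlog)
print(iteration.request_change("Gift-wrapping option"))
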
It is important to set the length of your iteration accordingly, because
the customer must wait until the next iteration to make changes. If there
happens to be lots of “requirements churn” (that is, requests for changes are
coming in very frequently), you may want to discuss shorter iteration cycles
with the team in order to enable more frequent changes. Maintenance teams
may have iteration lengths of only one week, whereas larger system develop-
ments with known requirements may have an iteration length of four to six
weeks. If the customer keeps trying to interrupt the team with changes, the
iteration length may be too long.
There will always be exceptions, and in those cases a discussion
between the customer and the agile project manager should help identify
potential resolutions. Iterations can be aborted and restarted, but this
should be the rare exception.
Given the short duration of iterations, it is easy to protect the iteration
backlog from change. However, changes in the product roadmap and the
release plan are expected and therefore should be reviewed regularly.
Table 5-5 lists the differences between the traditional and agile
approaches to scope control. Agile users refer to scope control as “managing
the product backlog.”

Table 5-5  Scope Control

Traditional: Use a change control system to manage change.
Agile: The customer manages the product backlog; once the team commits to the work to be done in an iteration, the scope is protected for that duration.

Traditional: Update all documents as appropriate with the approved changes.
Agile: The team revisits release plans and product roadmaps regularly, making changes as needed to better reflect the team's progress and changes requested by the customer.

Summary
The main points of this chapter can be summarized as follows:

• "Scope creep" doesn't exist in agile projects, because scope is expected to change.
• Scope management in agile is primarily a function of “rolling wave”
planning and the management of the product backlog.
• Scope is defined and redefined using five different levels of planning
that take the team from the broad vision down to what team mem-
bers plan to complete today.
• WBSs are not created per se; instead, release/quarterly plans and
iteration plans serve to break down the work into smaller work pack-
ages, referred to as “features and tasks.”
• Scope is verified by the customer, who is responsible for accepting or
rejecting the features completed each iteration.
• Scope is controlled through the use of the backlog, rolling wave plan-
ning, and the protection of the iteration.

Table 5-6 presents the differences in project management behavior regarding scope management in traditional and agile projects.

Table 5-6  Agile Project Manager's Change List for Scope Management

I used to do this: Prepare a formal Project Scope Management plan.
Now I do this: Make sure the team understands the framework and process structure of the chosen agile approach.

I used to do this: Prepare a formal Project Scope Statement document.
Now I do this: Facilitate planning meetings—vision, release, iteration, daily stand-up—and arrange for the informally documented plans to be highly visible to all stakeholders.

I used to do this: Create the WBS.
Now I do this: Facilitate the release planning meeting so that the team can create the plan showing the breakdown of work across several iterations.

I used to do this: Manage the change control system and try to prevent scope creep.
Now I do this: Step away from the backlog; it is owned by the customer. If needed, remind the customer that during the iteration, the team is protected from scope changes.

I used to do this: Manage the delivery of tasks to prevent or correct scope creep at the task level.
Now I do this: Allow team members to manage their daily tasks and facilitate conversations with the customer to avoid unnecessary work or "gold plating."

Endnotes
1. PMBOK® Guide, 107.
2. Ibid., 114.
3. Ibid.
4. Mike Cohn. Agile Estimating and Planning (Upper Saddle River, NJ: Pearson
Education, Inc., 2006), 28.
5. Luke Hohmann. Beyond Software Architecture (Boston: Addison-Wesley,
2003), 287.
6. Poppendieck. Lean Software Development, 57.

Agile Adoption Patterns
A Roadmap to Organizational Success
Amr Elssamadisy

Buy 2 from informIT and save 35%: enter the coupon code AGILE2009 during checkout.

Agile methods promise to help you create software that delivers far more business value — and do it faster, at lower cost, and with less pain. However, many organizations struggle with implementation and leveraging these methods to their full benefit. In this book, Amr Elssamadisy identifies the powerful lessons that have been learned about successfully moving to agile and distills them into 30 proven agile adoption patterns.

Elssamadisy walks you through the process of defining your optimal agile adoption strategy with case studies and hands-on exercises that illuminate the key points. He systematically examines the most common obstacles to agile implementation, identifying proven solutions. You'll learn where to start, how to choose the best agile practices for your business and technical environment, and how to adopt agility incrementally, building on steadily growing success.

Next, he presents the definitive agile adoption pattern reference: all the information you need to implement the strategy that you've already defined. Utilizing the classic pattern format, he explains each agile solution in its proper context, revealing why it works — and how to make the most of it. The pattern reference prepares you to
• Understand the core drivers, principles, and values associated with agile success
• Tightly focus development on delivering business value — and recognize the "smells" of a project headed off track
• Gain rapid, effective feedback practices: iteration, kickoff and stand-up meetings, demos, retrospectives, and much more
• Foster team development: co-location, self-organization, cross-functional roles, and how to bring the customer aboard
• Facilitate technical tasks and processes: testing, refactoring, continuous integration, simple design, collective code ownership, and pair programming
• Act as an effective coach, learning to engage the community and promote learning
• Integrate "clusters" of agile practices that work exceptionally well together

Agile Adoption Patterns will help you whether you're planning your first agile project, trying to improve your next project, or evangelizing agility throughout your organization. This actionable advice is designed to work with any agile method, from XP and Scrum to Crystal Clear and Lean. The practical insights will make you more effective in any agile project role: as leader, developer, architect, or customer.

Available: Book: 9780321514523 • Safari Online • EBOOK: 0321579437 • KINDLE: B001E50WTY

About the Author
Amr Elssamadisy (elssamadisy.com) is a software development practitioner who works with his clients to build better, more valuable software. He and his colleagues at Gemba Systems help both small and large development teams learn new technologies, adopt and adapt appropriate agile development practices, and focus their efforts to maximize the value they bring to their organizations.

Amr is also the author of Patterns of Agile Practice Adoption: The Technical Cluster. He is an editor for the AgileQ at InfoQ, a contributor to the Agile Journal, and a frequent presenter at software development conferences.

informit.com/aw

Chapter 5

ADOPTING AGILE PRACTICES

So far you have read about business value and smells. You have also done the
exercises at the end of each chapter and come up with a prioritized list of
business values and a prioritized list of smells that need fixing. If you have
not done so yet, please stop now and go back and do so. Armed with an
understanding of your customers’ priorities and the main pains your com-
pany is experiencing, you are ready to determine what practices you should
consider adopting to alleviate those pains and get the most value for your
efforts.

In this chapter, I will give you direction on how to go about successfully choosing which practices to consider adopting. I'll also ask you to benchmark your work—even if you just do it subjectively—so you can be "agile" about your adoption. This is, however, only advice on how to come up with your own priorities and your own list of practices to adopt. If you are looking for a prescription—do practice A, then B, but not C—you won't find it here. (And if you do find it elsewhere, my advice to you is not to trust it.)

The Practices
The bulk of this book contains patterns of Agile practice adoption—that is,
Agile practices written up in pattern format with a focus on adoption. In this
chapter, your goal is to choose the practices that fit your organization’s con-
text. That comes down to relying on the work you’ve done in the previous
chapter in prioritizing your business values and smells and using it to choose
which practices to adopt to improve your business values and reduce your
smells.


Patterns of Agile Practice to Business Value Mappings

Let’s start with the meat of the chapter. Figures 5-1 through 5-7 provide dia-
grams for each business value and the practices that improve that business
value. These mappings, like all the patterns in this book, are built by aggre-
gating experiences from several Agile adoption efforts. Each of these prac-
tices corresponds to a pattern that is documented later in this book. Don’t
worry if you don’t know exactly what some of these practices are at this point
in time.
Let's examine Figure 5-1 to understand how to read these business value charts. Arrows between practices indicate dependencies; therefore, Refactoring depends on Automated Developer Tests. Also, vertical ordering is important; the higher up a practice is, the more effective it is for the business value.
Therefore, Iterations are more effective than Automated Developer Tests, and
Test-First Development is more effective than Test-Last Development with
respect to decreasing the time to market. Use these diagrams to determine
what practices to consider adopting. Take the suggestions accompanying
each diagram as just that—suggestions. All the practices in each diagram
positively affect that business value, and you may discover upon reading the
details of the suggested practices that they do not apply in your particular
context.
Figure 5–1  Time to Market practices. The diagram lists practices roughly from more effective (top) to less effective (bottom): Iteration and Continuous Integration; then Release Often, Done State, and Backlog; then Test-Driven Requirements, Automated Developer Tests (Test-First Development above Test-Last Development), and Evolutionary Design; then Customer Part of Team, Functional Tests, Refactoring, Simple Design, and Cross-Functional Team. Arrows indicate dependencies.
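
Purely as an illustration of how to read such a diagram, and not a tool from this book, the Figure 5-1 mapping can be written down as plain data: an ordered list of practices (roughly from more to less effective) plus the dependency arrows, from which an adoption order can be derived.

# The Time to Market practices of Figure 5-1, listed roughly from more
# effective (top of the diagram) to less effective (bottom).
TIME_TO_MARKET_PRACTICES = [
    "Iteration", "Continuous Integration",
    "Release Often", "Done State", "Backlog",
    "Test-Driven Requirements", "Automated Developer Tests", "Evolutionary Design",
    "Customer Part of Team", "Functional Tests", "Refactoring", "Simple Design",
    "Cross-Functional Team",
]

# Dependency arrows; only the arrow called out in the text is shown here
# (Refactoring depends on Automated Developer Tests). Others would be
# added the same way when transcribing the full figure.
DEPENDS_ON = {
    "Refactoring": ["Automated Developer Tests"],
}

def adoption_order(practice, deps=DEPENDS_ON):
    # A practice is preceded by its (transitive) prerequisites.
    order = []
    for prereq in deps.get(practice, []):
        for p in adoption_order(prereq, deps):
            if p not in order:
                order.append(p)
    order.append(practice)
    return order

print(adoption_order("Refactoring"))
# -> ['Automated Developer Tests', 'Refactoring']
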



Small steps and failing fast are the most effective methods to get things out
quickly. Weed out defects early because the earlier you find them, the less
they will cost, and you won’t be building on a crumbling foundation. That
is why Iteration and Continuous Integration lead the practices that most
positively affect time to market. They are both, however, dependent on
other practices. Consider starting with automated tests and the Iteration
trio—Iteration, Done State, and Iteration Backlog—when you want to im-
prove time to market.
Figure 5-2 gives the practices that increase product utility. By far, the most ef-
fective practice is Customers Part of Team. Go there first. Then consider
functional tests if you are already doing automated developer tests or an iter-
ation ending with a demo.
Figure 5–2  Product Utility (Value to Market) practices. The diagram lists practices roughly from more effective to less effective: Customer Part of Team is by far the most effective; below it are Test-Driven Requirements, Iteration, Demo, and Functional Tests; then Release Often, Prioritized Backlog, and requirements documents (User Story, Use Case). Annotations note that some practices do not improve product utility unless the customer is part of the team.



Figure 5–3  Quality to Market practices. The diagram lists practices roughly from more effective to less effective: Test-Driven Development and Test-Driven Requirements at the top, resting on Automated Developer Tests (Test-First Development above Test-Last Development), Functional Tests, Refactoring, and Pair Programming; lower in the diagram are Collective Code Ownership, Evolutionary Design, Continuous Integration, Iteration, Release Often, Simple Design, and Stand-Up Meeting.

For quality to market, test-driven development and test-driven requirements are king; of course, they both depend on other practices. So consider starting with one of the Automated Developer Tests practices (preferably test-first development) and Pair Programming, closely followed by Refactoring. Pair programming helps you come up to speed with these particularly difficult practices. Once you are comfortable with automated developer tests, aim for full-fledged test-driven development and consider functional tests.

Figure 5–4  Flexibility practices. The diagram lists practices roughly from more effective to less effective: Evolutionary Design, Backlog, Done State, Self-Organizing Team, Refactoring, and Simple Design near the top; then Retrospective, Iteration, Demo, Automated Developer Tests (Test-First Development, Test-Last Development), and Functional Tests; then User Story, Cross-Functional Team, Stand-Up Meeting, Customer Part of Team, Evocative Document, Collective Code Ownership, Pair Programming, Continuous Integration, and Information Radiator.


There are two general types of flexibility in software development: team flex-
ibility and technical flexibility. Team flexibility is the team’s ability to recog-
nize and respond to changes that happen. For a team to respond to changes
by changing the software, there needs to be technical flexibility. Therefore,
you need both team flexibility and technical flexibility. Start with Automated
Developer Tests, a self-organizing team, and the trio of Iteration, Done State,
and Backlog. The testing gets you on your way to technical flexibility, and the
remaining practices enable your team’s flexibility.

Figure 5–5  Visibility practices. The diagram lists practices roughly from more effective to less effective: Backlog, Information Radiator, Test-Driven Requirements, and Functional Tests at the top; then Release Often, Iteration, Done State, Demo, and Continuous Integration; then Kickoff Meeting and Stand-Up Meeting.

The backlog and information radiators are your first easy steps toward in-
creased visibility. Depending on your need for increasing visibility, you
can take an easy route and consider iterations with a done state and a
demo or a more difficult but effective route with functional tests and test-
driven requirements.
Figure 5–6  Reduce Cost practices. The diagram lists practices roughly from more effective to less effective: Refactoring, Evolutionary Design, Automated Developer Tests (Test-First Development, Test-Last Development), Functional Tests, Simple Design, and Evocative Document at the top; then Backlog, Cross-Functional Team, Self-Organizing Team, Retrospective, Iteration, and Done State; then Continuous Integration and User Story.



You can reduce cost in two ways: make the code easier to maintain and write
less code—that is, code for the most important features first. Automated
tests, followed by refactoring, simple design, and evolutionary design, are
your path toward reducing the cost of maintenance. A backlog, iteration, and
done state reduce the amount of code written.

Figure 5–7  Product Lifetime practices. The diagram lists practices roughly from more effective to less effective: Automated Developer Tests (Test-First Development, Test-Last Development), Refactoring, Functional Tests, Pair Programming, and Collective Code Ownership at the top; then Simple Design, Self-Organizing Team, Cross-Functional Team, Evolutionary Design, and Evocative Document; then Continuous Integration.

Product lifetime is inversely proportional to the cost of software maintenance. There are two ways that we know how to reduce maintenance costs: 1) build a safety net of tests that allow changes to the software system and reduce the cost of change and 2) spread the knowledge of the design of the software system. Automated developer tests are your key to (1), while pair programming and collective code ownership are good starts for (2).

Patterns of Agile Practice to Smell Mappings


There are two types of smells: business smells and process smells. The busi-
ness smells are inverses of business values, and their patterns of Agile practice
mappings are identical:
• Quality Delivered to Customer Is Unacceptable: Quality to Market
• Delivering New Features to Customer Takes Too Long: Time to Market
• Features Are Not Used by Customer: Product Utility
• Software Is Not Useful to Customer: Product Utility
• Software Is Too Expensive: Reduce Cost

The remaining smells have their own mappings to patterns of Agile practices
(see Figures 5-8 through 5-15).
Figure 5–8  Us Versus Them practices. The diagram lists practices roughly from more effective to less effective: Information Radiator and Prioritized Backlog at the top; then Demo, Release Often, Iteration, and Done State; then Customer Part of Team and Cross-Functional Team.

The Us Versus Them smell can best be alleviated by having frequent conver-
sations about the true nature of the project. Start with increasing visibility by
creating information radiators that show the key points in your develop-
ment. Create a prioritized backlog by involving the whole development
team—including the customers. Use these practices to increase visibility and
build trust. When you are ready, take it further and build more trust by deliv-
ering often: adopt the iteration, demo, and done state trio.

Figure 5–9  Customer Asks for Everything (including the kitchen sink) practices. The diagram lists practices roughly from more effective to less effective: Customer Part of Team at the top; then Test-Driven Requirements, Functional Tests, Backlog, and Planning Poker; then Co-located Team, Demo, Release Often, and Information Radiator; then Kickoff, Iteration, and Stand-Up Meeting.



Understand that when the customer asks for everything, it is a symptom of a lack of trust from the customer that the features will be delivered promptly, and a legacy of change-management barriers in traditional development. The best way to address this issue is to bring the customers in as part of the development team. Have them build the Backlog with the team's input and be responsible for its prioritization.

Figure 5–10  Direct and Regular Customer Input Is Unrealistic practices. The diagram lists practices roughly from more effective to less effective: Test-Driven Requirements and Backlog at the top; then Functional Tests, Information Radiator, Iteration, and Demo; then Release Often and Stand-Up Meeting.

If direct input from the customer is not possible, mitigate this problem by re-
ducing the number of communication errors. You can do this by building
functional tests and slowly working toward test-driven requirements, where
the customer’s requirements document becomes an executable specification.
This particular practice will take a long time to adopt correctly, so start early
and be patient. In the meantime, create a backlog and start delivering work
incrementally, with iterations ending with a demo.

Figure 5–11  Management Is Surprised—Lack of Visibility practices. The diagram lists practices roughly from more effective to less effective: Backlog and Information Radiator at the top; then Done State, Demo, Release Often, and Iteration; then Continuous Integration, Kickoff, and Stand-Up Meeting.

To keep management from being surprised, you need to do two things: 1) build your application incrementally from end to end and 2) communicate your true progress. Address 1) by defining a done state that is as close to deployment as possible and then working in iterations. Address 2) by communicating your progress through information radiators and a demo at the end of every iteration.

Figure 5–12  Bottlenecked Resources—Software Practitioners Are Members of Multiple Teams Concurrently practices. The diagram lists practices roughly from more effective to less effective: Pair Programming at the top; then Automated Developer Tests (Test-First Development, Test-Last Development) and Functional Tests; then Self-Organizing Team, Cross-Functional Team, Stand-Up Meeting, Collective Code Ownership, and Continuous Integration; then Co-located Team.

Bottlenecked resources happen because of specialization. Pair programming is your single most effective method to share knowledge and spread the specialization. This, in turn, allows more than one person to address issues that were previously the domain of a single expert. Automated developer tests help this in another way—they allow people to work in code they do not know well and rely on a safety net of tests to tell them if they have broken any previously working code. This should be your second step toward alleviating resource bottlenecks.

Projects churn when there is no clear prioritization. Prioritize requirements
by creating and maintaining a backlog. To make sure that the backlog is an
accurate reflection of customer needs, make your customers part of your
team and put together a cross-functional team that can build those require-
ments end to end. This will give you and your customers a better under-
standing of requirements, their priorities, and a feedback loop to make
course corrections quickly.

Figure 5–13  Churning Projects practices. The diagram lists practices roughly from more effective to less effective: Backlog at the top; then Customer Part of Team and Cross-Functional Team; then Co-located Team and Information Radiator; then Iteration and Release Often.

Hundreds of bugs in the tracker need to be driven down. Start with automated developer tests supported by pair programming to reduce the number of bugs you are introducing and to start building a safety net of tests. Then work in iterations with a done state to find as many bugs as possible. Don't put off painful issues such as integration. Fix things early.

Figure 5–14  Hundreds (Possibly Thousands) of Bugs in Bug Tracker practices. The diagram lists practices roughly from more effective to less effective: Automated Developer Tests (Test-First Development, Test-Last Development) and Functional Tests at the top; then Simple Design, Pair Programming, Done State, and Iteration; then Refactoring and Continuous Integration.

Figure 5–15  Hardening Phase Needed at End of Release Cycle practices. The diagram lists practices roughly from more effective to less effective: Automated Developer Tests (Test-First Development, Test-Last Development), Functional Tests, and Done State at the top; then Iteration, Continuous Integration, and Demo; then Release Often.

If you have a hardening phase, you’ve let a significant number of defects ac-
cumulate. Stop doing what you’ve been doing and add tests as you develop
code via automated developer tests. Choose a good done state—one that
takes you as close to deployment in every iteration as possible—to weed out
those difficult-to-find bugs early.

Crafting Your Agile Adoption Strategy


You can use the information you have gathered so far about business value
and smells to determine which practices you should consider adopting.

• Choose practices based solely on business value delivered. In this scenario, you are not suffering from any severe pains. You just want to improve your software development process by increasing the business value that your team delivers. Use the Business Value to Practice mapping in Figures 5-1 through 5-7 to choose practices that most strongly affect your organization's business values.

• Choose practices to alleviate smells that have been prioritized by business value. This technique focuses on alleviating pains that you have while keeping business value in mind. Smells are prioritized according to your customers' business values. Then, from the prioritized smell list, you choose the appropriate practices to adopt with the help of the Smell to Practice mappings shown in Figures 5-8 through 5-15.

• Choose practices to address the most visible smells. This is common, although I wouldn't recommend it. It's plain and simple "firefighting"—trying to get rid of the biggest pain regardless of the business value it delivers. This is all too common when the technical team determines the priority without customer input. (I've often been guilty of this.)

The information found in the figures at the beginning of this chapter is pri-
oritized by effectiveness. Therefore, the first practice in the figure is the most
effective practice for increasing the business value or alleviating the smell.
Get your feet wet with the first practice, and after you’ve successfully adopted
that, come back and take another look at the remaining practices and clusters
related to your business value or smell.
No matter how you prioritize your list of practices to adopt, you should
adopt those practices as iteratively as possible. Armed with the list of prac-
tices, here is how you can successfully adopt the Agile practices on your list.

1. Start with an evaluation of the status quo. Take readings (even if they
are subjective) of the current business value(s) you want to improve
and the smell(s) you want to alleviate.
2. Set goals that you want to reach. How much do you want to increase
the business value? How much do you want to reduce the smell? What
is the time frame? Take a guess initially and modify it as you know
more through experience.
3. Pull the first practice or cluster off the list you created.

4. Read the pattern that is related to that cluster or practice. Decide if it is applicable or not by matching the context and forces to your working environment (more details on what patterns are and their different sections in Part 3: The Pattern Catalog). If the practice is not applicable in your environment, go back and pick the next one off the business value/smells table.
5. Once you have determined that the pattern is applicable in your envi-
ronment, read the pattern thoroughly. Follow the advice in the
“Adoption” section in the pattern to get started.
6. Periodically evaluate whether the business value you are addressing is
improving or that the smell you are addressing is being resolved. If it
is not, adapt your practice for your environment using hints from the
“Variations” and “But” sections in the pattern. (You might want to
take a quick read of Chapter 6, “The Patterns of Agile Practice
Adoption,” at this point to get an understanding of what an Agile
adoption pattern looks like.)
7. Go back to step 1 and re-evaluate your business value or smell. If it
needs more improvement (that is, you still have not met your goal set
in 2), consider adding another practice or an entire cluster to resolve
the issue. If it has met your goals, move on to the next one.

So where is the test-driven part of this approach? Your tests are your goal val-
ues that you set in step 2. In step 6, you check your readings after adopting a
practice. This is a test of how effectively the practice(s) you adopted has al-
ready met the goal set earlier. This loop—set a goal, adopt a practice, and
then validate the practice against the expected goal—is a test-driven adop-
tion strategy.1
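
A minimal sketch of that loop in Python, with hypothetical metric, goal, and practice placeholders; the book prescribes the loop itself, not this code.

def adopt_practices(candidate_practices, read_metric, goal, max_rounds=10):
    # Test-driven adoption: take a baseline reading, then repeatedly pull the
    # next practice off the prioritized list, adopt it, and re-measure until
    # the goal value is reached (higher readings assumed to be better here).
    baseline = read_metric()                      # step 1: status quo
    adopted = []
    for _ in range(max_rounds):
        if read_metric() >= goal:                 # step 7: goal met
            break
        if not candidate_practices:
            break                                 # list exhausted; revisit goals
        practice = candidate_practices.pop(0)     # step 3: next practice/cluster
        practice()                                # steps 4-5: read the pattern, adopt it
        adopted.append(practice)                  # step 6: keep checking readings
    return baseline, read_metric(), adopted
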

Where Next?
With the completion of this chapter, you are ready to create your own Agile
adoption strategy that focuses on the business values and smells of your or-
ganization. This is not a one-shot deal—remember to be agile about your
adoption. Measure your progress against your goals and revisit them regu-
larly. Modify your strategy as you learn more about each practice and your
own environment. It is natural to make a few wrong turns, but fail fast and
quickly recover.

1. In management practices, this is commonly referred to as the PDCA cycle (Plan, Do, Check, Act), originally developed by Walter Shewhart at Bell Laboratories in the 1930s and promoted effectively in the 1950s by the quality management guru W. Edwards Deming.

For the remainder of this book, we dig into the details of each practice. There
is a pattern for each practice mentioned in the mappings that describes the
practice, what problems it solves, how others have successfully adopted and
adapted practices in various environments, and what missteps they have
taken upon adoption so you can watch out for them.

Theory to Practice: Building Your Own Agile Practice Adoption Strategy

Answer the following questions to build an adoption strategy. (Use the answers from the Chapter 3 and 4 exercises here. Also read Chapters 45 and 46 for real-world examples of how this might be done.)

1. What are your goals for adopting Agile practices? Do you want to alle-
viate smells or add business value? Be specific. If you have more than
one goal, prioritize them.
2. Take readings of the current business value(s) and smell(s) you want
to address. Don’t worry if they are subjective or fuzzy. Know, to the
best of your ability, where your organization is today with respect to
business values and smells.
3. Choose an adoption strategy. Choose practices using that strategy to
adopt.
4. Read the next chapter, which introduces the patterns. Then start fol-
lowing the steps outlined in this chapter to adopt your first practice.
Don’t forget to periodically take readings of your business value/smell
to make sure the practice is effective.
5. Congratulations and good luck! You've started on your path to Agile practice adoption!

Scaling Lean & Agile Development
Thinking and Organizational Tools for Large-Scale Scrum
Craig Larman, Bas Vodde

Buy 2 from informIT and save 35%: enter the coupon code AGILE2009 during checkout.

Increasingly, large product-development organizations are turning to lean thinking, agile principles and practices, and large-scale Scrum to sustainably and quickly deliver value and innovation. However, many groups have floundered in their practice-oriented adoptions. Why? Because without a deeper understanding of the thinking tools and profound organizational redesign needed, it is as though casting seeds on to an infertile field. Now, drawing on their long experience leading and guiding large-scale lean and agile adoptions for large, multisite, and offshore product development, and drawing on the best research for great team-based agile organizations, internationally recognized consultant and best-selling author Craig Larman and former leader of the agile transformation at Nokia Networks Bas Vodde share the key thinking and organizational tools needed to plant the seeds of product development success in a fertile lean and agile enterprise.

Coverage includes
• Lean thinking and development combined with agile practices and methods
• Systems thinking
• Queuing theory and large-scale development processes
• Moving from single-function and component teams to stable cross-functional cross-component Scrum feature teams with end-to-end responsibility for features
• Organizational redesign to a lean and agile enterprise that delivers value fast
• Large-scale Scrum for multi-hundred-person product groups

In a competitive environment that demands ever-faster cycle times and greater innovation, applied lean thinking and agile principles are becoming an urgent priority. Scaling Lean & Agile Development will help leaders create the foundation for their lean enterprise — and deliver on the significant benefits of agility.

In addition to the foundation tools in this text, see the companion book Practices for Scaling Lean & Agile Development: Large, Multisite, and Offshore Product Development with Large-Scale Scrum, available Dec. 2009, for complementary action tools.

Available: Book: 9780321480965 • Safari Online • EBOOK: 0321617134 • KINDLE: B001PBSDIE

About the Authors
Craig Larman is a management and product development consultant in enterprise-level adoption and use of lean development, agile principles and practices, and large-scale Scrum in large, multisite, and offshore development. He is chief scientist at Valtech, an international consulting and offshore outsourcing company. His books include the best-sellers Agile & Iterative Development: A Manager's Guide (Addison-Wesley, 2004) and Applying UML and Patterns, Third Edition (Prentice Hall, 2005).

Bas Vodde works as an independent product-development consultant and large-scale Scrum coach. For several years he led the agile and Scrum enterprise-wide adoption initiative at Nokia Networks. He is passionate about improving product development, an avid student of organizational, team management, and product development research, and remains an active developer.

informit.com/aw

Chapter 7

FEATURE TEAMS

Better to teach people and risk they leave, than not and risk they stay
—anonymous

INTRODUCTION TO FEATURE TEAMS

Figure 7.1 shows a feature team—a long-lived,1 cross-functional team that completes many end-to-end customer features, one by one.

Figure 7.1  Feature team—long-lived, cross-functional, learning-oriented, multi-skilled people. The diagram shows a customer-centric feature flowing into the long-lived, cross-functional feature team (Developer, Developer, Product Owner, Tester, Architect, Interaction Designer, Analyst, Customer Doc), which delivers a potentially shippable product increment.

This figure could be misinterpreted: A feature team does not have a person who is only a Developer and does not have a person who is only a Tester. Rather, people have primary skills such as Developer and Tester, and also other skills—and are learning new areas. Team members may help in several areas to complete the feature. An 'architect' may write automated tests; a 'tester' may do analysis.

1. A misunderstanding is that new teams re-form for each feature. Not true. A great feature team may stay together for years.


In Scrum and other agile methods the recommended team structure is to organize teams by customer-centric features. Jim Highsmith, in Agile Project Management [Highsmith04], explains:

Feature-based delivery means that the engineering team builds [customer-centric] features of the final product.

In lean thinking, minimizing the wastes of handoff, waiting, WIP, information scatter, and underutilized people is critical (lean thinking wastes, p. 58); cross-functional, cross-component feature teams are a powerful lean solution to reduce these wastes.

Why study the following in-depth analysis? Because feature teams are a key to accelerating time-to-market and to scaling agile development, but a major organizational change for most—changing team structure is slow work, involving learning, many stakeholders, and policy and mindset issues. If you're a change agent for large-scale agility, you need to really grasp the issues.

Figure 7.2  A long-lived feature team; developers, testers, and others create a complete customer feature.

A proper Scrum team is by definition a feature team, able to do all the work to complete a Product Backlog item (a customer feature) (Scrum team, p. 309). Note that Scrum team (feature team) members have no special title other than "team member." There is no emphasis on 'developer' versus 'tester' titles. The goal is to encourage multi-skilled workers and "whole team does whole feature." Naturally people have primary specialities, yet may sometimes be able to help in less familiar areas to get the job done, such as an 'analyst' helping out with automated testing. The titles in Figure 7.1 should not be misinterpreted as promoting working-to-job-title, one of the wastes in lean thinking.


Feature teams are not a new or 'agile' idea; they have been applied to large software development for decades. They are a refinement of cross-functional teams (cross-functional team, p. 196), a well-researched proven practice to speed and improve development. The term and practice were popularized at Microsoft in the 1980s and discussed in Microsoft Secrets [CS95]. Jim McCarthy [McCarthy95], the former development lead of Visual C++, described feature teams:

Feature teams are about empowerment, accountability, identity, consensus and balance…

Empowerment—While it would be difficult to entrust one functional group or a single functional hierarchy, such as Development, for instance, with virtually absolute control over a particular technology area, it's a good idea to do that with a balanced multi-disciplinary team. The frontline experts are the people who know more than anyone else about their area, and it seems dumb not to find a way to let them have control over their area.

Accountability—… If a balanced group of people are mutually accountable for all the aspects of design, development, debugging, QA, shipping, and so on, they will devise ways to share critical observations with one another. Because they are accountable, if they perceive it, they own it. They must pass the perception to the rest of the team.

Identity—… With cross-functional feature teams, individuals gradually begin to identify with a part of the product rather than with a narrow specialized skill.

Consensus—Consensus is the atmosphere of a feature team. Since the point of identification is the feature rather than the function, and since the accountability for the feature is mutual, a certain degree of openness is safe, even necessary. I have observed teams reorganizing themselves, creating visions, reallocating resources, changing schedules, all without sticky conflict.

Balance—Balance on a feature team is about diverse skill sets, diverse assignments, and diverse points of view.


Feature teams are common in organizations learning to deliver


faster and broaden their skills. Examples include Microsoft, Valtech
(applied in their India center for agile offshore development), the
Swedish software industry [OK99], Planon [Smeets07], and telecom
industry giant Ericsson [KAL00]. The report on Ericsson’s feature
teams clarifies:

The feature is the natural unit of functionality that we develop


and deliver to our customers, and thus it is the ideal task for a
team. The feature team is responsible for getting the feature to
the customer within a given time, quality and budget. A feature
team needs to be cross functional as it needs to cover all phases
of the development process from customer contact to system test,
as well as all areas [cross component] of the system which is
impacted by the feature.

To improve development on large products (one sub-project may be


one million person hours) in their GSM radio networks division,
Ericsson applies several practices supporting agility, including fea-
ture teams and daily builds. It’s no coincidence that both these prac-
tices were popularized by Microsoft in the 1990s; Ericsson also
understands the synergy between them [KA01]:

Daily build can only be fully implemented in an organization


with predominantly customer feature design responsibility.

… The reasons why feature responsibility is a prerequisite for


taking advantage of daily build is the amount of coordination
and planning needed between those responsible for delivering
consistent parts of each module that can be built. … In a feature
team this coordination is handled within the team.

In another book describing the successful practices needed for scal-


ing agile development, Jutta Eckstein similarly recommends “verti-
cal teams, which are focused around business functionality”
[Eckstein04]. Feature teams do ‘vertical’ end-to-end customer fea-
tures (GUI, application logic, database, …) rather than ‘horizontal’
components or layers. In her more recent scaling book she again
emphasizes “In order to always keep the business value of your cus-
tomer in mind, there is only one solution: having feature teams in
place” [Eckstein09].


A common misunderstanding is that each feature team member must know everything about the code base, or be a generalist. Not so. Rather, the team is composed of specialists in various software component areas and disciplines (such as database or testing). Only collectively do they have—or can learn—sufficient knowledge to complete an end-to-end customer feature. Through close collaboration they coordinate all feature tasks, while also—important point—learning new skills from each other, and from extra-team experts. In this way, the members are generalizing specialists (see multi-skilled workers, p. 204), a theme in agile methods [KS93, Ambler03], and we reduce the waste of underutilized people (working only in one narrow speciality), a theme in lean thinking.

To summarize the ideal feature team2:

Feature Team

❑ long-lived—the team stays together so they can 'jell' for higher performance; they take on new features over time (see long-lived teams, p. 199)
❑ cross-functional and cross-component
❑ co-located
❑ work on a complete customer-centric feature, across all components and disciplines (analysis, programming, testing, …)
❑ composed of generalizing specialists
❑ in Scrum, typically 7 ± 2 people

Feature teams work independently by being empowered and given the responsibility for a whole feature (see work redesign, p. 234). Advantages include:

2. A Scrum feature team is typically stable, long-lived. The name “fea-


ture team” was first popularized by Microsoft, but is also used in the
(relatively rare) method Feature-Driven Development (FDD).
However, in FDD a “feature team” is only a short-term group
brought together for one feature and then disbanded. Such groups
have the productivity disadvantage of not being ‘jelled’—a rather
slow social process—and the disadvantage of not providing stable
work relationships for people.


❑ increased value throughput—focus on delivering what the


customer or market values most
❑ increased learning—individual and team learning increases
because of broader responsibility and because of co-location
with colleagues who are specialists in a variety of areas
– critical for long-term improvement and acceleration; reduces
the waste of underutilized people
❑ simplified planning—by giving a whole feature to the team,
organizing and planning become easier
– for example, it is no longer necessary to coordinate between
single-specialist functional and component teams
❑ reduced waste of handoff—since the entire co-located fea-
ture team does all work (analysis, design, code, test), handoff is
dramatically reduced
❑ less waiting; faster cycle time—the waste of waiting is
reduced because handoff is eliminated and because completing
a customer feature does not have to wait on multiple parties
each doing part of the work serially
❑ self-managing; improved cost and efficiency—feature
teams (and Scrum) do not require a project manager or matrix
management for feature delivery, because coordination is triv-
ial. The team has responsibility for end-to-end completion and
for coordinating their work with others. Data shows an inverse
relationship between the number of managers and develop-
ment productivity, and also that teams with both an internal
and external focus are more likely to be successful [AB07]. Fea-
ture teams are less expensive—there isn’t the need for extra
overhead such as project managers.
– For example [Jones01]: “The matrix structure tends to raise
the management head count for larger projects. Because soft-
ware productivity declines as the management count goes up,
this form of organization can be hazardous for software.”
❑ better code/design quality—multiple feature teams working on shared components creates pressure to keep the code clean, formatted to standards, constantly refactored, and surrounded by many unit tests—as otherwise it won't be possible to work with. On the other hand, due to long familiarity, component teams live with obfuscated code only they can understand.

❑ better motivation—research [HO80, Hackman02] shows that


if a team feels they have complete end-to-end responsibility for
a work item, and when the goal is customer-directed, then
there is higher motivation and job satisfaction—important fac-
tors in productivity and success.

❑ simple interface and module coordination—one person or


team updates both sides of an interface (caller and called) and
updates code in all modules; because the feature team works
across all components; no need for inter-team coordination.

❑ change is easier—changes in requirements or design (we


know it’s rare, but we heard it happened somewhere once) are
absorbed by one team; multi-team re-coordination and re-plan-
ning are not necessary.

AVOID…SINGLE-FUNCTION TEAMS

A Scrum feature team is cross-functional (cross-discipline), composed of testers, developers, analysts, and so on (see cross-functional teams, p. 196); they do all work to complete features. One person will contribute primary skills (for example, interaction design or GUI programming) and also secondary skills. There is no separate specification team, architecture team, programming team, or testing team, and hence, much less waiting and handoff waste, plus increased multiskill learning.

AVOID…COMPONENT TEAMS

An old approach to organizing developers in a large product group is


component teams—programmer groups formed around the archi-
tectural modules or components of the system, such as a single-spe-
ciality GUI team and component-X team. A customer-centric feature
is decomposed so that each team does only the partial programming
work for their component. The team owns and maintains their com-
ponent—single points of specialization success or failure.


In contrast, feature teams are not organized around specific compo-


nents; the goal is a cross-component team that can work in all mod-
ules to complete a feature.

Components (layer, class, …) still exist, and we strive to create


good components, but we do not organize teams by these.

What About Conway’s Law?

Long ago, Mel Conway [Conway68] observed that

[…] there is a very close relationship between the structure of a


system and the structure of the organization which designed it.

… Any organization that designs a system […] will inevitably


produce a design whose structure is a copy of the organization’s
communication structure.3

That is, once we define an organization of people to design some-


thing, that structure strongly influences the subsequent design—
typically in a one-to-one homomorphism. A striking example Con-
way gave was

[An] organization had eight people who were to produce a COBOL and an ALGOL compiler. After some initial estimates of difficulty and time, five people were assigned to the COBOL job and three to the ALGOL job. The resulting COBOL compiler ran in five phases, the ALGOL compiler ran in three.

Why raise this topic? Because “Conway’s Law” has—strangely—


been incorrectly used by some to promote component teams, as if
Conway were recommending them. But his point was very different:
It was an observation of how team structure limits design, not a rec-
ommendation. Cognizant of the negative impact, he cautioned:

3. In [Brooks75] this was coined Conway’s Law.


To the extent that an organization is not completely flexible in


its communication structure, that organization will stamp out
an image of itself in every design it produces.

… Because the design that occurs first is almost never the best
possible, the prevailing system concept [the design] may need to
change. Therefore, flexibility of organization is important to
effective design. Ways must be found to reward design manag-
ers for keeping their organizations lean and flexible.

In this way, Conway underlines a motivation for feature teams.

In Microsoft Secrets [CS95], Brad Silverberg, senior VP for Windows


and Office, explained their emphasis on feature teams, motivated by
the desire to avoid the effects of “Conway’s Law”:

The software tends to mirror the structure of the organization


that built it. If you have a big, slow organization, you tend to
build big, slow software.

Disadvantages

The amount of delay, overhead, unnecessary management, handoff, bad code, duplication, and coordination complexity introduced in large groups that organize into component teams is extraordinary—primarily driven by two assumptions or fears: 1) people can't or shouldn't learn new skills (other components, testing, …); and 2) code can't be effectively shared and integrated between people. The first assumption is fortunately not so, and the second, more true in the 1970s, has been resolved with agile engineering practices such as continuous integration and test-driven development (TDD).

Component teams seemed a logical structure for 1960s or 1970s


sequential life cycle development with its fragile version control,
delayed integration, and weak testing tools and practices because
the apparent advantages included:

❑ people developed narrow specialized skill, leading to appar-


ently faster work when viewed locally rather than in terms of
overall systems throughput of customer-valued features, and
when viewed short-term rather than long-term


❑ those specialists were less likely to break their code


❑ there were no conflicting code changes from other teams

Fortunately, there has been much innovation since the 1960s. New
life cycle and team structures have been discovered, as have power-
ful new version-control, integration, and testing practices.

Systems and lean thinking invite us to ask, “Does a practice globally


optimize value throughput with ever-faster concept-to-cash cycle
time, or locally optimize for a secondary goal?” From that perspec-
tive, let’s examine the disadvantages of a component team…

Promotes Sequential Life Cycle Development and Mindset

Customer features don’t usually map to a single component nor,


therefore, to a single component team; they typically span many
modules. This influences organization of work.

Who is going to do requirements analysis? If several component


teams will be involved, it is not clear that any particular one of them
should be responsible for analysis. So, a separate analyst or analyst
team does specification in a first step.

Who is going to do high-level design and planning? Again, someone


before the component teams will have to do high-level design and
plan a decomposition of the feature to component-level tasks. She is
usually titled an architect or systems engineer; in [Leffingwell07]
this role is called requirements architect. In this case, one usually
sees a planning spreadsheet similar to the following:

             Component
Feature      A    B    C    D    E    …
Feature 1    x    x    x
Feature 2    x    x    x

Who is going to test the end-to-end feature? This responsibility doesn't belong to any one component team, who only do part of the work. So testing is assigned to a separate system-test team, and they start high-level testing after development has finished—sometimes long after, as they need the work of multiple component teams and these teams seldom finish their work at the same time. Plus, they have a backlog of other features to test.

Now what do we have?

1. (before development) requirements analysis by a separate analyst
2. (before) high-level design and component-level task planning by a separate designer
3. (during) implementation by multiple interdependent component teams that have to coordinate partially completed work
4. (after) system testing of the feature

Back to a waterfall! There is massive handoff waste in the system


and plenty of delay. This is traditional sequential life cycle develop-
ment and mindset, even though—ironically—people may incorrectly
think they are doing Scrum or agile development simply because
they are doing mini-waterfalls in a shorter and iterative cycle
(Figure 7.3). But mini-waterfalls are not lean and agile development;
rather, we want real concurrent engineering.

Completing one non-trivial feature now typically takes at least five


or six iterations instead of one.4 And it gets worse: For very large
systems the organization adds a subsystem layer with a subsystem
architect and subsystem testing—each specialized and each adding
another phase delay before delivering customer functionality.

Component team structures and


sequential life cycle development are directly linked.

4. Five or six iterations is optimistic. With multiple component teams,


the handoff, waiting, and overhead coordination delays implementa-
tion over many iterations.


Figure 7.3 component teams lead to sequential life cycle. [Figure: a Product Backlog item flows from an Analyst (requirement details, Analysis in Iteration 1) to a System Engineer (tasks by component, Design in Iteration 2—probably later), to Component Teams A, B, and C (Implementation in Iterations 3-5—probably later and more), and finally to System Testers (Test at the earliest in Iteration 6—probably later). Realistically, the item is not available as soon as the analyst is finished, not all teams start programming it in the same iteration since they are multitasking on many partially done features, and the system testers are unlikely to be available as soon as the last component team has finished code.]

Component teams create sequential life cycle development with handoff, WIP queues, and single-specialist groups. This organizational design is not Scrum or agile development, which are instead based on true cross-functional teams that do all work for a feature without handoff. This "mini-waterfall" development is sometimes confused with agile development; that is a misunderstanding.

Limits Learning

Consider this thought experiment, although it will never be achieved: Option 1—Everyone working on the product can do everything well. Option 2—Every person can do one (and only one) small task extremely well, but nothing else. Which option allows faster feature throughput? Which option has more bottlenecks? Which offers more adaptability? Although the perfection vision of option-1 isn't possible, viewed along a continuum of desirability, we want to encourage a learning organization that is moving in that direction—reducing bottlenecks, people learning one area well, then two, …

Observations:

❑ Developing multi-skilled people takes plenty of learning oppor-


tunities and close work with different kinds of experts.
❑ More specifically, developing programmers who can help in sev-
eral components requires a variety of experiences and mentors.
❑ Data shows an extraordinary variance in individual program-
mer productivity—studies suggest an average of four times
faster in the top versus bottom quartile [Prechelt00].

There’s a strong link in software development between what you


know and what you can do well—software is the quintessential
knowledge-sensitive profession. In short: There are great business
benefits if we have skilled developers who are constantly learning.

This learning has preconditions, of management responsibility:

❑ slack5
❑ a structure to support continual learning
– but there’s a systemic flaw in component teams…

How do developers become skilled in their craft and broadly knowl-


edgeable about their product? We asked Pekka Laukkanen—an
experienced developer and creator of the Robot test framework
[Laukkanen06, Robot08]—a question: “How do you become a great
developer?” He thought about it carefully and answered: “Practice—
and this means not just writing lots of code, but reflecting on it. And
reading others’ code because that’s where you learn to reflect on your
own.”

Yet, in traditional large-product groups with component teams, most


developers know only a narrow fragment of the system, and most
salient, they don’t see or learn much that is new.

5. See Slack [DeMarco01] on the need for slack to get better.


And on the other hand, there are always a few wonderful people who
know a lot about the system—the people you would go to for help on
an inexplicable bug. Now, when you ask how that’s possible, a com-
mon answer will be, “He knows everything since he always reads
everybody’s code.” Or, “He’s worked on a lot of different code.” Inter-
estingly, such people are more common in large open source prod-
ucts; there is a culture and value of “Use the source, Luke”
[Raymond] that promotes reading and sharing knowledge via code.

Why does this matter? Because component teams inhibit developers


from reading and learning new areas of the code base, and more
broadly, from learning new things.

Contrast the organizational mindset that creates such a structure of


limited learning with the advice of the seminal The Fifth Discipline
[Senge94] in which MIT’s Peter Senge summarizes the focus and
culture of great long-lived companies: learning organizations. Lean
Process and Product Development [Ward06] also stresses this theme;
it summarizes the insight of Toyota’s new product development suc-
cess: It’s about creating lots of knowledge, and about continual learn-
ing. And Toyota Talent [LM07] asks the question: “How does Toyota
continue to be successful through good times and bad?” and answers
“The answer is simple: great people,” and

It is the knowledge and capability of people that distinguishes


any organization from another. For the most part, organizations
have access to the same technology, machinery, raw material,
and even the same pool of potential employees as Toyota. The
automaker’s success lies partially in these areas, but the full
benefit is from the people at Toyota who cultivate their success.

Isao Kato, one of the students of Taiichi Ohno (father of the Toyota Production System), said:

In Toyota we had a saying, "Mono zukuri wa hito zukuri", which means "Making things is about making people." [Kato06]

Yet what is the journey of the software developer in many large product groups? After graduating from university, a young developer joins a large company and is assigned to a new or existing component. She writes the original code or evolves it, becoming the specialist. There she stays for years—apparently so that the organization can "go faster" by exploiting her one specialty—becoming a single point of success or failure, a bottleneck, and learning only a few new things. The university did not teach her good design, so where did she learn good from bad? How can she see lots of different code? How can she see opportunities for reusable code? How can she help elsewhere when there's a need?

Note that the problem is not specialization; it is single-specializa-


tion, bottlenecks, and team structures that inhibit learning in
new areas. To create a learning organization, we want a struc-
ture where developers can eventually become skilled in two
areas—or more. Component teams inhibit that.

Component team (and single-function team) organizations gradually


incur a learning debt—learning that should have occurred but
didn’t because of narrowly focused specialists, short-term quick-fix
fire fighting, lack of reflection, and not keeping up with modern
developments. When the product is young, the pain of this debt isn’t
really felt. As it ages and the number of single-specialized teams—
the number of bottlenecks—expands from 5 to 35, this debt feels
heavier and heavier. Those of you involved in old large products
know what we mean.

Encourages Delivery of Easier Work, not More Value

Component specialists, like other single-specialists, create an orga-


nizational constraint or bottleneck. This leads to a fascinating sub-
optimization: Work is often selected based on specialty rather than
customer value.

Component teams are faster at developing customer features that primarily involve their single-speciality component—if such single-component customer features can be found (not always true). For that reason, when people are sitting in a room deciding what to do next, features are often selected according to what available component teams can do best or quickest. This tends to maximize the amount of code generated, but does not maximize the value delivered.6 Therefore, component teams are optimized for quickly developing features (or parts of features) that are easiest to do, rather than of highest value. We once saw a component team assigned to code their small part of a low-priority customer feature that was not due for more than 18 months in the future, simply because it was easier to plan that way.

Figure 7.4 lower-value work chosen. [Figure: With component teams, there is a tendency to select goals familiar for people, not for maximizing customer value. For example, Component A Team does Backlog Item 3 because it mostly involves Component A work.]

Interestingly, this sub-optimization is often invisible because 1)


there isn’t prioritization based on a customer-value calculation or
the prioritization scheme consists of bizarre super-coarse-grained
variants such as “mandatory” versus “absolutely mandatory”; 2)
component teams tend to be busy fixing bugs related to their compo-
nent; and, 3) there is plenty of internal local-improvement work.
Everyone appears very busy—they must be doing valuable work!

6. Not only do more lines of code (LOC) not imply more value, more
code can make things worse. Why? Because there is a relationship
between LOC and defect rates and evolution effort. More code
equals more problems and effort. Great development groups strive
to reduce their LOC while creating new features, not increase it.
Since component teams have a narrow view of the code base, they
don’t see reuse or duplication issues.


The sub-optimization becomes clear when we create a real Product


Backlog, sorted by a priority that includes value (Figure 7.4).

The Resource Pool and Resource Manager Quick Fix

One quick-fix way that traditional resource management tackles the priority problem is by creating projects according to which specialists are required and available [McGrath04]. Project managers select people from a specialist resource pool and release them back when finished. This gives rise to project groups or feature projects, usually with matrix management (see project versus product, p. 238). In such organizations one hears people called 'resources' as though they were machine parts and human or team dynamics had little importance on productivity or motivation (which is not the case).

Thus, with a resource pool, management twists the organization


around single-specialist constraints. It seems to work well on paper
or in a project management tool. But people are not machine parts—
they can learn, be inspired or de-motivated, gain or lose focus, etc. In
practice, resource pool and feature project management has disad-
vantages:

❑ lower productivity due to non-jelled project groups—


there is clear evidence that short-lived groups of people
brought together for a project—a “project group”—are corre-
lated with lower productivity [KS93].

❑ lower motivation and job satisfaction—I often lead a "love/hate" exercise with many people in an enterprise to learn what they, well… hate (see work redesign, p. 234). In large groups focused around resource pools and project groups, "we hate being part of a resource pool thrown into multiple short-term groups" is always at or near the top.

❑ less learning—more single-specialization as people seldom


work/learn outside their area.

❑ lower productivity due to multitasking—with resource pool management it is common to create partial 'resource' allocations where a person is 20% allocated to project-A, 20% to project-B, and so forth.7 This implies increasing multitasking, and—key point—lots of multitasking reduces productivity in product development; it does not improve it [DeMarco01].
❑ lower productivity and throughput due to increased
handoff and delay waste—the people in the temporary
group are often multitasking on many projects. If that’s the
case, it leads to another productivity/throughput impact: Since
they are not working together at the same time on the same
goal, there is delay and handoff between the members.
❑ lower productivity and increased handoff and delay due
to physical dispersion—the project group is rarely co-located
in the same room; members may be in different offices, build-
ings, or even cities (and time zones), and have little or no rela-
tionship with each other; physical and time zone dispersion of a
task group impacts productivity [OO00].
❑ lower productivity and higher costs due to more manag-
ers—if each temporary project group has a project manager
(usually in a matrix management structure), costs are higher
and productivity lower because of the inverse relationship
between management count and software productivity.

Observe the relationship between the lean "Go See" attitude (see Go See, p. 52) and the belief that it is skillful to have resource pools that optimize around single-specialist constraints. People that do not spend regular time physically close to the real value-add workers may believe in resource pools and short-lived project groups because it appears on paper—as with machine parts—to be flexible and efficient. Yet those frequently involved in the real work directly see the subtle (but non-trivial) problems.

Promotes Some Teams to Do “Artificial Work”

A corollary of the disadvantage of encourages delivery of easier work, not more value is illustrated by an example: Assume the market wants ten features that primarily involve components A–T and thus (in the simplest case) component teams A–T. What do component teams U–Z do during the next release? The market is not calling for high-value features involving their components, and there may even be no requests involving their components. In the best case, they are working on lower-value features—because that is all they can do. In the worst case, there is an explicit or more frequently a subtle implicit creation of artificial work for these teams so that component team U can keep busy doing component-U programming, even though there is no market driver for the work.

7. Or worse. We've even seen 10% partial project allocations!

With component teams and large product groups there is often a


resource manager who tries to keep the existing teams busy (“100%
allocated”) by choosing and assigning this low-value or artificial
work, or by asking the underutilized teams for advice. Their focus is
the local optimization of “everyone doing their best”—generating
code according to what people know, rather than generating the
most value. And the work is sometimes ‘redesign’: If we don’t have
anything new, we’ll redo what we did before.8

More Code Duplication and Hence Developers

We once visited a client with many component teams and discussed


the link between this structure and code duplication. The client
asked, rhetorically, “Do you know how many XML parsers we have?”

Consider duplication: Good code is free of it, and great developers strive to create less code as they build new features, through constant refactoring (see Legacy Code in the companion book). It's difficult to see duplication or opportunities for reuse in a code base with single-component specialists, because one never looks broadly. Single-component specialists increase duplication. And so the code base grows ever larger than necessary, which in turn demands more single-component specialists…9

8. Improving existing code is a good thing; our point is different.


9. Code-cloning statistics based on (imperfect) automated analysis of
large systems shows around 15% duplicated code [Baker95], but this
is probably an underrepresentation because such tools don’t
robustly find “implicit duplication” of different-looking code that
does the same thing. Anecdote: I’ve done refactoring (to remove
duplication) on large systems built by component teams, removing
explicit and implicit duplication; reduction averaged around 30%.
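To make "implicit duplication" concrete, here is a small illustrative sketch (invented code and names, not drawn from any product mentioned in this chapter): two components each carry their own version of the same calculation—one a literal copy, one different-looking code with identical behavior—plus the single shared method that a team with a cross-component view would refactor them into.

import java.util.Arrays;
import java.util.List;

// Hypothetical illustration only: explicit vs. implicit duplication.
public class DuplicationExample {

    // In component A: a straightforward sum of prices.
    static double totalPriceA(List<Double> prices) {
        double sum = 0;
        for (double p : prices) {
            sum += p;
        }
        return sum;
    }

    // In component B: implicit duplication—different-looking code, same behavior.
    static double totalPriceB(List<Double> prices) {
        double sum = 0;
        for (int i = prices.size() - 1; i >= 0; i--) {
            sum += prices.get(i);
        }
        return sum;
    }

    // A cross-component (feature team) view makes it natural to refactor
    // both callers onto one shared method and delete the copies.
    static double totalPrice(List<Double> prices) {
        double sum = 0;
        for (double p : prices) {
            sum += p;
        }
        return sum;
    }

    public static void main(String[] args) {
        List<Double> prices = Arrays.asList(10.0, 20.0, 0.5);
        // All three print 30.5; only one implementation needs to remain.
        System.out.println(totalPriceA(prices));
        System.out.println(totalPriceB(prices));
        System.out.println(totalPrice(prices));
    }
}

A clone-detection tool that matches only literal copies would flag the first pair but miss the second; a developer who regularly reads code across components can remove both.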


Figure 7.5 system dynamics of component teams and number of developers. [Figure: a causal-loop diagram—the goal of faster work on a high-priority feature mostly involving component A leads to the quick fix of hiring more component-A developers, reinforced by the constraints "don't reassign an existing component team member to a new component" and "don't break up an existing component team"; the size and number of component teams grow, code duplication grows, and the amount of broad cross-component code insight shrinks.]

Ever-Growing Number of Developers

Component teams create several forces to increase the number of


developers. One reason examined previously is the increased code
bulk due to duplication. A second reason involves the mismatch
between the existing component teams and the high-priority work,
as explained next and summarized in the system dynamics diagram,
Figure 7.5.

Component teams become boxes in the formal organization struc-


ture, each with its own manager. Several component teams form a
subsystem group or department with a second-level manager. This
leads to an interesting dynamic…

Example: Current release—A high-priority goal involves mostly


work in component or subsystem A, and therefore component-A or
subsystem-A groups work on it. They hire more people in the belief
it will make them go faster. Component-C team has lower-priority
goals and does not need or get more people. Next release—A high-
priority goal involves primarily work for component C. Now, they are
viewed as the bottleneck and so hire more people (see Figure 7.6).

We could have moved people from one component team to another, and gradually taught them (through pair programming) to help, instead of hiring more people. But this rarely happens. The other component team already has work chosen for the release, so they won't wish to lose people. And there is a fear it will take too long to learn anything to be helpful. Also, the mindset is that "it would be a waste of our specialist to move her to another component team." Finally, moving people is inhibited by political and management status problems—many managers don't want to have a smaller group (another force of local optimization). Conway formulated this well:

Parkinson’s law [Parkinson57] plays an important role… As


long as the manager’s prestige and power are tied to the size of
his budget, he will be motivated to expand his organization.
[Conway68]

Thus, the component-A team will grow, as will the component itself.
It may even eventually split into two new components, and hence
two new teams. The people will specialize on the new components.
In this way large product organizations tend to grow even larger.

Figure 7.6 ever-growing size with component teams. [Figure: in the current release, the component teams involved in the top-priority backlog items need more people; in the next release, a different set of component teams needs more people.]

Figure 7.7 challenges in planning—coordination. [Figure: with component teams, the overhead of a project manager is required to coordinate and see to completion a feature that spans component teams and functional teams.]

With component teams, there is increased multitasking, as one component team may work on several features in parallel, in addition to handling defects related to "their" component. Multitasking is one of the wastes in lean thinking, and correlated with reduction in productivity.

Problems in Planning and Coordination

Scrum (and other agile methods) strive for an integrated product at


the end of every iteration with demonstrable customer functionality.
For most features this involves multiple component teams and
therefore complicates planning and coordination between teams.

Example: In the next iteration the goal is to do Product Backlog items 1, 2, 3, and 4. Backlog item 1 (customer feature 1) requires changes in components A and B. Item 2 requires changes in components A, B, and C, and so forth. All teams depend on one another in the iteration planning and need to synchronize their work during the iteration (see Figure 7.7)—a task that is often handled by a separate project manager. Even if we successfully plan the complex interdependencies for this iteration, a delay in one team will have a ripple effect through all component teams, often across several iterations.

Delays Delivery of Value

Value can be delivered only when the work of multiple component


teams is integrated and tested. Figure 7.3 illustrates how compo-
nent teams promote sequential life cycle. So what? With a compo-
nent team organization, the work-in-progress (WIP) from a team
usually waits several iterations before it can be combined into a
valuable feature. This WIP, like all inventory, is one of the wastes in
lean thinking; it hides defects, locks up an investment, reduces flexi-
bility, and slows down the delivery of value. And in addition to the
straightforward sequential life cycle reasons already discussed, com-
ponent teams delay delivery as follows…

Example:

1. Item 1 in the Product Backlog involves component A. Compo-


nent team A will work on their part of item 1 next iteration.
2. Item 4 involves components A and C. Since component team A
is busy with item 1, they do not work on item 4.
3. Item 4 is the highest goal involving component C. Component
team C therefore works on their part of item 4 next iteration.

❑ First problem: Not every team is working on highest value.


❑ Second problem: After the iteration, item 4 (which needs code
in components A and C) can’t yet be integrated, tested, and
delivered, because of the missing component A code. Item 4
delivery has to wait for component team A.

Organizations try to solve this problem by the quick fix of creating a role, called project manager or feature manager, for coordinating the work across teams and/or by creating temporary project groups whose far-flung members multitask across multiple concurrent feature goals. Such tactics will never fundamentally resolve the problem or support rapid development, since the problem is structural—baked into the organization, built into the system.

More Poor Code/Design

Perhaps the greatest irony of component teams is this: A mistaken belief behind their creation is that they yield components with good code/design (see Design in the companion book). Yet, over the years we have looked closely at the code across many large products, in many companies, and it is easy to see that the opposite is true—the code in a component maintained by a single-component team is often quite poor.10 For example, we will sit down to start pair programming with a component team member (who knows we'll be looking for great code), and with a slightly apologetic grin the programmer will say, "Yeah, we know it's messy, but we understand it." What is going on?

❑ limited learning—as discussed above, developers are not


exposed to vast amounts of different code; this limits their
learning of good design.

❑ familiarity breeds obfuscation—when I stare at the same


complicated, obfuscated 10,000 lines of code month after month
it starts to be familiar and ‘clear’; I can no longer see how com-
plicated it is, nor does it especially bother me, because of long
exposure—so I am not motivated to deeply improve it.

❑ obfuscation and duplication-heavy large code bases


breed job security—some do think like this, especially in
groups where line management are not master programmers,
not looking at the code, not encouraging great, refactored code.

❑ no outside pressure to clarify, refactor, or provide many


unit tests for the code—no one other than the team of five
component developers (who are long familiar with the compli-
cated code) works on it; thus there is no pressure to continually
refactor it, reduce coupling, and surround it with many unit
tests so that it is clear and robustly testable for other people to
work on.

10. New developers joining an existing component team (i.e., compo-


nent) also report this observation.


The perpetuation of the belief that component teams create great code is an indicator of a lack of "Go See" behavior by first-level management. If they were master developers (lean principle "my manager can do my job better than me") and regularly looking in depth across the code base, they would see that on average, more—not fewer—fresh eyes on the code make it better.

Summary of Disadvantages

• promotes sequential life cycle development and mindset
• limits learning by people working only on the same components for a long time—the waste of underutilized people
• encourages doing easier work rather than most valuable work
• promotes some component teams to do "artificial work"
• causes long delays due to major waiting and handoff wastes
• encourages code duplication
• unnecessarily promotes an ever-growing number of developers
• complicates planning and synchronization
• increases bottlenecks—single points of success are also single points of failure
• fosters more poor code/design

Platform Groups—Large-Scale Component Groups

In large product organizations, there often exist one or more lower-


level platform groups distinct from higher-level product groups. For
example, in one client’s radio networks division a platform group of
hundreds of people provides a common platform to several market-
visible products (each involving hundreds of people). Note that the
platform group and a higher-level product group that uses it are
essentially two very large component groups. There is no absolute
constraint that a separate platform group must exist; for example,
the software technologies and deployment environment are the
same in both layers. A higher-level developer could in theory modify
code in the lower-level ‘platform’ code—the boundary is arbitrary.

So, the long-term organizational change toward feature teams, large-scale Scrum, and less handoff waste implies that an artificially constructed platform group may merge into the customer-product groups, with feature teams that work across all code. This is a multi-year learning journey.

TRY…FEATURE TEAMS

Most drawbacks of component teams can be resolved with feature


teams (defined starting on p. 149). They enable us to put the
requirements analysis, interaction design, planning, high-level
design, programming, and system test responsibilities within the
team11, since they now have a whole end-to-end customer-feature
focus. Planning, coordinating, and doing the work are greatly simpli-
fied. Handoff and delay wastes are dramatically reduced, leading to
faster cycle time. Learning increases, and the organization can focus
on truly high-priority market-valued features. And because multiple
feature teams will work on shared components, sometimes at the
same time, it is essential that the code is clean, constantly refac-
tored, continually integrated, and surrounded by unit tests—as oth-
erwise it won’t be possible to work with.

Note a key insight: Feature teams shift the coordination chal-


lenge between teams away from upfront requirements, design,
and inter-team project management and toward coordination at
the code level. To see this, compare Figure 7.7 and Figure 7.8.
And with modern agile practices and tools, coordinating at the
code level is relatively easy. Naturally, developers and managers
unfamiliar with these practices don’t know this, and so continue
with upfront responses to the coordination challenge.

11. Ideally, customer documentation is also put within the team.


Figure 7.8 feature teams shift the coordination problem to shared code. [Figure: feature teams Red, Blue, and Green each take whole backlog items and work across shared components A, B, and C.]

With feature teams, coordination issues shift toward the shared code rather than coordination through upfront planning, delayed work, and handoff. In the 1960s-70s this code coordination was awkward due to weak tools and practices. Modern open-source tools and practices such as TDD and continuous integration make this relatively simple.

As the shift to shared code coordination illustrates, a feature team


organization introduces new issues. In traditional development
these seemed difficult to solve. Fortunately, there are now solutions.

The following sections analyze these challenges and illustrate how modern agile development practices ameliorate them, thus enabling feature teams. Challenges or issues of feature teams include:

• broader skills and product knowledge
• concurrent access to code
• shared responsibility for design
• different mechanism to ensure product stability
• reuse and infrastructure work
• difficult-to-learn skills
• development and coordination of common functional (for example, test) skills that span members of many feature teams
• organizational structure
• defect handling


Feature Teams versus Feature Projects

Feature teams are not feature projects. A feature project is organized around one fea-
ture. At the start, the needed specialists (usually developers from component teams or a
resource pool) are identified and organized into a short-lived group—a virtual project
group. The specialists are usually allocated a percentage of their time to work for the fea-
ture project. Feature teams and feature projects have important differences:

Long-life teams—A feature team, unlike a project group, may stay together for several
years. The team has an opportunity to jell and learn to work together. A well-working
jelled team leads to higher performance [KS93].

Shared ownership—In a feature team, the whole team is responsible for the whole fea-
ture. This leads to shared code ownership and cross-learning, which in the long run
increases degrees of freedom and reduces bottlenecks. In feature projects, developers only
update their particular single-specialty section of code.

Stable, simple organizational structure—Feature teams offer a simple structure; they


are the stable organizational units. Traditional project teams are ever-shifting and result
in matrix organizations, which degrades productivity.

Self-managing; improved cost and efficiency—Feature teams (and Scrum) do not


require overhead project managers, because coordination is trivial.

Broader Skills and Product Knowledge

This is the opposite of the limits learning problem of component


teams. The feature team needs to make changes in any part of the
system when they are working on a customer feature.

First, not all people need to know the whole system and all skills. The Product Owner and teams usually select a qualified feature team for a feature, unless they have the time and desire for a 'virgin' team to invest in some deep learning in less familiar territory. In the common case, the team members together need to already know enough—or be able to learn enough without herculean effort—to complete a customer-centric feature. Notice that feature teams do have specialized knowledge—that's good. And, since learning is possible, they are slowly extending their specializations over time as they take on features that require moderate new learning, strengthening the system of development over time (see Figure 7.9). This is enhanced by more pair-work and group-work in a team with various skills. We move beyond false dichotomies such as "specialization good, learning new areas bad" and "generalists good, specialists bad."

Figure 7.9 specialization is good, learning is good. [Figure: a Product Backlog with Item 1 needing skills ABC and Item 2 needing skills ADE; Team Red has ABC skills, Team Blue has CDE skills, Team Green has ABEF skills. Item 2 will be given to Team Blue or Green; skill A or D will need learning.]

Learning new areas of the code base is not a profound problem for "moderately large" products, but beyond some tipping point12 it starts to be a challenge.

12. It depends on size, quality of code, and unit tests, …

One solution is requirement areas (see requirement areas, p. 217). In traditional large product development, component teams are usually grouped within a major subsystem department. Similarly, when scaling the feature team organization, we can group feature teams within a requirement area—a broad area of related customer requirements such as "network performance monitoring and tuning" or "PDF features." To clarify: A requirement area is not a subsystem or architectural module; it is a domain of related requirements from the customer perspective.

What’s the advantage? Most often, a requirement-area feature team


will not need to know the entire code base, since features in one area
usually focus across a semi-predictable subset of the code base. Not
always, but enough to reduce the scope of learning. Requirement-
area feature teams provide the advantage of feature teams without
the overwhelming learning challenge of a massive code base.13

But stepping back from the ‘problem’ of requiring broader knowl-


edge: Is it a problem to avoid, or an opportunity to go faster?

A traditional assumption underlying this issue is the notion that


assigning the existing best specialist for a task leads to better per-
formance. Yet this is an example of local optimization thinking—no
doubt locally and short-term it seems faster for code generation, but
does it increase long-term systems improvement and throughput of
highest market-valued features? In addition to the obvious bottle-
necking it promotes (thus slowing throughput of a complete feature),
does it make the organization as a whole speed up over time? As pre-
viously explored in the section on the disadvantages of component
teams:

Product groups that repeatedly rely on single-skill specialists


are limiting learning, reducing degrees of freedom, increasing
bottlenecks, and creating single points of success—and failure.
That does not improve long-term system throughput of highest
market-valued features or the ability to change quickly.

There is an assumption underlying concerns about broader product knowledge: The assumption is that it will take a really long time for a developer to learn a new area of the code base. And yet, in large product groups, it is not uncommon for an existing developer to move to a different component team and within four or five months be comfortable—even shorter if the code is clean. It isn't trivial, but neither is it a herculean feat. Programmers regularly learn to work on new code and in new domains all the time; indeed, it's an emphasis of their university education.

13. A requirement-area feature team may eventually move to a new area; we haven't seen that yet.

Still, to dig deeper: Why is it hard to learn new areas of the code
base? Usually, the symptom is incredibly messy code and bad
design—a lack of good abstraction, encapsulation, constant refactor-
ing, automated unit tests, and so forth. That should be a warning
sign to increase refactoring, unit tests, and learning, not to avoid
new people touching fragile code—to strengthen rather than to live
with a weakness.

Learning new code and changing an existing code base is indeed a learnable skill (see potential skills, p. 206). It takes practice to become good, and people in feature teams get that practice and learn this skill.

Returning to the apparent quick-fix, short-term performance advan-


tage of choosing the best existing specialist for a task, this “common
sense” has also been questioned in a study [Belshee05a].

Development ran with one-week iterations. Each iteration the team


experimented with new practices. One experiment involved task
selection. A traditional approach may be called most qualified imple-
menter—the specialist who knows most about a task works on it.
The team experimented with a task selection method called least
qualified implementer—everyone selects the task they know least
about. Also, task selection was combined with frequent pair switch-
ing, called promiscuous pairing, each 90 minutes. First, the ini-
tial velocity did not drop significantly. Second, after two iterations
(two weeks) the velocity increased above their previous level. The
benefit of increased learning eventually paid off.

Belshee explains the above result with a concept called beginner’s


mind. “Beginner’s Mind happens when the thinker is unsure of his
boundaries. The thinker opens himself up and thoroughly tests his
environment… The whole mind just opens up to learning.”
[Belshee05b]

An experience report from Microsoft related to these practices:

The principles laid out in Belshee's paper are not for the faint of heart. They require dedication, commitment and courage. Dedication is required of each team member to strive for self improvement. Commitment is needed for each team member to ensure the values and principles will be followed and the team will hold itself accountable. Courage, because the emotions that Promiscuous Pairing invites will be not unlike the most fun and scariest roller-coaster ever experienced. [Lacey06]

These studies illustrate the potential for acceleration when an organization invests in broadening learning and skill, rather than limiting it through dependence on bottlenecks of single-specialists.

Concurrent Access to Code

As illustrated in Figure 7.8, one important difference between com-


ponent teams and feature teams is that the dependency and coordi-
nation between teams shifts from requirements and design to code.
Several people may concurrently edit the same source code file, typi-
cally a set of C functions, or a class in C++ or Java.

With weak or complex version-control tools and practices, common


in the 1980s and still promoted by companies such as IBM, this was
a concern. Fortunately, it isn’t an issue with modern free tools and
agile practices.

Old-generation and complex (and costly) version control systems such as ClearCase defaulted to strict locking, in which the person making a change locked the source file so that no one else could change it (see Continuous Integration in the companion book). Much worse, vendors promoted a culture of avoiding concurrent access, delaying integration, complex integration processes involving manual steps, integration managers, and tool administrators. This increased costs, complexity, bottlenecks, and waiting, and reinforced single-team component ownership.

On the other hand, the practices and tools in agile and in open source development are faster and simpler. Free open source tools such as Subversion14 default to optimistic locking (no locking), and more deeply, have always encouraged—through teaching and features—a culture of simplicity, shared code ownership, and concurrent access [Mason05]. With optimistic locking anyone can change a source file concurrently. When a developer integrates her code, Subversion automatically highlights and merges non-conflicting changes, and detects if conflicts exist. If so, the tool easily allows developers to see, merge, and resolve them.

14. Subversion is likely the most popular version control tool worldwide, and a de facto standard among agile organizations. Tip: It is no longer necessary to pay for tools for robust large-scale development; for example, we've seen Subversion used successfully on a 500-person multisite product that spanned Asia and Europe.

An optimistic-locking, fast, simple tool and process are required


when working in an agile development environment and are a key in
eliminating problems related to concurrent access to code.

Optimistic locking could in theory lead to developers spending inor-


dinate time merging difficult changes and resolving large conflicts.
But this is resolved with continuous integration and test-driven
development, key practices in scaling agile and lean development.

When developers practice continuous integration (CI), they integrate their code frequently—at least twice a day (see Continuous Integration in the companion book). Integrations are small and frequent (for example, five lines of code every two hours) rather than hundreds or thousands of lines of code merged after days or weeks. The chance of conflict is lower, as is the effort to resolve. Developers do not normally keep code on separate "developer branches" or "feature branches" (see parallel releases, p. 209); rather, they frequently integrate all code on the 'trunk' of the version-control system, and minimize branching. Furthermore, CI includes an automated build environment in which all code from the hundreds of developers is endlessly, relentlessly compiled, linked, and validated against thousands of automated tests; this happens many times each day.15

In test-driven development every function has many automated
micro-level unit tests, and all programming starts with writing a new
unit test before writing the code to be tested (see Test in the
companion book). Further, every feature has dozens or hundreds of
automated high-level tests. This leads to thousands of automated tests
that the developer can rerun locally after each merge step—in addition
to their continual execution in the automated build system.
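
To make the test-first rhythm concrete, here is a minimal illustrative
sketch in Java with JUnit 4. The class and the discount rule are
invented for this example (they are not from the product groups
described in this chapter): the micro-level unit test is written first
and fails, and then just enough code is written to make it pass.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Step 1: write the micro-level unit test first; it fails until the code exists.
public class PriceCalculatorTest {
    @Test
    public void bulkDiscountAppliesFromTenUnits() {
        PriceCalculator calc = new PriceCalculator();
        // 10 units at 5.00 each, minus a 10% bulk discount, is 45.00
        assertEquals(45.00, calc.total(10, 5.00), 0.001);
    }
}

// Step 2: write just enough code to make the new test pass.
class PriceCalculator {
    double total(int units, double unitPrice) {
        double gross = units * unitPrice;
        return units >= 10 ? gross * 0.9 : gross;
    }
}

Each such test runs in milliseconds, which is what makes it practical
to rerun the whole local suite after every small merge step.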

In lean thinking terminology, CI replaces big batches and long cycle
times of integration (the practice of traditional configuration
management) with small batches and short cycles of integration—a
repeating lean theme.

15. Note that this implies driving down the build time of a large
system.

Shared Responsibility for Design

In a traditional component team structure, each component has an


owner who is responsible for its design and ongoing “conceptual
integrity.” On the other hand, feature teams result in shared owner-
ship. This could—without the practices used in agile methods—lead
to a degradation of integrity. All that said, it must be stressed that
in reality, code/design degradation happens in many groups anyway,
regardless of structure; recall the reasons component teams ironi-
cally often live with obfuscated code (p. 172).

Continuous integration (CI) implies growing a system in small


steps—each meant to improve the system a little. In addition to inte-
gration of all code on the trunk multiple times daily and non-stop
automated builds running thousands of automated tests, CI with on-
going design improvement is supported by other practices:

❑ evolutionary design culture—since (as Conway points out) the
initial design vision is rarely great, and in any event since
software is ever-changing, encourage a culture in which people
view the design or architecture as a living thing that needs
never-ending incremental refinement (see Design in the companion
book)

– a sequential life cycle with a single upfront architectural or
design phase gives the false message that the design is something
we define and build once, rather than continually refine every day
for the life of the system

❑ test-driven development—drive code development with


automated micro-unit tests and higher-level tests; each test
drives a small increment of functionality

– this leads to hundreds of thousands of automated tests

❑ refactoring; a key step—after each micro-change of a new unit
test and related solution code, perform a small refactoring step to
improve the code/design quality (remove duplication, increase
encapsulation, …); a minimal code sketch follows this list

– refactoring implies always leaving the code a little better than
we found it
– note that design quality means code quality; there is no real
‘design’ in software other than the source code [Reeves92]
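
As a minimal sketch of the refactoring step above (the example code is
invented for illustration and is not from the book’s product groups),
suppose two micro-changes have left near-duplicate discount logic
behind. A small “extract method” refactoring removes the duplication,
and the existing unit tests confirm that behavior is unchanged.

// Before: two micro-changes have left near-duplicate logic behind.
class OrderTotals {
    double totalForRetail(int units, double price) {
        double gross = units * price;
        return units >= 10 ? gross * 0.9 : gross;
    }

    double totalForWholesale(int units, double price) {
        double gross = units * price;
        return units >= 10 ? gross * 0.8 : gross;
    }
}

// After: a small "extract method" step removes the duplication
// without changing behavior, so the existing tests still pass.
class OrderTotalsRefactored {
    double totalForRetail(int units, double price) {
        return discountedTotal(units, price, 0.10);
    }

    double totalForWholesale(int units, double price) {
        return discountedTotal(units, price, 0.20);
    }

    private double discountedTotal(int units, double price, double bulkDiscount) {
        double gross = units * price;
        return units >= 10 ? gross * (1.0 - bulkDiscount) : gross;
    }
}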
These CI practices support continuous design improvement with feature
teams, and the 9th agile principle: Continuous attention to technical
excellence and good design enhances agility (see “be agile,” p. 139).
Plus, there are strong connections between these agile practices and
the lean principles Stop and Fix, Continuous Improvement, and the
kaizen practice of endless and relentless small steps of
improvement—in this case, “kaizen in code.”

Successfully moving from solo to shared code ownership supported by
agile practices doesn’t happen overnight. The practice of component
guardians can help. Super-fragile components (for which there is
concern16) have a component guardian whose role is to teach others
about the component, ensure that the changes in it are skillful, and
help remove the fragility. She is not the owner of the component;
changes are made by feature team members. A person (or team) new to
the component asks the component guardian to teach him and help make
changes, probably in design workshops and through pair programming.
The guardian can also code-review all changes using a ‘diff’ tool
that automatically sends her e-mail of changes. This role is somewhat
similar to the committer role in open source development.17 It is
another example of the lean practices of regular mentoring from
seniors and of increasing learning.
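
As one possible way to automate that diff e-mail (a hedged sketch
only, not a prescription from this chapter: the hook wiring, the
repository path handling, and the sendToGuardian helper are
assumptions), a Subversion post-commit hook could invoke a small
program that captures the revision’s diff and forwards it to the
guardian.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

// Invoked by a Subversion post-commit hook, for example:
//   java DiffMailer <repository-path> <revision>
public class DiffMailer {
    public static void main(String[] args) throws IOException, InterruptedException {
        String repository = args[0];
        String revision = args[1];

        // Capture the diff of the committed revision using svnlook.
        Process p = new ProcessBuilder("svnlook", "diff", "-r", revision, repository)
                .redirectErrorStream(true)
                .start();
        BufferedReader reader =
                new BufferedReader(new InputStreamReader(p.getInputStream(), "UTF-8"));
        StringBuilder diff = new StringBuilder();
        String line;
        while ((line = reader.readLine()) != null) {
            diff.append(line).append('\n');
        }
        p.waitFor();

        // Forward the diff to the component guardian; sendToGuardian stands in
        // for whatever mail mechanism the organization already uses.
        sendToGuardian("Commit r" + revision + " touched the fragile component",
                diff.toString());
    }

    private static void sendToGuardian(String subject, String body) {
        System.out.println(subject + "\n" + body); // placeholder for real e-mail delivery
    }
}

The same idea is often achieved with a plain shell hook and a mail
command; the point is only that the guardian sees every change without
becoming a gatekeeper who controls commits.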

Another possible practice is establishing an architecture code
police [OK99]; to quote, “The architecture police is responsible for
keeping a close check on the architecture.” Note that since the only
real design is in the code, architecture code police are responsible
for continually looking at the code (not at documents), identifying
weaknesses, and coaching others while programming—they are
master-programmer teachers. Architecture code police are a variant of
component guardians; they are responsible for overall code quality.
But no single person is responsible for a specific component. Warning:
This practice could devolve into a separate “PowerPoint architects”
group that is not focused on the code, and not teaching through pair
work.

16. A typical reason for concern about delicate components is that the
code is not clean, well refactored, and surrounded by many unit tests.
The solution is to clean it up (“Stop and Fix”), after which a
component guardian may not be necessary.
17. But the roles are not identical. Guardians (or ‘stewards’) do more
teaching and pair programming, and allow commits at any time.
Committers also teach, but less so, and control the commit of code.

A related practice is used at Planon, a Dutch company building
workplace management solutions. The co-creator of Scrum, Jeff
Sutherland, wrote: “We have another Scrum company that has hit
Gartner Group’s magic [leaders]” [Sutherland07]. They have multiple
feature teams, each consisting of an architect, developers, testers,
and documentation people. There is also one lead architect, but he is
not responsible for defining the architecture and handing it over to
the team. Instead, he is “the initiator of a professional circle, that
includes all architects, to keep the cross-team communication going.”
Planon’s term professional circle is a community of practice (p. 252),
in which people with similar interests form a community to share
experiences, guide, and learn from each other [Wenger98, WMS02]. At
Planon, they have a community of practice for different specialists
such as architects, testers, and ScrumMasters [Smeets07].

Another practice to foster successful shared design is the design
workshop (see Design in the companion book). Each iteration, perhaps
multiple times, the feature team gets together for between “two hours
and two days” around giant whiteboard spaces. They do collaborative
agile modeling, sketching on the walls in a creative design
conversation. If there are component guardians or other technical
leaders (that are not part of the feature team) who can help guide and
review the agile modeling, they ideally also participate. See
Figure 7.10.

For broad architectural issues joint design workshops (held


repeatedly) can help. Interested representatives from different fea-
ture teams (not restricted to ‘official’ architects) spend time together
at the whiteboards for large-scale and common infrastructure
design.18 Participants return to their feature team, teaching joint
insights in their local workshops and while pair programming.

18. Solutions for multisite joint design workshops are explored in the
Design chapter.


Handoff and partially done work (such as design specifications) are
wastes in lean thinking. To reduce this and to encourage a culture of
teaching, it is desirable that design leaders not be members of a
separate group that creates specifications, but rather be full-time
members on a feature team who also participate in joint design
workshops as a part-time architectural community of practice.

Figure 7.10 design workshop with agile modeling

New Mechanisms for Code Stability

Code stability in a component team organization is attempted with


component owners. They implement their portion of a customer fea-
ture in their component, hopefully keeping it stable. Note that sta-
bility is an ideal rather than an assured consequence of this
approach. It is common to find large product groups where the build
frequently breaks—often as a consequence of the many coordination
problems inherent to and between component teams.19

With feature teams, new—and just plain better—stability techniques
are used. Massive test automation with continuous integration (CI) is
a key practice (see Test and Continuous Integration in the companion
book). When developers implement new functionality, they write
automated tests that are added to the CI system and run constantly.
When a test breaks:

1. The CI system automatically (for example, via e-mail or SMS)
informs the set of people who might have broken the build.
2. Upon notification, one or more of these people stop, investigate,
and bring the build back to stability.
– this CI attitude illustrates the lean principle of Stop and Fix

19. We have seen many examples of a three-month or worse
‘stabilization’ phase in traditional large products that used
component teams.
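
A minimal sketch of that notify-on-break step, in Java with JUnit 4
(illustrative only: it reuses the hypothetical PriceCalculatorTest
from the earlier sketch, and notifyRecentCommitters is a placeholder,
not the API of any particular CI server):

import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;

// A tiny stand-in for what a CI server does after every integration:
// run the automated test suite and, if anything breaks, alert the
// people whose recent commits might be the cause.
public class BuildWatcher {
    public static void main(String[] args) {
        Result result = JUnitCore.runClasses(
                PriceCalculatorTest.class /*, ...all other test classes */);

        if (!result.wasSuccessful()) {
            StringBuilder report = new StringBuilder("Build is RED:\n");
            for (Failure failure : result.getFailures()) {
                report.append(failure.getTestHeader()).append(": ")
                      .append(failure.getMessage()).append("\n");
            }
            // Placeholder for the e-mail/SMS notification to the likely
            // authors; in lean terms, Stop and Fix.
            notifyRecentCommitters(report.toString());
            System.exit(1); // fail the build
        }
    }

    private static void notifyRecentCommitters(String report) {
        System.out.println(report); // stand-in for the real notification channel
    }
}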

Infrastructure and Reuse Work

In a component team organization, goals such as a reusable frame-


work or improving test automation are usually met by formation of a
temporary project group or with an existing component team.

In a feature team organization with Scrum, these major goals are


added to the Product Backlog—an exception to the guideline to focus
on adding customer-feature items, since these goals span all fea-
tures.

This backlog infrastructure work is prioritized by the Product


Owner in collaboration with the teams. Then the infrastructure
work is given to an existing feature team, as any other backlog item.
This team works on infrastructure for a few iterations (delivering
incremental results each iteration) and thus may be called an infra-
structure team, a temporary role until they return to normal fea-
ture team responsibility.

Difficult-to-Learn Skills

A feature team may not have mastery of all skills needed to finish a
feature. This is a solvable problem if there is the potential skill
(p. 206) [KS01]. On the other hand, some skills are really tough to
learn, such as graphic art or specialized color mathematics.
Solutions:

❑ fixed specialist for the iteration—This creates a constraint


in the iteration planning; all work related to that skill needs to
be done by the feature team with the specialist (who may be a
permanent or temporary visiting member).
– A good Stop and Fix approach to working with the specialist
is that he is a teacher and reviewer, not a doer
❑ roaming specialist—During the iteration planning several
teams request help from a specialist; she schedules which
teams she will work with (and coach) and roams between them.


❑ visit the specialist at her primary team—the specialist


physically stays with one feature team that needs her most (for
the iteration) and invites other people to visit her for mini-
design workshops, review, and consultation.

Solo specialists are bottlenecks; avoid these solutions unless team


learning is not an option. Encourage specialists to coach, not do.

Coordinating Functional Skills: Communities of Practice

An old issue in cross-functional teams is the development and
coordination of functional skills and issues across the teams, such as
testing skills or architectural issues. The classic solution,
previously introduced, is to support communities of practice (COP)
(p. 252) [Wenger98, WMS02]. For example, there can be a COP leader for
the test discipline who coordinates education and resolution of common
issues across the testers who are full-time members of different
feature teams and part-time members of a common testing COP.

Organizational Structure

In a component- and functional-team (for example, test team) orga-


nization, members typically report to component and functional
managers (for example, the “testing manager”). What is the man-
agement structure in an agile-oriented enterprise of cross-func-
tional, cross-component feature teams?

In an agile enterprise, several feature teams can report to a common
feature team’s line manager (see organizational structure, p. 241).
The developers and testers on the team report to the same person. Note
that this person is not a project manager, because in Scrum and other
agile methods, teams are self-managing with respect to project work
(11th agile principle).

Handling Defects

In a traditional component team structure, the team is usually given
responsibility for handling defects related to their component. Note
that this inhibits long-term systems improvement and throughput by
increasing interrupt-driven multitasking (reducing productivity) for
the team, and by avoiding learning and reinforcing the weakness and
bottleneck of depending upon single points of success or failure.

On a large product with (for example) 50 feature teams, an alternative
that our clients have found useful is to have a rotating maintenance
(defect) group. Each iteration, a certain number of feature teams move
into the role of maintenance group. At the end of two or three
iterations, they revert to feature teams doing new features, and other
feature teams move into maintenance. Lingering defects that aren’t
resolved by the timebox boundary are carried back to the feature team
role and wrapped up before new feature work is done.

As an additional learning mechanism, consider adding the practice


of handling defects with pair programming, pairing someone who
knows more and someone who knows less, to increase skills transfer.

TRANSITION

In his report on feature teams in Ericsson [KAL00], Karlsson


observed, “Implementing daily build and feature teams in an organi-
zation with a strong [traditional] development process, distributed
development and a tradition of module [single component] responsi-
bility is not an easy task.” It takes hard work and management com-
mitment.

There are several tactics for transitioning to feature teams:

❑ reorganize into broad cross-component feature teams

❑ gradually expand team responsibility

Reorganize into Broad Cross-Component Feature Teams

One change tactic is to reorganize so that, collectively, the new


teams have knowledge of most of the system. How? By grouping dif-
ferent specialists from most component areas (Figure 7.11).


A variation is that a new team is formed more narrowly with
specialists from the subset of most components typically used in one
(customer) requirements area, such as “PDF printing” (see requirement
areas, p. 217). This approach exploits the fact that there is a
semi-predictable subset of components related to one requirements
area. It is simpler to achieve and reduces the learning burden on team
members.

When one product at Xerox made the transition to feature teams, it


started out by forming larger (eleven- or twelve-member) teams
than the recommended Scrum average of seven. The advantage was
that a sufficiently broad cross-section of specialists was brought
together into feature teams capable of handling most features. The
disadvantage was that twelve members is an unwieldy size for cre-
ating a single jelled team with common purpose.

Figure 7.11 moving to feature teams: single-component teams (for
example, Component A Team of A specialists and Component B Team of B
specialists) are regrouped into cross-component feature teams (Feature
Team Red, Feature Team Blue, and so forth), each a mix of specialists
from different components.

Gradually Expand Teams’ Responsibility

For some, reorganizing to full-feature teams is considered too diffi-


cult, although in fact the impediments are often mindset and politi-
cal will. As an alternative, take smaller steps to gradually expand
teams’ responsibility from component to “multi-component” teams to
true feature teams.

Simplified example: Suppose an organization has four component


teams A, B, C, and D. Create two AB teams and two CD teams from
the original four groups, slowly broadening the responsibilities of
the teams, and increasing cross-component learning. A customer
feature will still need to be split across more flexible “multi-compo-
nent” teams, but things are a little better. Eight months later, the
two AB and two CD teams can be reformed into four ABCD teams…
and so on.

One Nokia product took this path, and formed AB teams based on the
guideline of combining closely interacting components; that is, they
chose A and B components (and thus teams) that directly interacted
with each other. Consequently, the original team A and team B
developers already had some familiarity with each other’s components,
at least in terms of interfaces and responsibilities.

CONCLUSION

Why a detailed justification toward feature teams and away from


single-function teams and component teams? The latter approach is
endemic in large-product development. The transition from compo-
nent to feature teams is a profound shift in structure and mindset,
yet of vital importance to scaling agile methods, increasing learning,
being agile, and improving competitiveness and time to market.

RECOMMENDED READINGS

❑ Dynamics of Software Development by Jim McCarthy. Originally
published in 1995 but republished in 2008. Jim’s book is a true
classic on software development. Already in 1995 it emphasized
feature teams. The rest of the book is stuffed with insightful tips
related to software development.
❑ “XP and Large Distributed Software Projects” by Karlsson and
Andersson. This early large-scale agile development article is
published in Extreme Programming Perspectives. It is an insightful
and much under-appreciated article describing the strong
relationship between feature teams and continuous integration.
❑ “How Do Committees Invent?” by Mel Conway. This 40-year-old
article is as insightful today as it was when it was written. It is
available via the author’s website at www.melconway.com.
❑ Agile Software Development in the Large by Jutta Eckstein. This
is the first book published on the topic of scaling agile
development. It describes the experience of a medium-sized (around
100 people) project and stresses the importance of feature teams in
large-scale development.
❑ “Promiscuous Pairing and Beginner’s Mind” by Arlo Belshee. This
article is not directly related to feature teams or large-scale
development but it does contain some fascinating experiments that
question some of the assumptions behind specialization.

Try Safari Books Online FREE
Get online access to 7,500+ Books and Videos

Free Trial—Get Started Today!


informit.com/safaritrial

Find trusted answers, fast


Only Safari lets you search across thousands of best-selling books from the top
technology publishers, including Addison-Wesley Professional, Cisco Press,
O’Reilly, Prentice Hall, Que, and Sams.

Master the latest tools and techniques


In addition to gaining access to an incredible inventory of technical books,
Safari’s extensive collection of video tutorials lets you learn from the leading
video training experts.

Wait, there’s more!


Keep your competitive edge
With Rough Cuts, get access to the developing manuscript and be among the first
to learn the newest technologies.

Stay current with emerging technologies


Short Cuts and Quick Reference Sheets are short, concise, focused content
created to get you up-to-speed quickly on new and cutting-edge technologies.
informIT.com The Trusted Technology Learning Source


InformIT is a brand of Pearson and the online presence
for the world’s leading technology publishers. It’s your source
for reliable and qualified content and knowledge, providing
access to the top brands, authors, and contributors from
the tech community.

LearnIT at InformIT
Looking for a book, eBook, or training video on a new technology? Seeking
timely and relevant information and tutorials? Looking for expert opinions,
advice, and tips? InformIT has the solution.

• Learn about new releases and special promotions by subscribing
to a wide variety of newsletters. Visit informit.com/newsletters.

• Access FREE podcasts from experts at informit.com/podcasts.

• Read the latest author articles and sample chapters at
informit.com/articles.

• Access thousands of books and videos in the Safari Books Online
digital library at safari.informit.com.

• Get tips from expert blogs at informit.com/blogs.

Visit informit.com/learn to discover all the ways you can access the
hottest technology content.

Are You Part of the IT Crowd?

Connect with Pearson authors and editors via RSS feeds, Facebook,
Twitter, YouTube, and more! Visit informit.com/socialconnect.

informIT.com The Trusted Technology Learning Source


More Titles of Interest

AGILE MANAGEMENT FOR SOFTWARE ENGINEERING
Applying the Theory of Constraints for Business Results
David J. Anderson
ISBN: 9780131424609

In Agile Management for Software Engineering, David J. Anderson shows
managers how to apply management science to gain the full business
benefits of agility through application of the focused approach taught
by Eli Goldratt in his Theory of Constraints.

Whether you’re using XP, Scrum, FDD, or another agile approach, you’ll
learn how to develop management discipline for all phases of the
engineering process, implement realistic financial and production
metrics, and focus on building software that delivers maximum customer
value and outstanding business results.

THE MYTHICAL MAN-MONTH, ANNIVERSARY EDITION
Frederick P. Brooks, Jr.
ISBN: 9780201835953

Few books on software project management have been as influential and
timeless as The Mythical Man-Month. With a blend of software
engineering facts and thought-provoking opinions, Fred Brooks offers
insight for anyone managing complex projects. These essays draw from
his experience as project manager for the IBM System/360 computer
family and then for OS/360, its massive software system. Revised in
1995, this classic text is still relevant for any software engineer or
developer. Find out why this title has been a best-seller for over
30 years.
More Titles of Interest

WRITING EFFECTIVE USE CASES
Alistair Cockburn
ISBN: 9780201702255

In Writing Effective Use Cases, object technology expert Alistair
Cockburn presents an up-to-date, practical guide to use case writing.
The author borrows from his extensive experience in this realm, and
expands on the classic treatments of use cases to provide software
developers with a “nuts-and-bolts” tutorial for writing use cases. The
book thoroughly covers introductory, intermediate, and advanced
concepts, and is, therefore, appropriate for all knowledge levels.
Illustrative writing examples of both good and bad use cases reinforce
the author’s instructions. In addition, the book contains helpful
learning exercises — with answers — to illuminate the most important
points.

AGILE SOFTWARE DEVELOPMENT, Second Edition
The Cooperative Game
Alistair Cockburn
ISBN: 9780321482754

The agile model of software development has taken the world by storm.
Now, in Agile Software Development, Second Edition, one of agile’s
leading pioneers updates his Jolt Productivity award-winning book to
reflect all that’s been learned about agile development since its
original introduction. Alistair Cockburn takes on crucial
misconceptions that cause agile projects to fail. For example, you’ll
learn why encoding project management strategies into fixed processes
can lead to ineffective strategy decisions and costly mistakes. You’ll
also find a thoughtful discussion of the controversial relationship
between agile methods and user experience design. If you’re new to
agile development, this book will help you succeed the first time out.
If you’ve used agile methods before, Cockburn’s techniques will make
you even more effective.
More Titles of Interest

AGILE ESTIMATING AND PLANNING
Mike Cohn
ISBN: 9780131479418

Agile Estimating and Planning is the definitive, practical guide to
estimating and planning agile projects. In this book, Agile Alliance
cofounder Mike Cohn discusses the philosophy of agile estimating and
planning and shows you exactly how to get the job done, with
real-world examples and case studies. Concepts are clearly illustrated
and readers are guided, step by step, toward how to answer the
following questions: What will we build? How big will it be? When must
it be done? How much can I really complete by then? You will first
learn what makes a good plan, and then what makes it agile. Using the
techniques in Agile Estimating and Planning, you can stay agile from
start to finish, saving time, conserving resources, and accomplishing
more.

USER STORIES APPLIED
For Agile Software Development
Mike Cohn
ISBN: 9780321205681

Thoroughly reviewed and eagerly anticipated by the agile community,
User Stories Applied offers a requirements process that saves time,
eliminates rework, and leads directly to better software. The best way
to build software that meets users’ needs is to begin with “user
stories”: simple, clear, brief descriptions of functionality that will
be valuable to real users. In User Stories Applied, Mike Cohn provides
you with a front-to-back blueprint for writing these user stories and
weaving them into your development lifecycle.
More Titles of Interest

WORKING EFFECTIVELY WITH LEGACY CODE
Michael C. Feathers
ISBN: 9780131177055

Is your code easy to change? Can you get nearly instantaneous feedback
when you do change it? Do you understand it? If the answer to any of
these questions is no, you have legacy code, and it is draining time
and money away from your development efforts.

In this book, Michael Feathers offers start-to-finish strategies for
working more effectively with large, untested legacy code bases. This
book draws on material Michael created for his renowned Object Mentor
seminars: techniques Michael has used in mentoring to help hundreds of
developers, technical managers, and testers bring their legacy systems
under control.

REFACTORING
Improving the Design of Existing Code
Martin Fowler | Kent Beck | John Brant | William Opdyke | Don Roberts
ISBN: 9780201485677

With proper training a skilled system designer can take a bad design
and rework it into well-designed, robust code. In this book, Martin
Fowler shows you where opportunities for refactoring typically can be
found, and how to go about reworking a bad design into a good one.
Each refactoring step is simple — seemingly too simple — to be worth
doing. Refactoring may involve moving a field from one class to
another, or pulling some code out of a method to turn it into its own
method, or even pushing some code up or down a hierarchy. While these
individual steps may seem elementary, the cumulative effect of such
small changes can radically improve the design. Refactoring is a
proven way to prevent software decay.
More Titles of Interest

PATTERNS OF ENTERPRISE APPLICATION ARCHITECTURE
Martin Fowler
ISBN: 9780321127426

Patterns of Enterprise Application Architecture is written in direct
response to the stiff challenges that face enterprise application
developers. The author, noted object-oriented designer Martin Fowler,
noticed that despite changes in technology — from Smalltalk to CORBA
to Java to .NET — the same basic design ideas can be adapted and
applied to solve common problems. With the help of an expert group of
contributors, Martin distills over forty recurring solutions into
patterns. The result is an indispensable handbook of solutions that
are applicable to any enterprise application platform. Armed with this
book, you will have the knowledge necessary to make important
architectural decisions about building an enterprise application and
the proven patterns for use when building them.

DESIGN PATTERNS
Elements of Reusable Object-Oriented Software
Erich Gamma | Richard Helm | Ralph Johnson | John M. Vlissides
ISBN: 9780201633610

Capturing a wealth of experience about the design of object-oriented
software, four top-notch designers present a catalog of simple and
succinct solutions to commonly occurring design problems. Previously
undocumented, these 23 patterns allow designers to create more
flexible, elegant, and ultimately reusable designs without having to
rediscover the design solutions themselves. This is the original and
classic book on Design Patterns and belongs on every serious
developer’s shelf.
More Titles of Interest

THE PRAGMATIC PROGRAMMER
From Journeyman to Master
Andrew Hunt | David Thomas
ISBN: 9780201616224

Straight from the programming trenches, The Pragmatic Programmer cuts
through the increasing specialization and technicalities of modern
software development to examine the core process — taking a
requirement and producing working, maintainable code that delights its
users. It covers topics ranging from personal responsibility and
career development to architectural techniques for keeping your code
flexible and easy to adapt and reuse.

AGILE & ITERATIVE DEVELOPMENT
A Manager’s Guide
Craig Larman
ISBN: 9780131111554

This is the definitive guide for managers and students to agile and
iterative development methods: what they are, how they work, how to
implement them — and why you should.

Using statistically significant research and large-scale case studies,
noted methods expert Craig Larman presents the most convincing case
ever made for iterative development. Larman offers a concise,
information-packed summary of the key ideas that drive all agile and
iterative processes, with the details of four noteworthy iterative
methods: Scrum, XP, RUP, and Evo.
More Titles of Interest

SCALING SOFTWARE AGILITY
Best Practices for Large Enterprises
Dean Leffingwell
ISBN: 9780321458193

Agile development practices, while still controversial in some
circles, offer undeniable benefits: faster time to market, better
responsiveness to changing customer requirements, and higher quality.
However, agile practices have been defined and recommended primarily
to small teams. In Scaling Software Agility, Dean Leffingwell
describes how agile methods can be applied to enterprise-class
development.

AGILE PRINCIPLES, PATTERNS, AND PRACTICES IN C#
Robert C. Martin | Micah Martin
ISBN: 9780131857254

This book presents a series of case studies illustrating the
fundamentals of agile development and agile design, and moves quickly
from UML models to real C# code. The introductory chapters lay out the
basics of the agile movement, while the later chapters show proven
techniques in action. The book includes many source code examples that
are also available for download from the authors’ web site. Whether
you are a C# programmer or a Visual Basic or Java programmer learning
C#, a software development manager, or a business analyst, Agile
Principles, Patterns, and Practices in C# is the first book you should
read to understand agile software and how it applies to programming in
the .NET Framework.
More Titles of Interest

IMPLEMENTING LEAN SOFTWARE DEVELOPMENT
Mary Poppendieck | Tom Poppendieck
ISBN: 9780321437389

In 2003, Mary and Tom Poppendieck’s Lean Software Development
introduced breakthrough development techniques that leverage Lean
principles to deliver unprecedented agility and value. Now this sequel
and companion guide shows exactly how to implement Lean software
development, hands-on. Implementing Lean Software Development draws on
the Poppendiecks’ unparalleled experience helping development
organizations optimize the entire software value stream. You’ll
discover the right questions to ask, the key issues to focus on, and
techniques proven to work. The authors present case studies from
leading-edge software organizations, and offer practical exercises for
jumpstarting your own Lean initiatives.

COLLABORATION EXPLAINED
Facilitation Skills for Software Project Leaders
Jean Tabaka
ISBN: 9780321268778

To succeed, an agile project demands outstanding collaboration among
all its stakeholders. But great collaboration doesn’t happen by
itself; it must be carefully planned and facilitated throughout the
entire project lifecycle. Collaboration Explained is the first book to
bring together proven, start-to-finish techniques for ensuring
effective collaboration in any agile software project. Since the early
days of the agile movement, Jean Tabaka has been studying and
promoting collaboration in agile environments. Drawing on her
unsurpassed experience, she offers clear guidelines and easy-to-use
collaboration templates for every significant project event: from
iteration and release planning, through project chartering, all the
way through post-project retrospectives.
Get Behind the Scenes
Stay Ahead of the Curve
Rough Cuts is a Safari Books Online interactive publishing service that provides you
with first access to pre-published manuscripts on cutting-edge technology topics
— enabling you to stay on the cutting-edge and remain competitive. When you
participate in the Rough Cuts program, you also own an important role in helping
to develop manuscripts into best-selling books.
Here’s how it works:
1. Select a Rough Cuts title.
2. Get access through a Safari Library account. Or if you would like,
you can order a single Rough Cuts title as well.
3. Sign up to receive Alerts by visiting the Rough Cuts catalog page.
4. Read updated versions online or in PDF at your convenience.
5. Interact with the Rough Cuts community. Post your feedback
or respond to comments from other users, editors, and authors.
6. Access the final version – online access and PDF access of the
printed version is available for up to 45 days.

9780321660527 9780321648020 9780321686114 9780321522467

View Titles Available and Purchase Rough Cuts at


informit.com/roughcuts

informIT.com The Trusted Technology Learning Source


LiveLessons: self-paced, personal video instruction from the world’s
leading technology experts
• INSTRUCTORS YOU TRUST
• Cutting edge topics
• CUSTOMIZED, SELF-PACED LEARNING
• LEARN BY DOING

• LiveLessons allows you to keep your skills up to date with the
latest technology training from trusted author experts.
• LiveLessons is a cost effective alternative to expensive off-site
training programs.
• LiveLessons provides the ability to learn at your own pace and
avoid hours in a classroom.

Package includes:
• 1 DVD featuring 3 - 8 hours of instructor-led classroom sessions
divided into 15-20 minute step-by-step hands-on labs
• Sample code and printed study guide

The power of the world’s leading experts at your fingertips!

To learn more about LiveLessons visit mylivelessons.com
BUY 2, SAVE 35% + FREE SHIPPING

USE COUPON CODE AGILE2009

SAVE 35% when you purchase two or more of these


featured titles from informIT.com/agile. Enter the coupon code
AGILE2009 upon checkout to receive your discount.
