
International Journal of Project Management 20 (2002) 213–219


www.elsevier.com/locate/ijproman

Learning to learn, from past to future

Kenneth G. Cooper, James M. Lyneis*, Benjamin J. Bryant

Business Dynamics Practice, PA Consulting Group, 123 Buckingham Palace Road, London SW1W 9SR, UK

* Corresponding author. Tel.: +44-20-7730-9000; fax: +44-20-7333-5050. E-mail address: jim.lyneis@paconsulting.com (J.M. Lyneis).

Abstract

As we look from the past to the future of the field of project management, one of the great challenges is the largely untapped opportunity for transforming our projects’ performance. We have yet to discern how to systematically extract and disseminate management lessons as we move from project to project, and as we manage and execute portfolios of projects. As managers, executives, and researchers in project management, we have yet to learn how to learn. In this paper, the authors discuss some of the reasons behind the failure to systematically learn, and present an approach and modeling framework that facilitates cross-project learning. The approach is then illustrated with a case study of two multi-$100 million development projects. © 2002 Elsevier Science Ltd and IPMA. All rights reserved.

Keywords: Learning; Strategic management; Rework cycle; Project dynamics

1. Introduction

As we look from the past to the future of the field of project management, one of our great challenges is the largely untapped opportunity for transforming our projects’ performance. We have yet to discern how to extract and disseminate management lessons as we move from project to project, and as we manage and execute portfolios of projects. As managers, executives, and researchers in project management, we have yet to learn how to learn.

‘‘One who does not learn from the past is doomed to repeat it.’’

Whether the motivation is the increasingly competitive arena of ‘‘Web-speed’’ product development, or the mandate of prospective customers to demonstrate qualifications based on ‘‘past performance,’’ or the natural drive of the best performers to improve, we are challenged to learn from our project successes and failures. We do so rather well at the technical and process levels: we build upon the latest chip technology to design the smaller, faster one; we learn how to move that steel, or that purchase order, more rapidly. But how does one sort among the extraordinary variety of factors that affect project performance in order to learn what about the management helped a ‘‘good’’ project? We must learn how to learn what it is about prior good management that made it good, that had a positive impact on the performance, and then we must learn how to codify, disseminate, and improve upon those management lessons. Learning how to learn future management lessons from past performance will enable us to improve systematically and continuously the management of projects.

A number of conditions have contributed to and perpetuated the failure to systematically learn on projects. First is the prevalent, misguided belief that every project is different, that there is little commonality between projects, or that the differences are so great that separating the differences from the similarities would be difficult if not impossible. Second, the difficulty in determining the true causes of project performance hinders our learning. Even if we took the time to ask successful managers what they have learned, do we really believe that they can identify what has worked and what has not, what works under some project conditions but not others, and how much difference one practice versus another makes? As Wheelwright and Clark [1, p. 284–5] note:

the performance that matters is often a result of complex interactions within the overall development system. Moreover, the connection between cause and effect may be separated significantly in time and place. In some instances, for example, the outcomes of interest are only evident at the conclusion of the project. Thus, while symptoms and potential causes may be observed along the development path, systematic investigation requires observation of the outcomes, followed by any analysis that looks back to find the underlying causes.



Third, projects are transient phenomena, and few companies have organizations, money, systems or practices that span them, especially for the very purpose of gleaning and improving upon transferable lessons of project management. Natural incentives push us to get on with the next project, and especially not to dwell on the failures of the past. And fourth, while there are individuals who learn—successful project managers who have three or four great projects before they move to different responsibilities or retire—their limited span and career path make systematic assessment and learning of transferable lessons that get incorporated in subsequent projects extremely difficult. In order to provide learning-based improvement in project management, all of these conditions need to be addressed. Organizations need:

1. an understanding that the investment in learning can pay off, and that there need to be two outputs from every project: the product itself, and the post-project assessment of what was learned;

2. the right kind of data from past projects to support that learning; and

3. model(s) of the process that allow:

comparison of ‘‘unique’’ projects, and the sifting of the unique from the common; a search for patterns and commonalities between the projects; and an understanding of the causes of project performance differences, including the ability to do analyses and what-ifs.

In the remainder of this paper, the authors describe one company’s experience in achieving management science-based learning and real project management improvement. In the next section, the means of displaying and understanding the commonality among projects—the learning framework—is described.¹ Then, an example of using this framework for culling lessons from past projects is demonstrated—transforming a project disaster into a sterling success on real multi-$100M development projects (Section 3). Finally, the simulation-based analysis and training system that provides ongoing project management improvement is explained.

2. Understanding project commonality: the rework cycle and feedback effects

We draw upon more than 20 years of experience in analyzing development projects with the aid of computer-based simulation models.

¹ The framework discussed in this paper is designed to understand project dynamics at a strategic/tactical level [5]. Additional frameworks and models will be required for learning other aspects of project management (see, for example, [1]).

Such models have been used to accurately re-create, diagnose, forecast, and improve performance on dozens of projects and programs in aerospace, defense electronics, financial systems, construction, shipbuilding, telecommunications, and software development [2,3,6].

At the core of these models are three important structures underlying the dynamics of a project (in contrast to the static perspective of the more standard ‘‘critical path’’): (1) the ‘‘rework cycle’’; (2) feedback effects on productivity and work quality; and (3) knock-on effects from upstream phases to downstream phases. These structures, described in detail elsewhere [4,5], are briefly summarized below.

What is most lacking in conventional project planning and monitoring techniques is the acknowledgement or measurement of rework. Typically, conventional tools view tasks as either ‘‘to be done,’’ ‘‘in-process,’’ or ‘‘done.’’ In contrast, the rework cycle model shown in Fig. 1 represents a near-universal description of work flow on a project which incorporates rework and undiscovered rework: people working at a varying productivity accomplish work; this work becomes either work really done or undiscovered rework, depending on a varying ‘‘quality’’ (quality is the fraction of work done completely and correctly); undiscovered rework is work that contains as-yet-undetected errors, and is therefore perceived as being done; errors are detected, often months later, by downstream efforts or testing, whereupon the work becomes known rework; known rework demands the application of people in competition with original work; errors may be made while executing rework, and hence work can cycle through undiscovered rework several times as the project progresses.
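For illustration, a minimal sketch of the rework cycle, written here in Python, might look as follows. It is not the authors’ proprietary model; the parameter values and the simple rule for allocating effort between original work and rework are illustrative assumptions only.

def simulate_rework_cycle(scope=1000.0,        # tasks to be done
                          staff=20.0,          # people on the project
                          productivity=1.0,    # tasks per person per month
                          quality=0.8,         # fraction of work done correctly
                          discovery_time=6.0,  # average months to detect an error
                          months=60, dt=0.25):
    """Integrate the stocks of the rework cycle (Fig. 1) over time."""
    work_to_do = scope      # original work not yet attempted
    work_done = 0.0         # work really done (correct)
    undiscovered = 0.0      # rework containing undetected errors, perceived as done
    known_rework = 0.0      # detected errors waiting to be reworked
    history = []
    for step in range(int(months / dt)):
        # People split their effort between original work and known rework.
        capacity = staff * productivity * dt
        doing = min(capacity, work_to_do + known_rework)
        # A fraction `quality` of all work is really done; the rest becomes
        # undiscovered rework (this applies to rework too, so work can cycle
        # through the loop several times).
        really_done = quality * doing
        new_errors = (1.0 - quality) * doing
        # Errors are detected with an average delay of `discovery_time` months.
        discovered = undiscovered / discovery_time * dt
        from_original = min(doing, work_to_do)   # simple priority: original work first
        work_to_do -= from_original
        known_rework += discovered - (doing - from_original)
        undiscovered += new_errors - discovered
        work_done += really_done
        history.append((step * dt, work_done, undiscovered, known_rework))
    return history

if __name__ == "__main__":
    for t, done, undisc, known in simulate_rework_cycle()[::20]:
        print(f"month {t:5.1f}: really done {done:7.1f}  "
              f"undiscovered rework {undisc:6.1f}  known rework {known:6.1f}")

Even this toy version reproduces the signature behaviour described above: work is perceived as done well before it is really done, and rework surfaces months after the errors that caused it were made.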

² In Fig. 2, arrows represent cause–effect relationships, as in hiring causes staff to increase. Not indicated here, but a vital part of the actual computer model itself, these cause–effect relationships can involve delays (e.g. delays in finding and training new people) and non-linearities (e.g. effects that saturate regardless of the amount of pressure).


On a typical project, productivity and quality change over time in response to conditions on the project and management actions. The factors that affect productivity and quality are a part of the feedback loop structure that surrounds the rework cycle. Some of these feedbacks are ‘‘negative’’ or ‘‘controlling’’ feedbacks used by management to control resources on a project. In Fig. 2, for example, overtime is added and/or staff are brought on to a project (‘‘hiring’’) based on the work believed to be remaining (expected hours at completion less hours expended to date) and the scheduled time remaining to finish the work.² Alternatively, on the left of the diagram, scheduled completion can be extended to allow completion of the project with fewer resources.

Other effects drive productivity and quality, as indicated in Fig. 2: work quality to date, availability of prerequisites, out-of-sequence work, schedule pressure, morale, skill and experience, supervision, and overtime.³ Each of these effects, in turn, is a part of a complex network of generally ‘‘positive’’ or reinforcing feedback loops that early in the project drive productivity and quality down, and later cause them to increase. For example, suppose that as a result of a design change (or because of an inconsistent plan), the project falls behind schedule. In response, the project may bring on more resources. However, while additional resources have positive effects on work accomplished, they also initiate negative effects on productivity and quality. Bringing on additional staff reduces the average experience level. Less experienced people make more errors and work more slowly than more experienced people. Bringing on additional staff also creates shortages of supervisors, which in turn reduces productivity and quality.


Fig. 1. The Rework Cycle.

Finally, while overtime may augment the effective staff on the project, sustained overtime can lead to fatigue, which reduces productivity and quality.

Because of these ‘‘secondary’’ effects on productivity and quality, the project will make less progress than expected and contain more errors—the availability and quality of upstream work has deteriorated. As a result, the productivity and quality of downstream work suffer. The project falls further behind schedule, so more resources are added, thus continuing the downward spiral. In addition to adding resources, a natural reaction to insufficient progress is to exert ‘‘schedule pressure’’ on the staff. This often results in more physical output, but also more errors (‘‘haste makes waste’’) and more out-of-sequence work. Schedule pressure can also lead to lower morale, which further reduces productivity and quality, and increases staff turnover.

A rework cycle and its associated productivity and quality effects form a ‘‘building block.’’ Building blocks can be used to represent an entire project, or replicated to represent different phases of a project, in which case multiple rework cycles in parallel and in series might be included. At its most aggregate level, such building blocks might represent design, build, and test. Alternatively, building blocks might separately represent different stages (e.g. conceptual vs. detail) and/or design functions (structural, electrical, power, etc.). In software, building blocks might represent specifications, detailed design, code and unit test, integration, and test.
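The controlling and reinforcing feedbacks of Fig. 2 can be layered onto the same skeleton. The sketch below adds a staffing policy driven by perceived work remaining and scheduled time remaining, plus illustrative secondary effects of hiring (experience dilution), sustained overtime (fatigue), and schedule pressure on productivity and quality. The functional forms and constants are assumptions for illustration, not values from the authors’ calibrated models.

def simulate_with_feedback(scope=1000.0, deadline=36.0, dt=0.25,
                           base_productivity=1.0, base_quality=0.85,
                           discovery_time=6.0, hiring_delay=3.0):
    """Rework cycle plus staffing, overtime, experience and pressure feedbacks."""
    work_to_do, work_done, undiscovered, known_rework = scope, 0.0, 0.0, 0.0
    staff, experienced, fatigue, t = 10.0, 10.0, 0.0, 0.0
    while t < 120.0 and (work_to_do + known_rework + undiscovered) > 1.0:
        perceived_remaining = work_to_do + known_rework   # undiscovered rework looks done
        time_remaining = max(deadline - t, 1.0)
        # Controlling loop: staff the project to finish the perceived work on time.
        desired_staff = perceived_remaining / (base_productivity * time_remaining)
        staff = max(staff + (desired_staff - staff) / hiring_delay * dt, 1.0)
        experienced = min(experienced + (staff - experienced) / 12.0 * dt, staff)
        # Schedule pressure and overtime appear when the project is behind.
        pressure = min(perceived_remaining
                       / (staff * base_productivity * time_remaining), 2.0)
        overtime = min(max(pressure - 1.0, 0.0), 0.5)      # up to 50% extra hours
        fatigue += (overtime - fatigue) / 2.0 * dt         # fatigue follows sustained overtime
        # Reinforcing effects: inexperience, fatigue and pressure erode both
        # productivity and quality (illustrative functional forms).
        experience_frac = experienced / staff
        productivity = base_productivity * (0.6 + 0.4 * experience_frac) * (1.0 - 0.3 * fatigue)
        quality = (base_quality * (0.7 + 0.3 * experience_frac)
                   * (1.0 - 0.2 * fatigue)
                   * (1.0 - 0.1 * max(pressure - 1.0, 0.0)))
        # Rework cycle core, as in the previous sketch.
        doing = min(staff * (1.0 + overtime) * productivity * dt,
                    work_to_do + known_rework)
        from_original = min(doing, work_to_do)
        discovered = undiscovered / discovery_time * dt
        work_to_do -= from_original
        known_rework += discovered - (doing - from_original)
        undiscovered += (1.0 - quality) * doing - discovered
        work_done += quality * doing
        t += dt
    return t, work_done

if __name__ == "__main__":
    finish, done = simulate_with_feedback()
    print(f"completed {done:.0f} tasks in {finish:.1f} months against a 36-month schedule")

A single building block of this kind can be replicated and linked to represent multiple phases, with upstream quality feeding downstream work, as discussed next.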


Fig. 2. Feedback effects surrounding the Rework Cycle.

³ These are just a few of the factors affecting productivity and quality. Models used in actual situations contain many additional factors.


When multiple phases are present, the availability and quality of upstream work can ‘‘knock-on’’ to affect the productivity and quality of downstream work. In addition, downstream progress affects upstream work by fostering the discovery of upstream rework.

The full simulation models of these development projects employ thousands of equations to portray the time-varying conditions which cause changes in productivity, quality, staffing levels, rework detection, and work execution. All of the dynamic conditions at work in these projects and their models (e.g. staff experience levels, work sequence, supervisory adequacy, ‘‘spec’’ stability, worker morale, task feasibility, vendor timeliness, overtime, schedule pressure, hiring and attrition, progress monitoring, organization and process changes, prototyping, and testing) cause changes in the performance of the rework cycle. Because our business clients require demonstrable accuracy in the models upon which they will base important decisions, we have needed to develop accurate measures of all these factors, especially those of the Rework Cycle itself.

In applying the Rework Cycle simulation model to a wide variety of projects, we found an extremely high level of commonality in the existence of the Rework Cycle and in the kinds of factors affecting productivity, quality, rework discovery, staffing, and scheduling. However, there is substantial variation in the strength and timing of those factors, resulting in quite different levels of performance on the projects simulated by the models (e.g. [3,6]). It is the comparison of the quantitative values of these factors across multiple programs that has enabled the rigorous culling of management lessons.

3. Culling lessons: cross-project simulation comparison

In using the Rework Cycle model to simulate the performance of dozens of projects and programs in different companies and industries, it is unsurprising to note that, even with a high level of commonality in the logic structure, the biggest differences in factors occur as one moves from one industry to another. Several differences, though fewer, exist as one moves from one company to another within a given industry. Within a given company executing multiple projects, the differences are smaller still. Nevertheless, different projects within a company exhibit apparently quite different levels of performance when judged by their adherence to cost and schedule targets.

Such was the case at Hughes Aircraft Company, long a leader in the defense electronics industry and a pioneer in the use of simulation technology for its programs. Hughes had just completed the dramatically successful ‘‘Peace Shield’’ program, a command and control system development described by one senior US Air Force official, Darleen Druyun: ‘‘In my 26 years in acquisition, this [Peace Shield Weapon System] is the most successful program I’ve ever been involved with, and the leadership of the U.S. Air Force agrees.’’ [Program Manager, March–April 1996, p. 24]. This on-budget, ahead-of-schedule, highly complimented program stood in stark contrast to a past program, in the same organization, to develop a different command and control system. The latter exceeded its original cost and schedule plans by several times, and suffered a large contract dispute with the customer. Note in Fig. 3 the substantial difference in their aggregate performance as indicated by staffing levels on the two programs.


Fig. 3. Past program performance compared to Peace Shield (staffing levels; past program shifted to start in 1991, when Peace Shield started).


Fig. 4. Past program with external differences removed indicates how Peace Shield would have performed absent management policy changes.

Theories abounded as to what had produced such significantly improved performance on Peace Shield. Naturally, they were ‘‘different’’ systems. Different customers. Different program managers. Different technologies. Different contract terms. These and more all were cited as (partially correct) explanations of why such different performance was achieved. Hughes executives were not satisfied that all the lessons to be learned had been learned. Both programs were modeled with the Rework Cycle simulation. First, data was collected on:

1. the starting conditions for the programs (scope, schedules, etc.);

2. changes or problems that occurred to the programs (added scope, design changes, availability of equipment provided by the customer, labor market conditions, etc.);

3. differences in process or management policies between the two programs (e.g. teaming, hiring the most experienced people, etc.); and

4. actual performance of the programs (quarterly time series for staffing, work accomplished, rework, overtime, attrition, etc.).

Second, two Rework Cycle models with identical structures (that is, the same causal factors used in the models) were set up with the different starting conditions of the programs and with estimates of changes to, and differences between, the programs as they were thought to have occurred over time. Finally, these models were simulated and their performance compared to the actual performance of the programs as given by the data. Working with program managers, numerical estimates of project-specific conditions were refined in order to improve the correspondence of simulated to actual program performance.
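This refinement step can be pictured as a parameter-estimation loop: simulate, compare the simulated staffing profile with the recorded one, and adjust the estimates of project-specific conditions until the correspondence improves. The sketch below illustrates the idea with a stripped-down model and a coarse grid search over two uncertain factors (work quality and rework discovery time). The model, the parameter choices, and the ‘‘recorded’’ series (generated here from the same toy model) are illustrative stand-ins, not the authors’ model or program data; in practice the refinement was guided by judgment and review with program managers rather than by an automated search.

def staffing_profile(quality, discovery_time, scope=800.0, deadline=12.0,
                     productivity=15.0, dt=0.25, quarters=20):
    """Quarterly staffing from a stripped-down rework-cycle model."""
    work_to_do, undiscovered, known = scope, 0.0, 0.0
    staff, profile = 5.0, []
    for step in range(int(quarters * 3 / dt)):             # time step in months
        t = step * dt
        remaining = work_to_do + known                      # undiscovered rework looks done
        desired = remaining / (productivity * max(deadline - t, 1.0))
        staff = max(staff + (desired - staff) / 2.0 * dt, 0.5)   # 2-month hiring delay
        doing = min(staff * productivity * dt, remaining)
        from_original = min(doing, work_to_do)
        discovered = undiscovered / discovery_time * dt
        work_to_do -= from_original
        known += discovered - (doing - from_original)
        undiscovered += (1.0 - quality) * doing - discovered
        if step % int(3 / dt) == 0:                         # record once per quarter
            profile.append(staff)
    return profile

def fit_error(simulated, recorded):
    return sum((s - r) ** 2 for s, r in zip(simulated, recorded))

if __name__ == "__main__":
    # Stand-in for the recorded quarterly staffing of a past program
    # (generated from the toy model itself, purely for demonstration).
    recorded = staffing_profile(quality=0.72, discovery_time=7.0)
    best = None
    for q in [0.60 + 0.02 * i for i in range(16)]:          # candidate quality estimates
        for d in [3.0 + 0.5 * i for i in range(11)]:        # candidate discovery times (months)
            err = fit_error(staffing_profile(q, d), recorded)
            if best is None or err < best[0]:
                best = (err, q, d)
    print(f"best-fit estimates: quality = {best[1]:.2f}, "
          f"rework discovery time = {best[2]:.1f} months")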

In the end, the two programs were accurately simulated by an identical model using their different starting conditions, external changes, and management policies.

After achieving the two accurate simulations, the next step in learning is to use the simulation model to understand what caused the differences between these two programs. How much results from differences in starting conditions? Differences in external factors? Differences in processes or other management initiatives? The next series of analyses stripped away the differences in factors, one set at a time, in order to quantify the magnitude of performance differences caused by different conditions. Working with Hughes managers, the first step was to isolate the differences in starting conditions and ‘‘external’’ differences—those in work scope, suppliers, and labor markets.⁴ In particular, Peace Shield (1) had lower scope and fewer changes than the past program; (2) experienced fewer vendor delays and hardware problems; and (3) had better labor market conditions (lower delay in obtaining needed engineers). The removal of those different conditions yielded the intermediate simulation shown in Fig. 4. Having removed from the troubled program simulation the differences in scope and external conditions, the simulation in Fig. 4 represents how Peace Shield would have performed but for the changes in managerial practices and processes.
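The difference-stripping analysis can be expressed as a sequence of counterfactual runs: start from the troubled program’s conditions and, one set at a time, replace them with the successful program’s values, attributing the remaining cost gap to each set. The sketch below uses a deliberately simplified cost model and hypothetical condition values grouped the way the text describes (starting conditions, external changes, management policies); none of the numbers are Hughes data.

def program_cost(scope, change_fraction, vendor_delay, quality, discovery_time,
                 staff=25.0, productivity=1.0, dt=0.25):
    """Person-months to complete a single-phase rework-cycle project.
    Staff is held constant for simplicity; cost accrues while the program runs."""
    work_to_do = scope * (1.0 + change_fraction)    # base scope plus added scope/changes
    undiscovered = known = 0.0
    cost = t = 0.0
    while work_to_do + known + undiscovered > 1.0 and t < 240.0:
        effective = staff * productivity / (1.0 + vendor_delay)   # time lost waiting on vendors
        doing = min(effective * dt, work_to_do + known)
        from_original = min(doing, work_to_do)
        discovered = undiscovered / discovery_time * dt
        work_to_do -= from_original
        known += discovered - (doing - from_original)
        undiscovered += (1.0 - quality) * doing - discovered
        cost += staff * dt
        t += dt
    return cost

# Hypothetical condition sets for the troubled past program and the successful one.
troubled = dict(scope=1000.0, change_fraction=0.3, vendor_delay=0.3,
                quality=0.70, discovery_time=7.0)
successful = dict(scope=900.0, change_fraction=0.05, vendor_delay=0.05,
                  quality=0.85, discovery_time=4.0)

difference_sets = {
    "starting conditions": ["scope"],
    "external changes": ["change_fraction", "vendor_delay"],
    "management policies": ["quality", "discovery_time"],
}

if __name__ == "__main__":
    print(f"troubled program: {program_cost(**troubled):.0f} person-months; "
          f"successful program: {program_cost(**successful):.0f}")
    conditions = dict(troubled)
    for label, keys in difference_sets.items():
        for key in keys:                 # strip this set of differences (cumulatively)
            conditions[key] = successful[key]
        print(f"after removing differences in {label}: "
              f"{program_cost(**conditions):.0f} person-months")

Because the sets are removed cumulatively, the printed sequence mirrors the stepwise attribution summarized in Fig. 5, though with hypothetical numbers rather than the programs’ actual results.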

⁴ In practice, differences in starting conditions are removed separately from differences in external conditions. Then, when external conditions are removed, we see the impact of changes (i.e. unplanned events or conditions) to the project. This provides us with information about potential sources and impact of potential risks to future projects.


Fig. 5. Where did the cost improvement come from?

While a large amount of the performance difference clearly was attributable to external conditions, there is still a halving of cost and time achieved on Peace Shield remaining to be explained by managerial differences. We then systematically altered the remaining factor differences in the model that represented managerial changes. For example, several process changes made in Peace Shield (such as extensive ‘‘integrated product development’’ practices) significantly reduced rework discovery times, from an average of 7 months on the past program to 4 months on Peace Shield. Also, the policies governing the staffing of the work became far more disciplined on Peace Shield: start of software coding at 30 versus 10%.

These and several other managerial changes were tested in the model. When all were made, the model had been transformed from that of a highly troubled program to that of a very successful one, and the performance improvements attributable to each aspect of the managerial changes were identified and quantified. A summarized version of the results (Fig. 5) shows that enormous savings were, and can be, achieved by the implementation of what are essentially ‘‘free’’ changes—if only they are known and understood.

That was the groundbreaking value of the preceding analysis: to clarify just how much improvement could be achieved by each of several policies and practices implemented on a new program. What remained to be achieved was to systematize the analytical and learning capability in a manner that would support new and ongoing programs, and help them achieve continuing performance gains through a corporate ‘‘learning system’’ that would yield effective management lesson improvement and transfer across programs.

4. Putting lessons to work: the simulation-based learning system

Beyond the value of the immediate lessons from the cross-program comparative simulation analysis, the need was to implement a system that would continue to support rigorous management improvement and lesson transfer.



Development and implementation of the learning system began with adapting the simulation model to each of several Hughes programs, totaling a value of several hundred million dollars, and ranging in status from just starting to half-complete. Dozens of managers were interviewed in order to identify management policies believed to be ‘‘best practices’’; these were systematically tested on each program model to verify the universality of their applicability, and the magnitude of improvement that could be expected from each.

All of the models were integrated into a single computer-based system accessible to all program managers. This system was linked to a database of the ‘‘best practice’’ observations that could be searched by the users when considering what actions to take. Each manager could pose a wide variety of ‘‘what if’’ questions as new conditions and initiatives emerged, drawing upon one’s own experience, the tested ideas from the other programs’ managers, the ‘‘best practice’’ database, and the extensive explanatory diagnostics from the simulation models. As each idea tested is codified and its impacts quantified in new simulations, the amount and causes of performance differences are identified. Changes that produce benefits for any one program are flagged for testing in other programs as well, to assess the value of their transfer. In this system’s first few months of use, large cost and time savings (tens of millions of dollars, many months of program time) were identified on the programs.

In order to facilitate expanding use and impact of the learning system, there is a built-in adaptor that allows an authorized user to set up a new program model by ‘‘building off’’ an existing program model. An extensive menu guides the user through an otherwise-automated adaptation. Upon completion, the system alerts the user to the degree of performance improvement that is required in order to meet targets and budgets. The manager can then draw upon tested ideas from other program managers and the best-practice database in identifying specific potential changes. These can in turn be tested on the new program model, and new high-impact changes logged in the database for future managers’ reference, in the quest for ever-improving performance. Not only does this provide a platform for organizational capture and dissemination of learning, it is a most rigorous means of implementing a clear ‘‘past performance’’ basis for planning new programs and improvements. Finally, because there is a need for other managers to learn the lessons being harvested in the new system, a fully interactive trainer-simulator version of the same program model is included as part of the integrated system.

Furthermore, software enhancements made since the Hughes system was implemented make the learning system even more effective. First, moving the software to a web-based interface makes it far more widely available to project managers, who can now access it over the Internet from anywhere in the world.
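The bookkeeping behind such a system can be pictured as a shared store of candidate practices, each tested against every program model and flagged for transfer when it produces a saving. The sketch below is an illustration of that idea only: the two practice entries echo the changes discussed in Section 3, but the model stub, parameter names, and values are hypothetical stand-ins for the full calibrated program models.

from dataclasses import dataclass, field

@dataclass
class Practice:
    name: str
    parameter: str          # which model input the practice changes
    value: float            # the value it sets
    tested_on: dict = field(default_factory=dict)   # program -> saving found

def run_model(inputs):
    """Stub for a calibrated program model; returns projected cost (person-months).
    A toy formula stands in for the full rework-cycle simulation."""
    return inputs["scope"] / inputs["quality"] * (1.0 + 0.05 * inputs["discovery_time"])

programs = {
    "program A": {"scope": 900.0, "quality": 0.75, "discovery_time": 7.0},
    "program B": {"scope": 1200.0, "quality": 0.80, "discovery_time": 6.0},
}

best_practices = [
    Practice("integrated product development", "discovery_time", 4.0),
    Practice("disciplined staffing of coding", "quality", 0.85),
]

for practice in best_practices:
    for name, inputs in programs.items():
        baseline = run_model(inputs)
        trial = dict(inputs, **{practice.parameter: practice.value})
        saving = baseline - run_model(trial)
        if saving > 0:                       # flag beneficial transfers
            practice.tested_on[name] = round(saving, 1)

for practice in best_practices:
    print(practice.name, "->", practice.tested_on)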

A second enhancement is to the arsenal of software tools available to the project manager. These include: an automatic sensitivity tester to determine high-leverage parameters; a potentiator to analyze thousands of alternative combinations of management policies and determine synergistic combinations; and an optimizer to aid in calibration and policy optimization. Finally, adding a Monte Carlo capability to the software allows the project manager to answer the question ‘‘Just how confident are you that this budget projection is correct?’’, given uncertainties in the inputs.
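The Monte Carlo question can be answered by sampling the uncertain inputs, running the model for each sample, and reporting the fraction of runs that come in at or under budget. The sketch below does this with a toy rework-cycle cost model of the kind used in the earlier examples; the distributions and the budget figure are illustrative assumptions, not data from any program.

import random

def project_cost(scope, quality, discovery_time, staff=20.0, productivity=1.0, dt=0.25):
    """Person-months of effort to complete a single rework cycle."""
    work_to_do, undiscovered, known = scope, 0.0, 0.0
    cost = t = 0.0
    while work_to_do + known + undiscovered > 1.0 and t < 240.0:
        doing = min(staff * productivity * dt, work_to_do + known)
        from_original = min(doing, work_to_do)
        discovered = undiscovered / discovery_time * dt
        work_to_do -= from_original
        known += discovered - (doing - from_original)
        undiscovered += (1.0 - quality) * doing - discovered
        cost += doing / productivity          # effort actually applied
        t += dt
    return cost

if __name__ == "__main__":
    random.seed(1)
    budget = 1400.0                           # person-months budgeted (hypothetical)
    runs, within, samples = 2000, 0, []
    for _ in range(runs):
        cost = project_cost(
            scope=random.triangular(900.0, 1300.0, 1000.0),      # scope-growth risk
            quality=random.triangular(0.70, 0.90, 0.82),         # uncertain work quality
            discovery_time=random.triangular(3.0, 9.0, 5.0))     # uncertain discovery delay
        samples.append(cost)
        within += cost <= budget
    samples.sort()
    print(f"probability of finishing within budget: {within / runs:.0%}")
    print(f"80th-percentile cost: {samples[int(0.8 * runs)]:.0f} person-months")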

5. Conclusions

The learning system for program and project managers implemented at Hughes Aircraft is a first of its kind in addressing the challenges cited at the outset. First, it has effectively provided a framework, the Rework Cycle model, that addresses the problem of viewing each project as a unique phenomenon from which there is little to learn for other projects. Second, it employs models that help explain the causality of those phenomena. Third, it provides systems that enable, and encourage, the use of past performance as a means of learning management lessons. And finally, it refines, stores, and disseminates the learning and management lessons of past projects to offset the limited career span of project managers. While simulation is not the same as ‘‘real life,’’ neither does real life offer us the chance to diagnose rigorously, understand clearly, and communicate effectively the effects of our actions as managers. Simulation-based learning systems for managers will continue to have project and business impact that increasingly distances these program managers from competitors who fail to learn.

References

[1] Wheelwright SC, Clark KB. Revolutionizing product development: quantum leaps in speed, efficiency, and quality. New York: The Free Press, 1992.
[2] Cooper KG. Naval ship production: a claim settled and a framework built. Interfaces 1980;10(6):20–36.
[3] Cooper KG, Mullen TW. Swords & plowshares: the rework cycles of defense and commercial software development projects. American Programmer 1993;6(5). Reprinted in: Guidelines for Successful Acquisition and Management of Software Intensive Systems. Department of the Air Force, September 1994. p. 41–51.
[4] Cooper KG. The rework cycle: why projects are mismanaged. PM Network Magazine, February 1993, 5–7; The rework cycle: how it really works... and reworks. PM Network Magazine, February 1993, 25–28; The rework cycle: benchmarks for the project manager. Project Management Journal 1993;24(1):17–21.
[5] Lyneis JM, Cooper KG, Els SA. Strategic management of complex projects: a case study using system dynamics. System Dynamics Review 2001;17(3):237–60.
[6] Reichelt KS, Lyneis JM. The dynamics of project performance: benchmarking the drivers of cost and schedule overrun. European Management Journal 1999;17(2):135–50.