
Project Control and Process Instrumentation

THE SEVEN CORE METRICS

The seven core metrics are based on common sense and field experience with both successful and unsuccessful metrics programs. Their attributes include the following:
They are simple, objective, easy to collect, easy to interpret, and hard to misinterpret.
Collection can be automated and non-intrusive.
They provide for consistent assessment throughout the life cycle and are derived from the evolving product baselines rather than from a subjective assessment.
They are useful to both management and engineering personnel for communicating progress and quality in a consistent format.
Their fidelity improves across the life cycle.

Seven core metrics are used in all software projects. Three are management indicators and four are quality indicators.

MANAGEMENT INDICATORS
Work and progress (work performed over time)
Budgeted cost and expenditures (cost incurred over time)
Staffing and team dynamics (personnel changes over time)

QUALITY INDICATORS

Change traffic and stability (change traffic over time)
Breakage and modularity (average breakage per change over time)
Rework and adaptability (average rework per change over time)
Mean time between failures (MTBF) and maturity (defect rate over time)

The following table describes the core software metrics. Each metric has two dimensions: a static value used as an objective, and the dynamic trend used to manage the achievement of that objective. While metrics values provide one dimension of insight, metrics trends provide a more important perspective for managing the process. Metrics trends with respect to time provide insight into how the process and product are evolving.

Metric: Work and progress
Purpose: Iteration planning, plan vs. actuals, management indicator
Perspectives: SLOC, function points, object points, scenarios, test cases, SCOs

Metric: Budgeted cost and expenditures
Purpose: Financial insight, plan vs. actuals, management indicator
Perspectives: Cost per month, full-time staff per month, percentage of budget expended

Metric: Staffing and team dynamics
Purpose: Resource plan vs. actuals, hiring rate, attrition rate
Perspectives: People per month added, people per month leaving

Metric: Change traffic and stability
Purpose: Iteration planning, management indicator of schedule convergence
Perspectives: SCOs opened vs. SCOs closed, by type (0,1,2,3,4), by release/component/subsystem

Metric: Breakage and modularity
Purpose: Convergence, software scrap, quality indicator
Perspectives: Reworked SLOC per change, by type (0,1,2,3,4), by release/component/subsystem

Metric: Rework and adaptability
Purpose: Convergence, software rework, quality indicator
Perspectives: Average hours per change, by type (0,1,2,3,4), by release/component/subsystem

Metric: MTBF and maturity
Purpose: Test coverage/adequacy, robustness for use, quality indicator
Perspectives: Failure counts, test hours until failure, by release/component/subsystem

7.2 MANAGEMENT INDICATORS

7.2.1 WORK AND PROGRESS

The various activities of an iterative development project can be measured by defining a planned estimate of the work in an objective measure and then tracking progress (work completed over time) against that plan. The default perspectives of this metric would be as follows:
Software architecture team: use cases demonstrated
Software development team: SLOC under baseline change management, SCOs closed
Software assessment team: SCOs opened, test hours executed, evaluation criteria met
Software management team: milestones completed
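The plan-versus-actuals tracking described above can be sketched as a simple computation over cumulative counts. This is a hypothetical illustration, not from the text; the perspective chosen and all numbers are invented:

```python
# Hypothetical sketch: work/progress tracked as actual vs. planned counts.
# Here the development-team perspective (SCOs closed) is used; the plan
# and actual figures are illustrative only.

def progress_pct(completed, planned):
    """Percentage of planned work completed (one point on the trend)."""
    return 100.0 * completed / planned

monthly_plan = [10, 25, 45, 70, 100]    # cumulative SCOs planned closed
monthly_actual = [8, 22, 40, 66, 96]    # cumulative SCOs actually closed

# The trend of actual against planned over time is what management
# watches, not any single static value.
trend = [progress_pct(a, p) for a, p in zip(monthly_actual, monthly_plan)]
```

With these invented figures the project starts at 80% of plan and converges toward 96%, the kind of trend a manager would read off this metric.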

7.2.2 BUDGETED COST AND EXPENDITURES

To maintain management control, measuring cost expenditures over the project life cycle is always necessary. One common approach to financial performance measurement is the use of an earned value system, which provides highly detailed cost and schedule insight. The basic parameters of an earned value system, usually expressed in units of dollars, are as follows:

Expenditure plan: the planned spending profile for a project over its planned schedule. For most software projects (and other labor-intensive projects), this profile generally tracks the staffing profile.
Actual progress: the technical accomplishment relative to the planned progress underlying the spending profile. In a healthy project, the actual progress tracks planned progress closely.
Actual cost: the actual spending profile for a project over its actual schedule. In a healthy project, this profile tracks the planned profile closely.

Earned value: the value that represents the planned cost of the actual progress.
Cost variance: the difference between the actual cost and the earned value. Positive values correspond to over-budget situations; negative values correspond to under-budget situations.
Schedule variance: the difference between the planned cost and the earned value. Positive values correspond to behind-schedule situations; negative values correspond to ahead-of-schedule situations.
Figure 13-2 provides a graphical perspective of these parameters and shows a simple example of a project situation.
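The variance definitions above reduce to two subtractions. The following minimal sketch uses the sign conventions stated in the text; the dollar figures are hypothetical:

```python
# Earned value parameters as defined in the text; figures are hypothetical.
planned_cost = 500.0   # planned spending to date (expenditure plan)
earned_value = 450.0   # planned cost of the actual progress achieved
actual_cost = 480.0    # actual spending to date

cost_variance = actual_cost - earned_value        # positive => over budget
schedule_variance = planned_cost - earned_value   # positive => behind schedule
```

In this invented situation the project is both over budget (cost variance +30) and behind schedule (schedule variance +50), matching the sign conventions defined above.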

7.2.3 STAFFING AND TEAM DYNAMICS

Staffing can vary, depending on the overlap of iterations and other project-specific circumstances. For discrete, one-of-a-kind development efforts (such as building a corporate information system), the staffing profile in the following figure would be typical. For these sorts of developments, it is reasonable to expect the maintenance team to be smaller than the development team. For a commercial product development, the maintenance and development teams may be the same size.


7.3 QUALITY INDICATORS 7.3.1 CHANGE TRAFFIC AND STABILITY

Overall change traffic is one specific indicator of progress and quality. Change traffic is defined as the number of software change orders opened and closed over the life cycle. This metric can be collected by change type, by release, across all releases, by team, by component, by subsystem, and so forth. Stability is defined as the relationship between opened and closed SCOs.
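One way to make the opened-versus-closed relationship concrete is to track the open backlog over time, which should converge toward zero as the project stabilizes. This is a hypothetical sketch (the interpretation of stability as a shrinking backlog is an assumption, and the counts are invented):

```python
# Hypothetical sketch: stability as opened vs. closed SCOs over time.
opened = [5, 12, 20, 26, 30]   # cumulative SCOs opened per period
closed = [1, 6, 14, 22, 29]    # cumulative SCOs closed per period

# The gap between opened and closed SCOs (the open backlog) should
# converge toward zero in a stabilizing project.
backlog = [o - c for o, c in zip(opened, closed)]
```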


7.3.2 BREAKAGE AND MODULARITY

Breakage is defined as the average extent of change, which is the amount of the software baseline that needs rework (in SLOC, function points, components, subsystems, files, etc.). Modularity is the average breakage trend over time. For a healthy project, the trend expectation is decreasing or stable.
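As a hypothetical illustration of the modularity trend, average breakage per change can be computed per release; the release names and figures below are invented:

```python
# Hypothetical sketch: modularity as the trend of average breakage per change.
# Each entry: (release, total SLOC reworked, number of changes).
releases = [("1.0", 4000, 100), ("2.0", 3000, 120), ("3.0", 1800, 150)]

# Average SLOC broken per change, one value per release.
avg_breakage = [sloc / changes for _, sloc, changes in releases]
```

The decreasing trend in this invented data (40, 25, 12 SLOC per change) is the pattern a healthy, modular architecture is expected to show.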


7.3.3 REWORK AND ADAPTABILITY

Rework is defined as the average cost of change, which is the effort to analyze, resolve, and retest all changes to software baselines. Adaptability is defined as the rework trend over time. For a healthy project, the trend expectation is decreasing or stable.
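The adaptability trend can be sketched the same way, as average rework hours per change across releases; the release labels and hour figures are hypothetical:

```python
# Hypothetical sketch: adaptability as average rework hours per change.
change_hours = {
    "release 1": [24, 40, 16, 32],  # hours spent on each change
    "release 2": [12, 20, 8, 16],
}

# Average cost of change per release; a decreasing trend suggests
# changes are getting cheaper, i.e., the design is adaptable.
avg_rework = {rel: sum(h) / len(h) for rel, h in change_hours.items()}
```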


7.3.4 MTBF AND MATURITY

MTBF (mean time between failures) is the average usage time between software faults. In rough terms, MTBF is computed by dividing the test hours by the number of type 0 and type 1 SCOs. Maturity is defined as the MTBF trend over time (Figure 13-8).
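The rough computation described above can be written directly; the function name and the data values are hypothetical:

```python
# MTBF computed as described in the text: test hours divided by the
# count of type 0 and type 1 (critical/serious) SCOs. Data is hypothetical.
def mtbf(test_hours, scos_by_type):
    critical = scos_by_type.get(0, 0) + scos_by_type.get(1, 0)
    return test_hours / critical if critical else float("inf")

# e.g., 600 test hours with 3 type-0 and 9 type-1 defects observed;
# type 2-4 changes do not count against MTBF.
m = mtbf(600, {0: 3, 1: 9, 2: 40})
```

With these invented figures, MTBF comes out to 50 test hours per critical failure; maturity is the trend of this value across releases.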

