
SOFTWARE QUALITY MANAGEMENT

Benefits of a Higher Quality Level of the Software Process: Two Organizations Compared

Daniel Galin, Ruppin Academic Center
Motti Avrahami, Verifone

Software quality assurance professionals believe that a higher quality level of software development process yields higher quality performance, and they seek quantitative evidence based on empirical findings. The few available journal and conference papers that present quantitative findings use a methodology based on a comparison of “before-after” observations in the same organization. A limitation of this before-after methodology is the long observation period, during which intervening factors, such as changes in products and in the organization, may substantially affect the results. The authors’ study employed a methodology based on a comparison of observations in two organizations simultaneously (Alpha and Beta). Six quality performance metrics were employed: 1) error density, 2) productivity, 3) percentage of rework, 4) time required for an error correction, 5) percentage of recurrent repairs, and 6) error detection effectiveness.

Key words: CMM level effects, CMM level appraisal, software development performance metrics

INTRODUCTION
Software quality assurance (SQA) professionals believe that a higher quality level of software development process yields higher quality performance, and they seek evidence that investments in SQA systems achieve improved quality performance of the software development process. Journal and conference papers provide such evidence by presenting studies that show SQA investments result in improved software development processes. Most of these studies are based on comparison of “before-after” observations in the same organization. Only some of these papers quantify the performance improvement achieved by SQA system investments, presenting percentages of productivity improvement, percentages of reduction in defect density, and so on.
Of special interest are papers that quantify performance improvement and also measure software process quality level advancement. The Capability Maturity Model (CMM®) and CMM IntegrationSM (CMMI®) levels are the tools for measuring software process quality level common to all of these papers. According to this approach, the improvement of the quality level of the software process is measured by attaining a higher CMM (or CMMI) level in the organization. For example, Jung and Goldenson (2003) found that software maintenance projects from higher CMM-level organizations typically report fewer schedule deviations than those from organizations assessed at lower CMM levels.
For U.S. maintenance projects the results are:
• Mean deviation of 0.464 months for CMM level 1 organizations
• Mean deviation of 0.086 months for CMM level 2 organizations
• Mean deviation of 0.069 months for CMM level 3 organizations
A variety of metrics are applied to measure the resulting performance improvement of the software development process, relating mainly to quality, productivity, and schedule keeping. Results of this nature are presented by McGarry et al. 1999; Diaz and King 2002; Pitterman 2000; Blair 2001; Keeni 2000; Franke 1999; Goldenson and Gibson 2003; and Isaac, Rajendran, and Anantharaman 2004a; 2004b.
Galin and Avrahami (2005; 2006) performed an analysis of past studies (a meta-analysis) based on results presented in 19 published quantitative papers. Their results, which are statistically significant, show average performance improvements according to six metrics that range from 38 percent to 63 percent for one CMM level advancement. Another finding of this study is an average return on investment of 360 percent for investments in one CMM level advancement. They found similar results for CMMI level advancement, but the publications that present findings for CMMI studies do not provide statistically significant results.
Critics may claim that the picture portrayed by the published papers is biased by the tendency not to publish negative results. Even if one assumes some bias, the multitude of published results proves that a significant contribution to performance is derived from SQA improvement investments, even if its real effect is somewhat smaller.
The papers mentioned in Galin and Avrahami’s study, which quantify performance improvement and rank software process quality level improvement, were formulated according to the before-after methodology. An important limitation of this before-after methodology is the long period of observations, during which intervening factors, such as changes in products, the organization, and interfacing requirements, may substantially affect the results. In addition, the gradual changes typical of implementation of software process improvements cause changing performance achievements during the observation period that may affect the study results and lead to inaccurate conclusions.
An alternative study methodology that minimizes these undesired effects is one based on comparing the performance of several organizations observed in the same period (the “comparison of organizations” methodology). The observation period, when applying this methodology, is much shorter, and the observed organization is not expected to undergo a change process during the observation period. As a result, the software process is relatively uniform during the observation period, and the effects of uncontrolled software development environment changes are diminished.
It is important to find out whether the results obtained by research applying the comparison of organizations methodology support findings of research that applied the before-after methodology. Papers that report findings of studies that use the comparison of organizations methodology are rare. One example is Herbsleb et al. (1994), which presents comparative case study results for two projects with similar characteristics performed in the same period by Texas Instruments. One of the projects was performed by applying the “old software development methodology,” while the other used the “new (improved) software development methodology.” The authors report a reduction of the cost per software line of code by 65 percent. Another result was a substantial decrease in defect density, from 6.9 to 2.0 defects per 1,000 lines of code. In addition, the average cost to fix a defect was reduced by 71 percent. The improved software development process was the product of intensive software process improvement (SPI) activities, and was characterized by an entirely different distribution of the resources invested during the software development process. However, Herbsleb et al. (1994) provide no comparative details about the quality level of the software process, that is, an appraisal of the CMM level for the two projects.
The authors’ study applies the comparison of organizations methodology, which is based on empirical data of two software developing organizations (“developers”) with similar characteristics, collected in the same period. The empirical data that became available to the authors enabled them to produce comparative results for each of the two developers, which include: 1) quantitative performance results according to several software process performance metrics; and 2) a CMM appraisal of the developer’s quality level of its software processes. In addition, the available data enable them to provide an explanation for the performance differences, based on the differences in resource investment preferences during the software development phases.


THE CASE STUDY

ORGANIZATIONS
The authors’ case study is based on records and observations of two software development organizations. The first organization, Alpha, is a startup firm that implements only basic software quality assurance practices. The second organization, Beta, is the software development department in an established electronics firm that performs a wide range of software quality assurance practices employed throughout the software development process. Both Alpha and Beta develop C++ real-time embedded software in the same development environment: Alpha’s software product serves the telecommunication security industry sector, while Beta’s software product serves the aviation security industry sector. Both organizations employ the waterfall methodology; however, during the study Alpha’s implementation was “crippled” because the resources invested in the analysis and design stage were negligible. While the Alpha team adopted no software development standard, Beta’s software development department was certified according to the ISO 9000-3 standard (ISO 1997) and according to the aviation industry software development standard DO-178B, Software Considerations in Airborne Systems and Equipment Certification (RTCA 1997). The Federal Aviation Administration (FAA) accepts use of the standard as a means of certifying software in avionics. Neither software development organization was CMM certified.
During the study period Beta developed one software product, while Alpha developed two versions of the same software product. The software process and the SQA system of Beta were stable during the entire study period. The SQA system of Alpha, however, experienced some improvements during the study period that became effective for the development of the second version of its software product. The first and second parts of the study period, dedicated to the development of the two versions, lasted six and eight months, respectively. A preliminary stage of the analysis was done to test the significance of the results of the improvements performed in Alpha during the second part of the study period.

The Research Hypotheses
The research hypotheses are:
• H1: Alpha’s software process performance metrics for its second product will be similar to those of its first product.
• H2: Beta, as the developer of a higher quality level of its software process, will achieve software process performance higher than Alpha according to all performance metrics.
• H3: The results for the differences in performance achievements of the comparison of organizations methodology will support the results of studies performed according to the before-after methodology.

METHODOLOGY
The authors’ comparative case study research was planned as a preliminary stage and a two-stage comparison:
• Preliminary stage: Comparison of software process performance for Alpha’s first and second products (first part of the study period vs. the second part).
• Stage one: Comparison of software process performance of Alpha and Beta.
• Stage two: Comparison of the first stage findings (of the comparison of organizations methodology) with the results of earlier research performed according to the before-after methodology.

The Empirical Data
The study was based on original records of software correction processes that the two developers made available to the study team. The records cover a period of about one year for each developer. The following six software process performance metrics (“performance metrics”) were calculated:
1. Error density (errors per 1,000 lines of code)
2. Productivity (lines of new code per working day)
3. Percentage of rework
4. Time required for an error correction (days)
5. Percentage of recurrent repairs
6. Error detection effectiveness
The detailed records enabled the authors to calculate these performance metrics for each developer. The first five performance metrics were calculated on a monthly basis. For the sixth metric, only a global value calculated for the entire period could be processed for each developer. A sketch of how such metrics can be derived from correction records is given below.
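To make the metric definitions concrete, here is a minimal Python sketch of the six calculations as the text presents them. The function signatures and inputs are illustrative restatements, not the study's actual tooling, and the rework and recurrent-repair formulas are plausible readings, since the article does not spell those definitions out:

```python
# Illustrative restatement of the study's six performance metrics.
# Inputs (counts, effort, durations) are assumed to come from the kind of
# correction records and effort logs the text describes.

def error_density(errors_found: int, new_loc: int) -> float:
    """Metric 1: errors per 1,000 lines of new code."""
    return errors_found / (new_loc / 1000)

def productivity(new_loc: int, working_days: float) -> float:
    """Metric 2: lines of new code per working day."""
    return new_loc / working_days

def percentage_of_rework(rework_days: float, total_days: float) -> float:
    """Metric 3 (assumed definition): share of effort spent on corrections, in percent."""
    return 100.0 * rework_days / total_days

def mean_correction_time(durations_days: list[float]) -> float:
    """Metric 4: average elapsed days from error report to closed correction."""
    return sum(durations_days) / len(durations_days)

def percentage_of_recurrent_repairs(repairs: int, recurrent: int) -> float:
    """Metric 5 (assumed definition): share of corrections repaired again, in percent."""
    return 100.0 * recurrent / repairs

def error_detection_effectiveness(pre_delivery: int, post_delivery: int) -> float:
    """Metric 6 (global): share of all errors caught before delivery, in percent."""
    return 100.0 * pre_delivery / (pre_delivery + post_delivery)
```

For example, with the totals later reported in Table 1, error_detection_effectiveness(1032, 111) gives 90.3 for Alpha and error_detection_effectiveness(331, 1) gives 99.7 for Beta, matching the global values cited in the text.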


Table 1 presents a comparison of the organization characteristics and a summary of the development activities of Alpha and Beta.

The CMM Appraisal
Since the studied organizations were not CMM certified, the authors used an official SEI publication, the “Maturity Questionnaire” for CMM-based appraisal for internal process improvement (CBA IPI) (Zubrow, Hayes, and Goldenson 1994), to prepare an appraisal of Alpha’s and Beta’s software process quality levels. The appraisal yielded the following: CMM level 1 for Alpha and CMM level 3 for Beta. A summary of the appraisal results for Alpha and Beta is presented in Table 2.

The Statistical Analysis
For five of the six performance metrics, the calculated monthly performance metrics for the two organizations were compared and statistically tested by applying the t-test procedure, as sketched below. For the sixth performance metric, error detection effectiveness, only one global detection effectiveness value (calculated for the entire study period) was available: 90.3 percent and 99.7 percent for Alpha and Beta, respectively.
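A minimal sketch of such a test in Python, assuming the plain two-sample t-test with equal variances at the 0.05 significance level; the monthly figures below are illustrative placeholders, not the study's data:

```python
# Two-sample t-test on monthly metric values of the two organizations,
# as the study describes. SciPy's ttest_ind returns the t statistic and
# the two-sided p-value.
from scipy import stats

alpha_monthly = [17.2, 15.1, 18.4, 16.0, 17.9, 14.9]  # hypothetical monthly error densities
beta_monthly = [5.3, 4.1, 6.2, 4.8, 5.5, 4.6]

t_stat, p_value = stats.ttest_ind(alpha_monthly, beta_monthly)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}, significant at 0.05: {p_value < 0.05}")
```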
THE FINDINGS

The Preliminary Stage
A comparison of Alpha’s performance metrics for the two parts of the study period is shown in Table 3.
Alpha’s performance results for the second study period show some improvements (compared with the results of the first study period) for all five performance metrics that were calculated on a monthly basis. However, the performance achievements of the second study period were found statistically insignificant for four out of five performance metrics. Only for one performance metric, namely the percentage of recurrent repairs, did the results show a significant improvement.
Accordingly, H1 was supported for four out of five performance metrics. H1 was rejected only for the metric of the percentage of recurrent repairs.

Table 1: Comparison of the organization characteristics and summary of development activities of Alpha and Beta

Subject of comparison | Alpha | Beta
a) The organization characteristics
Type of software product | Real-time embedded C++ software | Real-time embedded C++ software
Industry sector | Telecommunication security | Aviation security
Certification according to software development quality standards | None | 1. ISO 9000-3; 2. DO-178B
CMM certification | None | None
CMM level appraisal | CMM level 1 | CMM level 3
b) Summary of development activities
Period of data collection | Jan. 2002 – Feb. 2003 | Aug. 2001 – July 2002
Team size | 14 | 12
Man-days invested | 2,824 | 2,315
New lines of code | 56K | 62K
Number of errors identified during development process | 1,032 | 331
Number of errors identified after delivery to customers | 111 | 1
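As a cross-check, the whole-period ratios implied by Table 1's totals can be recomputed directly. Note that the monthly means reported later (Table 4) differ slightly from these global ratios, since they average month-by-month values:

```python
# Illustrative cross-check of Table 1 totals against the global metrics
# reported elsewhere in the article.

alpha = {"loc": 56_000, "days": 2_824, "dev_errors": 1_032, "field_errors": 111}
beta = {"loc": 62_000, "days": 2_315, "dev_errors": 331, "field_errors": 1}

for name, d in (("Alpha", alpha), ("Beta", beta)):
    total = d["dev_errors"] + d["field_errors"]
    print(name,
          f"productivity={d['loc'] / d['days']:.1f} LOC/day",      # 19.8 / 26.8
          f"density={total / (d['loc'] / 1000):.1f} errors/KLOC",  # 20.4 / 5.4
          f"effectiveness={100 * d['dev_errors'] / total:.1f}%")   # 90.3 / 99.7
```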


Stage 1: The Organization Comparison—Alpha vs. Beta
As Beta’s software process quality level was appraised to be much higher than that of Alpha, the quality performance achievements of Beta were, according to H2, expected to be significantly higher than Alpha’s. The comparison of Alpha’s and Beta’s quality performance results is presented in Table 4.
The results of the statistical analysis show that for three out of the six performance metrics the performance of Beta is significantly better than that of Alpha. It should be noted that for the percentage of recurrent repairs, where Alpha demonstrated a significant performance improvement during the second part of the study period, Beta’s performance was significantly better than Alpha’s for each of the two parts of the study period. For the productivity metric, Beta’s results were 35 percent better than those of Alpha, but no statistical significance was found. Somewhat surprising results were found for the time required for an error correction, where the performance of Alpha was 14 percent better than Beta’s, but the difference was found to be statistically insignificant. The explanation for this finding probably lies in the much lower quality of Alpha’s software product. Alpha’s lower quality is demonstrated by its much higher percentages of recurrent repairs, which were found significantly higher than Beta’s: fivefold and threefold higher for the first and second parts of the study period, respectively. Alpha’s lower quality is especially evident in the sixth performance metric, error detection effectiveness. Although only global performance results for the entire study period are available, a clear inferiority of Alpha is revealed: the error detection effectiveness of Alpha is only 90.3 percent, compared with Beta’s 99.7 percent. In other words, 9.7 percent of Alpha’s errors were discovered by its customers, compared with only 0.3 percent of Beta’s errors.
To sum up stage 1, H2 was supported by statistically significant results for the following performance metrics: 1) error density, 2) percentage of rework, and 3) percentage of recurrent repairs. For an additional metric, error detection effectiveness, no statistical testing is possible, but the global results, which clearly indicate the performance superiority of Beta, support hypothesis H2. For two metrics H2 was not supported statistically: for the productivity metric, the results show substantially better performance for Beta; for the time required for an error correction, Alpha’s results are a little better than Beta’s, with no statistical significance. To sum up, as the results for four of the performance metrics support H2 and no result rejects H2, one may conclude that the results support H2. The authors note that these are typical case study results, where a clearly supported hypothesis is accompanied by some inconclusive results.

Table 2: Summary of the maturity questionnaire detailed appraisal results for Alpha and Beta

No. | Key process area | Alpha grades | Beta grades
1. | Requirements management | 1.67 | 10
2. | Software project planning | 4.28 | 10
3. | Software project tracking and oversight | 5.74 | 8.57
4. | Software subcontract management | 6.25 | 10
5. | Software quality assurance (SQA) | 3.75 | 10
6. | Software configuration management (SCM) | 5 | 8.75
Level 2 average | | 4.45 | 9.55
7. | Organization process focus | 1.42 | 10
8. | Organization process definition | 0 | 8.33
9. | Training program | 4.28 | 7.14
10. | Integrated software management | 0 | 10
11. | Software product engineering | 1.67 | 10
12. | Intergroup coordination | 4.28 | 8.57
13. | Peer reviews | 0 | 8.33
Level 3 average | | 1.94 | 9.01
14. | Quantitative process management | 0 | 0
15. | Software quality management | 4.28 | 8.57
Level 4 average | | 2.14 | 4.29
16. | Defect prevention | 0 | 0
17. | Technology change management | 2.85 | 5.71
18. | Process change management | 1.42 | 4.28
Level 5 average | | 1.42 | 3.33

Table 3: Alpha’s performance comparison for the two parts of the study period

SQA metric | First part of the study period (6 months): Mean (s.d.) | Second part of the study period (8 months): Mean (s.d.) | t(0.05) | Statistical significance of differences
1. Error density (errors per 1,000 lines of code) | 17.9 (3.8) | 15.8 (4.2) | t = 0.964 | Not significant
2. Productivity (lines of code per working day) | 16.8 (11.8) | 21.9 (18.4) | t = -0.585 | Not significant
3. Percentage of rework | 35.4 (12.7) | 28.5 (19.8) | t = 0.746 | Not significant
4. Time required for an error correction (days) | 35.9 (33.3) | 16.9 (8.4) | t = 1.570 | Not significant
5. Percentage of recurrent repairs | 26.7 (11.8) | 13.8 (8.7) | t = 2.200 | Significant
6. Error detection effectiveness (global metric for the entire study period) | 90.3% | 99.7% | — | Statistical testing is not possible


Stage 2: Comparison of Methodologies—The Comparison of Organizations Methodology vs. the Before-After Methodology
In this stage the authors compared the results of the current case study, which was performed according to the comparison of organizations methodology, with results of the commonly used before-after methodology. For this purpose, the work of Galin and Avrahami (2005; 2006), which is based on a combined analysis of 19 past studies, is the suitable “representative” of results obtained by applying the before-after methodology.
The comparison is applicable to the four software process performance metrics that are common to the current case study and the findings of the combined past studies analysis carried out by Galin and Avrahami. These common performance metrics are:
• Error density (errors per 1,000 lines of code)
• Productivity (lines of code per working day)
• Percentage of rework
• Error detection effectiveness
As Alpha’s and Beta’s SQA systems were appraised as similar to CMM levels 1 and 3, respectively, their quality performance gap is compared with Galin and Avrahami’s mean quality performance improvement for a CMM level 1 organization advancing to CMM level 3. The comparison for the four performance metrics is shown in Table 5.
The results of the comparison support hypothesis H3 regarding all four performance metrics. For two of the performance metrics (error density and percentage of rework) this support is based on statistically significant results for the current test case. For the productivity metric the support is based on a substantial productivity improvement, which is not statistically significant. The comparison results for the four metrics reveal similarity in direction, where size differences in achievement are expected when comparing multiproject mean results with case study results.
To sum up, the results of the current test case, performed according to the comparison of organizations methodology, conform to the published results obtained by using the before-after methodology.

DISCUSSION
The reason for the substantial differences in software process performance achievement between Alpha and Beta is the main subject of this discussion. The two developers claimed to use the same methodology. The authors assume that the substantial differences in software process performance result from actual implementation differences between the developers. To investigate the causes of the quality performance gap, the authors first examine the available data relating to the differences between Alpha’s and Beta’s distributions of error identification phases along the development process.

Table 4: Quality performance comparison—Alpha vs. Beta

SQA metric | Alpha (14 months): Mean (s.d.) | Beta (12 months): Mean (s.d.) | t(0.05) | Statistical significance of differences
1. Error density (errors per 1,000 lines of code) | 16.8 (4.0) | 5.0 (3.0) | 8.225 | Significant
2. Productivity (lines of code per working day) | 19.7 (15.5) | 26.7 (16.6) | -1.111 | Not significant
3. Percentage of rework | 31.4 (16.9) | 17.9 (8.0) | 2.532 | Significant
4. Time required for an error correction (days) | 25.0 (23.7) | 29.0 (15.6) | -0.497 | Not significant
5. Percentage of recurrent repairs – part 1 | 26.7 (11.8) | 4.8 (8.1) | 4.647 | Significant
   Percentage of recurrent repairs – part 2 | 13.8 (8.7) | 4.8 (8.1) | 2.239 | Significant
6. Error detection effectiveness – % discovered by the customer (global metric for the entire study period) | 9.7 | 0.3 | — | No statistical analysis was possible
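A worked check of where the stage-1 percentages plausibly come from, using Table 4's means; this is one reading of the text's arithmetic (in particular, the 14 percent figure appears to be relative to Beta's mean), not the authors' computation:

```python
# Reconstructing the stage-1 comparison figures from Table 4's means.

prod_alpha, prod_beta = 19.7, 26.7
time_alpha, time_beta = 25.0, 29.0
recur_beta = 4.8

print(f"productivity gap: {100 * (prod_beta - prod_alpha) / prod_alpha:.1f}%")    # ~35 percent
print(f"correction-time gap: {100 * (time_beta - time_alpha) / time_beta:.1f}%")  # ~14 percent
print(f"recurrent repairs, part 1: {26.7 / recur_beta:.1f}x")                     # ~5.6x ("fivefold")
print(f"recurrent repairs, part 2: {13.8 / recur_beta:.1f}x")                     # ~2.9x ("threefold")
```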


Table 6 presents, for Alpha and Beta, the percentages of errors identified in the various development phases.
Table 6 reveals entirely different distributions of the error identification phases for Alpha and Beta. While Alpha identified only 11.5 percent of its errors in the requirement definition, analysis, and design phases, Beta managed to identify almost half of its total errors during the same phases. Another delay in error identification is noticed in the unit testing phase: there Alpha identified fewer than 4 percent of its total errors, while Beta identified in the same phase more than 20 percent of its total errors (almost 40 percent of the errors identified by testing). The delay in error identification by Alpha is again apparent when comparing the percentage of errors identified during the integration and system tests: 75 percent for Alpha compared with 35 percent for Beta. However, the most remarkable difference between Alpha and Beta is in the rate of errors detected by the customers: 9.7 percent for Alpha compared with only 0.3 percent for Beta. This enormous difference in error detection efficiency, as well as the remarkable difference in error density, is the main contribution to the higher quality level of Beta’s software process.
Further investigation of the causes of Beta’s higher quality performance leads one to the data related to resource distribution along the development process. Table 7 presents the distribution of the development resources along the development process, indicating noteworthy differences between the developers.
Examination of the data presented in Table 7 reveals substantial differences in resource distribution between Alpha and Beta. While more than a third of the resources were invested by Beta’s team in the requirement definition, analysis, and design phases, Alpha’s team investments during the same phases were negligible. Furthermore, while Alpha invested about half of its development resources in software testing and the consequent software corrections, Beta’s investments in these phases were less than a quarter of the total project resources. It may be concluded that the shift of resources invested “downstream” by Alpha resulted in a parallel downstream shift of the distribution of error identification phases (see Table 6). The very low resource investments of Beta in the correction of failures identified by customers, as compared with Alpha’s investments in this phase, correspond well to the differences in error identification distribution between the developers. It may be concluded that the enormous difference in error detection efficiency, as well as the remarkable difference in error density, are the product of the downstream shift of the distribution of the software process resources.

Table 5: Quality performance improvement results—methodology comparison

SQA metric | Comparison of organizations methodology: Beta’s performance compared with Alpha’s (%) | Before-after methodology: CMM level 1 advancement to CMM level 3, mean performance improvement (%)*
1. Error density (errors per 1,000 lines of code) | 70% reduction (significant) | 76% reduction
2. Productivity (lines of code per working day) | 36% increase (not significant) | 72% increase
3. Percentage of rework | 43% reduction (significant) | 65% reduction
4. Error detection effectiveness | 97% reduction (not tested statistically) | 84% reduction
* According to Galin and Avrahami (2005; 2006)
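The percent changes in Table 5's first data column follow directly from Table 4's means; for error detection effectiveness, the 97 percent figure is the reduction in the share of errors detected by customers (row 6 of Table 4). A short computation makes the derivation explicit:

```python
# Deriving Table 5's "comparison of organizations" column from Table 4:
# Beta's gap relative to Alpha as a percent change.

pairs = {  # (Alpha mean, Beta mean) from Table 4
    "error density": (16.8, 5.0),
    "productivity": (19.7, 26.7),
    "percentage of rework": (31.4, 17.9),
    "customer-detected errors %": (9.7, 0.3),  # proxy for detection effectiveness
}
for metric, (a, b) in pairs.items():
    print(f"{metric}: {100 * (b - a) / a:+.0f}%")  # -70, +36, -43, -97
```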

Table 6: Error identification phase—Alpha vs. Beta

Development phase | Alpha: identified errors (%) | Alpha: cumulative (%) | Beta: identified errors (%) | Beta: cumulative (%)
Requirement definition | 5.8 | 5.8 | 33.8 | 33.8
Design | 5.7 | 11.5 | 9.0 | 42.8
Unit testing | 3.8 | 15.3 | 22.3 | 65.1
Integration and system testing | 75.0 | 90.3 | 34.6 | 99.7
Post delivery | 9.7 | 100.0 | 0.3 | 100.0


In other words, they demonstrate the results of a “crippled” implementation of the development methodology, one that actually begins the software process at the programming phase. This crippled development methodology yields a software process of substantially lower productivity, followed by a remarkable increase in error density and a colossal reduction in error detection efficiency.
At this stage it is interesting to compare the authors’ findings regarding the differences in resource distribution between Alpha and Beta with those of Herbsleb et al.’s study. A comparison of findings regarding resource distribution along the development process for the current study and the Texas Instruments projects is presented in Table 8.
The findings by Herbsleb et al. related to the Texas Instruments projects indicate that the new (improved) development methodology focuses on upstream development phases, while the old methodology led the team to invest in coding and testing. In other words, while in the improved development methodology project 40 percent of the development resources were invested in the requirement definition and design phases, only 8 percent of the resources of the old methodology project were invested in these development phases. Herbsleb et al. also found a major difference in resource investments in unit testing: 18 percent of the total testing resources in the old methodology project compared with 90 percent in the improved methodology project. Herbsleb et al. believe that the change of development methodology, as evidenced by the change in resource distribution along the software development process, yielded the significant reduction in error density (from 6.9 to 2.0 defects per thousand lines of code) and a remarkable reduction in the resources invested in customer support after delivery (from 23 percent of total project resources to 7 percent). These findings by Herbsleb et al. closely resemble the current case study findings.

Table 7: Project resources according to development phase—Alpha vs. Beta

Development phase | Alpha: resources invested (%) | Alpha: cumulative (%) | Beta: resources invested (%) | Beta: cumulative (%)
Requirement definition and design | Negligible | 0 | 34.5 | 34.5
Coding | 46.5 | 46.5 | 41.5 | 76.0
Software testing | 26.0 | 72.5 | 14.0 | 90.0
Error corrections according to testing results | 22.5 | 95.0 | 9.5 | 99.5
Correction of failures identified by customers | 5.0 | 100.0 | 0.5 | 100.0

Table 8: Texas Instruments project resources distribution according to development phase—“old development methodology” project vs. “new (improved) development methodology” project. Source: Herbsleb et al. (1994)

Development phase | Old methodology: resources invested (%) | Old: cumulative (%) | New (improved) methodology: resources invested (%) | New: cumulative (%)
Requirement definition | 4 | 4 | 13 | 13
Design | 4 | 8 | 27 | 40
Coding | 47 | 55 | 24 | 64
Unit testing | 4 | 59 | 26 | 90
Integration and system testing | 18 | 77 | 3 | 93
Support after delivery | 23 | 100 | 7 | 100
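The "18 percent vs. 90 percent" unit-testing figures quoted from Herbsleb et al. can be reproduced from Table 8's rows as unit testing's share of all testing resources (unit plus integration and system testing):

```python
# Unit testing as a share of total testing resources, from Table 8.

old_unit, old_integration = 4, 18
new_unit, new_integration = 26, 3

print(f"old: {100 * old_unit / (old_unit + old_integration):.0f}%")   # 18%
print(f"new: {100 * new_unit / (new_unit + new_integration):.0f}%")  # 90%
```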


CONCLUSIONS
Quantitative knowledge of the expected software process performance improvement is of great importance to the software industry. The available quantitative results are based almost solely on studies performed according to the before-after methodology. The current case study supports these results by applying an alternative methodology—the comparison of organizations methodology. As the examination of results obtained by an alternative study methodology is important, the authors recommend performing a series of case studies applying the comparison of organizations methodology. The results of these proposed case studies may support the earlier results and add substantially to their significance.
The current case study is based on existing correction records and other data that became available to the research team. In future case studies applying the comparison of organizations methodology that are planned at earlier stages of the development project, researchers may participate in planning the project management data collection and thus enable collection of data for a wider variety of software process performance metrics.

REFERENCES
Blair, R. B. 2001. Software process improvement: What is the cost? What is the return on investment? In Proceedings of the Pittsburgh PMI Conference, April 12.
Diaz, M., and J. King. 2002. How CMM impacts quality, productivity, rework, and the bottom line. Crosstalk 15, no. 1: 9-14.
Franke, R. 1999. Achieving Level 3 in 30 months: The Honeywell BSCE case. Presentation at the 4th European Software Engineering Process Group Conference, London.
Galin, D., and M. Avrahami. 2005. Do SQA programs work – CMM works. A meta analysis. In Proceedings of the IEEE International Conference on Software – Science, Technology & Engineering, Herzlia, Israel, 22-23 February. Los Alamitos, Calif.: IEEE Computer Society Press: 95-100.
Galin, D., and M. Avrahami. 2006. Are CMM programs beneficial? Analyzing past studies. IEEE Software 23, no. 6: 81-87.
Goldenson, D. R., and D. L. Gibson. 2003. Demonstrating the impact and benefits of CMMI: An update and preliminary results (CMU/SEI-2003-009). Pittsburgh: Software Engineering Institute, Carnegie Mellon University.
Herbsleb, J., A. Carleton, J. Rozum, J. Siegel, and D. Zubrow. 1994. Benefits of CMM-based software process improvement: Initial results (CMU/SEI-94-TR-013). Pittsburgh: Software Engineering Institute, Carnegie Mellon University. Available at: http://www.sei.cmu.edu/publications/documents/94.reports/94.tr.013.html.
Isaac, G., C. Rajendran, and R. N. Anantharaman. 2004a. Does quality certification improve software industry’s operational performance? Software Quality Professional 5, no. 1: 30-37.
Isaac, G., C. Rajendran, and R. N. Anantharaman. 2004b. Does quality certification improve software industry’s operational performance? – supplemental material. Available at http://www.asq.org.
ISO. 1997. ISO 9000-3 Guidelines for the application of ISO 9001:1994 to the development, supply, installation and maintenance of computer software. Geneva, Switzerland: International Organization for Standardization.
Jung, H. W., and D. R. Goldenson. 2003. CMM-based process improvement and schedule deviation in software maintenance (CMU/SEI-2003-TN-015). Pittsburgh: Software Engineering Institute, Carnegie Mellon University.
Keeni, G. 2000. The evolution of quality processes at Tata Consultancy Services. IEEE Software 17, no. 4: 79-88.
McGarry, F., R. Pajerski, G. Page, S. Waligora, V. Basili, and M. Zelkowitz. 1999. Software process improvement in the NASA Software Engineering Laboratory (CMU/SEI-94-TR-22). Pittsburgh: Software Engineering Institute, Carnegie Mellon University. Available at: http://www.sei.cmu.edu/publications/documents/94.reports/94.tr.022.html.
Pitterman, B. 2000. Telcordia Technologies: The journey to high maturity. IEEE Software 17, no. 4: 89-96.
RTCA. 1997. DO-178B Software considerations in airborne systems and equipment certification. Washington, D.C.: Radio Technical Commission for Aeronautics.
Zubrow, D., W. Hayes, J. Siegel, and D. Goldenson. 1994. Maturity questionnaire (CMU/SEI-94-SR-7). Pittsburgh: Software Engineering Institute, Carnegie Mellon University.

® Carnegie Mellon, Capability Maturity Model, CMMI, and CMM are registered trademarks of Carnegie Mellon University.
SM CMM Integration and SEI are service marks of Carnegie Mellon University.

BIOGRAPHIES
Daniel Galin is the head of information systems studies at the Ruppin Academic Center, Israel, and an adjunct senior teaching fellow with the Faculty of Computer Science, the Technion, Haifa, Israel. He has a bachelor’s degree in industrial and management engineering, and master’s and doctorate degrees in operations research from the Israel Institute of Technology, Haifa, Israel. His professional experience includes numerous consulting projects in the areas of software quality assurance, analysis and design of information systems, and industrial engineering. He has published many papers in professional journals and conference proceedings. He is also the author of several books on software quality assurance and on analysis and design of information systems. He can be reached by e-mail at dgalin@bezeqint.net.
Motti Avrahami is VeriFone’s global supply chain quality manager. He has more than nine years of experience in software quality processes and software testing. He received his master’s degree in quality assurance and reliability from the Technion, Israel Institute of Technology. He can be contacted by e-mail at mottia@gmail.com.

