
Chapter 1

The Software Quality Challenge

The uniqueness of software quality assurance

Do you think there is such a thing as bug-free software?

Can software developers warrant that their software applications and their documentation are free of any bugs or defects?

What are the essential differences between software and other industrial products, such as automobiles, washing machines, etc.?

The essential differences between software and other industrial products can be categorized as follows:
1. Product complexity: the number of operational modes the product permits.
2. Product visibility: SW products are invisible.
3. Product development and production process.

The phases at which defects can be detected differ for industrial products and software products. SW products do not benefit from the opportunities for detection of defects at all three phases of the production process.

Industrial products:
- Product development: QA -> product prototype
- Product production planning: QA -> production line
- Manufacturing: QA procedures applied

Software products:
- Product development: QA -> product prototype
- Product production planning: not required
- Manufacturing: copying the product and printing copies

Factors affecting the detection of defects in SW products vs. other industrial products:

Complexity:
- SW products: usually very complex, allowing for a very large number of operational options.
- Other industrial products: degree of complexity much lower.

Visibility:
- SW products: invisible; impossible to detect defects or omissions by sight (e.g. on the diskette or CD storing them).
- Other industrial products: visible, allowing effective detection of defects by sight.

Nature of the development and production process:
- SW products: opportunities to detect defects arise in only one phase, namely product development.
- Other industrial products: opportunities to detect defects arise in all phases of development and production.
Important Conclusion

The great complexity as well as the invisibility of software, among other product characteristics, make the development of SQA methodologies and their successful implementation a highly professional challenge.

The environment for which SQA methods are developed

Software is developed by:
- Pupils & students
- Hobbyists
- Engineers, economists, management & other fields
- SW development professionals

All these SW developers are required to deal with SW quality problems (bugs).

SQA environment
The main characteristics of this environment:
1. Contractual conditions
2. Subjection to customer-supplier relationship
3. Required teamwork
4. Cooperation and coordination with other SW teams
5. Interfaces with other SW systems
6. The need to continue carrying out a project despite team member changes
7. The need to carry out SW maintenance for an extended period

Contractual conditions
The activities of SW development & maintenance need to cope with:
- A defined list of functional requirements
- The project budget
- The project timetable

Subjection to customer-supplier relationship

The SW developer must cooperate continuously with the customer:
- To consider his requests for changes
- To discuss his criticisms
- To get his approval for changes

Required teamwork

Factors motivating the establishment of a project team:
- Timetable requirements
- The need for a variety of specializations
- The wish to benefit from mutual professional support & review for enhancement of project quality

Cooperation and coordination with other SW teams

Cooperation may be required with:
- Other SW development teams in the same organization
- HW development teams in the same organization
- SW & HW development teams of other suppliers
- Customer SW and HW development teams that take part in the project's development

Interfaces with other SW systems
- Input interfaces
- Output interfaces
- I/O interfaces to the machine's control board, as in medical and laboratory control systems

The need to continue carrying out a project despite team member changes

During the project development period we may face:
- Team members leaving
- Employees switching jobs
- Transfers to another city

The need to carry out SW maintenance for an extended period

For 5 to 10 years, customers need to continue utilizing their systems, which requires:
- Maintenance
- Enhancement
- Changes (modification)

Chapter 2

What is Software Quality?

What is Software?

IEEE definition - software is:
Computer programs, procedures, and possibly associated documentation and data pertaining to the operation of a computer system.

The IEEE definition is almost identical to the ISO definition (ISO/IEC 9000-3):
- Computer programs (code)
- Procedures
- Documentation
- Data necessary for operating the SW system

To sum up, software quality assurance always includes:
- Code quality
- The quality of the documentation
- The quality of the necessary SW data

SW errors, faults and failures

(Questions arise from the HRM conference, page 16.)

An error can be a grammatical error in one or more of the code lines, or a logical error in carrying out one or more of the client's requirements.
Not all SW errors become SW faults.
SW failures are the faults that disrupt our use of the software.

The relationship between SW faults & SW failures:

Do all SW faults end with SW failures? The answer is: not necessarily. A SW fault becomes a SW failure only when it is activated.

(Example: pages 17-18)
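The fault-activation idea can be made concrete with a small sketch. This is a hypothetical example, not code from the book: the fault exists in the program from the start, but it only becomes an observable failure when a particular input activates the faulty path.

```python
# A minimal sketch of a dormant software fault: average() contains a
# defect (no handling of an empty list), but the defect only turns into
# a failure when that code path is activated.

def average(values):
    # Fault: dividing by len(values) fails for an empty list.
    return sum(values) / len(values)

# Fault present but not activated: the program behaves correctly.
print(average([10, 20, 30]))

# The fault is activated by an empty input -> an observable failure.
try:
    average([])
except ZeroDivisionError:
    print("failure observed: division by zero")
```

Until some user actually supplies an empty list, the system appears fault-free, which is exactly why "no failures observed" does not mean "no faults present".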

Classification of the causes of SW errors

SW errors are the cause of poor SW quality. SW errors can be:
- Code errors
- Documentation errors
- SW data errors

The cause of all these errors is human error.

The nine causes of software errors
1. Faulty requirements definition
2. Client-developer communication failures
3. Deliberate deviations from SW requirements
4. Logical design errors
5. Coding errors
6. Non-compliance with documentation and coding instructions
7. Shortcomings of the testing process
8. Procedure errors
9. Documentation errors

Faulty requirements definition
1. Erroneous definition of requirements
2. Absence of vital requirements
3. Incomplete definition of requirements
4. Inclusion of unnecessary requirements

Client-developer communication failures
- Misunderstandings resulting from defective client-developer communications
- Misunderstandings of the client's requirements changes presented to the developer:
  - In written form
  - Orally
- Misunderstandings of the client's responses to design problems
- Others

Deliberate deviations from SW requirements
- The developer reuses SW modules taken from an earlier project without full adaptation
- Due to time and budget pressures
- Due to unapproved "improvements"

Logical design errors

These stem from the work of systems architects, system analysts and SW engineers, such as:
- Erroneous algorithms
- Process definitions that contain sequencing errors
- Erroneous definitions of boundary conditions
- Omission of required SW system states
- Omission of definitions concerning reactions to illegal operations

Coding errors
- Misunderstanding the design documentation
- Linguistic errors in the programming languages
- Errors in the application of CASE and other development tools

Non-compliance with documentation and coding instructions

Non-compliance causes difficulties for:
- Team members who need to coordinate their own code with code modules developed by non-complying team members
- Individuals replacing a non-complying team member, who will find it difficult to fully understand his work
- Design reviews of non-complying work
- Testing of non-complying code, which is more difficult
- Maintenance of non-complying code

Shortcomings of the testing process

Such as:
- Incomplete testing plans
- Failure to document and report errors and faults
- Failure to quickly correct detected SW faults as a result of inappropriate indications of the reasons for the fault
- Incomplete correction of detected errors

These shortcomings increase the number of undetected errors.

Procedure errors
- Procedures direct the user with respect to the activities required at each step of the process.
- Their importance is apparent in complex SW systems, where processing is conducted in several steps, each of which may feed in a variety of types of data and allow for examination of the intermediate results.

(See example, page 22)

Documentation errors
- Errors in the design documents and in the documentation integrated into the body of the SW, which trouble the development and maintenance process
- Errors in the user manuals and in the online help:
  - Omission of software functions
  - Errors in the explanations and instructions
  - Listing of non-existing software functions

Software quality - IEEE definition
1. The degree to which a system, component, or process meets specified requirements.
2. The degree to which a system, component, or process meets customer or user needs or expectations.

Software quality - Pressman's definition

Conformance to explicitly stated functional and performance requirements, explicitly documented standards, and implicit characteristics that are expected of all professionally developed software.

Software Quality Assurance - the IEEE definition

SQA is:
1. A planned and systematic pattern of all actions necessary to provide adequate confidence that an item or product conforms to established technical requirements.
2. A set of activities designed to evaluate the process by which the products are developed or manufactured. Contrast with quality control.

The IEEE SQA definition excludes maintenance as well as timetable and budget issues.

The author adopts the following:
- SQA should not be limited to the development process. It should be extended to cover the long years of service subsequent to product delivery, adding the software maintenance functions into the overall conception of SQA.
- SQA actions should not be limited to the technical aspects of the functional requirements; they should include activities that deal with scheduling, timetables and the budget.

SQA - expanded definition

A systematic, planned set of actions necessary to provide adequate confidence that the software development process or the maintenance of a software system product conforms to established functional technical requirements as well as to the managerial requirements of keeping the schedule and operating within the budgetary confines.
This definition corresponds strongly with the concepts at the foundation of ISO 9000-3 (1997) and also corresponds to the main outlines of the CMM for software.
(See Table 2.2, page 27)

Software Quality Assurance vs. Software Quality Control

Quality control: a set of activities designed to evaluate the quality of a developed or manufactured product. It takes place before the product is shipped to the client.

Quality assurance: its main objective is to minimize the cost of guaranteeing quality by a variety of activities performed throughout the development and maintenance process, in order to prevent the causes of errors and to detect and correct them early in the development process.

SQA vs. Software Engineering

SW engineering (IEEE definition):
The application of a systematic, disciplined, quantifiable approach to the development and maintenance of SW; that is, the application of engineering to software.

Thus, a SW engineering environment is a good infrastructure for achieving SQA objectives.

Chapter 3
Software Quality Factors


SQ factors

From the previous chapters we have already established that the requirements document is one of the most important elements for achieving SQ.

What makes a good SQ requirements document?

The need for comprehensive SQ requirements

Typical complaints:
- "Our sales information system seems very good, but it fails frequently, at least twice a day for 20 minutes or more" (the SW house claims no responsibility).
- "The local product contains the SW and everything is OK, but when we began planning the development of a European version, it turned out that almost all the design and programming would have to be new."
- etc. (see page 36)

There are some characteristics common to all these complaints:
- All the SW projects satisfactorily fulfilled the basic requirements for correct calculations.
- All the SW projects suffered from poor performance in important areas such as maintenance, reliability, SW reuse, or training.
- The cause of the poor performance of the developed SW projects in these areas was the lack of predefined requirements to cover these important aspects of SW functionality.

The solution: a comprehensive definition of requirements (SQ factors).

Classification of SW requirements into SW quality factors

McCall's factor model classifies all SW requirements into 11 SW quality factors, grouped into 3 categories:
- Product operation: Correctness, Reliability, Efficiency, Integrity, Usability
- Product revision: Maintainability, Flexibility, Testability
- Product transition: Portability, Reusability, Interoperability

(See the McCall model of SW quality factors tree, page 38)
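The three-category grouping above is essentially a lookup structure, which can be sketched as follows. This is an illustrative data-structure sketch, not code from the book:

```python
# McCall's factor model as a simple lookup table: 11 quality factors
# grouped into the 3 categories listed above.

MCCALL_FACTORS = {
    "product operation": ["correctness", "reliability", "efficiency",
                          "integrity", "usability"],
    "product revision": ["maintainability", "flexibility", "testability"],
    "product transition": ["portability", "reusability", "interoperability"],
}

def category_of(factor: str) -> str:
    # Return the McCall category a given quality factor belongs to.
    for category, factors in MCCALL_FACTORS.items():
        if factor.lower() in factors:
            return category
    raise KeyError(f"not a McCall factor: {factor}")

print(category_of("Testability"))                     # -> product revision
print(sum(len(f) for f in MCCALL_FACTORS.values()))   # -> 11
```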

Product operation SW quality factors

Correctness: the system's required outputs.
Output specifications are usually multidimensional; some common dimensions include:
- The output mission - what to output (printable documents, alarms, voice)
- The required accuracy
- The completeness
- The up-to-dateness of the information
- The availability of the information (the reaction time)
- The standards for coding and documenting the SW system
(See example, page 39)

Product operation SW quality factors

Reliability:
Reliability requirements deal with failures to provide service. They determine the maximum allowed SW system failure rate, and can refer to the entire system or to one or more of its separate functions.
(See examples, page 39 - heart-monitoring unit)
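A reliability requirement of this kind is directly checkable once stated as a maximum failure rate. The requirement value and observations below are assumed for illustration only:

```python
# Illustrative sketch: a reliability requirement expressed as a maximum
# allowed failure rate, checked against observed system behavior.
# The limit of 2 failures/year is an assumed example value.

MAX_FAILURES_PER_YEAR = 2

def meets_reliability(observed_failures: int, years: float) -> bool:
    # Compare the observed failure rate with the allowed maximum.
    return observed_failures / years <= MAX_FAILURES_PER_YEAR

print(meets_reliability(3, 2.0))   # 1.5 failures/year: within the limit
print(meets_reliability(5, 2.0))   # 2.5 failures/year: requirement violated
```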

Product operation SW quality factors

Efficiency:
Deals with the HW resources needed to perform all the functions of the SW system in conformance with all other requirements. Typical measures:
- MIPS - million instructions per second
- MHz (megahertz) - million cycles per second
- Storage: MBs, GBs, TBs
- KBPS - kilobits per second (also MBPS, GBPS)

Integrity:
Deals with SW system security, that is, requirements to prevent access by unauthorized persons.
(See examples, page 40)

Product operation SW quality factors


Usability:
Deals with the scope of staff resources needed to train a new
employee and to operate the SW system.
(See examples, page 41)

Product revision SW quality factors

Maintainability:
Maintainability requirements determine the efforts that will be needed by users and maintenance personnel to identify the reasons for SW failures, to correct the failures, and to verify the success of the corrections.
Typical maintainability requirements:
1. The size of a SW module will not exceed 30 statements.
2. The programming will adhere to the company's coding standards and guidelines.
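A requirement like "no module exceeds 30 statements" is attractive precisely because it can be checked automatically. The sketch below is an assumed illustration, not from the book; it approximates a "statement" as a non-blank, non-comment line:

```python
# A minimal sketch of checking the maintainability requirement above:
# flag any module whose statement count exceeds the agreed limit.

MAX_STATEMENTS = 30  # the limit quoted in the requirement above

def count_statements(source: str) -> int:
    # Approximate statements as non-blank, non-comment lines.
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            count += 1
    return count

def check_module(name: str, source: str) -> bool:
    n = count_statements(source)
    ok = n <= MAX_STATEMENTS
    print(f"{name}: {n} statements -> {'OK' if ok else 'TOO LARGE'}")
    return ok

check_module("billing.py", "x = 1\n# comment\ny = x + 1\n")
```

In practice such a check would run in a build pipeline, so that violations of the coding-standard requirement are caught at development time rather than during maintenance.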

Product revision SW quality factors

Flexibility:
The capabilities and efforts required to support adaptive maintenance activities are covered by the flexibility requirements. These requirements also support perfective maintenance activities, such as changes and additions to the SW in order to improve its service and to adapt it to changes in the firm's technical or commercial environment.
(Example: page 42)

Product revision SW quality factors

Testability:
Testability requirements deal with the testing of an information system as well as with its operation, e.g.:
- Providing predefined intermediate results and log files
- Automatic diagnostics performed by the SW system prior to starting the system, to find out whether all components of the SW system are in working order
- Obtaining a report about detected faults
(Examples: pages 42-43)

Product transition SW quality factors

Portability:
Portability requirements tend to the adaptation of a SW system to other environments, consisting of:
- Different HW
- Different operating systems
Example: SW designed to work under a Windows 2000 environment is required to allow low-cost transfer to Linux.

Product transition SW quality factors

Reusability:
Deals with the use of SW modules originally designed for one project in a new SW project currently being developed. The reuse of SW is expected to save development resources, shorten the project period, and provide higher quality modules. These quality benefits are based on the assumption that most of the SW faults have already been detected by the SQA activities previously performed on the reused modules.

Product transition SW quality factors

Interoperability:
Interoperability requirements focus on creating interfaces with other SW systems or with other equipment firmware.
Example: The firmware of medical laboratory equipment is required to process its results according to a standard data structure that can then serve as input for a number of standard laboratory information systems.

Alternative models of SW quality factors

Two other models of SQ factors:
- Evans and Marciniak, 1987 (12 factors)
- Deutsch and Willis, 1988 (15 factors)

Five new factors were suggested:
- Verifiability
- Expandability
- Safety
- Manageability
- Survivability

Alternative models of SW quality factors

The five new factors:
- Verifiability: defines design and programming features that enable efficient verification of the design and programming (modularity, simplicity, adherence to documentation and programming guidelines).
- Expandability: refers to the future efforts that will be needed to serve larger populations, improve services, or add new applications in order to improve usability.
- Safety: meant to eliminate conditions hazardous to operators of equipment as a result of errors in process control SW.
- Manageability: refers to the administrative tools that support SW modification during the SW development and maintenance periods.
- Survivability: refers to the continuity of service. These requirements define the minimum time allowed between failures of the system and the maximum time permitted for recovery of service.

Who is interested in the definition of quality requirements?

The client is not the only party interested in defining the requirements that assure the quality of the SW product. The developer is often also interested, especially in:
- Reusability
- Verifiability
- Portability

Hence, a SW project will be carried out according to two requirements documents:
- The client's requirements document
- The developer's additional requirements document

Chapter 4
The Components of the SQA System - Overview

The SQA system - an SQA architecture

SQA system components can be classified into 6 classes:
1. Pre-project components
2. Components of project life cycle activities assessment
3. Components of infrastructure error prevention and improvement
4. Components of SW quality management
5. Components of standardization, certification, and SQA system assessment
6. Organizing for SQA - the human components

Pre-project components:
These assure that:
1. The project commitments have been adequately defined, considering the resources required, the schedule and the budget.
2. The development and quality plans have been correctly determined.

Components of project life cycle activities assessment:

The project life cycle is composed of two stages:
1. The development life cycle stage: detects design and programming errors. Its components are divided into:
   - Reviews
   - Expert opinions
   - Software testing
   - Assurance of the quality of the subcontractors' work and customer-supplied parts
2. The operation-maintenance stage: includes specialized maintenance components as well as development life cycle components, which are applied mainly for functionality-improving maintenance tasks.

Components of infrastructure error prevention and improvement:
The main objective of these components, which are applied throughout the entire organization, is to eliminate or at least reduce the rate of errors, based on the organization's accumulated SQA experience.

Components of software quality management:
This class of components is geared toward several goals, the major ones being the control of development and maintenance activities and the introduction of early managerial support actions that mainly prevent or minimize schedule and budget failures and their outcomes.

Components of standardization, certification, and SQA system assessment:
The main objectives of this class are:
1. Utilization of international professional knowledge
2. Improvement of coordination of the organizational quality system with other organizations
3. Assessment of the achievements of quality systems according to a common scale

The various standards are classified into two groups:
- Quality management standards
- Project process standards

Organizing for SQA - the human components

The SQA organizational base includes:
- Managers
- Testing personnel
- The SQA unit and practitioners interested in SQ

The main objectives are:
- To initiate and support the implementation of SQA components
- To detect deviations from SQA procedures and methodology
- To suggest improvements

Part II
Pre-project SQ Components

Chapter 5
Contract Review

Contract review
Contract review is the software quality element that reduces the probability of undesirable situations (see page 78). It is a requirement of ISO 9001 and the ISO 9000-3 guidelines.

The contract review process and its stages

Several situations can lead a SW company to sign a contract with a customer, such as:
- Participation in a tender
- Submission of a proposal according to the customer's RFP
- Receipt of an order from a company's customer
- Receipt of an internal request or order from another department in the organization

The contract review process and its stages

Contract review is the SQA component devised to guide the review of drafts of proposal and contract documents. Where applicable, it also provides oversight (supervision) of the contracts carried out with potential project partners and subcontractors.

The contract review process itself is conducted in two stages:

Stage 1 - Review of the proposal draft prior to submission to the potential customer ("proposal draft review"): reviews the final proposal draft and the proposal's foundations:
- The customer's requirement documents
- The customer's additional details and explanations of the requirements
- Cost and resources estimates
- Existing contracts or contract drafts of the supplier with partners and subcontractors

The contract review process itself is conducted in two stages:

Stage 2 - Review of the contract draft prior to signing ("contract draft review"): reviews the contract draft on the basis of the proposal and the understandings (including changes) reached during the contract negotiation sessions.

The individuals who perform the review thoroughly examine the draft; referring to a comprehensive range of review subjects (a checklist) is very helpful for assuring full coverage of the relevant subjects.
(See appendices 5A, 5B)

Contract review objectives:

Proposal draft review objectives (assure the following):
- Customer requirements have been clarified and documented
- Alternative approaches for carrying out the project have been examined
- Formal aspects of the relationship between the customer and the SW firm have been specified
- Development risks have been identified
- Adequate estimates of the project resources and timetable have been prepared
- The customer's capacity to fulfill his commitments has been examined
- The partners' and subcontractors' participation conditions have been defined
- Proprietary rights have been defined and protected

Contract review objectives:

Contract draft review objectives (assure the following):
- No unclarified issues remain in the contract draft
- All the understandings reached between the customer and the firm are fully and correctly documented
- No changes, additions, or omissions that have not been discussed and agreed upon have been introduced into the contract draft

Factors affecting the extent of a contract review:
- Project magnitude, usually measured in man-month resources
- Project technical complexity
- Degree of staff acquaintance with and experience in the project area
- Project organizational complexity: the greater the number of organizations (partners, subcontractors, and customers) taking part in the project, the greater the contract review efforts required

Who performs a contract review?
- The leader or another member of the proposal team
- The members of the proposal team
- An outside professional or a company staff member who is not a member of the proposal team
- A team of outside experts

Implementation of a contract review of a major proposal

The difficulties of carrying out contract reviews for major proposals:
- Time pressures
- Proper contract review requires substantial professional work
- The potential contract review team members are very busy

Implementation of a contract review of a major proposal

Recommended avenues (approaches) for implementing major contract reviews:
- The contract review should be scheduled
- A team should carry out the contract review
- A contract review team leader should be appointed
- The activities of the team leader include:
  - Recruitment of the team members
  - Distribution of review tasks
  - Coordination between the members
  - Coordination between the review team and the proposal team
  - Follow-up of activities, especially compliance with the schedule
  - Summarization of the findings and their delivery to the proposal team

Contract review for internal projects

Types of internal projects


Contract review for internal projects

The main point here is the internal relationship. Loose relationships are usually characterized by insufficient examination of the project's requirements, resources and development risks. To avoid these problems, contract review should be applied to internal projects just as to external projects, by implementing procedures that define:
- An adequate proposal for the internal project
- A proper contract review process
- An adequate agreement between the internal customer and the internal supplier

Chapter 6
Development and Quality Plans

Development plans and quality plans are major elements needed for project compliance with the ISO 9000-3 standards (ISO/IEC, 2001) and with IEEE 730. They are also important elements in the Capability Maturity Model (CMM) for assessing a SW development organization's maturity.
A project needs development and quality plans that:
- Are based on proposal materials that have been re-examined and thoroughly updated
- Are more comprehensive than the approved proposal, especially with respect to schedules, resource estimates, and development risk evaluations
- Include additional subjects absent from the approved proposal
- Others

Development plan and quality plan objectives:
1. Scheduling development activities that will lead to the successful and timely completion of the project, and estimating the required manpower resources and budget
2. Recruiting team members and allocating development resources
3. Resolving development risks
4. Implementing required SQA activities
5. Providing management with the data needed for project control

Elements of the development plan:
1. Project products
2. Project interfaces
3. Project methodology and development tools
4. SW development standards and procedures
5. The mapping of the development process (project management, Gantt chart)
6. Project milestones (documents, code, reports)
7. Project staff organization (organizational structure, professional requirements, number of team members, names of team leaders)
8. Development facilities (SW and HW tools, space, period required for each use)
9. Development risks (see next slide)
10. Control methods
11. Project cost estimation

Development risks

A development risk is a state or property of a development task or environment which, if ignored, will increase the likelihood of project failure, such as:
1. Technological gaps
2. Staff shortages
3. Interdependence of organizational elements - the likelihood that suppliers or specialized HW or SW subcontractors, for example, will not fulfill their obligations on schedule

Elements of the quality plan:
1. Quality goals (quantitative measures; example on page 102)
2. Planned review activities (design reviews, design inspections, etc.), specifying for each:
   - The scope of the review activity
   - The type
   - The schedule (priorities)
   - The specific procedure to be applied
   - Who is responsible for carrying out the review activity
3. Planned SW tests (a complete list of planned SW tests should be provided), specifying for each test:
   - The unit, integration or complete system to be tested
   - The type of testing activities to be carried out
   - The planned test schedule
   - The specific procedure
   - Who is responsible

Elements of the quality plan (cont.):
4. Planned acceptance tests for externally developed SW:
   - Purchased SW
   - Subcontractor-developed SW
   - Customer-supplied SW
5. Configuration management: configuration management tools and procedures, including the change-control procedures meant to be applied throughout the project

Development and quality plans for small projects and internal projects

(See pages 105-106)

Chapter 7
Integrating Quality Activities in the Project Life Cycle

Classic and other SW development methodologies:
- SDLC (requirements definition, analysis, design, coding, system tests, installation and conversion, operation and maintenance)

Integrating quality activities in the project life cycle

Prototyping


Integrating quality activities in the project life cycle

Spiral model
- An improved methodology for overseeing large and more complex projects
- Combines SDLC and prototyping
- An iterative model that introduces and emphasizes risk analysis and customer participation into the major elements of the SDLC and prototyping methodologies
- At each iteration of the spiral the following activities are performed:
  - Planning
  - Risk analysis and resolution
  - Engineering activities
  - Customer evaluation, comments, changes, etc.


Integrating quality activities in the project life cycle

The object-oriented model
- Enables easy integration of existing SW modules (objects) into newly developed SW systems
- A SW component library serves this purpose by supplying SW components for reuse
- Advantages of library reuse: economy, improved quality, shorter development time
- The advantages of the object-oriented approach grow as the store of reusable SW components grows (e.g., Microsoft and Unix)


Integrating quality activities in the project life cycle

Quality assurance activities are integrated into the development plan, which implements one or more SW development models. Quality assurance planners for a project are required to determine:
- The list of QA activities needed for the project
- For each QA activity:
  - Timing
  - Who performs it and the resources required (team members or an external QA body)
  - The resources required for removal of defects and introduction of changes

Factors affecting the intensity of quality assurance activities in development projects

Project factors:
- Magnitude of the project
- Technical complexity and difficulty
- Extent of reusable SW components
- Severity of the outcome if the project fails

Team factors:
- Professional qualifications of the team members
- Team acquaintance with the project and its experience in the area
- Availability of staff members who can professionally support the team
- Familiarity with the team members, in other words the percentage of new staff members in the team

(See example, page 132)

Verification, validation and qualification

Three aspects of the quality assurance of the SW product are examined under the headings of verification, validation and qualification (IEEE Std 610.12-1990).

Verification: the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
- It examines the consistency of the products being developed with the products developed in previous phases.
- The examiner can thus assure that the development phases have been completed correctly.

Verification, validation and qualification

Validation: the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies the specified requirements.
- It represents the customer's interest by examining the extent of compliance with his original requirements.
- Comprehensive validation reviews tend to improve customer satisfaction with the system.

Verification, validation and qualification

Qualification: the process used to determine whether a system or component is suitable for operational use.
- It focuses on operational aspects, where maintenance is the main issue.

Planners are required to determine which of these aspects should be examined in each quality assurance activity.

A model for SQA defect removal effectiveness & cost

The model deals with 2 quantitative aspects :
1. Effectiveness in removing project defects
2. The cost of removal

See page 135

99

Defect removal effectiveness

It is assumed that any SQA activity filters ( screens ) a certain percentage of existing defects.
In most cases the percentage of removed defects is somewhat lower than the percentage of detected defects, as some corrections are ineffective or inadequate.
The next SQA activity will face both the remaining defects and the new defects created in the current development phase.
It is assumed that the filtering effectiveness of accumulated defects of each QA activity is not less than 40%.
Table 7.4 page 136 lists the average filtering effectiveness by QA activities.

100

Cost of defect removal

The cost of defect removal varies by development phase; costs rise substantially as the development process proceeds.
Example : removal of a design defect detected in the design phase may require an investment of 2.5 working days; removal of the same defect may require 40 days during the acceptance tests.
Defect-removal costs based on some surveys are shown in table 7.5 page 137.

101

The Model

The model is based on the following assumptions:

The development process is linear and sequential, following the waterfall model.
A number of new defects are introduced in each development phase ( see table 7.3 page 135 ).
Review and test SQA activities serve as filters, removing a percentage of the entering defects and letting the rest pass to the next phase. If we have 30 defects and the filtering efficiency is 60%, then 18 defects will be removed and 12 will pass to the next phase.
At each phase the incoming defects are the sum of the defects not removed together with the new defects introduced ( created ) in the current development phase.
The cost is calculated for each QA activity by multiplying the number of defects removed by the relative cost of removing a defect ( table 7.5 ).
The remaining defects, passed to the customer, will be detected by him.
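The filtering arithmetic in these assumptions can be sketched in a few lines of Python. The single-phase call below reproduces the 30-defect / 60% example from the text; the tuple format is an illustrative assumption, and real phase data would come from Tables 7.3-7.5.

```python
def defect_flow(phases):
    """Simulate defect filtering through sequential QA activities.

    Each phase is an assumed (defects originated, filtering effectiveness,
    cost per removed defect) tuple -- not the book's notation.
    """
    passed, total_cost = 0, 0.0
    for originated, effectiveness, unit_cost in phases:
        incoming = passed + originated        # remaining + newly introduced
        removed = round(incoming * effectiveness)
        passed = incoming - removed           # escapes to the next phase
        total_cost += removed * unit_cost
    return passed, total_cost

# The example from the text: 30 defects, 60% filtering efficiency.
print(defect_flow([(30, 0.60, 1.0)]))  # (12, 18.0) -> 18 removed, 12 pass on
```

Chaining several tuples simulates the whole waterfall, with the escaped defects of one phase feeding the next.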
102

The model presents the following

POD : phase originated defects ( table 7.3 )


PD : passed defects.
%FE : % filtering effectiveness ( table 7.4 )
RD : removed defects
CDR : cost of defect removal ( table 7.5 )
TRC : total removal cost.

103

The model presents the following


From table 7.3 (P135)

From table 7.5 (P137)

104


Chapter 8
Reviews

IEEE definition of the review process :

A process or meeting during which a work product or set of products is presented to project personnel, managers, users, customers, or other interested parties for comment or approval.

107

Methodologies for reviewing documents

Reviews acquire special importance in the SQA process because they provide early direction and prevent the passing of design and analysis errors downstream, to stages where error detection and correction are much more complicated and costly.
The methodologies for reviewing :

Formal design reviews
Peer reviews ( inspections and walkthroughs )
Expert opinions

Standards for SW reviews are the subject of IEEE Std 1028 ( IEEE, 1997 ).
108

Reviews Objectives

( Direct Objectives )

To detect analysis & design errors as well as subjects where corrections, changes and completions are required with respect to the original specifications and approved changes.
To identify new risks likely to affect completion of the project.
To locate deviations from templates, style procedures and conventions.
To approve the analysis or design product. Approval allows the team to continue to the next development phase.

109

Reviews Objectives ( Indirect Objectives )

To provide an informal meeting place for exchange of professional knowledge about development methods, tools, and techniques.

To record analysis and design errors that will serve as a basis for future corrective actions. The corrective actions are expected to improve development methods by increasing effectiveness and quality, among other product features.

110

Formal design reviews ( DRs )

Formal design reviews are also called :

Design reviews ( DRs )
Formal technical reviews ( FTR )

Without DR approval, the development team cannot continue to the next phase of the SW development project.
A formal design review can be conducted at any development milestone requiring completion of an analysis or design document, whether that document is a requirement specification or an installation plan.
111

A list of common Formal design reviews :

DPR - Development plan review
SRSR - Software requirement specification review
PDR - Preliminary design review
DDR - Detailed design review
DBDR - Data base design review
TPR - Test plan review
STPR - Software test procedure review
VDR - Version description review
OMR - Operator manual review
SMR - Support manual review
TRR - Test readiness review
PRR - Product release review
IPR - Installation plan review
112

The Formal Design Review will focus on :

The participants
The prior preparations
The DR session
The recommended post-DR activities

113

The participants in a DR

All DRs are conducted by :

A review leader
A review team

The review leader characteristics :
Knowledge & experience in development of projects of the type reviewed.
Seniority at a level similar to if not higher than that of the project leader.
A good relationship with the project leader and his team.
A position external to the project team.

114

The Review Team

It is desirable for non-project staff to make up the majority of the review team.
A review team of 3 to 5 members is considered efficient.

115

Preparation for a DR

Preparations for a DR session are to be completed by all three main participants in the review :

The review leader
The review team
The development team

Each one is required to focus on distinct aspects of the process.
Review leader preparations ( main tasks ) :

To appoint the team members
To schedule the review sessions
To distribute the design document among team members ( hard copy, electronic copy etc )
116

Preparation for a DR

Review team preparations ( main tasks ) :

Review the design document and list comments prior to the review session
Team members may use review checklists

Development team preparations ( main tasks ) :

Prepare a short presentation of the design document
The presentation should focus on the main professional issues awaiting approval rather than wasting time on a description of the project in general.
117

Preparation for a DR

See page 330

118

The DR session

A typical DR session agenda :
1. A short presentation of the design document.
2. Comments made by members of the review team.
3. Verification and validation, in which each of the comments is discussed to determine the required actions ( corrections, changes and additions ) that the project team has to perform.
4. Decisions about the design product ( document ), which determine the project's progress. These decisions take the following three forms:
119

Decisions forms :

Full approval : enables immediate continuation to the next phase. It may be accompanied by demands for some minor corrections to be performed by the project team.
Partial approval : approval of immediate continuation to the next phase for some parts of the project, with major action items demanded for the remainder of the project.
Denial of approval : demands a repeat of the DR.
120

The DR report see appendix 8A. P175

One of the review leader's responsibilities is to issue the DR report immediately after the review session.
The report's major sections contain :

A summary of the review discussion
The decision about continuation of the project
A full list of the required actions ( corrections, changes, additions ) and the anticipated completion dates
The name(s) of the review team member(s) assigned to follow up performance of the corrections

121

The formal design review process : see figure, page 159.

122

The follow-up process

The review leader himself is required to determine whether each action item has been satisfactorily accomplished as a condition for allowing the project to continue to the next phase.
Follow-up should be documented to enable clarification.

123

Pressman (2000, chapter 8)

Pressman's 13 golden guidelines for a successful design review:

See page 157

124

Peer Reviews
two review methods ( Inspection and Walkthrough )

The major difference between formal design reviews and peer review methods is rooted in the participants & authority.
In peer reviews, as expected, the participants are the project leader's equals, members of his department and other units.
Another difference lies in the degree of authority & the objective of each review method.
The peer review's main objective lies in detecting errors & deviations from standards.
The appearance of CASE tools has reduced the value of manual reviews such as inspections and walkthroughs.

125

Inspection & Walkthrough

What differentiates a walkthrough from an inspection is the level of formality; inspection is the more formal of the two.
Inspection emphasizes the objective of corrective actions.
Walkthrough findings are limited to comments on the document reviewed.

126

Inspection & Walkthrough

Inspection is usually based on a comprehensive infrastructure, including :

Development of inspection checklists for each type of design document as well as coding language and tool, which are periodically updated.
Development of typical defect type frequency tables, based on past findings, to direct inspectors to potential defect concentration areas.
Periodic analysis of the effectiveness of past inspections to improve the inspection methodology.
Introduction of scheduled inspections into the project activity plan and allocation of the required resources, including resources for correction of detected defects.

127

Participants of peer reviews

A review leader ( main tasks & qualifications : page 161 )

The author - invariably a participant in each type of peer review

Specialized professionals :

For inspections:
A designer
A coder or implementer
A tester

For walkthroughs:
A standards enforcer
A maintenance expert
A user representative

128

Team assignments

The presenter
The Scribe

129

Preparations for a peer review session

Leader preparation
Teams preparation

130

Session Documentation

Inspection session findings report

Prepared by the scribe

Inspection session summary report

Prepared by the leader

See appendix 8b, 8c ( pages 176, 177 )

131

Post-Peer review activities

Post-inspection activities are conducted to attest to :

The prompt, effective correction and reworking of all errors by the designer/author and his team, as followed up by the inspection leader in the course of the assigned follow-up activities.
Transmission of the inspection reports to the internal Corrective Action Board ( CAB ) for analysis.

See Fig 8.2, comparison of the peer review methods ( page 166 )
132

The efficiency of peer reviews

Some of the more common metrics applied to estimate the efficiency of peer reviews:

Peer review detection efficiency ( average hours worked per defect detected )
Peer review defect detection density ( average number of defects detected per page of the design document )
Internal peer review effectiveness ( defects detected by peer reviews as a % of total defects detected by the developer )

133

Comparisons

See tables page 167-169

134

Expert opinions ( external )

External expert opinions are useful in the following situations :

Insufficient in-house professionals
A temporary lack of in-house professionals
Disagreements among senior professionals

135

Chapter 9
Software testing - Strategies

Testing Definition :

Testing is the process of executing a program with the intention of finding errors.

IEEE definitions:
1. The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.
2. The process of analyzing a software item to detect the differences between the existing and required conditions ( that is, bugs ) and to evaluate the features of the software item.

136

Software testing - Definition


Software testing is a formal process, carried out according to a SW test plan by a specialized independent testing team, in which a software unit, several integrated software units or an entire software package are examined by running the programs on a computer. All the associated tests are performed according to approved test procedures on approved test cases.
137

Software testing objectives

Direct objectives:

To identify and reveal as many errors as possible in the tested SW.
To bring the tested SW, after correction of the identified errors and retesting, to an acceptable level of quality.
To perform the required tests efficiently, within budgetary and scheduling limitations.

Indirect objectives:

To compile a record of SW errors for use in error prevention ( by corrective & preventive actions ).
138

Software testing Strategies

To test the SW in its entirety, once the completed package is available; otherwise known as big bang testing .
To test the SW piecemeal, in modules, as they are completed ( unit tests ); then to test groups of tested modules integrated with newly completed modules ( integration tests ). This process continues until the entire package is tested as a whole ( system test ). This testing strategy is usually termed incremental testing .
139

Incremental testing is performed according to two basic strategies:

Bottom-Up

Stage 1: Unit tests of modules 1 to 7.
Stage 2: Integration test A of modules 1 and 2, developed and tested in stage 1, integrated with module 8, developed in the current stage.
Stage 3: Two separate integration tests: B, of modules 3, 4, 5 and 8, integrated with module 9; and C, of modules 6 and 7, integrated with module 10.
Stage 4: System test is performed after B and C have been integrated with module 11, developed in the current stage.
140

141

Incremental testing is also performed according to two basic strategies:

Top-down ( 6 stages ), see fig page 183

The incremental paths:

Horizontal sequence ( breadth first )
Vertical sequence ( depth first )

142

Stubs & Drivers for incremental testing

Stubs and drivers are SW replacement simulators required for modules not available when performing a unit test or an integration test.
A stub ( often termed a dummy module ) replaces an unavailable lower-level module subordinate to the module tested.
It is required for top-down testing of incomplete systems. See example fig 9.2.
143

Stubs & Drivers for incremental testing

A driver is a substitute module for the upper-level module that activates the module tested.
The driver passes the test data on to the tested module and accepts the results calculated by it.
It is required in bottom-up testing until the upper-level modules are developed.
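A minimal sketch of both simulators; the discount and tax-rate modules are invented for illustration:

```python
# Lower-level module that is already implemented and under test:
def compute_discount(amount):
    """Hypothetical module under test: 10% discount above 100."""
    return amount * 0.9 if amount > 100 else amount

# Driver: stands in for the not-yet-developed upper-level module.
# It passes test data to the tested module and accepts its results.
def driver():
    for value, expected in [(50, 50), (200, 180.0)]:
        assert compute_discount(value) == expected

# Stub ( dummy module ): stands in for an unavailable lower-level
# module during top-down testing; it returns a canned answer.
def tax_rate_stub(region):
    return 0.15  # fixed plausible value, no real lookup performed

driver()
print("driver cases passed; stub returns", tax_rate_stub("EU"))
```

The driver is discarded once the real upper-level module exists, and the stub once the real lower-level module is coded.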

144

Stubs & Drivers for incremental testing

145

Bottom-Up Vs Top-Down strategies

Main adv. of bottom-up :

The relative ease of its performance.

Main disadv. of bottom-up :

The lateness at which the program as a whole can be observed ( at the stage following testing of the last module ).

Main adv. of top-down :

The possibility it offers to demonstrate the entire program's functions shortly after activation of the upper-level modules has been completed. This characteristic allows for early identification of analysis & design errors related to algorithms, functional requirements, and the like.

Main disadv. of top-down :

The relative difficulty of preparing the required stubs, which often require very complicated programming.

146

Big bang Vs. Incremental testing

The main disadvantages of big bang testing :

Identification of errors becomes difficult.
All corrections must be made at the same time.
Estimation of the required error-correction resources and testing schedule becomes a rather fuzzy endeavor.

Incremental testing advantages :

Usually performed on relatively small SW modules, as unit or integration tests ( more errors are detected ).
Identification and correction of errors is much simpler and requires fewer resources because it is performed on a limited volume of SW.
147

Software test classification.


Classification according to testing concept:

Two testing classes have been developed:

Black box ( functionality ) testing:
Identifies bugs only according to SW malfunctioning as revealed in its erroneous outputs.
In cases where the outputs are found to be correct, black box testing disregards the internal paths of calculations and processing performed.

White box ( structural ) testing:
Examines internal calculation paths in order to identify bugs.
148

White box and black box testing for the various classes of tests

149

White Box Testing

The white box testing concept requires verification of every program statement and comment.
White box testing enables :

Data processing and calculation correctness tests
SW qualification tests
Maintainability tests
Reusability tests

Every computational operation in the sequence of operations created by each test case ( path ) must be examined.
This type of verification allows us to decide whether the processing operations and their sequences were programmed correctly for the path in question, but not for other paths.

150

Data processing and calculation correctness tests
Path coverage & line coverage

Path coverage : requires covering every possible path; the total number of possible paths grows rapidly ( e.g. 10 consecutive if-then-else statements yield 1024 paths ).
Line coverage : for full line coverage, every line of code must be executed at least once during the process of testing. The line coverage metric for completeness of a line-testing plan is defined as the percentage of lines indeed executed during the tests. A flow chart and a program flow graph are used.
Example page 191.
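The gap between the two coverage notions is easy to quantify: paths multiply while lines add. The quick check below confirms the 1024-path figure quoted above:

```python
# A sequence of n independent if-then-else constructs doubles the
# number of execution paths at each step, giving 2**n paths in total,
# while line count (and hence full line coverage effort) grows linearly.
for n in (1, 5, 10):
    print(f"{n} if-then-else constructs -> {2 ** n} paths")
# 10 constructs -> 1024 paths, the figure quoted in the text.
```

This is why full path coverage is usually impractical and line coverage is used as the workable completeness metric.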
151

McCabe's cyclomatic complexity metric

Measures the complexity of a program or module at the same time as it determines the maximum number of independent paths needed to achieve full line coverage of the program.
The measure is based on graph theory, using the program flow graph.
An independent path is defined with reference to the succession of independent paths accumulated so far:
an independent path is any path on the program flow graph that includes at least one edge that is not included in any former independent path.
See table 9.5 page 194.
152

McCabe's cyclomatic complexity metric

The cyclomatic complexity metric V(G) :

V(G) = R = E - N + 2 = P + 1

where R = the number of regions
E = the number of edges
N = the number of nodes
P = the number of decisions ( nodes having more than one leaving edge )

See example page 195.
Empirical studies show that if V(G) < 5 a program is considered simple;
if it is 10 or less, not too difficult;
if it is 20 or more, it is high;
if it exceeds 50, the SW for practical purposes becomes untestable.
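The formula and the quoted thresholds can be wrapped in a short helper. The graph numbers in the example call are invented, and the "moderate" label for the 11-19 band (which the text leaves unclassified) is an assumption:

```python
def cyclomatic_complexity(edges, nodes):
    """McCabe's metric from a program flow graph: V(G) = E - N + 2."""
    return edges - nodes + 2

def complexity_rating(v):
    """Map V(G) to the empirical categories quoted in the text."""
    if v < 5:
        return "simple"
    if v <= 10:
        return "not too difficult"
    if v > 50:
        return "practically untestable"
    if v >= 20:
        return "high"
    return "moderate"  # 11-19: unclassified in the text (assumption)

# Hypothetical flow graph with 10 edges and 8 nodes: V(G) = 10 - 8 + 2 = 4.
v = cyclomatic_complexity(10, 8)
print(v, complexity_rating(v))  # 4 simple
```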
153

Example (1/4)
Imperial Taxi Services (ITS) serves one-time passengers and regular clients
(identified by a taxi card). The ITS taxi fares for one-time passengers are
calculated as follows:
(1) Minimal fare: $2. This fare covers the distance traveled up to 1000 yards and
waiting time (stopping for traffic lights or traffic jams, etc.) of up to 3 minutes.
(2) For every additional 250 yards or part of it: 25 cents.
(3) For every additional 2 minutes of stopping or waiting or part thereof: 20 cents.
(4) One suitcase: no charge; each additional suitcase: $1.
(5) Night supplement: 25%, effective for journeys between 21.00 and 06.00.
(6) Regular clients are entitled to a 10% discount and are not charged the night supplement.
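The tariff above translates directly into code of the kind white box path testing would exercise. This is a sketch under stated assumptions: `math.ceil` implements "or part thereof", and treating the 21.00-06.00 window as `hour >= 21 or hour < 6` is an interpretation of the boundary:

```python
import math

def its_fare(distance_yd, wait_min, suitcases, hour, regular_client=False):
    """ITS fare for one journey, per tariff rules (1)-(6) above."""
    fare = 2.00                                            # (1) minimal fare
    if distance_yd > 1000:                                 # (2) extra distance
        fare += 0.25 * math.ceil((distance_yd - 1000) / 250)
    if wait_min > 3:                                       # (3) extra waiting
        fare += 0.20 * math.ceil((wait_min - 3) / 2)
    if suitcases > 1:                                      # (4) first one free
        fare += 1.00 * (suitcases - 1)
    if regular_client:                                     # (6) 10% discount,
        fare *= 0.90                                       #     no supplement
    elif hour >= 21 or hour < 6:                           # (5) night +25%
        fare *= 1.25
    return round(fare, 2)

# Daytime one-time passenger, 1600 yd, 7 min waiting, 2 suitcases:
# 2.00 + 0.25*ceil(600/250) + 0.20*ceil(4/2) + 1.00 = 4.15
print(its_fare(1600, 7, 2, 12))  # 4.15
```

Each `if` adds a branch, so even this small module already has many execution paths to cover.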

Example (2/4)

Example (3/4)

Example (4/4)

Advantages and disadvantages of white box testing

158

Black box testing

Allows performing output correctness tests and most other classes of tests.
Output correctness tests :

Consume the greater part of testing resources
Apply the concept of test cases

Equivalence class partitioning

An equivalence class (EC) is a set of input variable values that produce the same output results or that are processed identically.
Its boundaries are defined by a single numeric or alphabetic value, a group of numeric or alphabetic values, a range of values, etc.
It aims to increase test efficiency and improve coverage of potential error conditions.

159

Black box testing

Equivalence class (EC)

An EC that contains only valid states is defined as a valid EC;
an EC that contains only invalid states is defined as an invalid EC.
Valid and invalid ECs should be defined for each variable in the SW.
Each valid EC and each invalid EC is included in at least one test case.
The total number of test cases required to cover the valid ECs is equal to, and in most cases significantly below, the number of valid ECs (why?)
For the invalid ECs, we must assign one test case to each new invalid EC (why?)
Equivalence classes save testing resources as they eliminate duplication of the test cases defined for each EC.

See example page 199
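A minimal sketch of the idea for a single "age" input with assumed boundaries (not the Swimming Center variables): values inside one EC are processed identically, so one test case per EC suffices.

```python
# Assumed partition of an "age" input into equivalence classes:
VALID_ECS = [(0, 16), (17, 60), (61, 120)]   # three valid ECs
INVALID_ECS = [(-10, -1), (121, 200)]        # below and above the range

def classify(age):
    """Return the index of the valid EC a value falls into, or 'invalid'."""
    for i, (low, high) in enumerate(VALID_ECS):
        if low <= age <= high:
            return i
    return "invalid"

# One representative test case per EC (here: the lower boundary value).
test_cases = [low for low, _ in VALID_ECS + INVALID_ECS]
print(test_cases)                    # [0, 17, 61, -10, 121]
print(classify(10), classify(150))   # 0 invalid
```

Five test cases thus cover all five ECs, instead of testing many values that the program would process identically.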

160

Example

The SW module in question calculates entrance ticket prices for a Swimming Center.
The Center's ticket price depends on four variables: day (weekday, weekend), visitor's status (OT = one time, M = member), entry hour (6.00-19.00, 19.01-24.00) and visitor's age (up to 16, 16.01-60, 60.01-120).
The entrance ticket price table is shown in the table below.

Example

The equivalence classes and the corresponding test case values for the above example are presented in the tables below.

Example

A total of 15 ECs were defined for the ticket price module: nine valid ECs and six invalid ECs.
The test cases for these ECs, including their boundary values, are presented below.

Black box testing

Documentation tests

Common components of the documentation :

Functional descriptions of the software system
Installation manual
User manual
Programmer manual

Document testing should include the following :

Document completeness checks
Document correctness tests
Document style and editing inspections

164

Black box testing

Availability tests ( reaction time )

The time needed to obtain the requested information
The time required for firmware to react
Of more importance in on-line applications

Reliability tests

Relate to events occurring over time, such as :
average time between failures
average time for recovery after system failure
average downtime per month
It is better to carry them out once the system is completed.
Problem: the testing process may need hundreds of hours, and a comprehensive test case file must be constructed.

165

Black box testing

Stress tests - load tests

Relate to the functional performance of the system under maximal operational load ( maximal transactions per minute, etc. ).
They are usually realized for loads higher than those indicated in the requirements specification.

166

Black box testing

Stress tests - durability tests

They are typically required for real-time firmware.
These tests include :

Firmware responses to climatic effects such as extreme hot and cold temperatures, dust, etc.
Operation failures resulting from sudden electrical failures, voltage jumps, and sudden cutoffs in communications.

167

Black box testing

Software system security tests aim at :

Preventing unauthorized access to the system or parts of it,
Detection of unauthorized access and of the activities performed by the penetrator,
Recovery of damages caused by unauthorized penetration.

168

Black box testing

Maintainability tests

Concerned with :

System structure compliance with the standards and development procedures
Programmer's manual - prepared according to approved documentation standards
Internal documentation - prepared according to coding procedures and conventions; covers the system's documentation requirements

169

Black box testing

Portability tests

Test the possibility of using other operating systems, hardware or communication equipment standards with the tested SW.

Interoperability tests

Test the capabilities of interfacing with other HW or SW, to receive inputs or send outputs.
170

Advantages and disadvantages of black box testing

Advantages :

Allows carrying out the majority of testing classes, such as load tests and availability tests.
Requires fewer resources than those required for white box testing.

Disadvantages :

Coincidental aggregation of several errors may produce a correct response for a test case and thus prevent error detection.
Absence of control of line coverage.
Impossibility of testing the quality of coding and its strict adherence to the coding standards.

171

Chapter 10

Software testing
Implementation

172

The testing process

Determining the appropriate software


quality standard

Planning the tests

Designing the tests

Performing the tests (implementation)


173

Determining the appropriate software quality standard

Depends on the characteristics of the software's application :

Nature and magnitude of the damages in case of system failure.
The higher the expected level of damage, the higher the required standard for software quality.
See Table 8.1 (Page 219)

174

Planning the tests

Unit tests :
Deal with small units of software or modules.

Integration tests :
Deal with several units that combine into a subsystem.

System tests :
Refer to the entire software package/system.

175

Planning the tests

Tests are commonly documented in a software test plan (STP).
Issues to consider before initiating a specific test plan :

What to test?
Which sources to use for test cases?
Who is to perform the tests?
Where to perform the tests?
When to terminate the tests?
176

What to test?

It is always preferred to perform full and comprehensive testing :

This will ensure top quality software,
but requires the investment of vast resources.

So we must decide :

Which modules should be unit tested
Which integrations should be tested
How to allocate the available resources when performing the tests ( priorities )
177

Priority rating method

Use two factors :

Factor A: Damage severity level - the severity of results in case the module or application fails.
Factor B: Software risk level - the probability of failure.

Combined rating (C) options :

C = A + B
C = k*A + m*B
C = A * B

where k and m are constants ( see page 222 )
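The three combination options can be sketched as one helper; the scale values in the example calls and the weights k = 2, m = 1 are illustrative assumptions:

```python
def combined_rating(a, b, k=1.0, m=1.0, scheme="sum"):
    """Combined rating C from damage severity A and SW risk level B."""
    if scheme == "sum":        # C = A + B
        return a + b
    if scheme == "weighted":   # C = k*A + m*B
        return k * a + m * b
    if scheme == "product":    # C = A * B
        return a * b
    raise ValueError(f"unknown scheme: {scheme}")

# An application rated A = 4 (damage) and B = 2 (risk) on 5-level scales:
print(combined_rating(4, 2))                               # 6
print(combined_rating(4, 2, k=2, m=1, scheme="weighted"))  # 10
print(combined_rating(4, 2, scheme="product"))             # 8
```

Sorting the applications by C then yields the priority order for allocating testing resources.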

178

Priority rating method Example


Super Teacher is a software package designed to support
teachers in managing the grades of elementary school
pupils.
The package includes eight applications. Ratings are
required in order to plan the allocation of testing
resources for each application.
Applications 7 and 8 are based on high percentages of
reused code. Application 2 was developed by team C,
composed of new employees. A five-level scale is used
for rating damage severity (Factor A) as well as software
risk severity level (Factor B).

179

Priority rating method Example

180

Which sources to use for test cases?

Should we use :
Synthetic test cases
Real-life test cases

Stratified sampling ( see example page 235 ) :
Break down the random sample into subpopulations of test cases by
reducing the sampling proportion of the majority regular population and
increasing the sampling proportion of small populations and high potential error populations.
This minimizes the number of test case repetitions.
181

Which sources to use for test cases?

For each component of the testing plan we must decide :

Whether to use a single source of test cases, a combined source, or both
How many test cases from each source are to be prepared
The characteristics of the test cases

182

Who performs the tests?

Integration tests and unit tests :

Generally performed by the SW development team
In some instances by the testing unit

System tests :

Usually performed by an independent testing team ( internal or external )

183

Where to perform the tests?

Testing is usually performed at the software developer's site.
When the test is performed by external testing consultants, it is performed at the consultants' site.
As a rule, the environment at the customer's site differs from that at the developer's site, despite efforts to simulate that environment.

184

When are tests terminated?

The completed implementation route

The mathematical models application route

The error seeding route

Termination after resources have petered out

The dual independent testing teams route

See pages (226-227)

185

The dual independent testing teams route

Na = number of errors detected by team A
Nb = number of errors detected by team B
Nab = number of errors detected by both team A and team B
Pa = proportion of errors detected by team A
Pb = proportion of errors detected by team B
Pab = proportion of errors detected by both team A and team B
P(a)(b) = proportion of errors undetected by both teams
N(a)(b) = number of errors undetected by both teams
N = total number of errors in the software package/program

(1) Pab = Pa * Pb = Nab / N
(2) Pa = Na / N
(3) Pb = Nb / N
(4) P(a)(b) = (1 - Pa) * (1 - Pb)
(5) N = Na * Nb / Nab
(6) Pa = Nab / Nb
(7) Pb = Nab / Na
(8) P(a)(b) = (1 - Pa) * (1 - Pb) = (Na - Nab) * (Nb - Nab) / (Na * Nb)
(9) N(a)(b) = (Na - Nab) * (Nb - Nab) / Nab
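Formulas (5)-(9) amount to a capture-recapture style estimate and can be checked numerically; the error counts in the example call are invented:

```python
def dual_team_estimates(na, nb, nab):
    """Estimates from the dual independent testing teams route.

    na, nb: errors detected by teams A and B; nab: detected by both.
    Assumes the two teams detect errors independently.
    """
    n = na * nb / nab                       # (5) estimated total errors
    pa, pb = nab / nb, nab / na             # (6), (7) detection proportions
    missed = (na - nab) * (nb - nab) / nab  # (9) errors missed by both teams
    return n, pa, pb, missed

# Team A finds 40 errors, team B finds 30, and 20 errors are found by both:
n, pa, pb, missed = dual_team_estimates(40, 30, 20)
print(n, pb, missed)  # 60.0 0.5 10.0
```

When the estimated number of undetected errors drops below an acceptable threshold, testing can be terminated.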

186

Test design

Composed of :

Detailed design and procedures for each test - the software test procedure document
Test case database/file - the test case file document

In some cases the two documents are integrated in one document called the software test description (STD).

187

Test implementation

In general it consists of :

A series of tests
Corrections of detected errors
Re-tests ( regression tests )

Regression testing is done to verify that :

The errors detected in previous test runs have been properly corrected
No new errors have entered as a result of faulty corrections
188

Test implementation

It is advisable to re-test according to the original test procedure.
Usually only a portion of the original test procedure is re-tested to save testing resources.
This involves the risk of not detecting new errors produced when performing the corrections.
189

Automated testing

This process includes :

Test planning
Test design
Test case preparation
Test performance
Test log and report preparation
Re-testing after correction of detected errors ( regression tests )
Final test log and report preparation, including comparison reports
190

Automated testing

Types of automated tests

Code auditing ( qualification testing ) :

Does the code fulfill the code structure/style procedures?
Module size, levels of loop nesting and subroutine nesting
Prohibited constructs, such as GOTO
Naming conventions for variables, files, etc.

Does the internal program documentation follow the coding style procedures?
Location of comments in the file
Help index and presentation style
191

Automated testing

Types of automated tests

Functional tests :
Replace manual black-box correctness tests
These tests can be executed with minimal effort

Coverage monitoring :
Produces reports about the line coverage achieved

192

Automated testing

Types of automated tests

Load tests :
If performed manually, load testing is in most cases impractical and in others impossible.
The solution is to use computerized simulations.
In general it is combined with availability and efficiency tests.
In this test the load is gradually increased to the specified maximal load and beyond.

193

Automated testing

Types of automated tests

Load tests - the tester may wish to :

Change the hardware
Change the scenario in order to reveal the load contributed by each user or event
Test an entirely different scenario
Test new combinations of hardware and scenario components

The tester will continue these iterations until he finds the appropriate hardware configuration.

See example P 240


194

Automated testing

Advantages of automated tests

Accuracy and completeness of performance
Accuracy of results logs and summary reports
Comprehensiveness of information
Fewer manpower resources required to perform tests
Shorter duration of testing
Performance of complete regression tests
Performance of test classes beyond the scope of manual testing

195

Automated testing

Disadvantages of automated tests


High investments required in package
purchasing and training.
High package development investment
costs.
High manpower requirements for test
preparation.
Considerable testing areas left uncovered.

See example 245

196

Alpha site tests

Alpha site tests are tests of a new software package that are performed at the developer's site by the customer
The identified errors are expected to include errors that only a real user can reveal, and thus should be reported to the developer

197

Beta site tests

Beta site tests are much more commonly applied than alpha site tests
They are applied to an advanced version of the software package
The developer offers the software free of charge to one or more potential users, who test it at their own sites
198

Alpha and beta site testing

Advantages
Identification of unexpected errors
A wider population in search of errors
Low costs

Disadvantages
A lack of systematic testing
Low quality error reports
Difficult to reproduce the test environment
Much effort is required to examine reports

199

Chapter 11
Assuring the quality of software
maintenance components

200

Maintenance service components

Corrective maintenance
Support services and software corrections

Adaptive maintenance
Adapts the software package to differences in new customer requirements

Functionality improvement maintenance
Perfective maintenance: new functions added to the software so as to enhance performance
Preventive maintenance: activities that improve reliability and system infrastructure for easier and more efficient future maintainability

201

Causes of user difficulties

Code failure
A failure of the software itself

Documentation failure
A failure of the user manual or help screens
Incomplete, vague or imprecise documentation

Insufficient knowledge of the software system
Users' failure to use the documentation supplied

202

Quality factors that impact software maintenance

203

Maintenance policy

Version development policy

How many versions of the software should be operative simultaneously?
The number of versions becomes a major issue for COTS software packages
The policy can take a sequential or tree form

204

Maintenance policy

Version development policy

Sequential version policy

One version is made available to the entire customer population
Includes a profusion of applications that exhibit high redundancy, an attribute that enables the software to serve the needs of all customers
The software must be revised periodically, but once a new version is completed, it replaces the version currently used by the entire user population

205

Maintenance policy

Version development policy


Tree version policy

Supports marketing efforts by developing a specialized, targeted version for groups of customers
A new version is inaugurated by adding special applications or omitting applications
Versions vary in complexity and level of application

206

Maintenance policy

Version development policy


Tree version policy

The software package can evolve into a multi-version package: a tree with several main branches and numerous secondary branches, each branch representing a version with specialized revisions
More difficult and time-consuming to maintain
Some organizations apply a limited tree version policy

See example P 260

207

Maintenance policy

Change policy
Refers to the method of examining
each change request and the
criteria used for its approval
Permissive policy contributes to an
often-unjustified increase in the
change task load

208

Maintenance policy

Change policy
A balanced policy is preferred

Allows staff to focus on the most


important and beneficial changes, as well
as those that they will be able to perform
within a reasonable time and according to
the required quality standards

209

Maintenance software quality assurance tools

SQA tools for corrective maintenance
SQA tools for functionality improvement maintenance
SQA infrastructure tools for software maintenance
SQA tools for managerial control of software maintenance

210

Maintenance software quality assurance tools

SQA tools for corrective maintenance
Entail user support services and software corrections (bug repairs)
Most bug repair tasks require the use of a mini-testing tool
Required to handle small-scale repair ("patch") tasks: a small number of code lines to be corrected rapidly
211

Maintenance software quality assurance tools

SQA tools for functionality improvement maintenance
The same project life cycle tools are applied (reviews, testing, etc.)
They are also implemented for large-scale adaptive maintenance tasks

212

Maintenance software quality assurance tools

SQA infrastructure components for software maintenance
We need SQA infrastructure tools for:

Maintenance procedures and work instructions
Supporting quality devices
Preventive and corrective actions
Configuration management
Documentation and quality record control
Training and certification of maintenance teams

213

Maintenance software quality assurance tools

Maintenance procedures and work instructions
Remote handling of requests for service
On-site handling
User support service
Quality assurance control
Customer satisfaction surveys

214

Maintenance software quality assurance tools

Supporting quality devices
Checklists for locating the causes of a failure
Templates for reporting how software failures were solved
Checklists for preparing a mini-testing procedure document

215

Maintenance software quality assurance tools

Preventive and corrective actions

Directed and controlled by the CAB (Corrective Action Board), based on indicators such as:
Changes in the content and frequency of customer requests for user support services
Increased average time invested in complying with customers' user support requests
Increased average time invested in repairing customers' software failures
Increased percentage of software correction failures

216

Maintenance software quality assurance tools

Configuration management
Failure repair relies on:
The software system version installed at the customer's site
A copy of the current code and its documentation

Group replacement
Decision making about the advisability of performing a group replacement
Planning the group replacement, allocating resources and determining the timetable

Maintenance documentation and quality records

217

Chapter 12

Assuring the quality of external participants' contributions

218

Types of external participants

Subcontractors (outsourcing)
Undertake to carry out parts of a project
Advantages: staff availability, special expertise or low prices

Suppliers of COTS software and reused software modules
Advantages:
Reduce time and cost
Increase quality, since these components have already been tested and corrected by the developers and previous customers

219

Types of external participants

The customers themselves

Why?
Apply the customer's special expertise
Respond to commercial or other security needs
Keep internal development staff occupied
Prevent future maintenance problems

Disadvantages:
Requires a good customer-supplier relationship

220

Types of external participants

221

Risks and benefits of introducing external participants

222

SQA tools for assuring the quality of external participants' contributions

1. Requirements document reviews
The contractor takes the role of the customer
2. Participation in design reviews and software testing
3. Preparation of progress reports of development activities
4. Review of deliverables (documents) and acceptance tests

223

SQA tools for assuring the quality of external participants' contributions

5. Establishment of a project coordination and joint control committee
Confirmation of the project timetable and milestones
Follow-up according to project progress reports
Making decisions
about problems arising during follow-up
about problems identified in design reviews and software tests

6. Evaluation of choice criteria regarding external participants
Collection of information
Systematic evaluation of the suppliers
224

SQA tools for assuring the quality of external participants' contributions

Collection of information
Internal info about suppliers and subcontractors
A past performance file based on cumulative experience; requires systematic reporting
Auditing the supplier's quality system
Opinions of regular users of the supplier's products
Internal units, other organizations, and professional organizations that certified the supplier as qualified to specialize in the field

225

Chapter 17

Corrective and preventive actions

226

Corrective and preventive actions (CAPA) definition:

Corrective actions:
A regularly applied feedback process that includes collection of information on quality non-conformities, identification and analysis of the sources of irregularities, as well as development and assimilation of improved practices and procedures, together with control of their implementation and measurement of their outcomes.
227

Corrective and preventive actions (CAPA) definition:

Preventive actions:
A regularly applied feedback process that includes collection of information on potential quality problems, identification and analysis of departures from quality standards, development and assimilation of improved practices and procedures, together with control of their implementation and measurement of their outcomes.
228

The corrective and preventive actions process

Information collection
Analysis of information
Development of solutions and
improved methods
Implementation of improved methods
Follow-up.

229

The corrective and preventive actions process

230

Information collection

231

Information collection

232

Analysis of collected information

Screening the information and identifying potential improvements
Analysis of potential improvements:
Expected types and levels of damage
Causes of faults
Estimates of the extent of organization-wide potential faults of each type
Generating feedback
233

Development of solutions and their implementation

1. Updating relevant procedures
2. Changes in practices, including updating of relevant work instructions
3. Shifting to a development tool that is more effective and less prone to the detected faults
4. Improvement of reporting methods
5. Initiatives for training, retraining or updating staff

See examples page 357

234

Follow-up of activities

1. Follow-up of the flow of development and maintenance CAPA records
2. Follow-up of implementation
3. Follow-up of outcomes

235

Organizing for preventive and corrective actions

CAPA activities depend on:
The existence of a permanent core organizational body
Ad hoc team participants

These can be:
Members of the SQA unit
Top-level professionals, development and maintenance department managers
236

Organizing for preventive and corrective actions

CAB committee tasks include:
Collecting CAPA records
Screening the collected information
Nominating entire ad hoc CAPA teams
Promoting implementation of CAPA
Following up

237

Chapter 21

Software quality metrics

238

Quality metrics

"You can't control what you can't measure"
Tom DeMarco (1982)

IEEE definition
A quantitative measure of the degree
to which an item possesses a given
quality attribute

239

Main objectives of software quality metrics

To facilitate management control, planning and execution of the appropriate managerial interventions, based on deviations of actual:
Functional (quality) performance from planned performance
Timetable and budget performance from planned performance

To identify situations that require development or maintenance process improvement in the form of preventive or corrective actions

240

Software quality metrics requirements

241

Software quality metrics

Classification

Process metrics
Related to the software development process

Product metrics
Related to software maintenance

242

Software quality metrics


Depend on system size measures (KLOC, function points)

KLOC
Measures the size of software by thousands of code lines (the classic metric)
Metrics using KLOC are limited to software systems that use the same programming language or development tool
The number of code lines may be counted only after programming is completed
243

Software quality metrics

Function points
A method used to provide pre-project estimates of project size, stated in terms of required development resources
It measures project size by functionality, as indicated in the customer's or tender requirement specification

244

Software quality metrics

Function point advantages

Estimates can be prepared at the pre-project stage
Supports project management preparation
Does not depend on development tools or programming languages
Relatively high reliability

245

Software quality metrics

Function point disadvantages

Based on detailed requirements specifications or software system specifications, which are not always available at the pre-project stage
Requires an experienced function point team
Cannot be universally applied
More successful in data processing systems

246

Process metrics
1. Software process quality metrics
Error density metrics
Error severity metrics
2. Software process timetable metrics
3. Error removal effectiveness metrics
4. Software process productivity metrics

247

Process metrics

Error density metrics

Calculation of error density metrics involves two measures:

Software volume (KLOC or function points)

Errors counted
Relate to the number of errors or to the weighted number of errors (weighted by severity)
Application of weighted measures can lead to decisions different than those arrived at with simple (unweighted) metrics
Weighted measures are assumed to be better indicators of adverse situations

248

Process metrics

Error density metrics (example)

Number of code errors (NCE)


Weighted number of code errors (WCE)

249

Process metrics

Error density metrics (example)

250
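The NCE/WCE calculation named on the example slides can be sketched in a few lines. This is a hedged illustration: the severity weights (low=1, medium=3, high=9) are assumptions chosen for the example, not values given in the text.

```python
# Hedged sketch of the NCE/WCE error density metrics. Severity weights are
# illustrative assumptions, not values taken from the text.

def error_density(severities, kloc, weights=None):
    """severities: one label per detected error; kloc: size in KLOC."""
    if weights is None:
        weights = {"low": 1, "medium": 3, "high": 9}  # assumed weights
    nce = len(severities)                         # NCE: number of code errors
    wce = sum(weights[s] for s in severities)     # WCE: weighted number of errors
    return {"CED": nce / kloc, "WCED": wce / kloc}  # densities per KLOC

densities = error_density(["low", "high", "medium", "low"], kloc=2)
# With the assumed weights: NCE = 4, WCE = 1 + 9 + 3 + 1 = 14,
# so CED = 2.0 and WCED = 7.0 errors per KLOC
```

Note how the weighted density (7.0) paints a worse picture than the unweighted one (2.0), illustrating why weighted measures are assumed to be better indicators of adverse situations.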

Process metrics

Error severity metrics

251

Process metrics
2. Software process timetable metrics
Based on accounts of success (completion of milestones per schedule) in addition to failure events (non-completion per schedule)
An alternative approach calculates the average delay in completion of milestones

252

Process metrics
2. Software process timetable metrics

253
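Both timetable approaches described above can be sketched directly: the share of milestones completed per schedule, and the average delay in completing them. The milestone dates below are invented sample data.

```python
# Hedged sketch of the two timetable metrics: on-schedule fraction and
# average milestone delay. The dates are invented sample values.
from datetime import date

milestones = [  # (planned completion, actual completion)
    (date(2024, 1, 10), date(2024, 1, 10)),
    (date(2024, 2, 1), date(2024, 2, 8)),
    (date(2024, 3, 1), date(2024, 3, 1)),
]

on_schedule = sum(1 for planned, actual in milestones if actual <= planned)
observance = on_schedule / len(milestones)      # fraction completed on time
delays = [(actual - planned).days for planned, actual in milestones]
average_delay = sum(delays) / len(milestones)   # alternative: mean delay in days
```

With the sample data, 2 of 3 milestones are on time and the average delay is 7/3 days; the two metrics reward the same project differently, which is the point of having both.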

Process metrics
3. Error removal effectiveness metrics
Can be measured by the software quality assurance system after a period of regular operation (6 or 12 months)
The metrics combine the error records of the development stage with the failure records compiled during any defined period

254

Process metrics
3. Error removal effectiveness metrics

255
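The combination just described can be sketched as a single ratio: errors removed during development against the failures recorded during the defined operating period. Variable names and figures here are illustrative assumptions, not taken from the text.

```python
# Hedged sketch of an error removal effectiveness calculation: development-
# stage error records combined with first-year failure records. Numbers are
# invented sample values.

errors_removed_in_development = 380
failures_during_first_year = 20

effectiveness = errors_removed_in_development / (
    errors_removed_in_development + failures_during_first_year
)
# With these figures, 380 of 400 total errors (95%) were removed before release
```

A higher ratio means fewer errors escaped the development stage into regular operation.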

Process metrics
4. Software process productivity metrics
Include:
Direct metrics: deal with a project's human resources productivity
Indirect metrics: focus on the extent of software reuse
Software reuse substantially affects productivity and effectiveness

256

Process metrics
4. Software process productivity metrics

257
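The direct/indirect distinction above can be sketched with one metric of each kind: development hours per KLOC (direct) and the reused fraction of the code (indirect). All figures are invented sample values.

```python
# Hedged sketch of one direct and one indirect productivity metric.
# All numbers are invented sample values.

dev_hours = 5_000    # total working hours invested in the project
kloc = 25            # system size, thousands of code lines
reused_kloc = 10     # thousands of reused code lines among them

direct_productivity = dev_hours / kloc    # hours per KLOC (lower is better)
reuse_fraction = reused_kloc / kloc       # extent of software reuse
```

Here 200 hours were invested per KLOC, with 40% of the code reused; the reuse fraction helps explain the direct figure, which is why the two are read together.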

Product metrics

Refer to the software system's operational phase (customer services)
Customer services are of two types:

Help desk services (HD)
Advising on the method of application of the software and solution of customer implementation problems

Corrective maintenance services
Correction of software failures identified by users or detected by the customer service team

258

Product metrics

Can be classified into:
1. HD quality metrics
2. HD productivity and effectiveness metrics
3. Corrective maintenance quality metrics
4. Corrective maintenance productivity and effectiveness metrics

259

Product metrics

1. HD quality metrics include:

HD calls density metrics
The number of calls received from customers
The severity of the HD issues raised

HD success metrics
A success is achieved by completing the required service within the time determined in the service contract

260

Product metrics

2. HD productivity and effectiveness metrics

Productivity metrics
Relate to the total resources invested during a specified period
E.g. HD productivity, based on:
KLMC = thousands of lines of maintained software code
HDYH = total yearly working hours invested in HD servicing of the software system

Effectiveness metrics
Relate to the resources invested in responding to an HD customer call
261
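The productivity/effectiveness contrast above can be sketched with the slide's own variables. The productivity figure relates HDYH to KLMC as defined on the slide; the effectiveness figure divides the same hours by a yearly call count, a variable assumed here for illustration. All numbers are invented samples.

```python
# Hedged sketch of HD productivity vs. effectiveness metrics. The yearly
# call count is an assumed variable; all figures are invented samples.

hdyh = 2_000        # HDYH: total yearly working hours invested in HD servicing
klmc = 100          # KLMC: thousands of lines of maintained software code
yearly_calls = 500  # assumed: number of HD customer calls during the year

hd_productivity = hdyh / klmc           # hours per thousand maintained lines
hd_effectiveness = hdyh / yearly_calls  # hours invested per customer call
```

Productivity normalizes by the size of the maintained code, effectiveness by the individual call, matching the period/call distinction on the slide.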

Product metrics

3. Corrective maintenance quality metrics

Failures of maintenance services metrics
The maintenance services were unable to complete the failure correction on time, or the correction performed failed

Software system availability metrics
The disturbances caused to the customer when the services of the software are unavailable or only partly available
262
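An availability metric of the kind named above can be sketched as the fraction of planned service time during which the software was available. The hour figures are invented sample values.

```python
# Hedged sketch of a software system availability metric. The hour figures
# are invented sample values.

planned_service_hours = 8_760   # planned yearly service hours
failure_hours = 88              # hours in which the service was unavailable

availability = (planned_service_hours - failure_hours) / planned_service_hours
print(f"Yearly availability: {availability:.2%}")
```

Partial availability could be handled the same way by counting only the hours in which specific services were down, weighted by their importance to the customer.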

Product metrics

4. Corrective maintenance productivity and effectiveness metrics

Corrective maintenance productivity
The total human resources invested in maintaining a given software system
A software maintenance system displaying higher productivity requires fewer resources for its maintenance task

Corrective maintenance effectiveness
The resources invested in correction of a single failure
An effective software maintenance system requires fewer resources for correcting one failure

263

Implementation of software quality metrics

1. Definition of software quality metrics
2. Application of the metrics
3. Statistical analysis of collected metrics data (descriptive or analytical)
4. Response to the metrics by:
CAB committees
Project teams
264

Limitations of software metrics

The main problems in software quality metrics are rooted in the attributes measured:
Programming style
Volume of documentation comments
Software complexity
Percentage of reused code
Professionalism of design review and software testing teams
Reporting style of the review and testing results
265

CHAPTER 22

Costs of software quality

266

Cost of software quality metrics: objectives

Control of organization-initiated costs to prevent and detect software errors
Evaluation of the economic damage of software failures, as a basis for revising the SQA budget
Evaluation of plans to increase or decrease SQA activities, or to invest in a new or updated SQA infrastructure, on the basis of past economic performance
267

Indicators of the success of an SQA plan

Percentage of cost of software quality out of total software development costs
Percentage of software failure costs out of total software development costs
Percentage of cost of software quality out of total software maintenance costs
Percentage of cost of software quality out of total sales of software products and software maintenance
268

The classic model of cost of software quality

Developed by Feigenbaum in the 1950s
Provides a methodology for classifying the costs associated with product quality assurance from an economic point of view
Developed for the quality situations found in manufacturing organizations
269

The classic model of cost of software quality

Classifies costs related to product quality into two general classes:

Costs of control
Prevention costs
Appraisal costs

Costs of failure of control
Internal failure costs
External failure costs
270
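The two classes and four subclasses above can be illustrated by tagging each cost item with a subclass and summing. The items and amounts below are invented examples, not taken from the text.

```python
# Hedged illustration of the classic model's classification: each cost item
# is tagged with one of the four subclasses. Items and amounts are invented.

cost_items = [
    ("SQA training for new employees", "prevention", 12_000),
    ("Formal design reviews", "appraisal", 30_000),
    ("Unit and system testing", "appraisal", 55_000),
    ("Rework before release", "internal failure", 18_000),
    ("Warranty bug fixes", "external failure", 40_000),
]

totals = {}
for _, subclass, amount in cost_items:
    totals[subclass] = totals.get(subclass, 0) + amount

costs_of_control = totals["prevention"] + totals["appraisal"]
costs_of_failure = totals["internal failure"] + totals["external failure"]
# Here: costs of control = 97,000 and costs of failure of control = 58,000
```

Summing by subclass is exactly what lets management compare control spending against failure spending, which the balance concept later in the chapter relies on.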

The classic model of cost of software quality

271

1) Prevention costs

Typical prevention costs include:

Investments in development of new SQA infrastructure components, or regular updating of those components:
Support devices: templates, checklists
Software quality metrics

Regular implementation of SQA preventive activities:
Instruction of new employees in SQA subjects and procedures related to their positions
Certification of employees for positions that require certification
272

1) Prevention costs

Typical prevention costs include:

Control of the SQA system through performance of:
Internal quality reviews
External quality audits by customers and SQA system certification organizations

273

2) Appraisal costs

Typical appraisal costs cover:

Reviews:
Formal design reviews (DRs)
Peer reviews (inspections and walkthroughs)

Costs of software testing:
Unit, integration and software system tests
Acceptance tests

Costs of assuring the quality of external participants
274

3) Internal failure costs

Typical costs of internal failures are:
Costs of redesign or design corrections
Costs of re-programming or correcting programs
Costs of repeated design reviews and re-testing (regression tests)

275

4) External failure costs

Can be overt or hidden

Typical overt external failure costs:
Resolution of customer complaints during the warranty period
Correction of software bugs detected during regular operation
Tests of the corrected software, followed by installation of the corrected code

276

4) External failure costs

Typical overt external failure costs:
Damages paid to customers in case of a severe software failure
Compensation of customers' purchase costs
Insurance against customers' claims in case of severe software failure

277

4) External failure costs

Typical examples of hidden external failure costs are:
Reduction of sales to customers suffering from high rates of software failures
Reduction of sales motivated by the firm's damaged reputation
Increased investment in sales promotion
Reduced prospects of winning tenders and, as a result, underpricing to prevent competitors from winning tenders

278

Application of a cost of software quality system

1) Definition of a cost of software quality model:
Definition of the list of cost items, specifically for the organization, department, team or project
Each of the cost items should be related to one of the subclasses of the chosen cost of software quality model

279

Application of a cost of software quality system

1) Definition of a cost of software quality model: example
The SQA unit of the Information Systems Department of a commercial company adopted the classic model as its cost of software quality model. The SQA unit defined about 30 cost items to comprise the model.

280

Application of a cost of software quality system

1) Definition of a cost of software quality model: example

281

Application of a cost of software quality system

2) Definition of the cost data collection method
Either develop an independent system for collecting data
Or rely on the currently operating management information system (MIS)
The use of MIS systems already in place is preferable

282

Application of a cost of software quality system

3) Implementation of a cost of software quality system
Assigning responsibility for reporting and collecting quality cost data
Follow-up:
Review of cost reporting, proper classification and recording
Review of the completeness and accuracy of reports
Updating and revising the definitions of the cost items based on feedback

283

Application of a cost of software quality system

4) Actions taken in response to the model's findings

Depend on the software quality balance concept:
An increase in control costs is expected to yield a decrease in failure-of-control costs, and vice versa
Management is usually interested in minimal total quality costs rather than in the control or failure-of-control cost components

284
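The balance concept above can be sketched numerically: as control costs rise, failure-of-control costs are expected to fall, and management looks for the minimal total. The cost pairs below are invented sample data.

```python
# Hedged sketch of the software quality balance: management seeks the
# control/failure combination with the minimal total. Data are invented.

scenarios = [  # (control cost, failure-of-control cost), arbitrary units
    (20, 180), (40, 110), (60, 70), (80, 55), (100, 50), (120, 52),
]

best = min(scenarios, key=lambda pair: pair[0] + pair[1])
# `best` holds the pair with the minimal total cost of software quality
```

Note that the minimum-total point is not the point of lowest failure cost: spending more on control past the balance point costs more than it saves.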

Application of a cost of software quality system

4) Actions taken in response to the model's findings

285

Application of a cost of software quality system

4) Actions taken in response to the model's findings
Examples of typical decisions taken in the wake of cost of software quality analysis, and their expected results

286

Problems in the application of cost of software quality metrics

Problems of quality cost data accuracy and completeness:
Inaccurate or incomplete identification and classification of quality costs
Negligent reporting by team members and others
Biased reporting of software costs
Biased recording of external failure costs related to customer compensation
287

Problems in the application of cost of software quality metrics

Problems when collecting data on managerial preparation and control costs:
Contract review and progress control activities are performed in part-time mode, subdivided into several disconnected activities of short duration

288

Problems in the application of cost of software quality metrics

Problems when collecting data on managerial failure costs:

Determination of responsibility for schedule failures; these costs can be assigned to the customer, the development team or management

Payment of overt and formal compensation occurs after the project is completed, much too late for efficient application of the lessons learned
This lateness leaves open the question of whether the failure was managerial or external

289

Chapter 23

SQA Standards

290

The benefits of standards

The ability to apply software development and maintenance methodologies and procedures of the highest professional level
Better mutual understanding and coordination among:
Development teams
Development and maintenance teams
External participants in the project
Customers

291

Classification of SQA standards

IEC (International Electrotechnical Commission)
SEI (Software Engineering Institute)

292

ISO 9000-3 quality management system: guiding principles

Customer focus
Leadership
Involvement of people
Process approach
System approach to management
Continual improvement
Factual approach to decision making
Mutually supportive supplier relationships
293

Process to obtain ISO 9000-3 certification

1) Planning the process leading to certification: the action plan
An internal survey of the current SQA system is carried out, to construct an action plan that includes:
A list of activities to be performed, including timetables
Estimates of the resources required to carry out each activity
Organizational resources

294

Process to obtain ISO 9000-3 certification

2) Development of the organization's SQA system to a level adequate for ISO 9000-3 requirements
Development of a quality manual and a comprehensive set of SQA procedures
Development of SQA infrastructure
Staff training and certification programs
Preventive and corrective action procedures, including the CAB committee
Development of a project progress control system

295

Process to obtain ISO 9000-3 certification

3) Implementation of the organization's SQA system
This includes setting up a staff instruction program and support services to solve problems that may arise
Special emphasis is placed on the team leaders and unit managers whose units will be involved
Internal quality audits are carried out to verify the success of the implementation
These determine whether the organization has reached a satisfactory level of implementation

296

Process to obtain ISO 9000-3 certification

4) Undergoing the certification audits

Carried out in two stages:
Review of the quality manual and procedures
Verification audits of compliance with the requirements defined by the organization in its quality manual and procedures:
Has the staff been adequately instructed on SQA topics, and do they display a satisfactory level of knowledge?
Have the relevant procedures, project plans, progress reports, etc. been properly implemented?

The main sources of information are:
Interviews with members of the audited unit
Review of documents

297

Process to obtain ISO 9000-3 certification

298

Capability Maturity Models (CMM)

Developed by Carnegie Mellon University's Software Engineering Institute (SEI), 1986

The principles of CMM:
Application of more elaborate management methods based on quantitative approaches
This will increase the organization's capability to control the quality and improve the productivity of the software development process

299

Capability Maturity Models (CMM)

The principles of CMM:
The enhancement of software development is structured as a five-level maturity model
The model enables an organization to evaluate its achievements and determine the efforts needed to reach the next capability level by locating the process areas requiring improvement

300

Capability Maturity Models (CMM)

The principles of CMM:
Process areas are generic; they define the "what", not the "how":
Use any life cycle model, design methodology, software development tool and programming language
The model does not specify any particular documentation standard

301

The evolution of CMM

In 1993 the software development and maintenance capability model (SW-CMM) was expanded to:
System Engineering CMM (SE-CMM)
Trusted CMM (T-CMM)
System Security Engineering CMM (SSE-CMM)
People CMM (P-CMM)
Software Acquisition CMM (SA-CMM)
Integrated Product Development CMM (IPD-CMM)

In the late 1990s the CMMI models were developed

302

CMMI motivation

Development of specialized CMM models led to:
Development of different sets of key processes for model variants in departments that exhibited joint processes
Departments that applied different CMM variants in the same organization faced difficulties in cooperation and coordination

CMMI is composed of three models:
CMMI-DEV, V1.3
CMMI-SVC, V1.3
CMMI-ACQ, V1.3

303

CMMI structure

Composed of five levels
Each level contains several process areas (PAs)
A PA is a cluster of related practices in an area that, when implemented collectively, satisfies a set of goals considered important for making improvement in that area
To reach a particular level, an organization must satisfy all of the goals of the process area or set of process areas that are targeted for improvement

304

CMMI structure

Levels are used in CMMI-DEV to:
Describe an evolutionary path recommended for an organization that wants to improve the processes it uses to develop products or services
Levels can also be the outcome of the rating activity in appraisals
Appraisals can apply to entire organizations or to smaller groups, such as a group of projects or a division

305

CMMI structure

CMMI supports two improvement paths using levels (representations):

Capability levels:
One path enables organizations to incrementally improve processes corresponding to an individual process area (or group of process areas) selected by the organization (continuous representation)

Maturity levels:
The other path enables organizations to improve a set of related processes by incrementally addressing successive sets of process areas (staged representation)

306

CMMI structure

Staged representation levels:
Level 1: Initial
Level 2: Managed
Level 3: Defined
Level 4: Quantitatively managed
Level 5: Optimizing

307

Maturity Level 1: Initial

Processes are usually ad hoc and chaotic
Success depends on the competence and heroics of the people in the organization, and not on the use of proven processes
Organizations often produce products and services that work, but they frequently exceed the budget and schedule documented in their plans
Organizations are unable to repeat their successes
308

Maturity Level 2: Managed

The projects have ensured that processes are planned and executed in accordance with policy
The projects employ skilled people who have adequate resources to produce controlled outputs
Projects are performed and managed according to their documented plans
Existing practices are retained during times of stress
The work products and services satisfy their specified process descriptions, standards, and procedures

309

Maturity Level 3: Defined

Processes are well characterized and understood, and are described in standards, procedures, tools, and methods
These standard processes are used to establish consistency across the organization
Projects establish their defined processes by tailoring the organization's set of standard processes according to tailoring guidelines
Processes are managed more proactively using an understanding of the interrelationships of process activities and detailed measures of the process, its work products, and its services
310

Maturity Level 3: Defined

The distinction between maturity levels 2 and 3 is that at L2, the standards, process descriptions, and procedures can be quite different in each specific instance of the process (e.g., on a particular project)
At L3, the standards, process descriptions, and procedures for a project are tailored from the organization's set of standard processes to suit a particular project or organizational unit, and are therefore more consistent except for the differences allowed by the tailoring guidelines

311

Maturity Level 4: Quantitatively Managed

The organization and projects establish quantitative objectives for quality and process performance, and use them as criteria in managing projects
Quantitative objectives are based on the needs of the customer, end users, organization, and process implementers

312

Maturity Level 5: Optimizing

The organization continually improves its processes based on a quantitative understanding of its business objectives and performance needs
The organization uses a quantitative approach to understand the variation inherent in the process and the causes of process outcomes
The organization is concerned with overall organizational performance, using data collected from multiple projects to identify shortfalls or gaps in performance. These gaps are used to drive organizational process improvement

313