
Software Quality

Engineering
CSE302
Dr. Munam Ali Shah
PhD: University of Bedfordshire
MS: University of Surrey
M.Sc: University of Peshawar

Serving COMSATS since July 2004


Some Pictures

Park Square Campus,


UoB, Luton

New Post Graduate


Center, UoB, Luton

Putteridge Bury Campus, UoB, Luton


Contact Information

Dr. Munam Ali Shah


Office: Room 137, Department of Computer Science,
Academic Block II, COMSATS, Islamabad.
About the course

§ To provide a survey of, and exposure to, both
principles and practice of Software Quality
Engineering.
§ To show how to incorporate quality into the life
cycle of a software product.
§ The course will also help you understand and
learn quality parameters, standards and
specifications while engineering software.
Books and Resources
l Software Quality Engineering, 6th Edition by Jeff Tian

l Software Engineering: A Practitioner’s Approach,
8th Edition by Roger S. Pressman
How this course will be run
The course comprises 32 lectures and is divided into the
following parts:
n Part - 1: Overview and Basics.
n Part - 2: Software Testing: models & techniques,
management, automation, and integration.
n Part - 3: Other Quality Assurance Techniques:
defect prevention, process improvement, inspection,
formal verification, fault tolerance, accident prevention,
and safety assurance.
n Part – 4: Quantifiable Quality Improvement: analysis and
feedback, measurements and models, defect analysis,
risk identification, and software reliability engineering.
Part - 1: Overview and Basics.

n The main concepts that are discussed in this part are:

Overview. What is Quality? Quality Assurance,


QA in Context, Quality Engineering and the
Quality Challenge.

n This part will be covered in


Lecture 1 to Lecture 7
Part - 2: Software Testing

n This part will cover most of the important content of
the course. It has been further divided into the following
sub-parts:
a) Software Testing
b) Models and Techniques
c) Management
d) Automation
e) Integration
Part – 2 (a): Software Testing

n Here we will discuss :


l Different Software Testing Techniques.
Specification Based Testing Techniques, Black-box
and Grey-Box Testing, Other Comprehensive
Software Testing techniques for SDLC.
l Coverage and Usage Testing Based on Checklists
and Partitions

l The topics will be covered in


Lecture 8 - Lecture 12
Part – 2 (b): Models and Techniques

n Topics covered in this part are:


l Use of different models for Software Quality
Assurance, Quality Planning and Quality Control

l The topics will be covered in


Lecture 13 - Lecture 14
Part – 2 (c): Software Quality Management

n Topics covered in this part are:


l Phases of Quality Assurance, Test Execution,
Result checking and Test Measurement

l The topics will be covered in


Lecture 15 - Lecture 16
Part – 2 (d): Test Automation

n This part will cover the following topics:


l Specific needs and potential for automation;
selection of existing testing tools, if available;
possibility and cost of constructing specific test
automation tools; availability of user training for
these tools and the time/effort needed.

l The topics will be covered in


Lecture 17 - Lecture 18
Part – 2 (e): Test Integration

n This part will discuss the following topics:


l Testing Sub-Phases and Applicable Testing
Techniques, Specialized Test Tasks and
Techniques, Test Integration

l The topics will be covered in


Lecture 19 - Lecture 20
Part - 3: Quality Assurance Techniques

n The main concepts that are discussed in this part are:

n Defect prevention, process improvement, inspection,


formal verification, fault tolerance, accident
prevention, and safety assurance. Defect Prevention
and Process Improvement. Software Inspection.
Formal Verification. Fault Tolerance and Failure
Containment. Comparing QA Techniques and
Activities.
n This part will be covered in
Lecture 21 – Lecture 25
n The last two lectures, i.e., Lectures 31 and 32, are
reserved for revision of the course.
Part - 4: Quantifiable Quality Improvement

n This is the last part of the course. The main concepts


that are discussed in this part are:
n Feedback Loop and Activities for Quantifiable Quality
Improvement. Quality Models and Measurements.
Defect Classification and Analysis. Risk Identification
for Quantifiable Quality Improvement. Software
Reliability Engineering
n This part will be covered in
Lecture 26 – Lecture 29

n The last two lectures, i.e., Lectures 31 and 32, are
reserved for revision of the course.
Are you ready?
Let’s begin!
Lecture 1:
Software Quality Engineering
Outlines

n What is software?
n What is Quality?
n What is Engineering?
Objectives

n To describe the basics of Software Quality Engineering

n To understand and distinguish between Software

Engineering and Software Quality Engineering.


The Software Engineering

“Software is instructions (computer programs) that
when executed provide desired function and
performance, or data structures that enable the
programs to adequately manipulate information.”

(Roger S. Pressman)
The Software Engineering
Characteristics of a software:
1. Software is developed or engineered (in
engineering we make things from scratch, we don’t
have any existing model); it is not manufactured in the
classical sense.
2. Software doesn’t “wear out.” (Wear out means
things get old or obsolete with the passage of time and
performance decreases as time passes, like
computer hardware or the human body; software, by
contrast, improves with time, like Windows NT and
now Windows 10.)
Difference b/w Engineering and Manufacturing
Engineering refers to planning or designing, whereas manufacturing
refers to using machines and raw materials to physically make the
thing.
Software is engineered because it involves the designing and
planning phases.
Manufacturing does not refer to making civil structures; that is called
construction.
For example, Company A makes the blueprint of a building; it is in the
engineering business. Company B makes cement and bricks to make
the building; it is in manufacturing. Company C takes raw material from
B and the blueprint from A and makes the building; it is in construction.

Engineering
Manufacturing
Construction
What is Quality

n In general, people’s quality expectations for software


systems they use and rely upon are two-fold:

1. The software systems must do what they are


supposed to do. In other words, they must do the right
things.
2. They must perform these specific tasks correctly or
satisfactorily. In other words, they must do the things
right.
Moving a file from one room to another correctly, and in the correct way, is an
example of quality.
Retrieving data from a database in a different language or with excessive time is an
example of poor quality.

Now you can define Software Quality Engineering.


Summary of Today’s Lecture

n We overviewed the course outlines


n We discussed and revised important concepts of
Software Engineering
n We overviewed what Software Quality Engineering is
Overview of Next lecture

n We will continue our discussion on overview and basics


of Software Quality Engineering
n Specifically, we will talk about quality assurance
The End
Software Quality
Engineering
CSE302
Part - 1: Overview and Basics.

n The main concepts that are discussed in this part are:

Overview. What is Quality? Quality Assurance,


QA in Context, Quality Engineering and the
Quality Challenge.
Part- 1
Overview and Basics
Summary of the previous lecture

n What is a software?
Characteristics of a Software
n What is Engineering?
What is the difference between engineering,
manufacturing and construction?
n What is Quality?
Outlines

n Meeting People’s Quality Expectations


n General Expectations
n Quality Expectations
n Meeting Quality Expectations
n Software Quality Engineering (SQE) activities
Objectives

n To describe the basics of what is expected as quality

n To understand and distinguish between people’s
expectations and software quality expectations.


Meeting People’s Quality Expectations

As we previously discussed, if people’s expectations
are met by a product, then the product is considered to
have quality in it: it must perform the expected behavior.
General Expectations

n General expectation: “good” software quality
n Objects of our study: software
software products, systems, and services
stand-alone to embedded software-intensive systems
wide variety, but focus on software
n Quality (and how “good”) formally defined in Ch. 2
Quality Expectations

n People: consumers vs. producers
quality expectations by consumers
to be satisfied by producers through software
quality engineering (SQE)
n Deliver a software system that...
does what it is supposed to do
– needs to be “validated”
does the things correctly
– needs to be “verified”
– show/demonstrate/prove it (“does”):
modeling/analysis needed
Meeting Quality Expectations

n Difficulties in achieving good quality:


size: MLOC products common
Complexity
environmental stress/constraints
flexibility/adaptability expected
n Other difficulties/factors:
product type
cost and market conditions
Major SQE Activities

n Major SQE Activities:


Testing
Other quality assurance alternatives to testing
How do you know: analysis & modeling
n Scope and content hierarchy:

Software Quality Engineering

Quality Assurance

Testing
Perspectives and Expectations
Quality Perspective
Quality Expectations
Quality Expectations (contd.)
ISO-9126 Quality Framework
Summary of Today’s Lecture

n We explored the concept of quality from different
aspects.
n We have also seen what a defect or failure means for
quality.
n Lastly, we talked about the history of quality.
Overview of Next lecture

n QA as Dealing with Defect


n Defect Prevention
n Defect Detection and Removal
n Defect Containment
The End
Software Quality
Engineering
Lecture No. 3
Part- 1
Overview and Basics
Summary of the previous lecture

n Meeting People’s Quality Expectations


n General Expectations
n Meeting Quality Expectations
Internal Expectations / Consumer Expectations
External Expectations / Producer Expectations
n Software Quality Engineering (SQE) activities
Testing => Quality Assurance Alternatives => SQE
n Software Quality Framework
Outlines

n ISO-9126 Quality Framework


n Other Quality Frameworks
n Correctness, Defects and Quality
n Quality, as a Historical Perspective
n How to prevent defect
Objectives

n To describe the standards for quality engineering

n To understand and distinguish between correctness,
defects and quality, and between defect detection,
prevention and removal


ISO-9126 Quality Framework
Other Quality Frameworks
Correctness, Defect and Quality
Correctness, Defect and Quality
Defining Quality in SQE
Quality: Historical Perspective
Quality: Historical Perspective (contd.)
Quality Assurance
n Quality Assurance mainly deals in
1. Dealing with Defect
2. Defect Prevention
3. Defect Detection and Removal
§ QA focuses on the correctness aspect of quality
§ QA as dealing with defects
§ – post-release: impact on consumers
§ – pre-release: what the producer can do
§ what: testing & many others
§ when: earlier ones desirable (lower cost) but may not be
feasible
§ how ⇒ classification below
Summary of Today’s Lecture

n We explored some standards of Quality Engineering


n Furthermore, we briefly overviewed following
Fault, error, bug
Defect Prevention Overview
Overview of Next lecture

Testing Overview
Fault Tolerance Overview
Safety Assurance Overview
Formal Method Overview
Inspection Overview
The End
Software Quality
Engineering
Lecture No. 4
Part- 1
Overview and Basics
Summary of the previous lecture

n ISO-9126 Quality Framework


n Other Quality Frameworks
n Correctness, Defects and Quality
Failure, Fault, error, bug, defect
n Quality, as a Historical Perspective
Summary of the previous lecture
Outlines

n QA Classification
n Overview
Defect prevention
Testing
Fault
Safety
Objectives

n To describe how to prevent and deal with defects

n To briefly overview some concepts related to SQE


Quality Assurance
n Quality Assurance mainly deals in
1. Dealing with Defect
2. Defect Prevention
3. Defect Detection and Removal
§ QA focuses on the correctness aspect of quality
§ QA as dealing with defects
§ – post-release: impact on consumers
§ – pre-release: what the producer can do
§ what: testing & many others
§ when: earlier ones desirable (lower cost) but may not be
feasible
§ how ⇒ classification below
How to deal with defects
n Quality Assurance mainly deals in
1. Prevention
2. Removal (but detect them first)
3. Containment (control)
QA Classification
n Dealing with
errors, faults, or
failures
n Removing or
blocking defect
sources
n Preventing
undesirable
consequences
Error/Fault/Failure & QA

n Preventing fault injection
error blocking (errors ⇏ faults)
error source removal
n Removal of faults (prerequisite: detection)
inspection: faults discovered/removed
testing: failures traced back to faults
n Failure prevention and containment:
local failure ⇏ global failure – via dynamic measures to
tolerate faults
failure impact ↓ ⇒ safety assurance
Overview of some topics related to SQE

n We will briefly overview the following


Defect Prevention Overview
Testing Overview
Fault Tolerance Overview
Safety Assurance Overview
Formal Method Overview
Inspection Overview
Defect Prevention Overview

n Error blocking
error: missing/incorrect actions .
direct intervention to block errors ⇒ fault injections
prevented
rely on technology/tools/etc.
n Error source removal .
root cause analysis ⇒ identify error sources
removal through education/training/etc.
n Systematic defect prevention via process improvement.
Formal Method Overview

n Motivation
fault present: revealed through testing/inspection/etc.
fault absent: formally verify (formal methods ⇒ fault
absent)
n Basic ideas
behavior formally specified:
– pre/post conditions, or
– as mathematical functions
verify “correctness”:
– intermediate states/steps
– axioms and compositional rules
Approaches: axiomatic/functional/etc.
Inspection Overview

n Artifacts (code/design/test-cases/etc.) from


req./design/coding/testing/etc. phases.
n Informal reviews:
self-conducted reviews.
independent reviews.
orthogonality of views desirable.
n Formal inspections:
Fagan inspection and variations.
process and structure.
individual vs. group inspections.
what/how to check: techniques
Testing Overview

n Product/process characteristics:
object: product type, language, etc.
scale/order: unit, component, system
who: self, independent, 3rd party
n What to check:
verification vs. validation
external specifications (black-box) .
internal implementation (white/clear-box)
n Criteria: when to stop?
coverage of specs/structures. .
reliability ⇒ usage-based testing
Fault Tolerance Overview

n Motivation .
fault present but removal infeasible/impractical
fault tolerance ⇒ contain defects
n FT techniques: break fault-failure link
recovery: rollback and redo
NVP: N-version programming – fault blocked/out-voted
Safety Assurance Overview

n Extending FT idea for safety:


– fault tolerance to failure “tolerance”
n Safety related concepts: .
safety: accident free .
accident: failure w/ severe consequences
hazard: precondition to accident
n Safety assurance:.
hazard analysis
hazard elimination/reduction/control
damage control
Summary of Today’s Lecture

n We explored some standards of Quality Engineering


n Furthermore, we briefly overviewed following
Defect Prevention Overview
Testing Overview
Fault Tolerance Overview
Safety Assurance Overview
Formal Method Overview
Inspection Overview
Overview of Next lecture

n Defect Handling
n QA in Software Processes
n V&V Perspective
n QA: Defect View vs V&V View
The End
Software Quality
Engineering
Lecture No. 5
Part- 1
Overview and Basics
Summary of the previous lecture

n QA Classification
n Overview
Defect prevention
Testing
Fault
Safety
Outlines

n QA Context
n Quality in software processes
n V & V View
n DC View
n QA: Defect View vs V&V View
Objectives

n To understand and distinguish between V&V-view and


DC-view of SQE
Quality Assurance in Context
n QA and the overall development context
defect handling/resolution
activities in process
alternative perspectives: verification/validation (V&V)
view
n Defect handling/resolution
status and tracking
causal (root-cause) analysis
resolution: defect removal/etc.
improvement: break causal chain
Defect Measurement and Analysis
n Defect measurement: .
parallel to defect handling
where injected/found?
type/severity/impact?
more detailed classification possible?
consistent interpretation
timely defect reporting
n Defect analyses/quality models
as follow-up to defect handling
data and historical baselines
goal: assessment/prediction/improvement
causal/risk/reliability/etc. analyses
QA in Software Processes

n Mega-process:
initiation, development, maintenance, termination.
n Development process components:
requirement, specification, design, coding, testing,
release.
QA in software Process

n Process variations:
waterfall development process
iterative development process
spiral development process
lightweight/agile development processes and XP
(extreme programming)
maintenance process too
mixed/synthesized/customized processes
n QA important in all processes
QA in Waterfall Process

n defect prevention in early phases
n focused defect removal in testing phase
n defect containment in late phases
n phase transitions: inspection/review/etc.
QA in software Processes
n Process variations (non-waterfall) and QA:
iterative: QA in iterations/increments
spiral: QA and risk management
XP: test-driven development
mixed/synthesized: case specific
more evenly distributed QA activities
n QA in maintenance processes:
focus on defect handling;
some defect containment activities for critical or
highly-dependable systems;
data for future QA activities •
n QA scattered throughout all processes
V&V

n Core QA activities grouped into V&V


n Validation: w.r.t. requirement (what?)
appropriate/fit-for-use/“right thing”?
scenario and usage inspection/testing;
system/integration/acceptance testing;
beta testing and operational support.
n Verification: w.r.t. specification/design (how?)
correct/“doing things right”?
design as specification for components;
structural and functional testing;
inspections and formal verification.
V&V in Software Process

V&V
V-model
V&V in Software Process

n V-model as bent-over waterfall .


n left-arm: implementation (& V&V)
n right-arm: testing (& V&V)
n user@top vs. developer@bottom

n V&V vs DC View
l Two views of QA:
V&V view
DC (defect-centered)
Interconnected: mapping possible?
V&V vs DC View

n Mapping between V&V and DC view:


V&V after commitment (defect injected already)
⇒ defect removal & containment focus
Verification: more internal focus
Validation: more external focus
In V-model: closer to user (near top) or developer
(near bottom)?
DC-V&V Mapping
Summary of Today’s Lecture

n QA in Software Processes
n We explored some V & V model
n We compared and contrasted V & V and DC views
Overview of Next lecture

n QA to SQE
n Key SQE Activities
n SQE in Software Process
n SQE and QIP (quality improvement paradigm)
The End
Software Quality
Engineering
Lecture No. 6
Part- 1
Overview and Basics
Summary of the previous lecture

n QA to SQE
n Key SQE Activities
n V & V model
n SQE in Software Process
Outlines

n SQE: Software Quality Engineering


n Key SQE Activities
n SQE in Software Process
Objectives

n To understand SQE in software process

n To understand and distinguish between SQE activities


QA to SQE
n QA activities need additional support:
Planning and goal setting
Management:
– when to stop?
– adjustment and improvement, etc.
– all based on assessments/predictions
n Assessment of quality/reliability/etc.:
Data collection needed
Analysis and modeling
Providing feedback for management
n QA + above ⇒ software quality engineering (SQE)
SQE Process

Quality Engineering Process


SQE Process

SQE process to link major SQE activities:


Pre-QA planning;
QA: covered previously (Lecture 4 and 5);
Post-QA analysis and feedback
(maybe parallel instead of “post-”)
SQE and QIP

n QIP (quality improvement paradigm):


Step 1: understand baseline
Step 2: change then assess impact
Step 3: package for improvement
SQE and QIP

n QIP support:
overall support: experience factory
measurement/analysis: GQM (goal-question-metric
paradigm)
n SQE as expanding QA to include QIP ideas
Pre-QA Planning

n Pre-QA planning:
Quality goal
Overall QA strategy:
QA activities to perform?
measurement/feedback planning
n Setting quality goal(s):
Identify quality views/attributes
Select direct quality measurements
Assess quality expectations vs. cost
Setting Quality Goals
n Identify quality views/attributes
customer/user expectations,
market condition,
product type, etc.
n Select direct quality measurements
direct: reliability
defect-based measurement
other measurements
n Assess quality expectations vs. cost
cost-of-quality/defect studies
economic models: COCOMO etc
Forming QA Strategy
n QA activity planning
evaluate individual QA alternatives
strength/weakness/cost/applicability/etc.
match against goals
integration/cost considerations
n Measurement/feedback planning:
define measurements (defect & others)
planning to collect data
preliminary choices of models/analyses
feedback & follow-up mechanisms, etc.
Measurement Analysis and Feedback
n Measurement:
defect measurement as part of defect handling
process
other data and historical baselines
n Analyses: quality/other models
input: above data
output/goal: feedback and follow-up
focus on defect/risk/reliability analyses
n Feedback and follow-up:
frequent feedback: assessments/predictions
possible improvement areas
project management and improvement
SQE in Software Processes
q SQE activities ⊂ development activities:
q quality planning ⊂ product planning
q QA activities ⊂ development activities
q analysis/feedback ⊂ project management
q Fitting SQE in software processes:
q different start/end time
q different sets of activities, sub-activities, and
focuses
q In waterfall process: more staged (planning,
execution, analysis/feedback)
q In other processes: more iterative or other
variations
Quality engineering in the waterfall process
Quality engineering in the waterfall process
SQE Effort Profile

n QE activity/effort distribution/dynamics:
different focus in different phases
different levels (qualitatively)
different build-up/wind-down patterns
impact of product release deadline (deadline-driven
activities)
n planning: front heavy
n QA: activity mix (early vs. late; peak variability?
deadline?)
n analysis/feedback: tail heavy (often deadline-driven or
decision-driven)
SQE Effort in Waterfall Process

Quality engineering effort profile: The share of different activities as


part of the total effort
SQE Effort in Waterfall Process

n Effort profile
n planning/QA/analysis of total effort
n general shape/pattern only (actual data would not be
as smooth)
n in other processes: – similar but more evenly distributed
Summary of Today’s Lecture

n In today’s lecture, we explored Key SQE Activities


n We also discussed SQE in Software Process such as
waterfall
Overview of Next lecture

n We will move to the last lecture of Part-I


n We will explore some more aspects that lead to or are
helpful in SQE
The End
Software Quality
Engineering
Lecture No. 7
Summary of the previous lecture

n QA to SQE
n Key SQE Activities
n SQE in Software Process
n SQE and QIP (quality improvement paradigm)
Last Lecture of Part- 1:
Overview and Basics
Outlines

n More on:
Quality concepts
Quality control
Cost of Quality
n Statistical Quality Assurance
n The SQA Plan
Objectives

n To further understand the concept of quality in the life


cycle of a software

n To be able to distinguish between quality control, quality


concept and cost of quality.
Quality Concepts
n Software quality assurance is an umbrella activity that is
applied throughout the software process.
n SQA encompasses:
(1) a quality management approach
(2) effective software engineering technology
(3) formal technical reviews
(4) a multi-tiered testing strategy
(5) document change control
(6) software development standards and their control
procedures
(7) measurement and reporting mechanisms
Quality Concepts

n Quality --> refers to measurable characteristics of software. These
items can be compared based on a given standard.

n Two types of quality:

Quality of design -> the characteristics that designers specify for an
item --> includes: requirements, specifications, and the design of
the system.

Quality of conformance -> the degree to which the design
specifications are followed. It focuses on implementation based on
the design.
Quality Control

n What is quality control -- the series of inspections, reviews, and
tests used throughout the development cycle of a software product

n Quality control includes a feedback loop to the process.

n Objective ---> minimize the produced defects, increase the


product quality. Implementation approaches:
- Fully automated
- Entirely manual
- Combination of automated tools and human
interactions
Quality Control
n Key concept of quality control:
--> compare the work products with the specified and measurable
standards

n Quality assurance consists of:


the auditing and reporting function of management
n Goal --> provide management with the necessary data about
product quality --> gain insight into and confidence in product
quality
Cost of Quality

n Cost of quality --> includes all costs incurred in the
pursuit of quality or in performing quality-related work

n Quality cost includes:


Prevention cost
Appraisal cost
Failure cost
Cost of Quality
n - prevention cost:
- quality planning
- formal technical reviews
- testing equipment
- training
n - appraisal cost:
- in-process and inter-process inspection
- equipment calibration and maintenance
- testing
n - failure cost:
- internal failure cost: repair and failure mode analysis
- external failure cost: complaint resolution, product return
and replacement, help line support, warranty work
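As a quick illustration (the figures below are hypothetical, not taken from the course material), the total cost of quality is simply the sum of the prevention, appraisal and failure categories:

```python
# Hypothetical cost-of-quality figures (in person-hours) for one release.
prevention = 120   # quality planning, reviews, training
appraisal  = 200   # inspections, calibration, testing
failure    = 350   # internal repair plus external complaint handling, warranty

cost_of_quality = prevention + appraisal + failure
print(f"Total cost of quality: {cost_of_quality} person-hours")  # 670
```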
Software Quality Assurance

n Three important points for quality measurement:


- Use requirements as the foundation
- Use specified standards as the criteria
- Considering implicit requirements
Software Quality Assurance Group

n Who is involved in quality assurance activities?


n Software engineers, project managers, customers,
salespeople, and the SQA group

n Engineers involved in quality assurance work do the
following:
- apply technical methods and measures
- conduct formal technical review
- perform well-planned software testing
Causes of Errors

n Causes of errors:
- incomplete or erroneous specification (IES)
- misinterpretation of customer communication (MCC)
- intentional deviation from specification (IDS)
- violation of programming standards (VPS)
- error in data representation (EDR)
- inconsistent module interface (IMI)
- error in design logic (EDL)
- incomplete or erroneous testing (IET)
- inaccurate or incomplete documentation (IID)
- error in programming language translation of design (PLT)
- ambiguous or inconsistent human-computer interface (HCI)
- miscellaneous (MIS)
The SQA Plan
n The SQA plan provides a road map for instituting software
quality assurance.
n Basic items:
- purpose of plan and its scope
- management: organization structure, SQA tasks and their
placement in the process, roles and responsibilities related to
product quality
- documentation: project documents, models, technical
documents, user documents, standards, practices, and
conventions
- reviews and audits; tests - test plan and procedure
- problem reporting and corrective actions
- tools; code control; media control; supplier control
- records collection, maintenance, and retention
- training; risk management
Summary of Today’s Lecture

n In today’s lecture, we explored quality and other related


terminologies for SQE and SQA.
n We explored different types of Costs, errors and
constraints associated with SQE
Overview of Next lecture

n We will move to the second part of the course. Here we


will discuss :
l Different Software Testing Techniques. Specification
Based Testing Techniques, Black-box and Grey-Box
Testing, Other Comprehensive Software Testing
techniques for SDLC.
The End
Software Quality
Engineering
Lecture No. 8
Summary of the previous lecture

n More on:
Quality concepts
Quality control
Cost of Quality
n Statistical Quality Assurance
n The SQA Plan
Part- 2 (a):
Software Testing
Outlines

n What is testing
n Why testing is needed
n History of testing
n White box testing
n Black box testing
Objectives

n To understand the basics of Testing

n To be able to distinguish between black box testing and

white box testing


Software Testing

“Testing is the process of executing a program with the


intention of finding errors.”

“Testing can show the presence of bugs but never their


absence.”
Testing Objectives

Uncover as many errors (or bugs) as possible in a given
product.
Demonstrate that a given software product matches its
requirement specification.
Validate the quality of software testing using the
minimum cost and effort.
Generate high-quality test cases, perform effective tests and
issue correct and helpful problem reports.
DEVELOPMENT PROCESS EVOLUTION
How Testing Has Changed
[Cartoon with three speech bubbles:]
“What? I have done the coding and now you want to test it? Why? We have not got time anyway.”
“OK. Maybe you were right about testing. It looks like a nasty bug made its way into the live environment and now customers are complaining.”
“Testers! You must work harder! Longer! Faster!”
History of Software Testing
Phases in Testing
Testing Methodology

BLACK BOX TESTING


WHITE BOX TESTING
Black Box Testing

No knowledge of internal program design or code is required.
Tests are based on requirements and functionality.
The software is considered as a black box.
WHITE BOX TESTING

Knowledge of the internal program design and code is
required.

Tests are based on coverage of code:
statements, branches, paths, conditions.
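A minimal sketch of the two views, using a hypothetical absolute-value function my_abs (not from the course): the black-box tests are derived only from the stated requirement, while the white-box tests are chosen to exercise both branches of the code.

```python
def my_abs(x):
    # Unit under test (hypothetical example).
    if x < 0:          # branch 1
        return -x
    return x           # branch 2

# Black-box: derived from the specification "my_abs(x) returns |x|",
# with no reference to the implementation.
assert my_abs(5) == 5
assert my_abs(-5) == 5
assert my_abs(0) == 0

# White-box: chosen so that every branch of the code is executed
# (x < 0 true, x < 0 false), i.e. 100% branch coverage.
assert my_abs(-1) == 1   # exercises branch 1
assert my_abs(2) == 2    # exercises branch 2
print("all tests passed")
```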
LEVEL OF TESTING

Unit testing
Integration testing
System testing
UNIT TESTING

Test each module individually.

Typically follows a white-box testing approach.


INTEGRATION TESTING

Once all modules have been tested, integration testing is
performed.
It is a systematic testing approach that produces tests to
identify errors associated with interfacing.
TYPES:-
Big bang integration testing
Top down integration testing
Bottom up integration testing
Mixed integration testing
System Testing

The system as a whole is tested to uncover requirement
errors.
It verifies that all system elements work properly and that
overall system function and performance have been
achieved.
TYPES:-
Alpha testing
Beta testing
Acceptance testing
Performance testing
Summary of Today’s Lecture

n In today’s lecture, we explored what quality is and why it


is important in the life cycle of a software.
n We explored different types of testing such as white box,
black box, unit, integration and system testing.
Overview of Next lecture

n Our discussion on Testing will continue and we will


explore some challenges associated with testing in real
life
n We will explore further unit, integration and system
testing in details
The End
Software Quality
Engineering
Lecture No. 9
Part- 2 (a):
Software Testing
Summary of the previous lecture

n We discussed the basics of software testing. We highlighted
why testing is important and how it plays a role in SQE.
n We examined different types of testing such as black-box
and white-box testing.
Outlines
n Software Testing Activities
n Software Testing Scope
n Software Testing Principles
n Software Testing Process
n Software Testing Myths
n Software Testing Limits
n Different Types of Software Testing
Objectives

n To further understand the basics of Testing

n To be able to distinguish between unit, integration and

system testing
What is Software Testing
Several definitions:

“Testing is the process of establishing confidence


that a program or system does what it is supposed
to.” by Hetzel 1973

“Testing is the process of executing a program or


system with the intent of finding errors.”
by Myers 1979

“Testing is any activity aimed at evaluating an


attribute or capability of a program or system and
determining that it meets its required results.”
by Hetzel 1983
What is Software Testing

- One of the most important software development phases

- A software process based on well-defined software quality
control and testing standards, testing methods, strategies, test
criteria, and tools

- Engineers perform all types of software testing activities to
carry out a software test process

- The last quality checking point for software on its production
line
Testing Objectives

The Major Objectives of Software Testing:


- Uncover as many errors (or bugs) as possible in a given
timeline.
- Demonstrate that a given software product matches its requirement
specifications.
- Validate the quality of software testing using the minimum cost
and effort.
- Generate high-quality test cases, perform effective tests, and
issue correct and helpful problem reports.

Major goals:
uncover the errors (defects) in the software, including errors in:
- requirements from requirement analysis
- design documented in design specifications
- coding (implementation)
- system resources and system environment

- hardware problems and their interfaces to software


Who does Software Testing
- Test manager
- manage and control a software test project
- supervise test engineers
- define and specify a test plan

- Software Test Engineers and Testers


- define test cases, write test specifications, run
tests

- Independent Test Group

- Development Engineers
- Only perform unit tests and integration tests

- Quality Assurance Group and Engineers


- Perform system testing
- Define software testing standards and quality
control process
Software Testing Scope
Software Testing Activities

- Test Planning
Define a software test plan by specifying:
- a test schedule for a test process and its activities, as
well as assignments
- test requirements and items
- test strategy and supporting tools

- Test Design and Specification


- Conduct software test design based on well-defined test
generation methods.
- Specify test cases to achieve a targeted test coverage.

- Test Set up:


- Testing Tools and Environment Set-up
- Test Suite Set-up

- Test Operation and Execution


- Run test cases manually or automatically
Software Testing Activities
- Test Result Analysis and Reporting
Report software testing results and conduct test result
analysis

- Problem Reporting
Report program errors using a systematic solution.

- Test Management and Measurement


Manage software testing activities, control testing
schedule, measure testing complexity and cost

- Test Automation
- Define and develop software test tools
- Adopt and use software test tools
- Write software test scripts and facilities

- Test Configuration Management


- Manage and maintain different versions of software
test suites, test environment and tools, and documents
for various product versions.
Verification and Validation
Software testing is one element of a broader topic that is often
referred to as
===> Verification and Validation (V&V)

Verification --> refers to the set of activities that ensure that


software correctly implements a specific function.

Validation -> refers to a different set of activities that ensure that


the software that has been built is traceable to customer
requirements.

Boehm [BOE81]:

Verification: “Are we building the product right?”


Validation: “Are we building the right product?”

The definition of V&V encompasses many of the SQA activities, including
formal technical reviews, quality and configuration audits,
performance monitoring, different types of software testing,
feasibility studies and simulation.
Software Quality Factors

Functionality (exterior quality)

- Correctness, reliability, usability, and integrity

Engineering (interior quality)

- Efficiency, testability, documentation, structure

Adaptability (future qualities)

- Flexibility, reusability, maintainability


Software Testing Principles
•Principle #1: Complete testing is impossible.

•Principle #2: Software testing is not simple.


•Reasons:
•Quality testing requires testers to understand a system/product
completely
•Quality testing needs adequate test set, and efficient testing
methods
•A very tight schedule and lack of test tools.

•Principle #3: Testing is risk-based.

•Principle #4: Testing must be planned.

•Principle #5: Testing requires independence.

•Principle #6: Quality software testing depends on:


•Good understanding of software products and related domain
application
•Cost-effective testing methodology, coverage, test methods, and
tools.
•Good engineers with creativity, and solid software testing
experience
Software Testing Myths

q We can test a program completely. In other words, we test a


program exhaustively.
q We can find all program errors as long as test engineers do a
good job.
q We can test a program by trying all possible inputs and
states of a program.
q A good test suite must include a great number of test cases.
q Good test cases always are complicated ones.
q Software test automation can replace test engineers to
perform good software testing.
q Software testing is simple and easy. Anyone can do it. No
training is needed.
Software Testing Limits

- Due to the testing time limit, it is impossible to achieve total


confidence.

- We can never be sure the specifications are 100% correct.

- We can never be certain that a testing system (or tool) is


correct.

- No testing tool can cope with every software program.

- Test engineers can never be sure that they completely
understand a software product.

- We never have enough resources to perform software testing.

- We can never be certain that we achieve 100% adequate


software testing.
Software Testing Process
Unit Test (Component Level Test)

Unit testing: Individual components are tested independently to


ensure their quality. The focus is to uncover errors in design
and implementation, including:
- data structure in a component
- program logic and program structure in a component
- component interface
- functions and operations of a component
Unit testers: developers of the components.
Integration Testing
Integration test: A group of dependent components are tested together
to ensure the quality of their integrated unit.
The focus is to uncover errors in:
- Design and construction of software architecture
- Integrated functions or operations at sub-system level
- Interfaces and interactions between them
- Resource integration and/or environment integration

Integration testers: either developers and/or test engineers.


Function Validation Testing
Validation test: The integrated software is tested based on
requirements to ensure that we have the right product.
The focus is to uncover errors in:
- System input/output
- System functions and information data
- System interfaces with external parts
- User interfaces
- System behavior and performance

Validation testers: test engineers in ITG or SQA people.


System Testing
System test: The system software is tested as a whole. It verifies all
elements mesh properly to make sure that all system
functions and performance are achieved in the target
environment.

The focus areas are:


- System functions and performance
- System reliability and recoverability (recovery test)
- System installation (installation test)
- System behavior in the special conditions
(stress and load test)
- System user operations (acceptance test/alpha test)
- Hardware and software integration and collaboration
- Integration of external software and the system

System testers: test engineers in ITG or SQA people.

When a system is to be marketed as a software product, a testing


process called
beta testing is often used.
Test Issues in Real World
Software testing is very expensive.

How to achieve test automation?

When should we stop software testing?

Test criteria, test coverage, adequate testing.

Other software testing:

GUI Testing
Object-Oriented Software Testing
Component Testing and Component-based Software
Testing
Domain-specific Feature Testing
Testing Web-based Systems
Summary of Today’s Lecture

n In today’s lecture, we explored different definitions of
software testing.
n We have also seen different roles which perform
software testing such as test managers, testers, SQA
people etc.
n And lastly, we talked about unit, integration, validation
and system testing.
Overview of Next lecture

n Our discussion on testing will continue. We will see more


details about software testing and how it achieves
software quality.
The End
Software Quality
Engineering
Lecture No. 10
Part- 2 (a):
Software Testing
Summary of the previous lecture

n Software Testing Activities


n Software Testing Scope
n Software Testing Principles
n Software Testing Process
n Software Testing Myths
n Software Testing Limits
n Different Types of Software Testing
Outlines
n QA and Testing
n Testing: Concepts & Process
n Testing Related Questions
n Major Testing Techniques
Objectives

n To understand different phases involved in software


testing

n To be able to understand and distinguish between


testing concepts and processes.
Testing and QA Alternatives
• Defect and QA:

. Defect: error/fault/failure.
. Defect prevention/removal/containment.
. Map to major QA activities

• Defect prevention:
Error blocking and error source removal.

• Defect removal:

. Inspection, etc.

• Defect containment: Fault tolerance and failure containment


(safety assurance).
QA and Testing

q Testing as part of QA:
q Activities focus on the testing phase
q QA/testing in waterfall and V-models
q One of the most important parts of QA – defect removal
Testing: Key questions:

q Why: quality demonstration vs. defect detection and removal


q How: techniques/activities/process/etc.
q View: functional/external/black-box vs.
structural/internal/white-box
q Exit: coverage vs. usage-based
Testing: Why?

q Original purpose: demonstration of proper behavior or


quality demonstration.

≈ “testing” in traditional settings.


evidence of quality or proper behavior

q New purpose: defect detection & removal:

. mostly defect-free software manufacturing vs.


traditional manufacturing.
. flexibility of software (ease of change;
sometimes, curse of change/flexibility)
. failure observation ⇒ fault removal. (defect detection
⇒ defect fixing)
. eclipsing original purpose
Testing: How?

• How? Run-observe-follow-up

(particularly in case of failure observations)

• Refinement
⇒ generic process below
Testing: Activities & Generic Process
• Major testing activities:
. test planning and preparation
. execution (testing)
. analysis and follow-up

• Link above activities ⇒ generic process:


. planning-execution-analysis-feedback.

. entry criteria: typically external.

. exit criteria: internal and external.

. some (small) process variations

– but we focus on strategies/techniques.


Testing: Planning and Preparation

• Test planning:

. goal setting based on customers’ quality perspectives and


expectations.
. overall strategy based on the above and
product/environmental characteristics.

• Test preparation:

. preparing test cases/suites:


– typically based on formal models.
. preparing test procedure.
Testing: Execution

• General steps in test execution

. allocating test time (& resources)


. invoking test
. identifying system failures
(& gathering info. for followup actions)
• Key to execution: handling both normal vs. abnormal cases

• Activities closely related to execution:

. failure identification: test all types of problem


. data capturing and other measurement
Testing: Analysis and Follow-up
• Analysis of testing results:

. result checking (as part of execution)


. further result analyses
– defect/reliability/etc. analyses.
. other analyses: defect ∼ other metrics.

• Followup activities:

. feedback based analysis results.


. immediate: defect removal (& re-test)
. other followup (longer term):
– decision making (exit testing, etc.)
– test process improvement, etc.
Testing: How?
• How to test?
– refine into three sets of questions

. basic questions
. testing technique questions
. activity/management questions
• Basic questions addressed here:

. What artifacts are tested?


. What to test?
– from which view?
– related: type of faults found?
. When to stop testing?
Testing Technique Questions

• Testing technique questions:

. specific technique used?


. systematic models used?
– related model questions (below)
. adapting technique from other domains?
. integration for efficiency/effectiveness↑?

• Testing model questions:

. underlying structure of the model?


– main types: list vs. FSM?
. how are these models used?
. model extension?
Test Activity/Management Questions

• Test activity/management questions:

. Who performs which specific activities?


. When can specific activities be performed?
. Test automation? What about tools?
. Artifacts used for test management?
. General environment for testing?
. Product type/segment?
When to Stop Testing

• Resource-based criteria:

. Stop when you run out of time.


. Stop when you run out of money.
. Irresponsible ⇒ quality/other problems.

• Quality-based criteria:

. Stop when quality goals reached.


. Direct quality measure: reliability
– resemble actual customer usages
. Indirect quality measure: coverage.
. Other surrogate: activity completion.
. Above in decreasing desirability.
Summary of Today’s Lecture

n In today’s lecture, we explored why testing is needed,


when to start & stop testing and what goals to achieve

n We further explored different steps that are carried out in


coverage-based and systematic testing
Overview of Next lecture

n We will discuss Testing Activities, Management, and


Automation.

n Major Testing Activities; Test Management and Testing


Automation will also be part of next lecture
The End
Software Quality
Engineering
Lecture No. 12
Part- 2 (a):
Software Testing
Summary of the previous lecture

n Usage Based Testing

n Coverage Based Testing

n Testing Activities

n Test Management

n Testing Automation
Outlines
n Checklist-Based Testing

n Partitions and Partition Testing

n Usage-Based Testing with Musa’s Ops

n OP Development: Procedures/Examples
Objectives

n To understand and distinguish between checklist-based
testing and partition-based testing

n To be able to apply the knowledge of testing in real


cases.
Checklists for Testing
• Ad hoc testing:
. “run-and-observe”
. How to start the run?
. Areas/focuses of “observations”?
. Implicit checklists may be involved.

• Explicit checklists:
. Function/features (external)
. Implementation (internal)
. Standards, etc.
. Mixed or combined checklists
Function Checklists
• Function/feature (external) checklists:
. Black-box in nature
. List of major functions expected

• Example: Table A high-level functional checklist for


some relational database products
. abnormal termination
. backup and restore
. communication
. co-existence
. file I/O
. gateway
. index management
. Installation; logging and recovery
. Locking; Migration; stress
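One simple way to make such a functional checklist operational (a sketch with hypothetical status values, not a tool prescribed by the course) is to track, for each listed function, whether at least one test has exercised it:

```python
# Hypothetical coverage status for items from the functional checklist above.
checklist = {
    "abnormal termination": True,
    "backup and restore":   True,
    "communication":        False,
    "file I/O":             True,
    "index management":     False,
    "installation":         True,
    "logging and recovery": False,
}

untested = [item for item, covered in checklist.items() if not covered]
coverage = 100.0 * sum(checklist.values()) / len(checklist)
print(f"checklist coverage: {coverage:.0f}%")        # 57%
print("still to test:", ", ".join(untested))
```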
Implementation Checklists

• Implementation (internal) checklists:


. White-box in nature
. At different levels of abstraction
– e.g., lists of modules/components/etc.
– statement coverage as covering a list
• Related: cross-cutting features/structures:
. Multiple elements involved.
. Examples: call-pairs, diff. parts that
cooperate/collaborate/communicate/etc.
• Other checklists:
. related to certain properties
– e.g., coding standards,
. hierarchical list,
Other Check Lists

• Combined ×-list based on n attributes for large


products,
Example: Table: A template for a two-dimensional
checklist by combining a standards checklist and a
component check list
Component | Standard item 1 | Standard item 2 | ... | Standard item n
C1        |                 |                 |     |
C2        |                 |                 |     |
...       |                 |                 |     |
Cm        |                 |                 |     |
• Checklists in other forms:
. tree/graph/etc. ⇒ enumerate into lists
. certain elements of complex models
– e.g., lists of states and links in FSMs
Checklists: Assessment

• Key advantage: simplicity.

• Possible drawbacks of checklists:


. Coverage: need to fill “hole”.
. Duplication: need to improve efficiency.
. Complex interactions not modeled.
. Root cause: complexity
– contributing to all 3 problems above.

• Possible solutions:
. specialized checklists ⇒ partitions.
. alternatives to checklists: FSMs
Partitions: Ideas and Definitions
• Partitions: a special type of checklists

. Mutually exclusive ⇒ no overlaps.


. Collectively exhaustive ⇒ coverage.
. Address two problems of checklists.

• Partition of set S into subsets


G1, G2, . . . , Gn (Gi ⊂ S):
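The conditions implied by “mutually exclusive” and “collectively exhaustive” above are the standard partition properties:

G1 ∪ G2 ∪ ... ∪ Gn = S          (collectively exhaustive)
Gi ∩ Gj = ∅ for all i ≠ j        (mutually exclusive)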
Partitions-Based Testing
• Different types of partition definitions:
. membership based partition definitions
. properties/relations used in definitions
. combinations

• Basic idea of partition-based testing:


. membership/equivalence-class analysis
⇒ defining meaningful partitions
. sampling from partitioned subsets for different
types of partitions

• Extending basic coverage to perform non- uniform


testing
Partitions-Based Testing
• Testing for membership in partitions:
. partitions: components in a subsystems
. testing via direct sampling,
e.g., sampling 1 component/subsystem
• Testing for general partitions:
. properties/relations used in definitions
. direct predicates on logical variables
– direct derivation of test cases
. operations on numerical variables
– sensitize (select) input values
• Testing for combinations of the above partition
definitions
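A minimal sketch of equivalence-class (partition) sampling for a hypothetical integer “age” input (names and classes are illustrative, not from the course): the domain is split into disjoint classes and one representative is tested per class.

```python
# Hypothetical partition of an integer "age" input into equivalence classes.
# The classes are mutually exclusive and together cover the whole domain.
partitions = {
    "negative (invalid)": range(-1000, 0),
    "minor (0-17)":       range(0, 18),
    "adult (18-64)":      range(18, 65),
    "senior (65+)":       range(65, 1000),
}

def classify_age(age):
    # Unit under test (hypothetical): maps an age to a category label.
    if age < 0:
        return "invalid"
    if age < 18:
        return "minor"
    if age < 65:
        return "adult"
    return "senior"

# Partition-based testing: sample one representative value per class.
expected = {"negative (invalid)": "invalid", "minor (0-17)": "minor",
            "adult (18-64)": "adult", "senior (65+)": "senior"}
for name, values in partitions.items():
    representative = values[len(values) // 2]   # pick a value inside the class
    assert classify_age(representative) == expected[name], name
print("one representative per partition tested")
```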
Partitions-Based Testing

• Testing multiple sets of partitions:


. Divide-and-conquer.
. Model as stages.
. Combination (cross-product) of the stages.

• General: an m-way partition followed by an n-way


partition: m × n combinations.
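A small sketch of combining two independently defined partitions (the “browser” and “account type” factors below are hypothetical) into the m × n cross-product of test configurations described above:

```python
from itertools import product

# Two hypothetical, independently defined partitions (m = 3, n = 2).
browsers      = ["Firefox", "Chrome", "Safari"]
account_types = ["guest", "registered"]

# Cross-product of the two stages: m x n = 6 combined test configurations.
combinations = list(product(browsers, account_types))
for browser, account in combinations:
    print(f"test configuration: browser={browser}, account={account}")
print(f"{len(combinations)} combinations in total")   # 3 x 2 = 6
```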
Partitions-Based Testing
• Extensions to basic ideas:
. Sampling from partitioned subsets.
. Coverage of partitions: non-uniform?
. Testing based on related problems:
– usage-related problems?
– boundary problems?
. Testing based on level/hierarchy/etc.?
• Usage-related problems:
. More use ⇒ failures more likely
. Usage information in testing
⇒ (Musa’s) operational profiles (OPs)
• Boundary problems:
input domain boundary testing
Usage-Based Statistical Testing
• Usage-based statistical testing (UBST) to ensure reliability.
• Reliability: probability of failure-free operation for a
specific time period or a given set of inputs under a
specific environment

. Reliability: customer view of quality


. Probability: statistical modeling
. Time/input/environment: OP
• OP: Operational Profile
. Quantitative characterization of the way a (software)
system will be used.
. Generate/execute test cases for UBST
. Realistic reliability assessment
. Development decisions/priorities
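A minimal sketch of usage-based test selection driven by an operational profile (the operations and probabilities below are hypothetical, not Musa’s published figures): operations are sampled in proportion to their expected usage.

```python
import random

# Hypothetical operational profile: operations and their usage probabilities
# (they form a partition, so the probabilities sum to 1).
operational_profile = {
    "query record":    0.60,
    "update record":   0.25,
    "generate report": 0.10,
    "admin task":      0.05,
}

def next_test_operation(profile, rng=random):
    # Sample one operation according to the profile (usage-based selection).
    ops, probs = zip(*profile.items())
    return rng.choices(ops, weights=probs, k=1)[0]

# Generate a usage-based test sequence of 10 operations.
random.seed(1)    # fixed seed so the example is repeatable
sequence = [next_test_operation(operational_profile) for _ in range(10)]
print(sequence)   # a usage-weighted mix, dominated by the most-used operations
```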
UBST: General Issues

• General steps:
. Information collection.
. OP construction.
. UBST under OP.
. Analysis (reliability!) and follow-up.
• Linkage to development process
. Construction: Requirement/specification, and spill over to
later phases.
. Usage: Testing techniques and SRE
• Procedures for OP construction necessary
UBST: Primary Benefit
• Primary benefit:
. Overall reliability management.
. Focus on high leverage parts
⇒ productivity and schedule gains:
– same effort on most-used parts
– reduced effort on lesser-used parts
– reduction of 56% system testing cost
– or 11.5% overall cost (Musa, 1993)
• Gains vs. savings situations
. Savings situation:
– reliability goal within reach and not to over test lesser-used parts
. Gains situation: more typical
– re-focusing testing effort
– constrained reliability maximization
Developing OP

• OP: operations & their probabilities.
– probabilities: over the partition of operations, they sum up to 1.

• Obtaining OP information:
. identify distinct operations as disjoint alternatives.
. assign associated probabilities
– occurrences/weights ⇒ probabilities.
. in two steps or via an iterative procedure

• OP information sources:
. actual measurement.
. customer surveys.
. expert opinion.
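A short sketch of the “occurrences/weights ⇒ probabilities” step: hypothetical usage counts (e.g. from measurement or a customer survey) are normalised so the resulting profile sums to 1.

```python
# Hypothetical occurrence counts for distinct operations.
occurrences = {"query record": 1200, "update record": 500,
               "generate report": 200, "admin task": 100}

total = sum(occurrences.values())
profile = {op: count / total for op, count in occurrences.items()}

for op, p in profile.items():
    print(f"{op}: {p:.2f}")
print("sum of probabilities:", round(sum(profile.values()), 10))  # 1.0
```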
Developing OP
• Customer surveys:
. Less accurate/costly than measurement.
. But without the related difficulties.
. Key to statistical validity:
– large enough participation
– “right” individuals completing surveys
. More important to cross-validate
• Expert opinion:
. Least accurate and least costly.
. Ready availability of internal experts.
. Use as a rough starting point.
Developing OP
• Who should develop OP?
. System engineers
– requirement ⇒ specification
. High-level designers
– specification ⇒ product design
. Planning and marketing
– requirement gathering
. Test planners (testing)
– users of OP
. Customers (implicitly assumed)
– as the main information source
• Key: those who can help us
. identify distinct alternatives (operations)
. assign associated probabilities
OP Construction: A Case Study
• Background:
. Former CSE 302 student
. Course project: SQE OP development
. Application:
• Problem and key decisions:
. Product: name of the product
. Product characteristics ⇒ OP type
– menu selection/classification type
– flat instead of Markovian
. Result OP, validation, and application
OP Construction: A Case Study

• Participants:
. Software Product Manager
. Test Engineers
. Systems Engineers
. Customers
. Asad: pulling it together
. Tahir: technical advising
. Malik: documentation
• Information gathering
. Interview Software Product Manager to identify target
customers
. Customer survey/questionnaire to obtain customer usage
information
. Preparation, OP construction and follow-up
• Use Similar profiles / customers
OP case study

• User profile weighting:


. User groups & marketing concerns.
. Profile reflects both.
. Idea applicable to other steps:
– profile can be importance weighted
• System modes
. No significant difference in op.
• Analysis and follow-up
. Cross-validation: Peer review by Software Product
Manager, System Engineers and Test Engineers
. Followup actions
Summary of Today’s Lecture

n Checklist-Based Testing

n Partitions and Partition Testing

n Usage-Based Statistical Testing

n OP Development: Procedures/Examples
Overview of Next lecture

n Part 2 (b): Other Models and Techniques of Testing


n Control Flow, Data Dependency, and Interaction Testing
The End
Software Quality
Engineering
Lecture No. 13
Summary of the previous lecture

n Usage Based Statistical Testing

n Checklist-Based Testing

n Partitions and Partition Testing

n OP case study
Part- 2 (b):
Other Models & Techniques of
Testing
Outlines
n Control Flow Testing (CFT)

n Data Dependency Analysis

n Control Flow Graph (CFG)

l Steps
l Construction
l Issues
l Methods
Objectives

n To understand and distinguish Control Flow Testing and


Data Dependency Analysis

n To be able to understand Control Flow Graphs


Interactions in program execution

. Interaction along the execution paths:


– path: involving multiple elements/stages
– later execution affected by earlier stages
– tested via control flow testing (CFT)
– control flow graph (CFG) ⊂ FSM
. Computational results affected too:
– data dependency through execution
– analysis: data dependency graph (DDG)
– tested via data flow testing (DFT)
CFGs and FSMs

• CFG (control flow graph):


. Basis for control flow testing (CFT).
. CFG as specialized FSMs:
– type II: processing & I/O in nodes,
– links: “is-followed-by” relation, some
annotated with conditions.

• CFG elements as FSM elements:


. nodes = states = unit of processing.
. links = transitions = “is-followed-by”.
. link types: unconditional and conditional,
latter marked by branching condition
CFG Example

Example: CFG for a properly
structured program (sequential
concatenation + nesting, no
GOTOs)
CFG: Nodes and Links

• Inlink and outlink defined w.r.t a node.


• Entry/exit/processing nodes:
. Entry (source/initial) nodes.
. Exit (sink/final) nodes.
. Processing nodes.
• Branching & junction nodes & links:
. Branching/decision/condition nodes:
– multiple outlinks,
– each marked by a specific condition,
– only 1 outlink taken in execution.
. Junction nodes:
– opposite to branching nodes,
– but no need to mark these inlinks,
– only 1 inlink taken in execution.
. 2-way and N-way branching/junction.
CFG for CFT
• CFGs for our CFT:
. Separate processing/branching/junction
nodes for clarity
. Sequential nodes: mostly processing
⇒ collapsing into one node (larger unit)
. No parallelism allowed
(single point of control in all executions).
. Mostly single-entry/single-exit CFGs
. Focus: structured programs, no GOTOs.
– GOTOs ⇒ ad hoc testing.

• Notational conventions:
. “Pi” for processing node “i”
. “Ji” for junction node “i”
. “Ci” for condition/branching node “i”
CFT Technique
• Test preparation:
. Build and verify the model (CFG)
. Test cases: CFG ⇒ path to follow
. Outcome checking: what to expect and how to
check it

• Other steps: Standard


. Test planning & procedure preparation.
. Execution: normal/failure case handling.
. Analysis and Follow-up

• Some specific attention in standard steps:


Confirmation of outcome and route in analysis
and follow-up.
CFT: Constructing CFG
• Sources for CFG:
. White box: design/code
– traditional white-box technique
. Black box: specification
– structure and relations in specs
• Program-derived (white-box) CFGs:
. Processing: assignment and calls
. Branch statements:
– binary: if-then-else, if-then
– multi-way: switch-case, cascading if’s.
. Loop statements (later)
. composition: concatenating/nesting.
. structured programming: no GOTOs
– hierarchical decomposition possible.
. explicit/implicit entry/exit
CFT: Constructing WB/CFG

. Analyse the program code (shown on the left of the original slide)
. Derive the CFG (shown on the right)
. Focus on decisions and branches
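As a hedged illustration (the fragment below is hypothetical, not taken from the original slide), a small program annotated with the CFG nodes it would produce, using the Pi/Ci/Ji notation introduced above:

def classify(x):        # entry node
    y = 0               # P1: processing node (assignment)
    if x > 0:           # C1: condition/branching node with two outlinks (true/false)
        y = x * 2       # P2: processing node on the true branch
    else:
        y = -x          # P3: processing node on the false branch
    # J1: junction node where the two branches merge
    return y            # P4: processing, followed by the exit node

Sequential processing statements could be collapsed into a single node, as noted above; the branching node C1 and the junction node J1 are kept separate for clarity.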
CFT: Constructing CFG
• Specification-derived (black-box) CFGs:
. Node: “do” (enter, calculate, etc.)
. Branch: “goto/if/when/while/...”
. Loop: “repeat” (for all, until, etc.)
. Entry: usually implicit
. Exit: explicit and implicit
. External reference as process unit
. General sequence: “do”...(then)...“do”.
• Comparison to white-box CFGs:
. Implementation independent.
. Generally assume structured programs.
. Other info sources: user-related items
– usage-scenarios/traces/user-manuals,
– high-level req. and market analyses.
CFT: Path Definition
• Test cases: CFG ⇒ path to follow
. Connecting CFG elements together in paths.
. Define and select paths to cover
. Sensitize (decide input for) the paths

• Path related concepts/definitions:


. Path: entry to exit via n intermediate links and
nodes.
. Path segment or sub-path: proper subset of a
path.
. Loop: path or sub-path with one or more nodes
visited more than once.
. Testing based on sub-path combinations.
. Loop testing: specialized techniques.
CFT: Path Selection
• Path selection (divide & conquer)
. Path segment definition
. Sequential concatenation
. Nesting of segments
. Unstructured construction: difficult
. Eliminate unachievable/dead paths
(contradictions and correlations)

• “Divide”: hierarchical decomposition for


structured programs.

• “Conquer”: Bottom-up path definition one


segment at a time via basic cases for nesting
and sequential concatenation.
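Continuing the hypothetical classify() fragment sketched earlier, path definition and sensitization for its CFG might look like the following (an illustration, not from the slides):

# Entry-to-exit paths through the classify() CFG:
paths = [
    ["entry", "P1", "C1:true", "P2", "J1", "P4", "exit"],    # true branch
    ["entry", "P1", "C1:false", "P3", "J1", "P4", "exit"],   # false branch
]

# Sensitization: choose inputs that realize each path, e.g.
# x = 1 satisfies the branching condition (x > 0) for the first path,
# x = -1 falsifies it for the second path; neither path is unachievable here.

For nested or concatenated segments, the same bottom-up definition is repeated one segment at a time and the segment paths are then combined.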
CFT: Other Steps

• CFT sensitization: path sensitization/realization.
• CFT algebraic sensitization:
. complexity due to dynamic values
. symbolic execution
• CFT logic sensitization:
. divide into segments (entry-exit)
. segment combination
• Execution and follow-up:
. path/statement-oriented execution
– debugger and other tools helpful
. follow-up: coverage and analysis
Loops: What and Why

• Loop: What is it?


. Repetitive or iterative process.
. Graph: a path with one or more nodes visited
more than once.
. Appear in many testing models.
. Recursion.
• Why is it important?
. Intrinsic complexity:
– coverage: how much?
– effectiveness concerns (above)
. Practical evidence: loop defects
. Usage in other testing.
Loop Examples
• Common loop examples (shown side by side on the original slide):
. "while" loops
. "for" loops
. other (structured) loops can be converted to these two forms
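A brief sketch of the two loop forms (Python used here purely for illustration); any structured loop can be rewritten in either form:

# A "for" loop ...
total = 0
for i in range(5):      # loop condition checked before each iteration
    total += i

# ... and an equivalent "while" loop with the same branching condition.
total = 0
i = 0
while i < 5:
    total += i
    i += 1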
Software Quality
Engineering
Lecture No. 14
Part- 2 (b):
Other Models & Techniques of
Testing
Summary of the previous lecture

n We discussed Control Flow Testing (CFT)

and Control Flow Graph (CFG).

n We explored details of CFG such as:

l Steps
l Construction
l Issues
l Methods
Outlines
n Data Dependency and Sequencing

n Data Dependency Analysis

n Data Dependency Graph (DDG)

l Why we need DDG; Characteristics and


Construction; Example
n CFT vs. DFT
Objectives

n To understand and distinguish between the need for CFT


and DFT

n To learn and understand the DDG


Dependency vs. Sequencing
• Sequencing:
. Represented in CFT “is-followed-by”
. Implicit: sequential statements
. Explicit: control statements & calls
. Apparent dependency:
– order of execution (sequential machine)
– but must follow that order?
• Dependency relations:
. Correct computational result?
. Correct sequence: dependencies
. Synchronization
. Must obey: essential
– captured by data flow/dependency
. PL/system imposed: accidental
– CFT, including loop testing
Data Dependency And Data Flow Testing
There are some difficulties when shared variables
instead of constants are involved in the decision
points, and analyses of these variable values were
performed to eliminate un-realizable paths.
In fact, the correlated decisions need not necessarily
involve the shared variable.
If some computation and assignments link variables
used in later decisions to those used in earlier ones,
the decisions are correlated.
The analysis of this and other data relations is the
subject of data dependency analysis (DDA) and the
verification of correct handling of such data relations
during program execution is the subject of data flow
testing (DFT).
Need for DFT
• Need other alternatives to CFT:
. CFT tests sequencing
– either implemented or perceived
. Dependency ≠ sequencing
. Other techniques are needed to test dependency

• Data flow testing (DFT)


. Data dependencies in computation
. Different models/representations
(traditionally/often as augmented CFT)
. DFT is not about exercising untouched data items within a program/module/etc.
. "data flow" may also refer to information passed along from one
component to another, which is different from DFT
. Key: dependency (not flow)
DFT: Data Operations

• Types of data operations


D: data definition through data creation, initialization, assignment, all
explicitly, or sometimes through side effects such as shared memory
locations, mailboxes, read/write parameters, etc. It is commonly
abbreviated as D-operation, or just D.
P-use: the use of variables or data items in CFG decisions is called P-use
in data dependency analysis to indicate their use in predicates or conditions.
C-use: the other kind of use is called C-use, or computational use.
DFT: Data Definitions

• Types of data definitions


Data definition through data creation, initialization, assignment, all
explicitly, or sometimes through side effects such as through shared
memory locations, mailboxes, read/write parameters, etc. It is commonly
abbreviated as D-operation, or just D.

Data use in general computation or in predicate, commonly referred to as


C-use or P-use. Both these types of uses are collectively called U-
operation, or just U. The key characteristic of the U-operation is that it is
non-destructive, that is, the value of the data item remains the same after
this operation. However, P-use of a data item in a predicate might affect
the execution path to be selected and followed. C-use of data items
usually occurs in the form of variables and constants in a computational
expression or as parameters in a program function or procedure
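A minimal sketch annotating these data operations (the function and its thresholds are invented for illustration, loosely echoing the discount example used later in the course):

def eligible_rate(age):          # D: parameter 'age' is defined on entry
    rate = 0                     # D: 'rate' defined by assignment
    if 18 <= age <= 56:          # P-use: 'age' used in a predicate
        bonus = age - 18         # C-use of 'age'; D: 'bonus' defined
        rate = 10 + bonus // 10  # C-use of 'bonus'; D: 'rate' redefined
    return rate                  # C-use: 'rate' used for the returned value

Note the two definitions of 'rate' with no use in between on the true branch, which is exactly the D-D situation discussed on the next slide.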
Pair-wise Ordinal Relations On The Same Data Objects

• Data Flow or Data Dependencies


. U-U: no effect or dependency
– therefore ignore
. D-U: normal usage case
– normal DFT
. D-D: overloading/masking
– no U in between ⇒ problems/defects?
(race conditions, inefficiency, etc.)
– implicit U: D-U, U-D
expand for conditionals/loops
. U-D: anti-usage
– substitute/ignore if sequential
– convert to other cases in loops

• Data dependency analysis may detect some


problems above immediately.
• DFT focuses on testing D-U relations.
DDG and DFT

• Data dependency graphs (DDGs): Computation result(s)


expressed in terms of input variables and constants via
intermediate nodes and links.
• DFT central steps (test preparation):
. Build and verify DDGs.
. Define and select data slices to cover. (Slice: all that is used to
define a data item.)
. Sensitize data slices.
. Plan for result checking.
• Other steps in DFT can follow standard testing steps for
planning and preparation, execution, analysis and follow-up.
DDG Elements
• Nodes in DDG: data definitions (D)
. Represent definitions of data items:
– typically variables and constants,
– also functional/structural components e.g.,
file/record/grouped-data/etc.
. Input/output/storage/processing nodes.
• Links: relating different D-nodes
. relation: is-used-by (D-U relation)
. an earlier D is used by a later D
• Conditional vs unconditional D’s:
. unconditional: directly link nodes
. conditional: use data selectors (later)
Example DDG Elements

. Unconditional definition example for z ← x + y

Data dependency graph (DDG) element: An example of data definition through


assignment

The links in DDGs represent the D-U relation, or “is used by”.
That is, if we have a link from A to B, we interpret it as that
the data defined in A is used to define the data in B
Example DDG Elements
. Conditional definition example with a data selector node
. parallel conditional assignment
. multi-valued data selector predicate
. match control and data in-link values

DDG element: An example of data selector node

The three possible values for the result r can be marked as r1, r2, and r3. The
final result r will be selected among these three values based on the condition on
d. Therefore, we can place r in a data selector node, connect r1, r2, and r3 to r
as data inlinks, and the condition node "d ? 0" to r as the control inlink. We
distinguish the control inlink from data inlinks by using a dotted instead of a solid
link. Only one value will be selected for r from the candidate values, r1, r2, and r3,
by matching their specific conditions to the control inlink evaluation result.
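A hedged sketch of the parallel conditional assignment behind a data selector node (the expressions for r1, r2, r3 and the exact predicate on d are invented for illustration):

def select_r(a, b, d):
    r1 = a + b          # D: candidate definition 1
    r2 = a - b          # D: candidate definition 2
    r3 = 0              # D: candidate definition 3
    # Data selector node for r: the control inlink is the multi-valued
    # predicate on d, and exactly one of the candidate data inlinks
    # (r1, r2, r3) is selected as the value of r.
    if d > 0:
        r = r1
    elif d == 0:
        r = r2
    else:
        r = r3
    return r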
DDG Characteristics and Construction

• Characteristics of DDG:
. Most non-terminal nodes have multiple inlinks.
. Focus: output variable(s)
– usually one or just a few
. More input variables and constants.
. Usually more complex than CFG
– usually contains more information
(omit non-essential sequencing info.)
• Source of modeling:
. White box: design/code (traditionally).
. Black box: specification (new usage).
. Backward data resolution
(often used as construction procedure.)
Building DDG

• Basic steps

. Identify output variable(s) (OV)


. Backward chaining to resolve OV:
– variables used in its computation
– identify D-U relations
– repeat above steps for other variables
– until all resolved as input/constants
. Handling conditional definitions in above.
Building DDG: An Example

(Figure on the original slide: a sample data flow graph.)

• Example: data selector

. Identify non-terminal nodes and resolve them,
until only inputs/constants are left (at the top of the graph)
DFT and Loops

• Essential vs nonessential loops:


. Essential: mostly nondeterministic
. Nonessential iteration/loops:
– most deterministic loops
– due to language/system limitations;
– example: sum over an array
• Loop testing in DFT:
. Treat loop as a computational node
. Unfold/unwind once or twice
. Similar to one or two if’s
. Test basic data relation but not all (loop) boundary values
Other Activities in DFT
• Default/random value setting
. Not affecting the slice
. But may affect other executions
. DFT slices have better separation and focus than CFT paths
. Automated support
• Outcome prediction:
only need relevant variables in the slice. (simpler than
CFT!)
• Path/slice sensitization
(similar to CFT, but more powerful and more work, so more need for
automated support).
DFT vs CFT

• Comparing with CFT:


. Independent models
. DFT closer to specification
(what result, not how to proceed)
. More complex, and more info.
⇒ limit data flow complexity
. Essential vs. accidental dependencies
. Loop handling limitations
• Combine CFT with DFT
. Use in hierarchical testing
. Nesting, inner CFT & outer DFT
. CFT for loops
(then collapse into a single node in DFT)
. Other combinations to focus on items of concern
DFT: Other Issues

• Applicability: (in addition to CFT)

. Synchronization.
. OO systems: abstraction hierarchies.
. Integration testing:
– communication/connections,
– call graphs.

• Need automated support:

. Graph models from (pseudo)programs


. Sensitization: default setting, etc.
. Path/slice verification
. Execution support
Summary of Today’s Lecture

n There are two basic elements in any computation or


information processing task: the data element and the control
element, which are organized together through some
implemented algorithms.
n In this lecture and the previous one, we extended the basic analysis
based on FSMs further to analyze and test the overall control
flow paths and the overall interactions among different data
items through control flow testing (CFT) and data flow testing
(DFT).
n The basis for CFT is the construction of control flow graphs
(CFGs) as a special type of FSMs and the related path
analysis. The basis for DFT is the data dependency analysis
(DDA) using data dependency graphs (DDGs).
Overview of Next lecture

n We will talk about Testing Techniques: Adaptation,

Specialization, and Integration.

n Specifically, we will talk about Adaptation to Test Sub-

phases; Specialized Testing Techniques; Integration


and Web Testing Case Study
The End
Software Quality
Engineering
Lecture No. 15
Part- 2 (b):
Other Models & Techniques of
Testing
Summary of the previous lecture

n Data Dependency and Sequencing

n Data Dependency Analysis

n Data Dependency Graph (DDG)

l Why we need DDG; its characteristics and


construction were discussed with an example
n CFT vs. DFT
Outlines

n Testing Techniques: Adaptation, Specialization, and


Integration.

n Adaptation to Test Sub-phases

n Specialized Testing Techniques


Objectives

n To understand and distinguish between different


Application and adaptation issues of testing

n To understand the needs of Testing Sub-Phases.


Applications of Testing Techniques
• Major testing techniques covered so far:
. Ad hoc (non-systematic) testing.
. Checklist-based testing.
. Partition-based coverage testing.
. Musa’s OP for UBST.
. Boundary testing (BT).
. Control flow testing (CFT).
. Data flow testing (DFT)
• Application and adaptation issues:
. For different purposes/goals.
. In different environments/sub-phases.
. Existing techniques: select/adapt.
. May need new or specialized techniques.
V-Model

n Solid box: original sub-phase


n Dashed box: added sub-phase or specialized testing
Testing Sub-Phases: V-Model

Testing sub-phases associated with the V-Model


Testing Sub-Phases: V-Model

n Original sub-phases in V-model:


. Operational use (not testing, strictly).
. System test for product specification.
. Integration test for high-level design.
. Component test for low-level design.
. Unit test for program code.
n Additional sub-phases/specialized testing:
. Diagnosis test through all sub-phases.
. Beta test for limited product release.
. Acceptance test for product release.
. Regression test for legacy products.
Unit Testing
n Key characteristics:
. Object: unit (implemented code)
– function/procedure/subroutine in C, FORTRAN, etc.
– method in OO languages
. Implementation detail ⇒ WBT. (BBT could be used,
but less often.)
. Exit: coverage (reliability undefined).
n Commonly used testing techniques:
. Ad hoc testing.
. Informal debugging.
. Input domain partition testing and BT.
. CFT and DFT.
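A minimal unit test sketch using Python's built-in unittest module (the unit under test is hypothetical); the test cases follow an input-domain partition of negative, zero and positive values:

import unittest

def absolute(x):
    """Unit under test: a single small function."""
    return -x if x < 0 else x

class AbsoluteTest(unittest.TestCase):
    def test_negative(self):
        self.assertEqual(absolute(-3), 3)

    def test_zero(self):                      # boundary between the partitions
        self.assertEqual(absolute(0), 0)

    def test_positive(self):
        self.assertEqual(absolute(5), 5)

if __name__ == "__main__":
    unittest.main()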
Component Testing
n Key characteristics:
. Object: component (⊃ unit), 2 types.
1. collection of units in C/FORTRAN/etc.
– implementation detail ⇒ WBT.
2. class in OO languages
– reusable component ⇒ BBT.
. Exit: coverage (sometimes reliability).
n Commonly used testing techniques:
. for traditional systems (component I) ≈ unit
testing, but at larger scale
. for OO systems (component II) ≈ system testing,
but at smaller scale
– see system testing techniques later in the slides
Summary of Today’s Lecture

n We explored what type of testing is suitable in which


type of environment.
n Not all types of testing are suitable universally
Overview of Next lecture

• System Testing, Acceptance Testing, Beta Testing


• Integration and Web Testing Case Study
• Defect Diagnose Testing
• Regression Testing
The End
Software Quality
Engineering
Lecture No. 16
Part- 2 (b):
Other Models & Techniques of
Testing
Summary of the previous lecture

n Overview of Different Software Testing

Techniques

n Identified the need of specific testing technique


for specific environment (software or a program)

n Unit & Component Testing

Key characteristics

Applications and adaptation


Outlines

n Integration Testing

n Regression Testing

n Beta Testing

n Testing Sub-Phases Comparison

n Existing Web Testing


Objectives

n To understand and distinguish between the


characteristics and applications of different software
testing (integration, regression, beta etc.)

n To understand and compare the need of specialized

testing (sub-phases).
Integration Testing

n Key characteristics:
.Object: interface and interaction among multiple
components or subsystems.
. Component as a black-box (assumed).
. System as a white-box (focus).
. Exit: coverage (sometimes reliability).
n Commonly used testing techniques:
. FSM-based coverage testing.
. Other techniques may also be used.
. Sometimes treated as ⊂ system testing ⇒ see
system testing techniques in next slide.
System Testing
n Key characteristics:
. Object: whole system and the overall operations,
typically from a customer’s perspective.
. No implementation detail ⇒ BBT.
. Customer perspective ⇒ UBST.
. Exit: reliability (sometimes coverage).
n Commonly used testing techniques:
. UBST with Musa or Markov OPs.
. High-level functional checklists.
. High-level FSM, possibly CFT & DFT.
. Special case: as part of a “super”-system in
embedded environment ⇒ test interaction with
environment.
Acceptance Testing

n Key characteristics:
. Object: whole system,
– but defect fixing no longer allowed.
. Customer acceptance in the market.
. Exit: reliability.
n Commonly used testing techniques:
. Repeated random sampling without defect fixing.
.UBST with Musa OPs.
. External testing services/organizations may be
used for system “certification”.
Beta Testing

n Key characteristics:
. Object: whole system
. Normal usage by customers.
. Exit: reliability.
n Commonly used testing techniques:
. Normal usage.
. Ad hoc testing by customers. (trying out different
functions/features)
. Diagnosis testing by testers/developers to fix
problems observed by customers.
Testing Sub-Phases: Comparison
n Key characteristics for comparison:
. Object and perspectives.
. Exit criteria.
. Who is performing the test.
. Major types of specific techniques.
n “Who” question not covered earlier:
. Dual role of programmers as testers in unit testing
and component testing I.
. Customers as testers in beta testing.
. Professional testers in other sub-phases.
. Possible 3rd party (IV&V) to test reusable
components & system acceptance.
Testing Sub-Phases: Summary
Specialized Testing

n Specialized testing tasks:


. Some do not fit into specific sub-phases.
. Different goals (other than reliability).
. Non-standard application environment.
n Our coverage:
. Defect diagnosis testing.
. Defect-based testing.
. Regression testing.
. Testing beyond programs.
. Testing for other goals/objectives.
Defect Diagnosis Testing
n • Context of defect diagnosis testing:
. In follow-up to discovered problems by customers or
during testing.
. Pre-test: understand/recreate problems.
. Test result: faults located.
. Follow-up with fault removal and re-run/re-test to
confirm defect fixing.
n • Defect diagnosis testing:
. Typically involve multiple related runs.
. Problem recreation as the starting point.
. Domain knowledge important.
. More recorded defect information ⇒ less reliance on
defect diagnosis.
. Defect-based techniques (below) useful.
Defect-Based Testing

n General idea and generic techniques:


. Focus: discovered or potential defects (and
related areas).
. Ad hoc testing based on defect guesses.
. Risk identification ⇒ risk-based testing.
. Defect injection and mutation testing.
n Defect injection and testing:
. Inject known defect (seed known fault).
. Test for both seeded and indigenous faults.
. Missed faults ⇒ testing technique↑.
. Also used in reliability modeling.
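A hedged sketch of defect injection (fault seeding) with a single mutant; the unit and the seeded fault are invented for illustration:

def max2(a, b):                  # unit under test
    return a if a >= b else b

def max2_mutant(a, b):           # seeded (injected) fault: comparison flipped
    return a if a <= b else b

tests = [((3, 5), 5), ((5, 3), 5), ((4, 4), 4)]

passed = all(max2(*args) == expected for args, expected in tests)
killed = any(max2_mutant(*args) != expected for args, expected in tests)
print(passed, killed)            # True True: the suite passes and kills the seeded fault

If the suite had missed the seeded fault, that would be a signal to strengthen the testing technique, as noted above.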
Regression Testing
n Context of regression testing:
. In software maintenance and support:
– ensure that a change does not have a negative impact (change ⇏ negative impact).
. In legacy software systems:
– ensure quality of remaining functions,
– during development/product update,
– new part ≈ new development,
– focus: integration sub-phase & after.
. Re-test to verify defect fixing as well as no unintended
consequences.
n Regression testing techniques:
. Specialized analysis of change (new part)
. Focused testing on new part.
. Integration of old and new
Other Specialized Testing
n • Testing beyond programs:
. Embedded and heterogeneous systems:
– test interactions with surroundings.
. Web testing.

n • Testing to achieve other goals:


. Performance testing;
. Stress testing;
. Usability testing, etc.
n • Dynamic analysis and related techniques:
. Simulation to reduce overall cost.
. Prototyping, particularly in early phases.
. Timing and sequencing analysis.
. Event-tree analysis (ETA),
Existing Web Testing
n • Web functionality testing: .
Focus on the web components
. HTML syntax checking via various tools.
. Link checking.
. Form testing.
. Verification of end-to-end transactions.
. Java and other program testing.
n • Beyond web functionality testing:
. Load testing.
. Usability testing.
. Browser rendering.
Hierarchical Web Testing

Implementation of the hierarchical web testing strategy:


Summary of Today’s Lecture

n We overviewed unit, component, system and regression


testing.
n We compared different sub-phases of testing
n Lastly, we explored the need of Hierarchical Web
Testing
Overview of Next lecture

n Use case based test case generation

n Equivalence Partitioning

n Boundary Value Analysis

Some examples will be explored


The End
Software Quality
Engineering
Lecture No. 17
Part- 2 (b):
Other Models & Techniques of
Testing
Summary of the previous lecture

n Testing Techniques: Adaptation, Specialization,

and Integration.

n Adaptation to Test Sub-phases

n Specialized Testing Techniques

n Integration and Web Testing Case Study


Outlines

n Use case based test case generation

n Equivalence Partitioning

n Boundary Value Analysis

Some examples will be explored.


Objectives

n To understand and distinguish between use case test


generation, Equivalence Partitioning and Boundary
Value Analysis

n To be able to test a use case and apply boundary

value analysis in real life.


Transforming Use cases into Test cases

n Step 1: Draw a Use Case Diagram


n Step 2: Write the Detailed Use Case Text
n Step 3: Identify Use Case Scenarios
n Step 4: Generating the Test Cases
n Step 5: Generating Test Data
Step 1
(Use case diagram on the original slide: the actor Student connected to the use cases Enroll, Change, and Drop.)
Step 2
(Detailed use case text shown on the original slides.)
Step 3: Identify Use Case Scenarios
• A use case scenario is an instance of a use case, or a complete “path”
through the use case.
• End users of a system can go down many paths as they execute the
functionality specified in the use case.
• The basic (or normal) path is illustrated by the dotted lines.
Step 4: Generate Test Case

• A test case is a set of test inputs, execution conditions,


and expected results developed for a particular
objective.
• Once the set of scenarios has been identified, the next
step is to identify the test cases.
• This is accomplished by analyzing the scenarios and
reviewing the use case textual descriptions.
• There should be at least one test case for each scenario.
• For each invalid test case, there should be only one
invalid input.
Step 4: Generate Test Case
• To document the test cases, a matrix format can be used.
• The first column of the first row contains the test case ID, and the second
column has a brief description of the test case and the scenario being
tested.
• All the other columns except the last one contain data elements that will be
used to implement the tests.
• The last column contains a description of the test case’s expected output.
The “V” depicts a valid test input, and an “I” depicts an invalid test input.
Step 4: Generate Test Case
Step 5: Generating Test Data

• Once all of the test cases have been identified, they


should be reviewed and validated to ensure accuracy
and to identify redundant or missing test cases.
• Then, once they are approved, the final step is to
substitute actual data values for the I’s and V’s.
• A test case matrix with values substituted for the I’s and
V’s in the previous matrix.
• A number of techniques can be used for identifying data
values.
• Two valuable techniques are Equivalence Class
Partitioning and Boundary Value Analysis.
Equivalence Partitioning #1

• It is very difficult, expensive and time consuming if not at


times impossible to test every single input value
combination for a system.
• We can break our set of test cases into sub-sets. We
then choose representative values for each subset and
ensure that we test these.
• Each subset of tests is thought of as a partition between
neighbouring subsets or domains.
Equivalence Partitioning #2
n Equivalence Partitioning:
l Makes use of the principle that software acts in a
general way (generalises) in the way it deals
with subsets of data,
l Selects Groups of inputs that will be expected to
be handled in the same way.
l Within a group of data, a representative input
can be selected for testing.
l For many professional testers this is fairly
intuitive.
l The approach formalises the technique allowing
an intuitive approach to become repeatable.
Equivalence Partitioning #3

n EP Example:
n Consider a requirement for a software system:

l “The customer is eligible for a life assurance discount


if they are at least 18 and no older than 56 years of
age.”

For the exercise only consider integer years.


Equivalence Partitioning #4

n “The customer is eligible for a life assurance discount if they are at


least 18 and no older than 56 years of age.”

Boundaries: 18 and 56

Invalid Partition: less than 18
Valid Partition: 18 to 56
Invalid Partition: greater than 56
Equivalence Partitioning #5
n What if our developer incorrectly interpreted the requirement as:
• “The customer is eligible for a life assurance discount if they are over 18
and less than 56 years of age.”
• People aged exactly 18 or exactly 56 would now not get a discount.
Boundaries: 18 and 56

Invalid Partition: 18 or less
Valid Partition: 19 to 55 (over 18 and under 56)
Invalid Partition: 56 or more

Errors are more common at boundary values, either just below, just above
or specifically on the boundary value.
Boundary Analysis #1

n “The customer is eligible for a life assurance discount if they are at least
18 and no older than 56 years of age.”

Boundaries: 17, 18, 19 and 55, 56, 57

Invalid Partition: less than 18
Valid Partition: 18 to 56
Invalid Partition: greater than 56

Test values would be: 17, 18, 19, 55, 56 and 57.
This assumes that we are dealing with integers, so the least significant
digit is 1 on either side of each boundary.
Boundary Analysis #2

n For each boundary we test +/- 1 in the least significant digit of either side
of the boundary.

Test values at each boundary: boundary limit - 1, the boundary limit itself,
and boundary limit + 1.

If the least significant digit were the second decimal place, then
the offsets above would be +/- 0.01.
Boundary Analysis #3
• While the textbooks may limit testing to the
boundaries, we are interested in how software
normally behaves and how it reacts to handling
error conditions. Therefore it is normal to treat
NOT ONLY the boundaries but also:
• A typical mid range value e.g. 37
• Zero (since divide by 0 errors can occur).
• Negative values
• Numbers out of range by a long way e.g. +/-1000
• Illegal data entries like “nineteen” as letters, Fred,
banana.
• Illegal characters such as # $ & ‘ @ : ;
Taking EP and BVA Further
n Consider the following requirement:
“The customers must be at least 18. Customers are eligible
for a life assurance discount of 40% if they are at least 18 and
no older than 25 years of age. Customers are entitled to a
30% discount if they are older than 25 years of age, but under
40. Customers are entitled to a 10% discount if they are 40 or
over, but no older than 56. Over 56 customers are not
entitled to a discount.”

n What are the equivalence partitions?


n What are the boundary values to be tested?
n What other values might you test?
Taking EP and BVA Further - Answer
“The customers must be at least 18. Customers are eligible for a
life assurance discount of 40% if they are at least 18 and no older
than 25 years of age. Age is only recorded in integer years.
Customers are entitled to a 30% discount if they are older than 25
years of age, but under 40. Customers are entitled to a 10%
discount if they are 40 or over, but no older than 56. Over 56
customers are not entitled to a discount.”

Boundary values to test: 17, 18, 19; 24, 25, 26; 38, 39, 40; 55, 56, 57

Partitions: invalid (under 18), 40% discount (18 to 25), 30% discount
(26 to 39), 10% discount (40 to 56), no discount (over 56)

Might also test: 0, -5, 200, Fred, 0.00000001, some typical mid-range
values: 21, 32, 47. Note boundary values are tested +/- the least
significant recorded digit.
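A minimal sketch of this requirement and its boundary tests (the function name and the error handling for invalid ages are assumptions for illustration):

def discount(age):
    if age < 18:
        raise ValueError("customers must be at least 18")
    if age <= 25:
        return 40
    if age < 40:
        return 30
    if age <= 56:
        return 10
    return 0

# Boundary value analysis: +/- 1 around each boundary, in integer years.
assert discount(18) == 40 and discount(19) == 40
assert discount(24) == 40 and discount(25) == 40 and discount(26) == 30
assert discount(38) == 30 and discount(39) == 30 and discount(40) == 10
assert discount(55) == 10 and discount(56) == 10 and discount(57) == 0

# One typical mid-range value per partition.
assert discount(21) == 40 and discount(32) == 30 and discount(47) == 10

# The invalid partition below the lower boundary.
try:
    discount(17)
except ValueError:
    pass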
Invalid Partitions

n Let’s consider some example


Question 1
n One of the fields on a form contains a text box which accepts numeric
values in the range of 18 to 25. Identify the invalid Equivalence class.
a) 17
b) 19
c) 25
d) 21
Solution 1
n The text box accepts numeric values in the range 18 to 25 (18 and 25 are
also part of the class). So this class becomes our valid class. But the
question is to identify invalid equivalence class. The classes will be as
follows:
Class I: values < 18 => invalid class
Class II: 18 to 25 => valid class
Class III: values > 25 => invalid class
17 falls under the invalid class; 19, 25 and 21 fall under the valid class.
So answer is ‘a’ (17)
Question 2
n In an Examination a candidate has to score minimum of 25 marks in order
to pass the exam. The maximum that he can score is 50 marks. Identify the
Valid Equivalence values if the student passes the exam.
a) 22,24,27
b) 21,39,40
c) 29,30,31
d) 0,15,22
Solution 2
n The classes will be as follows:
Class I: values < 25 => invalid class
Class II: 25 to 50 => valid class
Class III: values > 50 => invalid class
We have to identify Valid Equivalence values. Valid Equivalence values
will be in the Valid Equivalence class.
All the values should be in Class II.
So answer is ‘c’ ( 29,30,31)
Summary of Today’s Lecture

n We applied testing on use cases, Equivalence


Partitioning and tested the boundary values for some
cases.
Overview of Next lecture

n Software testing depends on good requirements, so it is


important to understand some of the key elements of
quality requirements.
n We will talk about Requirement Quality Factors such as
Understandable, Necessary, Modifiable, etc.
The End
Software Quality
Engineering
Lecture No. 18
Part- 2 (b):
Other Models & Techniques of
Testing
Summary of the previous lecture

n Use Case Test Generation


Step 1: Draw a Use Case Diagram
Step 2: Write the Detailed Use Case Text
Step 3: Identify Use Case Scenarios
Step 4: Generating the Test Cases
Step 5: Generating Test Data

n Which testing technique to be applied / used

n Equalince Partitioning & Boundry Value Analysis


Outlines

n Testing Vs Debugging

n System Testing Process

n Test Execution and Fault Reports

n Bug Life cycle


Objectives

n To understand and distinguish between Testing Vs


Debugging

n To understand the bug life cycle and learn the stages


to resolve errors.
Introduction
n Software Testing

Computer programs are designed and developed by


human beings and hence are prone to errors.
Unchecked, they can lead to a lot of problems,
including social implications.
Testing the software becomes an essential part of the
software development lifecycle.
Carrying out the testing activities for projects has to be
practiced with proper planning and must be implemented
correctly.
Basic Questions on Testing
Why to test?
testing becomes absolutely essential to make sure the software
works properly and does the work that it is meant to perform.

What to test?
Any working product which forms part of the software
application has to be tested. Both data and programs must be
tested.

How often to test?


When a program (source code) is modified or newly developed,
it has to be tested.

Who tests?
Programmer, Tester and Customer
Software Development Lifecycle (SDLC)

n Inception
n Requirements
n Design
n Coding
n Testing
n Release
n Maintenance
Inception

n Request for Proposal


n Proposal
n Negotiation
n Letter Of Intent (LOI) – some companies may do this along with a
feasibility study to ensure that everything is correct, before signing contract
n Contract
Requirements
User Requirements Specification (URS)

This document will describe in detail about what is expected


out of the software product from the user's perspective.
The wording of this document will be in the same tone as that of a
user.

Software Requirements Specification (SRS)

A team of business analysts, who are having a very good


domain or functional expertise, will go to the clients place and
get to know the activities that are to be automated and prepare
a document based on URS and it is called as SRS
Design

High Level Design (HLD)


List of modules and a brief description of each module.
Brief functionality of each module.
Interface relationship among modules
Dependencies between modules (e.g., if A exists, B exists, etc.)
Database tables identified along with key element.
Overall architecture diagrams along with technology details.

Low Level Design (LLD)


Detailed functional logic of the module, in pseudo code.
Database tables, with all elements, including their type and size.
All interface details with complete API references (both requests and responses).
All dependency issues; error message listings.
Complete inputs and outputs for a module.
Coding

n Converting the pseudo code into a programming language in the specified


platform

n Guidelines to be followed for the naming convention of procedures,


variables, commenting methods etc

n By compiling and correcting the errors, all syntax errors are removed.
Testing Levels

n Unit Testing
Programs will be tested at unit level
The same developer will do the test

Integration Testing
When all the individual program units are tested in the unit testing phase and all units
are clear of any known bugs, the interfaces between those modules will be tested
Ensure that data flows from one piece to another piece

System Testing
After all the interfaces are tested between multiple modules, the whole set of
software is tested to establish that all modules work together correctly as an
application.
Put all pieces together and test

Acceptance Testing
The client will test it, in their place, in a near-real-time or simulated environment.
Release to Production and Warranty Period

n When the clients do the acceptance testing and find no


problems, they will accept the software and start
using the software in their real office
environment.

n Bug Fixes during the warranty period – we cannot


charge the customer for this

n Go Live Process means the product is used on live


servers
Maintenance Phase

Bug fixing
Upgrade
Enhancement

v After some time, the software may become obsolete and will
reach a point that it cannot be used. At that time, it will
be replaced by another software which is superior to that. This
is the end of the software
v We do not use FoxPro or Windows 3.1 now as they are gone!
Development Models
n Water Fall Model – do one phase at a time for all requirements given by
customer
Development Models
n Incremental Model – take smaller set of requirements and build slowly
Development Models

Extreme Programming Model – take only one piece and develop!


Testing Vs Debugging

Testing is focused on identifying the problems in the


product
Done by Tester
Need not know the source code

Debugging is to make sure that the bugs are removed or


fixed
Done by Developer
Need to know the source Code
System Testing Process
n Plan
l Create master test plan (MTP) – done by test manager or test lead
l Create Detailed Test Plan (what to test) – by testers – this will contain
test scenarios also known as test conditions
l Create Detailed Test Cases (DTC) – how to test – by testers
n Execute
n Regress and Analyze
Detailed Test Plan

n What is to be tested ?
l Configuration – check all parts for existence
l Security – how the safety measures work
l Functionality – the requirements
l Performance – with more users and more data
l Environment – keep product same but other settings different
Detailed Test Cases
The test cases will have a generic format as
below.
Test Case Id
Test Case Description
Test Prerequisite
Test Inputs
Test Steps
Expected Results
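A hedged sketch of how this generic format could be captured as a record in a test tool or script (the field values are hypothetical; the fields themselves follow the list above):

from dataclasses import dataclass
from typing import List

@dataclass
class TestCase:
    test_case_id: str          # Test Case Id
    description: str           # Test Case Description
    prerequisite: str          # Test Prerequisite
    inputs: List[str]          # Test Inputs
    steps: List[str]           # Test Steps
    expected_results: str      # Expected Results

tc = TestCase(
    test_case_id="TC-001",
    description="Enroll a student with valid data",
    prerequisite="Student record exists",
    inputs=["student_id=1001", "course=CSE302"],
    steps=["Open the enrollment form", "Enter the inputs", "Submit"],
    expected_results="Enrollment confirmation is displayed",
)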
Detailed Test Case (DTC)

Simple Functionality – field level


Communicative Functionality – data on one screen goes to another
End-to-End Test Cases – full sequence as though the end users carry out
Test Execution and Fault Reports

n Test Case Assignment – done by test lead


n Test Environment Set-up – install OS, database,
applications
n Test Data Preparation – what kind of data to be used
n Actual Test Execution – do it!
Test Environment Set-up
There must be no development tools installed in
a test bed.
Ensure the right OS and service pack/patch
installed.
Ensure the disks have enough space for the
application
Carry out a virus check if needed.
Ensure the integrity of the web server.
Ensure the integrity of the database servers.
Test Data Preparation

This data can be identified either at the time of


writing the test case itself or just before executing the
test cases.
Data that are very much static can be identified while
writing the test case itself.
Data which are dynamic and configurable need more
analysis before preparation.
Preparation of test data depends upon the
functionality that is being tested.
Actual Test Execution

Install Tests
n Auto install in default mode
n Does the installer check for the prerequisites?
n Does the installer check for the system user
privileges?
n Does the installer check for disk and memory space?
n Does the installer check for the license agreement ?
n Does the installer check for the right product key?
n Does the installer install in the default path?
n Do we have different install types, like custom, full,
and compact?
Install Tests continued..
Cancel the installation half way thru.
Uninstall the software.
Cancel half way thru un-install.
Reinstall on the same machine. Repair an existing install on the same
machine.
Does installer create folders, icons, short cuts, files, database, registry
entries?
Does uninstall remove any other files that do not belong to this
product?
Actual Test Execution
Navigation Tests
Once install complete, start the application
Move to every possible screen using menus, tool bar icons, short cut
keys, or links.
Check for respective screen titles and screen fields for the existence.
Move back and forth from various screens to other forms in an ad hoc manner
Exit the application and restart the application many times
Core Functional Test
Build Verification Tests (BVT)
A set of test scenarios/cases must be identified as critical priority, such
that, if these tests do not work, the product does not get acceptance
from the test team.
Build Acceptance Tests (BAT)
This starts once the BVT is done. This involves feeding the values to the
program as per the test input and then performing the other actions (like
clicking specific buttons or function keys, etc.) in the sequence as given in the
test steps.
Test Problem Report or Fault Report or Bug
Report

TPR Id – A unique identifier across the company
TPR Description – A brief description of the problem
Date – The date on which the TPR is raised
Author – The tester who raised the TPR
Test Case Id – The test case that caused this TPR to be raised
Software Version/Build – The version number of the software that was tested and found faulty
Problem Severity – Show stopper/High/Medium/Low; agreed by the lead tester and the development
project manager
Priority – High/Medium/Low; how soon to fix?
Problem Detailed Description – A description of what was tested and what happened; filled in by
the tester
Problem Resolution – After fixing the problem, the developer fills in this section with details
about the fix
Assigned To – To whom the TPR is assigned to be fixed
Expected Closure Date – When the problem is expected to be closed
Actual Closure Date – When the problem is actually rectified and closed
TPR Status – A changing field reflecting the current status of the TPR
Bug Life Cycle
• New
• Open
• In-Fix
• Fix-Complete
• In-Retest
• Retest-Complete
• Closed (if the retest passes) or back to Open (if it fails)
• Do it (fix and retest) until solved
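A hedged sketch of these states and their allowed transitions (the encoding is an assumption for illustration, not tied to any particular bug-tracking tool):

BUG_TRANSITIONS = {
    "New":             ["Open"],
    "Open":            ["In-Fix"],
    "In-Fix":          ["Fix-Complete"],
    "Fix-Complete":    ["In-Retest"],
    "In-Retest":       ["Retest-Complete"],
    # Retest passes -> Closed; retest fails -> back to Open ("do it until solved").
    "Retest-Complete": ["Closed", "Open"],
    "Closed":          [],
}

def move(state, next_state):
    if next_state not in BUG_TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {next_state}")
    return next_state

state = "New"
for nxt in ["Open", "In-Fix", "Fix-Complete", "In-Retest", "Retest-Complete", "Closed"]:
    state = move(state, nxt)   # walks one pass of the life cycle without a failed retest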

Bug Tracking Tools

n Softsmith’s QAMonitor and SPCG


n Bugzilla
n HP Quality Center
n JIRA
Summary of Today’s Lecture

n We talked about software testing and debugging and


have explored different details which are applied on
different phases of software testing
The End
Software Quality
Engineering
Lecture No. 19
Summary of the previous lecture

n Testing Vs Debugging

n System Testing Process

n Test Execution and Fault Reports

n Bug Life cycle

n We also covered Test Management, Test

Automation and Test Integration in Parts 2(c), (d)


and (e) of the course, respectively.
Part- 3:
Quality Assurance Techniques
Outlines

n Defect Prevention & Process Improvement

• Defect prevention approaches

• Error blocking

• Error source removal

• Process improvement
Objectives

n To understand and distinguish between defect


prevention and process improvement

n To understand blocking and source removal of the


errors.
QA Alternatives

n • Defect and QA:


. Defect: error/fault/failure.
. Defect prevention/removal/containment.
. Map to major QA activities
n • Defect prevention (this lecture):
– Error source removal & error blocking
n • Defect removal: Inspection/testing/etc.
n • Defect containment: Fault tolerance and failure
containment (safety assurance).
Generic Ways for Defect Prevention
n • Error blocking
. Error: missing/incorrect actions
. Direct intervention
. Error blocked ⇒ fault injections prevented
(or errors tolerated)
. Rely on technology/tools/etc.
n • Error source removal
. Root cause analysis ⇒ identify error
sources
. Removal through education/training/etc.
Defect Prevention: Why and How?
n • Major factors in favor of defect prevention:
. Super-linear defect cost↑ over time
– early faults: chain-effect/propagation
– difficulty to fix remote (early) faults
– in-field problems: cost↑ significantly
. Other QA techniques for later phases
– even inspection after defect injection

n • Basis for defect prevention: Causal and risk


analysis
. Analyze pervasive defects
. Cause identification and fixing
. Risk analysis to focus/zoom-in
Defect Cause and Actions
n • Types of causal analyses:
. Logical (root cause) analysis by expert for
individual defects and defect groups
. Statistical (risk) analysis for large data sets with
multiple attributes
– Model: predictor variables ⇒ defects
. Cause(s) identified via either variation
n • Actions for identified causes:
. Remedial actions for current product
. Preventive actions for future products:
– negate causes or pre-conditions
Common Causes/Preventive Actions
n • Education/training to correct human misconceptions as
error sources:
. Product/domain knowledge,
. Development methodology,
. Development process, etc.
. Act to remove error sources
. Cause identification: mostly through root cause
analysis.
n • Formal methods:
. Formal specification: to eliminate imprecision in
design/implementation. (error source removal)
. Formally verify fault absence.
Common Causes/Preventive Actions
n • Technologies/tools/standards/etc.:
. Based on empirical evidence
. Proper selection and consistent usage or
enforcement
. More error blocking than error source removal
. Cause identification: mostly statistical
n • Process improvement:
. Integration of many factors in processes
. Based on empirical evidence or logic
. Define/select/enforce
. Helping both error blocking and error source removal
. Cause identification: often implicit
Education and Training

n • People: most important factor to quality


n • Development methodology knowledge:
. Solid CS and SE education
. Methodology/process/tools/etc.
n • Product/domain knowledge:
. Industry/segment specific knowledge
. Type of products: new vs. legacy, etc.
– legacy product: inter-operability
. General product environment, etc.
n • Means of delivery: formal and informal
education + on-the-job training.
Other Techniques
n • Appropriate software technologies:
. Cleanroom: formal verification + statistical testing
. Other technologies: CBSE, COTS, etc.
n • Appropriate standards/guidelines:
. Mis-understanding/miscommunication↓
. Empirical evidence for effectiveness
. Appropriate scope and formality
n • Effective methodologies:
. As package technologies/std/tools/etc.
. Empirical evidence
. Match to the specific product domain
Tools for Error Blocking
n • Programming language/environment tools:
. Syntax-directed editor to match pairs.
. Syntax checker/enforcer.
. General tools for coding standards, etc.
n • Other tools:
. Design/code and version control
– examples: CMVC, CVS, etc.
. Tools for indiv. development activities:
– testing tools,
– requirement solicitation tools,
– design automation tools, etc.

n • General tools or tool suites for certain


methodologies, e.g., Rational Rose.
Process Improvement
n • Integration of individual pieces for defect
prevention ⇒ process improvement
n • Selecting appropriate development processes:
. Process characteristics and capability
. Match to specific product environment
. Consideration of culture/experience/etc.
n • Process definition and customization
. Adapt to specific project environment . e.g., IBM’s
PPA from Waterfall
n • Process enforcement and ISO/9000:
. “say what you do”
. “do what you say”
. “show me”
Process Maturity for Improvement
n • Focus on defect prevention
. Maturity level: focus/key practice area
1. Ad hoc: competent people/heroics
2. Repeatable: project management processes
3. Defined: engineering processes/organizational support
4. Managed: product/process quality
5. Optimizing: continuous process improvement
. Expectation: maturity↑ ⇒ quality↑
n • Other process maturity work
. SPICE (Software Process Improvement and
Capability dEtermination)
– international effort
– assessment, trial, and technology transfer
TAME: Process/Quality Improvement
n • QIP: Quality Improvement Paradigm
. understand baseline
. intro. process change and assess impact
. package above for infusion
n • GQM: goals/questions/metrics paradigm
. goal-driven activities
. questions related to goals
. metrics to answer questions
n • EF: experience factory
. separation of concerns
. EF separate from product organization
. form a feedback/improvement loop
Conclusions
n • Key advantages:
. Significant savings if applicable:
– avoid downstream problems
. Directly affects the important people factor
. Promising tools, methodologies, etc.
. Process improvement: long-lasting and wide-impact
n • Key limitations:
. Limited to known causes of pervasive problems
. Difficulties analyzing complex problems
. Difficulties with a changing environment
. Hard to automate
. Process quality ≠ product quality
Summary of Today’s Lecture

n We proceeded towards the third part of the course,


i.e., Quality Assurance Techniques. We specifically
discussed

Defect prevention approaches

Error blocking

Error source removal

Process improvement
Overview of the next lecture

n We will talk about Inspection. Specifically, we


will explore
Basic Concept and Generic Process for
Software Inspection
Fagan Inspection
The End
Software Quality
Engineering
Lecture No. 20
Part- 3:
Quality Assurance Techniques
Summary of the previous lecture

n Defect Prevention & Process Improvement

• Defect prevention approaches

• Error blocking

• Error source removal

• Process improvement
Outlines

n Software Inspection

Basic Concept and Generic Process

Inspection Process Variations

Fagan Inspection
Objectives

n To understand and distinguish between Basic


Concept and Generic Process

n To understand Inspection Process Variations and


Fagan inspection
QA Alternatives

n • Defect and QA:


. Defect: error/fault/failure.
. Defect prevention/removal/containment.
. Map to major QA activities
n • Defect prevention:
Error source removal & error blocking
n • Defect removal: Inspection (this lecture).
n • Defect containment: Fault tolerance and failure
containment (safety assurance).
Inspection as Part of QA
n • Throughout the software process
. Coding phase: code inspection
. Design phase: design inspection
. Inspection in other phases and at transitions from
one phase to another
n • Many different software artifacts:
. program code, typically
. requirement/design/other documents
. charts/models/diagrams/tables/etc.
n • Other characteristics:
. People focus.
. Not waiting for implemented system.
. Complementary to other QA activities.
Generic Inspection Process

n 1. Planning and preparation (individual)


n 2. Collection (group/meeting)
n 3. Repair (follow-up)
Inspection Process Variations
n • Overall planning:
. who? team organization/size/roles/etc.
. what? inspection objects
. objectives?
. number/coordination of multiple sessions?
n • Technique
. for preparation (individual inspection)
. for collection
n • What to do with defects?
. always: detect and confirm defects
. classify/analyze defects for feedback?
n • Use of post-collection feedback?
Fagan Inspection
n • General description
. Earliest, Fagan at IBM
. Lead to other variations
. Generic process and steps
n • Six steps of Fagan inspection:
1. Planning
2. Overview (1-to-n meeting)
3. Preparation (individual inspection)
4. Inspection (n-to-n meeting)
5. Rework
6. Follow-up.
Fagan Inspection
1. Planning
• Entry criteria: what to inspect
• Team size: about 4 persons
• Developers/testers from similar projects
• Effectiveness concerns (assumptions)
• Inspectors not authors
2. Overview
• Author-inspectors meeting
• General background information
• functional/structural/info., intentions
• Assign individual tasks:
• coverage of important areas
• moderate overlap
Fagan Inspection

3. Preparation or individual inspection


• Independent analysis/examination
• Code as well as other document
• Individual results:
• – questions/guesses
• – potential defects
Fagan Inspection

4. Inspection (generic: collection)


. Meeting to collect/consolidate individual
inspection results
. Team leader/meeting moderator
. Reader/presenter: summarize/paraphrase for
individual pieces (assignment)
. Defect identification, but not solutions, to ensure
inspection effectiveness
. No more than 2 hours
. Inspection report
Fagan Inspection
5. Rework
. Author’s response
. Defect fixing (solutions)
6. Follow-up
. Resolution verification by moderator
. Re-inspection?

• Fagan inspection in practice


. Widely used in industry
. Evaluation studies
. Variations and other inspections
Fagan Inspection: Findings
n Importance of preparation:
. Most defect detected
. Meetings to consolidate defects
. ⇒ alternatives focusing on preparation.
n Other important findings:
. Important role of the moderator
. Team size and #sessions tailored to env.
. Prefer systematic detection techniques to
ad-hoc ones
. More use of inspection feedback/analysis
Other Inspection Methods
n Variations to Fagan inspection: size/scope and
formality variations.
n Alternative inspection techniques/processes:
. Two-person inspection
. Meeting-less inspections
. Gilb inspection
. Phased inspections
. N-fold inspections
. Informal check/review/walkthrough
. Active design reviews
. Inspection for program correctness
. Code reading
. Code reading with stepwise abstraction
Summary of Today’s Lecture

n We talked about Software Inspection and

discussed Basic Concept and Generic


Process; Inspection Process Variations and
Fagan Inspection
We also discussed the six-step process of
Fagan Inspection, i.e., planning, overview,
preparation, inspection, rework and follow-up.
Overview of the next lecture

n We will talk about Reduced Size/Scope


Inspection
n Gilb Inspection (Expanded Fagan)
n Formal Inspection: Code Reading
The End
Software Quality
Engineering
Lecture No. 21
Part- 3:
Quality Assurance Techniques
Summary of the previous lecture

n We discussed Software Inspection

Basic Concept and Generic Process

Inspection Process Variations

Fagan Inspection
Outlines

n Software Inspection

Reduced Size/Scope Inspection

Gilb Inspection (Expanded Fagan)

Other Inspection and Related Activities

Other Issues
Objectives

n To understand and distinguish between informal


software inspection and formal software inspection

n To be able to inspect the software for program


correctness.
Reduced Size/Scope Inspection
n Two-person inspection
Fagan inspection simplified
Author-inspector pair which is mutually
beneficial
Smaller scale program
n Meeting-less inspections
Importance of preparation (individual insp.)
(most defects found during preparation)
Empirical (observed) evidence
1-on-1 instead of team meetings (or other
feedback mechanisms)
Gilb Inspection (Expanded Fagan)
n Key: A “process brainstorming” meeting
root cause analysis
right after inspection meeting
parallel to edit (rework)
aim at preventive actions/improvement
n Other characteristics
Clearly identified input, checklists/rules extensively used
Output include change request and suggested process
improvement, in addition to inspected documents.
Team size: 4-6 people.
. More emphasis on the
feedback loop: resembles the SQE process
Other Expanded Fagan Inspections
n Phased inspections
Expand Fagan inspection
Multiple phases/meetings
Each on a specific area/problem-type
Dynamic team make-up
n N-fold inspections
Idea similar to NVP (N-Version Programming)
N parallel inspections, 1 moderator
Duplications ⇒ cost↑
Informal Inspection
n Desk check (self-conducted):
Should focus on conceptual problems
Use tools for problems with syntax / spelling / format etc.
n Informal review (by others):
Similar to desk check, but by others
Benefit from independent/orthogonal views
Group reviews for phase transitions
n Walkthroughs:
More organized, but still informal
Leading role of author/moderator
Less preparation by other participants than in inspection
Formal Inspection: Code Reading

A program segment (left) and its permutation (right)


n Program comprehension (understanding):
a program (left) and its permutation (its change) (right)
different effort in comprehension
different recall accuracy
experience factor (expert vs beginner)
n Related to top-down design and code reading/abstraction
(bottom-up)
Formal Inspection: Code Reading

n Code reading
. focus on code
. optional meetings
n Code reading by stepwise abstraction
. basis: program comprehension studies
. variation to code reading
– formalized code reading technique
. top-down decomposition and bottom-up abstraction
. recent evidence of effectiveness
Formal Inspection: ADR & Correctness
§ Active design reviews (ADR)
§ . Another formal inspection, for designs
§ . Inspector active vs. passive
§ . Author prepares questionnaires
§ . More than one meeting
§ . Scenario based (questionnaires)
§ . Overall ADR divided into small ones
§ . 2-4 persons (for each smaller ADR)

§ Inspection for program correctness


§ . Correctness (vs. questionnaire) of:
– topology (decomposition, hierarchy)
– algebra (equivalence of refinements)
– invariance (variable relations)
– robustness (error handling)
§ . Close to formal verification
Extending Inspection: Analysis
n Inspection as analysis
. Program/document/etc. analysis
. Inspection as static analysis
. Testing as dynamic analysis
n Other analyses
. Static: algorithm, decision table, boundary value,
control flow, data flow, etc.
. Dynamic: symbolic execution, simulation,
prototyping, timing, in-field execution, etc.
Defect Detection Techniques
n • Ad-hoc vs. systematic ones below:
checklist-/scenario-/abstraction-based.
n • Checklist-based inspection:
. Similar to testing checklists
. Basic types: artifact-/property-based.
n • Scenario-based inspection:
. Similar to usage-based testing.
. Scenarios tie multiple components together.
. More a usage/external view.
. Suitable for OOS.
n • Abstraction-based inspection: Similar to code
reading with stepwise abstraction.
Implementation and Effectiveness
n • Implementation support:
. Process and communication support
. Repository management tools
. Defect tracking and analysis as follow-up
. Still human intensive
n • Effectiveness studies
. Measurement: defect or effort
. Defect detection technique important
. Inspector skills/expertise also important
. Many individual variations
Conclusions

n • Key advantages:
. Wide applicability and early availability
. Complementary to testing/other QA
. Many techniques/process to follow/adapt
. Effective under many circumstances
n • Key limitations:
. Human intensive
. Dynamic/complex problems and interactions:
Hard to track/analyze.
. Hard to automate.
Summary of Today’s Lecture

n We continued our discussion on Software Inspection. We explored Scope Inspection and Gilb Inspection, which is an expanded version of Fagan inspection.
n We also highlighted Active design reviews (ADR).
Overview of the next lecture

n We will talk about QA and Formal Verification.
n The general idea and approaches, and axiomatic verification, will form part of our next lecture discussion.
The End
Software Quality
Engineering
Lecture No. 22
Part- 3:
Quality Assurance Techniques
Summary of the previous lecture

n We continued our discussion on Software

Inspection. We explored following:

Scope Inspection

Gilb Inspection (Expanded Fagan)

Other Inspection and Related Activities

Other Issues
Outlines

n In today’s lecture, we will talk about Formal

Verification of a software.

n The General idea and approaches for formal


verification will be discussed.

n We will also highlight Axiomatic verification

and other approaches for software verification


Objectives

n To understand and distinguish between Formal


Inspection and Formal Verification of the software.

n Ability to understand and apply the knowledge of formal verification to real-life software problems.
QA Alternatives
n Defect and QA:
. Defect: error/fault/failure.
. Defect prevention/removal/containment.
. Map to major QA activities
n Defect prevention:
Error source removal & error blocking
n Defect removal: Inspection (Lecture No. 21)
n Defect containment: Fault tolerance and failure
containment (safety assurance)
n Special case (this lecture): Formal verification (&
formal specification)
QA and Formal Verification
n • Formal methods = formal specification + formal
verification
n • Formal specification (FS):
. As part of defect prevention
. Formal ⇒ prevent/reduce defect injection due to
imprecision, ambiguity, etc.
. Briefly covered as related to FV.
n • Formal verification (FV):
. As part of QA, but focus on positive: “Prove absence of
fault”
. People intensive
. Several commonly used approaches
Formal Specification: Ideas
n Formal specification:
. Correctness focus
. Different levels of details
. 3Cs: complete, clear, consistent
. Two types: descriptive & behavioral
n Descriptive formal specifications:
. Logic: pre-/post-conditions.
. Math functions
. Notations and language support: Z, VDM, etc.
n Behavioral formal specifications: FSM, Petri-Net,
etc.
Behavioral formal specifications

n A Petri net is a directed bipartite graph, in which the nodes represent transitions (i.e. events that may occur, signified by bars) and places (i.e. conditions, signified by circles). The directed arcs describe which places are pre- and/or post-conditions for which transitions (signified by arrows).
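To make the firing rule concrete, here is a minimal Python sketch (my illustration, not from the lecture or the textbook); the producer/consumer places and transitions are a hypothetical example.

# Minimal Petri net sketch: places hold tokens, a transition fires when
# every input place has at least one token, consuming and producing tokens.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)          # place -> token count
        self.transitions = {}                 # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1                              # consume pre-condition tokens
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1      # add post-condition tokens

# Hypothetical producer/consumer net: "produce" needs a free buffer slot,
# "consume" needs a full slot.
net = PetriNet({"free": 1, "full": 0})
net.add_transition("produce", inputs=["free"], outputs=["full"])
net.add_transition("consume", inputs=["full"], outputs=["free"])

net.fire("produce")
print(net.marking)              # {'free': 0, 'full': 1}
print(net.enabled("produce"))   # False: no free slot until "consume" fires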
Formal Verification: Ideas
n • “Testing shows the presence of errors, not their
absence.” — Dijkstra
n • Formal verification: proof of correctness
. Formal specs: as pre/post-conditions
. Axioms for components or functional units
. Composition (bottom-up, chaining)
. Development and verification together
n • Other related approaches:
. Semi-formal verification
. Model checking
. Inspection for correctness
Formal Verification Basics

n • Basic approaches:
. Floyd/Hoare axiomatic
. Dijkstra/Gries weakest precondition (WP)
. Mills’ program calculus/functional approach
n • Basis for verification:
. logic (axiomatic and WP)
. mathematical function (Mills)
. other formalisms
n • Procedures/steps used:
. bottom-up (axiomatic)
. backward chaining (WP)
. forward composition (Mills), etc.
Object and General Approach
n • Basic block: statements
. block (begin/end)
. concatenation (S1; S2)
. conditional (if-then/if-then-else)
. loop (while)
. assignment
n • Formal verification
. rules for above units
. composition
. connectors (logical consequences)
Axiomatic Approach

n • Floyd axioms/flowchart
. Annotation on flowchart
. Logical relations
. Verification using logic
n • Hoare axioms/formalization
. Pre/Post conditions
. Composition (bottom-up)
. Loops and functions/parameters
. Invariants (loops, functions)
. Basis for many later approaches
Axiomatic Correctness
n • Notations
. Statements: Si
. Logical conditions: {P} etc.
. Schema: {P} S {Q}
. Axioms/rules: conditions or schemas
conclusion
n • Axioms:
. Schema for assignment
. Basic statement types
. “Connectors”
. Loop invariant
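To make the {P} S {Q} schema and the connectors concrete, here is a small illustration (mine, not the book's): the assignment axiom and the composition rule are stated in comments, and runtime assertions merely sample the resulting triple.

# Illustration of the {P} S {Q} schema: assignment axiom and composition rule.
# Assignment axiom: {Q with e substituted for x}  x := e  {Q}
#   {x == a}      y = x + 1   {y == a + 1}
#   {y == a + 1}  z = 2 * y   {z == 2 * a + 2}
# Composition ("connector"): from {P} S1 {R} and {R} S2 {Q}, conclude {P} S1; S2 {Q}.

def check_triple(a):
    x = a
    assert x == a            # precondition P
    y = x + 1
    assert y == a + 1        # intermediate condition R (from the assignment axiom)
    z = 2 * y
    assert z == 2 * a + 2    # postcondition Q (by composition)
    return z

for a in (-3, 0, 7):
    check_triple(a)          # runtime checks only sample the proof; they do not replace it
print("all sampled triples hold")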
Functional Approach

n • Functional approach
. Mills’ program calculus
. Symbolic execution extensively used.
. Code reading/chunking/cognition ideas.
n • Functional approach elements
. Mills box notation
. Basic function associated with individual
statements
. Compositional rules
. Forward flow/symbolic execution
. Comparison with Dijkstra's wp
Functional Approach: Symbolic Execution

n Example: if x ≥ 0 then y ← x else y ← −x
. Trace 1 follows the branch where x ≥ 0; Trace 2 follows the branch where x < 0
. Both traces are used in verification: in either case the result is y = |x|
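A minimal Python sketch (my own illustration) of the forward symbolic execution used above: a symbolic state maps variables to expressions, and the if-statement forks into the two traces.

# Tiny symbolic executor for:  if x >= 0 then y := x else y := -x
# A symbolic state maps variables to expressions (strings); an if-statement
# forks execution into one trace per branch, each with its own path condition.

def execute(stmt, state, path_condition):
    kind = stmt[0]
    if kind == "assign":                       # ("assign", var, expr)
        _, var, expr = stmt
        new_state = dict(state, **{var: expr})
        return [(path_condition, new_state)]
    if kind == "if":                           # ("if", cond, then_stmt, else_stmt)
        _, cond, then_stmt, else_stmt = stmt
        then_traces = execute(then_stmt, state, path_condition + [cond])
        else_traces = execute(else_stmt, state, path_condition + [f"not ({cond})"])
        return then_traces + else_traces
    raise ValueError(f"unknown statement kind: {kind}")

program = ("if", "x >= 0", ("assign", "y", "x"), ("assign", "y", "-x"))
for i, (pc, st) in enumerate(execute(program, {"x": "x"}, []), start=1):
    print(f"Trace {i}: path condition {pc}, y = {st['y']}")
# Trace 1: path condition ['x >= 0'], y = x        -> y = |x|
# Trace 2: path condition ['not (x >= 0)'], y = -x -> y = |x|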
Formal Verification: Limitations

n • Seven myths about formal methods (FM):


. Guarantee that software is perfect.
. They work by proving correctness.
. Only highly critical system benefits.
. FM involve complex mathematics.
. FM increase cost of development.
. They are incomprehensible to client.
. Nobody uses them for real projects.
n • However, some quantified validity ⇒ alternative FV
methods.
Other Models/Approaches
n Making FV more easily/widely usable.
n • Other models for formal verification:
. State machines and model checking.
. Algebraic data spec/verification.
. Petri nets, etc.
. Related checking/proof procedures.
n • General assessment
. More advantages & reduced limitations.
. Formal analysis vs. verification.
. May lead to additional automation.
. Hybrid methods.
. Adaptation and semi-formal methods.
Formal Verification: Other

n • Algebraic specification/verification:
. Specify and verify data properties
. Behavior specification
. Base case
. Constructions
. Domain/behavior mapping
. Use in verification
n • Stack example
. newstack
. push
. pop
. Canonical form
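A minimal sketch (my illustration) of the stack example: newstack and push build the canonical forms, and the algebraic axioms for pop and top are sampled as checks on concrete terms.

# Algebraic spec of a stack, illustrated in Python.
# Canonical forms: newstack() and push(s, x); pop and top are defined by axioms:
#   pop(push(s, x)) = s        top(push(s, x)) = x

def newstack():
    return ()                       # canonical empty stack

def push(s, x):
    return s + (x,)                 # canonical form: a tuple of pushed values

def pop(s):
    if s == newstack():
        raise ValueError("pop(newstack()) is an error in this spec")
    return s[:-1]

def top(s):
    if s == newstack():
        raise ValueError("top(newstack()) is an error in this spec")
    return s[-1]

# Use in verification: check the axioms on sample canonical terms.
s = push(push(newstack(), 1), 2)
assert pop(push(s, 99)) == s        # axiom: pop(push(s, x)) = s
assert top(push(s, 99)) == 99       # axiom: top(push(s, x)) = x
print("stack axioms hold on the sampled terms")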
Formal Verification: Other
n • Model checking:
. Behavioral specification via FSMs.
. Proposition: property of interest expressed as a
suitable formula.
. Model checker: algorithm/program to check
proposition validity.
» Proof: positive result.
» Counterexample: negative result.
n • Other approaches and discussions:
. Algorithm analysis.
. Petri-net modeling and analysis.
. Tabular/semi-formal method.
. Formal inspection based.
. Limited aspects ⇒ easier to perform.
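A minimal sketch (my own, not from the book) of the model-checking idea above: the behavioral specification is a small FSM, the proposition is a safety property ("no error state is reachable"), and the checker returns either a proof (no violation found) or a counterexample path. The controller states are hypothetical.

# Toy explicit-state model checker for a safety property over an FSM:
# property = "no reachable state is in the BAD set".
from collections import deque

def check_safety(initial, transitions, bad):
    """Breadth-first search over reachable states.
    Returns None if the property holds, else a counterexample path."""
    parent = {initial: None}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if state in bad:
            path = []                      # reconstruct the counterexample
            while state is not None:
                path.append(state)
                state = parent[state]
            return list(reversed(path))
        for nxt in transitions.get(state, []):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None                            # property proved for this model

# Hypothetical controller model: "error" is reachable via "heating" -> "overheat".
fsm = {"idle": ["heating"], "heating": ["idle", "overheat"], "overheat": ["error"]}
result = check_safety("idle", fsm, bad={"error"})
print("proof: property holds" if result is None else f"counterexample: {result}")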
Formal Verification: Summary
n • Basic features:
. Axioms/rules for all language features
. Ignore some practical issues: Size, capacity, side
effects, etc.?
. Forward/backward/bottom-up procedure.
. Develop invariants: key, but hard.
n • General assessment:
. Difficult, even on small programs
. Very hard to scale up
. Inappropriate to non-math. problems
. Hard to automate
– manual process ⇒ errors↑
. Worthwhile for critical applications
Summary of Today’s Lecture

n We continued our discussion on Formal Verification of software. The general idea and approaches for formal verification were discussed. We also highlighted axiomatic verification and other approaches for software verification.
Overview of the next lecture

n We will talk about Fault Tolerance and Failure Containment.
n The basic concepts, and fault tolerance via RB (recovery blocks) and NVP (N-version programming), will also form part of the next lecture's discussion.
The End
Software Quality
Engineering
Lecture No. 23
Part- 3:
Quality Assurance Techniques
Summary of the previous lecture

n We continued our discussion on Formal Verification of software. The general idea and approaches for formal verification were discussed. We also highlighted axiomatic verification and other approaches for software verification.
Outlines

n In today’s lecture, we will talk about Fault

Tolerance and Safety Assurance.

n The basic concepts and safety assurance


techniques will be discussed.

n We will also talk about Fault Tolerance

recovery blocks and Event Tree Analysis


(ETA)
Objectives

n To be able to understand the concepts of Safety for


Software development.

n To understand and distinguish between Fault Tree Analysis (FTA) and Event Tree Analysis (ETA)
QA Alternatives
n Defect and QA:
. Defect: error/fault/failure.
. Defect prevention/removal/containment.
. Map to major QA activities
n Defect prevention:
Error source removal & error blocking
n Defect removal: Inspection (Lecture No. 21)
n Special case (this lecture): Formal verification
(Lecture 22)
n Defect containment: Fault tolerance and failure
containment (safety assurance) This Lecture.
QA and Fault Tolerance (FT)
n • Fault tolerance as part of QA:
. Duplication: over time or components
. High cost, high reliability
. Run-time/dynamic focus
. FT design and implementation
. Complementary to other QA activities
n • General idea
. Local faults not lead to system failures
. Duplication/redundancy used
. redo ⇒ recovery block (RB)
. parallel redundancy
⇒ N version programming (NVP)
FT: Recovery Blocks (RB)
n General idea:
. Periodic check-pointing
. Problem detection/acceptance test
. Rollback (recovery)
FT: Recovery Blocks (RB)
n • Periodic check-pointing
. too often: expensive check-pointing
. too rare: expensive recovery
. smart/incremental check-pointing
n • Problem detection/acceptance test
. exceptions due to in/external causes
. periodic vs event-triggered
n • Recovery (rollback) from problems:
. external disturbance: environment?
. internal faults: tolerate/correct?
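The recovery block scheme can be sketched as follows (a minimal Python illustration, not production fault-tolerance code): checkpoint the state, run an alternate, apply the acceptance test, and roll back to try the next alternate on failure. The square-root example and the seeded fault are hypothetical.

# Recovery block (RB) sketch:
#   ensure <acceptance test> by <primary> else by <alternate 1> ... else error
import copy

def recovery_block(state, alternates, acceptance_test):
    checkpoint = copy.deepcopy(state)              # check-pointing before execution
    for alternate in alternates:
        try:
            candidate = alternate(copy.deepcopy(checkpoint))
            if acceptance_test(candidate):         # problem detection / acceptance test
                return candidate                   # accepted result
        except Exception:
            pass                                   # treat exceptions as failed runs
        # rollback: discard the candidate state and retry from the checkpoint
    raise RuntimeError("all alternates failed the acceptance test")

# Hypothetical example: compute a square-root table; the primary has a seeded fault.
def faulty_primary(state):
    state["roots"] = {n: n ** 0.51 for n in state["inputs"]}   # wrong exponent
    return state

def simple_alternate(state):
    state["roots"] = {n: n ** 0.5 for n in state["inputs"]}
    return state

def acceptance_test(state):
    return all(abs(r * r - n) < 1e-6 for n, r in state["roots"].items())

result = recovery_block({"inputs": [1, 4, 9]}, [faulty_primary, simple_alternate],
                        acceptance_test)
print(result["roots"])      # the alternate's correct results, after rollback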
Fault Tolerance: N- Version Programming

n • FT with NVP:
. NVP: N-Version Programming
. Multiple independent versions
. Dynamic voting/decision ⇒ FT.
Fault Tolerance: N- Version Programming
n Multiple independent versions
. Multiple: parallel vs backup?
. How to ensure independence?
n Support environment:
. concurrent execution
. switching
. voting/decision algorithms
n Correction/recovery?
. p-out-of-n reliability
. in conjunction with RB
. dynamic vs. off-line correction
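A minimal sketch (my illustration) of NVP with a majority-voting decision algorithm; the three versions, one of them faulty, are hypothetical, and real NVP would run the versions concurrently in independent environments.

# N-version programming (NVP) sketch: run N versions, then take a majority vote.
from collections import Counter

def nvp_execute(versions, x):
    outputs = [v(x) for v in versions]         # sequential here; real NVP runs in parallel
    votes = Counter(outputs)
    value, count = votes.most_common(1)[0]
    if count >= len(versions) // 2 + 1:        # simple majority decision
        return value
    raise RuntimeError(f"no majority among outputs {outputs}")

# Three hypothetical versions of "absolute value"; version_c carries a fault.
def version_a(x): return x if x >= 0 else -x
def version_b(x): return abs(x)
def version_c(x): return x                     # faulty for negative inputs

print(nvp_execute([version_a, version_b, version_c], -5))   # 5: the fault is outvoted 2-to-1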
FT and Safety
n • Extending FT idea for safety:
. FT: tolerate fault
. Extend: tolerate failure
. Safety: accident free
. Weaken error-fault-failure-accident link
n • FT in SSE (software safety engineering):
. Too expensive for regular systems
. As hazard reduction technique in SSE
. Other related SSE techniques:
» general redundancy
» substitution/choice of modules
» barriers and locks
» analysis of FT
What Is Safety?
n • Safety: The property of being accident-free for
(embedded) software systems.
. Accident: failures with severe consequences
. Hazard: condition for accident
. Special case of reliability
. Specialized techniques
n • Software safety engineering (SSE):
. Hazard identification/analysis techniques
. Hazard resolution alternatives
. Safety and risk assessment
. Qualitative focus
. Safety and process improvement
Safety Analysis & Improvement

n • Hazard analysis:
. Hazard: condition for accident
. Fault trees: (static) logical conditions
. Event trees: dynamic sequences
. Combined and other analyses
. Generally qualitative
. Related: accident analysis and risk assessment
n • Hazard resolution
. Hazard elimination
. Hazard reduction
. Hazard control
. Related: damage reduction
Hazard Analysis: Fault Tree Analysis (FTA)
n • Fault tree idea:
. Top event (accident)
. Intermediate events/conditions
. Basic or primary events/conditions
. Logical connections
. Form a tree structure
n • Elements of a fault tree:
. Nodes: conditions and sub-conditions (terminal vs. non-terminal)
. Logical relations among sub-conditions – AND,
OR, NOT
. Other types/extensions possible
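The fault tree elements above can be sketched as a small evaluation over AND/OR gates (my illustration); the automobile-style events are hypothetical.

# Fault tree sketch: evaluate the top event from basic events through AND/OR gates.

def AND(*children): return ("AND", children)
def OR(*children):  return ("OR", children)

def evaluate(node, basic_events):
    if isinstance(node, str):                      # terminal (basic/primary) event
        return basic_events[node]
    gate, children = node
    values = [evaluate(c, basic_events) for c in children]
    return all(values) if gate == "AND" else any(values)

# Hypothetical top event: a collision occurs if braking fails AND an obstacle appears;
# braking fails if the brake hardware fails OR the control software fails.
top_event = AND(OR("brake_hw_failure", "brake_sw_failure"), "obstacle_ahead")

scenario = {"brake_hw_failure": False, "brake_sw_failure": True, "obstacle_ahead": True}
print("accident condition holds:", evaluate(top_event, scenario))   # True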
Hazard Analysis: FTA Example

Example FTA for an automobile accident
Hazard Analysis: FTA
n • FTA construction:
. Starts with top event/accident
. Decomposition of events or conditions
. Stop when further development not required or
not possible (atomic)
. Focus on controllable events/elements
n • Using FTA:
. Hazard identification: logical composition
» (vs. temporal composition in Event Tree Analysis (ETA))
. Hazard resolution (more later)
» component replacement etc.
» focused safety verification
» negate logical relation
Hazard Analysis: Event Tree Analysis (ETA)
n • ETA: Why?
. FTA: focus on static analysis
» (static) logical conditions
. Dynamic aspect of accidents
. Timing and temporal relations
. Real-time control systems
n • Search space/strategy concerns:
. Contrast ETA with FTA:
» FTA: backward search
» ETA: forward search
. May yield different path/info.
. ETA provide additional info.
Hazard Analysis: ETA Example

Example ETA for an automobile accident


Hazard Analysis: ETA
n • Event trees:
. Temporal/cause-effect diagram
. (Primary) event and consequences
. Stages and (simple) propagation
» not exact time interval
» logical stages and decisions

n • Event tree analysis (ETA):


Recreate accident sequence/scenario
Critical path analysis
Used in hazard resolution (more later)
» esp. in hazard reduction/control
» e.g. creating barriers
» isolation and containment
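A minimal sketch (my own illustration) of the forward search in ETA: starting from an initiating event, enumerate success/failure of each subsequent barrier to recreate the possible sequences. The events, barriers, and the simple consequence rule are hypothetical.

# Event tree sketch: forward enumeration of accident sequences from an initiating event.
from itertools import product

def event_tree(initiating_event, safety_functions):
    sequences = []
    for outcome in product([True, False], repeat=len(safety_functions)):
        stages = [f"{name} {'works' if ok else 'fails'}"
                  for name, ok in zip(safety_functions, outcome)]
        # crude consequence rule for this sketch: an accident only if every barrier fails
        consequence = "accident" if not any(outcome) else "safe outcome"
        sequences.append((initiating_event, stages, consequence))
    return sequences

# Hypothetical automobile example: one initiating event plus two barriers.
for initiating, stages, consequence in event_tree("tire blowout",
                                                  ["driver steering", "guard rail"]):
    print(initiating, "->", " -> ".join(stages), "=>", consequence)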
Summary of Today’s Lecture

n In today’s lecture, we talked about Fault

Tolerance and Safety Assurance. We have


seen some basic concepts and safety
assurance techniques. We also talked about
Fault Tolerance recovery blocks. Our
discussion ended on Fault Tree Analysis
(FTA) and Event Tree Analysis (ETA)
Overview of the next lecture

n We will talk about Hazard Elimination and


Hazard Reduction.
n We will also talk about Hazard Control and
Damage Control.
The End
Software Quality
Engineering
Lecture No. 24
Part- 3:
Quality Assurance Techniques
Summary of the previous lecture

n In previous lecture, we talked about Fault

Tolerance and Safety Assurance.

n We have seen some basic concepts and safety


assurance techniques.

n We also talked about Fault Tolerance recovery


blocks.

n Our discussion ended on Fault Tree Analysis (FTA) and Event Tree Analysis (ETA)


Outlines

n Our discussion on fault tolerance and failure containment will continue.
n Hazard Elimination and Hazard Reduction.
n Hazard Control and Damage Control.
n Software Safety Program (SSP)
n TFM: Two-Frame-Model
n Frame Inconsistencies
n Prescriptive Specifications (PS)
Objectives

n To understand and distinguish between Hazard


Elimination and Hazard Reduction & Hazard
Control and Damage Control.
n To be able to apply knowledge of Software
Safety Program (SSP) in real life systems.
Hazard Elimination
n • Hazard sources identification ⇒ elimination
(Some specific faults prevented or removed.)
n • Traditional QA (but with hazard focus):
. Fault prevention activities:
» education/process/technology/etc
» formal specification & verification
. Fault removal activities:
» rigorous testing/inspection/analyses
n • “Safe” design: More specialized techniques:
. Substitution, simplification, decoupling.
. Human error elimination.
. Hazardous material/conditions↓
Hazard Reduction
n • Hazard identification ⇒ reduction (Some specific
system failures prevented or tolerated.)
n • Traditional QA (but with hazard focus):
. Fault tolerance
. Other redundancy
n • “Safe” design: More specialized techniques:
. Creating hazard barriers
. Safety margins and safety constraints
. Locking devices
. Reducing hazard likelihood
. Minimizing failure probability
. Mostly “passive” or “reactive”
Hazard Control
n • Hazard identification ⇒ control
. Key: failure severity reduction.
. Post-failure actions.
. Failure-accident link weakened.
. Traditional QA: not much, but good design principles
may help.
n • “Safe” design: More specialized techniques:
. Isolation and containment
. Fail-safe design & hazard scope↓
. Protection system
. More "active" than "passive"
. Similar techniques to hazard reduction,
» but focus on post-failure severity↓ vs. pre-failure
hazard likelihood↓.
Accident Analysis & Damage Control
n • Accident analysis:
. Accident scenario recreation/analysis
» possible accidents and damage areas
. Generally simpler than hazard analysis
. Based on good domain knowledge (not much software
specifics involved)
n • Damage reduction or damage control
. Post-accident vs. pre-accident hazard resolution
. Accident severity reduced
. Escape route
. Safe abandonment of material/product/etc.
. Device for limiting damages
Software Safety Program (SSP)
n • Leveson’s approach (Leveson, 1995) — Software
safety program (SSP)
n • Process and technology integration
. Limited goals
. Formal verification/inspection based
. But restricted to safety risks
. Based on hazard analyses results
. Safety analysis and hazard resolution
. Safety verification:
» few things carried over
Software Safety Program (SSP)
n • In overall development process:
. Safety as part of the requirement
. Safety constraints at different levels/phases
. Verification/refinement activities
. Distribution over the whole process
TFM: Two-Frame-Model

n • TFM: Two-Frame-Model
. Physical frame
. Logical frame
. Sensors: physical ⇒ logical
. Actuators: logical ⇒ physical
n • TFM characteristics and comparison:
. Interaction between the two frames
. Nondeterministic state transitions and
encoding/decoding functions
. Focuses on symmetry/consistency between the
two frames.
TFM Example

n • TFM Example:
. physical frame: nuclear reactor
. logical frame: computer controller
Usage of TFM

n • Failure/hazard sources and scenarios:


. Hardware/equipment failures.
. Software failures.
. Communication/interface failures.
. Focus on last one, based on empirical evidence.

n • Causes of communication/interface hazards:


. Inconsistency between frames.
. Sources of inconsistencies
. Use of prescriptive specifications (PS)
. Automatic checking of PS for hazard prevention
Frame Inconsistencies
n • System integrity weaknesses: Major sources of
frame inconsistencies
n • Discrete vs. continuous:
. Logical frame: disconnected/ separated
. Physical frame: mostly continuous
. Continuous regularity or validity of in-
/extrapolation
n • Total vs. partial functions:
. Logical frame: partial function
. Physical frame: total function
. ⇒ coercion, domain/default specs, etc.
Prescriptive Specifications (PS)
n • Definition and examples:
. Assertion: desired system behavior.
. Use PS in CS
n • PS for CS:
. Address integrity weaknesses
. Systematic derivation
. How to check? dynamic/automatic
. Applications in case studies
. Effectiveness and completeness
Deriving Specific PS
n • Domain prescriptions:
. Address: partial/total function
. Boundary: e.g., upper/lower bounds
. Type:
» expected ⇒ normal processing
» unexpected: provide default values or perform exception handling
n • Primitive invariants
. Address: lack of intrinsic invariant
. Relations based on physical law
. Use TFM-based FTA and ETA to identify entities
to check
Deriving Specific PS
n • Safety assertions:
. Address: physical/safety limits
. Directly from physical/safety limits
. Indirect assertions:
» related program variables
» based on TFM-based FTA and ETA
n • Image consistency assertions:
. Address: discrete vs. continuous
. State or status checking
. Rate checking
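A minimal sketch (my own, loosely in the spirit of the prescription monitor) of how such prescriptive specifications could be checked at run time: a domain/boundary prescription, a safety-limit assertion, and a rate (image-consistency) check. The variable, limits, and readings are hypothetical.

# Prescriptive specification (PS) checks as run-time assertions, sketched for a
# hypothetical controller reading: pressure in physical units.

PRESSURE_RANGE = (0.0, 160.0)        # domain prescription: expected input range
SAFETY_LIMIT = 150.0                 # safety assertion: physical/safety limit
MAX_RATE = 5.0                       # image-consistency: max plausible change per reading

def check_prescriptions(previous, current):
    violations = []
    low, high = PRESSURE_RANGE
    if not (low <= current <= high):                       # domain/boundary prescription
        violations.append("domain: pressure outside expected range")
    if current > SAFETY_LIMIT:                             # safety assertion
        violations.append("safety: pressure above safety limit")
    if previous is not None and abs(current - previous) > MAX_RATE:
        violations.append("rate: change too fast for the physical process")
    return violations

readings = [100.0, 103.0, 155.0, 40.0]
previous = None
for r in readings:
    print(r, check_prescriptions(previous, r) or "ok")
    previous = r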
A Comprehensive Case Study
n • Selecting a case study:
. Several case studies performed
. TMI-2: Three Mile Island accident
. Simulator of TMI-2 accident
. Seeding and detection of faults
n • A simulator with components:
. digital controller (pseudo-program chart)
. physical system with 4 process variables: power,
temp, pressure, water level
. introducing prescription monitor
Prescription Monitor in Case Study

n • Prescription monitor development:


. performance constraints
. quality/reliability of itself?
. usage of independent sets of sensors
More on Case Study

n • Developing PS in the case study:


. Generic assertions (domain etc.)
. Specific assertions with examples
n • Fault seeding: wide variety of faults
. Erroneous input from the user (1-4)
. Wrong data types or values (5-7)
. Programming errors (8-16)
. Wrong reading of sensors (17-19)
n • Result: all detected by prescription monitor by
specific PS
Case Study Summary
n • Prescriptive specification checking:
. Based on TFM
. Analyze system integrity weaknesses
. Derive corresponding assertions or PS
. Checking PS for hazard prevention
. Appears to be effective in several case studies
n • Future directions and development:
. Apply to realistic applications
. Prescription monitor development
. Support for PS derivation
. Generalization to other systems e.g., embedded
systems, and software-based heterogeneous
systems...
Summary and Perspectives

• Software fault tolerance:


» . Duplication and redundancy.
» . Techniques: RB, NVP, and variations.
» . Cost and effectiveness concerns.
• SSE: Augment S/w Eng.
» . Analysis to identify hazard
» . Design for safety
» . Safety constraints and verification
» . Leveson’s s/w safety program, PSC, etc.
» . Cost and application concerns.
Summary of Today’s Lecture
In today’s lecture, our discussion on fault Tolerance and
failure containment continued. We talked about :
n Hazard Elimination and Hazard Reduction.
n Hazard Control and Damage Control.
n Software Safety Program (SSP)
n TFM: Two-Frame-Model
n Frame Inconsistencies
n Prescriptive Specifications (PS)
Overview of the next lecture

n We will proceed to the last lecture of Part-3. We will talk about Comparing QA Techniques and Activities. We will also talk about General Areas/Questions for Comparison, and Applicability, Effectiveness, and Cost.
Software Quality
Engineering
Lecture No. 25
Part- 3:
Quality Assurance Techniques
Summary of the previous lecture
In last lecture, our discussion on fault Tolerance
and failure containment continued. We talked about :
n Hazard Elimination and Hazard Reduction.
n Hazard Control and Damage Control.
n Software Safety Program (SSP)
n TFM: Two-Frame-Model
n Frame Inconsistencies
n Prescriptive Specifications (PS)
Outlines

n We will compare QA alternatives. More specifically, we will talk about:
n General Areas
n Questions for Comparison
n Applicability, Effectiveness, and Cost
n Summary and Recommendations.
Objectives

n To understand and distinguish between QA


alternatives.
n To be able to apply the knowledge learnt on real
life QA in software.
QA Alternatives
n • Defect and QA:
Defect: error/fault/failure.
Defect prevention/removal/containment.
Map to major QA activities
n • Defect prevention
Error source removal & error blocking
n • Defect removal: Inspection/testing/etc.
n • Defect containment: Fault tolerance and failure
containment (safety assurance).
n • Comparison: This Lecture
Comparison
n • Cost-benefit under given environments:
. Environments: applicable or not?
. Cost to perform.
. Benefit: quality, directly or indirectly.
n • Testing as the comparison baseline:
. Most commonly performed QA activity.
. Empirical and internal data for testing.
. QA alternatives compared to testing:
» defect prevention (DP),
» Inspection and formal verification (FV),
» fault tolerance (FT),
» failure containment (FC).
. FT & FC: separate items in comparison.
Comparison: Applicability
n • Applicability questions:
. High-level questions: development vs. field usage (and
support/maintenance)
. Low level questions: development phases/activities.
n • Applicability to maintenance:
. Not applicable: Defect prevention. (although lessons
applied to future)
. Applicable to a limited degree: Inspection, testing, formal
verification, as related to reported field failures.
. Applicable: fault tolerance and failure containment, but
designed/implemented during development.
n • Our focus: applicability to development
Comparison: Applicability
n • Objects QA activities applied on:
. Mostly on specific objects, e.g., testing executable
code
. Exception: defect prevention on (implementation
related) dev. activities
Comparison: Applicability
n • Applicability to development phases:
. In waterfall or V-model: implementation
(req/design/coding) & testing/later.
. Inspection in all phases.
. Other QA in specific sets of phases.
Comparison: Applicability
n • Applicability to product domain/segment:
. All QA alternatives can be applied to all
domains/segments.
. Other factors: cost-benefit ratio.
. Higher cost needs to be justified by higher
payoff/returns.
. Further comparison in connection with the cost and effectiveness comparisons.
Comparison: Applicability
n • Also relate to general context of QA
. QA distribution
. Related activities in other phases, e.g.,
design/implementation for FT/SSE.
n • Other process variations: similar to smaller cycles
of waterfall
Comparison: Applicability/Expertise
n • Pre-condition to performing specific QA
activities:
. specific expertise required
. also related to cost
n • Expertise areas:
. Specifics about the QA alternative.
. Background/domain-specific knowledge.
. FV: formal training.
. FT: dynamic system behavior.
. FC: embedded system safety.
. Other QA: general CS/SE knowledge.
Comparison: Applicability/Expertise
n • General expertise levels: mostly in ranges,
depending on specific techniques used.
n • Specific background knowledge

Required expertise and background knowledge for people to perform different QA alternatives
Comparison: Effectiveness
n • Defect specifics or perspectives:
. Dealing with errors/faults/failures?
. Direct action vs follow-up action: may deal with
different defect perspectives.
. Example: failures detected in testing but (failure-
causing) faults fixed in follow-up.
Comparison: Effectiveness
n Problem or defect types:
errors/faults/failures of different types or
characteristics
Comparison: Effectiveness
n • Defect types: Inspection vs. testing:
. Static analysis vs. dynamic execution ⇒ static vs
dynamic problems and conceptual/logical problems vs.
timing problems.
. Localized defects easily detected by inspection vs.
interface/interaction problems detected by testing.
n • Defect types: Other QA:
. defect prevention: negating causes or pre-conditions
to pervasive problems.
. fault tolerance: rare conditions
. safety assurance: accidents
. FV: logical problems, but indirectly.
Comparison: Effectiveness
n Information for defect↓ and quality↑
n Result interpretation:
specific pieces of info.
interpret the info./result
link to quality, impact, meaning, etc.?
n Using information/measurement:
to provide feedback
to guide followup activities
to help decision making/improvement
goal: defect↓ and quality↑ (usually via
analysis/modeling)
n Part IV. Quantifiable Improvement: measure-
analyze-feedback-improve steps.
Comparison: Effectiveness
n Ease of result interpretation
n Specific info/measurement
Comparison: Cost
n • Cost measurement/characterization:
Direct cost: $$$
Indirect cost: time, effort, etc.
Things affecting cost: simplicity, expertise (already
addressed), tools, etc.
Cost to perform specific QA activities.
n • Factors beyond cost to perform QA:
Cost of failures and related damage.
Other cost, particularly for defect containment (FT
and FC)
Operational cost, e.g., FT mechanisms slow down
normal operations
Implementation cost of FT mechanisms.
Comparison: Cost
n • Overall cost comparison:
. rough values and ranges
. multiple factors but focus on performing
the specific QA activities
Comparison: Summary
n • Testing:
. Important link in dev. process
. Activities spill over to other phases
» OP development, test preparation, etc.
» (partial) code exist before testing
Dynamic/run-time/interaction problems
. Medium/low defect situations
. Techniques and tools
. Coverage vs. reliability focus
. Cost: moderate
Comparison: Summary
n • Defect prevention:
. Most effective if causes known.
. Good at general/universal problems.
. Low cost, due to downstream damage↓.
. Issue: “if causes”, and up-front cost
Comparison: Summary
n • Inspection:
. Good throughout dev. process
. Works on many software artifacts
. Conceptual/static faults
. High fault density situations:
» non-blocking
» experience ⇒ efficiency↑
. Human intensive, varied cost
Comparison: Summary
n Formal verification:
. Positive confirmation/correctness.
. On design/code with formal spec.
. Low/no defect situations
. Practicality: high cost → benefit?
. Human intensive, rigorous training
(therefore, high up-front cost)
Comparison: Summary
n • Fault tolerance:
. Dynamic problems (must be rare)
. High cost & reliability (low defect)
. Technique problems (independent NVP?)
. Process/technology intensive
n • Failure containment:
. Similar to FT above, but even more so.
. Rare conditions related to accidents
. Extremely high cost ⇒ apply only when safety
matters
. Many specialized techniques
. Process/technology intensive
Comparison: Grand Summary
n Pairwise comparison, if needed.
n Different strength/weakness ⇒ hybrid/integrated
strategies
Pairwise Comparison
n • Inspection vs. preventive actions:
. Inspection coupled with causal analysis.
. Together drive preventive actions.
. Key difference: error vs fault focus
n • Inspection vs. formal verification
. FV ≈ formalized inspection
. Focus: people vs. mathematical/logical
. Applicability to design/code only?
. Existence of formal specifications?
. Tradeoff: formality vs. cost
. Training and acceptability issues
Pairwise Comparison
n • Inspection vs. testing:
. Existence of the implemented product
. Levels of quality/defects
. Static vs. dynamic defects
. Localized vs. interconnected defects
. Combined approaches:
» phases and transitions
» inspection of testing entities/processes

n • Inspection vs. fault tolerance


. Complementary instead of competing (e.g.,
inspect individual versions)
. Static vs. dynamic
. Inspection of FT techniques/mechanisms
Recommendation: Integration
n • Different QA alternatives often complementary instead of
competing to one another:
. Dealing with different problems.
. Work in different phases/environments.
. Combined effect ⇒ use multiple QA alternatives
together.
. Shared resource and expertise.
Recommendation: Integration
n Integration: Concerted QA effort
. As a series of defense (Fig 3.1, p.30).
. Satisfy specific product/segment needs.
. Fit into process and overall environment.
. Adaptation/customization often needed.
. Match to organizational culture.
Summary of Today’s Lecture
In today’s lecture, we talked about comparing QA
alternatives. We highlighted its General Areas and
we did comparison using different tables. Lastly, we
have seen the summary and recommendations.
Overview of the next lecture

n We will proceed to the Part- 4 of the course, i.e.,


Quantifiable Quality Improvement:
n We will talk about Feedback Loop and Activities
for Quantifiable Quality Improvement.
The End
Software Quality
Engineering
Lecture No. 26
Summary of the previous lecture
In the last lecture, we compared different QA alternatives (including testing, inspection, and validation). More specifically, we talked about:
n General Areas
n Questions for Comparison
n Applicability, Effectiveness, and Cost
n Summary and Recommendations.
We have seen different comparison tables to
identify which QA alternative is appropriate
for what type of software.
Part- 4:
Quantifiable Quality Improvement
Outlines

Feedback Loop and Activities for Quantifiable


Quality Improvement
• Feedback Loop and Overall Mechanism
• Monitoring and Measurement
• Analysis and Feedback
• Tool and Implementation Support
Objectives

n To understand and distinguish between Pre-QA


activities and Post-QA activities.
n To be able to apply the knowledge of
quantifiable quality improvement in real life
software.
Importance of Feedback Loop
n All QA activities covered in Part II and Part III
need additional support:
. Planning and goal setting
. Management via feedback loop:
» When to stop?
» Adjustment and improvement, etc.
» All based on assessments/predictions
Importance of Feedback Loop
n Feedback loop for quantification/improvement:
. Focus of Part IV chapters .
» mechanism and implementation.
» models and measurements.
» defect analyses and techniques.
» risk identification techniques.
» software reliability engineering
QE Activities and Process Review
n • Major activities:
. Pre-QA planning (earlier lectures).
. QA (Part II and Part III).
. Post-QA analysis & feedback
» Part IV (maybe parallel instead of “post-”)
n • Overall process:– Software quality engineering
(SQE)

Quality Engg. Process


QE Activities and Process Review
n Feedback loop zoom-in: Fig 18.1 (p.304)
. Multiple measurement sources.
. Many types of analysis performed.
. Multiple feedback paths.
Refined Quality Engineering Process: Management, Analysis and feedback
for Quantifiable improvement
Feedback Loop Related Activities
n Monitoring and measurement:
defect monitoring ∈ process management.
defect measurement ∈ defect handling.
many other related measurements.
n Analysis modeling:
Historical baselines and experience.
Choosing models and analysis techniques.
Focus on defect/risk/reliability analyses.
Goal: assessment/prediction/improvement.
n Feedback and followup:
Frequent feedback: assessment/prediction.
Possible improvement areas identified.
Overall management and improvement.
Quality Monitoring and Measurements
n Quality monitoring needs:
Quality as a quantified entity over time.
Able to assess, predict, and control.
Various measurement data needed.
Some directly in quality monitoring.
Others via analyses to provide feedback.
n Direct quality measurements:
Result, impact and related info. – e.g., success
vs. failure
Defect information: directly monitored. –
additional defect analysis
Mostly used in quality monitoring.
Indirect Quality Measurements
n • Indirect quality measurements: Why?
. Other quality measurements (reliability) need
additional analyses/data.
Unavailability of direct quality measurements early in
the development cycle ⇒ early (indirect) indicators.
Used to assess/predict/control quality. (to link to or
affect various direct quality measurements)
n • Types of indirect quality measurements:
Environmental measurements.
Product internal measurements.
Activity measurements.
Indirect Measurements: Environment
n • Process characteristics
. Entities and relationships
. Preparation, execution and followup
. Techniques used
n • People characteristics
. Skills and experience
. Roles: planners/developers/testers
. Process management and teams
n • Product characteristics
. Product/market environment
. Hardware/software environment
Indirect Measurements: Internal
n • Product internal measurements: most
studied/understood in SE
n • Software artifacts being measured:
. Mostly code-related
. Sometimes SRS, design, docs etc.
n • Product attributes being measured:
. Control: e.g., McCabe complexity
. Data: e.g., Halstead metrics
. Presentation: e.g., indentation rules
n • Structures:
Indirect Measurements: Activity
n • Execution/activity measurements:
Overall: e.g., cycle time, total effort.
Phased: profiles/histograms.
Detailed: transactions in SRGMs.
n • Testing activity examples:
Timing during testing/usage
Path verification (white-box)
Usage-component mapping (black-box)
Measurement along the path
n • Usage of observations/measurements:
observation-based and predictive models
Immediate Followup and Feedback
n • Immediate (without analyses): Why?
. Immediate action needed right away:
» critical problems ⇒ immediate fixing
» most other problems: no need to wait
. Some feedback as built-in features in various QA
alternatives and techniques.
. Activities related to immediate actions.
n • Testing activity examples:
. Shifting focus from failed runs/areas.
. Re-test to verify defect fixing.
. Other defect-related adjustments.
n • Defect and activity measurements used.
Analyses, Feedback, and Followup
n • Most feedback/follow-up relies on analyses.
n Types of analyses:
. Product release decision related.
. For other project management decisions, at the phase
or overall project level.
. Longer-term or wider-scope analyses.
n • Types of feedback paths:
. Shorter vs. longer feedback loops.
. Frequency and time duration variations.
. Overall scope of the feedback.
. Data source refinement.
. Feedback destinations.
Analysis for Product Release Decisions
n • Most important usage of analysis results
“when to stop testing?”
n • Basis for decision making:
. Without explicit quality assessment:
» – implicit: planned activities,
» – indirect: coverage goals,
» – other factors: time/$-based.
. With explicit quality assessment:
» – failure-based: reliability,
» – fault-based: defect count & density.
n • Criteria preference: reliability – defect – coverage –
activity.
Analyses for Other Decisions
n • Transition from one (sub-)phase to another:
. Later ones: similar to product release.
. Earlier ones: reliability undefined
» – defects – coverage – activity,
» – inspection and other early QA
n • Other decisions/management-activities:
. Schedule adjustment.
. Resource allocation and adjustment.
. Planning for post-release support.
. Planning for future products or updates.
n • These are product-level or sub-product-level
decisions and activities.
Other Feedback and Followup
n • Other (less frequent) feedback/followup:
. Goal adjustment (justified/approved).
. Self-feedback (measurement & analysis)
» – unsuitable measurements and models?
» – SRE measurement example in IBM.
. Longer term, project-level feedback.
. May even carry over to followup projects.
n • Beyond a single-project duration/scope:
. Future product quality improvement
» – overall goal/strategy/model/data,
» – especially for defect prevention.
. Process improvement.
. More experienced people.
Feedback Loop Implementation
n • Key question: sources and destinations. (Analysis
and modeling activity at center.)
n • Sources of feedback loop = data sources:
. Result and defect data:
» – the QA activities themselves.
. Activity data:
» – both QA and development activities.
. Product internal data: product. (produced by
development activities)
. Environmental data: environment.
n • Additional sources of feedback loop:
. From project/QA planning.
. Extended environment: measurement data and
models beyond project scope.
Feedback Loop Implementation
n • Feedback loop at different duration/scope
levels.
n • Immediate feedback to current development
activities (locally).
n • Short-term or sub-project-level feedback:
. transition, schedule, resource,
. destination: development activities.
n • Medium-term or project-level feedback:
. overall project adjustment and release
n • Longer-term or multi-project feedback: – to
external destinations
Feedback Loop Implementation
Implementation Support Tools
n • Type of tools:
. Data gathering tools.
. Analysis and modeling tools.
. Presentation tools.
n • Data gathering tools:
. Defects/direct quality measurements:
» – from defect tracking tools.
. Environmental data: project db.
. Activity measurements: logs.
. Product internal measurements:
» – commercial/home-build tools.
. New tools/APIs might be needed.
Implementation Support Tools
n • Analysis and modeling tools:
. Dedicated modeling tools: – e.g., SMERFS and
CASRE for SRE
. General modeling tools/packages: – e.g., multi-
purpose S-Plus, SAS.
. Utility programs often needed for data screening
and processing.
n • Presentation tools:
. Aim: easy interpretation of feedback ⇒ more likely
to act on.
. Graphical presentation preferred.
. Some “what-if”/exploration capability.
Strategy for Tool Support
n • Using existing tools ⇒ cost↓:
. Functionality and availability/cost.
. Usability.
. Flexibility and programmability.
. Integration with other tools.
n • Tool integration issues:
. Assumption: multiple tools used. (All-purpose
tools not feasible/practical.)
. External rules for inter-operability, – common data
format and repository.
. Multi-purpose tools.
. Utilities for inter-operability.
Tool Support Example (IBM)
Summary of Today’s Lecture
In today’s lecture, we talked about Feedback Loop
and Activities for Quantifiable Quality Improvement.
We discussed the Feedback Loop and Overall
Mechanism, Monitoring and Measurement, Analysis
and Feedback and lastly, we talked about Tool and
Implementation Support
.
Overview of the next lecture

n We will talk about Quality Models and


Measurements
Types of Quality Assessment Models.
Comparing Quality Assessment Models.
Data Requirements and Measurement
Measurement and Model Selection.
The End
Software Quality
Engineering
Lecture No. 27
Part- 4:
Quantifiable Quality Improvement
Summary of the previous lecture
n In the last lecture, we talked about the Feedback Loop and Activities for Quantifiable Quality Improvement
n Post-QA analysis & feedback
» Part IV (maybe parallel instead of "post-")
n Analysis => Feedback loops => Follow-ups
QE Activities and Process Review
Quality Engg. Process

Refined Quality Engineering Process: Management, Analysis and feedback for Quantifiable improvement
Outlines

Quality Models and Measurements


• Types of Quality Assessment Models.
• Comparing Quality Assessment Models.
• Data Requirements and Measurement
• Measurement and Model Selection.
Objectives

n To understand and distinguish different types


of quality models such as Generalized and
Product-specific
n To apply and use different types of product
specific models to solve real life software
challenges
QA Data and Analysis
n • Generic testing process:
. Test planning and preparation.
. Execution and measurement.
. Test data analysis and follow-up.
. Related data ⇒ quality ⇒ decisions
n • Other QA activities:
. Similar general process.
. Data from QA/other sources (previous lecture).
. Models used in analysis and follow-up:
» – provide timely feedback/assessment
» – prediction, anticipating/planning
» – corrective actions ⇒ improvement
QA Models and Measures
n • General approach
. Adapt GQM-paradigm.
. Quality: basic concept and ideas.
. Compare models ⇒ taxonomy.
. Data requirements ⇒ measurements.
. Practical selection steps.
. Illustrative examples.
n • Quality attributes and definitions:
. Q models: data ⇒ quality
. Correctness vs. other attributes
. Our definition/restriction: being defect-free or of low-
defect
. Examples: reliability, safety, defect
count/density/distribution/etc.
Quality Analysis
n • Analysis and modeling:
. Quality models: data ⇒ quality
» Also known as quality assessment models or quality evaluation models
. Various models needed
. Assessment, prediction, control
. Management decisions
. Problematic areas for actions
. Process improvement
n • Measurement data needed
. Direct quality measurements: success/failure (defect info)
. Indirect quality measurements:
» activities/internal/environmental.
» Indirect but early quality indicators. All described in Lecture 26.
Quality Models
n • Practical issues:
. Applicability vs. appl. environment
. Goal/Usefulness: information/results?
. Data: measurement data required
. Cost of models and related data
n • Type of quality models
. Generalized: averages or trends
. Product-specific: more customized
. Relating to issues above
Generalized Models: Overall
n • Model taxonomy:
. Generalized:
» – overall, segmented, and dynamic
. Product-specific:
» – semi-customized: product history
» – observation-based: observations
» – measurement-driven: predictive

Classification of quality assessment models
Generalized Models: Overall
n • Key characteristics
. Industrial averages/patterns ⇒ (single) rough estimate.
. Most widely applicable.
. Low cost of use.
n • Examples: Defect density.
. Estimate total defect with sizing model.
. Variation: QI in IBM (counting in-field unique defect
only)
n • Non-quantitative overall models:
. As extension to quantitative models.
. Examples: 80:20 rule, and other general observations.
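A minimal sketch (my illustration; the baseline and size figures are invented) of how a generalized overall model produces a single rough estimate from an industrial-average defect density and an estimated product size.

# Generalized (overall) model sketch: estimated total defects = density baseline * size.

def estimate_defects(size_kloc, baseline_defects_per_kloc):
    return size_kloc * baseline_defects_per_kloc

size_kloc = 120                      # estimated size from a sizing model (hypothetical)
baseline = 2.5                       # hypothetical industrial-average defects per KLOC
print(f"rough estimate: {estimate_defects(size_kloc, baseline):.0f} defects")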
Product-Specific Models (PSM)
n • Product-specific models (PSMs):
. Product-specific info. used (vs. none used in
generalized models)
. Better accuracy/usefulness at cost ↑
. Three types:
» semi-customized
» observation-based
» measurement-driven predictive
n • Connection to generalized models (GMs):
. Customize GMs to PSMs with new/refined models and
additional data.
. Generalize PSMs to GMs with empirical evidence and
general patterns.
PSM: Semi-Customized
n • Semi-customized models:
. Project level model based on history.
. Data captured by phase.
. Both projections and actual.
. Linear extrapolation.
n Related extensions to Defect Removal Model (DRM):
. Defect dynamics model in Lecture 28,
. Orthogonal Defect Classification (ODC) defect
analyses in Lecture 28:
» 1-way distribution/trend analysis
» 2-way analysis of interaction.
PSM: Observation-Based
n • Observation-based models:
. Detailed observations and modeling
. Software reliability growth models
. Other reliability/safety models
n • Model characteristics
. Focus on the effect/observations
. Assumptions about the causes
. Assessment-centric
PSM: Predictive
n • Measurement-driven predictive models
. Establish predictive relations
. Modeling techniques: regression, TBM, NN,
OSR etc.
. Risk assessment and management
n • Model characteristics:
. Response: chief concern
. Predictors: observable/controllable
. Linkage quantification
Model Summary
Relating Models to Measurements
n Data required by quality models (Lecture 26):
. Direct quality measurements
» to be assessed/predicted/controlled
. Indirect quality measurements
» – means to achieve the goal
» – environmental, activity, product-internal
. Data requirement by models summarized below
Relating Models to Measurements
Model/Measurement Selection
n • Customize Goal-Question-Metric (GQM) into 3-
steps
n • Step 1: Quality goals
. Restricted, not general goals
n • Step 2: Quality models
. Model characteristics/taxonomy
. Model applicability/usefulness
. Data requirement/affordability
n • Step 3: Quality measurements
. Model-measurements relations
. Detailed model information
Selection Example A
n • Goal: rough quality estimates
n • Situation 1:
. No product specific data
. Industrial averages/patterns
. Commercial tools: SLIM etc.
. Product planning stage
. Defect profile in lifecycle
. Use generalized models
n • Situation 2:
. Data from related products
. DRM for legacy products
. ODC profile for IBM products
. Semi-customized models
Summary of Today’s Lecture
In today’s lecture, we talked about quality models.
We discussed some generalized models and their
characteristics and we also discussed PSMs.
Certain models can be appropriate for specific types of software.
Overview of the next lecture

n In next lecture, we will talk about Defect


Classification and Analysis
General Types of Defect Analyses.
ODC: Orthogonal Defect Classification.
Analysis of ODC Data.
The End
Software Quality
Engineering
Lecture No. 28
Part- 4:
Quantifiable Quality Improvement
Summary of the previous lecture
n In our previous lecture, we talked about
quality models. We discussed different types
of quality models
. Generalized: averages or trends
. Product-specific: more customized
n We also discussed Model/Measurement Selection
Outlines of today’s lecture

n In today’s lecture, we will talk about Defect


Classification and Analysis
General Types of Defect Analyses.
ODC: Orthogonal Defect Classification.
Analysis of ODC Data.
Objectives

n To understand and distinguish defect analysis


and defect distribution analysis.
n To apply the knowledge learnt of the ODC to
real life software.
Defect Analysis
n Goal: (actual/potential) defect↓ or quality↑ in current
and future products.
n General defect analyses:
. Questions: what/where/when/how/why?
. Distribution/trend/causal analyses.
n Analyses of classified defect data:
. Prior: defect classification.
. Use of historical baselines.
. Attribute focusing in 1-way and 2-way analyses.
. Tree-based defect analysis (Lecture 29)
Defect in Quality Data/Models
n • Defect data ⊂ quality measurement data:
. As part of direct Q data.
. Extracted from defect tracking tools.
. Additional (defect classification) data may be available.
n • Defect data in quality models:
. As results in generalized models (GMs).
. As (response/independent variable) in product
specific models (PSMs).
» semi-customized models ≈ GMs,
» observation-based: in SRGMs,
» predictive: in TBDMs
General Defect Analysis
n • General defect analyses: Questions
. What? identification (and classification).
» type, severity, etc.,
» even without formal classification.
. Where? distribution across location.
. When? discovery/observation
» – what about when injected? harder to determine
» – pre-release: more data
» – post-release: more meaningful/sensitive
. How/why? related to injection
⇒ use in future defect prevention.
General defect analyses
n General defect analyses: Types
. Distribution by type or area.
. Trend over time.
. Causal analysis.
. Other analysis for classified data.
Defect Analysis: Data Treatment
n • Variations of defect data:
. Error/fault/failure perspective.
. Pre-/post-release.
. Unique defect?
. Focus here: defect fixes.
n • Why defect fixes (DF):
. Propagation information.
. Close ties to effort (defect fixing).
. Pre-release: more meaningful. (post
release: each failure occurrence.)
Defect Distribution Analysis
n • Distribution: what, where, etc.
n • What: Distribution over defect types.
. Ties to quality views/attributes.
. Within specific view: types/sub-types.
. Defect types ⇐ product’s “domain”.
n • Important observation:
⇒ importance of risk identification for effective
quality improvement
. Early indicators needed! (Cannot wait until after defect discoveries.)
Defect Distribution Analysis
n Web example:
. defect = “error” in web community.
. dominance of type E “missing files”.
. type A: further information needed.
. all other types: negligible.
Defect Distribution Analysis
n • Further analysis of web example above:
. for dominant type E “missing files”
. web error distribution by file type
. again, skewed distribution!
Defect Trend Analysis
n • Trend as a continuous function:
Customized with local data
Other analysis related to SRE
» defect/effort/reliability curves
» more in Lecture 30.
. Sometimes discrete analysis may be more meaningful
(see below).
n • Defect dynamics model:
. Important variation to trend analysis.
. Defect categorized by phase.
. Discovery (already done).
. Analysis to identify injection phase.
Defect Trend Analysis
n • Defect dynamics model:
. row: where (phase) injected
. column: where (phase) removed/discovered
. focus out-of-phase/off-diagonal ones!
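A minimal sketch (my own; the counts are invented) of a defect dynamics matrix: rows are the injection phase, columns the removal phase, and the off-diagonal cells are flagged for attention.

# Defect dynamics model sketch: rows = phase injected, columns = phase removed.
phases = ["requirements", "design", "coding", "testing"]
counts = {                                   # hypothetical defect-fix counts
    ("requirements", "requirements"): 4, ("requirements", "testing"): 9,
    ("design", "design"): 12, ("design", "coding"): 5, ("design", "testing"): 7,
    ("coding", "coding"): 30, ("coding", "testing"): 22,
}

for injected in phases:
    for removed in phases[phases.index(injected):]:      # removal cannot precede injection
        n = counts.get((injected, removed), 0)
        if n and injected != removed:
            print(f"off-diagonal: {n} defects injected in {injected}, "
                  f"caught only in {removed}")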
Defect Causal Analysis
n • Defect causal analyses: Types
. Causal relation identified:
» error-fault vs fault-failure
» works backwards
. Techniques: statistical or logical.
n • Root cause analysis (logical):
. Human intensive.
. Good domain knowledge.
. Fault-failure: individual and common.
. Error-fault: project-wide effort focused on
pervasive problems.
n • Statistical causal analysis: ≈ risk identification
techniques in Lecture 29
Orthogonal Defect Classification ODC: Overview

n • Development
. Chillarege et al. at IBM
. Applications in IBM Labs and several other companies
. Recent development and tools
n • Key elements of ODC
. Aim: tracking/analysis/improve
. Approach: classification and analysis
. Key attributes of defects
. Views: both failure and fault
. Applicability: inspection and testing
. Analysis: attribute focusing
. Need for historical data
ODC: Why?
n • Statistical defect models:
Quantitative and objective analyses.
Problems: accuracy & timeliness.
n • Causal (root cause) analyses:
Qualitative but subjective analyses.
Use in defect prevention.
n • Gap and ODC solution:
Bridge the gap between the two.
Systematic scheme used.
Wide applicability
ODC: Ideas
n • Cause-effect relation by type:
Different types of faults.
Causing different failures.
Need defect classification.
Multiple attributes for defects.
n • Good measurement:
Orthogonality (independent view).
Consistency across phases.
Uniformity across products.
n • ODC process/implementation:
Human classification.
Analysis method and tools.
Feedback results (and follow-up)
ODC Example
ODC Attributes: Failure-View
n • Defect trigger:
. Associated with verification process
» similar to test case measurement
» collected by testers
. Trigger classes
» – product specific
» – black box in nature
» – pre/post-release triggers

n • Other attributes:
. Impact
. Severity: low-high
. Detection time, etc.
ODC Attributes: Cause/Fault-View
n • Defect type:
. Associated with development process.
. Missing or incorrect.
. Collected by developers.
. May be adapted for other products.
n • Other attributes:
. Action: add, delete, change.
. Number of lines changed, etc.
ODC Attributes: Cause/Error-View
n • Key attributes:
. Defect source: vendor/base/new code.
. Where injected.
. When injected.
n • Characteristics:
. Associated to additional causal analysis.
. (May not be performed.)
. Much subjective judgment involved (evolution of ODC philosophy)
n • Phase injected: rough “when”.
Adapting ODC for Web Error Analysis
n • Continuation of web testing/QA study.
n • Web error = observed failures, with causes
already recorded in access/error logs.
n • Key attributes mapped to ODC:
. Error type = defect impact.
» response code (4xx) in access logs
. Referring page = defect trigger.
» individual pages with embedded links
» classified: internal/external/empty
» focus on internal problems
. Missing file type = defect source – different fixing
actions to follow.
n • May include other attributes for different kinds
of web sites.
ODC Analysis: Attribute Focusing
n • General characteristics
. Graphical in nature
. 1-way or 2-way distribution
. Phases and progression
. Historical data necessary
. Focusing on big deviations
n • Representation and analysis
. 1-way: histograms
. 2-way: stack-up vs. multiple graphics
. Support with analysis tools
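A minimal sketch (my illustration, with made-up records) of the 1-way and 2-way distributions behind attribute focusing, using simple counting over classified defects.

# Attribute focusing sketch: 1-way and 2-way distributions over ODC-classified defects.
from collections import Counter

defects = [                                   # hypothetical classified defect records
    {"impact": "reliability", "severity": "high", "trigger": "workload/stress"},
    {"impact": "reliability", "severity": "high", "trigger": "recovery"},
    {"impact": "usability",   "severity": "low",  "trigger": "GUI navigation"},
    {"impact": "usability",   "severity": "low",  "trigger": "GUI navigation"},
    {"impact": "capability",  "severity": "med",  "trigger": "function test"},
]

one_way = Counter(d["impact"] for d in defects)                       # 1-way: impact histogram
two_way = Counter((d["impact"], d["severity"]) for d in defects)      # 2-way: impact x severity

print("1-way (impact):", dict(one_way))
print("2-way (impact, severity):", dict(two_way))
# Large deviations from historical baselines in these distributions are the
# "attribute focusing" signals that get fed back to the development team.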
ODC Analysis Examples
n • 1-way analysis:
. Defect impact distribution for an IBM product.
. Uneven distribution of impact areas!
⇒ risk identification and focus.
ODC Analysis Examples
n • 2-way analysis:
. Defect impact-severity analysis.
. IBM product study continued.
. Huge contrast: severity of reliability and usability
problems!
ODC Process and Implementation
n • ODC process:
. Human classification
» defect type: developers,
» defect trigger and effect: testers,
» other information: coordinator/other.
. Tie to inspection/testing processes.
. Analysis: attribute focusing.
. Feedback results: graphical.
n • Implementation and deployment:
. Training of participants.
. Data capturing tools.
. Centralized analysis.
. Usage of analysis results.
Linkage to Other Topics

n • Development process
. Defect prevention process/techniques.
. Inspection and testing.
n • Testing and reliability:
. Expanded testing measurement
» Defects and other information:
» Environmental (impact)
» Test case (trigger)
» Causal (fault)
. Reliability modeling for ODC classes
Summary of Today’s Lecture
n In today’s lecture, we talked about Defect
Classification and Analysis. We also focused on:
General Types of Defect Analyses.
ODC: Orthogonal Defect Classification.
Analysis of ODC Data.
Overview of the next lecture

n Risk Identification for Quantifiable Quality


Improvement
Basic Ideas and Concepts
Traditional Statistical Techniques
Newer/More Effective Techniques
Tree-Based Analysis of ODC Data
The End
Software Quality
Engineering
Lecture No. 29
Part- 4:
Quantifiable Quality Improvement
Summary of the previous lecture
n In our previous lecture, we talked about
Defect Classification and Analysis. We also
focused on:
General Types of Defect Analyses,
ODC: Orthogonal Defect Classification.
Analysis of ODC Data.
We also discussed an examples of most
common defects that occur in a web based
project.
Outlines of today’s lecture

n In today’s lecture, we will talk about Risk


Identification for Quantifiable Quality
Improvement
Basic Ideas and Concepts
Traditional Statistical Techniques
Newer/More Effective Techniques
– NN, TBM, OSR etc.
Tree-Based Analysis of ODC Data
Objectives

n To understand and distinguish between different


techniques that are available for risk
identification in a software
n To apply the knowledge learnt in the real life
software.
Risk Identification: Why?
n • Observations and empirical evidences:
. 80:20 rule: non-uniform distribution:
» 20% of the modules/parts/etc. contribute to
» 80% of the defects/effort/etc.
. implication: non-uniform attention
» risk identification
» risk management/resolution
n • Risk Identification in SQE:
. 80:20 rule as implicit hypothesis
. focus: techniques and applications
Risk Identification: How?
n • Qualitative and subjective techniques:
. Causal analysis
. Delphi and other subjective methods
n • Traditional statistical techniques:
. Correlation analysis
. Regression models:
» linear, non-linear, logistic, etc.
n • Newer (more effective) techniques:
. Statistical: PCA, DA, TBM
. AI-based: NN, OSR
. Focus of our Lecture.
Risk Identification: Where?
n • 80% or target:
. Mostly quality or defect (most of our examples also)
. Effort and other external metrics
. Typically directly related to goal
. Resultant improvement
n • 20% or contributor:
. 20%: risk identification!
. Understand the link
. Control the contributor:
» corrections/defect removal/etc.
» future planning/improvement
» remedial vs preventive actions
New Techniques
n • New statistical techniques:
. PCA: principal component analysis
. DA: discriminant analysis
. TBM: tree-based modeling
n • AI-based new techniques:
. NN: artificial neural networks.
. OSR: optimal set reduction.
. Abductive-reasoning, etc.
New Techniques: PCA & DA
n • Not really new techniques, but rather new
applications in SE.
n • PCA: principal component analysis
. Idea of linear transformation.
. PCA to reduce dimensionality.
. Effectively combined with DA and other techniques.
n • DA: discriminant analysis
. Discriminant function
. Risk id as a classification problem
. Combine with other techniques
New Techniques: PCA & DA
n • DA: how?
. Minimize misclassification rate in model fitting and in
prediction.
. Other/similar definitions possible.
. Good results (Khoshgoftaar et al., 1996).
n • PCA&DA: Summary and Observations:
. Positive/encouraging results, but,
. Much processing/transformation needed.
. Much statistics knowledge.
. Difficulty in data/result interpretation.
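As a rough illustration of combining PCA with DA, the sketch below reduces a synthetic module-metrics matrix with PCA and then treats risk identification as a classification problem with linear discriminant analysis. It assumes scikit-learn is available; the data, shapes, and threshold are made up for illustration:
```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Synthetic design/size/complexity metrics for 100 modules (illustrative).
X = rng.normal(size=(100, 11))
# Synthetic label: 1 = high-defect module, 0 = low-defect module.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=100) > 0.8).astype(int)

# PCA: reduce the 11 correlated metrics to a few principal components.
pca = PCA(n_components=4)
X_reduced = pca.fit_transform(X)

# DA: treat risk identification as a classification problem.
da = LinearDiscriminantAnalysis()
da.fit(X_reduced, y)
print("Apparent misclassification rate:", 1 - da.score(X_reduced, y))
```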
New Technique: NN
n • NN or ANN: artificial neural networks
. Inspired by biological computation
. Neuron: basic computational unit
» different functions
. Connection: neural network
. Input/output/hidden layers
n • NN applications:
. AI and AI problem solving
. In SQE: defect/risk identification
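A minimal sketch of the NN idea applied to defect/risk identification, again on synthetic metrics data, using a small multi-layer perceptron from scikit-learn (layer sizes and data are illustrative assumptions):
```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# Synthetic module metrics (inputs) and defect-proneness labels (outputs).
X = rng.normal(size=(200, 11))
y = (X[:, 1] - X[:, 5] + rng.normal(scale=0.3, size=200) > 0).astype(int)

# Input layer = 11 metrics, one hidden layer of 8 neurons, output = risk class.
nn = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=1)
nn.fit(X, y)

print("Training accuracy:", nn.score(X, y))
print("Predicted risk for a new module:", nn.predict(rng.normal(size=(1, 11))))
```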
New Technique: TBM
n • TBM: tree-based modeling
. Similar to decision trees
. But data-based (derived from data)
. Preserves tree advantages:
» – easy to understand/interpret
» – both numerical and categorical data
» – partition ⇒ non-uniform treatment
n • TBM applications:
. Main: defect analysis TBDMs (tree-based defect
models)
Reliability: TBRMs (next lecture)
n • Risk identification and characterization.
New Technique: TBM
n • TBM for risk identification:
. Assumption in traditional techniques:
» linear relation
» uniformly valid result
. Reality of defect distribution:
» isolated pockets
» different types of metrics
» correlation/dependency in metrics
» qualitative differences
. Need new risk identification techniques.
n • TBM for risk characterization:
. Identified, then what?
. Result interpretation.
. Remedial/corrective actions.
. Extrapolation to new product/release.
. TBDMs appropriate.
New Technique: TBM
n • TBDMs: tree-based defect models using tree-
based modeling (TBM) technique
n • Decision trees:
. multiple/multi-stage decisions
. may be context-sensitive
. natural to the decision process
. applications in many problems
» decision making & problem solving
» decision analysis/optimization
n • Tree-based models:
. reverse process of decision trees
. data ⇒ tree
. idea of decision extraction
. generalization of “decision”
TBM Example

n • TBDM example:
defect prediction for IBM-NS
. 11 design/size/complexity metrics
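A rough sketch of a tree-based defect model (TBDM): a regression tree fitted to design/size/complexity metrics, whose leaves form the partition that supports non-uniform treatment of modules. The metric names and data below are hypothetical, not the IBM-NS data from the example:
```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(2)
metric_names = [f"metric_{i}" for i in range(11)]   # hypothetical metric names

# Synthetic metrics and defect counts for 150 modules.
X = rng.normal(size=(150, 11))
defects = np.maximum(0, 5 * X[:, 0] + 3 * X[:, 2] + rng.normal(size=150)).round()

# Tree-based model: data => tree; each leaf is a partition with its own
# predicted defect level (non-uniform treatment of modules).
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=10, random_state=2)
tree.fit(X, defects)

print(export_text(tree, feature_names=metric_names))
```
The printed tree can be read directly as a set of extracted "decisions" over the metrics, which is what makes the result easy to interpret for risk characterization.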
New Technique: OSR
n • OSR: optimal set reduction
. pattern matching idea
. clusters and cluster analysis
. similar to TBM but different in: pattern extraction vs. partition
Following are the steps for the OSR algorithm.
New Technique: OSR
n • Organization/modeling results:
. no longer a tree, see example above
. general subsets, may overlap
. details and some positive results reported in the literature.
Risk Identification: Comparison
n • Comparison: cost-benefit analysis ≈ comparing QA
alternatives (Last lecture of Part III).
n • Comparison area: benefit-related
. accuracy
. early availability and stability
. constructive information and guidance for (quality)
improvement
n • Comparison area: cost-related
. simplicity
. ease of result interpretation
. availability of tool support
Comparison: Accuracy
n • Accuracy in assessment:
. model fits data well
» use various goodness-of-fit measures
. avoid over-fitting
. cross validation by review etc.
n • Accuracy in prediction:
. over-fitting ⇒ bad predictions
. prediction: training and testing sets
» within project: jackknife
» across projects: extrapolate
. minimize prediction errors
Comparison: Usefulness
n • Early availability and stability
. to be useful must be available early
. focus on control/improvement
. apply remedial/preventive actions early
. track progress: stability
n • Constructive information and guidance
. what: assessment/prediction
. how to improve?
» constructive information
» guidance on what to do
. example of TBRMs
Comparison Summary
Summary of Today’s Lecture

n There are different techniques available for

Risk Identification.

n Each technique has some merits and


limitations.

n These techniques could be compared amongst

each other using some parameters such as


accuracy and usability etc.
Overview of the next lecture

n Last Topic of the course, i.e., Software Reliability


Engineering
Concepts and Approaches
Existing Approaches: SRGMs & IDRMs
Assessment & Improvement with TBRMs
The End
Software Quality
Engineering
Lecture No. 30
Part- 4:
Quantifiable Quality Improvement
Summary of the previous lecture
n In our previous lecture, we talked about different
techniques that are available for Risk Identification such
as NN, PCA and OSR etc. Some of them are classified
as statistical-based while other are Artificial Intelligence
based

n Each technique has some merits and limitations.

n These techniques could be compared amongst each


other using some parameters such as accuracy and
usability etc.
Outlines of today’s lecture

n In today’s lecture, we will talk about the last topic


of the course, i.e., Software Reliability
Engineering (SRE)
Concepts and Approaches
Existing Approaches: SRGMs & IDRMs
Assessment & Improvement with TBRMs
Objectives

n To understand and distinguish between different


concepts and approaches used for Software
Reliability Engineering (SRE)
What Is SRE
n • Reliability: Probability of failure-free operation for a
specific time period or input set under a specific environment
. Failure: behavioral deviations
. Time: how to measure?
. Input state characterization and Environment: OP
n • Software reliability engineering:
. Engineering (applied science) discipline
. Measure, predict, manage reliability
. Statistical modeling
. Customer perspective:
» – failures vs. faults
» – meaningful time vs. development days
» – customer operational profile
Assumption: SRE and OP
n • Assumption 1: OP, to ensure software reliability
from a user’s perspective.
n • OP: Operational Profile
. Quantitative characterization of the way a
(software) system will be used.
. Test case generation/selection/execution
. Realistic assessment
. Predictions (minimize discontinuity)
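The OP assumption translates directly into usage-based test case selection: operations are sampled in proportion to their usage probabilities. A minimal sketch with a hypothetical profile:
```python
import random

# Hypothetical operational profile: operation -> usage probability.
operational_profile = {
    "browse_catalog": 0.55,
    "search":         0.25,
    "checkout":       0.15,
    "admin_report":   0.05,
}

def select_test_cases(profile, n_runs, seed=0):
    """Draw n_runs operations with probabilities given by the profile."""
    rng = random.Random(seed)
    ops = list(profile)
    weights = [profile[op] for op in ops]
    return rng.choices(ops, weights=weights, k=n_runs)

print(select_test_cases(operational_profile, 10))
```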
Other Assumptions in Context
n • Assumption 2: Randomized testing
. Independent failure intervals/observations
. Approximation in large software systems
. Adjustment for non-random testing
⇒ new models or data treatments
n • Assumption 3: Failure-fault relation
. Failure probability ∼ # faults
. Exposure through OP-based testing
. Possible adjustment?
. Statistical validity for large s/w systems
Other Assumptions and Context
n • Assumption 4: time-reliability relation
. time measurement in SRGMs
. usage-dependent vs. usage-independent
. proper choice under specific environment.
n • Usage-independent time measurement:
. calendar/wall-clock time
. only if stable or constant workload
n • Usage-dependent time measurement:
. for systems with uneven workload
. execution time – Musa’s models
. alternatives: runs, transactions, etc.
Workload for Products: An example of IBM
n IBM product D workload
. number of test runs for each day
. wide variability
. need usage-dependent time measurement (# of runs used)
Input Domain Reliability Models (IDRMs)
n • IDRMs: Current reliability snapshot based on
observed testing data of n samples.
n • Assessment of current reliability.
n • Prediction of future reliability (limited prediction
due to snapshot)
n • Management and improvement
. As acceptance criteria.
. Risk identification and follow-ups:
» reliability for input subsets
» remedies for problematic areas
» preventive actions for other areas
Nelson’s IDRM
n • Nelson Model:
. Running for a sample of n inputs.
. Randomly selected from set E:
E = {Ei : i = 1, 2, . . . , N}
. Sampling probability vector:
{Pi : i = 1, 2, . . . , N} .
.{Pi}: Operational profile.
. Number of failures: f.
. Estimated reliability: R = 1 − f/n
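A minimal sketch of the Nelson estimate computed from observed test data (the run and failure counts are illustrative):
```python
def nelson_reliability(n_runs: int, n_failures: int) -> float:
    """Nelson IDRM: estimated reliability R = 1 - f/n for n sampled runs."""
    if n_runs <= 0:
        raise ValueError("need at least one run")
    return 1.0 - n_failures / n_runs

# Example: 1000 runs sampled according to the OP, 12 of them failed.
print(nelson_reliability(1000, 12))   # 0.988
```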
IDRM Applications
n • Nelson model for web applications
daily error rates:
Other IDRMs
n • Brown-Lipow model:
. explicit input state distribution.
. known probability pi for sub-domains Ei
. fi failures for ni runs from subdomain Ei
. estimated reliability: R = Σi pi (1 − fi/ni)
. would be the same as the Nelson model for
representative sampling
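A sketch of the Brown-Lipow estimate, assuming hypothetical sub-domain probabilities and per-sub-domain test results:
```python
def brown_lipow_reliability(subdomains):
    """Brown-Lipow IDRM: R = sum_i p_i * (1 - f_i / n_i)
    for sub-domains with known probabilities p_i."""
    return sum(p * (1.0 - f / n) for p, n, f in subdomains)

# (p_i, n_i runs, f_i failures) for three hypothetical sub-domains.
subdomains = [
    (0.6, 600, 3),    # common usage, well tested
    (0.3, 200, 4),    # less common usage
    (0.1,  50, 5),    # rare usage, more failures observed
]
print(brown_lipow_reliability(subdomains))
```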
IDRM Applications
n • IDRM applications
. overall reliability at acceptance testing
. reliability snapshots over time: in Nelson
model examples earlier
. reliability for input subsets: in TBRMs
Time Domain Measures and Models
n • Reliability measurement
. Reliability: time & probability
. Result: failure vs. success
. Time/input measurement
. Failure intensity (rate): alternative
. Mean Time Between/To Failures (MTBF/MTTF):
summary measure
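A small sketch of the summary measures above, computing MTBF and failure intensity from hypothetical inter-failure times:
```python
# Hypothetical time-between-failure observations (e.g., in CPU hours).
interfailure_times = [12.5, 30.0, 8.2, 45.1, 60.3, 75.0]

mtbf = sum(interfailure_times) / len(interfailure_times)
failure_intensity = 1.0 / mtbf     # failures per unit time (alternative view)

print(f"MTBF = {mtbf:.1f} hours, failure intensity = {failure_intensity:.3f}/hour")
```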
S/w reliability growth models (SRGMs)

n • S/w reliability growth models (SRGMs):


. Reliability growth due to defect removal based on
observed testing data.
. Reliability-fault relations
. Exposure assumptions
. Data: time-between-failure (TBF) vs. period-failure-count (PFC) models
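As an illustration of fitting an SRGM to period-failure-count data, the sketch below fits the Goel-Okumoto exponential mean-value function m(t) = a(1 − e^(−bt)) with SciPy. The weekly failure counts are made up, and Goel-Okumoto is just one common SRGM choice, not necessarily the model used in the lecture examples:
```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Expected cumulative failures by time t: m(t) = a * (1 - exp(-b * t))."""
    return a * (1.0 - np.exp(-b * t))

# Hypothetical cumulative failure counts observed at the end of each week.
weeks = np.arange(1, 11, dtype=float)
cumulative_failures = np.array([8, 15, 21, 26, 30, 33, 35, 37, 38, 39], float)

# Fit a (total expected failures) and b (detection rate) to the data.
(a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, cumulative_failures,
                              p0=(50.0, 0.1))
print(f"a = {a_hat:.1f} total expected failures, b = {b_hat:.3f} per week")
print("Remaining expected failures:", a_hat - cumulative_failures[-1])
```
The fitted parameters can then be used for the management applications listed next, e.g., estimating how much additional testing is needed to reach a reliability goal.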
SRGM Applications
n • Assessment of current reliability
n • Prediction of future reliability and resource to reach
reliability goals
n • Management and improvement
. Reliability goals as exit criteria
. Resource allocation (time/distribution)
. Risk identification and followups:
» – reliability (growth) of different areas
» – remedies for problematic areas
» – preventive actions for other areas
Assessing Existing Approaches
n • Time domain reliability analysis:
. Customer perspective.
. Overall assessment and prediction.
. Ability to track reliability change.
. Issues: assumption validity.
. Problem: how to improve reliability?
n • Input domain reliability analysis:
. Explicit operational profile.
. Better input state definition.
. Hard to handle change/evolution.
. Issues: sampling and practicality.
. Problem: realistic reliability assessment?
TBRMs: An Integrated Approach

n • Combine strengths of the two.


n • TBRM for reliability modeling:
. Input state: categorical information.
. Each run as a data point.
. Time cutoff for partitions.
. Data sensitive partitioning ⇒ Nelson models for subsets.

n • Using TBRMs:
. Reliability for partitioned subsets.
. Use both input and timing information.
. Monitoring changes in trees.
. Enhanced exit criteria.
. Integrate into the testing process.
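A rough sketch of the TBRM idea: each test run is one data point with categorical input-state attributes plus a time index, the response is success/failure, and a tree partitions the runs so that each leaf carries its own Nelson-style success-rate estimate. All field names and data below are hypothetical:
```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(3)

# One row per test run: input-state attributes, time index, and outcome.
runs = pd.DataFrame({
    "scenario": rng.choice(["install", "query", "update"], size=300),
    "platform": rng.choice(["unix", "windows"], size=300),
    "day":      rng.integers(1, 31, size=300),
})
# Hypothetical outcome: 'install' runs and early runs fail more often.
fail_p = 0.02 + 0.10 * (runs["scenario"] == "install") + 0.10 * (runs["day"] < 10)
runs["success"] = (rng.random(300) > fail_p).astype(int)

# Encode categorical predictors, keep 'day' as the time predictor.
X = pd.get_dummies(runs[["scenario", "platform", "day"]])
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=30, random_state=3)
tree.fit(X, runs["success"])     # leaf value = success rate = Nelson estimate

print(export_text(tree, feature_names=list(X.columns)))
```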
TBRMs: Interpretation & Usage
n • Interpretation of trees:
. Predicted response: success rate. (Nelson reliability
estimate.)
. Time predictor: reliability change.
. State predictor: risk identification.

n • Change monitoring and risk identification:


. Change in predicted response.
. Through tree structural change.
. Identify high risk input state.
. Additional analyses often necessary.
. Enhanced test cases or components.
TBRMs at Different Times
n An early TBRM.
high-risk areas identified by input
early actions to improve reliability
TBRMs at Different Times
n An early/late TBRM.
high-risk areas identified by input
early actions to improve reliability
high-risk areas ≈ early runs
. uniformly reliable ⇒ ready for release
TBRM Impact
n • Evaluation/validation with SRGMs:
. Trend of reliability growth.
. Stability of failure arrivals.
. Estimated reliability.
. Important: deployment in all successor products at IBM.
Integrated Approach: Implementation
n • Modified testing process:
. Additional link for data analysis.
. Process change and remedial actions.

n • Activities and Responsibilities:


. Evolutionary, stepwise refinement.
. Collaboration: project & quality orgs.
. Experience factory prototype.

n • Implementation:
. Passive tracking and active guidance.
. Periodic and event-triggered.
. S/W tool support
Tool Support
n • Types of tool support:
. Data capturing
» mostly existing logging tools
» modified to capture new data
. Analysis and modeling
» SMERFS modeling tool
» S-PLUS and related programs
. Presentation/visualization and feedback
» S-PLUS and Tree-Browser
Implementation Support
n Implementation of tool support:
. Existing tools: minimize cost
» internal as well as external tools
. New tools and utility programs
. Tool integration
» loosely coupled suite of tools
» connectors/utility programs
SRE Perspectives
n • New models and applications
. Expand from “medium-reliable” systems.
. New models for new application domains.
. Data selection/treatment.
n • Reliability improvement
. Followup to TBRMs.
. Predictive (early!) modeling for risk identification
and management.
n • Other SRE frontiers:
. Coverage/testing and reliability
. Reliability composition and maximization
Summary of Today’s Lecture
n In today’s lecture, we talked about the Software
Reliability Engineering (SRE)
Concepts and Approaches
Existing Approaches: SRGMs & IDRMs
Assessment & Improvement with TBRMs
Overview of the next lecture

n The last two lectures of the course, i.e., Lecture
No. 31 & 32, are going to be a revision of the
whole course.
The End
Software Quality
Engineering
Lecture No. 31
Summary of the previous lecture
n In previous lecture, we talked about the Software
Reliability Engineering (SRE)
Concepts and Approaches
Existing Approaches: SRGMs & IDRMs
Assessment & Improvement with TBRMs
Revision of the course
Course overview
The course is comprised of 32 lectures and is divided in
following parts:
n Part - 1: Overview and Basics.
n Part - 2: Software Testing: models & techniques,
management, automation, and integration,
n Part - 3: Other Quality Assurance Techniques:
defect prevention, process improvement, inspection, formal
verification, fault tolerance, accident prevention, and safety
assurance.
n Part – 4: Quantifiable Quality Improvement: analysis and
feedback, measurements and models, defect analysis, risk
identification, and software reliability engineering.
The Software Engineering

“Software is instructions (computer programs) that


when executed provide desired function and
performance.
Or
Data structures that enable the programs to adequately
manipulate information”

(Roger S. Pressman)
The Software Engineering
Characteristics of a software:
1. Software is developed or engineered, it is not
manufactured in the classical sense.
2. Software doesn't "wear out.“

(Figure: hardware failure curve vs. software failure curve over time)

3. Although the industry is moving toward component-based
assembly, most software continues to be custom built.
Difference b/w Engineering and Manufacturing
Engineering refers to planning or designing, whereas manufacturing
refers to using machines and raw materials to physically make the
thing.
Manufacturing does not cover the making of civil structures; that is
called construction.
For example, Company A makes the blueprint of a building; it is in the
engineering business. Company B makes cement and bricks for the
building; it is in manufacturing. Company C takes raw material from
B and the blueprint from A and constructs the building; it is in construction.
What is Quality

n In general, people’s quality expectations for software


systems they use and rely upon are two-fold:

1. The software systems must do what they are


supposed to do. In other words, they must do the right
things.
2. They must perform these specific tasks correctly or
satisfactorily. In other words, they must do the things
right.

Now you can define Software Quality Engineering.


Meeting People’s Quality Expectations

As we know, if people’s expectations are met by a
product, then the product is considered to have
quality.
Must perform expected behavior.
General Expectations

n General expectation: “good” software quality


n • Objects of our study: software
software products, systems, and services
stand-alone to embedded
software-intensive systems
wide variety, but focus on software
n • Quality (and how “good”)
Quality Expectations

n People: consumers vs. producers


quality expectations by consumers
to be satisfied by producers through software
quality engineering (SQE)
n Deliver a software system that...
does what it is supposed to do
– needs to be “validated”
does the things correctly
– needs to be “verified”
show/demonstrate/prove it (“does”)
– modeling/analysis needed
Meeting Quality Expectations

n Difficulties in achieving good quality:


size: MLOC products common
Complexity
environmental stress/constraints
flexibility/adaptability expected
n Other difficulties/factors:
product type
cost and market conditions
Major SQE Activities

n Major SQE Activities:


Testing: the central QA activity
Other quality assurance alternatives to testing
How do you know: analysis & modeling
n Scope and content hierarchy:

Software Quality Engineering

Quality Assurance

Testing
Correctness, Defect and Quality
Defining Quality in SQE
ISO-9126 Quality Framework
Other Quality Frameworks
Quality Assurance
n Quality Assurance mainly deals in
1. Dealing with Defect
2. Defect Prevention
3. Defect Detection and Removal
§ QA focus on correctness aspect of Q
§ QA as dealing with defects
§ – post-release: impact on consumers
§ – pre-release: what producer can do .
§ what: testing & many others
§ when: earlier ones desirable (lower cost) but may not be
feasible
§ how ⇒ classification below
QA Classification
n dealing with errors, faults, or failures
n removing or blocking defect sources
n preventing undesirable consequences
Overview of some topics related to SQE

n We covered the following topics in the course


Defect Prevention
Testing
Fault Tolerance
Safety Assurance
Formal Method
Inspection
QA in Software Processes

n Mega-process:
initiation, development, maintenance, termination.
n Development process components:
requirement, specification, design, coding, testing,
release.
QA in software Process

n Process variations:
waterfall development process
iterative development process
spiral development process
lightweight/agile development processes and XP
(extreme programming)
maintenance process too
mixed/synthesized/customized processes
n QA important in all processes
QA in Waterfall Process

n defect prevention in early phases
n focused defect removal in testing phase
n defect containment in late phases
n phase transitions: inspection/review/etc.
V&V in Software Process

V&V
V-model
SQE Process

Quality Engineering Process


Quality Concepts
n Software quality assurance is an umbrella activity that is
applied throughout the software process.
n SQA encompasses:
(1) a quality management approach
(2) effective software engineering technology
(3) formal technical reviews
(4) a multi-tiered testing strategy
(5) document change control
(6) software development standard and its control
procedure
(7) measurement and reporting mechanisms
Quality Concepts

n Quality --> refers to measurable characteristics of a software item.
These characteristics can be compared against a given standard.

n Two types of quality control:


Quality of design -> the characteristics that designers specify for an
item --> includes: requirements, specifications, and the design of
the system.

Quality of conformance -> the degree to which the design


specification are followed. It focuses on implementation based on
the design.
Quality Control

n What is quality control -- the series of inspections, reviews, and
tests used throughout the development cycle of a software product

n Quality control includes a feedback loop to the process.

n Objective ---> minimize the defects produced and increase
product quality. Implementation approaches:
- Fully automated
- Entirely manual
- Combination of automated tools and human
interactions
Quality Control
n Key concept of quality control:
--> compare the work products with the specified and measurable
standards

n Quality assurance consists of:


the auditing and reporting function of management
n Goal --> provide management with the necessary data about
product quality --> gain insight into and confidence in product
quality
Cost of Quality
n - prevention cost:
- quality planning
- formal technical reviews
- testing equipment
- training
n - appraisal cost:
- in-process and inter-process inspection
- equipment calibration and maintenance
- testing
n - failure cost:
internal failure cost:
repair and failure mode analysis
external failure cost:
complaint resolution, product return and replacement,
help line support, warranty work
What is Software Testing
Several definitions:

“Testing is the process of establishing confidence


that a program or system does what it is supposed
to.” by Hetzel 1973

“Testing is the process of executing a program or


system with the intent of finding errors.”
by Myers 1979

“Testing is any activity aimed at evaluating an


attribute or capability of a program or system and
determining that it meets its required results.”
by Hetzel 1983
What is Software Testing

- One of very important software development phases

- A software process based on well-defined software quality


control and testing standards, testing methods, strategy, test
criteria, and tools.

- Engineers perform all types of software testing activities to


perform a software
test process.

- The last quality checking point for software on its production


line
Software Testing

“Testing is the process of executing a program with the


intention of finding errors.”

“Testing can show the presence of bugs but never their


absence.”
Objectives of Testing

Uncover as many errors (or bugs) as possible in a given
product.
Demonstrate that a given software product matches its
requirement specification.
Validate the quality of software testing using minimum
cost and effort.
Generate high-quality test cases, perform effective tests, and
issue correct and helpful problem reports.
How Testing Has Changed
(Cartoon with three speech bubbles:
“What? I have done the coding and now you want to test it? Why? We have not got time anyway.”
“OK. Maybe you were right about testing. It looks like a nasty bug made its way into the live environment and now customers are complaining.”
“Testers! You must work harder! Longer! Faster!”)
History of Software Testing
Phases in Testing
Defect Prevention Overview

n Error blocking
error: missing/incorrect actions .
direct intervention to block errors ⇒ fault injections
prevented
rely on technology/tools/etc.
n Error source removal .
root cause analysis ⇒ identify error sources
removal through education/training/etc.
n Systematic defect prevention via process improvement.
Formal Method Overview

n Motivation
fault present: revealed through testing/inspection/etc.
fault absent: formally verify (formal methods ⇒ fault
absence)
n Basic ideas
behavior formally specified:
– pre/post conditions, or
– as mathematical functions
verify “correctness”:
– intermediate states/steps
– axioms and compositional rules
Approaches: axiomatic/functional/etc.
Inspection Overview

n Artifacts (code/design/test-cases/etc.) from
req./design/coding/testing/etc. phases
n Informal reviews:
self-conducted reviews
independent reviews
orthogonality of views desirable
n Formal inspections:
Fagan inspection and variations
process and structure
individual vs. group inspections
what/how to check: techniques
Testing Overview

n Product/Process characteristics:
object: product type, language, etc.
scale/order: unit, component, system, ...
who: self, independent, 3rd party
n What to check:
verification vs. validation
external specifications (black-box)
internal implementation (white/clear-box)
n Criteria: when to stop?
coverage of specs/structures
reliability ⇒ usage-based testing
Fault Tolerance Overview

n Motivation
fault present but removal infeasible/impractical
fault tolerance ⇒ contain defects
n FT techniques: break the fault-failure link
recovery: rollback and redo
NVP: N-version programming – fault blocked/out-voted
Safety Assurance Overview

n Extending FT idea for safety:
– fault tolerance to failure “tolerance”
n Safety related concepts:
safety: accident free
accident: failure w/ severe consequences
hazard: precondition to accident
n Safety assurance:
hazard analysis
hazard elimination/reduction/control
damage control
The End
Software Quality
Engineering
Lecture No. 32
Revision of the course
Test Planning and Preparation

• Major testing activities:


. test planning and preparation
. execution (testing)
. analysis and follow-up
• Test planning:
. goal setting
. overall strategy
• Test preparation:

. preparing test cases & test suite(s) (systematic: model-


based; our focus)
. preparing test procedure
Applications of Testing Techniques
• Major testing techniques covered so far:
. Ad hoc (non-systematic) testing.
. Checklist-based testing.
. Partition-based coverage testing.
. Musa’s OP for UBST.
. Boundary testing (BT).
. Control flow testing (CFT).
. Data flow testing (DFT)
• Application and adaptation issues:
. For different purposes/goals.
. In different environments/sub-phases.
. Existing techniques: select/adapt.
. May need new or specialized techniques.
Testing Sub-Phases: V-Model

Testing sub-phases associated with the V-Model


Testing Levels

n Unit Testing
Programs will be tested at unit level
The same developer will do the test

Integration Testing
When all the individual program units are tested in the unit testing phase and all units
are clear of any known bugs, the interfaces between those modules will be tested
Ensure that data flows from one piece to another piece

System Testing
After all the interfaces are tested between multiple modules, the whole set of
software is tested to establish that all modules work together correctly as an
application.
Put all pieces together and test

Acceptance Testing
The client will test it, in their place, in a near-real-time or simulated environment.
Testing Vs Debugging

Testing is focused on identifying the problems in the


product
Done by Tester
Need not know the source code

Debugging is to make sure that the bugs are removed or


fixed
Done by Developer
Need to know the source Code
QA Alternatives

n • Defect and QA:


. Defect: error/fault/failure.
. Defect prevention/removal/containment.
. Map to major QA activities
n • Defect prevention:
– Error source removal & error blocking
n • Defect removal: Inspection/testing/etc.
n • Defect containment: Fault tolerance and failure
containment (safety assurance).
Generic Ways for Defect Prevention
n • Error blocking
. Error: missing/incorrect actions
. Direct intervention
. Error blocked ⇒ fault injections prevented
(or errors tolerated)
. Rely on technology/tools/etc.
n • Error source removal
. Root cause analysis ⇒ identify error
sources
. Removal through education/training/etc.
Inspection as Part of QA
n • Throughout the software process
. Coding phase: code inspection
. Design phase: design inspection
. Inspection in other phases and at transitions from
one phase to another
n • Many different software artifacts:
. program code, typically
. requirement/design/other documents
. charts/models/diagrams/tables/etc.
n • Other characteristics:
. People focus.
. Not waiting for implemented system.
. Complementary to other QA activities.
Generic Inspection Process

n 1. Planning and preparation (individual)


n 2. Collection (group/meeting)
n 3. Repair (follow-up)
Inspection Process Variations
n • Overall planning:
. who? team organization/size/roles/etc.
. what? inspection objects
. objectives?
. number/coordination of multiple sessions?
n • Technique
. for preparation (individual inspection)
. for collection
n • What to do with defects?
. always: detect and confirm defects
. classify/analyze defects for feedback?
n • Use of post-collection feedback?
Fagan Inspection
n • General description
. Earliest, Fagan at IBM
. Lead to other variations
. Generic process and steps
n • Six steps of Fagan inspection:
1. Planning
2. Overview (1-to-n meeting)
3. Preparation (individual inspection)
4. Inspection (n-to-n meeting)
5. Rework
6. Follow-up.
Other Inspection Methods
n Variations to Fagan inspection: size/scope and
formality variations.
n Alternative inspection techniques/processes:
. Two-person inspection
. Meeting-less inspections
. Gilb inspection
. Phased inspections
. N-fold inspections
. Informal check/review/walkthrough
. Active design reviews
. Inspection for program correctness
. Code reading
. Code reading with stepwise abstraction
QA and Formal Verification
n • Formal methods = formal specification + formal
verification
n • Formal specification (FS):
. As part of defect prevention
. Formal ⇒ prevent/reduce defect injection due to
imprecision, ambiguity, etc.
. Briefly covered as related to FV.
n • Formal verification (FV):
. As part of QA, but focus on positive: “Prove absence of
fault”
. People intensive
. Several commonly used approaches
Formal Specification: Ideas
n Formal specification:
. Correctness focus
. Different levels of details
. 3Cs: complete, clear, consistent
. Two types: descriptive & behavioral
n Descriptive formal specifications:
. Logic: pre-/post-conditions.
. Math functions
. Notations and language support: Z, VDM, etc.
n Behavioral formal specifications: FSM, Petri-Net,
etc.
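Descriptive specifications based on pre-/post-conditions can be approximated in ordinary code with assertions before moving to heavier notations such as Z or VDM. A small sketch, using a hypothetical integer square-root routine:
```python
def isqrt_spec(x: int) -> int:
    """Integer square root with its pre-/post-conditions made explicit."""
    # Pre-condition: the input must be a non-negative integer.
    assert isinstance(x, int) and x >= 0, "pre-condition violated: x >= 0"

    r = 0
    while (r + 1) * (r + 1) <= x:
        r += 1

    # Post-condition: r is the largest integer whose square does not exceed x.
    assert r * r <= x < (r + 1) * (r + 1), "post-condition violated"
    return r

print(isqrt_spec(10))   # 3
```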
Formal Specification: Ideas
n • “Testing shows the presence of errors, not their
absence.” — Dijkstra
n • Formal verification: proof of correctness
. Formal specs: as pre/post-conditions
. Axioms for components or functional units
. Composition (bottom-up, chaining)
. Development and verification together
n • Other related approaches:
. Semi-formal verification
. Model checking
. Inspection for correctness
FT: Recovery Blocks (RB)
n General idea:
. Periodic check-pointing
. Problem detection/acceptance test
. Rollback (recovery)
FT: Recovery Blocks (RB)
n • Periodic check-pointing
. too often: expensive check-pointing
. too rare: expensive recovery
. smart/incremental check-pointing
n • Problem detection/acceptance test
. exceptions due to in/external causes
. periodic vs event-triggered
n • Recovery (rollback) from problems:
. external disturbance: environment?
. internal faults: tolerate/correct?
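A minimal sketch of the recovery-block pattern: checkpoint the state, run the primary alternate, apply an acceptance test, and fall back to the next alternate on failure. All functions below are hypothetical placeholders:
```python
import copy

def recovery_block(state, alternates, acceptance_test):
    """Try each alternate in turn; roll back to the checkpoint on failure."""
    checkpoint = copy.deepcopy(state)          # check-pointing
    for alternate in alternates:
        try:
            candidate = alternate(copy.deepcopy(checkpoint))
            if acceptance_test(candidate):     # problem detection
                return candidate               # accepted result
        except Exception:
            pass                               # exception => treat as failure
        # rollback: discard the candidate, continue from the checkpoint
    raise RuntimeError("all alternates failed the acceptance test")

# Hypothetical primary and backup computations of a sorted list.
primary = lambda data: data                    # buggy: forgets to sort
backup  = lambda data: sorted(data)            # simpler, independent version
accept  = lambda result: all(a <= b for a, b in zip(result, result[1:]))

print(recovery_block([3, 1, 2], [primary, backup], accept))   # [1, 2, 3]
```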
Fault Tolerance: N- Version Programming

n • FT with NVP:
. NVP: N-Version Programming
. Multiple independent versions
. Dynamic voting/decision ⇒ FT.
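A minimal sketch of NVP's dynamic voting: N independently developed versions compute the same result and a majority vote masks a faulty version. The three versions below are trivial stand-ins:
```python
from collections import Counter

def nvp_vote(inputs, versions):
    """Run all versions on the same inputs and return the majority result."""
    results = [version(*inputs) for version in versions]
    winner, votes = Counter(results).most_common(1)[0]
    if votes <= len(versions) // 2:
        raise RuntimeError("no majority: fault not masked")
    return winner

# Three hypothetical, independently written versions of the same function.
v1 = lambda a, b: a + b
v2 = lambda a, b: b + a
v3 = lambda a, b: a + b + 1      # faulty version: off by one

print(nvp_vote((2, 3), [v1, v2, v3]))   # 5, faulty v3 out-voted
```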
Hazard Elimination
n • Hazard sources identification ⇒ elimination
(Some specific faults prevented or removed.)
n • Traditional QA (but with hazard focus):
. Fault prevention activities:
» education/process/technology/etc
» formal specification & verification
. Fault removal activities:
» rigorous testing/inspection/analyses
n • “Safe” design: More specialized techniques:
. Substitution, simplification, decoupling.
. Human error elimination.
. Hazardous material/conditions↓
Hazard Reduction
n • Hazard identification ⇒ reduction (Some specific
system failures prevented or tolerated.)
n • Traditional QA (but with hazard focus):
. Fault tolerance
. Other redundancy
n • “Safe” design: More specialized techniques:
. Creating hazard barriers
. Safety margins and safety constraints
. Locking devices
. Reducing hazard likelihood
. Minimizing failure probability
. Mostly “passive” or “reactive”
Hazard Control
n • Hazard identification ⇒ control
. Key: failure severity reduction.
. Post-failure actions.
. Failure-accident link weakened.
. Traditional QA: not much, but good design principles
may help.
n • “Safe” design: More specialized techniques:
. Isolation and containment
. Fail-safe design & hazard scope↓
. Protection system
. More “active” than “passive”
. Similar techniques to hazard reduction,
» but focus on post-failure severity↓ vs. pre-failure
hazard likelihood↓.
Software Safety Program (SSP)
n • Leveson’s approach (Leveson, 1995) — Software
safety program (SSP)
n • Process and technology integration
. Limited goals
. Formal verification/inspection based
. But restricted to safety risks
. Based on hazard analyses results
. Safety analysis and hazard resolution
. Safety verification:
» few things carried over
Software Safety Program (SSP)
n • In overall development process:
. Safety as part of the requirement
. Safety constraints at different levels/phases
. Verification/refinement activities
. Distribution over the whole process
TFM: Two-Frame-Model

n • TFM: Two-Frame-Model
. Physical frame
. Logical frame
. Sensors: physical ⇒ logical
. Actuators: logical ⇒ physical
n • TFM characteristics and comparison:
. Interaction between the two frames
. Nondeterministic state transitions and
encoding/decoding functions
. Focuses on symmetry/consistency between the
two frames.
Comparison of the QA Alternatives: Applicability
n • Objects QA activities applied on:
. Mostly on specific objects, e.g., testing executable
code
. Exception: defect prevention on (implementation
related) dev. activities
Comparison: Applicability
n • Applicability to development phases:
. In waterfall or V-model: implementation
(req/design/coding) & testing/later.
. Inspection in all phases.
. Other QA in specific sets of phases.
Comparison: Applicability/Expertise
n • General expertise levels: mostly in ranges,
depending on specific techniques used.
n • Specific background knowledge

Required expertise and background knowledge for people to perform


different QA Alternatives
Comparison: Effectiveness
n • Defect specifics or perspectives:
. Dealing with errors/faults/failures?
. Direct action vs follow-up action: may deal with
different defect perspectives.
. Example: failures detected in testing but (failure-
causing) faults fixed in follow-up.
Comparison: Effectiveness
n Problem or defect types:
errors/faults/failures of different types or
characteristics
Comparison: Effectiveness
n Ease of result interpretation
n Specific info/measurement
Comparison: Cost
n • Overall cost comparison:
. rough values and ranges
. multiple factors but focus on performing
the specific QA activities
Importance of Feedback Loop
n All QA activities covered in Part II and Part III
need additional support:
. Planning and goal setting
. Management via feedback loop:
» When to stop?
» Adjustment and improvement, etc.
» All based on assessments/predictions
Importance of Feedback Loop
n Feedback loop for quantification/improvement:
. Focus of Part IV chapters .
» mechanism and implementation.
» models and measurements.
» defect analyses and techniques.
» risk identification techniques.
» software reliability engineering
QE Activities and Process Review
n • Major activities:
. Pre-QA planning (earlier lectures).
. QA (Part II and Part III).
. Post-QA analysis & feedback
» Part IV (maybe parallel instead of “post-”)
n • Overall process:– Software quality engineering
(SQE)

Quality Engg. Process


QE Activities and Process Review
n Feedback loop zoom-in:
. Multiple measurement sources.
. Many types of analysis performed.
. Multiple feedback paths.
Refined Quality Engineering Process: Management, Analysis and feedback
for Quantifiable improvement
Quality Models
n • Model taxonomy:
. Generalized:
» – overall, segmented, and dynamic
. Product-specific:
» – semi-customized: product history
» – observation-based: observations
» – measurement-driven: predictive

Classification of quality
assessment models
Quality Model Summary
Defect Analysis
n Goal: (actual/potential) defect↓ or quality↑ in current
and future products.
n General defect analyses:
. Questions: what/where/when/how/why?
. Distribution/trend/causal analyses.
n Analyses of classified defect data:
. Prior: defect classification.
. Use of historical baselines.
. Attribute focusing in 1-way and 2-way analyses.
. Tree-based defect analysis (Lecture 29)
Risk Identification Techniques
n • New statistical techniques:
. PCA: principal component analysis
. DA: discriminant analysis
. TBM: tree-based modeling
n • AI-based new techniques:
. NN: artificial neural networks.
. OSR: optimal set reduction.
. Abductive-reasoning, etc.
What Is SRE
n • Reliability: Probability of failure-free operation for a
specific time period or input set under a specific environment
. Failure: behavioral deviations
. Time: how to measure?
. Input state characterization and Environment: OP
n • Software reliability engineering:
. Engineering (applied science) discipline
. Measure, predict, manage reliability
. Statistical modeling
. Customer perspective:
» – failures vs. faults
» – meaningful time vs. development days
» – customer operational profile
The End of the Course
