Testing 1
Background
Testing 2
Faults & Failure
Testing 4
Detecting defects in Testing
Testing 5
Two basic principles
Test early
Test parts as soon as they are implemented
Test each method in turn
Test often
Run tests at every reasonable opportunity
After small additions
After changes have been made
Re-run prior tests (confirm they still work) and test the new functionality
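The "test often" principle above can be sketched as code: keep the prior tests around and re-run them together with tests for the new functionality after every change. The functions and tests below are hypothetical illustrations, not from the lecture.

```python
def add(a, b):
    return a + b

def add_all(items):
    # newly added functionality, built on the existing add()
    total = 0
    for x in items:
        total = add(total, x)
    return total

def prior_tests():
    # tests that existed before the change
    assert add(2, 3) == 5
    assert add(-1, 1) == 0

def new_tests():
    # tests written for the new functionality
    assert add_all([1, 2, 3]) == 6
    assert add_all([]) == 0

# after every change: re-run prior tests plus the new ones
prior_tests()
new_tests()
```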
Testing 6
Retesting: Regression Testing
Testing 9
Common Test Oracles
specifications and documentation
other products (for instance, an oracle for a software program might be a second program that uses a different algorithm to evaluate the same mathematical expression as the product under test)
a heuristic oracle that provides approximate results, or exact results for a set of a few test inputs
a statistical oracle that uses statistical characteristics
a consistency oracle that compares the results of one test execution to another for similarity
a model-based oracle that uses the same model to generate and verify system behavior
or a human being's judgment (i.e., does the program "seem" to the user to do the correct thing?)
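The "other products" oracle above can be sketched minimally: the product under test uses one algorithm, and a second program with a different algorithm for the same specification serves as the oracle. The functions here are illustrative assumptions.

```python
def sum_to_n(n):
    # product under test: closed-form formula
    return n * (n + 1) // 2

def oracle_sum_to_n(n):
    # oracle: straightforward loop; different algorithm, same spec
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

# the oracle checks the product under test on each input
for n in [0, 1, 10, 100]:
    assert sum_to_n(n) == oracle_sum_to_n(n)
```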
Testing 10
Role of Test cases
Testing 12
Black box testing
Testing 13
Black box testing
Testing 14
Also need white box testing
Testing 15
TESTING PROCESS
Testing 16
Testing
Testing 17
Incremental Testing
Testing 18
Integration and Testing
Testing 19
Top-down and Bottom-up
Testing 21
User needs: Acceptance testing
Testing 23
Integration Testing
Testing 24
System Testing
Testing 26
Other forms of testing
Performance testing
tools needed to “measure” performance
Stress testing
load the system to its peak; load generation tools needed
Regression testing
test that previous functionality still works correctly
important when changes are made
previous test records are needed for comparison
prioritization of test cases is needed when the complete test suite cannot be executed for a change
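The "measure" step of performance testing above can be sketched minimally: time an operation and compare it against a budget. The operation and the one-second budget are illustrative assumptions, not requirements from the lecture.

```python
import time

def operation():
    # stand-in for the operation whose performance is being tested
    return sum(range(100_000))

start = time.perf_counter()
result = operation()
elapsed = time.perf_counter() - start

BUDGET_SECONDS = 1.0  # assumed performance requirement
assert elapsed < BUDGET_SECONDS, f"too slow: {elapsed:.3f}s"
```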
Testing 27
Test Plan
Testing 28
Test Plan…
Testing 29
Typical Steps
Testing 32
2. Determine type of testing
Regression testing
Validates that changes did not create defects in existing code
Acceptance testing
Customer agreement that the contract is satisfied
Installation testing
Works as specified once installed on the required platform
Robustness testing
Validates the ability to handle anomalies
Performance testing
Is it fast enough / does it use an acceptable amount of memory
Testing 33
3. Determine the extent
Testing 35
4. Decide on test documentation
Testing 36
Documentation questions
Testing 37
5. Determine input sources
Testing 38
6. Decide who will test
Individual engineers responsible for some tests (units)?
Testing beyond the unit is usually planned/performed by people other than the coders
Unit-level tests made available for inspection/incorporation in higher-level tests
How/when inspected by QA?
Typically black box testing only
How/when designed and performed by third parties?
Testing 39
7. Estimate the resources
Testing 40
8. Identify & track metrics
Testing 41
“More than the act of testing, the act of
designing tests is one of the best bug
preventers known. The thinking that must be
done to create a useful test can discover and
eliminate bugs before they are coded –
indeed, test-design thinking can discover and
eliminate bugs at every stage in the creation
of software, from conception to specification,
to design, coding and the rest.” – Boris Beizer
Testing 42
Software Testing Templates
http://www.the-software-tester.com/templates.html
Software Test Plan
Software Test Report
http://softwaretestingfundamentals.com/test-plan/
Software Test Plan
Testing 43
MOVING BEYOND THE PLAN
Testing 44
Test case specifications
Testing 45
Test case specifications…
Testing 46
Test case specifications…
Testing 47
Test case specifications…
Testing 48
Test case specifications…
Testing 49
Test case execution and
analysis
Executing test cases may require drivers or stubs to be written; some tests can be automated, others are manual
A separate test procedure document may be prepared
A test summary report is often an output: a summary of test cases executed, effort, defects found, etc.
Monitoring of the testing effort is important to ensure that sufficient time is spent
Computer time is also an indicator of how testing is proceeding
Testing 50
Defect logging and tracking
Testing 51
Defect logging…
Testing 52
Defect logging…
Testing 53
Defect logging…
Testing 54
Defect logging…
Testing 55
Defect logging and tracking…
Testing 56
Defect arrival and closure
trend
Testing 57
Defect analysis for
prevention
Quality control focuses on removing defects
The goal of defect prevention (DP) is to reduce the defect injection rate in future
DP is done by analyzing the defect log, identifying causes, and then removing them
It is an advanced practice, done only in mature organizations
It finally results in actions to be undertaken by individuals to reduce defects in future
Testing 58
Metrics - Defect removal
efficiency
Basic objective of testing is to identify defects
present in the programs
Testing is good only if it succeeds in this goal
Defect removal efficiency (DRE) of a QC activity
= % of present defects detected by that QC
activity
High DRE of a quality control activity means
most defects present at the time will be removed
Testing 59
Defect removal efficiency …
DRE for a project can be evaluated only when all defects are known, including delivered defects
Delivered defects are approximated as the number of defects found in some duration after delivery
The injection stage of a defect is the stage in which it was introduced into the software; the detection stage is when it was detected
These stages are typically logged for defects
With the injection and detection stages of all defects, DRE for a QC activity can be computed
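The computation described above can be sketched from a defect log in which each defect records its injection and detection stage. The stage names and log entries below are invented for illustration; DRE of a stage is taken as the percentage of defects present at that stage (injected at or before it, detected at or after it) that the stage removes.

```python
STAGES = ["design", "coding", "unit test", "system test", "delivery"]

# hypothetical defect log: (injection stage, detection stage)
defect_log = [
    ("design", "design"),
    ("design", "unit test"),
    ("coding", "unit test"),
    ("coding", "system test"),
    ("coding", "delivery"),  # a delivered defect
]

def dre(stage):
    idx = STAGES.index(stage)
    # defects present when this QC activity runs
    present = [d for d in defect_log
               if STAGES.index(d[0]) <= idx <= STAGES.index(d[1])]
    # of those, the ones this activity actually detected
    removed = [d for d in present if d[1] == stage]
    return 100.0 * len(removed) / len(present)
```

For this log, unit testing sees four of the five defects and catches two of them, so its DRE is 50%.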
Testing 60
Defect Removal Efficiency …
Testing 61
Metrics – Reliability
Estimation
High reliability is an important goal to be achieved by testing
Reliability is usually quantified as a probability or a failure rate
For a system it can be measured by counting failures over a period of time
Measurement is often not possible for software, as reliability changes as a result of fixes, and with one-off software there is no population to measure
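When measurement is possible, the failure-rate quantification above can be sketched as follows, assuming an exponential failure model; the observed counts are invented for illustration.

```python
import math

failures = 4     # failures observed during the measurement period
hours = 1000.0   # hours of operation observed

failure_rate = failures / hours  # failures per hour

def reliability(t):
    # probability of surviving t hours without failure
    # (exponential model: R(t) = e^(-lambda * t))
    return math.exp(-failure_rate * t)
```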
Testing 62
Reliability Estimation…
Testing 63
Summary
Testing 64
Summary …
Testing 67
Black box testing…
For modules:
specifications produced in design detail the expected behavior
For system testing:
the SRS specifies the expected behavior
Testing 68
Black Box Testing…
Testing 69
Testing 71
Testing Methods
Testing 72
Equivalence Class
partitioning
Divide the input space into equivalence classes
If the software works for a test case from a class, then it is likely to work for all members of that class
Can reduce the set of test cases if such equivalence classes can be identified
Getting ideal equivalence classes is impossible
Approximate them by identifying classes for which different behavior is specified
http://www.testing-world.com/58828/Equivalence-Class-Partitioning
http://www.testing-world.com/58828/Equivalence-Class-Partitioning
Testing 73
Equivalence Class Examples
In a computer store, the computer item can have a quantity
between -500 to +500. What are the equivalence classes?
Testing 74
Equivalence Class Examples
Answer:
Valid class: -500 <= quantity <= +500
Invalid class: quantity < -500
Invalid class: quantity > +500
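The classes for the store example can be exercised with one representative per class, on the assumption that one value stands for the whole class. The validator below is a hypothetical implementation of the quantity rule.

```python
def quantity_valid(q):
    # hypothetical check for the -500..+500 quantity range
    return -500 <= q <= 500

# one representative test value per equivalence class
valid_representative = 0    # class: -500 <= q <= +500
invalid_low = -501          # class: q < -500
invalid_high = 501          # class: q > +500

assert quantity_valid(valid_representative)
assert not quantity_valid(invalid_low)
assert not quantity_valid(invalid_high)
```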
Testing 75
Equivalence class
partitioning…
Rationale: specification requires same
behavior for elements in a class
Software likely to be constructed such that it
either fails for all or for none.
E.g. if a function was not designed for
negative numbers then it will fail for all the
negative numbers
For robustness, should form equivalent
classes for invalid as well as valid inputs
Testing 76
Equivalence class
partitioning…
Every condition specified as input is an equivalence class
Define invalid equivalence classes also
E.g., if the range 0 < value < max is specified:
the range is one valid class
input <= 0 is an invalid class
input >= max is an invalid class
Whenever the entire range may not be treated uniformly, split it into classes
Testing 77
Equivalence class…
Testing 78
Example
Testing 79
Example..
Testing 80
Example…
Testing 81
Boundary value analysis
Testing 82
Boundary value analysis
(cont)...
For each equivalence class:
choose values on the edges of the class
choose values just outside the edges
E.g., if 0 <= x <= 1.0:
0.0 and 1.0 are edges inside
-0.1 and 1.1 are just outside
E.g., for a bounded list: have a null (empty) list and a maximum-length list
Consider outputs also, and have test cases generate outputs on the boundary
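The selection rule above (edges inside, values just outside) can be sketched as a small helper; the function name and the delta step are illustrative choices.

```python
def boundary_values(lo, hi, delta):
    # values on the edges of the class, plus values just outside it
    return [lo, hi, lo - delta, hi + delta]

# for the 0 <= x <= 1.0 example: edges 0.0 and 1.0, outside -0.1 and 1.1
values = boundary_values(0.0, 1.0, 0.1)
```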
Testing 83
Boundary Value Analysis
If there are multiple inputs, how should they be combined into test cases? Two strategies are possible:
Try all possible combinations of boundary values of the different variables; with n variables this gives 7^n test cases!
Select boundary values for one variable, keeping the other variables at normal values, plus one test case with all variables at normal values
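The two strategies above can be contrasted in code, assuming each variable contributes the usual seven boundary-analysis values (min-1, min, min+1, nominal, max-1, max, max+1); the ranges are invented for illustration.

```python
from itertools import product

def bva_values(lo, hi):
    # seven boundary-analysis values for one variable
    nominal = (lo + hi) // 2
    return [lo - 1, lo, lo + 1, nominal, hi - 1, hi, hi + 1]

x_vals = bva_values(0, 10)    # x in 0..10
y_vals = bva_values(0, 100)   # y in 0..100

# strategy 1: all combinations -> 7^n cases for n variables
all_combos = list(product(x_vals, y_vals))  # 7^2 = 49 cases

# strategy 2: vary one variable at a time, others held at nominal
x_nom, y_nom = 5, 50
one_at_a_time = ([(x, y_nom) for x in x_vals] +
                 [(x_nom, y) for y in y_vals])  # 2 * 7 = 14 cases
```

Even with two variables the exhaustive strategy is 49 cases against 14, and the gap grows exponentially with n.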
Testing 84
BVA… (test cases for two vars, x and y)
Testing 85
Cause Effect graphing
Testing 86
CE-graphing
Testing 87
CE-graphing
Testing 88
Step 1: Break the specification
down into workable pieces.
Testing 89
Step 2: Identify the causes
and effects.
a) Identify the causes (the distinct or
equivalence classes of input conditions) and
assign each one a unique number.
b) Identify the effects or system
transformation and assign each one a unique
number.
Testing 90
Example
Testing 92
Step 3: Construct Cause & Effect
Graph
Testing 93
Step 4: Annotate the graph
with constraints
Annotate the graph with constraints describing
combinations of causes and/or effects that are
impossible because of syntactic or
environmental constraints or considerations.
Example: Can be both Male and Female?
Types of constraints?
Exclusive: both cannot be true
Inclusive: at least one must be true
One and only one: exactly one must be true
Requires: A requires B (if A, then B)
Mask: if effect X, then not effect Y
Testing 94
Types of Constraints
Testing 95
Example: Adding a One-and-
only-one Constraint
Testing 96
Step 5: Construct limited
entry decision table
Methodically trace state conditions in the
graphs, converting them into a limited-entry
decision table.
Each column in the table represents a test case.
Test Case   1   2   3   …   n
Cause 1     1   0   …
…           0   1   …
Cause c     0   0   …
Effect 1    0   …
…
Effect e    0
Testing 97
Example: Limited entry
decision table
Testing 98
Step 6: Convert into test cases
Columns to rows
Read off the 1’s
Testing 99
Notes
Testing 100
Exercise: You try it!
A bank database which allows two commands
Credit acc# amt
Debit acc# amt
Requirements
If credit and acc# valid, then credit
If debit and acc# valid and amt less than balance, then
debit
Invalid command – message
Your task…
Identify and name causes and effects
Draw CE graphs and add constraints
Construct limited entry decision table
Construct test cases
Testing 101
Example…
Causes
C1: command is credit
C2: command is debit
C3: acc# is valid
C4: amt is valid
Effects
E1: Print “Invalid command”
E2: Print “Invalid acct#”
E3: Print “Debit amt not valid”
E4: Debit account
E5: Credit account
Decision table (x = don’t care):
#    1  2  3  4  5
C1   0  1  x  x  x
C2   0  x  1  1  x
C3   x  0  1  1  1
C4   x  x  0  1  1
E1   1
E2      1
E3         1
E4            1
E5               1
Testing 102
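The bank example can be implemented and then exercised with one test per decision-table column. The balance check and the exact return strings below are assumed details, not taken from the slides.

```python
def process(command, acc_valid, amt, balance):
    # hypothetical implementation of the two-command bank database
    if command not in ("credit", "debit"):
        return "Invalid command"
    if not acc_valid:
        return "Invalid acct#"
    if command == "debit" and amt > balance:
        return "Debit amt not valid"
    return "credited" if command == "credit" else "debited"

# one test case per decision-table column
assert process("pay", True, 10, 100) == "Invalid command"        # E1
assert process("credit", False, 10, 100) == "Invalid acct#"      # E2
assert process("debit", True, 500, 100) == "Debit amt not valid" # E3
assert process("debit", True, 50, 100) == "debited"              # E4
assert process("credit", True, 50, 100) == "credited"            # E5
```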
Pair-wise testing
Testing 103
Pair-wise testing…
Testing 104
Pair-wise testing…
Testing 105
Pair-wise testing…
Testing 106
Pair-wise testing…
Testing 107
Pair-wise testing…
Testing 108
Pair-wise testing, Example
Testing 109
Pair-wise testing…
Testing 110
Special cases
Programs often fail on special cases
These depend on the nature of the inputs, the types of data structures, etc.
There are no good rules to identify them
One way is to guess where the software might fail and create those test cases
Also called error guessing
Play the sadist & hit where it might hurt
Testing 111
Error Guessing
Use experience and judgment to guess situations where a programmer might make mistakes
Special cases can arise due to assumptions about inputs, user, operating environment, business, etc.
E.g., a program to count the frequency of words:
file empty, file non-existent, file only has blanks, contains only one word, all words are the same, multiple consecutive blank lines, multiple blanks between words, blanks at the start, words in sorted order, blanks at the end of file, etc.
Perhaps the most widely used technique in practice
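Several of the guessed special cases above can be turned directly into tests; the counter itself is a hypothetical implementation, not from the lecture.

```python
from collections import Counter

def word_freq(text):
    # count word frequencies; split() collapses all runs of whitespace
    return Counter(text.split())

assert word_freq("") == Counter()                  # empty input
assert word_freq("   \n\n  ") == Counter()         # only blanks / blank lines
assert word_freq("hello") == {"hello": 1}          # a single word
assert word_freq("a a a") == {"a": 3}              # all words the same
assert word_freq("a  b\n\nc") == {"a": 1, "b": 1, "c": 1}  # mixed blanks
```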
Testing 112
State-based Testing
Testing 113
State-based Testing…
Testing 114
State-based Testing…
Testing 115
State-based Testing…
Testing 116
State-based Testing,
example…
Consider a student survey example
A system to take survey of students
Student submits survey and is returned results of
the survey so far
The result may be from the cache (if the database
is down) and can be up to 5 surveys old
Testing 117
State-based Testing,
example…
In a series of requests, the first 5 may be treated differently
Hence, we have two states: one for requests 1-4 (state 1), and another for request 5 (state 2)
The db can be up or down, and it can go down in either of the two states (states 3-4)
Once the db is down, the system may get into a failed state (5), from which it may recover
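The example can be sketched as a small executable model. The exact state semantics are assumed from the description above: with the database down, up to 5 requests can still be served from the (up to 5 surveys old) cache before the system enters the failed state.

```python
class Survey:
    def __init__(self):
        self.db_up = True
        self.stale = 0  # requests served from the cache so far

    def fail(self):     # database goes down
        self.db_up = False

    def recover(self):  # database comes back up
        self.db_up = True
        self.stale = 0

    def req(self):
        if self.db_up:
            return "result"
        if self.stale < 5:
            self.stale += 1
            return "cached result"
        return "error"  # failed state

s = Survey()
assert s.req() == "result"                               # db up
s.fail()
assert all(s.req() == "cached result" for _ in range(5)) # cache serves 5
assert s.req() == "error"                                # failed state
s.recover()
assert s.req() == "result"                               # recovered
```

Test sequences for the transitions (such as the req/fail/recover sequences in the coverage table that follows) can be run directly against a model like this.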
Testing 118
State-based Testing,
example…
Testing 119
State-based Testing…
Testing 120
State-based Testing criteria
Testing 121
Example, test cases for AT
criteria
SNo  Transition  Test case
1    1 -> 2      req()
2    1 -> 2      req(); req(); req(); req(); req(); req()
3    2 -> 1      seq for 2; req()
4    1 -> 3      req(); fail()
5    3 -> 3      req(); fail(); req()
6    3 -> 4      req(); fail(); req(); req(); req(); req(); req()
7    4 -> 5      seq for 6; req()
8    5 -> 2      seq for 6; req(); recover()
Testing 122
State-based testing…
Testing 123
White box testing
Black box testing focuses only on functionality
What the program does; not how it is
implemented
White box testing focuses on implementation
Aim is to exercise different program structures
with the intent of uncovering errors
Is also called structural testing
Various criteria exist for test case design
Test cases have to be selected to satisfy
coverage criteria
Testing 124
Types of structural testing
Testing 125
Control flow based criteria
Testing 126
Statement Coverage Criterion
Criterion: each statement is executed at least once during testing
i.e., the set of paths executed during testing should include all nodes
Limitation: does not require a decision to evaluate to false if there is no else clause
E.g., abs(x): if (x >= 0) x = -x; return(x)
The set of test cases {x = 0} achieves 100% statement coverage, but the error is not detected
Guaranteeing 100% coverage is not always possible due to the possibility of unreachable nodes
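The slide's one-line abs() can be run directly (transcribed into Python here). The test {x = 0} executes both statements yet passes, while x = 5 exposes the defect: the condition should be x < 0.

```python
def buggy_abs(x):
    if x >= 0:   # defect: should be x < 0
        x = -x
    return x

# {x = 0}: every statement executed, result correct, bug not detected
assert buggy_abs(0) == 0

# x = 5 reveals the defect: |5| should be 5, but the function returns -5
assert buggy_abs(5) == -5
```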
Testing 127
Branch coverage
Testing 128
Control flow based…
Testing 129
Data flow-based testing
Testing 130
Data flow based…
Testing 131
Data flow based criteria
Testing 132
Relationship between diff
criteria
Testing 133
Tool support and test case
selection
Two major issues in using these criteria:
How to determine the coverage
How to select test cases to ensure coverage
For determining coverage, tools are essential
Tools also tell which branches and statements are not executed
Test case selection is mostly manual; the test plan is to be augmented based on coverage data
Testing 134
In a Project
Testing 135
Comparison
Testing 136
Summary
Testing 170
Summary …
Testing 171
Summary…
Testing 172