Training Content
September 2013
Prepared by: António Marco
Growth · Know-how · Competences
SUMMARY

1. Fundamentals of Testing: Why is testing necessary?; What is Testing?; Seven Test Principles; Fundamental Test Process; Psychology of Testing.
2. Testing throughout the Software Life Cycle: Software Development Models; Test Levels; Test Types; Maintenance Testing.
3. Static Techniques: Static Techniques and the Test Process; Review Process; Static Analysis by Tools.
4. Test Design Techniques: The Test Development Process; Categories of Test Design Techniques; Structure-Based or White-Box Techniques; Experience-Based Techniques; Choosing Test Techniques.
5. Test Management: Test Organization; Test Planning and Estimation; Test Progress Monitoring and Control; Configuration Management; Incident Management.
6. Tool Support for Testing: Types of Test Tools; Effective Use of Tools; Introducing a Tool into an Organization.
1. Why is testing necessary?
2. What is Testing?
3. Seven Test Principles
4. Fundamental Test Process
5. Psychology of Testing

1. Why is testing necessary?
Failures in software can lead to problems, including loss of money, time or business
reputation, or even injury or death.
Error: a human mistake.
Defect (bug, fault): the result of an error.
Failure: occurs when software is run and it fails to do what it should.
Defects occur because of:
- complex code;
- complexity of infrastructure;
- changed technologies;
- system interactions;
- environmental conditions (e.g. radiation, magnetism, electronic fields), which can influence the execution of software by changing hardware conditions.
2. What is Testing?

Testing objectives:
- to find bugs;
- to ensure the software meets requirements / is fit for purpose;
- to manage and mitigate risk.

Testing and quality: testing lets us measure the quality of software, and lessons from previous projects help improve it.

How much testing is enough? Take account of the level of risk, including product and project risks.
3. Seven Test Principles

Principle 1: Testing shows the presence of defects.
Principle 2: Exhaustive testing is impossible.
Principle 3: Early testing.
Principle 4: Defect clustering.
Principle 5: Pesticide paradox. If the same tests are repeated over and over again, eventually they will no longer find any new defects. Review & revise tests regularly.
Principle 6: Testing is context dependent.
Principle 7: Absence-of-errors fallacy. Finding and fixing defects does not help if the system built is unusable and does not fulfil the users' needs and expectations.
4. Fundamental Test Process

The fundamental test process consists of five main activities:
- Test Planning & Control;
- Test Analysis & Design;
- Test Implementation & Execution (including reporting defects and regression testing);
- Evaluating Exit Criteria & Reporting;
- Test Closure Activities.
5. Psychology of Testing

The tester's mindset: to prove & break things.
1. Software Development Models
2. Test Levels
3. Test Types
4. Maintenance Testing
1. Software Development Models

[Figure: the V-model. Development activities flow down the verification side: Specify Requirements, Outline Design, Detailed Design, Code & Unit Test. Each level produces a test specification (Acceptance Test Spec, System Test Spec, Integration Test Spec, Unit Test Spec) that drives the corresponding level on the validation side: Integration Testing, System Testing, Acceptance Test.]
Test analysis and design of tests for a given test level should begin during the corresponding development activity.
2. Test Levels

Component Testing:
- module, program, unit, class, object;
- testing of software pieces that are separately testable;
- searches for defects & verifies functionality;
- may include non-functional aspects (e.g. searching for memory leaks);
- includes structural testing of code;
- usually done by developers (defects are typically fixed on the spot).

Integration Testing:
- done at component level (after component testing) or at system level (after system testing);
- tests interfaces between items;
- concentrates on the interactions between items, rather than on their functionality;
- can involve both functional & structural approaches.

System Testing:
- concerned with the behavior of the whole product;
- test environment should represent the live environment;
- end-to-end, business process testing;
- functional and non-functional testing;
- often done by an independent test team.

Acceptance Testing:
- typically done by customers and/or users;
- goal is to establish confidence in the software;
- finding defects is not the main focus;
- can involve functional & non-functional testing.
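Component testing in the sense above is typically automated with a unit test framework. Here is a minimal sketch using Python's unittest, where the Stack class is a hypothetical unit under test, not from the course:

```python
import unittest

class Stack:
    """Hypothetical unit under test: a last-in, first-out container."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

class StackComponentTest(unittest.TestCase):
    # Component testing: the unit is exercised in isolation,
    # usually by the developers who wrote it.
    def test_pop_returns_last_pushed_item(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)

    def test_pop_on_empty_stack_raises(self):
        with self.assertRaises(IndexError):
            Stack().pop()

if __name__ == "__main__":
    unittest.main()
```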
Types of Acceptance Testing: user acceptance testing; operational (acceptance) testing; contract and regulation acceptance testing; alpha and beta (field) testing.
3. Test Types

A test type is a group of test activities aimed at verifying the software system based on a specific reason or target for testing.
Functional testing:
- may be performed at all test levels.

Non-functional testing. Includes:
- performance, load and stress testing;
- usability testing;
- interoperability testing;
- maintainability testing;
- reliability testing;
- portability testing;
- may be performed at all test levels.

Structural testing:
- testing of the software structure / architecture;
- white-box;
- may be performed at all test levels;
- measures coverage: the extent to which a structure has been exercised, expressed as a percentage of the items covered.

Testing related to changes:
- tests must be repeatable;
- retesting / confirmation testing: done following the release of a bug fix, to confirm that the defect has been fixed;
- regression testing: repeated testing after modification of the software or its environment, to find defects caused by the changes.
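The difference between confirmation and regression testing can be made concrete with a small pytest-style sketch; the discount() function and the fix it describes are hypothetical, not from the course:

```python
# Hypothetical bug fix: orders under 100 no longer receive the 10% discount.
def discount(total: float) -> float:
    return total * 0.9 if total >= 100 else total

def test_confirmation_of_bug_fix():
    # Confirmation (re-)testing: re-run the scenario that failed
    # to show the reported defect really has been fixed.
    assert discount(50) == 50

def test_regression_discount_still_applies():
    # Regression testing: re-run existing checks after the change
    # to find defects the modification may have introduced elsewhere.
    assert discount(200) == 180.0
```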
4. Maintenance Testing

Maintenance testing can be done at any or all test levels and for any or all test types.
1. Static Techniques and the Test Process
2. Review Process
3. Static Analysis by Tools

1. Static Techniques and the Test Process

Benefits: static techniques are applied before dynamic testing, so they find defects early; reviews are the manual static technique; successful reviews depend on management support.
2. Review Process

Activities of a formal review:
- Planning: allocating roles; defining the entry and exit criteria for more formal review types (e.g., inspections);
- Kick-off: distributing documents;
- Individual preparation;
- Examination, evaluation, recording of results;
- Rework;
- Follow-up: gathering metrics.
Roles of a typical formal review:
- Manager;
- Moderator;
- Author;
- Reviewer (checker, inspector): takes part in review meetings;
- Scribe (recorder): documents all issues, problems and open points that were identified during the meeting.
Types of review:
- Informal review: no formal process;
- Walkthrough;
- Technical / peer review;
- Inspection: defined roles; pre-meeting preparation; optional reader.
Success factors for reviews:
- Testers are valued reviewers who contribute to the review and also learn about the product, which enables them to prepare tests earlier;
- The review is conducted in an atmosphere of trust; the outcome will not be used for the evaluation of the participants;
- Review techniques are applied that are suitable for the objectives and for the type and level of software work products and reviewers;
- Checklists or roles are used if appropriate to increase the effectiveness of defect identification;
- Training is given in review techniques, especially the more formal techniques such as inspection.
Tools that support the review process:
- review process support tools: report defects.

3. Static Analysis by Tools

- Static analyzers: used by developers; support the prevention of defects; typical findings include security vulnerabilities;
- Modeling tools: used by developers.
1. The Test Development Process
2. Categories of Test Design Techniques
3. Structure-Based or White-Box Techniques
4. Experience-Based Techniques
5. Choosing Test Techniques

2. Categories of Test Design Techniques

- Specification-based (black-box): test cases are derived from models of the software;
- Structure-based (white-box): information about how the software is constructed is used to derive the test cases; the extent of coverage of the structure can be measured;
- Experience-based: draws on the knowledge and experience of people; the knowledge of testers, developers, users and other stakeholders about the software and its environment is used to derive test cases.
Exercise
Values less than 10 are rejected, values
between 10 and 21 are accepted, values
greater than or equal to 22 are rejected.
Which of the following input values cover all
of the equivalence partitions?
A. 10,11,21
B. 3,20,21
C. 3,10,22
D. 10,21,22
Answer: C
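The answer can be checked with a small Python sketch; the partition() helper is illustrative, not part of the exercise:

```python
# Equivalence partitions for the rule: < 10 rejected, 10-21 accepted,
# >= 22 rejected.
def partition(value: int) -> str:
    if value < 10:
        return "invalid: too low"
    elif value <= 21:
        return "valid: accepted"
    else:
        return "invalid: too high"

options = {"A": [10, 11, 21], "B": [3, 20, 21],
           "C": [3, 10, 22], "D": [10, 21, 22]}
for name, values in options.items():
    covered = {partition(v) for v in values}
    # Only option C touches all three partitions.
    print(name, "covers", len(covered), "of 3 partitions")
```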
Exercise
An input field takes the year of birth between
1900 and 2004. The boundary values for
testing this field are
A. 0,1900,2004,2005
B. 1900, 2004
C. 1899,1900,2004,2005
D. 1899, 1900, 1901,2003,2004,2005
Answer: C
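The same reasoning in code: a minimal sketch of two-point boundary value analysis for a closed integer range (the helper name is illustrative), which reproduces answer C:

```python
# Two-point boundary value analysis for a range [low, high]:
# test each boundary and the value just outside it.
def boundary_values(low: int, high: int) -> list[int]:
    return [low - 1, low, high, high + 1]

print(boundary_values(1900, 2004))  # [1899, 1900, 2004, 2005]
```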
Exercise
[Question stem and option A lost in extraction; the options order the items A, B, C, D.]
B. A, B, C, D.
C. D, A, B.
D. A, B, C.
Answer: A
Decision table testing: identify conditions and actions from the specification.
Use case testing:
- use cases describe interactions between actors, including users and the system, which produce a result of value to a system user;
- they describe the process flows through a system based on its actual likely use;
- tests derived from use cases help uncover defects in the process flows during real-world use of the system.
Exercise
[Flow diagram with path segments V, W, X, Y and Z not reproduced.]
What is the minimum combination of paths needed to execute every statement at least once?
Answer: 1 path: V W Y
What is the minimum combination of paths needed to exercise every decision outcome at least once?
Answer: 2 paths: V W Y; V X Z
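Why one path can suffice for statements while decisions need two can be shown with a tiny illustrative function (not the exercise's diagram):

```python
def absolute(x: int) -> int:
    if x < 0:      # the only decision in this function
        x = -x     # statement on the True branch
    return x

# absolute(-5) alone executes every statement: 100% statement
# coverage with a single test, like the 1-path answer above.
assert absolute(-5) == 5

# Decision coverage also needs the False outcome of the if,
# so a second test is required, like the 2-path answer above.
assert absolute(3) == 3
```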
Exploratory testing:
- simultaneous test design, test execution, test recording & learning;
- useful when time is tight and when documentation is non-existent.
5. Choosing Test Techniques

The choice of test techniques depends on factors such as:
- regulatory standards;
- the test objective;
- the documentation available;
- the types of defects previously found.

Techniques to choose from include Equivalence Partitioning, Decision tables, Statement testing and coverage, Decision testing and coverage, Error Guessing and exploratory testing.
1. Test Organization
2. Test Planning and Estimation
3. Test Progress Monitoring and Control
4. Configuration Management
5. Risk and Testing
6. Incident Management
1. Test Organization
Tasks of the test leader:
Coordinate the test strategy and plan with project managers and others;
Write or review a test strategy for the project, and test policy for the organization;
Contribute the testing perspective to other project activities, such as integration planning;
Initiate the specification, preparation, implementation and execution of tests, monitor the
test results and check the exit criteria;
Adapt planning based on test results and progress (sometimes documented in status
reports) and take any action necessary to compensate for problems;
Set up adequate configuration management of testware for traceability;
Introduce suitable metrics for measuring test progress and evaluating the quality of the testing and the product;
Write test summary reports based on the information gathered during testing.
Tasks of the tester:
Analyze, review and assess user requirements, specifications and models for testability;
Set up the test environment (often coordinating with system administration and network
management);
Prepare and acquire test data;
Implement tests on all test levels, execute and log the tests, evaluate the results and
document the deviations from expected results;
Use test administration or management tools and test monitoring tools as required;
2. Test Planning and Estimation

Test planning activities include:
- determining the scope and risks, and identifying the objectives of testing;
- integrating and coordinating the testing activities into the software life cycle activities;
- selecting metrics for monitoring and controlling test preparation and execution;
- making decisions;
- determining the test documentation: its level of detail, structure and templates.
Entry criteria: typical examples include test environment availability and readiness. Exit criteria define when testing can stop, once the goal has been reached; typical examples include cost.
Two approaches for estimating test effort:
- the metrics-based approach: estimating based on metrics of former or similar projects, or on typical values;
- the expert-based approach: estimating based on estimates made by the owners of the tasks or by experts.
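A toy illustration of the metrics-based idea, with made-up numbers: scale the new effort from a former project's productivity.

```python
# Metrics-based estimation sketch: use a former project's productivity
# (test cases executed per person-day) to size the new test effort.
past_cases, past_person_days = 400, 50
productivity = past_cases / past_person_days  # 8 cases per person-day

new_cases = 620
estimated_days = new_cases / productivity
print(f"estimated effort: {estimated_days:.0f} person-days")
```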
Typical test approaches (strategies):
- dynamic and heuristic;
- regression-averse;
- process- or standard-compliant;
- consultative.
3. Test Progress Monitoring and Control

Test metrics are gathered to monitor progress and to guide corrective actions.
Test Reporting summarizes:
- what happened during a period of testing, such as dates when exit criteria were met;
- analyzed information and metrics to support recommendations and decisions about future actions:
  - assessment of defects remaining;
  - the economic benefit of continued testing;
  - outstanding risks;
  - the level of confidence in the software.
Test Control covers:
- any guiding or corrective actions taken as a result of information and metrics gathered and reported;
- typical test control actions:
  - making decisions based on information from test monitoring;
  - re-prioritizing tests when an identified risk occurs;
  - changing the test schedule;
  - setting an entry criterion requiring defect fixes to be retested by developers before accepting them into a build.
Common test metrics:
- test case execution: number of test cases run / not run; test cases passed / failed;
- defect information: defect density; defects found and fixed; failure rate; retest results;
- testing costs: the cost of running the next test versus the benefit of finding the next defect.
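As an illustration, a minimal sketch that computes such metrics from hypothetical counts (the numbers are made up):

```python
# Hypothetical progress snapshot for one test cycle.
executed, total = 180, 220
passed, failed = 150, 30
defects_found, defects_fixed = 45, 38
kloc = 12  # thousand lines of code, for defect density

print(f"execution: {executed}/{total} run ({executed / total:.0%})")
print(f"pass rate: {passed / executed:.0%}, fail rate: {failed / executed:.0%}")
print(f"defects: {defects_found} found, {defects_fixed} fixed")
print(f"defect density: {defects_found / kloc:.1f} defects/KLOC")
```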
4. Configuration Management

All items of testware should be:
- identified;
- version controlled (Version 1, Version 2, Version 3, …).
5. Risk and Testing

A risk is a factor that could result in a potential problem. The level of risk is determined by the probability of the problem occurring and by its impact:

Probability \ Impact:  Low      Medium   High
High                   Low      Medium   High
Medium                 Low      Medium   Medium
Low                    Low      Low      Low
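The matrix above, assuming the reconstruction is right, is just a lookup table; a tiny sketch with an illustrative function name:

```python
# Risk level as a function of probability and impact, per the matrix above.
RISK_MATRIX = {
    "High":   {"Low": "Low", "Medium": "Medium", "High": "High"},
    "Medium": {"Low": "Low", "Medium": "Medium", "High": "Medium"},
    "Low":    {"Low": "Low", "Medium": "Low",    "High": "Low"},
}

def risk_level(probability: str, impact: str) -> str:
    return RISK_MATRIX[probability][impact]

print(risk_level("High", "High"))  # High
print(risk_level("Low", "High"))   # Low
```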
Project risks include:

Technical factors:
- problems in defining the right requirements;
- the extent to which requirements cannot be met given existing constraints;
- test environment not ready on time;
- late data conversion/migration planning, and development and testing of data conversion/migration tools;
- low quality of the design, code, configuration data, test data and tests.

Supplier factors:
- failure of a third party;
- contractual issues.
A risk-based approach to testing: testing is used to reduce the levels of product risk.
6. Incident Management

Defects:
- should be tracked;
- may be raised at any time, e.g. during review or during testing;
- may be raised for anything: issues in code, the working system, or any type of documentation.
Details to include in a defect report:
- the software or system life cycle process in which the incident was observed;
- change history;
- references, including the identity of the test case specification that revealed the problem.
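A sketch of a record holding those fields; the dataclass and field names are illustrative, not a specific tool's schema:

```python
from dataclasses import dataclass, field

@dataclass
class DefectReport:
    # Life cycle process in which the incident was observed.
    lifecycle_phase: str
    # Change history of the defect report itself.
    change_history: list[str] = field(default_factory=list)
    # Identity of the test case specification that revealed the problem.
    test_case_ref: str = ""

report = DefectReport(lifecycle_phase="system testing",
                      test_case_ref="TC-042")  # hypothetical reference
```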
1. Types of Test Tools
2. Effective Use of Tools
3. Introducing a Tool into an Organization
1. Types of Test Tools

- Requirements management tools: allow requirements to be prioritized; not strictly testing tools;
- Configuration management tools: store information and keep track of different versions and builds of software and testware; enable traceability; useful when developing on several hardware/software environments;
- Incident management (defect tracking) tools;
- Test design tools;
- Test data preparation tools;
- Coverage measurement tools: measure structural coverage from code; can be intrusive; used by developers;
- Security tools;
- Test execution tools;
- Test harness / unit test framework tools: used by developers;
- Test comparators;
- Dynamic analysis tools: find defects that are evident only when software is executing, such as time dependencies or memory leaks; used by developers;
- Performance / load / stress testing tools;
- Monitoring tools;
- Data quality assessment tools.

2. Effective Use of Tools

Potential benefits of tool support include ease of access to information about tests or testing (e.g., statistics and graphs about test progress, incident rates and performance).
Potential risks of tool support include:
- unrealistic expectations for the tools (including functionality and ease of use);
- underestimating the time, cost and effort for the initial introduction of a tool (including training and external expertise);
- underestimating the time and effort needed to achieve significant and continuing benefits from the tool (including the need for changes in the testing process and continuous improvement of the way the tool is used);
- underestimating the effort required to maintain the test assets generated by the tool;
- over-reliance on the tool (replacement for test design, or use of automated testing where manual testing would be better);
- risk of the tool vendor going out of business, retiring the tool, or selling the tool to a different vendor;
- poor response from the vendor for support, upgrades, and defect fixes.
3. Introducing a Tool into an Organization

Main considerations when selecting a tool include:
- a proof-of-concept: using the test tool during the evaluation phase to establish whether it performs effectively with the software under test and within the current infrastructure, and to identify changes needed to that infrastructure to use the tool effectively;
- evaluation of the vendor (including training, support and commercial aspects), or of service support suppliers in the case of non-commercial tools;
- identification of training needs, considering the current test team's test automation skills.
Evaluate how the tool fits with existing processes and practices, and determine
what would need to change;
Decide on standard ways of using, managing, storing and maintaining the tool
and the test assets (e.g., deciding on naming conventions for files and tests,
creating libraries and defining the modularity of test suites);
84
Adapting and improving processes to fit with the use of the tool;
85