
Academia de Software Testers

Growth · Know-how · Skills

Training content
September 2013

Prepared by:
António Marco

SUMMARY

Module I - Fundamentals of Testing:
Why is testing necessary?; What is Testing?; Seven Test Principles; Fundamental Test Process; Psychology of Testing.

Module II - Testing throughout the Software Life Cycle:
Software Development Models; Test Levels; Test Types; Maintenance Testing.

Module III - Static Techniques:
Static Techniques and the Test Process; Review Process; Static Analysis by Tools.

Module IV - Test Design Techniques:
The Test Development Process; Categories of Test Design Techniques; Specification-Based or Black-Box Techniques; Structure-Based or White-Box Techniques; Experience-Based Techniques; Choosing Test Techniques.

Module V - Test Management:
Test Organization; Test Planning and Estimation; Test Progress Monitoring and Control; Configuration Management; Risk and Testing; Incident Management.

Module VI - Tool Support for Testing:
Types of Test Tools; Effective Use of Tools; Introducing a Tool into an Organization.

Module I - Fundamentals of Testing

1. Why is testing necessary?
2. What is Testing?
3. Seven Test Principles
4. Fundamental Test Process
5. Psychology of Testing

1. Why is testing necessary?


Software Systems Context

Software is an integral part of life (e.g. business systems, consumer products);

Failures in software can lead to problems, including loss of money, time or business
reputation, or even injury or death.

1. Why is testing necessary?

Causes of Software Defects

Error: a human mistake;

Defect (bug, fault): the result of an error;

Failure: occurs when software is run and it fails to do what it should.

1. Why is testing necessary?


Why do defects occur?

human beings are fallible;

there is time pressure;

complex code;

complexity of infrastructure;

changed technologies;

system interactions;

environmental conditions, e.g. radiation, magnetism or electronic fields, can influence the
execution of software by changing hardware conditions.

2. What is Testing?

Testing Objectives

to find bugs;
to ensure it meets requirements / is fit for purpose;
to manage and mitigate risk;
to gain confidence in the software;
compliance - laws, standards, regulations;
quality.

Testing and quality

We can measure the quality of software;
Testing gives confidence in the quality of the software;
Lessons should be learned from previous projects.

How Much Testing is Enough?

take account of the level of risk, including product and project risks;
take account of project constraints such as time and budget.

3. Seven testing principles

Principle 1 - Testing shows presence of defects:
testing cannot prove that there are no defects.

Principle 2 - Exhaustive testing is impossible:
there is not enough time to test everything; risk analysis & priorities should be used to focus effort.

Principle 3 - Early testing:
we should start testing as soon as possible.

Principle 4 - Defect clustering:
small areas contain most of the defects.

3. Seven testing principles

Principle 5 - Pesticide paradox:
if the same tests are repeated over and over again, eventually they will no longer find any new defects. Review & revise tests regularly.

Principle 6 - Testing is context dependent:
testing is done differently in different contexts.

Principle 7 - Absence-of-errors fallacy:
finding and fixing defects does not help if the system built is unusable and does not fulfil the users' needs and expectations.

4. Fundamental Test Process

Planning & Control

Test Planning:
defining the mission of testing;
specification of test activities.

Test Control:
the ongoing activity of verifying progress against the mission and plan of testing;
comparing actual progress against the plan;
reporting the status, including deviations from the plan;
taking corrective actions when necessary.

4. Fundamental Test Process

Analysis & Design

review the test basis;
evaluate testability of requirements & system;
identify & prioritize test conditions;
design & prioritize test cases (Standard for Software Test Documentation, IEEE Std 829-1998);
identify test data;
design test environment;
create bi-directional traceability between test basis and test cases;
implement the test approach: select the test design techniques; identify risks.

4. Fundamental Test Process

Implementation & Execution

developing, implementing & prioritizing test cases;
developing & prioritizing test procedures (the sequence of actions for the execution of a test);
creating test data;
creating test suites;
verifying the test environment;
executing test cases;
comparing actual & expected results;
reporting defects;
re-testing following a defect fix;
regression testing.

4. Fundamental Test Process

Evaluating Exit Criteria & Reporting

checking test logs against exit criteria;
assessing if more tests are needed;
writing a test summary report.

4. Fundamental Test Process

Closure Activities

checking which planned deliverables have been delivered;
finalising and archiving testware;
handover of testware to the maintenance organisation.

5. Psychology of Testing

Levels of test independence, from lowest to highest:

Test cases are designed by the person(s) writing the software;
Test cases are designed by another person(s);
Test cases are designed by a person(s) from a different section;
Test cases are designed by a person(s) from a different organization.

The developer builds things; the tester proves & breaks things.

Module II - Testing throughout the Software Life Cycle

1. Software Development Models
2. Test Levels
3. Test Types
4. Maintenance Testing

1. Software Development Models - V Model

The V model pairs each development activity (verification, left arm) with a corresponding test level (validation, right arm):

Specify Requirements <-> Acceptance Testing (Acceptance Test Spec);
Outline Design <-> System Testing (System Test Spec);
Detailed Design <-> Integration Testing (Integration Test Spec);
Code & Unit Test (Unit Test Spec).

1. Software Development Models - Iterative-incremental

Development Models

Iterative-incremental development is the process of establishing requirements, designing, building and testing a system in a series of short development cycles, e.g. Rapid Application Development.


1. Software Development Models


Testing within a life cycle model

For every development activity there is a corresponding testing activity;

Each test level has test objectives specific to that level;

Test analysis and design of tests for a given test level should begin during the
corresponding development activity;

Testers should be involved in reviewing documents as soon as drafts are available in the development life cycle.

2. Test Levels

Component Testing:
module, program, unit, class, object;
testing of software pieces that are separately testable;
searches for defects & verifies functionality;
may include non-functional aspects (e.g. search for memory leaks);
includes structural testing of code;
usually done by developers (defects are typically fixed on the spot).

Integration Testing:
done at component level (after component testing) or at system level (after system testing);
tests interfaces between items;
concentrates on the interactions between items, rather than functionality;
can involve both functional & structural approaches.

System Testing:
concerned with the behavior of the whole product;
test environment should represent live conditions;
end-to-end, business process testing;
functional and non-functional testing;
often done by an independent test team.

Acceptance Testing:
typically done by customers and/or users;
goal is to establish confidence in the software;
finding defects is not the main focus;
can involve functional & non-functional testing;
there are 4 types.

2. Test Levels

Types of Acceptance Testing

user acceptance testing;
operational acceptance testing (e.g., backup and restore);
contract & regulation acceptance testing;
alpha and beta testing:
alpha is conducted at the development site;
beta is conducted at the customer site.

3. Test Types

A test type is a group of test activities aimed at verifying the software system based on a specific reason or target for testing:

a function to be performed by the software;
a non-functional quality characteristic, such as reliability or usability;
the structure or architecture of the software or system;
change-related, i.e., confirming that defects have been fixed (confirmation testing) and looking for unintended changes (regression testing).

3. Test Types

Functional:
black-box testing;
testing of functions;
tests the software's external behavior;
"what the system does";
includes security testing and interoperability testing;
may be performed at all test levels.

Non-Functional:
testing of non-functional software characteristics;
includes: performance, load and stress testing; usability testing; reliability testing; portability testing; maintainability testing;
may be performed at all test levels.

Structural:
white-box testing;
testing of software structure / architecture;
testing of "how" the system works;
may be performed at all test levels;
measures coverage: the extent to which a structure has been exercised, expressed as a percentage of the items being covered.

Related to Changes:
tests must be repeatable;
retesting / confirmation testing: following the release of a bug fix, to confirm that it has been fixed;
regression testing: repeated testing after modification of the software or environment, to find defects caused by changes.

4. Maintenance Testing
Maintenance Testing

testing is done on an existing operational system;
triggered by modifications such as:
planned enhancement changes;
corrective and emergency changes;
changes of environment (e.g. upgrades, patches);
maintenance testing for migration;
maintenance testing for retirement of a system;
can be done at any or all test levels and for any or all test types;
impact analysis may be used to determine the regression test suite.

Module III - Static Techniques

1. Static Techniques and the Test Process
2. Review Process
3. Static Analysis by Tools

1. Static Techniques and the Test Process

What is static testing?

testing without execution of code;
performed before dynamic testing;
manual: reviews;
automated: static analysis by tools;
can test any system deliverable.

Benefits

find defects early;
reduce development timescales & costs;
reduce testing time & costs.

1. Static Techniques and the Test Process

Success factors in using static techniques:

clear predefined objective;
defects are welcomed;
involve the right people;
management support.

2. Review Process
Activities of a Formal Review

Planning

Defining the review criteria;
Selecting the personnel;
Allocating roles;
Defining the entry and exit criteria for more formal review types (e.g., inspections);
Selecting which parts of documents to review;
Checking entry criteria (for more formal review types).

2. Review Process
Activities of a Formal Review

Kick-off

Distributing documents;
Explaining the objectives, process and documents to the participants.

2. Review Process
Activities of a Formal Review

Individual preparation

Preparing for the review meeting by reviewing the document(s);
Noting potential defects, questions and comments.

2. Review Process
Activities of a Formal Review

Examination, evaluation, recording of results

Discussing or logging, with documented results or minutes (for more formal review types);
Noting defects, making recommendations regarding handling the defects, making decisions about defects;
Examining/evaluating and recording issues during any physical meetings or tracking any group electronic communications.

2. Review Process
Activities of a Formal Review

Rework

Fixing defects found (typically done by the author);
Recording the updated status of defects (in formal reviews).

2. Review Process
Activities of a Formal Review

Follow-up

Checking that defects have been addressed;
Gathering metrics;
Checking on exit criteria (for more formal review types).

2. Review Process
Roles in a typical formal review

Manager:
Decides on the execution of the review;
Allocates time in project schedules;
Determines if the review objectives have been met.

Moderator:
Leads the review of the document, including planning the review, running the meeting, and following up after the meeting;
Mediates between the various points of view.

Author:
The writer or person with chief responsibility for the document to be reviewed.

2. Review Process
Roles in a typical formal review

Reviewer (checker, inspector):
An individual with a specific technical or business background who identifies and describes findings;
Reviewers should have different perspectives and roles in the review process, and take part in any review meetings.

Scribe (Recorder):
Documents all issues, problems and open points that were identified during the meeting.

2. Review Process
Types of review

Informal review:
no formal process;
may take the form of pair programming or a technical lead reviewing designs and code;
results may be documented;
varies in usefulness depending on the reviewers;
main purpose: an inexpensive way to get some benefit.

Walkthrough:
meeting led by the author;
may take the form of scenarios, dry runs, peer group participation;
open-ended sessions;
optional pre-meeting preparation of reviewers;
optional preparation of a review report including a list of findings;
optional scribe;
may vary in practice from quite informal to very formal;
main purposes: learning, gaining understanding, finding defects.

2. Review Process
Types of review (cont.)

Technical / peer review:
documented, defined defect-detection process that includes peers and technical experts, with optional management participation;
may be performed as a peer review without management participation;
ideally led by a trained moderator;
pre-meeting preparation by reviewers;
optional use of checklists;
preparation of a review report which includes the list of findings, the verdict whether the software product meets its requirements and, where appropriate, recommendations related to findings;
may vary in practice from quite informal to very formal;
main purposes: discussing, making decisions, evaluating alternatives, finding defects, and checking conformance to specifications, plans, regulations, and standards.

2. Review Process
Types of review (cont.)

Inspections:
led by a trained moderator (not the author);
usually conducted as a peer examination;
defined roles;
includes metrics gathering;
formal process based on rules and checklists;
specified entry and exit criteria for acceptance of the software product;
pre-meeting preparation;
inspection report including a list of findings;
formal follow-up process (with optional process improvement components);
optional reader;
main purpose: finding defects.

2. Review Process
Success factors for reviews

Each review has clear predefined objectives;
The right people for the review objectives are involved;
Testers are valued reviewers who contribute to the review and also learn about the product, which enables them to prepare tests earlier;
Defects found are welcomed and expressed objectively;
People issues and psychological aspects are dealt with;
The review is conducted in an atmosphere of trust; the outcome will not be used for the evaluation of the participants;
Review techniques are applied that are suitable to achieve the objectives and suited to the type and level of software work products and reviewers;
Checklists or roles are used if appropriate to increase the effectiveness of defect identification;
Training is given in review techniques, especially the more formal techniques such as inspection;
Management supports a good review process;
There is an emphasis on learning and process improvement.

2. Review Process
Review process support tools

Store information about review processes;
Store and communicate review comments;
Report defects;
Manage references to review rules;
Keep track of traceability between documents and code;
May also provide aid for online reviews.

3. Static Analysis by Tools

Tool support for static testing

Static analyzers:
enforcement of coding standards;
analysis of structures & dependencies;
calculate metrics from the code;
evaluate code against coding rules;
used by developers.

Modeling tools:
validate models of the software;
identify defects not easily found by dynamic testing;
used by developers.

Benefits:
find defects before dynamic testing;
prevention of defects;
early detection of defects prior to test execution;
early warning about suspicious aspects of the code or design by the calculation of metrics, such as a high complexity measure;
detecting dependencies and inconsistencies in software models such as links;
improved maintainability of code and design.

3. Static Analysis by tools


Typical defects discovered by static analysis tools include

Referencing a variable with an undefined value;

Inconsistent interfaces between modules and components;

Variables that are not used or are improperly declared;

Unreachable (dead) code;

Missing and erroneous logic (potentially infinite loops);

Overly complicated constructs;

Programming standards violations;

Security vulnerabilities;

Syntax violations of code and software models.

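To make these concrete, here is a short, hypothetical Python fragment containing three of the defects above; a typical static analysis tool would flag all three without ever executing the code (the function and variable names are illustrative, not from the slides):

    def apply_discount(price: float) -> float:
        if price > 100:
            rate = 0.1                 # 'rate' is only assigned on this branch...
        unused_limit = 50              # unused variable: assigned but never read
        total = price * (1 - rate)     # ...so 'rate' may be referenced undefined
        return total
        print(total)                   # unreachable (dead) code after return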

Module IV - Test Design Techniques

1. Categories of Test Design Techniques
2. Specification-Based or Black-Box Techniques
3. Structure-Based or White-Box Techniques
4. Experience-Based Techniques
5. Choosing Test Techniques

1. Categories of Test Design Techniques

Common characteristics of test design techniques

Specification-based (black-box):
models, either formal or informal, are used for the specification of the problem to be solved;
test cases can be derived systematically from these models.

Structure-based (white-box):
information about how the software is constructed is used to derive the test cases;
the extent of coverage of the software can be measured for existing test cases.

Experience-based:
draws on the knowledge and experience of people;
knowledge of testers, developers, users and other stakeholders about the software, its usage and its environment;
knowledge about likely defects and their distribution.

2. Specification-based or Black-box Techniques

Equivalence Partitioning

identify sets of inputs where each value will be treated the same as any other value in that set;
select one value to represent each set;
identify valid and invalid partitions;
reduces the number of test cases needed.

Exercise
Values less than 10 are rejected, values between 10 and 21 are accepted, values greater than or equal to 22 are rejected. Which of the following sets of input values covers all of the equivalence partitions?
A. 10, 11, 21
B. 3, 20, 21
C. 3, 10, 22
D. 10, 21, 22

Answer: C
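As an illustration (added here, not part of the original exercise), a minimal Python sketch of the technique applied to the rule above; the partitions and test values come from the exercise, while the function name is hypothetical:

    def accepts(value: int) -> bool:
        """System under test: accepts values from 10 to 21 inclusive."""
        return 10 <= value <= 21

    # One representative value per equivalence partition is enough:
    partitions = [
        ("invalid: below 10",     3,  False),
        ("valid: 10 to 21",       10, True),
        ("invalid: 22 and above", 22, False),
    ]
    for name, value, expected in partitions:
        assert accepts(value) == expected, name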

2. Specification-based or Black-box Techniques

Boundary value analysis

testing the boundaries of an input or output;
test the boundary itself;
test one increment above;
test one increment below.

Exercise
An input field takes the year of birth between 1900 and 2004. The boundary values for testing this field are:
A. 0, 1900, 2004, 2005
B. 1900, 2004
C. 1899, 1900, 2004, 2005
D. 1899, 1900, 1901, 2003, 2004, 2005

Answer: C
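A small sketch (for illustration only) that derives the boundary set used in the exercise; the helper name is hypothetical:

    def boundary_values(low: int, high: int) -> list[int]:
        """Each boundary plus the value one increment outside it."""
        return [low - 1, low, high, high + 1]

    print(boundary_values(1900, 2004))   # -> [1899, 1900, 2004, 2005]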

2. Specification-based or Black-box Techniques

State transition testing

often used for testing embedded systems;
a system may exhibit a different response depending on current conditions or previous history (its state).

Exercise
Given the following state transition table [table not reproduced in this extract], which of the test cases below will cover the following series of state transitions?
S1 -> S0 -> S1 -> S2 -> S0
A. D, A, B, C.
B. A, B, C, D.
C. C, D, A, B.
D. A, B, C.

Answer: A
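A runnable sketch of the technique; since the original transition table was an image, the mapping below is an assumed reconstruction that is consistent with the stated answer (event D moves S1 to S0, A moves S0 to S1, B moves S1 to S2, C moves S2 to S0):

    # event table: (current_state, event) -> next_state  (assumed values)
    TRANSITIONS = {
        ("S1", "D"): "S0",
        ("S0", "A"): "S1",
        ("S1", "B"): "S2",
        ("S2", "C"): "S0",
    }

    def run(start: str, events: list[str]) -> list[str]:
        """Replay a test case (sequence of events), recording states visited."""
        states = [start]
        for event in events:
            states.append(TRANSITIONS[(states[-1], event)])
        return states

    assert run("S1", ["D", "A", "B", "C"]) == ["S1", "S0", "S1", "S2", "S0"]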

2. Specification-based or Black-box Techniques

Decision Tables

for systems with lots of business rules;
identify conditions;
calculate the combinations of conditions (rules);
enter true / false for each condition in each rule;
identify system actions;
for each rule, determine true / false for each action (see the sketch below).
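A minimal sketch of building a decision table by enumerating every combination of conditions; the business rule itself (free shipping for members with orders over 100) is invented purely for illustration:

    from itertools import product

    # Conditions: is_member, total_over_100. Action: free_shipping.
    for is_member, total_over_100 in product([True, False], repeat=2):
        free_shipping = is_member and total_over_100   # the decision rule
        print(f"member={is_member!s:5}  over_100={total_over_100!s:5}"
              f"  ->  free_shipping={free_shipping}")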

2. Specification-based or Black-box Techniques

Use case testing

use cases describe interactions between actors, including users and the system, which produce a result of value to a system user;
they describe the process flows through a system based on its actual likely use;
test cases can be directly derived from use cases;
uncovers defects in the process flows during real-world use of the system;
test cases derived this way are often referred to as scenarios;
uncovers integration defects caused by the interaction and interference of different components.

3. Structure-based or White-box Techniques

Statement testing and coverage

testing of each line of code;
each statement must be exercised at least once.

Exercise
What is the MINIMUM combination of paths required to provide full statement coverage? [Control-flow diagram not reproduced in this extract.]

Answer: 1 (the single path V, W, Y)
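For intuition (an added sketch, not from the slides), a hypothetical function where one test case already executes every statement:

    def grade(score: int) -> str:
        label = "fail"
        if score >= 50:      # an 'if' with no 'else'
            label = "pass"
        return label

    # A single passing score executes every line, so this one test case
    # already achieves 100% statement coverage:
    assert grade(75) == "pass"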

3. Structure-based or White-box Techniques

Decision testing and coverage

testing of each decision outcome;
exercise each true and false outcome at least once;
not concerned with combinations of true and false outcomes.

Exercise
What is the MINIMUM combination of paths required to provide full decision coverage? [Control-flow diagram not reproduced in this extract.]

Answer: 2 (the paths V, W, Y and V, X, Z)
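Continuing the added grade() sketch from the previous page: the single test that gave full statement coverage leaves the false outcome of the decision unexercised, so decision coverage needs a second test case:

    assert grade(75) == "pass"   # decision outcome: true
    assert grade(10) == "fail"   # decision outcome: false -> 100% decision coverage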

4. Experience-Based Techniques

Error guessing:
using gut feel & experience to guess where errors were made (fault attack);
supplements formal test design.

Exploratory testing:
simultaneous test design, test execution, test recording & learning;
useful when time is tight and when documentation is non-existent.

5. Choosing Test Techniques

The choice of which test techniques to use depends on:

the type of system;
regulatory standards;
customer or contractual requirements;
level and type of risk;
test objective;
documentation available;
knowledge of the testers;
time and budget;
development life cycle;
use case models;
previous experience with the types of defects found.

Techniques to choose from:

Equivalence Partitioning;
Boundary value analysis;
State transition testing;
Decision tables;
Use case testing;
Statement testing and coverage;
Decision testing and coverage;
Error guessing;
Exploratory testing.

Module V - Test Management

1. Test Organisation
2. Test Planning and Estimation
3. Test Progress Monitoring and Control
4. Configuration Management
5. Risk and Testing
6. Incident Management

1. Test Organization
Task of the test leader

Coordinate the test strategy and plan with project managers and others;

Write or review a test strategy for the project, and test policy for the organization;

Contribute the testing perspective to other project activities, such as integration planning;

Plan the tests;

Initiate the specification, preparation, implementation and execution of tests, monitor the test results and check the exit criteria;

Adapt planning based on test results and progress (sometimes documented in status reports) and take any action necessary to compensate for problems;

Set up adequate configuration management of testware for traceability;

Introduce suitable metrics for measuring test progress and evaluating the quality of the testing and the product;

Decide about the implementation of the test environment;

Write test summary reports based on the information gathered during testing.

1. Test Organization
Task of the tester

Review and contribute to test plans;

Analyze, review and assess user requirements, specifications and models for testability;

Create test specifications;

Set up the test environment (often coordinating with system administration and network management);

Prepare and acquire test data;

Implement tests on all test levels, execute and log the tests, evaluate the results and document the deviations from expected results;

Use test administration or management tools and test monitoring tools as required;

Automate tests (may be supported by a developer or a test automation expert);

Measure performance of components and systems (if applicable);

Review tests developed by others.

2. Test Planning and Estimation


Planning Activities

determining the scope and risks, and identifying the objectives of testing;

defining the overall approach of testing:

including the definition of the test levels;

entry and exit criteria;

integrating and coordinating the testing activities into the software life cycle activities;

scheduling test analysis and design activities;

scheduling test implementation, execution and evaluation;

assigning resources for the different activities defined;

selecting metrics:

for monitoring and controlling test preparation and execution;

for defect resolution and risk issues.

2. Test Planning and Estimation


Planning Activities (cont)


making decisions:

about what to test;

what roles will perform the test activities;

how the test activities should be done;

how the test results will be evaluated.

determining documentation:

defining the amount;

level of detail;

structure;

templates.

2. Test Planning and Estimation


Planning Activities (cont)


identify the tasks for test preparation:

identification of test conditions;

specification of test environment;

specification and creation of test data;

identification and creation of tests, test cases and test procedures.

identify the tasks for test execution:

preparation of test data;

preparation / verification of test environment;

execution of test cases, recording of test results, reporting incidents;

analyzing incidents and test results;

creating post-execution reports.

2. Test Planning and Estimation

Entry Criteria

test environment availability and readiness;
test tool readiness in the test environment;
testable code availability;
test data availability.

Exit Criteria

define when to stop testing, such as at the end of a test level or when a set of tests has reached a specific goal;
typical examples:
thoroughness measures, such as coverage of code, functionality or risk;
estimates of defect density or reliability measures;
cost;
residual risks, such as defects not fixed or lack of test coverage;
schedules, such as those based on time to market.

2. Test Planning and Estimation

Approaches for the estimation of test effort

The metrics-based approach: estimating the testing effort based on metrics of former or similar projects, or based on typical values.

The expert-based approach: estimating the tasks by the owners of those tasks or by experts.
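A toy illustration of the metrics-based approach (all numbers are invented):

    # From a comparable past project:
    past_cases, past_effort_hours = 400, 320
    hours_per_case = past_effort_hours / past_cases   # 0.8 h per test case

    # Scale to the new project:
    new_cases = 550
    print(f"estimated execution effort: {new_cases * hours_per_case:.0f} h")  # 440 h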

2. Test Planning and Estimation

Test approach: the implementation of the test strategy. Typical approaches:

Analytical: such as risk-based testing.

Model-based: such as stochastic testing using statistical information about failure rates or usage.

Methodical: such as failure-based, experience-based, checklist-based, and quality characteristic-based.

Process- or standard-compliant: such as those specified by industry-specific standards or agile methodologies.

Dynamic and heuristic: such as exploratory testing, where testing is more reactive to events than pre-planned.

Consultative: such as where testing is driven by consultants or experts outside the test team.

Regression-averse: reuse of test material, extensive automation of functional regression tests.

3. Test Progress Monitoring and Control

Test Progress Monitoring:
to give feedback and visibility about test activities;
based on test metrics.

Test Reporting:
reports the state of testing: what happened during a period of testing, such as dates when exit criteria were met;
analyzed information and metrics to support recommendations and decisions about future actions:
assessment of defects remaining;
the economic benefit of continued testing;
outstanding risks;
the level of confidence in the software.

Test Control:
any guiding or corrective actions taken as a result of the information and metrics gathered and reported;
typical test control actions:
making decisions based on information from test monitoring;
reprioritizing tests when an identified risk occurs;
changing the test schedule;
setting an entry criterion that developers must test defect fixes.

3. Test Progress Monitoring and Control


Common Test metrics

percentage of work done - test case preparation; test environment preparation;

test case execution - number of test cases run / not run; test cases passed / failed;

defect information - defect density; defects found and fixed; failure rate; retest results;

test coverage of requirements, risks or code;

subjective confidence of testers in the product;

dates of test milestones;

testing costs - the cost of running the next test; versus the benefit of finding the next defect.

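Two of the metrics above computed in a minimal sketch (the figures are invented):

    planned, run, passed = 200, 150, 138
    print(f"test cases run: {run / planned:.0%} of planned")   # 75%
    print(f"pass rate:      {passed / run:.0%} of those run")  # 92%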

4. Configuration Management
Configuration Management

all documents and software items are referenced in test documentation;

all items of testware are:

Identified;

version controlled;

tracked for changes;

related to each other;

related to development items;

so that traceability can be maintained.

5. Risk and Testing

Risk

the chance of an event, hazard, threat or situation occurring;
and its undesirable consequences;
a potential problem.

Risk level as a function of probability and impact (matrix reconstructed from the original slide graphic):

Probability \ Impact   Low    Medium   High
High                   Low    Medium   High
Medium                 Low    Medium   Medium
Low                    Low    Low      Low

5. Risk and Testing

Project Risks

Organizational factors:
skill, training and staff shortages;
personnel issues;
improper attitude toward or expectations of testing;
political issues:
problems with testers communicating their needs and test results;
failure by the team to follow up on information found.

Technical factors:
problems in defining the right requirements;
the extent to which requirements cannot be met given existing constraints;
test environment not ready on time;
late data conversion, migration planning and development, and testing of data conversion/migration tools;
low quality of the design, code, configuration data, test data and tests.

Supplier factors:
failure of a third party;
contractual issues.

5. Risk and Testing

Product Risks

failure-prone software delivered;
the potential that the software/hardware could cause harm to an individual or company;
poor software characteristics (e.g., functionality, reliability, usability and performance);
poor data integrity and quality (e.g., data migration issues, data conversion problems, data transport problems);
software that does not perform its intended functions.

5. Risk and Testing

A risk-based approach to testing:
starts at the beginning of a project;
identifies product risks;
uses them to guide testing activities.

Testing can then:
reduce the levels of product risk;
reduce the probability of an adverse effect occurring;
reduce the impact of an adverse effect.

6. Incident Management

Defects should be tracked from discovery and classification to correction and confirmation of the solution.

Incidents may be raised at any time:
during development;
during review;
during testing;
during use of a software product.

They may be raised for anything:
issues in code;
the working system;
any type of documentation.

Objectives of incident reports:
provide developers and other parties with feedback about the problem, to enable identification, isolation and correction as necessary;
provide the test leader a means of tracking the quality of the system under test and the progress of the testing;
provide ideas for test process improvement.

6. Incident Management

Details to include in a defect report:

Date of issue, issuing organization, and author;
Expected and actual results;
Identification of the test item and environment;
Software or system life cycle process in which the incident was observed;
Description of the incident and logs;
Scope or degree of impact on stakeholder(s) interests;
Severity of the impact on the system;
Urgency/priority to fix;
Status of the incident;
Conclusions, recommendations and approvals;
Global issues, such as other areas that may be affected;
Change history;
References, including the identity of the test case specification that revealed the problem.
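A minimal sketch of a record type carrying the fields listed above; the field names are illustrative, not a standard schema:

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class IncidentReport:
        issued: date
        author: str
        organization: str
        test_item: str
        environment: str
        lifecycle_phase: str                  # where the incident was observed
        description: str
        expected_result: str
        actual_result: str
        severity: str                         # impact on the system
        priority: str                         # urgency to fix
        status: str = "new"
        references: list[str] = field(default_factory=list)  # e.g. test case IDs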

Module VI - Tool Support for Testing

1. Types of Test Tools
2. Effective Use of Tools: Potential Benefits and Risks
3. Introducing a Tool into an Organization

1. Types of Test Tools

Test tools can be used for one or more activities that support testing:

tools that are directly used in testing (e.g., test execution, test data generation and result comparison);
tools that help in managing the testing process (e.g., managing tests, test results, data, requirements, incidents, defects);
tools that are used in exploration (e.g., tools that monitor file activity for an application);
any tool that aids in testing (a spreadsheet is also a test tool in this meaning).

Purposes of tool support:

improve the efficiency of test activities by automating repetitive tasks or supporting manual test activities like test planning, test design, test reporting and monitoring;
automate activities that require significant resources when done manually (e.g., static testing);
automate activities that cannot be executed manually (e.g., large-scale performance testing of client-server applications);
increase reliability of testing (e.g., automating large data comparisons or simulating behavior).

1. Types of Test Tools

Tool support for management of testing and tests

Test management tools:
help manage testing & testware;
store tests and test results;
may include defect tracking;
can link results to sources for traceability;
generate metrics and test reports.

Requirements management tools:
not strictly testing tools;
store requirements statements;
check for consistency;
check for missing requirements;
allow requirements to be prioritized.

Configuration management tools:
not strictly testing tools;
store information and keep track of different versions and builds of software and testware;
enable traceability;
useful when developing on several hardware/software environments.

Incident management (defect tracking) tools:
store and manage incident reports;
facilitate their prioritization;
assign actions to people;
attribute status;
monitor their progress;
statistical analysis and reporting.

1. Types of Test Tools

Tool support for test specification

Test design tools:
generate test inputs or executable tests:
from requirements;
from the graphical user interface;
from design models (state, data or object);
from code;
may generate expected outcomes as well.

Test data preparation tools:
manipulate databases, files or data transmissions;
to set up test data;
to be used during the execution of tests.

1. Types of Test Tools

Tool support for test execution and logging

Coverage measurement tools:
measure the percentage of specific types of code structure that have been exercised (e.g., statements, branches or decisions, and module calls);
can be intrusive;
used by developers.

Security tools:
check for computer viruses and denial-of-service attacks;
search for specific vulnerabilities of the system.

Test execution tools:
enable tests to be executed automatically;
using stored inputs and expected outcomes;
using a scripting language.
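A minimal sketch of what a test execution tool does internally: replaying stored inputs against the system under test and logging each result; everything here (the cases, the add() stand-in) is illustrative:

    CASES = [
        {"input": (2, 3),  "expected": 5},
        {"input": (-1, 1), "expected": 0},
    ]

    def add(a, b):               # stand-in for the system under test
        return a + b

    for case in CASES:
        actual = add(*case["input"])
        status = "PASS" if actual == case["expected"] else "FAIL"
        print(f"{case['input']} -> {actual} [{status}]")   # execution log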

1. Types of Test Tools

Tool support for test execution and logging (cont.)

Test harness / unit test framework tools:
facilitate the testing of components or parts of a system;
by simulating the environment in which that test object will run;
used by developers.

Test comparators:
determine differences between files, databases or test results.
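A sketch of the test harness idea: the component under test depends on a payment gateway, and the harness substitutes a stub so the component can be exercised in isolation (all names are invented for illustration):

    class PaymentGatewayStub:
        """Simulates the environment the test object normally runs in."""
        def charge(self, amount):
            return {"status": "ok", "amount": amount}   # canned response

    def checkout(cart_total, gateway):
        """Component under test: charges the gateway, reports success."""
        return gateway.charge(cart_total)["status"] == "ok"

    assert checkout(42.0, PaymentGatewayStub())   # runs without a real gateway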

1. Types of Test Tools

Tool support for performance and monitoring

Dynamic analysis tools:
find defects that are evident only when software is executing, such as time dependencies or memory leaks;
used by developers.

Performance / load / stress testing tools:
monitor and report on how a system behaves under a variety of simulated usage conditions;
simulate a load on an application, such as a network or a server;
often based on automated repetitive execution of tests, controlled by parameters.

Monitoring tools:
not strictly testing tools;
analyse, verify and report on usage of specific system resources.

1. Types of Test Tools

Tool support for specific testing needs

Data quality assessment tools: data conversion/migration, data warehouses.

2. Effective use of tools: Potential Benefits and Risks


Potential benefits

Repetitive work is reduced;

Greater consistency and repeatability;

Objective assessment (e.g., static measures, coverage);

Ease of access to information about tests or testing (e.g., statistics and graphs
about test progress, incident rates and performance).


2. Effective use of tools: Potential Benefits and Risks


Potential Risks

Unrealistic expectations for the tools (including functionality and ease of use);

Underestimating the time, cost and effort for the initial introduction of a tool (including
training and external expertise);

Underestimating the time and effort needed to achieve significant and continuing benefits
from the tool (including the need for changes in the testing process and continuous
improvement of the way the tool is used);

Underestimating the effort required to maintain the test assets generated by the tool;

Over-reliance on the tool (replacement for test design or use of automated testing where
manual testing would be better);


2. Effective use of tools: Potential Benefits and Risks


Potential Risks (cont)

Neglecting version control of test assets within the tool;

Neglecting relationships and interoperability issues between critical tools, such as requirements management tools, version control tools, incident management tools, defect tracking tools, and tools from multiple vendors;

Risk of the tool vendor going out of business, retiring the tool, or selling the tool to a different vendor;

Poor response from the vendor for support, upgrades, and defect fixes;

Risk of suspension of an open-source/free tool project;

Unforeseen risks, such as the inability to support a new platform.

3. Introducing a tool into an organization

Main considerations in selecting a tool for an organization

Assessment of organizational maturity, strengths and weaknesses, and identification of opportunities for an improved test process supported by tools;

Evaluation against clear requirements and objective criteria;

A proof-of-concept, using the test tool during the evaluation phase to establish whether it performs effectively with the software under test and to identify changes needed to the infrastructure to use the tool effectively;

Evaluation of the vendor (including training, support and commercial aspects), or of service support suppliers in the case of non-commercial tools;

Identification of training needs, considering the current test team's test automation skills;

Estimation of a cost-benefit ratio based on a concrete business case.

3. Introducing a tool into an organization


Pilot project

Learn more detail about the tool;

Evaluate how the tool fits with existing processes and practices, and determine
what would need to change;

Decide on standard ways of using, managing, storing and maintaining the tool
and the test assets (e.g., deciding on naming conventions for files and tests,
creating libraries and defining the modularity of test suites);


Assess whether the benefits will be achieved at reasonable cost.

3. Introducing a tool into an organization


Success factors for the deployment of the tool

Rolling out the tool to the rest of the organization incrementally;

Adapting and improving processes to fit with the use of the tool;

Providing training and coaching/mentoring for new users;

Defining usage guidelines;

Implementing a way to gather usage information from the actual use;

Monitoring tool use and benefits;

Providing support for the test team for a given tool;

Gathering lessons learned from all teams.

