
Certified Tester

Foundation Level Syllabus

Released Version 2011

International Software Testing Qualifications Board

Copyright Notice
This document may be copied in its entirety, or extracts made, if the source is acknowledged.

Copyright Notice © International Software Testing Qualifications Board (hereinafter called ISTQB®)
ISTQB is a registered trademark of the International Software Testing Qualifications Board.

Copyright © 2011 the authors for the update 2011 (Thomas Müller (chair), Debra Friedenberg, and the ISTQB WG Foundation Level)

Copyright © 2010 the authors for the update 2010 (Thomas Müller (chair), Armin Beer, Martin Klonk, Rahul Verma)

Copyright © 2007 the authors for the update 2007 (Thomas Müller (chair), Dorothy Graham, Debra Friedenberg and Erik van Veenendaal)

Copyright © 2005, the authors (Thomas Müller (chair), Rex Black, Sigrid Eldh, Dorothy Graham, Klaus Olsen, Maaret Pyhäjärvi, Geoff Thompson and Erik van Veenendaal).

All rights reserved.

The authors hereby transfer the copyright to the International Software Testing Qualifications Board (ISTQB). The authors (as current copyright holders) and ISTQB (as the future copyright holder) have agreed to the following conditions of use:

1) Any individual or training company may use this syllabus as the basis for a training course if the authors and the ISTQB are acknowledged as the source and copyright owners of the syllabus and provided that any advertisement of such a training course may mention the syllabus only after submission for official accreditation of the training materials to an ISTQB recognized National Board.
2) Any individual or group of individuals may use this syllabus as the basis for articles, books, or other derivative writings if the authors and the ISTQB are acknowledged as the source and copyright owners of the syllabus.
3) Any ISTQB-recognized National Board may translate this syllabus and license the syllabus (or its translation) to other parties.


Revision History

ISTQB 2011 (effective 1-Apr-2011): Certified Tester Foundation Level Syllabus Maintenance Release – see Appendix E – Release Notes
ISTQB 2010 (effective 30-Mar-2010): Certified Tester Foundation Level Syllabus Maintenance Release – see Appendix E – Release Notes
ISTQB 2007 (01-May-2007): Certified Tester Foundation Level Syllabus Maintenance Release
ISTQB 2005 (01-July-2005): Certified Tester Foundation Level Syllabus
ASQF V2.2 (July-2003): ASQF Syllabus Foundation Level Version 2.2 “Lehrplan Grundlagen des Softwaretestens”
ISEB V2.0 (25-Feb-1999): ISEB Software Testing Foundation Syllabus V2.0, 25 February 1999


Table of Contents

Acknowledgements
Introduction to this Syllabus
   Purpose of this Document
   The Certified Tester Foundation Level in Software Testing
   Learning Objectives/Cognitive Level of Knowledge
   The Examination
   Accreditation
   Level of Detail
   How this Syllabus is Organized
1. Fundamentals of Testing (K2)
   1.1 Why is Testing Necessary (K2)
      1.1.1 Software Systems Context (K1)
      1.1.2 Causes of Software Defects (K2)
      1.1.3 Role of Testing in Software Development, Maintenance and Operations (K2)
      1.1.4 Testing and Quality (K2)
      1.1.5 How Much Testing is Enough? (K2)
   1.2 What is Testing? (K2)
   1.3 Seven Testing Principles (K2)
   1.4 Fundamental Test Process (K1)
      1.4.1 Test Planning and Control (K1)
      1.4.2 Test Analysis and Design (K1)
      1.4.3 Test Implementation and Execution (K1)
      1.4.4 Evaluating Exit Criteria and Reporting (K1)
      1.4.5 Test Closure Activities (K1)
   1.5 The Psychology of Testing (K2)
   1.6 Code of Ethics
2. Testing Throughout the Software Life Cycle (K2)
   2.1 Software Development Models (K2)
      2.1.1 V-model (Sequential Development Model) (K2)
      2.1.2 Iterative-incremental Development Models (K2)
      2.1.3 Testing within a Life Cycle Model (K2)
   2.2 Test Levels (K2)
      2.2.1 Component Testing (K2)
      2.2.2 Integration Testing (K2)
      2.2.3 System Testing (K2)
      2.2.4 Acceptance Testing (K2)
   2.3 Test Types (K2)
      2.3.1 Testing of Function (Functional Testing) (K2)
      2.3.2 Testing of Non-functional Software Characteristics (Non-functional Testing) (K2)
      2.3.3 Testing of Software Structure/Architecture (Structural Testing) (K2)
      2.3.4 Testing Related to Changes: Re-testing and Regression Testing (K2)
   2.4 Maintenance Testing (K2)
3. Static Techniques (K2)
   3.1 Static Techniques and the Test Process (K2)
   3.2 Review Process (K2)
      3.2.1 Activities of a Formal Review (K1)
      3.2.2 Roles and Responsibilities (K1)
      3.2.3 Types of Reviews (K2)
      3.2.4 Success Factors for Reviews (K2)
   3.3 Static Analysis by Tools (K2)
4. Test Design Techniques (K4)
   4.1 The Test Development Process (K3)
   4.2 Categories of Test Design Techniques (K2)

   4.3 Specification-based or Black-box Techniques (K3)
      4.3.1 Equivalence Partitioning (K3)
      4.3.2 Boundary Value Analysis (K3)
      4.3.3 Decision Table Testing (K3)
      4.3.4 State Transition Testing (K3)
      4.3.5 Use Case Testing (K2)
   4.4 Structure-based or White-box Techniques (K4)
      4.4.1 Statement Testing and Coverage (K4)
      4.4.2 Decision Testing and Coverage (K4)
      4.4.3 Other Structure-based Techniques (K1)
   4.5 Experience-based Techniques (K2)
   4.6 Choosing Test Techniques (K2)
5. Test Management (K3)
   5.1 Test Organization (K2)
      5.1.1 Test Organization and Independence (K2)
      5.1.2 Tasks of the Test Leader and Tester (K1)
   5.2 Test Planning and Estimation (K3)
      5.2.1 Test Planning (K2)
      5.2.2 Test Planning Activities (K3)
      5.2.3 Entry Criteria (K2)
      5.2.4 Exit Criteria (K2)
      5.2.5 Test Estimation (K2)
      5.2.6 Test Strategy, Test Approach (K2)
   5.3 Test Progress Monitoring and Control (K2)
      5.3.1 Test Progress Monitoring (K1)
      5.3.2 Test Reporting (K2)
      5.3.3 Test Control (K2)
   5.4 Configuration Management (K2)
   5.5 Risk and Testing (K2)
      5.5.1 Project Risks (K2)
      5.5.2 Product Risks (K2)
   5.6 Incident Management (K3)
6. Tool Support for Testing (K2)
   6.1 Types of Test Tools (K2)
      6.1.1 Tool Support for Testing (K2)
      6.1.2 Test Tool Classification (K2)
      6.1.3 Tool Support for Management of Testing and Tests (K1)
      6.1.4 Tool Support for Static Testing (K1)
      6.1.5 Tool Support for Test Specification (K1)
      6.1.6 Tool Support for Test Execution and Logging (K1)
      6.1.7 Tool Support for Performance and Monitoring (K1)
      6.1.8 Tool Support for Specific Testing Needs (K1)
   6.2 Effective Use of Tools: Potential Benefits and Risks (K2)
      6.2.1 Potential Benefits and Risks of Tool Support for Testing (for all tools) (K2)
      6.2.2 Special Considerations for Some Types of Tools (K1)
   6.3 Introducing a Tool into an Organization (K1)
7. References
   Standards
   Books
8. Appendix A – Syllabus Background
   History of this Document
   Objectives of the Foundation Certificate Qualification
   Objectives of the International Qualification (adapted from ISTQB meeting at Sollentuna, November 2001)
   Entry Requirements for this Qualification

   Background and History of the Foundation Certificate in Software Testing
9. Appendix B – Learning Objectives/Cognitive Level of Knowledge
   Level 1: Remember (K1)
   Level 2: Understand (K2)
   Level 3: Apply (K3)
   Level 4: Analyze (K4)
10. Appendix C – Rules Applied to the ISTQB Foundation Syllabus
   10.1.1 General Rules
   10.1.2 Current Content
   10.1.3 Learning Objectives
   10.1.4 Overall Structure
11. Appendix D – Notice to Training Providers
12. Appendix E – Release Notes
   Release 2010
   Release 2011
13. Index


Acknowledgements

International Software Testing Qualifications Board Working Group Foundation Level (Edition 2011):
Thomas Müller (chair), Debra Friedenberg. The core team thanks the review team (Dan Almog, Armin Beer, Rex Black, Julie Gardiner, Judy McKay, Tuula Pääkkönen, Eric Riou du Cosquier, Hans Schaefer, Stephanie Ulrich, Erik van Veenendaal) and all National Boards for the suggestions for the current version of the syllabus.

International Software Testing Qualifications Board Working Group Foundation Level (Edition 2010):
Thomas Müller (chair), Rahul Verma, Martin Klonk and Armin Beer. The core team thanks the review team (Rex Black, Mette Bruhn-Pederson, Debra Friedenberg, Klaus Olsen, Judy McKay, Tuula Pääkkönen, Meile Posthuma, Hans Schaefer, Stephanie Ulrich, Pete Williams, Erik van Veenendaal) and all National Boards for their suggestions.

International Software Testing Qualifications Board Working Group Foundation Level (Edition 2007):
Thomas Müller (chair), Dorothy Graham, Debra Friedenberg, and Erik van Veenendaal. The core team thanks the review team (Hans Schaefer, Stephanie Ulrich, Meile Posthuma, Anders Pettersson, and Wonil Kwon) and all the National Boards for their suggestions.

International Software Testing Qualifications Board Working Group Foundation Level (Edition 2005):
Thomas Müller (chair), Rex Black, Sigrid Eldh, Dorothy Graham, Klaus Olsen, Maaret Pyhäjärvi, Geoff Thompson and Erik van Veenendaal and the review team and all National Boards for their suggestions.


Introduction to this Syllabus

Purpose of this Document
This syllabus forms the basis for the International Software Testing Qualification at the Foundation Level. The International Software Testing Qualifications Board (ISTQB) provides it to the National Boards for them to accredit the training providers and to derive examination questions in their local language. Training providers will determine appropriate teaching methods and produce courseware for accreditation. The syllabus will help candidates in their preparation for the examination.

Information on the history and background of the syllabus can be found in Appendix A.

The Certified Tester Foundation Level in Software Testing
The Foundation Level qualification is aimed at anyone involved in software testing. This includes people in roles such as testers, test analysts, test engineers, test consultants, test managers, user acceptance testers and software developers. This Foundation Level qualification is also appropriate for anyone who wants a basic understanding of software testing, such as project managers, quality managers, software development managers, business analysts, IT directors and management consultants. Holders of the Foundation Certificate will be able to go on to a higher-level software testing qualification.

Learning Objectives/Cognitive Level of Knowledge
Learning objectives are indicated for each section in this syllabus and classified as follows:
o K1: remember
o K2: understand
o K3: apply
o K4: analyze

Further details and examples of learning objectives are given in Appendix B.

All terms listed under “Terms” just below chapter headings shall be remembered (K1), even if not explicitly mentioned in the learning objectives.

The Examination
The Foundation Level Certificate examination will be based on this syllabus. Answers to examination questions may require the use of material based on more than one section of this syllabus. All sections of the syllabus are examinable.

The format of the examination is multiple choice.

Exams may be taken as part of an accredited training course or taken independently (e.g., at an examination center or in a public exam). Completion of an accredited training course is not a pre-requisite for the exam.

Accreditation
An ISTQB National Board may accredit training providers whose course material follows this syllabus. Training providers should obtain accreditation guidelines from the board or body that performs the accreditation. An accredited course is recognized as conforming to this syllabus, and is allowed to have an ISTQB examination as part of the course.

Further guidance for training providers is given in Appendix D.


Level of Detail
The level of detail in this syllabus allows internationally consistent teaching and examination. In order to achieve this goal, the syllabus consists of:
o General instructional objectives describing the intention of the Foundation Level
o A list of information to teach, including a description, and references to additional sources if required
o Learning objectives for each knowledge area, describing the cognitive learning outcome and mindset to be achieved
o A list of terms that students must be able to recall and understand
o A description of the key concepts to teach, including sources such as accepted literature or standards

The syllabus content is not a description of the entire knowledge area of software testing; it reflects the level of detail to be covered in Foundation Level training courses.

How this Syllabus is Organized
There are six major chapters. The top-level heading for each chapter shows the highest level of learning objectives that is covered within the chapter and specifies the time for the chapter. For example:

2. Testing Throughout the Software Life Cycle (K2)    115 minutes

This heading shows that Chapter 2 has learning objectives of K1 (assumed when a higher level is shown) and K2 (but not K3), and it is intended to take 115 minutes to teach the material in the chapter. Within each chapter there are a number of sections. Each section also has the learning objectives and the amount of time required. Subsections that do not have a time given are included within the time for the section.


1. Fundamentals of Testing (K2)    155 minutes

Learning Objectives for Fundamentals of Testing
The objectives identify what you will be able to do following the completion of each module.

1.1 Why is Testing Necessary? (K2)
LO-1.1.1 Describe, with examples, the way in which a defect in software can cause harm to a person, to the environment or to a company (K2)
LO-1.1.2 Distinguish between the root cause of a defect and its effects (K2)
LO-1.1.3 Give reasons why testing is necessary by giving examples (K2)
LO-1.1.4 Describe why testing is part of quality assurance and give examples of how testing contributes to higher quality (K2)
LO-1.1.5 Explain and compare the terms error, defect, fault, failure, and the corresponding terms mistake and bug, using examples (K2)

1.2 What is Testing? (K2)
LO-1.2.1 Recall the common objectives of testing (K1)
LO-1.2.2 Provide examples for the objectives of testing in different phases of the software life cycle (K2)
LO-1.2.3 Differentiate testing from debugging (K2)

1.3 Seven Testing Principles (K2)
LO-1.3.1 Explain the seven principles in testing (K2)

1.4 Fundamental Test Process (K1)
LO-1.4.1 Recall the five fundamental test activities and respective tasks from planning to closure (K1)

1.5 The Psychology of Testing (K2)
LO-1.5.1 Recall the psychological factors that influence the success of testing (K1)
LO-1.5.2 Contrast the mindset of a tester and of a developer (K2)


1.1 Why is Testing Necessary (K2)    20 minutes

Terms
Bug, defect, error, failure, fault, mistake, quality, risk

1.1.1 Software Systems Context (K1)
Software systems are an integral part of life, from business applications (e.g., banking) to consumer products (e.g., cars). Most people have had an experience with software that did not work as expected. Software that does not work correctly can lead to many problems, including loss of money, time or business reputation, and could even cause injury or death.

1.1.2 Causes of Software Defects (K2)
A human being can make an error (mistake), which produces a defect (fault, bug) in the program code, or in a document. If a defect in code is executed, the system may fail to do what it should do (or do something it shouldn’t), causing a failure. Defects in software, systems or documents may result in failures, but not all defects do so.

Defects occur because human beings are fallible and because there is time pressure, complex code, complexity of infrastructure, changing technologies, and/or many system interactions.

Failures can be caused by environmental conditions as well. For example, radiation, magnetism, electronic fields, and pollution can cause faults in firmware or influence the execution of software by changing the hardware conditions.

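As a minimal, hypothetical illustration of this error-to-defect-to-failure chain (the function and values below are invented for illustration and are not part of the syllabus):

    def average(values):
        # Defect (fault): the programmer's error (mistake) was assuming
        # every list holds exactly 10 items, so the divisor is hard-coded.
        return sum(values) / 10

    # The defect stays dormant as long as the input happens to fit ...
    print(average([2, 4, 6, 8, 10, 12, 14, 16, 18, 20]))  # 11.0, correct by luck

    # ... and causes a failure when the faulty code is executed with
    # other inputs: the system does not do what it should do.
    print(average([2, 4, 6]))  # prints 1.2, but the expected result is 4.0
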
1.1.3 Role of Testing in Software Development, Maintenance and Operations (K2)
Rigorous testing of systems and documentation can help to reduce the risk of problems occurring during operation and contribute to the quality of the software system, if the defects found are corrected before the system is released for operational use.

Software testing may also be required to meet contractual or legal requirements, or industry-specific standards.

1.1.4 Testing and Quality (K2)
With the help of testing, it is possible to measure the quality of software in terms of defects found, for both functional and non-functional software requirements and characteristics (e.g., reliability, usability, efficiency, maintainability and portability). For more information on non-functional testing see Chapter 2; for more information on software characteristics see ‘Software Engineering – Software Product Quality’ (ISO 9126).

Testing can give confidence in the quality of the software if it finds few or no defects. A properly designed test that passes reduces the overall level of risk in a system. When testing does find defects, the quality of the software system increases when those defects are fixed.

Lessons should be learned from previous projects. By understanding the root causes of defects found in other projects, processes can be improved, which in turn should prevent those defects from reoccurring and, as a consequence, improve the quality of future systems. This is an aspect of quality assurance.

Testing should be integrated as one of the quality assurance activities (i.e., alongside development standards, training and defect analysis).

1.1.5 How Much Testing is Enough? (K2)
Deciding how much testing is enough should take account of the level of risk, including technical, safety, and business risks, and project constraints such as time and budget. Risk is discussed further in Chapter 5.

Testing should provide sufficient information to stakeholders to make informed decisions about the release of the software or system being tested, for the next development step or handover to customers.


1.2 What is Testing? (K2)    30 minutes

Terms
Debugging, requirement, review, test case, testing, test objective

Background
A common perception of testing is that it only consists of running tests, i.e., executing the software. This is part of testing, but not all of the testing activities.

Test activities exist before and after test execution. These activities include planning and control, choosing test conditions, designing and executing test cases, checking results, evaluating exit criteria, reporting on the testing process and system under test, and finalizing or completing closure activities after a test phase has been completed. Testing also includes reviewing documents (including source code) and conducting static analysis.

Both dynamic testing and static testing can be used as a means for achieving similar objectives, and will provide information that can be used to improve both the system being tested and the development and testing processes.

Testing can have the following objectives:
o Finding defects
o Gaining confidence about the level of quality
o Providing information for decision-making
o Preventing defects

The thought process and activities involved in designing tests early in the life cycle (verifying the test basis via test design) can help to prevent defects from being introduced into code. Reviews of documents (e.g., requirements) and the identification and resolution of issues also help to prevent defects appearing in the code.

Different viewpoints in testing take different objectives into account. For example, in development testing (e.g., component, integration and system testing), the main objective may be to cause as many failures as possible so that defects in the software are identified and can be fixed. In acceptance testing, the main objective may be to confirm that the system works as expected, to gain confidence that it has met the requirements. In some cases the main objective of testing may be to assess the quality of the software (with no intention of fixing defects), to give information to stakeholders of the risk of releasing the system at a given time. Maintenance testing often includes testing that no new defects have been introduced during development of the changes. During operational testing, the main objective may be to assess system characteristics such as reliability or availability.

Debugging and testing are different. Dynamic testing can show failures that are caused by defects. Debugging is the development activity that finds, analyzes and removes the cause of the failure. Subsequent re-testing by a tester ensures that the fix does indeed resolve the failure. Responsibility for each activity usually differs: testers test and developers debug.

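A minimal, hypothetical sketch of this division of work using Python’s unittest module (the leap-year function and test are invented for illustration):

    import unittest

    def leap_year(year):
        # Fixed version: debugging traced the failure to a missing century
        # rule (the defective version checked only year % 4 == 0).
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    class LeapYearTests(unittest.TestCase):
        def test_century_years(self):
            # Dynamic testing exposed the failure (1900 wrongly reported
            # as a leap year); running the test again after the fix is
            # the tester's confirmation that the fix resolves the failure.
            self.assertFalse(leap_year(1900))
            self.assertTrue(leap_year(2000))

    if __name__ == "__main__":
        unittest.main()
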
The process of testing and the testing activities are explained in Section 1.4.


1.3 Seven Testing Principles (K2)    35 minutes

Terms
Exhaustive testing

Principles
A number of testing principles have been suggested over the past 40 years and offer general guidelines common for all testing.

Principle 1 – Testing shows presence of defects
Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.

Principle 2 – Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts.

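A rough back-of-the-envelope calculation (hypothetical numbers, not from the syllabus) shows how quickly exhaustive testing becomes infeasible for even one small input field:

    # One 10-character field of printable ASCII characters (95 symbols).
    combinations = 95 ** 10
    print(f"{combinations:.3e} possible inputs")    # ~5.987e+19

    # Even at a million test executions per second, running all of them
    # would take on the order of millions of years.
    years = combinations / 1_000_000 / (3600 * 24 * 365)
    print(f"about {years:,.0f} years to test exhaustively")
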
Principle 3 – Early testing
To find defects early, testing activities shall be started as early as possible in the software or system development life cycle, and shall be focused on defined objectives.

Principle 4 – Defect clustering
Testing effort shall be focused proportionally to the expected and later observed defect density of modules. A small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures.

Principle 5 – Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome this “pesticide paradox”, test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to find potentially more defects.

Principle 6 – Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.

Principle 7 – Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the users’ needs and expectations.


1.4 Fundamental Test Process (K1)    35 minutes

Terms
Confirmation testing, re-testing, exit criteria, incident, regression testing, test basis, test condition, test coverage, test data, test execution, test log, test plan, test procedure, test policy, test suite, test summary report, testware

Background
The most visible part of testing is test execution. But to be effective and efficient, test plans should also include time to be spent on planning the tests, designing test cases, preparing for execution and evaluating results.

The fundamental test process consists of the following main activities:
o Test planning and control
o Test analysis and design
o Test implementation and execution
o Evaluating exit criteria and reporting
o Test closure activities

Although logically sequential, the activities in the process may overlap or take place concurrently. Tailoring these main activities within the context of the system and the project is usually required.

1.4.1 Test Planning and Control (K1)
Test planning is the activity of defining the objectives of testing and the specification of test activities in order to meet the objectives and mission.

Test control is the ongoing activity of comparing actual progress against the plan, and reporting the status, including deviations from the plan. It involves taking actions necessary to meet the mission and objectives of the project. In order to control testing, the testing activities should be monitored throughout the project. Test planning takes into account the feedback from monitoring and control activities.

Test planning and control tasks are defined in Chapter 5 of this syllabus.

1.4.2 Test Analysis and Design (K1)
Test analysis and design is the activity during which general testing objectives are transformed into tangible test conditions and test cases.

The test analysis and design activity has the following major tasks:
o Reviewing the test basis (such as requirements, software integrity level [1] (risk level), risk analysis reports, architecture, design, interface specifications)
o Evaluating testability of the test basis and test objects
o Identifying and prioritizing test conditions based on analysis of test items, the specification, behavior and structure of the software
o Designing and prioritizing high level test cases
o Identifying necessary test data to support the test conditions and test cases
o Designing the test environment setup and identifying any required infrastructure and tools
o Creating bi-directional traceability between test basis and test cases (a sketch of this task follows below)

[1] The degree to which software complies or must comply with a set of stakeholder-selected software and/or software-based system characteristics (e.g., software complexity, risk assessment, safety level, security level, desired performance, reliability, or cost) which are defined to reflect the importance of the software to its stakeholders.

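A hypothetical sketch of the traceability task mentioned above: general objectives become tangible test conditions and high-level test cases, each linked back to the test basis so coverage can be checked in both directions (all identifiers and the requirement text are invented):

    # Test basis item (e.g., taken from a requirements specification).
    requirement = {"id": "REQ-42",
                   "text": "Withdrawals above the daily limit shall be rejected"}

    # Test conditions derived from the requirement, tracing back to it.
    test_conditions = [
        {"id": "COND-1", "traces_to": "REQ-42",
         "condition": "withdrawal exactly at the daily limit"},
        {"id": "COND-2", "traces_to": "REQ-42",
         "condition": "withdrawal above the daily limit"},
    ]

    # High-level test cases, each tracing back to one condition.
    test_cases = [
        {"id": "CASE-1", "traces_to": "COND-1", "expected": "withdrawal accepted"},
        {"id": "CASE-2", "traces_to": "COND-2", "expected": "withdrawal rejected"},
    ]

    # Forward traceability: which high-level cases exercise REQ-42?
    conditions_for_req = {c["id"] for c in test_conditions if c["traces_to"] == "REQ-42"}
    print([t["id"] for t in test_cases if t["traces_to"] in conditions_for_req])
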
1.4.3 Test Implementation and Execution (K1)
Test implementation and execution is the activity where test procedures or scripts are specified by combining the test cases in a particular order and including any other information needed for test execution, the environment is set up and the tests are run.

Test implementation and execution has the following major tasks:
o Finalizing, implementing and prioritizing test cases (including the identification of test data)
o Developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts
o Creating test suites from the test procedures for efficient test execution (a brief sketch follows this list)
o Verifying that the test environment has been set up correctly
o Verifying and updating bi-directional traceability between the test basis and test cases
o Executing test procedures either manually or by using test execution tools, according to the planned sequence
o Logging the outcome of test execution and recording the identities and versions of the software under test, test tools and testware
o Comparing actual results with expected results
o Reporting discrepancies as incidents and analyzing them in order to establish their cause (e.g., a defect in the code, in specified test data, in the test document, or a mistake in the way the test was executed)
o Repeating test activities as a result of action taken for each discrepancy, for example, re-execution of a test that previously failed in order to confirm a fix (confirmation testing), execution of a corrected test and/or execution of tests in order to ensure that defects have not been introduced in unchanged areas of the software or that defect fixing did not uncover other defects (regression testing)

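A minimal, hypothetical sketch of several of these tasks with Python’s unittest module: test cases are combined in a particular order, a suite is created, and outcomes are logged together with the identity of the software under test (the discount function and version string are invented):

    import unittest

    SOFTWARE_UNDER_TEST = "pricing-service 2.3.1"  # recorded in the test log

    def discount(price, customer_type):
        # Assumed component under test.
        return price * 0.9 if customer_type == "member" else price

    class DiscountProcedure(unittest.TestCase):
        # Two test cases combined into one procedure.
        def test_member_price(self):
            self.assertAlmostEqual(discount(100, "member"), 90.0)

        def test_guest_price(self):
            self.assertAlmostEqual(discount(100, "guest"), 100.0)

    if __name__ == "__main__":
        print("Testing:", SOFTWARE_UNDER_TEST)
        # Create a test suite from the procedure and execute it; the runner
        # logs each outcome, and the assertions compare actual results
        # with expected results.
        suite = unittest.TestLoader().loadTestsFromTestCase(DiscountProcedure)
        unittest.TextTestRunner(verbosity=2).run(suite)
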
1.4.4 Evaluating Exit Criteria and Reporting (K1)
Evaluating exit criteria is the activity where test execution is assessed against the defined objectives. This should be done for each test level (see Section 2.2).

Evaluating exit criteria has the following major tasks (a small sketch follows this list):
o Checking test logs against the exit criteria specified in test planning
o Assessing if more tests are needed or if the exit criteria specified should be changed
o Writing a test summary report for stakeholders

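A minimal, hypothetical sketch of checking measured figures against exit criteria (the thresholds and figures are invented):

    # Exit criteria as specified during test planning.
    exit_criteria = {"statement_coverage": 0.80, "open_critical_incidents": 0}

    # Figures taken from the test logs of the current test level.
    measured = {"statement_coverage": 0.76, "open_critical_incidents": 0}

    criteria_met = (
        measured["statement_coverage"] >= exit_criteria["statement_coverage"]
        and measured["open_critical_incidents"] <= exit_criteria["open_critical_incidents"]
    )
    print("Exit criteria met" if criteria_met else
          "More tests needed, or the exit criteria should be revisited")
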
1.4.5 Test Closure Activities (K1)
Test closure activities collect data from completed test activities to consolidate experience, testware, facts and numbers. Test closure activities occur at project milestones such as when a software system is released, a test project is completed (or cancelled), a milestone has been achieved, or a maintenance release has been completed.


Test closure activities include the following major tasks:
o Checking which planned deliverables have been delivered
o Closing incident reports or raising change records for any that remain open
o Documenting the acceptance of the system
o Finalizing and archiving testware, the test environment and the test infrastructure for later reuse
o Handing over the testware to the maintenance organization
o Analyzing lessons learned to determine changes needed for future releases and projects
o Using the information gathered to improve test maturity


1.5 The Psychology of Testing (K2)    25 minutes

Terms
Error guessing, independence

Background
The mindset to be used while testing and reviewing is different from that used while developing software. With the right mindset developers are able to test their own code, but separation of this responsibility to a tester is typically done to help focus effort and provide additional benefits, such as an independent view by trained and professional testing resources. Independent testing may be carried out at any level of testing.

A certain degree of independence (avoiding the author bias) often makes the tester more effective at finding defects and failures. Independence is not, however, a replacement for familiarity, and developers can efficiently find many defects in their own code. Several levels of independence can be defined as shown here from low to high:
o Tests designed by the person(s) who wrote the software under test (low level of independence)
o Tests designed by another person(s) (e.g., from the development team)
o Tests designed by a person(s) from a different organizational group (e.g., an independent test team) or test specialists (e.g., usability or performance test specialists)
o Tests designed by a person(s) from a different organization or company (i.e., outsourcing or certification by an external body)

People and projects are driven by objectives. People tend to align their plans with the objectives set by management and other stakeholders, for example, to find defects or to confirm that software meets its objectives. Therefore, it is important to clearly state the objectives of testing.

Identifying failures during testing may be perceived as criticism against the product and against the author. As a result, testing is often seen as a destructive activity, even though it is very constructive in the management of product risks. Looking for failures in a system requires curiosity, professional pessimism, a critical eye, attention to detail, good communication with development peers, and experience on which to base error guessing.

If errors, defects or failures are communicated in a constructive way, bad feelings between the testers and the analysts, designers and developers can be avoided. This applies to defects found during reviews as well as in testing.

The tester and test leader need good interpersonal skills to communicate factual information about defects, progress and risks in a constructive way. For the author of the software or document, defect information can help them improve their skills. Defects found and fixed during testing will save time and money later, and reduce risks.

Communication problems may occur, particularly if testers are seen only as messengers of unwanted news about defects. However, there are several ways to improve communication and relationships between testers and others:


o Start with collaboration rather than battles – remind everyone of the common goal of better quality systems
o Communicate findings on the product in a neutral, fact-focused way without criticizing the person who created it, for example, write objective and factual incident reports and review findings
o Try to understand how the other person feels and why they react as they do
o Confirm that the other person has understood what you have said and vice versa


1.6 Code of Ethics    10 minutes

Involvement in software testing enables individuals to learn confidential and privileged information. A code of ethics is necessary, among other reasons to ensure that the information is not put to inappropriate use. Recognizing the ACM and IEEE code of ethics for engineers, the ISTQB states the following code of ethics:

PUBLIC - Certified software testers shall act consistently with the public interest

CLIENT AND EMPLOYER - Certified software testers shall act in a manner that is in the best interests of their client and employer, consistent with the public interest

PRODUCT - Certified software testers shall ensure that the deliverables they provide (on the products and systems they test) meet the highest professional standards possible

JUDGMENT - Certified software testers shall maintain integrity and independence in their professional judgment

MANAGEMENT - Certified software test managers and leaders shall subscribe to and promote an ethical approach to the management of software testing

PROFESSION - Certified software testers shall advance the integrity and reputation of the profession consistent with the public interest

COLLEAGUES - Certified software testers shall be fair to and supportive of their colleagues, and promote cooperation with software developers

SELF - Certified software testers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession

References
1.1.5 Black, 2001, Kaner, 2002
1.2 Beizer, 1990, Black, 2001, Myers, 1979
1.3 Beizer, 1990, Hetzel, 1988, Myers, 1979
1.4 Hetzel, 1988
1.4.5 Black, 2001, Craig, 2002
1.5 Black, 2001, Hetzel, 1988


2. Testing Throughout the Software Life Cycle (K2)    115 minutes

Learning Objectives for Testing Throughout the Software Life Cycle
The objectives identify what you will be able to do following the completion of each module.

2.1 Software Development Models (K2)
LO-2.1.1 Explain the relationship between development, test activities and work products in the development life cycle, by giving examples using project and product types (K2)
LO-2.1.2 Recognize the fact that software development models must be adapted to the context of project and product characteristics (K1)
LO-2.1.3 Recall characteristics of good testing that are applicable to any life cycle model (K1)

2.2 Test Levels (K2)
LO-2.2.1 Compare the different levels of testing: major objectives, typical objects of testing, typical targets of testing (e.g., functional or structural) and related work products, people who test, types of defects and failures to be identified (K2)

2.3 Test Types (K2)
LO-2.3.1 Compare four software test types (functional, non-functional, structural and change-related) by example (K2)
LO-2.3.2 Recognize that functional and structural tests occur at any test level (K1)
LO-2.3.3 Identify and describe non-functional test types based on non-functional requirements (K2)
LO-2.3.4 Identify and describe test types based on the analysis of a software system’s structure or architecture (K2)
LO-2.3.5 Describe the purpose of confirmation testing and regression testing (K2)

2.4 Maintenance Testing (K2)
LO-2.4.1 Compare maintenance testing (testing an existing system) to testing a new application with respect to test types, triggers for testing and amount of testing (K2)
LO-2.4.2 Recognize indicators for maintenance testing (modification, migration and retirement) (K1)
LO-2.4.3 Describe the role of regression testing and impact analysis in maintenance (K2)


2.1 Software Development Models (K2)    20 minutes

Terms
Commercial Off-The-Shelf (COTS), iterative-incremental development model, validation, verification, V-model

Background
Testing does not exist in isolation; test activities are related to software development activities. Different development life cycle models need different approaches to testing.

2.1.1 V-model (Sequential Development Model) (K2)
Although variants of the V-model exist, a common type of V-model uses four test levels, corresponding to the four development levels.

The four levels used in this syllabus are:
o Component (unit) testing
o Integration testing
o System testing
o Acceptance testing

In practice, a V-model may have more, fewer or different levels of development and testing, depending on the project and the software product. For example, there may be component integration testing after component testing, and system integration testing after system testing.

Software work products (such as business scenarios or use cases, requirements specifications, design documents and code) produced during development are often the basis of testing in one or more test levels. References for generic work products include Capability Maturity Model Integration (CMMI) or ‘Software life cycle processes’ (IEEE/IEC 12207). Verification and validation (and early test design) can be carried out during the development of the software work products.

2.1.2 Iterative-incremental Development Models (K2)
Iterative-incremental development is the process of establishing requirements, designing, building and testing a system in a series of short development cycles. Examples are: prototyping, Rapid Application Development (RAD), Rational Unified Process (RUP) and agile development models. A system that is produced using these models may be tested at several test levels during each iteration. An increment, added to others developed previously, forms a growing partial system, which should also be tested. Regression testing is increasingly important on all iterations after the first one. Verification and validation can be carried out on each increment.

2.1.3 Testing within a Life Cycle Model (K2)
In any life cycle model, there are several characteristics of good testing:
o For every development activity there is a corresponding testing activity
o Each test level has test objectives specific to that level
o The analysis and design of tests for a given test level should begin during the corresponding development activity
o Testers should be involved in reviewing documents as soon as drafts are available in the development life cycle

Test levels can be combined or reorganized depending on the nature of the project or the system architecture. For example, for the integration of a Commercial Off-The-Shelf (COTS) software product into a system, the purchaser may perform integration testing at the system level (e.g., integration to the infrastructure and other systems, or system deployment) and acceptance testing (functional and/or non-functional, and user and/or operational testing).


2.2 Test Levels (K2)  40 minutes

Terms
Alpha testing, beta testing, component testing, driver, field testing, functional requirement, integration, integration testing, non-functional requirement, robustness testing, stub, system testing, test environment, test level, test-driven development, user acceptance testing

Background
For each of the test levels, the following can be identified: the generic objectives, the work product(s) being referenced for deriving test cases (i.e., the test basis), the test object (i.e., what is being tested), typical defects and failures to be found, test harness requirements and tool support, and specific approaches and responsibilities.

Testing a system’s configuration data shall be considered during test planning.

2.2.1 Component Testing (K2)
Test basis:
o Component requirements
o Detailed design
o Code

Typical test objects:
o Components
o Programs
o Data conversion / migration programs
o Database modules

Component testing (also known as unit, module or program testing) searches for defects in, and verifies the functioning of, software modules, programs, objects, classes, etc., that are separately testable. It may be done in isolation from the rest of the system, depending on the context of the development life cycle and the system. Stubs, drivers and simulators may be used.

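For illustration only (this example is not part of the syllabus): a stub replaces a dependency of the component under test, while the test itself acts as the driver. All names here (PriceCalculator, StubRatesClient) are hypothetical, and Python with the standard unittest framework is assumed.

    import unittest

    class PriceCalculator:
        """Hypothetical component under test; its rates client is injected
        so that a stub can stand in for the real service."""
        def __init__(self, rates_client):
            self.rates_client = rates_client

        def price_in(self, amount_eur, currency):
            rate = self.rates_client.get_rate("EUR", currency)
            return round(amount_eur * rate, 2)

    class StubRatesClient:
        """Stub: returns a fixed, predictable rate instead of calling a
        real exchange-rate service."""
        def get_rate(self, src, dst):
            return 1.10

    class PriceCalculatorTest(unittest.TestCase):
        # The test case acts as the driver, exercising the component in
        # isolation from the rest of the system.
        def test_converts_using_stubbed_rate(self):
            calc = PriceCalculator(StubRatesClient())
            self.assertEqual(calc.price_in(100, "USD"), 110.0)

    if __name__ == "__main__":
        unittest.main()
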
Component testing may include testing of functionality and specific non-functional characteristics, such as resource-behavior (e.g., searching for memory leaks) or robustness testing, as well as structural testing (e.g., decision coverage). Test cases are derived from work products such as a specification of the component, the software design or the data model.

Typically, component testing occurs with access to the code being tested and with the support of a development environment, such as a unit test framework or debugging tool. In practice, component testing usually involves the programmer who wrote the code. Defects are typically fixed as soon as they are found, without formally managing these defects.

One approach to component testing is to prepare and automate test cases before coding. This is called a test-first approach or test-driven development. This approach is highly iterative and is based on cycles of developing test cases, then building and integrating small pieces of code, and executing the component tests, correcting any issues and iterating until they pass.

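As a minimal sketch of this test-first cycle (again illustrative, not prescribed by the syllabus), the component tests below are written before the is_leap function exists; the smallest piece of code that makes them pass is then written and the tests are re-run:

    import unittest

    class LeapYearTest(unittest.TestCase):
        # Step 1: the test cases are prepared and automated before coding.
        def test_century_years(self):
            self.assertTrue(is_leap(2000))
            self.assertFalse(is_leap(1900))

        def test_ordinary_years(self):
            self.assertTrue(is_leap(2024))
            self.assertFalse(is_leap(2023))

    # Step 2: build the small piece of code that makes the tests pass,
    # then execute the component tests and iterate until they pass.
    def is_leap(year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    if __name__ == "__main__":
        unittest.main()
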

2.2.2 Integration Testing (K2)
Test basis:
o Software and system design
o Architecture
o Workflows
o Use cases

Typical test objects:
o Subsystems
o Database implementation
o Infrastructure
o Interfaces
o System configuration and configuration data

Integration testing tests interfaces between components, interactions with different parts of a system, such as the operating system, file system and hardware, and interfaces between systems.

There may be more than one level of integration testing and it may be carried out on test objects of varying size as follows:
1. Component integration testing tests the interactions between software components and is done after component testing
2. System integration testing tests the interactions between different systems or between hardware and software and may be done after system testing. In this case, the developing organization may control only one side of the interface, which might be considered a risk. Business processes implemented as workflows may involve a series of systems. Cross-platform issues may be significant.

The greater the scope of integration, the more difficult it becomes to isolate defects to a specific component or system, which may lead to increased risk and additional time for troubleshooting.

Systematic integration strategies may be based on the system architecture (such as top-down and bottom-up), functional tasks, transaction processing sequences, or some other aspect of the system or components. In order to ease fault isolation and detect defects early, integration should normally be incremental rather than “big bang”.

Testing of specific non-functional characteristics (e.g., performance) may be included in integration testing as well as functional testing.

At each stage of integration, testers concentrate solely on the integration itself. For example, if they are integrating module A with module B they are interested in testing the communication between the modules, not the functionality of the individual modules, as that was done during component testing. Both functional and structural approaches may be used.

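A sketch of what testing the communication between the modules can look like in practice; the two modules and their functions are invented for illustration:

    import unittest

    # Module A: produces an order record.
    def create_order(item, quantity):
        return {"item": item, "quantity": quantity}

    # Module B: consumes the record produced by module A.
    def invoice_total(order, unit_price):
        return order["quantity"] * unit_price

    class OrderToInvoiceIntegrationTest(unittest.TestCase):
        def test_module_b_accepts_what_module_a_produces(self):
            # The focus is the interface between A and B, not the
            # internal logic of either module (that was component tested).
            order = create_order("widget", 3)
            self.assertEqual(invoice_total(order, unit_price=2.5), 7.5)

    if __name__ == "__main__":
        unittest.main()
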
Ideally, testers should understand the architecture and influence integration planning. If integration tests are planned before components or systems are built, those components can be built in the order required for most efficient testing.


2.2.3 System Testing (K2)
Test basis:
o System and software requirement specification
o Use cases
o Functional specification
o Risk analysis reports

Typical test objects:
o System, user and operation manuals
o System configuration and configuration data

System testing is concerned with the behavior of a whole system/product. The testing scope shall be clearly addressed in the Master and/or Level Test Plan for that test level.

In system testing, the test environment should correspond to the final target or production environment as much as possible in order to minimize the risk of environment-specific failures not being found in testing.

System testing may include tests based on risks and/or on requirements specifications, business processes, use cases, or other high level text descriptions or models of system behavior, interactions with the operating system, and system resources.

System testing should investigate functional and non-functional requirements of the system, and data quality characteristics. Testers also need to deal with incomplete or undocumented requirements. System testing of functional requirements starts by using the most appropriate specification-based (black-box) techniques for the aspect of the system to be tested. For example, a decision table may be created for combinations of effects described in business rules. Structure-based techniques (white-box) may then be used to assess the thoroughness of the testing with respect to a structural element, such as menu structure or web page navigation (see Chapter 4).

An independent test team often carries out system testing.

2.2.4 Acceptance Testing (K2)
Test basis:
o User requirements
o System requirements
o Use cases
o Business processes
o Risk analysis reports

Typical test objects:
o Business processes on fully integrated system
o Operational and maintenance processes
o User procedures
o Forms
o Reports
o Configuration data

Acceptance testing is often the responsibility of the customers or users of a system; other stakeholders may be involved as well.

The goal in acceptance testing is to establish confidence in the system, parts of the system or specific non-functional characteristics of the system. Finding defects is not the main focus in acceptance testing. Acceptance testing may assess the system’s readiness for deployment and use, although it is not necessarily the final level of testing. For example, a large-scale system integration test may come after the acceptance test for a system.

Acceptance testing may occur at various times in the life cycle, for example:
o A COTS software product may be acceptance tested when it is installed or integrated
o Acceptance testing of the usability of a component may be done during component testing
o Acceptance testing of a new functional enhancement may come before system testing

Typical forms of acceptance testing include the following:

User acceptance testing
Typically verifies the fitness for use of the system by business users.

Operational (acceptance) testing
The acceptance of the system by the system administrators, including:
o Testing of backup/restore
o Disaster recovery
o User management
o Maintenance tasks
o Data load and migration tasks
o Periodic checks of security vulnerabilities

Contract and regulation acceptance testing
Contract acceptance testing is performed against a contract’s acceptance criteria for producing custom-developed software. Acceptance criteria should be defined when the parties agree to the contract. Regulation acceptance testing is performed against any regulations that must be adhered to, such as government, legal or safety regulations.

Alpha and beta (or field) testing
Developers of market, or COTS, software often want to get feedback from potential or existing customers in their market before the software product is put up for sale commercially. Alpha testing is performed at the developing organization’s site but not by the developing team. Beta testing, or field-testing, is performed by customers or potential customers at their own locations.

Organizations may use other terms as well, such as factory acceptance testing and site acceptance testing for systems that are tested before and after being moved to a customer’s site.

2.3 Test Types (K2)  40 minutes

Terms
Black-box testing, code coverage, functional testing, interoperability testing, load testing, maintainability testing, performance testing, portability testing, reliability testing, security testing, stress testing, structural testing, usability testing, white-box testing

Background
A group of test activities can be aimed at verifying the software system (or a part of a system) based on a specific reason or target for testing.

A test type is focused on a particular test objective, which could be any of the following:
o A function to be performed by the software
o A non-functional quality characteristic, such as reliability or usability
o The structure or architecture of the software or system
o Change related, i.e., confirming that defects have been fixed (confirmation testing) and looking for unintended changes (regression testing)

A model of the software may be developed and/or used in structural testing (e.g., a control flow model or menu structure model), non-functional testing (e.g., performance model, usability model, security threat modeling), and functional testing (e.g., a process flow model, a state transition model or a plain language specification).

2.3.1 Testing of Function (Functional Testing) (K2)
The functions that a system, subsystem or component are to perform may be described in work products such as a requirements specification, use cases, or a functional specification, or they may be undocumented. The functions are “what” the system does.
Functionnal tests are based on funnctions and features


f (desscribed in do
ocuments or uunderstood by b the
testers) and
a their inteeroperability with specificc systems, an
nd may be pe erformed at a
all test levels
s (e.g.,
tests for componentss may be bassed on a com mponent specification).

Specifica
ation-based techniques
t m be used to derive tes
may st conditionss and test casses from the
functiona
ality of the so
oftware or syystem (see Chapter
C 4). Fu
unctional tessting conside
ers the externnal
behaviorr of the softw
ware (black-bbox testing).

A type of functional testing, security testing, investigates the functions (e.g., a firewall) relating to detection of threats, such as viruses, from malicious outsiders. Another type of functional testing, interoperability testing, evaluates the capability of the software product to interact with one or more specified components or systems.

2.3.2 Testing of Non-functional Software Characteristics (Non-functional Testing) (K2)
Non-functional testing includes, but is not limited to, performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing and portability testing. It is the testing of “how” the system works.

Non-functional testing may be performed at all test levels. The term non-functional testing describes the tests required to measure characteristics of systems and software that can be quantified on a varying scale, such as response times for performance testing. These tests can be referenced to a quality model such as the one defined in ‘Software Engineering – Software Product Quality’ (ISO 9126). Non-functional testing considers the external behavior of the software and in most cases uses black-box test design techniques to accomplish that.

2.3.3 Testing of Software Structure/Architecture (Structural Testing) (K2)
Structural (white-box) testing may be performed at all test levels. Structural techniques are best used after specification-based techniques, in order to help measure the thoroughness of testing through assessment of coverage of a type of structure.

Coverage is the extent that a structure has been exercised by a test suite, expressed as a percentage of the items being covered. If coverage is not 100%, then more tests may be designed to test those items that were missed to increase coverage. Coverage techniques are covered in Chapter 4.

At all test levels, but especially in component testing and component integration testing, tools can be used to measure the code coverage of elements, such as statements or decisions. Structural testing may be based on the architecture of the system, such as a calling hierarchy.

Structural testing approaches can also be applied at system, system integration or acceptance testing levels (e.g., to business models or menu structures).

2.3.4 Testing Related to Changes: Re-testing and Regression Testing (K2)

After a defect is detected and fixed, the software should be re-tested to confirm that the original defect has been successfully removed. This is called confirmation testing (re-testing). Debugging (locating and fixing a defect) is a development activity, not a testing activity.

Regression testing is the repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the change(s). These defects may be either in the software being tested, or in another related or unrelated software component. It is performed when the software, or its environment, is changed. The extent of regression testing is based on the risk of not finding defects in software that was working previously.

Tests should be repeatable if they are to be used for confirmation testing and to assist regression testing.

Regression testing may be performed at all test levels, and includes functional, non-functional and structural testing. Regression test suites are run many times and generally evolve slowly, so regression testing is a strong candidate for automation.

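One common way to automate such a suite (an assumption for illustration, not a requirement of the syllabus) is to mark regression tests so the whole suite can be re-run after every change, e.g., with pytest markers:

    import pytest

    def discount(price, percent):
        # Hypothetical function that the regression suite protects.
        return price - price * percent // 100

    # The marker would be registered in pytest.ini; the regression suite
    # can then be re-run after each modification with:
    #     pytest -m regression
    @pytest.mark.regression
    def test_discount_behavior_unchanged():
        assert discount(200, 10) == 180
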

2.4 Maintenance Testing (K2)  15 minutes

Terms
Impact analysis, maintenance testing

Background
Once deployed, a software system is often in service for years or decades. During this time the system, its configuration data, or its environment are often corrected, changed or extended. The planning of releases in advance is crucial for successful maintenance testing. A distinction has to be made between planned releases and hot fixes. Maintenance testing is done on an existing operational system, and is triggered by modifications, migration, or retirement of the software or system.

Modifications include planned enhancement changes (e.g., release-based), corrective and emergency changes, and changes of environment, such as planned operating system or database upgrades, planned upgrade of Commercial-Off-The-Shelf software, or patches to correct newly exposed or discovered vulnerabilities of the operating system.

Maintenance testing for migration (e.g., from one platform to another) should include operational tests of the new environment as well as of the changed software. Migration testing (conversion testing) is also needed when data from another application will be migrated into the system being maintained.

Maintenance testing for the retirement of a system may include the testing of data migration or archiving if long data-retention periods are required.

In addition to testing what has been changed, maintenance testing includes regression testing to parts of the system that have not been changed. The scope of maintenance testing is related to the risk of the change, the size of the existing system and to the size of the change. Depending on the changes, maintenance testing may be done at any or all test levels and for any or all test types. Determining how the existing system may be affected by changes is called impact analysis, and is used to help decide how much regression testing to do. The impact analysis may be used to determine the regression test suite.
Maintena o of date orr missing, or testers with


ance testing can be difficcult if specificcations are out
domain knowledge
k a not availa
are able.

References
2.1.3 CMMI, Craig, 2002, Hetzel, 1988, IEEE 12207
2.2 Hetzel, 1988
2.2.4 Copeland, 2004, Myers, 1979
2.3.1 Beizer, 1990, Black, 2001, Copeland, 2004
2.3.2 Black, 2001, ISO 9126
2.3.3 Beizer, 1990, Copeland, 2004, Hetzel, 1988
2.3.4 Hetzel, 1988, IEEE STD 829-1998
2.4 Black, 2001, Craig, 2002, Hetzel, 1988, IEEE STD 829-1998


3. Static Techniques (K2)  60 minutes

Learning Objectives for Static Techniques
The objectives identify what you will be able to do following the completion of each module.

3.1 Static Techniques and the Test Process (K2)
LO-3.1.1 Recognize software work products that can be examined by the different static techniques (K1)
LO-3.1.2 Describe the importance and value of considering static techniques for the assessment of software work products (K2)
LO-3.1.3 Explain the difference between static and dynamic techniques, considering objectives, types of defects to be identified, and the role of these techniques within the software life cycle (K2)

3.2 Review Process (K2)
LO-3.2.1 Recall the activities, roles and responsibilities of a typical formal review (K1)
LO-3.2.2 Explain the differences between different types of reviews: informal review, technical review, walkthrough and inspection (K2)
LO-3.2.3 Explain the factors for successful performance of reviews (K2)

3.3 Static Analysis by Tools (K2)
LO-3.3.1 Recall typical defects and errors identified by static analysis and compare them to reviews and dynamic testing (K1)
LO-3.3.2 Describe, using examples, the typical benefits of static analysis (K2)
LO-3.3.3 List typical code and design defects that may be identified by static analysis tools (K1)


3.1 Static Techniques and the Test Process (K2)  15 minutes

Terms
Dynamic testing, static testing

Background
Unlike dynamic testing, which requires the execution of software, static testing techniques rely on the manual examination (reviews) and automated analysis (static analysis) of the code or other project documentation without the execution of the code.

Reviews are a way of testing software work products (including code) and can be performed well before dynamic test execution. Defects detected during reviews early in the life cycle (e.g., defects found in requirements) are often much cheaper to remove than those detected by running tests on the executing code.

A review could be done entirely as a manual activity, but there is also tool support. The main manual activity is to examine a work product and make comments about it. Any software work product can be reviewed, including requirements specifications, design specifications, code, test plans, test specifications, test cases, test scripts, user guides or web pages.

Benefits of reviews include early defect detection and correction, development productivity improvements, reduced development timescales, reduced testing cost and time, lifetime cost reductions, fewer defects and improved communication. Reviews can find omissions, for example, in requirements, which are unlikely to be found in dynamic testing.

Reviews, static analysis and dynamic testing have the same objective – identifying defects. They are complementary; the different techniques can find different types of defects effectively and efficiently. Compared to dynamic testing, static techniques find causes of failures (defects) rather than the failures themselves.

Typical defects that are easier to find in reviews than in dynamic testing include: deviations from standards, requirement defects, design defects, insufficient maintainability and incorrect interface specifications.


3.2 Review Process (K2)  25 minutes

Terms
Entry criteria, formal review, informal review, inspection, metric, moderator, peer review, reviewer, scribe, technical review, walkthrough

Background
The different types of reviews vary from informal, characterized by no written instructions for reviewers, to systematic, characterized by team participation, documented results of the review, and documented procedures for conducting the review. The formality of a review process is related to factors such as the maturity of the development process, any legal or regulatory requirements or the need for an audit trail.

The way a review is carried out depends on the agreed objectives of the review (e.g., find defects, gain understanding, educate testers and new team members, or discussion and decision by consensus).

3.2.1 Activities of a Formal Review (K1)
A typical formal review has the following main activities:

1. Planning
• Defining the review criteria
• Selecting the personnel
• Allocating roles
• Defining the entry and exit criteria for more formal review types (e.g., inspections)
• Selecting which parts of documents to review
• Checking entry criteria (for more formal review types)
2. Kick-off
• Distributing documents
• Explaining the objectives, process and documents to the participants
3. Individual preparation
• Preparing for the review meeting by reviewing the document(s)
• Noting potential defects, questions and comments
4. Examination/evaluation/recording of results (review meeting)
• Discussing or logging, with documented results or minutes (for more formal review types)
• Noting defects, making recommendations regarding handling the defects, making decisions about the defects
• Examining/evaluating and recording issues during any physical meetings or tracking any group electronic communications
5. Rework
• Fixing defects found (typically done by the author)
• Recording updated status of defects (in formal reviews)
6. Follow-up
• Checking that defects have been addressed
• Gathering metrics
• Checking on exit criteria (for more formal review types)

3.2.2 Roles and Responsibilities (K1)
A typical formal review will include the roles below:
o Manager: decides on the execution of reviews, allocates time in project schedules and determines if the review objectives have been met.
o Moderator: the person who leads the review of the document or set of documents, including planning the review, running the meeting, and following-up after the meeting. If necessary, the moderator may mediate between the various points of view and is often the person upon whom the success of the review rests.
o Author: the writer or person with chief responsibility for the document(s) to be reviewed.
o Reviewers: individuals with a specific technical or business background (also called checkers or inspectors) who, after the necessary preparation, identify and describe findings (e.g., defects) in the product under review. Reviewers should be chosen to represent different perspectives and roles in the review process, and should take part in any review meetings.
o Scribe (or recorder): documents all the issues, problems and open points that were identified during the meeting.

Looking at software products or related work products from different perspectives and using checklists can make reviews more effective and efficient. For example, a checklist based on various perspectives such as user, maintainer, tester or operational requirements, or a checklist of typical problems may help to uncover previously undetected issues.

3.2.3 Types of Reviews (K2)
A single software product or related work product may be the subject of more than one review. If more than one type of review is used, the order may vary. For example, an informal review may be carried out before a technical review, or an inspection may be carried out on a requirements specification before a walkthrough with customers. The main characteristics, options and purposes of common review types are:

Informal Review
o No formal process
o May take the form of pair programming or a technical lead reviewing designs and code
o Results may be documented
o Varies in usefulness depending on the reviewers
o Main purpose: inexpensive way to get some benefit

Walkthrough
o Meeting led by author
o May take the form of scenarios, dry runs, peer group participation
o Open-ended sessions
• Optional pre-meeting preparation of reviewers
• Optional preparation of a review report including list of findings
o Optional scribe (who is not the author)
o May vary in practice from quite informal to very formal
o Main purposes: learning, gaining understanding, finding defects

Technical Review
o Documented, defined defect-detection process that includes peers and technical experts with optional management participation
o May be performed as a peer review without management participation
o Ideally led by trained moderator (not the author)
o Pre-meeting preparation by reviewers
o Optional use of checklists
o Preparation of a review report which includes the list of findings, the verdict whether the software product meets its requirements and, where appropriate, recommendations related to findings
o May vary in practice from quite informal to very formal
o Main purposes: discussing, making decisions, evaluating alternatives, finding defects, solving technical problems and checking conformance to specifications, plans, regulations, and standards

Inspection
o Led by trained moderator (not the author)
o Usually conducted as a peer examination
o Defined roles
o Includes metrics gathering
o Formal process based on rules and checklists
o Specified entry and exit criteria for acceptance of the software product
o Pre-meeting preparation
o Inspection report including list of findings
o Formal follow-up process (with optional process improvement components)
o Optional reader
o Main purpose: finding defects

Walkthroughs, technical reviews and inspections can be performed within a peer group, i.e., colleagues at the same organizational level. This type of review is called a “peer review”.

3.2.4 Success Factors for Reviews (K2)
Success factors for reviews include:
o Each review has clear predefined objectives
o The right people for the review objectives are involved
o Testers are valued reviewers who contribute to the review and also learn about the product, which enables them to prepare tests earlier
o Defects found are welcomed and expressed objectively
o People issues and psychological aspects are dealt with (e.g., making it a positive experience for the author)
o The review is conducted in an atmosphere of trust; the outcome will not be used for the evaluation of the participants
o Review techniques are applied that are suitable to achieve the objectives and to the type and level of software work products and reviewers
o Checklists or roles are used if appropriate to increase effectiveness of defect identification
o Training is given in review techniques, especially the more formal techniques such as inspection
o Management supports a good review process (e.g., by incorporating adequate time for review activities in project schedules)
o There is an emphasis on learning and process improvement


3.3 Static Analysis by Tools (K2)  20 minutes

Terms
Compiler, complexity, control flow, data flow, static analysis

Background
The objective of static analysis is to find defects in software source code and software models. Static analysis is performed without actually executing the software being examined by the tool; dynamic testing does execute the software code. Static analysis can locate defects that are hard to find in dynamic testing. As with reviews, static analysis finds defects rather than failures. Static analysis tools analyze program code (e.g., control flow and data flow), as well as generated output such as HTML and XML.

The value of static analysis is:
o Early detection of defects prior to test execution
o Early warning about suspicious aspects of the code or design by the calculation of metrics, such as a high complexity measure
o Identification of defects not easily found by dynamic testing
o Detecting dependencies and inconsistencies in software models such as links
o Improved maintainability of code and design
o Prevention of defects, if lessons are learned in development

Typical defects discovered by static analysis tools include (a short code sketch follows the list):
o Referencing a variable with an undefined value
o Inconsistent interfaces between modules and components
o Variables that are not used or are improperly declared
o Unreachable (dead) code
o Missing and erroneous logic (potentially infinite loops)
o Overly complicated constructs
o Programming standards violations
o Security vulnerabilities
o Syntax violations of code and software models
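
The fragment below (hand-written for illustration; it is not the output of any particular tool) packs several of the defects listed above into a few lines, all of which a static analysis tool could flag without executing the code:

    def report(values):
        total = 0
        label = "unused"            # variable that is never read
        if not values:
            return None
            print("empty input")    # unreachable (dead) code
        for v in values:
            total += v
        return total / count        # 'count' referenced but never defined
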
Static analysis tools are typically used by developers (checking against predefined rules or programming standards) before and during component and integration testing or when checking in code to configuration management tools, and by designers during software modeling. Static analysis tools may produce a large number of warning messages, which need to be well-managed to allow the most effective use of the tool.
ective use off the tool.

Compilers may offer some support for static analysis, including the calculation of metrics.

References
3.2 IEEE 1028
3.2.2 Gilb, 1993, van Veenendaal, 2004
3.2.4 Gilb, 1993, IEEE 1028
3.3 van Veenendaal, 2004


4. Test Design Techniques (K4)  285 minutes

Learning Objectives for Test Design Techniques
The objectives identify what you will be able to do following the completion of each module.

4.1 The Test Development Process (K3)
LO-4.1.1 Differentiate between a test design specification, test case specification and test procedure specification (K2)
LO-4.1.2 Compare the terms test condition, test case and test procedure (K2)
LO-4.1.3 Evaluate the quality of test cases in terms of clear traceability to the requirements and expected results (K2)
LO-4.1.4 Translate test cases into a well-structured test procedure specification at a level of detail relevant to the knowledge of the testers (K3)

4.2 Categories of Test Design Techniques (K2)
LO-4.2.1 Recall reasons that both specification-based (black-box) and structure-based (white-box) test design techniques are useful and list the common techniques for each (K1)
LO-4.2.2 Explain the characteristics, commonalities, and differences between specification-based testing, structure-based testing and experience-based testing (K2)

4.3 Specification-based or Black-box Techniques (K3)
LO-4.3.1 Write test cases from given software models using equivalence partitioning, boundary value analysis, decision tables and state transition diagrams/tables (K3)
LO-4.3.2 Explain the main purpose of each of the four testing techniques, what level and type of testing could use the technique, and how coverage may be measured (K2)
LO-4.3.3 Explain the concept of use case testing and its benefits (K2)

4.4 Structure-based or White-box Techniques (K4)
LO-4.4.1 Describe the concept and value of code coverage (K2)
LO-4.4.2 Explain the concepts of statement and decision coverage, and give reasons why these concepts can also be used at test levels other than component testing (e.g., on business procedures at system level) (K2)
LO-4.4.3 Write test cases from given control flows using statement and decision test design techniques (K3)
LO-4.4.4 Assess statement and decision coverage for completeness with respect to defined exit criteria (K4)

4.5 Experience-based Techniques (K2)
LO-4.5.1 Recall reasons for writing test cases based on intuition, experience and knowledge about common defects (K1)
LO-4.5.2 Compare experience-based techniques with specification-based testing techniques (K2)

4.6 Choosing Test Techniques (K2)
LO-4.6.1 Classify test design techniques according to their fitness to a given context, for the test basis, respective models and software characteristics (K2)


4.1 The Test Development Process (K3)  15 minutes

Terms
Test case specification, test design, test execution schedule, test procedure specification, test script, traceability

Background
The test development process described in this section can be done in different ways, from very informal with little or no documentation, to very formal (as it is described below). The level of formality depends on the context of the testing, including the maturity of testing and development processes, time constraints, safety or regulatory requirements, and the people involved.

During test analysis, the test basis documentation is analyzed in order to determine what to test, i.e., to identify the test conditions. A test condition is defined as an item or event that could be verified by one or more test cases (e.g., a function, transaction, quality characteristic or structural element).

Establishing traceability from test conditions back to the specifications and requirements enables both effective impact analysis when requirements change, and determining requirements coverage for a set of tests. During test analysis the detailed test approach is implemented to select the test design techniques to use based on, among other considerations, the identified risks (see Chapter 5 for more on risk analysis).

During test design the test cases and test data are created and specified. A test case consists of a set of input values, execution preconditions, expected results and execution postconditions, defined to cover a certain test objective(s) or test condition(s). The ‘Standard for Software Test Documentation’ (IEEE STD 829-1998) describes the content of test design specifications (containing test conditions) and test case specifications.

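As an informal illustration only (this is not the IEEE STD 829-1998 format), the parts of a test case named above can be pictured as a simple structure; all field names and values are invented:

    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        objective: str                # test objective / condition covered
        preconditions: list          # execution preconditions
        inputs: dict                  # set of input values
        expected_result: str          # defined before execution
        postconditions: list = field(default_factory=list)

    tc = TestCase(
        objective="Valid login",
        preconditions=["user 'alice' exists"],
        inputs={"username": "alice", "password": "secret"},
        expected_result="user reaches the dashboard",
    )
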
Expected results should be produced as part of the specification of a test case and include outputs, changes to data and states, and any other consequences of the test. If expected results have not been defined, then a plausible, but erroneous, result may be interpreted as the correct one. Expected results should ideally be defined prior to test execution.

During test implementation the test cases are developed, implemented, prioritized and organized in the test procedure specification (IEEE STD 829-1998). The test procedure specifies the sequence of actions for the execution of a test. If tests are run using a test execution tool, the sequence of actions is specified in a test script (which is an automated test procedure).

The various test procedures and automated test scripts are subsequently formed into a test execution schedule that defines the order in which the various test procedures, and possibly automated test scripts, are executed. The test execution schedule will take into account such factors as regression tests, prioritization, and technical and logical dependencies.


4.2 Categories of Test Design Techniques (K2)  15 minutes

Terms
Black-box test design technique, experience-based test design technique, test design technique, white-box test design technique

Background
The purpose of a test design technique is to identify test conditions, test cases, and test data.

It is a classic distinction to denote test techniques as black-box or white-box. Black-box test design techniques (also called specification-based techniques) are a way to derive and select test conditions, test cases, or test data based on an analysis of the test basis documentation. This includes both functional and non-functional testing. Black-box testing, by definition, does not use any information regarding the internal structure of the component or system to be tested. White-box test design techniques (also called structural or structure-based techniques) are based on an analysis of the structure of the component or system. Black-box and white-box testing may also be combined with experience-based techniques to leverage the experience of developers, testers and users to determine what should be tested.

Some techniques fall clearly into a single category; others have elements of more than one category.

This syllabus refers to specification-based test design techniques as black-box techniques and structure-based test design techniques as white-box techniques. In addition, experience-based test design techniques are covered.

Common characteristics of specification-based test design techniques include:
o Models, either formal or informal, are used for the specification of the problem to be solved, the software or its components
o Test cases can be derived systematically from these models

Common characteristics of structure-based test design techniques include:
o Information about how the software is constructed is used to derive the test cases (e.g., code and detailed design information)
o The extent of coverage of the software can be measured for existing test cases, and further test cases can be derived systematically to increase coverage

Common characteristics of experience-based test design techniques include:
o The knowledge and experience of people are used to derive the test cases
o The knowledge of testers, developers, users and other stakeholders about the software, its usage and its environment is one source of information
o Knowledge about likely defects and their distribution is another source of information


4.3 Specification-based or Black-box Techniques (K3)  150 minutes

Terms
Boundary value analysis, decision table testing, equivalence partitioning, state transition testing, use case testing

4.3.1 Equivalence Partitioning (K3)
In equivalence partitioning, inputs to the software or system are divided into groups that are expected to exhibit similar behavior, so they are likely to be processed in the same way. Equivalence partitions (or classes) can be found for both valid data, i.e., values that should be accepted, and invalid data, i.e., values that should be rejected. Partitions can also be identified for outputs, internal values, time-related values (e.g., before or after an event) and for interface parameters (e.g., integrated components being tested during integration testing). Tests can be designed to cover all valid and invalid partitions. Equivalence partitioning is applicable at all levels of testing.

Equivalence partitioning can be used to achieve input and output coverage goals. It can be applied to human input, input via interfaces to a system, or interface parameters in integration testing.

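A small illustrative sketch (the age field and its 18..65 rule are invented): one representative value is chosen from each valid and invalid partition.

    def is_valid_age(age):
        # Hypothetical requirement: accepted ages are 18..65 inclusive.
        return 18 <= age <= 65

    # One representative value per equivalence partition:
    #   below 18 (invalid), 18..65 (valid), above 65 (invalid)
    assert is_valid_age(10) is False   # invalid partition, below range
    assert is_valid_age(30) is True    # valid partition
    assert is_valid_age(80) is False   # invalid partition, above range
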
4.3.2 Boundary Value Analysis (K3)
Behavior at the edge of each equivalence partition is more likely to be incorrect than behavior within the partition, so boundaries are an area where testing is likely to yield defects. The maximum and minimum values of a partition are its boundary values. A boundary value for a valid partition is a valid boundary value; the boundary of an invalid partition is an invalid boundary value. Tests can be designed to cover both valid and invalid boundary values. When designing test cases, a test for each boundary value is chosen.

Boundary value analysis can be applied at all test levels. It is relatively easy to apply and its defect-finding capability is high. Detailed specifications are helpful in determining the interesting boundaries.

This technique is often considered as an extension of equivalence partitioning or other black-box test design techniques. It can be used on equivalence classes for user input on screen as well as, for example, on time ranges (e.g., time out, transactional speed requirements) or table ranges (e.g., table size is 256*256).

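Continuing the invented age example (valid range 18..65), boundary value analysis adds explicit tests for the values at each edge of the partitions:

    def is_valid_age(age):
        return 18 <= age <= 65

    # Valid boundary values: 18 and 65; invalid boundary values: 17 and 66.
    for age, expected in [(17, False), (18, True), (65, True), (66, False)]:
        assert is_valid_age(age) == expected
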
4.3.3 Decision Table Testing (K3)
Decision tables are a good way to capture system requirements that contain logical conditions, and to document internal system design. They may be used to record complex business rules that a system is to implement. When creating decision tables, the specification is analyzed, and conditions and actions of the system are identified. The input conditions and actions are most often stated in such a way that they must be true or false (Boolean). The decision table contains the triggering conditions, often combinations of true and false for all input conditions, and the resulting actions for each combination of conditions. Each column of the table corresponds to a business rule that defines a unique combination of conditions and which results in the execution of the actions associated with that rule. The coverage standard commonly used with decision table testing is to have at least one test per column in the table, which typically involves covering all combinations of triggering conditions.


The strength of decision table testing is that it creates combinations of conditions that otherwise might not have been exercised during testing. It may be applied to all situations when the action of the software depends on several logical decisions.

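A sketch with an invented discount rule: the decision table is written out as a comment, and one test is derived per column (business rule), matching the coverage standard described above.

    # Decision table (hypothetical business rules):
    #   Conditions            R1    R2    R3    R4
    #   member?               T     T     F     F
    #   order total >= 100?   T     F     T     F
    #   Action: discount %    15    5     5     0
    def discount_percent(is_member, order_total):
        if is_member and order_total >= 100:
            return 15
        if is_member or order_total >= 100:
            return 5
        return 0

    # One test case per column of the decision table.
    assert discount_percent(True, 150) == 15   # R1
    assert discount_percent(True, 50) == 5     # R2
    assert discount_percent(False, 150) == 5   # R3
    assert discount_percent(False, 50) == 0    # R4
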
4.3.4 State Transition Testing (K3)
A system may exhibit a different response depending on current conditions or previous history (its state). In this case, that aspect of the system can be shown with a state transition diagram. It allows the tester to view the software in terms of its states, transitions between states, the inputs or events that trigger state changes (transitions) and the actions which may result from those transitions. The states of the system or object under test are separate, identifiable and finite in number.

A state table shows the relationship between the states and inputs, and can highlight possible transitions that are invalid.

Tests can be designed to cover a typical sequence of states, to cover every state, to exercise every transition, to exercise specific sequences of transitions or to test invalid transitions.

State transition testing is much used within the embedded software industry and technical automation in general. However, the technique is also suitable for modeling a business object having specific states or testing screen-dialogue flows (e.g., for Internet applications or business scenarios).

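An illustrative state table for an invented ATM card session, with tests for a typical sequence of states and for an invalid transition:

    # (state, event) -> new state; any other pair is an invalid transition.
    TRANSITIONS = {
        ("idle",         "insert_card"): "awaiting_pin",
        ("awaiting_pin", "valid_pin"):   "authorized",
        ("awaiting_pin", "invalid_pin"): "awaiting_pin",
        ("authorized",   "eject_card"):  "idle",
    }

    def next_state(state, event):
        try:
            return TRANSITIONS[(state, event)]
        except KeyError:
            raise ValueError("invalid transition: %s in state %s" % (event, state))

    # Cover a typical sequence of states...
    state = next_state("idle", "insert_card")
    state = next_state(state, "valid_pin")
    assert state == "authorized"

    # ...and test an invalid transition.
    try:
        next_state("idle", "valid_pin")
        raise AssertionError("invalid transition was not rejected")
    except ValueError:
        pass
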
4.3.5 Use Case Testing (K2)
Tests can be derived from use cases. A use case describes interactions between actors (users or systems), which produce a result of value to a system user or the customer. Use cases may be described at the abstract level (business use case, technology-free, business process level) or at the system level (system use case on the system functionality level). Each use case has preconditions which need to be met for the use case to work successfully. Each use case terminates with postconditions which are the observable results and final state of the system after the use case has been completed. A use case usually has a mainstream (i.e., most likely) scenario and alternative scenarios.

Use cases describe the “process flows” through a system based on its actual likely use, so the test cases derived from use cases are most useful in uncovering defects in the process flows during real-world use of the system. Use cases are very useful for designing acceptance tests with customer/user participation. They also help uncover integration defects caused by the interaction and interference of different components, which individual component testing would not see. Designing test cases from use cases may be combined with other specification-based test techniques.


4.4 Structure-based or White-box Techniques (K4)  60 minutes

Terms
Code coverage, decision coverage, statement coverage, structure-based testing

Background
Structure-based or white-box testing is based on an identified structure of the software or the system, as seen in the following examples:
o Component level: the structure of a software component, i.e., statements, decisions, branches or even distinct paths
o Integration level: the structure may be a call tree (a diagram in which modules call other modules)
o System level: the structure may be a menu structure, business process or web page structure

In this section, three code-related structural test design techniques for code coverage, based on statements, branches and decisions, are discussed. For decision testing, a control flow diagram may be used to visualize the alternatives for each decision.

4.4.1 Statement Testing and Coverage (K4)
In component testing, statement coverage is the assessment of the percentage of executable statements that have been exercised by a test case suite. The statement testing technique derives test cases to execute specific statements, normally to increase statement coverage.

Statement coverage is determined by the number of executable statements covered by (designed or executed) test cases divided by the number of all executable statements in the code under test.

4.4.2 Decision Testing and Coverage (K4)
Decision coverage, related to branch testing, is the assessment of the percentage of decision outcomes (e.g., the True and False options of an IF statement) that have been exercised by a test case suite. The decision testing technique derives test cases to execute specific decision outcomes. Branches originate from decision points in the code and show the transfer of control to different locations in the code.

Decision coverage is determined by the number of all decision outcomes covered by (designed or executed) test cases divided by the number of all possible decision outcomes in the code under test.

Decision testing is a form of control flow testing as it follows a specific flow of control through the decision points. Decision coverage is stronger than statement coverage; 100% decision coverage guarantees 100% statement coverage, but not vice versa.
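
A worked sketch of both measures on an invented two-statement function (illustrative only, not from the syllabus), showing why 100% statement coverage does not guarantee 100% decision coverage:

    def absolute(x):
        if x < 0:     # one decision with two outcomes (True, False)
            x = -x
        return x

    # The single test absolute(-1) executes every statement, giving 100%
    # statement coverage, but exercises only the True outcome of the
    # decision: decision coverage is 1/2 = 50%.
    assert absolute(-1) == 1

    # Adding a test for the False outcome raises decision coverage to
    # 2/2 = 100%; statement coverage was already 100% with the first test.
    assert absolute(3) == 3
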
4.4.3 Other Structure-based Techniques (K1)
There are stronger levels of structural coverage beyond decision coverage, for example, condition coverage and multiple condition coverage.

The concept of coverage can also be applied at other test levels. For example, at the integration level the percentage of modules, components or classes that have been exercised by a test case suite could be expressed as module, component or class coverage.

Tool support is useful for the structural testing of code.

4.5 Experience-based Techniques (K2)  30 minutes

Terms
Exploratory testing, (fault) attack

Background
Experience-based testing is where tests are derived from the tester’s skill and intuition and their experience with similar applications and technologies. When used to augment systematic techniques, these techniques can be useful in identifying special tests not easily captured by formal techniques, especially when applied after more formal approaches. However, these techniques may yield widely varying degrees of effectiveness, depending on the testers’ experience.

A commonly used experience-based technique is error guessing. Generally testers anticipate defects based on experience. A structured approach to the error guessing technique is to enumerate a list of possible defects and to design tests that attack these defects. This systematic approach is called fault attack. These defect and failure lists can be built based on experience, available defect and failure data, and from common knowledge about why software fails.
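
As a hypothetical sketch of turning such a defect list into concrete attacking tests (the function under test and the defect list are invented for illustration):

# Hypothetical fault attack sketch; all names and figures are invented.
def parse_age(text):
    """Toy function under test: returns a validated integer age."""
    value = int(text.strip())
    if not 0 <= value <= 130:
        raise ValueError("age out of range")
    return value

attack_list = [
    ("empty input",            ""),
    ("non-numeric input",      "abc"),
    ("negative value",         "-1"),
    ("boundary overflow",      "99999999999999999999"),
    ("surrounding whitespace", " 42 "),
]

for defect, candidate in attack_list:
    try:
        result = parse_age(candidate)
        print(f"accepted {candidate!r} -> {result} ({defect})")
    except ValueError:
        # A controlled rejection is the expected defense; any other
        # failure would indicate the anticipated defect is present.
        print(f"rejected {candidate!r} ({defect})")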

Exploratory testing is concurrent test design, test execution, test logging and learning, based on a test charter containing test objectives, and carried out within time-boxes. It is an approach that is most useful where there are few or inadequate specifications and severe time pressure, or in order to augment or complement other, more formal testing. It can serve as a check on the test process, to help ensure that the most serious defects are found.


4.6 Choosing Test Techniques (K2)  15 minutes

Terms
No specific terms.

Background
The choice of which test techniques to use depends on a number of factors, including the type of system, regulatory standards, customer or contractual requirements, level of risk, type of risk, test objective, documentation available, knowledge of the testers, time and budget, development life cycle, use case models and previous experience with types of defects found.

Some techniques are more applicable to certain situations and test levels; others are applicable to all test levels.

When creating test cases, testers generally use a combination of test techniques including process, rule and data-driven techniques to ensure adequate coverage of the object under test.

References
4.1 Craig, 2002, Hetzel, 1988, IEEE Std 829-1998
4.2 Beizer, 1990, Copeland, 2004
4.3.1 Copeland, 2004, Myers, 1979
4.3.2 Copeland, 2004, Myers, 1979
4.3.3 Beizer, 1990, Copeland, 2004
4.3.4 Beizer, 1990, Copeland, 2004
4.3.5 Copeland, 2004
4.4.3 Beizer, 1990, Copeland, 2004
4.5 Kaner, 2002
4.6 Beizer, 1990, Copeland, 2004


5. Test Management (K3)  170 minutes

Learning Objectives for Test Management
The objectives identify what you will be able to do following the completion of each module.

5.1 Test Organization (K2)
LO-5.1.1 Recognize the importance of independent testing (K1)
LO-5.1.2 Explain the benefits and drawbacks of independent testing within an organization (K2)
LO-5.1.3 Recognize the different team members to be considered for the creation of a test team (K1)
LO-5.1.4 Recall the tasks of a typical test leader and tester (K1)

5.2 Test Planning and Estimation (K3)
LO-5.2.1 Recognize the different levels and objectives of test planning (K1)
LO-5.2.2 Summarize the purpose and content of the test plan, test design specification and test procedure documents according to the ‘Standard for Software Test Documentation’ (IEEE Std 829-1998) (K2)
LO-5.2.3 Differentiate between conceptually different test approaches, such as analytical, model-based, methodical, process/standard compliant, dynamic/heuristic, consultative and regression-averse (K2)
LO-5.2.4 Differentiate between the subject of test planning for a system and scheduling test execution (K2)
LO-5.2.5 Write a test execution schedule for a given set of test cases, considering prioritization, and technical and logical dependencies (K3)
LO-5.2.6 List test preparation and execution activities that should be considered during test planning (K1)
LO-5.2.7 Recall typical factors that influence the effort related to testing (K1)
LO-5.2.8 Differentiate between two conceptually different estimation approaches: the metrics-based approach and the expert-based approach (K2)
LO-5.2.9 Recognize/justify adequate entry and exit criteria for specific test levels and groups of test cases (e.g., for integration testing, acceptance testing or test cases for usability testing) (K2)

5.3 Test Progress Monitoring and Control (K2)
LO-5.3.1 Recall common metrics used for monitoring test preparation and execution (K1)
LO-5.3.2 Explain and compare test metrics for test reporting and test control (e.g., defects found and fixed, and tests passed and failed) related to purpose and use (K2)
LO-5.3.3 Summarize the purpose and content of the test summary report document according to the ‘Standard for Software Test Documentation’ (IEEE Std 829-1998) (K2)

5.4 Configuration Management (K2)
LO-5.4.1 Summarize how configuration management supports testing (K2)

5.5 Risk and Testing (K2)
LO-5.5.1 Describe a risk as a possible problem that would threaten the achievement of one or more stakeholders’ project objectives (K2)
LO-5.5.2 Remember that the level of risk is determined by likelihood (of happening) and impact (harm resulting if it does happen) (K1)
LO-5.5.3 Distinguish between the project and product risks (K2)
LO-5.5.4 Recognize typical product and project risks (K1)
LO-5.5.5 Describe, using examples, how risk analysis and risk management may be used for test planning (K2)

5.6 Incident Management (K3)
LO-5.6.1 Recognize the content of an incident report according to the ‘Standard for Software Test Documentation’ (IEEE Std 829-1998) (K1)
LO-5.6.2 Write an incident report covering the observation of a failure during testing (K3)

5.1 Test Organization (K2)  30 minutes

Terms
Tester, test leader, test manager

5.1.1 Test Organization and Independence (K2)
The effectiveness of finding defects by testing and reviews can be improved by using independent testers. Options for independence include the following:
o No independent testers; developers test their own code
o Independent testers within the development teams
o Independent test team or group within the organization, reporting to project management or executive management
o Independent testers from the business organization or user community
o Independent test specialists for specific test types such as usability testers, security testers or certification testers (who certify a software product against standards and regulations)
o Independent testers outsourced or external to the organization

For large, complex or safety critical projects, it is usually best to have multiple levels of testing, with some or all of the levels done by independent testers. Development staff may participate in testing, especially at the lower levels, but their lack of objectivity often limits their effectiveness. The independent testers may have the authority to require and define test processes and rules, but testers should take on such process-related roles only in the presence of a clear management mandate to do so.

The benefits of independence include:
o Independent testers see other and different defects, and are unbiased
o An independent tester can verify assumptions people made during specification and implementation of the system

Drawbacks include:
o Isolation from the development team (if treated as totally independent)
o Developers may lose a sense of responsibility for quality
o Independent testers may be seen as a bottleneck or blamed for delays in release

Testing tasks may be done by people in a specific testing role, or may be done by someone in another role, such as a project manager, quality manager, developer, business and domain expert, infrastructure or IT operations.

5.1.2 Tasks of the Test Leader and Tester (K1)
In this syllabus two test positions are covered, test leader and tester. The activities and tasks performed by people in these two roles depend on the project and product context, the people in the roles, and the organization.

Sometimes the test leader is called a test manager or test coordinator. The role of the test leader may be performed by a project manager, a development manager, a quality assurance manager or the manager of a test group. In larger projects two positions may exist: test leader and test manager. Typically the test leader plans, monitors and controls the testing activities and tasks as defined in Section 1.4.

Typical test leader tasks may include:
o Coordinate the test strategy and plan with project managers and others
o Write or review a test strategy for the project, and test policy for the organization
o Contribute the testing perspective to other project activities, such as integration planning
o Plan the tests – considering the context and understanding the test objectives and risks – including selecting test approaches, estimating the time, effort and cost of testing, acquiring resources, defining test levels, cycles, and planning incident management
o Initiate the specification, preparation, implementation and execution of tests, monitor the test results and check the exit criteria
o Adapt planning based on test results and progress (sometimes documented in status reports) and take any action necessary to compensate for problems
o Set up adequate configuration management of testware for traceability
o Introduce suitable metrics for measuring test progress and evaluating the quality of the testing and the product
o Decide what should be automated, to what degree, and how
o Select tools to support testing and organize any training in tool use for testers
o Decide about the implementation of the test environment
o Write test summary reports based on the information gathered during testing

Typical tester tasks may include:
o Review and contribute to test plans
o Analyze, review and assess user requirements, specifications and models for testability
o Create test specifications
o Set up the test environment (often coordinating with system administration and network management)
o Prepare and acquire test data
o Implement tests on all test levels, execute and log the tests, evaluate the results and document the deviations from expected results
o Use test administration or management tools and test monitoring tools as required
o Automate tests (may be supported by a developer or a test automation expert)
o Measure performance of components and systems (if applicable)
o Review tests developed by others

People who work on test analysis, test design, specific test types or test automation may be specialists in these roles. Depending on the test level and the risks related to the product and the project, different people may take over the role of tester, keeping some degree of independence. Typically testers at the component and integration level would be developers, testers at the acceptance test level would be business experts and users, and testers for operational acceptance testing would be operators.

5.2 Test Planning and Estimation (K3)  40 minutes

Terms
Test approach, test strategy

5.2.1 Test Planning (K2)
This section covers the purpose of test planning within development and implementation projects, and for maintenance activities. Planning may be documented in a master test plan and in separate test plans for test levels such as system testing and acceptance testing. The outline of a test-planning document is covered by the ‘Standard for Software Test Documentation’ (IEEE Std 829-1998).

Planning is influenced by the test policy of the organization, the scope of testing, objectives, risks, constraints, criticality, testability and the availability of resources. As the project and test planning progress, more information becomes available and more detail can be included in the plan.

Test planning is a continuous activity and is performed in all life cycle processes and activities. Feedback from test activities is used to recognize changing risks so that planning can be adjusted.

5.2.2 Test Planning Activities (K3)
Test planning activities for an entire system or part of a system may include:
o Determining the scope and risks and identifying the objectives of testing
o Defining the overall approach of testing, including the definition of the test levels and entry and exit criteria
o Integrating and coordinating the testing activities into the software life cycle activities (acquisition, supply, development, operation and maintenance)
o Making decisions about what to test, what roles will perform the test activities, how the test activities should be done, and how the test results will be evaluated
o Scheduling test analysis and design activities
o Scheduling test implementation, execution and evaluation
o Assigning resources for the different activities defined
o Defining the amount, level of detail, structure and templates for the test documentation
o Selecting metrics for monitoring and controlling test preparation and execution, defect resolution and risk issues
o Setting the level of detail for test procedures in order to provide enough information to support reproducible test preparation and execution

5.2.3 Entry Criteria (K2)
Entry criteria define when to start testing such as at the beginning of a test level or when a set of tests is ready for execution.

Typically entry criteria may cover the following:
o Test environment availability and readiness
o Test tool readiness in the test environment
o Testable code availability
o Test data availability

5.2.4 Exit Criteria (K2)
Exit criteria define when to stop testing such as at the end of a test level or when a set of tests has achieved a specific goal.

Typically exit criteria may cover the following:
o Thoroughness measures, such as coverage of code, functionality or risk
o Estimates of defect density or reliability measures
o Cost
o Residual risks, such as defects not fixed or lack of test coverage in certain areas
o Schedules such as those based on time to market

5.2.5 Test Estimation (K2)
Two approaches for the estimation of test effort are:
o The metrics-based approach: estimating the testing effort based on metrics of former or similar projects or based on typical values
o The expert-based approach: estimating the tasks based on estimates made by the owner of the tasks or by experts

Once the test effort is estimated, resources can be identified and a schedule can be drawn up.
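
A minimal sketch of the metrics-based approach (all figures are invented for illustration):

# Hypothetical metrics-based estimation sketch; all figures are invented.
historical = [
    # (test cases executed, total test effort in person-hours) from past projects
    (200, 320),
    (150, 210),
    (400, 580),
]

# Average effort per test case observed on former, similar projects.
hours_per_case = sum(h for _, h in historical) / sum(c for c, _ in historical)

planned_cases = 250                        # expected size of the new test suite
estimate = planned_cases * hours_per_case  # metrics-based effort estimate

print(f"{hours_per_case:.2f} h/case -> estimated {estimate:.0f} person-hours")
# An expert-based approach would instead ask the task owners to estimate
# each activity and sum their judgments.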

The testing effort may depend on a number of factors, including:
o Characteristics of the product: the quality of the specification and other information used for test models (i.e., the test basis), the size of the product, the complexity of the problem domain, the requirements for reliability and security, and the requirements for documentation
o Characteristics of the development process: the stability of the organization, tools used, test process, skills of the people involved, and time pressure
o The outcome of testing: the number of defects and the amount of rework required

5.2.6 Test Strategy, Test Approach (K2)
The test approach is the implementation of the test strategy for a specific project. The test approach is defined and refined in the test plans and test designs. It typically includes the decisions made based on the (test) project’s goal and risk assessment. It is the starting point for planning the test process, for selecting the test design techniques and test types to be applied, and for defining the entry and exit criteria.

The selected approach depends on the context and may consider risks, hazards and safety, available resources and skills, the technology, the nature of the system (e.g., custom built vs. COTS), test objectives, and regulations.

Typical approaches include:
o Analytical approaches, such as risk-based testing where testing is directed to areas of greatest risk
o Model-based approaches, such as stochastic testing using statistical information about failure rates (such as reliability growth models) or usage (such as operational profiles)
o Methodical approaches, such as failure-based (including error guessing and fault attacks), experience-based, checklist-based, and quality characteristic-based
o Process- or standard-compliant approaches, such as those specified by industry-specific standards or the various agile methodologies
o Dynamic and heuristic approaches, such as exploratory testing where testing is more reactive to events than pre-planned, and where execution and evaluation are concurrent tasks
o Consultative approaches, such as those in which test coverage is driven primarily by the advice and guidance of technology and/or business domain experts outside the test team
o Regression-averse approaches, such as those that include reuse of existing test material, extensive automation of functional regression tests, and standard test suites

Different approaches may be combined, for example, a risk-based dynamic approach.

5.3 Test Progress Monitoring and Control (K2)  20 minutes

Terms
Defect density, failure rate, test control, test monitoring, test summary report

5.3.1 Test Progress Monitoring (K1)
The purpose of test monitoring is to provide feedback and visibility about test activities. Information to be monitored may be collected manually or automatically and may be used to measure exit criteria, such as coverage. Metrics may also be used to assess progress against the planned schedule and budget. Common test metrics include:
o Percentage of work done in test case preparation (or percentage of planned test cases prepared)
o Percentage of work done in test environment preparation
o Test case execution (e.g., number of test cases run/not run, and test cases passed/failed)
o Defect information (e.g., defect density, defects found and fixed, failure rate, and re-test results)
o Test coverage of requirements, risks or code
o Subjective confidence of testers in the product
o Dates of test milestones
o Testing costs, including the cost compared to the benefit of finding the next defect or to run the next test
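
A minimal sketch (figures invented) of how a few of these metrics might be computed from raw execution data:

# Hypothetical monitoring sketch; all figures are invented for illustration.
planned_cases  = 120
prepared_cases = 90
run            = 75
passed         = 60
defects_found  = 40
size_kloc      = 12.5   # size of the test object in thousands of lines of code

preparation_pct = 100 * prepared_cases / planned_cases  # 75% of planned cases prepared
execution_pct   = 100 * run / planned_cases             # 62.5% of cases run
pass_pct        = 100 * passed / run                    # 80% of executed cases passed
defect_density  = defects_found / size_kloc             # 3.2 defects per KLOC

print(preparation_pct, execution_pct, pass_pct, round(defect_density, 1))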

5.3.2 Test Reporting (K2)
Test reporting is concerned with summarizing information about the testing endeavor, including:
o What happened during a period of testing, such as dates when exit criteria were met
o Analyzed information and metrics to support recommendations and decisions about future actions, such as an assessment of defects remaining, the economic benefit of continued testing, outstanding risks, and the level of confidence in the tested software

The outline of a test summary report is given in the ‘Standard for Software Test Documentation’ (IEEE Std 829-1998).

Metrics should be collected during and at the end of a test level in order to assess:
o The adequacy of the test objectives for that test level
o The adequacy of the test approaches taken
o The effectiveness of the testing with respect to the objectives

5.3.3 Test Control (K2)
Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and reported. Actions may cover any test activity and may affect any other software life cycle activity or task.

Examples of test control actions include:
o Making decisions based on information from test monitoring
o Re-prioritizing tests when an identified risk occurs (e.g., software delivered late)
o Changing the test schedule due to availability or unavailability of a test environment
o Setting an entry criterion requiring fixes to have been re-tested (confirmation tested) by a developer before accepting them into a build

5.4 Configuration Management (K2)  10 minutes

Terms
Configuration management, version control

Background
The purpose of configuration management is to establish and maintain the integrity of the products (components, data and documentation) of the software or system through the project and product life cycle.

For testing, configuration management may involve ensuring the following:
o All items of testware are identified, version controlled, tracked for changes, related to each other and related to development items (test objects) so that traceability can be maintained throughout the test process
o All identified documents and software items are referenced unambiguously in test documentation

For the tester, configuration management helps to uniquely identify (and to reproduce) the tested item, test documents, the tests and the test harness(es).

During test planning, the configuration management procedures and infrastructure (tools) should be chosen, documented and implemented.
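
A minimal sketch (all identifiers invented) of recording the configuration needed to reproduce a test run:

# Hypothetical configuration record for reproducing a tested item.
test_run_config = {
    "test_object":    "billing-service 2.3.1",   # development item under test
    "test_suite":     "regression-suite r457",   # versioned testware
    "test_data":      "customers-2011-03.csv v12",
    "test_harness":   "sim-env 1.8",
    "os_environment": "Server OS 5.4 / JRE 6u23",
}

# With every item identified and version controlled, the exact test run
# can be traced and repeated later.
print(test_run_config)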


5.5 Risk and Testing (K2)  30 minutes

Terms
Product risk, project risk, risk, risk-based testing

Background
Risk can be defined as the chance of an event, hazard, threat or situation occurring and resulting in undesirable consequences or a potential problem. The level of risk will be determined by the likelihood of an adverse event happening and the impact (the harm resulting from that event).
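
One common convention (not mandated by the syllabus) is to score likelihood and impact on simple scales and combine them by multiplication, as in this invented sketch:

# Hypothetical risk-scoring sketch; the scales and the multiplication rule
# are a common convention, not prescribed by the syllabus.
risks = {
    # risk description: (likelihood 1-5, impact 1-5)
    "payment calculation wrong": (2, 5),
    "slow search response":      (4, 3),
    "typo in help text":         (3, 1),
}

# Level of risk = likelihood x impact; higher scores get attention first.
for name, (likelihood, impact) in sorted(
        risks.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True):
    print(f"{name}: level {likelihood * impact}")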

5.5.1 Project Risks (K2)
Project risks are the risks that surround the project’s capability to deliver its objectives, such as:
o Organizational factors:
  • Skill, training and staff shortages
  • Personnel issues
  • Political issues, such as:
    - Problems with testers communicating their needs and test results
    - Failure by the team to follow up on information found in testing and reviews (e.g., not improving development and testing practices)
  • Improper attitude toward or expectations of testing (e.g., not appreciating the value of finding defects during testing)
o Technical issues:
  • Problems in defining the right requirements
  • The extent to which requirements cannot be met given existing constraints
  • Test environment not ready on time
  • Late data conversion, migration planning and development and testing data conversion/migration tools
  • Low quality of the design, code, configuration data, test data and tests
o Supplier issues:
  • Failure of a third party
  • Contractual issues

When analyzing, managing and mitigating these risks, the test manager is following well-established project management principles. The ‘Standard for Software Test Documentation’ (IEEE Std 829-1998) outline for test plans requires risks and contingencies to be stated.

5.5.2 Product Risks (K2)
Potential failure areas (adverse future events or hazards) in the software or system are known as product risks, as they are a risk to the quality of the product. These include:
o Failure-prone software delivered
o The potential that the software/hardware could cause harm to an individual or company
o Poor software characteristics (e.g., functionality, reliability, usability and performance)
o Poor data integrity and quality (e.g., data migration issues, data conversion problems, data transport problems, violation of data standards)
o Software that does not perform its intended functions

Risks are used to decide where to start testing and where to test more; testing is used to reduce the risk of an adverse effect occurring, or to reduce the impact of an adverse effect.

Product risks are a special type of risk to the success of a project. Testing as a risk-control activity provides feedback about the residual risk by measuring the effectiveness of critical defect removal and of contingency plans.

A risk-based approach to testing provides proactive opportunities to reduce the levels of product risk, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding test planning and control, specification, preparation and execution of tests. In a risk-based approach the risks identified may be used to:
o Determine the test techniques to be employed
o Determine the extent of testing to be carried out
o Prioritize testing in an attempt to find the critical defects as early as possible
o Determine whether any non-testing activities could be employed to reduce risk (e.g., providing training to inexperienced designers)

Risk-based testing draws on the collective knowledge and insight of the project stakeholders to determine the risks and the levels of testing required to address those risks.

To ensure that the chance of a product failure is minimized, risk management activities provide a disciplined approach to:
o Assess (and reassess on a regular basis) what can go wrong (risks)
o Determine what risks are important to deal with
o Implement actions to deal with those risks

In addition, testing may support the identification of new risks, may help to determine what risks should be reduced, and may lower uncertainty about risks.

5.6 Incident Management (K3)  40 minutes

Terms
Incident logging, incident management, incident report

Background
Since one of the objectives of testing is to find defects, the discrepancies between actual and expected outcomes need to be logged as incidents. An incident must be investigated and may turn out to be a defect. Appropriate actions to dispose of incidents and defects should be defined. Incidents and defects should be tracked from discovery and classification to correction and confirmation of the solution. In order to manage all incidents to completion, an organization should establish an incident management process and rules for classification.

Incidents may be raised during development, review, testing or use of a software product. They may be raised for issues in code or the working system, or in any type of documentation including requirements, development documents, test documents, and user information such as “Help” or installation guides.

Incident reports have the following objectives:
o Provide developers and other parties with feedback about the problem to enable identification, isolation and correction as necessary
o Provide test leaders a means of tracking the quality of the system under test and the progress of the testing
o Provide ideas for test process improvement

Details of the incident report may include:
o Date of issue, issuing organization, and author
o Expected and actual results
o Identification of the test item (configuration item) and environment
o Software or system life cycle process in which the incident was observed
o Description of the incident to enable reproduction and resolution, including logs, database dumps or screenshots
o Scope or degree of impact on stakeholder(s) interests
o Severity of the impact on the system
o Urgency/priority to fix
o Status of the incident (e.g., open, deferred, duplicate, waiting to be fixed, fixed awaiting re-test, closed)
o Conclusions, recommendations and approvals
o Global issues, such as other areas that may be affected by a change resulting from the incident
o Change history, such as the sequence of actions taken by project team members with respect to the incident to isolate, repair, and confirm it as fixed
o References, including the identity of the test case specification that revealed the problem
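
As an illustration only (the field names are invented; IEEE Std 829-1998 defines the normative structure), an incident report could be captured as a simple record:

# Hypothetical incident record sketch; field names are invented for
# illustration -- IEEE Std 829-1998 defines the normative report structure.
from dataclasses import dataclass, field

@dataclass
class IncidentReport:
    date_of_issue: str
    author: str
    test_item: str                 # configuration item and environment
    life_cycle_process: str        # e.g., "system testing"
    expected_result: str
    actual_result: str
    description: str               # enough detail to reproduce the incident
    severity: str                  # impact on the system
    priority: str                  # urgency to fix
    status: str = "open"           # open, deferred, duplicate, fixed, closed, ...
    references: list = field(default_factory=list)  # e.g., test case IDs

report = IncidentReport(
    date_of_issue="2011-03-31", author="tester A",
    test_item="billing module v1.2 / test environment T1",
    life_cycle_process="system testing",
    expected_result="invoice total 100.00", actual_result="invoice total 90.00",
    description="Discount applied twice; see attached log",
    severity="high", priority="urgent",
    references=["TC-BILL-017"],
)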

The structure of an incident report is also covered in the ‘Standard for Software Test Documentation’ (IEEE Std 829-1998).

2 Page 55 of 78 31-Ma
ar-2011
© Internationa
al Software Testing Qualifications
Q Board
International
Certiffied Teste
er Software Te esting
Founda
ation Level Syyllabus Q
Qualifications
s Board

References
5.1.1 Black, 2001, Hetzel, 1988
5.1.2 Black, 2001, Hetzel, 1988
5.2.5 Black, 2001, Craig, 2002, IEEE Std 829-1998, Kaner, 2002
5.3.3 Black, 2001, Craig, 2002, Hetzel, 1988, IEEE Std 829-1998
5.4 Craig, 2002
5.5.2 Black, 2001, IEEE Std 829-1998
5.6 Black, 2001, IEEE Std 829-1998

6. Tool Support for Testing (K2)  80 minutes

Learning Objectives for Tool Support for Testing
The objectives identify what you will be able to do following the completion of each module.

6.1 Types of Test Tools (K2)
LO-6.1.1 Classify different types of test tools according to their purpose and to the activities of the fundamental test process and the software life cycle (K2)
LO-6.1.3 Explain the term test tool and the purpose of tool support for testing (K2)²

6.2 Effective Use of Tools: Potential Benefits and Risks (K2)
LO-6.2.1 Summarize the potential benefits and risks of test automation and tool support for testing (K2)
LO-6.2.2 Remember special considerations for test execution tools, static analysis, and test management tools (K1)

6.3 Introducing a Tool into an Organization (K1)
LO-6.3.1 State the main principles of introducing a tool into an organization (K1)
LO-6.3.2 State the goals of a proof-of-concept for tool evaluation and a piloting phase for tool implementation (K1)
LO-6.3.3 Recognize that factors other than simply acquiring a tool are required for good tool support (K1)

² LO-6.1.2 Intentionally skipped

6.1 Types of Test Tools (K2)  45 minutes

Terms
Configuration management tool, coverage tool, debugging tool, dynamic analysis tool, incident management tool, load testing tool, modeling tool, monitoring tool, performance testing tool, probe effect, requirements management tool, review tool, security tool, static analysis tool, stress testing tool, test comparator, test data preparation tool, test design tool, test harness, test execution tool, test management tool, unit test framework tool

6.1.1 Tool Support for Testing (K2)
Test tools can be used for one or more activities that support testing. These include:
1. Tools that are directly used in testing such as test execution tools, test data generation tools and result comparison tools
2. Tools that help in managing the testing process such as those used to manage tests, test results, data, requirements, incidents, defects, etc., and for reporting and monitoring test execution
3. Tools that are used in reconnaissance, or, in simple terms: exploration (e.g., tools that monitor file activity for an application)
4. Any tool that aids in testing (a spreadsheet is also a test tool in this meaning)

Tool support for testing can have one or more of the following purposes depending on the context:
o Improve the efficiency of test activities by automating repetitive tasks or supporting manual test activities like test planning, test design, test reporting and monitoring
o Automate activities that require significant resources when done manually (e.g., static testing)
o Automate activities that cannot be executed manually (e.g., large scale performance testing of client-server applications)
o Increase reliability of testing (e.g., by automating large data comparisons or simulating behavior)

The term “test frameworks” is also frequently used in the industry, in at least three meanings:
o Reusable and extensible testing libraries that can be used to build testing tools (called test harnesses as well)
o A type of design of test automation (e.g., data-driven, keyword-driven)
o Overall process of execution of testing

For the purpose of this syllabus, the term “test frameworks” is used in its first two meanings as described in Section 6.1.6.

6.1.2 Test Tool Classification (K2)
There are a number of tools that support different aspects of testing. Tools can be classified based on several criteria such as purpose, commercial / free / open-source / shareware, technology used and so forth. Tools are classified in this syllabus according to the testing activities that they support.

Some tools clearly support one activity; others may support more than one activity, but are classified under the activity with which they are most closely associated. Tools from a single provider, especially those that have been designed to work together, may be bundled into one package.

Some types of test tools can be intrusive, which means that they can affect the actual outcome of the test. For example, the actual timing may be different due to the extra instructions that are executed by the tool, or you may get a different measure of code coverage. The consequence of intrusive tools is called the probe effect.

Some tools offer support more appropriate for developers (e.g., tools that are used during component and component integration testing). Such tools are marked with “(D)” in the list below.

6.1.3 Tool Support for Management of Testing and Tests (K1)
Management tools apply to all test activities over the entire software life cycle.

Test Management Tools
These tools provide interfaces for executing tests, tracking defects and managing requirements, along with support for quantitative analysis and reporting of the test objects. They also support tracing the test objects to requirement specifications and might have an independent version control capability or an interface to an external one.

Requirements Management Tools
These tools store requirement statements, store the attributes for the requirements (including priority), provide unique identifiers and support tracing the requirements to individual tests. These tools may also help with identifying inconsistent or missing requirements.

Incident Management Tools (Defect Tracking Tools)
These tools store and manage incident reports, i.e., defects, failures, change requests or perceived problems and anomalies, and help in managing the life cycle of incidents, optionally with support for statistical analysis.

Configuration Management Tools
Although not strictly test tools, these are necessary for storage and version management of testware and related software, especially when configuring more than one hardware/software environment in terms of operating system versions, compilers, browsers, etc.

6.1.4 Tool Support for Static Testing (K1)
Static testing tools provide a cost effective way of finding more defects at an earlier stage in the development process.

Review Tools
These tools assist with review processes, checklists and review guidelines, and are used to store and communicate review comments and report on defects and effort. They can be of further help by providing aid for online reviews for large or geographically dispersed teams.

Static Analysis Tools (D)
These tools help developers and testers find defects prior to dynamic testing by providing support for enforcing coding standards (including secure coding), and analysis of structures and dependencies. They can also help in planning or risk analysis by providing metrics for the code (e.g., complexity).

Modeling Tools (D)
These tools are used to validate software models (e.g., a physical data model (PDM) for a relational database), by enumerating inconsistencies and finding defects. These tools can often aid in generating some test cases based on the model.

6.1.5 Tool Support for Test Specification (K1)
Test Design Tools
These tools are used to generate test inputs or executable tests and/or test oracles from requirements, graphical user interfaces, design models (state, data or object) or code.

Test Data Preparation Tools
Test data preparation tools manipulate databases, files or data transmissions to set up test data to be used during the execution of tests to ensure security through data anonymity.

6.1.6 Tool Support for Test Execution and Logging (K1)
Test Execution Tools
These tools enable tests to be executed automatically, or semi-automatically, using stored inputs and expected outcomes, through the use of a scripting language, and usually provide a test log for each test run. They can also be used to record tests, and usually support scripting languages or GUI-based configuration for parameterization of data and other customization in the tests.

Test Harness/Unit Test Framework Tools (D)
A unit test harness or framework facilitates the testing of components or parts of a system by simulating the environment in which that test object will run, through the provision of mock objects as stubs or drivers.
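
A minimal sketch using Python’s built-in unittest framework (the component and its collaborator are invented for illustration):

# Hypothetical unit test harness sketch; names are invented.
import unittest
from unittest.mock import Mock

def fetch_greeting(gateway, user_id):
    """Component under test: formats a name obtained from a collaborator."""
    name = gateway.lookup_name(user_id)
    return f"Hello, {name}!"

class FetchGreetingTest(unittest.TestCase):
    def test_greets_user_by_name(self):
        # The mock object stands in (as a stub) for the real gateway,
        # simulating the environment in which the test object runs.
        gateway = Mock()
        gateway.lookup_name.return_value = "Ada"
        self.assertEqual(fetch_greeting(gateway, 42), "Hello, Ada!")

if __name__ == "__main__":
    unittest.main()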

Test Comparators
Test comparators determine differences between files, databases or test results. Test execution tools typically include dynamic comparators, but post-execution comparison may be done by a separate comparison tool. A test comparator may use a test oracle, especially if it is automated.

separatee comparison n tool. A test comparator may use a teest oracle, especially if itt is automate
ed.

Coverag ge Measurem ment Tools (D)


These toools, through intrusive or non-intrusive
e means, meeasure the pe ercentage off specific types of
uctures that have been exercised
code stru e g., statements, branchess or decisionss, and module or
(e.g
function calls) by a set of tests.

Security y Testing To ools


These toools are used d to evaluate
e the securityy characteristtics of softwa
are. This inccludes evaluaating
the abilitty of the softw
ware to prote
ect data conffidentiality, in
ntegrity, authentication, a
authorization,,
availability, and non--repudiation. Security too ols are mostly y focused on n a particular technology,
platform, and purposse.

6.1.7 Tool Sup


pport for Performan
P nce and Mo
onitoring (K1)
Dynamic c Analysis Tools
T (D)
Dynamicc analysis too
ols find defeccts that are evident
e only when
w softwa
are is executiing, such as time
depende encies or memory leaks. They are typ pically used in componennt and compoonent integra ation
testing, and
a when tessting middlew ware.

Perform mance Testin ng/Load Tes sting/Stress Testing Too ols


Performa ance testing tools monitoor and report on how a sy ystem behaves under a vvariety of sim mulated
usage co onditions in terms
t of num
mber of concu urrent users, their ramp-uup pattern, frrequency and d
relative percentage
p o transaction
of ns. The simuulation of load
d is achievedd by means o of creating viirtual
users caarrying out a selected set of transactio
ons, spread across
a variou
us test mach hines commo only
known asa load generrators.

Monitorring Tools
Monitorin
ng tools conttinuously anaalyze, verify and report on usage of specific
s syste
em resources
s, and
give warrnings of posssible service
e problems.

6.1.8 Tool Sup


pport for Specific
S Te
esting Nee
eds (K1)
Data Qu uality Assessment
a the center of some pro
Data is at ojects such as data conve ersion/migrattion projects and applicattions
like data
a warehousess and its attriibutes can vaary in terms of criticality and
a volume. In such conttexts,
tools neeed to be emp
ployed for daata quality asssessment to o review and verify the daata conversio
on and
Version 2011
2 Page 60 of 78 31-Ma
ar-2011
© Internationa
al Software Testing Qualifications
Q Board
International
Certiffied Teste
er Software Te esting
Founda
ation Level Syyllabus Q
Qualifications
s Board

migration
n rules to ensure that the
e processed data is corre
ect, complete
e and complie
es with a pre
e-
defined context-spec
c cific standard
d.

Other tessting tools exxist for usabiility testing.

Version 2011
2 Page 61 of 78 31-Ma
ar-2011
© Internationa
al Software Testing Qualifications
Q Board
International
Certiffied Teste
er Software Te esting
Founda
ation Level Syyllabus Q
Qualifications
s Board

6.2 Effective Use of Tools: Potential Benefits and Risks (K2)  20 minutes

Terms
Data-driven testing, keyword-driven testing, scripting language

6.2.1 Potential Benefits and Risks of Tool Support for Testing (for all tools) (K2)
Simply purchasing or leasing a tool does not guarantee success with that tool. Each type of tool may require additional effort to achieve real and lasting benefits. There are potential benefits and opportunities with the use of tools in testing, but there are also risks.

Potential benefits of using tools include:
o Repetitive work is reduced (e.g., running regression tests, re-entering the same test data, and checking against coding standards)
o Greater consistency and repeatability (e.g., tests executed by a tool in the same order with the same frequency, and tests derived from requirements)
o Objective assessment (e.g., static measures, coverage)
o Ease of access to information about tests or testing (e.g., statistics and graphs about test progress, incident rates and performance)

Risks of using tools include:
o Unrealistic expectations for the tool (including functionality and ease of use)
o Underestimating the time, cost and effort for the initial introduction of a tool (including training and external expertise)
o Underestimating the time and effort needed to achieve significant and continuing benefits from the tool (including the need for changes in the testing process and continuous improvement of the way the tool is used)
o Underestimating the effort required to maintain the test assets generated by the tool
o Over-reliance on the tool (replacement for test design or use of automated testing where manual testing would be better)
o Neglecting version control of test assets within the tool
o Neglecting relationships and interoperability issues between critical tools, such as requirements management tools, version control tools, incident management tools, defect tracking tools and tools from multiple vendors
o Risk of tool vendor going out of business, retiring the tool, or selling the tool to a different vendor
o Poor response from vendor for support, upgrades, and defect fixes
o Risk of suspension of an open-source / free tool project
o Unforeseen risks, such as the inability to support a new platform

6.2.2 Special Considerations for Some Types of Tools (K1)
Test Execution Tools
Test execution tools execute test objects using automated test scripts. This type of tool often requires significant effort in order to achieve significant benefits.


Capturing tests by recording the actions of a manual tester seems attractive, but this approach does not scale to large numbers of automated test scripts. A captured script is a linear representation with specific data and actions as part of each script. This type of script may be unstable when unexpected events occur.

A data-driven testing approach separates out the test inputs (the data), usually into a spreadsheet, and uses a more generic test script that can read the input data and execute the same test script with different data. Testers who are not familiar with the scripting language can then create the test data for these predefined scripts.

There are other techniques employed in data-driven approaches, where instead of hard-coded data combinations placed in a spreadsheet, data is generated using algorithms based on configurable parameters at run time and supplied to the application. For example, a tool may use an algorithm which generates a random user ID, and for repeatability in pattern, a seed is employed for controlling randomness.

In a keyword-driven testing approach, the spreadsheet contains keywords describing the actions to be taken (also called action words), and test data. Testers (even if they are not familiar with the scripting language) can then define tests using the keywords, which can be tailored to the application being tested.
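
A minimal sketch of both approaches (the table contents and the functions are invented for illustration):

# Hypothetical sketch of data-driven and keyword-driven test scripts.

# Data-driven: one generic script, many input rows (as if read from a spreadsheet).
login_data = [
    # (user, password, expected_outcome)
    ("alice", "correct-pw", "welcome"),
    ("alice", "wrong-pw",   "error"),
    ("",      "any-pw",     "error"),
]

def run_login_test(user, password, expected, login):
    assert login(user, password) == expected  # same script, different data

# Keyword-driven: rows pair an action word with its data; a small interpreter
# maps keywords to implementation functions.
keyword_table = [
    ("open_app",   []),
    ("login",      ["alice", "correct-pw"]),
    ("check_text", ["welcome"]),
]

def run_keyword_test(table, actions):
    for keyword, args in table:
        actions[keyword](*args)   # testers compose tests from keywords only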

Technical expertise in the scripting language is needed for all approaches (either by testers or by specialists in test automation).

Regardless of the scripting technique used, the expected results for each test need to be stored for later comparison.

Static Analysis Tools
Static analysis tools applied to source code can enforce coding standards, but if applied to existing code may generate a large quantity of messages. Warning messages do not stop the code from being translated into an executable program, but ideally should be addressed so that maintenance of the code is easier in the future. A gradual implementation of the analysis tool with initial filters to exclude some messages is an effective approach.

Test Management Tools
Test management tools need to interface with other tools or spreadsheets in order to produce useful information in a format that fits the needs of the organization.


6.3 Introducing a Tool into an Organization (K1)  15 minutes

Terms
No specific terms.

Background
The main considerations in selecting a tool for an organization include:
o Assessment of organizational maturity, strengths and weaknesses and identification of opportunities for an improved test process supported by tools
o Evaluation against clear requirements and objective criteria
o A proof-of-concept, by using a test tool during the evaluation phase to establish whether it performs effectively with the software under test and within the current infrastructure, or to identify changes needed to that infrastructure to effectively use the tool
o Evaluation of the vendor (including training, support and commercial aspects) or service support suppliers in case of non-commercial tools
o Identification of internal requirements for coaching and mentoring in the use of the tool
o Evaluation of training needs considering the current test team’s test automation skills
o Estimation of a cost-benefit ratio based on a concrete business case

Introducing the selected tool into an organization starts with a pilot project, which has the following objectives:
o Learn more detail about the tool
o Evaluate how the tool fits with existing processes and practices, and determine what would need to change
o Decide on standard ways of using, managing, storing and maintaining the tool and the test assets (e.g., deciding on naming conventions for files and tests, creating libraries and defining the modularity of test suites)
o Assess whether the benefits will be achieved at reasonable cost

Success factors for the deployment of the tool within an organization include:
o Rolling out the tool to the rest of the organization incrementally
o Adapting and improving processes to fit with the use of the tool
o Providing training and coaching/mentoring for new users
o Defining usage guidelines
o Implementing a way to gather usage information from the actual use
o Monitoring tool use and benefits
o Providing support for the test team for a given tool
o Gathering lessons learned from all teams

References
6.2.2 Buwalda, 2001, Fewster, 1999
6.3 Fewster, 1999

2 Page 64 of 78 31-Ma
ar-2011
© Internationa
al Software Testing Qualifications
Q Board
International
Certiffied Teste
er Software Te esting
Founda
ation Level Syyllabus Q
Qualifications
s Board

7. Referen
nces
Stand
dards
ISTQB Glossary
G of Terms
T used in Software Testing
T Versiion 2.1

[CMMI] Chrissis,
C M.B
B., Konrad, M.
M and Shrumm, S. (2004) CMMI, Guidelines for Pro
ocess Integrration
and Prod
duct Improveement, Addisson Wesley: Reading, MAA
See Secction 2.1
[IEEE Sttd 829-1998] IEEE Std 82 29™ (1998) IEEE Standa ware Test Doccumentation,
ard for Softw
See Secctions 2.3, 2.4
4, 4.1, 5.2, 5.3,
5 5.5, 5.6
[IEEE 10
028] IEEE Sttd 1028™ (20
008) IEEE Standard for Software
S Revviews and Au
udits,
See Secction 3.2
[IEEE 12
2207] IEEE 12207/ISO/IE
1 EC 12207-20
008, Software
e life cycle processes,
See Secction 2.1
[ISO 912
26] ISO/IEC 9126-1:2001
9 1, Software Engineering
E – Software Product
P Quality,
See Secction 2.3

Bookss
[Beizer, 1990] Beizerr, B. (1990) Software
S Tessting Techniq
ques (2nd ed Nostrand Reinhold:
dition), Van N
Boston
See Secctions 1.2, 1.3
3, 2.3, 4.2, 4.3,
4 4.4, 4.6
[Black, 2001]
2 Black, R. (2001) Ma anaging the Testing Proccess (3rd ediition), John W
Wiley & Sons
s: New
York
See Secctions 1.1, 1.2
2, 1.4, 1.5, 2.3,
2 2.4, 5.1, 5.2, 5.3, 5.5,, 5.6
[Buwaldaa, 2001] Buw a (2001) Inttegrated Test Design and
walda, H. et al. d Automation
n, Addison Wesley:
W
Reading, MA
See Secction 6.2
[Copelan
nd, 2004] Co opeland, L. (22004) A Pracctitioner’s Gu
uide to Softw
ware Test Dessign, Artech
House: Norwood,
N MAA
See Secctions 2.2, 2.3
3, 4.2, 4.3, 4.4,
4 4.6
[Craig, 2002]
2 Craig, Rick
R D. and Jaskiel,
J Steffan P. (2002)) Systematic Software Te
esting, Artech
h
House: Norwood,
N MA A
See Secctions 1.4.5, 2.1.3,
2 2.4, 4.1, 5.2.5, 5.3, 5.4
[Fewsterr, 1999] Fewster, M. and Graham, D. (1999) Softw
ware Test Au
utomation, A
Addison Weslley:
Reading, MA
See Secctions 6.2, 6.3
3
om and Graham, Dorothyy (1993) Software Inspection, Addison
[Gilb, 1993]: Gilb, To n Wesley:
Reading, MA
See Secctions 3.2.2, 3.2.4
3
[Hetzel, 1988] Hetzel, W. (1988) Complete Guide
G to Softw
ware Testing
g, QED: Welle
esley, MA
See Secctions 1.3, 1.4
4, 1.5, 2.1, 2.2,
2 2.3, 2.4, 4.1,
4 5.1, 5.3
[Kaner, 2002]
2 Kaner,, C., Bach, J. and Petttico
ord, B. (2002
2) Lessons Learned
L in So
oftware Testiing,
John Willey & Sons: New
N York
See Secctions 1.1, 4.5
5, 5.2


[Myers, 1979] Myers, Glenford J. (1979) The Art of Software Testing, John Wiley & Sons: New York
See Sections 1.2, 1.3, 2.2, 4.3
[van Veenendaal, 2004] van Veenendaal, E. (ed.) (2004) The Testing Practitioner (Chapters 6, 8, 10), UTN Publishers: The Netherlands
See Sections 3.2, 3.3


8. Appendix A – Syllabus Background
History of this Document
This document was prepared between 2004 and 2011 by a Working Group composed of members appointed by the International Software Testing Qualifications Board (ISTQB). It was initially reviewed by a selected review panel, and then by representatives drawn from the international software testing community. The rules used in the production of this document are shown in Appendix C.
This document is the syllabus for the International Foundation Certificate in Software Testing, the first-level international qualification approved by the ISTQB (www.istqb.org).

Objectives of the Foundation Certificate Qualification
o To gain recognition for testing as an essential and professional software engineering specialization
o To provide a standard framework for the development of testers' careers
o To enable professionally qualified testers to be recognized by employers, customers and peers, and to raise the profile of testers
o To promote consistent and good testing practices within all software engineering disciplines
o To identify testing topics that are relevant and of value to industry
o To enable software suppliers to hire certified testers and thereby gain commercial advantage over their competitors by advertising their tester recruitment policy
o To provide an opportunity for testers and those with an interest in testing to acquire an internationally recognized qualification in the subject

Objectives of the International Qualification (adapted from ISTQB meeting at Sollentuna, November 2001)
o To be able to compare testing skills across different countries
o To enable testers to move across country borders more easily
o To enable multinational/international projects to have a common understanding of testing issues
o To increase the number of qualified testers worldwide
o To have more impact/value as an internationally-based initiative than from any country-specific approach
o To develop a common international body of understanding and knowledge about testing through the syllabus and terminology, and to increase the level of knowledge about testing for all participants
o To promote testing as a profession in more countries
o To enable testers to gain a recognized qualification in their native language
o To enable sharing of knowledge and resources across countries
o To provide international recognition of testers and this qualification due to participation from many countries

Entry Requirements for this Qualification
The entry criterion for taking the ISTQB Foundation Certificate in Software Testing examination is that candidates have an interest in software testing. However, it is strongly recommended that candidates also:
o Have at least a minimal background in either software development or software testing, such as six months' experience as a system or user acceptance tester or as a software developer


o Take a course that has been accredited to ISTQB standards (by one of the ISTQB-recognized National Boards).

Background and History of the Foundation Certificate in Software Testing
The independent certification of software testers began in the UK with the British Computer Society's Information Systems Examination Board (ISEB), when a Software Testing Board was set up in 1998 (www.bcs.org.uk/iseb). In 2002, ASQF in Germany began to support a German tester qualification scheme (www.asqf.de). This syllabus is based on the ISEB and ASQF syllabi; it includes reorganized, updated and additional content, and the emphasis is directed at topics that will provide the most practical help to testers.
An existing Foundation Certificate in Software Testing (e.g., from ISEB, ASQF or an ISTQB-recognized National Board) awarded before this International Certificate was released will be deemed equivalent to the International Certificate. The Foundation Certificate does not expire and does not need to be renewed. The date it was awarded is shown on the Certificate.
Within each participating country, local aspects are controlled by a national ISTQB-recognized Software Testing Board. Duties of National Boards are specified by the ISTQB, but are implemented within each country. The duties of the country boards are expected to include accreditation of training providers and the setting of exams.


9. Appendix B – Learning Objectives/Cognitive Level of Knowledge
The following learning objectives are defined as applying to this syllabus. Each topic in the syllabus will be examined according to the learning objective for it.
bjective for it..

Level 1: Remember (K1


1)
The cand
didate will re
ecognize, remmember and recall a term m or concept..
Keywordds: Rememb ber, retrieve, recall, recog
gnize, know

Examplee
Can reco
ognize the deefinition of “fa
ailure” as:
o “Non n-delivery of service to ann end user or any other stakeholder”
s or
o “Actuual deviation
n of the comp s expected delivery, service or result””
ponent or sysstem from its

Level 2: Understand (K2)
The candidate can select the reasons or explanations for statements related to the topic, and can summarize, compare, classify, categorize and give examples for the testing concept.
Keywords: Summarize, generalize, abstract, classify, compare, map, contrast, exemplify, interpret, translate, represent, infer, conclude, categorize, construct models

Examples
Can explain the reason why tests should be designed as early as possible:
o To find defects when they are cheaper to remove
o To find the most important defects first

Can explain the similarities and differences between integration and system testing:
o Similarities: testing more than one component, and can test non-functional aspects
o Differences: integration testing concentrates on interfaces and interactions, and system testing concentrates on whole-system aspects, such as end-to-end processing

Level 3: Apply (K3)
The candidate can select the correct application of a concept or technique and apply it to a given context.
Keywords: Implement, execute, use, follow a procedure, apply a procedure

Examples
o Can identify boundary values for valid and invalid partitions (illustrated in the sketch below)
o Can select test cases from a given state transition diagram in order to cover all transitions
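
To make the boundary value example above concrete, the following minimal sketch in Python assumes a hypothetical input field whose valid partition is the integers 1 to 100, and uses the two-value convention (each boundary plus its nearest invalid neighbor); the field, range and convention are illustrative, not prescribed by this syllabus.

    def boundary_values(valid_min, valid_max):
        """Boundary values for the valid partition [valid_min, valid_max],
        using the two-value convention: each boundary plus its nearest
        neighbor in the adjacent invalid partition."""
        return {"valid": [valid_min, valid_max],
                "invalid": [valid_min - 1, valid_max + 1]}

    # Hypothetical example: an input field accepting integers from 1 to 100
    print(boundary_values(1, 100))   # {'valid': [1, 100], 'invalid': [0, 101]}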

Level 4: Analyze (K4)
The candidate can separate information related to a procedure or technique into its constituent parts for better understanding, and can distinguish between facts and inferences. Typical application is to analyze a document, software or project situation and propose appropriate actions to solve a problem or task.
Keywords: Analyze, organize, find coherence, integrate, outline, parse, structure, attribute, deconstruct, differentiate, discriminate, distinguish, focus, select


Examples
o Analyze product risks and propose preventive and corrective mitigation activities
o Describe which portions of an incident report are factual and which are inferred from results

Reference
(For the cognitive levels of learning objectives)
Anderson, L. W. and Krathwohl, D. R. (eds) (2001) A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives, Allyn & Bacon


10. Appendix C – Rules Applied to the ISTQB Foundation Syllabus
The rules listed here were used in the development and review of this syllabus. (A “TAG” is shown after each rule as a shorthand abbreviation of the rule.)

10.1.1 General Rules
SG1. The syllabus should be understandable and absorbable by people with zero to six months (or more) experience in testing. (6-MONTH)
SG2. The syllabus should be practical rather than theoretical. (PRACTICAL)
SG3. The syllabus should be clear and unambiguous to its intended readers. (CLEAR)
SG4. The syllabus should be understandable to people from different countries, and easily translatable into different languages. (TRANSLATABLE)
SG5. The syllabus should use American English. (AMERICAN-ENGLISH)

10.1.2 Current Content
SC1. The syllabus should include recent testing concepts and should reflect current best practices in software testing where this is generally agreed. The syllabus is subject to review every three to five years. (RECENT)
SC2. The syllabus should minimize time-related issues, such as current market conditions, to enable it to have a shelf life of three to five years. (SHELF-LIFE)

10.1.3 Learning Objectives
LO1. Learning objectives should distinguish between items to be recognized/remembered (cognitive level K1), items the candidate should understand conceptually (K2), items the candidate should be able to practice/use (K3), and items the candidate should be able to use to analyze a document, software or project situation in context (K4). (KNOWLEDGE-LEVEL)
LO2. The description of the content should be consistent with the learning objectives. (LO-CONSISTENT)
LO3. To illustrate the learning objectives, sample exam questions for each major section should be issued along with the syllabus. (LO-EXAM)

10.1.4 Overall Structure
ST1. The structure of the syllabus should be clear and allow cross-referencing to and from other parts, from exam questions and from other relevant documents. (CROSS-REF)
ST2. Overlap between sections of the syllabus should be minimized. (OVERLAP)
ST3. Each section of the syllabus should have the same structure. (STRUCTURE-CONSISTENT)
ST4. The syllabus should contain version, date of issue and page number on every page. (VERSION)
ST5. The syllabus should include a guideline for the amount of time to be spent in each section (to reflect the relative importance of each topic). (TIME-SPENT)

References
SR1. Sources and references will be given for concepts in the syllabus to help training providers find out more information about the topic. (REFS)
SR2. Where there are not readily identified and clear sources, more detail should be provided in the syllabus. For example, definitions are in the Glossary, so only the terms are listed in the syllabus. (NON-REF DETAIL)


Sources of Information
Terms used in the syllabus are defined in the ISTQB Glossary of Terms used in Software Testing. A version of the Glossary is available from ISTQB.

A list of recommended books on software testing is also issued in parallel with this syllabus. The main book list is part of the References section.


11. Appendix D – Notice to Training Providers
Each major subject heading in the syllabus is assigned an allocated time in minutes. The purpose of this is both to give guidance on the relative proportion of time to be allocated to each section of an accredited course, and to give an approximate minimum time for the teaching of each section. Training providers may spend more time than is indicated, and candidates may spend more time again in reading and research. A course curriculum does not have to follow the same order as the syllabus.
The syllabus contains references to established standards, which must be used in the preparation of training material. Each standard used must be the version quoted in the current version of this syllabus. Other publications, templates or standards not referenced in this syllabus may also be used and referenced, but will not be examined.
All K3 and K4 Learning Objectives require a practical exercise to be included in the training materials.
materialss.


12. Appendix E – Release Notes

Release 2010
1. Changes to Learning Objectives (LO) include some clarification
   a. Wording changed for the following LOs (content and level of LO remain unchanged): LO-1.2.2, LO-1.3.1, LO-1.4.1, LO-1.5.1, LO-2.1.1, LO-2.1.3, LO-2.4.2, LO-4.1.3, LO-4.2.1, LO-4.2.2, LO-4.3.1, LO-4.3.2, LO-4.3.3, LO-4.4.1, LO-4.4.2, LO-4.4.3, LO-4.6.1, LO-5.1.2, LO-5.2.2, LO-5.3.2, LO-5.3.3, LO-5.5.2, LO-5.6.1, LO-6.1.1, LO-6.2.2, LO-6.3.2.
   b. LO-1.1.5 has been reworded and upgraded to K2, because a comparison of defect-related terms can be expected.
   c. LO-1.2.3 (K2) has been added. The content was already covered in the 2007 syllabus.
   d. LO-3.1.3 (K2) now combines the content of LO-3.1.3 and LO-3.1.4.
   e. LO-3.1.4 has been removed from the 2010 syllabus, as it is partially redundant with LO-3.1.3.
   f. LO-3.2.1 has been reworded for consistency with the 2010 syllabus content.
   g. LO-3.3.2 has been modified, and its level has been changed from K1 to K2, for consistency with LO-3.1.2.
   h. LO-4.4.4 has been modified for clarity, and has been changed from a K3 to a K4. Reason: LO-4.4.4 had already been written in a K4 manner.
   i. LO-6.1.2 (K1) was dropped from the 2010 syllabus and was replaced with LO-6.1.3 (K2). There is no LO-6.1.2 in the 2010 syllabus.
2. Consistent use of the term test approach according to the definition in the glossary. The term test strategy will not be required as a term to recall.
3. Chapter 1.4 now contains the concept of traceability between test basis and test cases.
4. Chapter 2.x now contains test objects and test basis.
5. Re-testing is now the main term in the glossary instead of confirmation testing.
6. The aspect of data quality and testing has been added at several locations in the syllabus: data quality and risk in Chapters 2.2, 5.5 and 6.1.8.
7. Chapter 5.2.3 Entry Criteria has been added as a new subchapter. Reason: consistency with Exit Criteria (entry criteria added to LO-5.2.9).
8. Consistent use of the terms test strategy and test approach with their definition in the glossary.
9. Chapter 6.1 shortened, because the tool descriptions were too large for a 45-minute lesson.
10. IEEE Std 829:2008 has been released. This version of the syllabus does not yet consider this new edition. Section 5.2 refers to the document Master Test Plan. The content of the Master Test Plan is covered by the concept that the document “Test Plan” covers different levels of planning: test plans for the test levels can be created, as well as a test plan at the project level covering multiple test levels; the latter is named Master Test Plan in this syllabus and in the ISTQB Glossary.
11. Code of Ethics has been moved from the CTAL to the CTFL.

Release 2011
Changes made with the “maintenance release” 2011:
1. General: Working Party replaced by Working Group
2. Replaced post-conditions by postconditions in order to be consistent with the ISTQB Glossary 2.1.
3. First occurrence: ISTQB replaced by ISTQB®
4. Introduction to this Syllabus: Descriptions of Cognitive Levels of Knowledge removed, because this was redundant with Appendix B.
5. Section 1.6: Because the intent was not to define a Learning Objective for the “Code of Ethics”, the cognitive level for the section has been removed.
6. Sections 2.2.1, 2.2.2, 2.2.3, 2.2.4 and 3.2.3: Fixed formatting issues in lists.
7. Section 2.2.2: The word “failure” was not correct in “…isolate failures to a specific component…”; it has therefore been replaced with “defect” in that sentence.
8. Section 2.3: Corrected formatting of the bullet list of test objectives related to test terms in section Test Types (K2).
9. Section 2.3.4: Updated description of debugging to be consistent with Version 2.1 of the ISTQB Glossary.
10. Section 2.4: Removed the word “extensive” from “includes extensive regression testing”, because the extent depends on the change (size, risks, value, etc.), as stated in the next sentence.
11. Section 3.2: The word “including” has been removed to clarify the sentence.
12. Section 3.2.1: Because the activities of a formal review had been incorrectly formatted, the review process had 12 main activities instead of the intended six. It has been changed back to six, which makes this section compliant with the 2007 Syllabus and the ISTQB Advanced Level Syllabus 2007.
13. Section 4: The word “developed” was replaced by “defined”, because test cases get defined, not developed.
14. Section 4.2: Text changed to clarify how black-box and white-box testing could be used in conjunction with experience-based techniques.
15. Section 4.3.5: Text changed from “…between actors, including users and the system…” to “…between actors (users or systems)…”.
16. Section 4.3.5: Alternative path replaced by alternative scenario.
17. Section 4.4.2: In order to clarify the term branch testing in the text of Section 4.4, a sentence clarifying the focus of branch testing has been changed.
18. Sections 4.5 and 5.2.6: The term “experienced-based” testing has been replaced by the correct term “experience-based”.
19. Section 6.1: Heading “6.1.1 Understanding the Meaning and Purpose of Tool Support for Testing (K2)” replaced by “6.1.1 Tool Support for Testing (K2)”.
20. Section 7 / Books: The 3rd edition of [Black, 2001] is listed, replacing the 2nd edition.
21. Appendix D: Chapters requiring exercises have been replaced by the generic requirement that all Learning Objectives at K3 and higher require exercises. This is a requirement specified in the ISTQB Accreditation Process (Version 1.26).
22. Appendix E: The changed learning objectives between Versions 2007 and 2010 are now correctly listed.


