
M.Sc. Information Technology
SOFTWARE QUALITY ASSURANCE AND TESTING
Unit I
Principles of Testing; Software Development Life Cycle Models
Unit II
Testing Fundamentals - 1: White box testing; Integration testing; System and acceptance testing.
Unit III
Testing Fundamentals - 2: Specialized testing; Performance testing; Regression testing; Testing of object-oriented systems; Usability and accessibility testing.
Unit IV
Test Planning, Management, Execution and Reporting
Unit V
Software Test Automation; Test Metrics and Measurements.
Text Book:
1. Software Testing - Srinivasan Desikan, Gopalaswamy Ramesh, Pearson Education, 2006.
References:
1. Introducing Software Testing - Louise Tamres, Addison Wesley Publications, First Edition.
2. Software Testing - Ron Patton, SAMS Techmedia, Indian Edition, 2001.
3. Software Quality: Producing Practical, Consistent Software - Mordechai Ben-Menachem, Garry S. Marliss, Thomson Learning, 2003.
UNIT - I
Structure
1.1 Objectives
1.2 Introduction
1.3 Software Testing Fundamentals
1.3.1 Software Chaos
1.3.2 Criteria for Project Success
1.4 Testing Principles
1.5 Software Development Life Cycle Models
1.5.1 Big-Bang
1.5.2 Code and Fix
1.5.3 Waterfall
1.5.4 Prototype Model
1.5.5 The RAD Model
1.6 Evolutionary Software Process Models
1.6.1 The Incremental Model
1.6.2 The Spiral Model
1.6.3 The WINWIN Spiral Model
1.6.4 The Concurrent Development Model
1.7 Summary
1.8 Check Your Progress
1.1 Objectives
• To know the testing fundamentals and objectives
• To learn the principles of testing
• To understand the various life cycle models for software
1.2 Introduction
Testing presents an interesting anomaly for the software engineer. During earlier software engineering activities, the engineer attempts to build software from an abstract concept into a tangible product. Now comes testing. The engineer creates a series of test cases that are intended to "demolish" the software that has been built. In fact, testing is the one step in the software process that could be viewed (psychologically, at least) as destructive rather than constructive. Software engineers are by nature constructive people. Testing requires that the developer discard preconceived notions of the "correctness" of the software just developed and overcome the conflict of interest that occurs when errors are uncovered.
Beizer describes this situation effectively when he states:
There's a myth that if we were really good at programming, there would be no bugs to catch. If only we could really concentrate, if only everyone used structured programming, top-down design, decision tables, if programs were written in SQUISH, if we had the right silver bullets, then there would be no bugs. Therefore, testing and test case design is an admission of failure, which instills a goodly dose of guilt. And the tedium of testing is just punishment for our errors.
Internal program logic is exercised using "white box" test case design techniques; software requirements are exercised using "black box" test case design techniques. In both cases, the intent is to find the maximum number of errors with the minimum amount of effort and time.
What is the work product? A set of test cases designed to exercise both internal logic and external requirements is designed and documented, expected results are defined, and actual results are recorded.
How do I ensure that I've done it right? When you begin testing, change your point of view. Try hard to "break" the software!
1.3 Software Testing Fundamentals
The fundamental principles of testing are as follows:
1. The goal of testing is to find defects before customers find them.
2. Exhaustive testing is not possible; program testing can only show the presence of defects, never their absence.
3. Testing applies all through the software life cycle and is not an end-of-cycle activity.
4. Understand the reason behind the test.
5. Test the test first.
6. Tests develop immunity and have to be revised constantly.
7. Defects occur in convoys or clusters, and testing should focus on these clusters.
8. Testing encompasses defect prevention.
9. Testing is a fine balance of defect prevention and defect detection.
10. Intelligent and well-planned automation is key to realizing the benefits of testing.
11. Testing requires talented, committed people who believe in themselves and work in teams.
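Principle 2 can be made concrete with a little arithmetic. The sketch below (an illustration added here, not part of the original text) counts the input combinations for a trivial function that adds two 32-bit integers, showing why exhaustive testing is infeasible even for the simplest programs:

```python
# Why exhaustive testing is impossible: even a trivial function that adds two
# 32-bit integers has far more input combinations than could ever be executed.
def exhaustive_test_count(bits_per_input: int, num_inputs: int) -> int:
    """Number of distinct input combinations for the given input space."""
    return (2 ** bits_per_input) ** num_inputs

combinations = exhaustive_test_count(32, 2)  # two 32-bit operands
print(combinations)                          # 2**64 = 18446744073709551616

# Even at one billion tests per second, running them all would take centuries:
seconds = combinations / 1_000_000_000
years = seconds / (60 * 60 * 24 * 365)
print(round(years))                          # about 585 years
```

Path permutations through branching code grow even faster than this raw input count, which is why test design focuses on covering classes of inputs rather than all inputs.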
Software Testing Techniques
Should testing instill guilt? Is testing really destructive? The answer to these questions is "No!" However, the objectives of testing are somewhat different than we might expect.
Testing Objectives
In an excellent book on software testing, Glen Myers states a number of rules that can serve well as testing objectives:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as-yet undiscovered error.
3. A successful test is one that uncovers an as-yet-undiscovered error.
These objectives imply a dramatic change in viewpoint. They move counter to the commonly held view that a successful test is one in which no errors are found. Our objective is to design tests that systematically uncover different classes of errors and to do so with a minimum amount of time and effort. If testing is conducted successfully (according to the objectives stated previously), it will uncover errors in the software. As a secondary benefit, testing demonstrates that software functions appear to be working according to specification, and that behavioral and performance requirements appear to have been met. In addition, data collected as testing is conducted provide a good indication of software reliability and some indication of software quality as a whole. But testing cannot show the absence of errors and defects; it can show only that software errors and defects are present. It is important to keep this (rather gloomy) statement in mind as testing is being conducted.
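Myers' first rule, executing a program with the intent of finding an error, can be illustrated with a small sketch (the function and its bug are hypothetical, not from the text). A naive leap-year check that only tests divisibility by 4 is wrong for century years, and a test case aimed at that failure class uncovers the defect:

```python
# A test written "with the intent of finding an error" (Myers' first rule).
# This hypothetical implementation forgets the century rule of the Gregorian
# calendar, so a test that probes the year 1900 uncovers the defect.
def is_leap_year(year: int) -> bool:
    return year % 4 == 0  # BUG: ignores the 100-year and 400-year rules

def test_century_year():
    # 1900 is NOT a leap year; the naive "divisible by 4" check gets it wrong.
    return is_leap_year(1900)  # correct answer would be False

print(test_century_year())  # True, i.e. the test has uncovered a defect
```

By Myers' definition this is a successful test: it was designed around a likely failure class (century years) and it found an as-yet-undiscovered error.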
It is easy to take software for granted and not really appreciate how it has infiltrated our daily lives. Most of us now can't go a day without logging on to the Internet and checking our email. We rely on overnight packages, long-distance phone service, and cutting-edge medical treatments.
1.3.1 Software Chaos
Software is everywhere. However, it is written by people, so it is not perfect, as the following examples show:
Disney's Lion King, 1994-1995
In the fall of 1994, the Disney Company released its first multimedia CD-ROM game for children. On December 26, customer support engineers were swamped with calls from angry parents who could not get the software to work. It turned out that Disney had failed to properly test the software on the many different PC models available on the market.
Other infamous software error case studies are listed below:
Intel Pentium Floating-Point Division Bug, 1994
NASA Mars Polar Lander, 1999
Patriot Missile Defense System, 1991
The Y2K Bug, circa 1974
What Is a Bug?
We have just read examples of what happens when software fails. In these instances, it was obvious that the software did not operate as intended. Problem, error, and bug are probably the most generic terms used.
Why do bugs occur?
The number-one cause of software bugs is the specification. There are several reasons why specifications are the largest bug producers. In many cases the specification is not written. Other reasons may be that the specification is not thorough enough, is constantly changing, or is not communicated well to the entire development team. Planning software is vitally important; if it is not done correctly, bugs will be created.
The next largest source of bugs is the design. Coding errors may be more familiar to you if you are a programmer.
The Cost of Bugs
Software does not just magically appear. There is usually a planned, methodical development process used to create it. From its inception, through planning, programming, and testing, to its use by the public, there is the potential for bugs to be found. The cost of fixing a bug increases dramatically over time.
What exactly does a software tester do?
The goal of a software tester is:
To find bugs
To find the bugs as early as possible
And to make sure they get fixed.
It has been said, "If you do not know where you are going, all roads lead there." Traditionally, many IT organizations annually develop a list of improvements to incorporate into their operations without establishing a goal. Using this approach, the IT organization can declare "victory" any time it wants. This lesson will help you understand the importance of following a well-defined process for becoming a world-class software testing organization. It will help you define your strengths and deficiencies, your staff competencies and deficiencies, and areas of user dissatisfaction.
1.3.2 Criteria for Project Success
The Three-Step Process to Becoming a World-Class Testing Organization
The roadmap to becoming a world-class software testing organization is a simple three-step process, as follows:
1. Define or adopt a world-class software testing model.
2. Determine your organization's current level of software testing capabilities, competencies, and user satisfaction.
3. Develop and implement a plan to upgrade from your current capabilities, competencies, and user satisfaction to those in the world-class software testing model.
This three-step process requires you to compare your current capabilities, competencies, and user satisfaction against those of the world-class software testing model. This assessment will enable you to develop a baseline of your organization's performance. The plan that you develop will, over time, move that baseline from its current level of performance to a world-class level. Understanding the model for a world-class software testing organization and then comparing your organization against it will provide you with a plan for using the remainder of the material in this book.
Software testing is an integral part of the software development process, which comprises the following four components (see Figure 1):
1. Plan (P): Devise a plan. Define your objective and determine the strategy and supporting methods to achieve it. You should base the plan on an assessment of your current situation, and the strategy should clearly focus on the strategic initiatives/key units that will drive your improvement plan.
2. Do (D): Execute the plan. Create the conditions and perform the necessary training to execute the plan. Make sure everyone thoroughly understands the objectives and the plan. Teach workers the procedures and skills they need to fulfill the plan and thoroughly understand the job. Then perform the work according to these procedures.
3. Check (C): Check the results. Check to determine whether work is progressing according to the plan and whether the expected results are being obtained. Check for performance of the set procedures, changes in conditions, or abnormalities that may appear. As often as possible, compare the results of the work with the objectives.
4. Act (A): Take the necessary action. If your checkup reveals that the work is not being performed according to the plan or that results are not what you anticipated, devise measures to take appropriate action.
Fig. 1 The four components of a software development process.
Testing involves only the "check" component of the plan-do-check-act (PDCA) cycle. The software development team is responsible for the three remaining components: the development team plans the project and builds the software (the "do" component), and the testers check to determine whether the software meets the needs of the customers and users. If it does not, the testers report defects to the development team. It is the development team that determines whether the uncovered defects are to be corrected. The role of testing is to fulfill the check responsibilities assigned to the testers; it is not to determine whether the software can be placed into production. That is the responsibility of the customers, users, and development team.
1.4 Testing Principles
Before applying methods to design effective test cases, a software engineer must understand the basic principles that guide software testing. Davis suggests a set of testing principles that have been adapted for use in this book:
• All tests should be traceable to customer requirements. As we have seen, the objective of software testing is to uncover errors. It follows that the most severe defects (from the customer's point of view) are those that cause the program to fail to meet its requirements.
• Tests should be planned long before testing begins. Test planning can begin as soon as the requirements model is complete. All tests can be planned and designed before any code has been generated.
• The Pareto principle applies to software testing. Stated simply, the Pareto principle implies that 80 percent of all errors uncovered during testing will likely be traceable to 20 percent of all program components. The problem, of course, is to isolate these suspect components and to thoroughly test them.
• Testing should begin "in the small" and progress toward testing "in the large." The first tests planned and executed generally focus on individual components. As testing progresses, focus shifts in an attempt to find errors in integrated clusters of components and ultimately in the entire system.
• Exhaustive testing is not possible. The number of path permutations for even a moderately sized program is exceptionally large. For this reason, it is impossible to execute every combination of paths during testing. It is possible, however, to adequately cover program logic and to ensure that all conditions in the component-level design have been exercised.
• To be most effective, testing should be conducted by an independent third party. By "most effective," we mean testing that has the highest probability of finding errors (the primary objective of testing).
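The Pareto principle above suggests a simple analysis of defect records. The sketch below (module names and counts are hypothetical) ranks components by defect count and finds the smallest set that accounts for 80 percent of all reported defects, i.e. the "suspect components" worth testing most thoroughly:

```python
# A sketch of Pareto analysis over defect data: rank modules by defect count
# and find the smallest set accounting for a threshold share of all defects.
def pareto_modules(defects: dict, threshold: float = 0.8) -> list:
    total = sum(defects.values())
    ranked = sorted(defects.items(), key=lambda kv: kv[1], reverse=True)
    chosen, covered = [], 0
    for module, count in ranked:
        chosen.append(module)
        covered += count
        if covered >= threshold * total:
            break
    return chosen

# Hypothetical defect counts per module (100 defects in total):
defects = {"parser": 50, "ui": 32, "db": 10, "auth": 6, "report": 2}
print(pareto_modules(defects))  # ['parser', 'ui'] account for 82 of 100 defects
```

Two of five modules cover more than 80 percent of the defects here, which is the pattern the Pareto principle predicts for real defect data.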
Testability
In ideal circumstances, a software engineer designs a computer program, a system, or a product with "testability" in mind. This enables the individuals charged with testing to design effective test cases more easily. But what is testability? James Bach describes testability in the following manner: Software testability is simply how easily [a computer program] can be tested. Since testing is so profoundly difficult, it pays to know what can be done to streamline it. Sometimes programmers are willing to do things that will help the testing process, and a checklist of possible design points, features, and so on can be useful in negotiating with them. There are certainly metrics that could be used to measure testability in most of its aspects.
Operability. "The better it works, the more efficiently it can be tested."
• The system has few bugs (bugs add analysis and reporting overhead to the test process).
• No bugs block the execution of tests.
• The product evolves in functional stages (allows simultaneous development and testing).
Observability. "What you see is what you test."
• Distinct output is generated for each input.
• System states and variables are visible or queriable during execution.
• Past system states and variables are visible or queriable (e.g., transaction logs).
• All factors affecting the output are visible.
• Incorrect output is easily identified.
• Internal errors are automatically detected through self-testing mechanisms.
• Internal errors are automatically reported.
• Source code is accessible.
Controllability. "The better we can control the software, the more the testing can be automated and optimized."
• All possible outputs can be generated through some combination of input.
• All code is executable through some combination of input.
• Software and hardware states and variables can be controlled directly by the test engineer.
• Input and output formats are consistent and structured.
• Tests can be conveniently specified, automated, and reproduced.
Decomposability. "By controlling the scope of testing, we can more quickly isolate problems and perform smarter retesting."
• The software system is built from independent modules.
• Software modules can be tested independently.
Simplicity. "The less there is to test, the more quickly we can test it."
• Functional simplicity (e.g., the feature set is the minimum necessary to meet requirements).
• Structural simplicity (e.g., architecture is modularized to limit the propagation of faults).
• Code simplicity (e.g., a coding standard is adopted for ease of inspection and maintenance).
Stability. "The fewer the changes, the fewer the disruptions to testing."
• Changes to the software are infrequent.
• Changes to the software are controlled.
• Changes to the software do not invalidate existing tests.
• The software recovers well from failures.
Understandability. "The more information we have, the smarter we will test."
• The design is well understood.
• Dependencies between internal, external, and shared components are well understood.
• Changes to the design are communicated.
• Technical documentation is instantly accessible.
• Technical documentation is well organized.
• Technical documentation is specific and detailed.
• Technical documentation is accurate.
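Controllability and observability can be designed in at the level of a single function. In the sketch below (the function and its names are hypothetical), an expiry check takes the current time as a parameter instead of reading the wall clock, so a test can control the relevant state directly and observe a distinct, deterministic output for each input:

```python
# A sketch of designing for controllability and observability: the current
# time is injected as a parameter, so tests can drive every outcome directly
# instead of waiting for real time to pass.
from datetime import datetime

def is_expired(expiry: datetime, now: datetime) -> bool:
    """Controllable: 'now' is supplied by the caller, not the system clock."""
    return now >= expiry

deadline = datetime(2024, 1, 1)
print(is_expired(deadline, datetime(2023, 12, 31)))  # False, not yet expired
print(is_expired(deadline, datetime(2024, 1, 2)))    # True, past the deadline
```

Had `is_expired` called `datetime.now()` internally, the test engineer could neither control the state nor reproduce the result, which is exactly what the controllability checklist warns against.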
The attributes suggested by Bach can be used by a software engineer to develop a software configuration (i.e., programs, data, and documents) that is amenable to testing. And what about the tests themselves? Kaner, Falk, and Nguyen suggest the following attributes of a "good" test:
1. A good test has a high probability of finding an error. To achieve this goal, the tester must understand the software and attempt to develop a mental picture of how the software might fail. Ideally, the classes of failure are probed. For example, one class of potential failure in a GUI (graphical user interface) is a failure to recognize proper mouse position. A set of tests would be designed to exercise the mouse in an attempt to demonstrate an error in mouse position recognition.
2. A good test is not redundant. Testing time and resources are limited. There is no point in conducting a test that has the same purpose as another test. Every test should have a different purpose (even if it is subtly different). For example, a module of the SafeHome software is designed to recognize a user password to activate and deactivate the system. In an effort to uncover an error in password input, the tester designs a series of tests that input a sequence of passwords. Valid and invalid passwords (four-numeral sequences) are input as separate tests.
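The SafeHome password example can be sketched as follows (the validation rule shown is an assumption made for illustration). Note that each test case probes a different way the input could be wrong, so no two tests share a purpose, which is what "not redundant" means in practice:

```python
# A sketch of non-redundant tests for the four-numeral password rule.
def is_valid_password(password: str) -> bool:
    """Accept only four-numeral sequences, e.g. '8080' (assumed rule)."""
    return len(password) == 4 and password.isdigit()

# Each case has a distinct purpose; none repeats another:
print(is_valid_password("8080"))   # True, a valid four-numeral sequence
print(is_valid_password("123"))    # False, too short
print(is_valid_password("12345"))  # False, too long
print(is_valid_password("12a4"))   # False, contains a non-numeral
print(is_valid_password(""))       # False, empty input
```

Adding a second "too short" case such as "12" would be redundant under this rule; a good test suite spends its limited budget on new failure classes instead.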
1.5 Software Development Life Cycle Models
To solve actual problems in an industry setting, a software engineer or a team of engineers must incorporate a development strategy that encompasses the process, methods, and tools layers and the generic phases. This strategy is often referred to as a process model or a software engineering paradigm. A process model for software engineering is chosen based on the nature of the project and application, the methods and tools to be used, and the controls and deliverables that are required. In an intriguing paper on the nature of the software process, L. B. S. Raccoon [RAC95] uses fractals as the basis for a discussion of the true nature of the software process. "Too often, software work follows the first law of bicycling: No matter where you're going, it's uphill and against the wind."
In the sections that follow, a variety of different process models for software engineering are discussed. Each represents an attempt to bring order to an inherently chaotic activity. It is important to remember that each of the models has been characterized in a way that (ideally) assists in the control and coordination of a real software project.
A life cycle model describes how the phases combine to form a complete project or life cycle. Such a model is characterized by the following attributes:
The activities performed
The deliverables from each activity
Methods of validation of the deliverables
The sequence of activities
Methods of verification of each activity, including the mechanism of communication amongst the activities
The process used to create a software product from its initial conception to its release is known as the software development life cycle model.
1.5.1 Big-Bang Model
One theory of the creation of the universe is the big-bang theory. It states that billions of years ago, the universe was created in a single huge explosion of nearly infinite energy. Everything that exists is the result of that energy. A big-bang model for software development follows the same principle. A huge amount of matter (people and money) is put together, a lot of energy is expended (often violently), and out comes the perfect software product. The beauty of the big-bang method is that it's simple. There is little if any planning, scheduling, or formal development process. All the effort is spent developing the software and writing the code. It is an ideal process if the product requirements are not well understood and the final release date is flexible. It is also important to have very flexible customers, because they won't know what they are getting until the very end.
Fig. 2 Big-Bang Model
Notice that testing is not shown in the figure. In most cases, there is little to no formal testing done under the big-bang model. If testing does occur, it is squeezed in just before the product is released. If you are called in to test a product under the big-bang model, you have both an easy and a difficult task. Because the software is already complete, you have the perfect specification: the product itself. And, because it's impossible to go back and fix things that are broken, your job is really just to report what you find so the customers can be told about the problems. The downside is that, in the eyes of project management, the product is ready to go, so your work is holding up delivery to the customer. The longer you take to do your job and the more bugs you find, the more contentious the situation will become. Try to stay away from testing in this model.
1.5.2 Code and Fix Model
The code-and-fix model is usually the one that project teams fall into by default if they don't consciously attempt to use something else. It is a step up, procedurally, from the big-bang model in that it at least requires some idea of what the product requirements are.
Fig. 3 Code and Fix model (a typically informal product specification feeds a code-fix cycle that repeats until the final product emerges)
A team using this approach usually starts with a rough idea of what they want, does some simple design, and then proceeds into a long repeating cycle of coding, testing, and fixing bugs. At some point they decide that enough is enough and release the product.
As there is very little overhead for planning and documenting, a project team can show results immediately. For this reason the code-and-fix model works very well for projects intended to be created quickly and then thrown out shortly after they are done, such as prototypes and demos. Even so, code and fix has been used on many large and well-known software products. If your word processor or spreadsheet software has lots of little bugs, or it just doesn't seem quite finished, it was likely created with the code-and-fix model.
As a tester on a code-and-fix project, you need to be aware that you, along with the programmers, will be in a constant state of cycling. As often as every day you will be given new or updated releases of the software and will set off to test them. You will run your tests, report the bugs, and then get a new software release. You may not have finished testing the previous release when the new one arrives, and the new one may have new or changed features. Eventually, you will get a chance to test most of the features, find fewer and fewer bugs, and then someone will decide that it is time to release the product.
1.5.3 Waterfall Model
Sometimes called the classic life cycle or the waterfall model, the linear sequential model suggests a systematic, sequential approach to software development that begins at the system level and progresses through analysis, design, coding, testing, and support. Modeled after a conventional engineering cycle, the linear sequential model encompasses the following activities:
System/information engineering and modeling. Because software is always part of a larger system (or business), work begins by establishing requirements for all system elements and then allocating some subset of these requirements to software. This system view is essential when software must interact with other elements such as hardware, people, and databases. System engineering and analysis encompass requirements gathering at the system level with a small amount of top-level design and analysis. Information engineering encompasses requirements gathering at the strategic business level and at the business area level.
Software requirements analysis. The requirements gathering process is intensified and focused specifically on software. To understand the nature of the program(s) to be built, the software engineer ("analyst") must understand the information domain for the software, as well as required function, behavior, performance, and interface. Requirements for both the system and the software are documented and reviewed with the customer.
Design. Software design is actually a multistep process that focuses on four distinct attributes of a program: data structure, software architecture, interface representations, and procedural (algorithmic) detail. The design process translates requirements into a representation of the software that can be assessed for quality before coding begins. Like requirements, the design is documented and becomes part of the software configuration.
Code generation. The design must be translated into a machine-readable form. The code generation step performs this task. If design is performed in a detailed manner, code generation can be accomplished mechanistically.
Testing. Once code has been generated, program testing begins. The testing process focuses on the logical internals of the software, ensuring that all statements have been tested, and on the functional externals; that is, conducting tests to uncover errors and ensure that defined input will produce actual results that agree with required results.
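The "defined input produces actual results that agree with required results" check can be sketched as a small table-driven test (the function under test and its cases are hypothetical, added here for illustration):

```python
# A minimal sketch of checking functional externals: for each defined input,
# the actual result must agree with the required result.
def discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price, rounded to two decimals."""
    return round(price * (1 - percent / 100), 2)

# Defined inputs paired with their required results:
cases = [
    ((100.0, 10.0), 90.0),
    ((59.99, 0.0), 59.99),
    ((200.0, 50.0), 100.0),
]
for (price, pct), required in cases:
    actual = discount(price, pct)
    assert actual == required, f"{(price, pct)}: expected {required}, got {actual}"
print("all actual results agree with required results")
```

Keeping required results in a table alongside the inputs makes the waterfall-style testing step repeatable: the same defined inputs can be re-run against every build.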
Support. Software will undoubtedly undergo change after it is delivered to the customer (a possible exception is embedded software). Change will occur because errors have been encountered, because the software must be adapted to accommodate changes in its external environment (e.g., a change required because of a new operating system or peripheral device), or because the customer requires functional or performance enhancements. Software support/maintenance reapplies each of the preceding phases to an existing program rather than a new one.
The waterfall model is usually the first one taught in programming school.
Fig. 4 Waterfall model (Idea, Analysis, Design, Development, Test, Final product)
Figure 4 shows the steps involved in this model. A project using the waterfall model moves down a series of steps, starting from an initial idea to a final product. At the end of each step, the project team holds a review to determine whether they are ready to move to the next step. Notice three important things about the waterfall model:
There is a large emphasis on specifying what the product will be.
The steps are discrete; there is no overlap.
There is no way to back up. As soon as you are on a step, you need to complete the tasks for that step and then move on; you can't go back.
The advantage is that everything is carefully and thoroughly specified. But with this advantage comes a large disadvantage: because testing occurs only at the end, a fundamental problem could creep in early on and not be detected until days before the scheduled product release.
The linear sequential model is the oldest and the most widely used paradigm for software engineering. However, criticism of the paradigm has caused even active supporters to question its efficacy. Among the problems that are sometimes encountered when the linear sequential model is applied are:
1. Real projects rarely follow the sequential flow that the model proposes. Although the linear model can accommodate iteration, it does so indirectly. As a result, changes can cause confusion as the project team proceeds.
2. It is often difficult for the customer to state all requirements explicitly. The linear sequential model requires this and has difficulty accommodating the natural uncertainty that exists at the beginning of many projects.
3. The customer must have patience. A working version of the program(s) will not be available until late in the project time span. A major blunder, if undetected until the working program is reviewed, can be disastrous.
In an interesting analysis of actual projects, Bradac [BRA94] found that the linear nature of the classic life cycle leads to "blocking states" in which some project team members must wait for other members of the team to complete dependent tasks. In fact, the time spent waiting can exceed the time spent on productive work! The blocking state tends to be more prevalent at the beginning and end of a linear sequential process.
Each of these problems is real. However, the classic life cycle paradigm has a definite and important place in software engineering work. It provides a template into which methods for analysis, design, coding, testing, and support can be placed. The classic life cycle remains a widely used procedural model for software engineering. While it does have weaknesses, it is significantly better than a haphazard approach to software development.
1.5.4 The Prototyping Model
Often, a customer defines a set of general objectives for software but does not identify detailed input, processing, or output requirements. In other cases, the developer may be unsure of the efficiency of an algorithm, the adaptability of an operating system, or the form that human/machine interaction should take. In these, and many other situations, a prototyping paradigm may offer the best approach.
The prototyping paradigm (Figure 5) begins with requirements gathering. Developer and customer meet and define the overall objectives for the software, identify whatever requirements are known, and outline areas where further definition is mandatory. A "quick design" then occurs. The quick design focuses on a representation of those aspects of the software that will be visible to the customer/user (e.g., input approaches and output formats). The quick design leads to the construction of a prototype. The prototype is evaluated by the customer/user and used to refine requirements for the software to be developed. Iteration occurs as the prototype is tuned to satisfy the needs of the customer, while at the same time enabling the developer to better understand what needs to be done.
Fig. 5 Prototyping Paradigm
Ideally, the prototype serves as a mechanism for identifying software requirements. If a working prototype is built, the developer attempts to use existing program fragments or applies tools (e.g., report generators, window managers) that enable working programs to be generated quickly. But what do we do with the prototype when it has served the purpose just described? Brooks [BRO75] provides an answer:
In most projects, the first system built is barely usable. It may be too slow, too big, awkward in use, or all three. There is no alternative but to start again, smarting but smarter, and build a redesigned version in which these problems are solved . . . When a new system concept or new technology is used, one has to build a system to throw away, for even the best planning is not so omniscient as to get it right the first time. The management question, therefore, is not whether to build a pilot system and throw it away. You will do that. The only question is whether to plan in advance to build a throwaway, or to promise to deliver the throwaway to customers.
The prototype can serve as "the first system," the one that Brooks recommends we
throw away. But this may be an idealized view. It is true that both customers and
developers like the prototyping paradigm. Users get a feel for the actual system and
developers get to build something immediately. Yet, prototyping can also be
problematic for the following reasons:
1. The customer sees what appears to be a working version of the software, unaware
that the prototype is held together "with chewing gum and baling wire," unaware that
in the rush to get it working no one has considered overall software quality or
long-term maintainability. When informed that the product must be rebuilt so that
high levels of quality can be maintained, the customer cries foul and demands that "a
few fixes" be applied to make the prototype a working product. Too often, software
development management relents.
2. The developer often makes implementation compromises in order to get a
prototype working quickly. An inappropriate operating system or programming
language may be used simply because it is available and known; an inefficient
algorithm may be implemented simply to demonstrate capability.
After a time, the developer may become familiar with these choices and forget all the
reasons why they were inappropriate. The less-than-ideal choice has now become an
integral part of the system.
Although problems can occur, prototyping can be an effective paradigm for software
engineering. The key is to define the rules of the game at the beginning; that is, the
customer and developer must both agree that the prototype is built to serve as a
mechanism for defining requirements. It is then discarded (at least in part) and the
actual software is engineered with an eye toward quality and maintainability.
1.5.5 The RAD Model
Rapid application development (RAD) is an incremental software development
process model that emphasizes an extremely short development cycle. The RAD
model is a "high-speed" adaptation of the linear sequential model in which rapid
development is achieved by using component-based construction. If requirements are
well understood and project scope is constrained, the RAD process enables a
development team to create a "fully functional system" within very short time periods
(e.g., 60 to 90 days). Used primarily for information systems applications, the RAD
approach encompasses the following phases:
"ig! 3 the ).D Model
+%$ine$$ mo&eling. The information flow among b#siness f#nctions is modeled in a
way that answers the following @#estions( What information drives the b#siness
processE What information is generatedE Who generates itE Where does the
information goE Who processes itE
!ata mo&eling. The information flow defined as part of the b#siness modeling phase
is refined into a set of data ob*ects that are needed to s#pport the b#siness! The
characteristics >called attributes? of each ob*ect are identified and the relationships
between these ob*ects defined!
>roce$$ mo&eling. The data ob*ects defined in the data modeling phase are
transformed to achieve the information flow necessary to implement a b#siness
f#nction! Processing descriptions are created for adding, modifying, deleting, or
retrieving a data ob*ect!
A??lication generation. ).D ass#mes the #se of fo#rth generation techni@#es!
)ather than creating software #sing conventional third generation programming
languages, the RAD process works to reuse existing program components (when
possible) or create reusable components (when necessary). In all cases, automated
tools are used to facilitate construction of the software.
Testing and turnover. Since the RAD process emphasizes reuse, many of the
program components have already been tested. This reduces overall testing time.
However, new components must be tested and all interfaces must be fully exercised.
The RAD process model is illustrated in Figure 6. Obviously, the time constraints
imposed on a RAD project demand "scalable scope" [KER94]. If a business
application can be modularized in a way that enables each major function to be
completed in less than three months (using the approach described previously), it is a
candidate for RAD. Each major function can be addressed by a separate RAD team
and then integrated to form a whole.
Like all process models, the RAD approach has drawbacks:
- For large but scalable projects, RAD requires sufficient human resources to create
the right number of RAD teams.
- RAD requires developers and customers who are committed to the rapid-fire
activities necessary to get a system complete in a much abbreviated time frame. If
commitment is lacking from either constituency, RAD projects will fail.
- Not all types of applications are appropriate for RAD. If a system cannot be properly
modularized, building the components necessary for RAD will be problematic. If high
performance is an issue and performance is to be achieved through tuning the
interfaces to system components, the RAD approach may not work.
- RAD is not appropriate when technical risks are high. This occurs when a new
application makes heavy use of new technology or when the new software requires a
high degree of interoperability with existing computer programs.
1.6 Evolutionary Process Models
There is growing recognition that software, like all complex systems, evolves over a
period of time. Business and product requirements often change as development
proceeds, making a straight path to an end product unrealistic; tight market deadlines
make completion of a comprehensive software product impossible, but a limited
version must be introduced to meet competitive or business pressure; a set of core
product or system requirements is well understood, but the details of product or
system extensions have yet to be defined. In these and similar situations, software
engineers need a process model that has been explicitly designed to accommodate a
product that evolves over time.
The linear sequential model is designed for straight-line development. In essence, this
waterfall approach assumes that a complete system will be delivered after the linear
sequence is completed. The prototyping model is designed to assist the customer (or
developer) in understanding requirements. In general, it is not designed to deliver a
production system. The evolutionary nature of software is not considered in either of
these classic software engineering paradigms.
Evolutionary models are iterative. They are characterized in a manner that enables
software engineers to develop increasingly more complete versions of the software.
1.6.1 The Incremental Model
The incremental model combines elements of the linear sequential model (applied
repetitively) with the iterative philosophy of prototyping. Referring to Figure 7, the
incremental model applies linear sequences in a staggered fashion as calendar time
progresses. Each linear sequence produces a deliverable "increment" of the software
[MDE93]. For example, word-processing software developed using the incremental
paradigm might deliver basic file management, editing, and document production
functions in the first increment; more sophisticated editing and document production
capabilities in the second increment; spelling and grammar checking in the third
increment; and advanced page layout capability in the fourth increment. It should be
noted that the process flow for any increment can incorporate the prototyping
paradigm.
When an incremental model is used, the first increment is often a core product.
That is, basic requirements are addressed, but many supplementary features (some
known, others unknown) remain undelivered. The core product is used by the
customer (or undergoes detailed review). As a result of use and/or evaluation, a plan
is developed for the next increment. The plan addresses the modification of the core
product to better meet the needs of the customer and the delivery of additional
features and functionality. This process is repeated following the delivery of each
increment, until the complete product is produced.
Fig. 7 The Incremental Model
The incremental process model, like prototyping and other evolutionary approaches,
is iterative in nature. But unlike prototyping, the incremental model focuses on the
delivery of an operational product with each increment. Early increments are stripped
down versions of the final product, but they do provide capability that serves the user
and also provide a platform for evaluation by the user.
Incremental development is particularly useful when staffing is unavailable for a
complete implementation by the business deadline that has been established for the
project. Early increments can be implemented with fewer people. If the core product
is well received, then additional staff (if required) can be added to implement the next
increment. In addition, increments can be planned to manage technical risks. For
example, a major system might require the availability of new hardware that is under
development and whose delivery date is uncertain. It might be possible to plan early
increments in a way that avoids the use of this hardware, thereby enabling partial
functionality to be delivered to end-users without inordinate delay.
1.6.2 Spiral Model
Fig. 8 The Spiral Model
The spiral model, originally proposed by Boehm, is an evolutionary software process
model that couples the iterative nature of prototyping with the controlled and
systematic aspects of the linear sequential model. It provides the potential for rapid
development of incremental versions of the software. Using the spiral model, software
is developed in a series of incremental releases. During early iterations, the
incremental release might be a paper model or prototype. During later iterations,
increasingly more complete versions of the engineered system are produced. A spiral
model is divided into a number of framework activities, also called task regions.
Figure 8 depicts a spiral model that contains six task regions:
Customer communication: tasks required to establish effective
communication between developer and customer.
Planning: tasks required to define resources, timelines, and other project
related information.
Risk analysis: tasks required to assess both technical and management risks.
Engineering: tasks required to build one or more representations of the
application.
Construction and release: tasks required to construct, test, install, and
provide user support (e.g., documentation and training).
Customer evaluation: tasks required to evaluate the project.
The spiral model is a realistic approach to the development of large-scale systems
and software. Because software evolves as the process progresses, the developer and
customer better understand and react to risks at each evolutionary level. The spiral
model uses prototyping as a risk reduction mechanism but, more important, enables
the developer to apply the prototyping approach at any stage in the evolution of the
product. It maintains the systematic stepwise approach suggested by the classic life
cycle but incorporates it into an iterative framework that more realistically reflects the
real world. The spiral model demands a direct consideration of technical risks at all
stages of the project and, if properly applied, should reduce risks before they become
problematic.
But like other paradigms, the spiral model is not a panacea. It may be difficult to
convince customers (particularly in contract situations) that the evolutionary approach
is controllable. It demands considerable risk assessment expertise and relies on this
expertise for success. If a major risk is not uncovered and managed, problems will
undoubtedly occur. Finally, the model has not been used as widely as the linear
sequential or prototyping paradigms. It will take a number of years before the efficacy
of this important paradigm can be determined with absolute certainty.
1.6.3 The WIN/WIN Spiral Model
The spiral model discussed in the previous section suggests a framework activity that
addresses customer communication. The objective of this activity is to elicit project
requirements from the customer. In an ideal context, the developer simply asks the
customer what is required and the customer provides sufficient detail to proceed.
Unfortunately, this rarely happens. In reality, the customer and the developer enter
into a process of negotiation, where the customer may be asked to balance
functionality, performance, and other product or system characteristics against cost
and time to market.
The best negotiations strive for a "win-win" result. That is, the customer wins by
getting the system or product that satisfies the majority of the customer's needs and
the developer wins by working to realistic and achievable budgets and deadlines.
Boehm's WINWIN spiral model defines a set of negotiation activities at the
beginning of each pass around the spiral. Rather than a single customer
communication activity, the following activities are defined:
1. Identification of the system or subsystem's key "stakeholders."
2. Determination of the stakeholders' "win conditions."
3. Negotiation of the stakeholders' win conditions to reconcile them into a set of
win-win conditions for all concerned (including the software project team).
Successful completion of these initial steps achieves a win-win result, which becomes
the key criterion for proceeding to software and system definition. The WINWIN
spiral model is illustrated in the following figure.
"ig! I The W: W: Spiral Model
n addition to the emphasis placed on early negotiation, the W:W: spiral model
introd#ces three process milestones, called anchor points that help establish the
completion of one cycle aro#nd the spiral and provide decision milestones before the
software pro*ect proceeds!
n essence, the anchor points represent three different views of progress as the pro*ect
traverses the spiral! The first anchor point, life cycle objectives >LC7?, defines a set of
ob*ectives for each ma*or software engineering activity! "or example, as part of LC7,
a set of objectives establishes the definition of top-level system/product requirements.
The second anchor point, life cycle architecture (LCA), establishes objectives that
must be met as the system and software architecture is defined. For example, as part
of LCA, the software project team must demonstrate that it has evaluated the
applicability of off-the-shelf and reusable software components and considered their
impact on architectural decisions. Initial operational capability (IOC) is the third
anchor point and represents a set of objectives associated with the preparation of the
software for installation/distribution, site preparation prior to installation, and
assistance required by all parties that will use or support the software.
1.6.4 The Concurrent Development Model
The concurrent process model can be represented schematically as a series of major
technical activities, tasks, and their associated states. For example, the engineering
activity defined for the spiral model is accomplished by invoking the following tasks:
prototyping and/or analysis modeling, requirements specification, and design.
"ig! /2 The Conc#rrent Developmental Model
"ig#re!/2 provides a schematic representation of one activity with the conc#rrent
process model! The activityRanalysisRmay be in any one of the states noted at any
given time! Similarly, other activities >e!g!, design or c#stomer comm#nication? can
be represented in an analogo#s manner! .ll activities exist conc#rrently b#t reside in
different states! "or example, early in a pro*ect the customer communication activity
>not shown in the fig#re? has completed its first iteration and exists in the a5aiting
change$ state! The analysis activity >which existed in the none state while initial
c#stomer comm#nication was completed? now ma0es a transition into the %n&er
&e2elo?ment state! f, however, the c#stomer indicates that changes in re@#irements
m#st be made, the analysis activity moves from the %n&er &e2elo?ment
State into the a5aiting change$ state!
The conc#rrent process model defines a series of events that will trigger transitions
from state to state for each of the software engineering activities! "or example, d#ring
early stages of design, an inconsistency in the analysis model is #ncovered! This
generates the event analysis model correction, which will trigger the analysis activity
from the done state into the awaiting changes state.
The concurrent process model is often used as the paradigm for the development of
client/server applications. A client/server system is composed of a set of functional
components. When applied to client/server, the concurrent process model defines
activities in two dimensions: a system dimension and a component dimension. System
level issues are addressed using three activities: design, assembly, and use. The
component dimension is addressed with two activities: design and realization.
Concurrency is achieved in two ways: (1) system and component activities occur
simultaneously and can be modeled using the state-oriented approach described
previously; (2) a typical client/server application is implemented with many
components, each of which can be designed and realized concurrently.
In reality, the concurrent process model is applicable to all types of software
development and provides an accurate picture of the current state of a project. Rather
than confining software engineering activities to a sequence of events, it defines a
network of activities. Each activity on the network exists simultaneously with other
activities. Events generated within a given activity or at some other place in the
activity network trigger transitions among the states of an activity.
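The state-and-event behavior described above can be sketched as a tiny state machine. This is an illustrative sketch only: the state names and events follow the text, but the `Activity` class and method names are assumptions, not part of any defined model.

```python
# Illustrative sketch of the concurrent process model's state transitions.
# States and events mirror the text; the class itself is an assumption.

class Activity:
    """One software engineering activity; always in exactly one state."""
    # (current_state, event) -> next_state, as described in the text
    TRANSITIONS = {
        ("none", "customer communication complete"): "under development",
        ("under development", "changes requested"): "awaiting changes",
        ("done", "analysis model correction"): "awaiting changes",
    }

    def __init__(self, name):
        self.name = name
        self.state = "none"

    def on_event(self, event):
        # Stay in the current state if no transition is defined for this event
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state

analysis = Activity("analysis")
analysis.on_event("customer communication complete")
print(analysis.state)  # under development
analysis.on_event("changes requested")
print(analysis.state)  # awaiting changes
```

Modeling each activity this way makes the "network of activities" concrete: every activity owns its own state, and events raised anywhere in the network simply call `on_event` on the affected activities.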
1.7 Summary
Software engineering is a discipline that integrates process, methods, and tools for the
development of computer software. A number of different process models for
software engineering have been proposed, each exhibiting strengths and weaknesses,
but all having a series of generic phases in common. As can be seen from the above
discussion, each of the models has its advantages and disadvantages. Each of them
has applicability in a specific scenario. Each of them also provides different issues,
challenges, and opportunities for verification and validation.
1.8 Check your progress
1. What are the objectives of testing?
2. Write down the principles of testing.
3. Discuss the criteria for project success.
4. Write short notes on various software development life cycle models.
5. What are called evolutionary life cycle models? How do they differ from
the old ones?
6. Explain the Spiral and WIN-WIN spiral models with a neat diagram.
7. What are the phases involved in a Waterfall model? Explain.
8. Explain the Prototyping and RAD models.
9. Write short notes on the Code and Fix model and the Big Bang model.
10. Write down the advantages of the concurrent development model.
Unit II
Structure
2.0 Objectives
2.1 Introduction
2.2 White Box Testing
2.2.1 Static Testing
2.2.2 Structural Testing
2.2.3 Code Complexity Testing
2.3 Integration Testing
2.3.1 Top-Down Integration Testing
2.3.2 Bottom-Up Integration Testing
2.3.3 Integration Testing Documentation
2.3.4 Alpha and Beta Testing
2.4 System and Acceptance Testing
2.4.1 System Testing
2.4.2 Acceptance Testing
2.5 Summary
2.6 Check your progress
2.0 Objectives
To know the various types of testing and their importance
To learn the white box testing methods and their features
To understand the necessity of doing integration testing in the top-down and
bottom-up sequences
To learn the system and acceptance testing needed to finally accept the software for use
2.1 Introduction
Testing requires asking about and understanding what you are trying to test, knowing
what the correct outcome is, and why you are performing any test. Why to test is as
important as what to test and how to test. Understanding the rationale of why we are
testing certain functionality leads to different types of tests, which we will see in the
following sections.
We do white box testing to check the various paths in the code and make sure they
are exercised correctly. Knowing which code paths should be exercised for a given
test enables making necessary changes to ensure that appropriate paths are covered.
Knowing the external functionality of what the product should do, we design black
box tests. Integration tests are used to make sure that the different components fit
together. Regression testing is done to ensure that changes work as designed and do
not have any unintended side-effects. So test the test first; a defective test is more
dangerous than a defective product.
TYPES OF TESTING
The various types of testing which are often used are listed below:
White Box Testing
Black Box Testing
Integration Testing
System and Acceptance Testing
Performance Testing
Regression Testing
Testing of Object Oriented Systems
Usability and Accessibility Testing
3.3 WHITE/+OI TESTIN"
White box testing is a way of testing the external f#nctionality of the code by
examining and testing the program code that reali'es the external f#nctionality! This
is also 0nown as clear box, or glass box or open box testing! White box testing ta0es
into acco#nt the program code, code str#ct#re, and internal design flow! White box
testing is classified into static and str#ct#ral testing!
White$box testing, sometimes called glass!bo" testing, is a test case design method
that #ses the control str#ct#re of the proced#ral design to derive test cases! +sing
white$box testing methods, the software engineer can derive test cases that
>/? 1#arantee that all independent paths within a mod#le have been exercised at least
once
>%? -xercise all logical decisions on their tr#e and false sides
>4? -xec#te all loops at their bo#ndaries and within their operational bo#nds and
>8? -xercise internal data str#ct#res to ens#re their validity!
t is not possible to exha#stively test every program path beca#se the n#mber of paths
is simply too large! White$box tests can be designed only after a component$level
design >or so#rce code? exists! The logical details of the program m#st be available!
"ig ! // Classification of white box testing
3.3.( Static te$ting
Static testing re@#ires only the so#rce code of the prod#ct, not the binaries or
exec#tables! Static testing does not involve exec#ting the programs on comp#ters b#t
involves select people going thro#gh the code to find o#t whether
The code wor0s according to the f#nctional re@#irement
The code has been written in accordance with the design developed
-arlier in the pro*ect life cycle
The code for any f#nctionality has been missed o#t
The code handles errors properly
Static testing can be done by h#mans or with the help of speciali'ed tools!
(As shown in Figure 11, white box testing branches into static testing, comprising
desk checking, code walkthrough, and code inspection; and structural testing,
comprising unit/code functional testing, code coverage (statement, path, condition,
and function coverage), and code complexity (cyclomatic complexity).)
Static testing by humans
These methods rely on the principle of humans reading the program code to detect
errors rather than computers executing the code to find errors. This process has
several advantages.
1. Sometimes humans can find errors that computers cannot. For example, when
there are two variables with similar names and the programmer used the wrong
variable by mistake in an expression, the computer will not detect the error but
execute the statement and produce incorrect results, whereas a human being
can spot such an error.
2. By making multiple humans read and evaluate the program, we can get
multiple perspectives and therefore have more problems identified upfront
than a computer could.
3. A human evaluation of the code can compare it against the specifications or
design and thus ensure that it does what is intended to do. This may not always
be possible when a computer runs a test.
4. A human evaluation can detect many problems at one go and can even try to
identify the root causes of the problems.
5. By making humans test the code before execution, computer resources can be
saved. Of course, this comes at the expense of human resources.
6. A proactive method of testing like static testing minimizes the delay in
identification of the problems.
7. From a psychological point of view, finding defects later in the cycle creates
immense pressure on programmers. They have to fix defects with less time to
spare. With this kind of pressure, there are higher chances of other defects
creeping in.
There are multiple methods to achieve static testing by humans. They are
1. Desk checking of the code
2. Code walkthrough
3. Code review
4. Code inspection
Desk checking. This is normally done manually by the author of the code to verify the
portions of the code for correctness. Such verification is done by comparing the code
with the design or specifications to make sure that the code does what it is supposed
to do, and does so effectively. Whenever errors are found, the author applies the
correction on the spot. This method of catching and correcting errors is characterized by
1. No structured method or formalism to ensure completeness and
2. No maintaining of a log or checklist.
Some of the disadvantages of this method of testing are as follows:
1. A developer is not the best person to detect problems in his own
code.
2. Developers generally prefer to write new code rather than do any form
of testing.
3. This method is essentially person dependent and informal.
Code walkthrough
Walkthroughs are less formal than inspections. The advantage that a walkthrough
has over desk checking is that it brings in multiple perspectives. In walkthroughs, a set of
people look at the program code and raise questions for the author. The author
explains the logic of the code and answers the questions.
Formal inspection
Code inspection, also called Fagan inspection, is a method normally with a
high degree of formalism. The focus of this method is to detect all faults, violations,
and other side effects.
Combining various methods
The methods discussed above are not mutually exclusive. They need to be
used in a judicious combination to be effective in achieving the goal of finding defects
early.
Static analysis tools
There are several static analysis tools available in the market that can reduce the
manual work and perform analysis of the code to find out errors such as
1. Whether there is unreachable code
2. Variables declared but not used
3. Mismatch in definition and assignment of values to variables, etc.
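As a sketch of what such tools do, source code can be parsed, without ever executing it, and scanned for suspicious patterns. The miniature analyzer below is purely illustrative (the `analyze` function and the sample source are invented, and real tools are far more thorough); it flags two of the problems listed above: statements after a return (unreachable code) and variables that are assigned but never used.

```python
# Hypothetical miniature static analyzer (illustrative only, not a real tool):
# it parses the source with Python's ast module and reports findings
# without running the program.
import ast

SOURCE = """
def total(prices):
    tax = 0.08          # assigned but never used
    s = sum(prices)
    return s
    print("done")       # unreachable
"""

def analyze(source):
    findings = []
    tree = ast.parse(source)
    for func in [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]:
        # Unreachable code: any statement after a top-level return
        seen_return = False
        for stmt in func.body:
            if seen_return:
                findings.append(f"unreachable code at line {stmt.lineno}")
            if isinstance(stmt, ast.Return):
                seen_return = True
        # Assigned-but-unused: names that are stored but never loaded
        stored = {n.id for n in ast.walk(func)
                  if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Store)}
        loaded = {n.id for n in ast.walk(func)
                  if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)}
        for name in sorted(stored - loaded):
            findings.append(f"variable '{name}' assigned but never used")
    return findings

for finding in analyze(SOURCE):
    print(finding)
```

Because the analysis works on the parse tree alone, it finds these defects even in code paths that a dynamic test might never execute, which is precisely the strength of static analysis.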
While following any of the methods of human checking - desk checking,
walkthroughs, or formal inspection - it is useful to have a code review checklist.
Code review checklist
- Data item declaration related
- Data usage related
- Control flow related
- Standards related
- Style related
A reasonable question arises: why not concentrate only on "ensuring that program
requirements have been met"? Stated another way, why don't we spend all of our
energy on black-box tests? The answer lies in the nature of software defects.
- Logic errors and incorrect assumptions are inversely proportional to the probability
that a program path will be executed. Errors tend to creep into our work when we
design and implement function, conditions, or control that is out of the mainstream.
Everyday processing tends to be well understood (and well scrutinized), while
"special case" processing tends to fall into the cracks.
- We often believe that a logical path is not likely to be executed when, in fact, it may
be executed on a regular basis. The logical flow of a program is sometimes
counterintuitive, meaning that our unconscious assumptions about flow of control and
data may lead us to make design errors that are uncovered only once path testing
commences.
- Typographical errors are random. When a program is translated into programming
language source code, it is likely that some typing errors will occur. Many will be
uncovered by syntax and type checking mechanisms, but others may go undetected
until testing begins. It is as likely that a typo will exist on an obscure logical path as
on a mainstream path. Each of these reasons provides an argument for conducting
white-box tests. Black box testing, no matter how thorough, may miss the kinds of
errors noted here. White box testing is far more likely to uncover them.
2.2.2 Structural testing
Structural testing takes into account the code, code structure, internal design, and how
they are coded. In structural testing, tests are actually run by the computer on the built
product, whereas in static testing the product is tested by humans using just the source
code and not the executables or binaries. Structural testing can be further classified
into
Unit/code functional testing,
Code coverage and
Code complexity testing.
Unit/code functional testing
This initial part of structural testing corresponds to some quick checks that a
developer performs before subjecting the code to more extensive code coverage
testing or code complexity testing.
Initially the developer can perform certain obvious tests, knowing the input
variables and the corresponding expected output variables. This can be a quick test
that checks out any obvious mistakes. By repeating these tests for multiple values of
input variables, the confidence level of the developer to go to the next level increases.
This can even be done prior to formal reviews of static testing so that the review
mechanism does not waste time catching obvious errors.
"or mod#les with complex logic or conditions, the developer can b#ild a
Cdeb#g versionD of the prod#ct by p#tting intermediate print statements and ma0ing
s#re the program is passing thro#gh the right loops and iterations the right n#mber of
times! t is important to remove the intermediate print statements after the defects are
fixed!
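A minimal sketch of such a debug version (the `DEBUG` flag, the messages, and the search function are illustrative assumptions, not from the text): temporary prints confirm which branch is taken and how many loop iterations occur, and can all be silenced or removed once the defect is fixed.

```python
# Illustrative "debug version": temporary prints trace loop iterations and
# branch decisions so the developer can verify the control flow by eye.
DEBUG = True

def binary_search(items, target):
    low, high = 0, len(items) - 1
    iterations = 0
    while low <= high:
        iterations += 1
        mid = (low + high) // 2
        if DEBUG:
            print(f"iteration {iterations}: low={low} mid={mid} high={high}")
        if items[mid] == target:
            if DEBUG:
                print(f"found target after {iterations} iterations")
            return mid
        elif items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

print(binary_search([2, 5, 8, 12, 16, 23, 38], 23))
```

Running the function on a few known inputs lets the developer check that the loop runs roughly log2(n) times and that the correct branch fires, exactly the kind of quick structural sanity check described above.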
Another approach to do the initial test is to run the product under a debugger
or an Integrated Development Environment (IDE). These tools allow single stepping
of instructions, setting break points at any function or instruction, and viewing the
various system parameters or program variable values.
Code coverage testing
Code coverage testing involves designing and executing test cases and finding out the
percentage of code that is covered by testing. The percentage of code covered
by a test is found by adopting a technique called instrumentation of
code. There are specialized tools available to achieve instrumentation.
The tools also allow reporting on the portions of the code that are covered frequently,
so that the critical or most-often-used portions of code can be identified.
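A toy illustration of instrumentation (assuming Python's `sys.settrace` hook; real coverage tools are far more sophisticated and the `sign` function is invented for the example): a trace function records which lines of a function actually execute, from which a coverage percentage can be computed.

```python
# Toy instrumentation sketch: a trace hook records executed line numbers,
# giving a crude statement-coverage figure for one function.
import sys

def sign(x):
    if x > 0:
        return "positive"
    elif x < 0:
        return "negative"
    else:
        return "zero"

executed = set()

def tracer(frame, event, arg):
    # Record only line events inside sign(); ignore every other frame
    if event == "line" and frame.f_code is sign.__code__:
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
sign(5)              # exercises only the x > 0 branch
sys.settrace(None)

total = 6            # body statements of sign(), counted by hand for this sketch
covered = len(executed)
print(f"covered {covered} of {total} body lines "
      f"({100 * covered / total:.0f}% statement coverage)")
```

A single test input touches only two of the six body lines here, which is exactly why coverage reports are useful: they point at the branches no test has reached yet.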
Code coverage testing is made up of the following types of coverage.
1. Statement coverage
2. Path coverage
3. Condition coverage
4. Function coverage
Statement coverage
Program constructs in most conventional programming languages can be classified as
1. Sequential control flow
2. Two-way decision statements like if then else
3. Multi-way decision statements like switch
4. Loops like while do, repeat until and for
Object-oriented languages have all of the above and, in addition, a number of other
constructs and concepts. Statement coverage refers to writing test cases that execute
each of the program statements.
Path coverage
In path coverage, we split a program into a number of distinct paths. A program can
start from the beginning and take any of the paths to its completion. Path coverage
provides a stronger condition of coverage than statement coverage as it relates to the
various logical paths in the program rather than just program statements.
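For instance (the `classify` function below is hypothetical, purely for illustration), two sequential decisions create four distinct paths; two well-chosen tests can execute every statement, yet full path coverage requires four:

```python
# Illustrative example: two independent decisions => 2 x 2 = 4 distinct paths.
# Two tests can cover every statement, but path coverage needs all four.
def classify(age, income):
    label = ""
    if age >= 18:
        label += "adult"
    else:
        label += "minor"
    if income > 50000:
        label += "/high-income"
    else:
        label += "/low-income"
    return label

# One test input per path: (age branch) x (income branch)
paths = [
    (25, 60000),  # adult / high-income
    (25, 40000),  # adult / low-income
    (15, 60000),  # minor / high-income
    (15, 40000),  # minor / low-income
]
for age, income in paths:
    print(classify(age, income))
```

This is why path coverage is the stronger criterion: the number of paths multiplies with each decision, while the number of statements only adds.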
Condition coverage
In path coverage testing, though we have covered all the paths possible, it would
not mean that the program is fully covered.
Condition coverage = (Total decisions exercised / Total number of decisions in
program) × 100
82
82
The condition coverage as defined by the form#la alongside in the margin gives an
indication of the percentage of conditions covered by a set of test cases! Condition
coverage is a m#ch stronger criteria than statement coverage!
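The formula above is a simple ratio; a minimal sketch of the computation (the numbers in the usage line are made up for illustration):

```python
def condition_coverage(decisions_exercised, total_decisions):
    """Condition coverage = (decisions exercised / total decisions) * 100."""
    if total_decisions == 0:
        raise ValueError("program has no decisions")
    return 100.0 * decisions_exercised / total_decisions

# e.g. a test suite that exercises 6 of a program's 8 decision outcomes
print(condition_coverage(6, 8))   # 75.0
```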
Function coverage
This is a new addition to structural testing to identify how many program functions are covered by test cases. The requirements of a product are mapped into functions during the design phase, and each of the functions forms a logical unit. The advantages that function coverage provides over the other types of coverage are as follows:
• Functions are easier to identify in a program and hence it is easier to write test cases to provide function coverage.
• Since functions are at a much higher level of abstraction than code, it is easier to achieve 100 percent function coverage than 100 percent coverage in any of the earlier methods.
• Functions have a more logical mapping to requirements and hence can provide a more direct correlation to the test coverage of the product.
• Function coverage provides a natural transition to black box testing.
Basis path testing
Basis path testing is a white-box testing technique first proposed by Tom McCabe. The basis path method enables the test case designer to derive a logical complexity measure of a procedural design and use this measure as a guide for defining a basis set of execution paths. Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time during testing. Before the basis path method can be introduced, a simple notation for the representation of control flow, called a flow graph (or program graph), must be introduced. In actuality, the basis path method can be conducted without the use of flow graphs. However, they serve as a useful tool for understanding control flow and illustrating the approach.
Figure 12 - Flow graph notation
The above Figure 12 maps the flowchart into a corresponding flow graph (assuming that no compound conditions are contained in the decision diamonds of the flowchart). Referring to the figure, each circle, called a flow graph node, represents one or more procedural statements. A sequence of process boxes and a decision diamond can map into a single node. The arrows on the flow graph, called edges or links, represent flow of control and are analogous to flowchart arrows. An edge must terminate at a node, even if the node does not represent any procedural statements (e.g., see the symbol for the if-then-else construct). Areas bounded by edges and nodes are called regions. When counting regions, we include the area outside the graph as a region.
When compound conditions are encountered in a procedural design, the generation of a flow graph becomes slightly more complicated. A compound condition occurs when one or more Boolean operators (logical OR, AND, NAND, NOR) is present in a conditional statement. Referring to Figure 12, the PDL segment translates into the flow graph shown. Note that a separate node is created for each of the conditions a and b in the statement IF a OR b. Each node that contains a condition is called a predicate node and is characterized by two or more edges emanating from it.
2.2.3 Code complexity testing
Cyclomatic Complexity
Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program. When used in the context of the basis path testing method, the value computed for cyclomatic complexity defines the number of independent paths in the basis set of a program and provides us with an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once. An independent path is any path through the program that introduces at least one new set of processing statements or a new condition.
Figure 13 - Flowchart (A) and flow graph (B)
Figure 14 - Compound logic
In a flow graph, an independent path must move along at least one edge that has not been traversed before the path is defined. For example, a set of independent paths for the flow graph illustrated in the above Figure 14 is
path 1: 1-11
path 2: 1-2-3-4-5-10-1-11
path 3: 1-2-3-6-8-9-10-1-11
path 4: 1-2-3-6-7-9-10-1-11
Note that each new path introduces a new edge. The path 1-2-3-4-5-10-1-2-3-6-8-9-10-1-11 is not considered to be an independent path because it is simply a combination of already specified paths and does not traverse any new edges.
Paths 1, 2, 3, and 4 constitute a basis set for the flow graph in the figure given above. That is, if tests can be designed to force execution of these paths (a basis set), every statement in the program will have been guaranteed to be executed at least one time and every condition will have been executed on its true and false sides. It should be noted that the basis set is not unique. In fact, a number of different basis sets can be derived for a given procedural design.
Cyclomatic complexity is a useful metric for predicting those modules that are likely to be error prone. It can be used for test planning as well as test case design.
How do we know how many paths to look for? The computation of cyclomatic complexity provides the answer. Cyclomatic complexity has a foundation in graph theory and provides us with an extremely useful software metric. Complexity is computed in one of three ways:
1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. Cyclomatic complexity, V(G), for a flow graph, G, is defined as V(G) = E − N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes.
3. Cyclomatic complexity, V(G), for a flow graph, G, is also defined as V(G) = P + 1, where P is the number of predicate nodes contained in the flow graph G.
Referring once more to the flow graph in Figure 14, the cyclomatic complexity can be computed using each of the algorithms just noted:
1. The flow graph has four regions.
2. V(G) = 11 edges − 9 nodes + 2 = 4.
3. V(G) = 3 predicate nodes + 1 = 4.
Therefore, the cyclomatic complexity of the flow graph in Figure 14 is 4. More important, the value for V(G) provides us with an upper bound for the number of independent paths that form the basis set and, by implication, an upper bound on the number of tests that must be designed and executed to guarantee coverage of all program statements.
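The three formulas can be checked mechanically. The sketch below uses a small hypothetical flow graph (for IF a THEN X ELSE Y followed by WHILE c DO Z; the node numbering is invented for illustration) and shows that E − N + 2 and P + 1 agree.

```python
def cyclomatic_complexity(edges, nodes):
    """V(G) = E - N + 2 for a connected flow graph."""
    return len(edges) - len(nodes) + 2

# Hypothetical flow graph: node 1 = if test, node 4 = while test
nodes = {1, 2, 3, 4, 5, 6}
edges = {(1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (5, 4), (4, 6)}
predicate_nodes = {1, 4}            # nodes with two or more outgoing edges

v_g = cyclomatic_complexity(edges, nodes)      # 7 - 6 + 2 = 3
assert v_g == len(predicate_nodes) + 1         # P + 1 gives the same value
```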
Deriving Test Cases
The basis path testing method can be applied to a procedural design or to source code. In this section, we present basis path testing as a series of steps. The procedure average, depicted in PDL below, will be used as an example to illustrate each step in the test case design method. Note that average, although an extremely simple algorithm, contains compound conditions and loops. The following steps can be applied to derive the basis set:
Figure 15 - Flow graph for the procedure average
1. Using the design or code as a foundation, draw a corresponding flow graph. A flow graph is created using the symbols and construction rules. The corresponding flow graph is in the figure given above.
2. Determine the cyclomatic complexity of the resultant flow graph. The cyclomatic complexity, V(G), is determined by applying the algorithms. It should be noted that V(G) can be determined without developing a flow graph by counting all conditional statements in the PDL (for the procedure average, compound conditions count as two) and adding 1.
Referring to the figure,
V(G) = 6 regions
V(G) = 17 edges − 13 nodes + 2 = 6
V(G) = 5 predicate nodes + 1 = 6
3. Determine a basis set of linearly independent paths. The value of V(G) provides the number of linearly independent paths through the program control structure. In the case of procedure average, we expect to specify six paths:
path 1: 1-2-10-11-13
path 2: 1-2-10-12-13
path 3: 1-2-3-10-11-13
path 4: 1-2-3-4-5-8-9-2-...
path 5: 1-2-3-4-5-6-8-9-2-...
path 6: 1-2-3-4-5-6-7-8-9-2-...
The ellipsis (...) following paths 4, 5, and 6 indicates that any path through the remainder of the control structure is acceptable. It is often worthwhile to identify predicate nodes as an aid in the derivation of test cases. In this case, nodes 2, 3, 5, 6, and 10 are predicate nodes.
4. Prepare test cases that will force execution of each path in the basis set. Data should be chosen so that conditions at the predicate nodes are appropriately set as each path is tested. Test cases that satisfy the basis set just described exercise the following procedure:
PROCEDURE average;
* This procedure computes the average of 100 or fewer numbers that lie between bounding values; it also computes the sum and the total number valid.
INTERFACE RETURNS average, total.input, total.valid;
INTERFACE ACCEPTS value, minimum, maximum;
TYPE value[1:100] IS SCALAR ARRAY;
TYPE average, total.input, total.valid;
     minimum, maximum, sum IS SCALAR;
TYPE i IS INTEGER;
i = 1;
total.input = total.valid = 0;
sum = 0;
DO WHILE value[i] <> -999 AND total.input < 100
    increment total.input by 1;
    IF value[i] >= minimum AND value[i] <= maximum
        THEN increment total.valid by 1;
             sum = sum + value[i]
        ELSE skip
    ENDIF
    increment i by 1;
ENDDO
IF total.valid > 0
    THEN average = sum / total.valid;
    ELSE average = -999;
ENDIF
END average
Path 1 test case:
value(k) = valid input, where k < i for 2 ≤ i ≤ 100
value(i) = -999 where 2 ≤ i ≤ 100
Expected results: Correct average based on k values and proper totals.
Note: Path 1 cannot be tested stand-alone but must be tested as part of path 4, 5, and 6 tests.
Path 2 test case:
value(1) = -999
Expected results: Average = -999; other totals at initial values.
Path 3 test case:
Attempt to process 101 or more values.
First 100 values should be valid.
Expected results: Same as test case 1.
Path 4 test case:
value(i) = valid input where i < 100
value(k) < minimum where k < i
Expected results: Correct average based on k values and proper totals.
Path 5 test case:
value(i) = valid input where i < 100
value(k) > maximum where k <= i
Expected results: Correct average based on n values and proper totals.
Path 6 test case:
value(i) = valid input where i < 100
Expected results: Correct average based on n values and proper totals.
Each test case is executed and compared to expected results. Once all test cases have been completed, the tester can be sure that all statements in the program have been executed at least once. It is important to note that some independent paths (e.g., path 1 in our example) cannot be tested in stand-alone fashion. That is, the combination of data required to traverse the path cannot be achieved in the normal flow of the program. In such cases, these paths are tested as part of another path test.
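The procedure average and its basis-path test cases can be sketched in Python. This is a hypothetical translation (0-based list indexing, a tuple return value) rather than the book's PDL, with some of the path tests written as assertions.

```python
def average(values, minimum, maximum):
    """Average of 100 or fewer numbers lying within [minimum, maximum].
    Returns (average, total_input, total_valid); -999 is the sentinel/error value."""
    i = 0
    total_input = total_valid = 0
    total = 0
    while i < len(values) and values[i] != -999 and total_input < 100:
        total_input += 1
        if minimum <= values[i] <= maximum:
            total_valid += 1
            total += values[i]
        i += 1
    if total_valid > 0:
        avg = total / total_valid
    else:
        avg = -999
    return avg, total_input, total_valid

# Path 2: first value is the sentinel -> average = -999, totals at initial values
assert average([-999], 0, 100) == (-999, 0, 0)
# Loop-body paths combined with path 1: valid and invalid inputs
assert average([10, 20, -999], 0, 100) == (15.0, 2, 2)
assert average([10, 500, -999], 0, 100) == (10.0, 2, 1)   # 500 exceeds maximum
# Path 3: attempt to process more than 100 values; input stops at 100
assert average([1] * 150, 0, 10) == (1.0, 100, 100)
```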
Graph Matrices
The procedure for deriving the flow graph and even determining a set of basis paths is amenable to mechanization. To develop a software tool that assists in basis path testing, a data structure, called a graph matrix, can be quite useful. A graph matrix is a square matrix whose size (i.e., number of rows and columns) is equal to the number of nodes on the flow graph. Each row and column corresponds to an identified node, and matrix entries correspond to connections (an edge) between nodes. Each node on the flow graph is identified by numbers, while each edge is identified by letters. A letter entry is made in the matrix to correspond to a connection between two nodes. For example, node 3 is connected to node 4 by edge b.
To this point, the graph matrix is nothing more than a tabular representation of a flow graph. However, by adding a link weight to each matrix entry, the graph matrix can become a powerful tool for evaluating program control structure during testing. The link weight provides additional information about control flow. In its simplest form, the link weight is 1 (a connection exists) or 0 (a connection does not exist). But link weights can be assigned other, more interesting properties:
• The probability that a link (edge) will be executed.
• The processing time expended during traversal of a link.
• The memory required during traversal of a link.
• The resources required during traversal of a link.
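With 1/0 link weights, the graph matrix becomes a connection matrix that can be processed mechanically. A minimal sketch for a hypothetical four-node graph: summing each row counts outgoing connections, and rows with two or more entries identify predicate nodes.

```python
# Graph (connection) matrix: rows/columns are nodes, entries are link
# weights (1 = connection exists, 0 = none). Hypothetical 4-node graph.
matrix = [
    [0, 1, 0, 0],   # node 1 -> node 2
    [0, 0, 1, 1],   # node 2 -> nodes 3 and 4 (a predicate node)
    [0, 0, 0, 1],   # node 3 -> node 4
    [0, 0, 0, 0],   # node 4: exit, no outgoing edges
]

connections = sum(map(sum, matrix))                    # total number of edges
predicates = sum(1 for row in matrix if sum(row) >= 2) # rows with 2+ entries
print(connections, predicates)   # 4 1
```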
Control structure testing
The basis path testing technique is one of a number of techniques for control structure testing. Although basis path testing is simple and highly effective, it is not sufficient in itself. In this section, other variations on control structure testing are discussed. These broaden testing coverage and improve the quality of white-box testing.
Condition Testing
Condition testing is a test case design method that exercises the logical conditions contained in a program module. A simple condition is a Boolean variable or a relational expression, possibly preceded with one NOT (¬) operator. A relational expression takes the form
E1 <relational-operator> E2
where E1 and E2 are arithmetic expressions and <relational-operator> is one of the following: <, ≤, =, ≠ (nonequality), >, or ≥. A compound condition is composed of two or more simple conditions, Boolean operators, and parentheses. We assume that Boolean operators allowed in a compound condition include OR (|), AND (&) and NOT (¬). A condition without relational expressions is referred to as a Boolean expression.
Therefore, the possible types of elements in a condition include a Boolean operator, a Boolean variable, a pair of Boolean parentheses (surrounding a simple or compound condition), a relational operator, or an arithmetic expression. If a condition is incorrect, then at least one component of the condition is incorrect. Therefore, types of errors in a condition include the following:
• Boolean operator error (incorrect/missing/extra Boolean operators).
• Boolean variable error.
• Boolean parenthesis error.
• Relational operator error.
• Arithmetic expression error.
The condition testing method focuses on testing each condition in the program. Condition testing strategies (discussed later in this section) generally have two advantages. First, measurement of test coverage of a condition is simple. Second, the test coverage of conditions in a program provides guidance for the generation of additional tests for the program. The purpose of condition testing is to detect not only errors in the conditions of a program but also other errors in the program.
If a test set for a program P is effective for detecting errors in the conditions contained in P, it is likely that this test set is also effective for detecting other errors in P. In addition, if a testing strategy is effective for detecting errors in a condition, then it is likely that this strategy will also be effective for detecting errors in a program. A number of condition testing strategies have been proposed. Branch testing is probably the simplest condition testing strategy. For a compound condition C, the true and false branches of C and every simple condition in C need to be executed at least once.
Domain testing requires three or four tests to be derived for a relational expression. For a relational expression of the form E1 <relational-operator> E2, three tests are required to make the value of E1 greater than, equal to, or less than that of E2. If <relational-operator> is incorrect and E1 and E2 are correct, then these three tests guarantee the detection of the relational operator error. To detect errors in E1 and E2, a test that makes the value of E1 greater or less than that of E2 should make the difference between these two values as small as possible.
For a Boolean expression with n variables, all of 2^n possible tests are required (n > 0). This strategy can detect Boolean operator, variable, and parenthesis errors, but it is practical only if n is small. Error-sensitive tests for Boolean expressions can also be derived. For a singular Boolean expression (a Boolean expression in which each Boolean variable occurs only once) with n Boolean variables (n > 0), we can easily generate a test set with fewer than 2^n tests such that this test set guarantees the detection of multiple Boolean operator errors and is also effective for detecting other errors.
Tai suggests a condition testing strategy that builds on the techniques just outlined. Called BRO (branch and relational operator) testing, the technique guarantees the detection of branch and relational operator errors in a condition provided that all Boolean variables and relational operators in the condition occur only once and have no common variables. The BRO strategy uses condition constraints for a condition C. A condition constraint for C with n simple conditions is defined as (D1, D2, ..., Dn), where Di (0 < i ≤ n) is a symbol specifying a constraint on the outcome of the ith simple condition in condition C. A condition constraint D for condition C is said to be covered by an execution of C if, during this execution of C, the outcome of each simple condition in C satisfies the corresponding constraint in D.
For a Boolean variable, B, we specify a constraint on the outcome of B that states that B must be either true (t) or false (f). Similarly, for a relational expression, the symbols >, =, < are used to specify constraints on the outcome of the expression.
As an example, consider the condition
C1: B1 & B2
where B1 and B2 are Boolean variables. The condition constraint for C1 is of the form (D1, D2), where each of D1 and D2 is t or f. The value (t, f) is a condition constraint for C1 and is covered by the test that makes the value of B1 to be true and the value of B2 to be false. The BRO testing strategy requires that the constraint set {(t, t), (f, t), (t, f)} be covered by the executions of C1. If C1 is incorrect due to one or more Boolean operator errors, at least one member of the constraint set will force C1 to fail.
As a second example, consider a condition of the form
C2: B1 & (E3 = E4)
where B1 is a Boolean expression and E3 and E4 are arithmetic expressions. A condition constraint for C2 is of the form (D1, D2), where D1 is t or f and D2 is >, =, <. Since C2 is the same as C1 except that the second simple condition in C2 is a relational expression, we can construct a constraint set for C2 by modifying the constraint set {(t, t), (f, t), (t, f)} defined for C1. Note that t for (E3 = E4) implies = and that f for (E3 = E4) implies either < or >. By replacing (t, t) and (f, t) with (t, =) and (f, =), respectively, and by replacing (t, f) with (t, <) and (t, >), the resulting constraint set for C2 is {(t, =), (f, =), (t, <), (t, >)}. Coverage of the preceding constraint set will guarantee detection of Boolean and relational operator errors in C2.
As a third example, we consider a condition of the form
C3: (E1 > E2) & (E3 = E4)
where E1, E2, E3 and E4 are arithmetic expressions. A condition constraint for C3 is of the form (D1, D2), where each of D1 and D2 is >, =, <. Since C3 is the same as C2 except that the first simple condition in C3 is a relational expression, we can construct a constraint set for C3 by modifying the constraint set for C2, obtaining {(>, =), (=, =), (<, =), (>, >), (>, <)}. Coverage of this constraint set will guarantee detection of relational operator errors in C3.
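Checking whether a candidate test set covers the BRO constraint set for C3 is mechanical. A minimal sketch (the five (E1, E2, E3, E4) test tuples are made-up values chosen to hit each constraint):

```python
def outcome(left, right):
    """Symbolic outcome of a relational comparison: '>', '=', or '<'."""
    return ">" if left > right else "=" if left == right else "<"

# BRO constraint set for C3: (E1 > E2) & (E3 = E4)
required = {(">", "="), ("=", "="), ("<", "="), (">", ">"), (">", "<")}

# Candidate test cases as (E1, E2, E3, E4) tuples
tests = [(5, 1, 2, 2), (1, 1, 2, 2), (0, 1, 2, 2), (5, 1, 3, 2), (5, 1, 1, 2)]
covered = {(outcome(e1, e2), outcome(e3, e4)) for e1, e2, e3, e4 in tests}

assert covered == required   # the five tests cover every BRO constraint
```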
Data Flow Testing
The data flow testing method selects test paths of a program according to the locations of definitions and uses of variables in the program. To illustrate the data flow testing approach, assume that each statement in a program is assigned a unique statement number and that each function does not modify its parameters or global variables.
For a statement with S as its statement number,
DEF(S) = {X | statement S contains a definition of X}
USE(S) = {X | statement S contains a use of X}
If statement S is an if or loop statement, its DEF set is empty and its USE set is based on the condition of statement S. The definition of variable X at statement S is said to be live at statement S' if there exists a path from statement S to statement S' that contains no other definition of X.
A definition-use (DU) chain of variable X is of the form [X, S, S'], where S and S' are statement numbers, X is in DEF(S) and USE(S'), and the definition of X in statement S is live at statement S'.
One simple data flow testing strategy is to require that every DU chain be covered at least once. We refer to this strategy as the DU testing strategy. It has been shown that DU testing does not guarantee the coverage of all branches of a program. However, a branch is not guaranteed to be covered by DU testing only in rare situations such as if-then-else constructs in which the then part has no definition of any variable and the else part does not exist. In this situation, the else branch of the if statement is not necessarily covered by DU testing.
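For straight-line code, DU chains can be computed directly from the DEF and USE sets. A minimal sketch over a hypothetical four-statement program (no branching, so liveness reduces to "no redefinition in between"):

```python
# Each statement is (number, defined_vars, used_vars). Hypothetical program:
program = [
    (1, {"x"}, set()),        # x = input()
    (2, {"y"}, {"x"}),        # y = x + 1
    (3, {"x"}, set()),        # x = 0
    (4, set(), {"x", "y"}),   # print(x, y)
]

def du_chains(program):
    """Return [X, S, S'] chains where X defined at S reaches a use at S'."""
    chains = []
    for i, (s, defs, _) in enumerate(program):
        for var in defs:
            for s2, defs2, uses2 in program[i + 1:]:
                if var in uses2:
                    chains.append((var, s, s2))
                if var in defs2:          # a redefinition kills the chain
                    break
    return chains

print(du_chains(program))   # [('x', 1, 2), ('y', 2, 4), ('x', 3, 4)]
```

The DU testing strategy would then require a test path exercising each of these three chains.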
Loop Testing
Figure 16 - Classes of loops
Loops are the cornerstone for the vast majority of all algorithms implemented in software. And yet, we often pay them little heed while conducting software tests. Loop testing is a white-box testing technique that focuses exclusively on the validity of loop constructs. Four different classes of loops can be defined: simple loops, concatenated loops, nested loops, and unstructured loops.
Simple loops. The following set of tests can be applied to simple loops, where n is the maximum number of allowable passes through the loop.
1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. m passes through the loop where m < n.
5. n − 1, n, n + 1 passes through the loop.
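The five simple-loop tests boil down to a short list of iteration counts; a sketch that generates them (the helper name and the typical value m = 50 are illustrative, not from the text):

```python
def simple_loop_test_counts(n, m=None):
    """Iteration counts to exercise for a simple loop with at most n passes:
    skip, one pass, two passes, a typical m (< n), and the n-1/n/n+1 boundary."""
    counts = [0, 1, 2]
    if m is not None and 2 < m < n:
        counts.append(m)
    counts += [n - 1, n, n + 1]
    return counts

print(simple_loop_test_counts(100, m=50))  # [0, 1, 2, 50, 99, 100, 101]
```

Note that the n + 1 case deliberately probes behavior beyond the allowable maximum.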
Nested loops. If we were to extend the test approach for simple loops to nested loops, the number of possible tests would grow geometrically as the level of nesting increases. This would result in an impractical number of tests. Beizer suggests an approach that will help to reduce the number of tests:
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter (e.g., loop counter) values. Add other tests for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop, but keeping all other outer loops at minimum values and other nested loops to "typical" values.
4. Continue until all loops have been tested.
Complex loop structures are another hiding place for bugs. It is well worth spending time designing tests that fully exercise loop structures.
Concatenated loops. Concatenated loops can be tested using the approach defined for simple loops, if each of the loops is independent of the other. However, if two loops are concatenated and the loop counter for loop 1 is used as the initial value for loop 2, then the loops are not independent. When the loops are not independent, the approach applied to nested loops is recommended.
Unstructured loops. Whenever possible, this class of loops should be redesigned to reflect the use of the structured programming constructs.
Challenges in white box testing
White box testing requires a sound knowledge of the program code and the programming language. This means that the developers should get intimately involved in white box testing. Developers, in general, do not like to perform testing functions. This applies to structural testing as well as static testing methods such as reviews. In addition, because of timeline pressures, the programmers may not find time for reviews. Further challenges include:
• The human tendency of a developer being unable to find the defects in his own code.
• Fully tested code may not correspond to realistic scenarios.
These challenges do not mean that white box testing is ineffective. But when white-box testing is carried out and these challenges are addressed by other means of testing, there is a higher likelihood of more effective testing.
2.3 INTEGRATION TESTING
A neophyte in the software world might ask a seemingly legitimate question once all modules have been unit tested: "If they all work individually, why do you doubt that they'll work when we put them together?" The problem, of course, is "putting them together": interfacing. Data can be lost across an interface; one module can have an inadvertent, adverse effect on another; sub functions, when combined, may not produce the desired major function; individually acceptable imprecision may be magnified to unacceptable levels; global data structures can present problems. Sadly, the list goes on and on. Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing.
The objective is to take unit tested components and build a program structure that has been dictated by design. There is often a tendency to attempt non-incremental integration; that is, to construct the program using a "big bang" approach. All components are combined in advance. The entire program is tested as a whole. And chaos usually results! A set of errors is encountered. Correction is difficult because isolation of causes is complicated by the vast expanse of the entire program. Once these errors are corrected, new ones appear and the process continues in a seemingly endless loop.
Incremental integration is the antithesis of the big bang approach. The program is constructed and tested in small increments, where errors are easier to isolate and correct; interfaces are more likely to be tested completely; and a systematic test approach may be applied. In the sections that follow, a number of different incremental integration strategies are discussed.
2.3.1 Top-down Integration
Figure 17 - Top-down integration testing
Top-down integration testing is an incremental approach to construction of program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program). Modules subordinate (and ultimately subordinate) to the main control module are incorporated into the structure in either a depth-first or breadth-first manner. Referring to Figure 17, depth-first integration would integrate all components on a major control path of the structure. Selection of a major path is somewhat arbitrary and depends on application-specific characteristics. For example, selecting the left hand path, components M1, M2, M5 would be integrated first. Next, M8 or (if necessary for proper functioning of M2) M6 would be integrated. Then, the central and right hand control paths are built.
Breadth-first integration incorporates all components directly subordinate at each level, moving across the structure horizontally. From the figure, components M2, M3, and M4 (a replacement for stub S4) would be integrated first. The next control level, M5, M6, and so on, follows.
The integration process is performed in a series of five steps:
1. The main control module is used as a test driver and stubs are substituted for all components directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth or breadth first), subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing may be conducted to ensure that new errors have not been introduced.
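The stub-then-replace cycle above can be sketched with a mock object standing in for a not-yet-integrated subordinate module. The `process_order` control module and the `quote` interface are hypothetical names for illustration.

```python
from unittest.mock import Mock

# Hypothetical main control module depending on a subordinate pricing module
def process_order(order, pricing_module):
    price = pricing_module.quote(order["item"])
    return {"item": order["item"], "price": price}

# Step 1: the subordinate component is not integrated yet, so a stub
# stands in for it and returns canned data.
pricing_stub = Mock()
pricing_stub.quote.return_value = 9.99

# Step 3: test the control module against the stub
result = process_order({"item": "widget"}, pricing_stub)
assert result == {"item": "widget", "price": 9.99}
# Steps 2 and 4: swap the stub for the real pricing module and re-run the
# same test, which doubles as a regression test (step 5).
```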
The process continues from step 2 until the entire program structure is built. The top-down integration strategy verifies major control or decision points early in the test process. In a well-factored program structure, decision making occurs at upper levels in the hierarchy and is therefore encountered first. If major control problems do exist, early recognition is essential. If depth-first integration is selected, a complete function of the software may be implemented and demonstrated.
The incoming path may be integrated in a top-down manner. All input processing (for subsequent transaction dispatching) may be demonstrated before other elements of the structure have been integrated. Early demonstration of functional capability is a confidence builder for both the developer and the customer.
Top-down strategy sounds relatively uncomplicated, but in practice, logistical problems can arise. The most common of these problems occurs when processing at low levels in the hierarchy is required to adequately test upper levels. Stubs replace low level modules at the beginning of top-down testing; therefore, no significant data can flow upward in the program structure. The tester is left with three choices:
(1) Delay many tests until stubs are replaced with actual modules,
(2) Develop stubs that perform limited functions that simulate the actual module, or
(3) Integrate the software from the bottom of the hierarchy upward.
The first approach (delay tests until stubs are replaced by actual modules) causes us to lose some control over correspondence between specific tests and incorporation of specific modules. This can lead to difficulty in determining the cause of errors and tends to violate the highly constrained nature of the top-down approach. The second approach is workable but can lead to significant overhead, as stubs become more and more complex. The third approach, called bottom-up testing, is discussed in the next section.
2.3.2 Bottom-up Integration
Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules (i.e., components at the lowest levels in the program structure). Because components are integrated from the bottom up, processing required for components subordinate to a given level is always available and the need for stubs is eliminated.
A bottom-up integration strategy may be implemented with the following steps:
1. Low-level components are combined into clusters (sometimes called builds) that perform a specific software sub function.
2. A driver (a control program for testing) is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.
Figure 18 - Bottom-up Integration
Integration follows the pattern illustrated in Figure 18. Components are combined to form clusters 1, 2, and 3. Each of the clusters is tested using a driver (shown as a dashed block). Components in clusters 1 and 2 are subordinate to Ma. Drivers D1 and D2 are removed and the clusters are interfaced directly to Ma. Similarly, driver D3 for cluster 3 is removed prior to integration with module Mb. Both Ma and Mb will ultimately be integrated with component Mc, and so forth. Bottom-up integration eliminates the need for complex stubs.
As integration moves upward, the need for separate test drivers lessens. In fact, if the top two levels of program structure are integrated top down, the number of drivers can be reduced substantially and integration of clusters is greatly simplified.
omment$ on Integration Te$ting
There has been m#ch disc#ssion of the relative advantages and disadvantages of
top$down vers#s bottom$#p integration testing! n general, the advantages of one
strategy tend to res#lt in disadvantages for the other strategy! The ma*or disadvantage
of the top$down approach is the need for st#bs and the attendant testing diffic#lties
that can be associated with them! Problems associated with st#bs may be offset by the
advantage of testing ma*or control f#nctions early! The ma*or disadvantage of
bottom$#p integration is that =the program as an entity does not exist #ntil the last
mod#le is added=! This drawbac0 is tempered by easier test case design and a lac0 of
st#bs!
Selection of an integration strategy depends upon software characteristics and, sometimes, project schedule. In general, a combined approach (sometimes called sandwich testing) that uses top-down tests for upper levels of the program structure, coupled with bottom-up tests for subordinate levels, may be the best compromise. As integration testing is conducted, the tester should identify critical modules. A critical module has one or more of the following characteristics:
(1) Addresses several software requirements,
(2) Has a high level of control (resides relatively high in the program structure),
(3) Is complex or error prone (cyclomatic complexity may be used as an indicator), or
(4) Has definite performance requirements.
Critical modules should be tested as early as possible. In addition, regression tests should focus on critical module function.
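Cyclomatic complexity, the indicator mentioned in (3), can be computed from a module's control-flow graph as V(G) = E - N + 2P (edges minus nodes plus twice the number of connected components). A small worked sketch; the node and edge counts are illustrative, not taken from any particular module:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """V(G) = E - N + 2P for a control-flow graph; P = 1 for the
    flow graph of a single function."""
    return edges - nodes + 2 * components

# A function with one if/else and one loop might yield a flow graph
# with 7 nodes and 8 edges: V(G) = 8 - 7 + 2 = 3, i.e. three
# independent paths that a thorough module test should cover.
print(cyclomatic_complexity(8, 7))  # → 3
```

The higher V(G) climbs, the more paths must be exercised, which is why it serves as a rough error-proneness indicator for spotting critical modules.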
2.3.3 Integration Test Documentation
An overall plan for integration of the software and a description of specific tests are documented in a Test Specification. This document contains a test plan and a test procedure, is a work product of the software process, and becomes part of the software configuration. The test plan describes the overall strategy for integration. Testing is divided into phases and builds that address specific functional and behavioral characteristics of the software. For example, integration testing for a CAD system might be divided into the following test phases:
User interaction (command selection, drawing creation, display representation, error processing and representation).
Data manipulation and analysis (symbol creation, dimensioning; rotation, computation of physical properties).
Display processing and generation (two-dimensional displays, three-dimensional displays, graphs and charts).
Database management (access, update, integrity, performance).
Each of these phases and sub-phases (denoted in parentheses) delineates a broad functional category within the software and can generally be related to a specific domain of the program structure. Therefore, program builds (groups of modules) are created to correspond to each phase. The following criteria and corresponding tests are applied for all test phases:
Interface integrity. Internal and external interfaces are tested as each module (or cluster) is incorporated into the structure.
Functional validity. Tests designed to uncover functional errors are conducted.
Information content. Tests designed to uncover errors associated with local or global data structures are conducted.
Performance. Tests designed to verify performance bounds established during software design are conducted.
A schedule for integration, the development of overhead software, and related topics is also discussed as part of the test plan. Start and end dates for each phase are established and "availability windows" for unit tested modules are defined. A brief description of overhead software (stubs and drivers) concentrates on characteristics that might require special effort. Finally, test environment and resources are described.
Validation testing
At the culmination of integration testing, software is completely assembled as a package, interfacing errors have been uncovered and corrected, and a final series of software tests, validation testing, may begin. Validation can be defined in many ways, but a simple (albeit harsh) definition is that validation succeeds when software functions in a manner that can be reasonably expected by the customer. At this point a battle-hardened software developer might protest: "Who or what is the arbiter of reasonable expectations?" Reasonable expectations are defined in the Software Requirements Specification, a document that describes all user-visible attributes of the software. The specification contains a section called Validation Criteria. Information contained in that section forms the basis for a validation testing approach.
Validation Test Criteria
Software validation is achieved through a series of black-box tests that demonstrate conformity with requirements. A test plan outlines the classes of tests to be conducted and a test procedure defines specific test cases that will be used to demonstrate conformity with requirements. Both the plan and procedure are designed to ensure that all functional requirements are satisfied, all behavioral characteristics are achieved, all performance requirements are attained, documentation is correct, and human-engineered and other requirements are met (e.g., transportability, compatibility, error recovery, maintainability).
After each validation test case has been conducted, one of two possible conditions exists:
(1) The function or performance characteristics conform to specification and are accepted, or
(2) A deviation from specification is uncovered and a deficiency list is created.
Deviations or errors discovered at this stage in a project can rarely be corrected prior to scheduled delivery. It is often necessary to negotiate with the customer to establish a method for resolving deficiencies.
Configuration Review
An important element of the validation process is a configuration review. The intent of the review is to ensure that all elements of the software configuration have been properly developed, are cataloged, and have the necessary detail to bolster the support phase of the software life cycle.
2.3.5 Alpha and Beta Testing
It is virtually impossible for a software developer to foresee how the customer will really use a program. Instructions for use may be misinterpreted; strange combinations of data may be regularly used; output that seemed clear to the tester may be unintelligible to a user in the field. When custom software is built for one customer, a series of acceptance tests are conducted to enable the customer to validate all requirements. Conducted by the end user rather than software engineers, an acceptance test can range from an informal "test drive" to a planned and systematically executed series of tests. In fact, acceptance testing can be conducted over a period of weeks or months, thereby uncovering cumulative errors that might degrade the system over time.
If software is developed as a product to be used by many customers, it is impractical to perform formal acceptance tests with each one. Most software product builders use a process called alpha and beta testing to uncover errors that only the end user seems able to find.
The alpha test is conducted at the developer's site by a customer. The software is used in a natural setting with the developer "looking over the shoulder" of the user and recording errors and usage problems. Alpha tests are conducted in a controlled environment.
The beta test is conducted at one or more customer sites by the end user of the software. Unlike alpha testing, the developer is generally not present. Therefore, the beta test is a "live" application of the software in an environment that cannot be controlled by the developer. The customer records all problems (real or imagined) that are encountered during beta testing and reports these to the developer at regular intervals. As a result of problems reported during beta tests, software engineers make modifications and then prepare for release of the software product to the entire customer base.
2.5 SYSTEM AND ACCEPTANCE TESTING
The testing conducted on the complete integrated products and solutions to evaluate system compliance with specified requirements on functional and non-functional aspects is called system testing. System testing is conducted with an objective to find product-level defects and to build confidence before the product is released to the customer. Since system testing is the last phase of testing before the release, not all defects can be fixed in code in time, owing to the time and effort needed in development and testing and to the potential risk involved in any last-minute changes.
Hence, an impact analysis is done for those defects to reduce the risk of releasing a product with defects. The analysis of defects and their classification into various categories also gives an idea about the kind of defects that will be found by the customer after release. This information helps in planning some activities such as providing workarounds, documentation on alternative approaches, and so on. Hence, system testing helps in reducing the risk of releasing a product.
2.5.1 System testing
System testing is defined as a testing phase conducted on the complete integrated system, to evaluate the system's compliance with its specified requirements. It is done after the unit, component, and integration testing phases. System testing is the only phase of testing that tests both the functional and non-functional aspects of the product.
On the functional side, system testing focuses on real-life customer usage of the product and solutions. System testing simulates customer deployments.
On the non-functional side, system testing brings in different testing types, some of which are as follows:
1. Performance/Load testing
2. Scalability testing
3. Reliability testing
4. Stress testing
5. Interoperability testing
6. Localization testing
Software is only one element of a larger computer-based system. Ultimately, software is incorporated with other system elements (e.g., hardware, people, information), and a series of system integration and validation tests are conducted. These tests fall outside the scope of the software process and are not conducted solely by software engineers. However, steps taken during software design and testing can greatly improve the probability of successful software integration in the larger system.
A classic system testing problem is "finger-pointing." This occurs when an error is uncovered, and each system element developer blames the others for the problem. Rather than indulging in such nonsense, the software engineer should anticipate potential interfacing problems and
(1) Design error-handling paths that test all information coming from other elements of the system,
(2) Conduct a series of tests that simulate bad data or other potential errors at the software interface,
(3) Record the results of tests to use as "evidence" if finger-pointing does occur, and
(4) Participate in planning and design of system tests to ensure that software is adequately tested.
System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system. Although each test has a different purpose, all work to verify that system elements have been properly integrated and perform allocated functions. In the sections that follow, we discuss the types of system tests that are worthwhile for software-based systems.
To summarize, system testing is done for the following reasons:
Provide an independent perspective in testing
Bring in the customer perspective in testing
Provide a "fresh pair of eyes" to discover defects not found earlier by testing
Test product behavior in a holistic, complete, and realistic environment
Test both functional and non-functional aspects of the product
Build confidence in the product
Analyze and reduce the risk of releasing the product
Ensure all requirements are met and ready the product for acceptance testing.
Functional system testing
Functional testing is performed at different phases and the focus is on product-level features. As functional testing is performed at various testing phases, there are two obvious problems. One is duplication and the other is gray area. Duplication refers to the same tests being performed multiple times, and gray area refers to certain tests being missed out in all the phases. Gray areas in testing happen due to lack of product knowledge, lack of knowledge of customer usage, and lack of coordination across test teams. There are multiple ways system functional testing is performed. There are also many ways product-level test cases are derived for functional testing.
Functional vs non-functional testing

Testing aspect            Functional testing                   Non-functional testing
Involves                  Product features and functionality   Quality factors
Tests                     Product behavior                     Behavior and experience
Result conclusion         Simple steps written to check        Huge data collected and
                          expected results                     analyzed
Result varies due to      Product implementation               Product implementation,
                                                               resources, and configurations
Testing focus             Defect detection                     Qualification of product
Knowledge required        Product and domain                   Product, domain, design,
                                                               architecture, statistical skills
Failures normally due to  Code                                 Architecture, design, and code
Testing phase             Unit, component, integration,        System
                          system
Test case repeatability   Repeated many times                  Repeated only in case of failures
                                                               and for different configurations
Configuration             One-time setup for a set of          Configuration changes for each
                          test cases                           test case

Some of the common techniques are given below:
Design and architecture verification
Business vertical testing
Deployment testing
Beta testing
Certification, standards, and testing for compliance.
Non-functional testing
The process followed by non-functional testing is similar to that of functional testing, but it differs in the aspects of complexity, knowledge requirement, effort needed, and the number of times the test cases are repeated. Since repeating non-functional test cases involves more time, effort, and resources, the process for non-functional testing has to be more robust than that for functional testing, to minimize the need for repetition. This is achieved by having more stringent entry/exit criteria, better planning, and by setting up the configuration with data population in advance for test execution.
Recovery Testing
Many computer-based systems must recover from faults and resume processing within a pre-specified time. In some cases, a system must be fault tolerant; that is, processing faults must not cause overall system function to cease. In other cases, a system failure must be corrected within a specified period of time or severe economic damage will occur.
Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic (performed by the system itself), reinitialization, checkpointing mechanisms, data recovery, and restart are evaluated for correctness. If recovery requires human intervention, the mean-time-to-repair (MTTR) is evaluated to determine whether it is within acceptable limits.
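The shape of an automated recovery test can be sketched as follows. The Service and Supervisor classes are hypothetical in-process stand-ins for a real server and its restart mechanism; a real recovery test would kill an actual process:

```python
import time

class Service:
    """Hypothetical stand-in for a real server process."""
    def __init__(self):
        self.alive = True
    def crash(self):
        self.alive = False

class Supervisor:
    """Restarts the service when it is found dead (automatic recovery)."""
    def __init__(self, service):
        self.service = service
    def watch(self):
        if not self.service.alive:
            self.service = Service()   # reinitialize and restart
        return self.service

def recovery_test(max_recovery_seconds=1.0):
    """Force a failure, trigger recovery, and check both that the
    service came back and that it did so within the time bound."""
    sup = Supervisor(Service())
    sup.service.crash()                # force the failure
    start = time.monotonic()
    svc = sup.watch()                  # automatic recovery
    elapsed = time.monotonic() - start
    return svc.alive and elapsed <= max_recovery_seconds

print(recovery_test())  # → True
```

The same skeleton (force failure, recover, assert on state and elapsed time) applies whether recovery is automatic or the elapsed time is being compared against an MTTR limit.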
Scalability testing
The objective of scalability testing is to find out the maximum capability of the product parameters. As the exercise involves finding the maximum, the resources that are needed for this kind of testing are normally very high. At the beginning of the scalability exercise, there may not be an obvious clue about the maximum capability of the system. Hence a high-end configuration is selected and the scalability parameter is increased step by step to reach the maximum capability.
Failures during a scalability test include the system not responding, the system crashing, and so on. Scalability tests help in identifying the major bottlenecks in a product. When resources are found to be the bottleneck, they are increased after validating the assumptions mentioned. Scalability tests are performed on different configurations to check the product's behavior.
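The step-by-step search for the maximum capability can be sketched like this; system_handles is a hypothetical probe that reports whether the system still copes with the offered load:

```python
def find_max_capacity(system_handles, start=100, step=100, ceiling=10_000):
    """Increase the scalability parameter step by step until the system
    fails, and report the last load level it handled successfully."""
    last_ok = 0
    load = start
    while load <= ceiling:
        if not system_handles(load):
            break                      # system stopped responding or crashed
        last_ok = load
        load += step
    return last_ok

# Hypothetical system that copes with up to 750 concurrent users:
# the probe succeeds at 100..700 and fails at 800.
print(find_max_capacity(lambda users: users <= 750))  # → 700
```

The step size trades precision against test cost; a real exercise might refine the last interval with smaller steps once the first failure is seen.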
There can be some bottlenecks during scalability testing which will require certain OS parameters and product parameters to be tuned. "Number of open files" and "Number of product threads" are some examples of parameters that may need tuning. When such tuning is performed, it should be appropriately documented. A document containing such tuning parameters and the recommended values of other product and environmental parameters for attaining the scalability numbers is called a sizing guide. This guide is one of the mandatory deliverables from scalability testing.
Reliability testing
Reliability testing is done to evaluate the product's ability to perform its required functions under stated conditions for a specified period of time or for a large number of iterations. Examples of reliability tests include querying a database continuously for 48 hours and performing login operations 10,000 times.
The reliability of a product should not be confused with reliability testing. Reliability here is an all-encompassing term used to mean all the quality factors and functionality aspects of the product. This product reliability is achieved by focusing on the following activities:
Defined engineering processes
Review of work products at each stage
Change management procedures
Review of testing coverage
Ongoing monitoring of the product
Reliability testing, on the other hand, refers to testing the product for a continuous period of time. Reliability testing only delivers a "reliability tested product", not a reliable product. The main factor that is taken into account for reliability testing is defects.
To summarize, a "reliability tested product" will have the following characteristics:
No errors or very few errors from repeated transactions
Zero downtime
Optimum utilization of resources
Consistent performance and response time of the product for repeated transactions for a specified time duration
No side effects after the repeated transactions are executed.
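A reliability run of the kind described above (for example, repeating a login operation many times) can be sketched as follows. The login function and the thresholds are illustrative; a real run would drive the actual product:

```python
import time

def reliability_run(operation, iterations=10_000, max_failures=0,
                    max_avg_seconds=0.01):
    """Repeat a transaction many times; pass only if failures stay
    within the allowed count and the average response time stays
    within the consistency bound."""
    failures = 0
    total = 0.0
    for _ in range(iterations):
        start = time.monotonic()
        try:
            operation()
        except Exception:
            failures += 1
        total += time.monotonic() - start
    avg = total / iterations
    return failures <= max_failures and avg <= max_avg_seconds

def login():
    """Hypothetical login transaction standing in for the product call."""
    return True

print(reliability_run(login, iterations=1000))  # → True
```

Recording the per-iteration times rather than only the average would also expose the drift or degradation over time that the characteristics above rule out.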
Security Testing
Any computer-based system that manages sensitive information or causes actions that can improperly harm (or benefit) individuals is a target for improper or illegal penetration. Penetration spans a broad range of activities: hackers who attempt to penetrate systems for sport; disgruntled employees who attempt to penetrate for revenge; dishonest individuals who attempt to penetrate for illicit personal gain.
Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration. To quote Beizer: "The system's security must, of course, be tested for invulnerability from frontal attack, but must also be tested for invulnerability from flank or rear attack."
During security testing, the tester plays the role(s) of the individual who desires to penetrate the system. Anything goes! The tester may attempt to acquire passwords through external clerical means; may attack the system with custom software designed to break down any defenses that have been constructed; may overwhelm the system, thereby denying service to others; may purposely cause system errors, hoping to penetrate during recovery; may browse through insecure data, hoping to find the key to system entry.
Given enough time and resources, good security testing will ultimately penetrate a system. The role of the system designer is to make penetration cost more than the value of the information that will be obtained.
Stress Testing
During earlier software testing steps, white-box and black-box techniques resulted in thorough evaluation of normal program functions and performance. Stress tests are designed to confront programs with abnormal situations. In essence, the tester who performs stress testing asks: "How high can we crank this up before it fails?" Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. For example,
(1) special tests may be designed that generate ten interrupts per second, when one or two is the average rate,
(2) input data rates may be increased by an order of magnitude to determine how input functions will respond,
(3) test cases that require maximum memory or other resources are executed,
(4) test cases that may cause thrashing in a virtual operating system are designed, and
(5) test cases that may cause excessive hunting for disk-resident data are created.
Essentially, the tester attempts to break the program.
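Item (2) above, driving the input rate an order of magnitude beyond normal, can be sketched as follows. The handler and the rates are hypothetical; a real stress harness would offer load to the actual interface:

```python
def stress_input_rate(handle_request, normal_rate=10, factor=10,
                      duration=1):
    """Offer input at `factor` times the normal rate for `duration`
    seconds' worth of requests (modeled sequentially here) and count
    how many requests the system fails to absorb."""
    offered = normal_rate * factor * duration
    dropped = 0
    for i in range(offered):
        if not handle_request(i):
            dropped += 1
    return offered, dropped

# Hypothetical handler that can keep up with only 60 requests per run:
# at 10x the normal rate, 100 requests are offered and 40 are dropped.
offered, dropped = stress_input_rate(lambda i: i < 60)
print(offered, dropped)  # → 100 40
```

What matters to the stress tester is not the drop count itself but how the system fails: graceful rejection and recovery are acceptable outcomes, a crash or corruption is not.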
A variation of stress testing is a technique called sensitivity testing. In some situations (the most common occur in mathematical algorithms), a very small range of data contained within the bounds of valid data for a program may cause extreme and even erroneous processing or profound performance degradation. Sensitivity testing attempts to uncover data combinations within valid input classes that may cause instability or improper processing.
Interoperability testing
Interoperability testing is done to ensure that two or more products can exchange information, use the information, and work properly together. Systems can be interoperable unidirectionally or bi-directionally. Unless two or more products are designed for exchanging information, interoperability cannot be achieved. The following are some guidelines that help in improving interoperability:
1. Consistency of information flow across systems
2. Changes to data representation as per the system requirements
3. Correlated interchange of messages and receiving appropriate responses
4. Communication and messages
5. Meeting quality factors.
2.5.2 ACCEPTANCE TESTING
Acceptance testing is a phase after system testing that is normally done by the customers or representatives of the customers. The customer defines a set of test cases that will be executed to qualify and accept the product. These test cases are executed by the customers themselves to quickly judge the quality of the product before deciding to buy it. Acceptance test cases are normally small in number and are not written with the intention of finding defects. Acceptance tests are written to execute near real-life scenarios. Apart from verifying the functional requirements, acceptance tests are run to verify the non-functional aspects of the system as well. Acceptance test cases failing at a customer site may cause the product to be rejected, and may mean financial loss or rework of the product involving effort and time.
Acceptance criteria
Acceptance criteria - product acceptance
During the requirements phase, each requirement is associated with acceptance criteria. It is possible that one or more requirements may be mapped to form acceptance criteria. Whenever there are changes to requirements, the acceptance criteria are accordingly modified and maintained. Acceptance testing is not meant for executing test cases that have not been executed before. Hence, the existing test cases are looked at, and certain categories of test cases can be grouped to form acceptance criteria.
Acceptance criteria - procedure acceptance
Acceptance criteria can be defined based on the procedures followed for delivery. An example of procedure acceptance could be documentation and release media. Some examples of acceptance criteria of this nature are as follows:
User, administration, and troubleshooting documentation should be part of the release.
Along with binary code, the source code of the product with build scripts is to be delivered on a CD.
A minimum of 20 employees are trained on the product usage prior to deployment.
These procedural acceptance criteria are verified/tested as part of acceptance testing.
Acceptance criteria - service level agreements
Service level agreements are generally part of a contract signed by the customer and the product organization. The important contract items are taken and verified as part of acceptance testing.
Selecting test cases for acceptance testing
This section gives some guidelines on what test cases can be included for acceptance testing:
End-to-end functionality verification
Domain tests
User scenario tests
Basic sanity tests
New functionality
A few non-functional tests
Tests pertaining to legal obligations and service level agreements
Acceptance test data
Executing acceptance tests
Sometimes the customers themselves do the acceptance tests. In such cases, the job of the product organization is to assist the customers in acceptance testing and resolve the issues that come out of it. If the acceptance testing is done by the product organization, forming the acceptance test team becomes an important activity. An acceptance test team usually comprises members who are involved in the day-to-day activities of the product usage or are familiar with such scenarios. The product management, support, and consulting teams, who have good knowledge of the customers, contribute to the acceptance testing definition and execution. They may not be familiar with the testing process or the technical aspects of the software, but they know whether the product does what it is intended to do. An acceptance test team may be formed with 90% of its members possessing the required business process knowledge of the product and 10% being representatives of the technical testing team. The number of test team members needed to perform acceptance testing is small when compared to other phases of testing.
The role of the testing team members during and prior to acceptance testing is crucial, since they may constantly interact with the acceptance team members. Test team members help the acceptance team members to get the required test data, select and identify test cases, and analyze the acceptance test results. During test execution, the acceptance test team reports its progress regularly. The defect reports are generated on a periodic basis.
Defects reported during acceptance tests could be of different priorities. Test teams help the acceptance test team report defects. Showstopper and high-priority defects are necessarily fixed before the software is released. In case major defects are identified during acceptance testing, there is a risk of missing the release date. When the defect fixes point to scope or requirement changes, it may either result in the extension of the release date to include the feature in the current release, or the feature may get postponed to subsequent releases. All resolutions of those defects are discussed with the acceptance test team, and their approval is obtained for concluding the completion of acceptance testing.
2.6 Summary
White box testing requires a sound knowledge of the program code and the programming language. This means that the developers should get intimately involved in white box testing.
All testing activities that are conducted from the point where two components are integrated to the point where all system components work together are considered a part of the integration testing phase. The integration testing phase involves developing and executing test cases that cover multiple components and functionality. This testing is both a type of testing and a phase of testing. Integration testing, if done properly, can reduce the number of defects that will be found in the system testing phase.
System testing is conducted with an objective to find product-level defects and to build confidence before the product is released to the customer. System testing is done to provide an independent perspective in testing, bring in the customer perspective in testing, provide a fresh pair of eyes to discover defects not found earlier by testing, test the product in a holistic, complete, and realistic environment, test both functional and non-functional aspects of the product, and analyze and reduce the risk of releasing the product. It ensures all requirements are met and readies the product for acceptance testing.
Acceptance testing is a phase after system testing that is normally done by the customers or representatives of the customer. Acceptance test cases failing at a customer site may cause the product to be rejected, and may mean financial loss or rework of the product involving effort and time.
2.7 Check Your Progress
1. Explain the types of testing and their importance.
2. What is called white-box testing? Explain static testing.
3. What are the phases involved in structural testing? Explain.
4. What is called code complexity testing? Explain with an example.
5. Why is integration testing needed? What are the two types of integration testing?
6. Why is system testing done? Explain.
7. Define beta testing and its importance.
8. Explain acceptance testing.
9. How will you select test cases for acceptance testing?
10. Explain the concept of system and acceptance testing as a whole.
Unit III Testing Fundamentals - 2: Specialized Testing
Structure
3.0 Objectives
3.1 Introduction
3.2 Performance Testing
3.3 Regression Testing
3.4 Testing of Object Oriented Systems
3.5 Usability and Accessibility Testing
3.6 Summary
3.7 Check Your Progress
3.0 Objectives
To learn the types of specialized testing and their importance
To understand the importance of performance testing and its uses
To know regression testing, which helps produce software of improved quality
To understand how to test object-oriented systems and their features
To understand the importance of usability and accessibility testing as a special kind of testing methodology
3.1 Introduction
The testing performed to evaluate the response time, throughput, and utilization of the system, to execute its required functions in comparison with different versions of the same product or a different competitive product, is called performance testing. In this internet era, when more and more business is transacted online, there is a big and understandable expectation that all applications run as fast as possible. When applications run fast, a system can fulfill the business requirements quickly and be in a position to expand its business and handle future needs as well. A system or a product that is not able to service business transactions due to its slow performance is a big loss for the product organization and its customers. Hence performance is a basic requirement for any product and is fast becoming a subject of great interest in the testing community.
3.2 PERFORMANCE TESTING
For real-time and embedded systems, software that provides required function but does not conform to performance requirements is unacceptable. Performance testing is designed to test the run-time performance of software within the context of an integrated system. Performance testing occurs throughout all steps in the testing process. Even at the unit level, the performance of an individual module may be assessed as white-box tests are conducted. However, it is not until all system elements are fully integrated that the true performance of a system can be ascertained.
Performance tests are often coupled with stress testing and usually require both hardware and software instrumentation. That is, it is often necessary to measure resource utilization (e.g., processor cycles) in an exacting fashion. External instrumentation can monitor execution intervals, log events (e.g., interrupts) as they occur, and sample machine states on a regular basis. By instrumenting a system, the tester can uncover situations that lead to degradation and possible system failure.
Figure 19 - The debugging process
There are many factors that govern performance testing. It is critical to understand the definition and purpose of these factors prior to understanding the methodology for performance testing and for analyzing the results. The capability of the system or the product in handling multiple transactions is determined by a factor called throughput. Throughput represents the number of request/business transactions processed by the product in a specified time duration. It is important to understand that the throughput varies according to the load the product is subjected to. The "optimum throughput" is represented by the saturation point and is the one that represents the maximum throughput for the product.
Response time can be defined as the delay between the point of request and the first response from the product. In a typical client-server environment, throughput represents the number of transactions that can be handled by the server and response time represents the delay between the request and response.
n reality, not all the delay that happens between the re@#est and the response is
ca#sed by the prod#ct! n the networ0ing scenario, the networ0 or other prod#cts
;I
;I
which are sharing the networ0 reso#rces can ca#se the delays! This brings #p yet
another factor for performance latency! #atency is a delay ca#sed by the application,
operating system, and by the environment that are calc#lated separately!
"ig! %2 -xample of latencies at vario#s levels$ networ0 and applications
To explain latency, let #s ta0e an example of a web application providing a service
by tal0ing to a web server and a database server connected in the networ0 from the
above fig#re! "rom the above pict#re, latency and response time can be calc#lated as
:etwor0 latency S :/ U :% U :4 U :8
Prod#ct latency S ./ U .% U .4
.ct#al response time S networ0 latency U prod#ct latency
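The formulas above can be worked through with assumed numbers. A minimal sketch, where the hop delays N1-N4 and the product delays A1-A3 (in milliseconds) are invented for illustration and not taken from the text:

```python
# Hypothetical latency figures (milliseconds) for the client / web server /
# database server path: N1..N4 are network hops, A1..A3 are delays inside
# the products themselves.
network_hops = {"N1": 12, "N2": 8, "N3": 8, "N4": 12}   # assumed values
product_delays = {"A1": 30, "A2": 45, "A3": 30}          # assumed values

network_latency = sum(network_hops.values())             # N1 + N2 + N3 + N4
product_latency = sum(product_delays.values())           # A1 + A2 + A3
actual_response_time = network_latency + product_latency

print(network_latency, product_latency, actual_response_time)  # 40 105 145
```

The split matters in practice: if the measured response time is dominated by network latency, tuning the product alone will not help.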
The next factor that governs performance testing is tuning. Tuning is a procedure
by which the product performance is enhanced by setting different values for the
parameters of the product, the operating system, and other components.

Yet another factor that needs to be considered for performance testing is the
performance of competitive products. This type of performance testing, wherein
competitive products are compared, is called benchmarking.

To summarize, performance testing is done to ensure that a product
Processes the required number of transactions in any given interval
(throughput).
Is available and running under different load conditions (availability).
Responds fast enough for different load conditions (response time).
Delivers a worthwhile return on investment for the resources - hardware and
software.
Is comparable to, and better than, that of the competitors for different
parameters.
Methodology for performance testing involves the following steps:
1. Collecting requirements
2. Writing test cases
3. Automating performance test cases
4. Executing performance test cases
5. Analyzing performance test results
6. Performance tuning
7. Performance benchmarking
8. Recommending the right configuration for the customers
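Steps 4 and 5 above - executing performance test cases and analyzing the results - can be sketched as a tiny harness. The `transaction` callable is a stand-in invented for illustration; a real run would drive the product under test instead:

```python
import time

def run_performance_test(transaction, duration_s=1.0):
    """Drive `transaction` repeatedly and report throughput and response times."""
    response_times = []
    start = time.perf_counter()
    while time.perf_counter() - start < duration_s:
        t0 = time.perf_counter()
        transaction()                                    # one business transaction
        response_times.append(time.perf_counter() - t0)  # response time for it
    elapsed = time.perf_counter() - start
    return {
        "throughput_tps": len(response_times) / elapsed,  # transactions per second
        "avg_response_s": sum(response_times) / len(response_times),
        "max_response_s": max(response_times),
    }

# Stand-in transaction: a small amount of CPU work.
results = run_performance_test(lambda: sum(range(1000)), duration_s=0.2)
print(results)
```

Repeating such runs under different simulated loads is what the load testing tools named below automate at scale.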
Tools for performance testing
There are two types of tools that can be used for performance testing: functional
performance tools and load testing tools.
Functional performance tools help in recording and playing back the
transactions and obtaining performance numbers. This test involves very few
machines.
Load testing tools simulate the load condition for performance testing without
having to keep that many users or machines.
Some popular performance tools are listed below:
Functional performance tools
WinRunner from Mercury
QA Partner from Compuware
SilkTest from Segue
Load testing tools
LoadRunner from Mercury
QA Load from Compuware
Silk Performer from Segue
Process for performance testing
Performance testing follows the same process as any other testing type. The only
difference is in getting more details and analysis.
Ever-changing requirements for performance are a serious threat to the product, as
performance can only be improved marginally by fixing it in code. Making the
requirements testable and measurable is the first activity needed for the success of
performance testing.
The next step in the performance testing process is to create a performance test plan.
This test plan needs to have the following details:
1. Resource requirements
2. Test bed (simulated and real life), test-lab setup
3. Responsibilities
4. Setting up product traces, audits, and logs
5. Entry and exit criteria
The Art of Debugging
Software testing is a process that can be systematically planned and specified. Test
case design can be conducted, a strategy can be defined, and results can be evaluated
against prescribed expectations.
Debugging occurs as a consequence of successful testing. That is, when a test case
uncovers an error, debugging is the process that results in the removal of the error.
Although debugging can and should be an orderly process, it is still very much an art.
A software engineer, evaluating the results of a test, is often confronted with a
"symptomatic" indication of a software problem. That is, the external manifestation of
the error and the internal cause of the error may have no obvious relationship to one
another. The poorly understood mental process that connects a symptom to a cause is
debugging.
The Debugging Process
Debugging is not testing but always occurs as a consequence of testing. The
debugging process begins with the execution of a test case. Results are assessed and a
lack of correspondence between expected and actual performance is encountered.
The debugging process will always have one of two outcomes:
(1) The cause will be found and corrected, or
(2) The cause will not be found. In the latter case, the person performing debugging
may suspect a cause, design a test case to help validate that suspicion, and work
toward error correction in an iterative fashion.
Why is debugging so difficult? In all likelihood, human psychology has more to do
with an answer than software technology. However, a few characteristics of bugs
provide some clues:
1. The symptom and the cause may be geographically remote. That is, the symptom
may appear in one part of a program, while the cause may actually be located at a site
that is far removed.
2. The symptom may disappear (temporarily) when another error is corrected.
3. The symptom may actually be caused by nonerrors (e.g., round-off inaccuracies).
4. The symptom may be caused by human error that is not easily traced.
5. The symptom may be a result of timing problems, rather than processing problems.
6. It may be difficult to accurately reproduce input conditions (e.g., a real-time
application in which input ordering is indeterminate).
7. The symptom may be intermittent. This is particularly common in embedded
systems that couple hardware and software inextricably.
8. The symptom may be due to causes that are distributed across a number of tasks
running on different processors.
4.4 RE"RESSION TESTIN"
-ach time a new mod#le is added as part of integration testing, the software changes!
:ew data flow paths are established, new L7 may occ#r, and new control logic is
invo0ed! These changes may ca#se problems with f#nctions that previo#sly wor0ed
flawlessly! n the context of an integration test strategy, regression testing is the re
<4
<4
exec#tion of some s#bset of tests that have already been cond#cted to ens#re that
changes have not propagated #nintended side effects! n a broader context, s#ccessf#l
tests >of any 0ind? res#lt in the discovery of errors, and errors m#st be corrected!
Whenever software is corrected, some aspect of the software config#ration >the
program, its doc#mentation, or the data that s#pport it? is changed! )egression testing
is the activity that helps to ens#re that changes >d#e to testing or for other reasons? do
not introd#ce #nintended behavior or additional errors!
)egression testing may be cond#cted man#ally, by re$exec#ting a s#bset of all test
cases or #sing a#tomated capture9playback tools. Capt#reLplaybac0 tools enable the
software engineer to capt#re test cases and res#lts for s#bse@#ent playbac0 and
comparison!
The regression test suite (the subset of tests to be executed) contains three different
classes of test cases:
• A representative sample of tests that will exercise all software functions.
• Additional tests that focus on software functions that are likely to be affected by the
change.
• Tests that focus on the software components that have been changed.
As integration testing proceeds, the number of regression tests can grow quite large.
Therefore, the regression test suite should be designed to include only those tests that
address one or more classes of errors in each of the major program functions. It is
impractical and inefficient to re-execute every test for every program function once a
change has occurred.
Types of regression testing
There are two types of regression testing in practice:
Regular regression testing
Final regression testing
A regular regression testing is done between test cycles to ensure that the defect fixes
that are done and the functionality that was working with the earlier test cycles
continue to work. A regular regression testing can use more than one product build for
the test cases to be executed. A build is an aggregation of all the defect fixes and
features that are present in the product.
A final regression testing is done to validate the final build before release.
It is necessary to perform regression testing when
A reasonable amount of initial testing is already carried out.
A good number of defects have been fixed.
Defect fixes that can produce side-effects are taken care of.
How to do regression testing
A well-defined methodology for regression testing is very important, as this is among
the final types of testing normally performed just before release. The
methodology here is made up of the following steps:
Performing an initial "smoke" or "sanity" test
Understanding the criteria for selecting the test cases
Classifying the test cases
Methodology for selecting test cases
Resetting the test cases for regression testing
Smoke Testing
Smoke testing is an integration testing approach that is commonly used when
"shrink-wrapped" software products are being developed. It is designed as a pacing
mechanism for time-critical projects, allowing the software team to assess its project
on a frequent basis. In essence, the smoke testing approach encompasses the
following activities:
1. Software components that have been translated into code are integrated into a
"build". A build includes all data files, libraries, reusable modules, and engineered
components that are required to implement one or more product functions.
2. A series of tests is designed to expose errors that will keep the build from properly
performing its function. The intent should be to uncover "show stopper" errors that
have the highest likelihood of throwing the software project behind schedule.
3. The build is integrated with other builds and the entire product (in its current form)
is smoke tested daily. The integration approach may be top down or bottom up.
The daily frequency of testing the entire product may surprise some readers.
However, frequent tests give both managers and practitioners a realistic assessment of
integration testing progress. McConnell describes the smoke test in the following
manner:
The smoke test should exercise the entire system from end to end. It does not have to
be exhaustive, but it should be capable of exposing major problems. The smoke test
should be thorough enough that if the build passes, you can assume that it is stable
enough to be tested more thoroughly.
Smoke testing provides a number of benefits when it is applied on complex, time
critical software engineering projects:
• Integration risk is minimized. Because smoke tests are conducted daily,
incompatibilities and other show-stopper errors are uncovered early, thereby reducing
the likelihood of serious schedule impact when errors are uncovered.
• The quality of the end-product is improved. Because the approach is construction
(integration) oriented, smoke testing is likely to uncover both functional errors and
architectural and component-level design defects. If these defects are corrected early,
better product quality will result.
• Error diagnosis and correction are simplified. Like all integration testing
approaches, errors uncovered during smoke testing are likely to be associated with
"new software increments"; that is, the software that has just been added to the
build(s) is a probable cause of a newly discovered error.
• Progress is easier to assess. With each passing day, more of the software has been
integrated and more has been demonstrated to work. This improves team morale and
gives managers a good indication that progress is being made.
Best practices in regression testing
Regression methodology can be applied when
1. We need to assess the quality of the product between test cycles,
2. We are doing a major release of a product, have executed all test cycles, and are
planning a regression test cycle for defect fixes, and
3. We are doing a minor release of a product having only defect fixes, and we can
plan for regression test cycles to take care of those defect fixes.
The best practices are listed below:
Regression can be used for all types of releases.
Mapping defect identifiers with test cases improves regression quality.
Create and execute a regression test bed daily.
Ask your best test engineer to select the test cases.
Detect defects, and protect your product from defects and defect fixes.
3.7 Testing of Object Oriented Systems
The objective of testing, stated simply, is to find the greatest possible number of
errors with a manageable amount of effort applied over a realistic time span. Although
this fundamental objective remains unchanged for object-oriented software, the nature
of OO programs changes both testing strategy and testing tactics. It might be argued
that, as OOA and OOD mature, greater reuse of design patterns will mitigate the need
for heavy testing of OO systems. Exactly the opposite is true. Binder discusses this
when he states:
Each reuse is a new context of usage and retesting is prudent. It seems likely that
more, not less, testing will be needed to obtain high reliability in object-oriented
systems.
The testing of OO systems presents a new set of challenges to the software engineer.
The definition of testing must be broadened to include error discovery techniques
(formal technical reviews) applied to OOA and OOD models. The completeness and
consistency of OO representations must be assessed as they are built. Unit testing
loses much of its meaning, and integration strategies change significantly. In
summary, both testing strategies and testing tactics must account for the unique
characteristics of OO software.
The architecture of object-oriented software results in a series of layered subsystems
that encapsulate collaborating classes. Each of these system elements (subsystems and
classes) performs functions that help to achieve system requirements. It is necessary
to test an OO system at a variety of different levels in an effort to uncover errors that
may occur as classes collaborate with one another and subsystems communicate
across architectural layers.
Who does it? Object-oriented testing is performed by software engineers and testing
specialists.
Why is it important? You have to execute the program before it gets to the customer
with the specific intent of removing all errors, so that the customer will not experience
the frustration associated with a poor-quality product. In order to find the highest
possible number of errors, tests must be conducted systematically and test cases must
be designed using disciplined techniques.
What are the steps? OO testing is strategically similar to the testing of conventional
systems, but it is tactically different. Because the OO analysis and design models are
similar in structure and content to the resultant OO program, "testing" begins with the
review of these models. Once code has been generated, OO testing begins "in the
small" with class testing. Problems could occur (and will have been avoided because
of the earlier review) during design:
1. Improper allocation of the class to subsystem and/or tasks may occur during system
design.
2. Unnecessary design work may be expended to create the procedural design for the
operations that address the extraneous attribute.
3. The messaging model will be incorrect (because messages must be designed for the
operations that are extraneous).
If the error remains undetected during design and passes into the coding activity,
considerable effort will be expended to generate code that implements an unnecessary
attribute, two unnecessary operations, messages that drive inter-object
communication, and many other related issues. In addition, testing of the class will
absorb more time than necessary. Once the problem is finally uncovered, modification
of the system must be carried out with the ever-present potential for side effects that
are caused by change.
During later stages of their development, OOA and OOD models provide substantial
information about the structure and behavior of the system. For this reason, these
models should be subjected to rigorous review prior to the generation of code. All
object-oriented models should be tested (in this context, the term testing is used to
incorporate formal technical reviews) for correctness, completeness, and consistency
within the context of the model's syntax, semantics, and pragmatics.
Testing OOA and OOD Models
Analysis and design models cannot be tested in the conventional sense, because they
cannot be executed. However, formal technical reviews can be used to examine the
correctness and consistency of both analysis and design models.
Correctness of OOA and OOD Models
The notation and syntax used to represent analysis and design models will be tied to
the specific analysis and design method that is chosen for the project. Hence, syntactic
correctness is judged on proper use of the symbology; each model is reviewed to
ensure that proper modeling conventions have been maintained. During analysis and
design, semantic correctness must be judged based on the model's conformance to the
real-world problem domain.
If the model accurately reflects the real world (to a level of detail that is appropriate
to the stage of development at which the model is reviewed), then it is semantically
correct. To determine whether the model does, in fact, reflect the real world, it should
be presented to problem domain experts, who will examine the class definitions and
hierarchy for omissions and ambiguity. Class relationships (instance connections) are
evaluated to determine whether they accurately reflect real-world object connections.
Consistency of OOA and OOD Models
The consistency of OOA and OOD models may be judged by "considering the
relationships among entities in the model. An inconsistent model has representations
in one part that are not correctly reflected in other portions of the model". To assess
consistency, each class and its connections to other classes should be examined. The
class-responsibility-collaboration (CRC) model and an object-relationship diagram
can be used to facilitate this activity. The CRC model is composed of CRC index
cards. Each CRC card lists the class name, its responsibilities (operations), and its
collaborators (other classes to which it sends messages and on which it depends for
the accomplishment of its responsibilities). The collaborations imply a series of
relationships (i.e., connections) between classes of the OO system. The
object-relationship model provides a graphic representation of the connections
between classes. All of this information can be obtained from the OOA model.
To evaluate the class model the following steps have been recommended:
1. Revisit the CRC model and the object-relationship model. Cross check to
ensure that all collaborations implied by the OOA model are properly represented.
2. Inspect the description of each CRC index card to determine if a delegated
responsibility is part of the collaborator's definition. For example, consider a class
defined for a point-of-sale checkout system, called credit sale. This class has a CRC
index card illustrated in Figure 21. For this collection of classes and collaborations,
we ask whether a responsibility (e.g., read credit card) is accomplished if delegated to
the named collaborator (credit card). That is, does the class credit card have an
operation that enables it to be read? In this case the answer is "Yes." The
object-relationship model is traversed to ensure that all such connections are valid.
3. Invert the connection to ensure that each collaborator that is asked for service
is receiving requests from a reasonable source. For example, if the credit card
class receives a request for purchase amount from the credit sale class, there would
be a problem. Credit card does not know the purchase amount.
4. Using the inverted connections examined in step 3, determine whether other
classes might be required and whether responsibilities are properly grouped
among the classes.
5. Determine whether widely requested responsibilities might be combined into a
single responsibility. For example, read credit card and get authorization occur in
every situation. They might be combined into a validate credit request responsibility
that incorporates getting the credit card number and gaining authorization.
6. Steps 1 through 5 are applied iteratively to each class and through each
evolution of the OOA model.
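Step 2 above - checking that a delegated responsibility is part of the collaborator's definition - can be mechanized over a small model of CRC cards. The operations and delegations below follow the credit sale example but are invented for illustration:

```python
# A tiny model of CRC cards: class name -> set of operations the class offers.
operations = {
    "credit card": {"read"},
    "credit sale": {"read credit card", "get authorization"},
}

# Delegations recorded on the cards:
# (delegating class, responsibility, collaborator, operation needed on it).
delegations = [
    ("credit sale", "read credit card", "credit card", "read"),          # valid
    ("credit sale", "get purchase amount", "credit card", "purchase amount"),  # invalid
]

def invalid_delegations(ops, delegs):
    """Return delegations whose collaborator lacks the needed operation."""
    return [(cls, resp) for cls, resp, collab, op in delegs
            if op not in ops.get(collab, set())]

print(invalid_delegations(operations, delegations))
# [('credit sale', 'get purchase amount')]
```

The flagged entry is exactly the step-3 problem described above: credit card does not know the purchase amount.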
Once the OOD model is created, reviews of the system design and the object design
should also be conducted. The system design depicts the overall product architecture,
the subsystems that compose the product, the manner in which subsystems are
allocated to processors, the allocation of classes to subsystems, and the design of the
user interface. The object model presents the details of each class and the messaging
activities that are necessary to implement collaborations between classes.
The system design is reviewed by examining the object-behavior model developed
during OOA and mapping required system behavior against the subsystems designed
to accomplish this behavior. Concurrency and task allocation are also reviewed within
the context of system behavior. The behavioral states of the system are evaluated to
determine which exist concurrently. Use-case scenarios are used to exercise the user
interface design.
"ig#re %/ $ .n example C)C index card #sed for review 77D Model
Object Oriented Testing Strategies
The classical strategy for testing computer software begins with "testing in the small"
and works outward toward "testing in the large." Stated in the jargon of software
testing, we begin with unit testing, then progress toward integration testing, and
culminate with validation and system testing. In conventional applications, unit
testing focuses on the smallest compilable program unit - the subprogram (e.g.,
module, subroutine, procedure, component). Once each of these units has been tested
individually, it is integrated into a program structure while a series of regression tests
are run to uncover errors due to interfacing between the modules and side effects
caused by the addition of new units. Finally, the system as a whole is tested to ensure
that errors in requirements are uncovered.
Unit Testing in the OO Context
When object-oriented software is considered, the concept of the unit changes.
Encapsulation drives the definition of classes and objects. This means that each class
and each instance of a class (object) packages attributes (data) and the operations
(also known as methods or services) that manipulate these data. Rather than testing an
individual module, the smallest testable unit is the encapsulated class or object.
Because a class can contain a number of different operations and a particular
operation may exist as part of a number of different classes, the meaning of unit
testing changes dramatically. We can no longer test a single operation in isolation (the
conventional view of unit testing) but rather as part of a class.
To illustrate, consider a class hierarchy in which an operation X is defined for the
superclass and is inherited by a number of subclasses. Each subclass uses operation
X, but it is applied within the context of the private attributes and operations that have
been defined for the subclass. Because the context in which operation X is used varies
in subtle ways, it is necessary to test operation X in the context of each of the
subclasses. This means that testing operation X in a vacuum (the traditional unit
testing approach) is ineffective in the object-oriented context.
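The point about retesting an inherited operation in each subclass's context can be seen in a small sketch. The class names and the `scaled` operation below are invented for illustration; `scaled` plays the role of operation X:

```python
class Shape:
    """Superclass defining operation X once, as `scaled`."""
    def __init__(self, scale):
        self.scale = scale            # subclass-specific context for the operation
    def scaled(self, value):
        return value * self.scale     # operation X, inherited by every subclass

class Square(Shape):
    def __init__(self):
        super().__init__(scale=1)

class DoubledSquare(Shape):
    def __init__(self):
        super().__init__(scale=2)     # a subtly different context

# Testing `scaled` only through Square would miss how DoubledSquare's
# private context changes its behavior - hence one test per subclass context.
print(Square().scaled(10), DoubledSquare().scaled(10))  # 10 20
```

The same inherited code produces different results in each subclass, which is why testing operation X "in a vacuum" is not enough.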
Class testing for OO software is the equivalent of unit testing for conventional
software. Unlike unit testing of conventional software, which tends to focus on the
algorithmic detail of a module and the data that flow across the module interface,
class testing for OO software is driven by the operations encapsulated by the class and
the state behavior of the class.
Integration Testing in the OO Context
Because object-oriented software does not have a hierarchical control structure,
conventional top-down and bottom-up integration strategies have little meaning. In
addition, integrating operations one at a time into a class (the conventional
incremental integration approach) is often impossible because of the "direct and
indirect interactions of the components that make up the class".
There are two different strategies for integration testing of OO systems. The first,
thread-based testing, integrates the set of classes required to respond to one input or
event for the system. Each thread is integrated and tested individually. Regression
testing is applied to ensure that no side effects occur. The second integration
approach, use-based testing, begins the construction of the system by testing those
classes (called independent classes) that use very few (if any) server classes. After
the independent classes are tested, the next layer of classes, called dependent classes,
that use the independent classes are tested. This sequence of testing layers of
dependent classes continues until the entire system is constructed. Unlike
conventional integration, the use of drivers and stubs as replacement operations is to
be avoided, when possible.
Cluster testing is one step in the integration testing of OO software. Here, a cluster of
collaborating classes (determined by examining the CRC and object-relationship
model) is exercised by designing test cases that attempt to uncover errors in the
collaborations.
Validation Testing in an OO Context
At the validation or system level, the details of class connections disappear. Like
conventional validation, the validation of OO software focuses on user-visible actions
and user-recognizable output from the system. To assist in the derivation of validation
tests, the tester should draw upon the use-cases that are part of the analysis model.
The use-case provides a scenario that has a high likelihood of uncovering errors in
user interaction requirements. Conventional black-box testing methods can be used to
drive validation tests. In addition, test cases may be derived from the object-behavior
model and from the event flow diagram created as part of OOA.
Test case Design for OO Software
Test case design methods for OO software are still evolving. However, an overall
approach to OO test case design has been defined by Berard:
1. Each test case should be uniquely identified and explicitly associated with the class
to be tested.
2. The purpose of the test should be stated.
3. A list of testing steps should be developed for each test and should contain:
a. A list of specified states for the object that is to be tested.
b. A list of messages and operations that will be exercised as a consequence of the
test.
c. A list of exceptions that may occur as the object is tested.
d. A list of external conditions (i.e., changes in the environment external to the
software that must exist in order to properly conduct the test).
e. Supplementary information that will aid in understanding or implementing the test.
Unlike conventional test case design, which is driven by an input-process-output view
of software or the algorithmic detail of individual modules, object-oriented testing
focuses on designing appropriate sequences of operations to exercise the states of a
class.
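Berard's checklist above can be captured as a simple test-case record. The field values shown are invented for illustration, reusing the credit sale example:

```python
from dataclasses import dataclass, field

@dataclass
class OOTestCase:
    test_id: str                 # 1. unique id, explicitly tied to the class under test
    class_under_test: str
    purpose: str                 # 2. stated purpose of the test
    states: list = field(default_factory=list)       # 3a. specified object states
    messages: list = field(default_factory=list)     # 3b. messages/operations exercised
    exceptions: list = field(default_factory=list)   # 3c. exceptions that may occur
    external_conditions: list = field(default_factory=list)  # 3d. external conditions
    notes: str = ""              # 3e. supplementary information

tc = OOTestCase(
    test_id="CreditSale-001",
    class_under_test="credit sale",
    purpose="Exercise the authorization sequence starting from the empty state",
    states=["empty", "card read", "authorized"],
    messages=["read credit card", "get authorization"],
    exceptions=["CardReadError"],
)
print(tc.test_id, tc.class_under_test)
```

Note how the record is organized around states and message sequences rather than an input-process-output view, which is the shift the paragraph above describes.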
The Test Case Design Implications of OO Concepts
As we have already seen, the OO class is the target for test case design. Because
attributes and operations are encapsulated, testing operations outside of the class is
generally unproductive. Although encapsulation is an essential design concept for
OO, it can create a minor obstacle when testing. As Binder notes, "Testing requires
reporting on the concrete and abstract state of an object." Yet, encapsulation can make
this information somewhat difficult to obtain. Unless built-in operations are provided
to report the values for class attributes, a snapshot of the state of an object may be
difficult to acquire. Inheritance also leads to additional challenges for the test case
designer. We have already noted that each new context of usage requires retesting,
even though reuse has been achieved.
In addition, multiple inheritance complicates testing further by increasing the
number of contexts for which testing is required. If subclasses instantiated from a
superclass are used within the same problem domain, it is likely that the set of test
cases derived for the superclass can be used when testing the subclass. However, if
the subclass is used in an entirely different context, the superclass test cases will have
little applicability and a new set of tests must be designed.
Applicability of Conventional Test Case Design Methods
The white-box testing methods can be applied to the operations defined for a class.
Basis path, loop testing, or data flow techniques can help to ensure that every
statement in an operation has been tested. However, the concise structure of many
class operations causes some to argue that the effort applied to white-box testing
might be better redirected to tests at a class level.
Black-box testing methods are as appropriate for OO systems as they are for systems
developed using conventional software engineering methods. As we noted earlier in
this chapter, use-cases can provide useful input in the design of black-box and
state-based tests.
Fault-Based Testing
The object of fault-based testing within an OO system is to design tests that have a
high likelihood of uncovering plausible faults. Because the product or system must
conform to customer requirements, the preliminary planning required to perform fault-
based testing begins with the analysis model. The tester looks for plausible faults (i.e.,
aspects of the implementation of the system that may result in defects). To determine
whether these faults exist, test cases are designed to exercise the design or code.
Consider a simple example. Software engineers often make errors at the boundaries of
a problem. For example, when testing a SQRT operation that returns errors for
negative numbers, we know to try the boundaries: a negative number close to zero
and zero itself. "Zero itself" checks whether the programmer made a mistake like
if (x > 0) calculate_the_square_root();
instead of the correct
if (x >= 0) calculate_the_square_root();
As another example, consider a Boolean expression:
if (a && !b || c)
Multicondition testing and related techniques probe for certain plausible faults in this
expression, such as
&& should be ||
! was left out where it was needed
There should be parentheses around !b || c
For each plausible fault, we design test cases that will force the incorrect expression
to fail. In the previous expression, (a=0, b=0, c=0) will make the expression as given
evaluate false. If the && should have been ||, the code has done the wrong thing and
might branch to the wrong path.
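The fault-forcing test above can be checked mechanically. A minimal sketch restating the text's expression and its &&-to-|| mutant: with a=0, b=0, c=0 the correct expression evaluates false while the mutant evaluates true, so this single test case exposes the fault:

```python
def correct(a, b, c):
    # The expression from the text: if (a && !b || c)
    return bool(a and not b or c)

def mutant(a, b, c):
    # The plausible fault: && replaced by ||
    return bool(a or not b or c)

# The test case from the text: a=0, b=0, c=0.
print(correct(0, 0, 0), mutant(0, 0, 0))  # False True - the fault is detected
```

A test case that makes the correct and faulty versions disagree is exactly what "forcing the incorrect expression to fail" means.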
Of course, the effectiveness of these techniques depends on how testers perceive a
"plausible fault." If real faults in an OO system are perceived to be "implausible,"
then this approach is really no better than any random testing technique. However, if
the analysis and design models can provide insight into what is likely to go wrong,
then fault-based testing can find significant numbers of errors with relatively low
expenditures of effort. Integration testing looks for plausible faults in operation calls
or message connections. Three types of faults are encountered in this context:
unexpected result, wrong operation/message used, incorrect invocation. To determine
plausible faults as functions (operations) are invoked, the behavior of the operation
must be examined. Integration testing applies to attributes as well as to operations.
The "behaviors" of an object are defined by the values that its attributes are assigned.
Testing should exercise the attributes to determine whether proper values occur for
distinct types of object behavior. It is important to note that integration testing
attempts to find errors in the client object, not the server. Stated in conventional
terms, the focus of integration testing is to determine whether errors exist in the
calling code, not the called code. The operation call is used as a clue, a way to find
test requirements that exercise the calling code.
The Impact of OO Programming on Testing
There are several ways object-oriented programming can have an impact on testing. Depending on the approach to OOP,
• Some types of faults become less plausible (not worth testing for).
• Some types of faults become more plausible (worth testing now).
• Some new types of faults appear.
When an operation is invoked, it may be hard to tell exactly what code gets exercised. That is, the operation may belong to one of many classes. Also, it can be hard to determine the exact type or class of a parameter. When the code accesses it, it may get an unexpected value. The difference can be understood by considering a conventional function call:
x = func(y);
For conventional software, the tester need consider all behaviors attributed to func and nothing more. In an OO context, the tester must consider the behaviors of base::func(), of derived::func(), and so on. Each time func is invoked, the tester must consider the union of all distinct behaviors. This is easier if good OO design practices are followed and the differences between superclasses and subclasses (in C++ jargon, these are called base classes and derived classes) are limited. The testing approach for base and derived classes is essentially the same. The difference is one of bookkeeping.
Testing OO class operations is analogous to testing code that takes a function parameter and then invokes it. Inheritance is a convenient way of producing polymorphic operations. At the call site, what matters is not the inheritance, but the polymorphism. Inheritance does make the search for test requirements more straightforward. By virtue of OO software architecture and construction, are some types of faults more plausible for an OO system and others less plausible? The answer is "Yes." For example, because OO operations are generally smaller, more time tends to be spent on integration, because there are more opportunities for integration faults. Therefore, integration faults become more plausible.
Test Cases and the Class Hierarchy
As noted earlier in this chapter, inheritance does not obviate the need for thorough testing of all derived classes. In fact, it can actually complicate the testing process. Consider the following situation. A class base contains operations inherited and redefined. A class derived redefines redefined to serve in a local context. There is little doubt that derived::redefined() has to be tested because it represents a new design and new code. But does derived::inherited() have to be retested? If derived::inherited() calls redefined and the behavior of redefined has changed, derived::inherited() may mishandle the new behavior. Therefore, it needs new tests even though the design and code have not changed. It is important to note, however, that only a subset of all tests for derived::inherited() may have to be conducted. If part of the design and code for inherited does not depend on redefined (i.e., does not call it nor call any code that indirectly calls it), that code need not be retested in the derived class.
base::redefined() and derived::redefined() are two different operations with different specifications and implementations. Each would have a set of test requirements derived from the specification and implementation. Those test requirements probe for plausible faults: integration faults, condition faults, boundary faults, and so forth. But the operations are likely to be similar. Their sets of test requirements will overlap. The better the OO design, the greater the overlap. New tests need to be derived only for those derived::redefined() requirements that are not satisfied by the base::redefined() tests.
To summarize, the base::redefined() tests are applied to objects of class derived. Test inputs may be appropriate for both base and derived classes, but the expected results may differ in the derived class.
Scenario-Based Test Design
Fault-based testing misses two main types of errors:
(1) incorrect specifications and
(2) interactions among subsystems.
When errors associated with incorrect specification occur, the product doesn't do what the customer wants. It might do the wrong thing or it might omit important functionality. But in either circumstance, quality (conformance to requirements) suffers. Errors associated with subsystem interaction occur when the behavior of one subsystem creates circumstances (e.g., events, data flow) that cause another subsystem to fail.
Scenario-based testing concentrates on what the user does, not what the product does. This means capturing the tasks (via use-cases) that the user has to perform, then applying them and their variants as tests. Scenarios uncover interaction errors. But to accomplish this, test cases must be more complex and more realistic than fault-based tests. Scenario-based testing tends to exercise multiple subsystems in a single test (users do not limit themselves to the use of one subsystem at a time).
As an example, consider the design of scenario-based tests for a text editor. Use cases follow:
Use-Case: Fix the Final Draft
Background: It's not unusual to print the "final" draft, read it, and discover some annoying errors that weren't obvious from the on-screen image. This use-case describes the sequence of events that occurs when this happens.
1. Print the entire document.
2. Move around in the document, changing certain pages.
3. As each page is changed, it's printed.
4. Sometimes a series of pages is printed.
This scenario describes two things: a test and specific user needs. The user needs are obvious: (1) a method for printing single pages and (2) a method for printing a range of pages. As far as testing goes, there is a need to test editing after printing (as well as the reverse). The tester hopes to discover that the printing function causes errors in the editing function; that is, that the two software functions are not properly independent.
Use-Case: Print a New Copy
Background: Someone asks the user for a fresh copy of the document. It must be printed.
1. Open the document.
2. Print it.
3. Close the document.
Again, the testing approach is relatively obvious, except that this document didn't appear out of nowhere. It was created in an earlier task. Does that task affect this one? In many modern editors, documents remember how they were last printed. By default, they print the same way next time. After the Fix the Final Draft scenario, just selecting "Print" in the menu and clicking the "Print" button in the dialog box will cause the last corrected page to print again. So, according to the editor, the correct scenario should look like this:
Use-Case: Print a New Copy
1. Open the document.
2. Select "Print" in the menu.
3. Check if you're printing a page range; if so, click to print the entire document.
4. Click on the Print button.
5. Close the document.
But this scenario indicates a potential specification error. The editor does not do what the user reasonably expects it to do. Customers will often overlook the check noted in step 3 above. They will then be annoyed when they trot off to the printer and find one page when they wanted 100. Annoyed customers signal specification bugs. A test case designer might miss this dependency in test design, but it is likely that the problem would surface during testing. The tester would then have to contend with the probable response, "That's the way it's supposed to work!"
Testing Surface Structure and Deep Structure
Surface structure refers to the externally observable structure of an OO program; that is, the structure that is immediately obvious to an end-user. Rather than performing functions, the users of many OO systems may be given objects to manipulate in some way. But whatever the interface, tests are still based on user tasks. Capturing these tasks involves understanding, watching, and talking with representative users (and as many nonrepresentative users as are worth considering).
There will surely be some difference in detail. For example, in a conventional system with a command-oriented interface, the user might use the list of all commands as a testing checklist. If no test scenarios existed to exercise a command, testing has likely overlooked some user tasks (or the interface has useless commands). In an object-based interface, the tester might use the list of all objects as a testing checklist. The best tests are derived when the designer looks at the system in a new or unconventional way. For example, if the system or product has a command-based interface, more thorough tests will be derived if the test case designer pretends that operations are independent of objects. Ask questions like, "Might the user want to use this operation, which applies only to the Scanner object, while working with the printer?" Whatever the interface style, test case design that exercises the surface structure should use both objects and operations as clues leading to overlooked tasks.
Deep structure refers to the internal technical details of an OO program; that is, the structure that is understood by examining the design and/or code. Deep structure testing is designed to exercise dependencies, behaviors, and communication mechanisms that have been established as part of the system and object design of OO software. The analysis and design models are used as the basis for deep structure testing.
For example, the object-relationship diagram or the subsystem collaboration diagram depicts collaborations between objects and subsystems that may not be externally visible. The test case design then asks: "Have we captured (as a test) some task that exercises the collaboration noted on the object-relationship diagram or the subsystem collaboration diagram? If not, why not?"
Design representations of the class hierarchy provide insight into inheritance structure. Inheritance structure is used in fault-based testing. Consider a situation in which an operation named caller has only one argument and that argument is a reference to a base class. What might happen when caller is passed a derived class? What are the differences in behavior that could affect caller? The answers to these questions might lead to the design of specialized tests.
Testing Methods Applicable at the Class Level
Software testing begins "in the small" and slowly progresses toward testing "in the large." Testing in the small focuses on a single class and the methods that are encapsulated by the class. Random testing and partitioning are methods that can be used to exercise a class during OO testing.
Random Testing for OO Classes
To provide brief illustrations of these methods, consider a banking application in which an account class has the following operations: open, setup, deposit, withdraw, balance, summarize, creditLimit, and close. Each of these operations may be applied for account, but certain constraints (e.g., the account must be opened before other operations can be applied and closed after all operations are completed) are implied by the nature of the problem. Even with these constraints, there are many permutations of the operations. The minimum behavioral life history of an instance of account includes the following operations:
open
setup
deposit
withdraw
close
This represents the minimum test sequence for account. However, a wide variety of other behaviors may occur within this sequence:
open•setup•deposit•[deposit|withdraw|balance|summarize|creditLimit]n•withdraw•close
A variety of different operation sequences can be generated randomly. For example:
Test case r1: open•setup•deposit•deposit•balance•summarize•withdraw•close
Test case r2: open•setup•deposit•withdraw•deposit•balance•creditLimit•withdraw•close
These and other random order tests are conducted to exercise different class instance life histories.
Partition Testing at the Class Level
Partition testing reduces the number of test cases required to exercise the class in much the same manner as equivalence partitioning for conventional software. Input and output are categorized and test cases are designed to exercise each category. But how are the partitioning categories derived?
State-based partitioning categorizes class operations based on their ability to change the state of the class. Again considering the account class, state operations include deposit and withdraw, whereas nonstate operations include balance, summarize, and creditLimit. Tests are designed in a way that exercises operations that change state and those that do not change state separately. Therefore,
Test case p1: open•setup•deposit•deposit•withdraw•withdraw•close
Test case p2: open•setup•deposit•summarize•creditLimit•withdraw•close
Test case p1 changes state, while test case p2 exercises operations that do not change state (other than those in the minimum test sequence).
Attribute-based partitioning categorizes class operations based on the attributes that they use. For the account class, the attributes balance and creditLimit can be used to define partitions. Operations are divided into three partitions: (1) operations that use creditLimit, (2) operations that modify creditLimit, and (3) operations that do not use or modify creditLimit. Test sequences are then designed for each partition.
Category-based partitioning categorizes class operations based on the generic function that each performs. For example, operations in the account class can be categorized into initialization operations (open, setup), computational operations (deposit, withdraw), queries (balance, summarize, creditLimit), and termination operations (close).
Interclass Test Case Design
Test case design becomes more complicated as integration of the OO system begins. It is at this stage that testing of collaborations between classes must begin. To illustrate "interclass test case generation," we expand the banking example to include the classes and collaborations noted in the figure.
The direction of the arrows in the figure indicates the direction of messages, and the labeling indicates the operations that are invoked as a consequence of the collaborations implied by the messages. Like the testing of individual classes, class collaboration testing can be accomplished by applying random and partitioning methods, as well as scenario-based testing and behavioral testing.
Multiple Class Testing. Kirani and Tsai suggest the following sequence of steps to generate multiple class random test cases:
1. For each client class, use the list of class operations to generate a series of random test sequences. The operations will send messages to other server classes.
2. For each message that is generated, determine the collaborator class and the corresponding operation in the server object.
3. For each operation in the server object (that has been invoked by messages sent from the client object), determine the messages that it transmits.
4. For each of the messages, determine the next level of operations that are invoked and incorporate these into the test sequence.
To illustrate, consider a sequence of operations for the bank class relative to an ATM class:
verifyAcct•verifyPIN•[[verifyPolicy•withdrawReq]|depositReq|acctInfoREQ]n
A random test case for the bank class might be
test case r3 = verifyAcct•verifyPIN•depositReq
In order to consider the collaborators involved in this test, the messages associated with each of the operations noted in test case r3 are considered. Bank must collaborate with ValidationInfo to execute verifyAcct and verifyPIN. Bank must collaborate with account to execute depositReq. Hence, a new test case that exercises these collaborations is
test case r4 = verifyAcct(Bank)•[validAcct(ValidationInfo)]•verifyPIN(Bank)•[validPIN(ValidationInfo)]•depositReq•[deposit(account)]
Figure: Class collaboration diagram for the banking application
[The diagram, not reproducible here, shows the classes ATM user interface, ATM, Cashier, Account, ValidationInfo, and Bank, connected by message arrows labeled cardInserted, password, deposit, withdraw, accntStatus, terminate, verifyStatus, depositStatus, dispenseCash, printAccntStat, readCardInfo, getCashAmnt, verifyAcct, verifyPIN, verifyPolicy, withdrawReq, depositReq, acctInfo, openAcct, initialDeposit, authorizeCard, deauthorize, closeAcct, validPIN, validAcct, creditLimit, accntType, balance, and close.]
The approach for multiple class partition testing is similar to the approach used for partition testing of individual classes. However, the test sequence is expanded to include those operations that are invoked via messages to collaborating classes. An alternative approach partitions tests based on the interfaces to a particular class. Referring to the figure above, the bank class receives messages from the ATM and cashier classes. The methods within bank can therefore be tested by partitioning them into those that serve ATM and those that serve cashier. State-based partitioning can be used to refine the partitions further.
Tests Derived from Behavior Models
The state transition diagram (STD) is a model that represents the dynamic behavior of a class. The STD for a class can be used to help derive a sequence of tests that will exercise the dynamic behavior of the class (and those classes that collaborate with it). The state model can be traversed in a "breadth-first" manner. In this context, breadth-first implies that a test case exercises a single transition and that when a new transition is to be tested, only previously tested transitions are used.
Consider the credit card object discussed in the previous section. The initial state of credit card is undefined (i.e., no credit card number has been provided). Upon reading the credit card during a sale, the object takes on a defined state; that is, the attributes card number and expiration date, along with bank-specific identifiers, are defined. The credit card is submitted when it is sent for authorization, and it is approved when authorization is received. The transition of credit card from one state to another can be tested by deriving test cases that cause the transition to occur. A breadth-first approach to this type of testing would not exercise submitted before it exercised undefined and defined. If it did, it would make use of transitions that had not been previously tested and would therefore violate the breadth-first criterion.
Tools for Testing of Object-Oriented Systems
There are several tools that aid in testing OO systems. Some of these are
1. Use cases
2. Class diagrams
3. Sequence diagrams
4. State charts
3.5 USABILITY AND ACCESSIBILITY TESTING
Usability testing attempts to characterize the "look and feel" and usage aspects of a product, from the point of view of users. Most other types of testing are objective in nature. Some of the characteristics of usability testing are as follows:
Usability testing tests the product from the user's point of view. It encompasses a range of techniques for identifying how users actually interact with and use the product.
Usability testing checks the product to see if it is easy to use for the various categories of users.
Usability testing is a process to identify discrepancies between the user interface of the product and the human user requirements, in terms of pleasantness and aesthetics.
If we combine all the above characterizations of the various factors that determine usability testing, then the common threads are
1. Ease of use
2. Speed
3. Pleasantness and aesthetics
Approach to Usability
When doing usability testing, certain human factors can be represented in a quantifiable way and can be tested objectively. Generally, the people best suited to perform usability testing are
1. Typical representatives of the actual user segments who would be using the product, so that the typical user patterns can be captured, and
2. People who are new to the product, so that they can start without any bias and be able to identify usability problems.
When to Do Usability Testing
The most appropriate way of ensuring usability is by performing usability testing in two phases. First is design validation, and the second is usability testing done as a part of the component and integration testing phases of a test cycle. A product has to be designed for usability. A product designed only for functionality may not get user acceptance. A product designed for functionality alone may also involve a high degree of training, which can be minimized if it is designed for both functionality and usability. Usability design is verified through several means. Some of them are as follows:
Style sheets
Screen prototypes
Paper designs
Layout design
A "usable product" is always the result of mutual collaboration from all the stakeholders, for the entire duration of the project. Usability is a habit and a behavior. Just like humans, products are expected to behave differently and correctly with different users and to their expectations.
Quality Factors for Usability
Some quality factors are very important when performing usability testing. Focusing on the quality factors given below helps in improving objectivity in usability testing:
Comprehensibility
Consistency
Navigation
Responsiveness
Aesthetics Testing
Another important aspect of usability is making the product "beautiful." Performing aesthetics testing helps in improving usability further. It is not possible for all products to measure up to the Taj Mahal for beauty, but testing for aesthetics can at least ensure the product is pleasing to the eye. Aesthetics testing can be performed by anyone who appreciates beauty. Beauticians, artists, and architects, who regularly make different aspects of life beautiful, can serve as experts in aesthetics testing. Involving them during the design and testing phases and incorporating their inputs may improve the aesthetics of the product. For example, the icons used in the product may look more appealing if they are designed by an artist, as they are not meant only for conveying messages but also help in making the product beautiful.
ACCESSIBILITY TESTING
There are a large number of people who are challenged with vision, hearing, and mobility related problems, partial or complete. Product usability that does not look into their requirements would result in lack of acceptance. There are several tools available to help them with alternatives. These tools are generally referred to as accessibility tools or assistive technologies. Verifying the product usability for physically challenged users is called accessibility testing. Accessibility is a subset of usability and should be included as part of usability test planning.
Accessibility of the product can be provided by two means:
Making use of accessibility features provided by the underlying infrastructure (for example, the operating system), called basic accessibility, and
Providing accessibility in the product through standards and guidelines, called product accessibility.
Basic Accessibility
Basic accessibility is provided by the hardware and operating system. All the input and output devices of the computer and their accessibility options are categorized under basic accessibility. Keyboard accessibility and screen accessibility are some of the basic accessibility features.
Product Accessibility
A good understanding of the basic accessibility features is needed while providing accessibility to the product. A product should do everything possible to ensure that the basic accessibility features are utilized by it. A good understanding of basic accessibility features and the requirements of different types of users with special needs helps in creating guidelines on how the product's user interface has to be designed.
This requirement explains the importance of providing text equivalents for picture messages and providing captions for audio portions. When an audio file is played, providing captions for the audio improves accessibility for the hearing impaired. Providing audio descriptions improves accessibility for visually impaired users who cannot see the video streams and pictures. Hence, text equivalents for audio, and audio descriptions for pictures and visuals, become an important requirement for accessibility.
Tools for Usability
There are not many tools that help in usability because of the high degree of subjectivity involved in evaluating this aspect. A sample list of usability and accessibility tools is given below:
JAWS
HTML validator
Style sheet validator
Magnifier
Narrator
Soft keyboard
Test Roles for Usability
Usability testing is not as formal as other types of testing in several companies and is not performed with a pre-written set of test cases/checklists. Various methods adopted by companies for usability testing are as follows:
Performing usability testing as a separate cycle of testing
Hiring external consultants to do usability validation
Setting up a separate group for usability to institutionalize the practices across various product development teams and to set up organization-wide standards for usability.
3.6 Summary
The overall objective of object-oriented testing, to find the maximum number of errors with a minimum amount of effort, is identical to the objective of conventional software testing. But the strategy and tactics for OO testing differ significantly. The view of testing broadens to include the review of both the analysis and design models. In addition, the focus of testing moves away from the procedural component (the module) and toward the class. Because the OO analysis and design models and the resulting source code are semantically coupled, testing (in the form of formal technical reviews) begins during these engineering activities. For this reason, the review of CRC, object-relationship, and object-behavior models can be viewed as first-stage testing.
Once OOP has been accomplished, unit testing is applied for each class. The design of tests for a class uses a variety of methods: fault-based testing, random testing, and partition testing. Each of these methods exercises the operations encapsulated by the class. Test sequences are designed to ensure that relevant operations are exercised. The state of the class, represented by the values of its attributes, is examined to determine if errors exist. Integration testing can be accomplished using a thread-based or use-based strategy. Thread-based testing integrates the set of classes that collaborate to respond to one input or event. Use-based testing constructs the system in layers, beginning with those classes that do not use server classes.
There is an increasing awareness of usability testing in the industry. Soon, usability testing will become an engineering discipline, a life cycle activity, and a profession. Several companies plan for usability testing at the beginning of the product life cycle and track it to completion. Usability is not achieved only by testing. Usability is more in the design and in the minds of the people who contribute to the product. Usability is all about user experiences. Thinking from the perspective of the user all the time during the project will go a long way in ensuring usability.
3.7 Check Your Progress
1. Explain the purpose of performance testing and the factors governing performance testing.
2. How will you collect the requirements and test cases for performance testing?
3. How will you automate performance test cases? Explain with an example.
4. Define the terms performance tuning and benchmarking.
5. Mention some of the performance testing tools.
6. What is regression testing? Mention the types.
7. When should regression testing be done?
8. How will you perform an initial "smoke" or "sanity" test?
9. How will you select the test cases for regression testing?
10. What are the best practices to be followed in regression testing?
UNIT - IV
Structure
4.0 Objectives
4.1 Introduction
4.2 Test Planning
4.3 Test Management
4.4 Test Execution and Reporting
4.5 Summary
4.6 Check Your Progress
4.0 Objectives
To learn how to make a test plan for the whole testing process and the steps involved
To understand what test management is and how to carry it out
To learn about test execution and how to prepare a test report after every test
4.1 Introduction
In this chapter, we will look at some of the project management aspects of testing. The Project Management Institute defines a project formally as a temporary endeavor to create a unique product or service. This means that every project or service is different in some distinguishing way from all similar products or services. Testing is integrated into the endeavor of creating a given product or service; each phase and each type of testing has different characteristics, and what is tested in each version could be different. Hence, testing satisfies this definition of a project fully. Given that testing can be considered as a project on its own, it has to be planned, executed, tracked, and periodically reported on.
4.2 TEST PLANNING
Preparing a Test Plan
Testing, like any project, should be driven by a plan. The test plan covers the following:
What needs to be tested: the scope of testing, including clear identification of what will be tested and what will not be tested.
How the testing is going to be performed.
What resources are needed for testing: computer as well as human resources.
The time lines by which the testing activities will be performed.
Risks that may be faced in all the above, with appropriate mitigation and contingency plans.
Sco?e management#
7ne single plan can be prepared to cover all phases or there can be separate plans for
each phase! n sit#ations where there are m#ltiple test plans, there sho#ld be one test
plan, which covers the activities common for all plans! This is called the master test
plan!
Scope management pertains to specifying the scope of a pro*ect! "or testing, scope
management entails
/! +nderstanding what constit#tes a release of a prod#ct
%! 6rea0ing down the release into feat#res
4! Prioriti'ing the feat#res of testing
8! Deciding which feat#res will be tested and which will not be and
9! 1athering details to prepare for estimation of reso#rces for testing!
Knowing the feat#res and #nderstanding them from the #sage perspective will enable
the testing team to prioriti'e the feat#res for testing! The following factors drive
choice and prioriti'ation of feat#res to be tested!
Feat%re$ that i$ ne5 an& critical for the relea$e
The new feat#res of a release set the expectations of the c#stomers and m#st perform
properly! These new feat#res res#lt in new program code and th#s have a higher
s#sceptibility and expos#re to defects!
Feat%re$ 5ho$e fail%re$ can 0e cata$tro?hic
)egardless of whether a feat#re is new or not, any feat#re the fail#re of which can be
catastrophic has to be high on the list of feat#res to be tested! "or example, recovery
mechanisms in a database will always have to be among the most important feat#res
to be tested!
Features that are expected to be complex to test
Early participation by the testing team can help identify features that are difficult to test. This can help in starting the work on these features early and lining up appropriate resources in time.
Features which are extensions of earlier features that have been defect prone
Defect-prone areas need very thorough testing so that old defects do not creep in again.
Deciding test approach/strategy
Once we have this prioritized feature list, the next step is to drill down into some more details of what needs to be tested, to enable estimation of size, effort, and schedule. This includes identifying
1. What type of testing would you use for testing the functionality?
2. What are the configurations or scenarios for testing the features?
3. What integration testing is followed to ensure these features work together?
4. What localization validations would be needed?
5. What non-functional tests would you need to do?
The test approach should result in identifying the right type of test for each of the features or combinations.
Setting up criteria for testing
There must be clear entry and exit criteria for different phases of testing. Ideally, tests must be run as early as possible so that the last-minute pressure of running tests after development delays is minimized. The entry criteria for a test specify threshold criteria for each phase or type of test. The completion/exit criteria specify when a test cycle or a testing activity can be deemed complete. Suspension criteria specify when a test cycle or a test activity can be suspended.
Identifying responsibilities, staffing, and training needs
A testing project requires different people to play different roles. There are roles for test engineers, test leads, and test managers. The different role definitions should
1. Ensure there is clear accountability for a given task, so that each person knows what he has to do;
2. Clearly list the responsibilities for various functions to various people;
3. Complement each other, ensuring no one steps on another's toes; and
4. Supplement each other, so that no task is left unassigned.
Staffing is done based on estimation of the effort involved and the availability of time for release. In order to ensure that the right tasks get executed, the features and tasks are prioritized on the basis of effort, time, and importance.
It may not be possible to find a perfect fit between the requirements and the availability of skills; the gaps should be addressed with appropriate training programs.
Identifying resource requirements
As a part of planning for a testing project, the project manager should provide estimates for the various hardware and software resources required. Some of the following factors need to be considered.
1. Machine configuration needed to run the product under test
2. Overheads required by the test automation tool, if any
3. Supporting tools such as compilers, test data generators, configuration management tools, and so on
4. The different configurations of the supporting software that must be present
5. Special requirements for running machine-intensive tests such as load tests and performance tests
6. Appropriate number of licenses of all the software
Identifying test deliverables
The test plan also identifies the deliverables that should come out of the test cycle/testing activity. The deliverables include the following:
1. The test plan itself
2. Test case design specifications
3. Test cases, including any automation that is specified in the plan
4. Test logs produced by running the tests
5. Test summary reports
Testing tasks: size and effort estimation
The scope identified above gives a broad overview of what needs to be tested. This understanding is quantified in the estimation step. Estimation happens broadly in three phases:
1. Size estimation
2. Effort estimation
3. Schedule estimation
Size estimation
The size estimate quantifies the actual amount of testing that needs to be done. The factors that contribute to the size estimate of a testing project are as follows:
Size of the product under test: Lines of Code (LOC) and Function Points (FP) are the popular methods to estimate the size of an application. A somewhat simpler representation of application size is the number of screens, reports, or transactions.
Extent of automation required
Number of platforms and inter-operability environments to be tested
Productivity data
Reuse opportunities
Robustness of processes
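The screen-count representation of size converts into effort through productivity data. A minimal sketch, assuming invented productivity figures (test cases per screen, and test cases executed per person-day); real values would come from an organization's own historical data:

```python
# Illustrative size-to-effort estimate based on screen count.
# The productivity constants below are assumptions for the example.

def estimate_effort(num_screens, cases_per_screen=8, cases_per_person_day=20):
    size = num_screens * cases_per_screen      # size, in number of test cases
    effort_days = size / cases_per_person_day  # effort, in person-days
    return size, effort_days

size, effort = estimate_effort(num_screens=25)
print(size, effort)  # 200 test cases, 10.0 person-days
```

Schedule estimation then maps these person-days onto the calendar, given team size and dependencies.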
Activity breakdown and scheduling
Activity breakdown and schedule estimation entail translating the effort required into specific time frames. The following steps make up this translation:
Identifying external and internal dependencies among the activities
Sequencing the activities, based on the expected duration as well as on the dependencies
Monitoring the progress in terms of time and effort
Rebalancing schedules and resources as necessary
Communications management
Communications management consists of evolving and following procedures for communication that ensure that everyone is kept in sync with the right level of detail.
Risk management
Like every project, testing projects also face risks. Risks are events that could potentially affect a project's outcome. Risk management entails
Identifying the possible risks;
Quantifying the risks;
Planning how to mitigate the risks; and
Responding to risks when they become a reality.
Fig. 23 Aspects of risk management
i) Risk identification consists of identifying the possible risks that may hit a project. Use of checklists, use of organizational history and metrics, and informal networking across the industry are the common ways to identify risks in testing.
ii) Risk quantification deals with expressing the risk in numerical terms. The probability of the risk happening and the impact of the risk are the two components of the quantification of risk.
iii) Risk mitigation planning deals with identifying alternative strategies to combat a risk event. To handle the effects of a risk, it is advisable to have multiple mitigation strategies.
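One common way to combine the two components of risk quantification, assumed here rather than mandated by the text, is to compute exposure as probability multiplied by impact and rank risks by it. The risk names and numbers below are illustrative.

```python
# Sketch: quantifying and ranking risks by exposure = probability x impact.
# Probabilities (0..1) and impacts (1..10) are invented example values.

risks = [
    ("Unclear requirements",          0.6, 8),
    ("Insufficient time for testing", 0.5, 9),
    ("Attrition of skilled testers",  0.3, 7),
]

# Mitigation planning can then focus on the highest-exposure risks first.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for name, prob, impact in ranked:
    print(f"{name}: exposure = {prob * impact:.1f}")
```

Ranking by exposure gives mitigation planning an objective order in which to prepare strategies.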
The following are some of the common risks encountered in testing projects:
Unclear requirements;
Schedule dependence;
Insufficient time for testing;
Show-stopper defects;
Availability of skilled and motivated people for testing; and
Inability to get a test automation tool.
4.3 TEST MANAGEMENT
Choice of standards
Standards comprise an important part of planning in any organization. There are two types of standards: external standards and internal standards.
External standards are standards that a product should comply with, are externally visible, and are usually stipulated by external consortia. Compliance to external standards is usually mandated by external parties.
Internal standards are standards formulated by a testing organization to bring in consistency and predictability. They standardize the processes and methods of working within the organization. Some of the internal standards include
Naming and storage conventions for test artifacts: Every test artifact has to be named appropriately and meaningfully. Such naming conventions should enable easy identification of the product functionality that a set of tests is intended for, and reverse mapping to identify the functionality corresponding to a given set of tests.
Document standards
Most of the discussion on documentation and coding standards pertains to automated testing. Documentation standards specify how to capture information about the tests within the test scripts themselves. Internal documentation of test scripts is similar to internal documentation of program code and should include the following:
Appropriate header-level comments at the beginning of the file that outline the functions to be served by the test.
Sufficient in-line comments spread throughout the file, explaining the functions served by the various parts of a test script.
Up-to-date change history information, recording all the changes made to the test file.
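A short sketch of what such internal documentation might look like inside a test script. The script name, header fields, and the operation under test are invented for illustration, not a prescribed format.

```python
"""
Test script  : tc_copy_file.py (hypothetical name)
Purpose      : Verifies that copying a file preserves its contents.
Change history:
    2006-01-10  A. Tester   Initial version.
    2006-02-02  B. Tester   Added byte-for-byte comparison.
"""
import os
import shutil
import tempfile

# In-line comment: set up a source file with known contents.
src = tempfile.NamedTemporaryFile(delete=False, suffix=".txt")
src.write(b"hello")
src.close()
dst = src.name + ".copy"

# In-line comment: the operation under test.
shutil.copyfile(src.name, dst)

# In-line comment: the copied file must match the original byte for byte.
with open(dst, "rb") as f:
    assert f.read() == b"hello"
print("copy test passed")

# In-line comment: clean up test artifacts.
os.unlink(src.name)
os.unlink(dst)
```

The header comments serve the first standard above, the in-line comments the second, and the change-history block the third.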
Test coding standards
Test coding standards go one level deeper into the tests and enforce standards on how the tests themselves are written. The standards may
1. Enforce the right type of initialization;
2. Stipulate ways of naming variables within the scripts to make sure that a reader understands consistently the purpose of a variable;
3. Encourage reusability of test artifacts; and
4. Provide standard interfaces to external entities like the operating system, hardware, and so on.
Test reporting standards
Since testing is tightly interlinked with product quality, all the stakeholders must get a consistent and timely view of the progress of tests. The test reporting standards provide guidelines on the level of detail that should be present in the test reports, their standard formats and contents, the recipients of the report, and so on.
Test infrastructure management
Testing requires a robust infrastructure to be planned upfront. This infrastructure is made up of three essential elements:
1. A test case database (TCDB)
2. A defect repository (DR)
3. A configuration management repository and tool
A test case database captures all the relevant information about the test cases in an organization.
A defect repository captures all the relevant details of defects reported for a product. Most of the metrics classified as testing defect metrics and development defect metrics are derived out of the data in the defect repository.
Yet another infrastructure element that is required for a software product organization is a Software Configuration Management (SCM) repository. An SCM repository keeps track of change control and version control of all the files that make up a software product. Change control ensures that
Changes to test files are made in a controlled fashion and only with proper approvals;
Changes made by one test engineer are not accidentally lost or overwritten by other changes;
Each change produces a distinct version of the file that is recreatable at any point of time; and
At any point of time, everyone gets access to only the most recent version of the test files.
Version control ensures that the test scripts associated with a given release of a product are baselined along with the product files.
The TCDB, defect repository, and SCM repository should complement each other and work together in an integrated fashion.
Figure 24 Relationship among SCM, DR, and TCDB
Test people management
People management is an integral part of any project management. It requires the ability to hire, motivate, and retain the right people. These skills are seldom formally taught. Testing projects present several additional challenges. We believe that the success of a testing organization depends on judicious people management skills.
The important point is that the common goals and the spirit of teamwork have to be internalized by all the stakeholders. Such internalization and upfront team building have to be part of the planning process for the team to succeed.
Integrating with product release
Ultimately, the success of a product depends on the effectiveness of integration of the development and testing activities. These job functions have to work in tight unison between themselves and with other groups such as product support, product management, and so on. The schedules of testing have to be linked directly to product release. The following are some of the points to be decided for this planning.
Sync points between development and testing as to when different types of testing can commence.
Service level agreements between development and testing as to how long it would take for the testing team to complete the testing. This will ensure that testing focuses on finding relevant and important defects only.
Consistent definitions of the various priorities and severities of the defects.
Communication mechanisms to the documentation group to ensure that the documentation is kept in sync with the product in terms of known defects, workarounds, and so on.
The purpose of the testing team is to identify the defects in the product and the risks that could be faced by releasing the product with the existing defects.
4.4 TEST PROCESS
Putting together and baselining a test plan
A test plan combines all the points discussed above into a single document that acts as an anchor point for the entire testing project. An organization normally arrives at a template that is to be used across the board. Each testing project puts together a test plan based on the template. The test plan is reviewed by a designated set of competent people in the organization. It is then approved by a competent authority, who is independent of the project manager directly responsible for testing. After this, the test plan is baselined into the configuration management repository. From then on, the baselined test plan becomes the basis for running the testing project. In addition, any changes needed to the test plan templates are periodically discussed among the different stakeholders, so that the template is kept current and applicable to the testing teams.
Test case specification
Using the test plan as the basis, the testing team designs test case specifications, which then become the basis for preparing individual test cases. A test case is a series of steps executed on a product, using a pre-defined set of input data, expected to produce a pre-defined set of outputs, in a given environment. Hence, a test case specification should clearly identify
The purpose of the test: this lists what feature or part the test is intended for;
Items being tested, along with their version/release numbers as appropriate;
The environment that needs to be set up for running the test case;
The input data to be used for the test case;
Steps to be followed to execute the test;
The expected results that are considered to be correct;
A step to compare the actual results produced with the expected results; and
Any relationship between this test and other tests.
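The fields above can be captured as structured data so that specifications are uniform and machine-readable. The field names follow the list above; the identifiers and values are invented for illustration.

```python
# Sketch of a test case specification as a structured record.
# IDs, values, and the login feature are hypothetical examples.

test_case_spec = {
    "id": "TC-LOGIN-001",
    "purpose": "Verify login succeeds with valid credentials",
    "items_under_test": ["login module v2.1"],
    "environment": "Application server running; test user provisioned",
    "input_data": {"user": "admin", "password": "secret"},
    "steps": ["Open login page", "Enter credentials", "Press Login"],
    "expected_results": "User lands on the home page",
    "related_tests": ["TC-LOGIN-002"],
}

# Each specification later maps to one or more executable test cases.
print(test_case_spec["id"], "->", test_case_spec["purpose"])
```

Keeping specifications in a uniform structure like this also makes it straightforward to store them in a test case database and to link them to requirements.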
Update of traceability matrix
A traceability matrix is a tool to validate that every requirement is tested. This matrix is created during the requirements gathering phase itself, by filling in the unique identifier for each requirement. When a test case specification is complete, the row corresponding to the requirement being tested by the test case is updated with the test case specification identifier. This ensures that there is a two-way mapping between requirements and test cases.
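A traceability matrix can be as simple as a mapping from requirement identifiers to the test case identifiers that cover them. The identifiers below are illustrative.

```python
# Minimal traceability matrix sketch: requirement IDs map to the test case
# specification IDs that cover them. All IDs are invented examples.

traceability = {"REQ-01": [], "REQ-02": [], "REQ-03": []}

def link(requirement_id, test_case_id):
    """Update the matrix when a test case specification is completed."""
    traceability[requirement_id].append(test_case_id)

link("REQ-01", "TC-001")
link("REQ-02", "TC-002")

# Requirements with an empty list are not yet covered by any test.
untested = [req for req, tcs in traceability.items() if not tcs]
print(untested)  # ['REQ-03']
```

Scanning for empty rows gives exactly the validation the matrix exists for: every requirement either has tests or is visibly untested.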
Identifying possible candidates for automation
Before writing the test cases, a decision should be taken as to which tests are to be automated and which should be run manually. Some of the criteria that will be used in deciding which scripts to automate include
The repetitive nature of the test;
The effort involved in automation;
The amount of manual intervention required for the test; and
The cost of the automation tool.
Developing and baselining test cases
Based on the test case specifications and the choice of candidates for automation, test cases have to be developed. The test cases should also have change history documentation, which specifies
What the change was;
Why the change was necessitated;
Who made the change;
When the change was made;
A brief description of how the change has been implemented; and
Other files affected by the change.
All the artifacts of test cases (the test scripts, inputs, expected outputs, and so on) should be stored in the test case database and SCM.
Executing test cases and keeping the traceability matrix current
The prepared test cases have to be executed at the appropriate times during a project. For example, test cases corresponding to smoke tests may be run on a daily basis, while system testing test cases will be run during system testing.
As the test cases are executed during a test cycle, the defect repository is updated with
1. Defects from the earlier test cycles that are fixed in the current build; and
2. New defects that get uncovered in the current run of the tests.
During test design and execution, the traceability matrix should be kept current. When tests get designed and executed successfully, the traceability matrix should be updated.
Collecting and analyzing metrics
When tests are executed, information about the test execution gets collected in test logs and other files. The basic measurements from running the tests are then converted to meaningful metrics by the use of appropriate transformations and formulae.
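The transformation from raw measurements to a metric can be very simple. A sketch, assuming an invented log format, that derives one common metric (pass rate) from test log lines:

```python
# Sketch: converting raw test-log measurements into a metric.
# The one-result-per-line log format is an assumption for the example.

log_lines = [
    "TC-001 PASS",
    "TC-002 FAIL",
    "TC-003 PASS",
    "TC-004 PASS",
]

results = [line.split()[1] for line in log_lines]
pass_rate = results.count("PASS") / len(results)
print(f"pass rate = {pass_rate:.0%}")  # pass rate = 75%
```

The same raw data can feed other metrics, such as defect density per feature or progress against the planned test count.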
Preparing test summary report
At the completion of a test cycle, a test summary report is produced. This report gives insights to the senior management about the fitness of the product for release.
Recommending product release criteria
One of the purposes of testing is to decide the fitness of a product for release. Testing can never conclusively prove the absence of defects in a software product. What it provides is evidence of what defects exist in the product, their severity, and their impact. The job of the testing team is to articulate to the senior management and the product release team
1. What defects the product has;
2. What the impact/severity of each of the defects is; and
3. What the risks of releasing the product with the existing defects would be.
The senior management can then take a meaningful business decision on whether or not to release a given version.
4.5 Test Execution and Reporting
Testing requires constant communication between the test team and other teams. Test reporting is a means of achieving this communication. There are two types of reports or communication that are required: test incident reports and test summary reports.
Test incident report
A test incident report is a communication that happens through the testing cycle as and when defects are encountered. A test incident report is an entry made in the defect repository. Each defect has a unique ID, and this is used to identify the incident. The high-impact test incidents are highlighted in the test summary report.
Test cycle report
Test projects take place in units of test cycles. A test cycle entails planning and running certain tests in cycles, each cycle using a different build of the product. A test cycle report, at the end of each cycle, gives
1. A summary of the activities carried out during that cycle;
2. Defects that were uncovered during that cycle, based on their severity and impact;
3. Progress from the previous cycle to the current cycle in terms of defects fixed;
4. Outstanding defects that are yet to be fixed in this cycle; and
5. Any variations observed in effort or schedule.
Test summary report
The final step in a test cycle is to recommend the suitability of a product for release. A report that summarizes the results of a test cycle is the test summary report.
There are two types of test summary reports:
1. Phase-wise test summary reports, produced at the end of every phase
2. Final test summary reports
A summary report should present
A summary of the activities carried out during the test cycle or phase;
Variance of the activities carried out from the activities planned;
A summary of results, which includes tests that failed, with any root cause descriptions, and the severity of impact of the defects uncovered by the tests; and
A comprehensive assessment and recommendation for release, which should include a fit-for-release assessment and a release recommendation.
Recommending product release
Based on the test summary report, an organization can take a decision on whether or not to release the product. Ideally, an organization would like to release a product with zero defects. However, market pressures may cause the product to be released with the defects, provided that the senior management is convinced that there is no major risk of customer dissatisfaction. Such a decision should be taken by the senior manager only after consultation with the customer support team, development team, and testing team, so that the overall workload for all parts of the organization can be evaluated.
Best Practices
Best practices in testing can be classified into three categories:
1. Process related
2. People related
3. Technology related
Process related best practices
A strong process infrastructure and process culture are required to achieve better predictability and consistency. A process database, a federation of information about the definition and execution of various processes, can be a valuable addition to the tools in an organization.
People related best practices
While individual goals are required for the development and testing teams, it is very important to understand the overall goals that define the success of the product as a whole. Job rotation among support, development, and testing can also increase the gelling among the teams. Such job rotation can help the different teams develop better empathy and appreciation of the challenges faced in each other's roles and thus result in better teamwork.
Technology related best practices
A fully integrated TCDB-SCM-DR can help in better automation of testing activities. When test automation tools are used, it is useful to integrate the tool with the TCDB, defect repository, and SCM tool.
A final remark on best practices: the three dimensions of best practices cannot be carried out in isolation. A good technology infrastructure should be aptly supported by effective process infrastructure and be executed by competent people. These best practices are inter-dependent, self-supporting, and mutually enhancing. Thus, the organization needs to take a holistic view of these practices and keep a fine balance among the three dimensions.
4.6 Summary
Failing to plan is planning to fail. Testing, like any project, should be driven by a plan. Scope management for deciding the features to be tested or not tested, deciding a test approach, setting up criteria for testing, and identifying responsibilities, staffing, and training needs are all included in test planning.
Test management includes test infrastructure management and test people management. The test infrastructure consists of a test case database, a defect repository, and a configuration management repository and tool.
The test process includes the test case specification, baselining the test plan, and updating the traceability matrix. The test process also has to identify possible candidates for automation.
4.7 Check Your Progress
1. How will you prepare a test plan? Explain the strategy.
2. Explain the concept of identifying responsibilities, staffing, and training needs.
3. How will you make the size and effort estimation of the product?
4. Explain the aspects of risk management.
5. Explain the relationship between SCM, DR, and TCDB.
6. Explain the test process with an example.
7. What is called 'test reporting'?
8. How will you make a test report? Explain with a sample report.
9. Explain the best practices to be followed in the test process.
10. Differentiate between a test cycle report and a test summary report.
UNIT V
Structure
5.0 Objectives
5.1 Introduction
5.2 Software Test Automation
5.3 Test Metrics and Measurements
5.4 Summary
5.5 Check Your Progress
5.0 Objectives
To know the basic concepts of software test automation and their benefits
To understand test metrics and measurements and the methods
5.1 Introduction
Developing software to test the software is called test automation. Test automation can help address several problems.
Automation saves time, as software can execute test cases faster than humans do.
Test automation can free the test engineers from mundane tasks and make them focus on more creative tasks.
Automated tests can be more reliable.
Automation helps in immediate testing.
Automation can protect an organization against attrition of test engineers.
Test automation opens up opportunities for better utilization of global resources.
Certain types of testing cannot be executed without automation.
Automation means end-to-end, not test execution alone.
Automation should have scripts that produce test data to maximize coverage of permutations and combinations of inputs, and the expected outputs for result comparison. These scripts are called test data generators. The automation script should be able to map the error patterns dynamically to conclude the result. The error pattern mapping is done not only to conclude the result of a test, but also to point out the root cause.
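A test data generator can be sketched as code that produces input combinations together with the expected output for each, so that results can be compared automatically. The function under test (a clamp routine) and the value ranges below are invented for the example.

```python
# Sketch of a simple test data generator pairing generated inputs with
# expected outputs for result comparison. All names/values are illustrative.

import itertools

def clamp(value, low, high):
    """Function under test: restrict value to the range [low, high]."""
    return max(low, min(value, high))

# Generate combinations of boundary-oriented inputs, computing the expected
# output alongside each input.
values = [-1, 0, 5, 10, 11]
bounds = [(0, 10)]
cases = [(v, lo, hi, min(max(v, lo), hi))
         for v, (lo, hi) in itertools.product(values, bounds)]

for v, lo, hi, expected in cases:
    assert clamp(v, lo, hi) == expected
print(len(cases), "generated cases passed")  # 5 generated cases passed
```

In a real harness, a mismatch would be matched against known error patterns to conclude not just pass/fail but also a likely root cause.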
5.2 SOFTWARE TEST AUTOMATION
Terms used in automation
A test case is a set of sequential steps to execute a test, operating on a set of predefined inputs to produce certain expected outputs. There are two types of test cases, namely automated and manual. 'Test case' in this chapter refers to automated test cases. A test case can be documented as a set of simple steps, or it could be an assertion statement or a set of assertions. An example of an assertion is "Opening a file which is already opened should fail." The following table describes some test cases for the login example, showing how login can be tested for different types of testing.

S.No.  Test cases for testing                                  Belongs to what type of testing
1.     Check whether login works                               Functionality
2.     Repeat login operation in a loop for 48 hours           Reliability
3.     Perform login from 10000 clients                        Load/stress testing
4.     Measure time taken for login operations
       in different conditions                                 Performance
5.     Run login operation from a machine running
       the Japanese language                                   Internationalization

Table 1 Same test case being used for different types of testing

In the above table, the 'how' portion of the test case is called a scenario. What an operation has to do is a product-specific feature, and how it is to be run is a framework-specific requirement. When a set of test cases is combined and associated with a set of scenarios, they are called a "test suite".
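The separation the paragraph above describes can be sketched directly: test cases say what to do, scenarios say how to run them, and a suite pairs the two. All names and values are illustrative.

```python
# Sketch: a test suite as an association of test cases (what) with
# scenarios (how). Identifiers and settings are invented examples.

test_cases = {
    "login_works": "Check whether login works",
    "login_load": "Perform login from many clients",
}

scenarios = {
    "single_run": {"repeat": 1, "clients": 1},
    "load": {"repeat": 1, "clients": 10000},
}

# The suite pairs each test case with the scenario under which to run it.
test_suite = [
    ("login_works", "single_run"),
    ("login_load", "load"),
]

for case_id, scenario_id in test_suite:
    print(test_cases[case_id], "->", scenarios[scenario_id])
```

Because the pairing is data, the same test case can appear in several suites under different scenarios, which is exactly what Table 1 illustrates for login.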



Fig. 25 Framework for test automation (user-defined scenarios specify how to execute the tests; test cases specify what a test should do)
Skills Needed for Automation
The automation of testing is broadly classified into three generations.
First generation: record and playback
Record and playback avoids the repetitive nature of executing tests. Almost all the test tools available in the market have the record and playback feature. A test engineer records the sequence of actions by keyboard characters or mouse clicks, and those recorded scripts are played back later, in the same order as they were recorded. When there is frequent change, the record and playback generation of test automation tools may not be very effective.
Second generation: data-driven
This method helps in developing test scripts that generate the set of input conditions and corresponding expected outputs. This enables the tests to be repeated for different input and output conditions. This generation of automation focuses on input and output conditions, using the black box testing approach.
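A minimal data-driven sketch, reusing the chapter's login example: the test logic is written once and repeated over a table of input conditions and expected outputs. The login stand-in and the data rows are invented for illustration.

```python
# Sketch of a data-driven test: one test routine, many data rows.
# The login function is a hypothetical stand-in for the product's routine.

def login(user, password):
    return bool(user) and password == "secret"

# The data table drives the test; adding a row adds a test.
data = [
    ("admin", "secret", True),   # valid credentials accepted
    ("admin", "wrong", False),   # wrong password rejected
    ("", "secret", False),       # empty user rejected
]

for user, password, expected in data:
    assert login(user, password) == expected
print("all data-driven cases passed")
```

Separating data from logic is what lets second-generation automation repeat the same test for many input and output conditions without new code.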
Third generation: action-driven
This technique enables a layman to create automated tests; no input and expected output conditions are required for running the tests. All actions that appear on the application are automatically tested, based on a generic set of controls defined for automation. The input and output conditions are automatically generated and used, and the scenarios for test execution can be dynamically changed using the test framework available in this approach. Hence automation in the third generation involves two major aspects: "test case automation" and "framework design".
What to Automate, Scope of Automation
The specific requirements can vary from product to product, from situation to situation, and from time to time. The following are some generic tips for identifying the scope of automation.
Identifying the types of testing amenable to automation
Stress, reliability, scalability, and performance testing
These types of testing require the test cases to be run from a large number of different machines for an extended period of time, such as 24 hours, 48 hours, and so on. Test cases belonging to these testing types become the first candidates for automation.
Regression tests
Regression tests are repetitive in nature. Given the repetitive nature of the test cases, automation will save significant time and effort in the long run.
Functional tests
These kinds of tests may require a complex set-up and thus require specialized skills, which may not be available on an ongoing basis. Automating these once, using the expert skills, can enable less-skilled people to run these tests on an ongoing basis.
Automating areas less prone to change
User interfaces normally go through significant changes during a project. To avoid rework on automated test cases, proper analysis has to be done to find out the areas of change to user interfaces, and to automate only those areas that will go through relatively less change. The non-user-interface portions of the product can be automated first. This enables the non-GUI portions of the automation to be reused even when the GUI goes through changes.
Automate tests that pertain to standards
One of the tests that products may have to undergo is compliance to standards. For example, a product providing a JDBC interface should satisfy the standard JDBC tests. Automating for standards provides a dual advantage. Test suites developed for standards are not only used for product testing but can also be sold as test tools for the market. Testing for standards has certain legal requirements. To certify the software, a test suite is developed and handed over to different companies. This is called "certification testing" and requires perfectly compliant results every time the tests are executed.
Management aspects in automation
Prior to starting automation, adequate effort has to be spent to obtain management commitment. The automated test cases need to be maintained till the product reaches obsolescence. Since automation involves effort over an extended period of time, management permissions are only given in phases and part by part. It is important to automate the critical and basic functionalities of a product first. To achieve this, all test cases need to be prioritized as high, medium, and low, based on customer expectations. Automation should start from the high-priority requirements and then move on to the medium- and low-priority requirements.
Design and Architecture for Automation
Design and architecture is an important aspect of automation. As in product development, the design has to represent all requirements in modules and in the interactions between modules.
In integration testing, both internal interfaces and external interfaces have to be captured by design and architecture. Architecture for test automation involves two major heads: a test infrastructure that covers a test case database, and a defect database or defect repository. Using this infrastructure, the test framework provides a backbone that ties the selection and execution of test cases.
External modules
There are two modules that are external to automation: the TCDB and the defect DB. Manual test cases do not need any interaction between the framework and the TCDB. Test engineers submit the defects for manual test cases. For automated test cases, the framework can automatically submit the defects to the defect DB during execution. These external modules can be accessed by any module in the automation framework.
Scenario and configuration file modules
Scenarios are information on how to execute a particular test case. A configuration file contains a set of variables that are used in automation. A configuration file is important for running the test cases under various execution conditions, and for running the tests for various input and output conditions and states. The values of variables in this configuration file can be changed dynamically to achieve different execution input, output, and state conditions.
Test cases and test framework modules
A test case is an object for execution for other modules in the architecture and does not
represent any interaction by itself. A test framework is a module that combines "what
to execute" and "how it has to be executed". The test framework is considered the
core of automation design. It can be developed by the organization internally or can
be bought from a vendor.
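The combination of "what to execute" and "how to execute" can be sketched as follows. Here a scenario is an ordered list of test case names (the "what"), and a registry maps each name to a callable (the "how"); all class and method names are illustrative assumptions:

```python
# Sketch of a test-framework core that ties the selection of test cases
# (a scenario) to their execution (a registry of callables).

class TestFramework:
    def __init__(self):
        self._registry = {}  # test case name -> callable implementation

    def register(self, name, func):
        """Tie a test-case name to its executable implementation."""
        self._registry[name] = func

    def run_scenario(self, scenario):
        """Execute the named test cases in order, collecting PASS/FAIL."""
        results = {}
        for name in scenario:
            try:
                self._registry[name]()
                results[name] = "PASS"
            except AssertionError:
                results[name] = "FAIL"
        return results
```

A commercial or in-house framework adds setup/cleanup, logging, and defect-DB submission around this same core loop.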
Tools and results modules
When a test framework performs its operations, a set of tools may be
required. For example, when test cases are stored as source code files in the TCDB, they
need to be extracted and compiled by build tools. In order to run the compiled code,
certain runtime tools and utilities may be required.
The results that come out of the tests must be stored for future analysis. The history of
all the previous test runs should be recorded and kept as archives. These results help the
test engineer compare the current execution of the test cases with the previous test run. The audit
of all tests that are run and the related information are stored in this module of
automation. This can also help in selecting test cases for regression runs.
Report generator and reports/metrics modules
Once the results of a test run are available, the next step is to prepare the test reports
and metrics. Preparing reports is complex work and hence it should be part of the
automation design. The periodicity of the reports differs, such as daily, weekly,
monthly, and milestone reports. Having reports at different levels of detail can
address the needs of multiple constituents and thus provide significant returns.
The module that takes the necessary inputs and prepares a formatted report is called a
report generator. Once the results are available, the report generator can generate
metrics. All the reports and metrics that are generated are stored in the reports/metrics
module of automation for future use and analysis.
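A report generator in this sense takes raw results and produces both a formatted report and a derived metric. The sketch below assumes the simple name-to-status result format used informally above; the function name and report layout are invented for illustration:

```python
# Sketch of a report generator: formats raw results and derives one
# metric (pass percentage) from them.

def generate_report(results):
    total = len(results)
    passed = sum(1 for status in results.values() if status == "PASS")
    pass_pct = 100.0 * passed / total if total else 0.0
    lines = ["Test Run Report", "---------------"]
    lines += ["%s: %s" % (name, status) for name, status in sorted(results.items())]
    lines.append("Pass rate: %.1f%%" % pass_pct)
    return "\n".join(lines), pass_pct
```

Daily, weekly, or milestone reports would differ only in which archived result sets are fed into such a generator.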
"eneric Re6%irement$ for Te$t ToolCFrame5or,
n the previo#s section, we described a generic framewor0 for test a#tomation! This
section presents detailed criteria that s#ch a framewor0 and its #sage sho#ld satisfy!
:o hard coding in the test s#ite!
Test caseLs#ite expandability!
)e#se of code for different types of testing, test cases!
.#tomatic set#p and clean#p!
ndependent test cases!
Test case dependency
ns#lating test cases d#ring exec#tion
Coding standards and directory str#ct#re!
Selective exec#tion of test cases!
)andom exec#tion of test cases!
Parallel exec#tion of test cases!
Looping the test cases
1ro#ping of test scenarios
Test case exec#tion based on previo#s res#lts!
)emote exec#tion of test cases!
.#tomatic archival of test data!
)eporting scheme!
ndependent of lang#ages
Probability to different platforms!
Process Model for Automation
The work on automation can go on simultaneously with product development and can
overlap with multiple releases of the product. One specific requirement for
automation is that the delivery of the automated tests should be done before the test
execution phase, so that the deliverables from the automation effort can be utilized for the
current release of the product.
Test automation life cycle activities bear a strong similarity to product development
activities. Just as product requirements need to be gathered on the product side,
automation requirements too need to be gathered. Similarly, just as product planning,
design, and coding are done, so also during test automation are automation planning,
design, and coding done.
After introducing testing activities for both the product and automation, the
above figure includes two parallel sets of activities for development and testing
separately. When they are put together, it becomes a "W" model. Hence, for a product
development effort involving automation, it is a good choice to follow the W model to
ensure that the quality of the product as well as of the test suite developed meets the
expected quality norms.
Selecting a test tool
Having identified the requirements of what to automate, a related question is the
choice of an appropriate tool for automation. Selecting the test tool is an important
aspect of test automation for several reasons, given below:
1. Free tools are not well supported and get phased out soon.
2. Developing in-house tools takes time.
3. Test tools sold by vendors are expensive.
4. Test tools require strong training.
5. Test tools generally do not meet all the requirements for automation.
6. Not all test tools run on all platforms.
For all the above strong reasons, adequate focus needs to be provided for selecting the
right tool for automation.
Criteria for selecting test tools
In the previous section, we looked at some reasons for evaluating test tools
and how requirements gathering will help. These criteria change according to context and
are different for different companies and products. We will now look into the broad
categories for classifying the criteria. The categories are
1. Meeting requirements;
2. Technology expectations;
3. Training/skills; and
4. Management aspects.
Meeting requirements
Firstly, there are plenty of tools available in the market, but they rarely meet all the
requirements of a given product. Evaluating different tools for different requirements
involves significant effort, money, and time.
Secondly, test tools are usually one generation behind and may not provide backward
or forward compatibility. Thirdly, test tools may not go through the same amount of
evaluation for new requirements.
Finally, a number of test tools cannot differentiate between a product failure and a test
failure. So the test tool must have some intelligence to proactively find out the
changes that happened in the product and analyze the results accordingly.
Technology expectations
Extensibility and customization are important expectations of a test tool.
A good number of test tools require their libraries to be linked with product binaries.
Test tools are not 100% cross-platform. When the impact of the product on the
network is being analyzed, the first suspect is the test tool, and it is uninstalled
when such analysis starts.
Training/skills
While test tools require plenty of training, very few vendors provide training to
the required level. Test tools expect the users to learn new languages/scripts and may
not use standard languages/scripts. This increases the skill requirements for automation
and increases the need for a learning curve inside the organization.
Management aspects
Test tools require system upgrades.
Migration to other test tools is difficult.
Deploying a tool requires huge planning and effort.
Steps for tool selection and deployment
1. Identify your test suite requirements among the generic requirements discussed.
Add other requirements, if any.
2. Make sure the experiences discussed in the previous sections are taken care of.
3. Collect the experiences of other organizations which used similar test tools.
4. Keep a checklist of questions to be asked of the vendors on cost/effort/support.
5. Identify the list of tools that meet the above requirements.
6. Evaluate and shortlist one or a set of tools, and train all test developers on the tool.
7. Deploy the tool across the teams after training all potential users of the tool.
Challenges in Automation
The most important challenge of automation is management commitment.
Automation takes time and effort and pays off in the long run. Management should
have patience and persist with automation. Successful test automation endeavors are
characterized by unflinching management commitment and a clear vision of goals,
with progress tracked with respect to the long-term vision.
8.3 TEST METRICS AND MEASUREMENTS
What are metrics and measurements?
Metrics derive information from raw data with a view to helping in decision making.
Some of the areas that such information would shed light on are:
relationships between the data points;
any cause-and-effect correlation between the observed data points;
any pointers to how the data can be used for future planning and continuous
improvement.
Metrics are thus derived from measurements using appropriate formulas or calculations.
Obviously, the same set of measurements can help produce different sets of metrics of
interest to different people.
From the above discussion it is obvious that in order that a project's performance be
tracked and its progress monitored effectively,
the right parameters must be measured; the parameters may pertain to the product
or to the process;
the right analysis must be done on the data measured, to draw conclusions within a project
or organization; and
the results of the analysis must be presented in an appropriate form to the
stakeholders, to enable them to make the right decisions on improving product
or process quality.
Effort is the actual time that is spent on a particular activity or phase. Elapsed days
are the difference between the start of an activity and the completion of the activity.
Collecting and analyzing metrics involves effort and several steps. This is depicted in
figure 26 below.
"ig#re! %3 Steps in a metrics program
The first step involved in a metrics programme is to decide what meas#rement are
important and collect data accordingly! The effort spent on testing n#mber of defect
and n#mber of test cases is some examples of meas#rement Depending on what the
data is #sed for the gran#larity of meas#rement will vary!

While deciding what to meas#re the following aspects need to be 0ept in mind!
/! What is meas#red sho#ld be of relevance to what we are trying to achieve!
"or testing f#nctions we wo#ld obvio#sly be interested in the effort spent on
testing n#mber of test cases n#mber of defects reported from test cases and so
on!
%! The entities meas#red sho#ld be nat#ral and sho#ld not involve too many
overheads for meas#rement! f there are too many overheads in ma0ing the
meas#rements do not follows nat#rally from the act#al wor0 being done then
the people who s#pply the date may resist giving the meas#rement data!
(Figure 26 shows these steps as a cycle: identify what to measure; transform
measurements to metrics; decide operational requirements; perform metric analysis;
take actions and follow up; refine measurements and metrics.)
3. What is measured should be at the right level of granularity to satisfy the
objective for which the measurement is being made.
An approach involved in getting the granular detail is called data drilling.
It is important to provide as much granularity in measurement as possible. A set of
measurements can be combined to generate metrics. An example question involving
multiple measurements is "How many test cases produced the 40 defects in data
migration involving different schemas?" There are two measurements involved in this
question: the number of test cases and the number of defects. Hence, the second step
involved in metrics collection is defining how to combine data points or
measurements to provide meaningful metrics.
A particular metric can use one or more measurements. The operational requirements
for a metrics plan should lay down not only the periodicity but also other operational
issues, such as who should collect the measurements, who should receive the analysis, and
so on. The final step involved in a metrics plan is to take the necessary action and follow
up on the action.
Why Metrics in Testing?
Metrics are needed to know test case execution productivity and to estimate the test
completion date.
Days needed to complete testing = Total test cases yet to be executed /
Test case execution productivity
The defect-fixing trend collected over a period of time gives another estimate, of the
defect-fixing capability of the team.
Total days needed for defect fixes = (Outstanding defects yet to be fixed + Defects
that can be found in future test cycles) /
Defect-fixing capability
Hence, metrics help in estimating the total days needed for fixing defects.
Once the time needed for testing and the time for defect fixing are known, the release
date can be estimated.
Days needed for release = Max (days needed for testing, days needed for
defect fixes)
The defect fixes may arrive after the regular test cycles are completed. These defect
fixes have to be verified by regression testing before the product can be released.
Hence, the formula for days needed for release is modified as follows:
Days needed for release = Max (Days needed for testing, (Days needed for defect
fixes + Days needed for regressing outstanding defect fixes))
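The three estimation formulas above translate directly into code. The function names and the sample figures in the test are invented for illustration:

```python
# Direct translations of the release-estimation formulas above.

def days_needed_for_testing(pending_test_cases, exec_productivity_per_day):
    # total test cases yet to be executed / test case execution productivity
    return pending_test_cases / exec_productivity_per_day

def days_needed_for_defect_fixes(outstanding, expected_future, fix_capability_per_day):
    # (outstanding defects + defects likely found in future cycles) / fixing capability
    return (outstanding + expected_future) / fix_capability_per_day

def days_needed_for_release(testing_days, fix_days, regression_days):
    # Max(days for testing, days for defect fixes + days for regressing the fixes)
    return max(testing_days, fix_days + regression_days)
```

With 200 pending test cases at 20 executions per day, 30 outstanding plus 20 expected defects at 5 fixes per day, and 3 days of regression, the release needs max(10, 10 + 3) = 13 days.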
The idea of discussing the formulas here is to explain that metrics are important and
help in arriving at the release date for the product. Metrics are not only used for
reactive activities. Metrics and their analysis help in preventing defects
proactively, thereby saving cost and effort. Metrics are also used in resource management
to identify the right size of product development teams.
To summarize, metrics in testing help in identifying
when to make the release;
what to release; and
whether the product is being released with known quality.
Types of Metrics
Metrics can be classified into different types based on what they measure and what
area they focus on. Metrics can be classified as product metrics and process metrics.
Product metrics can be further classified as
Project metrics: a set of metrics that indicates how the project is planned and
executed.
Progress metrics: a set of metrics that tracks how the activities of the project are
progressing.
Productivity metrics: a set of metrics that takes into account various
productivity numbers that can be collected and used for
planning and tracking testing activities.
Project Metrics
A typical project starts with requirements gathering and ends with product release. All
the phases that fall in between these points need to be planned and tracked. The
project scope gets translated into size estimates, which specify the quantum of work to
be done. This size estimate gets translated into an effort estimate for each of the phases and
activities by using the available productivity data. This initial effort is called
baselined effort.
As the project progresses, if the scope of the project changes, then the effort
estimates are re-evaluated, and this re-evaluated effort estimate is called revised
effort.
Effort variance (planned vs actual)
If there is a substantial difference between the baselined and revised effort, it points to
incorrect initial estimation. Calculating the effort variance for each of the phases provides
a quantitative measure of the relative difference between the revised and actual
efforts.
Phase:              Req    Design   Coding   Testing   Doc   Defect fixing
Effort variance %:  7.1    8.7      5        0         40    15

Table: Sample variance percentage by phase
Variance % = ((Actual effort - Revised estimate) / Revised estimate) × 100
A variance of more than 5% in any of the SDLC phases indicates scope for
improvement in the estimation. The variance can be negative also. A negative
variance is an indication of an overestimate. These variance numbers, along with
analysis, can help in better estimation for the next release or the next revised
estimation cycle.
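The variance formula above, expressed as a function (the name is an illustrative choice):

```python
# Effort variance as defined above: the relative difference between
# the actual effort and the revised estimate, as a percentage.

def effort_variance_pct(actual_effort, revised_estimate):
    return (actual_effort - revised_estimate) / revised_estimate * 100.0
```

A positive result above 5% suggests underestimation; a negative result, as noted above, indicates an overestimate.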
Fig. 27: Types of metrics
(The figure classifies metrics into process metrics and product metrics, with product
metrics split into project, progress, and productivity metrics:
Project metrics: effort variance; schedule variance; effort distribution.
Progress metrics, testing defect metrics: defect find rate; defect fix rate; outstanding
defects rate; priority outstanding rate; defects trend; defect classification trend;
weighted defects trend; defect cause distribution; closed defects distribution; test
phase effectiveness.
Progress metrics, development defect metrics: component-wise defect distribution;
defect density and defect removal rate; age analysis of outstanding defects;
introduced and reopened defects rate.
Productivity metrics: defects per 100 hrs of testing; test cases executed per 100 hrs of
testing; test cases developed per 100 hrs; defects per 100 test cases; defects per 100
failed test cases.)
Schedule variance (planned vs actual)
Schedule variance, like effort variance, is the deviation of the actual schedule from
the estimated schedule. There is one difference, though. Depending on the SDLC
model used by the project, several phases could be active at the same time. Further, the
different phases in the SDLC are interrelated and could share the same set of individuals.
Because of all these complexities, schedule variance is calculated only at the
overall project level, at specific milestones, not with respect to each of the SDLC
phases.
Effort and schedule variance have to be analyzed in totality, not in isolation. This is
because while effort is a major driver of the cost, schedule determines how best a
product can exploit market opportunities. Variance can be classified into negative
variance, zero variance, acceptable variance, and unacceptable variance. Generally,
0-5% is considered acceptable variance.
Effort distribution across phases
Variance calculation helps in finding out whether commitments are met on time and
whether the estimation method works well. The distribution percentages across the
different phases can be estimated at the time of planning, and these can be compared
with the actuals at the time of release for getting a comfort feeling on the release and
the estimation methods.
Mature organizations spend at least 10-15% of the total effort in requirements and
approximately the same effort in the design phase. The effort percentage for testing
depends on the type of release and the amount of change to the existing code base and
functionality. Typically, organizations spend about 20-50% of their total effort in
testing.
Progress Metrics
The number of defects found in the product is one of the main indicators of
quality. Hence, we will look at progress metrics that reflect the defects of a product.
Defects get detected by the testing team and get fixed by the development team.
Based on this, defect metrics are further classified into test defect metrics and
development defect metrics.
How many defects have already been found and how many more defects may get
unearthed are two parameters that determine product quality and its assessment; for
this, the progress of testing has to be understood. If only 50% of testing is complete and 100
defects have been found, then, assuming that the defects are uniformly distributed over the
product, another 80-100 defects can be estimated as residual defects.
1. Test defect metrics
The next set of metrics helps us understand how the defects that are found can be
used to improve testing and product quality. Not all defects are equal in impact or
importance. Some organizations classify defects by assigning a defect priority. The
priority of a defect provides a management perspective on the order of defect fixes.
Some organizations use defect severity levels; these provide the test team a perspective
on the impact of a defect on product functionality. Since different organizations use
different methods of defining priorities and severities, a common set of defect
definitions and classifications is provided in the table given below.
Defect find rate
When tracking and plotting the total number of defects found in the product at
regular intervals, from beginning to end of a product development cycle, a pattern
for defect arrival may show. For a product to be fit for release, not only should defect
arrival follow such a pattern, but the number of defects found in the final stretch
should also be kept to a bare minimum. A bell curve, along with a minimum number
of defects found in the last few days, indicates that the release quality of the product
is likely to be good.
Defect classification | What it means
Extreme    | Product crashes or is unusable. Needs to be fixed immediately.
Critical   | Basic functionality of the product not working. Needs to be fixed before the next test cycle starts.
Important  | Extended functionality of the product not working. Does not affect the progress of testing.
Minor      | Product behaves differently. No impact on the test team or customers. Fix it when time permits.
Cosmetic   | Minor irritant. Need not be fixed for this release.

Table: A common defect definition and classification
Defect fix rate
If the goal of testing is to find defects as early as possible, it is natural to expect that
the goal of development should be to fix defects as soon as they arrive. If the defect-fixing
curve is in line with defect arrival, a "bell curve" will be the result again. There
is a reason why the defect-fixing rate should be the same as the defect arrival rate. As discussed
under regression testing, when defects are fixed in the product, it opens the door for
the introduction of new defects. Hence, it is a good idea to fix defects early and
test those defect fixes thoroughly to find all the introduced defects. If this principle is
not followed, defects introduced by the defect fixes may come up for testing just
before the release and end up surfacing as new defects.
Outstanding defects rate
The number of defects outstanding in the product is calculated by subtracting the
total defects fixed from the total defects found in the product. If the defect-fixing
pattern is constant, like a straight line, the outstanding defects will result in a bell
curve again. If the defect-fixing pattern matches the arrival rate, then the outstanding
defects curve will look like a straight line. However, it is not possible to fix all defects
when the arrival rate is at the top end of the bell curve. Hence, the outstanding defect
rate results in a bell curve in many projects. When testing is in progress, the
outstanding defects should be kept very close to zero so that the development team's
bandwidth is available to analyze and fix the issues soon after they arrive.
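The outstanding-defects calculation can be sketched interval by interval as cumulative defects found minus cumulative defects fixed; the weekly sample series in the test is invented:

```python
# Outstanding defects tracked at regular intervals: cumulative found
# minus cumulative fixed, as defined above.

def outstanding_defects_trend(found_per_week, fixed_per_week):
    outstanding, trend = 0, []
    for found, fixed in zip(found_per_week, fixed_per_week):
        outstanding += found - fixed
        trend.append(outstanding)
    return trend
```

Plotting such a trend is what produces the bell-curve or straight-line shapes discussed above.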
Priority outstanding rate
Sometimes the defects that come out of testing may be very critical and may
take enormous effort to fix and to test. Hence, it is important to look at how many
serious issues are being uncovered in the product. The modification to the outstanding
defects rate curve obtained by plotting only the high-priority defects is called the priority
outstanding defects rate. The priority outstanding defects correspond to the extreme and
critical classifications of defects. Some organizations include important defects also in
priority outstanding defects.
The effectiveness of analysis increases when several perspectives of find rate, fix rate,
outstanding, and priority outstanding defects are combined. There are different defect
trends, such as the defects trend, defect classification trend, and weighted defects trend.
Development defect metrics
In this section we will look at how metrics can be used to improve development
activities. The defect metrics that directly help in improving development activities
are discussed in this section and are termed development defect metrics. While test
defect metrics focus on the number of defects, development defect metrics try to map
those defects to different components of the product and to some of the parameters of
development, such as lines of code.
Component-wise defect distribution
While it is important to count the number of defects in the product, for development
it is important to map them to different components of the product so that they can be
assigned to the appropriate developers to fix those defects. The project manager in
charge of development maintains a module ownership list where all product modules
and owners are listed. Based on the number of defects existing in each of the modules,
the project manager assigns resources accordingly.
Defect density and defect removal rate
A good quality product can have a long lifetime before becoming obsolete. The
lifetime of the product depends on its quality, over the different releases it goes
through. One of the metrics that correlates source code and defects is defect density.
This metric maps the defects in the product to the volume of code that is produced
for the product.
There are several standard formulae for calculating defect density. Of these, defects
per KLOC is the most practical and easy metric to calculate and plot. KLOC stands
for kilo lines of code; every 1000 lines of executable statements in the product is
counted as one KLOC.
The metric compares the defects per KLOC of the current release with previous
releases of the product. There are several variants of this metric to make it relevant to
releases, and one of them uses AMD (added, modified, deleted code) to find
out how a particular release affects product quality.
Defects per KLOC = Total defects found in the product / Total executable
AMD lines of code in KLOC
Defects per KLOC can be used as a release criterion as well as a product quality
indicator with respect to code and defects. Defects found by the testing team have to
be fixed by the development team. The ultimate quality of the product depends on both
development and testing activities, and there is a need for a metric to analyze the
development and testing phases together and map them to the release. The defect
removal rate is used for this purpose.
The formula for calculating the defect removal rate is
(Defects found by verification activities + Defects found in unit testing) /
(Defects found by test teams) × 100
The above formula helps in finding the efficiency of verification activities and unit
testing, which are normally responsibilities of the development team, and compares
them with the defects found by the testing teams. These metrics are tracked over various
releases to study release-on-release trends in the verification/quality assurance
activities.
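Both development defect metrics above translate directly into functions; the function names and the sample figures in the test are invented:

```python
# Direct translations of the defect density and defect removal rate
# formulas above.

def defects_per_kloc(total_defects, executable_amd_loc):
    """executable_amd_loc: total executable added/modified/deleted lines."""
    return total_defects / (executable_amd_loc / 1000.0)

def defect_removal_rate_pct(verification_defects, unit_test_defects,
                            test_team_defects):
    return (verification_defects + unit_test_defects) / test_team_defects * 100.0
```

For example, 50 defects against 25,000 executable AMD lines gives a density of 2 defects per KLOC.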
Age analysis of outstanding defects
Age here refers to how long defects have been waiting to be fixed. Some defects that
are difficult to fix or require significant effort may get postponed for a longer
duration. Hence, the age of a defect in a way represents the complexity of the defect
fix needed. Given the complexity and time involved in fixing those defects, they need
to be tracked closely, or else they may get postponed close to the release, which may
even delay the release. A method to track such defects is called age analysis of
outstanding defects.
Productivity Metrics
Productivity metrics combine several measurements and parameters with the effort spent
on the product. They help in finding out the capability of the team, as well as serving
other purposes, such as
a. estimating for the new release;
b. finding out how well the team is progressing, and understanding the
reasons for variation in results;
c. estimating the number of defects that can be found;
d. estimating the release date and quality;
e. estimating the cost involved in the release.
Defects per 100 hours of testing
Program testing can only prove the presence of defects, never their absence. Hence, it
is reasonable to conclude that there is no end to testing and that more testing may reveal
more new defects. But there may be a point of diminishing returns when further testing
may not reveal any defects. If incoming defects in the product are reducing, it may
mean various things:
1. Testing is not effective.
2. The quality of the product is improving.
3. The effort spent in testing is falling.
Defects per 100 hours of testing = (Total defects found in the product
for a period / Total hours spent to get those defects) × 100
Test cases executed per 100 hours of testing
The number of test cases executed by the test team for a particular duration depends
on team productivity and the quality of the product. The team productivity has to be
calculated accurately so that it can be tracked for the current release and be used to
estimate the next release of the product.
Test cases executed per 100 hours of testing = (Total test cases executed for a
period / Total hours spent in test execution) × 100
Test cases developed per 100 hours of testing
Both manual and automated test cases require estimating and tracking of productivity
numbers. In a product scenario, not all test cases are written afresh for every release.
New test cases are added to address new functionality and to test features that
were not tested earlier. Existing test cases are modified to reflect changes in the
product. Some test cases are deleted if they are no longer useful or if the corresponding
features are removed from the product. Hence, the formula for test cases
developed uses the count corresponding to added, modified, and deleted test cases.
Test cases developed per 100 hours of testing = (Total test cases developed for a
period / Total hours spent in test case development) × 100
Defects per 100 Test Cases
Since the goal of testing is to find as many defects as possible, it is appropriate to
measure the defect yield of testing, that is, how many defects get uncovered during testing.
This is a function of the effectiveness of the tests, that is, of how capable they are of
uncovering defects. The ability of a test case to uncover defects depends on how well
the test cases are designed and developed. But in a typical product scenario, not all test
cases are executed for every test cycle; hence, it is better to select test cases that
produce defects. A measure that quantifies these two parameters is defects per 100 test
cases. Yet another parameter that influences this metric is the quality of the product. If
product quality is poor, it produces more defects per 100 test cases compared to a
good quality product. The formula used for calculating this metric is
Defects per 100 test cases = (Total defects found for a period / Total test cases
executed for the same period) × 100
Defects per 100 Failed Test Cases
Defects per 100 failed test cases is a good measure to find out how granular the test
cases are. It indicates
how many test cases need to be executed when a defect is fixed;
what defects need to be fixed so that an acceptable number of test cases
reach the pass state; and
how the fail rate of test cases and defects affect each other for release
readiness analysis.
Defects per 100 failed test cases = (Total defects found for a
period / Total test cases failed due to those defects) × 100
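All of the "per 100" productivity metrics above share the same normalization, which a single helper makes explicit. The helper and wrapper names are illustrative choices, and the sample figures in the test are invented:

```python
# The common normalization behind the "per 100" productivity metrics.

def per_100(numerator, denominator):
    return numerator / denominator * 100.0

def defects_per_100_hours(defects_found, hours_spent):
    return per_100(defects_found, hours_spent)

def defects_per_100_failed_test_cases(defects_found, failed_test_cases):
    return per_100(defects_found, failed_test_cases)
```

Test cases executed per 100 hours, test cases developed per 100 hours, and defects per 100 test cases follow the same pattern with their own numerators and denominators.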
Test phase effectiveness
As per the principles of testing, testing is not the job of testers alone. Developers
perform unit testing, and there could be multiple testing teams performing the component,
integration, and system testing phases. The idea of testing is to find defects early in the
cycle and in the early phases of testing. As testing is performed by various teams with
the objective of finding defects early at various phases, a metric is needed to compare
the defects filed by each of the phases of testing. The defects found in various phases,
such as unit testing (UT), component testing (CT), integration testing (IT), and system
testing (ST), are plotted and analyzed.
"ig! %< Test phase effectiveness
n the chart given, the total defects fo#nd by each test phase are plotted! The
following few observations can be made!
/! . good proportion of defects were fo#nd in the early phases of testing >+T
and CT?!
%! Prod#ct @#ality improved from phase to phase
Closed defect distribution
The objective of testing is not only to find defects. The testing team also has
the objective of ensuring that all defects found through testing are fixed, so that the
customer gets the benefit of testing and the product quality improves. For this, the testing
team has to track the defects and analyze how they are closed.
Release Metrics
The decision to release a product needs to consider several perspectives and
metrics. All the metrics discussed in the previous sections need to be
considered in totality for making the release decision. The following table gives some
of the perspectives and some sample guidelines needed for release analysis.
(Defects found by phase in Fig. 28: UT 39%, CT 32%, IT 17%, ST 12%.)
Metric | Perspectives to be considered | Guidelines
Test cases executed | Execution %; Pass % | All (100%) test cases to be executed; test cases passed should be a minimum of 98%
Effort distribution | Adequate effort has been spent on all phases | 15-20% effort spent each on the requirements, design, and testing phases
Defect find rate | Defect trend | Defect arrival trend showing a bell curve; incoming defects close to zero in the last week
Defect fix rate | Defect fix trend | Defect-fixing trend matching the arrival trend

Table: Guidelines for release analysis
8.8 Summary
Automation makes the software test the software and enables human effort to
be spent on creative testing. Automation bridges the gap in skill requirements between
testing and development; at times it demands more skills from test teams. Deciding what to
automate takes into account the technical and management aspects, as well as the long-term
vision. Product and automation are like the two rails of a railway track; they go
parallel in the same direction with similar expectations.
Test metrics are needed to know test case execution productivity and to estimate the test
completion date. To summarize, metrics in testing help in identifying when to make
the release, what to release, and whether the product is being released with known
quality.
8.9 Check your Progress
1. What is test automation? Why is it important?
2. Explain the scope of automation.
3. How will you make the design and architecture for automation?
4. Explain the generic requirements for a test tool/framework.
5. How will you select a test tool?
6. What are the challenges involved in test automation?
7. What are test metrics and measurements?
8. Why are metrics needed in testing?
9. Explain the project metrics.
10. Explain the progress and productivity metrics.