
2/17/2013

CSC 532
Assignment
Mathematical Proof of Correctness of Program Code, Building
Models from Program Code
Question 1: Deduce a mathematical proof of correctness in
Software Quality Assurance

Over the past two decades, a small, but vocal, segment of the software engineering
community has argued that a more formal approach to software quality assurance
is required. It can be argued that a computer program is a mathematical object. A
rigorous syntax and semantics can be defined for every programming language,
and work is underway to develop a similarly rigorous approach to the specification
of software requirements. If the requirements model (specification) and the
programming language can be represented in a rigorous manner, it should be
possible to apply a mathematical proof of correctness to demonstrate that a
program conforms exactly to its specifications.
The application of formal methods in the specification, design and validation
processes of software development can be used to realize these goals.

FORMAL METHODS
Formal methods are a particular kind of mathematically based technique for the
specification, development and verification of software and hardware systems. The
use of formal methods for software and hardware design is motivated by the
expectation that, as in other engineering disciplines, performing appropriate
mathematical analysis can contribute to the reliability and robustness of a design.
They include:
1. Theoretical computer science
2. Logic calculi
3. Formal languages
4. Automata theory
5. Program semantics
Taxonomy
Formal methods can be used at a number of levels:
Level 0: Formal specification may be undertaken, and then a program may be
developed from it informally. This may be the most cost-effective option in
many cases.
Level 1: Formal development and formal verification may be used to produce a
program in a more formal manner. For example, proofs of properties or refinement
from the specification to a program may be undertaken. This may be most
appropriate in high-integrity systems involving safety and security.
Level 2: Theorem provers may be used to undertake fully formal machine-checked
proofs. This can be very expensive and is only practically worthwhile if the cost of
mistakes is extremely high (e.g. in critical parts of microprocessor design).
Classification of Program Semantics
As with programming language semantics, styles of formal methods may be roughly
classified as follows:
Denotational Semantics, in which the meaning of a system is expressed in the
mathematical theory of domains. Proponents of such methods rely on the well-
understood nature of domains to give meaning to the system; critics point out that
not every system may be intuitively or naturally viewed as a function.
Operational Semantics, in which the meaning of the system is expressed as a
sequence of actions of a (presumably) simpler computational model. Proponents of
such methods point to the simplicity of their models as a means to expressive
clarity; critics counter that the problem of semantics has merely been delayed (who
defines the semantics of the simpler model?).
Axiomatic Semantics, in which the meaning of the system is expressed in terms of
preconditions and postconditions which are true before and after the system
performs a task, respectively. Proponents note the connection to classical logic;
critics note that the semantics never really describe what a system does (merely
what is true before and afterwards).
Lightweight Formal Methods
For many projects, the cost of producing a fully formalized specification or design
will be too high. As an alternative, various lightweight formal methods, which
emphasize partial specification and focused application, have been proposed:
Alloy, Z notation, Use Cases, etc.
Applications of Formal Methods in The Development Process
Specification: Formal methods may be used to give a description of the system to
be developed, at whatever level(s) of detail desired. This can be used to guide
further development activities (see below). Additionally, it can be used to verify that
the requirements for the system being developed have been completely and
accurately specified, e.g. with ALGOL 58, BNF (Backus-Naur Form) was proposed
to formalize the language.
Development: the formal specification is used as a guide while the concrete system
is developed during the design process, e.g.:
If the formal specification is given in an operational semantics, the observed
behavior of the concrete system can be compared with the behavior of the
specification.
If it is given in an axiomatic semantics, the preconditions and postconditions of the
specification may become assertions in the executable code.
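The second case can be sketched in code. The following is a hypothetical example (the AccountSpec class and its debit operation are invented for illustration, not taken from the text): an axiomatic specification with precondition 0 < amount <= balance and postcondition balance' = balance - amount becomes a pair of runtime checks.

```java
// Hypothetical sketch: turning an axiomatic specification into executable checks.
// Precondition:  0 < amount <= balance
// Postcondition: new balance = old balance - amount
public class AccountSpec {
    private int balance;

    public AccountSpec(int openingBalance) { this.balance = openingBalance; }

    public int getBalance() { return balance; }

    public void debit(int amount) {
        // Precondition check derived from the specification
        if (amount <= 0 || amount > balance)
            throw new IllegalArgumentException("precondition violated");
        int old = balance;            // remember the pre-state
        balance = balance - amount;   // the operation itself
        // Postcondition check derived from the specification
        if (balance != old - amount)
            throw new IllegalStateException("postcondition violated");
    }

    public static void main(String[] args) {
        AccountSpec a = new AccountSpec(100);
        a.debit(30);
        System.out.println(a.getBalance()); // prints 70
    }
}
```

If a caller ever violates the precondition, or the implementation ever violates the postcondition, the failure is reported at the exact operation where the specification was broken.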
Verification: once a formal specification has been developed, the specification may
be used as the basis for proving properties of the specification (and hopefully by
inference the developed system).

Types of Proof
1. Human-directed Proof: sometimes, the motivation for proving the correctness
   of a system is not the obvious need for reassurance of the correctness of the
   system, but the desire to understand the system better. Consequently, some
   proofs of correctness are produced in the style of a mathematical proof:
   handwritten (or typeset) using natural language, with a level of informality
   common to such proofs. A good proof is one which is readily understandable by
   other human readers. Critics of such approaches point out that the ambiguity
   inherent in natural language allows errors to go undetected in such proofs;
   often, subtle errors can be present in the low-level details typically overlooked
   by such proofs. Additionally, the work involved in producing such a good
   proof requires a high level of mathematical sophistication and expertise.
2. Automated Proof: in contrast, there is increasing interest in producing proofs of
correctness of such systems by automated means. Automated techniques fall
into two general categories.
a. Automated Theorem Proving: in which a system attempts to produce a
formal proof from scratch, given a description of the system, a set of
logical axioms, and a set of inference rules.
b. Model Checking: in which the system verifies certain properties by
means of an exhaustive search of all possible states that a system
could enter during its execution.
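The exhaustive-search idea behind model checking can be shown on a toy model. The following sketch is invented for illustration (the two-process lock model and its encoding are assumptions, not a real model checker): it enumerates every reachable state of a finite state machine by breadth-first search and checks the safety property "both processes are never in their critical section at the same time".

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical illustration of model checking: exhaustively explore every
// reachable state of a tiny two-process lock model and check a safety property.
public class TinyModelChecker {

    // A state is encoded "p0,p1,locked"; a phase is 0=idle, 1=waiting, 2=critical.
    static List<String> successors(String state) {
        String[] f = state.split(",");
        int p0 = Integer.parseInt(f[0]);
        int p1 = Integer.parseInt(f[1]);
        boolean locked = Boolean.parseBoolean(f[2]);
        List<String> next = new ArrayList<>();
        // Process 0 takes one step
        if (p0 == 0) next.add("1," + p1 + "," + locked);
        if (p0 == 1 && !locked) next.add("2," + p1 + ",true");
        if (p0 == 2) next.add("0," + p1 + ",false");
        // Process 1 takes one step
        if (p1 == 0) next.add(p0 + ",1," + locked);
        if (p1 == 1 && !locked) next.add(p0 + ",2,true");
        if (p1 == 2) next.add(p0 + ",0,false");
        return next;
    }

    // Breadth-first search over the whole (finite) state space.
    static boolean mutualExclusionHolds() {
        Set<String> seen = new HashSet<>();
        Deque<String> frontier = new ArrayDeque<>();
        seen.add("0,0,false");
        frontier.add("0,0,false");
        while (!frontier.isEmpty()) {
            String s = frontier.poll();
            if (s.startsWith("2,2")) return false; // counterexample state found
            for (String t : successors(s))
                if (seen.add(t)) frontier.add(t);
        }
        return true; // the property holds in every reachable state
    }

    public static void main(String[] args) {
        System.out.println("mutual exclusion holds: " + mutualExclusionHolds());
    }
}
```

Real model checkers work on the same principle but use far more compact state representations, since the number of states explodes with system size.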

An Example of a Formal Language: Z Notation
Z specifications are structured as a set of schemas. A schema is a box-like
structure that introduces variables and specifies the relationship between these
variables; it is essentially the formal specification analog of the programming
language subroutine or procedure. In the same way that procedures and subroutines
are used to structure a system, schemas are used to structure a formal specification.
Z notation is based on typed set theory and first-order logic. Z provides a construct,
called a schema, to describe a specification's state space and operations. A
schema groups variable declarations with a list of predicates that constrain the
possible values of the variables. In Z, a schema X is written in the form:
X
declarations
predicates

The declaration gives the type of the function or constant, while the predicate gives
its value. Only an abbreviated set of Z symbols is presented in this table.

Example


BlockHandler
used, free : ℙ BLOCKS
BlockQueue : seq ℙ BLOCKS

used ∩ free = ∅ ∧
used ∪ free = AllBlocks ∧
(∀ i : dom BlockQueue • BlockQueue i ⊆ used) ∧
(∀ i, j : dom BlockQueue • i ≠ j ⇒ BlockQueue i ∩ BlockQueue j = ∅)

The schema consists of two parts. The part above the central line represents the
variables of the state, while the part below the central line describes the data
invariant.
Whenever the schema representing the data invariant and state is used in another
schema, it is preceded by the Δ symbol. Therefore, if the preceding schema is
used in a schema that, for example, describes an operation, then it would be written
as ΔBlockHandler. As the last sentence implies, schemas can be used to describe
operations.
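To connect the schema to code, the four conjuncts of the BlockHandler data invariant can be checked at runtime with ordinary sets. This is an illustrative sketch only (the use of Integer block identifiers and the invariantHolds helper are invented; they are not part of the Z example):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical runtime check of the BlockHandler data invariant using Java sets.
public class BlockHandlerInvariant {

    static boolean invariantHolds(Set<Integer> used, Set<Integer> free,
                                  List<Set<Integer>> blockQueue,
                                  Set<Integer> allBlocks) {
        // used and free share no blocks (used ∩ free = ∅)
        Set<Integer> inter = new HashSet<>(used);
        inter.retainAll(free);
        if (!inter.isEmpty()) return false;
        // together they account for every block (used ∪ free = AllBlocks)
        Set<Integer> union = new HashSet<>(used);
        union.addAll(free);
        if (!union.equals(allBlocks)) return false;
        // every queue entry is a subset of used
        for (Set<Integer> blocks : blockQueue)
            if (!used.containsAll(blocks)) return false;
        // distinct queue entries are pairwise disjoint
        for (int i = 0; i < blockQueue.size(); i++)
            for (int j = i + 1; j < blockQueue.size(); j++) {
                Set<Integer> d = new HashSet<>(blockQueue.get(i));
                d.retainAll(blockQueue.get(j));
                if (!d.isEmpty()) return false;
            }
        return true;
    }

    public static void main(String[] args) {
        Set<Integer> all  = new HashSet<>(Arrays.asList(1, 2, 3, 4));
        Set<Integer> used = new HashSet<>(Arrays.asList(1, 2));
        Set<Integer> free = new HashSet<>(Arrays.asList(3, 4));
        List<Set<Integer>> q = Arrays.asList(
            new HashSet<>(Arrays.asList(1)), new HashSet<>(Arrays.asList(2)));
        System.out.println(invariantHolds(used, free, q, all)); // prints true
    }
}
```

Checking the invariant before and after each operation is one way such a specification can guide testing of a concrete implementation.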






















Question 2: How can models be made from program codes?
A model of a system can be made from program code by a process such as
reverse engineering. Many software development environments used for
developing code have this reverse engineering feature. For example, a UML
tool or UML modeling tool (which is a software application that supports some or all
of the notation and semantics associated with the Unified Modeling Language)
comes with this feature.
Reverse Engineering
Reverse engineering in this context means the UML tool reads the program source
code as input and derives model data and corresponding graphical UML diagrams
from it.
Some of the challenges of reverse engineering are:
The source code often has much more detailed information than one would
want to see in design diagrams.
Diagram data is normally not contained with the program source code, such
that the UML tool, at least in the initial step, has to create some random
layout of the graphical symbols of the UML notation or use some automatic
layout algorithm to place the symbols in a way that the user can understand
the diagram. For example, the symbols should be placed at such locations on
the drawing pane that they do not overlap.
There are features of some programming languages, like the class and function
templates of the C++ programming language, which are notoriously hard to
convert automatically to UML diagrams in their full complexity.
Example
Consider a class definition of an object-oriented program below:
public class Account{

//Operators
public void credit (int anAmount){..}
public void debit (int anAmount){..}
public int getBalance(){..}
public void display (){..}

//Attributes
private String theNumber;
private int theBalance;

}//class: Account
The corresponding reversed engineered UML representation will be:
ACCOUNT
Attributes
- String theNumber
- int theBalance
Operators
+ void credit (int anAmount)
+ void debit (int anAmount)
+ int getBalance()
+ void display()
Apart from UML tools, other programming tools also offer this feature, e.g.
- Visual Studio for C#
- Netbeans for Java
Other kinds of models for other program constructs can be obtained using the
reverse engineering method. For example:
- Reverse engineering a database to get its entity-relationship (ER) diagram.
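As a minimal illustration of how model data can be derived from compiled code, the following sketch uses Java's reflection API to print a UML-like class summary. It is only a toy (the MiniReverseEngineer class is invented, the nested Account class is a trimmed copy of the example above, and the output format merely imitates UML; no real UML tool works this simply):

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

// Toy reverse engineering: read a compiled class via reflection and print a
// UML-like summary of its attributes and operators.
public class MiniReverseEngineer {

    static class Account {
        private String theNumber;
        private int theBalance;
        public void credit(int anAmount) { theBalance += anAmount; }
        public int getBalance() { return theBalance; }
    }

    // Map Java access modifiers to UML visibility markers.
    static String umlVisibility(int mods) {
        return Modifier.isPublic(mods) ? "+"
             : Modifier.isPrivate(mods) ? "-" : "~";
    }

    public static void main(String[] args) {
        Class<?> c = Account.class;
        System.out.println(c.getSimpleName().toUpperCase());
        System.out.println("Attributes");
        for (Field f : c.getDeclaredFields())
            System.out.println("  " + umlVisibility(f.getModifiers()) + " "
                + f.getType().getSimpleName() + " " + f.getName());
        System.out.println("Operators");
        for (Method m : c.getDeclaredMethods())
            System.out.println("  " + umlVisibility(m.getModifiers()) + " "
                + m.getReturnType().getSimpleName() + " " + m.getName() + "()");
    }
}
```

A real UML tool does essentially this at much larger scale, and then solves the harder problem the text mentions: laying the recovered symbols out as readable diagrams.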
SOFTWARE VERIFICATION
Every software development organisation strongly desires to create software that
will perform the expected functions and deliver the desired service with minimal or
no faults during usage. In order to achieve this, software engineering practices need
to be employed throughout the development of the software; these practices
include software verification and validation.
Software validation: This is an aspect of software engineering that is carried out
after the development of the software to ensure that the software produced is
able to satisfy users' expectations.
Software verification: This is an aspect of software engineering that ensures that
the software product being developed complies with the requirements and
specifications in the development document. This could be carried out
at the development phase or the post-development phase.
At the development phase, software verification ensures that the product of a
phase fulfils the requirements established during the previous phase.
At the post-development phase, software verification ensures that the software
produced complies with the requirements and specification of the whole software
in general by regularly checking the software for bugs and other anomalies.
As the complexity of software increases, the demand to produce software that is
bug-free or nearly so, meets requirements and specifications, and satisfies users'
expectations increases, thereby putting more demand on software verification and
validation. Although this module is concerned with software verification,
regard will be given to software validation because the two go hand in hand,
even though they are two different software engineering practices.
Software verification enables us to build a product in the right way, facilitating
the correctness, consistency and other necessary properties of the software,
asking the question "Are we building the product right?", while
software validation enables us to build the right software, i.e. asking the question
"Are we building the right product?"
APPROACHES TO SOFTWARE VERIFICATION
There are two fundamental approaches to software verification; these are:
Dynamic verification (also called test or experimentation)
Static verification (or analysis)
Although the above are the fundamental approaches to software verification, the
activities involved are:
Reviews
Walkthroughs
Inspections
Testing
DYNAMIC VERIFICATION
This kind of verification involves periodically checking the behaviour of a software
product during each phase of development to ensure that bugs and other
anomalies are identified.
Some of the attributes of this kind of verification are:
1. Execution of the component, module or the whole software system
2. Selecting some test cases consisting of test data
3. Confirming that the outputs resulting from the input test cases are as expected.
During the verification, bugs or faults, failures and other malfunctions can be
identified and tackled appropriately. These terms mean different things, and they
are defined below:
Fault: a wrong or missing function in the code.
Failure: the manifestation of a fault during execution.
Malfunction: the system does not meet its specified functionality according to its specification.
This kind of verification must ensure that all three of the above are brought to a
minimum (absent if possible) in order to make the product meet its specification
and for the input to produce the desired output.
CATEGORIES OF DYNAMIC VERIFICATION
As said earlier, this verification can also be called testing, so it involves the
following tests:
Unit Test: As the name implies, this kind of test only tests some small part
of the program, such as a function, to ensure that it provides the intended
functionality.
Module Test: A module is a group of classes, methods and other program
fragments. This testing checks if a module correctly carries out the function
for which it is intended.
Integration Test: This involves overlapping the functionality of multiple
modules and how they can interoperate with each other.
System Test: This tests the entire software system, ensuring that it works as
intended.
Functional Test: This test involves identifying and testing all functions of the
system as defined in the basic requirements document. This kind of test is
black-box in the sense that the tester is not expected to be knowledgeable
about the code and implementation of the system, but rather uses some
test cases to test as many features of the system as possible.
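A unit test of the kind described above can be sketched in plain Java with no test framework. The example is hypothetical (the nested Account class echoes the earlier Account example, and the testCredit helper is invented): one unit, the credit operation, is exercised with test data and its output is confirmed against the expected result.

```java
// Hypothetical plain-Java unit test: exercise one unit (credit) with test data
// and confirm its output against the expected value.
public class AccountUnitTest {

    static class Account {
        private int theBalance;
        public void credit(int anAmount) { theBalance += anAmount; }
        public void debit(int anAmount)  { theBalance -= anAmount; }
        public int  getBalance()         { return theBalance; }
    }

    // One test case: input data (50, 25) plus the expected output (75).
    static void testCredit() {
        Account a = new Account();
        a.credit(50);
        a.credit(25);
        if (a.getBalance() != 75)
            throw new AssertionError("credit: expected 75, got " + a.getBalance());
    }

    public static void main(String[] args) {
        testCredit();
        System.out.println("unit test passed");
    }
}
```

In practice a framework such as JUnit plays the role of the main method here, collecting and running many such test cases automatically.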
ADVANTAGES OF DYNAMIC VERIFICATION
There are several advantages of dynamic verification; a few of them are
mentioned below:
1. False-positive results are rarely generated, because the software itself is
running while the verification activities are being carried out.
2. A smaller amount of proficient, dedicated knowledge is needed.
3. Well-established tools and methodologies exist.

STATIC VERIFICATION
This is a verification approach that checks that the developed software meets the
requirements specified in the development document by doing a physical
inspection and review. This verification is concerned with the analysis of the static
system representation of a software system to discover problems.
Some of the attributes of this kind of verification are:
It involves examining the source representation with the aim of discovering
anomalies and defects.
It does not require execution of a system, so it may be used before implementation.
It may be applied to any representation of the system (requirements, design, test
data, etc.).
Static verification can be carried out before the executable version of the
program exists, which may help us reveal defects like memory leaks,
non-compliance with coding standards and others.

TECHNIQUES OF STATIC VERIFICATION

FORMAL VERIFICATION: This involves the use of rigorous mathematical argument
and constructs to show that software conforms to its specification. Simply put, this
involves the use of formal methods to prove conformance of software with its
specifications. This technique can be used not only at the various stages of
verification but also during validation, and it can be used to check:
Inconsistency in a software system. As a matter of fact, this technique is effective
in discovering specification errors and omissions.
Inconsistency in code, which enables us to discover programming and design errors.
The second inconsistency check is a bit more difficult because of the wide gap
between a formal system specification and program code. As a result, this part is
based on transformational development. In this development process, the formal
specification is transformed through a series of refinements into program code.
This is done by software tools; an example of such a transformation method is the
B method.
Formal verification ensures that the developed program meets its specification
and will not compromise the dependability of the system.
LIMITATIONS
1. Even with the advantage of formal specification and proof, it doesn't guarantee
that the software works reliably in practical use, for the following reasons:
The specification may not reflect the real requirements of the system. Because this
work requires skilled personnel, it may contain errors or bugs, or might not even
meet the exact system specification, without this being visible to people.
The proof may also contain errors due to its complexity and volume.
The proof may also include, or be based on, incorrect assumptions.
2. It is very expensive to implement, especially when verifying non-functional
properties of software systems, because it requires experienced personnel and
specialised tools.
Even with its limitations, this technique has been known to easily identify
common causes of system failure, so it may be applicable to safety-critical and
security-critical components.

MODEL CHECKING: This implies the creation of a model of the system and
checking its correctness using some specialised tools. The formal model of a
system is usually built as a finite state machine. This may then be expressed in
a language suited to the checker. In using this static verification method, the
properties to check for are identified and stated in a formal notation. The model
checker then checks all the paths in the state machine to confirm that they conform
to the outlined properties. If a property doesn't hold for some path, a
counterexample is generated to prove that the property doesn't hold.
This has found application in concurrent systems, which are difficult to test due to
their sensitivity to timing, and also in embedded systems.

LIMITATIONS
1. The main issue is the creation of the model. If the model is created by hand, it
may require a great deal of time to finish, so it is better created automatically
from program code.
2. It is computationally very expensive because of its exhaustive checking of all
available paths in the model. As the size of the program increases, so do the
number of states and the number of paths.

AUTOMATED STATIC ANALYSIS: It is well known that there are some common
errors found in different programming languages; all these can be listed, and an
automated process can be designed to check programs against this list of
common errors. This led to the development of automated static
analysers capable of identifying incorrect code fragments. Such a tool works
directly on the source code, so no other notation is required. This makes it easier
to introduce this technique into the development process than other static
verification techniques.
An automated static analyser checks the source program to determine faults and
abnormalities in the input, i.e. detecting whether the input is well formed or not,
and also makes inferences about control flow.
Automated static analysers are faster and cheaper than code reviews, but they
can't identify errors not on the list. This technique is used to draw a code
reader's attention to anomalies in a program, such as uninitialized variables and
unused variables.
NOTE: Although anomalies occur, they may not necessarily mean that there is an
error in the program.
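A hypothetical fragment with such an anomaly is shown below (the AnomalyExample class is invented for illustration). The code compiles and computes the right answer, so the unused variable is an anomaly rather than an error, but it is exactly the kind of pattern an automated static analyser would flag for a reviewer's attention.

```java
// Hypothetical illustration of a static-analysis anomaly: the variable 'count'
// is declared and assigned but never used afterwards. The program still runs
// correctly, so the anomaly is not necessarily an error.
public class AnomalyExample {

    public static int sum(int[] xs) {
        int total = 0;
        int count = 0;   // anomaly: assigned but never read again
        for (int x : xs) {
            total += x;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[]{1, 2, 3})); // prints 6
    }
}
```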
There are three levels of error checking that may be implemented in static
analysers:
(1) Characteristic error checking: The tool knows about the common errors that a
programmer can make, so it looks for such patterns and highlights them to the
programmer if any are found.
(2) User-defined error checking: This is error checking defined by the user of the
static analyser. The user defines the pattern of the error, such as executing a
particular procedure A before another, like B.
(3) Assertion checking: In this case, programmers include formal assertions that
must be checked at certain points in the program. In a situation where an assertion
is violated, the offending portion is highlighted to the programmer.
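The third level can be sketched with Java's built-in assert statement. The example is hypothetical (the max function and its assertions are invented): the assertions state what must hold before and after the computation, and a static analyser's assertion checking, or the JVM when run with -ea, would report any point where they are violated.

```java
// Hypothetical illustration of assertion checking: formal assertions embedded
// in the code state the pre- and postconditions of the computation.
public class AssertionCheckingDemo {

    // Returns the largest element of a non-empty array.
    static int max(int[] xs) {
        assert xs != null && xs.length > 0 : "precondition: non-empty array";
        int best = xs[0];
        for (int x : xs) {
            if (x > best) best = x;
        }
        assert containsValue(xs, best) : "postcondition: result occurs in the array";
        return best;
    }

    static boolean containsValue(int[] xs, int v) {
        for (int x : xs) {
            if (x == v) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(max(new int[]{3, 9, 4})); // prints 9
    }
}
```

Note that Java evaluates assert statements only when assertions are enabled (the -ea JVM flag); a static analyser, by contrast, reasons about them without running the program at all.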
LIMITATION
It can generate a number of false-positive errors, but these can be reduced by
adding more information to the program in the form of assertions.
GROUP 2
Symbolic Execution and Software
Debugging
OUTLINE
What is Software Debugging
Origin and history of software debugging
Difference between Software Debugging, Testing and Verification
Reasons for Software Debugging / Program Testing
Tools for software debugging
What is Symbolic Execution
Why Symbolic Execution?
Types and explanations of symbolic execution programs
Detailed Explanation of How Symbolic Execution aids Software Debugging
Illustration of how to test/debug programs with Symbolic Execution, with Examples
Brief write-up on an example of Symbolic Execution
Tools and Limitations
References
What is Software Debugging?
As is well known among software engineers, most of the effort in debugging involves
locating the defects. Software debugging is the process by which developers attempt to remove
coding bugs or defects from a computer program. Most times, the debugging phase of software
development consumes 60-70% of the overall development time. In fact, debugging is responsible for
80% of all software project overruns. Ultimately, a great amount of difficulty and uncertainty
surround the crucial process of software debugging.
This is because at each stage of the error detection process, it is difficult to determine how long it will
take to find and fix an error, not to mention whether or not the defect will actually be fixed. In order
to remove bugs from the software, the developers must first discover that a problem exists, then
classify the error, locate where the problem actually lies in the code, and finally create a solution that
will remedy the situation without introducing other problems. Some problems are so elusive that it
may take programmers many months, or in extreme cases even years, to find them. Developers are
constantly searching for ways to improve and streamline the process of software debugging. At the
same time, they have been attempting to automate the techniques used in error detection.
Origin of Software Debugging
Over the years, the historical perspective on the origin of the term debugging has been controversial.
The terms "bug" and "debugging" are both popularly attributed to Admiral Grace Hopper in the 1940s.
While she was working on a Mark II computer at Harvard University, her associates discovered a moth
stuck in a relay, thereby impeding operation, whereupon she remarked that they were "debugging"
the system.
However, the term "bug", in the meaning of technical error, dates back at least to 1878 and Thomas
Edison, and "debugging" seems to have been used as a term in aeronautics before entering the world of
computers. Indeed, in an interview Grace Hopper remarked that she was not coining the term. The
moth fit the already existing terminology, so it was saved.
The Oxford English Dictionary entry for "debug" quotes the term "debugging" used in reference to
airplane engine testing in a 1945 article in the Journal of the Royal Aeronautical Society; Hopper's bug
was found on September 9, 1947. The term was not adopted by computer programmers until the early
1950s. The seminal article by Gill in 1951 is the earliest in-depth discussion of programming errors, but it
does not use the term "bug" or "debugging".
In the Association for Computing Machinery's digital library, the term "debugging" is first used in three
papers from 1952 ACM National Meetings. Two of the three use the term in quotation marks. By 1963,
"debugging" was a common enough term to be mentioned in passing, without explanation, on page 1 of
the CTSS (Compatible Time-Sharing System) manual. A study of debugging technologies reveals an
interesting trend: most debugging innovations have centered around reducing the dependency on
human abilities and interaction. Debugging tooling is ultimately an attempt to improve programmer
productivity, and the technology has developed through several stages:

The Stone Age

At the dawn of the computer age it was difficult for programmers to coax computers to produce output
about the programs they ran. Programmers were forced to invent different ways to obtain information
about the programs they used. They not only had to fix the bugs, but they also had to build the tools to
find the bugs. Devices such as scopes and program-controlled bulbs were used as an early technique of
debugging.

The Bronze Age: Print statement times

Eventually, programmers began to detect bugs by putting print instructions inside their programs. By
doing this, programmers were able to trace the program path and the values of key variables. The use of
print statements freed programmers from the time-consuming task of building their own debugging
tools. This technique is still in common use and is actually well suited to certain kinds of problems.

The Middle Age: Runtime Debuggers

Although print statements were an improvement in debugging techniques, they still required a
considerable amount of programmer time and effort. What programmers needed was a tool that could
execute one instruction of a program at a time, and print the value of any variable in the program. This
would free the programmer from having to decide ahead of time where to put print statements, since it
could be done as he stepped through the program. Thus, runtime debuggers were born. In principle, a
runtime debugger is nothing more than an automatic print statement. It allows the programmer to trace
the program path and the variables without having to put print statements in the code.
Today, virtually every compiler on the market comes with a runtime debugger. The debugger is
enabled by a switch passed to the compiler during compilation of the program. Very often this
switch is called the "-g" switch. The switch tells the compiler to build enough information into the
executable to enable it to run with the runtime debugger. The runtime debugger was a vast
improvement over print statements, because it allowed the programmer to compile and run with a
single compilation, rather than modifying the source and recompiling as he tried to narrow down the
error.

The Present Day: Automatic Debuggers

Runtime debuggers made it easier to detect errors in the program, but they failed to find the cause of
the errors. The programmer needed a better tool to locate and correct the software defect.
Software developers discovered that some classes of errors, such as memory corruption and memory
leaks, could be detected automatically. This was a step forward for debugging techniques, because it
automated the process of finding the bug. The tool would notify the developer of the error, and his job
was simply to fix it.
Automatic debuggers come in several varieties. The simplest ones are just a library of functions that can
be linked into a program. When the program executes and these functions are called, the debugger
checks for memory corruption. If it finds this condition, it reports it. The weakness of such a tool is its
inability to detect the point in the program where the memory corruption actually occurs. This happens
because the debugger does not watch every instruction that the program executes, and is only able to
detect a small number of errors.
The next group of runtime debuggers is based on OCI (object code insertion) technology. These tools
read the object code generated by compilers and instrument it before the programs are linked. The
basic principle of these tools is that they look for processor instructions that access memory. In the
object code, any instruction that accesses memory is modified to check for corruption. These tools are
more useful than the ones based on library techniques, but they are still not perfect. Because these
tools are triggered by memory instructions, they can only detect errors related to memory.
These tools can detect errors in dynamic memory, but they have limited detection ability on the stack
and they do not work on static memory. They cannot detect any other types of errors, because of the
weaknesses of OCI technology. At the object level, a lot of significant information about the source code
is permanently lost and cannot be used to help locate errors. Another drawback of these tools is that
they cannot detect when memory leaks occur. Pointers and integers are not distinguishable at the
object level, making the cause of the leak undetectable.
The third group of runtime debuggers is based on SCI (source code insertion) technology. The tool reads
the source code of the program, analyzes it, and instruments it so that every program instruction is
sandwiched between the tool's instructions. Because the tool reads the source code, it can discover
errors related to memory and other large classes of errors. Moreover, for memory corruption errors, the
tool is able to detect errors in all memory segments, including heap, static, and stack memory.
The big advantage of this tool is that it can track pointers inside programs, so leaks can be traced to the
point where they occurred. This generation of tools is constantly evolving. In addition to looking for
memory errors, these tools are able to detect language-specific errors and algorithmic errors. These
tools will be the basis for the next step of technological development.
All of the present tools have one common drawback. They still require the programmer to go through
the extra step of looking for runtime errors after the program is compiled. In a sense, the process hasn't
changed much since the Debugging Stone Age: first you write code, and then you check for errors. This
two-stage process still exists, only at a higher level. The procedure needs to be integrated into one
stage.

The near future: Compiler integration of
automatic runtime debuggers

The near future is where debugging technology is headed. Over the years, debugging technology has
substantially improved, and it will continue to develop significantly in the near future. The next logical
step in the development of automatic debugging techniques is to integrate these technologies with
compilers. There is no reason why compiler vendors could not expand the use of the "-g" switch. Instead
of just providing information for runtime debugging, the "-g" switch should, by default, perform
automatic runtime error detection. Such tight integration would have a tremendous impact on
programmers' productivity.
A major problem with automatic runtime debuggers is that they are not used enough. Programmers
typically develop their code, and only once they are at the end of the development process do they run
the error detection tool. This is inefficient, because the programmer could use the runtime debugger to
discover errors throughout the whole development process. The developer would then be able to
discover errors during the very first run of the new program.
This is a logical step in the evolution of debugging. Human intervention will be eliminated. When this
technology is integrated with the compiler, runtime debugging will be transparent, not a distinct
process.
Apart from functionality and ease of use, integration of runtime debugging tools with compilers will
lead to a significant technological advance. SCI-based tools are precursors of this step. Such tools are
very similar to compilers. Tools that are based on this technology parse the program, generate a parse
tree, instrument it, and write out source code that is passed to the compiler.
When tools such as these are integrated with the compilers, the process is significantly shortened.
Instead of using an extra tool to parse the source code, the code can be parsed using the compiler's
parser. After the parse tree is generated, the tree can be passed to the debugging tool for
instrumentation. Once instrumented, the parse tree can be sent back to the compiler for code
generation. The SCI debugging tool is ready for full integration with compilers, and this process has
already begun.

How to further improve the debugging
process

As stated earlier, the debugging process will be significantly improved in the near future. We expect that
SCI-based tools and compilers can be easily integrated. They also contain technology which will be the
basis for the integration. Eventually, developers will begin to expect compilers to support runtime
debuggers; this will be a required feature.
The question is not if the integration will happen, but rather when it will happen. The compiler vendors
who integrate with SCI-based tools will have a significant advantage. There are two reasons for this:
The productivity of developers working with these compilers will be much higher, thus platforms offering
such compilers will be the platforms of choice for serious code development. At present, most compilers
are relatively similar. This is true for independent compiler vendors as well as for the compilers provided
with the machine. The integration with SCI-based automatic debuggers will make the compiler and
platform more attractive.
The experience developed from integrating SCI tools with compilers will lead to yet another
technological advancement. An SCI-based automatic runtime debugger is similar in its design to other
tools. Development tools such as syntax checkers, coverage analysers and others rely on their own
ability to analyze and instrument the parse tree. At present, all of these tools rely on their own parsers
to perform their functions. The development and maintenance of parsers is a significant task for
companies that provide these tools.
Tool development would be easier if, instead of parsing code, these tools could use the compiler's parse
tree to perform their functions. We believe that technology developed during the integration of SCI-
based automatic runtime debuggers will lead to easier ways for tools to be integrated with compilers.
Finally, because of this, more third-party tools will be available for such compilers.

Standards: An important part of the process
When one of the compiler vendors integrates SCI-based automatic debugging with their compiler, other vendors will follow suit. As more and more integration happens, the third-party vendors will require some type of standard interface which allows them to connect their tools to compiler parse trees. This will naturally lead to the development of a standard interface, which will be beneficial to all parties: compiler vendors will get more tools that can work with their compilers, and tool vendors will not have to support their own parsers. Programmers will benefit from sophisticated systems that assist them in debugging their software.
The future of automatic debugging tools will be very similar to what is currently happening in consumer computer products. For example, the developers of basic spreadsheets allow third-party vendors to connect their modules to the basic application, and the modules are seamlessly integrated into a whole system. The same thing will happen when compiler vendors and tool vendors integrate their systems.
The future of development tools is very exciting. We believe that automatic error detection will expand. The development of OCI-based automatic error detection tools was a big step forward. At present, this technology is being replaced with SCI-based tools which are more accurate, can detect larger classes of errors and are extensible. These tools will be the basis of future compiler-based automatic detection tools which will detect errors that at this stage we cannot even imagine.


Debugging Process
Normally the first step in debugging is to attempt to reproduce the problem. This can be a non-trivial task, for example as with parallel processes or some unusual software bugs. Also, a specific user environment and usage history can make it difficult to reproduce the problem.
After the bug is reproduced, the input of the program may need to be simplified to make it easier to debug. For example, a bug in a compiler can make it crash when parsing some large source file. However, after simplification of the test case, only a few lines from the original source file can be sufficient to reproduce the same crash. Such simplification can be made manually, using a divide-and-conquer approach: the programmer will try to remove some parts of the original test case and check if the problem still exists. When debugging a problem in a GUI, the programmer can try to skip some user interaction from the original problem description and check if the remaining actions are sufficient for the bug to appear.
After the test case is sufficiently simplified, a programmer can use a debugger tool to examine program states and track down the origin of the problem. Alternatively, tracing can be used. In simple cases, tracing is just a few print statements, which output the values of variables at certain points of program execution.

Debugging Tools
The debugging skill of the programmer can be a major factor in the ability to debug a problem, but the difficulty of software debugging varies greatly with the complexity of the system, and also depends, to some extent, on the programming language(s) used and the available tools, such as debuggers. Debuggers are software tools which enable the programmer to monitor the execution of a program, stop it, restart it, set breakpoints, and change values in memory. The term debugger can also refer to the person who is doing the debugging. Generally, high-level programming languages, such as Java, make debugging easier, because they have features such as exception handling that make real sources of erratic behavior easier to spot. In programming languages such as C or assembly, bugs may cause silent problems such as memory corruption, and it is often difficult to see where the initial problem occurred. In those cases, memory debugger tools may be needed.

Debugging Techniques
1. Print (or tracing) debugging is the act of watching (live or recorded) trace statements, or print statements, that indicate the flow of execution of a process. This is sometimes called printf debugging, due to the use of the printf statement in C.
2. Remote debugging is the process of debugging a program running on a system different than the debugger. To start remote debugging, the debugger connects to a remote system over a network. Once connected, the debugger can control the execution of the program on the remote system and retrieve information about its state.
3. Post-mortem debugging is debugging of the program after it has already crashed. Related techniques often include various tracing techniques and analysis of the memory dump (or core dump) of the crashed process. The dump of the process could be obtained automatically by the system (for example, when the process has terminated due to an unhandled exception), by a programmer-inserted instruction, or manually by the interactive user.
4. Delta Debugging is a technique of automating test case simplification.
5. Saff Squeeze is a technique of isolating a failure within a test using progressive inlining of parts of the failing test.
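The divide-and-conquer simplification described earlier (and automated by Delta Debugging) can be sketched in a few lines. The `crashes` predicate below is a hypothetical stand-in for re-running the program on a candidate input; a real implementation such as Zeller's ddmin algorithm is more systematic about which subsets it tries.

```python
def minimize(test_input, crashes):
    """Greedy divide-and-conquer test-case simplification:
    repeatedly try to delete a chunk of the input while the
    failure still reproduces; halve the chunk size when stuck."""
    chunk = len(test_input) // 2
    while chunk >= 1:
        reduced = False
        i = 0
        while i < len(test_input):
            candidate = test_input[:i] + test_input[i + chunk:]
            if crashes(candidate):       # bug still reproduces?
                test_input = candidate   # keep the smaller input
                reduced = True
            else:
                i += chunk               # this chunk was needed; skip it
        if not reduced:
            chunk //= 2                  # try finer-grained deletions
    return test_input

# Hypothetical bug: the "compiler" crashes whenever the source
# contains both an 'x' and a 'y' somewhere.
crashes = lambda s: 'x' in s and 'y' in s
print(minimize(list("abxcdyef"), crashes))   # -> ['x', 'y']
```

Each round either shrinks the input or halves the deletion granularity, so the loop always terminates with a small input that still reproduces the failure.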


Difference between Software Debugging, Testing and Verification
The terms debugging, testing and verification are not mutually exclusive activities, especially in everyday practice. The definitions draw distinctions, but the boundaries may actually be fuzzy.
Debugging: The process of debugging involves analysing and possibly expanding (with debugging statements) the given program that does not meet the specification, in order to find a new program that is close to the original and does satisfy the specification. Thus it is the process of diagnosing the precise nature of a known error and then correcting it.
Debugging = diagnosis
Debugging = error finding and fixing
Debugging is done by developers
Debuggers fix the errors
Debugging is done in the development phase
Debugging = white-box testing
In contrast, given a program and a set of specifications, verification is the process of proving or demonstrating that the program correctly satisfies the specification.
Verification = functional correctness
Verification can be done by customers
Whereas verification proves conformance with a specification, testing finds cases where a program does not meet its specification. Based on this definition, any activity that exposes program behaviour violating a specification can be called testing.
Testing = trying to break the program
Testing is done by testers
Testers don't fix problems but return them to programmers for fixing
Testing is done in the testing phase
Testing = black-box testing
Testing does not show the absence of errors but their presence
Testing does not include efforts associated with tracking down bugs and fixing them
Testing doesn't ensure quality

Reasons for Software Debugging/Program Testing
1. To prove that a program meets its specification
2. To prove that the software is error-free
3. The purpose of testing is not to find bugs; it is debugging that finds and removes bugs
4. To qualify a software program's quality by measuring its attributes and capabilities against expectations and applicable standards
5. And (probably the most important role of testing is simply to) provide information

Tools for software debugging
Debugging ranges in complexity from fixing simple errors to performing lengthy and tiresome tasks of data collection, analysis and scheduling updates. The debugging skill of the programmer can be a major factor in the ability to debug a problem, but the difficulty of software debugging varies greatly with the complexity of the system, and also depends, to some extent, on the programming language(s) used and the available tools.
Memory Debuggers: A memory debugger is a programming tool for finding memory leaks and buffer overflows, which are due to bugs related to the allocation and deallocation of dynamic memory. Programs written in languages that have garbage collection, such as managed code, might also need memory debuggers, e.g. for memory leaks due to living references in collections. Examples are:
Allinea DDT
AQtime
Bcheck
BoundsChecker
Diakon
Debug_new
Deleaker
Dmalloc
Duma
etc.
Other debugging tools include:
BuGLe
gDebugger
APITrace
GLIntercept
glslDevil
Xcode Tools
GL_ARB_debug_output
GNU Debugger (GDB)
Intel Debugger (IDB)
LLDB
Microsoft Visual Studio Debugger
Valgrind
WinDbg


SYMBOLIC EXECUTION AND SOFTWARE DEBUGGING
SYMBOLIC EXECUTION
Symbolic execution is a program analysis technique introduced in the 70s that has received renewed interest in recent years, due to algorithmic advances and increased availability of computational power and constraint solving technology. In computer science, symbolic execution (also symbolic evaluation) refers to the analysis of programs by tracking symbolic rather than actual values, and maintaining a path condition that is updated whenever a branch instruction is executed, to encode the constraints on the inputs that reach that program point.
Symbolic execution is used to reason about all the inputs that take the same path through a program. Symbolic execution is now the underlying technique of several popular testing tools, many of them open source: NASA's Symbolic (Java) PathFinder, UIUC's CUTE and jCUTE, Stanford's KLEE, UC Berkeley's CREST and BitBlaze, etc.
Symbolic execution still suffers from scalability issues due to the large number of paths that need to be analysed and the complexity of the constraints that are generated. However, algorithmic advances, newly available Satisfiability Modulo Theories (SMT) solvers and more powerful machines have already made it possible to apply such techniques to large programs (with millions of lines of code) and to discover subtle bugs in commonly used software ranging from library code to network and operating system code.

SOFTWARE DEBUGGING
Debugging is a methodical process of finding and reducing the number of bugs, or defects, in a computer program or a piece of electronic hardware, thus making it behave as expected. Debugging tends to be harder when various subsystems are tightly coupled, as changes in one may cause bugs to emerge in another. Debugging involves numerous aspects including interactive debugging, control flow, integration testing, log files, monitoring (application, system), memory dumps, profiling, Statistical Process Control and special design tactics to improve detection while simplifying changes.
ORIGIN
The terms bug and debugging are both popularly attributed to Admiral Grace Hopper in the 1940s. While she was working on a Mark II computer at Harvard University, her associates discovered a moth stuck in a relay, thereby impeding operation, whereupon she remarked that they were debugging the system. However, the term bug in the meaning of technical error dates back to at least 1878 and Thomas Edison, and debugging seems to have been used as a term in aeronautics before entering the world of computers. The term was not adopted by computer programmers until the early 1950s.

As software and electronic systems have become generally more complex, the various common debugging techniques have expanded with more methods to detect anomalies, assess impact, and schedule software patches or full updates to a system.
Tools
Debugging ranges, in complexity, from fixing simple errors to performing lengthy and tiresome tasks of data collection, analysis, and scheduling updates. The debugging skill of the programmer can be a major factor in the ability to debug a problem, but the difficulty of software debugging varies greatly with the complexity of the system, and also depends, to some extent, on the programming language(s) used and the available tools, such as debuggers.
Debuggers are software tools which enable the programmer to monitor the execution of a program, stop it, restart it, set breakpoints and change values in memory. The term debugger can also refer to the person who is doing the debugging.
Generally, high-level programming languages, such as Java, make debugging easier, because they have features such as exception handling that make real sources of erratic behaviour easier to spot.

Types and explanations of symbolic execution programs
Symbolic execution maintains a symbolic state, which maps variables to symbolic expressions, and a symbolic path constraint PC, a first-order quantifier-free formula over symbolic expressions. PC accumulates constraints on the inputs that trigger the execution to follow the associated path.
Consider the program below, which reads in a value and fails if the input is 6. If this program is symbolically executed, a special symbolic variable (as distinct from the program's variables) is associated with the value returned from the read function. These symbolic variables and expressions over them are tracked in a special symbolic state. The symbolic variable, which we call s, is assigned to y in the symbolic state; later, when y is multiplied by two, y is updated to contain the expression 2*s.
At any control transfer instruction, such as the test y == 12, the path constraint is updated to track which branch was taken. In this example, assuming the condition is true, the path constraint is updated from being empty to contain: 2*s == 12.
y = read()
y = 2 * y
if (y == 12)
    fails()
print("OK")
By negating some of the conditions in the path constraint, and by using a constraint solver to obtain satisfying assignments to the modified path constraint, it is possible to generate inputs that explore new parts of the program.
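A minimal sketch of the example above: y's value is represented as an expression over the symbol s, and the path constraint as a predicate. A brute-force search over a small input domain stands in for a real constraint solver; all names here are illustrative, not from any particular tool.

```python
# Symbolically execute:  y = read(); y = 2*y; if (y == 12) fails()
# The input is the symbol s; y's value is tracked as a function of s.
y = lambda s: 2 * s                  # after y = 2*y, y holds 2*s

# Path constraint for the TRUE branch of `if (y == 12)`:
pc_true = lambda s: y(s) == 12       # 2*s == 12

# Stand-in for a constraint solver: brute-force a small input domain.
def solve(constraint, domain=range(-100, 101)):
    return [s for s in domain if constraint(s)]

print(solve(pc_true))                        # inputs that reach fails()
# Negating the constraint covers the other branch (reaching print "OK").
print(solve(lambda s: not pc_true(s))[:3])   # a few non-failing inputs
```

The first solve call recovers the single failing input, 6; negating the constraint generates inputs for the other side of the branch, exactly the negate-and-resolve step described above.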
Detailed Explanation of How Symbolic Execution Aids Software Debugging


Notes:
Java Pathfinder (JPF) is a system to verify executable Java bytecode programs. JPF was developed by NASA Ames Research Center and open-sourced in 2005. Its primary application has been model checking of concurrent programs, and finding defects such as data races and deadlocks.
Rear Admiral Grace Murray Hopper was an American computer scientist and United States Navy officer. A pioneer in the field, she was one of the first programmers of the Harvard Mark I computer and developed the first compiler for a computer programming language.

Symbolic Execution Illustration
Let us consider a simple programming language. Let the program variables be exclusively of type "signed integer". Include simple assignment statements, IF statements (with THEN and ELSE clauses), GOTOs to labels, and some means for obtaining inputs (e.g. procedure parameters, global variables, read operations).
Restrict the arithmetic expressions to the basic integer operators of addition (+), subtraction (-), and multiplication (×). Restrict the Boolean expressions (used in IF statements) to the simple test of whether an arithmetic expression is non-negative (i.e. {arith. expr.} >= 0).
The symbolic execution of programs in this simple language is now described, taking the normal execution semantics for granted. The execution semantics is changed for symbolic execution, but neither the language syntax nor the individual programs written in the language are changed. The only opportunity to introduce symbolic data objects (symbols representing integers) is as inputs to the program. For simplicity, let us suppose that each time a new input value for the program is required, it is supplied symbolically from the list of symbols {a1, a2, a3, ...}. Program inputs are eventually assigned as values to program variables (e.g. by procedure parameters, global variables, or read statements). Thus, to handle symbolic inputs, we allow the values of variables to be ai's as well as signed integer constants.

The evaluation rules for arithmetic expressions used in assignment and IF statements must be extended to handle symbolic values. The expressions formed in the usual way by the integers, a set of indeterminate symbols {a1, a2, ...}, parentheses, and the operations +, -, and × are the integer polynomials (integer-valued, integer coefficients) over those symbols. By allowing program variables to assume integer polynomials over the ai's as values, the symbolic execution of assignment statements follows naturally. The expression on the right-hand side of the statement is evaluated, possibly substituting polynomial expressions for variables. The result is a polynomial (an integer is the trivial case) which is then assigned as the new value of the variable on the left-hand side of the assignment statement.
The GOTOs to labels function exactly as in normal executions, by unconditionally transferring control from the GOTO statement to the statement associated with the corresponding label. The "state" of a program execution usually includes the values of program variables and a statement counter (denoting the statement currently being executed). The definition of the symbolic execution of the IF statement requires that a "path condition" (pc) also be included in the execution state. pc is a Boolean expression over the symbolic inputs {ai}. It never contains program variables, and for our simple language it is a conjoined list of expressions of the form R >= 0 or ¬(R >= 0), where R is a polynomial over {ai}.
For example:
{a1 >= 0 ∧ a1 + 2×a2 >= 0 ∧ ¬(a3 >= 0)}.
As will be seen, pc is the accumulator of properties which the inputs must satisfy in order for an execution to follow the particular associated path. Each symbolic execution begins with pc initialized to true. As assumptions about the inputs are made, in order to choose between alternative paths through the program as presented by IF statements, those assumptions are added (conjoined) to pc.
The symbolic execution of an IF statement begins in a fashion similar to its normal execution: the evaluation of the associated Boolean expression by replacing variables by their values. Since the values of variables are polynomials over {ai}, the condition is an expression of the form R >= 0, where R is a polynomial. Call such an expression q. Using the current path condition (pc), form the two expressions:
(a) pc ⊃ q
(b) pc ⊃ ¬q
At most one of these expressions can be true (eliminating the trivial case where pc is identically false). When exactly one expression is true, we continue the execution of the IF statement as usual by passing control either to the THEN part, when expression (a) is true, or to the ELSE part, when expression (b) is true. All normal executions whose input values satisfy pc would follow the same alternative as this symbolic execution; all would take the THEN alternative (pc ⊃ q) or all would take the ELSE alternative (pc ⊃ ¬q). In this case, the execution of the IF statement is called a "non-forking" execution.
The more interesting case occurs when neither expression (a) nor expression (b) is true. In this situation there exists at least one set of inputs to the program which satisfy pc and would take the THEN alternative, and there exists at least one other set of inputs which satisfy pc and would lead to the ELSE alternative. Since each alternative is possible in this case, the only complete approach is to explore both control paths. So the symbolic execution is defined to fork into two "parallel" executions: one following the THEN alternative, the other, the ELSE. Both of these executions assume the computation state which existed immediately before execution of the IF statement but proceed independently thereafter. In this case the execution of the IF statement is called a "forking" execution. Note that the forking/non-forking characteristic is associated with a particular execution of an IF statement and not with the statement itself. One execution of a particular IF statement may be forking, while a subsequent execution of the same statement may be non-forking.
Since, in choosing the THEN alternative, the inputs are assumed to satisfy q (the evaluated IF statement Boolean), this information is recorded in pc by doing the assignment pc ← pc ∧ q. Similarly, choosing the ELSE alternative leads to pc ← pc ∧ ¬q. pc is called the "path condition" because it is the accumulation of conditions which determines a unique control flow path through the program. Each forking execution of an IF statement contributes a condition over the input symbols which is determined by the particular choice of path. pc remains unchanged for non-forking executions of IF statements, since no new assumptions are made or needed. pc can never become false, since its initial value is true and the only operation performed on pc is an assignment of the form:
pc ← pc ∧ r (where r is either q or ¬q),
but only in the case when (pc ∧ r) is satisfiable ((pc ∧ r) is satisfiable if and only if (pc ⊃ ¬r) is not a theorem).
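The forking rule just described can be sketched concretely. In the sketch below, a toy `satisfiable` check brute-forces a small input domain in place of a theorem prover, and an IF forks exactly when both pc ∧ q and pc ∧ ¬q are satisfiable; all names and the example condition are illustrative.

```python
# A toy forking step for the simple language. A state is (pc, env):
# pc is a list of constraints (predicates over the symbolic input a1),
# env maps program variables to functions of a1.

def execute_if(states, cond):
    """Symbolically execute `IF cond THEN ... ELSE ...` over states.
    A state forks when both branch path conditions are satisfiable."""
    def satisfiable(pc, domain=range(-50, 51)):
        # Stand-in for a theorem prover: try a small concrete domain.
        return any(all(c(a1) for c in pc) for a1 in domain)
    then_states, else_states = [], []
    for pc, env in states:
        q = lambda a1, env=env: cond(env, a1)     # evaluated Boolean
        not_q = lambda a1, q=q: not q(a1)
        if satisfiable(pc + [q]):                 # THEN feasible: pc ∧ q
            then_states.append((pc + [q], dict(env)))
        if satisfiable(pc + [not_q]):             # ELSE feasible: pc ∧ ¬q
            else_states.append((pc + [not_q], dict(env)))
    return then_states, else_states

# x := a1 + 2, then IF x >= 0: since a1 is unconstrained, both
# branches are satisfiable and this execution forks.
start = [([], {"x": lambda a1: a1 + 2})]
then_s, else_s = execute_if(start, lambda env, a1: env["x"](a1) >= 0)
print(len(then_s), len(else_s))   # one state per branch: a forking execution
```

If the condition were implied by pc (for instance x >= -100 over this domain), only one branch list would be non-empty: a non-forking execution.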

Examples

Consider the simple program shown in Figure 1. It is written in a PL/I-style syntax and computes the sum of three values. With integer inputs of 1, 3, and 5, the conventional execution of this program, as shown in Figure 2, computes the output 9. The symbolic execution, shown in detail in Figure 3, has established that for any three integers, say a1, a2, a3, the program will calculate their sum, a1 + a2 + a3. Now consider the somewhat more complicated example shown in Figure 4 for raising an integer X to the power Y. With the symbols a1 and a2 supplied as input for X and Y, the symbolic execution would proceed as shown in Figure 5.


Symbolic Execution Example
Given a set of code (a, b and c are symbolic inputs):
1.  int a = , b = , c = ;
2.  // symbolic
3.  int x = 0, y = 0, z = 0;
4.  if (a) {
5.      x = -2;
6.  }
7.  if (b < 5) {
8.      if (!a && c) { y = 1; }
9.      z = 2;
10. }
11. assert(x + y + z != 3)
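Because a, b and c are symbolic, the executor explores every feasible combination of branch outcomes. A brute-force enumeration over one representative concrete value per branch outcome (a sketch of what path exploration discovers, not a symbolic executor itself) shows exactly which path violates the assertion:

```python
def program(a, b, c):
    # Mirrors the example above; returns x + y + z.
    x = y = z = 0
    if a:
        x = -2
    if b < 5:
        if (not a) and c:
            y = 1
        z = 2
    return x + y + z

# One representative concrete value per branch outcome.
failing = [(a, b, c)
           for a in (0, 1)           # a falsy / truthy
           for b in (0, 5)           # b < 5  /  b >= 5
           for c in (0, 1)           # c falsy / truthy
           if program(a, b, c) == 3]     # assert(x+y+z != 3) violated
print(failing)   # -> [(0, 0, 1)]
```

Only the path with !a, b < 5 and c truthy reaches x = 0, y = 1, z = 2, whose sum is 3; its path condition (¬a ∧ b < 5 ∧ c) is exactly what a symbolic executor would hand to a solver to produce a failing input.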

What happens during symbolic execution


What's going on here?
During symbolic execution, we are trying to determine if certain formulas are satisfiable.
E.g., is a particular program point reachable? Figure out if the path condition is satisfiable.
E.g., is array access a[i] out of bounds? Figure out if the conjunction of the path condition and (i < 0 ∨ i >= a.length) is satisfiable.
E.g., generate concrete inputs that execute the same paths.
This is enabled by powerful SMT/SAT solvers:
SAT = satisfiability
SMT = satisfiability modulo theories = SAT++
E.g. Z3, Yices, STP
Types of Symbolic Execution
Symbolic execution is an attractive approach to solving line reachability: by design, symbolic executors are complete, meaning any path they find is realizable.
Symbolic executors work by running the program, computing over both concrete values and expressions that include symbolic values, which are unknowns that range over various sets of values, e.g., integers, strings, etc. [17, 2, 15, 29]. When a symbolic executor encounters a conditional whose guard depends on a symbolic value, it invokes a theorem prover (our implementation uses the SMT solver STP) to determine which branches are feasible. If both are, the symbolic execution conceptually forks, exploring both branches.
Directed Symbolic Execution
In this section we present SDSE, CCBSE, and Mix-CCBSE. We will explain them in terms of their implementation in Otter, our symbolic execution framework, to make our explanations concrete (and to save space), but the ideas apply to any symbolic execution tool.
Figure 1 diagrams the architecture of Otter and gives pseudocode for its main scheduling loop. Otter uses CIL (C Intermediate Language) to produce a control-flow graph from the input C program. Then it calls a state initializer to construct an initial symbolic execution state, which it stores in the worklist used by the scheduler. A state includes the stack, heap, program counter, and path taken to reach the current position.
In traditional symbolic execution, which we call forward symbolic execution, the initial state begins execution at the start of main. The scheduler extracts a state from the worklist via pick and symbolically executes the next instruction by calling step.
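The pick/step scheduling loop described above can be sketched as follows. The state representation and the toy branching "program" are invented for illustration and are far simpler than Otter's actual states:

```python
def run_scheduler(initial, pick, step, at_target):
    """Otter-style main loop sketch: pick a state from the worklist,
    step it, collect states that reach the target, and put
    incomplete states back on the worklist."""
    worklist = [initial]
    found = []
    while worklist:
        state = pick(worklist)
        worklist.remove(state)
        for s in step(state):
            if at_target(s):
                found.append(s)       # a path reaching the target line
            elif not s["done"]:
                worklist.append(s)    # incomplete: keep exploring
    return found

# Toy program as a state machine: line 0 is a symbolic branch (fork),
# line 1 is the target, line 2 is a terminal non-target line.
def step(state):
    if state["line"] == 0:   # branch: fork into both successors
        return [{"line": 1, "path": state["path"] + ["T"], "done": False},
                {"line": 2, "path": state["path"] + ["F"], "done": True}]
    return [{**state, "done": True}]

found = run_scheduler({"line": 0, "path": [], "done": False},
                      pick=lambda wl: wl[0],          # FIFO pick
                      step=step,
                      at_target=lambda s: s["line"] == 1)
print([s["path"] for s in found])    # -> [['T']]
```

Swapping in a different `pick` function is exactly how the search strategies discussed below (Random Path, KLEE, SAGE, SDSE) are realized.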

Fig. 1. The architecture of the Otter symbolic execution engine.
As Otter executes instructions, it may encounter conditionals whose guards depend on symbolic values. At these points, Otter queries STP, an SMT solver, to see if legal, concrete representations of the symbolic values could make either or both branches possible, and whether an error such as an assertion failure may occur.
The symbolic executor will return these states to the scheduler, and those that are incomplete (i.e., non-terminal) are added back to the worklist. The call to manage_targets is just for guiding CCBSE's backward search (it is a no-op for other strategies), and is discussed further below.

Symbolic Execution Types

There are various types of symbolic execution techniques (directed symbolic execution search), which include:
Forward symbolic execution, which begins execution at the start of main.
Shortest-distance symbolic execution (SDSE), which prioritizes the path with the shortest distance to the target line as computed over an interprocedural control-flow graph (ICFG).
Call-chain backward symbolic execution (CCBSE), which starts at the target line and works backward until it finds a realizable path from the start of the program, using standard forward (interprocedural) symbolic execution as a subroutine.
Mixed-strategy CCBSE (Mix-CCBSE), which combines CCBSE with another forward search. In Mix-CCBSE, we alternate CCBSE with some forward search strategy S.

Forward Symbolic Execution
Different forward symbolic execution strategies are distinguished by their implementation of the pick function. In Otter we have implemented, among others, three search strategies described in the literature:
Random Path (RP) is a probabilistic version of breadth-first search. RP randomly chooses from the worklist states, weighing a state with a path of length n by 2^-n. Thus, this approach favors shorter paths, but treats all paths of the same length equally.
KLEE uses a round-robin of RP and what we call closest-to-uncovered, which computes the distance between the end of each state's path and the closest uncovered node in the interprocedural control-flow graph and then randomly chooses from the set of states weighed inversely by distance.
SAGE uses a coverage-guided generational search to explore states in the execution tree. At first, SAGE runs the initial state until the program terminates, by randomly choosing a state to run whenever the symbolic execution core returns multiple states. It stores the remaining states in the worklist as the first-generation children. Next, SAGE runs each of the first-generation children to completion, in the same manner as the initial state, but separately grouping the grandchildren by their first-generation parent. After exploring the first generation, SAGE explores subsequent generations (children of the first generation, grandchildren of the first generation, etc.) in a more intermixed fashion, using a block coverage heuristic to determine which generations to explore first.
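The Random Path weighting can be sketched directly; the worklist entries below are hypothetical states whose only relevant feature is their path length:

```python
import random

def pick_random_path(worklist):
    """Random Path (RP) strategy sketch: choose a state from the
    worklist, weighing a state whose path has length n by 2**-n,
    so shorter paths are favored."""
    weights = [2.0 ** -len(state["path"]) for state in worklist]
    return random.choices(worklist, weights=weights, k=1)[0]

# Three hypothetical states: path lengths 1, 3 and 3.
worklist = [{"id": "A", "path": [0]},
            {"id": "B", "path": [0, 1, 0]},
            {"id": "C", "path": [1, 1, 1]}]
random.seed(0)
picks = [pick_random_path(worklist)["id"] for _ in range(1000)]
print({s: picks.count(s) for s in "ABC"})   # A favored ~4x over B and C
```

With weights 2^-1 = 0.5 versus 2^-3 = 0.125, state A is chosen about four times as often as either deeper state, while B and C (equal lengths) are treated equally.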

Shortest-Distance Symbolic Execution
The basic idea of SDSE is to prioritize program branches that correspond to the shortest path to the target in the ICFG. To illustrate how SDSE works, consider the code in Figure 2, which performs command-line argument processing followed by some program logic, a pattern common to many programs. This program first enters a loop that iterates up to argc times, processing the i-th command-line argument in argv during iteration i. If the argument is 'b', the program sets b[n] to 1 and increments n (line 8); otherwise, the program calls foo. A potential buffer overflow could occur at line 8 when more than four arguments are 'b'; we add an assertion on line 7 to identify when this overflow would occur. After the arguments are processed, the program enters a loop that reads and processes character inputs (lines 12 onward).

Figure 2. Example illustrating SDSE's potential benefit.

Suppose we would like to reason about a possible failure of the assertion. Then we can run this program with symbolic inputs, which we identify with the calls on line 3 to the special built-in function symbolic. The right half of the figure illustrates the possible program paths the symbolic executor can explore on the first five iterations of the argument-processing loop. Notice that for five loop iterations there is only one path that reaches the failing assertion out of Σ_{n=0}^{4} 3·2^n = 93 total paths. Moreover, the assertion is not reachable once exploration has advanced past the argument-processing loop.
In this example, RP would have only a small chance of finding the overflow, spending most of its time exploring paths shorter than the one that leads to the buffer overflow. A symbolic executor using KLEE or SAGE would focus on increasing coverage to all lines, wasting significant time exploring paths through the loop at the end of the program, which does not influence this buffer overflow.
In contrast, SDSE works very well in this example, with line 7 set as the target. Consider the first iteration of the loop. The symbolic executor will branch upon reaching the loop guard, and will choose to execute the first instruction of the loop, which is two lines away from the assertion, rather than the iteration after the loop, which can no longer reach the assertion.
Next, on line 6, the symbolic executor takes the true branch, since that reaches the assertion itself immediately. Then, determining that the assertion is true, it will run the next line, since it is only three lines away from the assertion and hence closer than paths that go through foo (which were deferred by the choice to go to the assertion). Then the symbolic executor will return to the loop entry, repeating the same process for subsequent iterations. As a result, SDSE explores the central path shown in bold in the figure, and thereby quickly finds the assertion failure.

Implementation
SDSE is implemented as a pick function from Figure 1. As mentioned, SDSE chooses the state on the worklist with the shortest distance to the target. Within a function, the distance is just the number of edges between statements in the control-flow graph (CFG). To measure distances across function calls, we count edges in an interprocedural control-flow graph (ICFG), in which function call sites are split into call nodes and return nodes, with call edges connecting call nodes to function entries and return edges connecting function exits to return nodes. For each call site i, we label call and return edges by (i and )i, respectively. Figure 3(a) shows an example ICFG for a program in which main calls foo twice; here call i to foo is labeled foo_i.

We define the distance-to-target metric to be the length of the shortest path in the ICFG from an instruction to the target, such that the path contains no mismatched calls and returns. Formally, we can define such paths as those whose sequence of edge labels forms a string produced from the PN nonterminal in the grammar shown in Figure 3(b). In this grammar, developed by Reps and later named by Fahndrich et al., S paths correspond to those that exactly match calls and returns; N paths correspond to entering functions only; and P paths correspond to exiting functions only. For example, the dotted path in Figure 3(a) is a PN path: it traverses the matching (foo_0 and )foo_0 edges, and then traverses (foo_1 to the target. Notice that we avoid conflating edges of different call sites by matching (i and )i edges, and thus we can statically compute a context-sensitive distance-to-target metric.

Fig. 3. SDSE distance computation.

PN reachability was previously used for conservative static analysis. However, in SDSE, we are always asking about PN reachability from the current instruction. Hence, rather than solve reachability for an arbitrary initial P path segment (which would correspond to asking about distances from the current instruction in all calling contexts of that instruction), we restrict the initial P path segment to the functions on the current call stack. For performance, we statically precompute N path and S path distances for all instructions to the target and combine them with P path distances on demand.
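Within a single function, the distance metric is plain breadth-first search over reversed CFG edges. The sketch below is intraprocedural only (the real SDSE metric is interprocedural and context-sensitive via the PN grammar), and the example CFG loosely mirrors the loop-guard situation from Figure 2:

```python
from collections import deque

def distance_to_target(cfg, target):
    """Shortest CFG distance from every node to `target`, by BFS
    over reversed edges. Nodes that cannot reach the target get
    no entry (distance = infinity)."""
    rev = {n: [] for n in cfg}
    for n, succs in cfg.items():
        for s in succs:
            rev[s].append(n)
    dist = {target: 0}
    queue = deque([target])
    while queue:
        n = queue.popleft()
        for p in rev[n]:
            if p not in dist:
                dist[p] = dist[n] + 1
                queue.append(p)
    return dist

# Hypothetical CFG: the loop guard branches to the loop body (near
# the assertion) and to the code after the loop, which can no longer
# reach the assertion at all.
cfg = {"guard": ["body", "after"],
       "body": ["assert", "guard"],
       "assert": ["guard"],
       "after": []}
print(distance_to_target(cfg, "assert"))
```

Here "body" is closer to the target than "guard", and "after" gets no distance at all, so an SDSE-style pick would always prefer states heading into the loop body over states that have left the loop.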

Call-Chain Backward Execution
SDSE is often very effective, but there are cases on which it does not do well; in particular, SDSE is less effective when there are many potential paths to the target line, but only a few, long paths are realizable. In these situations, CCBSE can sometimes work dramatically better.
To see why, consider the code in Figure 4. This program initializes m and n to be symbolic and then loops, calling f(m, n) when m == i for i ∈ [0, 1000). For non-negative values of n, the loop in lines 12-16 iterates through n's least significant bits (stored in a during each iteration), incrementing sum by a+1 for each non-zero a. Finally, if sum == 0 and m == 7, the failing assertion on line 19 is reached. Otherwise, the program falls into an indefinite loop, as sum and m are never updated in the loop.

Fig. 4. Example illustrating CCBSE's potential benefit.
RP, KLEE, SAGE, and SDSE all perform poorly on this example. SDSE gets stuck at the very beginning: in main's for loop, it immediately steps into f when m == 0, as this is the "fastest" way to reach the assertion inside f according to the ICFG. Unfortunately, the guard of the assertion is never satisfied when m is 0, and therefore SDSE gets stuck in the infinite loop. SAGE is very likely to get stuck, because the chance of SAGE's first generation entering f with the right argument (m == 7) is extremely low, and SAGE always runs its first generation to completion, and hence will execute the infinite loop forever. RP and KLEE will also reach the assertion very slowly, since they waste time executing f where m ≠ 7; none of these paths lead to the assertion failure.
In contrast, CCBSE begins by running f with both parameters m and n set to symbolic, as CCBSE does not know what values might be passed to f. Hence, CCBSE will potentially explore all 2^6 paths induced by the for loop, and one of them, say p, will reach the assertion. When p is found, CCBSE will jump to main and explore various paths that reach the call to f. At the call to f, CCBSE will follow p to short-circuit the evaluation through f (in particular, the 2^6 branches induced by the for loop), and thus quickly find a realizable path to the failure.

Implementation
CCBSE is implemented in the manage_targets and pick functions from Figure 1. Otter states s, returned by pick, include the function f in which symbolic execution started, which we call the origin function. Thus, traditional symbolic execution states always have main as their origin function, while CCBSE allows different origin functions. In particular, CCBSE begins by initializing states for functions containing target lines.
To start symbolic execution at an arbitrary function, Otter must initialize symbolic values for the function's inputs (parameters and global variables). Integer-valued inputs are initialized to symbolic words, and pointers are represented using conditional pointers, manipulated using Morris's general axiom of assignment. To support recursive data structures, Otter initializes pointers lazily: we do not actually create conditional pointers until a pointer is used, and we only initialize as much of the memory map as is required. When initialized, pointers are set up as follows: for an input p of type pointer to type T, we construct a conditional pointer such that p may be null or p may point to a fresh symbolic value of type T. If T is a primitive type, we also add a disjunct in which p may point to any element of an array of 4 fresh values of type T. This last case models parameters that are pointers to arrays, and we restrict its use to primitive types for performance reasons. In our experiments, we have not found this restriction to be problematic. This strategy for initializing pointers is unsound in that CCBSE could miss some targets, but the final paths CCBSE produces are always feasible, since they ultimately connect back to main.
The pick function works in two steps. First, it selects the origin function to execute, and then it selects a state with that origin. For the former, it picks the function f with the shortest-length call chain from main. For non-CCBSE the origin will always be main. At the start of CCBSE with a single target, the origin will be the one containing the target; as execution continues there will be more choices, and picking the shortest to main ensures that we move backward from target functions toward main. After selecting the origin function f, pick chooses one of f's states according to some forward search strategy. We write CCBSE(S) to denote CCBSE using forward search strategy S.
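The origin-selection step of pick can be sketched as a breadth-first search over the call graph; the call graph and origin sets below are hypothetical:

```python
from collections import deque

def pick_origin(call_graph, origins):
    """CCBSE origin-selection sketch: among functions that currently
    have states, pick the one with the shortest call chain from main."""
    dist = {"main": 0}
    queue = deque(["main"])
    while queue:                        # BFS over the call graph
        f = queue.popleft()
        for callee in call_graph.get(f, []):
            if callee not in dist:
                dist[callee] = dist[f] + 1
                queue.append(callee)
    return min(origins, key=lambda f: dist.get(f, float("inf")))

# Hypothetical call graph: main -> g -> f; the target line is inside f.
call_graph = {"main": ["g"], "g": ["f"], "f": []}
print(pick_origin(call_graph, ["f"]))        # only f has states at first
print(pick_origin(call_graph, ["f", "g"]))   # once g gains states, g wins
```

Once f finds a path to the target, states for f's callers are created (see update_paths below), and this selection rule then favors those callers, moving the search backward toward main.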

Fig. 5. Target management for CCBSE.
The manage_targets(s) function is given in Figure 5. Recall from Figure 1 that s has already been added to
the worklist for additional, standard forward search; the job of manage_targets is to record which paths
reach the target line and to try to connect s with path suffixes previously found to reach the target. The
manage_targets function extracts from s both the origin function sf and the (interprocedural) path p that
has been explored from sf to the current point.
This path contains all the decisions made by the symbolic executor at condition points. If path p's end
(denoted pc(p)) has reached a target (line 10), we associate p with sf by calling update_paths; for the
moment one can think of this function as adding p to a list of paths that start at sf and reach targets.
Otherwise, if the path's end is at a call to some function f, and f itself has paths to targets, then we may
possibly extend p with one or more of those paths. So we retrieve f's paths, and for each one p' we see
whether concatenating p' to p (written p + p') produces a feasible path. If so, we add it to sf's paths.
Feasibility is checked by attempting to symbolically execute p' starting in p's state s.

Now we turn to the implementation of update_paths. This function simply adds p to sf's paths (line 19),
and if sf did not previously have any paths, it will create initial states for each of sf's callers (pre-
computed from the call graph) and add these to the worklist (line 17). Because these callers will be
closer to main, they will be subsequently favored by pick when it chooses states.
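The logic described above can be sketched in Python as follows. This is a simplified model with hypothetical data structures; the feasibility check, which in the real system means symbolically executing the suffix from the current state, is stubbed out as a callback:

```python
from dataclasses import dataclass

@dataclass
class Path:
    decisions: tuple  # branch decisions taken along the path
    end: str          # program point (target line or called function) at the path's end
    def __add__(self, other):
        # Concatenate a suffix onto this path
        return Path(self.decisions + other.decisions, other.end)

@dataclass
class State:
    origin: str  # origin function sf
    path: Path   # path explored from sf to the current point

def update_paths(sf, p, paths, callers, worklist):
    """Add p to sf's paths; on sf's first recorded path, enqueue initial
    states for each of sf's callers so the search moves back toward main."""
    first = sf not in paths
    paths.setdefault(sf, []).append(p)
    if first:
        for caller in callers.get(sf, []):
            worklist.append(("initial_state", caller))

def manage_targets(s, paths, callers, worklist, targets, is_feasible):
    """Record paths that reach targets, and try to extend s's path with
    previously found suffixes of the function it is about to call."""
    sf, p = s.origin, s.path
    if p.end in targets:
        update_paths(sf, p, paths, callers, worklist)
    elif p.end in paths:  # p ends at a call to a function with known target paths
        for suffix in paths[p.end]:
            if is_feasible(p, suffix):  # stub for symbolic replay of the suffix
                update_paths(sf, p + suffix, paths, callers, worklist)
```

For example, if f already has a path to a target and a state with origin g reaches a call to f, a feasible concatenation gives g a path to the target and enqueues initial states for g's callers.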

Mixing CCBSE with forward search


While CCBSE may find a path more quickly, it comes with a cost: its queries tend to be more complex
than in forward search, and it can spend significant time trying paths that start in the middle of the
program but are ultimately infeasible. Consider Figure 6, a modified version of the code in Figure 4.
Here, main calls function g, which acts as main did in Figure 4, with some m >= 30 (line 6), and the
assertion in f is reachable only when m == 37 (line 22). All other strategies fail in the same manner as
they do in Figure 4.
However, CCBSE also fails to perform well here, as it does not realize that m is at least 30, and therefore
considers ultimately infeasible conditions 0 <= m <= 36 in f. With Mix-CCBSE, however, we conceptually
start forward symbolic execution from main at the same time that CCBSE ("backward search") is run. As
before, the backward search will get stuck in finding a path from g's entry to the assertion. However, in
the forward search, g is called with m >= 30, and therefore f is always called with m >= 30, making it hit the
right condition m == 37 very soon thereafter. Notice that, in this example, the backward search must
find the path from f's entry to the assertion before f is called with m == 37 in the forward search in order
for the two searches to match up (e.g., there are enough instructions to run in line 5). Should this not
happen, Mix-CCBSE degenerates to its constituents running independently in parallel, which is the worst
case.
Implementation
We implement Mix-CCBSE with a slight alteration to pick. At each step, we decide whether to use
regular forward search or CCBSE next, splitting the strategies 50/50 by time spent. We compute time
heuristically as 50 x (no. of solver calls) + (no. of instructions executed), taking into account the higher
cost of solver queries over instruction executions.
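A minimal sketch of this scheduling decision (the 50-to-1 weighting of solver calls over instructions comes from the text above; the function and variable names are hypothetical):

```python
def elapsed_time(solver_calls, instructions):
    """Heuristic cost: solver queries are weighted 50x instruction executions."""
    return 50 * solver_calls + instructions

def choose_search(forward_stats, ccbse_stats):
    """Run whichever of forward search / CCBSE has consumed less heuristic
    time so far, keeping the 50/50 split. Each stats argument is a pair
    (solver_calls, instructions_executed)."""
    t_fwd = elapsed_time(*forward_stats)
    t_ccbse = elapsed_time(*ccbse_stats)
    return "forward" if t_fwd <= t_ccbse else "ccbse"
```

Under this heuristic, one solver call counts as much as fifty executed instructions, so a strategy that issues many complex queries is throttled even if it executes few instructions.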

Limitations
1. Limited by the power of the constraint solver, i.e., symbolic execution cannot handle non-linear and
very complex constraints.
2. Does not scale when the number of infeasible paths is large (a subject of ongoing research in this
area).
3. Source code, or an equivalent (e.g., Java class files), is required for precise symbolic execution.

References
1. http://en.wikipedia.org/wiki/Symbolic_execution
2. http://babelfish.arc.nasa.gov/trac/jpf/wiki/projects/jpf-symbc
3. http://osl.cs.uiuc.edu/~ksen/cute/
4. http://klee.llvm.org/
5. http://code.google.com/p/crest/
6. http://bitblaze.cs.berkeley.edu/
7. http://research.microsoft.com/en-us/projects/pex/
8. http://research.microsoft.com/en-us/projects/yogi/
9. http://en.wikipedia.org/wiki/Debugging

Software Quality
Assurance
CSC532 Group3
Lawal Yusuf 080805053
Obidiagha Stanley 080805064
Edokwe Ebube 080805033
Outline
Quality Concepts
Software Quality Assurance
Software Reviews
Formal Technical Review
Formal Approaches to SQA
Statistical Software Quality Assurance
Software Reliability
The ISO 9000 & 9001 Quality Standards
The SQA Plan
Summary

Quality Concepts
The American Heritage Dictionary defines quality as a characteristic or attribute of
something. As an attribute of an item, quality refers to measurable characteristics:
things we are able to compare to known standards such as length, color, electrical
properties, and malleability. However, software, largely an intellectual entity, is more
challenging to characterize than physical objects.
Nevertheless, measures of a program's characteristics do exist. These properties
include cyclomatic complexity, cohesion, number of function points, lines of code,
and many others. When we examine an item based on its measurable
characteristics, two kinds of quality may be encountered: quality of design and
quality of conformance.
Quality of design refers to the characteristics that designers specify for an item. The
grade of materials, tolerances, and performance specifications all contribute to the
quality of design. As higher-grade materials are used, tighter tolerances and greater
levels of performance are specified, the design quality of a product increases, if the
product is manufactured according to specifications.
Quality of conformance is the degree to which the design specifications are
followed during manufacturing. Again, the greater the degree of conformance, the
higher is the level of quality of conformance.
In software development, quality of design encompasses requirements,
specifications, and the design of the system. Quality of conformance is an issue
focused primarily on implementation. If the implementation follows the design and
the resulting system meets its requirements and performance goals, conformance
quality is high.
But are quality of design and quality of conformance the only issues that software
engineers must consider? Robert Glass [GLA98] argues that a more intuitive
relationship is in order:
User satisfaction = compliant product + good quality +
delivery within budget and schedule

At the bottom line, Glass contends that quality is important, but if the user isn't
satisfied, nothing else really matters. DeMarco [DEM99] reinforces this view when he
states: "A product's quality is a function of how much it changes the world for the
better." This view of quality contends that if a software product provides substantial
benefit to its end-users, they may be willing to tolerate occasional reliability or
performance problems.
Quality Control
Quality control involves the series of inspections, reviews, and tests used throughout
the software process to ensure each work product meets the requirements placed
upon it. Quality control includes a feedback loop to the process that created the
work product. The combination of measurement and feedback allows us to tune the
process when the work products created fail to meet their specifications. This
approach views quality control as part of the manufacturing process.
Quality control activities may be fully automated, entirely manual, or a combination
of automated tools and human interaction. A key concept of quality control is that
all work products have defined, measurable specifications to which we may
compare the output of each process. The feedback loop is essential to minimize the
defects produced.
Quality Assurance
Quality assurance consists of the auditing and reporting functions of management.
The goal of quality assurance is to provide management with the data necessary to
be informed about product quality, thereby gaining insight and confidence that
product quality is meeting its goals. Of course, if the data provided through quality
assurance identify problems, it is managements responsibility to address the
problems and apply the necessary resources to resolve quality issues.
Cost of Quality
The cost of quality includes all costs incurred in the pursuit of quality or in performing
quality-related activities. Cost of quality studies are conducted to provide a
baseline for the current cost of quality, identify opportunities for reducing the cost of
quality, and provide a normalized basis of comparison. The basis of normalization is
almost always dollars. Once we have normalized quality costs on a dollar basis, we
have the necessary data to evaluate where the opportunities lie to improve our
processes. Furthermore, we can evaluate the effect of changes in dollar-based
terms.
Quality costs may be divided into costs associated with prevention, appraisal, and
failure. Prevention costs include:
quality planning
formal technical reviews
test equipment
training
Appraisal costs include activities to gain insight into product condition the "first
time through" each process. Examples of appraisal costs include:
in-process and inter-process inspection
equipment calibration and maintenance
testing
Failure costs are those that would disappear if no defects appeared before shipping
a product to customers. Failure costs may be subdivided into internal failure costs
and external failure costs. Internal failure costs are incurred when we detect a
defect in our product prior to shipment. Internal failure costs include:
rework
repair
failure mode analysis
External failure costs are associated with defects found after the product has been
shipped to the customer. Examples of external failure costs are:
complaint resolution
product return and replacement
help line support
warranty work

Software Quality Assurance
Many definitions of software quality have been proposed in the literature. For our
purposes, software quality is defined as:
Conformance to explicitly stated functional and performance requirements,
explicitly documented development standards, and implicit characteristics that are
expected of all professionally developed software.
There is little question that this definition could be modified or extended. In fact, a
definitive definition of software quality could be debated endlessly. The definition
serves to emphasize three important points:
1. Software requirements are the foundation from which quality is measured.
Lack of conformance to requirements is lack of quality.
2. Specified standards define a set of development criteria that guide the
manner in which software is engineered. If the criteria are not followed, lack
of quality will almost surely result.
3. A set of implicit requirements often goes unmentioned, e.g.:
i) correctness: the extent to which the software meets its specification and its
users' requirements
ii) reliability: the degree to which the software continues to work without failing
iii) performance: the amount of main memory and processor time that the software
uses
iv) integrity: the degree to which the software enforces control over access to
information by users
v) usability: the ease of use of the software
vi) maintainability: the effort required to find and fix a fault
vii) flexibility: the effort required to change the software to meet changed
requirements
viii) testability: the effort required to test the software effectively
ix) portability: the effort required to transfer the software to a different hardware
and/or software platform
x) reusability: the extent to which the software (or a component within it) can be
reused within some other software
xi) interoperability: the effort required to make the software work in conjunction with
some other software
xii) security: the extent to which the software is safe from external sabotage that
may damage it and impair its use.


If software conforms to its explicit requirements but fails to meet implicit
requirements, software quality is suspect.

Background Issues
Quality assurance is an essential activity for any business that produces products to
be used by others. Prior to the twentieth century, quality assurance was the sole
responsibility of the craftsperson who built a product. The first formal quality
assurance and control function was introduced at Bell Labs in 1916 and spread
rapidly throughout the manufacturing world. During the 1940s, more formal
approaches to quality control were suggested. These relied on measurement and
continuous process improvement as key elements of quality management.
Today, every company has mechanisms to ensure quality in its products. In fact,
explicit statements of a company's concern for quality have become a marketing
ploy during the past few decades.
The history of quality assurance in software development parallels the history of
quality in hardware manufacturing. During the early days of computing (1950s and
1960s), quality was the sole responsibility of the programmer. Standards for quality
assurance for software were introduced in military contract software development
during the 1970s and have spread rapidly into software development in the
commercial world [IEE94]. Extending the definition presented earlier, software quality
assurance is a "planned and systematic pattern of actions" [SCH98] that are required
to ensure high quality in software. The scope of quality assurance responsibility might
best be characterized by paraphrasing a once-popular automobile commercial:
"Quality Is Job #1." The implication for software is that many different constituencies
have software quality assurance responsibility: software engineers, project
managers, customers, salespeople, and the individuals who serve within an SQA
group.
The SQA group serves as the customer's in-house representative. That is, the people
who perform SQA must look at the software from the customer's point of view. Does
the software adequately meet the quality factors? Has software development been
conducted according to pre-established standards? Have technical disciplines
properly performed their roles as part of the SQA activity? The SQA group attempts
to answer these and other questions to ensure that software quality is maintained.
SQA Activities
Software quality assurance is composed of a variety of tasks associated with two
different constituencies: the software engineers who do technical work and an SQA
group that has responsibility for quality assurance planning, oversight, record
keeping, analysis, and reporting.
Software engineers address quality (and perform quality assurance and quality
control activities) by applying solid technical methods and measures, conducting
formal technical reviews, and performing well-planned software testing.
The charter of the SQA group is to assist the software team in achieving a high
quality end product. The Software Engineering Institute [PAU93] recommends a set
of SQA activities that address quality assurance planning, oversight, record keeping,
analysis, and reporting. These activities are performed (or facilitated) by an
independent SQA group that:
Prepares an SQA plan for a project. The plan is developed during project planning
and is reviewed by all interested parties. Quality assurance activities performed by
the software engineering team and the SQA group are governed by the plan. The
plan identifies
evaluations to be performed
audits and reviews to be performed
standards that are applicable to the project
procedures for error reporting and tracking
documents to be produced by the SQA group
amount of feedback provided to the software project team
Participates in the development of the project's software process description. The
software team selects a process for the work to be performed. The SQA group
reviews the process description for compliance with organizational policy, internal
software standards, externally imposed standards (e.g., ISO-9001), and other parts of
the software project plan.
Reviews software engineering activities to verify compliance with the defined
software process. The SQA group identifies, documents, and tracks deviations from
the process and verifies that corrections have been made.
Audits designated software work products to verify compliance with those defined
as part of the software process. The SQA group reviews selected work products;
identifies, documents, and tracks deviations; verifies that corrections have been
made; and periodically reports the results of its work to the project manager.
Ensures that deviations in software work and work products are documented and
handled according to a documented procedure. Deviations may be encountered in
the project plan, process description, applicable standards, or technical work
products.
Records any noncompliance and reports to senior management. Noncompliance
items are tracked until they are resolved.
In addition to these activities, the SQA group coordinates the control and
management of change and helps to collect and analyze software metrics.

Software Reviews
Software reviews are a "filter" for the software engineering process. That is, reviews
are applied at various points during software development and serve to uncover
errors and defects that can then be removed. Software reviews "purify" the software
engineering activities that we have called analysis, design, and coding. Freedman
and Weinberg [FRE90] discuss the need for reviews this way:
Technical work needs reviewing for the same reason that pencils need erasers: To
err is human. The second reason we need technical reviews is that although people
are good at catching some of their own errors, large classes of errors escape the
originator more easily than they escape anyone else.
A review, any review, is a way of using the diversity of a group of people to:
1. Point out needed improvements in the product of a single person or team;
2. Confirm those parts of a product in which improvement is either not desired or
not needed;
3. Achieve technical work of more uniform, or at least more predictable, quality
than can be achieved without reviews, in order to make technical work more
manageable.
Many different types of reviews can be conducted as part of software engineering.
Each has its place. An informal meeting around the coffee machine is a form of
review, if technical problems are discussed. A formal presentation of software design
to an audience of customers, management, and technical staff is also a form of
review. A formal technical review is the most effective filter from a quality assurance
standpoint. Conducted by software engineers (and others) for software engineers,
the FTR is an effective means for improving software quality.

Formal Technical Review
A formal technical review is a software quality assurance activity performed by
software engineers (and others). The objectives of the FTR are:
1) To uncover errors in function, logic, or implementation for any representation of
the software;
2) To verify that the software under review meets its requirements;
3) To ensure that the software has been represented according to predefined
standards;
4) To achieve software that is developed in a uniform manner; and
5) To make projects more manageable.
In addition, the FTR serves as a training ground, enabling junior engineers to observe
different approaches to software analysis, design, and implementation. The FTR also
serves to promote backup and continuity because a number of people become
familiar with parts of the software that they may not have otherwise seen.
The FTR is actually a class of reviews that includes walkthroughs, inspections, round-
robin reviews and other small group technical assessments of software. Each FTR is
conducted as a meeting and will be successful only if it is properly planned,
controlled, and attended. In the sections that follow, guidelines similar to those for a
walkthrough [FRE90], [GIL93] are presented as a representative formal technical
review.
The Review Meeting
Regardless of the FTR format that is chosen, every review meeting should abide by
the following constraints:
Between three and five people (typically) should be involved in the review.
Advance preparation should occur but should require no more than two
hours of work for each person.
The duration of the review meeting should be less than two hours.
Given these constraints, it should be obvious that an FTR focuses on a specific (and
small) part of the overall software. For example, rather than attempting to review an
entire design, walkthroughs are conducted for each component or small group of
components. By narrowing focus, the FTR has a higher likelihood of uncovering
errors.
The focus of the FTR is on a work product (e.g., a portion of a requirements
specification, a detailed component design, a source code listing for a
component). The individual who has developed the work product (the producer)
informs the project leader that the work product is complete and that a review is
required. The project leader contacts a review leader, who evaluates the product
for readiness, generates copies of product materials, and distributes them to two or
three reviewers for advance preparation. Each reviewer is expected to spend
between one and two hours reviewing the product, making notes, and otherwise
becoming familiar with the work. Concurrently, the review leader also reviews the
product and establishes an agenda for the review meeting, which is typically
scheduled for the next day.
The review meeting is attended by the review leader, all reviewers, and the
producer. One of the reviewers takes on the role of the recorder; that is, the
individual who records (in writing) all important issues raised during the review. The
FTR begins with an introduction of the agenda and a brief introduction by the
producer. The producer then proceeds to "walk through" the work product,
explaining the material, while reviewers raise issues based on their advance
preparation. When valid problems or errors are discovered, the recorder notes each.
At the end of the review, all attendees of the FTR must decide whether to (1) accept
the product without further modification, (2) reject the product due to severe errors
(once corrected, another review must be performed), or (3) accept the product
provisionally (minor errors have been encountered and must be corrected, but no
additional review will be required). The decision made, all FTR attendees complete a
sign-off, indicating their participation in the review and their concurrence with the
review team's findings.
Review Reporting and Record Keeping
During the FTR, a reviewer (the recorder) actively records all issues that have been
raised. These are summarized at the end of the review meeting and a review issues
list is produced. In addition, a formal technical review summary report is completed.
A review summary report answers three questions:
1. What was reviewed?
2. Who reviewed it?
3. What were the findings and conclusions?
The review summary report is a single page form (with possible attachments). It
becomes part of the project historical record and may be distributed to the project
leader and other interested parties.
The review issues list serves two purposes: (1) to identify problem areas within the
product and (2) to serve as an action item checklist that guides the producer as
corrections are made. An issues list is normally attached to the summary report.
It is important to establish a follow-up procedure to ensure that items on the issues list
have been properly corrected. Unless this is done, it is possible that issues raised can
fall between the cracks. One approach is to assign the responsibility for follow-up
to the review leader.
Review Guidelines
Guidelines for the conduct of formal technical reviews must be established in
advance, distributed to all reviewers, agreed upon, and then followed. A review
that is uncontrolled can often be worse than no review at all. The following
represents a minimum set of guidelines for formal technical reviews:
1) Review the product, not the producer. An FTR involves people and egos.
Conducted properly, the FTR should leave all participants with a warm feeling of
accomplishment. Conducted improperly, the FTR can take on the aura of an
inquisition. Errors should be pointed out gently; the tone of the meeting should be
loose and constructive; the intent should not be to embarrass or belittle. The
review leader should conduct the review meeting to ensure that the proper tone
and attitude are maintained and should immediately halt a review that has
gotten out of control.
2) Set an agenda and maintain it. One of the key maladies of meetings of all types
is drift. An FTR must be kept on track and on schedule. The review leader is
chartered with the responsibility for maintaining the meeting schedule and
should not be afraid to nudge people when drift sets in.
3) Limit debate and rebuttal. When an issue is raised by a reviewer, there may not
be universal agreement on its impact. Rather than spending time debating the
question, the issue should be recorded for further discussion off-line.
4) Enunciate problem areas, but don't attempt to solve every problem noted. A
review is not a problem-solving session. The solution of a problem can often be
accomplished by the producer alone or with the help of only one other
individual. Problem solving should be postponed until after the review meeting.
5) Take written notes. It is sometimes a good idea for the recorder to make notes on
a wall board, so that wording and priorities can be assessed by other reviewers
as information is recorded.
6) Limit the number of participants and insist upon advance preparation. Two
heads are better than one, but 14 are not necessarily better than 4. Keep the
number of people involved to the necessary minimum. However, all review team
members must prepare in advance. Written comments should be solicited by the
review leader (providing an indication that the reviewer has reviewed the
material).
7) Develop a checklist for each product that is likely to be reviewed. A checklist
helps the review leader to structure the FTR meeting and helps each reviewer to
focus on important issues. Checklists should be developed for analysis, design,
code, and even test documents.
8) Allocate resources and schedule time for FTRs. For reviews to be effective, they
should be scheduled as a task during the software engineering process. In
addition, time should be scheduled for the inevitable modifications that will
occur as the result of an FTR.
9) Conduct meaningful training for all reviewers. To be effective all review
participants should receive some formal training. The training should stress both
process-related issues and the human psychological side of reviews. Freedman
and Weinberg [FRE90] estimate a one-month learning curve for every 20 people
who are to participate effectively in reviews.
10) Review your early reviews. Debriefing can be beneficial in uncovering problems
with the review process itself. The very first product to be reviewed should be the
review guidelines themselves.
Because many variables (e.g., number of participants, type of work products, timing
and length, specific review approach) have an impact on a successful review, a
software organization should experiment to determine what approach works best in
a local context. Porter and his colleagues [POR95] provide excellent guidance for
this type of experimentation.

Formal Approaches to SQA
In the preceding sections, we have argued that software quality is everyone's job;
that it can be achieved through competent analysis, design, coding, and testing, as
well as through the application of formal technical reviews, a multitier testing
strategy, better control of software work products and the changes made to them,
and the application of accepted software engineering standards. In addition,
quality can be defined in terms of a broad array of quality factors and measured
(indirectly) using a variety of indices and metrics.
Over the past two decades, a small, but vocal, segment of the software engineering
community has argued that a more formal approach to software quality assurance
is required. It can be argued that a computer program is a mathematical object
[SOM96]. A rigorous syntax and semantics can be defined for every programming
language, and work is underway to develop a similarly rigorous approach to the
specification of software requirements. If the requirements model (specification) and
the programming language can be represented in a rigorous manner, it should be
possible to apply mathematic proof of correctness to demonstrate that a program
conforms exactly to its specifications.
Attempts to prove programs correct are not new. Dijkstra [DIJ76] and Linger, Mills,
and Witt [LIN79], among others, advocated proofs of program correctness and tied
these to the use of structured programming concepts.

Statistical Software Quality Assurance
Statistical quality assurance reflects a growing trend throughout industry to become
more quantitative about quality. For software, statistical quality assurance implies
the following steps:
1) Information about software defects is collected and categorized.
2) An attempt is made to trace each defect to its underlying cause (e.g.,
nonconformance to specifications, design error, violation of standards, and poor
communication with the customer).
3) Using the Pareto principle (80 percent of the defects can be traced to 20
percent of all possible causes), isolate the 20 percent (the "vital few").
4) Once the vital few causes have been identified, move to correct the problems
that have caused the defects.
This relatively simple concept represents an important step towards the creation of
an adaptive software engineering process in which changes are made to improve
those elements of the process that introduce error.
To illustrate this, assume that a software engineering organization collects
information on defects for a period of one year. Some of the defects are uncovered
as software is being developed. Others are encountered after the software has
been released to its end-users. Although hundreds of different errors are uncovered,
all can be tracked to one (or more) of the following causes:
incomplete or erroneous specifications (IES)
misinterpretation of customer communication (MCC)
intentional deviation from specifications (IDS)
violation of programming standards (VPS)
error in data representation (EDR)
inconsistent component interface (ICI)
error in design logic (EDL)
incomplete or erroneous testing (IET)
inaccurate or incomplete documentation (IID)
error in programming language translation of design (PLT)
ambiguous or inconsistent human/computer interface (HCI)
miscellaneous (MIS)
Mathematical methods along with predefined formulas are used to assess the
different error types to create an error index, which is then used to develop an
overall indication of improvement in software quality.
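As a small illustration of the Pareto step, the script below isolates the "vital few" causes. Only the cause codes come from the list above; the defect counts are hypothetical:

```python
from collections import Counter

# Hypothetical defect log: one cause code per recorded defect
defects = (["IES"] * 205 + ["MCC"] * 156 + ["EDR"] * 106 + ["EDL"] * 86 +
           ["IID"] * 60 + ["IET"] * 53 + ["MIS"] * 44 + ["VPS"] * 32)

def vital_few(defects, threshold=0.8):
    """Return the smallest prefix of causes (most frequent first) that
    accounts for `threshold` of all defects (the Pareto 'vital few')."""
    counts = Counter(defects)
    total = sum(counts.values())
    causes, covered = [], 0
    for cause, n in counts.most_common():
        causes.append(cause)
        covered += n
        if covered / total >= threshold:
            break
    return causes
```

With these counts, five of the eight causes account for over 80 percent of the 742 defects, and corrective effort would be concentrated on them.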

Software Reliability
There is no doubt that the reliability of a computer program is an important element
of its overall quality. If a program repeatedly and frequently fails to perform, it
matters little whether other software quality factors are acceptable.
Software reliability, unlike many other quality factors, can be measured directly
and estimated using historical and developmental data. Software reliability is
defined in statistical terms as "the probability of failure-free operation of a computer
program in a specified environment for a specified time".
Measures of Reliability and Availability
Early work in software reliability attempted to extrapolate the mathematics of
hardware reliability theory to the prediction of software reliability.
A simple measure of reliability is mean-time-between-failure (MTBF), where:
MTBF = MTTF + MTTR
The acronyms MTTF and MTTR are mean-time-to-failure and mean-time-to-repair,
respectively.
In addition to a reliability measure, we must develop a measure of availability.
Software availability is the probability that a program is operating according to
requirements at a given point in time and is defined as:
Availability = [MTTF / (MTTF + MTTR)] x 100%
The MTBF reliability measure is equally sensitive to MTTF and MTTR. The availability
measure is somewhat more sensitive to MTTR, an indirect measure of the
maintainability of software.
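With hypothetical figures (68 hours mean time to failure, 2 hours mean time to repair), these measures can be computed directly:

```python
def mtbf(mttf, mttr):
    """Mean time between failures: MTBF = MTTF + MTTR."""
    return mttf + mttr

def availability(mttf, mttr):
    """Availability as a percentage: MTTF / (MTTF + MTTR) x 100%."""
    return 100.0 * mttf / (mttf + mttr)

# Hypothetical: failures every 68 hours on average, 2 hours to repair
print(mtbf(68, 2))          # 70 hours
print(availability(68, 2))  # ~97.1%
```

Note how availability reflects MTTR: halving the repair time to 1 hour raises availability to about 98.6% while MTBF barely changes, which is why availability is the more sensitive indirect measure of maintainability.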

The ISO 9000 Quality Standards
A quality assurance system may be defined as the organizational structure,
responsibilities, procedures, processes, and resources for implementing quality
management. Quality assurance systems are created to help organizations ensure
their products and services satisfy customer expectations by meeting their
specifications. These systems cover a wide variety of activities encompassing a
product's entire life cycle including planning, controlling, measuring, testing and
reporting, and improving quality levels throughout the development and
manufacturing process. ISO 9000 describes quality assurance elements in generic
terms that can be applied to any business regardless of the products or services
offered.
The ISO 9000 standards have been adopted by many countries including all
members of the European Community, Canada, Mexico, the United States,
Australia, New Zealand, and the Pacific Rim. Countries in Latin and South America
have also shown interest in the standards.
After adopting the standards, a country typically permits only ISO registered
companies to supply goods and services to government agencies and public
utilities. Telecommunication equipment and medical devices are examples of
product categories that must be supplied by ISO registered companies. In turn,
manufacturers of these products often require their suppliers to become registered.
Private companies such as automobile and computer manufacturers frequently
require their suppliers to be ISO registered as well.
To become registered to one of the quality assurance system models contained in
ISO 9000, a company's quality system and operations are scrutinized by third-party
auditors for compliance to the standard and for effective operation. Upon
successful registration, a company is issued a certificate from a registration body
represented by the auditors. Semi-annual surveillance audits ensure continued
compliance to the standard.
The ISO Approach to Quality Assurance Systems
The ISO 9000 quality assurance models treat an enterprise as a network of
interconnected processes. For a quality system to be ISO compliant, these processes
must address the areas identified in the standard and must be documented and
practiced as described.
ISO 9000 describes the elements of a quality assurance system in general terms.
These elements include the organizational structure, procedures, processes, and
resources needed to implement quality planning, quality control, quality assurance,
and quality improvement. However, ISO 9000 does not describe how an
organization should implement these quality system elements. Consequently, the
challenge lies in designing and implementing a quality assurance system that meets
the standard and fits the company's products, services, and culture.
The ISO 9001 Standard
ISO 9001 is the quality assurance standard that applies to software engineering. The
standard contains 20 requirements that must be present for an effective quality
assurance system. Because the ISO 9001 standard is applicable to all engineering
disciplines, a special set of ISO guidelines (ISO 9000-3) has been developed to help
interpret the standard for use in the software process.
The requirements delineated by ISO 9001 address topics such as management
responsibility, quality system, contract review, design control, document and data
control, product identification and traceability, process control, inspection and
testing, corrective and preventive action, control of quality records, internal quality
audits, training, servicing, and statistical techniques. In order for a software
organization to become registered to ISO 9001, it must establish policies and
procedures to address each of the requirements just noted (and others) and then
be able to demonstrate that these policies and procedures are being followed.

The SQA Plan
The SQA Plan provides a road map for instituting software quality assurance.
Developed by the SQA group, the plan serves as a template for SQA activities that
are instituted for each software project.
A standard for SQA plans has been recommended by the IEEE [IEE94]. Initial sections
describe the purpose and scope of the document and indicate those software
process activities that are covered by quality assurance. All documents noted in the
SQA Plan are listed and all applicable standards are noted. The management
section of the plan describes SQA's place in the organizational structure, SQA tasks
and activities and their placement throughout the software process, and the
organizational roles and responsibilities relative to product quality.
The documentation section describes (by reference) each of the work products
produced as part of the software process. These include
project documents (e.g., project plan)
models (e.g., ERDs, class hierarchies)
technical documents (e.g., specifications, test plans)
user documents (e.g., help files)
In addition, this section defines the minimum set of work products that are
acceptable to achieve high quality. The standards, practices, and conventions
section lists all applicable standards and practices that are applied during the
software process (e.g., document standards, coding standards, and review
guidelines). In addition, all project, process, and (in some instances) product metrics
that are to be collected as part of software engineering work are listed.
The reviews and audits section of the plan identifies the reviews and audits to be
conducted by the software engineering team, the SQA group, and the customer. It
provides an overview of the approach for each review and audit.
The test section references the Software Test Plan and Procedure. It also defines test
record-keeping requirements. Problem reporting and corrective action defines
procedures for reporting, tracking, and resolving errors and defects, and identifies
the organizational responsibilities for these activities.
The remainder of the SQA Plan identifies the tools and methods that support SQA
activities and tasks; references software configuration management procedures for
controlling change; defines a contract management approach; establishes
methods for assembling, safeguarding, and maintaining all records; identifies training
required to meet the needs of the plan; and defines methods for identifying,
assessing, monitoring, and controlling risk.

Summary
Software quality assurance is an umbrella activity that is applied at each step in the
software process. SQA encompasses procedures for the effective application of
methods and tools, formal technical reviews, testing strategies and techniques,
procedures for change control, procedures for assuring compliance to standards,
and measurement and reporting mechanisms.
SQA is complicated by the complex nature of software quality, an attribute of
computer programs that is defined as "conformance to explicitly and implicitly
specified requirements." But when considered more generally, software quality
encompasses many different product and process factors and related metrics.
Software reviews are one of the most important SQA activities. Reviews serve as
filters throughout all software engineering activities, removing errors while they are
relatively inexpensive to find and correct. The formal technical review is a stylized
meeting that has been shown to be extremely effective in uncovering errors.
To properly conduct software quality assurance, data about the software
engineering process should be collected, evaluated, and disseminated. Statistical
SQA helps to improve the quality of the product and the software process itself.
Software reliability models extend measurements, enabling collected defect data to
be extrapolated into projected failure rates and reliability predictions.
In summary, we recall the words of Dunn and Ullman [DUN82]: "Software quality
assurance is the mapping of the managerial precepts and design disciplines of
quality assurance onto the applicable managerial and technological space of
software engineering." The ability to ensure quality is the measure of a mature
engineering discipline. When the mapping is successfully accomplished, mature
software engineering is the result.
SOFTWARE RELIABILITY MEASUREMENT AND PREDICTION
Software reliability is the most important and most measurable aspect of software
quality, and it is very customer oriented. It is a measure of how well the program's
function meets its operational requirements. As our global society becomes more
dependent on information in the production of goods and services, the pressure for
higher quality, lower cost, and faster delivery of software products is increasing.
The following terms will aid understanding.
Terms:
1. Software reliability is defined as the probability of failure-free operation of a
computer program for a specified time in a specified environment.
2. Failure: the departure of a program's operation from its requirements.
3. Failure intensity: the rate at which failures occur with respect to time; it is an
alternative way of expressing software reliability.
4. Fault: a defect in a program that causes failure.
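The link between failure intensity and reliability is commonly expressed, under the standard assumption of a constant failure intensity λ, by the exponential model R(t) = e^(−λt). This is a generic sketch of that textbook relationship, not a formula given in the text:

```python
import math

def reliability(failure_intensity: float, t: float) -> float:
    """Probability of failure-free operation for a duration t,
    assuming a constant failure intensity (exponential model)."""
    return math.exp(-failure_intensity * t)

# e.g. 0.02 failures per hour of operation, over a 10-hour run
print(round(reliability(0.02, 10.0), 4))  # 0.8187
```

A lower failure intensity or a shorter specified time both raise the reliability figure, which is why both the time and the environment must be stated for the number to be meaningful.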
Fault
Software reliability is influenced by fault introduction resulting from new or modified
code. These software faults are also called design faults; design faults result from human
error in the development process and in maintenance. A design fault causes failure under
certain circumstances, and the probability of activating a design fault is typically usage
dependent and time independent. The removal of faults is called debugging.
Note: the term "design" is used in a broad sense in software dependability and refers to all
software development steps from requirements to realization.
MEASURING RELIABILITY
In measuring the reliability of a product, the main issue is that of collecting the
accurate and complete failure and population data needed for determining
reliability. Failure data is obtained through a product service organization, where users
can report failures when they encounter them, while population data is obtained from
sales figures.
Note: this approach to measuring reliability can work for large server-type software
products, because the usage profile is similar, the population data is well known, and
failures are likely to be reported due to the nature of the customer base. But this
assumption does not hold for mass-market products, because:
Users have varying operational profiles.
The population data for different user groups is not known.
Different users have different inclinations (the way they feel) toward failure reporting.

The key issues when measuring the reliability of a software product are:
1. Failure Classification
Reliability is concerned with the frequency of different types of failure, so we need a
clear and unambiguous classification of failures. A failure classification scheme should be
general and comprehensive and should permit a unique classification of each failure. The
failure classification has to be from the user's perspective, as we are trying to
capture the reliability experience of the user.
Failures are partitioned into unplanned events, planned events and configuration
failures.
Unplanned events are caused by software bugs.
Planned events are those where the software shuts down in a planned manner to
perform some housekeeping tasks.
Configuration failures occur due to problems in configuration settings; in many
systems, configuration failures account for a large percentage of failures.
The types of events included under each category are:
1. Unplanned Events
o Crashes.
o Hangs.
o Functionally incorrect responses.
o Responses that are ultimately too fast or too slow.
2. Planned Events
o Updates requiring restart.
o Configuration changes requiring restart.
3. Configuration Failures
o Application/system incompatibility errors.
o Installation/setup failures.
This failure classification provides a framework for counting failures.
Note: different products may choose to focus on specific types of failures only,
depending on what is of importance to their users and the overhead of measurement.
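The three-way classification above lends itself to a simple tally over reported events. A minimal sketch; the category names mirror the list above, while the event identifiers are hypothetical:

```python
from collections import Counter

# Map each recordable event type to its failure category.
CATEGORY = {
    "crash": "unplanned",
    "hang": "unplanned",
    "incorrect_response": "unplanned",
    "update_restart": "planned",
    "config_change_restart": "planned",
    "incompatibility_error": "configuration",
    "setup_failure": "configuration",
}

def tally(events):
    """Count reported failures per category, ignoring unknown event types."""
    return Counter(CATEGORY[e] for e in events if e in CATEGORY)

counts = tally(["crash", "hang", "setup_failure", "crash", "update_restart"])
print(counts["unplanned"])  # 3
```

A product that cares only about unplanned events, per the note above, would simply restrict the mapping to that category.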

2. The Population Size
A key piece of data we need for determining reliability is the population size: how many
units of the product are in operation. In the past, sales information has often been used.
Using sales data for a mass-market product poses a new problem, because many
product manufacturers use multiple distribution channels to sell a product. The
manufacturer records a sale when the product is sold to the channel, but when
the product is actually installed onto a computer by a user is often not known.
Using the entire user population base for reliability would require obtaining failure data
from this base, which is much harder for a widely sold mass-market product. A
random sample (called the observed group) is proposed to help record failure data, and
regular statistical techniques can be used to determine the sample size such that the final
result is accurate to the desired degree.
Note: if the population size is fixed early, reliability growth with age can be tracked.
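The "regular statistical techniques" for sizing the observed group would typically be the standard sample-size formula for estimating a proportion, n = z²·p(1−p)/e². This is a generic sketch of that formula under those standard assumptions; the text itself prescribes no specific method:

```python
import math

def sample_size(z: float, p: float, margin: float) -> int:
    """Sample size needed to estimate a proportion p to within the
    given margin of error, at the confidence level implied by z."""
    return math.ceil(z * z * p * (1 - p) / (margin * margin))

# 95% confidence (z ≈ 1.96), worst-case p = 0.5, ±5% margin of error
print(sample_size(1.96, 0.5, 0.05))  # 385
```

The worst-case choice p = 0.5 maximizes p(1−p), so the resulting sample size is safe whatever the true failure-reporting proportion turns out to be.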

3. Obtaining Failure Data
For reliability computation, we need a mechanism to collect failure data, where the
failures are occurring on systems used by users distributed around the world. In the past,
failures reported by users to the PSO (product service organization) have been used.
But it is well known that customers do not report all the problems they encounter, as they
sometimes solve them themselves. This non-reporting is far more pronounced with mass-
market products.
This method of data collection is useful for trend analysis and for getting some general
sense of reliability, but will not lead to an accurate determination of reliability. Another way
failures can be reported by users is polling, in which the users in the
observed group are periodically asked to fill in a form reporting the failures they experienced in the
last 24 hours. The most accurate data collection from the observed group will occur if the
data is collected and reported automatically through proper instrumentation and triggers
in the products. This instrumentation is called event logging: the
mechanism that provides the ability for a product to record special events. Products using
an event-logging mechanism have to be programmed to record their specified events in the
log. The events will typically be based on user interactions (to capture usage time) and
the program state and exit status (to capture failure data).
A subset of the event log can be used to determine the reliability and availability
of the products. Event logging has been used in operating systems to assist in the
management and repair of systems and to determine their availability and reliability.
However, event logs have not been used much for measuring the reliability of mass-market
products.
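The event-logging mechanism described above (user interactions for usage time, program state and exit status for failure data) can be sketched as a small append-only logger. All class and event names here are hypothetical; the text prescribes no particular record format:

```python
import time

class EventLog:
    """Append-only in-memory event log; a real product would write
    each record to durable storage and upload it for analysis."""

    def __init__(self):
        self.records = []

    def record(self, event: str, **details):
        self.records.append({"time": time.time(), "event": event, **details})

    def failures(self):
        """The subset of the log used for reliability computation."""
        return [r for r in self.records if r["event"] == "failure"]

log = EventLog()
log.record("session_start")          # user interaction -> usage time
log.record("failure", kind="crash")  # program state -> failure data
log.record("session_end", exit_status=0)
print(len(log.failures()))  # 1
```

Pairing the `session_start`/`session_end` records gives usage time, while the `failure` subset gives the numerator of the failure rate, which is exactly the split the text describes.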
4. Usage Time
The actual usage time of the product by the user needs to be determined to be able to
calculate the failure rate. Usage duration may vary considerably among users of
mass-market products, and the failures encountered by a user clearly depend on the
amount of usage of the product: the longer the usage duration, the greater the chance
of encountering failures. So, to get an accurate idea of the reliability of the product in
use, we need to capture the usage time.
Note: usage-time collection throws up new issues for mass-market products, as the use
of such products is generally spread over many sessions.
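Because usage is spread over many sessions, the denominator of the failure rate has to be accumulated session by session. A minimal sketch of that bookkeeping; the session data is illustrative:

```python
def failure_rate(failures: int, sessions) -> float:
    """Failures per hour of accumulated usage across all sessions.
    Each session is a (start_hour, end_hour) pair on a common clock."""
    usage_hours = sum(end - start for start, end in sessions)
    return failures / usage_hours

# Three sessions totalling 10 hours of use, with 2 failures observed.
rate = failure_rate(2, [(0, 4), (6, 9), (20, 23)])
print(rate)  # 0.2
```

Summing only the session durations, rather than elapsed calendar time, is what makes the rate comparable between a heavy user and an occasional one.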
5. Hardware and Software Configuration
For server-type software, the underlying hardware configuration is often well defined
and understood. This is not so for mass-market products: a product may run as a
client or as a server, on a machine with lots of memory or a machine with little
memory, on a machine with a network connection or without one, and so on. Besides the
hardware configuration, the software may coexist with many other types of software
resident on the computer.
The failure rate of software depends on the load on the hardware and the capacity of the
hardware.

HOW CAN SOFTWARE RELIABILITY MEASUREMENT HELP YOU?
1. Using software reliability measures, the developer-customer dialogue is substantially
enhanced.
2. Reliability figures can readily be related to the operational costs of failure.
This will help the customer to understand the real reliability requirements
of the system in question.
It will help the developer to relate the reliability level requested to
development costs.
3. It is beneficial to the user, because the user is concerned with the efficient
operation of the system.
4. It guides the developer to better decisions, because:
a. In the system engineering stages, reliability measures promote quantitative specification of design
goals, schedule and resources required.
b. They determine quality levels during testing.
c. They help in better management of project resources.


CSC 532 ASSIGNMENT GROUP 8

TOPIC: PROJECT MANAGEMENT

GROUP MEMBERS
NAMES MATRIC NO
IMORU VICTOR O. 080805046
ADEJONPE OLANREWAJU 080805005
SALAMI RUKAYAT A. 080805084
OLALEKEN HASSAN A. 080805042
BULUKU JOSHUA


INTRODUCTION
Project management has been practiced since early civilization. Until the beginning of the twentieth
century, civil engineering projects were generally managed by
creative architects and engineers; project management as a discipline was not accepted. It was in the
1950s that organizations started to systematically apply project management tools and
techniques to complex projects. As a discipline, project management developed from several fields
of application including construction, engineering, and defense activity. Two forefathers of project
management are commonly known: Henry Gantt, called the father of planning and control
techniques, who is famous for his use of the Gantt chart as a project management tool; and Henri Fayol, for
his creation of the five management functions which form the foundation of the body of knowledge
associated with project and program management. The 1950s marked the beginning of the modern
project management era, when project management became recognized as a distinct discipline arising
from the management discipline.
DEFINITION
A project can be defined in many ways:
A project is a temporary endeavor undertaken to create a unique product, service, or result. Operations, on
the other hand, is work done in organizations to sustain the business. Projects are different from operations
in that they end when their objectives have been reached or the project has been terminated.
A project is temporary. A project's duration might be just one week or it might go on for years, but every
project has an end date. You might not know that end date when the project begins, but it's there
somewhere in the future. Projects are not the same as ongoing operations, although the two have a great
deal in common.
A project is an endeavor. Resources, such as people and equipment, need to do work. The endeavor is
undertaken by a team or an organization, and therefore projects have a sense of being intentional,
planned events. Successful projects do not happen spontaneously; some amount of preparation and
planning happens first.
Every project creates a unique product or service. This is the deliverable for the project and the reason
why that project was undertaken.

PROJECT ATTRIBUTES
Projects come in all shapes and sizes. The following attributes help us to define a project further:
A project has a unique purpose. Every project should have a well-defined objective. For example, many
people hire firms to design and build a new house, but each house, like each person, is unique.
A project is temporary. A project has a definite beginning and a definite end. For a home construction
project, owners usually have a date in mind when they'd like to move into their new homes.
A project is developed using progressive elaboration, or in an iterative fashion.
A project requires resources, often from various areas. Resources include people, hardware,
software, or other assets. Many different types of people, skill sets, and resources are needed
to build a home.
A project should have a primary customer or sponsor. Most projects have many interested parties
or stakeholders, but someone must take the primary role of sponsorship. The project sponsor
usually provides the direction and funding for the project.

PROJECT CONSTRAINTS
Like any human undertaking, projects need to be performed and delivered under certain constraints.
Traditionally, these constraints have been listed as scope, time, and cost. These are also referred to as the
Project Management Triangle, where each side represents a constraint. One side of the triangle cannot be
changed without impacting the others. A further refinement of the constraints separates product
'quality' or 'performance' from scope, and turns quality into a fourth constraint. The three sides are:
Time
Cost
Scope

The time constraint refers to the amount of time available to complete a project. The cost constraint refers
to the budgeted amount available for the project. The scope constraint refers to what must be done to
produce the project's end result. These three constraints are often competing constraints: increased
scope typically means increased time and increased cost, a tight time constraint could mean increased
costs and reduced scope, and a tight budget could mean increased time and reduced scope.
The discipline of project management is about providing the tools and techniques that enable the project
team (not just the project manager) to organize their work to meet these constraints.
Another approach to project management is to consider the three constraints as finance, time and human
resources. If you need to finish a job in a shorter time, you can allocate more people to the problem, which
in turn will raise the cost of the project, unless by doing the task quicker we reduce costs elsewhere
in the project by an equal amount.
Time:
For analytical purposes, the time required to produce a product or service is estimated using several
techniques. One method is to identify the tasks needed to produce the deliverables, documented in a
work breakdown structure or WBS. The work effort for each task is estimated, and those estimates are
rolled up into the final deliverable estimate.
The tasks are also prioritized, dependencies between tasks are identified, and this information is
documented in a project schedule. The dependencies between the tasks can affect the length of
the overall project (dependency constraint), as can the availability of resources (resource constraint).
Time is not considered a cost nor a resource, since the project manager cannot control the rate at which it is
expended. This makes it different from all other resources and cost categories.
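Rolling task estimates up through a WBS, as described above, is just a recursive sum over the tree of tasks. A minimal sketch with an invented two-level WBS (the task names and effort figures are illustrative only):

```python
def rollup(node) -> float:
    """Total estimated effort for a WBS node: a leaf carries its own
    estimate; an internal node is the sum of its children."""
    if isinstance(node, (int, float)):
        return node
    return sum(rollup(child) for child in node.values())

# Hypothetical WBS with effort in person-days at the leaves.
wbs = {
    "design": {"requirements": 5, "architecture": 8},
    "build": {"coding": 20, "unit tests": 7},
}
print(rollup(wbs))  # 40
```

The recursion works for any depth of decomposition, so adding a third level of sub-tasks changes the estimate without changing the rollup logic.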
Cost:
The cost to develop a project depends on several variables including: labor rates, material rates, risk
management, plant (buildings, machines, etc.), equipment, and profit. When hiring an independent
consultant for a project, cost will typically be determined by the consultant's or firm's per diem rate
multiplied by an estimated quantity for completion.
Scope:
Scope is the requirement specified for the end result. The overall definition of what the project is supposed to
accomplish, and a specific description of what the end result should be or accomplish, can be said to be the
scope of the project. A major component of scope is the quality of the final product. The amount of time put
into individual tasks determines the overall quality of the project. Some tasks may require a given amount of
time to complete adequately, but given more time could be completed exceptionally. Over the course of a
large project, quality can have a significant impact on time and cost, or vice versa.
PROJECT MANAGEMENT
Project management is the application of knowledge, skills, tools and techniques to project activities to
meet the project requirements. The effectiveness of project management is critical in assuring the success
of any substantial activity. Areas of responsibility for the person handling the project include planning,
control and implementation. A project should be initiated with a feasibility study, in which a clear definition
of the goals and ultimate benefits needs to be determined. Senior managers' support for projects is
important so as to ensure authority and direction throughout the project's progress, and also to
ensure that the goals of the organization are effectively achieved in this process.
Knowledge, skills, goals and personalities are the factors that need to be considered within project
management. The project manager and his/her team should collectively possess the necessary and
requisite interpersonal and technical skills to facilitate control over the various activities within the
project.
The stages of implementation must be articulated at the project planning phase. Disaggregating the
stages at this early point assists in the successful development of the project by providing a number of
milestones that need to be accomplished for completion. In addition to planning, the control of the evolving
project is also a prerequisite for its success. Control requires adequate monitoring and feedback
mechanisms by which senior management and project managers can compare progress against initial
projections at each stage of the project. Monitoring and feedback also enable the project manager to
anticipate problems and therefore take pre-emptive and corrective measures for the benefit of the project.
Projects normally involve the introduction of a new system of some kind and, in almost all cases, new
methods and ways of doing things. This impacts the work of others: the "users". User interaction is an
important factor in the success of projects and, indeed, the degree of user involvement can influence the
extent of support for the project or its implementation plan. A project manager is the one who is
responsible for establishing communication between the project team and the user. Thus one of the
most essential qualities of the project manager is that of being a good communicator, not just within the
project team itself, but with the rest of the organization and the outside world as well.
PROJECT MANAGEMENT TOOLS AND TECHNIQUES
Project planning is at the heart of project management. One can't manage and control project activities if
there is no plan. Without a plan, it is impossible to know if the correct activities are under way, if the available
resources are adequate, or if the project can be completed within the desired time. The plan becomes the
road map that the project team members use to guide them through the project activities. Project
management tools and techniques assist project managers and their teams in carrying out work in all nine
knowledge areas. For example, some popular time management tools and techniques include Gantt
charts, project network diagrams, and critical path analysis. Table 1.1 below lists some commonly used tools
and techniques by knowledge area.
Knowledge Area: Tools & Techniques
Integration Management: Project selection methods, methodologies, stakeholder analyses, project
charters, project management plans, project management software, change requests, change control
boards, project review meetings, lessons-learned reports.
Scope Management: Scope statements, work breakdown structures, mind maps, statements of
work, requirements analyses, scope management plans, scope verification techniques, and scope
change controls.
Cost Management: Net present value, return on investment, payback analyses, earned value
management, project portfolio management, cost estimates, cost management plans, cost baselines.
Time Management: Gantt charts, project network diagrams, critical path analyses, crashing,
fast tracking, schedule performance measurements.
Human Resource Management: Motivation techniques, empathic listening, responsibility assignment
matrices, project organizational charts, resource histograms, team-building exercises.
Quality Management: Quality metrics, checklists, quality control charts, Pareto diagrams,
fishbone diagrams, maturity models, statistical methods.
Risk Management: Risk management plans, risk registers, probability/impact matrices, risk
rankings.
Communication Management: Communications management plans, kick-off meetings, conflict
management, communications media selection, status and progress reports, virtual communications,
templates, project Web sites.
Procurement Management: Make-or-buy analyses, contracts, requests for proposals or quotes, source
selections, supplier evaluation matrices.

PROJECT LIFE CYCLE
The project life cycle refers to a logical sequence of activities to accomplish the project's goals or
objectives. Regardless of scope or complexity, any project goes through a series of stages during its life. There
is first an Initiation or Starting phase, in which the outputs and critical success factors are defined; followed by
a Planning phase, characterized by breaking down the project into smaller parts/tasks; an Execution phase,
in which the project plan is executed; and lastly a Closure or Exit phase, which marks the completion
of the project. Project activities must be grouped into phases because, by doing so, the project manager
and the core team can efficiently plan and organize resources for each activity, and also objectively
measure achievement of goals and justify their decisions to move ahead, correct, or terminate. It is of great
importance to organize project phases into industry-specific project cycles. Why? Not only because each
industry sector involves specific requirements, tasks, and procedures when it comes to projects, but also
because different industry sectors have different needs for life-cycle management methodology. And paying
close attention to such details is the difference between doing things well and excelling as project
managers. Diverse project management tools and methodologies prevail in the different project cycle
phases. Let's take a closer look at what's important in each one of these stages:
Project Initiation
The initiation stage determines the nature and scope of the development. If this stage is not
performed well, it is unlikely that the project will be successful in meeting the business's needs. The
key project controls needed here are an understanding of the business environment and making
sure that all necessary controls are incorporated into the project. Any deficiencies should be
reported and a recommendation should be made to fix them. The initiation stage should include a
plan that encompasses the following areas:
Analyzing the business needs/requirements in measurable goals.
Reviewing the current operations.
Conceptual design of the operation of the final product.
Equipment and contracting requirements, including an assessment of long-lead-time items.
Financial analysis of the costs and benefits, including a budget.
Stakeholder analysis, including users and support personnel for the project.
Project charter including costs, tasks, deliverables, and schedule.
Planning and Design
After the initiation stage, the system is designed. Occasionally, a small prototype of the
final product is built and tested. Testing is generally performed by a combination of testers and end
users, and can occur after the prototype is built or concurrently. Controls should be in place to
ensure that the final product will meet the specifications of the project charter. The results of the
design stage should include a product design that:
Satisfies the project sponsor (the person who is providing the project budget), end user, and
business requirements.
Functions as it was intended.
Can be produced within acceptable quality standards.
Can be produced within time and budget constraints.
Execution and Controlling
Monitoring and controlling consists of those processes performed to observe project execution
so that potential problems can be identified in a timely manner and corrective action can be taken,
when necessary, to control the execution of the project. The key benefit is that project performance is
observed and measured regularly to identify variances from the project management plan. Monitoring
and controlling includes:
Measuring the ongoing project activities (where we are);
Monitoring the project variables (cost, effort, scope, etc.) against the project
management plan and the project performance baseline (where we should be);
Identifying corrective actions to address issues and risks properly (how can we get on
track again);
Influencing the factors that could circumvent integrated change control, so only
approved changes are implemented.
In multi-phase projects, the monitoring and controlling process also provides feedback between project
phases, in order to implement corrective or preventive actions to bring the project into compliance with the
project management plan.
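Comparing "where we are" against "where we should be" reduces to checking each project variable's actual value against its baseline. A minimal sketch of that variance check; the tolerance and the variable names are illustrative, not from the text:

```python
def variances(baseline: dict, actual: dict, tolerance: float = 0.1):
    """Return the project variables whose actual values deviate from
    the baseline by more than the given fraction (e.g. 0.1 = 10%)."""
    flagged = {}
    for var, planned in baseline.items():
        deviation = (actual[var] - planned) / planned
        if abs(deviation) > tolerance:
            flagged[var] = deviation
    return flagged

baseline = {"cost": 100_000, "effort_days": 200, "scope_points": 80}
actual = {"cost": 125_000, "effort_days": 210, "scope_points": 80}
print(variances(baseline, actual))  # {'cost': 0.25}
```

Only cost is flagged here: a 25% overrun exceeds the 10% tolerance, while the 5% effort slip does not, which is the kind of signal that triggers the corrective actions listed above.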
Project Maintenance is an ongoing process, and it includes:
Continuing support of end users
Correction of errors
Updates of the software over time
In this stage, auditors should pay attention to how effectively and quickly user problems are resolved.
Over the course of any IT project, the work scope may change. Change is a normal and expected
part of the process. Changes can be the result of necessary design modifications, differing site
conditions, material availability, client-requested changes, value engineering and impacts from third
parties, to name a few. Beyond executing the change in the field, the change normally needs to be
documented to show what was actually developed. This is referred to as change management. Hence,
the owner usually requires a final record to show all changes or, more specifically, any change that modifies
the tangible portions of the finished work. The record is made on the contract documents, usually, but
not necessarily limited to, the design drawings. The end product of this effort is what the industry terms as-
built drawings, or more simply, "as-builts".
When changes are introduced to the project, the viability of the project has to be reassessed. It is important
not to lose sight of the initial goals and targets of the project. When changes accumulate, the
forecasted result may no longer justify the original proposed investment in the project.
Closure
Closing includes the formal acceptance of the project and the ending thereof. Administrative activities include
the archiving of the files and documenting lessons learned. This phase consists of:
Project close: Finalize all activities across all of the process groups to formally close the project or a
project phase.
Contract closure: Complete and settle each contract (including the resolution of any open
items) and close each contract applicable to the project or project phase.
PROJECT SCHEDULING
Schedules help in the following ways:
They provide a basis for you to monitor and control project activities.
They help you determine how best to allocate resources so you can achieve the project goal.
They help you assess how time delays will impact the project.
You can figure out where excess resources are available to allocate to other projects.
They provide a basis to help you track project progress.
Project managers have a variety of tools to develop a project schedule, from the relatively simple process of
action planning for small projects, to the use of Gantt charts and network analysis for large projects. Here,
we outline the key tools you will need for schedule development.
Scheduling Inputs
You need several types of inputs to create a project schedule:
Personal and project calendars: Understanding working days, shifts, and resource availability is critical to completing a project schedule.
Description of project scope: From this, you can determine key start and end dates, major assumptions behind the plan, and key constraints and restrictions. You can also include stakeholder expectations, which will often determine project milestones.
Project risks: You need to understand these to make sure there's enough extra time to deal with identified risks and with unidentified risks (risks are identified with thorough Risk Analysis).
Lists of activities and resource requirements: Again, it's important to determine if there are other constraints to consider when developing the schedule. Understanding the resource capabilities and experience you have available, as well as company holidays and staff vacations, will affect the schedule.
A project manager should be aware of deadlines and resource availability issues that may make the schedule less flexible.
Scheduling Tools
Here are some tools and techniques for combining these inputs to develop the schedule:
Schedule Network Analysis: This is a graphic representation of the project's activities, the time it takes to complete them, and the sequence in which they must be done. Project management software is typically used to create these analyses; Gantt charts and PERT charts are common formats.
Critical Path Analysis: This is the process of looking at all of the activities that must be completed, and calculating the 'best line', or critical path, to take so that you'll complete the project in the minimum amount of time. The method calculates the earliest and latest possible start and finish times for project activities, and it uses the dependencies among them to create a schedule of critical activities and dates.
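The forward/backward passes behind Critical Path Analysis can be sketched in a few lines. The tasks, durations, and dependencies below are invented for illustration; the method (earliest start via a forward pass, latest start via a backward pass, critical tasks being those where the two coincide) is the standard one.

```python
# Critical Path Method sketch: forward pass computes earliest start (ES),
# backward pass computes latest start (LS). Tasks with ES == LS are critical.
tasks = {          # task: (duration, [predecessors]) -- hypothetical data
    "A": (3, []),
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (1, ["B", "C"]),
}

es = {}                                   # earliest start times (forward pass)
for t in tasks:                           # dicts preserve insertion order, and
    dur, preds = tasks[t]                 # predecessors are listed before successors
    es[t] = max((es[p] + tasks[p][0] for p in preds), default=0)

project_end = max(es[t] + tasks[t][0] for t in tasks)

ls = {}                                   # latest start times (backward pass)
for t in reversed(list(tasks)):
    succs = [s for s in tasks if t in tasks[s][1]]
    lf = min((ls[s] for s in succs), default=project_end)  # latest finish
    ls[t] = lf - tasks[t][0]

critical = [t for t in tasks if es[t] == ls[t]]
print(project_end, critical)              # duration 8; A, C and D are critical
```

Task B has two days of float (ES 3 versus LS 5), so delaying it slightly does not move the end date; delaying A, C or D delays the whole project.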
Schedule Compression: This tool helps shorten the total duration of a project by decreasing the time allotted for certain activities. It's done so that you can meet time constraints and still keep the original scope of the project. You can use two methods here:
Crashing: This is where you assign more resources to an activity, thus decreasing the time it takes to complete it. This is based on the assumption that the time you save will offset the added resource costs.
Fast Tracking: This involves rearranging activities to allow more parallel work. This means that things you would normally do one after another are now done at the same time. However, do bear in mind that this approach increases the risk that you'll miss things, or fail to address changes.
PROJECT MANAGEMENT SOFTWARE TOOLS
There are many project scheduling software products which can do much of the tedious work of calculating the schedule automatically, and plenty of books and tutorials dedicated to teaching people how to use them. However, before a project manager can use these tools, he should understand the concepts behind the work breakdown structure (WBS), dependencies, resource allocation, critical paths, Gantt charts and earned value. These are the real keys to planning a successful project.
I. Allocate Resources to Tasks: The first step in building the project schedule is to identify the resources required to perform each of the tasks needed to complete the project. A resource is any person, item, tool, or service that is needed by the project and that is either scarce or has limited availability. One or more resources must be allocated to each task. To do this, the project manager must first assign the task to the people who will perform it. For each task, the project manager must identify one or more people on the resource list capable of doing that task and assign it to them. Once a task is assigned, the team member who is performing it is not available for other tasks until the assigned task is completed. While some tasks can be assigned to any team member, most can be performed only by certain people. If those people are not available, the task must wait.
II. Identify Dependencies: Once resources are allocated, the next step in creating a project schedule is to identify dependencies between tasks. A task has a dependency if it involves an activity, resource, or work product that is subsequently required by another task. Dependencies come in many forms: a test plan can't be executed until a build of the software is delivered; code might depend on classes or modules built in earlier stages; a user interface can't be built until the design is reviewed. If Wideband Delphi is used to generate estimates, many of these dependencies will already be represented in the assumptions. It is the project manager's responsibility to work with everyone on the engineering team to identify these dependencies. The project manager should start by taking the WBS and adding dependency information to it: each task in the WBS is given a number, and the number of any task that it is dependent on should be listed next to it as a predecessor. The following figure shows the four ways in which one task can be dependent on another.
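The predecessor lists described above can be checked mechanically: given each task's predecessors, a topological sort yields an order in which no task comes before something it depends on, and it detects circular dependencies that would make the WBS unschedulable. The task names below are invented; the sort itself is Kahn's algorithm.

```python
# Order WBS tasks so every task appears after all of its predecessors
# (Kahn's algorithm). Task names and dependencies are hypothetical.
from collections import deque

preds = {            # task -> list of predecessor tasks
    "design review": [],
    "build": ["design review"],
    "test plan": ["design review"],
    "execute tests": ["build", "test plan"],
}

remaining = {t: set(p) for t, p in preds.items()}   # unmet predecessors
ready = deque(t for t, p in remaining.items() if not p)
order = []
while ready:
    t = ready.popleft()
    order.append(t)
    for s, p in remaining.items():                  # t is done: unblock successors
        if t in p:
            p.remove(t)
            if not p:
                ready.append(s)

if len(order) < len(preds):
    raise ValueError("cyclic dependency -- no valid schedule order")
print(order)
```

Any ordering the loop produces is a legal sequence for the schedule; scheduling software performs essentially this step before laying tasks out on a Gantt chart.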
III. Create Schedules: Once the resources and dependencies are assigned, the software will arrange the tasks to reflect the dependencies. The software also allows the project manager to enter effort and duration information for each task; with this information, it can calculate a final date and build the schedule. The most common form for the schedule to take is a Gantt chart.
IV. Risk Plan and Management: A risk plan is a list of all risks that threaten the project, along with a plan to mitigate some or all of those risks. Some people say that uncertainty is the enemy of planning. If there were no uncertainty, then every project plan would be accurate and every project would go off without a hitch. Unfortunately, real life intervenes, usually at the most inconvenient times. The risk plan is an insurance policy against uncertainty. Once the project team has generated a final set of risks, it has enough information to estimate two things: a rough estimate of the probability that each risk will occur, and the potential impact of that risk on the project if it does eventually materialize. The risks must then be prioritized in two ways: in order of probability, and in order of impact. Both the probability and the impact are measured using a relative scale, by assigning each a number between 1 and 5.
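A common way to combine the two 1-to-5 scales into a single ranking is an exposure score, probability times impact; this product is one conventional refinement of the two separate orderings described above, not the only possible scheme. The risk names and scores below are invented.

```python
# Rank risks by exposure = probability * impact, both on a 1-5 scale.
# Risk names and scores are hypothetical examples.
risks = [
    ("key developer leaves",     2, 5),   # (name, probability, impact)
    ("requirements change late", 4, 3),
    ("vendor delivery slips",    3, 2),
]

by_exposure = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for name, p, i in by_exposure:
    print(f"{name}: P={p}, I={i}, exposure={p * i}")
```

Note how the two orderings disagree: the departure risk has the highest impact (5), but the late requirements change tops the combined ranking (exposure 12 versus 10), which is exactly why the text asks for both views before deciding what to mitigate.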

MONITORING AND CONTROLLING OF PROJECT
To appreciate how project control works, you must first understand that, despite all the effort devoted to developing and gaining commitment to a plan, there is little chance that the resulting project will run precisely according to that plan.
This doesn't mean that you will fail to achieve the objectives of the plan; on the contrary, you must have a very high level of confidence that you can achieve those objectives and deliver the full scope, fit for purpose, on time and to budget.
The plan describes what you would like to do, but it models just one of the infinite number of routes from where you are now to where you want to be. In practice your project will follow a different route to the one shown in your plan; you don't know which one, but you will need control to make sure it is a route that takes you to where you need to be, when you need to be there, and at a cost you can afford.
The power of the plan is that it gives you a baseline against which you can compare actual achievement, cost and time, determine the amount of deviation from plan, and hence take corrective action if required.
The essential requirement for control is to have a plan against which progress can be monitored, to provide the basis for stimulating management action if the plan is not being followed. Control then becomes a regular, frequent iteration of creating the right environment for control.
The basic requirements for control are:
A plan that is:
realistic
credible
detailed enough to be executed
acceptable to those who must execute it (Project Manager and Project Teams)
approved by those who are accountable for its achievement (the SRO/Project Board);
A process for monitoring and managing progress and resource usage;
A project management organization of appropriately skilled people with sufficient authority and time to plan, monitor, report, take decisions and deal with exceptions;
A process to make minor corrections and adjustments to deal with minor deviations and omissions from the plan;
The commitment of those who will provide the resources indicated in the plan (SRO, Project Board, Stakeholders and resource owners in the parent organization and its related agencies).

PROJECT COMMUNICATION PLAN
Good communication among all stakeholders is key to the success of a project. It's important to ensure your project team develops a communication plan so that lack of communication does not derail your goals. Even though you may have identified and analyzed your stakeholders and determined the most effective communications vehicles, without a well-developed and implemented communication plan you may have a recipe for disaster. So how do you develop a communication plan to ensure your project's success?
Following are the two types of communication plans to support and enhance communications throughout your project. The first step in building your plan is to identify your project stakeholders and determine the best communications vehicle. Next, you build your plan.
Two types of communication plan for any sized project
For projects of all sizes, a well-structured communications plan is a must from the beginning. Projects offer multiple opportunities for communications to your key stakeholders, and we recommend exploring two types of communication plans for your project to exploit these opportunities:
Regular or Ongoing Communication Plan
One-time or Event-driven Communication Plan
Regular or Ongoing Communication: Regular, or ongoing, communications include those opportunities you have to communicate to your project team members, sponsors, steering committee members, and other key stakeholders on a regular basis. These types of communication could include your regular status reports, scheduled project team meetings, monthly updates with the steering committee, or regularly scheduled campus updates on a project. Use your stakeholder analysis to develop these routine and ongoing communications for the project.
Review this plan at regular intervals (quarterly) to ensure that you are adequately communicating to those stakeholders who are closest to the project. The chart on the next page provides an example of the types of communications to consider for your regular and ongoing communications. Don't forget to include your regular meetings and even one-on-ones that you may have with your sponsor.
One-time or Event-driven Communication Plan: During the life of any project, opportunities arise for one-time or event-driven communications. Work with your project team to identify those opportunities, like the example timeline. This plan could also include critical-issues sessions, vendor meetings, training schedules, and rollout schedules.
To gain the most advantage from the communications opportunities for your project, review this portion of your communication plan every month with your project team. Review the past month, and then look forward at least six months to ensure that as your project plan changes, you are able to capitalize on every communication opportunity.
When developing your communications plan, keep in mind that the key is to always have the receiver as the focal point, not the sender. Make your communications deliberate and focused. By making sure that your plan is clear and thoroughly outlined, you can help reduce the number of problems and surprises that pop up and have a project as successful as a perfect soufflé.

PROJECT METRICS
Metrics are a set of quantifiable parameters which are used to measure the effectiveness of a project or undertaking. Values are obtained for the parameters for multiple instances of the same entity, and they are compared and interpreted as to the change in effectiveness. For example, if there are multiple versions of a product, one metric could be the user satisfaction level (say 1 to 5, with 1 being least happy and 5 being very happy) with the user interface for each of the versions. The effectiveness of the changes in the user interface can then be measured by the satisfaction level of the users with each of the versions. Project metrics are in-process or project-execution measures that are collected, analyzed and used to drive project process improvement. Reasons for project metrics are shown below:
To provide clear and tangible project status information about project schedule and cost
To identify areas for project process improvement
To demonstrate the results of process improvement efforts
To collect a database of project metrics to analyze trend information or provide historic comparators, and perhaps for use in parametric estimates
To collect project metrics without a clear plan of future action to use those metrics is simply wasting time and effort. In short, only collect project metrics that will be used to drive project process improvements.
REPORTING PERFORMANCE AND PROGRESS
Performance reporting involves collecting, processing and communicating information to key stakeholders regarding the performance of the project. Performance reporting can be conducted using various tools and techniques, most of which have already been described in the previous paragraphs. The most widely used techniques for performance reporting are:
Performance review meetings that take place to assess the project's progress and/or status
Variance analysis, which is about comparing actual project results (in terms of schedule, resources, cost, scope, quality and risk) against planned or expected ones
Earned Value Analysis (EVA), used to assess project performance in terms of time (schedule) and cost (or resources)
Financial and Output Performance Indicators, used to measure financial and physical progress of the project
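Earned Value Analysis reduces to a few standard ratios. Using the conventional definitions (PV = planned value, EV = earned value, AC = actual cost), the schedule and cost performance indices are SPI = EV/PV and CPI = EV/AC; values below 1 indicate a project behind schedule or over budget. The figures below are hypothetical.

```python
# Standard earned-value indices; the input figures are hypothetical.
pv = 100_000   # planned value: budgeted cost of work scheduled to date
ev = 80_000    # earned value: budgeted cost of work actually performed
ac = 90_000    # actual cost of the work performed

spi = ev / pv          # schedule performance index (<1: behind schedule)
cpi = ev / ac          # cost performance index (<1: over budget)
sv = ev - pv           # schedule variance in currency units
cv = ev - ac           # cost variance

print(f"SPI={spi:.2f}  CPI={cpi:.2f}  SV={sv}  CV={cv}")
# SPI=0.80  CPI=0.89  SV=-20000  CV=-10000
```

Here the project has delivered only 80% of the work it planned to have done (SPI 0.80) and is paying more than budgeted for the work it has completed (CPI below 1), which is exactly the kind of tangible status statement the metrics section above calls for.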

PROJECT PROCUREMENT MANAGEMENT
Project Procurement Management includes the processes required to acquire goods and services from outside the performing organization. For simplicity, goods and services, whether one or many, will generally be referred to as a product. The following processes are contained in project procurement management, as listed below:
Procurement Planning: determining what to procure and when.
Solicitation Planning: documenting product requirements and identifying potential sources.
Solicitation: obtaining quotations, bids, offers, or proposals as appropriate.
Source Selection: choosing from among potential sellers.
Contract Administration: managing the relationship with the seller.
Contract Closeout: completion and settlement of the contract, including resolution of any open items.
These processes interact with each other and with the processes in the other knowledge areas as well. Each process may involve effort from one or more individuals or groups of individuals, based on the needs of the project. Although the processes are presented here as discrete elements with well-defined interfaces, in practice they may overlap and interact in ways not detailed here.
Project Procurement Management is discussed from the perspective of the buyer in the buyer-seller relationship. The buyer-seller relationship can exist at many levels on one project. Depending on the application area, the seller may be called a contractor, a vendor, or a supplier.

PROJECT IMPLEMENTATION
After developing the project, the information system is transferred from the development and test environment to the operational environment of the customer. Choosing an inappropriate implementation approach can negatively impact the project's remaining schedule and budget. In general, the project team can take one of four approaches for implementing the IS. These approaches are:
o Direct cutover
o Pilot
o Parallel
o Phased
Direct Cutover: This approach, as shown below, produces the changeover from the old system to the new system instantly. This approach can be effective when quick delivery of the new system is critical, and it may also be appropriate when the system's failure would not have a major impact on the organization, i.e., the system is not mission critical.
Parallel: The parallel approach, as shown below, is the method in which both the new system and the old system operate at the same time, for a specified period of time, in order to check the new system for problems.

Data is input into both systems and the results are verified. This approach is impractical if the systems are dissimilar or do not support each other. The cost of using this approach is relatively high, because both systems are operating, requiring more manpower in terms of management. Using this approach provides confidence that the new system is functioning and performing properly before relying on it entirely. It is also impractical to use this approach when the new system and the old system are technically incompatible.
Pilot: This is a combination of the direct cutover and parallel approaches. The pilot method involves implementing the new system at a selected location, such as a branch office or one department in a company, called the pilot site, while the old system continues to operate for the entire organization.
The risk and cost associated with this method are relatively low, because only one location runs the new system; the new system is installed and implemented only at pilot sites, reducing the risk of failure. After the new system proves successful at the pilot site, it is implemented in the rest of the organization, usually using the direct cutover method.
Phased: The phased approach implements the new system in phases, modules or stages in different parts of the organization incrementally, as shown below. For example, an organization may implement an accounting information system package by first implementing the general ledger component, then accounts payable, and so on.
This method is one of the least risky because implementation only takes effect in part; in case an error occurs in the new system, only the particular affected part is at risk. A phased approach may also allow the project team to learn from its experiences during the initial implementation so that later implementations run more smoothly.
Although the phased approach may take more time than the direct cutover approach, it may be less risky and much more manageable. After all the modules have been tested independently, the new system can be implemented across the organization with far fewer errors. Also, overly optimistic target dates or problems experienced during the early phases of implementation may create a chain reaction that pushes back the scheduled dates of the remaining planned implementations.

SOFTWARE MAINTENANCE
CSC532 GROUP ASSIGNMENT
2012
GROUP SIX
INTRODUCTION
Software maintenance is a very essential part of software development process. It forms part of
the full software development Life Cycle which includes other phases like Requirements
Engineering, Architecting, Design, Implementation, Testing and Software Deployment.
Maintenance is the last stage of the software life cycle. After the product has been released, the
maintenance phase keeps the software up to date with environment changes and changing user
requirements. Software maintenance activities currently account for more than half of the typical
software budget (Glass, 1989). In addition, more than 50 percent of global software developers
are engaged in modifying existing applications (Jones, 2000).
The word maintenance is surprisingly ambiguous in a software context. In normal usage it can
span some 21 forms of modification to existing applications. The two most common meanings of
the word maintenance include: 1) Defect repairs; 2) Enhancements or adding new features to
existing software applications. Although software enhancements and software maintenance in
the sense of defect repairs are usually funded in different ways and have quite different sets of
activity patterns associated with them, many companies lump these disparate software activities
together for budgets and cost estimates. Software maintenance in software engineering is the
modification of a software product after delivery to correct faults, to improve performance or
other attributes. So any work done to change the software after it is in operation is considered to
be maintenance work. The purpose is to preserve the value of software over the time. The value
can be enhanced by expanding the customer base, meeting additional requirements, becoming
easier to use, more efficient and employing newer technology. Maintaining software is expensive
(Smith, 1999). The maintenance phase of the software development life cycle is often the longest
and the most expensive. Maintenance may span for 20 years, whereas development may be 1-2
years. Software applications that take months or years to develop are sometimes in service for
decades, and maintenance often accounts for 65 to 75 percent of total SDLC costs. In response to
this problem, the software industry is exhibiting an increased interest in the benefits yielded by
improved software maintenance processes. The professional community now recognizes the
importance of timely and accurate software maintenance
However, the success of software maintenance greatly depends on the success of the earlier
stages of the Software Development Life Cycle. The earlier phases should be done so that the
product is easily maintainable. The design phase should plan the structure in a way that can be
easily altered. Similarly, the implementation phase should create code that can be easily read,
understood, and changed. Maintenance can only happen efficiently if the earlier phases are done
properly. There are four major problems that can slow down the maintenance process:
unstructured code, maintenance programmers having insufficient knowledge of the system,
documentation being absent, out of date, or at best insufficient, and software maintenance having
a bad image. The success of the maintenance phase relies on these problems being fixed earlier
in the life cycle (Stafford, 2003).








THE SIGNIFICANCE OF SOFTWARE MAINTENANCE
It is estimated that there are more than 100 billion lines of code in production in the world, and as much as 80% of it is unstructured, patched, and badly documented (van Vliet [2000]). It is necessary to keep these software systems operational. Errors and design defects in software must be corrected. The Y2K problem, for example, required adaptive change to a new environment. Systems must also be adapted to
changing environments and user requirement needs. In fact, a substantial proportion of the
resources expended within the Information Technology industry goes towards the maintenance
of software systems. Annual software maintenance cost in the United States has been estimated
to be more than $70 billion for ten billion lines of existing code (Sutherland [1995]). At the
company level, Nokia Inc. used about $90 million for preventive Y2K-bug corrections
(Koskinen [2003]). Many studies were done to investigate the proportional software maintenance
cost, in other words, the cost ratio of new development versus maintenance. The total cost of
system maintenance is estimated to comprise at least 50% of total life cycle costs (van Vliet
[2000]). The proportional maintenance costs range from 49 % for a pharmaceutical company to
75% for an automobile company in some studies according to Takang and Grubb [1996].
Zelkowitz et al [1979] also point out that in many large-scale software systems, only one-fourth
to one-third of the entire life cycle costs can be attributed to software development. Most effort is
spent during the operations and maintenance phase of the software life cycle. In their study of
487 data processing organizations, Lientz and Swanson [1980] reported on the proportion of
maintenance effort allocated to each type of maintenance. Corrective maintenance accounted for
slightly more than 20% of the total, on the average. Adaptive maintenance accounted for slightly
less than 25%. Perfective maintenance accounted for over 50%. In particular, enhancements for
users accounted for 42% of the total maintenance effort. Only 5% was spent on preventive
maintenance activities.
TYPES OF SOFTWARE MAINTENANCE
E.B. Swanson was one of the first to categorize software maintenance (Pigoski,1996). He
defined three different categories of software maintenance: corrective, adaptive, and perfective.
The 1993 IEEE Standard on Software Maintenance further defined the categories and added a
fourth preventive maintenance category.
Corrective Maintenance
Corrective maintenance involves changing a software application to remove errors (Chapin,
2000b). The three main causes of corrective maintenance are design errors, logic errors, and
coding errors. Design errors occur when, for example, changes made to the software are
incorrect, incomplete, wrongly communicated or the change request is misunderstood. Logic
errors result from invalid tests and conclusions, incorrect implementation of design
specifications, faulty logic flow or incomplete test of data. Coding errors are caused by incorrect
implementation of detailed logic design and incorrect use of the source code logic. Defects are
also caused by data processing errors and system performance errors. All these errors, sometimes
called residual errors or bugs, prevent the software from conforming to its agreed
specification. The need for corrective maintenance is usually initiated by bug reports drawn up
by the end users (Coenen and Bench-Capon [1993]). Examples of corrective maintenance
include correcting a failure to test for all possible conditions or a failure to process the last record
in a file (Martin and McClure [1983]).However, new faults are often introduced during the
maintenance process (Kammer, 2000). For example, the more difficult it is to maintain a
software application; the more likely it is that software updates will not be correctly installed by
users. This often leads to system faults that further impact operation, functionality, and security.
Corrective maintenance accounts for approximately 20 percent of all software maintenance. A
study conducted by Mockus and Votta (2000) used keyword classification rules to identify the corrective maintenance activities of large-scale software systems. Keywords included fix, bug, error, fix-up, and fail. Results showed that, when compared to the other types, corrective changes
tended to be the most difficult. In addition, the interval for corrective changes was the smallest.
The study also showed a strong relationship between the type and size of a change and the time
required to carry it out. A related article by Mayrhauser and Vans (1997) reported on a field
study conducted to understand the corrective maintenance of large-scale software by professional
software maintenance engineers. The study found that corrective maintenance was not a popular
activity. Maintenance engineers were required to have knowledge about the specific application
in addition to language and domain skills. In many cases, the task was assigned to novices with
some language skills or to an expert programmer that lacked experience in the new language. In
both cases, corrective maintenance activities involved learning on the job.
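The keyword-based classification attributed above to Mockus and Votta can be sketched as a simple rule: a change-log message containing one of the corrective keywords is labeled a defect repair. This is only an illustrative simplification of their approach, and the sample log messages are invented.

```python
# Label a change-log message as corrective if it mentions one of the
# keywords associated with defect repairs. Sample messages are hypothetical.
CORRECTIVE_KEYWORDS = {"fix", "bug", "error", "fix-up", "fail"}

def is_corrective(message: str) -> bool:
    # Normalize punctuation, then look for any corrective keyword.
    words = message.lower().replace(",", " ").replace(".", " ").split()
    return any(w in CORRECTIVE_KEYWORDS for w in words)

print(is_corrective("fix null-pointer bug in parser"))   # True
print(is_corrective("add CSV export feature"))           # False
```

A real classifier would have to handle stemming ("fixed", "failing") and false positives ("error bar rendering"), which is part of why such studies refine their rules well beyond a flat keyword list.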
Adaptive Maintenance
Adaptive software maintenance is any effort that is the result of changes in a software application's operating environment (Burrows, 1984). The term environment in this context
refers to the totality of all conditions and influences which act from outside upon the system, for
example, business rule, government policies, work patterns, software and hardware operating
platforms (Takang and Grubb [1996]). These environmental modifications consist mainly of
changes to the following:
Rules, laws, and regulations that affect the application
Hardware configurations (e.g. new printers)
File structures and data formats
System software (e.g. operating system or utilities)
The need for adaptive maintenance can only be recognized by monitoring the environment
(Coenen and Bench-Capon [1993]). Adaptive maintenance accounts for approximately 20
percent of software maintenance activities. The end user does not see a change in the operation
of the software, but the software maintainer must expend resources to make the necessary
changes. Fioravanti, Nesi, and Stortoni (1999) presented a model for the estimation and
prediction of the adaptive maintenance required for object oriented software systems. Object
oriented modeling has been adopted by the industry recently and older systems are in need of
adaptive maintenance to better meet current user requirements. The paper validated that certain
effort estimation/prediction metrics are also useful for the estimation/prediction of software
maintenance effort. In addition, the paper concluded that during adaptive maintenance
considerable attention should be paid to methods definition and the methods interface. The
complexity of these interfaces for locally defined methods is the most important factor when
estimating adaptive software maintenance effort. In a related paper, Rayside, Kerr, and
Kontogiannis (1998) discussed techniques to automatically detect and adapt to changes in a Java application's library. The rapid evolution of Java libraries, together with Java's run-time linking, may produce incompatibilities between an application and the library it relies on. In the paper, the authors developed and tested a prototype tool that was useful in detecting adaptive maintenance in Java applications developed on new versions of the Java Development Kit (JDK) but deployed on older versions. Self-adaptive software takes this concept one step further by modifying its behavior in response to changes in its operating environment (Oreizy, Gorlick, Taylor, Heimbigner, Johnson, Medvidovic, Quilici, Rosenblum, & Wolf, 1999). Operating
environment includes anything observable by the software system (e.g. end-user input, external
hardware devices and sensors, and program instrumentation). Self-adaptive software systems
would be particularly effective when applied to military systems in which battlefield conditions
(i.e. operating environment) are subject to change without warning. One civilian application
might be the use of unmanned air vehicles (UAV) deployed for land-use management, freeway-
traffic management, and airborne cellular telephone relay stations.

Perfective Maintenance
Perfective maintenance is software maintenance implemented to improve the maintainability,
performance, or other attributes of a computer application (Burrows,1984). It mainly deals with
accommodating to new or changed user requirements. Perfective maintenance concerns
functional enhancements to the system and activities to increase the system's performance or to enhance its user interface (van Vliet [2000]). Furthermore, perfective maintenance includes all
changes, insertions, deletions, modifications, extensions, and enhancements made to a system to
meet evolving and/or expanding user needs (Pigoski, 1996). For example, the addition of
features that were not in the original specification that a user wishes added is perfective
maintenance. Perfective maintenance comprises approximately 60 percent of all software
maintenance. In a related paper, Domsch and Schach (1999) reported on a case study of the
maintenance of an object-oriented application in which a text-based user interface was replaced
with a graphical user interface (GUI). The study found that 94.8 percent of the maintenance
effort was perfective (i.e. GUI development). The paper concluded that adding a GUI to an
existing software application was difficult and time-consuming unless the maintainer had
extensive GUI design experience. The quality of the existing software product and its current
user interface had little impact on this conclusion.

Preventive Maintenance
In 1993, the IEEE added a fourth category of software maintenance: preventive maintenance
(Pigoski, 1996). Some software maintainers classify preventive maintenance under the corrective
category. Preventive maintenance is defined as maintenance performed for the purpose of
preventing problems before they happen. This category is particularly important in safety critical
systems such as in aircraft and the space shuttle. Another definition offered by Vehvilainen
(2000) is that preventive maintenance refers to all software maintenance activities that are
prepared and decided upon regularly. These activities are based upon the analyses of present
conditions and the forecasted needs of the software. In a related article, Chapin (2000) provided
a review of the history of software maintenance and the role of preventive maintenance. The link
between scheduled and preventive maintenance was also discussed. Preventive maintenance was
interpreted to mean improving a software application's future maintainability. Forecasting what
would improve maintainability was shown to be difficult to accomplish. The author also pointed
out that most successful forecasting was done when software maintenance was performed on a
scheduled basis (i.e. preventive maintenance should be incorporated into scheduled
maintenance). The effectiveness of preventive maintenance in enhancing the software
dependability of operational software was explored in another article (Garg, Puliafito, Telek, &
Trivedi, 1998). The paper presented a model for a transaction based software application that
employed preventive maintenance to increase availability, decrease response time, and minimize
the probability of loss. Numerical examples were presented to illustrate the applicability of the
model. The main strength of the model was its ability to capture the dependence of crash/hang
failures and performance degradation on time and instantaneous load.
Among these four types of maintenance, only corrective maintenance is traditional
maintenance. The other types can be considered software evolution. The term evolution has
been used since the early 1960s to characterize the growth dynamics of software (Chapin et al
[2001]). Software evolution is now widely used in the software maintenance community. For
example, The Journal of Software Maintenance added the term evolution to its title to reflect
this transition (Chapin and Cimitile [2001]).

SOFTWARE MAINTENANCE PROCESS
Software Maintenance Process includes the following phases:
a) Problem/modification identification, classification, and prioritization;
b) Analysis;
c) Design;
d) Implementation;
e) Regression/system testing;
f) Acceptance testing; and
g) Delivery





Problem/modification identification, classification, and prioritization
In this phase, software modifications are identified, classified, and assigned an initial
priority ranking. Each modification request (MR) shall be evaluated to determine its
classification and handling priority. Classification shall be identified from the following
maintenance types:
a) Corrective;
b) Adaptive;
c) Perfective; and
d) Emergency.
Metrics/measures and associated factors identified for this phase should be collected and
reviewed at appropriate intervals.
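The classification and prioritization step above can be sketched as a simple data structure. The type names come from the list above; the priority scale, field names, and triage rule are illustrative assumptions, not part of any standard.

```python
from dataclasses import dataclass
from enum import Enum

class MaintenanceType(Enum):
    CORRECTIVE = "corrective"
    ADAPTIVE = "adaptive"
    PERFECTIVE = "perfective"
    EMERGENCY = "emergency"

@dataclass
class ModificationRequest:
    """A modification request (MR) with its classification and priority."""
    mr_id: str
    description: str
    mtype: MaintenanceType
    priority: int = 3  # 1 = highest, 5 = lowest (illustrative scale)

def triage(requests):
    """Order MRs for handling: emergencies first, then by priority rank."""
    return sorted(
        requests,
        key=lambda mr: (mr.mtype is not MaintenanceType.EMERGENCY, mr.priority),
    )

mrs = [
    ModificationRequest("MR-2", "Support new tax rule", MaintenanceType.ADAPTIVE, 2),
    ModificationRequest("MR-1", "Crash on startup", MaintenanceType.EMERGENCY, 1),
    ModificationRequest("MR-3", "Refactor logging", MaintenanceType.PERFECTIVE, 4),
]
print([mr.mr_id for mr in triage(mrs)])  # emergency MR handled first
```

A real maintenance organization would attach many more attributes (originator, affected components, estimated effort), but the core of this phase is exactly this: classify each MR by maintenance type and assign it a handling priority.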
Analysis
The analysis phase shall use the repository information and the MR validated in the
modification identification and classification phase, along with system and project
documentation, to study the feasibility and scope of the modification and to devise a
preliminary plan for design, implementation, test, and delivery. Metrics/measures and
associated factors identified for this phase should be collected and reviewed at
appropriate intervals
Design
In the design phase, all current system and project documentation, existing software and
databases, and the output of the analysis phase (including detailed analysis, statements of
requirements, identification of elements affected, test strategy, and implementation plan)
shall be used to design the modification to the system. Metrics/measures and associated
factors identified for this phase should be collected and reviewed at appropriate intervals.
Implementation
In the implementation phase, the results of the design phase, the current source code, and
project and system documentation (i.e., the entire system as updated by the analysis and
design phases) shall be used to drive the implementation effort.
Metrics/measures and associated factors identified for this phase should be collected and
reviewed at appropriate intervals.
System Test
System testing, as defined in IEEE Std 610.12-1990, shall be performed on the modified
system.
Regression testing is a part of system testing and shall be performed to validate that the
modified code does not introduce faults that did not exist prior to the maintenance
activity.
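A regression check in this sense can be as simple as re-running assertions that captured behavior before the change; the function and its expected values here are hypothetical, not drawn from any standard.

```python
def apply_discount(price, rate):
    """Function under maintenance: price after a percentage discount."""
    return round(price * (1 - rate), 2)

def run_regression_suite():
    """Re-run assertions recorded before the modification. If any fails,
    the maintenance activity introduced a fault that did not exist before."""
    assert apply_discount(100.0, 0.0) == 100.0   # no discount leaves price unchanged
    assert apply_discount(80.0, 0.25) == 60.0    # typical discount case
    return "all regression checks passed"

print(run_regression_suite())
```

In practice such checks would live in a test framework and run automatically after every modification, but the principle is the one stated above: the modified code must still satisfy every assertion that held before the change.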
Acceptance Test
Acceptance tests shall be conducted on a fully integrated system. Acceptance tests shall
be performed by either the customer, the user of the modification package, or a third
party designated by the customer.
An acceptance test is conducted with software that is under SCM in accordance with the
provisions of IEEE Std 828-1998, and in accordance with the IEEE Std 730-1998.
Acceptance testing, as defined in IEEE Std 610.12-1990, shall be performed on the
modified system
Delivery
This sub-clause describes the requirements for the delivery of a modified software
system.
Metrics/measures and associated factors identified for this phase should be collected and
reviewed at appropriate intervals
SOFTWARE MAINTENANCE TOOLS
Software maintenance tools can be defined as anything functional that can assist the software
maintainer to address maintenance problems (Lethbridge & Singer, 1997).
Software maintenance tools are designed to satisfy requirements in the following five
areas:
Analysis and design
Testing
Software configuration management
Reverse engineering
Documentation management
Computer-Aided Software Engineering (CASE) tools currently exist to aid software engineers in
performing a variety of tasks that include requirements analysis, software design, code production,
testing, document generation, and project management. While CASE tools have focused on other
areas of SDLC, the use of CASE tools for software maintenance has the potential to significantly
improve productivity and reduce cost. A study by CASE Associates showed that between 50 and
70 percent of a developer's time was spent making changes to software (Sharon, 1996).
Therefore, the tool with the greatest potential impact on software development is not the
development tool; it is the software maintenance tool. A related paper by Dumke and Winkler
(1997) discussed the use of Computer-Assisted Software Measurement and Evaluation (CAME)
tools. CAME tools are designed for code analysis and measurement. They are applied during the
implementation and maintenance development phases. CAME tool classification includes tools
for model-based software component analysis, metrics application, measurement result
presentation, and statistical analysis and evaluation. Glass (1989) identified software
maintenance documentation as a key factor in the software maintenance process. Understanding
a software application consumes more time than any other maintenance task. Reducing this time
with up-to-date documentation is the goal of software maintenance tools. In a related paper,
Cioch and Palazzolo (1996) discussed a documentation approach and tool developed to
improve the efficiency of software maintainers. The approach identified the four learning
stages of a software maintainer: newcomer, student, intern, and expert. The information and form
of the documentation presented by the documentation tool differed according to the ability of the
maintainer. The approach was successfully used by the U.S. Army TARDEC to maintain the
ground vehicle simulation software used by the Vetronics Simulation Facility. Examples of
maintenance tools include VIFOR 2, xVue, and RETIRE. VIFOR 2 focuses on the deterioration
in structure and documentation that typically occurs during the maintenance of legacy systems
(Rajlich & Adnapally, 1996). The tool employs browsing and hypertext documentation
technologies to provide rapid code navigation along with the incremental recording and retrieval
of documentation. xVue, a software maintenance tool by Telcordia Technologies, allows
maintainers to quickly locate code associated with a particular system feature by highlighting the
source code related to that feature (Telcordia, 2001). xVue also facilitates the mapping of system
features to program components. Another maintenance tool, RETIRE, was developed by the
Source Recovery Company (SRC, 1999). RETIRE automatically converts outdated 4GL (fourth
generation language) code to COBOL. The tool replaces the manual rewrite process for 25 to 5
percent of the cost and completes tasks in 10 to 20 percent of the time.

SOFTWARE MAINTENANCE PROBLEMS
A study by the Software Engineering Institute (SEI) revealed problems believed to be typical of
many software maintenance organizations regardless of the type of maintenance performed
(Dart, Christie, & Brown, 1993). Software maintenance practices are frequently associated with
high cost and low productivity. Problems contributing to these factors include the need for more
effective software maintenance tools, a lack of software documentation, the low status of
maintainers, and the lack of a design-for-maintenance viewpoint in software development
processes.
Effective Software Maintenance Tools
More effective software tools are needed to facilitate maintenance activities such as coding and
testing in addition to related management activities (e.g. budgeting and resource allocation)
(Dart, Christie, & Brown, 1993). Maintenance tool-related issues typically center on the
following:
Availability, quality, and integration of CASE tools
Adequate documentation tools
Configuration management support tools
Lack of reverse engineering tools
Different tooling for lifecycle maintenance versus new software development
Better testing tools and procedures
Effective tools improve maintenance productivity by helping the maintainer understand the
current system and make changes and repairs more efficiently (Sharon, 1996).
Documentation
Documentation is another area in which problems exist (Glass, 1989). There are typically two
extremes in software documentation. The first is a total lack of documentation because of
schedule and budget factors during product development. The second is too much
documentation. As a software application evolves this bulky documentation becomes out of date
and essentially useless. The result in both cases is a lack of quality documentation for the
maintainer to use.
Maintainer Status
Maintaining software applications is not highly regarded in the industry (Smith, 1999).
Maintainers often have little esteem and are viewed as being at the bottom of the programming
hierarchy. In the past, this viewpoint was reinforced by the practice of relegating maintenance
tasks to the less capable while talented developers were given the task of developing new
systems (Sharon, 1996).
Design-for-Maintenance
Software applications designed without considering maintainability are a major maintenance issue
(Dart, Christie, & Brown, 1993). Tight schedules often result in software applications that are
moved into the maintenance phase before all deliverables have been finished. Consequently,
significant maintenance time is devoted to problems related to the support of poorly designed,
coded, tested, and documented software. Designing maintainable software is more than writing
understandable code. It involves looking at programs from the viewpoint of a person required to
change or repair the code (Smith, 1999). For example, programming languages with large and
varied instruction sets allow developers to code an algorithm in a variety of ways. However,
they are also more difficult for a maintainer to understand. The best choice is sometimes a
language that allows a process to be written in a limited number of ways.

SOFTWARE EVOLUTION AND LEHMAN'S LAWS
The term software evolution stems from a series of works, commonly referred to today
as Lehman's laws, that were first proposed by Dr. Meir Lehman in his work Programs, Life
Cycles, and Laws of Software Evolution (1980). In order to understand the concept of software
evolution, one should step back a moment and recall that the purpose of software systems is to
provide solutions to problems. These problems are human based, and software is developed to
alleviate them through the automation and functionality it provides. Often, the term software
solution is used to highlight this purpose of solving problems; however, the problems of humans
are not static and change over time (e.g. new regulations are passed, workflow procedures
change, etc.). In order to keep these software solutions relevant to the problems they are intended
to solve, they too must be modified to change along with their corresponding problems. This is
the crux of what software evolution is: the concept that software is modified to keep it
relevant to its users.
As such, Lehman's Laws essentially describe software evolution as a force that drives both new
development and the revision of existing developments in a system, while simultaneously
hindering future progress in implementing those new developments through a byproduct of
evolution, which is software decay. Later in this document, six of Lehman's Laws will be
discussed; however, before proceeding it is important to understand the types of systems to
which they pertain, as defined by Lehman.
Lehman's System Types
The identification of this force, or rather where the concept of evolution originated, was derived
from a series of empirical observations Lehman made of large system development (Programs,
Life Cycles, and Laws of Software Evolution, 1980). Through those observations, Lehman
categorized all software systems into one of three types: S-types, P-types, and E-types.
S-Type Systems
S-type or static-type systems are those that can be described formally in a set of specifications
and whose solution is equally well understood. Essentially, these systems are those whose
stakeholders understand the problem and exactly know what is needed to resolve it.
The static part of these types of systems refers to their having specifications that do
not change. An example of this type of system would be one that performs mathematical
computations where both the desired functionality and how to accomplish it are well
understood beforehand and not likely to change. Systems of this type are generally
the simplest of all three types and the least subject to evolutionary forces. For this reason,
since there is little occurrence of change once they are initially developed, Lehman's Laws do
not pertain to these simple systems.
P-type systems
P-type systems, or practical-type systems, are those that can be described formally, but whose
solution is not immediately apparent. In essence, these are the systems where the stakeholders
know what end result is needed, but have no good means of describing how to get there.
Systems of this type require an iterative approach to their discovery, providing the feedback
stakeholders need to understand what they do need, what they do not need, and how to
articulate it. This iterative process of facilitating feedback is analogous to how a witness of a
crime would work with a sketch artist to derive an accurate picture of the suspect.
E-Type Systems
The final type of system proposed by Lehman is the E-types, which is short for embedded-types.
E-Types are those that characterize the majority of software in everyday use (2002).
The embedded part of the name refers to the notion that these systems serve to model real-world
processes and, through their use, become a component of the world they are intended to
model. In other words, as opposed to a simple calculator program (i.e. an S-type system),
systems of this type become components of real-world processes; processes which would fail
without them. An example of such a highly embedded system would be the air-traffic
control system of an airport; without it, the business of orchestrating the safe departure and
arrival of flights would be impossible. In this example, it is easy to see how the system has
become a component of the real world.
Lehman's Laws
With an understanding of the types of systems to which Lehman's Laws pertain (i.e. E-type), a
discussion regarding those laws can commence. Lehman offers eight laws, which are presented
not as laws of nature, but rather as the product of observations made during the aforementioned
survey of large systems. It is important to understand that they pertain to all E-type systems
regardless of the development or management methodologies or technologies used to develop
them.


1st Law - The Law of Continuing Change
The first of Lehman's laws is the Law of Continuing Change. As suggested, since the
environment of the real world is ever changing (e.g. new laws and regulations are always being
passed, new base expectations of users are constantly shifting, etc.), and since E-type systems are
a component of those real-world processes, in order for a system to continue to be relevant, it
must adapt (i.e. evolve) as the world changes or face becoming progressively less applicable and
useful.
2nd Law - The Law of Increasing Complexity
The second of Lehman's laws is the Law of Increasing Complexity. This law states that, without
direct intervention and proactive efforts to mitigate the risk posed by it, the implementation of all
E-type systems will continue to become more complex as they evolve. The implication of this,
as will be seen in Lehman's third law (described next), is that as the complexity of the system
increases, the ability to conserve familiarity with its implementation diminishes, thereby
hindering the system's ability to continue to evolve. This diminishing ability to evolve is a
product of software decay, which will be discussed later in this paper.
3rd Law - The Law of Conservation of Familiarity
The third of Lehman's Laws is the Law of Conservation of Familiarity. This law refers to the
idea that in order for an E-type system to continue to evolve, its maintainers must possess and
maintain a mastery of its subject matter and implementation. In other words, in order for a
system to efficiently continue to evolve to meet the forces-of-change exerted on it by the real
world, a deep understanding of how the system functions (i.e. how it works) and why it has been
developed to function in that manner (i.e. why does/must it work in this way) must be preserved
at all costs. Logically, to get somewhere one must first know where they are. The concept here is
similar. Without a familiarity of how and why the system was designed in the way it was, it
becomes very difficult to implement changes to it without compromising the ability to
understand it. This law refers back to the second of Lehman's laws: The Law of Increasing
Complexity.
4th Law - The Law of Continuing Growth
The fourth law that is relevant to the discussion of the Staged-Model is the Law of Continuing
Growth. It states that in order for an E-type system to remain relevant to the business problems it
is intended to resolve, the size of the system's implementation will continue to increase over its
lifetime. This is implicitly tied to Lehman's second law, the Law of Increasing Complexity, in
that there is a direct relationship between an increase in the number of function points that a
system offers and the complexity required to achieve that increased functionality. Therefore,
this law reaffirms his second law in that, without due care and direct efforts to mitigate it, the
expansion of the system's size can have negative effects on its ability to be comprehended,
along with its ability to evolve.
5th Law - The Law of Declining Quality
The fifth of Lehman's laws is the Law of Declining Quality. Poorly conceived modifications lead
to the introduction of defects and partial fulfillment of specifications. This law also serves to
reaffirm Lehman's second through fourth laws: without a direct and rigorous effort to
preserve the system's structural integrity by minimizing the effects that continuous evolution
has in increasing its size, its complexity, and its ability to be comprehended, the overall effect
will be a decline in the system's quality.
6th Law - The Law of Feedback Systems
The final law of Lehman that is relevant to the Staged-Model is the Law of Feedback Systems. It
states that in order to sustain continuous change, or evolution, of a system in a manner that
minimizes the threats posed by software decay and loss of familiarity, there must be a means to
monitor its performance. This law refers to the importance that all E-type systems
include feedback mechanisms permitting maintainers to collect metrics on the performance of
the system and of the maintenance effort. Whereas the nominal values of these metrics may not
intrinsically provide much insight, by performing a trend analysis of the metrics' values over
time they serve as a barometer-like indication, through their relative change, of how the
system's evolution is proceeding.
7th Law - The Law of Self-Regulation
The E-type system evolution process is self-regulating, with the distribution of product and
process measures close to normal.
8th Law - The Law of Conservation of Organizational Stability (Invariant Work Rate)
The average effective global activity rate in an evolving E-type system is invariant over the
product lifetime.







CONCLUSION
The maintenance of software applications is one of the major problems in the software
development life cycle. The maintenance process is costly, conflictive, and extremely resource
intensive (Polo et al., 1999). The practice of software maintenance has improved significantly in
the past 15 years (Bennett, Griffiths, Brereton, Munro, & Layzell, 1996). However, software
applications are becoming larger and more complex. In addition, user requirements dictate the
flexibility to meet changing business needs. As more and more of these systems go online, the
pressure to continuously improve and adapt them in response to user requests will continue to
burden software maintenance organizations.
The four maintenance categories were discussed in the previous sections; the use of maintenance
tools to reduce cost and improve efficiency was explored; software maintenance problems were
investigated; and positive industry developments were explained. In the future, the software
industry must continue to focus on improving software maintenance processes. This can be
accomplished through the implementation of industry best practices. Recommended best
practices include software that is designed with maintainability in mind. This requires looking at
programs and programming from the perspective of a programmer about to alter the code. In
addition, as systems become more complex and the technology they use becomes more obscure,
software organizations must elevate the position of software maintainer and encourage the best
programmers to become maintainers. Finally, as the best programmers become dedicated to
software maintenance it is only reasonable to equip them with the best maintenance tools and
procedures available.
2013
[GROUP 7: SOFTWARE REUSE]
Code reuse is the idea that a partial computer program written at one time can be, should be, or is
being used in another program written at a later time. The reuse of programming code is a common
technique which attempts to save time and energy by reducing redundant work.
Contents
1.0 WHAT IS SOFTWARE REUSE
2.0 HOW SOFTWARE REUSE CAN BE ADOPTED
2.1 MODULARITY
2.2 LOOSE COUPLING
2.3 HIGH COHESION
2.4 INFORMATION HIDING
2.5 SEPARATION OF CONCERNS
3.0 TYPES OF REUSE
3.1 Opportunistic
3.1.1 Internal Reuse
3.1.2 External Reuse
3.2 Planned
3.3 Referenced
3.4 Forked
3.4.1 Advantages of Forked Reuse
3.4.2 Disadvantages of Forked Reuse
4.0 Software Reuse Models
4.1 Reuse Level Metrics
4.2 Economic Models
5.0 Pros and Cons of Software Reuse
5.1 Benefits of Software Reuse
5.2 Disadvantages of Software Reuse
6.0 Examples and Applications of Software Reuse
Software Reuse
1.0 WHAT IS SOFTWARE REUSE
Software reuse is the process of creating software systems from existing software rather than
building software systems from scratch. This simple yet powerful vision as a recognized area of
study in software engineering was introduced in 1968 when Douglas McIlroy of Bell
Laboratories proposed basing the software industry on reusable components.
Information systems development is typically acknowledged as an expensive and lengthy
process, often producing code that is of uneven quality and difficult to maintain. Software reuse
has been advocated as a means of revolutionizing this process. The claimed benefits from
software reuse are reduction in development cost and time, improvement in software quality,
increase in programmer productivity, and improvement in maintainability. Software reuse does
incur undeniable costs of creating, populating, and maintaining a library of reusable components.
There is anecdotal evidence to suggest that some organizations benefit from reuse.
Ad hoc code reuse has been practiced from the earliest days of programming. Programmers have
always reused sections of code, templates, functions, and procedures. Software reuse is the idea
that a partial computer program written at one time can be, should be, or is being used in another
program written at a later time. The reuse of programming code is a common technique which
attempts to save time and energy by reducing redundant work.
2.0 HOW SOFTWARE REUSE CAN BE ADOPTED
Some so-called code "reuse" involves simply copying some or all of the code from an existing
program into a new one. While organizations can realize time to market benefits for a new
product with this approach, they can subsequently be saddled with many of the same code
duplication problems caused by cut and paste programming.
Abstraction plays a central role in software reuse. Concise and expressive abstractions are
essential if software artifacts are to be effectively reused. For newly written code to use a piece
of existing code, some kind of interface, or means of communication, must be defined. These
commonly include a "call" or use of a subroutine, object, class, or prototype.
Some characteristics that make software more easily reusable are:
2.1 MODULARITY:
Modular programming (also called "top-down design" and "stepwise refinement") is a software
design technique that emphasizes separating the functionality of a program into independent,
interchangeable modules, such that each contains everything necessary to execute only one aspect
of the desired functionality. Conceptually, modules represent a separation of concerns, and
improve maintainability by enforcing logical boundaries between components. Modules are
typically incorporated into the program through interfaces. A module interface expresses the
elements that are provided and required by the module. The elements defined in the interface are
detectable by other modules. The implementation contains the working code that corresponds to
the elements declared in the interface.
2.2 LOOSE COUPLING:
Coupling refers to the degree of direct knowledge that one class has of another or the amount of
reliance of one module or subroutine on another. This is not meant to be interpreted as
encapsulation vs. non-encapsulation. It is not a reference to one class's knowledge of another
class's attributes or implementation, but rather knowledge of that other class itself.
A loosely coupled system is one in which each of its components has, or makes use of, little or no
knowledge of the definitions of other separate components. The notion was introduced into
organizational studies by Karl Weick.
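Loose coupling is commonly achieved by depending on a minimal protocol rather than a concrete class, with the collaborator injected from outside. All class names in this sketch are illustrative.

```python
# Loose coupling sketch: OrderService knows only that its collaborator
# has a send() method, not which concrete class provides it.

class EmailNotifier:
    def send(self, message):
        return f"email: {message}"

class SmsNotifier:
    def send(self, message):
        return f"sms: {message}"

class OrderService:
    def __init__(self, notifier):
        self.notifier = notifier  # injected dependency, not constructed here

    def place_order(self, item):
        return self.notifier.send(f"order placed for {item}")

# Swapping the collaborator requires no change to OrderService.
print(OrderService(EmailNotifier()).place_order("book"))
print(OrderService(SmsNotifier()).place_order("book"))
```

Because `OrderService` never names a concrete notifier, either side can be modified, replaced, or reused in another program independently, which is the property that makes loosely coupled components good reuse candidates.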
2.3 HIGH COHESION:
Cohesion refers to the degree to which the elements of a module belong together and how
strongly-related or focused the responsibilities of a single module are. Thus, it is a measure of
how strongly-related each piece of functionality expressed by the source code of a software
module is. It is an ordinal type of measurement and is usually expressed as high cohesion or
low cohesion when being discussed.
As applied to object-oriented programming, if the methods that serve a given class tend to be
similar in many aspects, then the class is said to have high cohesion. In a highly cohesive system,
code readability and the likelihood of reuse are increased, while complexity is kept manageable.
Modules with high cohesion tend to be preferable because high cohesion is associated with
several desirable traits of software, including:
Robustness
Reliability
Reusability and
Understandability
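A small sketch of high cohesion in the object-oriented sense described above: every method of the (hypothetical) class below works on the same data toward the same single responsibility. A low-cohesion version would additionally send emails, write files, and so on.

```python
# High-cohesion sketch: all methods of Invoice concern invoice arithmetic
# and operate on the same underlying data.

class Invoice:
    def __init__(self):
        self.lines = []  # (description, unit_price, quantity)

    def add_line(self, description, unit_price, quantity):
        self.lines.append((description, unit_price, quantity))

    def subtotal(self):
        return sum(price * qty for _, price, qty in self.lines)

    def tax(self, rate=0.1):
        return round(self.subtotal() * rate, 2)

    def total(self, rate=0.1):
        return round(self.subtotal() + self.tax(rate), 2)

inv = Invoice()
inv.add_line("widget", 10.0, 3)
inv.add_line("gadget", 5.0, 2)
print(inv.total())  # 44.0
```

Because the class does one thing, it is easy to read, test, and lift into another program unchanged, illustrating why high cohesion correlates with reusability and understandability.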
2.4 INFORMATION HIDING:
Information hiding is the principle of segregation of the design decisions in a computer program
that are most likely to change, thus protecting other parts of the program from extensive
modification if the design decision is changed. The protection involves providing a stable
interface which protects the remainder of the program from the implementation (the details that
are most likely to change). Written another way, information hiding is the ability to prevent certain
aspects of a class or software component from being accessible to its clients, using either
programming language features (like private variables) or an explicit exporting policy.
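The principle can be sketched as follows: the internal representation (a design decision likely to change) is hidden behind a stable interface, so clients are unaffected if it changes. The class and its representation choice are illustrative assumptions.

```python
# Information hiding sketch: the decision to store money as integer cents
# is hidden; clients see only the stable deposit()/balance interface.

class Account:
    def __init__(self):
        self._cents = 0  # hidden design decision: integer cents, not floats

    def deposit(self, amount):
        self._cents += round(amount * 100)

    @property
    def balance(self):
        """Stable interface: clients see dollars, however we store them."""
        return self._cents / 100

acct = Account()
acct.deposit(10.25)
acct.deposit(0.10)
print(acct.balance)  # 10.35
```

If the representation later changed (say, to a `Decimal`), only the body of `Account` would be touched; every client of `deposit()` and `balance` would keep working, which is precisely the protection information hiding provides.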
2.5 SEPARATION OF CONCERNS:
Separation of concerns (SoC) is a design principle for separating a computer program into
distinct sections, such that each section addresses a separate concern. A concern is a set of
information that affects the code of a computer program. A concern can be as general as the
details of the hardware the code is being optimized for, or as specific as the name of a class to
instantiate. A program that embodies SoC well is called a modular program. Modularity, and
hence separation of concerns, is achieved by encapsulating information inside a section of code
that has a well-defined interface. Layered designs in information systems are another embodiment
of separation of concerns (e.g., presentation layer, business logic layer, data access layer,
database layer).
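The layered design just mentioned can be sketched in miniature: three sections of code, each addressing exactly one concern. The data, names, and in-memory "database" are illustrative assumptions.

```python
# Separation-of-concerns sketch: replacing the data access layer (e.g. the
# dict below with a real database) would not touch the other layers.

# Data access layer: only knows how to fetch records.
_DB = {1: {"name": "ada", "email": "ada@example.com"}}

def fetch_user(user_id):
    return _DB.get(user_id)

# Business logic layer: only applies rules, unaware of storage or display.
def display_name(user_id):
    user = fetch_user(user_id)
    if user is None:
        raise KeyError("unknown user")
    return user["name"].title()

# Presentation layer: only formats output for the user.
def render_profile(user_id):
    return f"Profile: {display_name(user_id)}"

print(render_profile(1))  # Profile: Ada
```

Each layer communicates with the next only through a well-defined interface (a function call), so any one section can be rewritten or reused on its own, which is the modularity that SoC is meant to deliver.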
3.0 TYPES OF REUSE
Concerning motivation and driving factors, reuse can be classified into the following types of
software reuse:
3.1 Opportunistic
While getting ready to begin a project, the team realizes that there are existing components that
they can reuse.
Opportunistic reuse can be categorized further:
3.1.1 Internal Reuse
A team reuses its own components. This may be a business decision, since the team may want to
control a component critical to the project.
3.1.2 External reuse
A team may choose to license a third-party component. Licensing a third-party component
typically costs the team 1 to 20 percent of what it would cost to develop internally. The team
must also consider the time it takes to find, learn and integrate the component.
3.2 Planned
A team strategically designs components so that they'll be reusable in future projects.
Concerning the form or structure of reuse, code can be:

3.3 Referenced
The client code contains a reference to reused code, and thus they have distinct life cycles and
can have distinct versions.
3.4 Forked
The client code contains a local or private copy of the reused code, and thus they share a single
life cycle and a single version.
3.4.1 Advantages of Forked Reuse
Easier and more direct approach
Does not require full knowledge of the source except for the excerpt or snippet being
used.
Isolation of the source hence flexibility to change the reused code is ensured as the source
will not be affected by the modifications.
Easier packaging, deployment and version management
3.4.2 Disadvantages of Forked Reuse
Fork-reuse is often discouraged because it's a form of code duplication.
Every bug is corrected in each copy
Enhancements made to reused code needs to be manually merged in every copy or they
become out-of-date.
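The contrast between referenced and forked reuse can be made concrete in code. In this small sketch, the referenced copy of a greatest-common-divisor routine comes from the maintained standard library, while the forked copy is a private reimplementation that must be kept up to date by hand:

```python
# Referenced reuse: the client holds a reference to the maintained
# original, so upstream fixes arrive with every library upgrade.
from math import gcd as referenced_gcd

# Forked reuse: the client keeps a private copy of the algorithm, so
# a bug fixed upstream must be merged into this copy manually.
def forked_gcd(a: int, b: int) -> int:
    while b:
        a, b = b, a % b
    return a

print(referenced_gcd(12, 18))  # 6
print(forked_gcd(12, 18))      # 6
```

Both calls give the same answer today, but only the referenced form tracks future corrections to the original.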
4.0 Software Reuse Models
There are several software reuse types and techniques, and a number of models for software
reuse have been outlined: some provide means to quantitatively measure reuse benefit; others
model the relationship between cost and benefit of reuse programs.
Existing reuse models focus on particular aspects of reuse in the software development cycle
(Poulin 1997). The models provide software development groups with a tool to institutionalize
reuse: to make software reuse a regular, integrated part of the software development process,
reuse models and metrics can help organizations achieve their reuse goals. Reuse models
provide a standard that decides what counts as reuse, how it is measured, and how to assess the
cost savings attributable to the success of reuse. Accordingly, some models provide means to
estimate the cost of reuse projects, or the number of reuse occurrences required to break even,
while most of the models discussed are an ex-post analysis of the quality or success of reuse
that helps organizations decide whether to continue the reuse approach as is or whether to
change direction.
For the purposes of this overview, we have classified the models into Reuse Level Metrics and
Economic Models.
4.1 Reuse Level Metrics
Banker and Kauffman (1991) developed a set of metrics in the context of a study that
investigates the management of software reuse and the effect of the use of CASE technology at
First Boston Corp. The metrics were introduced in the context of repository-based integrated
CASE technology; they model the Reuse Percentage and the Reuse Leverage. Frakes and Terry
(1994) have presented a measure that differentiates between internal and external reuse; it has
been implemented as a tool that can calculate the metrics for a given system. The Rothenberger
and Hershauer (1999) measure implements a reuse level metric in the context of an enterprise-
level data and process model environment that achieves reuse through a common architecture. It
defines what type of components should count as reuse and argues that LOC can be a suitable
complexity measure to assess the reuse rate.
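As an illustrative sketch (not the exact formulation of any of the metrics above), a LOC-based reuse level can be computed as reused lines over total delivered lines:

```python
def reuse_percentage(reused_loc: int, new_loc: int) -> float:
    """Share of the delivered system, in lines of code, that was reused."""
    total = reused_loc + new_loc
    if total == 0:
        raise ValueError("system has no code")
    return 100.0 * reused_loc / total

# A 10,000-line system containing 3,500 reused lines:
print(reuse_percentage(3500, 6500))  # 35.0
```

Whether a given component's lines should count at all is exactly the kind of question the Rothenberger and Hershauer measure addresses.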
4.2 Economic Models
Gaffney and Durek (1989) investigated how many times each component must be reused in order
to pay off the effort invested in it. Here it is assumed that the total cost of a new software
system is equal to the sum of the costs of its new and reused components. The impact of reuse is
measured relative to the effort required to develop the project from all-new code. The metric can
help organizations to estimate development time and to make the trade-off between the
proportion of reuse and the cost of developing and reusing components. Poulin and Caruso
(1993) developed a model to improve measurement and reporting of software reuse. They
present a measure for reuse level, the financial, and the productivity benefit of reuse. Also, they
provide two return-on-investment models for analysis of reuse on the project, as well as on the
corporate level. Barnes and Bollinger (1991) provide an analytical approach for making good
reuse investment decisions. They present a model that relates the reuse benefits to the cost of
reuse and then they analyze several methods of improving the cost-effectiveness of reuse based
on the metric introduced. Assuming that an organization can measure or estimate cost and
benefit of reuse, the number of projects necessary for reuse to pay off can be calculated.
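A simplified break-even calculation in the spirit of Gaffney and Durek (the published model differs in detail) can be sketched as follows: if building the reusable version of a component costs E times a one-off version, and each reuse then costs only a fraction b of building from scratch, the extra investment E - 1 is recovered after n reuses, where n(1 - b) >= E - 1:

```python
import math

def break_even_reuses(e: float, b: float) -> int:
    """Number of reuses needed to recover the extra cost of making a
    component reusable.  e: relative cost of the reusable version (e > 1);
    b: relative cost of each reuse (0 <= b < 1).  Both are relative to
    the cost of writing the component from scratch once."""
    if not (e > 1 and 0 <= b < 1):
        raise ValueError("expected e > 1 and 0 <= b < 1")
    return math.ceil((e - 1) / (1 - b))

# Reusable version costs 1.5x a one-off; each reuse costs 0.2x:
print(break_even_reuses(1.5, 0.2))  # 1
print(break_even_reuses(2.0, 0.5))  # 2
```

The values of e and b here are hypothetical inputs; in practice an organization would estimate them from its own project data.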

5.0 Pros and Cons of Software Reuse
Software reuse, like every aspect of software engineering, has both good and bad sides, from the
economic and technical points of view.
5.1 Benefits of Software Reuse
Reduction in Development Cost and Time: Much of the cost and effort in the software
industry stems from the continual re-discovery and re-invention of core patterns and
framework components. Reusing components from previously built systems saves the cost
that would have been incurred if the system were built from scratch; the total cost of
developing the system is therefore reduced in proportion to how many existing components
can be found. Using third-party components attracts a cost, but it is typically 80 to 90
percent cheaper than building the components from scratch. The use of existing
components also greatly reduces the total time needed to complete the project.
Improvement in Software Quality: Careful employment of software reuse greatly improves
the efficiency and quality of the program, because more focus can be placed on careful
optimization of the new system being built while the efficiency of the reused components
is already assured.
Increase in Programmer Productivity: Since time is not wasted on the reused components,
more attention is placed on the new modules, hence a greater rate of productivity from
programmers, as more work can be completed within the allotted time frame for the
project.
Improvement in Maintainability: Software reuse fosters a modular pattern of development,
reducing the difficulty of maintaining the system, as only the affected module need be
changed or updated. This greatly reduces the cost of debugging and testing.

5.2 Disadvantages of Software Reuse
Software reuse, when not properly applied, may cause the following problems:
Dependency Problem: By reusing third-party libraries, you become strongly coupled to,
and dependent on, how the library works and how it is supposed to be used, unless you
manage to create a middle interface layer that can take care of it.
Error and Bug Replication: Undetected errors in the component being reused can be
propagated to the system.
Licensing Concerns: Most reusable external components have a usage license attached to
them; ignoring it may lead to serious litigation.
Module Learning Curve: The modules being reused must be fully understood by the
developers, or the system fails. The process of studying a module may at times take
longer than the time it would take to build another.
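The middle interface layer mentioned under the dependency problem can be sketched as a thin, application-owned wrapper. Here the standard library's json module stands in for a hypothetical vendor library; the point is that client code depends only on the wrapper:

```python
import json

class JsonParser:
    """Application-owned interface layer.  The rest of the system
    depends only on this class, not on the library behind it."""

    def parse(self, text: str) -> dict:
        # Swapping in a different JSON library later changes only
        # this one method, not every caller in the system.
        return json.loads(text)

parser = JsonParser()
print(parser.parse('{"reused": true}'))  # {'reused': True}
```

The wrapper adds a small amount of code but confines the coupling to one place.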
6.0 Examples and Applications of Software Reuse
The following are examples and Applications of Software Reuse;
Software libraries
A very common example of code reuse is the technique of using a software library. Many
common operations, such as converting information among different well-known formats,
accessing external storage, interfacing with external programs, or manipulating information
(numbers, words, names, locations, dates, etc.) in common ways, are needed by many
different programs. Authors of new programs can use the code in a software library to
perform these tasks, instead of "re-inventing the wheel", by writing fully new code directly in
a program to perform an operation. Library implementations often have the benefit of being
well-tested, and covering unusual or arcane cases. Disadvantages include the inability to
tweak details which may affect performance or the desired output, and the time and cost of
acquiring, learning, and configuring the library.
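Date arithmetic is a good example of such a well-known but edge-case-laden operation (month lengths, leap years) that is better taken from a library than rewritten:

```python
from datetime import date, timedelta

# Library reuse: the well-tested stdlib routines handle month
# boundaries and leap years instead of hand-rolled calendar code.
release = date(2013, 2, 17)
review = release + timedelta(days=30)
print(review.isoformat())  # 2013-03-19
```

A hand-written equivalent would have to re-implement, and re-test, the calendar rules the library already covers.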
Design patterns
A design pattern is a general solution to a recurring problem. Design patterns are more
conceptual than tangible and can be modified to fit the exact need. However, abstract classes
and interfaces can be reused to implement certain patterns.
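As an illustration, the Strategy pattern reuses an abstract interface while each concrete class supplies the varying behaviour; the compression example below is hypothetical but shows the shape of the pattern:

```python
import zlib
from abc import ABC, abstractmethod

class CompressionStrategy(ABC):
    """Reusable abstract interface; concrete strategies fill in the detail."""

    @abstractmethod
    def compress(self, data: bytes) -> bytes: ...

class ZlibStrategy(CompressionStrategy):
    def compress(self, data: bytes) -> bytes:
        return zlib.compress(data)

class NullStrategy(CompressionStrategy):
    def compress(self, data: bytes) -> bytes:
        return data  # pass-through, e.g. for already-compressed input

def archive(data: bytes, strategy: CompressionStrategy) -> bytes:
    # Client code is written once against the abstract interface.
    return strategy.compress(data)

print(len(archive(b"x" * 1000, ZlibStrategy())) < 1000)  # True
```

The abstract class is the reusable artifact; new strategies can be added without touching `archive`.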


Frameworks
Developers generally reuse large pieces of software via third-party applications and
frameworks, though frameworks are usually domain-specific and applicable only to certain
families of applications.
Systematic software reuse
Systematic software reuse is still the most promising strategy for increasing productivity and
improving quality in the software industry. Although it is simple in concept, successful
software reuse implementation is difficult in practice. A reason put forward for this is the
dependence of software reuse on the context in which it is implemented. Some problematic
issues that need to be addressed in relation to systematic software reuse are:
o A clear and well-defined product vision is an essential foundation for a software
product line (SPL).
o An evolutionary implementation strategy would be a more pragmatic strategy for
the company.
o There exists a need for continuous management support and leadership to ensure
success.
o An appropriate organizational structure is needed to support SPL engineering.
o The change of mindset from a project-centric company to a product-oriented
company is essential.

A note on Reverse Engineering
Table of Contents
Introduction
Definition
Types of Reverse Engineering
Stages of Reverse Engineering
Process of Reverse Engineering
Groups of Reverse Engineering
Uses and Functions
Differences between Reverse and Forward Engineering
Legality of Reverse Engineering
Conclusion

Introduction
Generally, reverse engineering is the process of discovering the technological principles of a
device, object, or system through analysis of its structure, function, and operation. It often
involves taking something (a mechanical device, electronic component, computer program, or
biological, chemical, or organic matter) apart and analyzing its workings in detail to be used in
maintenance or to try to make a new device or program that does the same thing without using or
simply duplicating the original.
Reverse engineering has its origins in the analysis of hardware for commercial or military
advantage. The purpose is to deduce design decisions from end products with little or no
additional knowledge about the procedures involved in the original production. The same
techniques are subsequently being researched for application to legacy software systems, not for
industrial or defence ends, but rather to replace incorrect, incomplete, or otherwise unavailable
documentation.
Reverse engineering is one of the important stages in the reengineering process. It is sometimes
conducted in conjunction with some of the other stages of the reengineering process.
Definition
Specifically, reverse engineering is the general process of analyzing a technology to ascertain
how it was designed or how it operates. It is the process of analyzing a subject system to create
representations of the system at a higher level of abstraction. It can also be seen as going
backwards through the development cycle. In this model, the output of the implementation phase
(in source code form) is reverse-engineered back to the analysis phase, in an inversion of the
traditional waterfall model. Reverse engineering is a process of examination only: the software
system under consideration is not modified (which would make it re-engineering). Reverse
engineering is the process of taking apart an object to see how it works in order to duplicate or
enhance the object. The practice, which was taken from older industries, is now frequently used
on computer hardware and software.
Software reverse engineering involves reversing a program's machine code (the string of 0s and
1s that are sent to the logic processor) back into the source code that it was written in, using
programming-language statements. Software reverse engineering is done to retrieve the source code of
a program because the source code was lost, to study how the program performs certain
operations, to improve the performance of a program, to fix a bug (correct an error in the
program when the source code is not available), to identify malicious content in a program such
as a virus or to adapt a program written for use with one microprocessor for use with another.
When reverse engineering software, researchers are able to examine the strength of systems and
identify their weaknesses in terms of performance, security, and interoperability.
Commonly, reverse engineering is performed using the clean-room, or Chinese wall, approach.
Clean-room reverse engineering is conducted in a sequential manner:
a team of engineers is sent to disassemble the product, to investigate and describe what it
does in as much detail as possible at a somewhat high level of abstraction;
the description is given to another group, which has no previous or current knowledge of
the product;
the second group then builds a product from the given description. This product might
achieve the same end effect but will probably have a different solution approach.
There are different tools which can be used for software reverse engineering. Some of the tools
are:
hexadecimal dumper, which prints or displays the binary numbers of a program in
hexadecimal format (which is easier to read than a binary format);
disassembler, which reads the binary code and then displays each executable instruction
in text form;
decompiler, which translates an object file produced by some compiler into an ASCII
representation;
debugger, which runs the program under the engineer's control, allowing execution to be
paused and program state to be inspected;
hex editor, which patches (makes changes to) an executable file directly.
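A toy version of the first tool in the list above, the hexadecimal dumper, fits in a few lines; it renders raw bytes as offset, hex pairs, and printable ASCII:

```python
def hex_dump(data: bytes, width: int = 16) -> str:
    """Render raw bytes one row per `width` bytes: offset, hex, ASCII."""
    lines = []
    for off in range(0, len(data), width):
        chunk = data[off:off + width]
        hexes = " ".join(f"{b:02x}" for b in chunk)
        # Replace non-printable bytes with '.' in the ASCII column.
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{off:08x}  {hexes:<{width * 3}} {text}")
    return "\n".join(lines)

print(hex_dump(b"MZ\x90\x00reverse"))
```

Real dumpers add paging and file input, but the core transformation is exactly this.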
Types of Reverse Engineering
There are two main types of reverse engineering.
In the first case, source code is already available for the software, but higher-level aspects of the
program, perhaps poorly documented or documented but no longer valid, are discovered.
In the second case, there is no source code available for the software, and any efforts towards
discovering one possible source code for the software are regarded as reverse engineering. This
second usage of the term is the one most people are familiar with.
Also, black box testing in software engineering has a lot in common with reverse engineering.
The tester usually has the API, but their goals are to find bugs and undocumented features by
testing the product from outside.
Stages of Reverse Engineering


In order to reverse engineer a product or component of a system, engineers and researchers
generally follow a four-stage process:

1. Identifying the product or component which will be reverse engineered
2. Observing or disassembling the information documenting how the original product works
3. Implementing the technical data generated by reverse engineering in a replica or modified
version of the original
4. Creating a new product (and, perhaps, introducing it into the market)
In the first stage in the process, sometimes called pre-screening, reverse engineers determine the
candidate product for their project. Potential candidates for such a project include singular items,
parts, components, units, subassemblies, some of which may contain many smaller parts sold as
a single entity.

The second stage, disassembly or decompilation of the original product, is the most time-
consuming aspect of the project. In this stage, reverse engineers attempt to construct a
characterization of the system by accumulating all of the technical data and instructions of how
the product works.

In the third stage of reverse engineering, reverse engineers try to verify that the data generated by
disassembly or decompilation is an accurate reconstruction of the original system. Engineers verify
the accuracy and validity of their designs by testing the system, creating prototypes, and
experimenting with the results.
The final stage of the reverse engineering process is the introduction of a new product into the
marketplace. These new products are often innovations of the original product with competitive
designs, features, or capabilities. These products may also be adaptations of the original product
for use with other integrated systems, such as different platforms of computer operating systems.
Often different groups of engineers perform each step separately, using only documents to
exchange the information learned at each step. This is to prevent duplication of the original
technology, which may violate copyright; rather than copying the original, reverse engineering
creates a different implementation with the same functionality.



Process of Reverse Engineering


Reverse engineering can extract design information from source code, but the abstraction level,
the completeness of the documentation, the degree to which tools and a human analyst work
together, and the directionality of the process are highly variable.
The abstraction level of a reverse engineering process, and of the tools used to achieve it, refers to
the sophistication of the design information that can be extracted from the source code. Ideally,
the abstraction level should be as high as possible. That is, the reverse engineering process
should be capable of deriving procedural design representations (a low-level abstraction),
program and data structure information (a somewhat higher level of abstraction), data and
control flow models (a relatively high level of abstraction), and entity relationship models (a
high level of abstraction). As the abstraction level increases, the software engineer is provided
with information that will allow easier understanding of the program. The completeness of a
reverse engineering process refers to the level of detail that is provided at an abstraction level. In
most cases, the completeness decreases as the abstraction level increases. For example, given a
source code listing, it is relatively easy to develop a complete procedural design representation.
Simple data flow representations may also be derived, but it is far more difficult to develop a
complete set of data flow diagrams or entity-relationship models.
Completeness improves in direct proportion to the amount of analysis performed by the person
doing reverse engineering. Interactivity refers to the degree to which the human is integrated
with automated tools to create an effective reverse engineering process. In most cases, as the
abstraction level increases, interactivity must increase or completeness will suffer.
If the directionality of the reverse engineering process is one way, all information extracted from
the source code is provided to the software engineer who can then use it during any maintenance
activity. If directionality is two way, the information is fed to a reengineering tool that attempts
to restructure or regenerate the old program.
The reverse engineering process is represented in the figure below. Before reverse engineering activities can
commence, unstructured (dirty) source code is restructured so that it contains only the structured
programming constructs. This makes the source code easier to read and provides the basis for all
the subsequent reverse engineering activities.
The core of reverse engineering is an activity called extract abstractions. The engineer must
evaluate the old program and from the (often undocumented) source code, extract a meaningful
specification of the processing that is performed, the user interface that is applied, and the
program data structures or database that is used.
[Figure: The reverse engineering process. Dirty source code is restructured, then refined and
simplified, and abstractions (processing, interface, database) are extracted.]
Groups of Reverse Engineering
Reverse engineering of software can be accomplished by various methods. The three main
groups of software reverse engineering are:
Analysis through observation of information exchange, most prevalent in protocol reverse
engineering, which involves using bus analyzers and packet sniffers, for example, for accessing a
computer bus or computer network connection and revealing the traffic data thereon. Bus or
network behaviour can then be analyzed to produce a stand-alone implementation that mimics
that behaviour. This is especially useful for reverse engineering device drivers.
Disassembly using a disassembler, meaning the raw machine language of the program is read
and understood in its own terms, only with the aid of machine-language mnemonics. This works
on any computer program but can take quite some time, especially for someone not used to
machine code.
Decompilation using a decompiler, a process that tries, with varying results, to recreate the
source code in some high-level language for a program only available in machine code or
bytecode.
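Python happens to ship a small illustration of the disassembly idea: the standard `dis` module renders a function's compiled bytecode as human-readable mnemonics, analogous to what a machine-code disassembler does for a native binary:

```python
import dis

def add(a, b):
    return a + b

# Render the function's bytecode as mnemonic instructions; the exact
# opcodes shown vary between Python versions.
dis.dis(add)
```

This is bytecode rather than machine code, but the workflow, raw instructions in, readable mnemonics out, is the same.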
Uses and Functions


Reverse engineering, which is a relatively new form of software engineering, has many functions
and uses. Some of them are stated below:
Reverse engineering serves as a learning tool.
It is used for making software interoperate more effectively or to bridge different
operating systems or databases.
It is used to uncover the undocumented features of commercial products.
It is used in understanding how a product works more comprehensively than by merely
observing it.
It investigates and corrects errors and limitations in existing programs.
It allows studying the design principles of a product as part of an education in
engineering.
It makes products and systems compatible so they can work together or share data.
It allows evaluating one's own product to understand its limitations.
It is used to determine whether someone else has literally copied elements of one's own
technology.
It is used for creating documentation for the operation of a product whose manufacturer
is unresponsive to customer service requests.
It is used for transforming obsolete products into useful ones by adapting them to new
systems and platforms.

Differences between Reverse and Forward Engineering


The most traditional method of the development of a technology is referred to as forward
engineering. In the construction of a technology, manufacturers develop a product by
implementing engineering concepts and abstractions. By contrast, reverse engineering begins
with the final product and works backward to recreate the engineering concepts by analyzing the
design of the system and the interrelationships of its components.
Value engineering refers to the creation of an improved system or product compared to the one originally
analyzed. While there is often overlap between the methods of value engineering and reverse
engineering, the goal of reverse engineering itself is the improved documentation of how the
original product works by uncovering the underlying design. The working product that results
from a reverse engineering effort is more like a duplicate of the original system, without
necessarily adding modifications or improvements to the original design.

Legality of Reverse engineering


Reverse engineering has long been held a legitimate form of discovery in both legislation and
court opinions.
There are two basic legalities associated with reverse engineering:
a. Copyright Protection - protects only the look and shape of a product.
b. Patent Protection - protects the idea behind the functioning of a new product.

Is reverse engineering unethical?
This issue is largely debated and does not seem to have a clear cut answer. The number one
argument against reverse engineering is that of intellectual property. If an individual or an
organization produces a product or idea, is it ok for others to "disassemble" the product in order
to discover the inner workings? Lexmark does not think so. Since Lexmark and companies like
them spend time and money to develop products, they find it unethical that others can reverse
engineer their products. There are also products like BitKeeper that have been hurt by reverse
engineering practices. Why should companies and individuals spend major resources to gather
intellectual property that may be reverse engineered by competitors at a fraction of the cost?
There are also benefits to reverse engineering. Reverse engineering might be used as a way to
allow products to interoperate. Also, reverse engineering can be used as a check that computer
software isn't performing harmful, unethical, or illegal activities.




Conclusion
The law regarding reverse engineering in the computer software and hardware context is less
clear, but has been described by many courts as an important part of software development. The
reverse engineering of software faces considerable legal challenges due to the enforcement of
anti reverse engineering licensing provisions and the prohibition on the circumvention of
technologies embedded within protection measures. By enforcing these legal mechanisms, courts
are not required to examine the reverse engineering restrictions under federal intellectual
property law. In circumstances involving anti reverse engineering licensing provisions, courts
must first determine whether the enforcement of these provisions within contracts are pre-empted
by federal intellectual property law considerations. Under DMCA claims involving the
circumvention of technological protection systems, courts analyze whether or not the reverse
engineering in question qualifies under any of the exemptions contained within the law.

Although reverse engineering is legal as long as another person or group does not explicitly copy
another product, the ethical debate is sure to endure. One argument for reverse engineering is
summed up by an analogy offered by Jennifer Granick, director of the law and technology clinic
at Stanford Law School: "You have a car, but you're not allowed to open the hood." Other
companies, such as Lexmark, will argue that reverse engineering infringes upon their intellectual
property.
3 i) Metric-Based Models: These measure the amount or size of reuse by viewing reuse from a
metric standpoint; reuse can hence be measured in terms of LOC (lines of code).
ii) Economic Models: These measure reuse from an economic standpoint. The impact of reuse
is measured relative to the effort required to develop from all-new code. They can help
organizations estimate the trade-off between the proportion of reuse and the cost of developing
and reusing components, and they show methods of improving the cost-effectiveness of reuse
based on the metric introduced; e.g., given the cost and benefits of reuse, the number of
projects necessary for reuse to pay off can be calculated.
Internal Reuse vs. External Reuse:
Internal: use of company- or self-built components. External: use of components built by an
external/third-party firm.
Internal: cannot result in litigation from licensing issues. External: many software litigation
claims are due to improper licensing before use of third-party components.
Internal: less time is spent understanding the components, as they are built by the firm itself.
External: considerable time is spent understanding the component and reading its
documentation.
Internal: freely customizable; as it is built by the firm, it can be reshaped as pleased. External:
a considerable amount of paperwork and licensing is needed before permission for
modification can be granted (most times such permission is never granted).
Internal: no cost is incurred (free). External: can be free or expensive, but when used for
commercial purposes components can get quite expensive.
4. Just explain OOP (object-oriented programming).
5. Overloading of methods: declaration of methods or functions with a name that is the same as
an existing method but with a different signature from the already existing methods.
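Java-style signature-based overloading has no direct equivalent in Python; the closest standard-library analogue, shown here purely as an illustration, is `functools.singledispatch`, which selects an implementation based on the type of the first argument:

```python
from functools import singledispatch

@singledispatch
def describe(value):
    # Fallback implementation for types without a registered overload.
    return f"value: {value!r}"

@describe.register
def _(value: int):
    return f"integer: {value}"

@describe.register
def _(value: list):
    return f"list of {len(value)} items"

print(describe(42))      # integer: 42
print(describe([1, 2]))  # list of 2 items
print(describe("hi"))    # value: 'hi'
```

The dispatch happens at call time on the argument's type, whereas Java resolves overloads at compile time on the full signature.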