
Software Testing and Reliability

Reliability and Risk Assessment


Aditya P. Mathur
Purdue University
August 12-16, 2002

Guidant Corporation
Minneapolis/St Paul, MN

Graduate Assistants: Ramkumar Natarajan, Baskar Sridharan
Last update: August 16, 2002

Reliability and Risk Assessment

Learning objectives

- What is software reliability?
- How to estimate software reliability?
- What is risk assessment?
- How to estimate risk using the application architecture?

References

- Statistical Methods in Software Engineering: Reliability and Risk, Nozer D. Singpurwalla and Simon P. Wilson, Springer, 1999.

- Software Reliability: Measurement, Prediction, Application, John D. Musa, Anthony Iannino, and Kazuhiro Okumoto, McGraw-Hill Book Company, 1987.

- A Methodology for Architecture-Level Reliability Risk Analysis, S. M. Yacoub and H. H. Ammar, IEEE Transactions on Software Engineering, vol. 28, no. 6, June 2002, pp. 529-547.

- Real-Time UML: Developing Efficient Objects for Embedded Systems, Bruce Powell Douglass, Addison-Wesley, 1998.

Software Reliability

Software reliability is the probability of failure-free operation of an application in a specified operating environment over a specified time period.

Reliability is one quality metric. Others include performance, maintainability, portability, and interoperability.

Operating Environment

- Hardware: machine and configuration
- Software: OS, libraries, etc.
- Usage (operational profile)

Uncertainty

Uncertainty is a common phenomenon in our daily lives. In software engineering, uncertainty occurs in all phases of the software life cycle.

Examples:
- Will the schedule be met?
- How many months will it take to complete the design?
- How many testers should be deployed?
- How many faults remain in the application?

Probability and statistics

Uncertainty can be quantified and managed using probability theory and statistical inference.

Probability theory assists with the quantification and combination of uncertainties.

Statistical inference assists with the revision of uncertainties in light of the available data.

Probability Theory

In any software process there are known and unknown quantities.

The known quantities constitute the history and are denoted by H.

The unknown quantities are referred to as random quantities.

Each unknown quantity is denoted by a capital letter such as T or X.

Random Variables

Specific values of T and X are denoted by lowercase letters t and x and are known as realizations of the corresponding random quantities.

When a random quantity can assume numerical values it is known as a random variable.

Example: If X denotes the outcome of a coin toss, then X can assume the value 0 (for head) or 1 (for tail). X is a random variable under the assumption that on each toss the outcome is not known with certainty.

Probability

The probability of an event E, computed at time τ in light of history H, is written P_τ(E | H).

For brevity we suppress H and τ and denote the probability of E simply as P(E).

Random Events

A random quantity that may assume one of two values, say e1 and e2, is a random event, often denoted by E.

Examples:
- Program P will fail on the next run.
- Application A contains no errors.
- The time to the next failure of application A will be greater than t.
- The design for application A will be completed in less than 3 months.

Binary Random Variables

When e1 and e2 are numerical values, such as 0 and 1, then E is known as a binary random variable.

A discrete random variable is one whose realizations are countable.

Example: the number of failures encountered over four hours of application use.

A continuous random variable is one whose realizations are not countable.

Example: the time to the next failure.

Probability distribution function

For a random variable X, let E be the event that X = x. If P(X = x) > 0, then X is said to have a point mass at x.

If E is the event that X <= x, then P(X <= x) is known as the distribution function of X and is denoted by F_X(x).

Note that F_X(x) is nondecreasing in x and ranges from 0 to 1.

Probability density function

If X is continuous and takes all values in some interval I, and F_X(x) is differentiable with respect to x for all x in I, then F_X(x) is absolutely continuous.

The derivative of F_X(x) at x is denoted by f_X(x) and is known as the probability density function of X.

f_X(x) dx is the approximate probability that the random variable X takes on a value in the interval [x, x + dx].

Exponential density function: continuous random variable

f(x | λ) = λ e^(-λx), for x > 0 and λ > 0.

P(X > x | λ) = e^(-λx)

[Figure: plot of the exponential density f(x | λ) against x, decaying from λ at x = 0 toward 0.]

Binomial Distribution

Suppose that an application is executed N times, each time with a distinct input. We want to know the number of inputs, X, on which the application will fail.

Note that the proportion of correct outputs is a measure of the reliability of the application.

X can assume values x = 0, 1, 2, ..., N. We are interested in the probability that X = x.

Each input to the application can be treated as a Bernoulli trial. This gives us Bernoulli random variables Xi, i = 1, 2, ..., N. Each Xi is 1 if the application fails on the ith input and 0 otherwise. Note that X = X1 + X2 + ... + XN.

Binomial Distribution [contd.]

Under certain assumptions, the following probability model, known as the Binomial distribution, is used:

P(X = x | p) = C(N, x) p^x (1 - p)^(N - x),  x = 0, ..., N

where C(N, x) = N! / (x! (N - x)!).

Here p is the probability that Xi = 1 for i = 1, ..., N. In other words, p is the probability of failure on any single run.
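A minimal Python sketch of this model: it evaluates P(X = x | p) for a given number of failures x. The run count N and per-run failure probability p below are hypothetical, chosen only for illustration.

```python
from math import comb

def binomial_pmf(x: int, n: int, p: float) -> float:
    """P(X = x | p): probability of exactly x failures in n independent runs."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# Hypothetical values: 1000 runs, assumed per-run failure probability 0.002.
N, p = 1000, 0.002
print(binomial_pmf(0, N, p))                          # probability of a failure-free series of N runs
print(sum(binomial_pmf(x, N, p) for x in range(3)))   # P(X <= 2)
```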

Poisson Distribution

When the application under test is almost error free and is subjected to a large number of inputs, N is large, p is small, and Np is moderate.

This assumption simplifies the Binomial distribution into the Poisson distribution, given by:

P(X = x | λ) = e^(-λ) λ^x / x!,  x = 0, 1, 2, ...

where λ = Np.
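The following short check, again with hypothetical N and p, illustrates how close the Poisson probabilities are to the exact Binomial probabilities under these conditions.

```python
from math import comb, exp, factorial

N, p = 1000, 0.002    # hypothetical: many runs, small per-run failure probability
lam = N * p           # Poisson parameter

for x in range(4):
    binom = comb(N, x) * p**x * (1 - p)**(N - x)
    poisson = exp(-lam) * lam**x / factorial(x)
    print(x, round(binom, 6), round(poisson, 6))   # the two columns agree closely
```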

Software Reliability: Types

- Reliability on a single execution: P(X = 1 | H), modeled by the Bernoulli distribution.

- Reliability over N executions: P(X = x | H), for x = 0, 1, 2, ..., N, given by the Binomial distribution, or by the Poisson distribution for large N and small p.

- Reliability over an infinite number of executions: P(X = x | H), for x = 1, 2, .... Here we are interested in the number of inputs after which the first failure occurs. This is given by the geometric distribution.

Software Reliability: Types [contd.]

- When inputs to the software arrive continuously over time, we are interested in P(X >= x | H), i.e., the probability that the first failure occurs after x time units. This is given by the exponential distribution.

- The time of occurrence of the kth failure is given by the Gamma distribution.

- There are many other reliability models, well over one hundred!

Software failures: Sources of uncertainty

- Uncertainty about the presence and location of defects.
- Uncertainty about the use of run types: will a run for a given input state cause a failure?

Failure Process

- Inputs arrive at an application at random times.

- Some inputs cause failures and others do not.

- T1, T2, ... denote the (CPU) times between successive application failures.

- Most reliability models are centered around these interfailure times.

Failure Intensity and Reliability

Failure intensity λ is the number of failures experienced per unit of time. For example, the failure intensity of an application might be 0.3 failures/hr.

Failure intensity is an alternate way of expressing reliability R(τ), the probability of no failures over a time duration τ.

For a constant failure intensity λ we have R(τ) = e^(-λτ).

It is safe to assume that during testing and debugging the failure intensity decreases with time and thus the reliability increases.
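A minimal sketch of this relationship, assuming a constant failure intensity; the numbers are hypothetical.

```python
from math import exp

def reliability(failure_intensity: float, duration: float) -> float:
    """R(tau) = exp(-lambda * tau) for a constant failure intensity lambda."""
    return exp(-failure_intensity * duration)

# Hypothetical figures: 0.3 failures/hr over an 8-hour period of use.
print(reliability(0.3, 8.0))   # probability of no failure in 8 hours, about 0.091
```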

Jelinski and Moranda Model [1972]

- The application contains an unknown number N of defects.

- Each time the application fails, the defect that caused the failure is removed.

- Debugging is perfect.

- There is a constant relationship between the number of remaining defects and the failure rate.

- The failure rate associated with the ith interfailure time Ti is proportional to (N - i + 1).

Jelinski and Moranda Model [contd.]

Let 0 = S0 <= S1 <= ... <= Si, i = 1, 2, ..., denote the observed software failure times, and let c be a constant. The failure rate for the ith interfailure time Ti is then:

r_Ti(t - S_{i-1}) = c (N - i + 1), for S_{i-1} <= t < S_i

Note that after each repair the failure rate drops by the constant amount c.

[Figure: step plot of the failure rate r(t) against time t; r(t) is constant between successive failure times S0 = 0, S1, S2, S3, ... and drops by c at each failure.]
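A small sketch of the Jelinski-Moranda failure rate; the initial defect count N and the constant c below are hypothetical.

```python
def jm_failure_rate(i: int, n_defects: int, c: float) -> float:
    """Failure rate during the i-th interfailure interval (i = 1, 2, ...)."""
    return c * (n_defects - i + 1)

# Hypothetical: 10 initial defects, proportionality constant c = 0.05 failures/hr per defect.
N, c = 10, 0.05
for i in range(1, 6):
    print(i, jm_failure_rate(i, N, c))   # the rate drops by c after each repaired failure
```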

Musa-Okumoto Model: Terminology

- Execution time: τ
- Execution time measured from the current time: τ'
- Initial failure intensity: λ0 = f K ω0
- Average number of failures experienced by time τ: μ(τ)
- Total number of failures expected in infinite time: ν0 = ω0 / B
- Fault reduction factor: B
- Per-fault hazard rate: φ; note that λ0 / ν0 = B φ

Musa-Okumoto Model: Terminology [contd.]

- Number of inherent faults: ω0 = ω_I × I_S
- Number of inherent faults per source instruction: ω_I
- Fault exposure ratio: K
- Number of source instructions: I_S
- Instruction execution rate: r
- Number of executable object instructions: I
- Linear execution frequency: f = r / I

Musa-Okumoto: Basic Model

Failure intensity for the basic execution-time model:

λ(μ) = λ0 (1 - μ / ν0)

λ(τ) = λ0 e^(-λ0 τ / ν0)

R(τ' | τ) = exp{ -[ν0 e^(-λ0 τ / ν0)] [1 - e^(-λ0 τ' / ν0)] }
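A sketch of the basic execution-time model as reconstructed above; the parameter values are hypothetical.

```python
from math import exp

def basic_intensity(tau: float, lam0: float, nu0: float) -> float:
    """Failure intensity lambda(tau) for the basic execution-time model."""
    return lam0 * exp(-lam0 * tau / nu0)

def basic_reliability(tau_prime: float, tau: float, lam0: float, nu0: float) -> float:
    """R(tau' | tau): probability of no failure in the next tau' units, given tau used so far."""
    expected_failures = nu0 * exp(-lam0 * tau / nu0) * (1 - exp(-lam0 * tau_prime / nu0))
    return exp(-expected_failures)

# Hypothetical parameters: lam0 = 10 failures/CPU-hr, nu0 = 100 failures expected in infinite time.
print(basic_intensity(20.0, 10.0, 100.0))
print(basic_reliability(1.0, 20.0, 10.0, 100.0))
```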

Musa-Okumoto: Logarithmic Poisson Model

Failure intensity decay parameter: θ

Failure intensity for the logarithmic Poisson model:

λ(μ) = λ0 e^(-θμ)

λ(τ) = λ0 / (λ0 θ τ + 1)

R(τ' | τ) = [ (λ0 θ τ + 1) / (λ0 θ (τ + τ') + 1) ]^(1/θ)
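The corresponding sketch for the logarithmic Poisson model; λ0 and the decay parameter θ are again hypothetical values.

```python
def log_poisson_intensity(tau: float, lam0: float, theta: float) -> float:
    """Failure intensity lambda(tau) for the logarithmic Poisson model."""
    return lam0 / (lam0 * theta * tau + 1.0)

def log_poisson_reliability(tau_prime: float, tau: float, lam0: float, theta: float) -> float:
    """R(tau' | tau) for the logarithmic Poisson model."""
    ratio = (lam0 * theta * tau + 1.0) / (lam0 * theta * (tau + tau_prime) + 1.0)
    return ratio ** (1.0 / theta)

# Hypothetical parameters: lam0 = 10 failures/CPU-hr, theta = 0.02 per failure.
print(log_poisson_intensity(20.0, 10.0, 0.02))
print(log_poisson_reliability(1.0, 20.0, 10.0, 0.02))
```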

Failure intensity comparison as a function of average failures experienced

[Figure: failure intensity λ(μ), starting at λ0, plotted against the average number of failures experienced μ; the basic model decreases linearly while the logarithmic Poisson model decays exponentially.]

Failure intensity comparison as a function of execution time

[Figure: failure intensity λ(τ), starting at λ0, plotted against execution time τ for the basic and logarithmic Poisson models.]

Which Model to use?

- Uniform operational profile: use the basic model.
- Nonuniform operational profile: use the logarithmic Poisson model.

Other issues

- Counting failures
- When is a defect repaired?
- Impact of imperfect repair

Independent check against code coverage

A reliability estimate can be cross-checked against the code coverage achieved during testing:

- Low coverage (CL), high reliability estimate (RH): unreliable estimate
- Low coverage (CL), low reliability estimate (RL): unreliable estimate
- High coverage (CH), high reliability estimate (RH): reliable estimate
- High coverage (CH), low reliability estimate (RL): reliable estimate

[Figure: quadrant chart with code coverage (CL to CH) on the horizontal axis and the reliability estimate (RL to RH) on the vertical axis.]

Operational Profile

A quantitative characterization of how an application will be used. This characterization requires knowledge of the input variables.

An input state is a vector of the values of all input variables.

Input variables: an interrupt is an input variable, and so are all environment variables and variables whose values are supplied by the user via the keyboard or from a file in response to a prompt.

Internal variables, computed from one or more input variables, are not input variables. Intermediate results, and interrupts generated during execution as a result of that execution, should not be considered input variables.

Operational Profile [contd.]

Runs of an application that begin with identical input states belong to the same run type.

Example 1: Two withdrawals by the same person, from the same account, and of the same dollar amount belong to the same run type.

Example 2: Reservations made for two different people on the same flight belong to different run types.

Function: a grouping of different run types. A function is conceived at the time of requirements analysis.

Operational Profile [contd.]

Function: a set of different run types. A function is conceived at the time of requirements analysis and is analogous to a use case.

Operation: a set of run types for the application as built.

Input Space: Graphical View

[Figure: the input space shown as a collection of input states grouped into functions Function 1, Function 2, ..., Function k; each function covers a set of input states.]

Functional Profile

Function    Probability of occurrence
F1          0.60
F2          0.35
F3          0.05

Operational Profile

Function    Operation    Probability of occurrence
F1          O11          0.40
F1          O12          0.10
F1          O13          0.10
F2          O21          0.05
F2          O22          0.15
F3          O31          0.15
F3          O33          0.05
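As a sketch of how such a profile can drive test selection, the snippet below draws operations at random in proportion to their occurrence probabilities. The operation names and probabilities are copied from the table above; the selection scheme itself is only illustrative.

```python
import random

# Operational profile from the table above: operation -> probability of occurrence.
profile = {
    "O11": 0.40, "O12": 0.10, "O13": 0.10,
    "O21": 0.05, "O22": 0.15,
    "O31": 0.15, "O33": 0.05,
}

def sample_operations(n):
    """Draw n operations to exercise, weighted by their occurrence probabilities."""
    ops = list(profile)
    weights = [profile[o] for o in ops]
    return random.choices(ops, weights=weights, k=n)

print(sample_operations(10))   # e.g. a 10-step test session that follows the profile
```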

Modes and Operational Profile

Mode      Function    Operation    Probability of occurrence
Normal    F1          O11          0.40
Normal    F1          O12          0.10
Normal    F1          O13          0.10
Normal    F2          O21          0.05
Normal    F2          O22          0.15
Normal    F3          O31          0.15
Normal    F3          O33          0.05

Modes and Operational Profile [contd.]

Mode            Function    Operation    Probability of occurrence
Administrative  AF1         AO11         0.40
Administrative  AF1         AO12         0.10
Administrative  AF2         AO21         0.50

Reliability Estimation Process

1. Develop the operational profile.
2. Perform system test.
3. Collect failure data; remove defects.
4. Compute reliability.
5. If the reliability objective is not met, continue testing (return to step 2); otherwise the application is ready for release.

Risk Assessment

Risk is a combination of two factors:
- Probability of malfunction
- Consequence of malfunction

Risk assessment is useful for:
- Identifying complex modules that need more attention
- Identifying potential trouble spots
- Estimating test effort

Dynamic complexity and coupling metrics can be used to account for the probability of a fault manifesting itself as a failure.

Question of interest

Given the architecture of an application, how does one quantify the risk associated with that architecture?

Note that risk analysis, as described here, can be performed before any code is developed, as soon as the system architecture, in terms of its components and connectors, is available.

Risk Assessment Procedure

1. Develop the system architecture.
2. Develop operational scenarios and their likelihoods.
3. Determine component and connector complexity.
4. Perform severity analysis.
5. Develop risk factors.
6. Develop the component dependency graph (CDG).
7. Perform risk analysis.

Cardiac Pacemaker: Behavior Modes

A behavior mode is indicated by a three-letter acronym L1 L2 L3:

Letter    Meaning                              Values
L1        What is paced?                       A: Atrium, V: Ventricle, D: Dual (both)
L2        Which chamber is being monitored?    A: Atrium, V: Ventricle, D: Dual (both)
L3        What is the mode type?               I: Inhibited, T: Triggered, D: Dual pacing

Example: VVI means the Ventricle is paced when a Ventricular sense does not occur; pacing is Inhibited if a sense does occur.

Pacemaker: Components and Communication

[Figure: component diagram. An external magnet activates the Reed Switch, which enables the Coil Driver and the Communication Gnome; programming commands arrive through the Coil Driver to the Communication Gnome, which drives the Atrial Model and the Ventricular Model; these in turn pace and sense the heart.]

Component Description

- Reed Switch (RS): magnetically activated switch; must be closed before programming can begin.

- Coil Driver (CD): pulsed by the programmer to send 0s and 1s.

- Communication Gnome (CG): receives commands as bytes from the CD and sends them to AR and VT.

- Atrial Model (AR): controls heart pacing.

- Ventricular Model (VT): controls sensing and the refractory period.

Scenarios

- Programming: the programmer sets the operation mode of the device.

- AVI: VT monitors the heart. When a heartbeat is not sensed, AR paces the heart and a refractory period is in effect.

- VVI: the VT component paces the heart when it does not sense any pulse.

- AAI: the AR component paces the heart when it does not sense any pulse.

- VVT: the VT component continuously paces the heart.

- AAT: the AR component continuously paces the heart.

Static Complexity for OO Designs

Coupling: two classes are considered coupled if methods of one class use methods or instance variables of the other class.

Coupling Between Classes (CBC): the total number of other classes to which a class is coupled.

Operational Complexity for Statecharts

Given a program graph G with e edges and n nodes, the cyclomatic complexity is V(G) = e - n + 2.

The dynamic complexity factor of each component is based on the cyclomatic complexity of that component's statechart specification.

For each execution scenario Sk, a subset of the component's statechart specification is executed, thereby exercising state entries, state exits, and fired transitions.

The cyclomatic complexity of the path executed in component Ci during scenario Sk is called the operational complexity, denoted cpx_k(Ci).
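A small sketch of the V(G) computation from an edge list. The example path below is made up for illustration; it is not one of the pacemaker statecharts.

```python
def cyclomatic_complexity(edges) -> int:
    """V(G) = e - n + 2 for a connected program graph given as a list of (src, dst) edges."""
    nodes = {v for edge in edges for v in edge}
    return len(edges) - len(nodes) + 2

# Hypothetical executed path through a statechart: states as nodes, fired transitions as edges.
path_edges = [("idle", "sense"), ("sense", "pace"), ("sense", "idle"), ("pace", "idle")]
print(cyclomatic_complexity(path_edges))   # 4 edges - 3 nodes + 2 = 3
```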

Dealing with Composite States

[Figure: statechart with composite states s1 and s2, substates s11, s21, and s22, initial pseudostates, and transitions t11, t12, and t13 along the path from s11 to s22.]

Cyclomatic complexity for the s11-to-s22 transition:

VGx(s11) + VGa(t11) + VGx(s1) + VGa(t12) + VGe(s2) + VGa(t13) + VGe(s22)

where VGp, p in {x, a, e}, is the complexity of the exit, action, and entry code segments, respectively.

Dynamic Complexity for Statecharts

Each component of the model is assigned a complexity variable.

For each execution scenario these variables are updated with the complexity measure of the thread that is triggered for that particular scenario.

At the end of the simulation, the tool reports the dynamic complexity value for each component.

The average operational complexity is then computed for each component:

cpx(Ci) = Σ_{k=1}^{|S|} PS_k × cpx_k(Ci)

where PS_k is the probability of scenario k and |S| is the total number of scenarios.
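A minimal sketch of this weighted average for a single component. The scenario probabilities are those used later in this deck; the per-scenario complexity values are hypothetical.

```python
# cpx(Ci) = sum over scenarios k of PS_k * cpx_k(Ci).
scenario_prob = {"AVI": 0.29, "AAI": 0.20, "VVT": 0.20, "AAT": 0.15, "VVI": 0.15, "Programming": 0.01}

cpx_per_scenario = {   # hypothetical cpx_k(Ci) values for one component Ci
    "AVI": 7, "AAI": 0, "VVT": 9, "AAT": 0, "VVI": 8, "Programming": 1,
}

cpx = sum(scenario_prob[k] * cpx_per_scenario[k] for k in scenario_prob)
print(round(cpx, 3))   # average operational complexity of the component
```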

Component Complexity

- Sequence diagrams are developed for each scenario.

- Each sequence diagram is used to simulate the corresponding scenario.

- Simulation is used to compute the dynamic complexity of each component.

- The average operational complexity is then computed as the sum of the per-scenario component complexities weighted by the scenario probabilities.

- The component complexities are then normalized against the highest component complexity.

- Domain experts determine the relative probability of occurrence of each scenario. This is akin to the operational profile of an application.

Connector Complexity

Export coupling, EC_k(Ci, Cj), measures the coupling of component Ci with respect to component Cj. It is the percentage of messages sent from Ci to Cj relative to the total number of messages exchanged during the execution of scenario Sk.

The export coupling metric for a pair of components in a single scenario is extended to an operational profile by averaging over all scenarios, weighted by the probabilities of occurrence of the scenarios considered:

EC(Ci, Cj) = Σ_{k=1}^{|S|} PS_k × EC_k(Ci, Cj)
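A sketch of the per-scenario export coupling computation; the message counts below are hypothetical and keyed by (sender, receiver).

```python
# EC_k(Ci, Cj): share of all messages in scenario Sk that are sent from Ci to Cj.
messages_in_scenario = {
    ("CG", "VT"): 6, ("VT", "Heart"): 10, ("Heart", "VT"): 8, ("CG", "AR"): 4,
}

def export_coupling(sender: str, receiver: str) -> float:
    total = sum(messages_in_scenario.values())
    return messages_in_scenario.get((sender, receiver), 0) / total

print(round(export_coupling("VT", "Heart"), 3))   # 10 of 28 messages, about 0.357
# EC(Ci, Cj) over the profile is then the PS_k-weighted average of these per-scenario values.
```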

Connector Complexity [contd.]

- Simulation is used to determine the dynamic coupling measure for each connector.

- Coupling among components is represented in the form of a matrix.

- Coupling values are normalized to the highest coupling value.

Component Complexity Values

Scenario (probability)          RS       CD       CG       AR        VT
AVI (0.29)                      -        -        -        53.2      46.8
AAT (0.15)                      -        -        -        100       -
AAI (0.20)                      -        -        -        100       -
Programming (0.01)              8.3      67.4     24.3     -         -
VVI (0.15)                      -        -        -        -         100
VVT (0.20)                      -        -        -        -         100

% of architecture complexity    0.083    0.674    0.243    50.248    48.572
Normalized                      0.002    0.013    0.005    1.0       0.963

Per-scenario entries are the percentage of that scenario's complexity attributed to each component.

Coupling Matrix

[Table: normalized dynamic coupling (export coupling) matrix among RS, CD, CG, AR, VT, Programmer, and Heart. The largest values involve the heart connectors (VT-Heart = 1.0, the normalization reference, and AR-Heart = 0.873), followed by couplings among CG, AR, VT, and Heart in roughly the 0.12-0.31 range; all couplings involving RS, CD, and the Programmer are 0.011 or less.]

Severity Analysis

- Apart from their complexity, risk also depends on the severity of failure of components and connectors.

- Risk factors are associated with each component and connector by performing severity analysis.

- The basic failure mode(s) of each component/connector and their effects on the overall system are studied using failure mode and effects analysis (FMEA).

- A simulation tool is used to inject faults, one by one, into each component and each connector.

- The effect of each fault, and of the resulting failure, is studied. Domain experts rank the severity of failures, thus ranking the effect of each component or connector failure.

Severity Ranking

Domain experts assign severity indices (svrty_i) to the severity classes:

- Catastrophic (0.95): failure may cause death or total system loss.

- Critical (0.75): failure may cause severe injury, property damage, system damage, or loss of production.

- Marginal (0.5): failure may cause minor injury, property damage, system damage, or delay or loss of production.

- Minor (0.25): failure is not serious enough to cause injury, property damage, or system damage, but results in unscheduled maintenance or repair.

Heuristic Risk Factor

By comparing the result of the simulation with the expected operation, the severity level of each faulty component in a given scenario is determined.

The highest severity index (svrty_i) among the failures of a given component i is assigned as its severity value.

A heuristic risk factor (hrf_i) is then computed for each component from its complexity and severity value:

hrf_i = cpx_i × svrty_i
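A minimal sketch of this computation using the normalized dynamic complexities and severity indices quoted later in this deck; AR's normalized complexity is taken to be 1.0, which is consistent with its tabulated risk factor of 0.95.

```python
# hrf_i = cpx_i * svrty_i for each pacemaker component.
cpx   = {"RS": 0.002, "CD": 0.013, "CG": 0.005, "AR": 1.0,  "VT": 0.963}
svrty = {"RS": 0.25,  "CD": 0.25,  "CG": 0.5,   "AR": 0.95, "VT": 0.95}

hrf = {c: round(cpx[c] * svrty[c], 5) for c in cpx}
print(hrf)   # {'RS': 0.0005, 'CD': 0.00325, 'CG': 0.0025, 'AR': 0.95, 'VT': 0.91485}
```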

FMEA for components (sample)

- Component: RS
  Failure: communication not enabled
  Cause: error in translating the magnet command
  Effect: unable to program the pacemaker; schedule a maintenance task
  Criticality: Minor

- Component: VT
  Failure: no heart pulses are sensed although the heart is working fine
  Cause: heart sensor is malfunctioning
  Effect: heart is paced incorrectly; the patient could be harmed
  Criticality: Critical

FMEA for connectors (sample)

- Connector: AR-Heart
  Failure: failed to pace the heart in AVI mode
  Cause: pacing hardware device malfunction
  Effect: heart operation is irregular
  Criticality: Catastrophic

- Connector: CG-VT
  Failure: incorrect command sent (e.g., ToOff instead of ToIdle)
  Cause: incorrect interpretation of the program bytes
  Effect: incorrect operation mode and pacing of the heart; the device is still monitored by the physician, but immediate maintenance is required
  Criticality: Marginal

Component Risk factors: Using Dynamic Complexity

                      RS        CD        CG        AR        VT
Dynamic complexity    0.002     0.013     0.005     1.0       0.963
Severity              0.25      0.25      0.5       0.95      0.95
Risk factor           0.0005    0.00325   0.0025    0.95      0.91485

Connector Risk factors: Using Dynamic Complexity

[Table: connector risk factors obtained by multiplying each normalized coupling value by the corresponding connector severity index. The dominant risk factors are VT-Heart (0.95) and AR-Heart (0.82935), followed by connectors among CG, AR, VT, and Heart (roughly 0.12 to 0.30); all connectors involving RS, CD, and the Programmer have risk factors below 0.003.]

Component Risk factors: Using Static Complexity

                            RS       CD      CG      AR      VT
CBC (normalized)            0.47     0.8     1.0     0.6     0.6
Severity                    0.25     0.25    0.5     0.95    0.95
Risk factor based on CBC    0.119    0.2     0.5     0.57    0.57

Component Risk factors: Comparison

- Dynamic metrics better distinguish AR and VT as high-risk components when compared with RS, CD, and CG.

- Using static metrics, CG is considered to be at the same risk level as AR and VT.

- In the pacemaker, AR and VT control the heart and hence are the highest-risk components, which is confirmed when the risk factors are computed using dynamic metrics.

Component Dependency Graphs (CDGs)

A CDG is described by sets N and E, where N is a set of nodes and E is a set of edges. s and t are designated as the start and termination nodes and belong to N.

Each node n in N is a tuple <Ci, RCi, ECi>, where Ci is the component corresponding to n, RCi is the reliability of Ci, and ECi is the average execution time of Ci.

Each edge e in E is a tuple <Tij, RTij, PTij>, where Tij is the transition from node Ci to Cj, RTij is the reliability of this transition, and PTij is the transition probability.

In the methodology described here, risk factors replace the reliabilities of components and transitions.

Generation of CDGs

- Estimate the execution probability of each scenario.

- For each scenario, estimate the execution time of each component; then, using the scenario probabilities, compute the average execution time of each component.

- Estimate the probability of each transition.

- Estimate the complexity factor of each component.

- Estimate the complexity factor of each connector.

CDG for the Pacemaker (not all transition labels shown)

[Figure: component dependency graph from start node s to termination node t. Nodes carry <component, risk factor, average execution time>: <Prog, 0, 5>, <RS, 0.0005, 5>, <CD, 0.003, 5>, <CG, 0.0025, 5>, <AR, 0.95, 40>, <VT, 0.95, 40>, <Heart, 0, 5>. Edges carry transition risk factors and transition probabilities, e.g., <_, 0, 0.35>, <_, 0, 0.34>, <_, 0.29, 0.64>.]

Reliability Risk Analysis

The architecture risk factor is obtained by aggregating the risk factors of the individual components and connectors.

Example: let L be the length of an execution sequence, i.e., L is the number of components executed along the sequence. Then the risk factor is given by:

HRF = 1 - Π_{i=1}^{L} (1 - hrf_i)

where hrf_i is the risk factor associated with the ith component, or connector, in the sequence.
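A minimal sketch of this aggregation; the example sequence and its risk factors are illustrative, taken from the component table earlier in the deck.

```python
def sequence_risk(hrfs) -> float:
    """HRF = 1 - prod(1 - hrf_i) over the components/connectors in an execution sequence."""
    survival = 1.0
    for hrf in hrfs:
        survival *= (1.0 - hrf)
    return 1.0 - survival

# Example sequence CG -> AR -> Heart, using component risk factors from the earlier table.
print(round(sequence_risk([0.0025, 0.95, 0.0]), 4))   # about 0.9501, dominated by AR
```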

Risk Analysis Algorithm: OR paths

Traverse the CDG starting at node s; stop when either t is reached or the average application execution time is consumed.

Breadth expansions correspond to OR paths. The risk factors associated with the nodes along a breadth expansion are combined as a sum weighted by the transition probabilities.

Example: from s, edge e1 leads to node n1 and edge e2 leads to node n2, with
e1: <(s, n1), 0, 0.3>
e2: <(s, n2), 0, 0.7>
n1: <C1, 0.5, 5>
n2: <C2, 0.6, 12>

HRF = 1 - [(1 - 0.5)(0.3) + (1 - 0.6)(0.7)]

Risk Analysis Algorithm: AND paths

The depth of a path implies sequential execution. For example, suppose that node n1 is reached from node s via edge e1 and that node n2 is reached from n1 via edge e2. The attributes of the edges and components are:
e1: <(s, n1), 0, 0.3>
e2: <(n1, n2), 0, 0.7>
n1: <C1, 0.5, 5>
n2: <C2, 0.6, 12>

HRF = 1 - [(1 - 0.5)(0.3) × (1 - 0.6)(0.7)];  Time = Time + 5 + 12

The AND paths also take into consideration the connector risk factors (hrf_ij).
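The sketch below restates the two combination rules from the examples above; it is not the full CDG traversal algorithm of Yacoub and Ammar, and the node attributes are those of the toy examples.

```python
def or_paths(branches) -> float:
    """Breadth (OR) expansion: branches are (transition probability, node hrf) pairs."""
    return 1.0 - sum(p * (1.0 - hrf) for p, hrf in branches)

def and_path(steps) -> float:
    """Depth (AND) path: steps are (transition probability, node hrf) pairs in sequence."""
    survival = 1.0
    for p, hrf in steps:
        survival *= (1.0 - hrf) * p
    return 1.0 - survival

print(round(or_paths([(0.3, 0.5), (0.7, 0.6)]), 3))   # 1 - [(1-0.5)*0.3 + (1-0.6)*0.7] = 0.57
print(round(and_path([(0.3, 0.5), (0.7, 0.6)]), 3))   # 1 - [(1-0.5)*0.3 * (1-0.6)*0.7] = 0.958
```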

Pacemaker Risk

Given the architecture and the risk factors associated with its components and connectors, the risk factor of the pacemaker is computed to be approximately 0.9.

This value is considered high. It implies that the pacemaker architecture is critical and that failures are likely to be catastrophic.

Risk analysis tells us that VT and AR are the highest-risk components, and that the connectors between VT, AR, and the heart are the highest-risk connectors.

Advantages of Risk Analysis

- The CDG is useful for the risk analysis of hierarchical systems. Risks for subsystems can be computed and then aggregated to compute the risk of the entire system.

- The CDG is useful for performing sensitivity analysis. One can study the impact of changing the risk factor of a component on the risk associated with the entire system.

- Since the analysis is most likely done prior to coding, one might revise the architecture, or keep the same architecture but allocate coding and testing resources based on the individual risk factors.

Summary

- Reliability: modeling uncertainty, failure intensity, operational profiles, reliability growth models, parameter estimation.

- Risk assessment: architecture, severity analysis, risk factors, CDGs.
