
COVER PAGE
Submission for:
13TH IEEE INTL. CONFERENCE ON AUTOMATED
SOFTWARE ENGINEERING - ASE'98
to:
Dr. David Redmiles
Information and Computer Science
University of California, Irvine
Irvine, CA 92697-3425, USA
Tel: +1 714 824-3823
Fax: +1 714 824-1715
Email: ase98@ics.uci.edu

Title: Operational Profile Testing with Input Dependencies


Author: Dr. D.M. Woit
Address: Dept. of Math, Physics and Computer Science
Ryerson Polytechnic University
350 Victoria St. Toronto, Ontario
CANADA M5B 2K3
Phone: (416) 979-5000-1-7063
FAX: (416) 979-5064
Email: dwoit@scs.ryerson.ca
URL: http://www.scs.ryerson.ca/~dwoit
Operational Profile Testing with Input Dependencies
1 Summary and Conclusions
Testing is an important and costly part of software development. Recently, operational profile testing
has proven to increase productivity, improve customer satisfaction, and reduce cost and time to market
[musa97]. However, this method is limited by (1) lack of adequate, general tool support; (2) lack of
underlying theory to model conditional dependencies among inputs and test runs; and (3) inapplicability
to the large class of software in which dependencies exist among the software's operations [horg96].
We address these drawbacks so that operational profile testing may be more soundly defined and
more widely applicable.
2 Introduction
Operational profile testing differs from traditional system-level testing in that test cases are not selected
to cover aspects of the code or to probe areas of the software's input space we expect more likely
to elicit failure. Instead, test cases are selected so that each operation of the software is exercised
with an intensity proportional to that which it would experience in actual software operation [musa97].
Operational profile testing is necessary when statistical estimates, such as reliability, are to be
calculated for the software. Recently, however, operational profile testing has also emerged as a
practical, effective and efficient method for software testing in general. It forms an important part
of the Best Current Practice suite of methodologies adopted by AT&T and has been successfully used
in testing various applications [donne96]. We wish to provide underlying theory to extend the
applicability of this method, and to provide general-purpose tools for its application.
To perform operational profile testing, one first defines an operational profile for the software under
test, and then randomly selects test cases in accordance with the profile. In [musa96] and [musa93]
a step-by-step description is given by which one can determine an operational profile. The process
begins by identifying high-level customers, customer functions and probabilities of occurrence, and
successively refining these categories until one obtains a list of software operations and associated
probabilities. The expected customers, functions, probabilities, etc., are derived from various sources,
including data collected from previous systems, marketing data, etc. [musa97]. After suitable
decomposition, operations will partition the software's input space (where the input space includes both
explicit inputs and environmental factors, such as the type of telephone with which the user makes a
call). The simplest operational profile is merely a table of all the system's operations and associated
probabilities [musa97]. For example, for a billing system, the input space may consist of tuples
<account, plan, status>, where account can be business or residential; the calling plan can be cpA,
cpB, or none; and status is paid or delinquent. An operational profile for this system would be an
enumeration of all value combinations for <account, plan, status> and associated probabilities, as in
Figure 1 [musa96]. Combinations expected to occur with zero probability are not included.
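
For illustration, the following is a minimal sketch (hypothetical code, not from [musa96] or [musa97]) of how a test-case generator might draw one operation from an explicit profile such as Figure 1 by weighted random selection. The rows shown are the ones visible in Figure 1; a real profile would be completed so the probabilities sum to 1.

#include <stdio.h>
#include <stdlib.h>

/* One row of an explicit operational profile (cf. Figure 1). */
struct op_row {
    const char *operation;
    double probability;
};

/* The rows visible in Figure 1; a complete profile would list every
   nonzero-probability operation so the column sums to 1.0.          */
static const struct op_row profile[] = {
    { "Residential, none, paid",   0.5940 },
    { "Residential, cpA, paid",    0.1584 },
    { "Business, cpB, delinquent", 0.0001 },
};

/* Select one operation at random, with probability proportional to its entry. */
static const char *select_operation(const struct op_row *rows, int n)
{
    double r = (double)rand() / RAND_MAX;  /* uniform in [0,1] */
    double cum = 0.0;
    for (int i = 0; i < n; i++) {
        cum += rows[i].probability;
        if (r <= cum)
            return rows[i].operation;
    }
    return rows[n - 1].operation;          /* guard against rounding */
}

int main(void)
{
    srand(42);
    int n = (int)(sizeof(profile) / sizeof(profile[0]));
    for (int i = 0; i < 10; i++)
        printf("test case %d: %s\n", i, select_operation(profile, n));
    return 0;
}

Each selected operation would then be instantiated as a concrete test case and executed against the software.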
Tabular profiles as in Figure 1 are referred to as explicit profiles. In certain situations implicit profiles
are suggested instead [musa96, musa97]. They are particularly advised when the number of items (rows) in
an explicit operational profile is too large; when a finer granularity in usage measurement is desired;
when testing transaction-based systems; when key input variables present themselves sequentially
in the description of the task being accomplished; or when dependencies exist among input variables.
Implicit profiles are described as graphs or trees, with nodes representing variable values and branches
representing probabilities of values.

Operation                      Occurrence Probability
Residential, none, paid        0.5940
Residential, cpA, paid         0.1584
...                            ...
Business, cpB, delinquent      0.0001
Total                          1.0000

Figure 1: Operational Profile for Billing System

For example, consider the inputs user, call-type, dial-type, answer-type, response, and hold-response,
which describe a phone call. The user can be a manager or office-worker; the call-type can be internal
or external; the dial-type can be abbreviated or standard; the answer-type can be busy, noanswer or
answer; the response can be talk, hold or conference; the hold-response (the action after putting the
call on hold) can be talk or call. However, the probability of call-type depends on who is dialing (the
user); the probability of dial-type depends on the call-type as well as the user; etc. These dependencies
can be captured by a graph, as in Figure 2 [musa96].
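
As a rough sketch (the data structure and probabilities are invented for illustration, and are not the notation of [musa96]), such an implicit profile can be stored as a tree whose nodes carry variable values and whose branches carry conditional probabilities; one test case is then generated by a single root-to-leaf walk:

#include <stdio.h>
#include <stdlib.h>

#define MAX_CHILDREN 4

/* A node of an implicit operational profile: one input-variable value plus
   the conditional probabilities of the next variable's values.             */
struct pnode {
    const char   *value;                /* e.g. "manager", "internal"       */
    int           nchildren;
    double        prob[MAX_CHILDREN];   /* branch probabilities (sum to 1)  */
    struct pnode *child[MAX_CHILDREN];
};

/* Pick one child index according to the branch probabilities. */
static int pick(const double *prob, int n)
{
    double r = (double)rand() / RAND_MAX, cum = 0.0;
    for (int i = 0; i < n; i++) {
        cum += prob[i];
        if (r <= cum) return i;
    }
    return n - 1;
}

/* One root-to-leaf walk yields one test case (one value per input variable). */
static void generate_case(const struct pnode *node)
{
    while (node != NULL) {
        if (node->value[0] != '\0') printf("%s ", node->value);
        if (node->nchildren == 0) break;
        node = node->child[pick(node->prob, node->nchildren)];
    }
    printf("\n");
}

int main(void)
{
    /* Hypothetical fragment of the phone-call profile: user -> call-type. */
    struct pnode internal = { "internal", 0, { 0 }, { 0 } };
    struct pnode external = { "external", 0, { 0 }, { 0 } };
    struct pnode manager  = { "manager",       2, { 0.2, 0.8 }, { &internal, &external } };
    struct pnode worker   = { "office-worker", 2, { 0.7, 0.3 }, { &internal, &external } };
    struct pnode root     = { "",              2, { 0.3, 0.7 }, { &manager,  &worker   } };

    srand(1);
    for (int i = 0; i < 5; i++)
        generate_case(&root);
    return 0;
}

Deeper levels (dial-type, answer-type, response, hold-response) would be added in the same way, with separate subtrees wherever the conditional probabilities differ.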
The modular approach to reliability estimation was originally proposed by Cheung and Ramamoorthy
[CR75], and continues to be modified and refined [Che80, Lit81, Lap84, Kub89, Lit79, Sho83, PMM93, LK96].
3 Extension of Implicit Profiles
The description of operational profile testing indicates that in some cases, sequences of operations
must be selected in a conditional fashion as well. For example, consider a flight-control system with
operations climb, dive, steady-flight, turn, etc. There exists some dependency among the ordering
of operations because the vehicle's position and acceleration can change only a limited amount during a
manoeuvre. No explanation is given as to how to account for such dependencies. In
the description of operational profile testing, it is expected that operations can be defined such that
they are independent. The software is exercised continuously with various operations, selected at
random; it is expected that the system is re-initialized periodically, at a rate similar to that with
which it would be re-started in practice. Thus, for the flight-control software, tests would be a series
of operations, such as climb, turn, steady-flight, turn, dive; the length of each series would be similar
to the length of a typical flight. However, it has been pointed out that for systems that save state,
such operations cannot be considered independent; therefore, the ordering of the operations becomes
important and must be captured in the operational profile [horg96, parnas93, thom90, wadd94,
woit94]. For example, a number of test sequences may be run for the flight-control software, such
that the proportions of operations, climbs, turns, etc., are as expected for actual flights. However, if
the sequences of operations are not what would be expected in operation, the tests are not exercising
the software as it would be exercised in operation. The sequences might not put the software in states
similar to those in actual operation; thus, failures typical of operation might not be detected by the test
sequences, which would defeat the purpose of operational profile testing. No consideration of the necessity
of the ordering of operations is given in [musa96, musa97] because the underlying assumption is that
all operations are independent. However, we note that a graphical ordering, similar to the implicit
profile of Figure 2, would suffice if nodes represent operations.
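
For concreteness, the following hypothetical sketch (the operation set is from the flight-control example above, but the probabilities are invented) generates a test sequence from a table of conditional next-operation probabilities, so that both the proportions and the ordering of operations reflect the assumed usage. Note that it conditions only on the single previous operation, whereas the CCT of section 4 can condition on longer execution histories.

#include <stdio.h>
#include <stdlib.h>

enum op { CLIMB, DIVE, STEADY, TURN, NOPS };
static const char *op_name[NOPS] = { "climb", "dive", "steady-flight", "turn" };

/* next[i][j]: probability that operation j follows operation i.
   Each row sums to 1; the values are invented for illustration.  */
static const double next[NOPS][NOPS] = {
    /* after climb  */ { 0.10, 0.05, 0.60, 0.25 },
    /* after dive   */ { 0.05, 0.10, 0.60, 0.25 },
    /* after steady */ { 0.15, 0.15, 0.40, 0.30 },
    /* after turn   */ { 0.20, 0.20, 0.50, 0.10 },
};

static int pick(const double *p, int n)
{
    double r = (double)rand() / RAND_MAX, cum = 0.0;
    for (int i = 0; i < n; i++) {
        cum += p[i];
        if (r <= cum) return i;
    }
    return n - 1;
}

int main(void)
{
    srand(7);
    int op = STEADY;                         /* assume each flight starts in steady flight */
    for (int step = 0; step < 20; step++) {  /* sequence length comparable to one flight   */
        printf("%s ", op_name[op]);
        op = pick(next[op], NOPS);
    }
    printf("\n");
    return 0;
}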
The graphically represented, implicit operational profile is a very important part of operational profile
testing; it encompasses situations described in the given operational profile creation methodology, such
as input variable dependencies [musa96, musa97]; situations briefly mentioned, but not fully described,
in [musa96, musa97], such as dependencies between operations caused by physical limitations of the
system (a vehicle unable to perform certain actions during certain manoeuvres); and situations not
considered in [musa96, musa97] that arise when the software under test maintains state over a series
of operations. Although implicit profiles are important, no methodology is presented in [musa96, musa97]
describing their specification, other than diagrams of graphs. No general-purpose tools exist for the
specification of such profiles, or for automatic test case generation from such profiles. This is likely
because when operations are considered fully independent, the graph can be a tree whose depth is the
number of input variables. Such profiles would be relatively simple to specify, and application-dependent
tools could be built to walk the trees to produce test cases. However, once our application of implicit
profiles is expanded to allow inter-operational dependencies as well as inter-input dependencies, the
complexity of the graphs and necessary tools can make the method of operational profile testing
infeasible. In the subsequent sections, we present an underlying theory for the expanded implicit
operational profile, as well as our general tools for profile specification and test case generation.
4 The Conditional Specification Method
Several alternatives for incorporating general conditional information into the component interaction
specifications were considered, and the following tabular specification method was selected; it is
referred to as a Conditional, Cyclic Table (CCT). A syntax and semantics for the CCT were developed,
and a tool with which to create and maintain formal CCT specifications was prototyped and utilized.
4.1 Notation and Terminology
In the following, we assume the system comprises n components. One of these components must
be the initial system component (the component executed upon system initialization), due to the
sequential nature of system execution. Without loss of generality, rename the components of the
system M_1, M_2, ..., M_n, s.t. the initial system component is named M_1. Suppose system termination
may be effected from t components of the system, M_{t_1}, M_{t_2}, ..., M_{t_t}, where t_i ∈ [1, n] and
t_i ≠ t_j for i ≠ j. We will model an imaginary component M_{n+1} as a termination component. M_{n+1}
is not a component of the physical system, but will be included in our model of the system to represent
system termination. Therefore, in our system model, each M_{t_i} will be "followed" by M_{n+1} with the
same probability with which M_{t_i} effects system termination in the physical system. Because M_{n+1}
is defined to have perfect reliability, the addition of M_{n+1} to our model will not alter any system
reliability calculated; it simply results in multiplication by 1 (perfect reliability) in the reliability
calculations.
4.2 CCT Properties
Informally, a CCT is a table such that each row comprises a probability distribution on the
components of the system. The first entry in each row contains a condition describing some history of
component execution, ending with a particular component. The remainder of the row stipulates the
probabilities that each of the system components will next execute, given this history of component
execution. Thus, in a CCT, as many probability distributions as required can be specified to describe
which component will execute following a particular component M_i; each probability distribution is
conditional upon how the system arrived in M_i. For example, consider the two conditional statements
from section ??:

    If execution history contains only one execution of component H, then H is followed by M
    with probability .7, and by S with probability .3; otherwise, H is followed by component
    I with probability .2, by component F with probability .2, and by component S with
    probability .6.

Informally, a condition in some row of the CCT could correspond to "a component execution history
containing only one H and ending in H," and the probability distribution in the remainder of the
row would have a ".7" and a ".3" entered for components M and S, respectively. A condition in
another row of the CCT could correspond to "a component execution history containing two or more
executions of H and ending in H," and the probability distribution in the remainder of the row would
have a ".2" entered for both components I and F, and a ".6" entered for S.
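
A minimal sketch of how a test generator might interpret these two rows (the component names, history representation and selection code are hypothetical): given the execution history accumulated so far, it selects the distribution whose condition matches.

#include <stdio.h>
#include <string.h>

/* Count occurrences of a component name in an execution history. */
static int count_occurrences(const char **history, int len, const char *comp)
{
    int n = 0;
    for (int i = 0; i < len; i++)
        if (strcmp(history[i], comp) == 0) n++;
    return n;
}

/* Choose the probability distribution over the components that may follow H,
   according to the two CCT rows described above (both conditions require a
   history ending in H; they differ in how many executions of H it contains). */
static const double *distribution_after_H(const char **history, int len,
                                          const double *row_one_H,
                                          const double *row_many_H)
{
    return (count_occurrences(history, len, "H") == 1) ? row_one_H : row_many_H;
}

int main(void)
{
    /* Hypothetical distributions over components (M, S, I, F), in that order. */
    const double one_H[]  = { 0.7, 0.3, 0.0, 0.0 };   /* history contains only one H */
    const double many_H[] = { 0.0, 0.6, 0.2, 0.2 };   /* two or more executions of H */

    const char *history[] = { "M", "H", "S", "H" };   /* ends in H, contains two H's */
    const double *d = distribution_after_H(history, 4, one_H, many_H);
    printf("P(M)=%.1f  P(S)=%.1f  P(I)=%.1f  P(F)=%.1f\n", d[0], d[1], d[2], d[3]);
    return 0;
}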
More formally, a CCT has the following syntax and semantics:

- A CCT has n + 2 columns, numbered from 0 to n + 1, as follows: Column 0 is a condition
column (described below). Columns 1 to n correspond to system components M_1, M_2, ..., M_n,
respectively. Column n + 1 corresponds to the termination component, M_{n+1}.

- We say that a condition corresponds to a component M_i if that condition describes a component
execution history ending in M_i. A CCT has at least n + 1 rows, numbered from 0 to at least n, as
follows: Row 0 is a component label row; it simply lists the name M_i in cell (0, i), i ∈ [1, n+1].
The remaining rows each represent some condition on component interaction, s.t. there is
at least one condition corresponding to each M_i, i ∈ [1, n], ordered so that all conditions
corresponding to M_i are listed before all conditions corresponding to M_j, for i < j; i, j ∈ [1, n].

- For a CCT with r + 1 rows (numbered 0 to r) and c + 1 columns (numbered 0 to c), cell
(i, j), i ∈ [1, r], j ∈ [1, c], contains P_{i,j}, a floating point number in [0, 1], or a function call
that will evaluate to a floating point number in [0, 1], s.t. Σ_{j=1}^{c} P_{i,j} = 1 for each i ∈ [1, r].
P_{i,j} represents the probability that component M_j will next execute, given that the current
component execution history matches the condition in cell (i, 0). The probabilities within a
particular row must sum to 1 because each row is a probability distribution.
- A condition column cell entry will comprise a conditional statement about component interaction,
specified by a regular-expression-trace. Such a trace must conform to the grammar below,
which we stipulate with BNF notation, where: "|" quotes the symbol |, so that it is taken to be
the symbol itself, and not the BNF "or" operator; and "Mi" corresponds to system component M_i.
<trace> := <component>
| <subtrace>.<trace>
<subtrace> := <component>
| <subtrace>.<subtrace>
| T
| T<int>
| (<subtrace>)<repetition>
| !(<subtrace>)
| (<subtrace>"|"<subtrace>)
<repetition> := *
| *<count>
| _<count>
<count> := <int>
| <int>,<int>
<component> := M1|M2|M3|...|Mn
<int> := 0|1|2|3|4|...

The grammar recognizes traces as sequences of component names or subtraces, separated by
"." and ending in a component name. The syntactic items (...)*, (...)*i, and (...)*i,j, where
i, j ∈ N (natural numbers), i < j, are trace expansion items. They allow one trace to be
expanded into several traces. Specifically,

- (X)* expands to the empty sequence, X, X.X, X.X.X, etc. Thus, (M1.M2)*
represents the empty sequence, M1.M2, M1.M2.M1.M2, etc.

- (X)*i expands to X.X...X, s.t. X is repeated exactly i times. Thus, (M2.T.M5)*2
represents M2.T.M5.M2.T.M5.

- (X)*i,j expands to X.X...X, with k occurrences of X, k ∈ [i, j]. Thus, (M5)*2,4
represents M5.M5, M5.M5.M5, and M5.M5.M5.M5.


The remaining syntactic items, T, Ti, (...)_i, (...)_i,j, !(...), and (...|...), are matching
items. Each trace is a general template for some number of component execution sequences,
each of the form M_{e_1}, M_{e_2}, ..., M_{e_x}, x ∈ I+ (positive integers), e_i ∈ [1, n]. If a sequence of
component executions fits the template of a trace, then it is said to match that trace. Matches
are identified as follows:

- Mi is a subtrace matching component M_i.

- T is a subtrace matching any sequence of component executions (including the empty
sequence).

- Ti also matches any sequence; however, the sequence is named so that this exact sequence
may be referred to subsequently in the trace (a method of repeating subtraces).

- (X)_i matches only those sequences of component executions that match X and have
length exactly i.

- (X)_i,j matches only those sequences of component executions that match X and have
length k ∈ [i, j].

- !(X) matches those sequences of component executions that do not match X.

- (X|Y) matches those sequences of component executions that match X or match Y.

A trace stipulates a set of component execution sequences. This set is determined by finding all
matches for all expansions of the trace. For example, consider the trace (M1.T)_3.(M3)*1,3.M2.
It expands to (M1.T)_3.M3.M2, (M1.T)_3.M3.M3.M2, and (M1.T)_3.M3.M3.M3.M2. If the
system comprises only three components, M1, M2, and M3, then the trace matches the following
27 component execution sequences:
M1.M1.M1.M3.M2, M1.M1.M1.M3.M3.M2, M1.M1.M1.M3.M3.M3.M2,
M1.M1.M2.M3.M2, M1.M1.M2.M3.M3.M2, M1.M1.M2.M3.M3.M3.M2,
M1.M1.M3.M3.M2, M1.M1.M3.M3.M3.M2, M1.M1.M3.M3.M3.M3.M2,
M1.M2.M1.M3.M2, M1.M2.M1.M3.M3.M2, M1.M2.M1.M3.M3.M3.M2,
...

M1.M3.M3.M3.M2, M1.M3.M3.M3.M3.M2, M1.M3.M3.M3.M3.M3.M2
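
These 27 sequences can be enumerated mechanically; the short sketch below (illustrative only) does so by filling the T in (M1.T)_3 with every length-2 combination of the three components and expanding (M3)*1,3 to one, two or three occurrences of M3:

#include <stdio.h>

int main(void)
{
    static const char *comp[] = { "M1", "M2", "M3" };  /* the three system components */

    /* (M1.T)_3: M1 followed by any two components; (M3)*1,3: one to three M3's; then M2. */
    for (int x = 0; x < 3; x++)
        for (int y = 0; y < 3; y++)
            for (int reps = 1; reps <= 3; reps++) {
                printf("M1.%s.%s", comp[x], comp[y]);
                for (int r = 0; r < reps; r++)
                    printf(".M3");
                printf(".M2\n");
            }
    return 0;
}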



Figure 2: Example specification with only a portion of the CCT visible

4.3 CCT Tool


The tool employs a typical window-based GUI, as shown in Figure 2. When a new specification
is to be created, a table is presented, into which the engineer enters the system component names
representing M_1, M_2, ..., M_{n+1} into cells (0,1) to (0,n+1). Regular-expression-traces are entered into
column 0, commencing with cell (1,0). In cells (i, j), i, j > 0, the engineer enters either a floating
point number, or a call to a function that returns a floating point number. For any functions used,
the engineer must supply corresponding C code. This is accomplished by selecting the appropriate
button, and entering code into the window provided. Component reliabilities are included in the
CCT specification so that the reliability estimation tool (see section 5) can perform its calculations.
They are added by selecting the appropriate button, and can be "attached" to cells (0,1) ... (0,c),
(1,0) ... (r,0), where c+1 and r+1 are the number of columns and rows, respectively, in the CCT.
When a reliability is attached to a cell (0,j), j ∈ [1, c], this indicates that the reliability of component
M_j is unconditional upon how this component is utilized in the system. When it is attached to a cell
(i,0), i ∈ [1, r], this indicates that the reliability of M_k is conditional upon the component execution
history matching T.M_k, where the entry in cell (i,0) is consistent with T.M_k.
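
For illustration only, a cell entry that is a function call rather than a constant might be backed by C code such as the following; the function name and the interface the tool expects are assumptions, not part of the tool description above.

#include <time.h>

/* Hypothetical probability function for a CCT cell: returns a value in [0,1].
   Here the transition probability is assumed to depend on the time of day.   */
double p_external_call(void)
{
    time_t now = time(NULL);
    struct tm *t = localtime(&now);

    /* Assume external calls are more likely during business hours. */
    return (t->tm_hour >= 9 && t->tm_hour < 17) ? 0.6 : 0.2;
}

The other cells in the same row would, of course, have to be specified so that the row still sums to 1 whatever value the function returns.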

4.4 Example of Usage


The CCT tool has been used successfully to capture component interaction specifications. Figure 2
displays one such specification, in which only a portion of the CCT is visible, for brevity. This example
system allows colleague collaboration by sharing text and graphics in both an interactive and a non-
interactive environment. In this portion of the system, notes, which are stored in a note database, can
be created and edited. A note may have attributes such as "original author", "note-type", "graphics",
etc. Aspects relate to portions of the notes' contents, such as "reply-author", "followup-text", etc.,
and are referred to as SA-A, SA-B, SA-C, etc. When a note is created, attributes and selected
aspects are stored; when a note is edited, attributes need not be stored, but selected aspects are
stored. In this example, component StoreAspect is not particularly cohesive, and provides different
functionality depending on whether a note is being edited or created. This is a situation in which it
is reasonable to attach two reliability estimates to one component.
5 System Reliability
Consider a conventional component interaction specification: a matrix with entries in cells (0, j)
and (j, 0) being M_j, j ∈ [1, n+1], and with entries in cells (i, j), i, j ∈ [1, n+1], being floating
point numbers P_{i,j}, s.t. for each i ∈ [1, n], Σ_{j=1}^{n+1} P_{i,j} = 1. Such a matrix corresponds to
a sequence set, SM, of component execution sequences, s.t. each sequence in SM is of the form
M_1.M_{e_1}.M_{e_2}...M_{e_x}.M_{n+1}, e_x ∈ [1, n]. Informally, these are all the sequences that could be obtained
by Monte Carlo simulation applied to the matrix, beginning at M_1, the initial system component.
Theoretically, system reliability could be calculated by summing the products of the components'
reliabilities and probabilities along each sequence in SM. In practice, however, this explicit calculation
is often impossible, because SM could be infinite in both number of elements and element length,
due to interaction cycles. Markov analysis techniques can calculate this system reliability figure
without explicitly identifying SM, by solving systems of equations involving implicit sequence cycles.
This is accomplished by processing the first-order Markov matrix associated with SM. Fortunately,
in the conventional component interaction specification there is no need to calculate this matrix: it is the
component interaction specification itself.
Consider some CCT complying with the properties of section 4.2. Such a CCT corresponds to a
sequence set, SC, of component execution sequences, s.t. each sequence in SC is also of the form
M_1.M_{e_1}.M_{e_2}...M_{e_x}.M_{n+1}, e_x ∈ [1, n]. As with set SM above, system reliability is the sum of the
products of the reliabilities and probabilities along each sequence, but typically cannot be explicitly
calculated as such. As in the conventional method, reliability can be calculated from a first-order
Markov matrix associated with SC. However, in our CCT method, the first-order Markov matrix
corresponding to SC is not given (the CCT is not such a matrix). Therefore, in order to calculate
system reliability, the first-order Markov matrix corresponding to SC must be derived. Such a
derivation is performed by our system reliability estimation tool. This tool will read a component
interaction specification created with the specification tool (see section 4.3), process it, create the
first-order Markov matrix corresponding to the sequence set associated with the specification, and
finally use typical Markov analysis techniques [Che80, LK96] to derive system reliability from the
matrix. The necessary algorithms have been established, and the reliability tool has been prototyped.
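
As a minimal sketch of the kind of Markov calculation [Che80, LK96] referred to above (the matrix, the component reliabilities and the use of simple fixed-point iteration are all assumptions for illustration; the tool itself derives the matrix from the CCT), system reliability can be computed as the probability of reaching the termination component with every component along the way executing correctly:

#include <stdio.h>
#include <math.h>

#define N 3   /* physical components M1..M3; component 4 models termination */

int main(void)
{
    /* Hypothetical first-order Markov matrix: P[i][j] is the probability that
       component j+1 executes immediately after component i+1; the last row is
       the absorbing termination component.                                    */
    static const double P[N + 1][N + 1] = {
        { 0.0, 0.6, 0.3, 0.1 },   /* from M1 */
        { 0.2, 0.0, 0.5, 0.3 },   /* from M2 */
        { 0.0, 0.4, 0.0, 0.6 },   /* from M3 */
        { 0.0, 0.0, 0.0, 1.0 },   /* from the termination component */
    };
    /* Hypothetical component reliabilities; termination is perfectly reliable. */
    static const double R[N + 1] = { 0.999, 0.995, 0.990, 1.0 };

    /* v[i]: probability of eventually terminating with no failure, starting in
       component i+1.  Fixed point of  v[i] = R[i] * sum_j P[i][j] * v[j],
       with v[N] = 1 for the termination component.                             */
    double v[N + 1] = { 0.0, 0.0, 0.0, 1.0 };
    for (int iter = 0; iter < 10000; iter++) {
        double w[N + 1], diff = 0.0;
        w[N] = 1.0;
        for (int i = 0; i < N; i++) {
            double s = 0.0;
            for (int j = 0; j <= N; j++)
                s += P[i][j] * v[j];
            w[i] = R[i] * s;
            diff += fabs(w[i] - v[i]);
        }
        for (int i = 0; i <= N; i++) v[i] = w[i];
        if (diff < 1e-12) break;
    }
    printf("estimated system reliability (starting in M1): %.6f\n", v[0]);
    return 0;
}

Summing the reliability-weighted probabilities in this way is equivalent to summing the products of reliabilities and probabilities over every sequence in the sequence set, without enumerating the (possibly infinite) set itself.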
6 Model Appropriateness
It should be noted that we have described the CCT model in terms of sequential systems; however,
with minor modifications, it is applicable to non-sequential systems. This involves replacing the
components in the CCT with abstract states, s.t. one state corresponds to a set of concurrently
executing components, as in [LK96]. Since our model is an extension of previous and most recent
work in this area, e.g., [Che80, LK96], it is equally appropriate for describing component interactions
of modular systems. We have identified areas in which the underlying model is incomplete, and have
determined methods of mitigating potential inconsistencies [Woi97].

References
[Ber97] L. Bernstein. Software dynamics: Planning for the next century. In Proceedings 10th Intl. Software
Quality Week, May 27-30, 1997.
[Che80] R.C. Cheung. A user-oriented software reliability model. IEEE Trans. Software Engineering, SE-
6(2):118-125, March 1980.
[CR75] R.C. Cheung and C.V. Ramamoorthy. Optimal measurement of program path frequencies and its
applications. In Proceedings 1975 Intl. Fed. Automat. Contr. Congr., Aug. 1975.
[Fel69] W. Feller. An introduction to probability theory and its applications. John Wiley and Sons, New
York, 1969.
[Kub89] P. Kubat. Assessing reliability of modular software. Operations Research Letters, 8(1):35-41, Feb.
1989.
[Lap84] J.-C. Laprie. Dependability evaluation of software systems in operation. IEEE Trans. Software
Engineering, SE-10(6):701-714, Nov. 1984.
[Lit79] B. Littlewood. Software reliability model for modular program structure. IEEE Trans. Reliability,
R-28(3):241-246, Aug. 1979.
[Lit81] B. Littlewood. Stochastic reliability growth: A model for fault-removal in computer programs and
hardware designs. IEEE Trans. Reliability, R-30(4):313-320, Oct. 1981.
[LK96] J.-C. Laprie and K. Kanoun. Software reliability and system reliability. In M. Lyu, editor, Software
Reliability Engineering, pages 27-70, New York, 1996. McGraw-Hill.
[Mus97] J. Musa. Applying operational profiles in testing. In Proceedings 10th Intl. Software Quality Week,
May 27-30, 1997.
[PMM93] J.H. Poore, H.D. Mills, and D. Mutchler. Planning and certifying software system reliability. IEEE
Software, pages 88-99, Jan. 1993.
[Sho83] M.L. Shooman. Software engineering: design, reliability and management. McGraw-Hill, New York,
1983.
[Woi97] D.M. Woit. Modeling component interactions of modular systems. CRL Report work-in-progress,
Faculty of Engineering, McMaster University, Hamilton, Ontario, Canada, 1997.
