
Software Testing Glossary

1 acceptance testing: Formal testing conducted to enable a user, customer, or other authorized entity to determine whether to accept a system or component. [IEEE]
2 actual outcome: The behaviour actually produced when the object is tested under specified conditions.
3 ad hoc testing: Testing carried out using no recognised test case design technique.
4 alpha testing: Simulated or actual operational testing at an in-house site not otherwise involved with the software developers.
5 arc testing: See branch testing.
6 Backus-Naur form: A metalanguage used to formally describe the syntax of a language. See BS 6154.
7 basic block: A sequence of one or more consecutive, executable statements containing no branches.
8 basis test set: A set of test cases derived from the code logic which ensure that 100% branch coverage is achieved.
9 bebugging: See error seeding. [Abbott]
10 behaviour: The combination of input values and preconditions and the required response for a function of a system. The full specification of a function would normally comprise one or more behaviours.
11 beta testing: Operational testing at a site not otherwise involved with the software developers.
12 big-bang testing: Integration testing where no incremental testing takes place prior to all the system's components being combined to form the system.
13 black box testing: See functional test case design.
14 bottom-up testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
15 boundary value: An input value or output value which is on the boundary between equivalence classes, or an incremental distance either side of the boundary.
16 boundary value analysis: A test case design technique for a component in which test cases are designed which include representatives of boundary values.
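To illustrate entries 15 and 16, a minimal sketch in Python (the grade component and its pass mark of 50 are hypothetical, not drawn from any cited standard). The class boundaries lie at 0, 50 and 100, so each test value sits on a boundary or an incremental distance of 1 either side of it:

    def grade(score: int) -> str:
        """Hypothetical component: accepts scores 0..100, pass mark 50."""
        if score < 0 or score > 100:
            raise ValueError("score out of range")
        return "pass" if score >= 50 else "fail"

    # Boundary value analysis: test each boundary value and a value an
    # incremental distance (here 1) either side of it.
    cases = [(-1, ValueError), (0, "fail"), (1, "fail"),
             (49, "fail"), (50, "pass"), (51, "pass"),
             (99, "pass"), (100, "pass"), (101, ValueError)]
    for value, predicted in cases:
        try:
            actual = grade(value)
        except ValueError:
            actual = ValueError
        print(value, "ok" if actual == predicted else "FAIL")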

17 boundary value coverage: The percentage of boundary values of the component's equivalence classes which have been exercised by a test case suite.
18 boundary value testing: See boundary value analysis.
19 branch: A conditional transfer of control from any statement to any other statement in a component, or an unconditional transfer of control from any statement to any other statement in the component except the next statement, or, when a component has more than one entry point, a transfer of control to an entry point of the component.
20 branch condition: See decision condition.
21 branch condition combination coverage: The percentage of combinations of all branch condition outcomes in every decision that have been exercised by a test case suite.
22 branch condition combination testing: A test case design technique in which test cases are designed to execute combinations of branch condition outcomes.
23 branch condition coverage: The percentage of branch condition outcomes in every decision that have been exercised by a test case suite.
24 branch condition testing: A test case design technique in which test cases are designed to execute branch condition outcomes.
25 branch coverage: The percentage of branches that have been exercised by a test case suite.
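The differences between entries 21, 23 and 25 are easiest to see on a single decision containing two branch conditions. A minimal Python sketch (the discount component and its test values are hypothetical):

    def discount(age: int, member: bool) -> float:
        """Hypothetical component: one decision, two branch conditions."""
        if age >= 65 or member:    # decision; conditions: age >= 65, member
            return 0.10
        return 0.0

    # branch coverage (25): both decision outcomes.
    #   discount(70, False) -> decision TRUE; discount(30, False) -> FALSE.
    # branch condition coverage (23): each condition takes TRUE and FALSE.
    #   discount(70, False) gives (T, F); discount(30, True) gives (F, T);
    #   note neither makes the decision FALSE, so 100% branch condition
    #   coverage does not imply 100% branch coverage.
    # branch condition combination coverage (21): all 2^2 combinations.
    for age, member in [(70, True), (70, False), (30, True), (30, False)]:
        print(age, member, discount(age, member))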
26 branch outcome: See decision outcome.
27 branch point: See decision.
28 branch testing: A test case design technique for a component in which test cases are designed to execute branch outcomes.
29 bug: See fault.
30 bug seeding: See error seeding.
31 C-use: See computation data use.
32 capture/playback tool: A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time.
33 capture/replay tool: See capture/playback tool.
34 CAST: Acronym for computer-aided software testing.
35 cause-effect graph: A graphical representation of inputs or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.
36 cause-effect graphing: A test case design technique in which test cases are designed by consideration of cause-effect graphs.
37 certification: The process of confirming that a system or component complies with its specified requirements and is acceptable for operational use. From [IEEE].
38 Chow's coverage metrics: See N-switch coverage. [Chow]
39 code coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
40 code-based testing: Designing tests based on objectives derived from the implementation (e.g., tests that execute specific control flow paths or use specific data items).
41 compatibility testing: Testing whether the system is compatible with other systems with which it should communicate.
42 complete path testing: See exhaustive testing.
43 component: A minimal software item for which a separate specification is available.
44 component testing: The testing of individual software components. After [IEEE].
45 computation data use: A data use not in a condition. Also called C-use.
46 condition: A Boolean statement containing no Boolean operators. For instance, A<B is a condition but A and B is not.
47 condition coverage: See branch condition coverage.
48 condition outcome: The evaluation of a condition to TRUE or FALSE.
49 conformance criterion: Some method of judging whether or not the component's action on a particular specified input value conforms to the specification.
50 conformance testing: The process of testing that an implementation conforms to the specification on which it is based.
51 control flow: An abstract representation of all possible sequences of events in a program's execution.
52 control flow graph: The diagrammatic representation of the possible alternative control flow paths through a component.
53 control flow path: See path.
54 conversion testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
55 correctness: The degree to which software conforms to its specification.
56 coverage: The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test case suite.
57 coverage item: An entity or property used as a basis for testing.
58 data definition: An executable statement where a variable is assigned a value.
59 data definition C-use coverage: The percentage of data definition C-use pairs in a component that are exercised by a test case suite.
60 data definition C-use pair: A data definition and computation data use, where the data use uses the value defined in the data definition.
61 data definition P-use coverage: The percentage of data definition P-use pairs in a component that are exercised by a test case suite.
62 data definition P-use pair: A data definition and predicate data use, where the data use uses the value defined in the data definition.
63 data definition-use coverage: The percentage of data definition-use pairs in a component that are exercised by a test case suite.
64 data definition-use pair: A data definition and data use, where the data use uses the value defined in the data definition.
65 data definition-use testing: A test case design technique for a component in which test cases are designed to execute data definition-use pairs.
66 data flow coverage: Test coverage measure based on variable usage within the code. Examples are data definition-use coverage, data definition P-use coverage, data definition C-use coverage, etc.
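A short Python sketch tying entries 58 to 66 together (the clamp_total component is hypothetical; the annotations are by hand):

    def clamp_total(prices):
        total = 0                  # data definition of total
        for price in prices:       # data definition of price
            total = total + price  # C-use of price and total, and a
                                   # new data definition of total
        if total > 100:            # P-use of total (use in a predicate)
            total = 100            # data definition on the TRUE outcome
        return total               # C-use of total

    # The data definition P-use pair (total = 0, total > 100) is exercised
    # whenever the predicate is reached with the value from that
    # definition, e.g. by clamp_total([]).
    print(clamp_total([]))         # 0
    print(clamp_total([60, 70]))   # 100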
67 data flow testing: Testing in which test cases are designed based on variable usage within the code.
68 data use: An executable statement where the value of a variable is accessed.
69 debugging: The process of finding and removing the causes of failures in software.
70 decision: A program point at which the control flow has two or more alternative routes.
71 decision condition: A condition within a decision.
72 decision coverage: The percentage of decision outcomes that have been exercised by a test case suite.
73 decision outcome: The result of a decision (which therefore determines the control flow alternative taken).
74 design-based testing: Designing tests based on objectives derived from the architectural or detail design of the software (e.g., tests that execute specific invocation paths or probe the worst case behaviour of algorithms).
75 desk checking: The testing of software by the manual simulation of its execution.
76 dirty testing: See negative testing. [Beizer]
77 documentation testing: Testing concerned with the accuracy of documentation.
78 domain: The set from which values are selected.
79 domain testing: See equivalence partition testing.
80 dynamic analysis: The process of evaluating a system or component based upon its behaviour during execution.
81 emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.
82 entry point: The first executable statement within a component.
83 equivalence class: A portion of the component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification.
84 equivalence partition: See equivalence class.
85 equivalence partition coverage: The percentage of equivalence classes generated for the component which have been exercised by a test case suite.
86 equivalence partition testing: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
87 error: A human action that produces an incorrect result. [IEEE]
88 error guessing: A test case design technique where the experience of the tester is used to postulate what faults might occur, and to design tests specifically to expose them.
89 error seeding: The process of intentionally adding known faults to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of faults remaining in the program.
90 executable statement: A statement which, when compiled, is translated into object code, which will be executed procedurally when the program is running and may perform an action on program data.
91 exercised: A program element is exercised by a test case when the input value causes the execution of that element, such as a statement, branch, or other structural element.
92 exhaustive testing: A test case design technique in which the test case suite comprises all combinations of input values and preconditions for component variables.
93 exit point: The last executable statement within a component.
94 expected outcome: See predicted outcome.
95 facility testing: See functional test case design.
96 failure: Deviation of the software from its expected delivery or service.
97 fault: A manifestation of an error in software. A fault, if encountered, may cause a failure.
98 feasible path: A path for which there exists a set of input values and execution conditions which causes it to be executed.
99 feature testing: See functional test case design.
100 functional specification: The document that describes in detail the characteristics of the product with regard to its intended capability. [BS 4778, Part 2]
101 functional test case design: Test case selection that is based on an analysis of the specification of the component without reference to its internal workings.
102 glass box testing: See structural test case design.
103 incremental testing: Integration testing where system components are integrated into the system one at a time until the entire system is integrated.
104 independence: Separation of responsibilities which ensures the accomplishment of objective evaluation. After [DO-178B].
105 infeasible path: A path which cannot be exercised by any set of possible input values.
106 input: A variable (whether stored within a component or outside it) that is read by the component.
107 input domain: The set of all possible inputs.
108 input value: An instance of an input.
109 inspection: A group review quality improvement process for written material. It consists of two aspects: product improvement (of the document itself) and process improvement (of both document production and inspection). After [Graham]
110 installability testing: Testing concerned with the installation procedures for the system.
111 instrumentation: The insertion of additional code into the program in order to collect information about program behaviour during program execution.
112 instrumenter: A software tool used to carry out instrumentation.
113 integration: The process of combining components into larger assemblies.
114 integration testing: Testing performed to expose faults in the interfaces and in the interaction between integrated components.
115 interface testing: Integration testing where the interfaces between system components are tested.
116 isolation testing: Component testing of individual components in isolation from surrounding components, with surrounding components being simulated by stubs.
117 LCSAJ: A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.
118 LCSAJ coverage: The percentage of LCSAJs of a component which are exercised by a test case suite.
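LCSAJs are easiest to see against a numbered listing. A hand-annotated Python sketch (the component is hypothetical, and the triples shown are illustrative, treating the loop and the if as jumps between the numbered lines):

    def count_negatives(values):   # line 1
        n = 0                      # line 2
        for v in values:           # line 3
            if v < 0:              # line 4
                n += 1             # line 5
        return n                   # line 6

    # Some LCSAJ triples (start, end, jump target) for this listing:
    #   (1, 3, 6): entry runs lines 1-3, loop never entered;
    #              exercised by count_negatives([])
    #   (1, 4, 3): lines 1-4 with v >= 0, jump back to line 3;
    #              exercised by count_negatives([1])
    #   (3, 5, 3): a later iteration with v < 0, jump back to line 3;
    #              exercised by count_negatives([1, -2])
    print(count_negatives([-1, 2, -3]))   # 2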
119 LCSAJ testing: A test case design technique for a component in which test cases are designed to execute LCSAJs.
120 logic-coverage testing: See structural test case design. [Myers]
121 logic-driven testing: See structural test case design.
122 maintainability testing: Testing whether the system meets its specified objectives for maintainability.
123 modified condition/decision coverage: The percentage of all branch condition outcomes that independently affect a decision outcome that have been exercised by a test case suite.
124 modified condition/decision testing: A test case design technique in which test cases are designed to execute branch condition outcomes that independently affect a decision outcome.
125 multiple condition coverage: See branch condition combination coverage.
126 mutation analysis: A method to determine test case suite thoroughness by measuring the extent to which a test case suite can discriminate the program from slight variants (mutants) of the program. See also error seeding.
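A minimal Python sketch of entry 126 (both functions are hypothetical; the mutant replaces >= with >):

    def is_adult(age: int) -> bool:
        return age >= 18           # original program

    def is_adult_mutant(age: int) -> bool:
        return age > 18            # slight variant (mutant): >= becomes >

    # A test case suite discriminates ("kills") the mutant if some test
    # case produces different outcomes from the two programs. A suite
    # without the boundary value 18 cannot tell them apart:
    for suite in ([10, 30], [10, 18, 30]):
        killed = any(is_adult(a) != is_adult_mutant(a) for a in suite)
        print(suite, "kills the mutant:", killed)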
127 N-switch coverage: The percentage of sequences of N-transitions that have been exercised by a test case suite.
128 N-switch testing: A form of state transition testing in which test cases are designed to execute all valid sequences of N-transitions.
129 N-transitions: A sequence of N+1 transitions.
130 negative testing: Testing aimed at showing software does not work. [Beizer]

131 non-functional requirements testing: Testing of those requirements that do not relate to functionality, e.g. performance, usability, etc.
132 operational testing: Testing conducted to evaluate a system or component in its operational environment. [IEEE]
133 oracle: A mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test. After [Adrion]
134 outcome: Actual outcome or predicted outcome. This is the outcome of a test. See also branch outcome, condition outcome and decision outcome.
135 output: A variable (whether stored within a component or outside it) that is written to by the component.
136 output domain: The set of all possible outputs.
137 output value: An instance of an output.
138 P-use: See predicate data use.
139 partition testing: See equivalence partition testing. [Beizer]
140 path: A sequence of executable statements of a component, from an entry point to an exit point.
141 path coverage: The percentage of paths in a component exercised by a test case suite.
142 path sensitizing: Choosing a set of input values to force the execution of a component to take a given path.
143 path testing: A test case design technique in which test cases are designed to execute paths of a component.
144 performance testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. [IEEE]
145 portability testing: Testing aimed at demonstrating the software can be ported to specified hardware or software platforms.
146 precondition: Environmental and state conditions which must be fulfilled before the component can be executed with a particular input value.
147 predicate: A logical statement which evaluates to TRUE or FALSE, normally to direct the execution path in code.
148 predicate data use: A data use in a predicate.
149 predicted outcome: The behaviour predicted by the specification of an object under specified conditions.
150 program instrumenter: See instrumenter.
151 progressive testing: Testing of new features after regression testing of previous features. [Beizer]
152 pseudo-random: A series which appears to be random but is in fact generated according to some prearranged sequence.
153 recovery testing: Testing aimed at verifying the system's ability to recover from varying degrees of failure.
154 regression testing: Retesting of a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.
155 requirements-based testing: Designing tests based on objectives derived from requirements for the software component (e.g., tests that exercise specific functions or probe non-functional constraints such as performance or security). See functional test case design.
156 result: See outcome.
157 review: A process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users or other interested parties for comment or approval. [IEEE]
158 security testing: Testing whether the system meets its specified security objectives.
159 serviceability testing: See maintainability testing.
160 simple subpath: A subpath of the control flow graph in which no program part is executed more than necessary.
161 simulation: The representation of selected behavioural characteristics of one physical or abstract system by another system. [ISO 2382/1]
162 simulator: A device, computer program or system used during software verification, which behaves or operates like a given system when provided with a set of controlled inputs. [IEEE, DO-178B]
163 source statement: See statement.
164 specification: A description of a component's function in terms of its output values for specified input values under specified preconditions.
165 specified input: An input for which the specification predicts an outcome.
166 state transition: A transition between two allowable states of a system or component.
167 state transition testing: A test case design technique in which test cases are designed to execute state transitions.
168 statement: An entity in a programming language which is typically the smallest indivisible unit of execution.
169 statement coverage: The percentage of executable statements in a component that have been exercised by a test case suite.
170 statement testing: A test case design technique for a component in which test cases are designed to execute statements.
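Statement coverage is a weaker measure than branch coverage (entry 25), which a short Python sketch makes concrete (the component is hypothetical):

    def apply_discount(total: float, member: bool) -> float:
        if member:
            total = total * 0.9    # the only statement inside the decision
        return total

    # apply_discount(100.0, True) alone exercises every executable
    # statement (100% statement coverage) but only the TRUE branch of the
    # decision, so branch coverage is 50%; apply_discount(100.0, False)
    # is also needed to reach 100% branch coverage.
    print(apply_discount(100.0, True))    # 90.0
    print(apply_discount(100.0, False))   # 100.0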
171 static analysis: Analysis of a program carried out without executing the program.
172 static analyzer: A tool that carries out static analysis.
173 static testing: Testing of an object without execution on a computer.
174 statistical testing: A test case design technique in which a model of the statistical distribution of the input is used to construct representative test cases.
175 storage testing: Testing whether the system meets its specified storage objectives.
176 stress testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements. [IEEE]
177 structural coverage: Coverage measures based on the internal structure of the component.
178 structural test case design: Test case selection that is based on an analysis of the internal structure of the component.
179 structural testing: See structural test case design.
180 structured basis testing: A test case design technique in which test cases are derived from the code logic to achieve 100% branch coverage.
181 structured walkthrough: See walkthrough.
182 stub: A skeletal or special-purpose implementation of a software module, used to develop or test a component that calls or is otherwise dependent on it. After [IEEE].
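A minimal Python sketch of entry 182 (the checkout component and the payment gateway are hypothetical), also illustrating isolation testing (entry 116):

    def checkout(amount: float, gateway) -> str:
        """Hypothetical component that depends on a payment gateway."""
        return "paid" if gateway.charge(amount) else "declined"

    class GatewayStub:
        """Skeletal stand-in for the real gateway, so checkout can be
        tested in isolation with a canned answer and no network call."""
        def __init__(self, succeed: bool):
            self.succeed = succeed

        def charge(self, amount: float) -> bool:
            return self.succeed

    print(checkout(25.0, GatewayStub(succeed=True)))    # paid
    print(checkout(25.0, GatewayStub(succeed=False)))   # declined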
183 subpath: A sequence of executable statements within a component.
184 symbolic evaluation: See symbolic execution.
185 symbolic execution: A static analysis technique that derives a symbolic statement for program paths.
186 syntax testing: A test case design technique for a component or system in which test case design is based upon the syntax of the input.
187 system testing: The process of testing an integrated system to verify that it meets specified requirements. [Hetzel]
188 technical requirements testing: See non-functional requirements testing.
189 test automation: The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.
190 test case: A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. After [IEEE, DO-178B]
191 test case design technique: A method used to derive or select test cases.
192 test case suite: A collection of one or more test cases for the software under test.
193 test comparator: A test tool that compares the actual outputs produced by the software under test with the expected outputs for that test case.
194 test completion criterion: A criterion for determining when planned testing is complete, defined in terms of a test measurement technique.
195 test coverage: See coverage.
196 test driver: A program or test tool used to execute software against a test case suite.
197 test environment: A description of the hardware and software environment in which the tests will be run, and any other software with which the software under test interacts when under test, including stubs and test drivers.
198 test execution: The processing of a test case suite by the software under test, producing an outcome.
199 test execution technique: The method used to perform the actual test execution, e.g. manual, capture/playback tool, etc.
200 test generator: A program that generates test cases in accordance with a specified strategy or heuristic.
201 test harness: A testing tool that comprises a test driver and a test comparator.
202 test measurement technique: A method used to measure test coverage items.
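Entries 190, 193, 196 and 201 fit together as in the following Python sketch (the add component and its suite are hypothetical): the loop acts as the test driver and the equality check as the test comparator, together forming a minimal test harness:

    def add(a, b):
        """Hypothetical software under test."""
        return a + b

    # Test case suite: (inputs, predicted outcome) pairs.
    suite = [((1, 2), 3), ((-1, 1), 0), ((0, 0), 0)]

    def run_harness(component, test_suite):
        for inputs, predicted in test_suite:
            actual = component(*inputs)                # test driver
            verdict = actual == predicted              # test comparator
            print(inputs, actual, "pass" if verdict else "fail")

    run_harness(add, suite)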
203 test outcome: See outcome.
204 test plan: A record of the test planning process detailing the degree of tester independence, the test environment, the test case design techniques and test measurement techniques to be used, and the rationale for their choice.
205 test procedure: A document providing detailed instructions for the execution of one or more test cases.
206 test records: For each test, an unambiguous record of the identities and versions of the component under test, the test specification, and the actual outcome.
207 test script: Commonly used to refer to the automated test procedure used with a test harness.
208 test specification: For each test case, the coverage item, the initial state of the software under test, the input, and the predicted outcome.
209 test target: A set of test completion criteria.
210 testing: The process of exercising software to verify that it satisfies specified requirements and to detect errors.
211 thread testing: A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.
212 top-down testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
213 unit testing: See component testing.
214 usability testing: Testing the ease with which users can learn and use a product.
215 validation: Determination of the correctness of the products of software development with respect to the user needs and requirements.
216 verification: The process of evaluating a system or component to determine whether the products of the given development phase satisfy the conditions imposed at the start of that phase. [IEEE]
217 volume testing: Testing where the system is subjected to large volumes of data.
218 walkthrough: A review of requirements, designs or code characterized by the author of the object under review guiding the progression of the review.
219 white box testing: See structural test case design.
