1 acceptance testing: Formal testing conducted to enable a user, customer, or other authorized entity to determine whether to accept a system or component. [IEEE]
2 actual outcome: The behaviour actually produced when the object is tested under specified conditions.
3 ad hoc testing: Testing carried out using no recognised test case design technique.
4 alpha testing: Simulated or actual operational testing at an in-house site not otherwise involved with the software developers.
5 arc testing: See branch testing.
6 Backus-Naur form: A metalanguage used to formally describe the syntax of a language. See BS 6154.
7 basic block: A sequence of one or more consecutive, executable statements containing no branches.
8 basis test set: A set of test cases derived from the code logic which ensures that 100% branch coverage is achieved.
9 bebugging: See error seeding. [Abbott]
10 behaviour: The combination of input values and preconditions and the required response for a function of a system. The full specification of a function would normally comprise one or more behaviours.
11 beta testing: Operational testing at a site not otherwise involved with the software developers.
12 big-bang testing: Integration testing where no incremental testing takes place prior to all the system's components being combined to form the system.
13 black box testing: See functional test case design.
14 bottom-up testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
15 boundary value: An input value or output value which is on the boundary between equivalence classes, or an incremental distance either side of the boundary.
16 boundary value analysis: A test case design technique for a component in which test cases are designed which include representatives of boundary values.
17 boundary value coverage: The percentage of boundary values of the component's equivalence classes which have been exercised by a test case suite.
18 boundary value testing: See boundary value analysis.
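For illustration, a minimal Python sketch of boundary value analysis (entries 15-18); the component accept_percentage and its 0..100 range are hypothetical, and an incremental distance of 1 is assumed:

```python
def accept_percentage(value: int) -> bool:
    """Hypothetical component: accepts integers in the range 0..100."""
    return 0 <= value <= 100

# Boundary value analysis for the partition [0, 100]: each boundary
# plus the value an incremental distance (here, 1) either side of it.
boundary_tests = {
    -1: False,   # just below the lower boundary
    0: True,     # lower boundary
    1: True,     # just above the lower boundary
    99: True,    # just below the upper boundary
    100: True,   # upper boundary
    101: False,  # just above the upper boundary
}

for value, expected in boundary_tests.items():
    assert accept_percentage(value) == expected, value
```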
19 branch: A conditional transfer of control from any statement to any other statement in a component, or an unconditional transfer of control from any statement to any other statement in the component except the next statement, or when a component has more than one entry point, a transfer of control to an entry point of the component.
20 branch condition: See decision condition.
21 branch condition combination coverage: The percentage of combinations of all branch condition outcomes in every decision that have been exercised by a test case suite.
22 branch condition combination testing: A test case design technique in which test cases are designed to execute combinations of branch condition outcomes.
23 branch condition coverage: The percentage of branch condition outcomes in every decision that have been exercised by a test case suite.
24 branch condition testing: A test case design technique in which test cases are designed to execute branch condition outcomes.
25 branch coverage: The percentage of branches that have been exercised by a test case suite.
26 branch outcome: See decision outcome.
27 branch point: See decision.
28 branch testing: A test case design technique for a component in which test cases are designed to execute branch outcomes.
29 bug: See fault.
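A minimal sketch of branch testing (entries 25 and 28), assuming a hypothetical component classify with two decisions; the two test cases between them execute all four branch outcomes, giving 100% branch coverage:

```python
def classify(n: int) -> str:
    """Hypothetical component with two decisions (four branches)."""
    if n < 0:          # decision 1: true/false branch outcomes
        sign = "negative"
    else:
        sign = "non-negative"
    if n % 2 == 0:     # decision 2: true/false branch outcomes
        parity = "even"
    else:
        parity = "odd"
    return f"{sign} {parity}"

# A basis test set: these two inputs between them execute all four
# branch outcomes.
assert classify(-3) == "negative odd"
assert classify(2) == "non-negative even"
```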
30 bug seeding: See error seeding.
31 C-use: See computation data use.
32 capture/playback tool: A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time.
33 capture/replay tool: See capture/playback tool.
34 CAST: Acronym for computer-aided software testing.
35 cause-effect graph: A graphical representation of inputs or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.
36 cause-effect graphing: A test case design technique in which test cases are designed by consideration of cause-effect graphs.
37 certification: The process of confirming that a system or component complies with its specified requirements and is acceptable for operational use. From [IEEE].
38 Chow's coverage metrics: See N-switch coverage. [Chow]
39 code coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
40 code-based testing: Designing tests based on objectives derived from the implementation (e.g., tests that execute specific control flow paths or use specific data items).
41 compatibility testing: Testing whether the system is compatible with other systems with which it should communicate.
42 complete path testing: See exhaustive testing.
43 component: A minimal software item for which a separate specification is available.
44 component testing: The testing of individual software components. After [IEEE].
45 computation data use: A data use not in a condition. Also called C-use.
46 condition: A Boolean statement containing no Boolean operators. For instance, A<B is a condition but A and B is not.
47 condition coverage: See branch condition coverage.
48 condition outcome: The evaluation of a condition to TRUE or FALSE.
49 conformance criterion: Some method of judging whether or not the component's action on a particular specified input value conforms to the specification.
50 conformance testing: The process of testing that an implementation conforms to the specification on which it is based.
51 control flow: An abstract representation of all possible sequences of events in a program's execution.
52 control flow graph: The diagrammatic representation of the possible alternative control flow paths through a component.
53 control flow path: See path.
54 conversion testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
55 correctness: The degree to which software conforms to its specification.
56 coverage: The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test case suite.
57 coverage item: An entity or property used as a basis for testing.
58 data definition: An executable statement where a variable is assigned a value.
59 data definition C-use coverage: The percentage of data definition C-use pairs in a component that are exercised by a test case suite.
60 data definition C-use pair: A data definition and computation data use, where the data use uses the value defined in the data definition.
61 data definition P-use coverage: The percentage of data definition P-use pairs in a component that are exercised by a test case suite.
62 data definition P-use pair: A data definition and predicate data use, where the data use uses the value defined in the data definition.
63 data definition-use coverage: The percentage of data definition-use pairs in a component that are exercised by a test case suite.
64 data definition-use pair: A data definition and data use, where the data use uses the value defined in the data definition.
65 data definition-use testing: A test case design technique for a component in which test cases are designed to execute data definition-use pairs.
66 data flow coverage: Test coverage measure based on variable usage within the code. Examples are data definition-use coverage, data definition P-use coverage, data definition C-use coverage, etc.
67 data flow testing: Testing in which test cases are designed based on variable usage within the code.
68 data use: An executable statement where the value of a variable is accessed.
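To make the data flow vocabulary of entries 57-68 concrete, a small hypothetical example: the variable m below has two data definitions, one P-use, and one C-use, and the two test cases exercise both definition-use pairs reaching the C-use:

```python
def scaled_max(a: int, b: int) -> int:
    m = a             # first data definition of m
    if b > m:         # predicate data use (P-use) of m
        m = b         # second data definition of m
    return m * 2      # computation data use (C-use) of m

# The C-use forms a definition-use pair with each definition of m;
# these two test cases exercise both pairs.
assert scaled_max(3, 1) == 6   # C-use reached by the definition m = a
assert scaled_max(1, 3) == 6   # C-use reached by the definition m = b
```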
69 debugging: The process of finding and removing the causes of failures in software.
70 decision: A program point at which the control flow has two or more alternative routes.
71 decision condition: A condition within a decision.
72 decision coverage: The percentage of decision outcomes that have been exercised by a test case suite.
73 decision outcome: The result of a decision (which therefore determines the control flow alternative taken).
74 design-based testing: Designing tests based on objectives derived from the architectural or detail design of the software (e.g., tests that execute specific invocation paths or probe the worst case behaviour of algorithms).
75 desk checking: The testing of software by the manual simulation of its execution.
76 dirty testing: See negative testing. [Beizer]
77 documentation testing: Testing concerned with the accuracy of documentation.
78 domain: The set from which values are selected.
79 domain testing: See equivalence partition testing.
80 dynamic analysis: The process of evaluating a system or component based upon its behaviour during execution.
81 emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.
82 entry point: The first executable statement within a component.
83 equivalence class: A portion of the component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification.
84 equivalence partition: See equivalence class.
85 equivalence partition coverage: The percentage of equivalence classes generated for the component which have been exercised by a test case suite.
86 equivalence partition testing: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
87 error: A human action that produces an incorrect result. [IEEE]
88 error guessing: A test case design technique where the experience of the tester is used to postulate what faults might occur, and to design tests specifically to expose them.
89 error seeding: The process of intentionally adding known faults to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of faults remaining in the program.
90 executable statement: A statement which, when compiled, is translated into object code, which will be executed procedurally when the program is running and may perform an action on program data.
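A worked example of the estimate commonly made from error seeding, under the (strong) assumption that seeded and indigenous faults are equally detectable; the numbers here are invented:

```python
seeded = 10            # known faults intentionally added
seeded_found = 8       # seeded faults rediscovered during testing
indigenous_found = 24  # other faults found during the same testing

# Detection rate estimated from the seeded faults: 8/10 = 0.8.
# Estimated indigenous total: 24 / 0.8 = 30, so about 6 remain.
estimated_total = indigenous_found * seeded / seeded_found
estimated_remaining = estimated_total - indigenous_found
print(estimated_total, estimated_remaining)  # 30.0 6.0
```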
91 exercised: A program element is exercised by a test case when the input value causes the execution of that element, such as a statement, branch, or other structural element.
92 exhaustive testing: A test case design technique in which the test case suite comprises all combinations of input values and preconditions for component variables.
93 exit point: The last executable statement within a component.
94 expected outcome: See predicted outcome.
95 facility testing: See functional test case design.
96 failure: Deviation of the software from its expected delivery or service.
97 fault: A manifestation of an error in software. A fault, if encountered, may cause a failure.
98 feasible path: A path for which there exists a set of input values and execution conditions which causes it to be executed.
99 feature testing: See functional test case design.
100 functional specification: The document that describes in detail the characteristics of the product with regard to its intended capability. [BS 4778, Part 2]
101 functional test case design: Test case selection that is based on an analysis of the specification of the component without reference to its internal workings.
102 glass box testing: See structural test case design.
103 incremental testing: Integration testing where system components are integrated into the system one at a time until the entire system is integrated.
104 independence: Separation of responsibilities which ensures the accomplishment of objective evaluation. After [DO-178B].
105 infeasible path: A path which cannot be exercised by any set of possible input values.
106 input: A variable (whether stored within a component or outside it) that is read by the component.
107 input domain: The set of all possible inputs.
108 input value: An instance of an input.
109 inspection: A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection). After [Graham].
110 installability testing: Testing concerned with the installation procedures for the system.
111 instrumentation: The insertion of additional code into the program in order to collect information about program behaviour during program execution.
112 instrumenter: A software tool used to carry out instrumentation.
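A hand-written sketch of instrumentation: the probe calls are the "additional code" of entry 111, inserted here manually where an instrumenter (entry 112) would insert them automatically; all names are hypothetical:

```python
executed = set()

def probe(point):
    """Record that a program point was reached."""
    executed.add(point)

def absolute(n):
    probe("entry")
    if n < 0:
        probe("negative branch")
        return -n
    probe("non-negative branch")
    return n

absolute(-5)
absolute(7)
print(sorted(executed))  # all three probe points were reached
```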
113 integration: The process of combining components into larger assemblies.
114 integration testing: Testing performed to expose faults in the interfaces and in the interaction between integrated components.
115 interface testing: Integration testing where the interfaces between system components are tested.
116 isolation testing: Component testing of individual components in isolation from surrounding components, with surrounding components being simulated by stubs.
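A minimal sketch of isolation testing, with the surrounding component simulated by a stub; price_with_tax and StubTaxService are hypothetical names:

```python
def price_with_tax(amount, tax_service):
    """Component under test; normally collaborates with a real tax service."""
    return amount * (1 + tax_service.rate_for("GB"))

class StubTaxService:
    """Stub simulating the surrounding component: fixed, predictable answers."""
    def rate_for(self, country):
        return 0.25

# The component is exercised in isolation from the real service.
assert price_with_tax(100.0, StubTaxService()) == 125.0
```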
117 LCSAJ: A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.
118 LCSAJ coverage: The percentage of LCSAJs of a component which are exercised by a test case suite.
119 LCSAJ testing: A test case design technique for a component in which test cases are designed to execute LCSAJs.
120 logic-coverage testing: See structural test case design. [Myers]
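An informal sketch of how LCSAJs arise, using labelled statements in place of the conventional line numbers; the enumeration in the comments is illustrative rather than tool-generated:

```python
def count_positive(values):
    i = 0                      # (a)
    total = 0                  # (b)
    while i < len(values):     # (c) jumps to (g) when the loop ends
        if values[i] > 0:      # (d) jumps to (f) when the test fails
            total += 1         # (e)
        i += 1                 # (f) jumps back to (c)
    return total               # (g)

# Illustrative LCSAJs as (start, end, jump target) triples:
#   (a)-(c) jumping to (g)  - entered with an empty list
#   (a)-(d) jumping to (f)  - first element not positive
#   (a)-(f) jumping to (c)  - first element positive, loop repeats
#   (c)-(f) jumping to (c), (c)-(d) jumping to (f), (c)-(c) jumping to (g)
assert count_positive([]) == 0
assert count_positive([-1, 2, 3]) == 2
```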
121 logic-driven testing: See structural test case design.
122 maintainability testing: Testing whether the system meets its specified objectives for maintainability.
123 modified condition/decision coverage: The percentage of all branch condition outcomes that independently affect a decision outcome that have been exercised by a test case suite.
124 modified condition/decision testing: A test case design technique in which test cases are designed to execute branch condition outcomes that independently affect a decision outcome.
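A minimal sketch of modified condition/decision testing for a decision with three branch conditions; ready and its conditions are hypothetical. Each commented pair of test cases differs in exactly one condition and flips the decision outcome, demonstrating that condition's independent effect:

```python
def ready(a: bool, b: bool, c: bool) -> bool:
    """Hypothetical decision with three branch conditions."""
    return a and (b or c)

cases = [
    (True,  True,  False, True),   # vs. the next case: only a differs
    (False, True,  False, False),
    (True,  False, True,  True),   # vs. the next case: only c differs
    (True,  False, False, False),  # vs. (True, True, False): only b differs
]
# Four test cases suffice for MC/DC over three conditions (n + 1).
for a, b, c, expected in cases:
    assert ready(a, b, c) == expected
```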
125 multiple condition coverage: See branch condition combination coverage.
126 mutation analysis: A method to determine test case suite thoroughness by measuring the extent to which a test case suite can discriminate the program from slight variants (mutants) of the program. See also error seeding.
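A minimal illustration of the idea behind mutation analysis, with a single hand-written mutant rather than tool-generated ones; a thorough test case suite "kills" the mutant by producing a different result from the original program:

```python
def in_range(x):
    return 0 <= x <= 10

def mutant_in_range(x):
    return 0 <= x < 10   # mutant: upper bound <= weakened to <

weak_suite = [0, 5]        # mutant survives: both versions agree here
strong_suite = [0, 5, 10]  # mutant is killed: the versions disagree at x == 10

assert all(in_range(x) == mutant_in_range(x) for x in weak_suite)
assert any(in_range(x) != mutant_in_range(x) for x in strong_suite)
```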
127 N-switch coverage: The percentage of sequences of N-transitions that have been exercised by a test case suite.
128 N-switch testing: A form of state transition testing in which test cases are designed to execute all valid sequences of N-transitions.
129 N-transitions: A sequence of N+1 transitions.
130 negative testing: Testing aimed at showing software does not work. [Beizer]