Working Draft:
Glossary of terms used in software testing
Version 6.3 
produced by the British Computer Society
Specialist Interest Group in Software Testing (BCS SIGIST)
Copyright Notice
This document may be copied in its entirety, or extracts 
made, if the source is acknowledged.  
Contents
- Introduction
- Scope
- Arrangement
- Normative references
- Definitions 
Annexes 
A Index of sources 
B Document details
Foreword 
In compiling this glossary the committee has sought the views and comments of as broad a spectrum of opinion as possible in industry, commerce and government bodies and organisations, with the aim of producing a standard which would gain acceptance in as wide a field as possible. Total agreement will rarely, if ever, be achieved in compiling a document of this nature.
1. Introduction 
Much time and effort is wasted both within and between industry, commerce, government and professional and academic institutions when ambiguities arise as a result of the inability to differentiate adequately between such terms as 'path coverage' and 'branch coverage', or 'test case suite', 'test specification' and 'test plan', and similar terms which form an interface between various sectors of society. Moreover, the professional or technical use of these terms is often at variance with the meanings attributed to them by lay people.
2. Scope 
This document presents concepts, terms and definitions designed to aid 
communication in software testing and related disciplines. 
3. Arrangement 
The glossary has been arranged in a single section of definitions ordered alphabetically. The use of a term defined within this glossary is printed in italics.
Some terms are preferred to other synonymous ones, in which case the definition of the preferred term appears, with the synonymous ones referring to it. For example, dirty testing refers to negative testing.
4. Normative references 
At the time of publication, the editions indicated were valid. All standards are subject to revision, and parties to agreements based upon this Standard are encouraged to investigate the possibility of applying the most recent editions of the standards listed below. Members of IEC and ISO maintain registers of currently valid International Standards.
ISO 8402:1986, Quality Vocabulary.
ISO/IEC 2382-1:1993, Data processing - Vocabulary - Part 01: Fundamental terms.
BS 6154:1981, Method of defining Syntactic Metalanguage.
5. Definitions 
5.1 acceptance testing: Formal
testing conducted to enable a user, customer, or other 
authorized entity to determine whether to accept a system or 
component. [IEEE] 
5.2 actual outcome: The
behaviour actually produced when the 
object is tested under specified conditions. 
5.3 ad hoc testing: Testing carried out 
using no recognised 
test case design 
technique. 
5.4 alpha testing: Simulated or actual operational 
testing at an in-house site not otherwise involved with the 
software developers. 
5.5 arc testing: See 
branch testing. 
5.6 Backus-Naur form: A 
metalanguage used to 
formally describe the syntax of a language. See BS 6154.
5.7 basic block: A sequence of one or more consecutive, 
executable statements containing no 
branches. 
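EXAMPLE (informative): A minimal sketch in Python, with invented names and values, annotating the basic blocks of a small component:

    def classify(x):
        # Basic block 1 ends at the decision below: a sequence of
        # consecutive executable statements containing no branches.
        y = x * 2
        z = y + 1
        if z > 10:
            return 'large'   # basic block 2
        return 'small'       # basic block 3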
5.8 basis test set: A set of 
test cases 
derived from the code logic which ensures that 100% branch coverage is achieved.
5.9 bebugging: See 
error seeding. [Abbott] 
5.10 behaviour: The combination of 
input values and 
preconditions and the required response for a function 
of a system. The full
specification of a function would normally comprise 
one or more 
behaviours. 
5.11 beta testing: Operational testing 
at a site not otherwise involved with the software developers. 
5.12 big-bang testing: Integration 
testing where no 
incremental testing takes 
place prior to all the system's 
components being 
combined to form the system. 
5.13 black box testing: See 
functional test 
case design. 
5.14 bottom-up testing: An approach to 
integration testing where the lowest level 
components are tested first, then used to facilitate the 
testing of higher level 
components. The process is
repeated until the 
component at the top of the 
hierarchy is tested. 
5.15 boundary value: An 
input value or 
output value 
which is on the boundary between 
equivalence 
classes, or an incremental distance either side of the boundary. 
5.16 boundary value analysis: A 
test case design technique for a 
component in which 
test cases are 
designed which include representatives of 
boundary 
values. 
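EXAMPLE (informative): A sketch in Python for a hypothetical component specified to accept integers from 1 to 100; the range and the increment of 1 are assumed for illustration:

    # Boundary values lie on each boundary of the equivalence classes,
    # plus an incremental distance either side of the boundary.
    lower = [0, 1, 2]        # just below, on, and just above the lower boundary
    upper = [99, 100, 101]   # just below, on, and just above the upper boundary
    boundary_value_tests = lower + upper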
5.17 boundary value coverage: The percentage of 
boundary values of the component's 
equivalence classes which have been 
exercised by a 
test case 
suite. 
5.18 boundary value testing: See 
boundary 
value analysis. 
5.19 branch: A conditional transfer of control from 
any 
statement to any other 
statement in a 
component, or an 
unconditional transfer of control from any 
statement to 
any other 
statement in the 
component except the next 
statement, or when a 
component 
has more than one 
entry point, a transfer of control 
to an entry point of the 
component. 
5.20 branch condition: See 
decision 
condition. 
5.21 branch condition combination coverage: The percentage of 
combinations of all branch condition outcomes in every 
decision that have been 
exercised 
by a 
test case suite. 
5.22 branch condition combination testing: 
A 
test case design technique in which 
test cases are designed to execute combinations of branch 
condition outcomes. 
5.23 branch condition coverage: The 
percentage of branch condition outcomes in every 
decision that have been exercised by a 
test case suite. 
5.24 branch condition testing: A 
test case 
design technique in which 
test cases are designed to 
execute branch condition outcomes. 
5.25 branch coverage: The percentage of 
branches that have been 
exercised by 
a 
test case suite.
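EXAMPLE (informative): A hypothetical worked calculation in Python; the branch counts are invented for illustration:

    # Suppose a component contains 8 branches and the test case suite
    # is observed to exercise 6 of them.
    branches_total = 8
    branches_exercised = 6
    branch_coverage = 100.0 * branches_exercised / branches_total
    print(f'branch coverage = {branch_coverage:.0f}%')   # 75%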
5.26 branch outcome: See 
decision outcome. 
5.27 branch point: See 
decision. 
5.28 branch testing: A test case design technique for a 
component in which 
test cases are 
designed to execute 
branch outcomes. 
5.29 bug: See 
fault. 
5.30 bug seeding: See 
error seeding. 
5.31 C-use: See 
computation 
data use. 
5.32 capture/playback tool: A test tool that 
records test input as it is sent to the software under test.
The input cases stored can then be used to reproduce the test at a later 
time. 
5.33 capture/replay tool: See 
capture/playback 
tool. 
5.34 CAST: Acronym for computer-aided software testing. 
5.35 cause-effect graph: A graphical 
representation of 
inputs or stimuli (causes) with their 
associated 
outputs (effects), which can be used to design 
test cases. 
5.36 cause-effect graphing: A 
test case design 
technique in which 
test cases are designed by 
consideration of 
cause-effect graphs. 
5.37 certification: The process of confirming that a system or 
component complies with its specified requirements and is 
acceptable for operational use. From [IEEE].
5.38 Chow's coverage metrics: See 
N-switch 
coverage. [Chow] 
5.39 code coverage: An analysis method that determines which parts of 
the software have been executed (covered) by the 
test 
case suite and which parts have not been executed and therefore may require 
additional attention. 
5.40 code-based testing: Designing tests based on objectives derived 
from the implementation (e.g., tests that execute specific 
control flow paths or use specific data items). 
5.41 compatibility testing: Testing whether the 
system is compatible with other systems with which it should communicate. 
5.42 complete path testing: See 
exhaustive 
testing. 
5.43 component: A 
minimal software item for which a separate 
specification is available. 
5.44 component testing: The 
testing of individual software 
components. After [IEEE]. 
5.45 computation data 
use: A 
data use not in a 
condition. Also called C-use. 
5.46 condition: A Boolean expression containing 
no Boolean operators. For instance, A<B is a condition but A and B is not. [DO-178B]
5.47 condition coverage: See 
branch 
condition coverage. 
5.48 condition outcome: The evaluation of 
a 
condition to TRUE or FALSE. 
5.49 conformance criterion: Some method of judging whether or not the 
component's action on a particular 
specified input value conforms to the 
specification. 
5.50 conformance testing: The process of 
testing that an implementation conforms to the 
specification on which it is based. 
5.51 control flow: An abstract representation 
of all possible sequences of events in a program's execution.
5.52 control flow graph: The diagrammatic representation of the 
possible alternative 
control flow paths through a 
component. 
5.53 control flow 
path: See 
path. 
5.54 conversion testing: Testing of programs or 
procedures used to convert data from existing systems for use in replacement 
systems. 
5.55 correctness: The degree 
to which software conforms to its specification. 
5.56 coverage: 
The degree, expressed as a percentage, to which a specified 
coverage item has been 
exercised by a 
test case 
suite. 
5.57 coverage item: An entity or property 
used as a basis for 
testing. 
5.58 data definition: An 
executable statement where a variable is assigned a 
value.
5.59 data definition C-use coverage: The 
percentage of 
data definition C-use pairs in 
a component that are exercised by a 
test case 
suite. 
5.60 data definition C-use pair: A 
data definition and 
computation 
data use, where the 
data use uses the value defined 
in the 
data definition. 
5.61 data definition P-use coverage: The 
percentage of 
data definition P-use pairs in 
a component that are exercised by a 
test case 
suite. 
5.62 data definition P-use pair: A 
data definition and 
predicate data 
use, where the 
data use uses the value defined in the 
data definition. 
5.63 data definition-use coverage: The 
percentage of 
data definition-use pairs in 
a component that are exercised by a 
test case 
suite. 
5.64 data definition-use pair: A 
data definition and 
data use, where 
the 
data use uses the value defined in the 
data definition. 
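EXAMPLE (informative): A minimal Python fragment, with assumed names, showing data definition-use pairs for the variable total:

    def add_tax(price):
        total = price * 1.2   # data definition of total
        if total > 100:       # predicate data use (P-use) of total
            return total      # computation data use (C-use) of total
        return price

The definition on the first line pairs with the P-use in the decision and with the C-use in the return statement.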
5.65 data definition-use testing: A 
test case 
design technique for a 
component in which 
test cases are designed to execute 
data definition-use pairs. 
5.66 data flow coverage: Test coverage 
measure based on variable usage within the code.
Examples are 
data definition-use coverage, 
data definition P-use coverage, 
data definition C-use coverage, etc. 
5.67 data flow testing: Testing in which 
test cases are designed based on variable usage within the 
code. 
5.68 data use: An 
executable 
statement where the value of a variable is accessed.
5.69 debugging: The process of finding and removing the causes of 
failures in software. 
5.70 decision: A program 
point at which the 
control flow has two or more 
alternative routes. 
5.71 decision condition: A
condition within a 
decision. 
5.72 decision coverage: The percentage of 
decision outcomes that have been 
exercised by a 
test case 
suite. 
5.73 decision outcome: The result of a 
decision (which therefore determines the 
control flow alternative taken). 
5.74 design-based testing: Designing tests based on objectives derived 
from the architectural or detail design of the software (e.g., tests that 
execute specific invocation paths or probe the worst case behaviour of 
algorithms). 
5.75 desk checking: The 
testing of software by 
the manual 
simulation of its execution. 
5.76 dirty testing: See 
negative testing. [Beizer]
5.77 documentation testing: Testing concerned 
with the accuracy of documentation. 
5.78 domain: The set from which values are selected. 
5.79 domain testing: See 
equivalence 
partition testing. 
5.80 dynamic analysis: The process of evaluating a system or 
component based upon its 
behaviour during execution. [IEEE]
5.81 emulator: A device, computer program, or system that accepts the 
same 
inputs and produces the same 
outputs as a given system. [IEEE, DO-178B]
5.82 entry point: The first 
executable statement within a 
component. 
5.83 equivalence class: A portion of the 
component's 
input or 
output domains for which the 
component's behaviour is assumed to be the same from the 
component's specification. 
5.84 equivalence partition: See 
equivalence class. 
5.85 equivalence partition coverage: The percentage of 
equivalence classes generated for the 
component, which have been 
exercised by a 
test case 
suite. 
5.86 equivalence partition testing: 
A 
test case design technique for a 
component in which 
test cases are 
designed to execute representatives from 
equivalence 
classes. 
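EXAMPLE (informative): A sketch in Python for a hypothetical component specified to accept ages from 18 to 65; the classes and representatives are assumed for illustration:

    # Three equivalence classes: invalid low (<18), valid (18..65),
    # and invalid high (>65); one representative is drawn from each.
    representatives = {'invalid_low': 10, 'valid': 40, 'invalid_high': 70}
    test_inputs = list(representatives.values())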
5.87 error: A human action that produces an 
incorrect result. [IEEE]
5.88 error guessing: A 
test case design 
technique where the experience of the tester is used to postulate what 
faults might occur, and to design tests specifically to expose 
them. 
5.89 error seeding: The process of 
intentionally adding known 
faults to those already in a 
computer program for the purpose of monitoring the rate of detection and 
removal, and estimating the number of 
faults remaining in 
the program. [IEEE]
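EXAMPLE (informative): A hypothetical worked estimate in Python; the counts are invented, and the simple ratio model shown is only one way such an estimate may be made:

    # Assume 10 faults were seeded; testing detected 8 of them, and
    # also detected 40 faults that were not seeded.
    seeded, seeded_found, unseeded_found = 10, 8, 40
    # If seeded and unseeded faults are detected at the same rate, the
    # total number of unseeded faults is estimated as:
    estimated_unseeded = unseeded_found * seeded / seeded_found   # 50.0
    estimated_remaining = estimated_unseeded - unseeded_found     # 10.0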
5.90 executable statement: A 
statement which, when compiled, is translated into object 
code, which will be executed procedurally when the program is running and may 
perform an action on program data. 
5.91 exercised: A program element is 
exercised by a 
test case when the 
input value causes the execution of that element, such 
as a 
statement, 
branch, or other 
structural element. 
5.92 exhaustive testing: A 
test case design technique in which the 
test case suite comprises all combinations of 
input values and 
preconditions for 
component 
variables. 
5.93 exit point: The last 
executable statement within a 
component. 
5.94 expected outcome: See 
predicted outcome. 
5.95 facility testing: See 
functional test 
case design. 
5.96 failure: Deviation of the software from its 
expected delivery or service. [Fenton]
5.97 fault: A manifestation of an 
error in software. A fault, if encountered, may cause a failure. [DO-178B]
5.98 feasible path: A path for which there exists a set of 
input values and execution conditions which causes it to 
be executed. 
5.99 feature testing: See 
functional test 
case design. 
5.100 functional specification: The document that describes in detail 
the characteristics of the product with regard to its intended capability. [BS 4778, Part 2]
5.101 functional test case design: Test case selection that is based on an analysis of the 
specification of the 
component without reference to its internal workings. 
5.102 glass box testing: See 
structural 
test case design. 
5.103 incremental testing: Integration testing where system 
components 
are integrated into 
the system one at a time until the entire system is integrated. 
5.104 independence: Separation of responsibilities which ensures the 
accomplishment of objective evaluation. After [DO-178B].
5.105 infeasible path: A 
path which cannot be 
exercised by any 
set of possible 
input values. 
5.106 input: A variable (whether stored within a 
component or outside it) that is read by the 
component. 
5.107 input domain: The set of all possible 
inputs. 
5.108 input value: An instance of an 
input. 
5.109 inspection: A group 
review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection). After [Graham].
5.110 installability testing: Testing concerned 
with the installation procedures for the system. 
5.111 instrumentation: 
The insertion of additional code into the program in order to collect 
information about program behaviour during program 
execution. 
5.112 instrumenter: A software tool used to 
carry out 
instrumentation. 
5.113 integration: The 
process of combining components into larger 
assemblies. 
5.114 integration testing: Testing performed to expose 
faults in 
the interfaces and in the interaction between integrated 
components. 
5.115 interface testing: Integration 
testing where the interfaces between system 
components are tested. 
5.116 isolation testing: Component testing 
of individual 
components in isolation from 
surrounding 
components, with surrounding 
components being simulated by 
stubs. 
5.117 LCSAJ: A Linear Code Sequence And Jump, 
consisting of the following three items (conventionally identified by line 
numbers in a source code listing): the start of the linear sequence of 
executable statements, the end of the linear sequence, 
and the target line to which 
control flow is 
transferred at the end of the linear sequence. 
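EXAMPLE (informative): A minimal Python fragment with line numbers shown in comments; the function is invented for illustration:

    def f(x):       # line 1
        y = x + 1   # line 2
        if y > 10:  # line 3: control may jump from here to line 5
            y = 10  # line 4
        return y    # line 5

    # One LCSAJ of f: the linear sequence of executable statements
    # starting at line 2 and ending at line 3, with target line 5
    # (the jump taken when y <= 10).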
5.118 LCSAJ coverage: The percentage of 
LCSAJs of a 
component which are 
exercised by 
a 
test case suite. 
5.119 LCSAJ testing: A 
test case design 
technique for a 
component in which 
test cases are designed to execute 
LCSAJs. 
5.120 logic-coverage testing: See 
structural test case design. [Myers] 
5.121 logic-driven testing: See 
structural test case design. 
5.122 maintainability testing: Testing whether the system meets its specified objectives 
for maintainability. 
5.123 modified condition/decision coverage: The percentage of all 
branch condition outcomes that independently affect a 
decision outcome that have been exercised by a test 
case suite. 
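EXAMPLE (informative): A sketch in Python for the assumed decision A and B; each test case flips one condition while holding the other constant, so each condition is shown to independently affect the decision outcome:

    cases = [
        (True,  True),    # decision outcome TRUE
        (False, True),    # flipping A alone flips the outcome
        (True,  False),   # flipping B alone flips the outcome
    ]
    for a, b in cases:
        print(a, b, a and b)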
5.124 modified condition/decision testing: A 
test case design technique in which 
test cases are designed to execute branch condition 
outcomes that independently affect a 
decision 
outcome. 
5.125 multiple condition coverage: See 
branch condition combination coverage. 
5.126 mutation analysis: A method to determine 
test case suite thoroughness by measuring the extent 
to which a 
test case suite can discriminate the 
program from slight variants (mutants) of the program. See also error seeding.
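EXAMPLE (informative): A hypothetical mutant in Python; a single relational operator is changed, and a thorough test case suite should contain at least one test case that discriminates the program from the mutant:

    def is_adult(age):
        return age >= 18          # original program

    def is_adult_mutant(age):
        return age > 18           # mutant: >= changed to >

    # The boundary input 18 discriminates the program from the mutant.
    assert is_adult(18) != is_adult_mutant(18)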
5.127 N-switch coverage: The percentage of 
sequences of 
N-transitions that have been 
exercised by a 
test case 
suite. 
5.128 N-switch testing: A form of 
state 
transition testing in which 
test cases are designed 
to execute all valid sequences of 
N-transitions. 
5.129 N-transitions: A sequence of 
N+1 transitions. 
5.130 negative testing: Testing aimed at showing software does not work. [Beizer]
5.131 non-functional requirements testing: 
Testing of those requirements that do not relate to 
functionality, e.g. performance, usability, etc.
5.132 operational testing: Testing
conducted 
to evaluate a system or component in its operational 
environment. [IEEE] 
5.133 oracle: A mechanism to produce the 
predicted outcomes to compare with the 
actual outcomes of the software under test. After [Adrion].
5.134 outcome: Actual 
outcome or 
predicted outcome. This is the outcome of a test. See also
branch outcome, 
condition outcome and 
decision outcome. 
5.135 output: A variable (whether stored within a 
component or outside it) that is written to by the 
component. 
5.136 output domain: The set of all possible 
outputs. 
5.137 output value: An instance of an 
output. 
5.138 P-use: See 
predicate data use. 
5.139 partition testing: See 
equivalence partition testing. [Beizer]
5.140 path: A sequence of 
executable statements of a 
component, from an 
entry point 
to an 
exit point. 
5.141 path coverage: The percentage of 
paths in a 
component exercised by a 
test case suite. 
5.142 path sensitizing: Choosing a set of 
input 
values to force the execution of a 
component to 
take a given 
path. 
5.143 path testing: A test case design 
technique in which 
test cases are designed to 
execute 
paths of a 
component. 
5.144 performance testing: Testing conducted to 
evaluate the compliance of a system or 
component with 
specified performance requirements. [IEEE]
5.145 portability testing: Testing aimed at 
demonstrating the software can be ported to specified hardware or software 
platforms. 
5.146 precondition: Environmental and state 
conditions which must be fulfilled before the 
component 
can be executed with a particular 
input value. 
5.147 predicate: A logical expression which 
evaluates to TRUE or FALSE, normally to direct the execution 
path in code. 
5.148 predicate data use: A 
data use in a 
predicate. 
5.149 predicted outcome: The 
behaviour predicted by the 
specification of an object under specified conditions. 
5.150 program instrumenter: See 
instrumenter. 
5.151 progressive testing: Testing of new 
features after 
regression testing of previous 
features. [Beizer]
5.152 pseudo-random: A series which appears 
to be random but is in fact generated according to some prearranged sequence. 
5.153 recovery testing: Testing aimed at 
verifying the system's ability to recover from varying degrees of 
failure. 
5.154 regression testing: Retesting of a 
previously tested program following modification to ensure that 
faults have not been introduced or uncovered as a result of 
the changes made. 
5.155 requirements-based testing: Designing tests based on objectives 
derived from requirements for the software component (e.g., tests that exercise 
specific functions or probe the non-functional constraints such as performance 
or security). See functional test case design.
5.156 result: See 
outcome. 
5.157 review: A process or meeting during which a 
work product, or set of work products, is presented to project personnel, 
managers, users or other interested parties for comment or approval. [IEEE]
5.158 security testing: Testing whether the 
system meets its specified security objectives. 
5.159 serviceability testing: See 
maintainability testing. 
5.160 simple subpath: A 
subpath of 
the 
control flow graph in which no program part 
is executed more than necessary. 
5.161 simulation: The representation of 
selected behavioural characteristics of one physical or abstract system by 
another system. [ISO 2382/1].
5.162 simulator: A device, computer program or system used during 
software 
verification, which behaves or operates 
like a given system when provided with a set of controlled 
inputs. [IEEE, DO-178B]
5.163 source statement: See 
statement. 
5.164 specification: A description of a 
component's function in terms of its 
output values for 
specified 
input values under specified 
preconditions. 
5.165 specified input: An 
input for which the 
specification 
predicts an 
outcome. 
5.166 state transition: A transition between two allowable states of a 
system or 
component. 
5.167 state transition testing: A 
test case design technique in which 
test cases are designed to execute 
state transitions. 
5.168 statement: An 
entity in a programming language which is typically the smallest indivisible 
unit of execution. 
5.169 statement coverage: The percentage of 
executable statements in a 
component that have been 
exercised by a 
test case 
suite. 
5.170 statement testing: A 
test case design 
technique for a 
component in which 
test cases are designed to execute 
statements. 
5.171 static analysis: Analysis of a program carried out without 
executing the program. 
5.172 static analyzer: A tool that carries out static analysis. 
5.173 static testing: Testing of an object 
without execution on a computer. 
5.174 statistical testing: A test case design technique in which a model of the statistical distribution of the input is used to construct representative test cases.
5.175 storage testing: Testing whether the 
system meets its specified storage objectives. 
5.176 stress testing: Testing conducted to 
evaluate a system or component at or beyond the limits of its specified 
requirements. [IEEE]
5.177 structural coverage: Coverage measures based 
on the internal structure of the component. 
5.178 structural test case design: Test case selection that 
is based on an analysis of the internal structure of the 
component. 
5.179 structural testing: See 
structural 
test case design. 
5.180 structured basis testing: A 
test case 
design technique in which 
test cases are derived 
from the code logic to achieve 100% 
branch coverage. 
5.181 structured walkthrough: See 
walkthrough. 
5.182 stub: A skeletal or special-purpose 
implementation of a software module, used to develop or test a 
component that calls or is otherwise dependent on it. After [IEEE].
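EXAMPLE (informative): A minimal sketch in Python; the payment service and its interface are invented for illustration:

    # A stub standing in for a payment service on which the component
    # under test depends; it returns a fixed, predictable response.
    def authorize_payment_stub(amount):
        return {'authorized': True, 'amount': amount}

    def checkout(amount, authorize=authorize_payment_stub):
        # Component under test, exercised against the stub.
        result = authorize(amount)
        return 'ok' if result['authorized'] else 'declined'

    assert checkout(25.0) == 'ok'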
5.183 subpath: A sequence of 
executable statements within a 
component. 
5.184 symbolic evaluation: See 
symbolic 
execution. 
5.185 symbolic execution: 
A 
static analysis technique that derives a 
symbolic expression for program 
paths. 
5.186 syntax testing: A 
test case design 
technique for a 
component or system in which 
test case design is based upon the syntax of the 
input. 
5.187 system testing: The process of 
testing an integrated system to verify that it meets 
specified requirements. [Hetzel]
5.188 technical requirements testing: See 
non-functional requirements testing. 
5.189 test automation: The use of software to control the execution of 
tests, the comparison of 
actual outcomes to 
predicted outcomes, the setting up of test 
preconditions, and other test control and test 
reporting functions. 
5.190 test case: A set of 
inputs, execution 
preconditions, 
and 
expected outcomes developed for a particular 
objective, such as to exercise a particular program 
path or 
to verify compliance with a specific requirement. After [IEEE, DO-178B]
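EXAMPLE (informative): A sketch of a test case recorded as a Python data structure; the fields mirror the definition and the values are invented:

    test_case = {
        'objective': 'verify that login rejects a wrong password',
        'preconditions': {'user_exists': True, 'account_locked': False},
        'inputs': {'username': 'alice', 'password': 'wrong'},
        'expected_outcome': 'access denied',
    }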
5.191 test case design technique: A method 
used to derive or select 
test cases. 
5.192 test case suite: A collection of one 
or more 
test cases for the software under test. 
5.193 test comparator: A test tool that 
compares the actual 
outputs produced by the software under 
test with the expected 
outputs for that 
test case. 
5.194 test completion criterion: A 
criterion for determining when planned 
testing is 
complete, defined in terms of a 
test measurement 
technique. 
5.195 test coverage: See 
coverage. 
5.196 test driver: A program or test tool used 
to execute software against a 
test case suite. 
5.197 test environment: A description of the 
hardware and software environment in which the tests will be run, and any other 
software with which the software under test interacts when under test, including stubs and test drivers.
5.198 test execution: The processing of a 
test case suite by the software under test, producing 
an 
outcome. 
5.199 test execution technique: The method used to perform the actual 
test execution, e.g. manual, 
capture/playback tool, etc. 
5.200 test generator: A program that generates 
test cases in accordance with a specified strategy or heuristic. After [Beizer].
5.201 test harness: A 
testing tool that comprises a 
test 
driver and a 
test comparator. 
5.202 test measurement technique: A method 
used to measure 
test coverage items. 
5.203 test outcome: See 
outcome. 
5.204 test plan: A record of the test planning process detailing the 
degree of tester independence, the
test environment, the 
test case 
design techniques and 
test measurement techniques 
to be used, and the rationale for their choice. 
5.205 test procedure: A document providing 
detailed instructions for the execution of one or more 
test 
cases. 
5.206 test records: For each test, an unambiguous record of the 
identities and versions of the 
component under test, 
the 
test specification, and 
actual outcome. 
5.207 test script: Commonly used to refer to the automated 
test procedure used with a 
test 
harness. 
5.208 test specification: For each 
test case, the 
coverage item, 
the initial state of the software under test, the 
input, 
and the 
predicted outcome. 
5.209 test target: A set of 
test completion 
criteria. 
5.210 testing: The process of exercising 
software to verify that it satisfies specified requirements and to detect 
errors. After [DO-178B].
5.211 thread testing: A variation of 
top-down 
testing where the progressive 
integration of 
components follows the implementation of subsets of the 
requirements, as opposed to the 
integration of 
components by successively lower levels. 
5.212 top-down testing: An approach to 
integration testing where the 
component at the top of the 
component 
hierarchy is tested first, with lower level 
components being simulated by 
stubs. 
Tested 
components are then used to test lower level 
components. The process is
repeated until the lowest level 
components 
have been tested. 
5.213 unit testing: See 
component testing. 
5.214 usability testing: Testing the ease with 
which users can learn and use a product. 
5.215 validation: Determination of the correctness of the products of software development with respect to the user needs and requirements.
NOTE: The definition in BS 7925-1:1998 reads: confirmation by examination and provision of objective evidence that the particular requirements for a specific intended use have been fulfilled.
5.216 verification: The process of evaluating 
a system or 
component to determine whether the products 
of the given development phase satisfy the conditions imposed at the start of 
that phase. [IEEE]
NOTE: The definition in BS 7925-1:1998 reads: confirmation by examination and provision of objective evidence that specified requirements have been fulfilled.
5.217 volume testing: Testing where the system 
is subjected to large volumes of data. 
5.218 walkthrough: A 
review of requirements, designs or code characterized by the 
author of the object under 
review guiding the progression 
of the 
review. 
5.219 white box testing: See 
structural 
test case design.  
Annex A (informative)
Index of sources 
The following non-normative sources were used in constructing this glossary. 
[Abbott] J Abbott, Software Testing Techniques, NCC Publications, 1986.
[Adrion] W R Adrion, M A Branstad and J C Cherniavsky, Validation, Verification and Testing of Computer Software, Computing Surveys, Vol 14, No 2, June 1982.
[BCS] A Glossary of Computing Terms, The British Computer Society, 7th edition, ISBN 0-273-03645-9.
[Beizer] B Beizer, Software Testing Techniques, Van Nostrand Reinhold, 1990, ISBN 0-442-20672-0.
[Chow] T S Chow, Testing Software Design Modelled by Finite-State Machines, IEEE Transactions on Software Engineering, Vol SE-4, No 3, May 1978.
[DO-178B] Software Considerations in Airborne Systems and Equipment Certification. Issued in the USA by the Requirements and Technical Concepts for Aviation (document RTCA SC167/DO-178B) and in Europe by the European Organization for Civil Aviation Electronics (EUROCAE document ED-12B), December 1992.
[Fenton] N E Fenton, Software Metrics, Chapman & Hall, 1991.
[Graham] D Graham and T Gilb, Software Inspection, Addison-Wesley, 1993, ISBN 0-201-63181-4.
[Hetzel] W C Hetzel, The Complete Guide to Software Testing, 2nd edition, QED Information Sciences, 1988.
[IEEE] IEEE Standard Glossary of Software Engineering Terminology, IEEE Std 610.12-1990.
[Myers] G J Myers, The Art of Software Testing, Wiley, 1979.
Annex B (informative)
Document Details 
B.1 Method of commenting on this draft
Comments are invited on this draft so that the glossary can be improved to 
satisfy the requirements of an ISO standard. 
When making a comment, be sure to include the following information:
- Your name and address;
- The version number of the glossary (currently 6.3);
- The exact part of the glossary;
- Supporting information, such as the reason for a proposed change, or a reference to the use of a term.
You can submit comments in a variety of ways, which in order of preference are as follows:
- By E-mail to reids@rmcs.cranfield.ac.uk;
- By post to S C Reid, SEAS/CISE, Cranfield University, RMCS, Shrivenham, Swindon, Wilts SN6 8LA, UK;
- By FAX to +44 (0) 1793 783192, marked for the attention of Stuart Reid.
B.2 Status
Working draft for the BCS Specialist Interest Group in 
Software Testing.