Monday, 17 December 2012

Installing Sahi

Download the latest version from: http://sourceforge.net/projects/sahi/files/


To install Sahi using the installer, download install_sahi_v35_2011mmdd.jar and run:
java -jar install_sahi_v35_2011mmdd.jar
If you do not wish to use the installer:
  1. Download sahi_2011ddmm.zip.
  2. Unzip it to any folder, say D:\sahi.
  3. Open a command prompt and navigate to sahi\userdata\bin:
     cd D:\sahi\userdata\bin
  4. Run start_dashboard.bat:
     start_dashboard.bat
I tried the automated procedure and it seemed very practical to me.
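
Once the dashboard is running and a browser has been launched from it, Sahi acts as an HTTP proxy on the local machine. As a quick sanity check, the small Java sketch below simply verifies that something is listening on the proxy port; 9999 is Sahi's usual default, but that value is an assumption here, so check your sahi.properties if it differs. This is my own helper, not part of the Sahi distribution.

// Quick sanity check: is the Sahi proxy accepting connections?
// The port 9999 is assumed (Sahi's usual default); adjust to your configuration.
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class SahiProxyCheck {
    public static void main(String[] args) {
        String host = "localhost";
        int port = 9999;
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 2000);
            System.out.println("Sahi proxy reachable on " + host + ":" + port);
        } catch (IOException e) {
            System.out.println("Nothing listening on " + host + ":" + port + " - is the dashboard running?");
        }
    }
}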

http://google-gruyere.appspot.com/

Web Application Exploits and Defenses

A Codelab by Bruce Leban, Mugdha Bendre, and Parisa Tabriz





Want to beat the hackers at their own game?

  • Learn how hackers find security vulnerabilities!
  • Learn how hackers exploit web applications!
  • Learn how to stop them!

This codelab shows how web application vulnerabilities can be exploited and how to defend against these attacks. The best way to learn things is by doing, so you'll get a chance to do some real penetration testing, actually exploiting a real application. Specifically, you'll learn the following:

  • How an application can be attacked using common web security vulnerabilities, like cross-site scripting vulnerabilities (XSS) and cross-site request forgery (XSRF).
  • How to find, fix, and avoid these common vulnerabilities and other bugs that have a security impact, such as denial-of-service, information disclosure, or remote code execution.
To get the most out of this lab, you should have some familiarity with how a web application works (e.g., general knowledge of HTML, templates, cookies, AJAX, etc.).
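
As a taste of the defensive side, the sketch below shows one of the standard mitigations for XSS: escaping untrusted input before writing it into an HTML page. It is a minimal illustration of the general technique, not code from the Gruyere codelab, and the class and method names are my own.

// Minimal sketch of HTML output escaping, a common defense against XSS.
// Illustrative only; not taken from the Gruyere codelab.
public class HtmlEscape {

    // Replace the characters that let untrusted text break out of an HTML text context.
    static String escapeHtml(String untrusted) {
        StringBuilder sb = new StringBuilder(untrusted.length());
        for (char c : untrusted.toCharArray()) {
            switch (c) {
                case '&':  sb.append("&amp;");  break;
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '"':  sb.append("&quot;"); break;
                case '\'': sb.append("&#39;");  break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String attack = "<script>alert('xss')</script>";
        // The payload is rendered as inert text instead of being executed by the browser.
        System.out.println(escapeHtml(attack));
    }
}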

Thursday, 29 November 2012

SAHI - HTTPS/SSL sites

Sahi supports HTTPS out of the box. Sahi Pro eases the pain by automatically accepting SSL certificates. Sahi ships with a root certificate and all other certificates will be signed by this root certificate, making SSL testing absolutely smooth.
But if, for some reason, the browser reports a certificate error as shown below, you will need to import the root certificate into the “Trusted Root Certificate Authorities” store.
[screenshot: certificate error message]
To import the root certificate, click on the “SSL” link on the dashboard.

  1. Sahi first tries to import the certificate with the “certutil” command available on Windows.

  2. If Step 1 fails, Sahi then tries to import the certificate through Java. At this point you should be able to see this screen
    [screenshot: security warning]
    Click “Yes” to import the certificate.

  3. If Step 2 fails, Sahi will then try a direct import. Follow these steps.
    [screenshots: direct import, import wizard, import location, completing import, security warning]
    This should import the certificate successfully.
    Once done, you should be able to access your HTTPS/SSL site.

  4. In other cases:
    • Make sure that your browser is using Sahi as its proxy for “Secure” or “SSL Proxy” too.
    • Look at “Is keytool available” under the “Java” section on the “Info” tab of the Controller. If you are unable to get the Controller up on an HTTPS site, go to an HTTP site and bring up the Controller.
    • If “Is keytool available” is false, add <java>/bin to your PATH variable, or specify the full path to keytool.exe in <sahi>/config/sahi.properties. keytool.exe is present in the <java_home>/bin directory. (A small Java helper for locating keytool is sketched after this list.)
    For example, you could run
    set PATH=C:\Java\bin;%PATH%
    start_sahi.bat
    to add Java\bin to the PATH before you start Sahi.
    [screenshot: Sahi Controller - Recorder tab]
    • Navigate to the HTTPS site. If the above instructions have been followed, you will get a page which warns you that the certificate is incorrect. On Firefox, click on “Add Exception” and then “Confirm Security Exception”. The web site will then be displayed.

    • At this point, the website which has been displayed may not work properly if it fetches CSS and JavaScript files from another HTTPS domain or sub-domain. The Controller will also not come up with ALT-DblClick.
    [screenshot: Sahi Controller - Recorder tab]
    • You will now see a list of domains that Sahi has created certificates for. Some of them will be red and some green. Click on the red ones, and you will get the same certificate dialog which you would need to accept. Once you have accepted the required certificates on the browser, you should be able to navigate properly to the web page.
    NOTE: It is possible that some domains/subdomains are “hidden”. They may be used to fetch CSS, JavaScript and other artefacts. These certificates also need to be accepted via the SSL Manager for your site to work properly. If your browser hangs, the web page looks different than normal, or shows JavaScript errors, it may be because of these unaccepted certificates.
    Follow the steps in these videos for accepting SSL certificates on:
    • Internet Explorer 8, 9
    • Internet Explorer 7 or earlier.
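
If you are unsure whether keytool is available, or where it lives, the short Java sketch below (my own helper, not part of Sahi) prints the location where the JRE running it expects to find keytool. That path can then be prepended to PATH or used as the full path to keytool.exe in <sahi>/config/sahi.properties.

// Prints where the current JRE expects keytool to be.
// My own helper, not a Sahi utility; run it with the same Java that Sahi uses.
import java.io.File;

public class FindKeytool {
    public static void main(String[] args) {
        String javaHome = System.getProperty("java.home");
        boolean windows = System.getProperty("os.name").toLowerCase().contains("win");
        File keytool = new File(new File(javaHome, "bin"), windows ? "keytool.exe" : "keytool");
        System.out.println("java.home        = " + javaHome);
        System.out.println("expected keytool = " + keytool.getAbsolutePath());
        System.out.println("exists           = " + keytool.exists());
    }
}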

Tuesday, 27 November 2012


Let's take stock of the ISTQB certification.

In March 2010 the three organizations ISTQB (International Software Testing Qualifications Board), UKTB (UK Testing Board) and BCS, The Chartered Institute for IT reached an agreement to offer two certification levels shared by all of the organizations:

  • Foundation level
  • Advanced level
As for the Advanced level, it appears that the exam cannot be taken online through authorized test centres; you have to go directly to ISTQB or BCS (UKTB redirects to the BCS site).

Both Prometric and Pearson VUE, as centres authorized for online exams, offer respectively:

Pearson VUE:
  • ISEB-SWT2 ISTQB-ISEB Certified Tester Foundation Level
  • ISEB-SWTINT1 ISEB Intermediate Certificate in Software Testing (only holders of the Foundation Certificate in Software Testing are eligible for this exam)
Prometric:
  • ISTQB BCS Certified Tester Foundation Level (CTFL)
  • Intermediate Certificate in Software Testing (Candidates must hold the ISTQB BCS Certified Tester Foundation Level Certificate)
Apparently, at the moment the only way to take the Advanced exam is to attend the public sessions offered by ISTQB and BCS, or to turn to accredited training providers that offer preparation courses plus the final exam. The public exams are more affordable.

Prerequisites

There are two prerequisites for the exam:
  • you must have obtained the "ISTQB-BCS Certified Tester Foundation Level" certification
  • you must have gained significant experience in the testing field
Regarding the "significant experience", ISTQB delegates this assessment to the individual national Exam Boards (for Italy, "ITA-STQB"), while BCS explicitly states that at least 3 years of experience are required.
In neither case is it specified how this experience must be demonstrated (is a self-declaration enough? a statement from your employer? something else?).

For those taking the exam through accredited training providers, this requirement does not seem to be necessary.

Schedule
BCS offers public exams only in London, so I did not take it into consideration.
ITA-STQB, on the other hand, schedules them in Milan with the following calendar:

Type                                   Date         Location
Advanced Level Test Manager            April 12     Milan
Advanced Level Test Analyst            April 19     Milan
Advanced Level Technical Test Analyst  April 24     Milan
Advanced Level Test Manager            October 25   Milan
Advanced Level Test Analyst            November 8   Milan
Advanced Level Technical Test Analyst  November 15  Milan

Certification is also possible without having attended the course.
To register online for the certification exams, you can use the booking service or contact ITA-STQB via e-mail.

The cost of the certification exam is €200 + VAT (two hundred euros + VAT) for each type of exam.

The exam venue is the following:
  • Milan, at Alten Italia, Via Gaetano Crespi 12 (Lambrate area)
Certification exams are scheduled at 2:00 PM on the planned day.

All the ISTQB® Certification Levels:


Notes on nomenclature:
BCS stands for British Computer Society, which in 2009 adopted the new name "BCS, The Chartered Institute for IT", although this was not followed by a legal change of the name.
BCS offers various certifications, including those previously offered by ISEB (Information Systems Examinations Board). These certifications are now issued under the name "BCS Professional Certifications".

Monday, 19 November 2012

IEEE 610.12-1990

IEEE Std 610.12-1990
(Revision and redesignation of IEEE Std 729-1983)

IEEE Standard Glossary of Software Engineering Terminology

IEEE 829




IEEE 829-2008, also known as the 829 Standard for Software and System Test Documentation, is an IEEE standard that specifies the form of a set of documents for use in eight defined stages of software testing, each stage potentially producing its own separate type of document. The standard specifies the format of these documents but does not stipulate whether they all must be produced, nor does it include any criteria regarding adequate content for these documents. These are a matter of judgment outside the purview of the standard. The documents are:
  • Test Plan: a management planning document that shows:
    • How the testing will be done (including SUT (system under test) configurations).
    • Who will do it.
    • What will be tested.
    • How long it will take (although this may vary, depending upon resource availability).
    • What the test coverage will be, i.e. what quality level is required.
  • Test Design Specification: detailing test conditions and the expected results as well as test pass criteria.
  • Test Case Specification: specifying the test data for use in running the test conditions identified in the Test Design Specification
  • Test Procedure Specification: detailing how to run each test, including any set-up preconditions and the steps that need to be followed
  • Test Item Transmittal Report: reporting on when tested software components have progressed from one stage of testing to the next
  • Test Log: recording which test cases were run, who ran them, in what order, and whether each test passed or failed (a minimal sketch of such a record appears below)
  • Test Incident Report: detailing, for any test that failed, the actual versus expected result, and other information intended to throw light on why a test has failed. This document is deliberately named as an incident report, and not a fault report. The reason is that a discrepancy between expected and actual results can occur for a number of reasons other than a fault in the system. These include the expected results being wrong, the test being run wrongly, or inconsistency in the requirements meaning that more than one interpretation could be made. The report consists of all details of the incident such as actual and expected results, when it failed, and any supporting evidence that will help in its resolution. The report will also include, if possible, an assessment of the impact of an incident upon testing.
  • Test Summary Report: A management report providing any important information uncovered by the tests accomplished, and including assessments of the quality of the testing effort, the quality of the software system under test, and statistics derived from Incident Reports. The report also records what testing was done and how long it took, in order to improve any future test planning. This final document is used to indicate whether the software system under test is fit for purpose according to whether or not it has met acceptance criteria defined by project stakeholders.
http://cs.pugetsound.edu/~jross/courses/csci240/resources/IEEE%20Std%20829-2008.pdf
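
To make the bookkeeping concrete, here is a minimal sketch of the kind of information a Test Log entry and a Test Incident Report carry. The field names and Java types are my own choice for illustration; IEEE 829 specifies document formats, not code.

// Illustrative only: rough shape of an IEEE 829 Test Log entry and Test Incident Report.
// Field names are my own; the standard prescribes documents, not Java types. Requires Java 16+.
import java.time.LocalDateTime;

public class Ieee829Sketch {

    enum Verdict { PASS, FAIL }

    // One row of a Test Log: what ran, who ran it, when, and the outcome.
    record TestLogEntry(String testCaseId, String executedBy,
                        LocalDateTime when, Verdict verdict) { }

    // Core of a Test Incident Report: expected vs. actual, plus impact on testing.
    record TestIncidentReport(String testCaseId, String expectedResult,
                              String actualResult, String impactOnTesting) { }

    public static void main(String[] args) {
        TestLogEntry entry = new TestLogEntry("TC-042", "jdoe",
                LocalDateTime.now(), Verdict.FAIL);
        TestIncidentReport incident = new TestIncidentReport("TC-042",
                "order total is 100.00", "order total is 0.00",
                "blocks the remaining checkout test cases");
        System.out.println(entry);
        System.out.println(incident);
    }
}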

ISO/IEC 9126

http://en.wikipedia.org/wiki/ISO/IEC_9126

ISO/IEC 9126 Software engineering — Product quality is an international standard for the evaluation of software quality. The fundamental objective of this standard is to address some of the well known human biases that can adversely affect the delivery and perception of a software development project. These biases include changing priorities after the start of a project or not having any clear definitions of "success." By clarifying, then agreeing on the project priorities and subsequently converting abstract priorities (compliance) to measurable values (output data can be validated against schema X with zero intervention), ISO/IEC 9126 tries to develop a common understanding of the project's objectives and goals.

BS 7925-2 is the Software Component Testing Standard

http://www.testingstandards.co.uk/Component%20Testing.pdf
BS 7925-2 is BSI's software component testing standard.
The standard was developed by the Testing Standards Working Party, sponsored by BCS SIGiST, and published in August 1998.

BS 7925-1 is a Glossary of Software Testing Terms

http://www.testingstandards.co.uk/bs_7925-1.htm

Working Draft:

Glossary of terms used in software testing

Version 6.3
produced by the British Computer Society
Specialist Interest Group in Software Testing (BCS SIGIST)

Copyright Notice

This document may be copied in its entirety, or extracts made, if the source is acknowledged.

Contents

  1. Introduction
  2. Scope
  3. Arrangement
  4. Normative references
  5. Definitions
Annexes
A Index of sources
B Document details

Foreword
In compiling this glossary the committee has sought the views and comments of as broad a spectrum of opinion as possible in industry, commerce and government bodies and organisations, with the aim of producing a standard which would gain acceptance in as wide a field as possible. Total agreement will rarely, if ever, be achieved in compiling a document of this nature.
1. Introduction
Much time and effort is wasted both within and between industry, commerce, government and professional and academic institutions when ambiguities arise as a result of the inability to differentiate adequately between such terms as `path coverage' and `branch coverage'; `test case suite', `test specification' and `test plan' and similar terms which form an interface between various sectors and society. Moreover, the professional, or technical use of these terms is often at variance with the meanings attributed to them by lay people.
2. Scope
This document presents concepts, terms and definitions designed to aid communication in software testing and related disciplines.
3. Arrangement
The glossary has been arranged in a single section of definitions arranged alphabetically. The use of a term defined within this glossary is printed in italics.
Some terms are preferred to other synonymous ones, in which case, the definition of the preferred term appears, with the synonymous ones referring to that. For example dirty testing refers to negative testing.
4. Normative references
At the time of publication, the edition indicated was valid. All standards are subject to revision, and parties to agreements based upon this Standard are encouraged to investigate the possibility of applying the most recent edition of the standards listed below. Members of IEC and ISO maintain registers of currently valid International Standards.
ISO 8402:1986. Quality Vocabulary.
ISO/IEC 2382-1:1993. Data processing - Vocabulary - Part 01:Fundamental terms.
BS 6154:1981. Method of defining Syntactic Metalanguage.
5. Definitions
5.1 acceptance testing: Formal testing conducted to enable a user, customer, or other authorized entity to determine whether to accept a system or component. [IEEE]
5.2 actual outcome: The behaviour actually produced when the object is tested under specified conditions.
5.3 ad hoc testing: Testing carried out using no recognised test case design technique.
5.4 alpha testing: Simulated or actual operational testing at an in-house site not otherwise involved with the software developers.
5.5 arc testing: See branch testing.
5.6 Backus-Naur form: A metalanguage used to formally describe the syntax of a language. See BS 6154.
5.7 basic block: A sequence of one or more consecutive, executable statements containing no branches.
5.8 basis test set: A set of test cases derived from the code logic which ensure that 100% branch coverage is achieved.
5.9 bebugging: See error seeding. [Abbott]
5.10 behaviour: The combination of input values and preconditions and the required response for a function of a system. The full specification of a function would normally comprise one or more behaviours.
5.11 beta testing: Operational testing at a site not otherwise involved with the software developers.
5.12 big-bang testing: Integration testing where no incremental testing takes place prior to all the system's components being combined to form the system.
5.13 black box testing: See functional test case design.
5.14 bottom-up testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
5.15 boundary value: An input value or output value which is on the boundary between equivalence classes, or an incremental distance either side of the boundary.
5.16 boundary value analysis: A test case design technique for a component in which test cases are designed which include representatives of boundary values.
5.17 boundary value coverage: The percentage of boundary values of the component's equivalence classes which have been exercised by a test case suite.
5.18 boundary value testing: See boundary value analysis.
5.19 branch: A conditional transfer of control from any statement to any other statement in a component, or an unconditional transfer of control from any statement to any other statement in the component except the next statement, or when a component has more than one entry point, a transfer of control to an entry point of the component.
5.20 branch condition: See decision condition.
5.21 branch condition combination coverage: The percentage of combinations of all branch condition outcomes in every decision that have been exercised by a test case suite.
5.22 branch condition combination testing: A test case design technique in which test cases are designed to execute combinations of branch condition outcomes.
5.23 branch condition coverage: The percentage of branch condition outcomes in every decision that have been exercised by a test case suite.
5.24 branch condition testing: A test case design technique in which test cases are designed to execute branch condition outcomes.
5.25 branch coverage: The percentage of branches that have been exercised by a test case suite
5.26 branch outcome: See decision outcome.
5.27 branch point: See decision.
5.28 branch testing: A test case design technique for a component in which test cases are designed to execute branch outcomes.
5.29 bug: See fault.
5.30 bug seeding: See error seeding.
5.31 C-use: See computation data use.
5.32 capture/playback tool: A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time.
5.33 capture/replay tool: See capture/playback tool.
5.34 CAST: Acronym for computer-aided software testing.
5.35 cause-effect graph: A graphical representation of inputs or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.
5.36 cause-effect graphing: A test case design technique in which test cases are designed by consideration of cause-effect graphs.
5.37 certification: The process of confirming that a system or component complies with its specified requirements and is acceptable for operational use. From [IEEE].
5.38 Chow's coverage metrics: See N-switch coverage. [Chow]
5.39 code coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
5.40 code-based testing: Designing tests based on objectives derived from the implementation (e.g., tests that execute specific control flow paths or use specific data items).
5.41 compatibility testing: Testing whether the system is compatible with other systems with which it should communicate.
5.42 complete path testing: See exhaustive testing.
5.43 component: A minimal software item for which a separate specification is available.
5.44 component testing: The testing of individual software components. After [IEEE].
5.45 computation data use: A data use not in a condition. Also called C-use.
5.46 condition: A Boolean expression containing no Boolean operators. For instance, A<B is a condition but A and B is not. [DO-178B]
5.47 condition coverage: See branch condition coverage.
5.48 condition outcome: The evaluation of a condition to TRUE or FALSE.
5.49 conformance criterion: Some method of judging whether or not the component's action on a particular specified input value conforms to the specification.
5.50 conformance testing: The process of testing that an implementation conforms to the specification on which it is based.
5.51 control flow: An abstract representation of all possible sequences of events in a program's execution.
5.52 control flow graph: The diagrammatic representation of the possible alternative control flow paths through a component.
5.53 control flow path: See path.
5.54 conversion testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
5.55 correctness: The degree to which software conforms to its specification.
5.56 coverage: The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test case suite.
5.57 coverage item: An entity or property used as a basis for testing.
5.58 data definition: An executable statement where a variable is assigned a value.
5.59 data definition C-use coverage: The percentage of data definition C-use pairs in a component that are exercised by a test case suite.
5.60 data definition C-use pair: A data definition and computation data use, where the data use uses the value defined in the data definition.
5.61 data definition P-use coverage: The percentage of data definition P-use pairs in a component that are exercised by a test case suite.
5.62 data definition P-use pair: A data definition and predicate data use, where the data use uses the value defined in the data definition.
5.63 data definition-use coverage: The percentage of data definition-use pairs in a component that are exercised by a test case suite.
5.64 data definition-use pair: A data definition and data use, where the data use uses the value defined in the data definition.
5.65 data definition-use testing: A test case design technique for a component in which test cases are designed to execute data definition-use pairs.
5.66 data flow coverage: Test coverage measure based on variable usage within the code. Examples are data definition-use coverage, data definition P-use coverage, data definition C-use coverage, etc.
5.67 data flow testing: Testing in which test cases are designed based on variable usage within the code.
5.68 data use: An executable statement where the value of a variable is accessed.
5.69 debugging: The process of finding and removing the causes of failures in software.
5.70 decision: A program point at which the control flow has two or more alternative routes.
5.71 decision condition: A condition within a decision.
5.72 decision coverage: The percentage of decision outcomes that have been exercised by a test case suite.
5.73 decision outcome: The result of a decision (which therefore determines the control flow alternative taken).
5.74 design-based testing: Designing tests based on objectives derived from the architectural or detail design of the software (e.g., tests that execute specific invocation paths or probe the worst case behaviour of algorithms).
5.75 desk checking: The testing of software by the manual simulation of its execution.
5.76 dirty testing: See negative testing. [Beizer]
5.77 documentation testing: Testing concerned with the accuracy of documentation.
5.78 domain: The set from which values are selected.
5.79 domain testing: See equivalence partition testing.
5.80 dynamic analysis: The process of evaluating a system or component based upon its behaviour during execution. [IEEE]
5.81 emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system. [IEEE,do178b]
5.82 entry point: The first executable statement within a component.
5.83 equivalence class: A portion of the component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification.
5.84 equivalence partition: See equivalence class.
5.85 equivalence partition coverage: The percentage of equivalence classes generated for the component, which have been exercised by a test case suite.
5.86 equivalence partition testing: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
5.87 error: A human action that produces an incorrect result. [IEEE]
5.88 error guessing: A test case design technique where the experience of the tester is used to postulate what faults might occur, and to design tests specifically to expose them.
5.89 error seeding: The process of intentionally adding known faults to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of faults remaining in the program. [IEEE]
5.90 executable statement: A statement which, when compiled, is translated into object code, which will be executed procedurally when the program is running and may perform an action on program data.
5.91 exercised: A program element is exercised by a test case when the input value causes the execution of that element, such as a statement, branch, or other structural element.
5.92 exhaustive testing: A test case design technique in which the test case suite comprises all combinations of input values and preconditions for component variables.
5.93 exit point: The last executable statement within a component.
5.94 expected outcome: See predicted outcome.
5.95 facility testing: See functional test case design.
5.96 failure: Deviation of the software from its expected delivery or service. [Fenton]
5.97 fault: A manifestation of an error in software. A fault, if encountered may cause a failure. [do178b]
5.98 feasible path: A path for which there exists a set of input values and execution conditions which causes it to be executed.
5.99 feature testing: See functional test case design.
5.100 functional specification: The document that describes in detail the characteristics of the product with regard to its intended capability. [BS 4778, Part2]
5.101 functional test case design: Test case selection that is based on an analysis of the specification of the component without reference to its internal workings.
5.102 glass box testing: See structural test case design.
5.103 incremental testing: Integration testing where system components are integrated into the system one at a time until the entire system is integrated.
5.104 independence: Separation of responsibilities which ensures the accomplishment of objective evaluation. After [do178b].
5.105 infeasible path: A path which cannot be exercised by any set of possible input values.
5.106 input: A variable (whether stored within a component or outside it) that is read by the component.
5.107 input domain: The set of all possible inputs.
5.108 input value: An instance of an input.
5.109 inspection: A group review quality improvement process for written material. It consists of two aspects; product (document itself) improvement and process improvement (of both document production and inspection). After [Graham]
5.110 installability testing: Testing concerned with the installation procedures for the system.
5.111 instrumentation: The insertion of additional code into the program in order to collect information about program behaviour during program execution.
5.112 instrumenter: A software tool used to carry out instrumentation.
5.113 integration: The process of combining components into larger assemblies.
5.114 integration testing: Testing performed to expose faults in the interfaces and in the interaction between integrated components.
5.115 interface testing: Integration testing where the interfaces between system components are tested.
5.116 isolation testing: Component testing of individual components in isolation from surrounding components, with surrounding components being simulated by stubs.
5.117 LCSAJ: A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.
5.118 LCSAJ coverage: The percentage of LCSAJs of a component which are exercised by a test case suite.
5.119 LCSAJ testing: A test case design technique for a component in which test cases are designed to execute LCSAJs.
5.120 logic-coverage testing: See structural test case design. [Myers]
5.121 logic-driven testing: See structural test case design.
5.122 maintainability testing: Testing whether the system meets its specified objectives for maintainability.
5.123 modified condition/decision coverage: The percentage of all branch condition outcomes that independently affect a decision outcome that have been exercised by a test case suite.
5.124 modified condition/decision testing: A test case design technique in which test cases are designed to execute branch condition outcomes that independently affect a decision outcome.
5.125 multiple condition coverage: See branch condition combination coverage.
5.126 mutation analysis: A method to determine test case suite thoroughness by measuring the extent to which a test case suite can discriminate the program from slight variants (mutants) of the program. See also error seeding.
5.127 N-switch coverage: The percentage of sequences of N-transitions that have been exercised by a test case suite.
5.128 N-switch testing: A form of state transition testing in which test cases are designed to execute all valid sequences of N-transitions.
5.129 N-transitions: A sequence of N+1 transitions.
5.130 negative testing: Testing aimed at showing software does not work. [Beizer]
5.131 non-functional requirements testing: Testing of those requirements that do not relate to functionality. i.e. performance, usability, etc.
5.132 operational testing: Testing conducted to evaluate a system or component in its operational environment. [IEEE]
5.133 oracle: A mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test. After [Adrion]
5.134 outcome: Actual outcome or predicted outcome. This is the outcome of a test. See also branch outcome, condition outcome and decision outcome.
5.135 output: A variable (whether stored within a component or outside it) that is written to by the component.
5.136 output domain: The set of all possible outputs.
5.137 output value: An instance of an output.
5.138 P-use: See predicate data use.
5.139 partition testing: See equivalence partition testing. [Beizer]
5.140 path: A sequence of executable statements of a component, from an entry point to an exit point.
5.141 path coverage: The percentage of paths in a component exercised by a test case suite.
5.142 path sensitizing: Choosing a set of input values to force the execution of a component to take a given path.
5.143 path testing: A test case design technique in which test cases are designed to execute paths of a component.
5.144 performance testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. [IEEE]
5.145 portability testing: Testing aimed at demonstrating the software can be ported to specified hardware or software platforms.
5.146 precondition: Environmental and state conditions which must be fulfilled before the component can be executed with a particular input value.
5.147 predicate: A logical expression which evaluates to TRUE or FALSE, normally to direct the execution path in code.
5.148 predicate data use: A data use in a predicate.
5.149 predicted outcome: The behaviour predicted by the specification of an object under specified conditions.
5.150 program instrumenter: See instrumenter.
5.151 progressive testing: Testing of new features after regression testing of previous features. [Beizer]
5.152 pseudo-random: A series which appears to be random but is in fact generated according to some prearranged sequence.
5.153 recovery testing: Testing aimed at verifying the system's ability to recover from varying degrees of failure.
5.154 regression testing: Retesting of a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.
5.155 requirements-based testing: Designing tests based on objectives derived from requirements for the software component (e.g., tests that exercise specific functions or probe the non-functional constraints such as performance or security). See functional test case design.
5.156 result: See outcome.
5.157 review: A process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users or other interested parties for comment or approval. [IEEE]
5.158 security testing: Testing whether the system meets its specified security objectives.
5.159 serviceability testing: See maintainability testing.
5.160 simple subpath: A subpath of the control flow graph in which no program part is executed more than necessary.
5.161 simulation: The representation of selected behavioural characteristics of one physical or abstract system by another system. [ISO 2382/1].
5.162 simulator: A device, computer program or system used during software verification, which behaves or operates like a given system when provided with a set of controlled inputs. [IEEE,do178b]
5.163 source statement: See statement.
5.164 specification: A description of a component's function in terms of its output values for specified input values under specified preconditions.
5.165 specified input: An input for which the specification predicts an outcome.
5.166 state transition: A transition between two allowable states of a system or component.
5.167 state transition testing: A test case design technique in which test cases are designed to execute state transitions.
5.168 statement: An entity in a programming language which is typically the smallest indivisible unit of execution.
5.169 statement coverage: The percentage of executable statements in a component that have been exercised by a test case suite.
5.170 statement testing: A test case design technique for a component in which test cases are designed to execute statements.
5.171 static analysis: Analysis of a program carried out without executing the program.
5.172 static analyzer: A tool that carries out static analysis.
5.173 static testing: Testing of an object without execution on a computer.
5.174 statistical testing: A test case design technique in which a model is used of the statistical distribution of the input to construct representative test cases.
5.175 storage testing: Testing whether the system meets its specified storage objectives.
5.176 stress testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements. [IEEE]
5.177 structural coverage: Coverage measures based on the internal structure of the component.
5.178 structural test case design: Test case selection that is based on an analysis of the internal structure of the component.
5.179 structural testing: See structural test case design.
5.180 structured basis testing: A test case design technique in which test cases are derived from the code logic to achieve 100% branch coverage.
5.181 structured walkthrough: See walkthrough.
5.182 stub: A skeletal or special-purpose implementation of a software module, used to develop or test a component that calls or is otherwise dependent on it. After [IEEE].
5.183 subpath: A sequence of executable statements within a component.
5.184 symbolic evaluation: See symbolic execution.
5.185 symbolic execution: A static analysis technique that derives a symbolic expression for program paths.
5.186 syntax testing: A test case design technique for a component or system in which test case design is based upon the syntax of the input.
5.187 system testing: The process of testing an integrated system to verify that it meets specified requirements. [Hetzel]
5.188 technical requirements testing: See non-functional requirements testing.
5.189 test automation: The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.
5.190 test case: A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. After [IEEE,do178b]
5.191 test case design technique: A method used to derive or select test cases.
5.192 test case suite: A collection of one or more test cases for the software under test.
5.193 test comparator: A test tool that compares the actual outputs produced by the software under test with the expected outputs for that test case.
5.194 test completion criterion: A criterion for determining when planned testing is complete, defined in terms of a test measurement technique.
5.195 test coverage: See coverage.
5.196 test driver: A program or test tool used to execute software against a test case suite.
5.197 test environment: A description of the hardware and software environment in which the tests will be run, and any other software with which the software under test interacts when under test including stubs and test drivers.
5.198 test execution: The processing of a test case suite by the software under test, producing an outcome.
5.199 test execution technique: The method used to perform the actual test execution, e.g. manual, capture/playback tool, etc.
5.200 test generator: A program that generates test cases in accordance to a specified strategy or heuristic. After [Beizer].
5.201 test harness: A testing tool that comprises a test driver and a test comparator.
5.202 test measurement technique: A method used to measure test coverage items.
5.203 test outcome: See outcome.
5.204 test plan: A record of the test planning process detailing the degree of tester independence, the test environment, the test case design techniques and test measurement techniques to be used, and the rationale for their choice.
5.205 test procedure: A document providing detailed instructions for the execution of one or more test cases.
5.206 test records: For each test, an unambiguous record of the identities and versions of the component under test, the test specification, and actual outcome.
5.207 test script: Commonly used to refer to the automated test procedure used with a test harness.
5.208 test specification: For each test case, the coverage item, the initial state of the software under test, the input, and the predicted outcome.
5.209 test target: A set of test completion criteria.
5.210 testing: The process of exercising software to verify that it satisfies specified requirements and to detect errors. After [do178b]
5.211 thread testing: A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.
5.212 top-down testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
5.213 unit testing: See component testing.
5.214 usability testing: Testing the ease with which users can learn and use a product.
5.215 validation: Determination of the correctness of the products of software development with respect to the user needs and requirements.
NOTE: The definition in BS 7925-1:1998 reads: confirmation by examination and provision of objective evidence that the particular requirements for a specific intended use have been fulfilled
5.216 verification: The process of evaluating a system or component to determine whether the products of the given development phase satisfy the conditions imposed at the start of that phase. [IEEE]
NOTE: The definition in BS 7925-1:1998 reads: confirmation by examination and provision of objective evidence that specified requirements have been fulfilled
5.217 volume testing: Testing where the system is subjected to large volumes of data.
5.218 walkthrough: A review of requirements, designs or code characterized by the author of the object under review guiding the progression of the review.
5.219 white box testing: See structural test case design.
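
As a concrete illustration of two of the entries above, equivalence partition testing (5.86) and boundary value analysis (5.16), the sketch below is my own example and not part of the glossary. The component under test classifies a mark in the range 0..100 as pass or fail with the pass boundary at 60; the test cases take one representative per equivalence class plus the values on either side of the boundary and at the domain edges.

// Example of equivalence partition testing and boundary value analysis.
// Specification assumed: marks 0..100 are valid, and a mark passes if it is >= 60.
public class BoundaryValueExample {

    // Component under test.
    static boolean passes(int mark) {
        if (mark < 0 || mark > 100) {
            throw new IllegalArgumentException("mark out of range: " + mark);
        }
        return mark >= 60;
    }

    static void check(int mark, boolean expected) {
        boolean actual = passes(mark);
        System.out.printf("mark=%3d expected=%-5b actual=%-5b %s%n",
                mark, expected, actual, expected == actual ? "OK" : "FAIL");
    }

    public static void main(String[] args) {
        // One representative per equivalence class: failing marks, passing marks.
        check(30, false);
        check(80, true);
        // Boundary values: either side of the pass boundary and the domain edges.
        check(59, false);
        check(60, true);
        check(0, false);
        check(100, true);
    }
}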

Annex A (informative)

Index of sources
The following non-normative sources were used in constructing this glossary.
[Abbott] J Abbot, Software Testing Techniques, NCC Publications, 1986.
[Adrion] W R Adrion, M A Branstad and J C Cherniavsky, Validation, Verification and Testing of Computer Software, Computing Surveys, Vol 14, No 2, June 1982.
[BCS] A Glossary of Computing Terms, The British Computer Society, 7th edition, ISBN 0-273-03645-9.
[Beizer] B Beizer. Software Testing Techniques, van Nostrand Reinhold, 1990, ISBN 0-442-20672-0.
[Chow] T S Chow, Testing Software Design Modelled by Finite-State Machines, IEEE Transactions on Software Engineering, Vol SE-4, No 3, May 1978.
[DO-178B] Software Considerations in Airborne Systems and Equipment Certification. Issued in the USA by the Requirements and Technical Concepts for Aviation (document RTCA SC167/DO-178B) and in Europe by the European Organization for Civil Aviation Electronics (EUROCAE document ED-12B), December 1992.
[Fenton] N E Fenton, Software Metrics, Chapman & Hall, 1991.
[Graham] D Graham and T Gilb, Software Inspection, Addison-Wesley, 1993, ISBN 0-201-63181-4.
[Hetzel] W C Hetzel, The complete guide to software testing, 2nd edition, QED Information Sciences, 1988.
[IEEE] IEEE Standard Glossary of Software Engineering Terminology, IEEE Std 610.12-1990.
[Myers] G J Myers, The Art of Software Testing, Wiley, 1979.

Annex B (informative)

Document Details
B.1 Method of commenting on this draft
Comments are invited on this draft so that the glossary can be improved to satisfy the requirements of an ISO standard.
When making a comment, be sure to include the following information:
Your name and address;
The version number of the glossary (currently 6.3);
Exact part of the glossary;
Supporting information, such as the reason for a proposed change, or the reference to the use of a term.
You can submit comments in a variety of ways, which in order of preference are as follows:
  1. By E-mail to reids@rmcs.cranfield.ac.uk;
  2. By post to S C Reid, SEAS/CISE, Cranfield University, RMCS, Shrivenham, Swindon Wilts SN6 8LA, UK;
  3. By FAX to +44 (0) 1793 783192, marked for the attention of Stuart Reid.
B.2 Status
Working draft for the BCS Specialist Interest Group in Software Testing.