ISTQB-CTFL-CT Definitions Flashcards
Acceptance testing
Formal testing with respect to user needs, requirements and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers, or other authorized entities to determine whether or not to accept the system.
Ad-hoc review
A review in which reviewers are given little or no guidance and read the work product sequentially, identifying and documenting issues as they encounter them.
Alpha testing
Simulated or actual operational testing by potential users/customers or an independent test team at the developers’ site, but outside the development organization. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing.
Beta testing
Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing for off-the-shelf software to acquire feedback from the market.
Black-box test technique
Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.
Boundary value analysis
A black-box test design technique in which test cases are designed based on boundary values (input or output values on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range).
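A minimal sketch in Python with pytest (both assumed here; the function under test is hypothetical) of boundary value analysis for a valid range of 18..65:

    import pytest

    # Hypothetical function under test: accepts ages in the range 18..65.
    def is_eligible(age: int) -> bool:
        return 18 <= age <= 65

    # Boundary value analysis: exercise each edge of the partition and the
    # values at the smallest incremental distance (here +/- 1) beside it.
    @pytest.mark.parametrize("age, expected", [
        (17, False),  # just below the lower boundary
        (18, True),   # lower boundary
        (65, True),   # upper boundary
        (66, False),  # just above the upper boundary
    ])
    def test_is_eligible_boundaries(age, expected):
        assert is_eligible(age) == expected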
Change-related testing
Testing done to confirm that changes to a component or system have not caused any unforeseen adverse consequences.
Checklist-based review
A review in which reviewers detect issues based on a systematic checklist distributed at review initiation.
Commercial off-the-shelf (COTS)
A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.
Component integration testing
Testing performed to expose defects in the interfaces and interactions between integrated components.
Component testing
The testing of individual software components.
Configuration management
A discipline applying technical and administrative direction and surveillance to: identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements.
Confirmation testing
Testing that re-runs test cases that failed the last time they were run, in order to verify the success of corrective actions. Also known as re-testing.
Contractual acceptance testing
Testing performed against a contract’s acceptance criteria for producing custom-developed software.
Coverage
The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.
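As an illustrative calculation (not part of the glossary entry): if a test suite exercises 45 of a component's 60 statements, statement coverage is 45 / 60 = 75%.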
Data-driven testing
A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data-driven testing is often used to support the application of test execution tools such as capture/playback tools.
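A minimal sketch of data-driven testing in Python, with the data table inlined rather than in a spreadsheet; login() is a hypothetical placeholder for the system under test:

    # The table: one row per test, holding inputs and the expected result.
    test_table = [
        {"username": "alice", "password": "s3cret", "expected": "success"},
        {"username": "alice", "password": "wrong",  "expected": "failure"},
        {"username": "",      "password": "s3cret", "expected": "failure"},
    ]

    def login(username, password):  # placeholder for the system under test
        return "success" if (username, password) == ("alice", "s3cret") else "failure"

    # The single control script executes every row in the table, so a new
    # test is added by adding a data row, not by writing a new script.
    for row in test_table:
        actual = login(row["username"], row["password"])
        print("PASS" if actual == row["expected"] else "FAIL", row)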
Debugging
The process of finding, analyzing, and removing the causes of failures in software.
Decision coverage
The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage.
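A minimal sketch in Python (the code under test is hypothetical) of why decision coverage subsumes statement coverage:

    def apply_discount(price, is_member):
        if is_member:              # one decision, two outcomes: True, False
            price = price * 0.9
        return price

    # apply_discount(100, True) executes every statement (100% statement
    # coverage) but only the True outcome (50% decision coverage).
    assert apply_discount(100, True) == 90.0
    # Adding the False outcome reaches 100% decision coverage, which in
    # turn implies 100% statement coverage.
    assert apply_discount(100, False) == 100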
Decision table testing
A black-box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli shown in a decision table (a table showing combinations of inputs and/or stimuli/causes – with their associated outputs and/or actions/effects).
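A minimal sketch in Python (the loan rule is hypothetical) in which each row of a decision table becomes one test case:

    def approve_loan(has_income, good_credit):  # hypothetical rule under test
        return has_income and good_credit

    # Decision table: conditions (has_income, good_credit) -> action (approve).
    decision_table = [
        (True,  True,  True),   # rule 1: both conditions hold -> approve
        (True,  False, False),  # rule 2
        (False, True,  False),  # rule 3
        (False, False, False),  # rule 4
    ]

    for has_income, good_credit, expected in decision_table:
        assert approve_loan(has_income, good_credit) == expected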
Defect
A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g., an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.
Defect management
The process of recognizing, investigating, taking action, and disposing of defects. It involves recording defects, classifying them, and identifying their impact.
Defect report
A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function.
Dynamic testing
Testing that involves the execution of the software of a component or system.
Entry criteria
The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g., test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria.
Equivalence partitioning
A test design technique that divides data into partitions in such a way that all the members of a given partition are expected to be processed in the same way (this can be done for invalid as well as valid values).
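A minimal sketch in Python with pytest (the shipping function is hypothetical): the input domain splits into three partitions, and one representative value is tested per partition:

    import pytest

    def shipping_cost(weight_kg):
        if weight_kg <= 0:
            raise ValueError("invalid weight")
        return 5 if weight_kg <= 10 else 15  # light vs. heavy parcels

    # Partitions: invalid (w <= 0), light (0 < w <= 10), heavy (w > 10).
    with pytest.raises(ValueError):
        shipping_cost(-3)           # representative of the invalid partition
    assert shipping_cost(4) == 5    # representative of the light partition
    assert shipping_cost(25) == 15  # representative of the heavy partition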
Error
A human action that produces an incorrect result.
Error guessing
A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test because of errors made, and to design tests specifically to expose them.
Exit criteria
The set of generic and specific conditions, agreed upon with stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing.
Experience-based test technique
Procedure to derive and/or select test cases based on the tester’s experience, knowledge, and intuition.
Exploratory testing
An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests.
Formal review
A review characterized by team participation, documented results of the review, and documented procedures for conducting the review. (See ISO/IEC 20246 for more on review processes for work products)
Functional testing
Testing based on an analysis of the specification of the functionality of a component or system. (See also black-box testing.)
Impact analysis
The assessment of change to the layers of development documentation, test documentation, and components, in order to implement a given change to specified requirements.
Informal review
A review characterized by not following a defined process and not having formal documented output.
Inspection
A type of peer review that relies on visual examination of documents to detect defects, e.g., violations of development standards and non-conformance to higher-level documentation. It is the most formal review technique and is therefore always based on a documented procedure.
Integration testing
Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.
Keyword-driven testing
A scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test.
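A minimal sketch in Python (the keywords and actions are hypothetical): test steps are rows of keyword plus arguments, interpreted by small supporting scripts:

    def open_page(url):           # supporting scripts, one per keyword
        print(f"opening {url}")

    def enter_text(field, value):
        print(f"typing {value!r} into {field}")

    def click(button):
        print(f"clicking {button}")

    KEYWORDS = {"OpenPage": open_page, "EnterText": enter_text, "Click": click}

    test_steps = [  # the data file: keywords together with test data
        ("OpenPage", ["https://example.com/login"]),
        ("EnterText", ["username", "alice"]),
        ("EnterText", ["password", "s3cret"]),
        ("Click", ["Log in"]),
    ]

    for keyword, args in test_steps:  # the control script
        KEYWORDS[keyword](*args)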
Maintenance testing
Testing the changes to an operational system or the impact of a changed environment to an operational system.
Non-functional testing
Testing the attributes of a component or system that do not relate to functionality, e.g., reliability, efficiency, usability, maintainability, and portability.
Operational acceptance testing
Operational testing in the acceptance test phase, typically performed in a (simulated) operational environment by operations and/or systems administration staff focusing on operational aspects, e.g., recoverability, resource-behaviour, installability, and technical compliance.
Perspective-based review
A review characterized by reviewers taking on different stakeholder viewpoints in individual reviews, e.g., marketing, designer, tester, or operations. Similar to role-based review.