Terminology Flashcards
Acceptance criteria
The criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity.
Acceptance testing
Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers, or other authorized entity to determine whether or not to accept the system.
Accessibility
The degree to which a component or system can be used by people with the widest range of characteristics and capabilities to achieve a specified goal in a specified context of use.
Actual result
The behavior produced/observed when a component or system is tested.
Ad hoc reviewing
A review technique carried out by independent reviewers informally, without a structured process.
Alpha testing
Simulated or actual operational testing conducted in the developer’s test environment, by roles outside the development organization.
Anomaly
Any condition that deviates from expectation based on requirement specifications, design documents, user documents, standards etc., or from someone’s perception or experience. Anomalies may be found during, but not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation.
Audit
An independent examination of a work product, process, or set of processes that is performed by a third party to assess compliance with specifications, standards, contractual agreements, or other criteria.
Availability
The degree to which a component or system is operational and accessible when required for use.
Behavior
The response of a component or system to a set of input values and preconditions.
Beta testing
Simulated or actual operational testing conducted at an external site, by roles outside the development organization.
Black-box test technique
A procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.
Boundary value
A minimum or maximum value of an ordered equivalence partition.
Boundary value analysis
A black-box test technique in which test cases are designed based on boundary values.
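As an illustration (the validate_age function below is an assumed example, not part of the glossary): for a component that accepts ages 18 through 65 inclusive, the boundary values of the valid partition are 18 and 65, and the values just outside it are 17 and 66.

    # Hypothetical component under test: accepts ages 18..65 inclusive.
    def validate_age(age: int) -> bool:
        return 18 <= age <= 65

    # Boundary value analysis: test the minimum and maximum of the valid
    # partition plus the values immediately outside it.
    boundary_cases = {17: False, 18: True, 65: True, 66: False}

    for age, expected in boundary_cases.items():
        assert validate_age(age) == expected, f"age={age}"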
Burndown chart
A publicly displayed chart that depicts the outstanding effort versus time in an iteration. It shows the status and trend of completing the tasks of the iteration. The X-axis typically represents days in the sprint, while the Y-axis is the remaining effort (usually either in ideal engineering hours or story points).
Checklist-based reviewing
A review technique guided by a list of questions or required attributes.
Checklist-based testing
An experience-based test technique whereby the experienced tester uses a high-level list of items to be noted, checked, or remembered, or a set of rules or criteria against which a product has to be verified.
Code coverage
An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g., statement coverage, decision coverage, or condition coverage.
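As a sketch (apply_discount is an assumed example function): a test suite consisting of the single assertion below executes every statement, so statement coverage is 100%, but only the True outcome of the decision is exercised, so decision coverage is 50%.

    def apply_discount(total: float) -> float:
        discount = 0.0
        if total > 100:          # decision with two possible outcomes
            discount = 10.0
        return total - discount

    # This one test executes every statement (100% statement coverage) but
    # only the True outcome of the decision (50% decision coverage).
    assert apply_discount(150.0) == 140.0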
Commercial off-the-shelf (COTS)
A software product that is developed for the general market, i.e., for a large number of customers, and that is delivered to many customers in identical format.
Compatibility
The degree to which a component or system can exchange information with other components or systems.
Complexity
The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain, and verify.
Compliance
The capability of the software product to adhere to standards, conventions, or regulations in laws and similar prescriptions.
Component
A minimal part of a system that can be tested in isolation.
Component integration testing
Testing performed to expose defects in the interfaces and interactions between integrated components.
Component specification
A description of a component’s function in terms of its output values for specified input values under specified conditions, and required non-functional behavior (e.g., resource utilization).
Component testing
The testing of individual hardware or software components.
Condition
A logical expression that can be evaluated as true or false.
Configuration
The composition of a component or system as defined by the number, nature, and interconnections of its constituent parts.
Configuration item
An aggregation of work products that is designated for configuration management and treated as a single entity in the configuration management process.
Configuration management
A discipline applying technical and administrative direction and surveillance to identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements.
Configuration management tool
A tool that provides support for the identification and control of configuration items, their status over changes and versions, and the release of baselines consisting of configuration items.
Confirmation testing
Dynamic testing conducted after fixing defects with the objective to confirm that failures caused by those defects do not occur anymore.
Contractual acceptance testing
Acceptance testing conducted to verify whether a system satisfies its contractual requirements.
Control flow
The sequence in which operations are performed during the execution of a test item.
Cost of quality
The total costs incurred on quality activities and issues, often split into prevention costs, appraisal costs, internal failure costs, and external failure costs.
Coverage
The degree to which specified coverage items have been determined or exercised by a test suite, expressed as a percentage.
Coverage item
An attribute or combination of attributes that is derived from one or more test conditions by using a test technique that enables the measurement of the thoroughness of test execution.
Coverage tool
A tool that provides objective measures of what structural elements (e.g., statements, branches) have been exercised by a test suite.
Data flow
An abstract representation of the sequence and possible changes of the state of data objects, where the state of an object is any of creation, usage, or destruction.
Data-driven testing
A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data-driven testing is often used to support the application of test execution tools such as capture/playback tools.
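A minimal sketch of the technique, assuming pytest as the test execution tool and a hypothetical login function; the table of inputs and expected results drives one parameterized control script.

    import pytest

    # Hypothetical component under test.
    def login(user: str, password: str) -> bool:
        return user == "alice" and password == "s3cret"

    # Test inputs and expected results kept as a data table; in practice
    # this could be loaded from a spreadsheet or CSV file.
    LOGIN_CASES = [
        ("alice", "s3cret", True),
        ("alice", "wrong", False),
        ("", "", False),
    ]

    @pytest.mark.parametrize("user, password, expected", LOGIN_CASES)
    def test_login(user, password, expected):
        assert login(user, password) == expected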
Debugging
The process of finding, analyzing, and removing the causes of failures in software.
Decision
A type of statement in which a choice between two or more possible outcomes controls which set of actions will result.
Decision coverage
The coverage of decision outcomes.
Decision outcome
The result of a decision that determines the next statement to be executed.
Decision table
A table used to show sets of conditions and the actions resulting from them.
Decision table testing
A black-box test technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table.
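A small sketch using an assumed loan-approval example: the decision table has one column (rule) per combination of conditions, and one test case is derived per rule.

    # Illustrative decision table (assumed, not from the glossary):
    #   Rule            R1     R2     R3     R4
    #   has_income      T      T      F      F
    #   good_credit     T      F      T      F
    #   approve         T      F      F      F

    def approve_loan(has_income: bool, good_credit: bool) -> bool:
        # Hypothetical component under test.
        return has_income and good_credit

    # One test case per rule (column) of the decision table.
    rules = [
        (True,  True,  True),   # R1
        (True,  False, False),  # R2
        (False, True,  False),  # R3
        (False, False, False),  # R4
    ]

    for has_income, good_credit, expected in rules:
        assert approve_loan(has_income, good_credit) == expected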
Decision testing
A white-box test technique in which test cases are designed to execute decision outcomes.
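A minimal sketch, assuming the hypothetical classify function below: one test case is designed for each outcome of its single decision, giving full decision coverage.

    def classify(temperature: float) -> str:
        # Hypothetical component under test with a single decision.
        if temperature >= 38.0:
            return "fever"
        else:
            return "normal"

    # Decision testing: one test case per decision outcome.
    assert classify(39.0) == "fever"   # True outcome
    assert classify(36.5) == "normal"  # False outcome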
Defect
An imperfection or deficiency in a work product where it does not meet its requirements or specifications.
Defect density
The number of defects per unit size of a work product.
Defect management
The process of recognizing and recording defects, classifying them, investigating them, taking action to resolve them, and disposing of them when resolved.
Defect management tool
A tool that facilitates the recording and status tracking of defects.
Defect report
Documentation of the occurrence, nature, and status of a defect.
Driver
A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system.
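A small sketch of a driver, assuming a hypothetical process_order component whose real caller (e.g., a user interface) is not yet available; the driver takes over the calling of the component so it can be tested in isolation.

    # Hypothetical component under test, normally called by a front end.
    def process_order(item: str, quantity: int) -> str:
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        return f"order confirmed: {quantity} x {item}"

    # Test driver: replaces the missing caller and exercises the component.
    def driver() -> None:
        assert process_order("book", 2) == "order confirmed: 2 x book"
        try:
            process_order("book", 0)
            assert False, "expected ValueError"
        except ValueError:
            pass

    if __name__ == "__main__":
        driver()
        print("driver run complete")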
Dynamic analysis
The process of evaluating behavior (e.g., memory performance, CPU usage) of a system or component during execution.
Dynamic analysis tool
A tool that provides runtime information on the state of the software code. These tools are most commonly used to identify unassigned pointers, check pointer arithmetic, monitor the allocation, use, and deallocation of memory, and flag memory leaks.
Dynamic testing
Testing that involves the execution of the software of a component or system.
Effectiveness
The extent to which correct and complete goals are achieved.
Efficiency
Resources expended in relation to the extent to which users achieve specified goals.
Entry criteria
The set of conditions for officially starting a defined task.
Equivalence partition
A portion of the value domain of a data element related to the test object for which all values are expected to be treated the same based on the specification.
Equivalence partitioning
A black-box test technique in which test cases are designed to exercise equivalence partitions by using one representative member of each partition.
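A minimal sketch, again assuming a component that accepts ages 18 through 65: the input domain falls into three partitions (below, inside, and above the valid range), and one representative value is tested from each.

    # Hypothetical component under test: accepts ages 18..65 inclusive.
    def validate_age(age: int) -> bool:
        return 18 <= age <= 65

    # Three equivalence partitions, one representative value per partition:
    # below the valid range, inside it, and above it.
    representatives = {5: False, 40: True, 90: False}

    for age, expected in representatives.items():
        assert validate_age(age) == expected, f"age={age}"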
Error
A human action that produces an incorrect result.
Error guessing
A test technique in which tests are derived on the basis of the tester’s knowledge of past failures, or general knowledge of failure modes.
Executable statement
A statement which, when compiled, is translated into object code, and which will be executed procedurally when the program is running and may perform an action on data.
Exercised
A program element is said to be exercised by a test case when the input value causes the execution of that element, such as a statement, decision, or other structural element.
Exhaustive testing
A test approach in which the test suite comprises all combinations of input values and preconditions.
Exit criteria
The set of conditions for officially completing a defined task.
Expected result
The predicted observable behavior of a component or system executing under specified conditions, based on its specification or another source.
Experience-based testing technique
A procedure to derive and/or select test cases based on the tester’s experience, knowledge, and intuition.
Experience-based testing
Testing based on the tester’s experience, knowledge, and intuition.
Exploratory testing
An approach to testing whereby the testers dynamically design and execute tests based on their knowledge, exploration of the test item, and the results of previous tests.