Glossary Flashcards

1
Q

acceptance criteria

A

The criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity.

2
Q

acceptance test-driven development (ATDD)

A

A collaborative approach to development in which the team and customers use the customers' own domain language to understand their requirements, which forms the basis for testing a component or system.

3
Q

acceptance testing

A

A test level that focuses on determining whether to accept the system.

4
Q

accessibility

A

The degree to which a component or system can be used by people with the widest range of characteristics and capabilities to achieve a specified goal in a specified context of use.

5
Q

accessibility testing

A

Testing to determine the ease by which users with disabilities can use a component or system.

6
Q

accuracy

A

The capability of the software product to provide the right or agreed results or effects with the needed degree of precision.

7
Q

actual result

A

The behavior produced/observed when a component or system is tested.

8
Q

ad hoc review

A

A review technique performed informally without a structured process.

9
Q

Agile Manifesto

A

A statement on the values that underpin Agile software development. The values are: individuals and interactions over processes and tools, working software over comprehensive documentation, customer collaboration over contract negotiation, responding to change over following a plan.

10
Q

Agile software development

A

A group of software development methodologies based on iterative incremental development, where requirements and solutions evolve through collaboration between self-organizing cross-functional teams.

11
Q

Agile testing

A

Testing practice for a project using Agile software development methodologies, incorporating techniques and methods, such as extreme programming (XP), treating development as the customer of testing and emphasizing the test-first design paradigm.

12
Q

agile testing quadrants

A

A classification model of test types/levels in four quadrants, relating them to two dimensions of test goals: supporting the team vs. critiquing the product, and technology-facing vs. business-facing.

13
Q

alpha testing

A

A type of acceptance testing performed in the developer’s test environment by roles outside the development organization.

14
Q

anomaly

A

Any condition that deviates from expectation based on requirements specifications, design documents, user documents, standards, etc., or from someone’s perception or experience. Anomalies may be found during, but not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation.

15
Q

audit

A

An independent examination of a work product or process performed by a third party to assess whether it complies with specifications, standards, contractual agreements, or other criteria.

16
Q

availability

A

The degree to which a component or system is operational and accessible when required for use.

17
Q

behavior-driven development (BDD)

A

A collaborative approach to development in which the team is focusing on delivering expected behavior of a component or system for the customer, which forms the basis for testing.

18
Q

beta testing

A

A type of acceptance testing performed at a site external to the developer's test environment by roles outside the development organization.

19
Q

black-box test technique

A

A test technique based on an analysis of the specification of a component or system.

20
Q

boundary value

A

A minimum or maximum value of an ordered equivalence partition.

21
Q

boundary value analysis

A

A black-box test technique in which test cases are designed based on boundary values.
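
As an illustration only (not part of the glossary definition), a minimal Python sketch assuming a hypothetical field that accepts ages 18 to 65; two-value boundary value analysis picks each boundary and its nearest neighbor outside the partition:

# Hypothetical test item: an input field accepting ages 18-65 (inclusive).
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 65

# Two-value boundary value analysis: each boundary value plus its nearest
# neighbor outside the partition.
boundary_cases = {17: False, 18: True, 65: True, 66: False}

for value, expected in boundary_cases.items():
    assert is_valid_age(value) == expected, f"unexpected result for {value}"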

22
Q

build verification test (BVT)

A

An automated test that validates the integrity of each new build and verifies its key/core functionality, stability, and testability.

23
Q

change-related testing

A

A type of testing initiated by modification to a component or system.

24
Q

checklist-based review

A

A review technique guided by a list of questions or required attributes.

25
Q

checklist-based testing

A

An experience-based test technique whereby the experienced tester uses a high-level list of items to be noted, checked, or remembered, or a set of rules or criteria against which a product has to be verified.

26
Q

coding standard

A

A standard that describes the characteristics of a design or a design description of data or program components.

27
Q

commercial off-the-shelf (COTS)

A

A type of product developed in an identical format for a large number of customers in the general market.

28
Q

compatibility

A

The degree to which a component or system can exchange information with other components or systems, and/or perform its required functions while sharing the same hardware or software environment.

29
Q

complexity

A

The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain and verify.

30
Q

compliance

A

Adherence of a work product to standards, conventions or regulations in laws and similar prescriptions.

31
Q

component

A

A part of a system that can be tested in isolation.

32
Q

component integration testing

A

Testing in which the test items are interfaces and interactions between integrated components.

33
Q

component testing

A

A test level that focuses on individual hardware or software components.

34
Q

concurrency

A

The simultaneous execution of multiple independent threads by a component or system.

35
Q

configuration item

A

An aggregation of work products that is designated for configuration management and treated as a single entity in the configuration management process.

36
Q

configuration management

A

A discipline applying technical and administrative direction and surveillance to identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify that it complies with specified requirements.

37
Q

confirmation testing

A

A type of change-related testing performed after fixing a defect to confirm that a failure caused by that defect does not reoccur.

38
Q

continuous integration

A

An automated software development procedure that merges, integrates and tests all changes as soon as they are committed.

39
Q

contractual acceptance testing

A

A type of acceptance testing performed to verify whether a system satisfies its contractual requirements.

40
Q

control flow

A

The sequence in which operations are performed by a business process, component or system.

41
Q

cost of quality

A

The total costs incurred on quality activities and issues, often split into prevention costs, appraisal costs, internal failure costs and external failure costs.

42
Q

coverage

A

The degree to which specified coverage items have been determined or have been exercised by a test suite, expressed as a percentage.

43
Q

coverage criteria

A

The criteria to define the coverage items required to reach a test objective.

44
Q

coverage item

A

An attribute or combination of attributes derived from one or more test conditions by using a test technique.

45
Q

dashboard

A

A representation of dynamic measurements of operational performance for some organization or activity, using metrics represented via metaphors such as visual dials, counters, and other devices resembling those on the dashboard of an automobile, so that the effects of events or activities can be easily understood and related to operational goals.

46
Q

data-driven testing

A

A scripting technique that uses data files to contain the test data and expected results needed to execute the test scripts.
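
A minimal Python sketch of the idea; the CSV rows would normally live in an external data file, and the login function here is a hypothetical stand-in for the real test item:

import csv
import io

# The rows would normally live in an external data file (e.g. a CSV file);
# an in-memory file is used here so the sketch runs standalone.
data_file = io.StringIO(
    "username,password,expected\n"
    "alice,correct-pw,success\n"
    "alice,wrong-pw,failure\n"
)

def login(username: str, password: str) -> str:  # hypothetical test item
    return "success" if password == "correct-pw" else "failure"

# One generic script driven by many data rows.
for row in csv.DictReader(data_file):
    actual = login(row["username"], row["password"])
    assert actual == row["expected"], f"row {row} produced {actual}"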

47
Q

debugging

A

The process of finding, analyzing and removing the causes of failures in a component or system.

48
Q

decision

A

A type of statement in which a choice between two or more possible outcomes controls which set of actions will result.

49
Q

decision coverage

A

The coverage of decision outcomes.

50
Q

decision table testing

A

A black-box test technique in which test cases are designed to exercise the combinations of conditions and the resulting actions shown in a decision table.
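
For illustration, a minimal Python sketch assuming a hypothetical discount rule; each column of the decision table (a combination of conditions and the resulting action) becomes one test case:

# Hypothetical rule: members get 10% off, orders over 100 get 5% off,
# and both conditions together give 15% off.
def discount_percent(is_member: bool, order_total: int) -> int:
    percent = 0
    if is_member:
        percent += 10
    if order_total > 100:
        percent += 5
    return percent

# Each column of the decision table -> one test case:
# (is_member, order_total, expected discount)
decision_table = [
    (True, 150, 15),
    (True, 50, 10),
    (False, 150, 5),
    (False, 50, 0),
]

for is_member, total, expected in decision_table:
    assert discount_percent(is_member, total) == expected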

51
Q

decision testing

A

A white-box test technique in which test cases are designed to execute decision outcomes.

52
Q

defect

A

An imperfection or deficiency in a work product where it does not meet its requirements or specifications.

53
Q

defect density

A

The number of defects per unit size of a work product.

54
Q

defect management

A

The process of recognizing, recording, classifying, investigating, resolving and disposing of defects.

55
Q

defect report

A

Documentation of the occurrence, nature, and status of a defect.

56
Q

defect taxonomy

A

A system of (hierarchical) categories designed to be a useful aid for reproducibly classifying defects.

57
Q

defect-based test design technique

A

A procedure to derive and/or select test cases targeted at one or more defect types, with tests being developed from what is known about the specific defect type.

58
Q

driver

A

A temporary component or tool that replaces another component and controls or calls a test item in isolation.
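
A minimal Python sketch of a driver, assuming a hypothetical normalize component as the test item; the driver is throwaway code that calls the item in isolation and reports the outcomes:

# Hypothetical test item: a small text-normalizing component.
def normalize(text: str) -> str:
    return " ".join(text.split()).lower()

# The driver: temporary code that calls the test item in isolation,
# feeds it inputs and reports the outcomes, with no surrounding system.
def run_driver() -> None:
    for case in ["  Hello   World ", "ALREADY lower"]:
        print(f"normalize({case!r}) -> {normalize(case)!r}")

if __name__ == "__main__":
    run_driver()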

59
Q

dynamic analysis

A

The process of evaluating a component or system based on its behavior during execution.

60
Q

dynamic testing

A

Testing that involves the execution of the test item.

61
Q

effectiveness

A

The extent to which correct and complete goals are achieved.

62
Q

efficiency

A

The degree to which resources are expended in relation to results achieved.

63
Q

entry criteria

A

The set of conditions for officially starting a defined task.

64
Q

epic

A

A large user story that cannot be delivered as defined within a single iteration or is large enough that it can be split into smaller user stories.

65
Q

equivalence partition

A

A subset of the value domain of a variable within a component or system in which all values are expected to be treated the same based on the specification.

66
Q

equivalence partitioning

A

A black-box test technique in which test cases are designed to exercise equivalence partitions by using one representative member of each partition.
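
As a minimal Python sketch, assuming a hypothetical specification that classifies ages into three partitions; one representative per partition stands in for all of its values:

# Hypothetical specification: ages 0-17 are "minor", 18-64 "adult", 65+ "senior".
def classify(age: int) -> str:
    if age < 18:
        return "minor"
    if age < 65:
        return "adult"
    return "senior"

# One representative value per partition is assumed to behave like
# every other value in that partition.
representatives = {10: "minor", 40: "adult", 80: "senior"}

for age, expected in representatives.items():
    assert classify(age) == expected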

67
Q

error

A

A human action that produces an incorrect result.

68
Q

error guessing

A

A test technique in which tests are derived on the basis of the tester’s knowledge of past failures, or general knowledge of failure modes.

69
Q

exhaustive testing

A

A test approach in which the test suite comprises all combinations of input values and preconditions.
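
A short Python sketch of why exhaustive testing is usually impractical, assuming a hypothetical three-field form:

import itertools

# Hypothetical form: three fields with 26, 100 and 1000 possible values.
field_sizes = [26, 100, 1000]
total = 1
for size in field_sizes:
    total *= size
print(f"{total:,} input combinations to test exhaustively")  # 2,600,000

# Even three boolean fields already require every combination enumerated:
print(len(list(itertools.product([0, 1], repeat=3))), "combinations")  # 8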

70
Q

exit criteria

A

The set of conditions for officially completing a defined task.

71
Q

expected result

A

The observable predicted behavior of a test item under specified conditions based on its test basis.

72
Q

experience-based test technique

A

A test technique based solely on the tester's experience, knowledge and intuition.

73
Q

experience-based testing

A

Testing based on the tester’s experience, knowledge and intuition.

74
Q

exploratory testing

A

An approach to testing whereby the testers dynamically design and execute tests based on their knowledge, exploration of the test item and the results of previous tests.

75
Q

Extreme Programming (XP)

A

A software engineering methodology used within Agile software development whose core practices are pair programming, extensive code review, unit testing of all code, and simplicity and clarity in code.

76
Q

failed

A

The status of a test result in which the actual result does not match the expected result.

77
Q

failure

A

An event in which a component or system does not perform a required function within specified limits.

78
Q

failure rate

A

The ratio of the number of failures of a given category to a given unit of measure.

79
Q

finding

A

A result of an evaluation that identifies some important issue, problem, or opportunity.

80
Q

formal review

A

A type of review that follows a defined process with a formally documented output.

81
Q

functional testing

A

Testing performed to evaluate whether a component or system satisfies functional requirements.

82
Q

functional suitability

A

The degree to which a component or system provides functions that meet stated and implied needs when used under specified conditions.

83
Q

heuristic

A

A generally recognized rule of thumb that helps to achieve a goal.

84
Q

high-level test case

A

A test case with abstract preconditions, input data, expected results, postconditions, and actions (where applicable).

85
Q

impact analysis

A

The identification of all work products affected by a change, including an estimate of the resources needed to accomplish the change.

86
Q

incremental development model

A

A type of software development lifecycle model in which the component or system is developed through a series of increments.

87
Q

independence of testing

A

Separation of responsibilities, which encourages the accomplishment of objective testing.

88
Q

informal review

A

A type of review that does not follow a defined process and has no formally documented output.

89
Q

inspection

A

A type of formal review to identify issues in a work product, which provides measurement to improve the review process and the software development process.

90
Q

integration testing

A

A test level that focuses on interactions between components or systems.

91
Q

integrity

A

The degree to which a component or system allows only authorized access and modification to a component, a system or data.

92
Q

interoperability

A

The degree to which two or more components or systems can exchange information and use the information that has been exchanged.

93
Q

interoperability testing

A

Testing to determine the interoperability of a software product.

94
Q

iterative development model

A

A type of software development lifecycle model in which the component or system is developed through a series of repeated cycles.

95
Q

keyword-driven testing

A

A scripting technique in which test scripts contain high-level keywords and supporting files that contain low-level scripts that implement those keywords.
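
A minimal Python sketch of the two layers; the keyword names and the example.test URL are hypothetical:

# Low-level scripts implementing the keywords (normally kept in supporting files).
def open_page(url: str) -> None:
    print(f"opening {url}")

def type_text(field: str, text: str) -> None:
    print(f"typing {text!r} into {field}")

def click(element: str) -> None:
    print(f"clicking {element}")

KEYWORDS = {"open": open_page, "type": type_text, "click": click}

# The high-level test script is just keywords plus arguments, so it can be
# composed without touching the implementations above.
script = [
    ("open", "https://example.test/login"),
    ("type", "username", "alice"),
    ("type", "password", "secret"),
    ("click", "login-button"),
]

for keyword, *args in script:
    KEYWORDS[keyword](*args)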

96
Q

load testing

A

A type of performance testing conducted to evaluate the behavior of a component or system under varying loads, usually between anticipated conditions of low, typical, and peak usage.

97
Q

low-level test case

A

A test case with concrete values for preconditions, input data, expected results, postconditions, and a detailed description of actions (where applicable).

98
Q

maintainability

A

The degree to which a component or system can be modified by the intended maintainers.

99
Q

maintenance

A

The process of modifying a component or system after delivery to correct defects, improve quality characteristics, or adapt to a changed environment.

100
Q

maintenance testing

A

Testing the changes to an operational system or the impact of a changed environment to an operational system.

101
Q

master test plan

A

A test plan that is used to coordinate multiple test levels or test types.

102
Q

maturity

A

(1) The capability of an organization with respect to the effectiveness and efficiency of its processes and work practices. (2) The degree to which a component or system meets needs for reliability under normal operation.

103
Q

measure

A

The number or category assigned to an attribute of an entity by making a measurement.

104
Q

measurement

A

The process of assigning a number or category to an entity to describe an attribute of that entity.

105
Q

memory leak

A

A memory access failure due to a defect in a program’s dynamic store allocation logic that causes it to fail to release memory after it has finished using it.

106
Q

metric

A

A measurement scale and the method used for measurement.

107
Q

model-based testing (MBT)

A

Testing based on or involving models.

108
Q

moderator

A

(1) The person responsible for running review meetings. (2) The person who conducts a usability test session.

109
Q

modularity

A

The degree to which a system is composed of discrete components such that a change to one component has minimal impact on other components.

110
Q

non-functional testing

A

Testing performed to evaluate whether a component or system complies with non-functional requirements.

111
Q

operational acceptance testing

A

A type of acceptance testing performed to determine if operations and/or systems administration staff can accept a system.

112
Q

passed

A

The status of a test result in which the actual result matches the expected result.

113
Q

path

A

A sequence of consecutive edges in a directed graph.

114
Q

peer review

A

A review performed by others with the same abilities to create the work product.

115
Q

performance efficiency

A

The degree to which a component or system uses time, resources and capacity when accomplishing its designated functions.

116
Q

performance indicator

A

A metric that supports the judgment of process performance.

117
Q

performance testing

A

Testing to determine the performance efficiency of a component or system.

118
Q

performance testing tool

A

A test tool that generates load for a designated test item and that measures and records its performance during test execution.

119
Q

perspective-based reading

A

A review technique in which a work product is evaluated from the perspective of different stakeholders with the purpose to derive other work products.

120
Q

planning poker

A

A consensus-based estimation technique, mostly used to estimate effort or relative size of user stories in Agile software development. It is a variation of the Wideband Delphi method using a deck of cards with values representing the units in which the team estimates.

121
Q

portability

A

The degree to which a component or system can be transferred from one hardware, software or other operational or usage environment to another.

122
Q

postcondition

A

The expected state of a test item and its environment at the end of test case execution.

123
Q

precondition

A

The required state of a test item and its environment prior to test case execution.

124
Q

priority

A

The level of (business) importance assigned to an item, e.g., a defect.

125
Q

probe effect

A

An unintended change in behavior of a component or system caused by measuring it.

126
Q

process model

A

A framework in which processes of the same nature are classified into an overall model.

127
Q

product risk

A

A risk impacting the quality of a product.

128
Q

project risk

A

A risk that impacts project success.

129
Q

quality

A

The degree to which a component or system satisfies the stated and implied needs of its various stakeholders.

130
Q

quality assurance (QA)

A

Activities focused on providing confidence that quality requirements will be fulfilled.

131
Q

quality characteristic

A

A category of quality attributes that bears on work product quality.

132
Q

quality control (QC)

A

A set of activities designed to evaluate the quality of a component or system.

133
Q

quality management

A

The process of establishing and directing a quality policy, quality objectives, quality planning, quality control, quality assurance, and quality improvement for an organization.

134
Q

quality risk

A

A product risk related to a quality characteristic.

135
Q

Rational Unified Process (RUP)

A

A proprietary adaptable iterative software development process framework consisting of four project lifecycle phases: inception, elaboration, construction and transition.

136
Q

regression testing

A

A type of change-related testing to detect whether defects have been introduced or uncovered in unchanged areas of the software.

137
Q

regulatory acceptance testing

A

A type of acceptance testing performed to verify whether a system conforms to relevant laws, policies and regulations.

138
Q

reliability

A

The degree to which a component or system performs specified functions under specified conditions for a specified period of time.

139
Q

reliability growth model

A

A model that shows the growth in reliability over time of a component or system as a result of the defect removal.

140
Q

requirement

A

A provision that contains criteria to be fulfilled.

141
Q

retrospective meeting

A

A meeting at the end of a project during which the project team members evaluate the project and learn lessons that can be applied to the next project.

142
Q

reusability

A

The degree to which a work product can be used in more than one system, or in building other work products.

143
Q

review

A

A type of static testing in which a work product or process is evaluated by one or more individuals to detect defects or to provide improvements.

144
Q

reviewer

A

A participant in a review who identifies issues in the work product.

145
Q

risk

A

A factor that could result in future negative consequences.

146
Q

risk analysis

A

The overall process of risk identification and risk assessment.

147
Q

risk level

A

The qualitative or quantitative measure of a risk defined by impact and likelihood.

148
Q

risk management

A

The process for handling risks.

149
Q

risk mitigation

A

The process through which decisions are reached and protective measures are implemented for reducing or maintaining risks to specified levels.

150
Q

risk-based testing

A

Testing in which the management, selection, prioritization, and use of testing activities and resources are based on corresponding risk types and risk levels.

151
Q

robustness

A

The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions.

152
Q

role-based review

A

A review technique in which a work product is evaluated from the perspective of different stakeholders.

153
Q

root cause

A

A source of a defect such that if it is removed, the occurrence of the defect type is decreased or removed.

154
Q

root cause analysis

A

An analysis technique aimed at identifying the root causes of defects. By directing corrective measures at root causes, it is hoped that the likelihood of defect recurrence will be minimized.

155
Q

scenario-based review

A

A review technique in which a work product is evaluated to determine its ability to address specific scenarios.

156
Q

scribe

A

A person who records information at a review meeting.

157
Q

scrum

A

An iterative incremental framework for managing projects commonly used with Agile software development.

158
Q

security

A

The degree to which a component or system protects information and data so that persons or other components or systems have the degree of access appropriate to their types and levels of authorization.

159
Q

security testing

A

Testing to determine the security of the software product.

160
Q

sequential development model

A

A type of software development lifecycle model in which a complete system is developed in a linear way of several discrete and successive phases with no overlap between them.

161
Q

service virtualization

A

A technique to enable virtual delivery of services which are deployed, accessed and managed remotely.

162
Q

session-based testing

A

An approach in which test activities are planned as test sessions.

163
Q

severity

A

The degree of impact that a defect has on the development or operation of a component or system.

164
Q

simulator

A

A device, computer program or system used during testing, which behaves or operates like a given system when provided with a set of controlled inputs.

165
Q

software development lifecycle (SDLC)

A

The activities performed at each stage in software development, and how they relate to one another logically and chronologically.

166
Q

software lifecycle

A

The period of time that begins when a software product is conceived and ends when the software is no longer available for use. The software lifecycle typically includes a concept phase, requirements phase, design phase, implementation phase, test phase, installation and checkout phase, operation and maintenance phase, and sometimes, retirement phase. Note these phases may overlap or be performed iteratively.

167
Q

standard

A

Formal, possibly mandatory, set of requirements developed and used to prescribe consistent approaches to the way of working or to provide guidelines (e.g., ISO/IEC standards, IEEE standards, and organizational standards).

168
Q

state transition testing

A

A black-box test technique in which test cases are designed to exercise elements of a state transition model.
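
A minimal Python sketch, assuming a hypothetical document workflow model with states draft, review and published:

# Hypothetical state transition model for a document workflow.
TRANSITIONS = {
    ("draft", "submit"): "review",
    ("review", "approve"): "published",
    ("review", "reject"): "draft",
}

def next_state(state: str, event: str) -> str:
    return TRANSITIONS[(state, event)]

# Test cases exercise the valid transitions of the model...
assert next_state("draft", "submit") == "review"
assert next_state("review", "reject") == "draft"
assert next_state("review", "approve") == "published"

# ...and check that an invalid transition is refused.
try:
    next_state("published", "submit")
    raise AssertionError("invalid transition was accepted")
except KeyError:
    pass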

169
Q

statement

A

An entity in a programming language, which is typically the smallest indivisible unit of execution.

170
Q

statement coverage

A

The coverage of executable statements.
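
A small Python illustration of how the percentage is obtained; in practice a tool such as coverage.py measures this automatically:

def absolute(x: int) -> int:
    if x < 0:    # statement 1
        x = -x   # statement 2, reached only for negative input
    return x     # statement 3

# A suite containing only absolute(5) executes statements 1 and 3,
# so statement coverage = 2/3, about 67%; adding absolute(-5) reaches
# statement 2 as well, raising coverage to 3/3 = 100%.
executed, total = 2, 3
print(f"statement coverage: {executed / total:.0%}")  # 67%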

171
Q

statement testing

A

A white-box test technique in which test cases are designed to execute statements.

172
Q

static analysis

A

The process of evaluating a component or system without executing it, based on its form, structure, content, or documentation.

173
Q

static testing

A

Testing a work product without the work product code being executed.

174
Q

structural coverage

A

Coverage measures based on the internal structure of a component or system.

175
Q

stub

A

A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component.
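
A minimal Python sketch, assuming a hypothetical checkout component that depends on a payment gateway; the stub replaces the called gateway with canned responses:

# Hypothetical component under test, depending on a payment gateway it calls.
def checkout(total: int, gateway) -> str:
    return "confirmed" if gateway.charge(total) else "declined"

# The stub replaces the called component with canned, predictable answers,
# so checkout() can be tested without a real gateway.
class PaymentGatewayStub:
    def __init__(self, will_succeed: bool):
        self.will_succeed = will_succeed

    def charge(self, amount: int) -> bool:
        return self.will_succeed  # canned response, no real processing

assert checkout(100, PaymentGatewayStub(True)) == "confirmed"
assert checkout(100, PaymentGatewayStub(False)) == "declined"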

176
Q

system integration testing

A

A test level that focuses on interactions between systems.

177
Q

system testing

A

A test level that focuses on verifying that a system as a whole meets specified requirements.

178
Q

system under test (SUT)

A

A type of test object that is a system.

179
Q

technical review

A

A formal review by technical experts that examines the quality of a work product and identifies discrepancies from specifications and standards.

180
Q

test

A

A set of one or more test cases.

181
Q

test analysis

A

The activity that identifies test conditions by analyzing the test basis.

182
Q

test approach

A

The implementation of the test strategy for a specific project.

183
Q

test automation

A

The use of software to perform or support test activities.

184
Q

test basis

A

The body of knowledge used as the basis for test analysis and design.

185
Q

test case

A

A set of preconditions, inputs, actions (where applicable), expected results and postconditions, developed based on test conditions.
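
The parts named in this definition, made concrete in a minimal Python sketch (the identifier and values are hypothetical):

# One test case as a concrete record of the parts in the definition.
test_case = {
    "id": "TC-042",
    "preconditions": ["user account exists", "user is logged out"],
    "inputs": {"username": "alice", "password": "correct-pw"},
    "actions": ["open login page", "submit credentials"],
    "expected_results": ["dashboard is shown"],
    "postconditions": ["user is logged in"],
}

for part in ("preconditions", "inputs", "actions",
             "expected_results", "postconditions"):
    assert part in test_case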

186
Q

test charter

A

Documentation of the goal or objective for a test session.

187
Q

test completion

A

The activity that makes testware available for later use, leaves test environments in a satisfactory condition and communicates the results of testing to relevant stakeholders.

188
Q

test completion report

A

A type of test report produced at completion milestones that provides an evaluation of the corresponding test items against exit criteria.

189
Q

test condition

A

A testable aspect of a component or system identified as a basis for testing.

190
Q

test control

A

The activity that develops and applies corrective actions to get a test project on track when it deviates from what was planned.

191
Q

test cycle

A

An instance of the test process against a single identifiable version of the test object.

192
Q

test data

A

Data needed for test execution.

193
Q

test data preparation tool

A

A type of test tool that enables data to be selected from existing databases or created, generated, manipulated and edited for use in testing.

194
Q

test design

A

The activity that derives and specifies test cases from test conditions.

195
Q

test environment

A

An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.

196
Q

test estimation

A

An approximation related to various aspects of testing.

197
Q

test execution

A

The activity that runs a test on a component or system producing actual results.

198
Q

test execution schedule

A

A schedule for the execution of test suites within a test cycle.

199
Q

test execution tool

A

A test tool that executes tests against a designated test item and evaluates the outcomes against expected results and postconditions.

200
Q

test harness

A

A collection of stubs and drivers needed to execute a test suite.

201
Q

test implementation

A

The activity that prepares the testware needed for test execution based on test analysis and design.

202
Q

test infrastructure

A

The organizational artifacts needed to perform testing, consisting of test environments, test tools, office environment and procedures.

203
Q

test item

A

A part of a test object used in the test process.

204
Q

test leader

A

On large projects, the person who reports to the test manager and is responsible for project management of a particular test level or a particular set of testing activities.

205
Q

test level

A

A specific instantiation of a test process.

206
Q

test management

A

The planning, scheduling, estimating, monitoring, reporting, control and completion of test activities.

207
Q

test management tool

A

A tool that supports test management.

208
Q

test manager

A

The person responsible for project management of testing activities, resources, and evaluation of a test object.

209
Q

test monitoring

A

The activity that checks the status of testing activities, identifies any variances from planned or expected, and reports status to stakeholders.

210
Q

test object

A

The work product to be tested.

211
Q

test objective

A

The reason or purpose of testing.

212
Q

test oracle

A

A source to determine an expected result to compare with the actual result of the system under test.
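
A minimal Python sketch in which math.sqrt acts as the oracle for a hypothetical fast_sqrt test item:

import math

# Hypothetical test item: a fast square-root approximation.
def fast_sqrt(x: float) -> float:
    return x ** 0.5

# math.sqrt serves as the oracle: a trusted source of expected results
# to compare against the actual results of the test item.
for value in [0.0, 1.0, 2.0, 144.0, 1e6]:
    assert math.isclose(fast_sqrt(value), math.sqrt(value), abs_tol=1e-12)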

213
Q

test plan

A

Documentation describing the test objectives to be achieved and the means and the schedule for achieving them, organized to coordinate testing activities.

214
Q

test planning

A

The activity of establishing or updating a test plan.

215
Q

test policy

A

A high-level document describing the principles, approach and major objectives of the organization regarding testing.

216
Q

test procedure

A

A sequence of test cases in execution order, and any associated actions that may be required to set up the initial preconditions and any wrap up activities post execution.

217
Q

test process

A

The set of interrelated activities comprising test planning, test monitoring and control, test analysis, test design, test implementation, test execution, and test completion.

218
Q

test process improvement

A

A program of activities undertaken to improve the performance and maturity of the organization’s test processes.

219
Q

test progress report

A

A type of test report produced at regular intervals about the progress of test activities against a baseline, risks, and alternatives requiring a decision.

220
Q

test report

A

Documentation summarizing test activities and results.

221
Q

test reporting

A

Collecting and analyzing data from testing activities and subsequently consolidating the data in a report to inform stakeholders.

222
Q

test result

A

The consequence/outcome of the execution of a test.

223
Q

test schedule

A

A list of activities, tasks or events of the test process, identifying their intended start and finish dates and/or times, and interdependencies.

224
Q

test script

A

A sequence of instructions for the execution of a test.

225
Q

test session

A

An uninterrupted period of time spent in executing tests.

226
Q

test strategy

A

Documentation aligned with the test policy that describes the generic requirements for testing and details how to perform testing within an organization.

227
Q

test suite

A

A set of test scripts or test procedures to be executed in a specific test run.

228
Q

test technique

A

A procedure used to define test conditions, design test cases, and specify test data.

229
Q

test tool

A

Software or hardware that supports one or more test activities.

230
Q

test type

A

A group of test activities based on specific test objectives aimed at specific characteristics of a component or system.

231
Q

test-first approach

A

An approach to software development in which the test cases are designed and implemented before the associated component or system is developed.

232
Q

testability

A

The degree to which test conditions can be established for a component or system, and tests can be performed to determine whether those test conditions have been met.

233
Q

tester

A

A person who performs testing.

234
Q

testing

A

The process consisting of all lifecycle activities, both static and dynamic, concerned with planning, preparation and evaluation of a component or system and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.

235
Q

testware

A

Work products produced during the test process for use in planning, designing, executing, evaluating and reporting on testing.

236
Q

traceability

A

The degree to which a relationship can be established between two or more work products.

237
Q

usability

A

The degree to which a component or system can be used by specified users to achieve specified goals in a specified context of use.

238
Q

usability testing

A

Testing to evaluate the degree to which the system can be used by specified users with effectiveness, efficiency and satisfaction in a specified context of use.

239
Q

use case testing

A

A black-box test technique in which test cases are designed to exercise use case behaviors.

240
Q

user acceptance testing (UAT)

A

A type of acceptance testing performed to determine if intended users accept the system.

241
Q

user interface

A

All components of a system that provide information and controls for the user to accomplish specific tasks with the system.

242
Q

user story

A

A user or business requirement expressed as one sentence in everyday or business language, capturing the functionality a user needs and the reason behind it, along with any non-functional criteria and acceptance criteria.

243
Q

V-model

A

A sequential development lifecycle model describing a one-for-one relationship between major phases of software development from business requirements specification to delivery, and corresponding test levels from acceptance testing to component testing.

244
Q

validation

A

Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.

245
Q

verification

A

Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.

246
Q

walkthrough

A

A type of review in which an author leads members of the review through a work product and the members ask questions and make comments about possible issues.

247
Q

white-box test technique

A

A test technique based solely on the internal structure of a component or system.

248
Q

white-box testing

A

Testing based on an analysis of the internal structure of the component or system.

249
Q

Wideband Delphi

A

An expert-based test estimation technique that aims at making an accurate estimation using the collective wisdom of the team members.