GLOSSARY Flashcards

1
Q

coverage

A

The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.
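
For example (with illustrative numbers): a test suite that exercises 45 of a component's 60 coverage items achieves 45/60 × 100% = 75% coverage.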

2
Q

debugging

A

The development activity that finds, analyzes, and fixes defects.

3
Q

defect

A

A flaw in a component or system that can cause the component or system to fail to perform its required function.

4
Q

error

A

A human action that produces an incorrect result.

5
Q

failure

A

Deviation of the component or system from its expected delivery, service or result.

6
Q

quality

A

The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.

7
Q

quality assurance

A

Part of quality management focused on providing confidence that quality requirements will be fulfilled.

8
Q

root cause

A

The earliest actions or conditions that contributed to creating a defect.

9
Q

test analysis

A

During test analysis, the test basis is analyzed to identify testable features and define associated test conditions. In other words, test analysis determines “what to test” in terms of measurable coverage criteria.

10
Q

test basis

A

The documentation on which the test cases are based.

11
Q

test case

A

A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.
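
A minimal pytest sketch of these four elements; the Account class and its withdraw method are hypothetical, defined inline only so the example runs:

```python
# Hypothetical unit under test, kept inline to make the example self-contained.
class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def test_withdraw_within_balance():
    account = Account(balance=100)  # execution precondition: account holds 100
    account.withdraw(30)            # input value: withdraw 30
    assert account.balance == 70    # expected result: balance reduced to 70
    # execution postcondition: the account remains usable afterwards
    account.withdraw(70)
    assert account.balance == 0
```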

12
Q

test charter

A

A statement of test objectives, and possibly test ideas about how to test. Test charters are used in exploratory testing.

13
Q

test completion

A

Test completion activities collect data from completed test activities to consolidate experience, testware, and any other relevant information.

14
Q

test condition

A

An item or event of a component or system that could be verified by one or more test cases (e.g., a function, transaction, feature, quality attribute, or structural element).

15
Q

test control

A

Involves taking the actions necessary to meet the objectives of the test plan.

16
Q

test data

A

Data that exists before a test is executed, and that affects or is affected by the component or system under test.

17
Q

test design

A

During test design, the test conditions are elaborated into high-level test cases, sets of high-level test cases, and other testware. Test design answers the question “how to test?”

18
Q

test environment

A

An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.

19
Q

test execution

A

The process of running a test on the component or system under test, producing actual result(s). During test execution, test suites are run in accordance with the test execution schedule.

20
Q

test implementation

A

During test implementation, the testware necessary for test execution is created and/or completed, including sequencing the test cases into test procedures. Test implementation answers the question “do we now have everything in place to run the tests?”

21
Q

test monitoring

A

Test monitoring involves the ongoing comparison of actual progress against planned progress using any test monitoring metrics defined in the test plan.

22
Q

test object

A

The component or system to be tested.

23
Q

test objective

A

A reason or purpose for designing and executing a test.

24
Q

test oracle

A

A source to determine expected results to compare with the actual result of the software under test.

25
Q

test planning

A

Test planning involves activities that define the objectives of testing and the approach for meeting test objectives within constraints imposed by the context.

26
Q

test procedure

A

A sequence of test cases in execution order, and any associated actions that may be required to set up the initial preconditions, plus any wrap-up activities after execution.

27
Q

test process

A

Sets of test activities fundamental to testing.

28
Q

test suite

A

A set of several test cases for a component or system under test.

29
Q

testing

A

The process consisting of all lifecycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.

30
Q

testware

A

Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing.

31
Q

traceability

A

The ability to identify related items in documentation and software, such as requirements with associated tests.

32
Q

validation

A

Checking whether the system will meet user and other stakeholder needs in its operational environment(s).

33
Q

verification

A

Checking whether the system meets specified requirements.

34
Q

acceptance testing

A

Produces information to assess the system's readiness for deployment and use by the customer (end user).

35
Q

alpha testing

A

Performed by potential users/customers or an independent testing team at the developing organization's site.

36
Q

beta testing

A

Performed by potential or existing customers, and/or operators at their own locations.

37
Q

change-related testing

A

A type of testing initiated by modification to a component or system.

38
Q

commercial off-the-shelf (COTS)

A

A software product that is developed for the general market and that is delivered to many customers in identical format.

39
Q

component integration testing

A

Focuses on the interactions and interfaces between integrated components.

40
Q

component testing

A

(also known as unit or module testing) focuses on components that are separately testable.

41
Q

confirmation testing

A

The purpose of a confirmation test is to confirm whether the original defect has been successfully fixed.

42
Q

contractual acceptance testing

A

performed against a contract’s acceptance criteria for producing custom-developed software.

43
Q

functional testing

A

involves tests that evaluate functions that the system should perform.

44
Q

impact analysis

A

evaluates the changes that were made for a maintenance release to identify the intended consequences as well as expected and possible side effects of a change, and to identify the areas in the system that will be affected by the change.

45
Q

incremental development

A

involves establishing requirements, designing, building, and testing a system in pieces, which means that the software’s features grow incrementally.

46
Q

integration testing

A

Focuses on interactions between components or systems.

47
Q

iterative development

A

occurs when groups of features are specified, designed, built, and tested together in a series of cycles, often of a fixed duration.

48
Q

maintenance testing

A

Testing the changes to an operational system or the impact of a changed environment to an operational system.

49
Q

non-functional testing

A

evaluates characteristics of systems and software such as usability, performance efficiency or security.

50
Q

operational acceptance testing (OAT)

A

performed in a (simulated) operational environment by operations and/or systems administration staff, focusing on operational aspects.

51
Q

regression testing

A

involves running tests to detect unintended side effects that may accidentally affect the behavior of other parts of the code after a change.

52
Q

regulatory acceptance testing

A

performed against any regulations that must be adhered to, such as government, legal, or safety regulations.

53
Q

sequential development model

A

Describes the software development process as a linear, sequential flow of activities.

54
Q

system integration testing

A

Focuses on the interactions and interfaces between systems, packages, and microservices.

55
Q

system testing

A

focuses on the behavior and capabilities of a whole system or product, often considering the end-to-end tasks the system can perform and the non-functional behaviors it exhibits while performing those tasks.

56
Q

test level

A

A group of test activities that are organized and managed together.

57
Q

test type

A

A group of test activities aimed at testing a component or system with a focus on a specific test objective, e.g., functional testing, usability testing, or regression testing.

58
Q

user acceptance testing (UAT)

A

focused on validating the fitness for use of the system by intended users in a real or simulated operational environment.

59
Q

white-box testing

A

Derives tests based on the system’s internal structure or implementation.

60
Q

ad hoc review

A

A review technique performed informally without a structured process, with little or no guidance and needing little preparation.

61
Q

checklist-based review

A

A systematic technique whereby reviewers detect issues using a checklist or set of questions based on potential defects, distributed at review initiation.

62
Q

dynamic testing

A

requires the execution of the software being tested and focuses on externally visible behaviors.

63
Q

formal review

A

characterized by team participation, documented results of the review, and documented procedures for conducting the review.

64
Q

informal review

A

(buddy check, pairing, pair review) characterized by not following a defined process and not having formal documented output.

65
Q

inspection

A

Relies on visual examination of documents to detect defects. The most formal review technique and therefore always based on a documented procedure.

66
Q

perspective-based reading

A

A review technique whereby reviewers evaluate the work product from different viewpoints.

67
Q

review

A

Manual examination of work products or the evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements.

68
Q

role-based review

A

A review technique where reviewers evaluate a work product from the perspective of different stakeholder roles.

69
Q

scenario-based review

A

A review technique in which reviewers are given structured guidelines on how to read through the work product, supporting them in performing “dry runs” based on its expected usage.

70
Q

static analysis

A

Tool-driven evaluation of the code or other work products.

71
Q

static testing

A

relies on the manual examination of work products (i.e., reviews) or tool-driven evaluation of the code or other work products to improve the consistency and internal quality of work products.

72
Q

technical review

A

A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken.

73
Q

walkthrough

A

A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content.

74
Q

black-box test technique

A

(also called behavioral or behavior-based techniques) are based on an analysis of the appropriate test basis (e.g., formal requirements documents, specifications, use cases, user stories, or business processes).

75
Q

boundary value analysis

A

Test cases are designed based on boundary values, which lie at the edges of an equivalence partition (its minimum and maximum values).
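
A minimal pytest sketch, assuming a hypothetical percentage field valid from 0 to 100: tests sit exactly on each boundary and one step outside it.

```python
import pytest

def is_valid_percentage(value):
    # Hypothetical validator with a valid range of 0..100.
    return 0 <= value <= 100

@pytest.mark.parametrize("value, expected", [
    (-1,  False),  # just below the lower boundary
    (0,   True),   # lower boundary (minimum)
    (100, True),   # upper boundary (maximum)
    (101, False),  # just above the upper boundary
])
def test_percentage_boundaries(value, expected):
    assert is_valid_percentage(value) == expected
```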

76
Q

checklist-based testing

A

Testers design, implement, and execute tests to cover test conditions found in a checklist.

77
Q

decision coverage

A

exercises the decisions in the code and tests the code that is executed based on the decision outcomes.
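
A minimal sketch (the apply_fee function is invented): the two tests together exercise both outcomes of the single decision, giving 100% decision coverage.

```python
def apply_fee(balance):
    if balance < 0:        # the decision
        balance -= 25      # True outcome: charge an overdraft fee
    return balance         # reached on both outcomes

def test_decision_true_outcome():
    assert apply_fee(-10) == -35   # exercises the True outcome

def test_decision_false_outcome():
    assert apply_fee(50) == 50     # exercises the False outcome
```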

78
Q

decision table testing

A

test cases are designed to exercise the combinations of conditions (inputs) and the resulting actions (outputs) shown in a decision table.
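
A sketch under invented rules (membership status and order size driving a discount): each parametrize row corresponds to one rule, i.e., one column of the decision table.

```python
import pytest

def discount(is_member, order_total):
    # Hypothetical business rules, inline so the example runs.
    if is_member and order_total > 100:
        return 15
    if is_member:
        return 10
    if order_total > 100:
        return 5
    return 0

@pytest.mark.parametrize("is_member, order_total, expected", [
    (True,  150, 15),   # rule 1: member,     large order
    (True,   50, 10),   # rule 2: member,     small order
    (False, 150,  5),   # rule 3: non-member, large order
    (False,  50,  0),   # rule 4: non-member, small order
])
def test_discount_rules(is_member, order_total, expected):
    assert discount(is_member, order_total) == expected
```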

79
Q

error guessing

A

a technique used to anticipate the occurrence of errors, defects, and failures, based on the tester's knowledge.

80
Q

equivalence partitioning

A

divides data into partitions (also known as equivalence classes) in such a way that all the members of a given partition are expected to be processed in the same way.
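
A sketch reusing a hypothetical 0-100 percentage field: the input domain splits into three partitions, and one representative value is tested per partition, since all members of a partition should be processed the same way.

```python
import pytest

def is_valid_percentage(value):
    # Hypothetical validator with a valid range of 0..100.
    return 0 <= value <= 100

@pytest.mark.parametrize("value, expected", [
    (-20, False),  # partition 1: below the valid range
    (50,  True),   # partition 2: inside the valid range
    (130, False),  # partition 3: above the valid range
])
def test_percentage_partitions(value, expected):
    assert is_valid_percentage(value) == expected
```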

81
Q

experience-based test technique

A

leverages the experience of developers, testers, and users to design, implement, and execute tests.

82
Q

exploratory testing

A

informal (not pre-defined) tests are designed, executed, logged, and evaluated dynamically (concurrently) during test execution.

83
Q

session-based testing

A

Exploratory testing conducted within a defined time-box, in which the tester uses a test charter containing test objectives to guide the testing.

84
Q

state transition testing

A

test cases are designed to execute valid and invalid state transitions.
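
A sketch using a toy two-state turnstile model (entirely invented): one test drives a valid transition, the other attempts an invalid one and checks that the state is unchanged.

```python
import pytest

class Turnstile:
    def __init__(self):
        self.state = "LOCKED"

    def insert_coin(self):
        if self.state != "LOCKED":
            raise RuntimeError("invalid transition")
        self.state = "UNLOCKED"

    def push(self):
        if self.state != "UNLOCKED":
            raise RuntimeError("invalid transition")
        self.state = "LOCKED"

def test_valid_transition_coin_unlocks():
    t = Turnstile()
    t.insert_coin()              # LOCKED --coin--> UNLOCKED (valid)
    assert t.state == "UNLOCKED"

def test_invalid_transition_push_while_locked():
    t = Turnstile()
    with pytest.raises(RuntimeError):
        t.push()                 # LOCKED --push--> rejected (invalid)
    assert t.state == "LOCKED"   # state must be unchanged
```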

85
Q

statement coverage

A

exercises the executable statements in the code.
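
A worked sketch (function and counts are illustrative): classify has three executable statements, so test_adult alone covers 2/3 ≈ 67% of them, and adding test_minor brings statement coverage to 100%.

```python
def classify(age):
    if age >= 18:        # statement 1
        return "adult"   # statement 2
    return "minor"       # statement 3

def test_adult():
    assert classify(21) == "adult"   # executes statements 1 and 2

def test_minor():
    assert classify(12) == "minor"   # executes statements 1 and 3
```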

86
Q

test technique

A

A procedure used to derive and/or select test cases.

87
Q

use case testing

A

test cases are designed to execute scenarios of use cases; a use case is a sequence of transactions in a dialogue between an actor and a component or system, with a tangible result.

88
Q

white-box test technique

A

(also called structural or structure-based techniques) are based on an analysis of the architecture, detailed design, internal structure, or the code of the test object.

89
Q

configuration management

A

establishes and maintains the integrity of the component or system, the testware, and their relationships to one another through the project and product lifecycle.

90
Q

defect management

A

The process of recognizing, investigating, taking action and disposing of defects. It involves recording defects, classifying them and identifying the impact.

91
Q

defect report

A

(bug report) A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function.

92
Q

entry criteria

A

(definition of ready) define the preconditions for undertaking a given test activity.

93
Q

exit criteria

A

(definition of done) define what conditions must be achieved in order to declare a test level or a set of tests completed.

94
Q

product risk

A

(quality risks) involves the possibility that a work product may fail to satisfy the legitimate needs of its users and/or stakeholders.

95
Q

project risk

A

involves situations that, should they occur, may have a negative effect on a project’s ability to achieve its objectives.

96
Q

risk

A

involves the possibility of an event in the future which has negative consequences.

97
Q

risk level

A

determined by the likelihood of the event and the impact (the harm) from that event.

98
Q

risk-based testing

A

An approach to testing that aims to reduce the level of product risk and inform stakeholders of its status, starting in the initial stages of a project.

99
Q

test approach

A

the starting point for selecting the test techniques, test levels, and test types, and for defining the entry criteria and exit criteria (or definition of ready and definition of done, respectively).

100
Q

test estimation

A

used to determine the effort required for adequate testing.

101
Q

test manager

A

tasked with overall responsibility for the test process and successful leadership of the test activities.

102
Q

test plan

A

A document describing the scope, approach, resources and schedule of intended test activities.

103
Q

test planning

A

Test planning is a continuous activity and is performed throughout the product’s lifecycle.

104
Q

test progress report

A

A document summarizing testing activities and results to report progress of testing activities against a baseline and to communicate risks and alternatives requiring a decision to management.

105
Q

test strategy

A

provides a generalized description of the test process, usually at the product or organizational level.

106
Q

test summary report

A

A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria.

107
Q

tester

A

A skilled professional who is involved in the testing of a component or system.

108
Q

data-driven testing

A

separates the test inputs and expected results, usually into a spreadsheet, and uses a more generic test script that can read the input data and execute the same test script with different data.
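
A sketch of the idea (the CSV string stands in for a spreadsheet; the add function and data are invented): one generic script iterates over the rows and runs the same check with different data.

```python
import csv
import io

# External test data table; an inline CSV string stands in for a spreadsheet.
TEST_DATA = io.StringIO(
    "a,b,expected\n"
    "1,2,3\n"
    "10,-4,6\n"
    "0,0,0\n"
)

def add(a, b):
    # Hypothetical unit under test.
    return a + b

def test_add_from_data_table():
    # Generic script: one execution per data row.
    for row in csv.DictReader(TEST_DATA):
        result = add(int(row["a"]), int(row["b"]))
        assert result == int(row["expected"]), row
```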

109
Q

keyword-driven testing

A

a generic script processes keywords describing the actions to be taken (also called action words), which then calls keyword scripts to process the associated test data.
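
A sketch of the mechanism (the keywords, keyword scripts, and test steps are all invented): a generic loop looks up each action word in a keyword table and dispatches to the matching keyword script with the associated test data.

```python
# Keyword scripts: one small function per action word.
def open_app(state, name):
    state["app"] = name

def enter_text(state, text):
    state["input"] = text

def verify_input(state, expected):
    assert state["input"] == expected

# Keyword table mapping action words to keyword scripts.
KEYWORDS = {
    "OpenApp": open_app,
    "EnterText": enter_text,
    "VerifyInput": verify_input,
}

# The test, described purely in keywords plus test data.
TEST_STEPS = [
    ("OpenApp", "calculator"),
    ("EnterText", "2+2"),
    ("VerifyInput", "2+2"),
]

def test_keyword_driven():
    state = {}
    for keyword, data in TEST_STEPS:
        KEYWORDS[keyword](state, data)  # generic dispatch
```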

110
Q

pilot project

A

Tests involving the introduction of the selected tool into an organization to gain knowledge about the tool, evaluate how it fits with existing processes and practices, etc.

111
Q

probe effect

A

The consequence of using intrusive tools, which may affect the actual outcome of a test.

112
Q

proof-of-concept

A

Establishes whether the tool performs effectively with the software under test and within the current infrastructure, or, if necessary, identifies changes needed to that infrastructure to use the tool effectively.

113
Q

test automation

A

The use of software or scripted sequences to perform or support test activities executed by testing tools.

114
Q

test execution tool

A

executes test objects using automated test scripts.

115
Q

test management tool

A

provides support to the test management and control part of a test process.

116
Q

test harness

A

A test environment comprising the stubs and drivers needed to execute a test.
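
A minimal sketch (the checkout function and payment gateway are hypothetical): a stub replaces the real dependency so the unit can run in isolation, and the test function acts as the driver that invokes it.

```python
def checkout(cart_total, gateway):
    # Unit under test: charges via whatever gateway it is given.
    return gateway.charge(cart_total)

class StubGateway:
    # Stub: stands in for the real payment service.
    def charge(self, amount):
        return {"status": "approved", "amount": amount}

def test_checkout_with_stub():
    # Driver: sets up the stub and invokes the unit under test.
    result = checkout(42.0, StubGateway())
    assert result["status"] == "approved"
    assert result["amount"] == 42.0
```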