GLOSSARY Flashcards
coverage
The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.
debugging
The development activity that finds, analyzes, and fixes defects.
defect
A flaw in a component or system that can cause the component or system to fail to perform its required function.
error
A human action that produces an incorrect result.
failure
Deviation of the component or system from its expected delivery, service or result.
quality
The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.
quality assurance
Part of quality management focused on providing confidence that quality requirements will be fulfilled.
root cause
The earliest actions or conditions that contributed to creating a defect.
test analysis
During test analysis, the test basis is analyzed to identify testable features and define associated test conditions. In other words, test analysis determines “what to test” in terms of measurable coverage criteria.
test basis
The documentation on which the test cases are based.
test case
A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.
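The parts of a test case can be sketched in code. This is an illustrative example only; the `apply_discount` function is hypothetical, not part of the glossary.

```python
def apply_discount(price, percent):
    """Hypothetical system under test: reduce price by the given percentage."""
    return round(price * (1 - percent / 100), 2)

def test_ten_percent_discount():
    # Input values / execution preconditions
    price, percent = 200.0, 10
    # Execute against the test objective (verify discount calculation)
    actual = apply_discount(price, percent)
    # Compare against the expected result
    assert actual == 180.0

test_ten_percent_discount()
```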
test charter
A statement of test objectives, and possibly test ideas about how to test. Test charters are used in exploratory testing.
test completion
Test completion activities collect data from completed test activities to consolidate experience, testware, and any other relevant information.
test condition
An aspect of the test basis that is relevant in order to achieve specific test objectives.
test control
Involves taking actions necessary to meet the objectives of the test plan.
test data
Data that exists before a test is executed, and that affects or is affected by the component or system under test.
test design
During test design, the test conditions are elaborated into high-level test cases, sets of high-level test cases, and other testware. Test design answers the question “how to test?”
test environment
An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.
test execution
The process of running a test on the component or system under test, producing actual result(s). During test execution, test suites are run in accordance with the test execution schedule.
test implementation
During test implementation, the testware necessary for test execution is created and/or completed, including sequencing the test cases into test procedures. Test implementation answers the question “do we now have everything in place to run the tests?”
test monitoring
Test monitoring involves the on-going comparison of actual progress against planned progress using any test monitoring metrics defined in the test plan.
test object
The component or system to be tested.
test objective
A reason or purpose for designing and executing a test.
test oracle
A source to determine expected results to compare with the actual result of the software under test.
test planning
Test planning involves activities that define the objectives of testing and the approach for meeting test objectives within constraints imposed by the context.
test procedure
A sequence of test cases in execution order, and any associated actions that may be required to set up the initial preconditions and any wrap up activities post execution.
test process
The set of test activities fundamental to testing.
test suite
A set of several test cases for a component or system under test.
testing
The process consisting of all lifecycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.
testware
Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing.
traceability
The ability to identify related items in documentation and software, such as requirements with associated tests.
validation
Checking whether the system will meet user and other stakeholder needs in its operational environment(s).
verification
Checking whether the system meets specified requirements.
acceptance testing
Produces information to assess the system’s readiness for deployment and use by the customer (end user).
alpha testing
Performed by potential users/customers or an independent testing team at the developing organization’s site.
beta testing
Performed by potential or existing customers, and/or operators at their own locations.
change-related testing
A type of testing initiated by modification to a component or system.
commercial off-the-shelf (COTS)
A software product that is developed for the general market and that is delivered to many customers in identical format.
component integration testing
Focuses on the interactions and interfaces between integrated components.
component testing
(also known as unit or module testing) focuses on components that are separately testable.
confirmation testing
The purpose of a confirmation test is to confirm whether the original defect has been successfully fixed.
contractual acceptance testing
performed against a contract’s acceptance criteria for producing custom-developed software.
functional testing
involves tests that evaluate functions that the system should perform.
impact analysis
evaluates the changes that were made for a maintenance release to identify the intended consequences as well as expected and possible side effects of a change, and to identify the areas in the system that will be affected by the change.
incremental development
involves establishing requirements, designing, building, and testing a system in pieces, which means that the software’s features grow incrementally.
integration testing
Focuses on interactions between components or systems.
iterative development
occurs when groups of features are specified, designed, built, and tested together in a series of cycles, often of a fixed duration.
maintenance testing
Testing the changes to an operational system or the impact of a changed environment to an operational system.
non-functional testing
evaluates characteristics of systems and software such as usability, performance efficiency or security.
operational acceptance testing (OAT)
performed in a (simulated) operational environment by operations and/or systems administration staff, focusing on operational aspects.
regression testing
involves running tests to detect unintended side effects that a change may accidentally cause in other parts of the code.
regulatory acceptance testing
performed against any regulations that must be adhered to, such as government, legal, or safety regulations.
sequential development model
Describes the software development process as a linear, sequential flow of activities.
system integration testing
Focuses on the interactions and interfaces between systems, packages, and microservices.
system testing
focuses on the behavior and capabilities of a whole system or product, often considering the end-to-end tasks the system can perform and the non-functional behaviors it exhibits while performing those tasks.
test level
A group of test activities that are organized and managed together.
test type
A group of test activities aimed at testing a component or system focused on a specific test objective, e.g., functional testing, usability testing, regression testing.
user acceptance testing (UAT)
focused on validating the fitness for use of the system by intended users in a real or simulated operational environment.
white-box testing
Derives tests based on the system’s internal structure or implementation.
ad hoc review
A review technique performed informally without a structured process, with little or no guidance and needing little preparation.
checklist-based review
A systematic technique whereby reviewers detect issues based on a checklist or set of questions derived from potential defects, distributed at review initiation.
dynamic testing
requires the execution of the software being tested and focuses on externally visible behaviors.
formal review
characterized by team participation, documented results of the review, and documented procedures for conducting the review.
informal review
(buddy check, pairing, pair review) characterized by not following a defined process and not having formal documented output.
inspection
Relies on visual examination of documents to detect defects. The most formal review technique and therefore always based on a documented procedure.
perspective-based reading
A review technique whereby reviewers evaluate the work product from different viewpoints.
review
Manual examination of work products or the evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements.
role-based review
A review technique where reviewers evaluate a work product from the perspective of different stakeholder roles.
scenario-based review
Reviewers are provided with structured guidelines on how to read through the work product. Scenario-based review supports reviewers in performing “dry runs” on the work product based on its expected usage.
static analysis
Tool-driven evaluation of the code or other work products.
static testing
relies on the manual examination of work products (i.e., reviews) or tool-driven evaluation of the code or other work products to improve the consistency and internal quality of work products.
technical review
A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken.
walkthrough
A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content.
black-box test technique
(also called behavioral or behavior-based techniques) are based on an analysis of the appropriate test basis (e.g., formal requirements documents, specifications, use cases, user stories, or business processes).
boundary value analysis
Test cases are designed based on boundary values, which lie on the edges of an equivalence partition (its minimum and maximum values).
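A minimal sketch of boundary value analysis, assuming a hypothetical validator whose valid partition is 1..100: the tests target the values just below, at, and just above the partition edges.

```python
def accepts_quantity(qty):
    """Hypothetical system under test: a valid order quantity is 1..100."""
    return 1 <= qty <= 100

# Boundary values sit at the edges of the valid partition [1, 100]:
# just below the minimum, the minimum, the maximum, just above the maximum.
boundary_cases = [(0, False), (1, True), (100, True), (101, False)]

for value, expected in boundary_cases:
    assert accepts_quantity(value) == expected
```

Off-by-one defects (e.g., `<` written instead of `<=`) are exactly what these edge values are most likely to expose.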
checklist-based testing
Testers design, implement, and execute tests to cover test conditions found in a checklist.
decision coverage
exercises the decisions in the code and tests the code that is executed based on the decision outcomes.
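A brief sketch with a hypothetical `withdraw` function: decision coverage requires tests that drive the `if` condition to both its True and False outcomes, not merely tests that touch every statement.

```python
def withdraw(balance, amount):
    """Hypothetical function under test."""
    if amount > balance:
        return None  # reject the withdrawal
    return balance - amount

# Decision coverage: exercise BOTH outcomes of the decision.
assert withdraw(50, 100) is None   # decision outcome: True (rejected)
assert withdraw(100, 30) == 70     # decision outcome: False (allowed)
```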
decision table testing
test cases are designed to execute the combinations of conditions (inputs) and the resulting actions (outputs) shown in a decision table.
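A sketch of decision table testing under an assumed shipping rule (members and orders of 50 or more ship free): each table row combines condition inputs with the expected action.

```python
def shipping_fee(is_member, order_total):
    """Hypothetical function under test: members and large orders ship free."""
    if is_member or order_total >= 50:
        return 0
    return 5

# Each row pairs a combination of conditions (inputs) with the
# resulting action (output).
decision_table = [
    # (is_member, order_total, expected_fee)
    (True,  20, 0),
    (True,  60, 0),
    (False, 20, 5),
    (False, 60, 0),
]
for is_member, total, expected in decision_table:
    assert shipping_fee(is_member, total) == expected
```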
error guessing
A technique used to anticipate the occurrence of errors, defects, and failures based on the tester’s knowledge.
equivalence partitioning
divides data into partitions (also known as equivalence classes) in such a way that all the members of a given partition are expected to be processed in the same way
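A sketch of equivalence partitioning for a hypothetical age classifier: one representative value per partition is assumed to stand for every member of that partition.

```python
def classify_age(age):
    """Hypothetical function under test."""
    if age < 0:
        return "invalid"
    if age < 18:
        return "minor"
    return "adult"

# Three partitions: age < 0, 0 <= age < 18, age >= 18.
# Test one representative value from each partition.
representatives = [(-5, "invalid"), (10, "minor"), (40, "adult")]
for age, expected in representatives:
    assert classify_age(age) == expected
```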
experience-based test technique
leverage the experience of developers, testers and users to design, implement, and execute tests.
exploratory testing
informal (not pre-defined) tests are designed, executed, logged, and evaluated dynamically (concurrently) during test execution.
session-based testing
Exploratory testing that is conducted within a defined time-box and the tester uses a test charter containing test objectives to guide the testing.
state transition testing
test cases are designed to execute valid and invalid state transitions.
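A sketch of state transition testing over an assumed document workflow (draft, submitted, approved): tests exercise both valid transitions and an invalid one.

```python
# Hypothetical workflow: draft -> submitted -> approved.
VALID_TRANSITIONS = {
    ("draft", "submit"): "submitted",
    ("submitted", "approve"): "approved",
    ("submitted", "reject"): "draft",
}

def transition(state, event):
    """Return the next state, or None for an invalid transition."""
    return VALID_TRANSITIONS.get((state, event))

# Valid transitions
assert transition("draft", "submit") == "submitted"
assert transition("submitted", "approve") == "approved"
# Invalid transition: approving a draft is not allowed
assert transition("draft", "approve") is None
```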
statement coverage
exercises the potentially executable statements in the code.
test technique
A procedure used to derive and/or select test cases.
use case testing
test cases are designed to execute scenarios of use cases; a use case is a sequence of transactions in a dialogue between an actor and a component or system with a tangible result.
white-box test technique
(also called structural or structure-based techniques) are based on an analysis of the architecture, detailed design, internal structure, or the code of the test object.
configuration management
establishes and maintains the integrity of the component or system, the testware, and their relationships to one another through the project and product lifecycle.
defect management
The process of recognizing, investigating, taking action and disposing of defects. It involves recording defects, classifying them and identifying the impact.
defect report
(bug report) A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function.
entry criteria
(definition of ready) define the preconditions for undertaking a given test activity.
exit criteria
(definition of done) define what conditions must be achieved in order to declare a test level or a set of tests completed.
product risk
(quality risks) involves the possibility that a work product may fail to satisfy the legitimate needs of its users and/or stakeholders.
project risk
involves situations that, should they occur, may have a negative effect on a project’s ability to achieve its objectives.
risk
involves the possibility of an event in the future which has negative consequences.
risk level
determined by the likelihood of the event and the impact (the harm) from that event.
risk-based testing
An approach to testing to reduce the level of product risks and inform stakeholders of their status, starting in the initial stages of a project.
test approach
the starting point for selecting the test techniques, test levels, and test types, and for defining the entry criteria and exit criteria (or definition of ready and definition of done, respectively).
test estimation
used to determine the effort required for adequate testing.
test manager
tasked with overall responsibility for the test process and successful leadership of the test activities.
test plan
A document describing the scope, approach, resources and schedule of intended test activities.
test planning
Test planning is a continuous activity and is performed throughout the product’s lifecycle.
test progress report
A document summarizing testing activities and results to report progress of testing activities against a baseline and to communicate risks and alternatives requiring a decision to management.
test strategy
provides a generalized description of the test process, usually at the product or organizational level.
test summary report
A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria.
tester
A skilled professional who is involved in the testing of a component or system.
data-driven testing
separates out the test inputs and expected results, usually into a spreadsheet, and uses a more generic test script that can read the input data and execute the same test script with different data.
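A sketch of data-driven testing: a generic script reads input data and expected results from a data source (here an in-memory CSV stands in for the spreadsheet) and runs the same steps for each row. The `apply_discount` function is hypothetical.

```python
import csv
import io

# The test data would normally live in a spreadsheet or CSV file;
# an in-memory CSV stands in for it here.
TEST_DATA = """price,percent,expected
100,10,90
80,25,60
50,0,50
"""

def apply_discount(price, percent):
    """Hypothetical function under test."""
    return price * (100 - percent) / 100

# One generic script: read each data row, run the same test logic.
for row in csv.DictReader(io.StringIO(TEST_DATA)):
    actual = apply_discount(float(row["price"]), float(row["percent"]))
    assert actual == float(row["expected"])
```

Adding a new test case then means adding a data row, not writing a new script.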
keyword-driven testing
a generic script processes keywords describing the actions to be taken (also called action words), which then calls keyword scripts to process the associated test data.
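A sketch of keyword-driven testing against an assumed shopping-cart example: the test is a table of keywords (action words) plus test data, and a generic script dispatches each keyword to its implementing keyword script.

```python
# Hypothetical keyword scripts: each keyword (action word) maps to a
# function that performs the action against a simple cart.
def open_cart(cart, _):
    cart.clear()

def add_item(cart, price):
    cart.append(float(price))

def check_total(cart, expected):
    assert sum(cart) == float(expected)

KEYWORDS = {
    "open_cart": open_cart,
    "add_item": add_item,
    "check_total": check_total,
}

# The test itself is just a table of keywords and associated test data.
test_steps = [
    ("open_cart", None),
    ("add_item", "20"),
    ("add_item", "5"),
    ("check_total", "25"),
]

cart = []
for keyword, data in test_steps:   # generic script: dispatch each keyword
    KEYWORDS[keyword](cart, data)
```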
pilot project
A small-scale project that introduces the selected tool into an organization to gain knowledge about the tool and to evaluate how it fits with existing processes and practices.
probe effect
Consequence of using intrusive tools, which may affect the actual outcome of a test.
proof-of-concept
Establishes whether the tool performs effectively with the software under test and within the current infrastructure or, if necessary, to identify changes needed to that infrastructure to use the tool effectively.
test automation
The use of software or scripted sequences to perform or support test activities executed by testing tools.
test execution tool
execute test objects using automated test scripts.
test management tool
provides support to the test management and control part of a test process.
test harness
A test environment comprised of stubs and drivers needed to execute a test.
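A sketch of a test harness, assuming a hypothetical order component that depends on an unavailable payment gateway: a stub replaces the gateway with canned answers, and a driver calls the component under test.

```python
# Hypothetical component under test: depends on a payment gateway
# that is not available in the test environment.
def place_order(amount, gateway):
    if gateway.charge(amount):
        return "confirmed"
    return "declined"

class GatewayStub:
    """Stub: replaces the real payment gateway with canned answers."""
    def __init__(self, will_succeed):
        self.will_succeed = will_succeed

    def charge(self, amount):
        return self.will_succeed

# Driver: exercises the component under test with each stub configuration.
assert place_order(10, GatewayStub(True)) == "confirmed"
assert place_order(10, GatewayStub(False)) == "declined"
```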