ISTQB Chapter 5 - Managing the Test Activities - Keywords Flashcards

1
Q

Defect Management

A

The process of recognizing, investigating, taking action on, and disposing of defects. It involves recording defects, classifying them, and identifying their impact.

Since one of the major test objectives is to find defects, an established defect management process is essential. Although we refer to “defects” here, the reported anomalies may turn out to be real defects or something else (e.g., a false positive or a change request); this is resolved during the process of dealing with the defect reports. Anomalies may be reported during any phase of the SDLC, and their form depends on the SDLC. At a minimum, the defect management process includes a workflow for handling individual anomalies from their discovery to their closure, and rules for their classification. The workflow typically comprises activities to log the reported anomalies, analyze and classify them, decide on a suitable response (e.g., to fix the defect or keep the software as it is), and finally to close the defect report. The process must be followed by all involved stakeholders. It is advisable to handle defects from static testing (especially static analysis) in a similar way.
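The workflow steps above can be sketched as a minimal state machine. The state names and allowed transitions below are illustrative assumptions, not prescribed by the syllabus:

```python
# Illustrative defect workflow: log -> analyze/classify -> decide -> close.
ALLOWED = {
    "logged":     {"analyzed"},
    "analyzed":   {"fix", "keep_as_is"},
    "fix":        {"closed"},
    "keep_as_is": {"closed"},
}

def advance(state: str, nxt: str) -> str:
    # Only transitions defined by the workflow above are permitted.
    if nxt not in ALLOWED.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {nxt}")
    return nxt

s = advance("logged", "analyzed")
s = advance(s, "fix")
s = advance(s, "closed")
print(s)  # closed
```

Rejecting undefined transitions is one simple way to enforce that all stakeholders follow the same process.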

2
Q

Defect Report

A

A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function.

Typical defect reports have the following objectives:
* Provide those responsible for handling and resolving reported defects with sufficient information to resolve the issue
* Provide a means of tracking the quality of the work product
* Provide ideas for improvement of the development and test process

A defect report logged during dynamic testing typically includes:
* Unique identifier
* Title with a short summary of the anomaly being reported
* Date when the anomaly was observed, issuing organization, and author, including their role
* Identification of the test object and test environment
* Context of the defect (e.g., test case being run, test activity being performed, SDLC phase, and other relevant information such as the test technique, checklist or test data being used)
* Description of the failure to enable reproduction and resolution including the steps that detected the anomaly, and any relevant test logs, database dumps, screenshots, or recordings
* Expected results and actual results
* Severity of the defect (degree of impact) on the interests of stakeholders or requirements
* Priority to fix
* Status of the defect (e.g., open, deferred, duplicate, waiting to be fixed, awaiting confirmation testing, re-opened, closed, rejected)
* References (e.g., to the test case)

Some of this data may be automatically included when using defect management tools (e.g., identifier, date, author and initial status). Document templates for a defect report and example defect reports can be found in the ISO/IEC/IEEE 29119-3 standard, which refers to defect reports as incident reports.
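The fields listed above can be sketched as a minimal data structure. The field names, default values, and example data are illustrative assumptions, not the incident-report template from ISO/IEC/IEEE 29119-3:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DefectReport:
    # Illustrative sketch of the typical defect report fields listed above;
    # names and allowed values are assumptions, not a standard.
    identifier: str
    title: str
    date_observed: date
    author: str
    test_object: str
    steps_to_reproduce: list[str]
    expected_result: str
    actual_result: str
    severity: str = "medium"        # degree of impact on stakeholders
    priority: str = "normal"        # urgency of the fix
    status: str = "open"            # e.g., open, deferred, rejected, closed
    references: list[str] = field(default_factory=list)

report = DefectReport(
    identifier="DEF-042",
    title="Total price ignores discount code",
    date_observed=date(2024, 5, 1),
    author="tester@example.com",
    test_object="checkout-service v1.3",
    steps_to_reproduce=["Add item", "Apply code SAVE10", "Open cart"],
    expected_result="Total reduced by 10%",
    actual_result="Total unchanged",
)
print(report.status)  # a new report starts in the "open" status
```

A defect management tool would fill fields such as the identifier, date, and initial status automatically, as the card notes.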

3
Q

Entry Criteria

A

The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g., a test phase. The purpose of entry criteria is to prevent starting a task that would entail more (wasted) effort than the effort needed to satisfy the failed entry criteria.

Entry criteria define the preconditions for undertaking a given activity. If entry criteria are not met, it is likely that the activity will prove to be more difficult, time-consuming, costly, and riskier.
Typical entry criteria include: availability of resources (e.g., people, tools, environments, test data, budget, time), availability of testware (e.g., test basis, testable requirements, user stories, test cases), and initial quality level of a test object (e.g., all smoke tests have passed).

4
Q

Exit Criteria

A

The set of generic and specific conditions, agreed upon with the stakeholders for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing.

Exit criteria define what must be achieved in order to declare an activity completed. Entry criteria and exit criteria should be defined for each test level, and will differ based on the test objectives.
Typical exit criteria include: measures of thoroughness (e.g., achieved level of coverage, number of unresolved defects, defect density, number of failed test cases), and completion criteria (e.g., planned tests have been executed, static testing has been performed, all defects found are reported, all regression tests are automated).
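Exit criteria like these can be checked mechanically, as a hedged sketch. The metrics chosen and the thresholds below are made-up examples, not values mandated by the syllabus:

```python
# Illustrative exit-criteria check; the metrics and thresholds are
# example assumptions, not values prescribed by ISTQB.
def exit_criteria_met(coverage: float, unresolved_defects: int,
                      failed_tests: int, planned_tests_executed: bool) -> bool:
    return (coverage >= 0.80             # achieved level of coverage
            and unresolved_defects == 0  # no outstanding defects
            and failed_tests == 0        # all executed tests passed
            and planned_tests_executed)  # completion criterion

print(exit_criteria_met(0.85, 0, 0, True))   # True
print(exit_criteria_met(0.85, 3, 0, True))   # False: unresolved defects remain
```

In practice such a check supports, rather than replaces, the stakeholder agreement the definition calls for.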

5
Q

Product Risk

A

A risk directly related to the test object.

Product risks are related to the product quality characteristics (e.g., described in the ISO 25010 quality
model). Examples of product risks include: missing or wrong functionality, incorrect calculations, runtime
errors, poor architecture, inefficient algorithms, inadequate response time, poor user experience, security
vulnerabilities. Product risks, when they occur, may result in various negative consequences, including:
* User dissatisfaction
* Loss of revenue, trust, reputation
* Damage to third parties
* High maintenance costs, overload of the helpdesk
* Criminal penalties
* In extreme cases, physical damage, injuries or even death

6
Q

Project Risk

A

A risk related to management and control of the (test) project.
For example:
* Organizational issues (e.g., delays in work products deliveries, inaccurate estimates, cost-cutting)
* People issues (e.g., insufficient skills, conflicts, communication problems, shortage of staff)
* Technical issues (e.g., scope creep, poor tool support)
* Supplier issues (e.g., third-party delivery failure, bankruptcy of the supporting company)
Project risks, when they occur, may have an impact on the project schedule, budget or scope, which
affects the project’s ability to achieve its objectives.

7
Q

Risk

A

A factor that could result in future negative consequences.

Risk is a potential event, hazard, threat, or situation whose occurrence causes an adverse effect. A risk can be characterized by two factors:
* Risk likelihood – the probability of the risk occurrence (greater than zero and less than one)
* Risk impact (harm) – the consequences of this occurrence
These two factors express the risk level, which is a measure of the risk. The higher the risk level, the more important its treatment is.

8
Q

Risk Analysis

A

The process of assessing identified project or product risks to determine their level of risk, typically by estimating their impact and probability of occurrence (likelihood).

Consists of risk identification and risk assessment.

9
Q

Risk Assessment

A

The process of identifying and subsequently analyzing the identified project or product risk to determine its level of risk, typically by assigning likelihood and impact ratings.

Risk assessment involves: categorization of identified risks, determining their risk likelihood, risk impact, and risk level, prioritizing them, and proposing ways to handle them. Categorization helps in assigning mitigation actions, because risks falling into the same category can usually be mitigated using a similar approach.

10
Q

Risk Control

A

Product risk control comprises all measures that are taken in response to identified and assessed product risks. Product risk control consists of risk mitigation and risk monitoring.

With respect to product risk control, once a risk has been analyzed, several response options to risk are possible, e.g., risk mitigation by testing, risk acceptance, risk transfer, or contingency plan (Veenendaal 2012).
Actions that can be taken to mitigate the product risks by testing are as follows:
* Select the testers with the right level of experience and skills, suitable for a given risk type
* Apply an appropriate level of independence of testing
* Conduct reviews and perform static analysis
* Apply the appropriate test techniques and coverage levels
* Apply the appropriate test types addressing the affected quality characteristics
* Perform dynamic testing, including regression testing

11
Q

Risk Identification

A

The process of identifying risks using techniques such as brainstorming, checklists, failure history, workshops, interviews, or cause-effect diagrams.

12
Q

Risk Level

A

The importance of a risk as defined by its characteristics: impact and likelihood. The level of risk can be used to determine the intensity of testing to be performed. A risk level can be expressed either qualitatively (e.g., high, medium, low) or quantitatively.

Risk assessment can use a quantitative or qualitative approach, or a mix of them. In the quantitative approach the risk level is calculated as the multiplication of risk likelihood and risk impact. In the qualitative approach the risk level can be determined using a risk matrix.
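Both approaches can be illustrated in a few lines. The likelihood value, the 1-5 impact scale, and the 3x3 matrix below are illustrative assumptions:

```python
# Quantitative: risk level = risk likelihood x risk impact.
likelihood = 0.4   # probability of occurrence (greater than 0, less than 1)
impact = 5         # harm on an assumed 1-5 scale
print(likelihood * impact)  # 2.0

# Qualitative: look up the level in an assumed 3x3 risk matrix,
# keyed by (likelihood rating, impact rating).
RISK_MATRIX = {
    ("high", "high"): "high",     ("high", "medium"): "high",
    ("high", "low"): "medium",    ("medium", "high"): "high",
    ("medium", "medium"): "medium", ("medium", "low"): "low",
    ("low", "high"): "medium",    ("low", "medium"): "low",
    ("low", "low"): "low",
}
print(RISK_MATRIX[("medium", "high")])  # high
```

A team would agree on its own scales and matrix; the point is only that both approaches yield a comparable risk level for prioritization.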

13
Q

Risk Management

A

Systematic application of procedures and practices to the tasks of identifying, analyzing, prioritizing, and controlling risk.

Risk management allows organizations to increase the likelihood of achieving objectives, improve the quality of their products, and increase the stakeholders’ confidence and trust.

The main risk management activities are:
* Risk analysis (consisting of risk identification and risk assessment; see section 5.2.3)
* Risk control (consisting of risk mitigation and risk monitoring; see section 5.2.4)

The test approach, in which test activities are selected, prioritized, and managed based on risk analysis and risk control, is called risk-based testing.

14
Q

Risk Mitigation

A

The process through which decisions are reached and protective measures are implemented for reducing risks to, or maintaining risks within, specified levels.

Risk mitigation involves implementing the actions proposed in risk assessment to reduce the risk level. The aim of risk monitoring is to ensure that the mitigation actions are effective, to obtain further information to improve risk assessment, and to identify emerging risks.

15
Q

Risk Monitoring

A

The aim of risk monitoring is to ensure that the mitigation actions are effective, to obtain further information to improve risk assessment, and to identify emerging risks.

16
Q

Risk-Based Testing

A

An approach to testing to reduce the level of product risks and inform stakeholders of their status, starting in the initial stages of a project. It involves the identification of product risks and the use of risk levels to guide the test process.

The test approach, in which test activities are selected, prioritized, and managed based on risk analysis and risk control, is called risk-based testing.

17
Q

Test Approach

A

The implementation of the test strategy for a specific project. It typically includes the decisions made based on the (test) project’s goals and the risk assessment carried out, the starting points for the test process, the test design techniques to be applied, the exit criteria, and the test types to be performed.

18
Q

Test Completion Report

A

Summarizes a specific stage of testing (e.g., test level, test cycle, iteration) and can give information for subsequent testing.

A test completion report is prepared during test completion, when a project, test level, or test type is complete and when, ideally, its exit criteria have been met. This report uses test progress reports and other data. Typical test completion reports include:
* Test summary
* Testing and product quality evaluation based on the original test plan (i.e., test objectives and exit criteria)
* Deviations from the test plan (e.g., differences from the planned schedule, duration, and effort)
* Testing impediments and workarounds
* Test metrics based on test progress reports
* Unmitigated risks, defects not fixed
* Lessons learned that are relevant to the testing

Different audiences require different information in the reports, and influence the degree of formality and the frequency of reporting. Reporting on test progress to others in the same team is often frequent and informal, while reporting on testing for a completed project follows a set template and occurs only once.
The ISO/IEC/IEEE 29119-3 standard includes templates and examples for test progress reports (called test status reports) and test completion reports.

19
Q

Test Control

A

A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned.

Test control uses the information from test monitoring to provide, in the form of control directives, guidance and the necessary corrective actions to achieve the most effective and efficient testing.
Examples of control directives include:
* Reprioritizing tests when an identified risk becomes an issue
* Re-evaluating whether a test item meets entry criteria or exit criteria due to rework
* Adjusting the test schedule to address a delay in the delivery of the test environment
* Adding new resources when and where needed
Test monitoring and control work products include: test progress reports (see section 5.3.2), documentation of control directives (see section 5.3) and risk information (see section 5.2).

During test monitoring and control, the test team generates test progress reports for stakeholders to keep them informed. Test progress reports are usually generated on a regular basis (e.g., daily, weekly, etc.) and include:
* Test period
* Test progress (e.g., ahead or behind schedule), including any notable deviations
* Impediments for testing, and their workarounds
* Test metrics (see section 5.3.1 for examples)
* New and changed risks within testing period
* Testing planned for the next period

20
Q

Test Monitoring

A

A test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare the actuals to what was planned.

Test monitoring involves the ongoing checking of all test activities and the
comparison of actual progress against the plan.
Test monitoring and control work products include: test progress reports (see section 5.3.2), documentation of control directives (see section 5.3) and risk information (see section 5.2).

Test monitoring gathers a variety of metrics to support test control and test completion.
Common test metrics include:
* Project progress metrics (e.g., task completion, resource usage, test effort)
* Test progress metrics (e.g., test case implementation progress, test environment preparation progress, number of test cases run/not run, passed/failed, test execution time)
* Product quality metrics (e.g., availability, response time, mean time to failure)
* Defect metrics (e.g., number and priorities of defects found/fixed, defect density, defect detection percentage)
* Risk metrics (e.g., residual risk level)
* Coverage metrics (e.g., requirements coverage, code coverage)
* Cost metrics (e.g., cost of testing, organizational cost of quality)
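Two of the defect metrics above can be computed directly: defect density is the number of defects per unit of product size, and defect detection percentage (DDP) relates defects found by testing to all defects found, including those that escaped to the field. The figures below are made up:

```python
# Defect density: defects found per unit of product size (here, per KLOC).
defects_found = 30
size_kloc = 12.0
defect_density = defects_found / size_kloc
print(round(defect_density, 2))  # 2.5 defects per KLOC

# Defect detection percentage (DDP): share of defects found by testing
# out of all defects found (in testing plus after release).
found_by_testing = 30
found_after_release = 10
ddp = 100 * found_by_testing / (found_by_testing + found_after_release)
print(ddp)  # 75.0
```

Tracked over releases, such metrics feed both test control decisions and the test completion report.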

During test monitoring and control, the test team generates test progress reports for stakeholders to keep them informed. Test progress reports are usually generated on a regular basis (e.g., daily, weekly, etc.) and include:
* Test period
* Test progress (e.g., ahead or behind schedule), including any notable deviations
* Impediments for testing, and their workarounds
* Test metrics (see section 5.3.1 for examples)
* New and changed risks within testing period
* Testing planned for the next period

21
Q

Test Plan

A

A document describing the scope, approach, resources, and schedule of intended test activities. It identifies, amongst others, the test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.

A test plan describes the objectives, resources and processes for a test project. A test plan:
* Documents the means and schedule for achieving test objectives
* Helps to ensure that the performed test activities will meet the established criteria
* Serves as a means of communication with team members and other stakeholders
* Demonstrates that testing will adhere to the existing test policy and test strategy (or explains why the testing will deviate from them)

The typical content of a test plan includes:
* Context of testing (e.g., scope, test objectives, constraints, test basis)
* Assumptions and constraints of the test project
* Stakeholders (e.g., roles, responsibilities, relevance to testing, hiring and training needs)
* Communication (e.g., forms and frequency of communication, documentation templates)
* Risk register (e.g., product risks, project risks)
* Test approach (e.g., test levels, test types, test techniques, test deliverables, entry criteria and exit criteria, independence of testing, metrics to be collected, test data requirements, test environment requirements, deviations from the organizational test policy and test strategy)
* Budget and schedule

More details about the test plan and its content can be found in the ISO/IEC/IEEE 29119-3 standard.

22
Q

Test Planning

A

The activity of establishing or updating a test plan.

Test planning consists of defining the test objectives and then selecting an approach that best achieves the objectives within the constraints imposed by the overall context.

Test planning work products include: test plan, test schedule, risk register, and entry and exit criteria (see section 5.1). A risk register is a list of risks together with risk likelihood, risk impact, and information about risk mitigation (see section 5.2). The test schedule, risk register, and entry and exit criteria are often part of the test plan.

Test planning guides the testers’ thinking and forces the testers to confront the future challenges related to risks, schedules, people, tools, costs, effort, etc. The process of preparing a test plan is a useful way to
think through the efforts needed to achieve the test project objectives.

23
Q

Test Progress Report

A

A document summarizing testing activities and results, produced at regular intervals, to report progress of testing activities against a baseline (such as the original test plan) and to communicate risks and alternatives requiring a decision to management.

Test progress reports support the ongoing control of the testing and must provide enough information to make modifications to the test schedule, resources, or test plan, when such changes are needed due to deviation from the plan or changed circumstances.
Test progress reports are usually generated on a regular basis (e.g., daily, weekly, etc.) and include:
* Test period
* Test progress (e.g., ahead or behind schedule), including any notable deviations
* Impediments for testing, and their workarounds
* Test metrics (see section 5.3.1 for examples)
* New and changed risks within testing period
* Testing planned for the next period

The ISO/IEC/IEEE 29119-3 standard includes templates and examples for test progress reports (called test status reports) and test completion reports.

24
Q

Test Pyramid

A

A graphical model representing the relationship of the amount of testing per level, with more at the bottom than at the top.

The test pyramid is a model showing that different tests may have different granularity. The test pyramid
model supports the team in test automation and in test effort allocation by showing that different goals are
supported by different levels of test automation. The pyramid layers represent groups of tests. The higher
the layer, the lower the test granularity, test isolation and test execution time. Tests in the bottom layer
are small, isolated, fast, and check a small piece of functionality, so usually a lot of them are needed to
achieve a reasonable coverage. The top layer represents complex, high-level, end-to-end tests. These
high-level tests are generally slower than the tests from the lower layers, and they typically check a large
piece of functionality, so usually just a few of them are needed to achieve a reasonable coverage. The
number and naming of the layers may differ. For example, the original test pyramid model (Cohn 2009)
defines three layers: “unit tests”, “service tests” and “UI tests”. Another popular model defines unit (component) tests, integration (component integration) tests, and end-to-end tests. Other test levels (see
section 2.2.1) can also be used.

25
Q

Testing Quadrants

A

A classification model of test types/levels in four quadrants, relating them to two dimensions of test goals: supporting the team vs. critiquing the product, and technology-facing vs. business-facing.

The testing quadrants, defined by Brian Marick (Marick 2003, Crispin 2008), group the test levels with the appropriate test types, activities, test techniques, and work products in Agile software development. The model supports test management in visualizing these to ensure that all appropriate test types and test levels are included in the SDLC and in understanding that some test types are more relevant to certain test levels than others. This model also provides a way to differentiate and describe the types of tests to all stakeholders, including developers, testers, and business representatives.
In this model, tests can be business facing or technology facing. Tests can also support the team (i.e., guide the development) or critique the product (i.e., measure its behavior against the expectations). The combination of these two viewpoints determines the four quadrants:
* Quadrant Q1 (technology facing, support the team). This quadrant contains component and component integration tests. These tests should be automated and included in the CI process.
* Quadrant Q2 (business facing, support the team). This quadrant contains functional tests, examples, user story tests, user experience prototypes, API testing, and simulations. These tests check the acceptance criteria and can be manual or automated.
* Quadrant Q3 (business facing, critique the product). This quadrant contains exploratory testing, usability testing, user acceptance testing. These tests are user-oriented and often manual.
* Quadrant Q4 (technology facing, critique the product). This quadrant contains smoke tests and non-functional tests (except usability tests). These tests are often automated.
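As a small sketch, the quadrant classification can be expressed as a lookup keyed on the two dimensions; the summaries follow the list above, abbreviated:

```python
# Quadrants keyed by (facing, goal), per the card above.
QUADRANTS = {
    ("technology", "support"):  "Q1: component and component integration tests",
    ("business",   "support"):  "Q2: functional tests, user story tests, API testing",
    ("business",   "critique"): "Q3: exploratory, usability, user acceptance testing",
    ("technology", "critique"): "Q4: smoke tests and non-functional tests",
}
print(QUADRANTS[("business", "critique")])
```

Classifying a planned test this way is a quick check that no quadrant is being neglected.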

26
Q

SDLC

A

Software Development Lifecycle