Chapter 5: Managing the Test Activities (Flashcards)
The process of recognizing, recording, classifying, investigating, resolving and disposing of defects
Defect management
- Documentation of the occurrence, nature, and status of a defect. Synonyms: bug report
Defect report
The set of conditions for officially starting a defined task. Reference: Gilb and Graham. See also: exit criteria
Entry criteria
The set of conditions for officially completing a defined task. Synonyms: test completion criteria, completion criteria
Exit criteria
- A risk impacting the quality of a product. See also: risk
Product risk
A risk that impacts project success. See also: risk
Project risk
A factor that could result in future negative consequences.
Risk
The overall process of risk identification and risk assessment
Risk analysis
The process to examine identified risks and determine the risk level
Risk assessment
The overall process of risk mitigation and risk monitoring
Risk control
- The process of finding, recognizing and describing risks.
Risk identification
- The measure of a risk defined by risk impact and risk likelihood.
Risk level
- The process for handling risks
Risk management
- The process through which decisions are reached and protective measures are implemented for reducing or maintaining risks to specified levels
Risk mitigation
- The activity that checks and reports the status of known risks to stakeholders
Risk monitoring
- Testing in which the management, selection, prioritization, and use of testing activities and resources are based on corresponding risk types and risk levels.
Risk-based testing
- The manner of implementing testing tasks
Test approach
- A type of test report produced at completion milestones that provides an evaluation of the corresponding test items against exit criteria.
Test completion report
The activity that develops and applies corrective actions to get a test project on track when it deviates from what was planned. See also: test management
Test control
The activity that checks the status of testing activities, identifies any variances from planned or expected, and reports status to stakeholders. See also: test management
Test monitoring
Documentation describing the test objectives to be achieved and the means and the schedule for achieving them, organized to coordinate testing activities. Reference: After ISO 29119-1. See also: master test plan, level test plan, test scope
Test plan
- The activity of establishing or updating a test plan.
Test planning
- A type of periodic test report that includes the progress of test activities against a baseline, risks, and alternatives requiring a decision. Synonyms: test status report
Test progress report
- A graphical model representing the relationship of the amount of testing per level, with more at the bottom than at the top
Test pyramid
A classification model of test types/test levels in four quadrants, relating them to two dimensions of test objectives: supporting the product team versus critiquing the product, and technology-facing versus business-facing
Testing quadrants
A test plan describes the objectives, resources and processes for a test project. A test plan:
Documents the means and schedule for achieving test objectives
Helps to ensure that the performed test activities will meet the established criteria
Serves as a means of communication with team members and other stakeholders
Demonstrates that testing will adhere to the existing test policy and test strategy (or explains why the testing will deviate from them)
The process of preparing a test plan is a useful way to think through the efforts needed to achieve the test project objectives.
Test Plan
In iterative SDLCs, typically two kinds of planning occur:
release planning and iteration planning
Test planning guides the testers’ thinking and forces the testers to confront the future challenges related to
risks, schedules, people, tools, costs, effort, etc.
The typical content of a test plan includes:
Context of testing (e.g., scope, test objectives, constraints, test basis)
Assumptions and constraints of the test project
Stakeholders (e.g., roles, responsibilities, relevance to testing, hiring and training needs)
Communication (e.g., forms and frequency of communication, documentation templates)
Risk register (e.g., product risks, project risks)
Test approach (e.g., test levels, test types, test techniques, test deliverables, entry criteria and exit criteria, independence of testing, metrics to be collected, test data requirements, test environment requirements, deviations from the organizational test policy and test strategy)
Budget and schedule
looks ahead to the release of a product, defines and re-defines the product backlog, and may involve refining larger user stories into a set of smaller user stories.
It also serves as the basis for the test approach and test plan across all iterations.
release planning
Testers involved in release planning:
participate in writing testable user stories and acceptance criteria (see section 4.5),
participate in project and quality risk analyses (see section 5.2),
estimate test effort associated with user stories (see section 5.1.4),
determine the test approach,
and plan the testing for the release.
looks ahead to the end of a single iteration and is concerned with the iteration backlog.
iteration planning
Testers involved in iteration planning:
participate in the detailed risk analysis of user stories,
determine the testability of user stories,
break down user stories into tasks (particularly testing tasks),
estimate test effort for all testing tasks,
identify and refine functional and non-functional aspects of the test object.
define the preconditions for undertaking a given activity.
Entry criteria
If entry criteria are not met, it is likely that the activity will prove to be
more difficult, time-consuming, costly, and riskier.
define what must be achieved in order to declare an activity completed.
Exit criteria
Entry criteria and exit criteria should be
defined for each test level, and will differ based on the test objectives.
Typical entry criteria include:
availability of resources (e.g., people, tools, environments, test data, budget, time),
availability of testware (e.g., test basis, testable requirements, user stories, test cases),
and initial quality level of a test object (e.g., all smoke tests have passed).
Typical exit criteria include:
measures of thoroughness (e.g., achieved level of coverage, number of unresolved defects, defect density, number of failed test cases),
and completion criteria (e.g., planned tests have been executed, static testing has been performed, all defects found are reported, all regression tests are automated).
Running out of time or budget can also be viewed as valid
exit criteria
Even without other exit criteria being satisfied, it can be acceptable to end testing under such circumstances, if the stakeholders have reviewed and accepted the risk to go live without further testing.
In Agile software development, exit criteria are often called
Definition of Done
defining the team’s objective metrics for a releasable item
Definition of Done
Entry criteria that a user story must fulfill to start the development and/or testing activities are called
Definition of Ready
involves predicting the amount of test-related work needed to meet the objectives of a test project.
Test effort estimation
It is important to make it clear to the stakeholders that the estimate is based on a number of assumptions and is always subject to estimation error.
Estimation for small tasks is usually more accurate than for large ones. Therefore, when estimating a large task, it can
be decomposed into a set of smaller tasks, which can then be estimated in turn.
estimation techniques
Estimation based on ratios
Extrapolation
Wideband Delphi
Three-point estimation
-In this metrics-based technique, figures are collected from previous projects within the organization, which makes it possible to derive “standard” ratios for similar projects.
Estimation based on ratios
-The ratios of an organization’s own projects (e.g., taken from historical data) are generally the best source to use in the estimation process.
Estimation based on ratios
-These standard ratios can then be used to estimate the test effort for the new project. -For example, if in the previous project the development-to-test effort ratio was 3:2, and in the current project the development effort is expected to be 600 person-days, the test effort can be estimated to be 400 person-days.
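A minimal Python sketch of the ratio example above; the function name and default 3:2 ratio are illustrative assumptions:
```python
# Estimation based on ratios: derive the test effort for a new project from a
# historical development-to-test effort ratio (here the 3:2 ratio from the example).
def estimate_test_effort(dev_effort_days: float,
                         dev_to_test_ratio: tuple[float, float] = (3, 2)) -> float:
    dev_part, test_part = dev_to_test_ratio
    return dev_effort_days * test_part / dev_part

# 600 person-days of development effort with a 3:2 ratio -> 400 person-days of test effort.
print(estimate_test_effort(600))  # 400.0
```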
In this metrics-based technique, measurements are made as early as possible in the current project to gather the data
Extrapolation
Having enough observations, the effort required for the remaining work can be approximated by extrapolating this data (usually by applying a mathematical model). This method is very suitable in iterative SDLCs. For example, the team may extrapolate the test effort in the forthcoming iteration as the averaged effort from the last three iterations.
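A minimal sketch of the extrapolation example above, averaging the effort of the last three iterations (function name and figures are illustrative):
```python
# Extrapolation: approximate the test effort for the next iteration as the
# average of the effort measured in the most recent iterations.
def extrapolate_next_iteration(effort_history: list[float], window: int = 3) -> float:
    recent = effort_history[-window:]
    return sum(recent) / len(recent)

# Test effort (person-days) observed in past iterations; the forecast averages the last three.
print(extrapolate_next_iteration([40, 55, 50, 60]))  # (55 + 50 + 60) / 3 = 55.0
```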
-Data source: Data is gathered from past projects within the organization.
Estimation based on ratios
-Data source: Data is gathered from the current project’s early stages or iterations.
Extrapolation
In this iterative, expert-based technique, experts make experience-based estimations. Each expert, in isolation, estimates the effort.
Wideband Delphi
The results are collected, and if there are deviations outside the agreed boundaries, the experts discuss their current estimates. Each expert is then asked to make a new estimation based on that feedback, again in isolation. This process is repeated until a consensus is reached. Planning Poker is a variant of Wideband Delphi, commonly used in Agile software development. In Planning Poker, estimates are usually made using cards with numbers that represent the effort size.
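A minimal sketch of the consensus check described above, assuming the team treats estimates within a fixed spread as consensus (the threshold and function name are illustrative):
```python
# Wideband Delphi: after each round of independent estimates, check whether the
# spread is within the agreed boundary; if not, another estimation round follows.
def has_consensus(estimates: list[float], max_spread: float = 2.0) -> bool:
    return max(estimates) - min(estimates) <= max_spread

print(has_consensus([5, 8, 13]))  # False -> experts discuss and re-estimate in isolation
print(has_consensus([8, 8, 9]))   # True  -> consensus reached
```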
a consensus-based estimation technique
Wideband Delphi
In this expert-based technique, three estimations are made by the experts: the most optimistic estimation (a), the most likely estimation (m) and the most pessimistic estimation (b). The final estimate (E) is their weighted arithmetic mean.
Three-point estimation.
In the most popular version of this technique, the estimate is calculated as E = (a + 4m + b) / 6. The advantage of this technique is that it allows the experts to calculate the measurement error: SD = (b – a) / 6. For example, if the estimates (in person-hours) are: a=6, m=9 and b=18, then the final estimation is 10±2 person-hours (i.e., between 8 and 12 person-hours), because E = (6 + 4×9 + 18) / 6 = 10 and SD = (18 – 6) / 6 = 2.
Three-point estimation.
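A minimal sketch reproducing the worked example above (names are illustrative):
```python
# Three-point estimation: E = (a + 4m + b) / 6 and SD = (b - a) / 6.
def three_point_estimate(a: float, m: float, b: float) -> tuple[float, float]:
    e = (a + 4 * m + b) / 6   # final estimate (weighted arithmetic mean)
    sd = (b - a) / 6          # measurement error (standard deviation)
    return e, sd

e, sd = three_point_estimate(a=6, m=9, b=18)
print(f"{e} ± {sd} person-hours")  # 10.0 ± 2.0 person-hours
```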
test case prioritization: Once the test cases and test procedures are specified and assembled into test suites,
these test suites can be arranged in a test execution schedule that defines the order in which they are to be run
When prioritizing test cases, different factors can be taken into account. The most commonly used test case prioritization strategies are as follows:
Risk-based prioritization
Coverage-based prioritization
Requirements-based prioritization
Ideally, test cases would be ordered to run based on their priority levels, using, for example, one of the above-mentioned prioritization strategies. However, this practice may not work if the test cases or the features being tested have dependencies. If a test case with a higher priority is dependent on a test case with a lower priority, the lower priority test case must be executed first.
where the order of test execution is based on the results of risk analysis (see section 5.2.3). Test cases covering the most important risks are executed first.
Risk-based prioritization
where the order of test execution is based on coverage (e.g., statement coverage). Test cases achieving the highest coverage are executed first. In another variant, called additional coverage prioritization, the test case achieving the highest coverage is executed first; each subsequent test case is the one that achieves the highest additional coverage.
Coverage-based prioritization
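A minimal sketch of the additional coverage prioritization variant, assuming each test case is mapped to the set of items (e.g., statements) it covers; the test data is illustrative:
```python
# Additional coverage prioritization: greedily pick the test case that covers the
# most not-yet-covered items; append the rest once no test adds new coverage.
def prioritize_by_additional_coverage(coverage: dict[str, set[str]]) -> list[str]:
    ordered, covered, remaining = [], set(), dict(coverage)
    while remaining:
        best = max(remaining, key=lambda tc: len(remaining[tc] - covered))
        if not remaining[best] - covered:          # no remaining test adds new coverage
            ordered.extend(sorted(remaining))
            break
        covered |= remaining.pop(best)
        ordered.append(best)
    return ordered

tests = {"TC1": {"s1", "s2"}, "TC2": {"s2", "s3", "s4"}, "TC3": {"s1"}}
print(prioritize_by_additional_coverage(tests))  # ['TC2', 'TC1', 'TC3']
```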
where the order of test execution is based on the priorities of the requirements traced back to the corresponding test cases. Requirement priorities are defined by stakeholders. Test cases related to the most important requirements are executed first.
Requirements-based prioritization
The order of test execution must also take into account the availability of resources. For example:
the required test tools, test environments or people that may only be available for a specific time window.
is a model showing that different tests may have different granularity.
The test pyramid
supports the team in test automation and in test effort allocation by showing that different goals are supported by different levels of test automation
The test pyramid model
The pyramid layers represent groups of tests.
The higher the layer, the lower the test granularity, test isolation and test execution speed (i.e., tests higher up cover larger parts of the system and run more slowly).
Test pyramid
testing pyramid: The number and naming of the layers may differ. For example, the original test pyramid model (Cohn 2009) defines three layers:
“unit tests”, “service tests” and “UI tests”.
Another popular model defines unit (component) tests, integration (component integration) tests, and end-to-end tests. Other test levels (see section 2.2.1) can also be used.
group the test levels with the appropriate test types, activities, test techniques and work products in Agile software development.
testing quadrants
This model also provides a way to differentiate and describe the types of tests to all stakeholders, including developers, testers, and business representatives.
testing quadrants
Quadrant Q1
(technology facing, support the team). This quadrant contains component and component integration tests. These tests should be automated and included in the CI process.
Quadrant Q2
(business facing, support the team). This quadrant contains functional tests, examples, user story tests, user experience prototypes, API testing, and simulations. These tests check the acceptance criteria and can be manual or automated.
Quadrant Q3
(business facing, critique the product). This quadrant contains exploratory testing, usability testing, user acceptance testing. These tests are user-oriented and often manual.
Quadrant Q4
(technology facing, critique the product). This quadrant contains smoke tests and non-functional tests (except usability tests). These tests are often automated.
allows organizations to increase the likelihood of achieving objectives, improve the quality of their products and increase the stakeholders’ confidence and trust.
Risk management
A test approach in which test activities are selected, prioritized, and managed based on risk analysis and risk control
risk-based testing
The main risk management activities are:
Risk analysis (consisting of risk identification and risk assessment; see section 5.2.3)
Risk control (consisting of risk mitigation and risk monitoring; see section 5.2.4)
is a potential event, hazard, threat, or situation whose occurrence causes an adverse effect.
Risk
A risk can be characterized by two factors:
Risk likelihood – the probability of the risk occurrence (greater than zero and less than one)
Risk impact (harm) – the consequences of this occurrence
These two factors express the risk level, which is a measure for the risk. The higher the risk level, the more important is its treatment.
In software testing one is generally concerned with two types of risks:
project risks and product risks.
Project risks are related to the management and control of the project.
Project risks include:
Organizational issues (e.g., delays in work products deliveries, inaccurate estimates, cost-cutting)
People issues (e.g., insufficient skills, conflicts, communication problems, shortage of staff)
Technical issues (e.g., scope creep, poor tool support)
Supplier issues (e.g., third-party delivery failure, bankruptcy of the supporting company)
It happens when new features, requirements, or tasks are added to a project after it has already begun, often without proper review or adjustment to the project plan.
- scope creep:
when they occur, may have an impact on the project schedule, budget or scope, which affects the project’s ability to achieve its objectives.
Project risks,
are related to the product quality characteristics (e.g., described in the ISO 25010 quality model).
Product risks
Examples of product risks include:
missing or wrong functionality,
incorrect calculations,
runtime errors,
poor architecture,
inefficient algorithms,
inadequate response time,
poor user experience,
security vulnerabilities.
Product risks, when they occur, may result in various negative consequences, including:
User dissatisfaction
Loss of revenue, trust, reputation
Damage to third parties
High maintenance costs, overload of the helpdesk
Criminal penalties
In extreme cases, physical damage, injuries or even death
From a testing perspective, the goal of this analysis is to provide an awareness of product risk in order to focus the testing effort in a way that minimizes the residual (remaining) level of product risk. Ideally, this analysis begins early in the SDLC.
product risk analysis
Product risk analysis consists of
risk identification and risk assessment.
is about generating a comprehensive list of risks. Stakeholders can identify risks by using various techniques and tools, e.g., brainstorming, workshops, interviews, or cause-effect diagrams.
Risk identification
Risk assessment involves:
categorization of identified risks,
determining their risk likelihood,
risk impact and level,
prioritizing,
proposing ways to handle them.
Categorization helps in assigning mitigation actions, because risks falling into the same category can usually be mitigated using a similar approach. (To mitigate a risk means to make it less severe.)
can use a quantitative or qualitative approach, or a mix of them.
-Risk assessment
the risk level is calculated as the product of risk likelihood and risk impact.
-In the quantitative approach
(risk assessment)
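A minimal sketch of the quantitative approach, assuming likelihood is expressed as a probability and impact as an estimated cost (figures are illustrative):
```python
# Quantitative risk assessment: risk level = risk likelihood x risk impact.
def risk_level(likelihood: float, impact: float) -> float:
    return likelihood * impact

# A 20% likelihood of a failure with an estimated impact of 50,000 (e.g., in lost revenue).
print(risk_level(0.2, 50_000))  # 10000.0 -> used to prioritize risks against each other
```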
the risk level can be determined using a risk matrix.
- In the qualitative approach
(risk assessment)
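A minimal sketch of the qualitative approach, assuming a simple 3x3 risk matrix over likelihood and impact categories (the categories and resulting levels are illustrative):
```python
# Qualitative risk assessment: look up the risk level in a risk matrix
# indexed by likelihood and impact categories.
RISK_MATRIX = {
    ("low", "low"): "low",       ("low", "medium"): "low",       ("low", "high"): "medium",
    ("medium", "low"): "low",    ("medium", "medium"): "medium", ("medium", "high"): "high",
    ("high", "low"): "medium",   ("high", "medium"): "high",     ("high", "high"): "high",
}

def qualitative_risk_level(likelihood: str, impact: str) -> str:
    return RISK_MATRIX[(likelihood, impact)]

print(qualitative_risk_level("medium", "high"))  # high
```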
Product risk analysis may influence the thoroughness and scope of testing. Its results are used to:
Determine the scope of testing to be carried out
Determine the particular test levels and propose test types to be performed
Determine the test techniques to be employed and the coverage to be achieved
Estimate the test effort required for each task
Prioritize testing in an attempt to find the critical defects as early as possible
Determine whether any activities in addition to testing could be employed to reduce risk
comprises all measures that are taken in response to identified and assessed product risks.
Product risk control
Product risk control consists of
risk mitigation and risk monitoring.
involves implementing the actions proposed in risk assessment to reduce the risk level.
Risk mitigation
The aim of this activity is to ensure that the mitigation actions are effective, to obtain further information to improve risk assessment, and to identify emerging risks.
risk monitoring
With respect to product risk control, once a risk has been analyzed, several response options are possible, for example:
risk mitigation by testing,
risk acceptance,
risk transfer, or
contingency plan
Actions that can be taken to mitigate the product risks by testing are as follows:
Select the testers with the right level of experience and skills, suitable for a given risk type
Apply an appropriate level of independence of testing
Conduct reviews and perform static analysis
Apply the appropriate test techniques and coverage levels
Apply the appropriate test types addressing the affected quality characteristics
Perform dynamic testing, including regression testing
is a variant of Wideband Delphi, commonly used in Agile software development.
Planning Poker
In Planning Poker, estimates are usually made using cards with numbers that represent the effort size.
three-point estimation, the estimate is calculated as
E = (a + 4m + b) / 6
three estimations are made by the experts: the most optimistic estimation (a), the most likely estimation (m) and the most pessimistic estimation (b). The final estimate (E) is their weighted arithmetic mean.
three-point estimation, the measurement error is calculated as
SD = (b – a) / 6
three estimations are made by the experts: the most optimistic estimation (a), the most likely estimation (m) and the most pessimistic estimation (b). The final estimate (E) is their weighted arithmetic mean.
is a tool used to assess and prioritize risks by evaluating the likelihood of an event occurring and the potential impact or severity if the event occurs.
A risk matrix
Risk analysis consists of
risk identification and risk assessment
Risk control consists of
risk mitigation and risk monitoring
The degree to which test conditions can be established for a component or system, and tests can be performed to determine whether those test conditions have been met.
testability
Context of testing, in the typical content of a test plan, includes:
(e.g., scope, test objectives, constraints, test basis)
Stakeholders, in the typical content of a test plan, include:
(e.g., roles, responsibilities, relevance to testing, hiring and training needs)
Risk register, in the typical content of a test plan, includes:
(e.g., product risks, project risks)
Test approach, in the typical content of a test plan, includes:
(e.g., test levels, test types, test techniques, test deliverables, entry criteria and exit criteria, independence of testing, metrics to be collected, test data requirements, test environment requirements, deviations from the organizational test policy and test strategy)