ISTQB Chapter 5 - Managing the Test Activities - Keywords Flashcards
Defect Management
The process of recognizing, investigating, taking action on, and disposing of defects. It involves recording defects, classifying them, and identifying their impact.
Since one of the major test objectives is to find defects, an established defect management process is essential. Although we refer to “defects” here, a reported anomaly may turn out to be a real defect or something else (e.g., a false positive or a change request); this is resolved while processing the defect report. Anomalies may be reported during any phase of the SDLC, and the form of reporting depends on the SDLC. At a minimum, the defect management process includes a workflow for handling individual anomalies from their discovery to their closure, and rules for their classification. The workflow typically comprises activities to log the reported anomalies, analyze and classify them, decide on a suitable response (e.g., to fix the defect or keep it as it is), and finally to close the defect report. The process must be followed by all involved stakeholders. It is advisable to handle defects found during static testing (especially static analysis) in a similar way.
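The workflow can be sketched as a set of allowed status transitions. A minimal sketch is below; the status names are taken from typical defect-report statuses (open, deferred, duplicate, etc.), while the specific transition rules are an illustrative assumption, not mandated by ISTQB:

```python
# Illustrative defect-report workflow as a table of allowed status
# transitions. The transition rules are an assumption for illustration;
# each organization defines its own workflow.
ALLOWED_TRANSITIONS = {
    "open": {"deferred", "duplicate", "waiting to be fixed", "rejected"},
    "waiting to be fixed": {"awaiting confirmation testing"},
    "awaiting confirmation testing": {"closed", "re-opened"},
    "re-opened": {"waiting to be fixed"},
    "deferred": {"waiting to be fixed", "closed"},
    "duplicate": {"closed"},
    "rejected": {"closed"},
}

def can_transition(current: str, new: str) -> bool:
    """Return True if moving a defect report from current to new is allowed."""
    return new in ALLOWED_TRANSITIONS.get(current, set())
```

Encoding the workflow as data rather than code makes it easy for a defect management tool to enforce the agreed process for all stakeholders.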
Defect Report
A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function.
Typical defect reports have the following objectives:
* Provide those responsible for handling and resolving reported defects with sufficient information to resolve the issue
* Provide a means of tracking the quality of the work product
* Provide ideas for improvement of the development and test process
A defect report logged during dynamic testing typically includes:
* Unique identifier
* Title with a short summary of the anomaly being reported
* Date when the anomaly was observed, issuing organization, and author, including their role
* Identification of the test object and test environment
* Context of the defect (e.g., test case being run, test activity being performed, SDLC phase, and other relevant information such as the test technique, checklist or test data being used)
* Description of the failure to enable reproduction and resolution including the steps that detected the anomaly, and any relevant test logs, database dumps, screenshots, or recordings
* Expected results and actual results
* Severity of the defect (degree of impact) on the interests of stakeholders or requirements
* Priority to fix
* Status of the defect (e.g., open, deferred, duplicate, waiting to be fixed, awaiting confirmation testing, re-opened, closed, rejected)
* References (e.g., to the test case)
Some of this data may be automatically included when using defect management tools (e.g., identifier, date, author and initial status). Document templates for a defect report and example defect reports can be found in the ISO/IEC/IEEE 29119-3 standard, which refers to defect reports as incident reports.
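The fields listed above can be captured in a small record type. The following sketch uses one possible mapping of field names; it is not a schema prescribed by ISTQB or ISO/IEC/IEEE 29119-3:

```python
# Illustrative record type for a defect report logged during dynamic
# testing. Field names are an assumed mapping of the glossary's list,
# not a prescribed schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DefectReport:
    identifier: str            # unique identifier (often tool-generated)
    title: str                 # short summary of the anomaly
    date_observed: date        # when the anomaly was observed
    author: str                # who reported it, including their role
    test_object: str           # identification of the test object
    test_environment: str      # identification of the test environment
    context: str               # test case, activity, SDLC phase, etc.
    failure_description: str   # steps to reproduce, logs, screenshots
    expected_result: str
    actual_result: str
    severity: str              # degree of impact on stakeholders
    priority: str              # priority to fix
    status: str = "open"       # tools typically set the initial status
    references: list[str] = field(default_factory=list)
```

A defect management tool would fill in the identifier, date, author, and initial status automatically, as noted above.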
Entry Criteria
The set of generic and specific conditions for permitting a process to go forward with a defined task (e.g., a test phase). The purpose of entry criteria is to prevent starting a task that would entail more (wasted) effort than the effort needed to remove the failed entry criteria.
Entry criteria define the preconditions for undertaking a given activity. If entry criteria are not met, it is likely that the activity will prove to be more difficult, time-consuming, costly, and riskier.
Typical entry criteria include: availability of resources (e.g., people, tools, environments, test data, budget, time), availability of testware (e.g., test basis, testable requirements, user stories, test cases), and initial quality level of a test object (e.g., all smoke tests have passed).
Exit Criteria
The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed while parts of it are still unfinished. Exit criteria are used to report against and to plan when to stop testing.
Exit criteria define what must be achieved in order to declare an activity completed. Entry criteria and exit criteria should be defined for each test level and will differ based on the test objectives.
Typical exit criteria include: measures of thoroughness (e.g., achieved level of coverage, number of unresolved defects, defect density, number of failed test cases), and completion criteria (e.g., planned tests have been executed, static testing has been performed, all defects found are reported, all regression tests are automated).
Product Risk
A risk directly related to the test object.
Product risks are related to the product quality characteristics (e.g., as described in the ISO 25010 quality model). Examples of product risks include: missing or wrong functionality, incorrect calculations, runtime errors, poor architecture, inefficient algorithms, inadequate response time, poor user experience, and security vulnerabilities. Product risks, when they occur, may result in various negative consequences, including:
* User dissatisfaction
* Loss of revenue, trust, reputation
* Damage to third parties
* High maintenance costs, overload of the helpdesk
* Criminal penalties
* In extreme cases, physical damage, injuries or even death
Project Risk
A risk related to management and control of the (test) project.
Examples of project risks include:
* Organizational issues (e.g., delays in work products deliveries, inaccurate estimates, cost-cutting)
* People issues (e.g., insufficient skills, conflicts, communication problems, shortage of staff)
* Technical issues (e.g., scope creep, poor tool support)
* Supplier issues (e.g., third-party delivery failure, bankruptcy of the supporting company)
Project risks, when they occur, may have an impact on the project schedule, budget, or scope, which affects the project’s ability to achieve its objectives.
Risk
A factor that could result in future negative consequences.
Risk is a potential event, hazard, threat, or situation whose occurrence causes an adverse effect. A risk can be characterized by two factors:
* Risk likelihood – the probability of the risk occurrence (greater than zero and less than one)
* Risk impact (harm) – the consequences of this occurrence
These two factors together express the risk level, which is a measure of the risk. The higher the risk level, the more important its treatment.
Risk Analysis
The process of assessing identified project or product risks to determine their level of risk, typically by estimating their impact and probability of occurrence (likelihood).
Risk analysis consists of risk identification and risk assessment.
Risk Assessment
The process of identifying and subsequently analyzing the identified project or product risk to determine its level of risk, typically by assigning likelihood and impact ratings.
Risk assessment involves:
Categorization of identified risks, determining their risk likelihood, risk impact, and risk level, prioritizing them, and proposing ways to handle them. Categorization helps in assigning mitigation actions, because risks falling into the same category can usually be mitigated using a similar approach.
Risk Control
Product risk control comprises all measures that are taken in response to identified and assessed product risks. Product risk control consists of risk mitigation and risk monitoring.
With respect to product risk control, once a risk has been analyzed, several risk responses are possible, e.g., risk mitigation by testing, risk acceptance, risk transfer, or a contingency plan (Veenendaal 2012).
Actions that can be taken to mitigate the product risks by testing are as follows:
* Select the testers with the right level of experience and skills, suitable for a given risk type
* Apply an appropriate level of independence of testing
* Conduct reviews and perform static analysis
* Apply the appropriate test techniques and coverage levels
* Apply the appropriate test types addressing the affected quality characteristics
* Perform dynamic testing, including regression testing
Risk Identification
The process of identifying risks using techniques such as brainstorming, checklists, failure history, workshops, interviews, or cause-effect diagrams.
Risk Level
The importance of a risk as defined by its characteristics impact and likelihood. The level of risk can be used to determine the intensity of testing to be performed. A risk level can be expressed either qualitatively (e.g., high, medium, low) or quantitatively.
Risk assessment can use a quantitative or qualitative approach, or a mix of them. In the quantitative approach the risk level is calculated as the multiplication of risk likelihood and risk impact. In the qualitative approach the risk level can be determined using a risk matrix.
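Both approaches can be sketched in a few lines. In the quantitative sketch, risk level is the product of likelihood and impact; in the qualitative sketch, a 3×3 risk matrix maps likelihood/impact ratings to a level. The matrix cell values below are an illustrative assumption; organizations define their own:

```python
# Quantitative approach: risk level = risk likelihood x risk impact,
# with likelihood greater than zero and less than one (per the text).
def quantitative_risk_level(likelihood: float, impact: float) -> float:
    assert 0 < likelihood < 1, "likelihood must be between 0 and 1 (exclusive)"
    return likelihood * impact

# Qualitative approach: a risk matrix mapping (likelihood, impact)
# ratings to a level. These cell values are an illustrative assumption.
RISK_MATRIX = {
    ("high", "high"): "high",     ("high", "medium"): "high",   ("high", "low"): "medium",
    ("medium", "high"): "high",   ("medium", "medium"): "medium", ("medium", "low"): "low",
    ("low", "high"): "medium",    ("low", "medium"): "low",     ("low", "low"): "low",
}

def qualitative_risk_level(likelihood: str, impact: str) -> str:
    return RISK_MATRIX[(likelihood, impact)]
```

Either form of risk level can then be used to decide the intensity of testing, as described above.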
Risk Management
Systematic application of procedures and practices to the tasks of identifying, analyzing, prioritizing, and controlling risk.
Risk management allows organizations to increase the likelihood of achieving their objectives, improve the quality of their products, and increase stakeholders’ confidence and trust.
The main risk management activities are:
* Risk analysis (consisting of risk identification and risk assessment; see section 5.2.3)
* Risk control (consisting of risk mitigation and risk monitoring; see section 5.2.4)
The test approach, in which test activities are selected, prioritized, and managed based on risk analysis and risk control, is called risk-based testing.
Risk Mitigation
The process through which decisions are reached and protective measures are implemented for reducing risks to, or maintaining risks within, specified levels.
Risk mitigation involves implementing the actions proposed in risk assessment to reduce the risk level.
Risk Monitoring
The aim of risk monitoring is to ensure that the mitigation actions are effective, to obtain further information to improve risk assessment, and to identify emerging risks.