Chapter 5 Flashcards
purpose and content of a test plan?
- Documents the means and schedule for achieving test objectives
- Helps to ensure that the performed test activities will meet the established criteria
- Serves as a means of communication with team members and other stakeholders
- Demonstrates that testing will adhere to the existing test policy and test strategy (or explains why the testing will deviate from them)
The typical content of a test plan includes:
* Context of testing (e.g., scope, test objectives, constraints, test basis)
* Assumptions and constraints of the test project
* Stakeholders (e.g., roles, responsibilities, relevance to testing, hiring and training needs)
* Communication (e.g., forms and frequency of communication, documentation templates)
* Risk register (e.g., product risks, project risks)
* Test approach (e.g., test levels, test types, test techniques, test deliverables, entry criteria and exit criteria, independence of testing, metrics to be collected, test data requirements, test environment requirements, deviations from the organizational test policy and test strategy)
* Budget and schedule
difference between release planning and iteration planning?
Release planning focuses on the overall product release, refining user stories, and setting the testing approach for all iterations.
Iteration planning focuses on each individual iteration, breaking down user stories into tasks, estimating testing efforts, and refining aspects of the test object.
entry and exit criteria?
Entry criteria set the conditions necessary to begin a task, ensuring that the activity can proceed effectively.
Exit criteria specify the conditions that must be met to declare the activity finished, ensuring that objectives are achieved.
Examples of entry criteria include resource availability and initial quality levels, while exit criteria may include measures of coverage, defect resolution, and completion of planned tests. In Agile, exit criteria are often referred to as the Definition of Done, while entry criteria are known as the Definition of Ready for user stories.
Running out of time or budget can also be considered a valid exit criterion.
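A minimal sketch of how exit criteria could be checked automatically against measured values. The criterion names and thresholds here are illustrative assumptions, not prescribed by any standard:

```python
# Sketch: evaluating exit criteria against measured values.
# Criterion names and thresholds are illustrative assumptions.

def unmet_criteria(measurements, criteria):
    """Return the list of exit criteria that are not yet satisfied."""
    return [name for name, target in criteria.items()
            if measurements.get(name, 0) < target]

criteria = {
    "statement_coverage": 0.80,     # at least 80% of statements covered
    "tests_executed": 1.00,         # all planned tests run
    "critical_defects_fixed": 1.00, # all critical defects resolved
}

measurements = {
    "statement_coverage": 0.85,
    "tests_executed": 0.97,
    "critical_defects_fixed": 1.00,
}

unmet = unmet_criteria(measurements, criteria)
print(unmet)  # ['tests_executed'] -> testing is not finished yet
```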
estimation techniques?
Estimation based on ratios: Uses historical data to derive standard ratios for similar projects, then applies these ratios to estimate effort for the new project. For instance, if a previous project had a 3:2 development-to-test effort ratio and the current project’s development effort is 600 person-days, the test effort can be estimated as 400 person-days.
Extrapolation: Involves making measurements early in the project to gather data, then approximating the effort required for the remaining work by extrapolating this data using a mathematical model. This method is well-suited for iterative SDLCs, where the team may extrapolate test effort in forthcoming iterations based on averaged effort from past iterations.
Wideband Delphi: An expert-based technique where experts provide experience-based estimations independently. Deviations are discussed, and experts adjust their estimates until a consensus is reached. Planning Poker, a variant of Wideband Delphi, is commonly used in Agile development, where estimates are made using cards representing effort size.
Three-point estimation: Experts make three estimations - optimistic (a), likely (m), and pessimistic (b) - and calculate the final estimate (E) as the weighted arithmetic mean (E = (a + 4*m + b) / 6). This technique allows experts to calculate the measurement error (SD = (b – a) / 6), providing a range for the estimate. For example, if estimates are a=6, m=9, and b=18, the final estimation is 10±2 person-hours, indicating a range of 8 to 12 person-hours.
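The ratio and three-point formulas above can be written down directly; the numbers below reuse the examples from this card:

```python
# Sketch of the estimation formulas from this card.

def ratio_estimate(dev_effort, dev_ratio, test_ratio):
    """Estimation based on ratios: scale development effort by the
    historical development-to-test ratio."""
    return dev_effort * test_ratio / dev_ratio

def three_point(a, m, b):
    """Three-point estimation: weighted mean E = (a + 4m + b) / 6
    and standard deviation SD = (b - a) / 6."""
    e = (a + 4 * m + b) / 6
    sd = (b - a) / 6
    return e, sd

print(ratio_estimate(600, 3, 2))  # 400.0 person-days (3:2 ratio example)
print(three_point(6, 9, 18))      # (10.0, 2.0) -> 10±2 person-hours
```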
what is test case prioritization?
Once the test cases and test procedures are specified and assembled into test suites, these test suites can be arranged in a test execution schedule that defines the order in which they are to be run. Common prioritization strategies include:
Risk-based Prioritization: Test execution order is determined by the results of risk analysis. Test cases covering the most critical risks are executed first to mitigate potential high-impact issues.
Coverage-based Prioritization: Test execution order is based on coverage metrics, such as statement coverage. Test cases achieving the highest coverage are executed first. Another variant, additional coverage prioritization, prioritizes test cases based on the additional coverage they provide.
Requirements-based Prioritization: Test execution order is determined by the priorities of traced requirements. Stakeholders define requirement priorities, and test cases related to the most critical requirements are executed first.
When ordering test cases based on priority, it’s important to consider dependencies between test cases or features being tested. If a higher-priority test case depends on a lower-priority one, the lower-priority test case must be executed first. Additionally, test execution order should consider resource availability, such as test tools, environments, or personnel.
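The dependency rule above can be sketched as a small scheduler: pick test cases in priority order, but place any test case a case depends on before it. Test names, priorities, and dependencies here are illustrative assumptions (and cycles are not handled):

```python
# Sketch: priority-based execution order that honors dependencies.
# Test names, priorities, and dependencies are illustrative.

def schedule(priorities, depends_on):
    """Return an execution order: highest priority first (1 = highest),
    but any test a case depends on is executed before it."""
    order = []

    def place(tc):
        if tc in order:
            return
        for dep in depends_on.get(tc, []):
            place(dep)  # the lower-priority dependency must run first
        order.append(tc)

    for tc in sorted(priorities, key=priorities.get):
        place(tc)
    return order

priorities = {"TC1": 1, "TC2": 3, "TC3": 2}  # TC1 is most critical
depends_on = {"TC1": ["TC2"]}                # but TC1 needs TC2 first

print(schedule(priorities, depends_on))  # ['TC2', 'TC1', 'TC3']
```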
what does a test pyramid show?
The test pyramid model illustrates that tests vary in granularity, with different levels supporting different goals and automation needs. It helps teams allocate testing efforts effectively. At the bottom are small, fast, isolated tests covering specific functionality, requiring many to achieve coverage. The top layer includes high-level, end-to-end tests that are slower but cover larger functionality, requiring fewer tests for coverage.
what are the testing quadrants?
The testing quadrants organize test levels and types in Agile development. They help manage tests and ensure all relevant types are included. Tests can be business facing or technology facing, and they either support the team (guide development) or critique the product. The four quadrants are:
Q1 (technology facing, supports the team): component and component integration tests,
Q2 (business facing, supports the team): functional tests and user experience checks,
Q3 (business facing, critiques the product): user-oriented tests such as exploratory and usability testing, and
Q4 (technology facing, critiques the product): technical tests such as smoke tests and non-functional checks.
difference between project risks and product risks?
In software testing, two types of risks are considered: project risks and product risks.
Project risks relate to project management and control, such as organizational, people, technical, and supplier issues, impacting schedule, budget, or scope.
Product risks are related to product quality, such as missing functionality, defects, poor performance, and security vulnerabilities, which can lead to user dissatisfaction, revenue loss, damage to reputation, and even legal consequences.
what is product risk analysis?
Product risk analysis in testing aims to identify and assess risks early in the software development lifecycle (SDLC) to focus testing efforts effectively and minimize residual product risk. It involves identifying risks using techniques like brainstorming and workshops, categorizing them, assessing their likelihood and impact, and proposing mitigation strategies. The analysis may influence testing scope, levels, techniques, effort estimation, and prioritization to address critical defects early, and it may prompt additional risk reduction activities beyond testing.
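A common way to combine likelihood and impact is to multiply them into a risk level and rank risks by it. The scales and risk items below are illustrative assumptions, a sketch rather than a prescribed method:

```python
# Sketch: assessing product risks as likelihood x impact and ranking them.
# Scales (1-5) and the risk items themselves are illustrative assumptions.

risks = [
    {"risk": "security vulnerability", "likelihood": 2, "impact": 5},
    {"risk": "poor performance",       "likelihood": 4, "impact": 3},
    {"risk": "cosmetic UI defect",     "likelihood": 5, "impact": 1},
]

for r in risks:
    r["level"] = r["likelihood"] * r["impact"]  # simple risk level

# Highest risk level first: these areas are tested first and most thoroughly.
ranked = sorted(risks, key=lambda r: r["level"], reverse=True)
print([r["risk"] for r in ranked])
# ['poor performance', 'security vulnerability', 'cosmetic UI defect']
```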
product risk control?
Product risk control involves implementing measures to address identified and assessed product risks.
It includes risk mitigation, which reduces the risk level through actions proposed during risk assessment, and risk monitoring, which ensures the effectiveness of mitigation actions and identifies emerging risks.
Response options to risk include mitigation through testing, risk acceptance, transfer, or contingency plans. Mitigation actions in testing involve selecting skilled testers, ensuring independence, conducting reviews and static analysis, applying suitable test techniques and coverage, addressing affected quality characteristics, and performing dynamic testing, including regression testing.
difference between test monitoring and test control?
Test monitoring is concerned with gathering information about testing. This information is used to assess test progress and to measure whether the test exit criteria or the test tasks associated with the exit criteria are satisfied, such as meeting the targets for coverage of product risks, requirements, or acceptance criteria.
Test control uses the information from test monitoring to provide, in the form of control directives, guidance and the necessary corrective actions to achieve the most effective and efficient testing. Examples of control directives include:
* Reprioritizing tests when an identified risk becomes an issue
* Re-evaluating whether a test item meets entry criteria or exit criteria due to rework
Test completion collects data from completed test activities to consolidate experience, testware, and any other relevant information. Test completion activities occur at project milestones such as when a test level is completed, an agile iteration is finished, a test project is completed (or cancelled), a software system is released, or a maintenance release is completed.
metrics used in testing
Test metrics are used to measure progress against planned schedules, budgets, and test quality.
They help monitor the effectiveness of test activities and support test control and completion.
Common metrics include project progress, test progress, product quality, defect, risk, coverage, and cost metrics, providing insights into various aspects such as task completion, resource usage, test case implementation, defect density, coverage, and testing costs.
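A few of these metrics reduce to simple ratios over raw counts. The numbers below are illustrative assumptions, chosen only to show the calculations:

```python
# Sketch: a few common test metrics computed from raw counts.
# All numbers are illustrative.

executed, planned = 180, 200
passed = 150
defects_found, kloc = 45, 12.5  # KLOC = thousands of lines of code

test_progress = executed / planned     # share of planned tests run
pass_rate = passed / executed          # share of executed tests that passed
defect_density = defects_found / kloc  # defects per KLOC

print(f"progress={test_progress:.0%} "
      f"pass rate={pass_rate:.0%} "
      f"density={defect_density:.1f}/KLOC")
```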
what do test completion reports typically include?
- Test summary
- Testing and product quality evaluation based on the original test plan (i.e., test objectives and exit criteria)
- Deviations from the test plan (e.g., differences from the planned schedule, duration, and effort)
- Testing impediments and workarounds
- Test metrics based on test progress reports
- Unmitigated risks, defects not fixed
- Lessons learned that are relevant to the testing
what do test progress reports include?
- Test period
- Test progress (e.g., ahead or behind schedule), including any notable deviations
- Impediments for testing, and their workarounds
- Test metrics
- New and changed risks within the testing period
- Testing planned for the next period
what is configuration management?
Configuration management (CM) in testing involves identifying, controlling, and tracking test-related work products such as test plans, cases, scripts, results, and reports.
It ensures that these items are uniquely identified, version controlled, and related to each other for traceability.
CM records the composition and versions of complex items like test environments, establishing baselines that can only be changed through formal processes.
It also supports reproducibility by keeping track of changes and enabling reverting to previous baselines. In modern DevOps pipelines, automated CM is typically integrated to support continuous integration, delivery, and deployment practices.