Chapter 5 Flashcards

1
Q

What is a Test Plan?

A

A test plan describes the objectives, resources and processes for a test project.
A test plan:
* Documents the means and schedule for achieving test objectives
* Helps to ensure that the performed test activities will meet the established criteria
* Serves as a means of communication with team members and other stakeholders
* Demonstrates that testing will adhere to the existing test policy and test strategy (or explains why the testing will deviate from them)

2
Q

What does a Test Plan include?

A

The typical content of a test plan includes:
* Context of testing (e.g., scope, test objectives, constraints, test basis)
* Assumptions and constraints of the test project
* Stakeholders (e.g., roles, responsibilities, relevance to testing, hiring and training needs)
* Communication (e.g., forms and frequency of communication, documentation templates)
* Risk register (e.g., product risks, project risks)
* Test approach (e.g., test levels, test types, test techniques, test deliverables, entry criteria and exit criteria, independence of testing, metrics to be collected, test data requirements, test environment requirements, deviations from the organizational test policy and test strategy)
* Budget and schedule

3
Q

What are 2 kinds of Planning?

A

Release planning and iteration planning.

4
Q

What is Release planning?

A

Release planning looks ahead to the release of a product, defines and re-defines the product backlog, and may involve refining larger user stories into a set of smaller user stories. It also serves as the basis for the test approach and test plan across all iterations. Testers involved in release planning participate in writing testable user stories and acceptance criteria, participate in project and quality risk analyses, estimate test effort associated with user stories, determine the test approach, and plan the testing for the release.

5
Q

What is Iteration planning?

A

Iteration planning looks ahead to the end of a single iteration and is concerned with the iteration backlog. Testers involved in iteration planning participate in the detailed risk analysis of user stories, determine the testability of user stories, break down user stories into tasks (particularly testing tasks), estimate test effort for all testing tasks, and identify and refine functional and non-functional aspects of the test object.

6
Q

What are Entry Criteria?

A

Entry criteria define the preconditions for undertaking a given activity. If entry criteria are not met, it is likely that the activity will prove more difficult, time-consuming, costly, and risky.
Typical entry criteria include: availability of resources (e.g., people, tools, environments, test data, budget, time), availability of testware (e.g., test basis, testable requirements, user stories, test cases), and initial quality level of a test object (e.g., all smoke tests have passed).

8
Q

What are Exit Criteria?

A

Exit criteria define what must be achieved in order to declare an activity completed.
Typical exit criteria include: measures of thoroughness (e.g., achieved level of coverage, number of unresolved defects, defect density, number of failed test cases), and completion criteria (e.g., planned tests have been executed, static testing has been performed, all defects found are reported, all regression tests are automated).
Running out of time or budget can also be viewed as valid exit criteria. Even without other exit criteria being satisfied, it can be acceptable to end testing under such circumstances, if the stakeholders have reviewed and accepted the risk to go live without further testing.

9
Q

What are 4 Estimation Techniques?

A
  • Estimation based on ratios
  • Extrapolation
  • Wideband Delphi
  • Three-point estimation
9
Q

Describe: Extrapolation

A

In this metrics-based technique, measurements are made as early as possible in the current project to gather the data. Once enough observations are available, the effort required for the remaining work can be approximated by extrapolating from this data (usually by applying a mathematical model). This method is very suitable in iterative SDLCs. For example, the team may extrapolate the test effort in the forthcoming iteration as the average effort from the last three iterations.
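The averaging example above can be sketched in a few lines of Python (the function name and the effort figures are illustrative, not from the card):

```python
# Illustrative sketch of extrapolation: estimate the next iteration's
# test effort as the average of the last few iterations.
def extrapolate_effort(history, window=3):
    """Estimate the next value as the mean of the last `window` observations."""
    recent = history[-window:]
    return sum(recent) / len(recent)

past_iterations = [40, 44, 48]  # test effort per iteration in person-days (hypothetical)
print(extrapolate_effort(past_iterations))  # → 44.0
```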

9
Q

Describe: Estimation based on ratios

A

In this metrics-based technique, figures are collected from previous projects within the organization, which makes it possible to derive “standard” ratios for similar projects. The ratios of an organization’s own projects (e.g., taken from historical data) are generally the best source to use in the estimation process. These standard ratios can then be used to estimate the test effort for the new project. For example, if in the previous project the development-to-test effort ratio was 3:2, and in the current project the development effort is expected to be 600 person-days, the test effort can be estimated to be 400 person-days.
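The 3:2 example above reduces to a single proportion, sketched here (function name is ours):

```python
# Illustrative sketch of ratio-based estimation: scale the expected
# development effort by the historical development-to-test ratio.
def estimate_test_effort(dev_effort, dev_ratio, test_ratio):
    """Return the test effort implied by a dev:test ratio."""
    return dev_effort * test_ratio / dev_ratio

# 3:2 dev-to-test ratio, 600 person-days of development effort
print(estimate_test_effort(600, 3, 2))  # → 400.0
```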

10
Q

Describe: Three-point estimation

A

In this expert-based technique, three estimations are made by the experts: the most optimistic estimation (a), the most likely estimation (m) and the most pessimistic estimation (b). The final estimate (E) is their weighted arithmetic mean. In the most popular version of this technique, the estimate is calculated as E = (a + 4m + b) / 6. The advantage of this technique is that it allows the experts to calculate the measurement error: SD = (b – a) / 6. For example, if the estimates (in person-hours) are: a=6, m=9 and b=18, then the final estimation is 10±2 person-hours (i.e., between 8 and 12 person-hours), because E = (6 + 4×9 + 18) / 6 = 10 and SD = (18 – 6) / 6 = 2.
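The two formulas above can be checked with a short sketch (the function name is ours):

```python
# Three-point (PERT-style) estimation as described above:
# E = (a + 4m + b) / 6, SD = (b - a) / 6.
def three_point(a, m, b):
    """Return (estimate, measurement error) from optimistic a,
    most likely m, and pessimistic b."""
    e = (a + 4 * m + b) / 6
    sd = (b - a) / 6
    return e, sd

estimate, error = three_point(6, 9, 18)
print(estimate, error)  # → 10.0 2.0, i.e., 10±2 person-hours
```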

10
Q

What are 3 ways to prioritize Test Cases?

A

By risk, coverage, and/or requirements.

10
Q

Describe: Wideband Delphi

A

In this iterative, expert-based technique, experts make experience-based estimations. Each expert, in isolation, estimates the effort. The results are collected and if there are deviations that are out of range of the agreed upon boundaries, the experts discuss their current estimates. Each expert is then asked to make a new estimation based on that feedback, again in isolation. This process is repeated until a consensus is reached.

10
Q

What is Planning Poker?

A

Planning Poker is a variant of Wideband Delphi, commonly used in Agile software development. In Planning Poker, estimates are usually made using cards with numbers that represent the effort size.

10
Q

What is Risk-based prioritization?

A

The order of test execution is based on the results of risk analysis. Test cases covering the most important risks are executed first.
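A minimal sketch of this ordering, assuming each test case already carries a numeric risk level (test-case names and scores are hypothetical):

```python
# Risk-based prioritization sketch: execute test cases covering the
# highest-risk items first, i.e., sort by descending risk level.
test_cases = [("TC-1", 4), ("TC-2", 9), ("TC-3", 6)]  # (id, risk level)
execution_order = sorted(test_cases, key=lambda tc: tc[1], reverse=True)
print([tc[0] for tc in execution_order])  # → ['TC-2', 'TC-3', 'TC-1']
```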

10
Q

What is Coverage-based prioritization?

A

Test cases achieving the highest coverage are executed first. In another variant, called additional coverage prioritization, the test case achieving the highest coverage is executed first; each subsequent test case is the one that achieves the highest additional coverage.
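The "additional coverage" variant is a greedy selection, sketched below (test-case names and covered items are made up; coverage is modeled simply as sets of covered items):

```python
# Additional coverage prioritization sketch: repeatedly pick the test
# case that adds the most not-yet-covered items.
def additional_coverage_order(tests):
    """tests: dict mapping test-case id -> set of items it covers."""
    remaining = dict(tests)
    covered = set()
    order = []
    while remaining:
        # Choose the test case contributing the most new coverage.
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order

tests = {"TC-1": {1, 2}, "TC-2": {1, 2, 3}, "TC-3": {4}}
print(additional_coverage_order(tests))  # → ['TC-2', 'TC-3', 'TC-1']
```

TC-2 goes first because it covers the most items; TC-3 then adds one new item, while TC-1 adds nothing new and runs last.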

11
Q

What is Requirements-based prioritization?

A

The order of test execution is based on the priorities of the requirements traced back to the corresponding test cases. Requirement priorities are defined by stakeholders. Test cases related to the most important requirements are executed first.

12
Q

What are testing quadrants?

A

The testing quadrants group the test levels with the appropriate test types, activities, test techniques, and work products in Agile software development. The model supports test management in visualizing these to ensure that all appropriate test types and test levels are included in the SDLC, and in understanding that some test types are more relevant to certain test levels than others. The model also provides a way to differentiate and describe the types of tests to all stakeholders, including developers, testers, and business representatives.

12
Q

What is the ‘Test Pyramid’?

A

The test pyramid is a model showing that different tests may have different granularity. The test pyramid model supports the team in test automation and in test effort allocation by showing that different goals are supported by different levels of test automation. The pyramid layers represent groups of tests. The higher the layer, the lower the test granularity, test isolation and test execution time. Tests in the bottom layer are small, isolated, fast, and check a small piece of functionality, so usually a lot of them are needed to achieve a reasonable coverage. The top layer represents complex, high-level, end-to-end tests. These high-level tests are generally slower than the tests from the lower layers, and they typically check a large piece of functionality, so usually just a few of them are needed to achieve a reasonable coverage.

12
Q

What are the 4 Quadrants?

A
  • Quadrant Q1 (technology facing, support the team). This quadrant contains component and component integration tests. These tests should be automated and included in the CI process.
  • Quadrant Q2 (business facing, support the team). This quadrant contains functional tests, examples, user story tests, user experience prototypes, API testing, and simulations. These tests check the acceptance criteria and can be manual or automated.
  • Quadrant Q3 (business facing, critique the product). This quadrant contains exploratory testing, usability testing, user acceptance testing. These tests are user-oriented and often manual.
  • Quadrant Q4 (technology facing, critique the product). This quadrant contains smoke tests and non-functional tests (except usability tests). These tests are often automated.
12
Q

What are the two main risk management activities?

A

Risk analysis and risk control

13
Q

Define: Risk

A

Risk is a potential event, hazard, threat, or situation whose occurrence causes an adverse effect. A risk can be characterized by two factors:
* Risk likelihood
* Risk impact (harm)
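One common convention, not stated on the card and used here only as an illustration, is to quantify the risk level as the product of likelihood and impact, each scored on a small ordinal scale:

```python
# Hypothetical risk-level calculation: likelihood and impact each
# scored 1-5, combined by multiplication (one common convention).
def risk_level(likelihood, impact):
    return likelihood * impact

print(risk_level(4, 5))  # → 20 (high likelihood, severe impact)
```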

16
Q

What are Project Risks?

A

Project risks include:
* Organizational issues (e.g., delays in work products deliveries, inaccurate estimates, cost-cutting)
* People issues (e.g., insufficient skills, conflicts, communication problems, shortage of staff)
* Technical issues (e.g., scope creep, poor tool support)
* Supplier issues (e.g., third-party delivery failure, bankruptcy of the supporting company)
Project risks, when they occur, may have an impact on the project schedule, budget or scope, which affects the project’s ability to achieve its objectives.

16
Q

What are Product Risks?

A

Product risks are related to the product quality characteristics (e.g., described in the ISO 25010 quality model). Examples of product risks include: missing or wrong functionality, incorrect calculations, runtime errors, poor architecture, inefficient algorithms, inadequate response time, poor user experience, security vulnerabilities.

16
Q

What are negative consequences of Product Risks?

A
  • User dissatisfaction
  • Loss of revenue, trust, reputation
  • Damage to third parties
  • High maintenance costs, overload of the helpdesk
  • Criminal penalties
  • In extreme cases, physical damage, injuries or even death
17
Q

What is the goal of Product risk analysis?

A

To provide an awareness of product risk in order to focus the testing effort in a way that minimizes the residual level of product risk. Ideally, product risk analysis begins early in the SDLC.
Product risk analysis consists of risk identification and risk assessment.

18
Q

What is Risk identification?

A

Risk identification is about generating a comprehensive list of risks. Stakeholders can identify risks by using various techniques and tools, e.g., brainstorming, workshops, interviews, or cause-effect diagrams.

19
Q

What does Risk Assessment involve?

A

Risk assessment involves the categorization of identified risks, determining their risk likelihood, risk impact and risk level, prioritizing them, and proposing ways to handle them. Categorization helps in assigning mitigation actions, because risks falling into the same category can usually be mitigated using a similar approach.

20
Q

What are the results of Product Risk Analysis used for?

A
  • Determine the scope of testing to be carried out
  • Determine the particular test levels and propose test types to be performed
  • Determine the test techniques to be employed and the coverage to be achieved
  • Estimate the test effort required for each task
  • Prioritize testing in an attempt to find the critical defects as early as possible
  • Determine whether any activities in addition to testing could be employed to reduce risk
20
Q

What is Product Risk Control?

A

Product risk control comprises all measures taken in response to identified and assessed product risks. It consists of risk mitigation and risk monitoring. Risk mitigation involves implementing the actions proposed in risk assessment to reduce the risk level. The aim of risk monitoring is to ensure that the mitigation actions are effective, to obtain further information to improve risk assessment, and to identify emerging risks.

21
Q

What actions are taken to mitigate the product risks by testing?

A
  • Select the testers with the right level of experience and skills, suitable for a given risk type
  • Apply an appropriate level of independence of testing
  • Conduct reviews and perform static analysis
  • Apply the appropriate test techniques and coverage levels
  • Apply the appropriate test types addressing the affected quality characteristics
  • Perform dynamic testing, including regression testing
22
Q

What is Test Monitoring?

A

Test monitoring is concerned with gathering information about testing. This information is used to assess test progress and to measure whether the test exit criteria, or the test tasks associated with the exit criteria, are satisfied, such as meeting the targets for coverage of product risks, requirements, or acceptance criteria.

23
Q

Define: Test Completion

A

Test completion collects data from completed test activities to consolidate experience, testware, and any other relevant information. Test completion activities occur at project milestones such as when a test level is completed, an agile iteration is finished, a test project is completed (or cancelled), a software system is released, or a maintenance release is completed.

24
Q

What is Test Control?

A

Test control uses the information from test monitoring to provide, in the form of control directives, guidance and the necessary corrective actions to achieve the most effective and efficient testing. Examples of control directives include:
* Reprioritizing tests when an identified risk becomes an issue
* Re-evaluating whether a test item meets entry criteria or exit criteria due to rework
* Adjusting the test schedule to address a delay in the delivery of the test environment
* Adding new resources when and where needed

25
Q

What are common metrics used in Testing?

A
  • Project progress metrics (e.g., task completion, resource usage, test effort)
  • Test progress metrics (e.g., test case implementation progress, test environment preparation progress, number of test cases run/not run, passed/failed, test execution time)
  • Product quality metrics (e.g., availability, response time, mean time to failure)
  • Defect metrics (e.g., number and priorities of defects found/fixed, defect density, defect detection
    percentage)
  • Risk metrics (e.g., residual risk level)
  • Coverage metrics (e.g., requirements coverage, code coverage)
  • Cost metrics (e.g., cost of testing, organizational cost of quality)
26
Q

What do Test Progress Reports include?

A
  • Test period
  • Test progress (e.g., ahead or behind schedule), including any notable deviations
  • Impediments for testing, and their workarounds
  • Test metrics (see section 5.3.1 for examples)
  • New and changed risks within testing period
  • Testing planned for the next period
26
Q

What do Test Completion reports include?

A
  • Test summary
  • Testing and product quality evaluation based on the original test plan (i.e., test objectives and exit
    criteria)
  • Deviations from the test plan (e.g., differences from the planned schedule, duration, and effort).
  • Testing impediments and workarounds
  • Test metrics based on test progress reports
  • Unmitigated risks, defects not fixed
  • Lessons learned that are relevant to the testing
27
Q

What are ways to communicate the Status of Testing?

A
  • Verbal communication with team members and other stakeholders
  • Dashboards (e.g., CI/CD dashboards, task boards, and burn-down charts)
  • Electronic communication channels (e.g., email, chat)
  • Online documentation
  • Formal test reports
28
Q

What is Configuration Management?

A

In testing, configuration management (CM) provides a discipline for identifying, controlling, and tracking work products such as test plans, test strategies, test conditions, test cases, test scripts, test results, test logs, and test reports as configuration items.
To properly support testing, CM ensures the following:
* All configuration items, including test items (individual parts of the test object), are uniquely identified, version controlled, tracked for changes, and related to other configuration items so that traceability can be maintained throughout the test process
* All identified documentation and software items are referenced unambiguously in test documentation

29
Q

What does a defect report in dynamic testing include?

A

A defect report logged during dynamic testing typically includes:
* Unique identifier
* Title with a short summary of the anomaly being reported
* Date when the anomaly was observed, issuing organization, and author, including their role
* Identification of the test object and test environment
* Context of the defect (e.g., test case being run, test activity being performed, SDLC phase, and other relevant information such as the test technique, checklist or test data being used)
* Description of the failure to enable reproduction and resolution including the steps that detected the anomaly, and any relevant test logs, database dumps, screenshots, or recordings
* Expected results and actual results
* Severity of the defect (degree of impact) on the interests of stakeholders or requirements
* Priority to fix
* Status of the defect (e.g., open, deferred, duplicate, waiting to be fixed, awaiting confirmation testing, re-opened, closed, rejected)
* References (e.g., to the test case)

29
Q

What are the objectives of defect reports?

A

Typical defect reports have the following objectives:
* Provide those responsible for handling and resolving reported defects with sufficient information to resolve the issue
* Provide a means of tracking the quality of the work product
* Provide ideas for improvement of the development and test process