ISTQB Chapter 5 Theories Flashcards

1
Q

What is a test plan?

A

A test plan
- Documents the means and schedule for achieving the test objective
- Helps to ensure that the performed test activities will meet the established criteria
- Serves as a means of communication with team members and other stakeholders
- Demonstrates that testing will adhere to the existing test policy and test strategy (or explains why the testing will deviate from them)

2
Q

What are the typical contents of a test plan?

A
  1. Context of testing (e.g. test scope, test objectives, test basis)
  2. Assumptions and constraints of the test project
  3. Stakeholders (e.g., roles, responsibilities, relevance to testing, hiring and training needs)
  4. Risk register (e.g. product risks, project risks)
  5. Test approach (e.g. test levels, test types, test techniques, test deliverables, entry criteria and exit criteria, independence of testing, metrics to be collected, test data requirements, test environment requirements, deviations from the test policy and test strategy)
  6. Budget and schedule
3
Q

What are the two kinds of planning for iterative SDLCs?

A

a. Release Planning
b. Iteration Planning

4
Q

This planning looks ahead to the release of a product, defines and re-defines the product backlog, and may involve refining larger user stories into a set of smaller user stories. It also serves as the basis for the test approach and test plan across all iterations. Testers involved participate in writing testable user stories and acceptance criteria, participate in project and quality risk analyses, estimate test effort associated with user stories, determine the test approach, and plan the testing for the release.

a. Release planning
b. Iteration planning

A

a. Release planning

5
Q

This planning looks ahead to the end of a single iteration and is concerned with the iteration backlog. Testers involved participate in the detailed risk analysis of user stories, determine the testability of user stories, break down user stories into tasks (particularly testing tasks), estimate test efforts for all testing tasks, and identify and refine functional and non-functional aspects of the test object.

a. Release planning
b. Iteration planning

A

b. Iteration planning

6
Q

This refers to the preconditions for undertaking a given activity. If this is not met, it is likely that the activity will prove to be more difficult, time-consuming, costly, and risky.

A

Entry Criteria

7
Q

This refers to what must be achieved to declare an activity completed.

A

Exit Criteria

8
Q

In Agile software development, what is the other term for exit criteria?

A

Definition of Done

9
Q

In Agile software development, what is the other term for entry criteria?

A

Definition of Ready

10
Q

What are the typical entry criteria?

A
  1. Availability of resources (e.g., people, tools, environments, test data, budget, time)
  2. Availability of testware (e.g. test basis, testable requirements, user stories, test cases)
  3. Initial quality of a test object (e.g. all smoke tests have passed)
11
Q

What are the typical exit criteria?

A
  1. Measures of thoroughness (e.g., achieved level of coverage, number of unresolved defects, defect density, number of failed test cases).
  2. Binary Yes/No criteria (e.g., planned tests have been executed, static testing has been performed, all defects found are reported, all regression tests are automated).

Note: Running out of time or budget can also be viewed as valid exit criteria. Even without other exit criteria being satisfied, it can be acceptable to end testing under such circumstances, if the stakeholders have reviewed and accepted the risk to go live without further testing.

12
Q

What are the commonly used estimation techniques for test effort in test planning?

A
  1. Estimation based on ratios
  2. Extrapolation
  3. Wideband Delphi
  4. Three-point estimation
13
Q

In this metrics-based estimation technique, figures are collected from previous projects within the organization, which makes it possible to derive “standard” ratios. The ratios of an organization’s own projects (e.g. taken from historical data) are generally the best source to use in the estimation process.

a. Estimation based on ratios
b. Extrapolation
c. Wideband Delphi
d. Three-point estimation

A

a. Estimation based on ratios
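As a hypothetical worked example of estimating by ratios (the 40% dev-to-test ratio below is invented for illustration, not a standard figure):

```python
# Hypothetical example: estimating test effort from a historical ratio.
# Assumes past projects in the organization showed test effort to be
# roughly 40% of development effort.

def estimate_test_effort(dev_effort_days: float, historical_ratio: float) -> float:
    """Scale the development effort by the organization's dev-to-test ratio."""
    return dev_effort_days * historical_ratio

# Development is estimated at 100 person-days; the historical ratio is 0.4.
print(estimate_test_effort(100, 0.4))  # → 40.0 person-days of testing
```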

14
Q

In this metrics-based estimation technique, measurements are made as early as possible in the current project to gather the data. Having enough observations, the effort required for the remaining work can be approximated by extrapolating this data (usually by applying a mathematical model). This is very suitable in iterative SDLCs.

a. Estimation based on ratios
b. Extrapolation
c. Wideband Delphi
d. Three-point estimation

A

b. Extrapolation
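A minimal sketch of extrapolation, using the simplest possible mathematical model (average effort per iteration times the iterations remaining; the figures are invented):

```python
# Illustrative sketch: extrapolating the remaining test effort from the
# effort observed in the first iterations, using a simple linear model.

def extrapolate_remaining_effort(observed_efforts: list[float],
                                 remaining_iterations: int) -> float:
    """Project remaining effort as (average effort so far) x (iterations left)."""
    average = sum(observed_efforts) / len(observed_efforts)
    return average * remaining_iterations

# Three iterations took 12, 14, and 13 person-days; 5 iterations remain.
print(extrapolate_remaining_effort([12, 14, 13], 5))  # → 65.0
```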

15
Q

In this iterative, expert-based technique, experts make experience-based estimations in isolation. The results are collected, and if any expert's estimate falls outside the agreed boundaries, the experts discuss their current estimates. Each expert is then asked to make a new estimation based on that feedback, again in isolation. This process is repeated until a consensus is reached.

a. Estimation based on ratios
b. Extrapolation
c. Wideband Delphi
d. Three-point estimation

A

c. Wideband Delphi

16
Q

In this expert-based technique, three estimations are made by experts:

the most optimistic estimation (a);
the most likely estimation (m);
and the most pessimistic estimation (b).

The final estimate (E) is their weighted arithmetic mean. The estimate is calculated as:

E = (a + 4m + b) / 6

A

Three-point estimation
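The formula above, computed directly (the day figures are illustrative):

```python
# Three-point (PERT) estimation: the weighted arithmetic mean of the
# optimistic (a), most likely (m), and most pessimistic (b) estimates.

def three_point_estimate(a: float, m: float, b: float) -> float:
    """E = (a + 4m + b) / 6"""
    return (a + 4 * m + b) / 6

# Optimistic 6 days, most likely 9 days, pessimistic 18 days:
print(three_point_estimate(6, 9, 18))  # → 10.0 days
```

Note how the most likely value carries four times the weight of either extreme, so a single pessimistic outlier shifts the estimate only moderately.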

17
Q

What are the commonly used test case prioritization strategies?

A
  1. Risk-based prioritization
  2. Coverage-based prioritization
  3. Requirements-based prioritization

Note: A strict priority order may not be workable if the test cases have dependencies on one another. In addition, the availability of resources must be taken into account when strategizing test case prioritization.

18
Q

This strategy prioritizes test execution based on the results of risk analysis. Test cases covering the most important risks are executed first.

a. Risk-based prioritization
b. Coverage-based prioritization
c. Requirements-based prioritization

A

a. Risk-based prioritization
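A minimal sketch of risk-based ordering: sort test cases by descending risk level so the most important risks are covered first (the test names and risk levels are invented):

```python
# Risk-based prioritization sketch: each test case is mapped to the risk
# level of the product risk it covers, and execution is ordered by
# descending risk level. Names and figures are illustrative.

test_cases = {"TC-login": 9, "TC-report": 3, "TC-payment": 12, "TC-help": 1}

execution_order = sorted(test_cases, key=test_cases.get, reverse=True)
print(execution_order)  # → ['TC-payment', 'TC-login', 'TC-report', 'TC-help']
```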

19
Q

This strategy prioritizes test execution based on coverage (e.g., statement coverage). Test cases achieving the highest coverage are executed first. In another variant, called additional coverage prioritization, the test case achieving the highest coverage is executed first; each subsequent test case is the one that achieves the highest additional coverage.

a. Risk-based prioritization
b. Coverage-based prioritization
c. Requirements-based prioritization

A

b. Coverage-based prioritization
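The additional coverage variant is essentially a greedy selection: repeatedly pick whichever test adds the most not-yet-covered items. A sketch, assuming statement coverage and invented test names:

```python
# Additional coverage prioritization sketch: the test with the highest
# coverage goes first, then each subsequent test is the one adding the
# most statements not yet covered. Tests adding nothing new are dropped.

def additional_coverage_order(coverage: dict[str, set[int]]) -> list[str]:
    covered: set[int] = set()
    order: list[str] = []
    remaining = dict(coverage)
    while remaining:
        # Choose the test contributing the most new (additional) coverage.
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not remaining[best] - covered:
            break  # every remaining test adds nothing new
        covered |= remaining[best]
        order.append(best)
        del remaining[best]
    return order

# T1 covers statements 1-3, T2 covers 3-4, T3 covers 1-2 (fully redundant).
print(additional_coverage_order({"T1": {1, 2, 3}, "T2": {3, 4}, "T3": {1, 2}}))
# → ['T1', 'T2']
```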

20
Q

This strategy prioritizes test execution based on the priorities of the requirements traced back to the corresponding test cases. Requirements priorities are defined by stakeholders. Test cases related to the most important requirements are executed first.

a. Risk-based prioritization
b. Coverage-based prioritization
c. Requirements-based prioritization

A

c. Requirements-based prioritization

21
Q

This is a model showing that different tests may have different granularity. This supports the team in test automation and in test effort allocation by showing that different test objectives are supported by different levels of test automation.

Granularity - the scale or level of detail present in a set of data or other phenomenon.

More granular = more detail

A

Test Pyramid

22
Q

This groups the test levels with the appropriate test types, activities, test techniques, and work products in the Agile software development. The model supports test management in visualizing these to ensure that all appropriate test types and test levels are included in the SDLC and in understanding that some test types are more relevant to certain test levels than others. This model also provides a way to differentiate and describe the test types to all stakeholders, including developers, testers, and business representatives.

A

Testing Quadrants

23
Q

In the testing quadrants, what are the two viewpoints that exist?

A

a. Business facing or Technology facing

b. Support the Team (i.e., guide the development) or Critique the Product (i.e., measure its behavior against the expectations)

24
Q

What are the four testing quadrants?

A

a. Quadrant Q1 (technology facing, support the team)
b. Quadrant Q2 (business facing, support the team)
c. Quadrant Q3 (business facing, critique the product)
d. Quadrant Q4 (technology facing, critique the product)

25
Q

This quadrant contains component tests and component integration tests. These tests should be automated and included in the CI process.

a. Quadrant Q1 (technology facing, support the team)
b. Quadrant Q2 (business facing, support the team)
c. Quadrant Q3 (business facing, critique the product)
d. Quadrant Q4 (technology facing, critique the product)

A

a. Quadrant Q1 (technology facing, support the team)

26
Q

This quadrant contains functional tests, examples, user story tests, user experience prototypes, API testing, and simulations. These tests check the acceptance criteria, and can be manual or automated.

a. Quadrant Q1 (technology facing, support the team)
b. Quadrant Q2 (business facing, support the team)
c. Quadrant Q3 (business facing, critique the product)
d. Quadrant Q4 (technology facing, critique the product)

A

b. Quadrant Q2 (business facing, support the team)

27
Q

This quadrant contains exploratory testing, usability testing, and user acceptance testing. These tests are user-oriented and often manual.

a. Quadrant Q1 (technology facing, support the team)
b. Quadrant Q2 (business facing, support the team)
c. Quadrant Q3 (business facing, critique the product)
d. Quadrant Q4 (technology facing, critique the product)

A

c. Quadrant Q3 (business facing, critique the product)

28
Q

This quadrant contains smoke tests and non-functional tests (except usability tests). These tests are often automated.

a. Quadrant Q1 (technology facing, support the team)
b. Quadrant Q2 (business facing, support the team)
c. Quadrant Q3 (business facing, critique the product)
d. Quadrant Q4 (technology facing, critique the product)

A

d. Quadrant Q4 (technology facing, critique the product)

29
Q

What are the two main risk management activities?

A

a. Risk analysis
b. Risk control

Note: The test approach in which test activities are selected, prioritized, and managed based on risk analysis and risk control is called risk-based testing.

30
Q

This is a potential event, hazard, threat, or situation whose occurrence causes an adverse effect.

A

Risk

A risk is characterized by two factors:

a. Risk likelihood - the probability of the risk occurrence (greater than zero and less than one)

b. Risk impact (harm) - the consequences of this occurrence

31
Q

What are the two general types of risks in software testing?

A

a. Project Risks
b. Product Risks

32
Q

This risk is related to the management and control of the project. When these risks occur, they may have an impact on the project schedule, budget, or scope, which affects the project's ability to achieve its objectives. This includes:

  • Organizational issues (e.g., delays in work product deliveries, inaccurate estimates, cost cutting)
  • People issues (e.g., insufficient skills, conflicts, communication problems, shortage of staff)
  • Technical issues (e.g., scope creep, poor tool support)
  • Supplier issues (e.g., third-party delivery failure, bankruptcy of the supporting company)

a. Project Risks
b. Product Risks

A

a. Project Risks

33
Q

This risk is related to the product quality characteristics. This includes:

  • Missing or wrong functionality
  • Incorrect calculations
  • Runtime errors
  • Poor architecture
  • Inefficient algorithms
  • Inadequate response time
  • Poor user experience
  • Security vulnerabilities

Negative consequences include:

  • User dissatisfaction
  • Loss of revenue
  • Damage to third parties
  • High maintenance costs, overload of the help desk
  • Criminal penalties
  • In extreme cases, physical damage, injuries or even death

a. Project Risks
b. Product Risks

A

b. Product Risks

34
Q

The goal of this activity is to provide an awareness of product risk to focus the test effort in a way that minimizes the residual level of product risk. Ideally, this begins early in the SDLC.

A

Product Risk Analysis

35
Q

What are the two activities involved in Product Risk Analysis?

A

a. Risk Identification
b. Risk Assessment

36
Q

This activity in the Product Risks analysis is about generating a comprehensive list of risks. Stakeholders can identify risks by using various techniques and tools, e.g., brainstorming, workshops, interviews, or cause-effect diagrams.

Once risks are identified, they are categorized, their likelihood, risk impact, and risk level are determined, they are prioritized, and ways to handle them are proposed. Categorization helps in assigning mitigation actions, because risks falling into the same category can usually be mitigated using a similar approach.

a. Risk Identification
b. Risk Assessment

A

a. Risk Identification

37
Q

This activity in the Product Risk analysis can use a quantitative or qualitative approach, or a mix of them. In the quantitative approach the risk level is calculated as the multiplication of risk likelihood and risk impact. In the qualitative approach the risk level can be determined using a risk matrix.

a. Risk Identification
b. Risk Assessment

A

b. Risk Assessment
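The quantitative approach (risk level = likelihood × impact) can be illustrated with a couple of invented risks and figures:

```python
# Quantitative risk assessment sketch: risk level is the product of
# risk likelihood and risk impact. The risk names and numbers below are
# purely illustrative.

def risk_level(likelihood: float, impact: float) -> float:
    """Likelihood (0 < p < 1) times impact, on whatever impact scale is used."""
    return likelihood * impact

risks = {
    "data loss":   (0.1, 100),  # unlikely, but severe impact
    "slow search": (0.6, 20),   # likely, moderate impact
}
for name, (likelihood, impact) in risks.items():
    print(name, risk_level(likelihood, impact))
# A likely moderate risk can outscore a rare severe one: slow search (12.0)
# ends up with a higher risk level than data loss (10.0).
```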

38
Q

What are the results of Product Risk Analysis used for?

A
  • Determine the test scope to be carried out
  • Determine the particular test levels and propose test types to be performed
  • Determine the test techniques to be employed and the coverage to be achieved
  • Estimate the test effort required for each task
  • Prioritize testing in an attempt to find the critical defects as early as possible
  • Determine whether any activities in addition to testing could be employed to reduce risk
39
Q

This comprises all measures that are taken in response to identified and assessed product risks.

A

Product Risk Control

40
Q

What are the two activities involved in Product Risk Control?

A

a. Risk Mitigation
b. Risk Monitoring

41
Q

This activity in Product Risk Control involves implementing the actions proposed in risk assessment to reduce the risk level.

a. Risk Mitigation
b. Risk Monitoring

A

a. Risk Mitigation

42
Q

The aim of this is to ensure that the mitigation actions are effective, to obtain further information to improve risk assessment, and to identify emerging risks.

a. Risk Mitigation
b. Risk Monitoring

A

b. Risk Monitoring

43
Q

What are some of the actions that can be taken to mitigate the product risk through testing?

A

a. Select the testers with the right level of experience and skills, suitable for a given risk type
b. Apply an appropriate level of independence of testing
c. Perform reviews and static analysis
d. Apply the appropriate test techniques and coverage levels
e. Apply the appropriate test types addressing the affected quality characteristics
f. Perform dynamic testing, including regression testing

44
Q

This is concerned with gathering information about testing, which is used to assess test progress and to measure whether the exit criteria or the test tasks associated with the exit criteria are satisfied, such as meeting the targets for coverage of product risks, requirements, or acceptance criteria.

a. Test Monitoring
b. Test Control
c. Test Completion

A

a. Test Monitoring

45
Q

This uses information from test monitoring to provide, in the form of control directives, guidance and the necessary corrective actions to achieve the most effective and efficient testing. Examples of these are:

  • Reprioritizing tests when an identified risk becomes an issue
  • Re-evaluating whether a test item meets entry criteria or exit criteria due to rework
  • Adjusting the test schedule to address a delay in the delivery of the test environment
  • Adding new resources when and where needed

a. Test Monitoring
b. Test Control
c. Test Completion

A

b. Test Control

46
Q

This collects data from completed test activities to consolidate experience, testware, and any other relevant information.

a. Test Monitoring
b. Test Control
c. Test Completion

A

c. Test Completion

Test completion activities occur at project milestones, such as when a test level is completed, an Agile iteration is finished, a test project is completed (or cancelled), a software system is released, or a maintenance release is completed.

47
Q

Test metrics are gathered to show progress against the planned test schedule and budget, the current quality of the test object, and the effectiveness of the test activities with respect to the test objectives or an iteration goal. Test monitoring gathers a variety of metrics to support test control and test completion.

What are the common test metrics?

A
  1. Project progress metrics (e.g., task completion, resource usage, test effort)
  2. Test progress metrics (e.g., test case implementation progress, test environment preparation progress, number of test cases run/not run, passed/failed, test execution time)
  3. Product quality metrics (e.g., availability, response time, mean time to failure)
  4. Defect metrics (e.g., number and priorities of defects found/fixed, defect density, defect detection percentage)
  5. Risk metrics (e.g., residual risk level)
  6. Coverage metrics (e.g., requirements coverage, code coverage)
  7. Cost metrics (e.g., cost of testing, organizational cost of quality)
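Two of the defect metrics above can be shown as small calculations (all figures are invented for illustration):

```python
# Illustrative calculations for two common defect metrics:
# defect density, and defect detection percentage (DDP).

def defect_density(defects_found: int, size_kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / size_kloc

def defect_detection_percentage(found_in_testing: int,
                                found_after_release: int) -> float:
    """Share of all known defects that testing caught before release."""
    total = found_in_testing + found_after_release
    return 100 * found_in_testing / total

print(defect_density(30, 15))              # → 2.0 defects/KLOC
print(defect_detection_percentage(45, 5))  # → 90.0 (%)
```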
48
Q

This summarizes and communicates test information during and after testing.

A

Test Reporting

49
Q

This report supports ongoing test control and provides information to make modifications to the test schedule, resources, or test plan when such changes are needed due to deviation from the plan or changed circumstances.

a. Test progress report
b. Test completion report

A

a. Test progress report

50
Q

This report summarizes a specific test activity (e.g., test level, test cycle, iteration) and can give information for subsequent testing.

a. Test progress report
b. Test completion report

A

b. Test completion report

51
Q

During test monitoring and test control, the test team generates test progress reports for stakeholders to keep them informed. Test progress reports are usually generated on a regular basis (e.g., daily or weekly) and include?

A
  1. Testing period
  2. Test progress (e.g., ahead or behind schedule, including any notable deviations)
  3. Impediments for testing, and their workarounds
  4. Test metrics
  5. New and changed risks within testing period
  6. Testing planned for the next period
52
Q

A test completion report is prepared during test completion, when a project, test level, or test type is complete and when, ideally, its exit criteria have been met. This report uses test progress reports and other data. Typical test completion reports include?

A
  1. Test Summary
  2. Testing and product quality evaluation based on the original test plan (i.e., test objectives and exit criteria)
  3. Deviations from the test plan (e.g., differences from the planned test schedule, duration, and effort)
  4. Testing impediments and workarounds
  5. Unmitigated risks, defects not fixed
  6. Lessons learned that are relevant to the testing
53
Q

In testing, this provides a discipline for identifying, controlling, and tracking work products such as test plans, test strategies, test conditions, test cases, test scripts, test results, test logs, and test reports as configuration items.

A

Configuration management (CM)

Note: For a complex configuration item (e.g., a test environment), CM records the items it consists of, their relationships, and versions. If the configuration item is approved for testing, it becomes a baseline and can only be changed through a formal change control process.

Configuration management keeps a record of changed configuration items when a new baseline is created. It is possible to revert to a previous baseline to reproduce previous test results.

54
Q

To properly support testing, what does Configuration Management (CM) need to ensure?

A
  1. That all configuration items, including test items (individual parts of the test object), are uniquely identified, version controlled, tracked for changes, and related to other configuration items so that traceability can be maintained throughout the test process
  2. All identified documentation and software items are referenced unambiguously in testware
55
Q

What are the typical objectives of a defect report?

A
  1. Provide those responsible for handling and resolving reported defects with sufficient information to resolve the issue
  2. Provide a means of tracking the quality of the work product
  3. Provide ideas for improvement of the development and test process
56
Q

What does a defect report logged during dynamic testing include?

A
  1. Unique identifier
  2. Title with a short summary of the anomaly being reported
  3. Identification of the test object and test environment
  4. Context of the defect (e.g., test case being run, test activity being performed, SDLC phase, and other relevant information such as the test technique, checklist or test data being used)
  5. Description of the failure to enable reproduction and resolution including the test steps that detected the anomaly, and any relevant test logs, database dumps, screenshots, or recordings
  6. Expected results and actual results
  7. Severity of the defect (degree of impact) on the interests of stakeholders or requirements
  8. Priority to fix
  9. Status of the defect (e.g., open, deferred, duplicate, waiting to be fixed, awaiting confirmation testing, re-opened, closed, rejected)
  10. References (e.g., the test case)
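The fields listed above can be sketched as a simple data structure (the field names and the sample report are one possible mapping invented for illustration, not a standard defect-report schema):

```python
# A minimal sketch of the defect-report fields above as a Python dataclass.
# Field names and the example values are illustrative only.

from dataclasses import dataclass, field

@dataclass
class DefectReport:
    identifier: str                 # unique identifier
    title: str                      # short summary of the anomaly
    test_object: str
    test_environment: str
    context: str                    # test case, activity, SDLC phase, ...
    failure_description: str        # steps, logs, screenshots, recordings
    expected_result: str
    actual_result: str
    severity: str                   # degree of impact on stakeholders
    priority: str                   # priority to fix
    status: str = "open"            # open, deferred, duplicate, closed, ...
    references: list[str] = field(default_factory=list)

report = DefectReport(
    identifier="DEF-101",
    title="Login fails for names containing an apostrophe",
    test_object="login-service 2.3.1",
    test_environment="staging",
    context="TC-042, system testing",
    failure_description="Submit the form with user name O'Brien; HTTP 500.",
    expected_result="User is logged in",
    actual_result="Internal server error",
    severity="high",
    priority="high",
)
print(report.status)  # → open
```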