Chapter 5 - Managing the Test Activities Flashcards

1
Q

What is a test plan?

A

A test plan describes the test objectives, resources and processes for a test project.

2
Q

What does the test plan do?

A
  • Documents the means and schedule for achieving test objectives
  • Helps to ensure that the performed test activities will meet the established criteria
  • Serves as a means of communication with team members and other stakeholders
  • Demonstrates that testing will adhere to the existing test policy and test strategy (or explains why
    the testing will deviate from them)
3
Q

The typical content of a test plan includes:

A
  • Context of testing (e.g., test scope, test objectives, test basis)
  • Assumptions and constraints of the test project
  • Stakeholders (e.g., roles, responsibilities, relevance to testing, hiring and training needs)
  • Communication (e.g., forms and frequency of communication, documentation templates)
  • Risk register (e.g., product risks, project risks)
  • Test approach (e.g., test levels, test types, test techniques, test deliverables, entry criteria and
    exit criteria, independence of testing, metrics to be collected, test data requirements, test
    environment requirements, deviations from the test policy and test strategy)
  • Budget and schedule
4
Q

What is release planning? (K1)

A

Release planning looks ahead to the release of a product, defines and re-defines the product backlog,
and may involve refining larger user stories into a set of smaller user stories. It also serves as the basis for the test approach and test plan across all iterations.

5
Q

What is iteration planning? (K1)

A

Iteration planning looks ahead to the end of a single iteration and is concerned with the iteration backlog.
Testers involved in iteration planning participate in the detailed risk analysis of user stories, determine the testability of user stories, break down user stories into tasks (particularly testing tasks), estimate test effort
for all testing tasks, and identify and refine functional and non-functional aspects of the test object.

6
Q

What are entry criteria?

A

Entry criteria define the preconditions for undertaking a given activity. If entry criteria are not met, it is likely that the activity will prove to be more difficult, time-consuming, costly, and riskier.
Entry criteria and exit criteria should be defined for each test level, and will differ based on the test objectives.
Entry criteria that a user story must fulfill to start the development and/or testing activities are called Definition of Ready.

7
Q

What are exit criteria?

A

Exit criteria define what must be achieved to declare an activity completed.
Entry criteria and exit criteria should be defined for each test level, and will differ based on the test objectives.
In Agile software development, exit criteria are often called Definition of Done.

8
Q

What do typical entry criteria include?

A
  • availability of resources (e.g., people, tools, environments, test data, budget, time)
  • availability of testware (e.g., test basis, testable requirements, user stories, test cases)
  • initial quality level of a test object (e.g., all smoke tests have passed)
9
Q

What do typical exit criteria include?

A
  • measures of thoroughness (e.g., achieved level of coverage, number of unresolved defects, defect density, number of failed test cases)
  • binary “yes/no” criteria (e.g., planned tests have been executed, static testing has been performed, all defects found are reported, all
    regression tests are automated).
    Running out of time or budget can also be viewed as valid exit criteria.
10
Q

What are the four estimation techniques? (described in the syllabus) (K3)

A
  • Estimation based on ratios
  • Extrapolation
  • Wideband Delphi
  • Three-point estimation
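Three-point estimation has a standard formula: the expected effort is E = (a + 4m + b) / 6 with standard deviation SD = (b − a) / 6, where a is the best case, m the most likely case, and b the worst case. A minimal sketch in Python (input values are illustrative):

```python
# Three-point estimation (PERT/beta weighting); values are illustrative.
def three_point_estimate(a, m, b):
    """a = best case, m = most likely, b = worst case (e.g., person-hours)."""
    e = (a + 4 * m + b) / 6   # expected effort
    sd = (b - a) / 6          # standard deviation
    return e, sd

e, sd = three_point_estimate(a=6, m=9, b=18)
print(f"estimate: {e:.0f} +/- {sd:.0f} person-hours")  # estimate: 10 +/- 2 person-hours
```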
11
Q

What are the most commonly used test case prioritization strategies? (K3)

A
  • Risk-based prioritization, where the order of test execution is based on the results of risk analysis. The most risky test cases are executed first.
  • Coverage-based prioritization, where the order of test execution is based on coverage. Test cases achieving the highest coverage are executed first.
  • Requirements-based prioritization, where the order of test execution is based on the priorities of
    the requirements traced back to the corresponding test cases. Priorities are defined by the stakeholders.
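For risk-based prioritization, the ordering can be sketched as a sort on risk level (likelihood × impact). The test case names and scores below are hypothetical:

```python
# Risk-based test case prioritization sketch; names and scores are hypothetical.
test_cases = [
    {"name": "TC-login",   "likelihood": 0.8, "impact": 5},
    {"name": "TC-report",  "likelihood": 0.3, "impact": 2},
    {"name": "TC-payment", "likelihood": 0.6, "impact": 5},
]

# Most risky first: sort descending by risk level = likelihood * impact.
ordered = sorted(test_cases,
                 key=lambda tc: tc["likelihood"] * tc["impact"],
                 reverse=True)
print([tc["name"] for tc in ordered])  # ['TC-login', 'TC-payment', 'TC-report']
```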
12
Q

What is a Test Pyramid? (K1)

A

The test pyramid is a model showing that different tests may have different granularity. The test pyramid model supports the team in test automation and in test effort allocation by showing that different test objectives are supported by different levels of test automation. The higher the layer, the lower the test granularity, the lower the test isolation and the higher the test execution time.
From bottom to top: unit tests / service tests / UI tests.

13
Q

What are the Testing Quadrants?

A
  • Q1 (technology facing, supports the team): component tests, component integration tests (automated)
  • Q2 (business facing, supports the team): functional tests, examples, user story tests, user experience prototypes, API testing, simulations (manual or automated)
  • Q3 (business facing, critiques the product): exploratory testing, usability testing, user acceptance testing (user-oriented tests, often manual)
  • Q4 (technology facing, critiques the product): smoke tests, non-functional tests except usability tests (often automated)
14
Q

What are the main risk management activities?

A
  • Risk analysis (consisting of risk identification and risk assessment)
  • Risk control (consisting of risk mitigation and risk monitoring)
15
Q

A risk can be characterized by two factors:

A
  • Risk likelihood – the probability of the risk occurrence (greater than zero and less than one)
  • Risk impact (harm) – the consequences of this occurrence
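A common quantitative sketch multiplies the two factors to obtain a risk level; the syllabus defines the factors, not a fixed formula, and the values below are illustrative:

```python
# Risk level as likelihood * impact (a common quantitative sketch).
def risk_level(likelihood, impact):
    # Risk likelihood is greater than zero and less than one.
    assert 0 < likelihood < 1, "likelihood must be in (0, 1)"
    return likelihood * impact

print(risk_level(0.4, 10))  # 4.0
```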
16
Q

What are project risks?

A

Related to management and control of the project.
- Organizational issues (e.g., delays in work products deliveries, inaccurate estimates, cost cutting)
- People issues (e.g., insufficient skills, conflicts, communication problems, shortage of staff)
- Technical issues (e.g., scope creep, poor tool support)
- Supplier issues (e.g., third-party delivery failure, bankruptcy of the supporting company)

When they occur, may have an impact on the project schedule, budget or scope, which affects the project’s ability to achieve its objectives.

17
Q

What are product risks?

A

Related to the product quality characteristics.
When they occur, may result in various negative consequences, including:
- User dissatisfaction
- Loss of revenue, trust, reputation
- Damage to third parties
- High maintenance costs, overload of the help desk
- Criminal penalties
- In extreme cases, physical damage, injuries or even death

18
Q

What does product risk analysis consist of?

A

It consists of risk identification and risk assessment. Risk identification is about
generating a comprehensive list of risks.
Stakeholders can identify risks by using various techniques and
tools, e.g., brainstorming, workshops, interviews, or cause-effect diagrams.

19
Q

Product risk analysis may influence the thoroughness and test scope. Its results are used to:

A
  • Determine the test scope to be carried out
  • Determine the particular test levels and propose test types to be performed
  • Determine the test techniques to be employed and the coverage to be achieved
  • Estimate the test effort required for each task
  • Prioritize testing in an attempt to find the critical defects as early as possible
  • Determine whether any activities in addition to testing could be employed to reduce risk
20
Q

Product Risk Control - Actions that can be taken to mitigate the product risks by testing are as follows:

A
  • Select the testers with the right level of experience and skills, suitable for a given risk type
  • Apply an appropriate level of independence of testing
  • Perform reviews and static analysis
  • Apply the appropriate test techniques and coverage levels
  • Apply the appropriate test types addressing the affected quality characteristics
  • Perform dynamic testing, including regression testing
21
Q

What is Test Monitoring?

A

Test monitoring is concerned with gathering information about testing. This information is used to assess
test progress and to measure whether the exit criteria or the test tasks associated with the exit criteria are
satisfied, such as meeting the targets for coverage of product risks, requirements, or acceptance criteria.

22
Q

What is Test Control?

A

Test control uses the information from test monitoring to provide, in the form of control directives,
guidance and the necessary corrective actions to achieve the most effective and efficient testing.

23
Q

What are examples of control directives in Test Control?

A
  • Reprioritizing tests when an identified risk becomes an issue
  • Re-evaluating whether a test item meets entry criteria or exit criteria due to rework
  • Adjusting the test schedule to address a delay in the delivery of the test environment
  • Adding new resources when and where needed
24
Q

What is Test Completion?

A

Test completion collects data from completed test activities to consolidate experience, testware, and any
other relevant information. Test completion activities occur at project milestones such as when a test level
is completed, an agile iteration is finished, a test project is completed (or cancelled), a software system is
released, or a maintenance release is completed.

25
Q

What is the purpose of test metrics? (K1)

A

Test metrics are gathered to show progress against the planned test schedule and budget, the current quality of the test object, and the effectiveness of the test activities with respect to the test objectives or an iteration goal. Test monitoring gathers a variety of metrics to support the test control and test completion.

26
Q

What metrics are used in testing? (K1)

A
  • Project progress metrics (e.g., task completion, resource usage, test effort)
  • Test progress metrics (e.g., test case implementation progress, test environment preparation
    progress, number of test cases run/not run, passed/failed, test execution time)
  • Product quality metrics (e.g., availability, response time, mean time to failure)
  • Defect metrics (e.g., number and priorities of defects found/fixed, defect density, defect detection
    percentage)
  • Risk metrics (e.g., residual risk level)
  • Coverage metrics (e.g., requirements coverage, code coverage)
  • Cost metrics (e.g., cost of testing, organizational cost of quality)
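One of the defect metrics above, defect detection percentage (DDP), relates defects found by testing to the total number of defects (those found by testing plus those found afterwards, e.g., after release). A sketch with illustrative numbers:

```python
# Defect detection percentage (DDP); the counts are illustrative.
def defect_detection_percentage(found_in_testing, found_after_release):
    total = found_in_testing + found_after_release
    return 100 * found_in_testing / total

# 90 defects found by testing, 10 escaped to production:
print(defect_detection_percentage(90, 10))  # 90.0
```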
27
Q

What is the purpose of Test Reporting?

A

Test reporting summarizes and communicates test information during and after testing.

28
Q

What is the purpose of Test Progress Reports?

A

Test progress reports support the ongoing test control and must provide enough information to make modifications to the test schedule, resources, or test plan, when such changes are needed due to deviation from the plan
or changed circumstances.

29
Q

What is the purpose of Test Completion Reports?

A

Test completion reports summarize a specific test activity (e.g., test level, test
cycle, iteration) and can give information for subsequent testing.

30
Q

Test progress reports are usually generated on a regular basis and include:

A
  • Testing period
  • Test progress (e.g., ahead or behind schedule), including any notable deviations
  • Impediments for testing, and their workarounds
  • Test metrics (see section 5.3.1 for examples)
  • New and changed risks within the testing period
  • Testing planned for the next period
31
Q

Typical test completion reports include:

A
  • Test summary
  • Testing and product quality evaluation based on the original test plan (i.e., test objectives and exit
    criteria)
  • Deviations from the test plan (e.g., differences from the planned test schedule, duration, and
    effort).
  • Testing impediments and workarounds
  • Test metrics based on test progress reports
  • Unmitigated risks, defects not fixed
  • Lessons learned that are relevant to the testing
32
Q

How can the status of testing be communicated?

A
  • Verbal communication with team members and other stakeholders
  • Dashboards (e.g., CI/CD dashboards, task boards, and burn-down charts)
  • Electronic communication channels (e.g., email, chat)
  • Online documentation
  • Formal test reports
33
Q

What does Configuration Management (CM) provide in testing?

A

CM provides a discipline for identifying, controlling, and tracking
work products such as test plans, test strategies, test conditions, test cases, test scripts, test results, test logs, and test reports as configuration items.

34
Q

To properly support testing, Configuration Management (CM) ensures the following:

A
  • All configuration items, including test items (individual parts of the test object), are uniquely identified, version controlled, tracked for changes, and related to other configuration items so that traceability can be maintained throughout the test process
  • All identified documentation and software items are referenced unambiguously in testware
35
Q

What are typical objectives of defect reports?

A
  • Provide those responsible for handling and resolving reported defects with sufficient information
    to resolve the issue
  • Provide a means of tracking the quality of the work product
  • Provide ideas for improvement of the development and test process
36
Q

A defect report logged during dynamic testing typically includes:

A
  • Title with a short summary of the anomaly being reported
  • Date when the anomaly was observed, issuing organization, and author, including their role
  • Identification of the test object and test environment
  • Context of the defect (e.g., test case being run, test activity being performed, SDLC phase, and
    other relevant information such as the test technique, checklist or test data being used)
  • Description of the failure to enable reproduction and resolution including the test steps that
    detected the anomaly, and any relevant test logs, database dumps, screenshots, or recordings
  • Expected results and actual results
  • Severity of the defect (degree of impact) on the interests of stakeholders or requirements
  • Priority to fix
  • Status of the defect (e.g., open, deferred, duplicate, waiting to be fixed, awaiting confirmation testing, re-opened, closed, rejected)
  • References (e.g., to the test case)
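The fields above can be sketched as a simple data structure; the field names and sample values below are hypothetical, not a mandated schema:

```python
# Minimal defect-report data structure; fields and values are illustrative.
from dataclasses import dataclass, field

@dataclass
class DefectReport:
    title: str               # short summary of the anomaly
    date_observed: str
    author: str
    test_object: str
    test_environment: str
    expected_result: str
    actual_result: str
    severity: str            # degree of impact on stakeholders
    priority: str            # priority to fix
    status: str = "open"
    references: list = field(default_factory=list)  # e.g., test case IDs

report = DefectReport(
    title="Login fails with valid credentials",
    date_observed="2024-05-01",
    author="tester (test analyst)",
    test_object="web client v1.2",
    test_environment="staging",
    expected_result="user is logged in",
    actual_result="HTTP 500 error page",
    severity="high",
    priority="fix in next build",
    references=["TC-042"],
)
print(report.status)  # open
```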