Chapter 5: Managing the Test Activities - Part 2 Flashcards

1
Q

is concerned with gathering information about testing. This information is used to assess test progress and to measure whether the test exit criteria or the test tasks associated with the exit criteria are satisfied, such as meeting the targets for coverage of product risks, requirements, or acceptance criteria.

A

Test monitoring

2
Q

uses the information from test monitoring to provide, in the form of control directives, guidance and the corrective actions needed to achieve the most effective and efficient testing

A

Test control

3
Q

Examples of control directives include:

A

Reprioritizing tests when an identified risk becomes an issue

Re-evaluating whether a test item meets entry criteria or exit criteria due to rework

Adjusting the test schedule to address a delay in the delivery of the test environment

Adding new resources when and where needed


4
Q

collects data from completed test activities to consolidate (make stronger) experience, testware, and any other relevant information. These activities occur at project milestones such as when a test level is completed, an agile iteration is finished, a test project is completed (or cancelled), a software system is released, or a maintenance release is completed.

A

Test completion

5
Q

Test metrics are gathered to show progress against:

A

the planned schedule and budget,

the current quality of the test object,

the effectiveness of the test activities with respect to the objectives or an iteration goal.

6
Q

Test monitoring gathers a variety of metrics to support test control and test completion.
Common test metrics include:

A

Project progress metrics (e.g., task completion, resource usage, test effort)

Test progress metrics (e.g., test case implementation progress, test environment preparation progress, number of test cases run/not run, passed/failed, test execution time)

Product quality metrics (e.g., availability, response time, mean time to failure)

Defect metrics (e.g., number and priorities of defects found/fixed, defect density, defect detection percentage; see the sketch after this list)

Risk metrics (e.g., residual risk level)

Coverage metrics (e.g., requirements coverage, code coverage)

Cost metrics (e.g., cost of testing, organizational cost of quality)
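
Two of the defect metrics above are simple ratios that can be computed directly. A minimal sketch in Python, assuming the commonly used definitions (defect density as defects per unit size; defect detection percentage as the share of all known defects that testing found); the function names and figures are illustrative, not from the syllabus:

def defect_density(defects_found: int, size_kloc: float) -> float:
    # Defects found per unit size of the work product (here: per KLOC).
    return defects_found / size_kloc

def defect_detection_percentage(found_by_testing: int, found_later: int) -> float:
    # Share of all known defects that testing found (the rest were found later,
    # e.g., in production).
    return 100.0 * found_by_testing / (found_by_testing + found_later)

print(defect_density(15, 10.0))             # 1.5 defects per KLOC
print(defect_detection_percentage(90, 10))  # 90.0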

7
Q

The number of defects per unit size of a work product.

A

Defect density
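
For example, with hypothetical numbers: a work product of 20 KLOC (20,000 lines of code) containing 30 reported defects has a defect density of 30 / 20 = 1.5 defects per KLOC.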

8
Q

summarizes and communicates test information during and after testing.

A

Test reporting

9
Q

support the ongoing control of the testing and must provide enough information to make modifications to the test schedule, resources, or test plan, when such changes are needed due to deviation from the plan or changed circumstances.

A

Test progress reports

10
Q

summarize a specific stage of testing (e.g., test level, test cycle, iteration) and can give information for subsequent testing.

A

Test completion reports

11
Q

During test monitoring and control, the test team generates test progress reports for stakeholders to keep them informed. Test progress reports are usually generated on a regular basis (e.g., daily, weekly, etc.) and include:

A

Test period

Test progress (e.g., ahead or behind schedule), including any notable deviations

Impediments (blocks) to testing, and their workarounds

Test metrics (see section 5.3.1 for examples)

New and changed risks within the testing period

Testing planned for the next period

12
Q

is prepared during test completion, when a project, test level, or test type is complete and when, ideally, its exit criteria have been met.

A

A test completion report

13
Q

Typical test completion reports include:

A

Test summary

Testing and product quality evaluation based on the original test plan (i.e., test objectives and exit criteria)

Deviations from the test plan (e.g., differences from the planned schedule, duration, and effort)

Testing impediments and workarounds

Test metrics based on test progress reports

Unmitigated (mitigate: make less severe) risks, defects not fixed

Lessons learned that are relevant to the testing

14
Q

Reporting on test progress to others in the same team is often frequent and informal, while reporting on testing for a completed project

A

follows a set template and occurs only once.

15
Q

The best means of communicating test status varies, depending on:

A

test management concerns,

organizational test strategies,

regulatory standards,

or, in the case of self-organizing teams (see section 1.5.2), on the team itself.

The options include:

Verbal communication with team members and other stakeholders

Dashboards (e.g., CI/CD dashboards, task boards, and burn-down charts)

Electronic communication channels (e.g., email, chat)

Online documentation

Formal test reports (see section 5.3.2)

16
Q

A discipline applying technical and administrative direction and surveillance to identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements.

A

Configuration management

17
Q

In testing, configuration management (CM) provides a discipline for identifying, controlling, and tracking work products such as

A

test plans: Documentation describing the test objectives to be achieved and the means and the schedule for achieving them, organized to coordinate testing activities.

test strategies: A description of how to perform testing to reach test objectives under given circumstances.

test conditions: A testable aspect of a component or system identified as a basis for testing.

test cases: A set of preconditions, inputs, actions (where applicable), expected results and postconditions, developed based on test conditions.

test scripts: A sequence of instructions for the execution of a test.

test results: The consequence/outcome of the execution of a test.

test logs: A chronological record of relevant details about the execution of tests.

and test reports as configuration items.

18
Q

Documentation summarizing testing and results.

A

Test reports

19
Q

For a complex configuration item (e.g., a test environment), this records the items it consists of, their relationships, and versions.

A

Configuration management (CM)

20
Q

If the configuration item is approved for testing, it becomes a

A

baseline and can only be changed through a formal change control process.

21
Q

keeps a record of changed configuration items when a new baseline is created.

A

Configuration management

It is possible to revert to a previous baseline to reproduce previous test results.

22
Q

To properly support testing, CM ensures the following:

A

All configuration items, including test items (individual parts of the test object), are uniquely identified, version controlled, tracked for changes, and related to other configuration items so that traceability can be maintained throughout the test process


All identified documentation and software items are referenced unambiguously (not open to more than one interpretation) in test documentation

23
Q

Continuous integration, continuous delivery, continuous deployment and the associated testing are typically implemented as part of an automated DevOps pipeline (see section 2.1.4), in which automated CM is normally included.

A

CM

24
Q

Since one of the major test objectives is to find defects,

A

an established defect management process is essential.

25
Q

Although we refer to “defects” here, the reported anomalies may turn out to be real defects or something else

A

(e.g., false positive, change request)

This is resolved during the process of dealing with the defect reports.

26
Q

Anomalies may be reported during any phase of the SDLC, and their form depends on the SDLC.

A

defect management

27
Q

At a minimum, this process includes a workflow for handling individual anomalies from their discovery to their closure, and rules for their classification.

A

defect management

28
Q

The defect management workflow typically comprises activities to:

A

log the reported anomalies,

analyze and classify them,

decide on a suitable response, such as fixing the defect or keeping it as it is,

and finally close the defect report.
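
A minimal sketch of such a workflow as a state model in Python; the status names and allowed transitions are illustrative assumptions (loosely based on the status examples in the defect report card below), not a prescribed workflow:

from enum import Enum

class Status(Enum):
    OPEN = "open"               # anomaly logged
    ANALYZED = "analyzed"       # analyzed and classified
    FIX_AGREED = "fix agreed"   # response decided: fix the defect
    DEFERRED = "deferred"       # response decided: keep it as it is for now
    CLOSED = "closed"           # defect report closed

# Allowed transitions from each status; anything else is rejected.
TRANSITIONS = {
    Status.OPEN: {Status.ANALYZED},
    Status.ANALYZED: {Status.FIX_AGREED, Status.DEFERRED, Status.CLOSED},
    Status.FIX_AGREED: {Status.CLOSED},
    Status.DEFERRED: {Status.CLOSED},
    Status.CLOSED: set(),
}

def transition(current: Status, new: Status) -> Status:
    if new not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current.value} -> {new.value}")
    return new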

29
Q

Typical defect reports have the following objectives:

A

Provide those responsible for handling and resolving reported defects with sufficient information to resolve the issue

Provide a means of tracking the quality of the work product

Provide ideas for improvement of the development and test process

30
Q

A defect report logged during dynamic testing typically includes:

A

Unique identifier

Title with a short summary of the anomaly being reported

Date when the anomaly was observed, issuing organization, and author, including their role

Identification of the test object and test environment

Context of the defect (e.g., test case being run, test activity being performed, SDLC phase, and other relevant information such as the test technique, checklist or test data being used)

Description of the failure to enable reproduction and resolution including the steps that detected the anomaly, and any relevant test logs, database dumps, screenshots, or recordings

Expected results and actual results

Severity of the defect (degree of impact on the interests of stakeholders or requirements)

Priority to fix

Status of the defect (e.g., open, deferred, duplicate, waiting to be fixed, awaiting confirmation testing, re-opened, closed, rejected)

References (e.g., to the test case)

Some of this data may be automatically included when using defect management tools (e.g., identifier, date, author and initial status).
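
As a rough illustration of how a defect management tool might capture these fields, a Python dataclass sketch; the field names are assumptions chosen for illustration, not a standard schema:

from dataclasses import dataclass, field
from datetime import date

@dataclass
class DefectReport:
    identifier: str              # unique identifier
    title: str                   # short summary of the anomaly
    observed_on: date            # date the anomaly was observed
    author: str                  # reporter, including their role
    test_object: str             # test object and test environment
    context: str                 # e.g., test case being run, SDLC phase
    failure_description: str     # steps to reproduce, logs, screenshots
    expected_result: str
    actual_result: str
    severity: str                # degree of impact on stakeholder interests
    priority: str                # priority to fix
    status: str = "open"         # e.g., open, deferred, closed, rejected
    references: list[str] = field(default_factory=list)  # e.g., test case IDs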