Chapter 5 Managing the Test Activities - Part 2 Flashcards
is concerned with gathering information about testing. This information is used to assess test progress and to measure whether the test exit criteria or the test tasks associated with the exit criteria are satisfied, such as meeting the targets for coverage of product risks, requirements, or acceptance criteria.
Test monitoring
uses the information from test monitoring to provide, in the form of control directives, guidance and the corrective actions needed to achieve the most effective and efficient testing
Test control
Examples of control directives include:
Reprioritizing tests when an identified risk becomes an issue
Re-evaluating whether a test item meets entry criteria or exit criteria due to rework
Adjusting the test schedule to address a delay in the delivery of the test environment
Adding new resources when and where needed
collects data from completed test activities to consolidate (make stronger) experience, testware, and any other relevant information. Test completion activities occur at project milestones, such as when a test level is completed, an agile iteration is finished, a test project is completed (or cancelled), a software system is released, or a maintenance release is completed.
Test completion
Test metrics are gathered to show progress against:
the planned schedule and budget,
the current quality of the test object,
the effectiveness of the test activities with respect to the test objectives or an iteration goal.
Test monitoring gathers a variety of metrics to support test control and test completion.
Common test metrics include (see the computation sketch after this list):
Project progress metrics (e.g., task completion, resource usage, test effort)
Test progress metrics (e.g., test case implementation progress, test environment preparation progress, number of test cases run/not run, passed/failed, test execution time)
Product quality metrics (e.g., availability, response time, mean time to failure)
Defect metrics (e.g., number and priorities of defects found/fixed, defect density, defect detection percentage)
Risk metrics (e.g., residual risk level)
Coverage metrics (e.g., requirements coverage, code coverage)
Cost metrics (e.g., cost of testing, organizational cost of quality)
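A minimal sketch of how a few of these progress metrics might be computed from raw counts collected during test monitoring; the variable names and numbers below are hypothetical, not taken from the syllabus:

```python
# Hypothetical raw counts collected during test monitoring.
total_cases = 120          # test cases planned for this cycle
implemented_cases = 110    # test case implementation progress
executed_cases = 96        # test cases run so far
passed_cases = 84          # test cases that passed
failed_cases = 12          # test cases that failed

# Test progress metrics derived from the counts above.
implementation_progress = implemented_cases / total_cases   # ~92%
execution_progress = executed_cases / total_cases            # 80%
pass_rate = passed_cases / executed_cases                     # 87.5%

print(f"Implementation progress: {implementation_progress:.0%}")
print(f"Execution progress:      {execution_progress:.0%}")
print(f"Pass rate:               {pass_rate:.1%}")
```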
The number of defects per unit size of a work product.
-Defect density
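As an illustration, defect density is often expressed per thousand lines of code (KLOC); the figures below are made up:

```python
# Hypothetical figures: 30 defects found in a 12,000-line (12 KLOC) component.
defects_found = 30
size_kloc = 12_000 / 1_000   # size expressed in KLOC

defect_density = defects_found / size_kloc
print(f"Defect density: {defect_density:.1f} defects per KLOC")  # 2.5 defects per KLOC
```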
summarizes and communicates test information during and after testing.
Test reporting
support the ongoing control of testing and must provide enough information to make modifications to the test schedule, resources, or test plan when such changes are needed due to deviations from the plan or changed circumstances.
Test progress reports:
summarize a specific stage of testing (e.g., test level, test cycle, iteration) and can give information for subsequent testing.
Test completion reports
During test monitoring and control, the test team generates test progress reports for stakeholders to keep them informed. Test progress reports are usually generated on a regular basis (e.g., daily or weekly) and include:
Test period
Test progress (e.g., ahead or behind schedule), including any notable deviations
Impediments (blocks) for testing, and their workarounds
Test metrics (see section 5.3.1 for examples)
New and changed risks within the testing period
Testing planned for the next period
is prepared during test completion, when a project, test level, or test type is complete and when, ideally, its exit criteria have been met.
A test completion report
Typical test completion reports include:
Test summary
Testing and product quality evaluation based on the original test plan (i.e., test objectives and exit criteria)
Deviations from the test plan (e.g., differences from the planned schedule, duration, and effort)
Testing impediments and workarounds
Test metrics based on test progress reports
Unmitigated (mitigate: make less severe) risks, defects not fixed
Lessons learned that are relevant to the testing
-Reporting on test progress to others in the same team is often frequent and informal, while reporting on testing for a completed project
follows a set template and occurs only once.
The best means of communicating test status varies, depending on:
test management concerns,
organizational test strategies,
regulatory standards,
or, in the case of self-organizing teams (see section 1.5.2), on the team itself.
The options include:
Verbal communication with team members and other stakeholders
Dashboards (e.g., CI/CD dashboards, task boards, and burn-down charts)
Electronic communication channels (e.g., email, chat)
Online documentation
Formal test reports (see section 5.3.2)
A discipline applying technical and administrative direction and surveillance to identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify that it complies with specified requirements.
Configuration management:
In testing, configuration management (CM) provides a discipline for identifying, controlling, and tracking work products such as
test plans: Documentation describing the test objectives to be achieved and the means and the schedule for achieving them, organized to coordinate testing activities.
test strategies: A description of how to perform testing to reach test objectives under given circumstances.
test conditions: A testable aspect of a component or system identified as a basis for testing.
test cases: A set of preconditions, inputs, actions (where applicable), expected results and postconditions, developed based on test conditions.
test scripts: A sequence of instructions for the execution of a test.
test results: The consequence/outcome of the execution of a test.
test logs: A chronological record of relevant details about the execution of tests.
and test reports as configuration items.
Documentation summarizing testing and results.
test reports:
-For a complex configuration item (e.g., a test environment), this records the items it consists of, their relationships, and versions.
Configuration management (CM)
-If the configuration item is approved for testing, it becomes a
baseline and can only be changed through a formal change control process.
- keeps a record of changed configuration items when a new baseline is created.
Configuration management
It is possible to revert to a previous baseline to reproduce previous test results.
To properly support testing, CM ensures the following (see the sketch after this list):
All configuration items, including test items (individual parts of the test object), are uniquely identified, version controlled, tracked for changes, and related to other configuration items so that traceability can be maintained throughout the test process
All identified documentation and software items are referenced unambiguously (not open to more than one interpretation) in test documentation
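A minimal sketch of how configuration items could be uniquely identified, versioned, and related to each other so that traceability can be maintained; the item names, IDs, versions, and the trace helper are hypothetical, and in practice a CM or version control tool provides this:

```python
from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    """A uniquely identified, version-controlled work product."""
    item_id: str                     # unique identifier
    name: str
    version: str                     # version under change control
    related_ids: list[str] = field(default_factory=list)  # trace links to other items

# Hypothetical testware items related for traceability.
test_plan = ConfigurationItem("TP-001", "System test plan", "2.0")
test_case = ConfigurationItem("TC-042", "Login boundary test", "1.3", related_ids=["TP-001"])
test_log = ConfigurationItem("TL-042-07", "Login boundary test run", "1.0", related_ids=["TC-042"])

items = {ci.item_id: ci for ci in (test_plan, test_case, test_log)}

def trace(item_id: str) -> list[str]:
    """Follow trace links from one item back to the items it derives from."""
    chain = [item_id]
    for parent in items[item_id].related_ids:
        chain.extend(trace(parent))
    return chain

print(trace("TL-042-07"))   # ['TL-042-07', 'TC-042', 'TP-001']
```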
Continuous integration, continuous delivery, continuous deployment and the associated testing are typically implemented as part of an automated DevOps pipeline (see section 2.1.4), in which automated CM is normally included.
CM
Since one of the major test objectives is to find defects,
an established defect management process is essential.
Although we refer to “defects” here, the reported anomalies may turn out to be real defects or something else (e.g., a false positive or a change request); this is resolved during the process of dealing with the defect reports.
Anomalies may be reported during any phase of the SDLC, and the form of defect management depends on the SDLC.
defect management
At a minimum, this process includes a workflow for handling individual anomalies from their discovery to their closure, and rules for their classification.
defect management
The defect management workflow (sketched below) typically comprises activities to:
log the reported anomalies,
analyze and classify them,
decide on a suitable response, such as to fix the defect or to keep it as it is,
and finally to close the defect report.
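A minimal sketch of such a workflow as a simple state machine; the state names mirror the status values listed later in this chapter, but the allowed transitions are assumptions, not a prescribed set:

```python
# Hypothetical defect states and the transitions a workflow might allow.
ALLOWED_TRANSITIONS = {
    "new":       {"open", "rejected", "duplicate"},
    "open":      {"deferred", "waiting to be fixed", "rejected"},
    "waiting to be fixed": {"awaiting confirmation testing"},
    "awaiting confirmation testing": {"closed", "re-opened"},
    "re-opened": {"waiting to be fixed"},
    "deferred":  {"open"},
}

def transition(current: str, new_state: str) -> str:
    """Move a defect report to a new state only if the workflow allows it."""
    if new_state not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Cannot move a defect from '{current}' to '{new_state}'")
    return new_state

state = "new"
for step in ("open", "waiting to be fixed", "awaiting confirmation testing", "closed"):
    state = transition(state, step)
print(state)  # closed
```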
Typical defect reports have the following objectives:
Provide those responsible for handling and resolving reported defects with sufficient information to resolve the issue
Provide a means of tracking the quality of the work product
Provide ideas for improvement of the development and test process
A defect report logged during dynamic testing typically includes (see the data-structure sketch after this list):
Unique identifier
Title with a short summary of the anomaly being reported
Date when the anomaly was observed, issuing organization, and author, including their role
Identification of the test object and test environment
Context of the defect (e.g., test case being run, test activity being performed, SDLC phase, and other relevant information such as the test technique, checklist or test data being used)
Description of the failure to enable reproduction and resolution including the steps that detected the anomaly, and any relevant test logs, database dumps, screenshots, or recordings
Expected results and actual results
Severity of the defect (degree of impact) on the interests of stakeholders or requirements
Priority to fix
Status of the defect (e.g., open, deferred, duplicate, waiting to be fixed, awaiting confirmation testing, re-opened, closed, rejected)
References (e.g., to the test case)
Some of this data may be automatically included when using defect management tools (e.g., identifier, date, author and initial status).
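A minimal sketch of a defect report modelled as a data structure with the fields listed above; the field names and example values are hypothetical, and, as noted, a defect management tool would typically fill in the identifier, date, author, and initial status automatically:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DefectReport:
    identifier: str                      # unique identifier (often tool-generated)
    title: str                           # short summary of the anomaly
    observed_on: date
    author: str
    role: str
    test_object: str
    test_environment: str
    context: str                         # e.g., test case, activity, SDLC phase
    failure_description: str             # steps, logs, screenshots needed to reproduce
    expected_result: str
    actual_result: str
    severity: str                        # impact on stakeholders or requirements
    priority: str
    status: str = "open"
    references: list[str] = field(default_factory=list)

report = DefectReport(
    identifier="DEF-1024",
    title="Login fails with valid 64-character password",
    observed_on=date(2024, 5, 17),
    author="A. Tester",
    role="Test analyst",
    test_object="Login service v2.3",
    test_environment="System test environment ST-2",
    context="TC-042, boundary value analysis, system testing",
    failure_description="Step 3 of TC-042 returns an error page; see attached log",
    expected_result="User is logged in",
    actual_result="Error page shown",
    severity="high",
    priority="high",
    references=["TC-042"],
)
print(report.identifier, report.status)
```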