Chapter 5 Flashcards
Test Organization
Independent Testing
- A certain degree of independence often makes the tester more effective at finding defects
- However, independence is not a replacement for familiarity; developers can still efficiently find many defects in their own code
Testing’s degree of independence (lowest to highest)
- No independent testers; developers testing their own code
- Independent developers or testers within the development teams or the project team
- Independent test team or group within the organization
- Independent testers from the business organization or user community
- Independent testers external to the organization
Benefits
- Independent testers are likely to recognize different kinds of failures
- An independent tester can verify, challenge, or disprove assumptions made by stakeholders
Drawbacks
- Isolation from the development team
- Developers may lose a sense of responsibility for the quality
- Independent testers may be seen as a bottleneck or blamed for delays in release
- Independent testers may lack some important information
Tasks of a Test Manager and Tester
Tester and Test Leader
The test manager begins and ends the test process: planning at the start, then monitoring, control, and test completion at the end.
The tasks in the middle of the process, such as analysis, design, implementation, and execution, are the responsibility of the tester.
Test Manager Tasks
- Test policy - Test Strategy - Test Plan
- Test monitoring and control (Test progress report - Test summary report)
- Initiate the analysis, design, implementation, and execution of tests
- Configuration Management
- Metrics
- Tools selection
- Test environment implementation decision
- Develop the skills and careers of testers
Tester Tasks
- Review and contribute to test plans
- Assess requirements for testability
- Test conditions, test cases, test procedures, test data, and test execution schedule
- Test environment setup
- Test execution
- Test automation
- Non-functional testing
- Review tests developed by others
Which of the following BEST describes how tasks are divided between the test manager and the tester?
→ The test manager plans, organizes, and controls the testing activities, while the tester specifies and executes tests.
Which of the following is a benefit of test independence?
→ Testers have different biases than developers
Test Planning and Estimation
Purpose and content of a Test Plan
- A test plan outlines test activities for development and maintenance projects.
- As the project and test planning progress, more information becomes available and more detail can be included in the test plan.
- Test planning is a continuous activity performed throughout the product's lifecycle
- Feedback from test activities should be used to recognize changing risks so that planning can be adjusted
- Planning may be documented in a master test plan and in separate test plans for test levels or for separate test types.
Test Planning Activities
- Determining the scope, objectives, and risks of testing
- Defining the overall approach to testing
- Integrating and coordinating the test activities into the software lifecycle activities
- Making decisions about what to test, the people and other resources required to perform the various test activities, and how the test activities will be carried out
- Scheduling of test analysis, design, implementation, execution, and evaluation activities
- Selecting metrics for test monitoring and control
- Budgeting for the test activities
- Determining the level of detail and structure for test documentation
Test Strategies and Test Approaches
- Analytical
- This type of test strategy is based on analysis of some factor
- Risk-based testing is an example of an analytical approach, where tests are designed and prioritized based on the level of risk
- Model-based
- In this type of test strategy, tests are designed based on some model of some required aspect of the product, such as a function, a business process, an internal structure, or a non-functional characteristic.
- Examples of such models include business process models, state models, and reliability growth models.
- Methodical
- Relies on making systematic use of some predefined set of tests or test conditions, such as a taxonomy of common or likely types of failures or a list of important quality characteristics.
- Error guessing
- Process-compliant or standard-compliant
- Involves analyzing, designing, and implementing tests based on external rules and standards, such as those specified by industry-specific standards.
- Directed or consultative
- Is driven primarily by the advice, guidance, or instructions of stakeholders, business domain experts, or technology experts, who may be outside the test team or outside the organization itself.
- Regression-averse
- Is motivated by a desire to avoid regression of existing capabilities.
- Includes reuse of existing testware, extensive automation of regression tests, and standard test suites.
- Reactive
- Testing is reactive to the component or system being tested, and the events occurring during test execution, rather than being pre-planned.
- Tests are designed and implemented, and may immediately be executed in response to knowledge gained from prior test results.
- Exploratory testing is a common technique employed in reactive strategies.
Entry Criteria
- Define the preconditions for undertaking a given test activity.
- Typical entry criteria include:
- Availability of testable requirements, user stories, or models
- Availability of test items that have met the exit criteria for any prior test levels
- Availability of test environment
- Availability of necessary test tools
- Availability of test data and other necessary resources
Exit Criteria
- Define what conditions must be achieved in order to declare a test level or a set of tests completed.
- Typical exit criteria include:
- Planned tests have been executed
- A defined level of coverage has been achieved
- The number of unresolved defects is within an agreed limit
- The number of estimated remaining defects is sufficiently low
- The evaluated levels of quality characteristics are sufficient
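The exit-criteria bullets above can be sketched as a simple check. This is a minimal illustration only; the threshold values (100% execution, 90% coverage, at most 5 unresolved defects) are hypothetical examples, not values prescribed by the syllabus.

```python
# Sketch: evaluating typical exit criteria at the end of a test level.
# Thresholds below are hypothetical examples of "agreed limits".
def exit_criteria_met(executed_pct, coverage_pct, unresolved_defects):
    """True when planned tests ran, the defined coverage level was
    reached, and unresolved defects are within the agreed limit."""
    return (executed_pct >= 100
            and coverage_pct >= 90
            and unresolved_defects <= 5)

print(exit_criteria_met(100, 95, 3))   # True: all criteria satisfied
print(exit_criteria_met(100, 85, 3))   # False: coverage below threshold
```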
Test Execution Schedule
Test cases → Test Procedures → Test Suites → Test Execution Schedule
A test case is a set of preconditions, inputs, actions, and expected results developed for a particular objective, e.g., verifying login.
A test procedure is a sequence of test cases in their execution order, along with their preconditions.
A test suite is a set of test cases or test procedures grouped for execution, like a folder that contains your test cases.
- Ideally, test cases would be ordered to run based on their priority levels
- If a test case with a higher priority is dependent on a test case with a lower priority, the lower priority test case must be executed first.
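The two ordering rules above (run by priority, but prerequisites first) can be sketched as a small scheduler. The test case names, priorities, and dependencies below are hypothetical; this is one possible way to combine a topological order with priority ranking, not a prescribed algorithm.

```python
# Sketch: order test cases by priority (1 = highest) while ensuring
# every prerequisite runs before the test case that depends on it.
import heapq

def schedule(priorities, depends_on):
    """Return an execution order honoring priorities and dependencies."""
    # Count unmet prerequisites for each test case
    remaining = {tc: len(deps) for tc, deps in depends_on.items()}
    dependents = {tc: [] for tc in priorities}
    for tc, deps in depends_on.items():
        for dep in deps:
            dependents[dep].append(tc)
    # Min-heap (by priority) of test cases whose prerequisites are done
    ready = [(priorities[tc], tc) for tc, n in remaining.items() if n == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, tc = heapq.heappop(ready)
        order.append(tc)
        for nxt in dependents[tc]:
            remaining[nxt] -= 1
            if remaining[nxt] == 0:
                heapq.heappush(ready, (priorities[nxt], nxt))
    return order

# Hypothetical example: high-priority TC_checkout depends on
# lower-priority TC_login, so TC_login must still run first.
priorities = {"TC_login": 2, "TC_checkout": 1, "TC_search": 3}
depends_on = {"TC_login": [], "TC_checkout": ["TC_login"], "TC_search": []}
print(schedule(priorities, depends_on))
# → ['TC_login', 'TC_checkout', 'TC_search']
```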
Factors Influencing the Test Effort
- Product Characteristics
- Risks
- Quality of the test basis
- Size - Complexity - Requirements
- Documentation Required
- Development Process Characteristics
- The stability and maturity of the organization
- The development model in use
- The test approach
- The tools used
- The test process
- Time pressure
- People’s Characteristics
- The skills and experience of the people involved, especially with similar projects and products
- Team cohesion and leadership
- Test Results
- The number and severity of defects found
- The amount of rework required
Test Estimation Techniques
- Metrics-based
- Estimating the test effort based on metrics of former similar projects, or based on typical values
- Expert-based
- Estimating the test effort based on the experience of the owners of the testing tasks, or by experts
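A metrics-based estimate can be as simple as scaling effort-per-test-case from a former similar project to the new project's size. All numbers below are hypothetical.

```python
# Metrics-based estimation sketch: derive effort per test case from a
# former, similar project, then scale to the new project's size.
previous_effort_hours = 400   # total test effort on the former project
previous_test_cases = 200     # test cases designed and executed there
new_test_cases = 320          # estimated number of test cases now

hours_per_test_case = previous_effort_hours / previous_test_cases  # 2.0
estimated_effort = hours_per_test_case * new_test_cases
print(estimated_effort)  # → 640.0 hours
```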
Test Monitoring and Control
- The purpose of test monitoring is to gather information and provide feedback and visibility about test activities.
- Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and reported
Examples of test control activities
- Re-prioritizing tests when an identified risk occurs
- Changing the test schedule due to availability or unavailability of a test environment or other resources
- Re-evaluating whether a test item meets an entry or exit criterion due to rework
Metrics Used in Testing
- Percentage of planned work done in test case preparation and implementation, and in test environment preparation
- Test case execution (number of test cases run or not run, test cases passed or failed)
- Defect information
- Test coverage of requirements, user stories, acceptance criteria, risks, or code
- Task completion, resource allocation and usage, and effort
- Cost of testing
Purposes, Contents and Audiences for Test Reports
- The purpose of test reporting is to summarize and communicate test activity information, both during and at the end of a test activity
- The test report prepared during a test activity may be referred to as a test progress report, while a test report prepared at the end of a test activity may be referred to as a test summary report
- Typical test progress reports may include:
- The status of the test activities and progress against the test plan
- Factors impeding progress
- Testing planned for the next reporting period
- The quality of the test object
- When exit criteria are reached, the test manager issues the test summary report.
- This report provides a summary of the testing performed, based on the latest test progress report and any other relevant information
Which of the following metrics would be MOST useful to monitor during test execution?
→ Percentage of executed test cases
Which one of the following is NOT included in a test summary report?
→ Defining pass/fail criteria and objectives of testing
Which of the following variances should be explained in the Test Summary Report?
→ The variances between what was planned for testing and what was actually tested
Which of the following is a test metric?
→ Confirmation test results (pass/fail)
Which of the following BEST describes a test summary report for executive-level employees
→ The report is high-level and includes a status summary of defects by priority or budget
Defect Density is calculated in terms of
→ The number of defects identified in a component or system divided by the size of the component or the system
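The definition above can be checked with a quick worked example; size is commonly measured in KLOC (thousands of lines of code), and the numbers here are hypothetical.

```python
# Defect density = defects found / size of the component or system.
defects_found = 45
size_kloc = 15.0  # 15,000 lines of code

defect_density = defects_found / size_kloc
print(defect_density)  # → 3.0 defects per KLOC
```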
Which of the following would be a valid measure of test progress?
→ Number of test cases not yet executed
A metric that tracks the number of test cases executed is gathered during which activity in the test process?
→ Execution
Which of the following helps in monitoring the Test Progress:
I. Percentage of planned work done in test case preparation
II. Test coverage of requirements
III. Defect information
IV. The size of the testing Team and skills of the engineers
→ I, II and III
Which of the following is a common test metric often used to monitor BOTH test preparation and test execution?
→ Test case status
In a test summary report, the project’s test leader makes the following statement: ‘The payment processing subsystem fails to accept payments from American Express cardholders, which is considered a must-work feature for this release.’ This statement is likely to be found in which of the following sections?
→ Status of testing and product quality with respect to the exit criteria or definition of done
What is the ratio of the number of failures relative to a category and a unit of measure?
→ Failure rate
Configuration Management
- The purpose is to establish and maintain the integrity of the component or system, the testware, and their relationships to one another through the project and product lifecycle.
- During test planning, configuration management procedures and infrastructure (tools) should be identified and implemented.
- To properly support testing, configuration management may involve ensuring the following:
- All items of testware are uniquely identified, version controlled, tracked for changes, related to each other and related to versions of the test item(s) so that traceability can be maintained throughout the test process
- All identified documents and software items are referenced unambiguously in test documentation
Definition of Risk
- Risk involves the possibility of an event in the future which has negative consequences
- The level of risk is determined by the likelihood of the event and the impact (the harm) from that event
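The definition above (risk level as a function of likelihood and impact) is often applied with simple ordinal scales. The 1-5 scales, risk items, and ratings below are hypothetical conventions for illustration, not mandated by the syllabus.

```python
# Sketch: level of risk = likelihood x impact, each rated 1-5.
def risk_level(likelihood, impact):
    return likelihood * impact

# Hypothetical product risk items with (likelihood, impact) ratings
risks = {
    "payment calculation wrong": (2, 5),  # unlikely, severe harm -> 10
    "slow report generation":    (4, 2),  # likely, moderate harm -> 8
}
ranked = sorted(risks, key=lambda r: risk_level(*risks[r]), reverse=True)
print(ranked)  # highest-level risk first
```

Ranking risk items this way is what drives test prioritization in a risk-based (analytical) test strategy.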
Product (Quality) Risks
- Product risk involves the possibility that a work product may fail to satisfy the legitimate needs of its users and/or stakeholders, examples include:
- Software might not perform its intended functions
- A system architecture may not adequately support some non-functional requirements
- A particular computation may be performed incorrectly in some circumstances
- A loop control structure may be coded incorrectly
- Response-times may be inadequate for a high-performance transaction processing system
- UX feedback might not meet product expectations
Project Risks
- Project risk involves situations that may have a negative effect on a project’s ability to achieve its objectives, examples include:
- Project issues
- Organizational issues
- Political Issues
- Technical Issues
- Supplier Issues
Which of the following is a project risk?
→ A problem with the development manager that results in his rejecting all defect reports
Which of the following is a project risk?
→ A schedule that requires work during the Christmas shutdown
Product Risk Analysis - Effect on Testing - Example
Identification of items for the basis of Product Risk Analysis such as:
- Features
- Functionalities
- User Stories
- Requirements
- Use Cases
- Test Cases
- In this example, features are used as the risk items
- Determining the importance of each risk item (feature in this case)
- This can be done in a lot of ways, including:
- Project documentation
- Stakeholder’s knowledge about the product and existing product in the market
- Inputs on most used functionality
- Inputs from consulting a domain expert
- Data from the previous version of the product or similar product in the market.
- Determining the likelihood of risk occurrence
- The likelihood of risk occurrence can be due to:
- Poor understanding of the feature by the product development team
- Improper architecture and design
- Insufficient time to design
- Incompetency of the team
- The likelihood assessment is typically executed by the technical team
Defect Report Objectives
- Provide developers and other parties with information about any adverse event that occurred
- Provide test managers with a means of tracking the quality of the work product and the impact on the testing
- Provide ideas for development and test process improvement
Defect Report Components
- Identifier - title - summary - date - author - test item
- The development lifecycle phases in which the defect was observed
- A description of the defect to enable reproduction and resolution, including logs
- Expected and actual results
- Scope or degree of impact (severity) of the defect on the interests of stakeholders
- Urgency/priority to fix
- State of the defect report
- Conclusions, recommendations and approvals
- Global issues, such as other areas that may be affected by a change resulting from the defect
- Change history
- References including the test case that revealed the problem
Defect Management
- During the defect management process, some of the reports may turn out to describe false positives, not actual failures due to defects
- For example, a test may fail when a network connection is broken or times out. This behavior does not result from a defect in the test object, but is an anomaly that needs to be investigated.
- Testers should attempt to minimize the number of false positives reported as defects.
- Some of these details may be automatically included and/or managed when using defect management tools.
- Defects found during static testing, particularly reviews, will normally be documented in a different way.
- Defects can be reported in all stages of the development lifecycle
- Defects may be reported on all work products
- Defect Detection Percentage compares field defects (found in production) with test defects (found before production) to measure the test process effectiveness.
DDP = defects (testers) / (defects (testers) + defects (field)) × 100%
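A quick worked example of the DDP formula, with hypothetical defect counts:

```python
# Defect Detection Percentage (DDP) worked example.
defects_testers = 90   # defects found by testing before release
defects_field = 10     # defects found in production (the field)

ddp = defects_testers / (defects_testers + defects_field) * 100
print(ddp)  # → 90.0 (percent of all known defects caught before release)
```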