Chapter 5 Flashcards

1
Q

Test Organization

Independent Testing

A
  • A certain degree of independence often makes the tester more effective at finding defects
  • It is not a replacement for familiarity
2
Q

Testing’s degree of independence (lowest to highest)

A
  • No independent testers; developers testing their own code
  • Independent developers or testers within the development teams or the project team
  • Independent test team or group within the organization
  • Independent testers from the business organization or user community
  • Independent testers external to the organization
3
Q

Benefits

A
  • Independent testers are likely to recognize different kinds of failures
  • An independent tester can verify, challenge, or disprove assumptions made by stakeholders
4
Q

Drawbacks

A
  • Isolation from the development team
  • Developers may lose a sense of responsibility for the quality
  • Independent testers may be seen as a bottleneck or blamed for delays in release
  • Independent testers may lack some important information
5
Q

Tasks of a Test Manager and Tester

Tester and Test Leader

A

The test manager begins and ends the process, performing tasks such as test planning, monitoring and control, and test completion.

Tasks in the middle of the process, such as test analysis, design, implementation, and execution, are the responsibility of the tester.

6
Q

Test Manager Tasks

A
  • Test policy - Test strategy - Test plan
  • Test monitoring and control (Test progress report - Test summary report)
  • Initiate the analysis, design, implementation, and execution of tests
  • Configuration Management
  • Metrics
  • Tools selection
  • Test environment implementation decision
  • Develop the skills and careers of testers
7
Q

Tester Tasks

A
  • Review and contribute to test plans
  • Assess requirements for testability
  • Create test conditions, test cases, test procedures, test data, and the test execution schedule
  • Test environment setup
  • Test execution
  • Test automation
  • Non-functional testing
  • Review tests developed by others
8
Q

Which of the following BEST describes how tasks are divided between the test manager and the tester?

A

→ The test manager plans, organizes, and controls the testing activities, while the tester specifies and executes tests.

9
Q

Which of the following is a benefit of test independence?

A

→ Testers have different biases than developers

10
Q

Test Planning and Estimation

Purpose and content of a Test Plan

A
  • A test plan outlines test activities for development and maintenance projects.
  • As the project and test planning progress, more information becomes available and more detail can be included in the test plan.
  • Test planning is a continuous activity, performed throughout the product's lifecycle
  • Feedback from test activities should be used to recognize changing risks so that planning can be adjusted
  • Planning may be documented in a master test plan and in separate test plans for test levels or for separate test types.
11
Q

Test Planning Activities

A
  • Determining the scope, objectives, and risks of testing
  • Defining the overall approach to testing
  • Integrating and coordinating the test activities into the software lifecycle activities
  • Making decisions about what to test, the people and other resources required to perform the various test activities, and how test activities will be carried out
  • Scheduling of test analysis, design, implementation, execution, and evaluation activities
  • Selecting metrics for test monitoring and control
  • Budgeting for the test activities
  • Determining the level of detail and structure for test documentation
12
Q
Analytical
A
  • This type of test strategy is based on analysis of some factor
  • Risk-based testing is an example of an analytical approach, where tests are designed and prioritized based on the level of risk
13
Q
Model-based
A
  • In this type of test strategy, tests are designed based on some model of some required aspect of the product, such as a function, a business process, an internal structure, or a non-functional characteristic.
  • Examples of such models include business process models, state models, and reliability growth models.
14
Q
Methodical
A
  • Relies on making systematic use of some predefined set of tests or test conditions, such as a taxonomy of common or likely types of failures or a list of important quality characteristics.
  • Error guessing based on a list of likely defect types is an example
15
Q
Process-compliant or standard-compliant
A

Involves analyzing, designing, and implementing tests based on external rules and standards, such as those specified by industry-specific standards.

16
Q

Directed or Consultative

A

Is driven primarily by the advice, guidance, or instructions of stakeholders, business domain experts, or technology experts, who may be outside the test team or outside the organization itself.

17
Q
Regression-averse
A
  • Is motivated by a desire to avoid regression of existing capabilities.
  • Includes reuse of existing testware, extensive automation of regression tests, and standard test suites.
18
Q
Reactive
A
  • Testing is reactive to the component or system being tested, and the events occurring during test execution, rather than being pre-planned.
  • Tests are designed and implemented, and may immediately be executed in response to knowledge gained from prior test results.
  • Exploratory testing is a common technique employed in reactive strategies.
19
Q

Entry Criteria

A
  • Define the preconditions for undertaking a given test activity.
  • Typical entry criteria include:
    • Availability of testable requirements, user stories, or models
    • Availability of test items that have met the exit criteria for any prior test levels
    • Availability of test environment
    • Availability of necessary test tools
    • Availability of test data and other necessary resources
20
Q

Exit Criteria

A
  • Define what conditions must be achieved in order to declare a test level or a set of tests completed.
  • Typical exit criteria include:
    • Planned tests have been executed
    • A defined level of coverage has been achieved
    • The number of unresolved defects is within an agreed limit
    • The number of estimated remaining defects is sufficiently low
    • The evaluated levels of quality characteristics are sufficient
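The exit criteria above can be sketched as a simple check over collected metrics. This is a minimal illustration; the metric names and thresholds here are assumptions, not ISTQB-mandated values.

```python
# Minimal sketch: evaluating exit criteria against collected metrics.
# Metric names and thresholds are illustrative assumptions.
def exit_criteria_met(metrics, min_coverage=0.80, max_open_defects=5):
    """Return (all criteria met?, per-criterion detail)."""
    checks = {
        "all planned tests executed": metrics["executed"] == metrics["planned"],
        "coverage target reached": metrics["coverage"] >= min_coverage,
        "open defects within limit": metrics["open_defects"] <= max_open_defects,
    }
    return all(checks.values()), checks

met, detail = exit_criteria_met(
    {"planned": 120, "executed": 120, "coverage": 0.85, "open_defects": 3})
print(met)  # True
```

Failing any single criterion (e.g., 100 of 120 tests executed) makes the overall result False, which mirrors how exit criteria gate the completion of a test level.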
21
Q

Test Execution Schedule

A

Test cases → Test Procedures → Test Suites → Test Execution Schedule

A test case is a document that specifies how to test a piece of functionality, e.g., verifying login.

Test procedures group test cases into an execution sequence, along with their preconditions.

Test suites are like folders that group related test cases.

  • Ideally, test cases would be ordered to run based on their priority levels
  • If a test case with a higher priority is dependent on a test case with a lower priority, the lower priority test case must be executed first.
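The two ordering rules above (run by priority, but execute prerequisites first) can be sketched with a small depth-first scheduler. The test case names and the numeric priority scale are hypothetical.

```python
# Sketch: order test cases by priority, but always run a test case's
# dependencies before it, even if they have lower priority.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str
    priority: int                       # lower number = higher priority
    depends_on: list = field(default_factory=list)

def schedule(test_cases):
    by_name = {tc.name: tc for tc in test_cases}
    ordered, seen = [], set()

    def visit(tc):
        if tc.name in seen:
            return
        seen.add(tc.name)
        for dep in tc.depends_on:       # prerequisites are scheduled first
            visit(by_name[dep])
        ordered.append(tc.name)

    for tc in sorted(test_cases, key=lambda t: t.priority):
        visit(tc)
    return ordered

cases = [
    TestCase("TC_checkout", priority=1, depends_on=["TC_login"]),
    TestCase("TC_search", priority=2),
    TestCase("TC_login", priority=3),
]
print(schedule(cases))  # ['TC_login', 'TC_checkout', 'TC_search']
```

Even though TC_login has the lowest priority, it runs first because the highest-priority test case depends on it.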
22
Q
Product Characteristics
A
  • Risks
  • Quality of the test basis
  • Size - Complexity - Requirements
  • Documentation Required
23
Q
Development Process Characteristics
A
  • The stability and maturity of the organization
  • The development model in use
  • The test approach
  • The tools used
  • The test process
  • Time pressure
24
Q
People’s Characteristics
A
  • The skills and experience of the people involved, especially with similar projects and products
  • Team cohesion and leadership
25
Q
Test Results
A
  • The number and severity of defects found
  • The amount of rework required

26
Q
Metrics-based
A

Estimating the test effort based on metrics of former similar projects, or based on typical values

27
Q

Expert-based

A

Estimating the test effort based on the experience of the owners of the testing tasks or by experts

28
Q

Test Monitoring and Control

A
  • The purpose of test monitoring is to gather information and provide feedback and visibility about test activities.
  • Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and reported
29
Q

Examples of test control activities

A
  • Re-prioritizing tests when an identified risk occurs
  • Changing the test schedule due to availability or unavailability of a test environment or other resources
  • Re-evaluating whether a test item meets an entry or exit criterion due to rework
30
Q

Metrics Used in Testing

A
  • Percentage of planned work done in test case preparation and implementation, and in test environment preparation
  • Test case execution (number of test cases run or not run, test cases passed or failed)
  • Defect information
  • Test coverage of requirements, user stories, acceptance criteria, risks, or code
  • Task completion, resource allocation and usage, and effort
  • Cost of testing
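The execution-related metrics above can be derived from raw counts. A minimal sketch, with illustrative numbers:

```python
# Sketch: typical test monitoring metrics computed from raw execution counts.
def execution_metrics(total, executed, passed, failed):
    return {
        "executed_pct": 100 * executed / total,                  # progress
        "pass_rate": 100 * passed / executed if executed else 0.0,
        "fail_rate": 100 * failed / executed if executed else 0.0,
        "not_run": total - executed,                             # remaining work
    }

m = execution_metrics(total=200, executed=150, passed=135, failed=15)
print(m["executed_pct"], m["pass_rate"], m["not_run"])  # 75.0 90.0 50
```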
31
Q

Purposes, Contents and Audiences for Test Reports

A
  • The purpose of test reporting is to summarize and communicate test activity information, both during and at the end of a test activity
  • The test report prepared during a test activity may be referred to as a test progress report, while a test report prepared at the end of a test activity may be referred to as a test summary report
  • Typical test progress reports may also include:
    • The status of the test activities and progress against the test plan
    • Factors impeding progress
    • Testing planned for the next reporting period
    • The quality of the test object
  • When exit criteria are reached, the test manager issues the test summary report.
  • This report provides a summary of the testing performed, based on the latest test progress report and any other relevant information
32
Q

Which of the following metrics would be MOST useful to monitor during test execution?

A

→ Percentage of executed test cases

33
Q

Which one of the following is NOT included in a test summary report?

A

→ Defining pass/fail criteria and objectives of testing

34
Q

Which of the following variances should be explained in the Test Summary Report?

A

→ The variances between what was planned for testing and what was actually tested

35
Q

Which of the following is a test metric?

A

→ Confirmation test results (pass/fail)

36
Q

Which of the following BEST describes a test summary report for executive-level employees

A

→ The report is high-level and includes a status summary of defects by priority or budget

37
Q

Defect Density calculated in terms of

A

→ The number of defects identified in a component or system divided by the size of the component or the system
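As a quick illustration of that ratio (measuring size in KLOC is an assumption; function points or another size measure work the same way):

```python
# Sketch: defect density = defects found / size of the component or system.
# Using KLOC (thousands of lines of code) as the size unit is an assumption.
def defect_density(defects, size_kloc):
    return defects / size_kloc

print(defect_density(defects=45, size_kloc=30))  # 1.5 defects per KLOC
```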

38
Q

Which of the following would be a valid measure of test progress?

A

→ Number of test cases not yet executed

39
Q

A metric that tracks the number of test cases executed is gathered during which activity in the test process?

A

→ Execution

40
Q

Which of the following helps in monitoring the Test Progress:

I. Percentage of planned work done in test case preparation

II. Test coverage of requirements

III. Defect information

IV. The size of the testing Team and skills of the engineers

A

I. Percentage of planned work done in test case preparation

II. Test coverage of requirements

III. Defect information

41
Q

Which of the following is a common test metric often used to monitor BOTH test preparation and test execution?

A

Test case status

42
Q

In a test summary report, the project’s test leader makes the following statement: ‘The payment processing subsystem fails to accept payments from American Express cardholders, which is considered a must-work feature for this release.’ This statement is likely to be found in which of the following sections?

A

Status of testing and product quality with respect to the exit criteria or definition of done

43
Q

What is the ratio of the number of failures relative to a category and a unit of measure?

A

Failure rate

47
Q

Configuration Management

A
  • The purpose is to establish and maintain the integrity of the component or system, the testware, and their relationships to one another through the project and product lifecycle.
  • During test planning, configuration management procedures and infrastructure (tools) should be identified and implemented.
  • To properly support testing, configuration management may involve ensuring the following:
    • All items of testware are uniquely identified, version controlled, tracked for changes, related to each other and related to versions of the test item(s) so that traceability can be maintained throughout the test process
    • All identified documents and software items are referenced unambiguously in test documentation
48
Q

Definition of Risk

A
  • Risk involves the possibility of an event in the future which has negative consequences
  • The level of risk is determined by the likelihood of the event and the impact (the harm) from that event
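One common way to express that definition is risk level = likelihood × impact on ordinal scales. A minimal sketch; the 1–5 scales and the classification thresholds are illustrative assumptions, not a standard.

```python
# Sketch: risk level as likelihood x impact on 1 (low) to 5 (high) scales.
# The thresholds for low/medium/high are illustrative assumptions.
def risk_level(likelihood, impact):
    return likelihood * impact

def classify(score):
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

print(classify(risk_level(likelihood=4, impact=5)))  # high
```

A likely event with severe harm scores high and would be tested first under a risk-based approach.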
49
Q

Product (Quality) Risks

A
  • Product risk involves the possibility that a work product may fail to satisfy the legitimate needs of its users and/or stakeholders, examples include:
    • Software might not perform its intended functions
    • A system architecture may not adequately support some non-functional requirements
    • A particular computation may be performed incorrectly in some circumstances
    • A loop control structure may be coded incorrectly
    • Response-times may be inadequate for a high-performance transaction processing system
    • UX feedback might not meet product expectations
50
Q

Project Risks

A
  • Project risk involves situations that may have a negative effect on a project’s ability to achieve its objectives, examples include:
    • Project issues
    • Organizational issues
    • Political Issues
    • Technical Issues
    • Supplier Issues
51
Q

Which of the following is project risk?

A

→ A problem with the development manager which is resulting in his rejecting all defect reports

52
Q

Which of the following is project risk?

A

→ A schedule that requires work during the Christmas shutdown

53
Q

Product Risk Analysis - Effect on Testing - Example

A
  • Identification of items for the basis of Product Risk Analysis such as:
    • Features
    • Functionalities
    • User Stories
    • Requirements
    • Use Cases
    • Test Cases
  • In this example, features are used as the risk items
  • Determining the importance of each risk item (feature in this case)
  • This can be done in a lot of ways, including:
    • Project documentation
    • Stakeholder’s knowledge about the product and existing product in the market
    • Inputs on most used functionality
    • Inputs from consulting a domain expert
    • Data from the previous version of the product or similar product in the market.
  • Determining the likelihood of risk occurrence
  • The likelihood of risk occurrence can be due to:
    • Poor understanding of the feature by the product development team
    • Improper architecture and design
    • Insufficient time to design
    • Incompetency of the team
  • Likelihood assessment is performed by the technical team
54
Q

Defect Report Objectives

A
  • Provide developers and other parties with information about any adverse event that occurred
  • Provide test managers with a means of tracking the quality of the work product and the impact on the testing
  • Provide ideas for development and test process improvement
55
Q

Defect Report Components

A
  • Identifier - title - summary - date - author - test item
  • The development lifecycle phases in which the defect was observed
  • A description of the defect to enable reproduction and resolution, including logs
  • Expected and actual results
  • Scope or degree of impact (severity) of the defect on the interests of stakeholders
  • Urgency/priority to fix
  • State of the defect report
  • Conclusions, recommendations and approvals
  • Global issues, such as other areas that may be affected by a change resulting from the defect
  • Change history
  • References including the test case that revealed the problem
56
Q

Defect Management

A
  • During the defect management process, some of the reports may turn out to describe false positives, not actual failures due to defects
  • For example, a test may fail when a network connection is broken or times out. This behavior does not result from a defect in the test object, but is an anomaly that needs to be investigated.
  • Testers should attempt to minimize the number of false positives reported as defects.
  • Some of these details may be automatically included and/or managed when using defect management tools.
  • Defects found during static testing, particularly reviews, will normally be documented in a different way.
  • Defects can be reported in all stages of the development lifecycle
  • Defects may be reported on all work products
  • Defect Detection Percentage compares field defects (found in production) with test defects (found before production) to measure the test process effectiveness.

DDP = defects found by testers / (defects found by testers + defects found in the field) × 100%
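The DDP calculation above can be computed directly; the defect counts here are illustrative.

```python
# Sketch: Defect Detection Percentage (DDP).
# DDP = defects found by testers / (defects found by testers +
#       defects found in the field) * 100%
def ddp(test_defects, field_defects):
    return 100.0 * test_defects / (test_defects + field_defects)

# 90 defects found in testing, 10 escaped to production:
print(ddp(test_defects=90, field_defects=10))  # 90.0
```

A higher DDP means fewer defects escaped to production, indicating a more effective test process.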