Foundation Level 5 Flashcards

1
Q

Defect Management

A
  1. the reported anomalies may turn out to be real defects or something else
  2. Anomalies may be reported during any phase of the SDLC
2
Q

At a minimum, the defect management process includes

A
  1. a workflow for handling individual anomalies from their discovery to their closure
  2. rules for their classification
  3. activities to log the reported anomalies
  4. activities to decide on a suitable response (e.g., to fix the defect or keep it as it is) and, finally, to close the defect report
3
Q

Typical defect reports have the following objectives:

A
  1. Provide those responsible for handling and resolving reported defects with sufficient information to resolve the issue
  2. Provide a means of tracking the quality of the work product
  3. Provide ideas for improvement of the development and test process
4
Q

A defect report logged during dynamic testing typically includes:

A
  1. Unique identifier
  2. Title with a short summary of the anomaly being reported
  3. Date when the anomaly was observed, issuing organization, and author, including their role
  4. Identification of the test object and test environment
  5. Context of the defect (e.g., test case being run, test activity being performed, SDLC phase, and other relevant information such as the test technique, checklist or test data being used)
  6. Description of the failure to enable reproduction and resolution including the steps that detected the anomaly, and any relevant test logs, database dumps, screenshots, or recordings
  7. Expected results and actual results
  8. Severity of the impact of the defect on the interests of stakeholders or requirements
  9. Priority to fix
  10. Status of the defect (e.g., open, deferred, duplicate, waiting to be fixed, awaiting confirmation testing, re-opened, closed, rejected)
  11. References (e.g., to the test case)
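NB A minimal sketch of how these fields might be captured as a data structure. The DefectReport class and its field names are illustrative assumptions, not prescribed by the syllabus; the status values are taken from item 10 above.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class Status(Enum):
    # Status values from item 10 above
    OPEN = "open"
    DEFERRED = "deferred"
    DUPLICATE = "duplicate"
    WAITING_TO_BE_FIXED = "waiting to be fixed"
    AWAITING_CONFIRMATION_TESTING = "awaiting confirmation testing"
    REOPENED = "re-opened"
    CLOSED = "closed"
    REJECTED = "rejected"

@dataclass
class DefectReport:
    identifier: str                  # 1. unique identifier
    title: str                       # 2. short summary of the anomaly
    observed_on: date                # 3. date observed
    author: str                      # 3. author, including their role
    test_object: str                 # 4. test object ...
    test_environment: str            # ... and test environment
    context: str                     # 5. e.g., test case being run, SDLC phase
    failure_description: str         # 6. steps to reproduce, logs, screenshots
    expected_result: str             # 7. expected result ...
    actual_result: str               # ... and actual result
    severity: str                    # 8. degree of impact on stakeholders
    priority: str                    # 9. priority to fix
    status: Status = Status.OPEN     # 10. current workflow status
    references: list[str] = field(default_factory=list)  # 11. e.g., test case IDs
```
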
5
Q

Configuration Management

A

provides a discipline for identifying, controlling, and tracking work products such as test plans, test strategies, test conditions, test cases, test scripts, test results, test logs, and test reports as configuration items.

NB Configuration management keeps a record of changed configuration items when a new baseline is created.

6
Q

To properly support testing, CM ensures the following

A
  1. All configuration items, including test items (individual parts of the test object), are uniquely identified, version controlled, tracked for changes, and related to other configuration items so that traceability can be maintained throughout the test process
  2. All identified documentation and software items are referenced unambiguously in test documentation
7
Q

The main risk management activities are:

A
  1. Risk analysis
  2. Risk control
8
Q

Risk-based testing

A

A test approach in which test activities are selected, prioritized, and managed based on risk analysis and risk control

9
Q

Risk attributes

A
  1. Risk likelihood – the probability that the risk will occur
  2. Risk impact (harm) – the consequences of that occurrence
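NB The risk level is determined by these two attributes. A minimal sketch of one common convention, scoring each attribute on a small ordinal scale and multiplying; the 1–5 scales and the risk_level function are illustrative assumptions, not prescribed by the syllabus.

```python
def risk_level(likelihood: int, impact: int) -> int:
    """Risk level as the product of risk likelihood and risk impact,
    each scored here on an assumed scale of 1 (low) to 5 (high)."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("scores must be between 1 and 5")
    return likelihood * impact

# A rare but severe risk can outrank a frequent but harmless one:
print(risk_level(likelihood=2, impact=5))  # 10
print(risk_level(likelihood=4, impact=1))  # 4
```
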
10
Q

Main types of risk in software testing

A
  1. project risks
  2. product risks
11
Q

Project risks

A
  1. related to the management and control of the project
  2. may have an impact on the project schedule, budget or scope, which affects the project’s ability to achieve its objectives.
12
Q

Project risk examples

A
  1. Organizational issues
  2. People issues (e.g., insufficient skills, conflicts, communication problems, shortage of staff)
  3. Technical issues (e.g., scope creep, poor tool support)
  4. Supplier issues (e.g., third-party delivery failure, bankruptcy of the supporting company)
13
Q

Product risks

A

related to the product quality characteristics

14
Q

Product risks examples

A
  1. missing or wrong functionality
  2. incorrect calculations
  3. runtime errors
  4. poor architecture
  5. inefficient algorithms
  6. inadequate response time
  7. poor user experience
  8. security vulnerabilities
15
Q

Product risks may result in

A
  1. User dissatisfaction
  2. Loss of revenue, trust, reputation
  3. Damage to third parties
  4. High maintenance costs, overload of the helpdesk
  5. Criminal penalties
  6. In extreme cases, physical damage, injuries or even death
16
Q

Product Risk Analysis

A

provides awareness of product risk in order to focus the testing effort in a way that minimizes the residual level of product risk.

17
Q

Product risk analysis consists of

A
  1. risk identification
  2. risk assessment
18
Q

Product risk analysis may

A
  1. Determine the scope of testing to be carried out
  2. Determine the particular test levels and propose test types to be performed
  3. Determine the test techniques to be employed and the coverage to be achieved
  4. Estimate the test effort required for each task
  5. Prioritize testing in an attempt to find the critical defects as early as possible
  6. Determine whether any activities in addition to testing could be employed to reduce risk
19
Q

Risk identification

A

generating a comprehensive list of risks

20
Q

Risk assessment

A
  1. categorization of identified risks
  2. determining their risk likelihood, risk impact, and risk level
  3. prioritizing them
  4. proposing ways to handle them
21
Q

Product Risk Control

A

all measures that are taken in response to identified and assessed product risks

  1. risk mitigation
  2. risk monitoring
22
Q

Risk mitigation

A

implementing the actions proposed in risk assessment to reduce the risk level.

23
Q

Goal of risk monitoring

A
  1. ensure that the mitigation actions are effective
  2. obtain further information to improve risk assessment
  3. identify emerging risks
24
Q

Response options to risk

A
  1. risk mitigation by testing
  2. risk acceptance
  3. risk transfer
  4. contingency plan
25
Q

Actions that can be taken to mitigate the product risks by testing

A
  1. Select the testers with the right level of experience and skills, suitable for a given risk type
  2. Apply an appropriate level of independence of testing
  3. Conduct reviews and perform static analysis
  4. Apply the appropriate test techniques and coverage levels
  5. Apply the appropriate test types addressing the affected quality characteristics
  6. Perform dynamic testing, including regression testing
26
Q

Common test metrics

A
  1. Project progress metrics (e.g., task completion, resource usage, test effort)
  2. Test progress metrics (e.g., test case implementation progress, test environment preparation progress, number of test cases run/not run, passed/failed, test execution time)
  3. Product quality metrics (e.g., availability, response time, mean time to failure)
  4. Defect metrics (e.g., number and priorities of defects found/fixed, defect density, defect detection percentage; see the worked sketch after this list)
  5. Risk metrics (e.g., residual risk level)
  6. Coverage metrics (e.g., requirements coverage, code coverage)
  7. Cost metrics (e.g., cost of testing, organizational cost of quality)
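NB Two of the defect metrics above have simple, widely used formulas: defect density (defects per unit of work product size) and defect detection percentage (the share of all known defects that testing found before release). A worked sketch; the numbers and the KLOC size unit are made up for illustration.

```python
def defect_density(defects_found: int, size_kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / size_kloc

def defect_detection_percentage(found_in_testing: int,
                                found_after_release: int) -> float:
    """Defects found by testing as a percentage of all known defects
    (found by testing plus found after release)."""
    total = found_in_testing + found_after_release
    return 100 * found_in_testing / total

print(defect_density(30, size_kloc=15.0))   # 2.0 defects/KLOC
print(defect_detection_percentage(90, 10))  # 90.0 (%)
```
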
27
Q

Examples of test control directives

A
  1. Reprioritizing tests when an identified risk becomes an issue
  2. Re-evaluating whether a test item meets entry criteria or exit criteria due to rework
  3. Adjusting the test schedule to address a delay in the delivery of the test environment
  4. Adding new resources when and where needed
28
Q

Test monitoring

A
  1. gathering information about testing
  2. measuring whether the test exit criteria or the test tasks associated with the exit criteria are satisfied, such as meeting the targets for coverage of product risks, requirements, or acceptance criteria.
29
Q

Test control

A

uses the information from test monitoring to provide, in the form of control directives, guidance and the necessary corrective actions to achieve the most effective and efficient testing.

30
Q

Test completion

A

collects data from completed test activities to consolidate experience, testware, and any other relevant information.

31
Q

Best means of communicating test status depends on

A
  1. test management concerns
  2. organizational test strategies
  3. regulatory standards
  4. the team
32
Q

Test status communication options

A
  1. Verbal communication with team members and other stakeholders
  2. Dashboards (e.g., CI/CD dashboards, task boards, and burn-down charts)
  3. Electronic communication channels (e.g., email, chat)
  4. Online documentation
  5. Formal test reports

NB different stakeholders are interested in different types of information => tailor accordingly

33
Q

Test progress reports usually include

A
  1. Test period
  2. Test progress (e.g., ahead or behind schedule), including any notable deviations
  3. Impediments for testing, and their workarounds
  4. Test metrics
  5. New and changed risks within testing period
  6. Testing planned for the next period
34
Q

Typical test completion reports include

A
  1. Test summary
  2. Testing and product quality evaluation based on the original test plan (i.e., test objectives and exit criteria)
  3. Deviations from the test plan (e.g., differences from the planned schedule, duration, and effort).
  4. Testing impediments and workarounds
  5. Test metrics based on test progress reports
  6. Unmitigated risks, defects not fixed
  7. Lessons learned that are relevant to the testing
35
Q

The typical content of a test plan includes

A
  1. Context of testing (e.g., scope, test objectives, constraints, test basis)
  2. Assumptions and constraints of the test project
  3. Stakeholders (e.g., roles, responsibilities, relevance to testing, hiring and training needs)
  4. Communication (e.g., forms and frequency of communication, documentation templates)
  5. Risk register (e.g., product risks, project risks)
  6. Test approach (e.g., test levels, test types, test techniques, test deliverables, entry criteria and exit criteria, independence of testing, metrics to be collected, test data requirements, test environment requirements, deviations from the organizational test policy and test strategy)
  7. Budget and schedule
36
Q

A test plan

A
  1. Documents the means and schedule for achieving test objectives
  2. Helps to ensure that the performed test activities will meet the established criteria
  3. Serves as a means of communication with team members and other stakeholders
  4. Demonstrates that testing will adhere to the existing test policy and test strategy (or explains why the testing will deviate from them)
37
Q

Test estimation techniques

A
  1. metrics-based technique (estimation based on metrics from former similar projects)
  2. expert-based technique (estimation based on the experience of the owners of the testing tasks or of other experts)
38
Q

Test effort might depend on a number of factors

A
  1. characteristics of the product
  2. characteristics of the development process
  3. people characteristics
  4. test results
39
Q

Entry criteria

A

preconditions for undertaking a given activity

  1. availability of testable requirements and/or models
  2. availability of test items that have met the exit criteria of the previous test level
  3. availability of test environment
  4. availability of test tools
  5. availability of test data and other resources
40
Q

Exit criteria

A

what must be achieved in order to declare an activity completed

  1. planned tests have been executed
  2. defined level of coverage has been achieved
  3. number of resolved defects is within an agreed limit
  4. number of estimated remaining defects is sufficiently low
  5. evaluated levels of reliability, performance efficiency, usability, security and other relevant quality characteristics are sufficient
  6. running out of time or budget
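NB A minimal sketch of how exit criteria like these might be checked mechanically at the end of a test level. The thresholds and parameter names are illustrative assumptions agreed per project, not values from the syllabus.

```python
def exit_criteria_met(tests_planned: int, tests_executed: int,
                      coverage: float, open_defects: int) -> bool:
    """True when the planned tests have been executed, the defined
    coverage level is reached, and the number of unresolved defects
    is within the agreed limit."""
    return (tests_executed >= tests_planned  # planned tests executed
            and coverage >= 0.80             # assumed 80% coverage target
            and open_defects <= 5)           # assumed agreed defect limit

print(exit_criteria_met(tests_planned=120, tests_executed=120,
                        coverage=0.85, open_defects=3))  # True
```
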
41
Q

Test strategy

A

provides a generalized description of the test process, usually at the product or organizational level

42
Q

Common types of test strategies

A
  1. analytical (risk based)
  2. model based
  3. methodical
  4. process compliant
  5. directed (consultative)
  6. regression averse
  7. reactive (ongoing)
43
Q

Test planning activities

A
  1. determining the scope, risks, and objectives of testing
  2. defining the overall test approach
  3. integrating and coordinating the test activities into the software lifecycle
  4. making decisions about what to test
  5. scheduling test activities
  6. assigning resources
  7. selecting metrics for test monitoring and control
  8. defining entry criteria and exit criteria
44
Q

Two kinds of planning in an iterative SDLC

A
  1. release planning
  2. iteration planning
45
Q

Test effort estimation techniques

A
  1. Estimation based on ratios
  2. Extrapolation
  3. Wideband Delphi
  4. Three-point estimation
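NB Three-point estimation combines an optimistic estimate a, a most likely estimate m, and a pessimistic estimate b; the usual weighting is E = (a + 4m + b) / 6, with standard deviation (b − a) / 6. A worked sketch:

```python
def three_point_estimate(a: float, m: float, b: float) -> tuple[float, float]:
    """Expected test effort and its standard deviation from optimistic (a),
    most likely (m), and pessimistic (b) estimates."""
    expected = (a + 4 * m + b) / 6
    std_dev = (b - a) / 6
    return expected, std_dev

# e.g., estimates of 6, 9, and 18 person-days:
print(three_point_estimate(6, 9, 18))  # (10.0, 2.0)
```
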
46
Q

Test Case Prioritization

A
  1. Risk-based prioritization (see the sketch after this list)
  2. Coverage-based prioritization
  3. Requirements-based prioritization
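NB A minimal sketch of risk-based prioritization: test cases are ordered so that those covering the highest-risk areas run first. The test case IDs and risk scores are made up for illustration.

```python
# (test case ID, risk level of the area it covers) -- illustrative data
test_cases = [("TC-01", 4), ("TC-02", 25), ("TC-03", 9), ("TC-04", 16)]

# Run the highest-risk tests first
ordered = sorted(test_cases, key=lambda tc: tc[1], reverse=True)
print([tc_id for tc_id, _ in ordered])  # ['TC-02', 'TC-04', 'TC-03', 'TC-01']
```
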
47
Q

Testing Quadrants

A
  1. group the test levels with the appropriate test types, activities, test techniques, and work products in Agile software development
  2. tests can be business facing or technology facing
  3. tests can support the team (i.e., guide the development) or critique the product (i.e., measure its behavior against expectations)
48
Q

Quadrant Q1

A
  1. (technology facing, support the team).
  2. component and component integration tests.
  3. should be automated and included in the CI process.
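NB A minimal sketch of a Q1-style automated component test, written here with pytest so a CI pipeline could run it on every build. The discount function and its rule are invented for illustration.

```python
import pytest

# Illustrative component under test
def discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Component tests a CI pipeline would execute automatically
def test_discount_applies_percentage():
    assert discount(200.0, 25) == 150.0

def test_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        discount(100.0, 120)
```
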
49
Q

Quadrant Q2

A
  1. (business facing, support the team).
  2. contains functional tests, examples, user story tests, user experience prototypes, API testing, and simulations.
  3. check the acceptance criteria and can be manual or automated.
50
Q

Quadrant Q3

A
  1. (business facing, critique the product).
  2. exploratory testing, usability testing, user acceptance testing.
  3. are user-oriented and often manual.
51
Q

Quadrant Q4

A
  1. (technology facing, critique the product).
  2. contains smoke tests and non-functional tests (except usability tests).
  3. often automated.