Test Management Flashcards

1
Q

Besides testing professionals, who are examples of other roles that may test?

A

Testing tasks may be done by people in a specific testing role or by people in another role. Ex: Customers, developers, admins

2
Q

What are the 5 degrees of independence of testers and testing departments (least to most independent)?

A

① No independent testers - the only form of testing available is developers testing their own code

② Independent developers or testers within the development team. Could be developers testing each other’s code

③ Independent test team or group within the organization reporting to product management or executive management

④ Independent testers from the business organization or user community, or testers with specializations in specific test types, e.g. usability, security, or performance

⑤ Independent testers external to the organization either working on site or offsite

3
Q

What are 3 benefits of independent testing?

A

Benefits of test independence include:
① Independent testers are likely to recognize different kinds of failures than developers do, due to their different backgrounds, technical perspectives, and biases
② Independent testers can verify, challenge or disprove assumptions made by stakeholders
③ Independent testers working for a vendor can report in an upright and objective manner, without pressure from the company that hired them

4
Q

What are some drawbacks of independent testers?

A

Drawbacks of test independence are:
① Isolation from the development team may lead to a lack of collaboration, delays in providing feedback, or an adversarial relationship between testers and developers
② Developers may lose a sense of responsibility for the quality of their work
③ Independent testers might be seen as a bottleneck
④ Independent testing may lack some important information about the test objects

5
Q

Within the ISTQB syllabus, 2 test roles are covered. What are those roles?

A

Test manager - responsible for the test process overall; leads the test activities

Testers - execute tests and evaluate the results

6
Q

Name at least 5 typical duties of a test manager.

A

• Developing or reviewing a test policy
• Planning test activities by considering the context and understanding test objectives and risks
• Writing and updating test plans
• Coordinating the test plan with project managers and product owners
• Monitoring test progress and results
• Checking the status of exit criteria to facilitate test completion activities
• Preparing and delivering test progress reports and test summary reports
• Introducing suitable metrics for measuring test progress and results
• Evaluating the quality of both the testing and the product
• Supporting the selection and implementation of test tools
• Advocating for the testers, the test team, and the test profession within the organization
• Helping develop the skills of the testers

7
Q

Name at least 5 typical duties of a tester.

A

• Reviewing and contributing to test plans
• Analyzing, reviewing and assessing requirements, user stories and acceptance criteria
• Identifying and documenting test conditions
• Capturing traceability between test cases, conditions, & the test basis
• Designing, setting up and verifying test environments
• Designing & implementing test cases and procedures
• Preparing & acquiring test data
• Creating a detailed test execution schedule
• Executing tests and evaluating results
• Documenting any deviations from expected results
• Automating necessary tests
• Evaluating non-functional characteristics such as performance efficiency, reliability, and security
• Reviewing tests developed by others

8
Q

What is a test plan and what are at least 5 factors that influence test planning?

A

Test plans - outline test activities for development and maintenance projects. Test planning is a continuous activity and is performed throughout the product's life cycle

Factors that influence Test Planning:
* Test policy & strategy of the organization
* SDLC and methods being used
* Scope of testing
* Objectives
* Risks
* Criticality
* Testability
* Availability of resources

9
Q

What are at least 5 test planning activities?

A

Test planning activities may include the following:
* Determining the scope, objectives, and risks of testing
* Defining the overall approach
* Making decisions about:
① What - what to test
② Who - the people and resources required
③ How - how the tests will be carried out
* Scheduling of test activities
* Selecting metrics for test monitoring and control
* Budgeting
* Determining documentation detail level & structure

10
Q

What is a test strategy and how is it different from a test plan?

A

Test strategy - a generalized description of the test process usually at the product or organizational level

Test plans - outline test activities for development and maintenance projects. Test planning is a continuous activity and is performed throughout the product's life cycle

11
Q

Name at least 5 common types of test strategy.

A

Common types of test strategy include:
* Analytical - based on the analysis of some factor. Ex: risk-based testing, where tests are designed & prioritized based on the level of risk
* Model-based - based on a model of some required aspect of the product. Ex: a function, business process, internal structure, or reliability
* Methodical - systematic use of some pre-defined set of tests or test conditions. Ex: a taxonomy of common failures
* Process or standards compliance - designing and implementing tests based on external rules and standards. Ex: industry-specific standards
* Directed - driven by advice, guidance, or instructions from stakeholders
* Regression-averse - motivated by a desire to avoid regression of existing capabilities
* Reactive - testing reacts to the component or system being tested and to events occurring during test execution, rather than being pre-planned

  • An appropriate test strategy is often created by combining several of these test strategies. Ex: risk-based testing (an analytical strategy) can be combined with exploratory testing (a reactive strategy)
12
Q

What are entry and exit criteria? How often should they be defined? What should they be based on?

A

Entry and exit criteria - criteria which define when a given test activity should start and when it is complete

Entry and Exit criteria should be defined for each test level and type and will differ based on the test objectives

13
Q

What are typical entry criteria based on?

A

Typical entry criteria may be based on the availability of:
* testable requirements, User stories, and/or models
* test items that have met the exit criteria for any prior test levels
* test environment
* necessary test tools
* test data & other necessary resources.

14
Q

What are some typical exit criteria?

A

Typical Exit criteria may include:
* Planned tests have been executed
* A defined level of coverage has been achieved
* Number of unresolved defects is within an agreed limit
* Number of estimated remaining defects is sufficiently low
* The evaluated levels of the following are sufficient:
① Reliability
② Performance
③ Efficiency
④ Usability
⑤ Security
⑥ Other quality characteristics
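
Where exit criteria are expressed as measurable thresholds, they can be checked mechanically against collected metrics. Below is a minimal sketch in Python; the metric names and threshold values are illustrative assumptions, not prescribed by the syllabus:

    # Minimal sketch of an automated exit-criteria check.
    # Metric names and thresholds are illustrative assumptions.
    def exit_criteria_met(metrics: dict) -> bool:
        checks = [
            metrics["planned_tests_executed_pct"] >= 100,   # all planned tests executed
            metrics["requirement_coverage_pct"] >= 90,      # agreed coverage level reached
            metrics["unresolved_defects"] <= 5,             # unresolved defects within limit
            metrics["estimated_remaining_defects"] <= 2,    # estimated residual defects low
        ]
        return all(checks)

    print(exit_criteria_met({
        "planned_tests_executed_pct": 100,
        "requirement_coverage_pct": 93,
        "unresolved_defects": 3,
        "estimated_remaining_defects": 1,
    }))  # -> True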

15
Q

Test estimation is used to determine the effort required for adequate testing. What are 2 common test estimation techniques? Give an example of each.

A

* Metrics-based technique: based on metrics of similar projects or on typical values
Ex: burn-down charts in Agile are an example of the metrics-based approach, as effort remaining is captured and reported

* Expert-based technique: based on the experience of testers or other experts
Ex: Planning Poker is an example of the expert-based approach, as team members estimate effort based on their experience
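
As a worked illustration of the metrics-based technique, the sketch below scales a similar past project's test effort by relative size. The figures and field names are assumptions made up for this example:

    # Metrics-based estimation sketch: scale a similar past project's
    # test effort by the relative size of the new project.
    # All numbers are illustrative assumptions.
    past_project = {"test_cases": 400, "test_effort_person_days": 80}
    new_project_test_cases = 550

    effort_per_case = past_project["test_effort_person_days"] / past_project["test_cases"]
    estimated_effort = effort_per_case * new_project_test_cases
    print(f"Estimated test effort: {estimated_effort:.0f} person-days")  # 110
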
16
Q

What is test monitoring?

A

Test monitoring is done to gather information and provide feedback and visibility about test activities.

  • Information may be collected manually or automatically
  • Information should be used to assess test progress and measure whether the exit criteria have been satisfied
17
Q

What is test control? Give at least two examples of test control actions.

A

Test control is any guiding or corrective actions taken as a result of test information reported.

  • Example of test control actions:
    • Reprioritizing tests when a risk is identified, such as late software delivery
    • Changing the test schedule due to the availability of a needed resource
    • Reevaluating whether the product meets exit or entry criteria due to rework
18
Q

Name 4 things test metrics are meant to assess.

A

Test metrics - collected during or at the end of a test activity to assess the following:
① Progress against the planned schedule & budget
② Current quality of the test object
③ Adequacy of the test approach
④ Effectiveness of test activities with respect to the objectives

19
Q

What are some common test metrics?

A

Common test metrics include:
* Percentage of planned work done in test case preparation
* Percentage of planned work done in test environment preparation
* Test case execution
* Test coverage of requirements, user stories, acceptance criteria, risks, or code
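
A minimal sketch of how such percentages might be derived from raw counts; the counts and variable names are illustrative assumptions:

    # Sketch: deriving common test-progress metrics from raw counts.
    # The counts are illustrative assumptions.
    planned_test_cases = 120
    prepared_test_cases = 90
    executed_test_cases = 60
    covered_requirements = 45
    total_requirements = 50

    preparation_pct = 100 * prepared_test_cases / planned_test_cases
    execution_pct = 100 * executed_test_cases / planned_test_cases
    coverage_pct = 100 * covered_requirements / total_requirements

    print(f"Test case preparation: {preparation_pct:.0f}%")   # 75%
    print(f"Test case execution:   {execution_pct:.0f}%")     # 50%
    print(f"Requirements coverage: {coverage_pct:.0f}%")      # 90%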

20
Q

What is test reporting and what are two different kinds of test reporting?

A

Test Reporting- intended to summarize and communicate test activity information during & at the end of testing.
1. If done during testing it may be called a “test progress report”
2. If prepared at the end of test activity, it may be called a “test summary report”

21
Q

A “test progress report” is a report given while testing is in progress. What are some points typically included in this kind of report?

A
  • Status of test activities and progress against the plan
  • Factors impeding progress
  • Future testing planned for the next reporting period
  • Quality of the test object.
22
Q

A “test summary report” is a report given at the end of test activity. A typical test summary report may include what information?

A
  • Summary of testing performed
  • Info on what occurred during a testing period
  • Deviations from the plan. Ex: deviations in schedule, duration, or effort of test activities
  • Status of testing and product quality according to exit criteria or definition of done
  • Factors that have blocked or are currently blocking progress
  • Metrics of defects, test cases, test coverage, and resource consumption
  • Residual risks
  • Reusable test work products created
23
Q

Why is configuration management done?

A

Configuration management is done to establish and maintain integrity of the product and the testware as well as their relationship to each other throughout the product life cycle.

Configuration Management procedures and tools should be identified and implemented during test planning

24
Q

Configuration management involves making sure that all test items and all testware are…

A

① Uniquely identified
② Version controlled
③ Tracked for changes
④ Related to each other
⑤ Related to versions of the test items to maintain traceability
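
A minimal sketch of what uniquely identified, versioned, and traceable testware records might look like; the identifiers, versions, and field names are illustrative assumptions:

    # Sketch: testware records that are uniquely identified, versioned,
    # and traced to the version of the test item they apply to.
    # IDs, versions, and fields are illustrative assumptions.
    testware = [
        {"id": "TC-017", "version": "1.3", "test_item": "checkout-service", "item_version": "2.4.1"},
        {"id": "TD-005", "version": "1.0", "test_item": "checkout-service", "item_version": "2.4.1"},
    ]

    # Trace which testware applies to a given build of the test item.
    for record in testware:
        if record["item_version"] == "2.4.1":
            print(record["id"], record["version"])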

25
Q

What is meant by “risk,” how is it used in testing, and how is the level of risk determined?

A
  • Risk is the possibility that a future event may have negative consequences
  • Risk is used to focus effort required during testing. Helps decide where & when to test
  • The level of risk is determined by the likelihood of the event and the level of impact from that event.
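
As a worked illustration, the level of risk is often treated as a combination of likelihood and impact, for example scored on simple ordinal scales. The scales and sample risks below are assumptions for the example, not a prescribed method:

    # Sketch: risk level as likelihood x impact on 1-5 ordinal scales
    # (the scales and the sample risks are illustrative assumptions).
    risks = [
        {"name": "payment calculation wrong", "likelihood": 2, "impact": 5},
        {"name": "slow report generation",    "likelihood": 4, "impact": 2},
        {"name": "typo in help text",         "likelihood": 3, "impact": 1},
    ]

    for risk in risks:
        risk["level"] = risk["likelihood"] * risk["impact"]

    # Higher risk level -> test earlier and more thoroughly.
    for risk in sorted(risks, key=lambda r: r["level"], reverse=True):
        print(f'{risk["level"]:>2}  {risk["name"]}')
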
26
Q

There are different kinds of risk in testing. Name 5 kinds.

A
  1. Product risk - the possibility that a product may fail to satisfy the needs of its users or stakeholders
  2. Project Risk- possible situations that may negatively impact the project’s ability to achieve its objectives
  3. Organizational Risk - Issues that might impact the whole organization, including the current project
  4. Political Risk-Issues that might arise within development teams
  5. Technical risk - technical issues that may impact the project
27
Q

What is product risk? Give at least 3 examples.

A

Product risk - the possibility that a product may fail to satisfy the needs of its users or stakeholders
(Quality risk - product risk that is related to specific quality characteristics of a product. Ex: reliability)

Examples of product risk:
① Software doesn’t function as intended in the specifications
② Software doesn’t function as intended by users
③ System architecture may not support some nonfunctional requirements
④ A particular computation may be performed incorrectly
⑤ Inadequate response times for a high performance transaction system

28
Q

What is project risk? Give 3 examples.

A

Project Risk- possible situations that may negatively impact the project’s ability to achieve its objectives

Example of project risk:
① Delays in delivery or satisfaction of exit criteria
② Inaccurate estimates, reallocation of funds, or cost-cutting (inadequate funding)
③ Late changes can result in substantial rework

29
Q

What is organizational risk? Give 3 examples.

A

Organizational Risk - Issues that might impact the whole organization, including the current project

Examples of Organizational risk:
① Insufficient skills, training, or staff
② Issues caused by personnel conflict
③ Unavailability of users, business staff, or subject matter experts due to conflicting business priorities

30
Q

What is political risk? Give 3 examples.

A

Political Risk-Issues that might arise within development teams

Examples of political risk -
① Inadequate communication between teams related to needs and/or test results
② Failure to follow-up on information found during testing and reviews
③ Different expectations of testing

31
Q

Give at least 3 examples of technical risk.

A

Technical risk - technical issues that may impact the project

Examples of technical risk include:
① Requirements not well-defined
② Requirements not met due to existing constraints
③ Test environment not ready
④ Late technical requirements (data conversion, migration planning, tool support)
⑤ Quality of work impacted by weaknesses in the development process.
⑥ Technical debt caused by poor defect management. (Accumulated defects)

32
Q

How is a risk-based approach to testing useful? What do risk analysis techniques help with?

A

A risk-based approach to testing allows for proactive reduction of product risk.
The results of product risk analysis are used to determine test techniques

Risk Analysis Techniques help to:
① Determine which test techniques to employ
② Determine the particular levels and types of testing
③ Determine the extent of testing to be carried out
④ Prioritize testing to find critical defects early
⑤ Determine other activities that might be used to reduce risk

33
Q

What are risk management activities?

A

Risk management activities provide a disciplined approach to:
① Analyze what can go wrong
② Determine the most important risks
③ Do what’s necessary to mitigate the most important risks
④ Make plans to deal with risks if they happen

34
Q

What is meant by “defect management?”

A

Any defects found during testing should be logged and managed correctly
and should be investigated and tracked from discovery to resolution.

35
Q

What is the purpose of a defect report?

A

Typical Defect Report Objectives:
① Provide developers all information about any adverse event that occurred
② Provide test managers a way to track the quality of the product
③ Provide ideas for both development and test process improvement

36
Q

What is an outline of a typical defect report?

A

Typical Testing Defect report includes:
1. An identifier for the defect
2. Title & Summary of defect
3. Date defect was reported
4. Test item & environment used
5. SDLC phase where defect was observed
6. Description of the defect with logs, database dumps, screenshots & recordings
7. Scope or degree of impact of the defect
8. Urgency or priority to fix
9. Expected & Actual results
10. Conclusions
11. Change history
12. References including the test case that revealed the problem
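
A minimal sketch of such a report as a structured record; the field names and sample values are illustrative assumptions, not a prescribed schema:

    # Sketch: a defect report captured as a structured record.
    # Field names and sample values are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class DefectReport:
        identifier: str
        title: str
        date_reported: str
        test_item: str
        environment: str
        lifecycle_phase: str
        description: str
        impact: str
        priority: str
        expected_result: str
        actual_result: str
        references: list = field(default_factory=list)

    report = DefectReport(
        identifier="DEF-1042",
        title="Login fails for usernames containing '+'",
        date_reported="2024-05-03",
        test_item="login-service 1.8.0",
        environment="staging",
        lifecycle_phase="system testing",
        description="See attached log excerpt and screenshot.",
        impact="Blocks login for a subset of users",
        priority="High",
        expected_result="User is logged in",
        actual_result="HTTP 500 returned",
        references=["TC-204"],
    )
    print(report.identifier, report.priority)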

37
Q

What is the Probe Effect?

A

The probe effect is the consequence of using intrusive tools. Intrusive tools may affect the actual outcome of the test. Ex: response times may be affected by the extra instructions executed by a performance testing tool.
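
A tiny illustration of the probe effect: the same loop is timed with and without a per-iteration probe, and the probed run typically measures slower. The workload and the probe below are made up for the example; the exact overhead depends on the machine:

    # Sketch: the probe effect - instrumentation added to observe a system
    # changes the very behavior (here, the timing) being observed.
    import time

    def work(n: int) -> int:
        total = 0
        for i in range(n):
            total += i * i
        return total

    def work_instrumented(n: int, samples: list) -> int:
        total = 0
        for i in range(n):
            samples.append(time.perf_counter())  # the "probe"
            total += i * i
        return total

    start = time.perf_counter()
    work(200_000)
    plain = time.perf_counter() - start

    samples = []
    start = time.perf_counter()
    work_instrumented(200_000, samples)
    probed = time.perf_counter() - start

    print(f"without probe: {plain:.4f}s, with probe: {probed:.4f}s")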