ISTQB-ATM Learning Objectives Flashcards
TM-1.2.1 (K4) Analyze the test needs for a system in order to plan test activities and work products that will achieve the test objectives.
SAMPLE QUESTION:
You are the Test Manager working on a project developing a tourist information mobile application. The project recently switched to an agile process and test-driven development. Each development cycle lasts 15 days, with daily builds beginning at day 7. After day 10, no new features are allowed to be added. The development team is composed of very experienced team members, who are proud of their work, but not tolerant of the testing team. The requirements are written down as coarse-grained user stories like the following one:
US 03-30: Search nearest matching hotel
- As a casual user at an unfamiliar location I want to get information on the nearest hotel matching my financial and comfort profile best.
- Priority: High;
- Estimate: 7 (out of 10)
The software depends on existing web services, which are stubbed during development. Unit testing is done by developers, while system and user acceptance testing is the testing team’s responsibility. System test in earlier development cycles was often blocked due to severe failures of newly developed features. Analysis shows that many of these failures could have been found during unit test. Analysis of issues found during production shows that 30% of performance problems were due to unreliable web services delivered by third-party suppliers.
Primary test objectives are to mitigate the perceived performance risks and to increase the confidence that no failures with high severity will occur in user stories with priority >= High. Moreover, upper management has demanded closer cooperation between testers and developers.
Which of the following test activities and/or work products will achieve the test objectives best?
a) Approval of detailed design specifications by inspections done by the test team before day 7, when the daily builds begin.
b) Identification of external web services and enforcement of service level agreements (SLAs) with service provider done by project management and test management.
c) Integration test level plan defined by test manager before each development cycle and handed over to developers on day 10.
d) Metrics suite for unit testing defined by and reported to test management at day 7.
e) Automated performance testing of user stories with priority >= High done by testers during system test with test execution starting on day 10.
Select TWO options.
CORRECT ANSWER: B and E
EXPLANATION:
a) Incorrect: TDD starts with unit test case design; in agile processes, normally there are no detailed design specifications.
b) Correct: 30% of performance issues are reported in relation to web services. These (or some of them) may be due to undefined SLAs.
c) Incorrect: there is no integration test level.
d) Incorrect: unit testing is the responsibility of the development team.
e) Correct: performance tests must be conducted, and the system is not stable before day 10.
NOTES FROM SYLLABUS:
- Ensure activities meet the mission and objectives in the organization’s test strategy
- Plan activities: Use testing strategy to determine the tasks that might need to occur (e.g., if using risk-based testing strategy, then risk analysis guiding planning mitigation activities)
- Test Approach: using testing strategy, determine test levels to be employed and with what goals & objectives?
- Identify method for gathering/tracking metrics that will be used to guide the project, determine plan adherence, and assess objective achievement.
TM-1.3.1 (K3) Use traceability to check completeness and consistency of defined test conditions with respect to the test objectives, test strategy, and test plan.
SAMPLE QUESTION:
You are the Test Manager working on a project developing a tourist information mobile application. The project recently switched to an agile process and test-driven development. Each development cycle lasts 15 days, with daily builds beginning at day 7. After day 10, no new features are allowed to be added. The development team is composed of very experienced team members, who are proud of their work, but not tolerant of the testing team. The requirements are written down as coarse-grained user stories like the following one:
US 03-30: Search nearest matching hotel
- As a casual user at an unfamiliar location I want to get information on the nearest hotel matching my financial and comfort profile best.
- Priority: High;
- Estimate: 7 (out of 10)
The software depends on existing web services, which are stubbed during development. Unit testing is done by developers, while system and user acceptance testing is the testing team’s responsibility. System test in earlier development cycles was often blocked due to severe failures of newly developed features. Analysis shows that many of these failures could have been found during unit test. Analysis of issues found during production shows that 30% of performance problems were due to unreliable web services delivered by third-party suppliers.
Primary test objectives are to mitigate the perceived performance risks and to increase the confidence that no failures with high severity will occur in user stories with priority >= High. Moreover, upper management has demanded closer cooperation between testers and developers.
The following exit criteria for acceptance testing have been specified:
- AC 1: Software response time <= 3 sec for up to 1,000 simultaneous requests of user stories with priority = Very High
- AC 2: Software response time <= 10 sec for up to 10,000 simultaneous requests of user stories with priority >= High
- AC 3: No severe failure in system and user acceptance test of user stories with priority >= High
- AC 4: All user stories covered by at least one user acceptance test case
In the test strategy, equivalence partitioning is required for the system and acceptance testing of user stories with priority >= High.
For this development cycle, the following user stories were selected and implemented:
(P = Priority; E = Estimated Effort)
- US 02-10: Play video for selected hotel (P: Medium; E: 4)
- US 02-20: Play background music (P: Low; E: 2)
- US 03-20: Search for five nearest hotels (P: Very High; E: 4)
- US 03-30: Search for nearest matching hotel (P: High; E: 7)
Test analysis for system testing has just begun and the following test conditions have been identified:
- TC 02-10-1: Play video, use all supported formats
- TC 03-20-1: List 5 nearest hotels, use equivalence partitioning for location
- TC 03-30-1: List nearest matching hotel, use equivalence partitioning for user profile and location
- TC PE-xx-1: Performance tests for up to 10,000 simultaneous requests of user story US 03-30
- TC PE-xx-2: Performance tests for up to 1,000 simultaneous requests of user story US 03-20
What is the MINIMUM number of test conditions that must be added to fulfill all exit criteria in this cycle?
a) 2
b) 1
c) 3
d) 4
Select ONE option.
CORRECT ANSWER: A
EXPLANATION:
a) Correct
1) A performance test condition for up to 10,000 simultaneous requests of user story US 03-20 (maximum allowed response time 10 seconds, required by AC 2 since its priority, Very High, is >= High) is missing.
2) A test condition for user story US 02-20 is missing (AC 4 requires every user story to be covered by at least one user acceptance test case).
b) Incorrect
c) Incorrect
d) Incorrect
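To make the counting concrete, here is a minimal traceability-check sketch in Python. The data structures, field names, and coverage rules are assumptions made for illustration (they mirror the scenario above, not any prescribed ISTQB artifact); it reports the same two gaps as the explanation.

```python
# Hypothetical traceability check for the scenario above.
# All names and structures are illustrative assumptions, not ISTQB artifacts.

user_stories = {
    "US 02-10": "Medium",
    "US 02-20": "Low",
    "US 03-20": "Very High",
    "US 03-30": "High",
}

test_conditions = [
    {"id": "TC 02-10-1", "story": "US 02-10", "type": "functional"},
    {"id": "TC 03-20-1", "story": "US 03-20", "type": "functional"},
    {"id": "TC 03-30-1", "story": "US 03-30", "type": "functional"},
    {"id": "TC PE-xx-1", "story": "US 03-30", "type": "performance", "requests": 10_000},
    {"id": "TC PE-xx-2", "story": "US 03-20", "type": "performance", "requests": 1_000},
]

HIGH_OR_ABOVE = {"High", "Very High"}
gaps = []

# AC 4: every user story needs at least one test condition covering it.
for story in user_stories:
    if not any(tc["story"] == story for tc in test_conditions):
        gaps.append(f"{story}: no test condition at all (AC 4)")

# AC 2: stories with priority >= High need a 10,000-request performance condition.
for story, priority in user_stories.items():
    if priority in HIGH_OR_ABOVE and not any(
        tc["story"] == story and tc["type"] == "performance" and tc.get("requests", 0) >= 10_000
        for tc in test_conditions
    ):
        gaps.append(f"{story}: no 10,000-request performance condition (AC 2)")

print(f"Minimum test conditions to add: {len(gaps)}")
for gap in gaps:
    print(" -", gap)
# Expected: 2 gaps -> US 02-20 (AC 4) and US 03-20 (AC 2)
```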
NOTES FROM SYLLABUS:
Keep in mind the objectives, strategy, and test plan and the circumstances of the team to ensure the level of detail of the test conditions is appropriate (see 1.3 notes for more on appropriateness)
TM-1.4.1 (K3) Use traceability to check completeness and consistency of designed test cases with respect to the defined test conditions.
SAMPLE QUESTION:
Assume that you are working for an ambitious start-up. They are creating a system that will provide customized loyalty and rewards programs for small- and medium-sized companies selling to customers on the Web. These companies enroll themselves on the system’s web store. This allows the companies to create customized buttons, to be placed on their websites, that let customers enroll in the companies’ loyalty and rewards program. Each subsequent purchase earns points, and both companies and their customers can manage the program; for example, to determine the number of points required to receive a free product or service.
Your employer’s marketing staff are heavily promoting the system, offering aggressive discounts on the first year’s fees to sign up inaugural companies. The marketing materials state that the service will be highly reliable and extremely fast for companies and their customers.
At this time, the requirements are complete, and development of the software has just begun. The current schedule will allow companies and their customers to start enrolling in three months.
Your employer intends to use cloud computing resources to host this service, and to have no hardware resources other than ordinary office computers for its developers, testers, and other engineers and managers. Industry-standard web-based application software components will be used to build the system.
Consider the following risk item that was identified during the quality risk analysis process:
“Customized enrollment buttons for a company’s website are not assigned the correct URL for that company’s loyalty program.”
Assume that you have used traceability to determine the logical test cases that cover this risk item.
Which of the following is a positive logical test that is complete, is correct, and covers this risk item?
a) Click rapidly on company enrollment button to see what happens.
b) Click on URL for our home page; check that home page displays.
c) Click on company enrollment button; verify that you go to that company’s enrollment page.
d) Click on company enrollment button; verify that you go to our home page.
Select ONE option.
CORRECT ANSWER: C
a) Incorrect: might cover this risk item, but it is a negative test and does not contain an expected result; it’s a good exploratory negative test for this risk item, though.
b) Incorrect: is a perfectly good positive logical test but does not cover the risk item.
c) Correct: has the input to occur, the correct expected result per the scenario, and relates to the risk item.
d) Incorrect: has the wrong expected result and so is incorrect.
TM-1.5.1 (K3) Use risks, prioritization, test environment and data dependencies, and constraints to develop a test execution schedule which is complete and consistent with respect to the test objectives, test strategy, and test plan.
SAMPLE QUESTION:
Assume that you are working for an ambitious start-up. They are creating a system that will provide customized loyalty and rewards programs for small- and medium-sized companies selling to customers on the Web. These companies enroll themselves on the system’s web store. This allows the companies to create customized buttons, to be placed on their websites, that let customers enroll in the companies’ loyalty and rewards program. Each subsequent purchase earns points, and both companies and their customers can manage the program; for example, to determine the number of points required to receive a free product or service.
Your employer’s marketing staff is heavily promoting the system, offering aggressive discounts on the first year’s fees to sign up inaugural companies. The marketing materials state that the service will be highly reliable and extremely fast for companies and their customers.
At this time, the requirements are complete, and development of the software has just begun. The current schedule will allow companies and their customers to start enrolling in three months.
Your employer intends to use cloud computing resources to host this service, and to have no hardware resources other than ordinary office computers for its developers, testers, and other engineers and managers. Industry-standard web-based application software components will be used to build the system.
You are following a risk-based testing strategy, where likelihood and impact are both assessed on a five-point scale ranging from very low to very high. Consider the following risk item that was identified during the quality risk analysis process:
“Customized enrollment buttons for a company’s website are not assigned the correct URL for that company’s loyalty program.”
Assume that technical project stakeholders have assessed the likelihood of this risk at a medium level.
Given only the information above, which of the following statements is certainly true?
a) This risk item should be assessed as a very high impact level risk.
b) The test cases associated with this risk item must be run first in the test execution period.
c) The test cases associated with this risk item must be run toward the middle of the test execution period.
d) A large number of test cases should be associated with this risk item, based on impact.
Select ONE option.
CORRECT ANSWER: A
a) Correct: this risk relates to the core functionality of the application.
b) Incorrect: tests with very high impact and higher likelihood should run before this test.
c) Incorrect: is not certainly true because we don’t know how this risk item relates to other risk items.
d) Incorrect: is not certainly true because we don’t know how effort allocation is determined based on combined impact and likelihood.
TM-1.6.1 (K3) Use traceability to monitor test progress for completeness and consistency with the test objectives, test strategy, and test plan.
SAMPLE QUESTION:
In a given company, testing is expected to follow a risk-based testing strategy. Assume the project is currently in test execution. For the following tests, the values given represent the test identifier, the risk level, the identifier for the requirement covered by the test, and the current test status, respectively.
Test ID | Risk Level | Requirement ID | Status
- 02.007 | Very high | 09.003 | Fail
- 02.010 | High | 09.003 | Ready to run
- 02.019 | Very low | 09.020 | Pass
Which of the following statements are true?
a) The test sequencing is certainly incorrect, since test 02.010 is higher risk than 02.019.
b) If the test plan calls for running at least one test for each requirement as early as possible, the test sequencing might be correct.
c) The test manager should stop test execution while evaluating all problems that exist with test sequencing.
d) Running test 02.019 was a waste of time, because it did not find any defects.
e) The test team might not be following the test strategy, since test 02.010 is higher risk than 02.019.
Select TWO options.
CORRECT ANSWER: B and E
a) Incorrect: the situation in option B, or perhaps simply blockage of tests, can explain running tests out of risk order.
b) Correct: 02.019 covers a different requirement than 02.010.
c) Incorrect: while evaluating problems with test sequencing makes sense, there is no need to stop running tests while doing so.
d) Incorrect: finding defects is not the only objective of testing.
e) Correct: higher-risk tests precede lower-risk tests in risk-based testing strategies.
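As an illustration of the reasoning behind options B and E, here is a small monitoring sketch in Python. The numeric risk ranking and field names are assumptions for this example; it flags that 02.019 ran before the higher-risk 02.010, while noting that covering a different requirement may legitimately explain the order.

```python
# Hypothetical progress-monitoring check against risk-based ordering.
# Data mirrors the table above; the numeric risk ranking is an assumption.

RISK_RANK = {"Very high": 4, "High": 3, "Medium": 2, "Low": 1, "Very low": 0}

tests = [
    {"id": "02.007", "risk": "Very high", "req": "09.003", "status": "Fail"},
    {"id": "02.010", "risk": "High",      "req": "09.003", "status": "Ready to run"},
    {"id": "02.019", "risk": "Very low",  "req": "09.020", "status": "Pass"},
]

executed = [t for t in tests if t["status"] in ("Pass", "Fail")]
waiting  = [t for t in tests if t["status"] == "Ready to run"]

for done in executed:
    for pending in waiting:
        if RISK_RANK[pending["risk"]] > RISK_RANK[done["risk"]]:
            if pending["req"] == done["req"]:
                # Same requirement: hard to justify under a risk-based strategy.
                print(f"Deviation: {done['id']} ran before higher-risk {pending['id']}")
            else:
                # Different requirement: may be fine if the plan requires covering
                # each requirement early (option B), so flag it for follow-up only.
                print(f"Check: {done['id']} ran before higher-risk {pending['id']} "
                      f"(covers different requirement {done['req']})")
```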
TM-1.7.1 (K2) Explain the importance of accurate and timely information collection during the test process to support accurate reporting and evaluation against exit criteria.
Timely and accurate information collection is required at all stages of the test process because that information will be required for periodic reporting to stakeholders as well as evaluation of progress against the project’s exit criteria. Without this information collection being timely or accurate, both stakeholder reports and exit criteria evaluation will be seriously impeded.
TM-1.8.1 (K2) Summarize the four groups of test closure activities.
1 - TEST COMPLETION CHECK - All planned tests run or skipped, known defects fixed and confirmation tested (or deferred/accepted)
2 - TEST ARTIFACTS HANDOVER - Deferred or accepted defects communicated to those who’ll use and support the system, tests/test environment given to those responsible for maintenance testing, regression test sets (automated or manual) documented and delivered to the maintenance team, etc.
3 - LESSONS LEARNED (RETROSPECTIVE) - Review whether (a) user representation was broad enough in risk analysis, (b) estimates accurate, and if not why not, (c) trends and results of defect cause-and-effect analysis, (d) process improvement opportunities, (e) unanticipated variances that must be accommodated for in the future
4 - ARCHIVE RESULTS - archive logs, reports, other documents and work products in the configuration management system (e.g., test plan, project plan, with clear linkage to the system and version they were used on)
TM-1.8.2 (K3) Implement a project retrospective to evaluate processes and discover areas to improve.
SAMPLE QUESTION:
Assume that you are working for an ambitious start-up. They are creating a system that will provide customized loyalty and rewards programs for small- and medium-sized businesses selling to customers on the web. These companies enroll themselves on the system’s web store. This allows the companies to create customized buttons, to be placed on their websites, that let customers enroll in the companies’ loyalty and rewards program. Each subsequent purchase earns points, and both companies and their customers can manage the program; for example, to determine the number of points required to receive a free product or service.
Your employer’s marketing staff is heavily promoting the system, offering aggressive discounts on the first year’s fees to sign up inaugural companies. The marketing materials state that the service will be highly reliable and extremely fast for companies and their customers.
At this time, the requirements are complete, and development of the software has just begun. The current schedule will allow companies and their customers to enroll starting in three months.
Your employer intends to use cloud computing resources to host this service, and to have no hardware resources other than ordinary office computers for its developers, testers, and other engineers and managers. Industry-standard web-based application software components will be used to build the system.
Assume that the project has completed the initial release, and the system has been in use by companies and their customers for a month now. Your team used a blended risk-based, requirements-based, and reactive testing strategy. In the quality risk analysis, button customization was assessed as the lowest-risk area, while enrollment was assessed as the highest-risk area. You are implementing a retrospective for the testing work.
Which of the following areas should be considered in this retrospective?
a) Evaluating whether significant problems have been reported by users in button customization.
b) Determining the level of detail required for enrollment, customization and point management test cases.
c) Identifying enrollment problems that are affecting companies or their customers.
d) Delivering the known defects and failed tests to the system support team.
e) Measuring the coverage of the enrollment requirements and reporting that to project and business stakeholders.
Select TWO options.
CORRECT ANSWER: A and C
a) Correct: we want to analyze defect information to evaluate whether the quality risk analysis was correct in a retrospective.
b) Incorrect: this is supposed to happen during implementation.
c) Correct: enrollment is a key requirement area, and test retrospectives should check whether defects were missed in such areas under a requirements-based test strategy.
d) Incorrect: while this is part of test closure, it is not part of the retrospective.
e) Incorrect: this is part of test control.
TM-2.2.1 (K4) Analyze the stakeholders, circumstances, and the needs of a software project or program, including the SDLC model, and identify the optimal test activities.
SAMPLE QUESTION:
Assume that you are managing the testing of a mature application. This application is an online dating service that allows users: to enter a profile of themselves; to meet people who would be a good match for them; to arrange social events with those people; and, to block people they don’t want to contact them.
Consider the following groups of individuals:
1. Users of the application who are searching for dates
2. Managers and shareholders of the company
3. Married couples who used the application to find their mate
4. Employees of government agencies
Consider the following list of test activities.
A. Testing the affinity of matches proposed by the application
B. Testing the ability of the application to charge users correctly
C. Testing the ability of the application to comply with local tax regulations
Based only on the information given here, which of the following statements correctly matches current stakeholders with one or more of their testing interests?
a) 1 – A, B; 2 – A, B, C; 3 – B; 4 – C
b) 1 – A, B; 2 – A, B, C; 4 – A, C
c) 1 – A, B, C; 2 – A, B, C; 4 – C
d) 1 – A, B; 2 – A, B, C; 4 – C
Select ONE option.
CORRECT ANSWER: D
a) Incorrect: married customers are not current customers (unless they are cheating on their spouse) and thus shouldn’t really care if invoicing is working correctly.
b) Incorrect: government employees wouldn’t really care about how well the matching works, except for those employees who are users of the application (which has nothing to do with being an employee of a government agency).
c) Incorrect: the users really don’t have much concern about whether the company is paying the proper taxes, as long as the user is being charged properly.
d) Correct: users care about receiving the service they are paying for, at the agreed price; managers and shareholders must care about all three types of tests, so that they have satisfied customers, a profitable company, and legal compliance; government agents care about compliance with the rules; and married couples are not current stakeholders.
TM-2.2.2 (K2) Understand how SDLC activities and work products affect testing, and how testing affects SDLC activities and work products.
1 - REQUIREMENTS - Consider these for scoping and estimation of test effort, as well as remaining aware of changes to requirements and exercising test control actions to adjust to those changes. Technical Test Analysts and Test Analysts should engage in requirements reviews
2 - PROJECT MGMT - Test Manager must provide schedule and resource requirements to Project Manager and work with them to understand changes to the project plan and exercise test control actions to adjust to those changes
3 - CONFIG / RELEASE / CHANGE MGMT - Establish the test object delivery processes and mechanisms, captured in the test plan. Test Manager may ask test analysts and technical test analysts to create build verification tests and ensure version control during test execution.
4 - SOFTWARE DEV AND MAINTENANCE - Work with dev managers to coordinate delivery of test objects, including content and dates of test releases, as well as participating in defect management.
5 - TECH SUPPORT - Ensure proper delivery of test results during test closure so support team is aware of known failures and workarounds, as well as to analyze prod failures and implement test process improvements.
6 - PRODUCTION OF TECHNICAL DOCUMENTATION - Ensure delivery of documentation for testing in a timely fashion, as well as the management of defects found in those documents.
TM-2.2.3 (K2) Explain ways to manage the test management issues associated with experience-based testing and non-functional testing.
1 - NON-FUNCTIONAL TESTING
(1. 1) begin non-functional testing before functional testing is complete; defects found by performance and security tests can be expensive and time-consuming to fix, so they create scheduling problems if found late in the iteration. With this in mind, prioritize and sequence non-functional tests according to project risk as well as product risk
(1. 2) for non-functional testing requiring construction of significant test frameworks where the timescale of testing may not fit in the team’s iterations, plan the design and implementation of these activities outside of iterations with resourcing adjusted as necessary
2 - EXPERIENCE-BASED TESTING
(2. 1) time-boxed test sessions with focused test charters to prevent overlap between multiple testers/sessions
(2. 2) self-directed testing sessions are allocated to testers to create exploratory variations on pre-defined tests
TM-2.3.1 (K2) Explain the different ways that risk-based testing responds to risks.
Risk-based testing responds to risks by identifying quality risks in an analysis with stakeholders, then designing, implementing, and executing tests to mitigate the identified quality risks, with testing effort allocated in proportion to risk level.
TM-2.3.2 (K2) Explain, giving examples, different techniques for product risk analysis.
1 - LIGHTWEIGHT TECHNIQUES - blend the responsiveness and flexibility of early-phase informal approaches with the power and cross-functional consensus building of more formal approaches, capturing the benefits while minimizing cost. Generally use a risk matrix or similar generated output as the basis for the test plan & test conditions, with residual risks reported.
(1. 1) PRAM (Pragmatic Risk Analysis & Management) - encourages a blend of risk- and requirements-based strategy - requirement specifications helpful but optional
(1. 2) SST (Systematic Software Testing) - requires specifications as risk analysis input
(1. 3) PRisMa (Product Risk Management) - like PRAM, encourages a blend of risk- and requirements-based strategy - requirement specifications helpful but optional
2 - HEAVYWEIGHT TECHNIQUES
(2. 1) Hazard Analysis - extends the analytical process upstream, attempting to identify the hazards that underlie each risk
(2. 2) Cost of Exposure - for each quality risk item, determine (a) the likelihood of failure, (b) the cost of loss should the failure occur, and (c) the cost of testing for such failures; testing a risk item pays off when the expected loss, (a) times (b), exceeds (c). A worked sketch follows after this list.
(2. 3) FMEA (Failure Mode & Effect Analysis) - quality risks with causes and effects are given severity, priority, and detection ratings
(2. 4) QFD (Quality Function Deployment) - concerned with quality risks arising from incorrect / insufficient understanding of the customers’ / users’ requirements
(2. 5) FTA (Fault Tree Analysis) - observed & potential failures are subjected to a RCA starting with defects that could cause the failure, then errors/defects that could cause those defects, etc. until all root causes are identified
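A minimal sketch of the Cost of Exposure comparison described above, using invented figures purely for illustration: a risk item is worth testing when its expected loss (likelihood times cost of loss) exceeds the cost of testing for it.

```python
# Illustrative Cost of Exposure calculation; all figures are invented.

risk_items = [
    # (risk item, likelihood of failure, cost of loss if it occurs, cost of testing)
    ("Incorrect loyalty-point calculation", 0.30, 200_000, 15_000),
    ("Wrong currency symbol on invoice",    0.10,   5_000, 10_000),
]

for name, likelihood, cost_of_loss, cost_of_testing in risk_items:
    exposure = likelihood * cost_of_loss          # expected loss
    decision = "test it" if exposure > cost_of_testing else "testing may not pay off"
    print(f"{name}: exposure {exposure:,.0f} vs testing cost {cost_of_testing:,.0f} -> {decision}")
# 60,000 > 15,000 for the first item; 500 < 10,000 for the second.
```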
TM-2.3.3 (K4) Analyze, identify, and assess product quality risks, summarizing the risks and their assessed level of risk based on key project stakeholder perspectives.
SAMPLE QUESTION:
Assume that you are working for an ambitious start-up. They are creating a system that will provide customized loyalty and rewards programs for small- and medium-sized companies selling to customers on the Web. These companies enroll themselves on the system’s web store. This allows the companies to create customized buttons, to be placed on their websites, that let customers enroll in the companies’ loyalty and rewards program. Each subsequent purchase earns points, and both companies and their customers can manage the program; for example, to determine the number of points required to receive a free product or service.
Your employer’s marketing staff are heavily promoting the system, offering aggressive discounts on the first year’s fees to sign up inaugural companies. The marketing materials state that the service will be highly reliable and extremely fast for companies and their customers.
At this time, the requirements are complete, and development of the software has just begun. The current schedule will allow companies and their customers to start enrolling in three months.
Your employer intends to use cloud computing resources to host this service, and to have no hardware resources other than ordinary office computers for its developers, testers, and other engineers and managers. Industry-standard web-based application software components will be used to build the system.
Which of the following are product quality risks for this system?
a) The start-up runs out of money before testing starts.
b) Cloud computing resources are not available quickly enough to support project schedules.
c) The loyalty points calculated are incorrect.
d) Overly aggressive discounts result in a liquidity crisis for the company during the first year.
e) The system has excessive downtime due to memory leaks.
Select TWO options.
CORRECT ANSWER: C and E
a) Incorrect: is a project risk (and a very real one for any start-up).
b) Incorrect: is a project risk, not a quality risk, and it’s also of vanishingly small likelihood given the amazing range of options available in the cloud computing retail market.
c) Correct: calculating loyalty points is a function of the system and functional accuracy is a quality sub-characteristic.
d) Incorrect: is definitely a risk, but it's not related to the quality of the system; rather, it results from the discounts being offered. Specifically, it's an operational risk that can arise after release.
e) Correct: we are promising high reliability and reliability is a quality characteristic.
TM-2.3.4 (K2) Describe how identified product quality risks can be mitigated and managed, appropriate to their assessed level of risk, throughout the lifecycle and the test process.
1 - RISK IDENTIFICATION - using a broad sample of stakeholders, all possible product quality risks are identified and documented.
2 - RISK ASSESSMENT - all identified risks are categorized, assigned other properties (e.g., owner, etc.), and assessed for risk level (likelihood and impact).
3 - RISK MITIGATION - The master test plan and other test plans are created, and tests are designed, implemented, and executed to cover the risks identified in the analysis, with effort assigned in proportion to the level of risk (more meticulous test techniques for higher-risk items, less meticulous techniques for lower-risk items). During the project, remain aware of new information that changes which risks are present or their level of risk, and adjust the risk analysis as needed
4 - RISK MANAGEMENT - occurs throughout the lifecycle. Follow the risk management processes outlined in the Test Policy / Test Strategy to identify not only risks but also their sources and consequences, performing RCAs as needed. Use level of risk to sequence and prioritize testing activities, either running tests in strict risk order (“depth-first”) or running a sampling of tests across different risk items first (“breadth-first”; see the sketch after this list). Monitor residual risk and include it in reporting so that test control, release decisions, etc. can be made based on this knowledge.
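The depth-first and breadth-first sequencing options mentioned in item 4 can be illustrated with a short sketch; the test IDs, risk items, and risk levels are invented for the example.

```python
# Invented example contrasting depth-first and breadth-first risk-based sequencing.
from collections import defaultdict, deque

tests = [
    ("T1", "enrollment",  9), ("T2", "enrollment",  8),
    ("T3", "points calc", 6), ("T4", "points calc", 5),
    ("T5", "buttons",     2),
]  # (test id, risk item, risk level)

# Depth-first: strict risk order, exhausting the highest-risk items first.
depth_first = sorted(tests, key=lambda t: -t[2])

# Breadth-first: one test per risk item per pass (highest risk first),
# so every risk item gets sampled early.
queues = defaultdict(deque)
for test in sorted(tests, key=lambda t: -t[2]):
    queues[test[1]].append(test)
breadth_first = []
while any(queues.values()):
    for item in list(queues):
        if queues[item]:
            breadth_first.append(queues[item].popleft())

print([t[0] for t in depth_first])    # ['T1', 'T2', 'T3', 'T4', 'T5']
print([t[0] for t in breadth_first])  # ['T1', 'T3', 'T5', 'T2', 'T4']
```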
TM-2.3.5 (K2) Give examples of different options for test selection, test prioritization and effort allocation.
1 - TEST SELECTION
- a REQUIREMENTS-BASED TESTING
- a.i AMBIGUITY REVIEWS - identify & eliminate ambiguities in requirements, often using checklists of common problems.
- a.ii TEST CONDITION ANALYSIS - close reading of requirements to identify test conditions to cover (using requirements priority if available)
- a.iii CAUSE-EFFECT GRAPHING - use a tool to graph causes and effects to reduce an extremely large testing problem into a manageable number of test cases and still provide 100% functional coverage of test basis.
- b - MODEL-BASED - a lack of requirements can be compensated for by creating usage or operational profiles, allowing testing of non-functional characteristics as well as functionality
- c - METHODICAL - Methodical approaches like checklists of major functional and non-functional areas can be used, but these approaches are less valid the larger the change being made
- d - REACTIVE - very few test activities prior to test execution. Instead, the test team reacts to the product as delivered, focuses on defect clusters, and prioritizes/allocates dynamically during testing. Can complement other approaches, but may miss areas of the application that contain only a few, yet significant, defects, resulting in isolated but significant failures.
2 - TEST PRIORITIZATION & EFFORT ALLOCATION - consider the degree to which risk, requirements, and/or usage profiles will evolve and respond accordingly in allocation & prioritization
TM-2.4.1 (K4) Analyze given samples of test policies and test strategies, and create master test plans, level test plans, and other test work products that are complete and consistent with these documents.
SAMPLE QUESTION:
Assume that you are managing the testing of a mature application. This application is an online dating service that allows users: to enter a profile of themselves; to meet people who would be a good match for them; to arrange social events with those people; and, to block people they don’t want to contact them.
Assume that the test policy defines the following mission for the test organization, in priority order:
- Find defects
- Reduce risk
- Build confidence
Assume further that your manager has defined the highest priority test process improvement for the test organization in the coming year to be achieving maximum possible automation of the regression tests for the application.
Which of the following statements is correct?
a) The application and the mission statement are aligned, but the test process improvement is misaligned with the application and the mission statement.
b) The application and the test process improvement are aligned, but the mission statement is misaligned with the application and test process improvement.
c) The application, the mission statement, and the test process improvement are all aligned.
d) The application, the mission statement, and the test process improvement are all misaligned with each other.
Select ONE option.
CORRECT ANSWER: B
a) Incorrect: for the reasons stated for the correct answer.
b) Correct: for a mature application, the main mission of testing is really building confidence that the application continues to work properly. Automated regression testing helps achieve that efficiently, so the test process improvement and the application are aligned. While the idea of automating the regression testing for this mature application is a good one, automation does not tend to find many defects. So, the mission statement is not aligned with the test process improvement, or with the real test needs of a mature application.
c) Incorrect: for the reasons stated for the correct answer.
d) Incorrect: for the reasons stated for the correct answer.
TM-2.4.2 (K4) For a given project, analyze project risks and select appropriate risk management options (i.e., mitigation, contingency, transference, and/or acceptance)
SAMPLE QUESTION:
Assume that you are working for an ambitious start-up. They are creating a system that will provide customized loyalty and rewards programs for small- and medium-sized businesses selling to customers on the web. These companies enroll themselves on the system’s web store. This allows the companies to create customized buttons, to be placed on their websites, that let customers enroll in the companies’ loyalty and rewards program. Each subsequent purchase earns points, and both companies and their customers can manage the program; for example, to determine the number of points required to receive a free product or service.
Your employer’s marketing staff is heavily promoting the system, offering aggressive discounts on the first year’s fees to sign up inaugural companies. The marketing materials state that the service will be highly reliable and extremely fast for companies and their customers.
At this time, the requirements are complete, and development of the software has just begun. The current schedule will allow companies and their customers to enroll starting in three months.
Your employer intends to use cloud computing resources to host this service, and to have no hardware resources other than ordinary office computers for its developers, testers, and other engineers and managers. Industry-standard web-based application software components will be used to build the system.
Assume that you are writing a master test plan for this project and are currently working on the project risks section of the plan.
Which of the following topics should NOT be addressed in this section of the test plan?
a) Inability to provision a test environment by the planned test execution start date.
b) Inability to locate sufficient skilled and certified testers, especially senior testers.
c) Resignation of senior marketing staff prior to introduction of the service.
d) Insufficient resources to acquire suitable number of virtual users for load testing.
Select ONE option.
CORRECT ANSWER: C
a) Incorrect: problems with test environment readiness are classic test-related project risks.
b) Incorrect: problems with test staff availability and qualification are classic test-related project risks.
c) Correct: while this is a significant project risk, it is not a test-related project risk. What the test team needs from the marketing team—the requirements—are already complete.
d) Incorrect: problems with tool readiness are classic test-related project risks.
TM-2.4.3 (K2) Describe, giving examples, how test strategies affect test activities.
1 - ANALYTICAL STRATEGIES - Where the test team analyzes the test basis to identify the test conditions to cover. (e.g., risk-based testing, requirements-based testing)
2 - MODEL-BASED STRATEGIES - Where the test team develops a model (based on actual or anticipated situations) of the environment in which the system exists, the inputs and conditions to which the system is subjected, and how the system should behave. (e.g., operational profiling, model-based performance testing of a fast-growing mobile app)
3 - METHODICAL STRATEGIES - Where the test team uses a predetermined set of test conditions, such as a quality standard (e.g., ISO 25000 [ISO25000], which is replacing ISO 9126 [ISO9126]), a checklist, or a collection of generalized, logical test conditions which may relate to a particular domain, application, or type of testing (e.g., security testing) and uses that set of test conditions from one iteration to the next. (e.g., quality characteristic-based testing, maintenance testing)
4 - PROCESS- OR STANDARD-COMPLIANT STRATEGIES - Where the test team follows a set of processes defined by a standards committee or other panel of experts, where the processes address documentation, the proper identification and use of the test basis and test oracle(s), and the organization of the test team. (e.g., test analysis under Scrum Agile processes)
5 - REACTIVE STRATEGIES - Where the test team waits to design and implement tests until the software is received, reacting to the actual system under test. (e.g., defect-based attacks, exploratory testing)
6 - CONSULTATIVE STRATEGIES - Where the test team relies on the input of one or more key stakeholders to determine the test conditions to cover. (e.g., user-directed testing, outsourced compatibility testing)
7 - REGRESSION-AVERSE STRATEGIES - Where the test team uses various techniques to manage the risk of regression, especially functional and/or non-functional regression test automation at one or more levels. (e.g., extensive automation testing)
TM-2.4.4 (K3) Define documentation norms and templates for test work products that will fit organization, lifecycle, and project needs, adapting available templates from standards bodies where applicable.
SAMPLE QUESTION:
Assume you are a test manager on a project which is following an Agile lifecycle. The testing strategy is a blend of risk-based testing, process-compliant testing, and reactive testing. Developers are following known Agile best practices, including automated unit testing and continuous integration.
You are defining guidelines for documenting various test work products. Which of the following statements is true?
a) You should follow the IEEE 829 standard, since you are following a process-compliant test strategy.
b) You may tailor a set of templates from various sources, including the IEEE 829 standard.
c) You should follow the IEEE 829 standard, because it was designed for use in any industry.
d) You may omit documentation of test work altogether, except for defect reports.
Select ONE option.
CORRECT ANSWER: B
a) Incorrect: the process being complied with in this case is the Agile methodology, not IEEE 829.
b) Correct: agile lifecycles emphasize lightweight documentation.
c) Incorrect: IEEE 829 is documentation-heavy and thus incompatible with Agile philosophies on documentation and with reactive test strategies.
d) Incorrect: even reactive tests have charters and even Agile lifecycles have acceptance criteria.
TM-2.5.1 (K3) For a given project, create an estimate for all test process activities, using all applicable estimation techniques.
SAMPLE QUESTION:
Assume you are a test manager on a project which is following an Agile lifecycle. The testing strategy is a blend of risk-based testing, process-compliant testing, and reactive testing. Developers are following known Agile best practices, including automated unit testing and continuous integration.
You are estimating the system test effort required for a particular iteration by your test team. Which of the following statements correctly describe how you should carry out estimation in this scenario?
a) Consider the average effort required per identified risk in past iterations.
b) Allocate time-boxed test sessions for each identified test charter.
c) Estimate that most defects will be found during system test execution.
d) Include effort to create detailed test work product documentation.
e) Assume that system tests can reuse unit test data and environments.
Select TWO options.
CORRECT ANSWER: A and B
a) Correct: considering historical averages for estimation is one recognized estimation technique.
b) Correct: this is a common technique for managing experience-based testing and has estimation implications.
c) Incorrect: as cited in the syllabus, developers following known Agile best practices will remove as many as half the defects prior to system testing.
d) Incorrect: Agile methods eschew highly detailed documentation, including test documentation.
e) Incorrect: there is nothing in the scenario to make this re-use necessary or likely.
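As a concrete illustration of option (a), here is a tiny estimation sketch with invented numbers: the effort for the coming iteration is derived from the average effort per identified risk item observed in past iterations.

```python
# Invented example: estimate system test effort from historical effort per risk item.

past_iterations = [
    {"risks": 12, "effort_hours": 96},
    {"risks": 10, "effort_hours": 85},
    {"risks": 15, "effort_hours": 120},
]

avg_per_risk = (sum(i["effort_hours"] for i in past_iterations)
                / sum(i["risks"] for i in past_iterations))

risks_this_iteration = 14
estimate = risks_this_iteration * avg_per_risk
print(f"~{avg_per_risk:.1f} hours per risk -> ~{estimate:.0f} hours for this iteration")
# 301 hours / 37 risks is about 8.1 hours per risk -> about 114 hours for 14 risks
```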
TM-2.5.2 (K2) Understand and give examples of factors which may influence test estimates.
1 - Required level of quality
2 - System Size
3 - Historical Data from previous testing
4 - Process Factors (test strategy, SDLC, process maturity, accuracy of project estimate)
5 - Material Factors (test automation and tools, test environment(s), test data, dev environment(s), project documentation (e.g., requirements, etc.), and reusable test work products)
6 - People Factors (managers and technical leads, management commitments and expectations, project team’s skills, experience, and attitudes, stability of the project team, relationships, test and debug environment support, and domain knowledge)
7 - Complexity (of the process, technology, organization, number of testing stakeholders, composition and location of sub-teams)
8 - Significant ramp up, training, and orientation needs
9 - Assimilation or development of new tools, technology, processes, techniques, custom hardware, or a large quantity of testware
10 - Requirements for a high degree of detailed test specification, especially to comply with an unfamiliar standard of documentation
11 - Complex timing of component arrival, especially for integration testing and test development
12 - Fragile test data (e.g., data that is time-sensitive)
TM-2.6.1 (K2) Describe and compare typical testing related metrics
(1) PROJECT METRICS
- Measure progress towards established project exit criteria (e.g., % test cases executed, passed, failed)
- What this ISTQB syllabus is most focused on
(2) PRODUCT METRICS
- Measure some attribute of the product (e.g., defect density or the extent to which it has been tested)
(3) PROCESS METRICS
- Measure the capability of the testing or development process (e.g., % defects detected by testing)
(4) PEOPLE METRICS
- Measure the capability of individuals or groups (e.g., implementation of test cases within a given schedule).
- Most sensitive and most prone to mistakes, such as mistaking process metrics for people metrics, leading to disastrous results when people act to skew the metrics in a way that's favourable to them.
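A short sketch, with invented counts, showing how one metric from each of the first three categories above might be computed; the formulas are common interpretations of the parenthetical examples, not prescribed by the syllabus, and no people metric is computed given the caution above.

```python
# Invented counts illustrating one project, one product, and one process metric.

planned, executed, passed = 200, 150, 120            # test cases
defects_in_test, defects_in_production = 90, 10      # defects found before/after release
size_kloc = 25                                       # product size in KLOC

# Project metric: progress toward exit criteria.
pct_executed = 100 * executed / planned
pct_passed_of_executed = 100 * passed / executed

# Product metric: defect density.
defect_density = (defects_in_test + defects_in_production) / size_kloc

# Process metric: percentage of defects detected by testing.
pct_detected_by_testing = 100 * defects_in_test / (defects_in_test + defects_in_production)

print(f"executed {pct_executed:.0f}% of planned tests, {pct_passed_of_executed:.0f}% of executed passed")
print(f"defect density {defect_density:.1f} defects/KLOC, "
      f"{pct_detected_by_testing:.0f}% of defects detected by testing")
# 75% executed, 80% passed; 4.0 defects/KLOC; 90% detected by testing
```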
TM-2.6.2 (K2) Compare the different dimensions of test progress monitoring
(1) PRODUCT (QUALITY) RISKS
- % risks completely covered by passing tests
- % risks for which some or all tests fail
- % risks not yet completely tested
- % risks covered, sorted by risk category
- % risks identified after initial quality risk analysis
(2) DEFECTS
- Cumulative number reported/found vs cumulative number resolved/fixed.
- Mean time between failure or failure arrival rate.
- Breakdown of the # or % of defects categorized by the following: Particular test items/components, root causes, source of defect, test releases, phase introduced/detected/removed, priority/severity, reports rejected/duplicated.
- Trends in the lag time between reporting and resolution.
- Number of defect fixes that introduced new defects (sometimes called daughter bugs).
(3) TESTS
- Total # tests planned, specified (implemented), run, passed, failed, blocked, and skipped.
- Regression and confirmation test status, including trends and totals for regression test and confirmation test failures.
- Hours of testing planned per day vs actual hours achieved.
- Availability of the test environment (% of planned test hours when the test environment is usable by the test team).
(4) COVERAGE
- Requirements and design elements coverage
- Risk coverage
- Environment/configuration coverage
- Code coverage
- Note: structural code coverage usually relates to lower test levels such as unit and integration testing. It is critical, but Test Managers should be aware that even 100% structural coverage does not mean all defects and quality risks have been addressed; at higher test levels, the primary coverage bases are work products such as requirements and design specifications, use cases, user stories, and product risks.
(5) CONFIDENCE