Fundamentals of Testing Flashcards
Why is testing necessary?
- Helps achieve agreed-upon goals
- Is NOT limited to the test team
Testing contributes to success:
- Cost-effective way to find defects
- Evaluation of quality leads to well-informed decisions
- Testers represent the users in the development project
- Sometimes required (contractually, legally, industry-specific)
How does testing reduce risk?
Testing reduces the risk of failures in operation by:
- Involving testers in design reviews and refinements, preventing design defects and identifying tests early
- Working with developers to increase understanding of the code, reducing code and test defects
- Verifying and validating the software before go-live, detecting and fixing failures
- Helping the product better meet stakeholder requirements
Is it possible to test everything in software?
No. Exhaustive testing is impossible except in trivial cases; time and resource constraints make it impractical to test all possible inputs and scenarios.
True or False: Testing is enough when you’ve done what you planned.
False. Plans may not account for unforeseen changes or adjustments.
True or False: Testing is enough when your customer or user is happy.
False. Testing done internally may not be visible to the customer, and their satisfaction is only apparent after product release.
Why can’t we say a system is “proven” to work correctly?
Because there is never absolute proof that a system works without bugs.
Example: You can prove that no bugs were found in executed test cases, but not that the system is bug-free.
When is testing enough according to confidence?
Testing is enough when the team is confident the system works as intended.
Note: Confidence is key to deciding whether the product is ready for release.
How does risk affect the amount of testing needed?
Higher risks, like potential loss of life or significant failure impact, require more testing. Lower-risk systems, like simple web applications, need less testing.
What are some risks that influence the amount of testing?
Risks in software testing refer to potential problems that could arise if the software does not work as expected. These risks guide how much effort and resources should be allocated to testing.
Here are the key risks, explained with examples:
- Missing Important Faults
Explanation: There is a risk of not detecting critical defects before release.
Example: In a banking app, if a bug allows unauthorized access to accounts, the consequences could be catastrophic. This risk requires rigorous testing of security features.
- Incurring Failure Costs
Explanation: Bugs discovered after release can lead to significant costs for fixing them and handling the impact.
Example: A payment gateway error that double-charges customers might lead to refunds, legal issues, and customer dissatisfaction.
- Releasing Untested or Under-Tested Software
Explanation: Not testing enough might result in the product failing in production.
Example: A mobile app crashes on certain devices because compatibility was not thoroughly tested.
- Losing Credibility and Market Share
Explanation: If users encounter bugs frequently, they might lose trust in the software and switch to competitors.
Example: A video streaming platform with constant buffering issues will lose users to competitors like Netflix or YouTube.
- Missing a Market Window
Explanation: Delaying a release due to excessive testing can result in losing the opportunity to enter the market at the right time.
Example: If an e-commerce app isn't ready for a Black Friday launch, the business misses out on a critical sales period.
- Over-Testing or Ineffective Testing
Explanation: Spending too much time and resources on testing low-risk parts of the system leads to wasted effort.
Example: Exhaustively testing the UI color scheme of an app while neglecting backend performance testing.
Practical Summary
The amount of testing required is determined by assessing risks like:
What happens if a bug is missed?
How much will it cost to fix issues later?
How much will users or the business be impacted?
By understanding these risks, you can focus testing on the most critical areas, ensuring quality without wasting time or resources.
What is risk-based testing?
A testing approach that prioritizes testing based on risk.
How does risk-based testing prioritize tests?
Determine what to test first.
Focus on the most critical areas.
Decide how thoroughly to test each item.
Allocate time effectively.
What is the main principle of prioritizing tests in risk-based testing?
At any point, testing should focus on the most important areas so that the best testing is done within the available time.
Example: Testing login functionality of a banking app is higher priority than testing a font size change in the UI.
What is Quality Management, and where do QA and QC fit into it?
Quality Management is the broader umbrella covering all activities to direct and control an organization regarding quality.
Quality Assurance (QA) sits under Quality Management and focuses on improving processes to ensure the final product meets quality standards.
Quality Control (QC) also sits under Quality Management and focuses on checking the actual product to find and correct defects (testing is a core QC activity).
Important Note:
Quality Management → includes QA and QC.
QA → process improvements (preventive).
QC → product checks (corrective).
What is the relationship between errors, defects, and failures in software?
An error (human mistake) may lead to a defect (a fault/bug in the code), which can then cause a failure (when the software deviates from expected behavior).
Important Note:
Error → Defect → Failure
A defect does not always lead to a failure. Sometimes the code path with the defect is never executed or the defect conditions never occur.
According to ISTQB terminology, which term best describes a human action producing an incorrect result in software?
Error
Explanation:
An error is a mistake made by a human (developer, designer, tester, etc.) that can lead to defects in software artifacts.
Example: A developer forgetting to handle invalid input in a date field.
In ISTQB terms, what is a “failure”?
A failure is an observable deviation of the software from its expected outcome or service.
Example: When entering letters into a date field causes a crash instead of a friendly “invalid input” message.
True or False: A system can be considered reliable even if it contains faults.
True
Explanation:
Software can have defects but still be perceived as reliable if those defects are not triggered or do not significantly impact the user.
Comparison: Think of a large, popular application that rarely crashes for most users. It still has hidden defects, but those defects don’t affect the majority of workflows.
Which statement best describes a “root cause”?
1. It is the line of code where the failure is observed.
2. It is the earliest action or condition that created a defect.
3. It is the final user complaint about a software bug.
- The root cause is the earliest action or condition that led to the defect.
Important Note:
Root cause analysis aims to prevent similar defects by addressing the underlying issue (e.g., a knowledge gap, miscommunication, or unclear requirements).
When is root cause analysis typically performed, and why is it useful?
It is performed after a failure is observed in an environment (e.g., test, production). It’s useful because identifying the root cause helps improve processes and prevent similar defects in the future.
Example: Using the “5 Whys” technique to trace an incorrect bonus calculation back to unclear requirements provided by a product owner.
Where do defects occur?
Documentation
- Requirement specifications
- Test scripts
Source code
Any supporting artifacts
Basically, defects can be found in any stage of the SDLC in any work product, but the earlier they are found, the cheaper they are to fix.
Can you explain the relationship between error, defect, and failure with an example?
Error: A developer forgets to validate user input for a date field.
Defect: The code does not handle invalid inputs, allowing letters in the date field.
Failure: The software crashes when a user enters letters into the date field.
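A minimal Python sketch of this chain (the parse_date function and its inputs are invented for illustration): the defect lies dormant until invalid input executes the faulty path, at which point a failure is observed.

```python
from datetime import date

# Defect: the developer's error (forgotten validation) means non-numeric
# input reaches int() unchecked.
def parse_date(day: str, month: str, year: str) -> date:
    return date(int(year), int(month), int(day))  # no input validation

# No failure here: the defective code still works for valid input.
parse_date("24", "12", "2023")

# Failure: invalid input triggers the defect, and the program raises an
# unhandled ValueError instead of showing an "invalid input" message.
try:
    parse_date("ab", "12", "2023")
except ValueError as exc:
    print(f"Observed failure: {exc}")
```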
Why do defects occur in software development?
Defects occur due to a variety of reasons, including:
Human Errors: Mistakes made by developers, testers, or other stakeholders.
Time Pressure: Deadlines reduce focus and attention to detail.
Complex Systems: Complex processes or technologies increase the likelihood of mistakes.
Miscommunication: Gaps in understanding between stakeholders lead to unclear or incorrect requirements.
Inexperienced Staff: Lack of skills or experience increases errors.
Environmental Conditions: Factors like hardware issues, radiation, or pollution can impact software or firmware.
Blind Spots: People may not notice their own mistakes when reviewing their work.
Why is it important to find defects early?
Cost of Fixing Defects: Defects are cheaper to fix when found early in the software development lifecycle (SDLC).
Example: Correcting a typo in requirements costs far less than fixing a defect in production.
Key Point:
The earlier a defect is identified, the less impact it has on time, resources, and costs.
What is reliability in software, and how is it related to defects?
Reliability is the probability that software will not cause a failure under specified conditions for a specified time.
Example: A payroll system reliably processes payments without errors for six months.
Relation to Defects:
A reliable system can have defects, as long as those defects do not lead to failures under typical usage.
Can a system be fault-free?
In practice, no. Non-trivial systems always contain some defects (faults). However, not all defects cause failures.
Can a system be reliable with faults?
Yes. A system can be reliable if the faults do not lead to critical failures under normal usage.
Example: A minor spelling mistake in the UI does not affect the system’s reliability.
Is a fault-free system always reliable?
No. A system with zero defects may still fail to meet user needs due to:
Misinterpreted requirements.
Poor performance or scalability.
Lack of usability.
What are false negatives and false positives in testing?
False Negatives: The test fails to detect a defect that exists.
Risk: Critical defects may go unnoticed into production.
Example: A test suite does not cover edge cases, missing a critical bug.
False Positives: The test incorrectly identifies an issue where none exists.
Impact: Wastes time investigating non-existent defects.
Example: A test case fails due to a misconfigured test environment, not a defect.
What is root cause analysis, and why is it important?
Root Cause Analysis (RCA): A method to identify the underlying cause of a defect or failure to prevent recurrence.
Importance:
Helps improve processes to avoid similar defects in the future.
Example - Five Whys:
Why is the bonus calculation wrong? → Because the code is incorrect.
Why is the code incorrect? → Because the user story was unclear.
Why was the user story unclear? → Because the product owner misunderstood requirements.
Why did the product owner misunderstand? → Lack of knowledge about bonus calculations.
Root Cause: Product owner’s lack of knowledge.
What are common sources of defects in the SDLC?
Defects can arise from:
Requirements: Typos, unclear specifications, or misinterpretations.
Test Scripts: Incorrect or incomplete test cases.
Code: Logical errors, unhandled exceptions, or performance issues.
Supporting Artifacts: Faulty configurations, documentation, or setups.
ISTQB Example Question: Which of the following best describes a failure?
A: An incorrect action by a developer.
B: A deviation from the expected behavior during execution.
C: A mistake in the requirements.
D: A defect in the code.
Correct Answer:
B. A failure is the deviation of the software from its expected behavior during execution.
ISTQB Example Question: Which activity is used to prevent the introduction of defects?
A: Testing.
B: Root cause analysis.
C: Quality assurance.
D: Debugging.
Correct Answer:
C. Quality assurance focuses on improving processes to prevent defects.
What is the significance of the “Seven Testing Principles” in software testing?
They are fundamental guidelines that apply to all software development life cycle (SDLC) models, regardless of methodology (Agile, Waterfall, Spiral, etc.).
They help justify and explain why certain testing practices should or should not be used.
What is the meaning of Testing Principle #1 – “Testing Shows Presence of Defects, Not Their Absence.”
Meaning: Testing can prove that defects exist when you find them, but it cannot guarantee that the software has no remaining defects.
Analogy: If you search for a "pink elephant" and find one, you've proven they exist. But not finding one doesn't prove none exist; it just means your search didn't find one.
Practical Example:
Even if all test cases pass, there’s no absolute proof the product is 100% defect-free. You can only reduce risk, not eliminate it.
What is the meaning of Testing Principle #2 – “Exhaustive Testing Is Impossible.”
Meaning: It’s not feasible to test all possible data inputs, configurations, or scenarios.
Implication: Use risk-based testing and test case prioritization to focus on the most critical areas.
Example Calculation:
A small 23-screen application with various inputs and fields could require over half a million tests for complete coverage, taking an impractical amount of time.
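A rough sketch of the arithmetic behind this principle. The field and value counts below are invented for illustration (the half-million figure above comes from the course's own example):

```python
# Hypothetical app: 23 screens, each with a handful of inputs, each input
# taking a modest number of distinct values worth testing.
screens = 23
fields_per_screen = 4
values_per_field = 5

per_screen = values_per_field ** fields_per_screen   # 5^4 = 625 combinations
independent = screens * per_screen                   # if screens don't interact
interacting = per_screen ** screens                  # astronomical if they do

print(f"{independent:,} tests if screens are independent")
print(f"{interacting:.2e} tests if inputs interact across screens")
```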
ISTQB-Style Question:
Which principle is most closely related to using risk-based testing to prioritize what to test first?
Answer: Exhaustive testing is impossible, so risk-based strategies are necessary.
What is the meaning of Testing Principle #3 – “Early Testing Saves Time and Money.”?
Meaning: Defects are cheaper to fix if found early in the SDLC.
Example: Catching a typo in a requirement document takes only minutes to fix. But if the same typo leads to incorrect code and a failure in production, the cost (time, resources, user confidence) is much higher.
What is the meaning of Testing Principle #4 – “Defect Clustering.”?
Meaning: A relatively small number of modules contain the majority of defects (often referenced as the Pareto Principle, or “80/20” rule).
Implication: Focus testing effort on high-risk or problematic areas to improve efficiency.
Example:
In an app with multiple modules, the “image viewer” might consistently have the most bugs due to complexity (e.g., varied browsers, devices, document types). Therefore, testers allocate more testing time there.
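A small sketch (defect counts invented) of how you might check the bug tracker for clustering before reallocating test effort:

```python
from collections import Counter

# Hypothetical defect counts per module, pulled from a bug tracker.
defects = Counter({"image_viewer": 42, "export": 9, "auth": 6,
                   "settings": 2, "search": 1})

total = sum(defects.values())
cumulative = 0
for module, count in defects.most_common():
    cumulative += count
    print(f"{module:12} {count:3} defects, cumulative {cumulative / total:.0%}")
# If the top module or two account for ~80% of all defects, the Pareto
# pattern holds and those modules deserve extra testing attention.
```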
What is the meaning of Testing Principle #5 – “Pesticide Paradox.”?
Meaning: Running the same tests repeatedly leads to diminishing returns; over time, they won’t uncover new defects.
Solution: Review and revise test cases regularly to ensure you’re testing different parts of the software and different data sets.
Example:
If you always use the same input values, eventually you’ll only confirm that previously discovered bugs remain fixed; you might miss new bugs that occur with negative or edge-case values.
What is the meaning of Testing Principle #6 – “Testing Is Context Dependent.”?
Meaning: The type of testing, depth of testing, and approach vary depending on the context (e.g., safety-critical software vs. an e-commerce website).
Example: A medical device requires stricter regulatory compliance, thorough documentation, and meticulous validation, whereas an online retail site might focus more on usability and performance under peak load.
ISTQB-Style Question:
Which principle explains why safety-critical software testing differs from testing a web application?
Answer: “Testing Is Context Dependent.”
What is the meaning of Testing Principle #7 – "Absence of Errors Fallacy"?
Meaning: Finding and fixing defects does not guarantee the software is useful or meets user needs.
Example: A bug-free system can still fail if it’s slow, unusable, or doesn’t match business requirements.
Validation vs. Verification:
Verification: “Are we building the product right?” (checking defects, correctness)
Validation: “Are we building the right product?” (checking if it meets user needs)
What is the test process, and how does it integrate into the software development life cycle (SDLC)?
The test process is a standard set of testing activities that are integrated into the SDLC to ensure quality.
It is not isolated from development but rather overlaps and interacts with it to achieve the stakeholders’ business needs.
The process is flexible, iterative, and can vary based on context, organization, and methodology (e.g., Agile vs. Waterfall).
What are the key factors that influence how a test process is implemented?
Stakeholder needs: Cooperation, availability, and expectations.
Team skills and experience: Affects choice of techniques and documentation levels.
Business domain: Industry-specific requirements, criticality, and risks.
Technical factors: Product architecture, technology stack, and tools available.
Project constraints: Scope, time, budget, and resources.
Organizational practices: Existing policies, SDLC model, and reporting needs.
Example:
High criticality domain (e.g., aviation): Requires detailed documentation, 95% code coverage, and rigorous validation.
Low criticality domain (e.g., e-commerce): May prioritize usability and performance testing with less emphasis on extensive documentation.
What is testware, and why is it important?
Testware includes all the artifacts created during the test process, such as test plans, test cases, test scripts, and defect reports.
It must be properly managed with version control and configuration management to maintain integrity and consistency.
Examples of Testware:
Test Planning: Test strategy, test plan.
Test Execution: Test scripts, test data.
Defect Management: Bug reports, logs.
Tools Used:
Tools like Jira, Confluence, or OneDrive can be used to manage testware effectively.
What are the seven main groups of test activities in the fundamental test process?
Test Planning: Defining test objectives, scope, resources, and schedule.
Test Monitoring and Control: Tracking test progress and taking corrective actions.
Test Analysis: Identifying test conditions and evaluating the test basis.
Test Design: Creating test cases, test data, and test scripts.
Test Implementation: Preparing test environments and combining testware for execution.
Test Execution: Running tests and logging defects.
Test Completion: Archiving testware and analyzing lessons learned.
How do stakeholder cooperation and team experience impact the test process?
Stakeholder Cooperation:
Example: A user acceptance test (UAT) requiring stakeholder input would fail if stakeholders are unavailable or unwilling to cooperate.
Team Experience:
Example: Inexperienced testers may struggle with experience-based testing techniques, requiring more structured approaches like checklists or scripts.
How does the choice of test techniques depend on the test process context?
Low Team Experience: Structured techniques like checklists or detailed scripts.
High Team Experience: Exploratory or experience-based testing.
Critical Systems: Use of white-box techniques for coverage (e.g., 95% code coverage in aviation).
Simple Systems: Focus on black-box or functional techniques.
How do tools and automation influence the test process?
Test Automation Tools: Increase efficiency but require appropriate tool selection based on the project.
Degree of Automation: High for complex, frequently released systems (e.g., microservices), lower for simple or rarely updated systems.
When abstract test cases are created, which phase of the fundamental test process are you in? What about creating test data and scripts?
Test Design. Preparing concrete test data and executable test scripts happens during Test Implementation.
Which factor does not influence the implementation of the test process?
- Stakeholder needs.
- Tool availability.
- Programming language of the development team.
- Team experience.
Correct Answer:
The programming language of the development team. While it might influence automation or tool choices, it does not directly shape the test process.
Why are test policies and test strategies important?
These define the organizational approach to testing, ensuring consistent practices across projects.
They are independent of specific projects and are influenced by the domain (e.g., medicine, aviation, e-commerce).
Example:
A pharmaceutical company may require regulatory compliance strategies that differ from an online retail company’s approach.
How does reporting differ based on organizational practices, like Agile or Waterfall?
Daily Reporting: Suited for Agile teams needing quick feedback on sprint progress.
Summary Reporting: Preferred for Waterfall projects, providing high-level overviews at the end of milestones.
What is test planning, and why is it considered the most important phase of the test process?
Test Planning is the first step of the test process, where the objectives, strategies, and resources for testing are defined.
It is crucial because all subsequent testing activities depend on it, making it the foundation of the test process.
Does test planning always require extensive documentation?
No, the extent of documentation depends on the context and organizational requirements.
While some companies require compliance with IEEE standards, others may prioritize lean documentation, especially in Agile environments.
When should test planning begin, and how should it evolve?
Test planning should begin at the start of the software development project, not just when testing starts.
It is a living document that evolves and can be updated based on feedback from test monitoring and control activities.
Example:
If new requirements are added during development, the test plan can be updated to include additional test cases.
What are the main tasks and activities in test planning?
Defining testing objectives: What does testing aim to achieve?
Selecting the approach: Techniques, tools, and strategies to meet objectives.
Scheduling test activities: Creating a test schedule.
Allocating resources and budget: Determining who and what is needed.
Documenting test scope, risks, and constraints.
Example Task:
Deciding which browsers and devices to test for a web application.
What is testware, and what is its role in test planning?
Testware includes all artifacts produced during the testing process.
In test planning, testware typically includes the test plan, which contains details like the test basis, traceability information, and entry/exit criteria.
What are entry and exit criteria in test planning, and why are they important?
Entry Criteria: Conditions that must be met before testing begins (e.g., availability of test data or test environments).
Exit Criteria: Conditions that define when testing can stop (e.g., all critical test cases pass, defect rate below threshold).
Key Point for ISTQB Exams:
Exit criteria are defined during the planning phase, even though they are used at the end of testing.
How do risks influence test planning?
Risks are identified and documented in a risk register, which includes:
Likelihood: Probability of the risk occurring.
Impact: Severity if the risk materializes.
Mitigation: Actions to reduce the risk.
Example:
In a financial application, a high risk of data loss might lead to prioritizing database testing.
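One common way to turn such a risk register into test priorities is a simple likelihood × impact score; a sketch with invented entries:

```python
# Hypothetical risk register entries, scored on 1-5 scales.
risks = [
    {"name": "data loss during transfer", "likelihood": 3, "impact": 5},
    {"name": "slow report rendering",     "likelihood": 4, "impact": 2},
    {"name": "typo in help text",         "likelihood": 2, "impact": 1},
]

# Higher score = test first; mitigation effort follows the same order.
for risk in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    score = risk["likelihood"] * risk["impact"]
    print(f"score {score:2}: {risk['name']}")
```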
What are some components of a standard test plan?
A standard test plan may include:
Context of testing: Type of test plan, scope, stakeholders.
Risk register: Risks and mitigations.
Testing activities: Detailed steps and estimates.
Resources: Roles, responsibilities, training needs.
Test schedule: Timelines for testing phases.
Test strategy: Design techniques, deliverables, and completion criteria.
Environment requirements: Tools, hardware, and software needed.
Metrics: Data to be collected for evaluation.
Retesting and regression testing: Strategies for repeated testing.
What is the main purpose of exit criteria in test planning?
- To define when testing can start.
- To specify when to stop testing.
- To determine the test schedule.
- To track test execution progress.
- Exit criteria specify when to stop testing.
Which of the following is NOT typically part of a standard test plan?
- Risk register.
- Test data requirements.
- Budget allocation.
- Post-deployment bug fixes.
- Post-deployment bug fixes are not part of test planning; they occur after testing and release.
When should the test plan be updated?
- Only at the beginning of the project.
- After every test cycle.
- When changes occur in the development process.
- At the end of testing.
- Test plans should be updated as needed when changes occur in the development process.
How do industry requirements influence test planning?
Certain industries (e.g., aviation, pharmaceuticals) have strict regulatory requirements for test techniques, coverage, and documentation.
Example: In safety-critical systems, 95% code coverage may be mandated.
What is test monitoring?
Test monitoring is an ongoing activity that involves checking all test activities and comparing actual progress with the test plan.
It ensures that progress is tracked using metrics and determines whether testing is on schedule.
What are three key metrics for monitoring?
- Percentage of planned test cases executed.
- Number of defects found vs. resolved.
- Test coverage (e.g., requirement coverage, code coverage).
What is test control?
Test control involves taking corrective actions to align test activities with the objectives of the test plan.
If progress deviates from the plan, control actions can include:
Adjusting schedules or resources to speed up progress.
Updating the test plan to reflect the new realities.
How do test monitoring and control differ?
- Test Monitoring: Tracks progress and identifies deviations from the plan.
- Test Control: Implements corrective actions based on monitoring insights to ensure objectives are met.
How do test monitoring and control relate to the rest of the test process?
Test monitoring and control are not confined to a single phase.
They overlap all phases of the test process, from planning to completion, ensuring that progress is continuously evaluated and adjustments are made when necessary.
What role do exit criteria play in test monitoring and control?
Exit criteria define the definition of done for testing and are used to evaluate test progress and quality.
Monitoring ensures that the criteria are being met, while control actions may be needed if criteria are not on track.
Examples of Exit Criteria:
All critical defects resolved.
95% of planned test cases executed.
Test coverage goals (e.g., 90% code coverage) achieved.
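A sketch of how such criteria could be checked mechanically against monitoring metrics. The metric names and thresholds are invented to mirror the examples above:

```python
# Hypothetical metrics collected during test monitoring.
metrics = {"open_critical_defects": 0,
           "planned_tests_executed_pct": 96,
           "code_coverage_pct": 91}

# Exit criteria from the test plan, expressed as checks on those metrics.
exit_criteria = {
    "open_critical_defects":      lambda v: v == 0,
    "planned_tests_executed_pct": lambda v: v >= 95,
    "code_coverage_pct":          lambda v: v >= 90,
}

met = all(check(metrics[name]) for name, check in exit_criteria.items())
print("Exit criteria met" if met else "Control action needed: keep testing")
```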
What are test progress reports, and what do they contain?
A test progress report provides a summary of the current testing status during a specific reporting period.
Key Contents of a Test Progress Report:
1. Reporting period: Timeframe covered by the report.
2. Progress against the test plan: Percentage of planned activities completed.
3. Blocking factors: Issues impeding progress.
4. Metrics: Test execution, defect status, coverage, etc.
5. Risks: Newly identified or changed risks.
6. Planned testing: Activities planned for the next period.
What is a summary report, and how is it different from a progress report?
A summary report is a type of progress report generated at the end of a phase, sprint, or release cycle.
It provides a final overview of testing activities and results, often used in go/no-go meetings.
What types of testware are produced during test monitoring and control?
Test Progress Reports: Tracking current status against the plan.
Summary Reports: Providing final testing outcomes.
Control Directives: Documenting corrective actions taken.
Risk Information: Updates on identified and mitigated risks.
How does IEEE standardize test progress reports?
According to IEEE standards, a test progress report should include:
- Reporting period.
- Progress against the test plan.
- Blocking factors.
- Test measures (e.g., defect trends, coverage).
- Newly identified or updated risks.
- Planned testing activities.
Which of the following is NOT part of test progress reporting?
- Reporting period.
- Summary of test results.
- List of test tools used.
- Blocking factors.
- While test tools may influence testing, they are not part of progress reporting.
What is the primary purpose of test control?
- To evaluate the test environment.
- To adjust testing activities based on deviations from the plan.
- To finalize the test summary report.
- To create new test cases.
- Test control involves adjusting activities to align with the test plan.
How does risk management intersect with test monitoring and control?
Risks identified during monitoring may require mitigation actions through test control.
Example: If a critical defect in a core module delays progress, additional resources may be allocated to address it.
What is Test Design? What are the key activities? Give an example.
What Is It?
Test Design transforms test conditions from the analysis phase into detailed test cases (step-by-step actions, expected results).
Key Activities:
- Selecting test techniques (e.g., Equivalence Partitioning, Boundary Value Analysis).
- Defining test cases with inputs, preconditions, and expected outcomes.
- Building test data sets to cover various scenarios.
Other work products may include:
- Test Charters
- Coverage items
- Test environment requirements
- Test data requirements
Example:
For a login feature, you design test cases such as:
Valid username/password
Invalid username
Empty password
Locked account scenario
Comparison:
Test Design is like writing out your cooking steps: You specify the exact measurements, cooking times, and expected taste/texture for each dish.
What is Test Analysis? What are the key activities? Give an example.
What Is It?
Test Analysis is where you examine the test basis (requirements, design documents, risk analyses) to identify what needs to be tested—known as test conditions.
Key Activities:
- Reviewing requirements/designs to understand functionalities and constraints.
- Finding gaps, ambiguities, or errors in the documentation (static testing).
- Defining and prioritizing test conditions (e.g., “Handle invalid credit card number,” “Process multiple shipping addresses”).
Example:
In a banking application, you read the requirement spec that states “The system must allow a transfer of up to $10,000 daily.” You define the test condition: “Check system behavior when transferring $10,000, $9,999.99, and $10,001.”
Comparison:
Test Analysis is like reading a recipe carefully before cooking: You identify which ingredients (features) need special attention and note where mistakes might arise.
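That test condition maps directly onto a parametrized boundary test; a sketch assuming a hypothetical transfer(amount) that returns "accepted" or "rejected":

```python
import pytest
from bank import transfer  # hypothetical function under test

@pytest.mark.parametrize("amount,expected", [
    (9_999.99,  "accepted"),  # just below the daily limit
    (10_000.00, "accepted"),  # exactly at the limit (boundary)
    (10_001.00, "rejected"),  # just above the limit
])
def test_daily_transfer_limit(amount, expected):
    assert transfer(amount) == expected
```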
Why do we evaluate test bases and test objects during test analysis?
Evaluation helps discover defects in the requirements or designs before coding or test execution begins (sometimes referred to as static testing).
Common issues found include omitted details, ambiguous requirements, or contradictions in functionality.
Benefit:
Finding defects at this stage is cheaper and faster than finding them during later phases.
What testware is typically produced during the test analysis phase?
Defined and prioritized test conditions (e.g., acceptance criteria).
Defect reports for any found issues in the test basis (e.g., ambiguous or contradictory requirements).
Additional Outputs:
Lists or spreadsheets detailing features to test, along with traceability to requirements.
What does an IEEE-compliant test design specification typically include?
Feature Sets: Overview of functionalities to be tested.
- Unique ID, Objective, Priority, Strategy, Traceability
Test Conditions:
- Unique ID, Description, Priority, Traceability
Key Point:
Traceability ensures each test condition maps back to specific requirements or risk items.
Which of the following best describes test analysis?
Specifying the structure of test cases.
Translating general test objectives into test conditions by examining the test basis.
Deciding when to stop testing based on entry/exit criteria.
Automating all tests to be run continuously in a CI/CD pipeline.
- Test analysis focuses on turning general objectives into detailed conditions through examination of the test basis.
What is Test Implementation? What are the key activities? Give an example.
What Is It?
Test Implementation is about preparing everything you need to execute the tests. This involves setting up the test environment, verifying test scripts are ready, and confirming test data is in place.
Key Activities:
- Creating or configuring the test environment (hardware, software, network settings).
- Building or finalizing automated test scripts.
- Ensuring test cases and test data are accessible and correct.
Example:
You might spin up a virtual machine with the right OS, database, and application server. Then load a sample user database for the system under test.
Comparison:
Test Implementation is like setting the stage before a play: You prepare the set, place props, and make sure the lighting and sound systems are correct so the performance (test execution) can happen smoothly.
What is Test Execution? What are the key activities? Give an example.
What Is It?
Test Execution is where you run the test cases (manually or via automation) and log the results, including any defects encountered.
Key Activities:
- Executing planned tests in the test environment.
- Recording outcomes (pass/fail, reasons for failure).
- Logging defects with enough details for developers to replicate and fix them.
Example:
You run your login test cases on various browsers. Test case #3 (incorrect password) fails because the system crashes instead of showing an error message. You log a bug in Jira with screenshots and system logs.
Comparison:
Test Execution is like the actual performance of a play: The actors (testers or automation scripts) perform, and you note any mistakes or unexpected issues.
What is Test Completion? What are the key activities? Give an example.
What Is It?
Test Completion wraps up the testing effort by documenting results, archiving test artifacts, and analyzing lessons learned for future improvements.
Key Activities:
- Collecting final metrics (e.g., total tests run, defects found and fixed).
- Creating a test summary report or release notes.
- Archiving testware (test cases, data, logs) for reuse or audits.
- Conducting retrospectives or post-project reviews to improve processes.
Example:
After finishing all test cycles for a mobile app, you produce a test summary showing that 95% of test cases passed. You also record lessons like “We should automate more regression tests for the next release.”
Comparison:
Test Completion is like cleaning up and reviewing after a big event: You compile stats on attendance and feedback, store any reusable materials, and note improvements for next time.
What is the difference between Abstract and Concrete test cases?
An abstract (logical) test case specifies conditions or ranges rather than concrete values, so it can be reused across multiple test cycles with different data.
Example:
Test case 1:
- Input x: x <= 3
- Output: 0
Test case 2:
- Input x: 3 < x <= 10
- Output: 50
A concrete test case uses specific input values and expected results:
Test case 1:
- Input x: x = 2
- Output: 0
Test case 2:
- Input x: x = 7
- Output: 50
It is good practice to design high-level (abstract) test cases first.
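In pytest, parametrization mirrors this split nicely: the test body plays the role of the abstract case, and each parameter row is one concrete test case (compute_fee is a hypothetical function implementing the rules above):

```python
import pytest
from pricing import compute_fee  # hypothetical function under test

# Abstract cases: "x <= 3 yields 0" and "3 < x <= 10 yields 50".
# Each tuple below is one concrete test case derived from them.
@pytest.mark.parametrize("x,expected", [
    (2, 0),   # concrete instance of the x <= 3 partition
    (7, 50),  # concrete instance of the 3 < x <= 10 partition
])
def test_fee_partitions(x, expected):
    assert compute_fee(x) == expected
```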
What are the main categories of test design techniques according to ISTQB?
(1) Specification-based (Black-box),
(2) Structure-based (White-box),
(3) Experience-based.
Define Equivalence Partitioning.
A technique that divides input data into partitions where each partition is expected to behave the same. One test per partition typically suffices.
Which technique emphasizes testing boundary values such as the smallest and largest extremes of valid ranges?
Boundary Value Analysis (BVA).
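A sketch combining both techniques for a field that accepts values 1-100 (validate is a hypothetical checker returning True for valid input): equivalence partitioning contributes one representative per partition, and BVA adds the partition edges.

```python
import pytest
from form import validate  # hypothetical: True iff 1 <= value <= 100

@pytest.mark.parametrize("value,expected", [
    (-5,  False),  # partition: below the valid range
    (50,  True),   # partition: inside the valid range
    (200, False),  # partition: above the valid range
    (0,   False), (1,   True),   # lower boundary and its invalid neighbor
    (100, True),  (101, False),  # upper boundary and its invalid neighbor
])
def test_range_validation(value, expected):
    assert validate(value) == expected
```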
Decision Table testing is best used when…
…there are multiple business rules or conditions that combine to produce different outcomes or results.
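A sketch of the idea with an invented discount rule (two conditions: member status and order size), where each decision-table column becomes one test row. The discount function is hypothetical:

```python
import pytest
from shop import discount  # hypothetical: returns discount percentage

# Decision table: every combination of the two conditions paired with
# the outcome the business rules specify.
@pytest.mark.parametrize("is_member,order_over_100,expected", [
    (True,  True,  15),  # rule 1: member with a large order
    (True,  False, 10),  # rule 2: member with a small order
    (False, True,   5),  # rule 3: non-member with a large order
    (False, False,  0),  # rule 4: non-member with a small order
])
def test_discount_rules(is_member, order_over_100, expected):
    assert discount(is_member, order_over_100) == expected
```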
State Transition testing focuses on what?
The different states of a system and the transitions triggered by events, ensuring correct movement between states (and handling invalid transitions).
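A sketch against a hypothetical Order class with states draft -> submitted -> paid, covering one valid transition path and one invalid transition:

```python
import pytest
from orders import Order, InvalidTransition  # hypothetical module

def test_valid_transition_path():
    order = Order()                  # starts in the "draft" state
    order.submit()
    assert order.state == "submitted"
    order.pay()
    assert order.state == "paid"

def test_invalid_transition_is_rejected():
    order = Order()                  # still in "draft"
    with pytest.raises(InvalidTransition):
        order.pay()                  # paying before submitting is invalid
```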
Statement Coverage vs. Decision Coverage: what's the difference?
Statement Coverage ensures every executable statement runs at least once.
Decision Coverage ensures every decision outcome (true/false) is exercised at least once.
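A tiny invented example of the gap between the two. A single test executes every statement (100% statement coverage), yet the decision's false outcome is never taken until a second test runs:

```python
def apply_discount(price: float, is_member: bool) -> float:
    if is_member:              # the decision: needs both outcomes exercised
        price = price * 0.9    # only statement inside the branch
    return price

# This one test achieves full statement coverage...
assert apply_discount(100.0, True) == 90.0
# ...but decision coverage also requires the False outcome:
assert apply_discount(100.0, False) == 100.0
```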
Exploratory Testing is defined as…
A simultaneous approach to learning, test design, and test execution, adapting testing based on immediate feedback from the system.
Error Guessing is heavily reliant on what?
The tester’s experience and intuition about where defects are most likely to be found.
What is Test Implementation in the ISTQB test process?
The stage where test designs are turned into concrete test assets—test scripts, test data, and test environments—ready for execution.
Name two key tasks involved in test implementation.
Prioritizing and scheduling test cases
Preparing test data and test environment
Why might you do a Dry Run before official test execution?
To verify the environment is set up correctly, test scripts function properly, and to catch early issues before the full execution.
In manual testing, what is the difference between Test Design and Test Implementation?
Test Design: Defining which test cases are needed and what they should cover.
Test Implementation: Writing the detailed steps for those test cases and setting up all required data and environments.
Why is test environment configuration crucial?
A mismatched or misconfigured environment can cause false failures or mask real defects, undermining test reliability.
What's the purpose of mapping test cases to requirements during test implementation?
Ensures requirement coverage, so you know which test cases verify which parts of the system, supporting traceability and completeness.
When setting up test data, what should you watch out for?
Overlapping data across tests (potential interference)
Data validity (correct formats, boundaries)
Keeping production-like data but anonymized to protect privacy
After completing test implementation, what's the next step in the ISTQB test process?
Test Execution. You run the test cases in the prepared environment with the prepared data to evaluate actual results against expected results.
What is the main objective of Test Execution?
To run the planned test cases under specified conditions, compare actual results against expected results, and log defects if discrepancies are found.
List two key tasks in Test Execution.
Executing test procedures (manual or automated)
Logging and tracking the actual results (pass/fail, defects)
When a test fails, what should happen next?
Immediately raise a defect with enough detail to reproduce.
After the fix, retest to confirm it's resolved (and do regression tests if needed).
Why is result logging (e.g., pass/fail records) important?
It provides proof of which conditions were tested, helps track test coverage, and serves as historical evidence if issues arise later.
What is regression testing?
Re-running previously passed tests to ensure recent changes or fixes haven’t introduced new defects.
What role does the test environment play in Test Execution?
A stable and representative environment is crucial. If the environment differs from production or is unstable, results may be invalid or misleading.
What are some typical status options when logging test results?
Pass, Fail, Blocked, Not Executed, In Progress.
Why is progress tracking crucial during Test Execution?
It gives real-time visibility into how many tests have run, how many passed, how many failed, and helps stakeholders see if deadlines or quality goals are at risk.
What is the main objective of Test Completion?
To wrap up remaining testing activities, consolidate results, archive test artifacts, and capture lessons learned after confirming exit criteria are met.
List two key tasks in Test Completion.
Finalize and consolidate test results (pass/fail metrics, defect statuses).
Archive all test-related documents and artifacts for future reference.
Why are exit criteria important in Test Completion?
They define the conditions (e.g., defect thresholds, coverage goals) that must be met to consider testing “done” and to grant approval for release.
Name two common artifacts you archive during Test Completion.
Test cases/scripts
Test summary reports or metrics
Why is a lessons learned or retrospective session valuable?
It captures insights and best practices from the project, identifying what worked well and what needs improvement, thereby enhancing future testing efforts.
What should happen to open or deferred defects at Test Completion?
They should be clearly documented, prioritized, and handed off for resolution in future releases or accepted as known risks.
What is the final sign-off, and who typically gives it?
The final sign-off is management or product owner approval indicating all completion criteria are satisfied and testing can conclude.
When summarizing test results, what details are typically included?
Number of test cases executed/passed/failed
Major defects found/resolved/deferred
Coverage metrics
Overall product quality assessment
Define test suite, test case, test script, and test charter.
Test suite: a set of test cases or test scripts grouped for execution (or a test execution schedule)
Test case: a set of preconditions, inputs, actions, and expected results developed for a test condition
Test script: a sequence of instructions for the execution of a test
Test charter: documentation of the goal or objective for a test session