ISTQB-CTFL-CT Learning Objectives Flashcards
FL-1.1.1 (K1) Identify typical objectives of testing
1 - Prevent defects by evaluating work products
2 - Verify whether all specified req’ts have been fulfilled
3 - Check if test object is complete & validate if it works as users/stakeholders would expect
4 - Build confidence in test object quality level
5 - Find defects and failures, reducing risk of inadequate software quality
6 - Provide sufficient info to stakeholders to make informed decisions
7 - Comply with, or verify test object’s compliance with, contractual, legal, or regulatory requirements
8 - During component testing, find as many failures as possible so underlying defects are identified and fixed early or to increase code coverage of component tests
FL-1.1.2 (K2) Differentiate testing from debugging
Testing can show failures caused by defects in the software, whereas debugging is the activity that finds, analyzes, and fixes the root causes of defects.
FL-1.2.1 (K2) Give examples of why testing is necessary
1 - Prevent development of incorrect/unstable features
2 - Identify design flaws as early as possible
3 - Increase quality and testability of code
4 - Reduce likelihood of failure / increase likelihood of stakeholder needs satisfaction
FL-1.2.2 (K2) Describe the relationship between testing and quality assurance and give examples of how testing contributes to higher quality
QA and Testing (QC) are both components of Quality Management. QA focuses on adherence to proper processes, providing confidence that the appropriate level of quality will be achieved, whereas QC, which includes testing, supports the achievement of appropriate quality levels as part of the overall SDLC.
FL-1.2.3 (K2) Distinguish between error, defect, and failure
An ERROR is a mistake a person makes that leads to the introduction of a DEFECT, which is a fault or bug in a work product, which - if encountered - may cause a FAILURE, which is a deviation of the component/system from its expected behaviour.
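A minimal Python sketch of the chain (the discount function and its spec are invented for illustration):

```python
# ERROR: the author misreads the spec and uses '+' where '-' was intended.
def apply_discount(price: float, discount: float) -> float:
    # DEFECT: the wrong operator sits in the work product whether or not it ever runs.
    return price + discount

# FAILURE: only when the defective line is executed does behaviour deviate from
# what was expected (90.0); this assert raises AssertionError because we get 110.0.
assert apply_discount(100.0, 10.0) == 90.0
```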
FL-1.2.4 (K2) Distinguish between the root cause of a defect and its effects
A root cause is the earliest action/condition that contributed to creating a defect. The effects are the negative manifestations of that defect.
e.g., if a misunderstanding of how interest should be calculated leads to a single line of bad code in a banking app (the defect), which in turn causes incorrect interest payments and customer complaints, then the misunderstanding is the root cause and the incorrect payments and complaints are the defect's effects.
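A hypothetical code-level version of that example (the formula and names are invented, only to show the chain):

```python
def monthly_interest(balance: float, annual_rate_percent: float) -> float:
    # DEFECT: a single line of bad code, written because the author misunderstood
    # the interest rules (that misunderstanding is the ROOT CAUSE).
    return balance * (annual_rate_percent / 100) / 100 / 12  # divides by 100 twice

# EFFECTS: incorrect interest payments (0.1 here instead of the expected 10.0)
# and, downstream, the customer complaints they trigger.
print(monthly_interest(1000.0, 12.0))
```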
FL-1.3.1 (K2) Explain the seven testing principles
1 - Testing shows the presence of defects, not their absence
2 - Exhaustive testing is impossible (instead, focus on risk analysis, priorities, sound techniques)
3 - Early testing saves time and money (shift-left)
4 - Defects cluster together (predicted & actual defect clusters are important inputs to risk analysis)
5 - Beware of the pesticide paradox (same tests repeated over and over may no longer find new defects)
6 - Testing is context dependent (strategy of one product/team may be wrong for another product/team)
7 - Absence of errors is a fallacy
FL-1.4.1 (K2) Explain the impact of context on the test process
Context influences the test process - what activities are carried out, how they’re implemented, when they occur. Important contextual factors include:
- Operational constraints (resources, time, complexity, contractual/regulatory requirements)
- business domain / org policies
- internal & external standards
- SDLC & project methodologies
- Product / project risks
- Test levels and types considered
FL-1.4.2 (K2) Describe the test activities and respective tasks within the test process
1 - TEST PLANNING (determine test objectives & approach)
2 - TEST MONITORING & CONTROL (ongoing comparison of actual vs planned progress, evaluate exit criteria, communicate progress to relevant stakeholders)
3 - TEST ANALYSIS (“what to test” - analyze test basis for deficiencies or defects and to identify testable features, define & prioritize test conditions)
4 - TEST DESIGN (“how to test” - design test cases, test data, test environment)
5 - TEST IMPLEMENTATION (develop/prioritize test cases & automated tests, create and arrange test suites as needed, build & verify the test environment)
6 - TEST EXECUTION (run tests, record IDs and versions of test items & testware, compare actual vs expected results, analyze anomalies and report defects, log outcomes, repeat as necessary)
7 - TEST COMPLETION (triage remaining defects, create test summary report, finalize/archive testware, gather info & metrics to learn lessons to improve future test process maturity)
FL-1.4.3 (K2) Differentiate the work products that support the test process
- Test Planning - Test basis, traceability, exit criteria
- Test Monitoring - Test reports, progress reports, test summary reports
- Test Analysis - Defined, prioritized, and traceable test conditions
- Test Design - High-level traceable test cases and test suites
- Test Implementation - Test procedures & their sequencing, test suites & automated test cases, test execution schedule, all traceable
- Test Execution - documented status of test cases / procedures, defect reports
- Test Completion - test summary reports, action items for improvements, triaged change requests or backlog defects, finalized testware
FL-1.4.4 (K2) Explain the value of maintaining traceability between the test basis and the test work products
Traceability allows you to:
- analyze impact of changes
- make testing auditable
- meet IT governance criteria
- improve understandability of test reports
- relate technical aspects of testing to non-technical stakeholders
- provide info to assess product quality, process capability, progress against business goals
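For instance, traceability between test basis items and test cases can be kept as a simple mapping; a minimal sketch (the requirement and test case IDs are made up):

```python
# Hypothetical traceability matrix: test basis items -> test cases derived from them.
traceability = {
    "REQ-017 (interest calculation)": ["TC-101", "TC-102"],
    "REQ-018 (statement export)": ["TC-103"],
}

def impacted_tests(changed_requirements):
    """Impact analysis: which test cases must be re-run if these requirements change?"""
    return sorted({tc for req in changed_requirements for tc in traceability.get(req, [])})

print(impacted_tests(["REQ-017 (interest calculation)"]))  # ['TC-101', 'TC-102']
```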
FL-1.5.1 (K1) Identify the psychological factors that influence the success of testing
- Identifying defects can be perceived as personal criticism of the product’s author.
- Confirmation bias can make it difficult to accept information that conflicts with held beliefs (e.g., being told your own work product has a problem)
- Tensions between development team roles if communication about defects or errors is indelicate
FL-1.5.2 (K2) Explain the difference between the mindset required for test activities and the mindset required for development activities
Testers and developers have different objectives: the developer's primary objective is to design and build, whereas the primary objective of testing is to verify and validate what has been built and to find defects in it.
Tester’s mindset is focused on curiosity, pessimism, eye for detail, whereas developer’s mindset is tuned more to designing and building solutions rather than spending time worrying about what might be wrong.
FL-2.1.1 (K2) Explain the relationships between software development activities and test activities in the software development lifecycle
Sequential SDLCs have a linear development process: each phase begins only when the previous one is completely done, with little or no overlap. Iterative/incremental SDLCs, on the other hand, develop feature increments in a series of cycles; test levels overlap and run throughout development, and regression testing becomes increasingly critical as the system grows.
FL-2.1.2 (K1) Identify reasons why software development lifecycle models must be adapted to the context of project and product characteristics
1 - Difference in product risks of systems (complex vs simple projects)
2 - Many business units can be part of a project or program (combination of different SDLCs/processes)
3 - Short time to deliver a product to market (merge test levels and/or integration of test types in test levels)
FL-2.2.1 (K2) Compare the different test levels from the perspective of objectives, test basis, test objects, typical defects and failures, and approaches and responsibilities
The four test levels: component testing, integration testing, system testing, acceptance testing
1 - Test Objectives: all except acceptance testing reduce risk and verify functional/non-functional behaviour in their test objects, build confidence in the quality of their test objects, and find defects in their test objects to prevent them from escaping to higher test levels. Acceptance testing validates the system is complete & as expected, verifies behaviour of system is as specified, and establishes confidence in system quality.
2 - Test Basis: component testing is design & specifications, data model, & code. Integration testing is design, sequence diagrams, interface specs, use cases, architecture, workflows. System testing is system req’ts, risk analysis report, use cases, epics/stories, models of system behaviour, state diagrams, and manuals. Acceptance testing is business processes, user or business req’ts, system req’ts and documentation, installation procedures, risk analysis reports, non-functional req’ts, use cases and/or user stories, performance targets, and operations documentation.
3 - Test Objects: component = components/units, code, classes, DB modules; integration = interfaces, APIs, microservices, DBs, subsystems; system = applications, OSes, system under test (SUT), etc.; acceptance = system under test, business / operational / maintenance processes of the fully integrated system
4 - Typical Defects/Failures: component = incorrect functionality, data flow problems, incorrect logic/code; integration = incorrect/missing data, interface mismatch, incorrect sequencing/timing, failures in communication, incorrect assumptions about boundaries/meaning of data, unhandled failures, compliance failures; system = incorrect/unexpected calculations, functional/non-functional behaviour, or data flows, failure of e2e functional tasks, failure to work in the system environment, failure to work as specified in manuals; acceptance = workflows do not meet business / user req’ts, business rules not met, legal / regulatory req’ts not met, non-functional failures such as in security, performance, etc.
5 - Specific Approaches & Responsibilities: component = performed by the developer or someone else with access to the code; tests are often written/executed after the code, but in Agile (e.g., TDD) automated tests often precede writing the code (see the sketch after this card); integration = focus on the integration itself, not the functionality of individual modules; component integration testing is often done by devs and system integration testing by testers, but testers should understand and be involved in planning both; system = focus on the overall e2e behaviour of the system in all aspects, usually carried out by independent testers relying heavily on specifications; if testers are not involved early in the SDLC (user story refinement, static testing of specifications), disagreements about expected behaviour can surface at this late stage, causing false positives and negatives, wasted time, and reduced defect detection effectiveness.
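A minimal sketch of a component-level (unit) test in the TDD style mentioned above; `leap_year` is an invented example, not from the syllabus:

```python
import unittest

def leap_year(year: int) -> bool:
    # Component under test (in TDD this code is written to make the tests pass).
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearComponentTest(unittest.TestCase):
    # Component-level objective: find defects in this unit early, before they
    # escape to integration or system testing.
    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_every_fourth_century_is_leap(self):
        self.assertTrue(leap_year(2000))

if __name__ == "__main__":
    unittest.main()
```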
FL-2.3.1 (K2) Compare functional, non-functional, and white-box testing
- Functional testing tests “what the system does”, based on specifications of what it should do
- Non-functional testing tests “how well the system does it”, in terms of quality characteristics such as security, performance, usability, portability, etc.
- White-box testing derives tests from the system's internal structure/implementation, often to meet structural coverage criteria
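A rough sketch of the functional vs white-box viewpoints on one invented rule (non-functional testing, e.g., measuring response time, isn't shown here):

```python
def shipping_fee(total: float) -> float:
    # Hypothetical rule: orders of 50 or more ship free, otherwise a flat fee.
    if total >= 50:
        return 0.0
    return 4.95

# Functional (black-box) view: check WHAT the system does against its specification.
assert shipping_fee(60.0) == 0.0

# White-box view: derive tests from the implementation, e.g., choose inputs so that
# both branches of the 'if' are executed (100% branch coverage).
assert shipping_fee(60.0) == 0.0   # true branch
assert shipping_fee(10.0) == 4.95  # false branch
```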
FL-2.3.2 (K1) Recognize that functional, non-functional, and white-box tests occur at any test level
They do. Functional, non-functional, and white-box tests can all be performed at any test level. Shift testing left as much as possible, but all three occur at every level.
FL-2.3.3 (K2) Compare the purposes of confirmation testing and regression testing
Confirmation testing is re-testing cases that failed once a change has been made to fix a defect, to confirm whether the defect is in fact fixed, whereas regression testing is running the appropriate scope of tests after any change to a system or component to see if the change had unintended negative side-effects that created unexpected defects.
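A rough pytest-style sketch of the two purposes (the function and test names are invented):

```python
def format_name(first: str, last: str) -> str:
    # Fixed code: an earlier version returned "last first" and failed the test below.
    return f"{first} {last}"

# Confirmation testing: re-run the test case that originally failed,
# to confirm the defect really is fixed.
def test_format_name_order():
    assert format_name("Ada", "Lovelace") == "Ada Lovelace"

# Regression testing: re-run other tests around the change to check that the fix
# did not introduce unintended side effects elsewhere.
def test_format_name_keeps_single_space():
    assert format_name("Ada", "Lovelace").count(" ") == 1
```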
FL-2.4.1 (K2) Summarize triggers for maintenance testing
1 - MODIFICATION of operational system - planned enhancements, corrective/emergency patches, changes to the operational environment
2 - MIGRATION of operational system - moving from one platform to another, migrating data from another system into the system under test, archiving / restoring / retrieving data either to retire an app at EOL or to test long-term data archiving, and regression testing of old functionality remaining in service
FL-2.4.2 (K2) Describe the role of impact analysis in maintenance testing
Impact analysis is the evaluation of maintenance changes for expected/possible consequences and areas of the system they could impact. Also looks at impact on existing tests and regression tests that may need to be run.
FL-3.1.1 (K1) Recognize types of software work product that can be examined by the different static testing techniques
Any work product the participants know how to read and understand can be examined by reviews, even if written in natural language (e.g., reading specifications for linguistic ambiguities). Static analysis, by contrast, can only be applied efficiently to work products with a formal structure (typically code or models).
Examples: specifications, code, testware, user guides, web pages, contracts, project plans, schedules, budget plans, configuration and infrastructure setups, models (such as activity diagrams), etc.
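As an illustration of static analysis exploiting a formal structure, a tiny ast-based check in Python (the rule picked here is just an example):

```python
import ast

SOURCE = '''
def is_admin(user):
    if user == None:  # questionable: should be 'user is None'
        return False
    return user.role == "admin"
'''

# Static analysis: examine the code's syntax tree without executing it.
tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if isinstance(node, ast.Compare):
        compares_none = any(isinstance(c, ast.Constant) and c.value is None
                            for c in node.comparators)
        uses_eq = any(isinstance(op, (ast.Eq, ast.NotEq)) for op in node.ops)
        if compares_none and uses_eq:
            print(f"line {node.lineno}: comparison to None should use 'is' / 'is not'")
```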
FL-3.1.2 (K2) Use examples to describe the value of static testing
Static testing, especially when applied early in SDLC:
- detects defects before more resource-intensive dynamic testing is performed
- identifies defects that might be difficult/expensive to find in dynamic testing
- reduces development & testing cost and time
- improves communication between team members while participating in reviews
- shift-left testing finds defects easier to remove than when they’re more entrenched later in the SDLC
- reduces cost of quality over software’s lifetime, due to fewer failures found later in lifecycle or after deployment
- improves dev productivity (improved design & maintainable code)
FL-3.1.3 (K2) Explain the difference between static and dynamic techniques, considering objectives, types of defects to be identified, and the role of these techniques within the software lifecycle
Static testing finds defects in work products directly, rather than failures caused by those defects; a defect can sit in a work product for a very long time without causing a failure, yet static testing can still find it. Dynamic testing, on the other hand, relies on externally visible behaviour: you have to execute the exact conditions that make the defect cause a failure before you can discover the defect is there.
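A small sketch of the contrast (invented function; the defect is visible to a reader, but dynamic testing needs the triggering input):

```python
def average(values):
    # A reviewer or static analysis tool can spot the possible division by zero
    # just by reading this line: static testing finds the DEFECT directly.
    return sum(values) / len(values)

print(average([2, 4, 6]))  # 4.0 - the defect stays hidden on this input

# Dynamic testing only observes a FAILURE when it happens to execute the
# defect-triggering condition, here the empty-list input.
try:
    average([])
except ZeroDivisionError:
    print("failure observed: ZeroDivisionError on empty input")
```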