ISTQB-CTFL-CT Learning Objectives Flashcards

1
Q

FL-1.1.1 (K1) Identify typical objectives of testing

A

1 - Prevent defects by evaluating work products
2 - Verify whether all specified req’ts have been fulfilled
3 - Check if test object is complete & validate if it works as users/stakeholders would expect
4 - Build confidence in test object quality level
5 - Find defects and failures, reducing risk of inadequate software quality
6 - Provide sufficient info to stakeholders to make informed decisions
7 - Comply with, or verify test object’s compliance with, contractual, legal, or regulatory requirements
8 - During component testing, find as many failures as possible so underlying defects are identified and fixed early or to increase code coverage of component tests

2
Q

FL-1.1.2 (K2) Differentiate testing from debugging

A

Testing can show failures caused by defects in the software, whereas debugging is the activity that finds, analyzes, and fixes the root causes of defects.

3
Q

FL-1.2.1 (K2) Give examples of why testing is necessary

A

1 - Prevent development of incorrect/unstable features
2 - Identify design flaws as early as possible
3 - Increase quality and testability of code
4 - Reduce likelihood of failure / increase likelihood of stakeholder needs satisfaction

4
Q

FL-1.2.2 (K2) Describe the relationship between testing and quality assurance and give examples of how testing contributes to higher quality

A

QA and testing (QC) are both components of quality management. QA focuses on adherence to proper processes, providing confidence that the appropriate level of quality will be achieved, whereas QC, which includes testing, supports the achievement of appropriate quality levels as part of the overall SDLC.

5
Q

FL-1.2.3 (K2) Distinguish between error, defect, and failure

A

An ERROR is a mistake a person makes that leads to the introduction of a DEFECT (a fault or bug in a work product), which, if encountered, may cause a FAILURE (a deviation of the component/system from its expected behaviour).

6
Q

FL-1.2.4 (K2) Distinguish between the root cause of a defect and its effects

A

A root cause is the earliest action/condition that contributed to creating a defect. The effects are the negative manifestations of that defect.

e.g., if a single incorrect line of code in a banking app leads to incorrect interest payments, which in turn lead to customer complaints: the incorrect line of code is the defect, the incorrect interest payments are the failure, the customer complaints are effects, and the root cause is whatever earlier action or condition (e.g., the developer's misunderstanding of a requirement) led to that line being written.

7
Q

FL-1.3.1 (K2) Explain the seven testing principles

A

1 - Testing shows the presence of defects, not their absence
2 - Exhaustive testing is impossible (instead, focus on risk analysis, priorities, sound techniques)
3 - Early testing saves time and money (shift-left)
4 - Defects cluster together (predicted & actual defect clusters are important inputs to risk analysis)
5 - Beware of the pesticide paradox (same tests repeated over and over may no longer find new defects)
6 - Testing is context dependent (strategy of one product/team may be wrong for another product/team)
7 - Absence of errors is a fallacy

8
Q

FL-1.4.1 (K2) Explain the impact of context on the test process

A

Context influences the test process - what activities are carried out, how they’re implemented, when they occur. Important contextual factors include:

  • Operational constraints (resources, time, complexity, contractual/regulatory requirements)
  • Business domain / org policies
  • Internal & external standards
  • SDLC & project methodologies
  • Product / project risks
  • Test levels and types considered
9
Q

FL-1.4.2 (K2) Describe the test activities and respective tasks within the test process

A

1 - TEST PLANNING (determine test objectives & approach)
2 - TEST MONITORING & CONTROL (ongoing comparison of actual vs planned progress, evaluate exit criteria, communicate progress to relevant stakeholders)
3 - TEST ANALYSIS (“what to test” - analyze test basis for deficiencies or defects and to identify testable features, define & prioritize test conditions)
4 - TEST DESIGN (“how to test” - design test cases, test data, test environment)
5 - TEST IMPLEMENTATION (develop/prioritize test cases & automated tests, create and arrange test suites as needed, build & verify the test environment)
6 - TEST EXECUTION (run tests, record the IDs and versions of the items under test, compare actual vs expected results, analyze anomalies and report defects, log outcomes, repeat as necessary)
7 - TEST COMPLETION (triage remaining defects, create test summary report, finalize/archive testware, gather info & metrics to learn lessons to improve future test process maturity)

10
Q

FL-1.4.3 (K2) Differentiate the work products that support the test process

A
  • Test Planning - Test plan(s), covering the test basis, traceability, and exit criteria
  • Test Monitoring - Test reports, progress reports, test summary reports
  • Test Analysis - Defined, prioritized, and traceable test conditions
  • Test Design - High-level traceable test cases and test suites
  • Test Implementation - Test procedures & their sequencing, test suites & automated test cases, test execution schedule, all traceable
  • Test Execution - documented status of test cases / procedures, defect reports
  • Test Completion - test summary reports, action items for improvements, triaged change requests or backlog defects, finalized testware
11
Q

FL-1.4.4 (K2) Explain the value of maintaining traceability between the test basis and the test work products

A

Traceability allows you to:

  • analyze impact of changes
  • make testing auditable
  • meet IT governance criteria
  • improve understandability of test reports
  • relate technical aspects of testing to non-technical stakeholders
  • provide info to assess product quality, process capability, progress against business goals
12
Q

FL-1.5.1 (K1) Identify the psychological factors that influence the success of testing

A
  • Identifying defects can be perceived as personal criticism of the product’s author.
  • Confirmation bias can make it difficult to accept information that disagrees with currently held beliefs (e.g., being told your own work product has a problem).
  • Tensions can arise between development team roles if communication is indelicate when reporting defects or errors.
13
Q

FL-1.5.2 (K2) Explain the difference between the mindset required for test activities and the mindset required for development activities

A

Testers and developers have different objectives: the primary objective of the developer is to design and build, whereas the primary objective of testing is verifying and validating to find defects in what has been built.

A tester's mindset is focused on curiosity, professional pessimism, and an eye for detail, whereas a developer's mindset is tuned more to designing and building solutions than to spending time worrying about what might be wrong with them.

14
Q

FL-2.1.1 (K2) Explain the relationships between software development activities and test activities in the software development lifecycle

A

Sequential SDLC models have a linear development process: any phase of development begins only when the previous one is completely done, with little or no overlap. Iterative/incremental SDLCs, on the other hand, develop feature increments in a series of cycles; test levels overlap and occur simultaneously throughout development, and regression testing often becomes critical as the system grows.

15
Q

FL-2.1.2 (K1) Identify reasons why software development lifecycle models must be adapted to the context of project and product characteristics

A

1 - Difference in product risks of systems (complex vs. simple projects)
2 - Many business units can be part of a project or program (combination of different SDLCs/processes)
3 - Short time to deliver a product to market (merge test levels and/or integration of test types in test levels)

16
Q

FL-2.2.1 (K2) Compare the different test levels from the perspective of objectives, test basis, test objects, typical defects and failures, and approaches and responsibilities

A

The four test levels: component testing, integration testing, system testing, acceptance testing

1 - Test Objectives: all except acceptance testing reduce risk and verify functional/non-functional behaviour in their test objects, build confidence in the quality of their test objects, and find defects in their test objects to prevent them from escaping to higher test levels. Acceptance testing validates the system is complete & as expected, verifies behaviour of system is as specified, and establishes confidence in system quality.

2 - Test Basis: component testing is design & specifications, data model, & code. Integration testing is design, sequence diagrams, interface specs, use cases, architecture, workflows. System testing is system req’ts, risk analysis report, use cases, epics/stories, models of system behaviour, state diagrams, and manuals. Acceptance testing is business processes, user or business req’ts, system req’ts and documentation, installation procedures, risk analysis reports, non-functional req’ts, use cases and/or user stories, performance targets, and operations documentation.

3 - Test Objects: component = components/units, code, classes, DB modules; integration = interfaces, APIs, microservices, DBs, subsystems; system = applications, OSes, system under test (SUT), etc.; acceptance = system under test, business / operational / maintenance processes of the fully integrated system

4 - Typical Defects/Failures: component = incorrect functionality, data flow problems, incorrect logic/code; integration = incorrect/missing data, interface mismatch, incorrect sequencing/timing, failures in communication, incorrect assumptions about boundaries/meaning of data, unhandled failures, compliance failures; system = incorrect/unexpected calculations, functional/non-functional behaviour, or data flows, failure of e2e functional tasks, failure to work in the system environment, failure to work as specified in manuals; acceptance = workflows do not meet business / user req’ts, business rules not met, legal / regulatory req’ts not met, non-functional failures such as in security, performance, etc.

5 - Specific Approaches & Responsibilities: component = performed by the developer or someone with access to the code; tests are often written/executed after the code, but in Agile (e.g., TDD) automated tests often precede writing the code; integration = focus on the integration itself, not the functionality of individual modules; component integration testing is often done by devs and system integration testing by testers, but testers should understand and be involved in planning both; system = focus on overall e2e behaviour of the system in all aspects, usually carried out by independent testers relying heavily on specifications, so if testers are not involved early in the SDLC in user story refinement and static testing activities on specifications, this can lead to disagreements at this point about expected behaviour, causing false positives and negatives, wasted time, and reduced defect detection effectiveness.

17
Q

FL-2.3.1 (K2) Compare functional, non-functional, and white-box testing

A

  • Functional testing tests "what the system does," based on specifications of what it should do
  • Non-functional testing tests "how well the system does it," in terms of characteristics like security, performance, usability, portability, etc.
  • White-box testing tests how the system is internally structured/implemented, often against structural coverage standards for automated tests
18
Q

FL-2.3.2 (K1) Recognize that functional, non-functional, and white-box tests occur at any test level

A

They do: functional, non-functional, and white-box tests can each be performed at any test level (component, integration, system, acceptance). Shift testing left as much as possible, but recognize that all three occur at all test levels.

19
Q

FL-2.3.3 (K2) Compare the purposes of confirmation testing and regression testing

A

Confirmation testing is re-testing cases that failed once a change has been made to fix a defect, to confirm whether the defect is in fact fixed, whereas regression testing is running the appropriate scope of tests after any change to a system or component to see if the change had unintended negative side-effects that created unexpected defects.

20
Q

FL-2.4.1 (K2) Summarize triggers for maintenance testing

A

1 - MODIFICATION of operational system - planned enhancements, corrective/emergency patches, changes of the operational environment
2 - MIGRATION of operational system - moving from one platform to another, migrating data from another system into the system under test, archiving / restoring / retrieving data either to retire an app at EOL or to test long-term data archiving, and regression testing of old functionality remaining in service

21
Q

FL-2.4.2 (K2) Describe the role of impact analysis in maintenance testing

A

Impact analysis is the evaluation of maintenance changes for expected/possible consequences and areas of the system they could impact. Also looks at impact on existing tests and regression tests that may need to be run.

22
Q

FL-3.1.1 (K1) Recognize types of software work product that can be examined by the different static testing techniques

A

Almost any work product the participants know how to read and understand can be examined by reviews. Static analysis can be applied to anything with a formal structure (typically code or models), and tools also exist for static testing of work products written in natural language (e.g., checking specifications for linguistic ambiguities).

Examples: specifications, code, testware, user guides, web pages, contracts, project plans, schedules, budget plans, configuration and infrastructure setups, models (such as activity diagrams), etc.

23
Q

FL-3.1.2 (K2) Use examples to describe the value of static testing

A

Static testing, especially when applied early in SDLC:

  • detects defects before more resource-intensive dynamic testing is performed
  • identifies defects that might be difficult/expensive to find in dynamic testing
  • reduces development & testing cost and time
  • improves communication between team members while participating in reviews
  • shift-left testing finds defects easier to remove than when they’re more entrenched later in the SDLC
  • reduces cost of quality over software’s lifetime, due to fewer failures found later in lifecycle or after deployment
  • improves dev productivity (improved design & maintainable code)
24
Q

FL-3.1.3 (K2) Explain the difference between static and dynamic techniques, considering objectives, types of defects to be identified, and the role of these techniques within the software lifecycle

A

Static testing finds defects in work products directly, rather than failures caused by those defects. A defect can reside in a work product for a very long time without causing a failure, but static testing can still find it. Dynamic testing, on the other hand, relies on externally visible behaviour, and therefore requires executing the exact conditions that make a defect cause a failure in order to discover that the defect is there.

25
Q

FL-3.2.1 (K2) Summarize the activities of the work product review process

A

1 - PLANNING: define scope, effort, timeframe, review characteristics, roles, entry and exit criteria
2 - INITIATE REVIEW: distribute work product, explain review plan, answer questions about the review
3 - INDIVIDUAL REVIEW / INDIVIDUAL PREPARATION: participants each review all or part of work product and note their recommendations, questions, and potential defects.
4 - ISSUE COMMUNICATION & ANALYSIS: communicate identified potential defects and assign ownership and status, evaluate and document quality characteristics, evaluate findings against exit criteria to make a review decision (reject/major changes needed/accept with minor changes/accept)
5 - FIXING & REPORTING: fix defects found (typically done by author), communicate defects found in related work products external to the one under review to the appropriate people, document and report defect progress and status, gather metrics, check exit criteria are met, accept work product when they are met.

26
Q

FL-3.2.2 (K1) Recognize the different roles and responsibilities in a formal review

A

1 - AUTHOR: created work product, fixes defects
2 - MANAGEMENT: plan review, make decisions, assign roles, budget, time, monitoring
3 - FACILITATOR / MODERATOR: ensure effective running of meetings, mediate between points of view
4 - REVIEW LEADER: take responsibility for review, deciding who will be involved and organize when and where it takes place
5 - REVIEWERS: stakeholders, subject matter experts, people on project, or individuals with specific technical / business backgrounds
6 - SCRIBE (or RECORDER): collate potential defects, record open points and decisions

27
Q

FL-3.2.3 (K2) Explain the differences between different review types: informal review, walkthrough, technical review, and inspection

A

INFORMAL REVIEW - common in agile development, where colleagues review a work product without a formal process in order to identify defects

WALKTHROUGH - the author of a work product walks reviewers through their product, possibly using scenarios/dry-runs or simulations, to find defects, consider alternative implementations, evaluate conformance to standards, or achieve consensus.

TECHNICAL REVIEW - peers of the author and related technical experts review, after required individual preparation, the work product to gain consensus on the technical approach being taken, evaluate quality, generate new or alternative implementations or ideas, and enable author to improve future work products

INSPECTION - the most formal meeting, following a defined process with formal documented outputs based on rules and checklists. Individual prep before the meeting is required, and the work product is thoroughly visually inspected in the meeting to detect defects, evaluate quality, prevent similar defects in the future, motivate and enable authors to improve work products and the SDLC in the future, and achieve consensus. Specified entry and exit criteria are used for this review type and a facilitator and scribe (neither being the author) is required. Ideally metrics are also collected to improve the inspection process in the future.

28
Q

FL-3.2.4 (K3) Apply a review technique to a work product to find defects

A

Make sure you can do this in practice exams

29
Q

FL-3.2.5 (K2) Explain the factors that contribute to a successful review

A

Organizational factors:

  • clear objectives, measurable exit criteria
  • suitable review types & techniques to the circumstances
  • checklists used are up to date
  • large documents written / reviewed in small chunks to exercise QC by providing early / frequent feedback to authors
  • adequate prep time
  • adequate notice
  • management is supportive of review process (e.g., by incorporating adequate review time in project schedules)
  • reviews integrated into company’s quality or test policies

People factors:

  • right people with the right skillsets are involved and dedicating adequate time and attention to detail
  • testers are seen as valued reviewers
  • reviews done in small chunks to avoid loss of concentration
  • defects are acknowledged, appreciated, and handled objectively
  • meetings well-managed so people consider it a good use of time
  • atmosphere of trust, avoiding negative body language or communications
  • adequate review training is provided as necessary
  • culture of learning and process improvement is promoted and supported
30
Q

FL-4.1.1 (K2) Explain the characteristics, commonalities, and differences between black-box test techniques, white-box test techniques, and experience-based test techniques.

A

BLACK BOX

  • “behaviour-based”
  • test basis = requirements, specifications, user stories, use cases
  • concentrates on analysis of inputs and outputs of test object without reference to internal structure
  • coverage = items tested in the test basis documents and the technique applied to those items.

WHITE-BOX

  • “structure-based”
  • test basis = code, architecture, design, other info about structure of software
  • concentrates on analysis of internal structure of test object itself
  • coverage = items tested within a selected structure (e.g., lines covered in code coverage tools) and test technique applied

EXPERIENCE-BASED

  • “experience-based”
  • test basis: developer/tester/stakeholder experience & knowledge
  • often combined with white or black box testing to leverage experience of developers and testers to design, implement, and execute appropriate test cases.
31
Q

FL-4.2.1 (K3) Apply equivalence partitioning to derive test cases from given requirements.

E.g.:
An employee’s bonus is to be calculated. It cannot be negative, but it can be calculated down to zero. The bonus is based on the length of employment:
- <=2 years
- >2 years but <5 years
- 5-10 years inclusively
- >10 years
What is the minimum number of test cases required to cover all valid equivalence partitions for calculating the bonus?

a) 3
b) 5
c) 2
d) 4

A

CORRECT ANSWER: D

a) is one too few
b) is one too many
c) is two too few
d) is correct. Why? The four equivalence partitions correspond to the description in the question, i.e., at least one test case must be created for each equivalence partition. The partitions are:
1) 0 <= employment time <= 2
2) 2 < employment time < 5
3) 5<= employment time <= 10
4) 10 < employment time
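The partitioning above can be sketched in code. This is a minimal illustration; the function name and partition labels are made up for the example, not part of the syllabus:

```python
# Classify employment length into the four equivalence partitions from the
# bonus question. Assumes non-negative input, per the question.

def bonus_partition(years: float) -> str:
    """Return the equivalence partition a given employment length falls in."""
    if 0 <= years <= 2:
        return "P1: 0-2 years"
    elif 2 < years < 5:
        return "P2: >2 and <5 years"
    elif 5 <= years <= 10:
        return "P3: 5-10 years"
    else:  # years > 10
        return "P4: >10 years"

# One representative value per partition is the minimum test set:
representatives = [1, 3, 7, 20]
print(len({bonus_partition(y) for y in representatives}))  # 4
```

Picking one value from each partition yields the minimum of four test cases.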

32
Q

FL-4.2.2 (K3) Apply boundary value analysis to derive test cases from given requirements.

E.g.:
A speed control and reporting system has the following characteristics:
- if you drive 50km/h or less, nothing happens
- if you drive faster than 50km/h but no more than 55 km/h, you will be warned
- if you drive faster than 55km/h but no more than 60km/h, you will be fined
- if you drive faster than 60km/h, your driving license will be suspended
- the speed in km/h is available to the system as an integer value
Which would be the most likely set of values (km/h) identified by applying the boundary value analysis, where only the values on the boundaries of equivalence classes are selected?
a) 0, 49, 50, 54, 59, 60
b) 50, 55, 60
c) 49, 50, 54, 55, 60, 62
d) 50, 51, 55, 56, 60, 61

A

CORRECT ANSWER: D

The following partitions can be identified:

  1. <= 50, boundary value 50
  2. 51-55, boundary values 51, 55
  3. 56-60, boundary values 56, 60
  4. >= 61, boundary value 61

Boundary value according to glossary V.3.2: A minimum or maximum value of an ordered equivalence partition

Thus:

a) Is not correct. Does not include all necessary boundary values, and includes additional values (0, 49, and 59) that are not boundary values of these partitions
b) Is not correct. Does not include all necessary boundary values; 51, 56, and 61 are missing
c) Is not correct. Misses the boundary values 51, 56, and 61, and includes additional values (49, 54, and 62) that are not boundary values of these partitions
d) Is correct. Includes all necessary boundary values
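The boundary values above can be derived mechanically. A minimal sketch (helper name and data layout are illustrative), using 2-value BVA where each ordered partition contributes its minimum and maximum:

```python
# Derive 2-value boundary values from the integer partitions of the speed
# example. Each partition is (low, high); None marks an unbounded side.

def boundary_values(partitions):
    """Collect the min/max of each ordered partition (2-value BVA)."""
    values = set()
    for low, high in partitions:
        if low is not None:
            values.add(low)
        if high is not None:
            values.add(high)
    return sorted(values)

speed_partitions = [(None, 50), (51, 55), (56, 60), (61, None)]
print(boundary_values(speed_partitions))  # [50, 51, 55, 56, 60, 61]
```

This reproduces answer d.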

33
Q

FL-4.2.3 (K3) Apply decision table testing to derive test cases from given requirements.

E.g.:
A company’s employees are paid bonuses if they work more than a year in the company and achieve a target which is individually agreed before. These facts can be shown in a decision table:

Test-ID | | T1 | T2 | T3 | T4
Condition1 | Employment > 1 yr | YES | NO | NO | YES
Condition2 | Agreed target? | NO | NO | YES | YES
Condition3 | Achieved target? | NO | NO | YES | YES
Action | Bonus payment | NO | NO | NO | YES

Which of the following test cases represents a situation that can happen in real life and is missing in the above decision table?

a) Condition1 = YES, Condition2 = NO, Condition3 = YES, Action = NO
b) Condition1 = YES, Condition2 = YES, Condition3 = NO, Action = YES
c) Condition1 = NO, Condition2 = NO, Condition3 = YES, Action = NO
d) Condition1 = NO, Condition2 = YES, Condition3 = NO, Action = NO

A

CORRECT ANSWER: D

a) Is not correct. If there was no agreement on targets, it is impossible to reach them. Since this situation can't occur, it is not a scenario that happens in reality
b) Is not correct. The test case is objectively wrong, since under these conditions no bonus is paid because the agreed target was not achieved
c) Is not correct. There was no agreement on targets, so it is impossible to reach them. Since this situation can't occur, it is not a scenario that happens in reality
d) Is correct. The test case describes the situation where the too-short period of employment and the non-fulfilment of the agreed target lead to non-payment of the bonus. This situation can occur in practice but is missing from the decision table
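The missing feasible combination can be found by enumeration. A minimal sketch; the names and the boolean encoding are illustrative:

```python
from itertools import product

# The bonus decision table, encoded as the set of condition combinations it
# already lists: (employment > 1 yr, target agreed, target achieved).
table = {
    (True, False, False),   # T1
    (False, False, False),  # T2
    (False, True, True),    # T3
    (True, True, True),     # T4
}

def feasible(employed_over_1yr, agreed, achieved):
    # A target can only be achieved if one was agreed beforehand
    return agreed or not achieved

missing = [c for c in product([True, False], repeat=3)
           if feasible(*c) and c not in table]
print(missing)
```

The result includes (False, True, False), the combination in answer d, and also (True, True, False), the condition combination from answer b, which is likewise feasible and absent but which answer b pairs with the wrong action.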

34
Q

FL-4.2.4 (K3) Apply state transition testing to derive test cases from given requirements.

E.g.:
Go look at the state transition diagram of Question #28 of Sample Exam A v1.6 for this course. Which of the following statements about that diagram and the table of test cases below it is true?

a) The given test cases cover both valid and invalid transitions in the state transition diagram
b) The given test cases represent all possible valid transitions in the state transition diagram
c) The given test cases represent some of the valid transitions in the state transition diagram
d) The given test cases represent pairs of transitions in the state transition diagram

A

CORRECT ANSWER: B

The proposed test cases cover all five possible single valid transitions in the given state diagram (S1->S2, S2->S1, S2->S3, S3->S2, and S3->S1).

a) Is not correct. Because no invalid transitions are covered
b) Is correct. Because all valid transitions are covered
c) Is not correct. Because all valid transitions are covered
d) Is not correct. Because the test cases do not have pairs of transitions specified
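Transition coverage of this kind can be checked mechanically. A minimal sketch: the five valid transitions come from the answer above, while the test-case sequences are hypothetical stand-ins for the external table:

```python
# All valid single transitions of the state machine (from the answer above)
valid_transitions = {("S1", "S2"), ("S2", "S1"), ("S2", "S3"),
                     ("S3", "S2"), ("S3", "S1")}

# Each test case is a sequence of states visited (illustrative sequences)
test_cases = [
    ["S1", "S2", "S3", "S1"],
    ["S1", "S2", "S1"],
    ["S1", "S2", "S3", "S2"],
]

# Collect every consecutive state pair exercised by the test cases
covered = {pair for tc in test_cases for pair in zip(tc, tc[1:])}
print(covered >= valid_transitions)  # True: all valid transitions covered
```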

35
Q

FL-4.2.5 (K2) Explain how to derive test cases from a use case.

A

Each use case specifies behaviour that a subject can perform in collaboration with one or more actors, described by interactions and activities, as well as preconditions, postconditions, and natural language. From this you can create one test case for the use case's basic (main) behaviour, then derive further test cases from possible variations on that behaviour, including exceptional behaviour and error handling.

36
Q

FL-4.3.1 (K2) Explain statement coverage.

A

Statement coverage is the proportion of executable statements in the code that are exercised by testing. It is calculated as the number of statements executed by the tests divided by the total number of executable statements in the test object, expressed as a percentage (e.g., 45 of 50 executable statements executed = 90% statement coverage).

37
Q

FL-4.3.2 (K2) Explain decision coverage.

A

Decision coverage is the proportion of decision outcomes (i.e., both branches following an if statement, all of the logical branches following a case/switch statement, etc.) that are executed in testing.

It is calculated as the number of decision outcomes executed by the tests divided by the total number of decision outcomes in the test object, expressed as a percentage.

38
Q

FL-4.3.3 (K2) Explain the value of statement and decision coverage.

A

1 - 100% statement coverage ensures every executable statement is exercised at least once, but it does not ensure that all decision logic is tested, so it provides weaker coverage;
2 - 100% decision coverage exercises every decision outcome and can therefore find defects in code paths that statement-level tests leave untaken;
3 - therefore 100% decision coverage guarantees 100% statement coverage, but not vice versa
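The difference can be seen in a tiny example. A minimal sketch; the function and values are made up:

```python
def apply_discount(total: float, is_member: bool) -> float:
    if is_member:            # one decision point, two outcomes
        total = total * 0.9  # the only statement inside the branch
    return total

# A single test with is_member=True executes every statement (100% statement
# coverage) but only the True outcome of the decision (50% decision coverage).
# A second test with is_member=False is needed for 100% decision coverage,
# and it is exactly the kind of test that exposes defects on the untaken path.
print(apply_discount(100.0, True))   # 90.0
print(apply_discount(100.0, False))  # 100.0
```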

39
Q

FL-4.4.1 (K2) Explain error guessing.

A

An experience-based test technique used to anticipate errors, defects, and failures based on knowledge of how the application has worked in the past, the kinds of errors developers tend to make, and failures that have occurred in similar applications.

Methodical approach: create a list of possible errors / defects / failures, then design tests to expose those failures

40
Q

FL-4.4.2 (K2) Explain exploratory testing.

A

An informal experience-based test technique where tests are designed, executed, logged, and evaluated dynamically during test execution

Useful when there’s inadequate specification or time pressure

Can be used to learn more about the component or system and create tests for the areas that may need more testing.

41
Q

FL-4.4.3 (K2) Explain checklist-based testing.

A
  • An experience-based test technique in which testers design, implement, and execute tests covering conditions found in checklists.
  • These checklists are high-level lists created or expanded during analysis, and they may be retained for re-use; because they are high-level, actual testing may vary between testers, resulting in potentially greater coverage but less repeatability.
  • Can be created to support functional and non-functional testing and can provide guidelines and a degree of consistency in the absence of detailed test case documentation
42
Q

FL-5.1.1 (K2) Explain the benefits and drawbacks of independent testing.

A

BENEFITS:
1 - Likely to recognize different kinds of failures compared to developers because of different backgrounds, technical perspectives, biases
2 - Can verify, challenge, or disprove assumptions made by stakeholders during specification and implementation of the system
3 - Testers independent of a vendor can report honestly and objectively about the system under test, with less (political) pressure from the company that hired them

DRAWBACKS:
1 - Isolation from development team may lead to a lack of collaboration, delays providing feedback, or an adversarial relationship with that development team
2 - Developers may lose a sense of responsibility for quality
3 - Independent testers may be seen as a bottleneck
4 - Independent testers may lack some important information (about the test object)
5 - Independent testers have less familiarity than the authors (developers) have with their own code, which might allow a developer to find some defects that an independent tester cannot

43
Q

FL-5.1.2 (K1) Identify the tasks of a test manager and tester.

A

Manager Tasks:
1 - Develop/review test policy and test strategy for organization
2 - Plan test activities (incl. test approaches, test levels, test cycles, estimating time, effort, and cost, acquiring resources)
3 - Write, update, and coordinate test plan(s)
4 - Share testing perspectives with other project activities
5 - Initiate the analysis, design, implementation, and execution of tests, monitor test progress and results, check the status of exit criteria (or “definition of done”) and facilitate test completion activities
6 - Prepare and deliver test progress reports and test summary reports
7 - Adapt planning based on test results and progress and take actions necessary for test control
8 - Support setting up the defect management system and adequate configuration management of testware
9 - Introduce suitable metrics
10 - Support the selection, implementation, and maintenance of tools to support the test process
11 - Decide about implementation of test environment(s)
12 - Promote and advocate for the testers, the test team, and the test profession within the organization
13 - Develop the skills and careers of testers

Tester Tasks:
1 - Review and contribute to test plans
2 - Analyze, review, and assess the test basis for testability
3 - Identify and document test conditions, capture traceability
4 - Design, setup, verify test environment(s)
5 - Design and implement test cases and test procedures
6 - Prepare and acquire test data
7 - Create detailed test execution schedule
8 - Execute tests, evaluate results, document deviations from expectations
9 - Use appropriate tools to facilitate test process
10 - Automate tests as needed
11 - Evaluate non-functional characteristics such as performance, reliability, usability, security, compatibility, portability
12 - Review tests developed by others

44
Q

FL-5.2.1 (K2) Summarize the purpose and content of a test plan.

A

PURPOSE is to outline test activities for development and maintenance projects in alignment with organization’s test policy, test strategy, and SDLC methodology

CONTENT:
1 - Define the scope, objectives, and risks of testing
2 - Define the test approach
3 - Integrate and coordinate test activities into the SDLC activities
4 - Decide what to test, which people and resources are required, and how test activities will be carried out
5 - Schedule test analysis, design, implementation, execution, and evaluation, either on specific dates or in the context of each iteration
6 - Select metrics for test monitoring and control
7 - Budget for test activities
8 - Determine the level of detail and structure for test documentation

45
Q

FL-5.2.2 (K2) Differentiate between the seven test strategies.

A

1 - ANALYTICAL: based on analysis of some factor, such as requirements or risks (e.g., risk-based testing)

2 - MODEL-BASED: based on model of an aspect of the product like a function, internal structure, business process, or non-functional characteristic

3 - METHODICAL: systematic use of predefined tests or test conditions (e.g., company-wide look-and-feel standards, taxonomy of common/likely failures, etc.)

4 - PROCESS/STANDARD-COMPLIANT: tests based on external rules and standards

5 - DIRECTED/CONSULTATIVE: driven primarily by advice, instructions, or guidance of stakeholders or experts

6 - REGRESSION-AVERSE: driven by desire to avoid regressions of existing capabilities

7 - REACTIVE: tests designed and implemented in reaction to component or system being tested rather than being pre-planned. Knowledge gained from prior test results often used for design & implementation of following tests. Exploratory testing is a common technique for a reactive test strategy.

46
Q

FL-5.2.3 (K2) Give examples of potential entry and exit criteria.

A

Entry criteria could be based on availability of:

  • testable requirements, stories, and/or models
  • test items meeting exit criteria in preceding test levels
  • test environment
  • necessary tools
  • test data and other resources

Exit criteria could be based on:

  • all planned tests have been executed
  • a defined level of coverage has been achieved
  • the number of unresolved defects is within the agreed limit
  • the number of estimated remaining defects is sufficiently low
  • the evaluated levels of non-functional quality characteristics are sufficient

47
Q

FL-5.2.4 (K3) Apply knowledge of prioritization, and technical and logical dependencies, to schedule test execution for a given set of test cases.

A

Respect technical and logical dependencies first: if test case A establishes a precondition that test case B needs (e.g., A creates data or a system state that B uses), A must be executed before B, even if B has higher priority. Within those constraints, execute test cases in order of priority so the most important tests run earliest and provide feedback soonest.
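The rule above can be sketched as a small scheduler (not from the syllabus; the test names, priorities, and dependency data are invented for illustration). It orders tests so every dependency runs before its dependents and, among the tests currently runnable, picks the highest priority first:

```python
import heapq

def schedule(tests, priority, deps):
    """Order test cases so that every dependency runs before its
    dependents; among ready tests, run the highest priority first.
    priority maps test -> int (1 = highest);
    deps maps test -> set of tests that must run before it."""
    indegree = {t: len(deps.get(t, set())) for t in tests}
    dependents = {t: [] for t in tests}
    for t, ds in deps.items():
        for d in ds:
            dependents[d].append(t)
    # Min-heap keyed by priority number, so priority 1 pops first
    ready = [(priority[t], t) for t in tests if indegree[t] == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, t = heapq.heappop(ready)
        order.append(t)
        for nxt in dependents[t]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                heapq.heappush(ready, (priority[nxt], nxt))
    return order

# TC1 (priority 2) creates test data needed by TC3 (priority 1),
# so TC1 must run before TC3 despite TC3's higher priority.
print(schedule(
    ["TC1", "TC2", "TC3"],
    {"TC1": 2, "TC2": 3, "TC3": 1},
    {"TC3": {"TC1"}},
))  # ['TC1', 'TC3', 'TC2']
```

Note how the dependency overrides pure priority ordering: TC3 would otherwise run first, but its prerequisite TC1 is pulled ahead of it.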

48
Q

FL-5.2.5 (K1) Identify factors that influence the effort related to testing.

A

PRODUCT
1 - Risks associated with product
2 - Quality of the test basis
3 - Size & complexity of the product
4 - Req’ts for quality characteristics (e.g., reliability, security)
5 - Required level of test documentation detail
6 - Legal/regulatory compliance req’ts

DEV PROCESS
7 - stability & maturity of the organization
8 - SDLC model in use
9 - Test approach
10 - Tools used
11 - Test process
12 - Time pressure

PEOPLE
13 - Skills, experience, & domain knowledge of people involved
14 - Team cohesion and leadership

TEST RESULTS
15 - Number & severity of defects found
16 - Amount of rework required

49
Q

FL-5.2.6 (K2) Explain the difference between two estimation techniques: the metrics-based technique and the expert-based technique.

A

METRICS-BASED estimation is based on metrics of former similar projects or typical values (e.g., burndown charts, velocity) whereas EXPERT-BASED estimation is based on the experience of the owners of testing tasks (e.g., planning poker)
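A minimal metrics-based sketch (the numbers are invented for illustration): estimate remaining iterations from the team's average velocity over similar past iterations.

```python
def metrics_based_estimate(remaining_points, past_velocities):
    """Metrics-based estimation sketch: divide the remaining work
    by the average velocity observed in former similar iterations."""
    velocity = sum(past_velocities) / len(past_velocities)
    return remaining_points / velocity

# 60 story points remaining; past velocities of 18, 22, and 20
# points per iteration average out to 20:
print(metrics_based_estimate(60, [18, 22, 20]))  # 3.0
```

An expert-based equivalent would instead poll the task owners (e.g., via planning poker) and reconcile their individual estimates.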

50
Q

FL-5.3.1 (K1) Recall metrics used for testing.

A

Common metrics:
1 - % of planned work done in test case preparation (% of planned test cases implemented)
2 - % of planned work done in test environment preparation
3 - Test Case Execution (number run/not run, passed/failed, etc.)
4 - Defect information (e.g., defect density, defects found vs fixed, failure rate, confirmation test results)
5 - Test coverage of requirements, user stories, acceptance criteria, risks, or code
6 - Task completion, resource allocation and usage, effort
7 - Cost of testing, including the cost compared to the benefit of finding the next defect or the cost compared to the benefit of running the next test

51
Q

FL-5.3.2 (K2) Summarize the purposes, contents, and audiences for test reports.

A

PURPOSE - summarize/communicate test activity info to create stakeholder visibility on progress & current quality metrics

CONTENTS

  • Test Progress Reports: status, progress against plan, impediments, testing planned in next reporting period, quality of test object
  • Test Summary Reports: summary of testing performed, info on what occurred, deviations from schedule / duration / test activities, status with respect to exit criteria, impediments, metrics of defects, test cases, test coverage, activity progress, resource consumption, residual risk reports, reusable test work products produced

AUDIENCE
- executive summary report for stakeholders, more detailed technical information for a technical audience.

52
Q

FL-5.4.1 (K2) Summarize how configuration management supports testing.

A

1 - ensures all test items are uniquely identified, version controlled, tracked for changes, and related to one another through traceability
2 - same as #1 for all testware
3 - ensures that all identified documents and software items are referenced unambiguously in test documentation

53
Q

FL-5.5.1 (K1) Define risk level by using likelihood and impact.

A

The level of risk is determined by the likelihood that the event will occur combined with the impact (the harm) that results if it does occur
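One common (but not mandated) way to quantify this is a risk matrix: rate likelihood and impact on small ordinal scales and multiply. A minimal sketch, with 1-to-5 scales chosen for illustration:

```python
def risk_level(likelihood, impact):
    """Risk level as likelihood x impact, each rated 1 (low) to
    5 (high). The scales are illustrative, not prescribed."""
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

# An unlikely but severe failure outranks a frequent cosmetic one:
print(risk_level(2, 5))  # 10
print(risk_level(5, 1))  # 5
```

Higher-level risks then get earlier and more thorough testing, as described in FL-5.5.3.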

54
Q

FL-5.5.2 (K2) Distinguish between project and product risks.

A

PRODUCT RISK - possibility that a work product may fail to satisfy the legitimate needs of its users and/or stakeholders

PROJECT RISK - risk of situations that, should they occur, may have a negative effect on the project’s ability to deliver its objectives

55
Q

FL-5.5.3 (K2) Describe, by using examples, how product risk analysis may influence the thoroughness and scope of testing.

A

Risk analysis allows us to decide when and where to start testing and which areas need more attention to minimize probability of adverse events occurring

Results of risk analysis can:
1 - determine test techniques, test levels, and types of testing to be employed
2 - determine extent of testing to be carried out
3 - prioritize testing to find critical defects first
4 - determine whether any activities in addition to testing could be employed to reduce risk (e.g., providing training to inexperienced designers)

56
Q

FL-5.6.1 (K3) Write a defect report, covering a defect found during testing.

A

Make sure it has:

  • unique identifier
  • title & summary
  • date, author/issuing org
  • identification of the test item (configuration item) and environment tested
  • SDLC phase in which it was observed
  • Description of the defect sufficient to enable reproduction and resolution, including logs, db dumps, screenshots, and recordings (if found during execution)
  • expected vs actual results
  • scope or degree of impact (severity) on interests of stakeholder(s)
  • urgency/priority
  • status of defect
  • conclusions, recommendations, and approvals
  • global issues such as other areas that may be affected by a change resulting from the defect
  • change history of ticket
  • appropriate references incl. to the test case that revealed the problem
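The checklist above can be made concrete with a sample report. A hypothetical defect report sketched as a Python dict; the field names follow the checklist, and all values (IDs, product names, dates) are invented for illustration:

```python
# Hypothetical defect report; every value below is an invented example.
defect_report = {
    "id": "DR-0042",
    "title": "Checkout total ignores discount code",
    "date": "2024-05-01",
    "author": "QA team",
    "test_item": "webshop v2.3.1, staging environment",
    "phase_observed": "system testing",
    "description": "Applied code SAVE10 on the checkout page; the "
                   "total did not change. Reproducible on every "
                   "attempt; log and screenshot attached.",
    "expected_result": "Total reduced by 10%",
    "actual_result": "Total unchanged",
    "severity": "major",      # degree of impact on stakeholders
    "priority": "high",       # urgency of the fix
    "status": "open",
    "references": ["TC-checkout-017"],  # test case that revealed it
}

# Minimal completeness check against the core checklist fields
required = {"id", "title", "description", "expected_result",
            "actual_result", "severity", "priority", "status"}
missing = required - defect_report.keys()
print("missing fields:", sorted(missing))  # missing fields: []
```

Note the separation of severity (impact on stakeholders) from priority (urgency of fixing), which the checklist lists as distinct fields.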

57
Q

FL-6.1.1 (K2) Classify test tools according to their purpose and the test activities they support.

A

Tool Support for Mgmt of testing and testware

  • test mgmt tools and ALM tools
  • req’ts mgmt tools
  • defect mgmt tools
  • config mgmt tools
  • CI tools*

Tool support for static testing
- Static analysis tools*

Tool support for test design and implementation

  • Model-Based testing tools
  • Test data prep tools

Tool support for test execution and logging

  • test execution tools
  • coverage tools (e.g., req’t coverage, code coverage*)
  • test harnesses*

Tool support for performance measurement and dynamic analysis

  • performance testing tools
  • dynamic analysis tools*

And misc other tool support for specialized testing needs

  • tools that offer support typically more appropriate for developers (e.g., tools used during component and integration testing)

58
Q

FL-6.1.2 (K1) Identify benefits and risks of test automation.

A

BENEFITS:

  • reduction in manual work saves time
  • greater consistency and repeatability
  • more objective assessment
  • easier access to info about testing

RISKS

  • expectations for tool may be unrealistic
  • time, cost, effort of initial introduction
  • time, effort to achieve significant continuing benefits
  • maintenance effort for test work product generated by the tool
  • tool may be relied on too much
  • version control may be neglected
  • relationships/interoperability issues between critical tools may be neglected
  • vendor may go out of business or open-source project may be suspended
  • vendor may provide poor support, upgrades, or defect fixes
  • new platforms/tools may not be supported by the tool
  • unclear ownership of the tool for mentoring, updates, etc.

59
Q

FL-6.1.3 (K1) Remember special considerations for test execution and test management tools.

A

TEST EXECUTION CONSIDERATIONS:

  • often requires significant effort to achieve significant benefits
  • approaches (capture/playback, data-driven testing, keyword-driven testing)
  • the need for someone with expertise in scripting languages
  • expected results must be compared to actual results, either dynamically (while the test executes) or stored for later comparison
  • model-based testing

TEST MANAGEMENT CONSIDERATIONS:
- often need to interface with spreadsheets to produce useful info in a format that meets the needs of the organization, maintain traceability in a requirements management tool, link with test object version info in config management

60
Q

FL-6.2.1 (K1) Identify the main principles for selecting a tool.

A

1 - assess maturity of organization (strengths / weaknesses)
2 - identify opportunities for improved test process
3 - understand technologies used by test object(s) to select compatible tools
4 - understand build and CI tools already in use and ensure compatibility and integration
5 - evaluate tool against clear and objective req’ts
6 - evaluate vendor, or support for non-commercial tools
7 - consider pros and cons of various licensing models
8 - Estimate the cost-benefit ratio based on a concrete business case

61
Q

FL-6.2.2 (K1) Recall the objectives for using pilot projects to introduce tools.

A

1 - gain in-depth knowledge about tool, incl. strengths and weaknesses
2 - evaluate fit with existing processes and practices; what would need to change
3 - decide on standardization of tool’s usage, management, storage, and maintenance
4 - assess reasonable cost-benefit ratio
5 - understand metrics you’d need to collect and report on, configure tool to ensure that it can do so

62
Q

FL-6.2.3 (K1) Identify the success factors for evaluation, implementation, deployment, and on-going support of test tools in an organization.

A

1 - incremental rollout
2 - process adaptation/improvement
3 - training, coaching, mentoring, support for users
4 - defined guidelines for use (e.g., internal automation standards)
5 - implementing a way to gather usage information from the actual use of the tool
6 - gather lessons from all users
7 - ensure tool both technically and organizationally integrated into SDLC