ISTQB Chapter 1 - Fundamentals of Testing - Keywords Flashcards

Sourced from the ISTQB glossary page. https://istqb-glossary.page/

1
Q

Coverage

A

The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.

Test coverage indicates how much of a defined set of coverage items (e.g., requirements, statements, branches) the test cases actually exercise when they are run. For example, if there are 10 requirements and the executed tests exercise 9 of them, requirements coverage is 90%.
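
A minimal sketch of that arithmetic in Python (the helper name and REQ-n identifiers are invented for illustration):

```python
def coverage_percent(exercised: set, all_items: set) -> float:
    """Coverage = (coverage items exercised / total coverage items) * 100."""
    if not all_items:
        raise ValueError("no coverage items defined")
    return 100 * len(exercised & all_items) / len(all_items)

# Invented example: 9 of 10 requirements are exercised by the test suite.
requirements = {f"REQ-{n}" for n in range(1, 11)}
exercised = requirements - {"REQ-7"}
print(coverage_percent(exercised, requirements))  # 90.0
```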

2
Q

Debugging

A

The process of finding, analyzing and removing the causes of failures in software.

When dynamic testing (see chapter 4) triggers a failure, debugging (a non-testing activity) is concerned with finding causes of this failure (defects), analyzing these causes, and eliminating them. The typical debugging process in this case involves:
* Reproduction of a failure
* Diagnosis (finding the root cause)
* Fixing the cause
When static testing identifies a defect, debugging is concerned with removing it. There is no need for reproduction or diagnosis, since static testing directly finds defects, and cannot cause failures (see chapter 3 of the ISTQB Syllabus).
Testing and debugging are separate activities. Testing can trigger failures that are caused by defects in the software (dynamic testing) or can directly find defects in the test object (static testing).

3
Q

Defect

A

A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g., an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.

Defects can be found in documentation, such as a requirements specification or a test script, in source code, or in a supporting artifact such as a build file. Defects in artifacts produced earlier in the SDLC, if undetected, often lead to defective artifacts later in the lifecycle. If a defect in code is executed, the system may fail to do what it should do, or do something it shouldn’t, causing a failure. Some defects will always result in a failure if executed, while others will only result in a failure in specific circumstances, and some may never result in a failure.

4
Q

Error

A

A human action that produces an incorrect result.

5
Q

Failure

A

Deviation of the component or system from its expected delivery, service or result.

Testing can trigger failures that are caused by defects in the software (dynamic testing) or can directly find defects in the test object (static testing).
If a defect in code is executed, the system may fail to do what it should do, or do something it shouldn’t, causing a failure.

6
Q

Quality

A

The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.

7
Q

Quality Assurance

A

Part of quality management focused on providing confidence that quality requirements will be fulfilled.

QA is a process-oriented, preventive approach that focuses on the implementation and improvement of processes. It works on the basis that if a good process is followed correctly, then it will generate a good product. QA applies to both the development and testing processes, and is the responsibility of everyone on a project.

8
Q

Root Cause

A

A source of a defect such that if it is removed, the occurrence of the defect type is decreased or removed.

A root cause is a fundamental reason for the occurrence of a problem (e.g., a situation that leads to an error). Root causes are identified through root cause analysis, which is typically performed when a failure occurs or a defect is identified. It is believed that further similar failures or defects can be prevented or their frequency reduced by addressing the root cause, such as by removing it.

9
Q

Test Analysis

A

The process of analyzing the test basis and defining test objectives.

Test analysis includes analyzing the test basis to identify testable features and to define and prioritize associated test conditions, together with the related risks and risk levels (see section 5.2). The test basis and the test objects are also evaluated to identify defects they may contain and to assess their testability. Test analysis is often supported by the use of test techniques (see chapter 4). Test analysis answers the question “what to test?” in terms of measurable coverage criteria.

10
Q

Test Case

A

A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.

ISTQB defines a test case as a set of preconditions, inputs, actions (where applicable), expected results, and postconditions, developed based on test conditions. Test cases are step-by-step instructions that the tester follows to validate specific aspects of the system or application functionality.
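
As a hedged illustration of how those parts can map onto an automated test, here is a sketch using Python's unittest (the divide function is a stand-in test object, not from the source):

```python
import unittest

def divide(a: float, b: float) -> float:
    """Stand-in test object: a trivial function under test."""
    return a / b

class DivideTestCase(unittest.TestCase):
    def setUp(self):
        # Precondition: state the test needs before execution
        self.inputs = (10, 4)

    def test_divide_returns_quotient(self):
        a, b = self.inputs              # input values
        actual = divide(a, b)           # action: exercise the test object
        self.assertEqual(actual, 2.5)   # expected result

    def tearDown(self):
        # Postcondition / wrap-up after execution
        self.inputs = None

if __name__ == "__main__":
    unittest.main()
```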

11
Q

Test Basis

A

(Syllabus and Joy):
All documents from which the requirements of a component or system can be inferred. The documentation on which the test cases are based. If a document can be amended only by way of formal amendment procedure, then the test basis is called a frozen test basis.

(Google):
To design test cases we need information about system behavior. We call this the test basis; it can consist of, for example, the system requirements, the functional design, the user manual and/or the administrative procedures. The end products of test design are the test cases.

12
Q

Test Completion

A

The activity that makes test assets available for later use, leaves test environments in a satisfactory condition and communicates the results of testing to relevant stakeholders.

Test completion activities usually occur at project milestones (e.g., release, end of iteration, test level completion). For any unresolved defects, change requests or product backlog items are created. Any testware that may be useful in the future is identified and archived or handed over to the appropriate teams. The test environment is shut down to an agreed state. The test activities are analyzed to identify lessons learned and improvements for future iterations, releases, or projects (see section 2.1.6). A test completion report is created and communicated to the stakeholders.

13
Q

Test Condition

A

An item or event of a component or system that could be verified by one or more test cases, e.g., a function, transaction, feature, quality attribute, or structural element.

14
Q

Test Control

A

A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned.

15
Q

Test Data

A

Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test.

16
Q

Test Design

A

The process of transforming general test objectives into tangible test conditions and test cases.

Test design includes elaborating the test conditions into test cases and other testware (e.g., test charters). This activity often involves the identification of coverage items, which serve as a guide to specify test case inputs. Test techniques (see chapter 4) can be used to support this activity. Test design also includes defining the test data requirements, designing the test environment and identifying any other required infrastructure and tools. Test design answers the question “how to test?”.

17
Q

Test Execution

A

The process of running a test on the component or system under test, producing actual result(s).

Test execution includes running the tests in accordance with the test execution schedule (test runs). Test execution may be manual or automated. Test execution can take many forms, including continuous testing or pair testing sessions. Actual test results are compared with the expected results. The test results are logged. Anomalies are analyzed to identify their likely causes. This analysis allows us to report the anomalies based on the failures observed (see section 5.5).
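
A minimal sketch of that compare-and-log loop (the schedule format and test IDs are invented for illustration):

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

# Invented schedule entries: (test id, function under test, input, expected result)
test_runs = [
    ("TC-01", abs, -5, 5),
    ("TC-02", abs, 3, 3),
]

for test_id, func, test_input, expected in test_runs:
    actual = func(test_input)             # run the test, producing an actual result
    if actual == expected:
        logging.info("%s PASS", test_id)  # log the test result
    else:
        # Anomaly: record actual vs. expected so its likely cause can be analyzed
        logging.error("%s FAIL: expected %r, got %r", test_id, expected, actual)
```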

18
Q

Test Implementation

A

The process of developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts.

Test implementation includes creating or acquiring the testware necessary for test execution (e.g., test data). Test cases can be organized into test procedures and are often assembled into test suites. Manual and automated test scripts are created. Test procedures are prioritized and arranged within a test execution schedule for efficient test execution (see section 5.1.5). The test environment is built and verified to be set up correctly.
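
One way to picture organizing test cases into a prioritized suite, sketched with Python's unittest (the test classes are placeholders, not from the source):

```python
import unittest

class SmokeTests(unittest.TestCase):
    def test_service_responds(self):
        self.assertTrue(True)  # placeholder check

class RegressionTests(unittest.TestCase):
    def test_fixed_defect_stays_fixed(self):
        self.assertEqual(2 + 2, 4)  # placeholder check

loader = unittest.TestLoader()
suite = unittest.TestSuite()
# Execution schedule in priority order: smoke tests first, then regression.
suite.addTests(loader.loadTestsFromTestCase(SmokeTests))
suite.addTests(loader.loadTestsFromTestCase(RegressionTests))

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(suite)
```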

19
Q

Test Monitoring

A

A test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare the actuals to that which was planned.

20
Q

Test Object

A

The component or system to be tested.

21
Q

Test Objective

A

A reason or purpose for designing and executing a test.

The typical test objectives are:
* Evaluating work products such as requirements, user stories, designs, and code
* Triggering failures and finding defects
* Ensuring required coverage of a test object
* Reducing the level of risk of inadequate software quality
* Verifying whether specified requirements have been fulfilled
* Verifying that a test object complies with contractual, legal, and regulatory requirements
* Providing information to stakeholders to allow them to make informed decisions
* Building confidence in the quality of the test object
* Validating whether the test object is complete and works as expected by the stakeholders

Objectives of testing can vary, depending upon the context, which includes the work product being tested, the test level, risks, the software development lifecycle (SDLC) being followed, and factors related to the business context, e.g., corporate structure, competitive considerations, or time to market.

22
Q

Test Planning

A

The activity of establishing or updating a test plan.

Test planning consists of defining the test objectives and then selecting an approach that best achieves the objectives within the constraints imposed by the overall context. Test planning is further explained in section 5.1.

Test planning work products include the test plan, test schedule, risk register, and entry and exit criteria (see section 5.1). A risk register is a list of risks together with risk likelihood, risk impact and information about risk mitigation (see section 5.2). The test schedule, risk register, and entry and exit criteria are often part of the test plan.

Test planning guides the testers’ thinking and forces the testers to confront the future challenges related to risks, schedules, people, tools, costs, effort, etc. The process of preparing a test plan is a useful way to think through the efforts needed to achieve the test project objectives.

23
Q

Test Procedure

A

A sequence of test cases in execution order, and any associated actions that may be required to set up the initial preconditions and any wrap up activities post execution.

24
Q

Test Result

A

The consequence/outcome of the execution of a test.

Test results are used by QA and QC. In QC they are used to fix defects, while in QA they provide feedback on how well the development and test processes are performing.

25
Q

Testing

A

The process consisting of all lifecycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.

26
Q

Testware

A

Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing.

27
Q

Validation

A

Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.

Whilst testing involves verification, i.e., checking whether the system meets specified requirements, it also involves validation, which means checking whether the system meets users’ and other stakeholders’ needs in its operational environment.

28
Q

Verification

A

Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.

Whilst testing involves verification, i.e., checking whether the system meets specified requirements, it also involves validation, which means checking whether the system meets users’ and other stakeholders’ needs in its operational environment.

29
Q

Coverage Item

A

An entity or property used as a basis for test coverage, e.g., equivalence partitions or code statements.

30
Q

(K1) Remember

A

Remember (K1) – the candidate will remember, recognize and recall a term or concept.
Action verbs: identify, recall, remember, recognize.

Examples:
* “Identify typical test objectives.”
* “Recall the concepts of the test pyramid.”
* “Recognize how a tester adds value to iteration and release planning.”

31
Q

(K2) Understand

A

Understand (K2) – the candidate can select the reasons or explanations for statements related to the topic, and can summarize, compare, classify and give examples for the testing concept.
Action verbs: classify, compare, contrast, differentiate, distinguish, exemplify, explain, give examples, interpret, summarize.

Examples:
* “Classify the different options for writing acceptance criteria.”
* “Compare the different roles in testing” (look for similarities, differences or both).
* “Distinguish between project risks and product risks” (allows concepts to be differentiated).
* “Exemplify the purpose and content of a test plan.”
* “Explain the impact of context on the test process.”
* “Summarize the activities of the review process.”

32
Q

(K3) Apply

A

Apply (K3) – the candidate can carry out a procedure when confronted with a familiar task, or select the correct procedure and apply it to a given context.
Action verbs: apply, implement, prepare, use.

Examples:
* “Apply test case prioritization” (should refer to a procedure, technique, process, algorithm, etc.).
* “Prepare a defect report.”
* “Use boundary value analysis to derive test cases” (a minimal sketch follows below).
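
A minimal sketch of that last example, assuming the 2-value boundary convention and an invented 18..65 age rule:

```python
def boundary_values(lo: int, hi: int) -> list:
    # 2-value boundary value analysis: each boundary of the valid range
    # plus its nearest neighbor just outside the range.
    return [lo - 1, lo, hi, hi + 1]

# Invented test condition: ages 18..65 are accepted.
print(boundary_values(18, 65))  # [17, 18, 65, 66] -> four test case inputs
```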

33
Q

Partition

A

A subset of the value domain of a variable within a component or system in which all the values are expected to be treated the same (i.e., an equivalence partition).

34
Q

Quality Control

A

QC is a product-oriented, corrective approach that focuses on those activities supporting the achievement of appropriate levels of quality. Testing is a major form of quality control, while others include formal methods (model checking and proof of correctness), simulation and prototyping.

35
Q

Testing Principle #1

A

Testing shows the presence of defects, not their absence:

Testing can show that defects are present, but cannot prove that there are no defects. The earlier we find defects, the more effective we are.
- We don’t have to test everything, and we shouldn’t test twice.
- Upfront planning saves effort – design & sprint planning.

36
Q

Testing Principle #2

A

Exhaustive Testing is Impossible:

It’s not feasible to test every possible input and scenario, so testing efforts should be focused on areas likely to contain defects.
- We are included in upfront design and state what we should test.
- Risk-based testing & targeted testing (heatmaps & mindmaps help).
- We are QA. We are there as experts to suggest test activities, identify risks and issues, mitigate them, and finally offer recommendations.
- Testing is risk-based: testing activities should be prioritized based on the level of risk associated with different components and features.

37
Q

Testing Principle #3

A

Early Testing (saves time and money):

Testing activities should start as early as possible, reducing costs and risks.
- We are included in upfront design and state what we should test.
- Raise defects early; don’t overthink.

38
Q

Testing Principle #4

A

Defect Clustering:

A small number of modules typically contain most defects, so testing efforts should prioritize these areas.
- Fall back on your original heatmaps & mindmaps (see the sketch below).
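
A minimal sketch of spotting such clusters from a defect log (all entries invented for illustration):

```python
from collections import Counter

# Invented defect log: (defect id, module the fix landed in)
defects = [
    ("D-101", "billing"), ("D-102", "billing"), ("D-103", "auth"),
    ("D-104", "billing"), ("D-105", "reports"), ("D-106", "billing"),
]

# Rank modules by defect count; the top entries mark the clusters that
# deserve the most additional test effort.
hotspots = Counter(module for _, module in defects).most_common()
print(hotspots)  # [('billing', 4), ('auth', 1), ('reports', 1)]
```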

39
Q

Testing Principle #5

A

Pesticide Paradox (tests wear out):

Repeating the same tests won’t catch new defects; hence, test cases should be regularly reviewed and updated.
- We are doing this in the automation suite now, but manual test cases need the same treatment.

(Syllabus):
If the same tests are repeated many times, they become increasingly ineffective in detecting new defects (Beizer 1990). To overcome this effect, existing tests and test data may need to be modified, and new tests may need to be written. However, in some cases, repeating the same tests can have a beneficial outcome, e.g., in automated regression testing (see section 2.2.3).

40
Q

Testing Principle #6

A

Testing is Context-Dependent:

Testing strategies and techniques should be tailored to suit the specific context of the project, considering factors like requirements, risks, and constraints.
- Each squad does things differently.
- Each piece of work can be chunked differently.
- Tester experience and availability vary.
- Always think – shift left & shortest path; document platinum as an option.

41
Q

Testing Principle #7

A

Absence-of-Defects Fallacy:

- Automated testing complements manual testing: it doesn’t replace manual testing, it replaces repetitive testing and regression.

(Syllabus):
Absence-of-defects fallacy:
It is a fallacy (i.e., a misconception) to expect that software verification will ensure the success of a system. Thoroughly testing all the specified requirements and fixing all the defects found could still produce a system that does not fulfill the users’ needs and expectations, that does not help in achieving the customer’s business goals, and that is inferior compared to other competing systems. In addition to verification, validation should also be carried out (Boehm 1981).