Chapter 4 - Test Analysis and Design Flashcards

1
Q

The criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity.

A

Acceptance criteria

2
Q

A collaboration-based test-first approach that defines acceptance tests in the stakeholders’ domain language.

A

Acceptance test-driven development

Abbreviation: ATDD. See also: specification by example

3
Q

Black-box test technique

A

A test technique based on an analysis of the specification of a component or system. Synonyms: black-box test design technique, specification-based test technique

-(also known as specification-based techniques)

-are based on an analysis of the specified behavior of the test object without reference to its internal structure.

-Therefore, the test cases are independent of how the software is implemented.

-Consequently, if the implementation changes, but the required behavior stays the same, then the test cases are still useful.

Examples: Boundary value analysis, Decision table testing, Equivalence partitioning, State transition testing

-is typically associated with dynamic testing, because it involves executing the software to verify its behavior at runtime.

4
Q

-A black-box test technique in which test cases are designed based on boundary values.

A

Boundary value analysis
See also: boundary value, 3-value BVA (three-value boundary value analysis)

Boundary Value Analysis (BVA) is a technique based on exercising the boundaries of equivalence partitions.
Therefore, BVA can only be used for ordered partitions.

5
Q

The coverage of branches in a control flow graph

A

Branch coverage

-Coverage is measured as the number of branches exercised by the test cases divided by the total number of branches, and is expressed as a percentage.

-When 100% branch coverage is achieved, all branches in the code, unconditional and conditional, are exercised by test cases.
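
A minimal sketch of the calculation in Python (the function name and the figures of 6 exercised out of 8 total branches are illustrative assumptions, not from the syllabus):

# Hypothetical example: computing branch coverage as a percentage.
def branch_coverage(branches_exercised, branches_total):
    return 100.0 * branches_exercised / branches_total

# Suppose the control flow graph has 8 branches and the test cases exercise 6 of them.
print(branch_coverage(6, 8))  # 75.0 -> 75% branch coverage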

6
Q

An experience-based test technique in which test cases are designed to exercise the items of a checklist

A

Checklist-based testing

7
Q

An approach to testing that focuses on defect avoidance by collaborating among stakeholders

A

Collaboration-based test approach

8
Q

The degree to which specified coverage items are exercised by a test suite, expressed as a percentage. Synonyms: test coverage

A

Coverage

9
Q

An attribute or combination of attributes derived from one or more test conditions by using a test technique. See also: coverage criteria

A

Coverage item

10
Q

A black-box test technique in which test cases are designed to exercise the combinations of conditions and the resulting actions shown in a decision table.

A

Decision table testing

Decision tables are used for testing the implementation of system requirements that specify how different combinations of conditions result in different outcomes.
Decision tables are an effective way of recording complex logic, such as business rules.

When creating decision tables, the conditions and the resulting actions of the system are defined. These form the rows of the table.

Each column corresponds to a decision rule that defines a unique combination of conditions, along with the associated actions.

In limited-entry decision tables all the values of the conditions and actions (except for irrelevant or infeasible ones; see below) are shown as Boolean values (true or false). Alternatively, in extended-entry decision tables some or all the conditions and actions may also take on multiple values (e.g., ranges of numbers, equivalence partitions, discrete values).
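
A minimal sketch of a limited-entry decision table in Python; the discount business rule, the condition names, and the four rules below are invented for illustration:

# Hypothetical limited-entry decision table: conditions and actions are Boolean,
# and each rule (a column of the table) is one combination of condition values.
rules = [
    # is_member, order_over_100 -> give_discount
    {"is_member": True,  "order_over_100": True,  "give_discount": True},   # rule 1
    {"is_member": True,  "order_over_100": False, "give_discount": False},  # rule 2
    {"is_member": False, "order_over_100": True,  "give_discount": False},  # rule 3
    {"is_member": False, "order_over_100": False, "give_discount": False},  # rule 4
]

def gives_discount(is_member, order_over_100):
    # Illustrative implementation under test.
    return is_member and order_over_100

# One test case per rule: exercise the combination of conditions, check the action.
for rule in rules:
    actual = gives_discount(rule["is_member"], rule["order_over_100"])
    assert actual == rule["give_discount"], rule
print("all decision rules exercised")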

11
Q

A black-box test technique in which test conditions are equivalence partitions exercised by one representative member of each partition. Synonyms: partition testing

A

Equivalence partitioning

Equivalence Partitioning (EP) divides data into partitions (known as equivalence partitions) based on the expectation that all the elements of a given partition are to be processed in the same way by the test object.

The theory behind this technique is that if a test case that tests one value from an equivalence partition detects a defect, this defect should also be detected by test cases that test any other value from the same partition. Therefore, one test per partition is sufficient.
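
A hedged sketch of the idea in Python; the ticket-pricing function, the age partitions, and the representative values are assumptions made for illustration:

# Hypothetical test object with three valid equivalence partitions on age
# (child: 0-12, adult: 13-64, senior: 65+) and one invalid partition (age < 0).
def ticket_price(age):
    if age < 0:
        raise ValueError("invalid age")
    if age <= 12:
        return 5
    if age <= 64:
        return 10
    return 7

# One representative value per partition is considered sufficient.
representatives = {
    "invalid (age < 0)": (-3, ValueError),
    "child (0-12)": (8, 5),
    "adult (13-64)": (30, 10),
    "senior (65+)": (70, 7),
}

for partition, (value, expected) in representatives.items():
    if expected is ValueError:
        try:
            ticket_price(value)
            raise AssertionError(partition + ": expected an error")
        except ValueError:
            pass
    else:
        assert ticket_price(value) == expected, partition
print("each partition covered at least once")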

12
Q

A test technique in which tests are derived on the basis of the tester’s knowledge of past failures, or general knowledge of failure modes.

A

Error guessing

13
Q

A test technique based on the tester’s experience, knowledge and intuition. Synonyms: experience-based test design technique, experience-based technique

A

Experience-based test technique

-effectively use the knowledge and experience of testers for the design and implementation of test cases.

-The effectiveness of these techniques depends heavily on the tester’s skills.

-Experience-based test techniques can detect defects that may be missed using the black-box and white-box test techniques. Hence, experience-based test techniques are complementary to the black-box and white-box test techniques.

Examples: Checklist-based testing, Exploratory testing

14
Q

An approach to testing in which the testers dynamically design and execute tests based on their knowledge, exploration of the test item and the results of previous tests. See also: test charter

A

Exploratory testing

15
Q

A black-box test technique in which test cases are designed to exercise elements of a state transition model. Synonyms: finite state testing

A

State transition testing

A state transition diagram models the behavior of a system by showing its possible states and valid state transitions.

A transition is initiated by an event, which may be additionally qualified by a guard condition.

The transitions are assumed to be instantaneous and may sometimes result in the software taking action.
The common transition labeling syntax is as follows: “event [guard condition] / action”. Guard conditions and actions can be omitted if they do not exist or are irrelevant for the tester.
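
A minimal Python sketch of the "event [guard condition] / action" idea; the two door-lock states, the events, and the guard are invented for illustration:

# Hypothetical test object: a door that can be "Locked" or "Unlocked".
def handle_event(state, event, pin_ok):
    # Transition "enterPin [PIN is correct] / open door"
    if state == "Locked" and event == "enterPin" and pin_ok:
        return "Unlocked", "open door"
    # Transition "lock / -" (no guard condition, no action)
    if state == "Unlocked" and event == "lock":
        return "Locked", None
    # Any other combination is an invalid transition: stay in the same state.
    return state, None

state, action = handle_event("Locked", "enterPin", pin_ok=True)
assert (state, action) == ("Unlocked", "open door")
state, action = handle_event(state, "lock", pin_ok=False)
assert state == "Locked"
print("valid transitions exercised")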

16
Q

The coverage of executable statements

A

Statement coverage

Coverage is measured as the number of statements exercised by the test cases divided by the total number of executable statements in the code, and is expressed as a percentage.

When 100% statement coverage is achieved, it ensures that all executable statements in the code have been exercised at least once. In particular, this means that each statement with a defect will be executed, which may cause a failure demonstrating the presence of the defect. However, exercising a statement with a test case will not detect defects in all cases.

For example, it may not detect defects that are data dependent (e.g., a division by zero that only fails when a denominator is set to zero).

Also, 100% statement coverage does not ensure that all the decision logic has been tested as, for instance, it may not exercise all the branches (see section 4.3.2) in the code.
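
A small Python sketch of the data-dependent defect mentioned above; the function and the test value are invented:

# Hypothetical example: one test case achieves 100% statement coverage of
# average_per_item, yet the division-by-zero defect stays undetected.
def average_per_item(total, count):
    return total / count   # defect: fails when count == 0

assert average_per_item(10, 2) == 5   # exercises every executable statement
# average_per_item(10, 0) would raise ZeroDivisionError, but no test executes it,
# so 100% statement coverage does not reveal the defect.
print("100% statement coverage, defect not detected")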

17
Q

A procedure used to define test conditions, design test cases, and specify test data. Synonyms: test design technique

A

Test technique

18
Q

A test technique based only on the internal structure of a component or system. Synonyms: white-box test design technique, structure-based test technique

A

White-box test technique

-(also known as structure-based techniques)

-are based on an analysis of the test object’s internal structure and processing.

-As the test cases are dependent on how the software is designed, they can only be created after the design or implementation of the test object.

Examples: Statement testing, Branch testing

-can be dynamic or static testing

19
Q

support the tester in test analysis (what to test) and in test design (how to test).

A

Test techniques

-Test techniques help to develop a relatively small, but sufficient, set of test cases in a systematic way.

20
Q

Test techniques also help the tester to

A

define test conditions

identify coverage items

and identify test data during test analysis and design.

21
Q

test techniques are classified as

A

black-box,
white-box,
and experience-based.

22
Q

A white-box test technique in which test cases are designed to execute statements.

A

Statement testing

-the coverage items are executable statements.

-The aim is to design test cases that exercise statements in the code until an acceptable level of coverage is achieved.

23
Q

A white-box test technique in which the test conditions are branches.

A

Branch testing

24
Q

black-box vs white-box testing

A
1. Black-Box Testing:
Focus: Testing the functionality of the application without knowing the internal code, structure, or implementation.

Tester’s Perspective: The tester is only concerned with the inputs and outputs. They do not have knowledge of how the system processes those inputs to generate outputs.

2. White-Box Testing (also called Glass-Box or Clear-Box Testing):
Focus: Testing the internal code, logic, and structure of the application.

Tester’s Perspective: The tester has full knowledge of the internal workings of the system and tests the software from a developer’s point of view.

25
Q

Experience-based test technique examples

A

Checklist-based testing,

Exploratory testing,

error guessing

26
Q

white-box test techniques examples

A

Statement testing,

Branch testing

27
Q

black-box test techniques examples

A

Boundary value analysis,

Decision table testing,

Equivalence partitioning,

State transition testing

28
Q

specification-based techniques

A

Black-box test techniques

29
Q

structure-based techniques

A

White-box test techniques

30
Q

finite state testing

A

State transition testing

31
Q

Equivalence partitions can be identified for any data element related to the test object, including

A

inputs,
outputs,
configuration items,
internal values,
time-related values,
and interface parameters.

32
Q

Equivalence partitions may be

A

The partitions may be continuous or discrete, ordered or unordered, finite or infinite. The partitions must not overlap and must be non-empty sets.

33
Q

In EP

A

the coverage items are the equivalence partitions.

To achieve 100% coverage with this technique, test cases must exercise all identified partitions (including invalid partitions) by covering each partition at least once.

34
Q

in BVA

A

The minimum and maximum values of a partition are its boundary values. In the case of BVA, if two elements belong to the same partition, all elements between them must also belong to that partition.

BVA focuses on the boundary values of the partitions because developers are more likely to make errors with these boundary values. Typical defects found by BVA are located where implemented boundaries are misplaced to positions above or below their intended positions or are omitted altogether.

This syllabus covers two versions of BVA: 2-value BVA and 3-value BVA. They differ in the number of coverage items per boundary that must be exercised to achieve 100% coverage.
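
As an illustration, assume an ordered partition of valid values from 1 to 100. Roughly, 2-value BVA exercises each boundary value and its closest neighbour in the adjacent partition, while 3-value BVA also adds the closest neighbour inside the partition (the partition and the resulting test values below are assumptions):

# Hypothetical partition of valid inputs: the integers 1..100.
lower, upper = 1, 100

# 2-value BVA: each boundary value plus its closest neighbour in the adjacent partition.
two_value = {lower - 1, lower, upper, upper + 1}        # {0, 1, 100, 101}

# 3-value BVA: additionally the closest neighbour inside the partition.
three_value = two_value | {lower + 1, upper - 1}        # adds {2, 99}

print(sorted(two_value))    # [0, 1, 100, 101]
print(sorted(three_value))  # [0, 1, 2, 99, 100, 101]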

35
Q

State Transition Testing

A

A state table is a model equivalent to a state transition diagram. Its rows represent states, and its columns represent events (together with guard conditions if they exist). Table entries (cells) represent transitions, and contain the target state, as well as the resulting actions, if defined. In contrast to the state transition diagram, the state table explicitly shows invalid transitions, which are represented by empty cells.

A test case based on a state transition diagram or state table is usually represented as a sequence of events, which results in a sequence of state changes (and actions, if needed). One test case may, and usually will, cover several transitions between states.
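
A hedged sketch of a state table as a Python dictionary; the two-state door example is invented, and empty cells (None) mark the invalid transitions:

# Hypothetical state table: rows are states, columns are events,
# cells hold (target state, action) or None for an invalid transition.
state_table = {
    "Locked":   {"enterPin": ("Unlocked", "open door"), "lock": None},
    "Unlocked": {"enterPin": None,                      "lock": ("Locked", None)},
}

# A test case is a sequence of events; it usually covers several transitions.
state = "Locked"
for event in ["enterPin", "lock"]:
    cell = state_table[state][event]
    assert cell is not None, "invalid transition attempted from " + state
    state, action = cell
print("final state:", state)  # final state: Locked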

36
Q

There exist many coverage criteria for state transition testing. This syllabus discusses three of them.

A

—In all states coverage,
the coverage items are the states.

To achieve 100% all states coverage, test cases must ensure that all the states are visited.

Coverage is measured as the number of visited states divided by the total number of states, and is expressed as a percentage

—In valid transitions coverage
(also called 0-switch coverage),
the coverage items are single valid transitions.

To achieve 100% valid transitions coverage, test cases must exercise all the valid transitions.

Coverage is measured as the number of exercised valid transitions divided by the total number of valid transitions, and is expressed as a percentage.

—In all transitions coverage,
the coverage items are all the transitions shown in a state table.

To achieve 100% all transitions coverage, test cases must exercise all the valid transitions and attempt to execute invalid transitions.

Testing only one invalid transition in a single test case helps to avoid fault masking, i.e., a situation in which one defect prevents the detection of another.

Coverage is measured as the number of valid and invalid transitions exercised or attempted to be covered by executed test cases, divided by the total number of valid and invalid transitions, and is expressed as a percentage.
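
A minimal sketch of the three measures for a hypothetical model with 2 states, 2 valid transitions, and 4 state-table cells (so 2 invalid transitions); all figures are assumptions:

# Hypothetical test results against a small state transition model.
states_total, states_visited = 2, 2
valid_total, valid_exercised = 2, 2
all_total, all_attempted = 4, 3   # one invalid transition not yet attempted

all_states_coverage = 100.0 * states_visited / states_total          # 100.0
valid_transitions_coverage = 100.0 * valid_exercised / valid_total   # 100.0
all_transitions_coverage = 100.0 * all_attempted / all_total         # 75.0

print(all_states_coverage, valid_transitions_coverage, all_transitions_coverage)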

37
Q

difference between state transition testing coverages:

A

All states coverage is weaker than valid transitions coverage, because it can typically be achieved without exercising all the transitions. Valid transitions coverage is the most widely used coverage criterion. Achieving full valid transitions coverage guarantees full all states coverage. Achieving full all transitions coverage guarantees both full all states coverage and full valid transitions coverage and should be a minimum requirement for mission and safety-critical software.

38
Q

white-box test techniques

A

Statement Testing and Statement Coverage

Branch Testing and Branch Coverage

39
Q

branch testing

A

Branch testing: A white-box test technique in which the test conditions are branches.

-A branch is a transfer of control between two nodes in the control flow graph, which shows the possible sequences in which source code statements are executed in the test object.

-Each transfer of control can be either unconditional (i.e., straight-line code) or conditional (i.e., a decision outcome).

-In branch testing the coverage items are branches and the aim is to design test cases to exercise branches in the code until an acceptable level of coverage is achieved.

Conditional branches typically correspond to a true or false outcome from an “if…then” decision, an outcome from a switch/case statement, or a decision to exit or continue in a loop.

However, exercising a branch with a test case will not detect defects in all cases. For example, it may not detect defects requiring the execution of a specific path in the code.
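
An illustrative Python function with a single "if...then" decision; the function and the chosen input values are assumptions:

# Hypothetical test object: the decision produces a true branch and a false branch.
def classify(temperature):
    if temperature > 30:        # true outcome
        return "hot"
    return "mild"               # false outcome

# Two test cases are enough here to exercise both conditional branches.
assert classify(35) == "hot"    # exercises the true branch
assert classify(20) == "mild"   # exercises the false branch
print("both decision outcomes exercised")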

40
Q

-Branch coverage subsumes statement coverage.

A

This means that any set of test cases achieving 100% branch coverage also achieves 100% statement coverage (but not vice versa).
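
A small sketch of why the converse does not hold: the single test below executes every statement of an "if" without an "else", yet never exercises the false branch (the function is invented):

# Hypothetical example: 100% statement coverage without 100% branch coverage.
def apply_discount(price, is_member):
    if is_member:               # the false outcome has no statement of its own
        price = price * 0.9
    return price

# This one test executes every statement, but the branch taken when
# is_member is False is never exercised, so branch coverage stays below 100%.
assert apply_discount(100, True) == 90.0
print("100% statement coverage, less than 100% branch coverage")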

41
Q

value of white-box testing

A

A fundamental strength that all white-box techniques share is that the entire software implementation is taken into account during testing, which facilitates defect detection even when the software specification is vague, outdated or incomplete.
A corresponding weakness is that if the software does not implement one or more requirements, white-box testing may not detect the resulting defects of omission, i.e., functionality that was left out (Watson 1996).

White-box techniques can be used in static testing (e.g., during dry runs of code). They are well suited to reviewing code that is not yet ready for execution (Hetzel 1988), as well as pseudocode and other high-level or top-down logic which can be modeled with a control flow graph.

Performing only black-box testing does not provide a measure of actual code coverage. White-box coverage measures provide an objective measurement of coverage and provide the necessary information to allow additional tests to be generated to increase this coverage, and subsequently increase confidence in the code.

42
Q

black-box testing with dynamic testing

A

Black-box testing is often related to dynamic testing because it typically involves executing the software to verify its external functionality and behavior.

43
Q

Documentation of the goal or objective for a test session. (exploratory testing)

A

Test charter

44
Q

Experience-based Test Techniques

A

—Error guessing-A test technique in which tests are derived on the basis of the tester’s knowledge of past failures, or general knowledge of failure modes.

—Exploratory testing-An approach to testing in which the testers dynamically design and execute tests based on their knowledge, exploration of the test item and the results of previous tests.

—Checklist-based testing-An experience-based test technique in which test cases are designed to exercise the items of a checklist.

45
Q

error guessing:

A

-Error guessing is a technique used to anticipate the occurrence of errors, defects, and failures, based on the tester’s knowledge, including:

How the application has worked in the past


The types of errors the developers tend to make and the types of defects that result from these errors


The types of failures that have occurred in other, similar applications

46
Q

Error guessing: In general, errors, defects and failures may be related to:

A

input (e.g., correct input not accepted, parameters wrong or missing),

output (e.g., wrong format, wrong result)

logic (e.g., missing cases, wrong operator),

computation (e.g., incorrect operand, wrong computation),

interfaces (e.g., parameter mismatch, incompatible types),

or data (e.g., incorrect initialization, wrong type).

47
Q

are a methodical approach to the implementation of error guessing.

A

Fault attacks

-This technique requires the tester to create or acquire a list of possible errors, defects and failures, and to design tests that will identify defects associated with the errors, expose the defects, or cause the failures.

-These lists can be built based on experience, defect and failure data, or from common knowledge about why software fails.

48
Q

tests are simultaneously designed, executed, and evaluated while the tester learns about the test object.

A

exploratory testing

The testing is used to learn more about the test object, to explore it more deeply with focused tests, and to create tests for untested areas.

49
Q

A test approach in which test activities are planned as test sessions.

A

Session-based testing

50
Q

An uninterrupted period of time spent in executing tests.

A

test session

51
Q

Exploratory testing is sometimes conducted using session-based testing to structure the testing.

A
-In a session-based approach, exploratory testing is conducted within a defined time-box.

-The tester uses a test charter containing test objectives to guide the testing.

-The test session is usually followed by a debriefing that involves a discussion between the tester and stakeholders interested in the test results of the test session.

-In this approach test objectives may be treated as high-level test conditions.

-Coverage items are identified and exercised during the test session.

-The tester may use test session sheets to document the steps followed and the discoveries made.

52
Q

when is Exploratory testing useful?

A

Exploratory testing is useful when there are few or inadequate specifications or there is significant time pressure on the testing.

Exploratory testing is also useful to complement other more formal test techniques.

53
Q

Exploratory testing will be more effective if the tester is

A

experienced,

has domain knowledge

has a high degree of essential skills, like analytical skills, curiosity and creativeness (see section 1.5.1).

54
Q

Exploratory testing can incorporate the use of other test techniques:

A

Boundary Value Analysis (BVA): Focuses on testing the boundaries between partitions, such as the edge values of input ranges, where errors are more likely to occur.

Decision Table Testing: Uses a table to represent combinations of inputs and corresponding outputs, helping to ensure different input conditions and their outcomes are covered.

State Transition Testing: Tests how the system behaves when transitioning from one state to another, ensuring all state changes occur as expected based on different input conditions.

Use Case Testing: Involves testing based on user scenarios or workflows to ensure that the system behaves correctly when performing real-world tasks.

Pairwise Testing (All-Pairs Testing): Tests combinations of input values where each pair of input variables is tested at least once, which can help uncover defects related to interactions between variables.

Error Guessing: Relies on the tester’s experience to anticipate where defects are likely to occur and focuses testing efforts in those areas.

Checklist-Based Testing: Uses a predefined list of items to be checked or conditions to be verified, guiding the exploratory process while still allowing for flexibility.

Equivalence Partitioning (EP): Divides inputs into groups (partitions) that are expected to produce similar behavior.

55
Q

Checklist-Based Testing

A

In checklist-based testing, a tester designs, implements, and executes tests to cover test conditions from a checklist.

Checklists can be built based on experience, knowledge about what is important for the user, or an understanding of why and how software fails.

Checklist items are often phrased in the form of a question.

Checklists can be created to support various test types, including functional and non-functional testing (e.g., 10 heuristics for usability testing (Nielsen 1994)).

Some checklist entries may gradually become less effective over time because the developers will learn to avoid making the same errors.
New entries may also need to be added to reflect newly found high severity defects.

Therefore, checklists should be regularly updated based on defect analysis. However, care should be taken to avoid letting the checklist become too long (Gawande 2009).
In the absence of detailed test cases, checklist-based testing can provide guidelines and some degree of consistency for the testing.

If the checklists are high-level, some variability in the actual testing is likely to occur, resulting in potentially greater coverage but less repeatability.

56
Q

Checklist-Based Testing:
Checklists should not contain items that:

A

can be checked automatically,

are better suited as entry/exit criteria,

or are too general.

57
Q

Checklist-Based Testing:
It should be possible to check each item separately and directly.

A

These items may refer to:
requirements,

graphical interface properties,

quality characteristics,

or other forms of test conditions.

58
Q

Approaches that also focus on defect avoidance through collaboration and communication among stakeholders.

A

Collaboration-based Test Approaches

59
Q

represents a feature that will be valuable to either a user or purchaser of a system or software.

A

A user story

60
Q

User stories have three critical aspects, called together the “3 C’s”:

A

Card – the medium describing a user story (e.g., an index card, an entry in an electronic board)

Conversation – explains how the software will be used (can be documented or verbal)

Confirmation – the acceptance criteria (see section 4.5.2)

61
Q

The most common format for a user story is

A

“As a [role], I want [goal to be accomplished], so that I can [resulting business value for the role]”, followed by the acceptance criteria.

62
Q

Collaborative authorship of the user story can use techniques such as

A

brainstorming and mind mapping.

63
Q

The collaboration allows the team to obtain a shared vision of what should be delivered, by taking into account three perspectives:

A

business,

development

and testing.

64
Q

Good user stories should be: INVEST

A

Independent,
Negotiable,
Valuable,
Estimable,
Small
Testable
(INVEST).

65
Q

If a stakeholder does not know how to test a user story, this may indicate

A

that the user story is not clear enough,

or that it does not reflect something valuable to them,

or that the stakeholder just needs help in testing.

66
Q

for a user story are the conditions that an implementation of the user story must meet to be accepted by stakeholders.

A

Acceptance criteria

From this perspective, acceptance criteria may be viewed as the test conditions that should be exercised by the tests.

Acceptance criteria are usually a result of the Conversation (3C’s).

67
Q

Acceptance criteria are used to:

A

Define the scope of the user story

Reach consensus among the stakeholders

Describe both positive and negative scenarios

Serve as a basis for the user story acceptance testing (see section 4.5.3)

Allow accurate planning and estimation

68
Q

There are several ways to write acceptance criteria for a user story. The two most common formats are:

A

Scenario-oriented (e.g., Given/When/Then format used in BDD, see section 2.1.3)


Rule-oriented (e.g., bullet point verification list, or tabulated form of input-output mapping)

(*tabulated: presented as a table or in columns)

—Most acceptance criteria can be documented in one of these two formats. However, the team may use another, custom format, as long as the acceptance criteria are well-defined and unambiguous.
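
As a hedged illustration, both formats can be turned into simple automated checks; the login user story, the function names, and the rule table below are all invented:

# Hypothetical scenario-oriented acceptance criterion for a login user story:
#   Given a registered user,
#   When they log in with the correct password,
#   Then they see their dashboard.
def login(registered, password_correct):
    # Illustrative system behavior under test.
    return "dashboard" if registered and password_correct else "error"

def test_registered_user_logs_in():
    # Given a registered user / When the password is correct / Then show dashboard
    assert login(registered=True, password_correct=True) == "dashboard"

# Hypothetical rule-oriented form: a tabulated input-output mapping.
rules = [
    # registered, password_correct -> expected page
    (True,  True,  "dashboard"),
    (True,  False, "error"),
    (False, True,  "error"),
]

def test_login_rules():
    for registered, password_correct, expected in rules:
        assert login(registered, password_correct) == expected

test_registered_user_logs_in()
test_login_rules()
print("acceptance criteria exercised")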

69
Q

Collaboration-based Test Approaches

A

Collaborative User Story Writing

Acceptance Criteria

Acceptance Test-driven Development (ATDD)

70
Q

Acceptance Test-driven Development (ATDD)

A

ATDD is a test-first approach (see section 2.1.3).

Test cases are created prior to implementing the user story.

The test cases are created by team members with different perspectives, e.g., customers, developers, and testers.

Test cases may be executed manually or automated.

71
Q

ATDD steps:

A

The first step is a specification workshop where the user story and (if not yet defined) its acceptance criteria are analyzed, discussed, and written by the team members.

Incompleteness, ambiguities, or defects in the user story are resolved during this process.

The next step is to create the test cases. This can be done by the team as a whole or by the tester individually.

The test cases are based on the acceptance criteria and can be seen as examples of how the software works.

This will help the team implement the user story correctly.

Since examples and tests are the same, these terms are often used interchangeably. During test design, black-box, white-box, and experience-based test techniques may be applied.

72
Q

ATDD steps testing:

A

Typically, the first test cases are positive, confirming the correct behavior without exceptions or error conditions, and comprising the sequence of activities executed if everything goes as expected.

After the positive test cases are done, the team should perform negative testing.

Finally, the team should cover non-functional quality characteristics as well (e.g., performance efficiency, usability).

73
Q

test cases in ATDD

A

Test cases should be expressed in a way that is understandable for the stakeholders.

Typically, test cases contain sentences in natural language involving the necessary preconditions (if any), the inputs, and the postconditions.

The test cases must cover all the characteristics of the user story and should not go beyond the story.
However, the acceptance criteria may detail some of the issues described in the user story.

In addition, no two test cases should describe the same characteristics of the user story.

74
Q

ATDD: When captured in a format supported by a test automation framework,

A

the developers can automate the test cases by writing the supporting code as they implement the feature described by a user story.

The acceptance tests then become executable requirements.

75
Q

is a collaborative technique well-suited for user story writing, used for identifying relationships between user stories and potential dependencies.

A

Mind mapping

76
Q

are a black-box testing technique used to divide input data into different sets or groups, where each group is expected to behave in the same way.

A

Equivalence partitions (also called equivalence classes)