CHAPTER 4 PART 2 TEST DESIGN (from White-Box Testing) Flashcards

1
Q

WHITE-BOX TESTING

A

• testing based on the INTERNAL STRUCTURE OF THE TEST OBJECT
• most often used in component testing, where the model of the test object is the internal structure of the code, represented for example by a CONTROL FLOW GRAPH (CFG)
• CAN BE APPLIED AT ALL TEST LEVELS:
- component testing (example structure: CFG)
- integration testing (example structure: CALL GRAPH, API)
- system testing (example structure: BUSINESS PROCESS MODELED IN BPMN, PROGRAM MENU)
- acceptance testing (example structure: WEBSITE PAGE STRUCTURE)

2
Q

WHITE-BOX TEST TECHNIQUES

A

• STATEMENT TESTING
• BRANCH TESTING
• MC/DC TESTING
• MULTIPLE CONDITION TESTING
• LOOP TESTING
• BASIS PATH TESTING

3
Q

STATEMENT TESTING AND STATEMENT COVERAGE

A

• the simplest and weakest white-box technique
• covers EXECUTABLE STATEMENTS in the source code of the program

• THE EXPECTED RESULTS ARE DERIVED FROM A SPECIFICATION EXTERNAL TO THE CODE, not from the code itself

COVERAGE ITEMS
executable statements
STATEMENT COVERAGE
the number of executable statements exercised by the tests divided by the total number of executable statements in the analyzed code
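The quotient above can be computed mechanically. A minimal sketch, assuming a hypothetical `classify` function under test; `executed_lines` records executed lines via a trace hook, roughly the way real coverage tools do:

```python
import sys

def classify(x):  # hypothetical function under test
    result = "non-positive"
    if x > 0:
        result = "positive"
    return result

def executed_lines(func, *args):
    """Record the line numbers executed inside func via a trace hook."""
    lines = set()
    def tracer(frame, event, arg):
        if frame.f_code is func.__code__ and event == "line":
            lines.add(frame.f_lineno)
        return tracer
    prev = sys.gettrace()
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(prev)
    return lines

# The four executable statements of classify, as line numbers.
first = classify.__code__.co_firstlineno
all_stmts = {first + 1, first + 2, first + 3, first + 4}

covered = executed_lines(classify, 5) & all_stmts  # x=5 takes the True branch
print(f"statement coverage: {len(covered) / len(all_stmts):.0%}")  # 100%
```

Calling `classify(-1)` instead would skip the statement inside the `if`, giving 3/4 = 75% statement coverage.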

4
Q

BRANCH TESTING

A

BRANCH
control flow between two vertices of a CFG
can be unconditional or conditional
UNCONDITIONAL BRANCH - a branch between vertices A and B means that after execution of statement A is completed, control MUST move to statement B
CONDITIONAL BRANCH - between A and B: after execution of A is completed, control CAN move on to B. Conditional branches come out of decision vertices, i.e. places in the code where a decision is made on which the further course of control depends
examples of decision statements:
IF-THEN; IF-THEN-ELSE; SWITCH-CASE; and the loop conditions of WHILE, DO-WHILE, and FOR loops
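The branch kinds above can be pointed at directly in code. An illustrative sketch (the functions `grade` and `count_down` are invented for this example):

```python
def grade(score):
    # Decision vertex: the if-condition produces two CONDITIONAL branches.
    if score >= 50:      # True branch -> "pass" path
        label = "pass"
    else:                # False branch -> "fail" path
        label = "fail"
    # UNCONDITIONAL branch: from the if/else join point control MUST
    # move on to the return statement.
    return label

def count_down(n):
    steps = 0
    # A loop condition is also a decision vertex: each evaluation either
    # enters the body (True branch) or leaves the loop (False branch).
    while n > 0:
        n -= 1
        steps += 1
    return steps
```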

5
Q

BRANCH COVERAGE

A

Calculated as the number of branches that were exercised during test execution divided by the total number of branches in the code
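A worked example of that quotient, using hypothetical hand-instrumented code (each branch records an id when taken; real tools do this instrumentation automatically):

```python
taken = set()  # records the id of every branch exercised so far

def shipping_cost(weight, express):  # hypothetical function under test
    if weight > 10:
        taken.add("w_heavy")     # True branch of the first decision
        cost = 20
    else:
        taken.add("w_light")     # False branch
        cost = 10
    if express:
        taken.add("express_yes")
        cost += 5
    else:
        taken.add("express_no")
    return cost

ALL_BRANCHES = {"w_heavy", "w_light", "express_yes", "express_no"}

shipping_cost(15, True)    # exercises w_heavy and express_yes
shipping_cost(15, False)   # adds express_no; w_light remains untested
print(len(taken) / len(ALL_BRANCHES))  # 3 / 4 = 0.75, i.e. 75% branch coverage
```

A third test with `weight <= 10` would exercise the remaining `w_light` branch and bring branch coverage to 100%.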

6
Q

VALUE OF WHITE-BOX TESTING

A

• STATEMENT TESTING ERROR HYPOTHESIS
if there is a defect in the program, it must be located in one of the executable statements; if we achieve 100% statement coverage, we can be sure that the defective statement was executed
BUT IT'S A WEAK TECHNIQUE, WITH BLIND SPOTS
• BRANCH TESTING ERROR HYPOTHESIS
if there is a defect in the program, it causes an erroneous control flow
if we achieve 100% branch coverage, we must have exercised at least once any transition that was wrong

ANY TEST SET THAT ACHIEVES 100% BRANCH COVERAGE ALSO ACHIEVES 100% STATEMENT COVERAGE
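The blind spot and the subsumption claim can both be demonstrated in a few lines. A sketch with an invented `apply_discount` function: a single test reaches every statement yet exercises only one of the two branches, so the defect on the untested branch survives statement testing:

```python
def apply_discount(price, is_member):   # invented example with a seeded defect
    rate = None
    if is_member:                       # no else: the False path leaves
        rate = 0.5                      # rate uninitialised for non-members
    return price * rate                 # TypeError when is_member is False

# One test executes every statement (100% statement coverage) and passes:
assert apply_discount(200, True) == 100.0

# 100% branch coverage additionally requires the False branch, which fails:
try:
    apply_discount(200, False)
except TypeError:
    print("the untested False branch hides a defect")
```

Note the converse direction: any test set covering both branches necessarily executes all statements, which is why branch coverage subsumes statement coverage.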

7
Q

3 EXPERIENCE-BASED TEST TECHNIQUES

A
  1. ERROR GUESSING
  2. EXPLORATORY TESTING
  3. CHECKLIST-BASED TESTING
8
Q

ERROR GUESSING

A
  • the tester's KNOWLEDGE REFERRING TO:
    • how the component or system under test has worked so far
    • typical errors made by developers, architects, analysts, and other members of the team
    • failures that have occurred previously in similar applications tested by the tester, or that the tester has heard of
9
Q

ERRORS, DEFECTS, FAILURES TYPICAL TO FIND IN ERROR GUESSING - RELATED TO:

A

• INPUT - e.g. valid input not accepted, invalid input accepted, wrong parameter value, missing input parameter
• OUTPUT - e.g. wrong output format, incorrect output, correct output at the wrong time, incomplete output, grammatical and punctuation errors
• LOGIC - e.g. missing cases to consider, duplicate cases to consider, invalid logic operator, missing condition, invalid loop iteration
• CALCULATIONS - e.g. wrong algorithm, inefficient algorithm, missing calculation, invalid operand, bracketing error, insufficient precision of the result etc.
• INTERFACE - e.g. incorrect processing of events from the interface, time-related failures in input/output, calling a wrong or non-existent function, incompatible parameter types
• DATA - e.g. incorrect initialization, definition, or declaration of a variable, incorrect reference to data, incorrect scaling or unit of data, wrong data dimension, incorrect index
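An error-guessing sketch targeting the INPUT and CALCULATIONS categories, using a hypothetical `average` function; the empty-sequence test comes from the classic guess that a missing input will surface as a division by zero:

```python
def average(values):  # hypothetical function under test
    # Guards against the guessed "missing/empty input" defect, which would
    # otherwise surface as a ZeroDivisionError (a CALCULATIONS-type failure).
    if not values:
        raise ValueError("average of an empty sequence is undefined")
    return sum(values) / len(values)

# Test cases derived from error hypotheses rather than from a specification:
assert average([2, 4, 6]) == 4     # nominal input
try:
    average([])                    # guessed defect: empty input
except ValueError as exc:
    print("empty input rejected:", exc)
```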

10
Q

FAULT ATTACK/SOFTWARE ATTACK

A

• creating a list of potential errors, defects, and failures
• takes a NEGATIVE POINT OF VIEW - provoking failures/negative events rather than positively verifying behaviour

11
Q

EXPLORATORY TESTING

A

• DESIGNING, EXECUTING, AND RECORDING UNSCRIPTED TESTS (i.e. tests that have not been designed in advance) AND EVALUATING THEM DYNAMICALLY DURING EXECUTION

Steps depend on:
KNOWLEDGE AND INTUITION OF A TESTER
EXPERIENCE WITH THIS OR SIMILAR APPLICATIONS
THE WAY THE SYSTEM BEHAVED IN THE PREVIOUS STEP OF THE SCENARIO

12
Q

SESSION-BASED EXPLORATORY TESTING

A

• way of better managing exploratory testing activities
3 STEPS:
1. The tester meets the manager to determine the scope and allocate time;
the manager hands THE TEST CHARTER TO THE TESTER OR WRITES IT COLLABORATIVELY WITH THE TESTER. The test charter should be created during test analysis
2. CONDUCTING AN EXPLORATORY SESSION BY THE TESTER - THE TESTER TAKES NOTES OF ANY RELEVANT OBSERVATIONS
3. MEETING - RESULTS ARE DISCUSSED AND DECISIONS MADE

13
Q

WHEN TO USE EXPLORATORY TESTING

A

IF one or more of the following premises are met:
1. Specification of the product is incomplete, of poor quality, or lacking
2. There is time pressure; testers are short on time to conduct tests
3. Testers know the product well and are experienced in exploratory testing

  • all other techniques can be used
14
Q

"EXPLORATORY TESTING TOURS" - approach by James Whittaker based on sightseeing and tourism

A

Goals:
- understand how the app works, what the interface looks like, what functionality it offers
- force the software to demonstrate its capabilities
- find defects
Division of the software into districts - like a tourist city
1. Business district - business parts, functionalities, features important for users
2. Historic district - legacy code and the history of defective functionality
3. Tourist district - parts of the software that attract new users; functions that an advanced user is unlikely to use
4. Entertainment district - support functions and features related to usability and the user interface
5. Hotel district - moments when the user doesn't actively use the software but it still works
6. Suspicious neighbourhood - various types of attacks that can be launched against the application

15
Q

CHECKLIST-BASED TESTING

A

• the basis for test execution is the items contained in a checklist
• checklist - contains test conditions to be verified; shouldn't contain items that can be checked automatically, items that function better as entry/exit criteria, or items that are too general

DIFFERENCE BETWEEN CHECKLIST AND FAULT ATTACK
a checklist verifies positive features; a fault attack tries to induce failures
• TEST BASIS - CHECKLIST
• checklist items - FORM OF QUESTIONS
• may refer to requirements, quality characteristics, or other forms of test conditions
• both functional and non-functional
• frequently updated based on defect analysis
• not too long
• different types based on test level (more formal and detailed for component testing etc.)
• any type of testing - both functional and non-functional
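A checklist can be kept as data and walked through item by item. A purely illustrative sketch with an invented `FakeApp` test object; some items carry an executable check, while items better verified by a human stay marked as manual:

```python
class FakeApp:  # invented stand-in for the real test object
    title = "Inventory"
    def login(self, user, password):
        return bool(user and password)

# Each item pairs the question with an executable check where one exists;
# purely manual items carry None and are left to the tester.
checklist = [
    ("Does login reject an empty password?",
     lambda app: app.login("alice", "") is False),
    ("Is the page title set?",
     lambda app: bool(app.title)),
    ("Is the colour scheme consistent with branding?",
     None),  # manual item
]

def run_checklist(app, items):
    return {q: (check(app) if check else "manual") for q, check in items}

for question, outcome in run_checklist(FakeApp(), checklist).items():
    print(f"{outcome!s:>6}  {question}")
```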

16
Q

NIELSEN USABILITY HEURISTICS - popular checklist

A
  1. VISIBILITY OF SYSTEM STATUS - is the status of the system shown at every point in its operation; showing breadcrumbs, the path the user has taken, clear titles for each screen
  2. MATCH BETWEEN SYSTEM AND THE REAL WORLD - no technical language in the app; simple, everyday words
  3. USER CONTROL AND FREEDOM
    ability to undo an action if done by mistake
  4. CONSISTENCY AND STANDARDS
    appearance, formatting, familiar approaches, logo of the org in the upper left corner etc.
  5. ERROR PREVENTION
    • not to scare the user
    • little room for mistakes
  6. RECOGNITION RATHER THAN RECALL
    the description of a field shouldn't disappear when you start filling it
  7. FLEXIBILITY AND EFFICIENCY OF USE
    advanced search options hidden by default etc.
  8. AESTHETIC AND MINIMALIST DESIGN
  9. HELP USERS RECOGNIZE, DIAGNOSE, AND RECOVER FROM ERRORS
  10. HELP AND DOCUMENTATION
17
Q

CHECKLIST FOR CODE INSPECTIONS

A
  1. Is the code written according to the current standard?
  2. Is the code formatted in a consistent way?
  3. Are there any functions that are never called?
  4. Are the implemented algorithms efficient, with appropriate computational complexity?
  5. Is memory being used effectively?
  6. Are there variables that are used without being declared first?
  7. Is the code properly documented?
  8. Is the manner of commenting consistent?
  9. Is every divide operation protected against division by zero?
  10. In IF-THEN statements, are the blocks of statements executed most often checked first?
  11. Does each CASE statement have a default block?
  12. Is any allocated memory released when it’s no longer in use?
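Items 9 and 11 translate directly into code. A sketch with invented functions: a violation of item 9 next to its guarded rewrite, plus an explicit default value in the spirit of item 11 (Python has no SWITCH-CASE fallthrough, so a dict lookup with a default plays that role here):

```python
# Violates item 9: the division is not protected against a zero divisor.
def unit_price(total, quantity):
    return total / quantity             # ZeroDivisionError when quantity == 0

# After inspection: the guard makes the failure mode explicit.
def unit_price_checked(total, quantity):
    if quantity == 0:
        raise ValueError("quantity must be non-zero")
    return total / quantity

# Spirit of item 11 (every CASE needs a default): an explicit fallback value.
def shipping_zone(country):
    zones = {"PL": "EU", "US": "NA"}
    return zones.get(country, "OTHER")  # the "default block"
```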
18
Q

COLLABORATION-BASED TEST APPROACHES

A

• from agile methodologies - testing methods taking into account the artefacts used by these methodologies and emphasizing collaboration between customers (business), developers, and testers.

TECHNIQUES:
1. USER STORIES IN AGILE, instead of user requirements
2. ACCEPTANCE CRITERIA AS THE AGILE COUNTERPART TO TEST CONDITIONS, PROVIDING THE BASIS FOR TEST DESIGN
3. ATDD AS A FORM OF HIGH-LEVEL TESTING, TEST-FIRST APPROACH

19
Q

COLLABORATIVE USER STORY WRITING

A

• USER STORY - a functional increment that will be of value to the user, the purchaser of the system or software, or any other stakeholder
• written to capture requirements from the perspective of developers, testers, and business representatives
- in a sequential SDLC the equivalent would be formal reviews after the requirements have been written
• a shared vision is achieved through frequent informal reviews during requirements writing or by writing requirements together (testers, analysts, developers, any other stakeholder)

20
Q

3 Cs IN USER STORIES

A
  1. CARD
  2. CONVERSATION
  3. CONFIRMATION
21
Q

CARD IN USER STORY

A
  • the medium that describes a user story
    • identifies the requirement, its criticality/priority, constraints, expected development and testing time, and ACCEPTANCE CRITERIA
    • VERY ACCURATE DESCRIPTION - it will be used in the product backlog

Frequent format
AS A (intended user)
I WANT (intended action)
SO THAT (purpose/result of the action, profit achieved)
• LIST OF ACCEPTANCE CRITERIA

REPRESENTATION OF CUSTOMER REQUIREMENTS, not documentation of them

22
Q

CONVERSATION IN USER STORY

A

• explains how the software will be used
• documented or verbal
• it begins during the release planning phase and continues when the story is planned
• conducted among 3 primary perspectives:
CUSTOMER/USER
DEVELOPER
TESTER

23
Q

CONFIRMATION

A

• in the form of ACCEPTANCE CRITERIA
• they represent coverage items that convey and document the details of a user story, used to determine when the story is complete
• acceptance criteria - usually the result of the conversation
- can be viewed as test conditions that the tester must check to verify story completeness

24
Q

TRAITS OF GOOD USER STORIES

A

• address both functional and nonfunctional features
• testers can improve a user story by identifying missing details or nonfunctional requirements
• testers can ask business representatives open-ended questions to improve the tests and the user story

INVEST properties of good user story:
INDEPENDENT
do not overlap and can be developed in any order
NEGOTIABLE
details are co-created by the client and developer, not a fixed document
VALUABLE
once implemented, it's an added value
ESTIMABLE
able to prioritise them, estimate their completion time to easily manage the project
SMALL
easy to understand
TESTABLE
correctness of implementation easily verifiable

25
Q

ACCEPTANCE CRITERIA

A

Conditions that a product (to the extent described by the user story or PRODUCT BACKLOG ITEM of which the acceptance criteria are a part) must meet in order to be accepted by a customer
• test conditions or coverage items that should be checked by acceptance test

26
Q

WHAT ARE ACCEPTANCE CRITERIA USED FOR

A

• DEFINE THE BOUNDARIES OF THE USER STORY
• REACH A CONSENSUS BETWEEN THE DEVELOPMENT TEAM AND THE CLIENT
• DESCRIBE BOTH POSITIVE AND NEGATIVE TEST SCENARIOS
• AS THE BASIS FOR USER STORY ACCEPTANCE TESTING
• AS A TOOL FOR ACCURATE PLANNING AND ESTIMATING

  • determine and evaluate the Definition of Ready (DoR) or Definition of Done (DoD); implementation should not begin until the acceptance criteria are elicited
27
Q

SCENARIO-ORIENTED ACCEPTANCE CRITERIA

A
  • a way of writing acceptance criteria
  • format GIVEN/WHEN/THEN (from BDD technique)
28
Q

RULE-ORIENTED ACCEPTANCE CRITERIA

A

• form of a bulleted verification list or a tabular form of mapping inputs to outputs
• creating a set of rules within a scenario, each of which will be tested separately + GIVEN/WHEN/THEN format (Gherkin allows that)
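Both styles can be sketched in one place (names and rules invented for this example): a rule-oriented input-to-output table, followed by one scenario-oriented test whose comments follow the GIVEN/WHEN/THEN format:

```python
# Rule-oriented: a table mapping inputs to expected outputs, one rule per row.
DISCOUNT_RULES = [
    # ((cart_total, is_member), expected_discount_percent)
    ((100, True), 10),
    ((100, False), 0),
    ((500, True), 15),
]

def discount_percent(cart_total, is_member):  # invented implementation
    if not is_member:
        return 0
    return 15 if cart_total >= 500 else 10

for args, expected in DISCOUNT_RULES:
    assert discount_percent(*args) == expected  # each rule tested separately

# Scenario-oriented: one test per GIVEN/WHEN/THEN scenario.
def test_member_discount():
    # GIVEN a logged-in member with a 100 EUR cart
    cart_total, is_member = 100, True
    # WHEN the discount is calculated
    result = discount_percent(cart_total, is_member)
    # THEN the member rate of 10% is applied
    assert result == 10

test_member_discount()
```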

29
Q

ACCEPTANCE TEST-DRIVEN DEVELOPMENT (ATDD)

A

• TEST FIRST APPROACH - test cases created BEFORE the user story is implemented
• test cases are created by team members with different perspectives on the product - customers, developers, testers
• both manual and automated

30
Q

STEPS OF ATDD

A
  1. SPECIFICATION WORKSHOP
    team members analyze, discuss, and write the user story and its acceptance criteria
    all kinds of problems in the story - incompleteness, ambiguities, contradictions, or other defects - are fixed
  2. CREATING TESTS
    created collectively by the team or individually by a tester; an independent person - e.g. a business representative - validates the tests
    TESTS ARE EXAMPLES, based on the acceptance criteria, that describe specific characteristics of the user story
  3. TESTING
    - first POSITIVE TESTS, confirming correct behaviour with no exceptions or error occurrences, invoking the sequence of actions performed if everything goes as expected (HAPPY PATHS)
    - NEGATIVE TESTING
    - TESTING REGARDING NONFUNCTIONAL ATTRIBUTES (e.g. performance, usability)

• TESTS ARE EXPRESSED IN NATURAL LANGUAGE UNDERSTANDABLE TO STAKEHOLDERS (including necessary preconditions (if any), inputs, and associated outputs)

• tests must cover ALL FEATURES OF THE USER STORY, WITHOUT GOING BEYOND THAT
• NO 2 TESTS/EXAMPLES SHOULD DESCRIBE THE SAME FEATURE

• THEN THEY CAN BE AUTOMATED, AND THE ACCEPTANCE TESTS BECOME EXECUTABLE REQUIREMENTS
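The test ordering above (happy path first, then negative tests, one example per distinct feature) might look like this for an invented "withdraw cash" story:

```python
# Invented user story: "As an account holder I want to withdraw cash
# so that I can pay offline."  The tests below are examples in readable form.
def withdraw(balance, amount):
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# 1. Happy path first: the sequence performed when everything goes as expected.
assert withdraw(100, 30) == 70

# 2. Negative tests next; each example covers exactly one distinct feature.
for bad_amount in (0, -5):
    try:
        withdraw(100, bad_amount)
        raise AssertionError("an invalid amount was accepted")
    except ValueError:
        pass  # the invalid amount was rejected, as required

try:
    withdraw(10, 50)
except ValueError as exc:
    print("rejected:", exc)  # insufficient funds
```

Once automated (e.g. run on every build), these examples serve as the executable requirements the card describes.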