CHAPTER 4 TEST ANALYSIS AND DESIGN Part 1 Flashcards
ACCEPTANCE CRITERIA
Criteria that a component or system must satisfy in order to be accepted by a user, customer or other authorized entity
BLACK-BOX TESTING TECHNIQUE
A test technique based on an analysis of the specification of a component or system
SPECIFICATION BASED TECHNIQUE
COVERAGE
The degree to which specified coverage items are exercised by a test suite, expressed as a percentage
COVERAGE ITEM
An attribute or combination of attributes derived from one or more test conditions by using a test technique
DECISION TABLE TESTING
A black-box testing technique in which test cases are designed to exercise the combinations of conditions and the resulting actions shown in a decision table
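The definition above can be sketched in code. A minimal illustration, assuming a hypothetical discount rule (members get 10% off, members spending over 100 get 20% off): each rule (column) of the decision table becomes one test case.

```python
# Hypothetical business logic under test for a decision table sketch.
def discount(is_member: bool, total: float) -> float:
    if is_member and total > 100:
        return 0.20
    if is_member:
        return 0.10
    return 0.0

# Decision table: one entry per rule -> (conditions, expected action).
decision_table = [
    # (is_member, total > 100) -> expected discount
    ((True,  True),  0.20),  # rule 1
    ((True,  False), 0.10),  # rule 2
    ((False, True),  0.0),   # rule 3
    ((False, False), 0.0),   # rule 4
]

# Exercise every combination of conditions and check the resulting action.
for (is_member, over_100), expected in decision_table:
    total = 150.0 if over_100 else 50.0  # representative value per condition
    assert discount(is_member, total) == expected
```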
STATE TRANSITION TESTING
A black-box testing technique using a state transition diagram or state table to derive test cases that check whether the test item executes valid transitions and blocks invalid ones
TEST TECHNIQUE
A procedure used to define test conditions, design test cases, and specify test data
TEST DESIGN TECHNIQUE
WHITE-BOX TEST TECHNIQUE
A test technique based only on the internal structure of a component or system
STRUCTURE-BASED TECHNIQUE
„ERROR(defect) HYPOTHESIS” IN TEST TECHNIQUES - ERROR/DEFECT THEORY
• scientific basis for the construction of each technique
• what types of problems this technique is capable of detecting
• each technique is built around an abstract problem related to a specific error hypothesis and designed to detect specific type of error/defect
3 CATEGORIES OF TEST TECHNIQUES DIVIDED BY THE SOURCE OF KNOWLEDGE
- BLACK-BOX TECHNIQUES (4)
SOURCE: specification (external source of knowledge about the test object)
TEST BASIS: design documentation - requirements specification - use cases - business processes
- WHITE-BOX TECHNIQUES (2)
SOURCE: structure (internal architecture of the test object)
TEST BASIS: code - menu structure - business process flow structure - architecture description
- EXPERIENCE-BASED TECHNIQUES (3)
SOURCE: knowledge (experience, intuition, common sense)
BLACK-BOX TEST TECHNIQUES
Behavioural techniques
Specification-based techniques
• knowledge external to the test object about HOW it should BEHAVE
•requirements - use cases - user stories - business processes = DESCRIPTION OF DESIRED BEHAVIOUR OF THE SYSTEM
BOTH FUNCTIONAL AND NON-FUNCTIONAL, WITHOUT REFERRING TO ITS INTERNAL DESIGN
ADVANTAGES OF BLACK-BOX TECHNIQUES
• documentation exists long before the implementation of a component or system - CREATING TESTS EARLY
• SUITABLE FOR BOTH STATIC AND DYNAMIC TESTING
• COVER BOTH FUNCTIONAL AND NON-FUNCTIONAL BEHAVIOUR
WHITE-BOX TESTING TECHNIQUES
Structure - structure-based techniques
• basis - INTERNAL STRUCTURE OF THE TEST OBJECT
• this structure = code, a high-level model of the system or module architecture - a graph of information flow in a business process
• CODE CAN NEVER BE AN ORACLE FOR ITSELF
EXPERIENCE-BASED TECHNIQUES
• „soft” sources of knowledge about the system under test - intuition, experience, knowledge of defects found in previous versions of the software
• referring to the skills of the tester and other stakeholders
4 BLACK-BOX TESTING TECHNIQUES
- EQUIVALENCE PARTITIONING (EP)
- BOUNDARY VALUE ANALYSIS (BVA)
- DECISION TABLE TESTING
- STATE TRANSITION TESTING
2 WHITE-BOX TESTING TECHNIQUES
- STATEMENT TESTING AND STATEMENT COVERAGE
- BRANCH TESTING AND BRANCH COVERAGE
3 EXPERIENCE-BASED
- ERROR GUESSING
- EXPLORATORY TESTING
- CHECKLIST-BASED TESTING
FACTORS INFLUENCING CHOICE OF TEST TECHNIQUE
- FORMAL FACTORS
documentation - law and regulations - customer contract provisions - processes in place in the organization -test objectives - SDLC model) - PRODUCT FACTORS
software, its complexity - importance of various quality characteristics, risks - expected types of defects - expected use of the software - PROJECT FACTORS
available time - budget - resources - tools - skills - knowledge and experience of testers
EXAMPLES OF TEST TECHNIQUES WITH DIFFERENT LEVELS OF FORMALIZATION
- VERY LOW
unplanned, undocumented execution of error guessing, without saving test results
- LOW
conducting exploratory tests with a test charter
- MEDIUM
using decision table testing to test business logic; test cases are documented in test case specifications and test results are logged
- HIGH
using the state machine model and state transition testing technique to test program behaviour
test design (state machine), test cases (in form of test scripts), and test results (logs) are documented
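The state machine model mentioned above can be made concrete. A minimal sketch, assuming a hypothetical document-workflow machine: the transition table is the test design, and the tests exercise every valid transition once and check that an invalid transition is blocked.

```python
# Hypothetical state transition model: (current state, event) -> next state.
TRANSITIONS = {
    ("draft", "submit"): "review",
    ("review", "approve"): "published",
    ("review", "reject"): "draft",
}

class Document:
    """Tiny state machine under test, driven by the model above."""
    def __init__(self):
        self.state = "draft"

    def handle(self, event: str) -> None:
        key = (self.state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"invalid transition: {key}")
        self.state = TRANSITIONS[key]

# Exercise every valid transition at least once...
doc = Document()
doc.handle("submit"); assert doc.state == "review"
doc.handle("reject"); assert doc.state == "draft"
doc.handle("submit"); doc.handle("approve")
assert doc.state == "published"

# ...and check that an invalid transition is rejected.
try:
    Document().handle("approve")  # "approve" is not valid from "draft"
    raise AssertionError("expected the invalid transition to be blocked")
except ValueError:
    pass
```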
EQUIVALENCE PARTITIONING (EP)
• purpose - to work around the impossibility of exhaustive testing
there is usually an infinite number of possible inputs, but the number of distinct EXPECTED BEHAVIOURS of a program on these inputs is TYPICALLY FINITE if we consider behaviours related to a strictly defined aspect of the app's operation
• in this method
DIVISION OF A GIVEN DOMAIN INTO SUBSETS - PARTITIONS - SUCH THAT FOR EVERY TWO ELEMENTS FROM ONE PARTITION, THE PROGRAM BEHAVES IDENTICALLY
- for example, age under 18 triggers certain behaviours - all values belonging to the below-18 partition are treated the same way -> each element of a given partition is an equally good choice to test
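The age example above can be sketched directly. A minimal illustration with a hypothetical age gate: one representative value per partition is enough, since any value in a partition is an equally good choice.

```python
# Hypothetical function under test for the equivalence partitioning sketch.
def access_level(age: int) -> str:
    if age < 0:
        return "error"   # invalid partition: negative ages
    if age < 18:
        return "minor"   # valid partition: 0..17
    return "adult"       # valid partition: 18 and above

# One representative value per identified partition.
partitions = {
    "error": [-5],   # any negative value would do
    "minor": [7],    # any value in 0..17 would do
    "adult": [42],   # any value >= 18 would do
}
for expected, values in partitions.items():
    for age in values:
        assert access_level(age) == expected
```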
APPLICATION OF EP
• any situation, any test level, any type of test - division of possible data into groups
• it can be applied not only to input domains but also to output and internal domains
• THE DOMAIN DOESN’T HAVE TO BE A NUMERICAL DOMAIN - ANY NONEMPTY SET, EXAMPLES:
- a set of natural numbers (e.g. partition into even and odd numbers)
- a collection of words (e.g. partition by word length: one-letter, two-letter etc.)
- collections relating to time (e.g. partition by year of birth, by month in a given year etc.)
- a collection of operating system types: {Windows, Linux, macOS}
PARTITIONING CORRECTNESS
• EACH ELEMENT OF THE DOMAIN BELONGS TO EXACTLY ONE EQUIVALENCE PARTITION
• NO EQUIVALENCE PARTITION IS EMPTY
VALID PARTITIONS - partitions that contain „correct”/„normal” values, i.e. values accepted/expected by the system
OR - values whose processing the specification defines
INVALID PARTITIONS - partitions that contain values that the component or system should reject (e.g. data with incorrect syntax, values exceeding acceptable ranges)
OR - values for which the specification doesn't define the processing
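The two correctness rules above can be checked mechanically on a finite domain. A minimal sketch, assuming a hypothetical age domain of 0..120 split into two valid partitions:

```python
# Hypothetical finite domain and a proposed partitioning of it.
domain = set(range(0, 121))  # ages 0..120
partitions = {
    "minor": {a for a in domain if a < 18},
    "adult": {a for a in domain if a >= 18},
}

# Rule 1: each element of the domain belongs to exactly one partition.
# (equal total size + equal union together imply the partitions are disjoint)
assert sum(len(p) for p in partitions.values()) == len(domain)
assert set().union(*partitions.values()) == domain

# Rule 2: no equivalence partition is empty.
assert all(partitions.values())
```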
DERIVING TEST CASES IN EP
COVERAGE ITEMS IN EP - equivalence partitions
• minimum set of test cases to ensure 100% coverage => one that covers every equivalence partition = for each identified partition, there is a test case containing a value from that partition
ONE-DIMENSIONAL CASE
one domain and one division - minimum number of cases - the number of equivalence partitions we identified
MULTIDIMENSIONAL CASE
more than one domain
• the number of test cases depends on the way we treat combinations of invalid partitions AND on possible dependencies or constraints between values and partitions from different domains
COVERAGE = (number of equivalence partitions exercised by at least one value / total number of equivalence partitions defined) × 100%
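The coverage formula above is a one-liner in code. A minimal sketch with hypothetical partition names:

```python
def ep_coverage(tested: set, defined: set) -> float:
    """Percentage of defined equivalence partitions covered by at least one test value."""
    return 100.0 * len(tested & defined) / len(defined)

# Hypothetical example: three partitions defined, two exercised by tests.
defined = {"minor", "adult", "negative"}
tested = {"minor", "adult"}   # no test hits the invalid partition yet
assert ep_coverage(tested, defined) == 100.0 * 2 / 3
```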
COVERING MULTIPLE EQUIVALENCE PARTITIONS SIMULTANEOUSLY
DEFECT MASKING
A test must simultaneously cover equivalence partitions derived from more than one domain; combining more than one invalid value in a single test is risky, because the rejection of one value can mask a defect in the handling of the other (the test may seem OK, but it isn't a reliable result)
Steps
1. Create the smallest possible number of test cases composed only of test data from valid partitions, which will cover all valid partitions from all domains
2. For each uncovered invalid partition, create a separate test case in which data from that partition occurs, while all other data comes from valid partitions
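The two steps above can be sketched for two domains. A minimal illustration, assuming a hypothetical form with an `age` field and a `country` field (all partition names and representative values are invented): valid partitions are combined to cover them with few cases, then each invalid partition gets its own case so one failure cannot mask another.

```python
from itertools import zip_longest

# Representative values per partition (hypothetical two-field form).
valid = {
    "age":     {"minor": 10, "adult": 40},
    "country": {"eu": "PL", "non_eu": "US"},
}
invalid = {
    "age":     {"negative": -1},
    "country": {"unknown": "??"},
}

# Step 1: combine valid partitions -> max(#valid partitions per field) cases.
cases = []
ages = list(valid["age"].values())
countries = list(valid["country"].values())
for a, c in zip_longest(ages, countries):
    cases.append({"age": a if a is not None else ages[0],
                  "country": c if c is not None else countries[0]})

# Step 2: one case per invalid partition; every other field stays valid.
for field, parts in invalid.items():
    for value in parts.values():
        case = {"age": ages[0], "country": countries[0]}
        case[field] = value
        cases.append(case)

# 2 valid-combination cases + 2 single-invalid cases.
assert len(cases) == 4
```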