Chapter 2, Part 2: Test Levels Flashcards
TEST LEVELS
A group of test activities that are organized and managed together. Each test level is an instance of the test process, consisting of the activities performed for software at a given stage of development, from individual components to complete systems or systems of systems. Test levels are linked to other activities performed as part of the software development lifecycle.
ATTRIBUTES OF TEST LEVELS
1. TEST OBJECT
2. TEST OBJECTIVE
3. TEST BASIS
4. DEFECTS AND FAILURES
5. SPECIFIC APPROACHES AND RESPONSIBILITIES
TEST TYPE
Testing associated with a specific quality characteristic of a test object
Difference between test level and test type - the latter takes into account the test objective and the reason for testing; the former refers to the phases of the development lifecycle and the stage of advancement in product development
Test level and test type are independent of each other - each test type can be performed at any test level
TEST LEVELS - MOST COMMON CLASSIFICATION
- COMPONENT TESTING
- COMPONENT INTEGRATION TESTING
- SYSTEM TESTING
- SYSTEM INTEGRATION TESTING
- ACCEPTANCE TESTING
DTAP (THE MINIMAL SET OF TEST ENVIRONMENTS) - WHAT’S THAT?
DEVELOPMENT ENVIRONMENT
TEST ENVIRONMENT
ACCEPTANCE ENVIRONMENT
PRODUCTION ENVIRONMENT
Each test level requires an appropriate test environment:
For component testing - a unit test environment, i.e. xUnit-type frameworks (e.g. JUnit) and libraries for creating mock objects (e.g. EasyMock) - see the sketch after this card
For acceptance testing - an environment as similar as possible to the production environment, because a large part of field defects (those reported by users after the software is released) are caused by the interaction between the system and its environment
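A minimal sketch of the component test environment described above, assuming JUnit 5 and EasyMock; PriceRepository and DiscountService are hypothetical names invented for this card:

```java
import static org.easymock.EasyMock.*;
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

interface PriceRepository {                       // dependency to be mocked
    double basePrice(String sku);
}

class DiscountService {                           // component under test
    private final PriceRepository prices;
    DiscountService(PriceRepository prices) { this.prices = prices; }
    double finalPrice(String sku, double discountRate) {
        return prices.basePrice(sku) * (1.0 - discountRate);
    }
}

class DiscountServiceTest {
    @Test
    void appliesDiscountToBasePrice() {
        // the mock stands in for a real repository (e.g. a database)
        PriceRepository repo = createMock(PriceRepository.class);
        expect(repo.basePrice("SKU-1")).andReturn(100.0);
        replay(repo);

        assertEquals(90.0, new DiscountService(repo).finalPrice("SKU-1", 0.10), 1e-9);
        verify(repo);  // confirms the component actually called the dependency
    }
}
```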
COMPONENT TESTING AND ITS OBJECTIVES
• Other name - UNIT TESTING OR PROGRAM TESTING
• Focuses on modules that can be tested SEPARATELY
• GOALS:
- mitigating component-related risk
- verifying that the functional and non-functional behaviour of the component complies with its design and specification
- building confidence in the quality of the component
- detecting defects in the component
- preventing defects from reaching higher levels of testing
• usually performed by the DEVELOPER (the author of the code) right after the code is written; often as automated tests
• in INCREMENTAL and ITERATIVE SDLC models -> automated component regression tests are key to ensuring that the code changes which are unavoidable in these models did not break anything that already worked
ISOLATION IN COMPONENT TESTING
• HOW MUCH ISOLATION is applied MOST OFTEN depends on the type of SDLC (and on the system itself)
• isolating a component may require service virtualization, test harnesses or mock objects (stubs and drivers)
• component tests should be executable in any order, and their results should be the same regardless of the execution order
MOCK OBJECTS
• TO ALLOW ACTUAL EXECUTION OF PIECES OF CODE THAT REFER TO OTHER OBJECTS (e.g. a database) THAT DO NOT YET EXIST
• TO TEST THE FUNCTIONALITY of a component that depends on code which is not yet finished but whose behaviour is crucial for the test
STUB VS DRIVER
MOCK OBJECTS
• both perform the same role - they simulate the operation of other objects that do not yet exist
STUB - a dummy object THAT IS CALLED BY THE COMPONENT UNDER TEST
DRIVER - a dummy object THAT CALLS THE COMPONENT UNDER TEST
TEST HARNESS
When BOTH STUBS AND DRIVERS ARE USED FOR TESTING, this type of test environment is called a TEST HARNESS (see the sketch below)
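A minimal sketch of such a harness, assuming JUnit 5; TemperatureAlert and SensorReader are hypothetical names. The hand-written stub is called by the component, while the test method acts as the driver that calls the component:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

interface SensorReader {                  // interface of a not-yet-finished object
    double readCelsius();
}

class TemperatureAlert {                  // component under test
    private final SensorReader sensor;
    TemperatureAlert(SensorReader sensor) { this.sensor = sensor; }
    boolean overheated() { return sensor.readCelsius() > 90.0; }
}

class TemperatureAlertTest {
    // STUB: dummy object that is CALLED BY the component under test
    static class FixedSensorStub implements SensorReader {
        public double readCelsius() { return 95.0; }
    }

    // DRIVER: this test method CALLS the component under test
    @Test
    void reportsOverheatingAboveThreshold() {
        TemperatureAlert alert = new TemperatureAlert(new FixedSensorStub());
        assertTrue(alert.overheated());
    }
}
```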
WHAT COMPONENT TESTING MAY COVER
• FUNCTIONALITY (e.g. correctness of calculations)
• NONFUNCTIONAL CHARACTERISTICS (performance, reliability)
• STRUCTURAL PROPERTIES (statement or decision testing)
TYPICAL TEST BASIS FOR COMPONENT TESTING
• DETAILED DESIGN
• CODE
• DATA MODEL
• COMPONENT SPECIFICATION
TYPICAL TEST OBJECTS OF COMPONENT TESTING:
• MODULES, UNITS, COMPONENTS
• CODE AND DATA STRUCTURES
• CLASSES AND METHODS (in object-oriented programming)
• DATABASE COMPONENTS
TYPICAL DEFECTS AND FAILURES IN COMPONENT TESTING
• INCORRECT FUNCTIONALITY (e.g. behaviour inconsistent with the design specification)
• DATA FLOW PROBLEMS
• INCORRECT CODE AND LOGIC
DEFECTS - usually fixed as soon as they are detected
• defect data from this test level is usually not collected or reported
COMPONENT INTEGRATION TESTING
Focuses on INTERACTION and INTERFACES between INTEGRATED COMPONENTS.
COMPONENT -> CLASS, FILE, FUNCTION, PROCEDURE, PACKAGE etc.
Performed after component testing
Usually AUTOMATED
In ITERATIVE SDLC - part of CONTINUOUS INTEGRATION PROCESS
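A minimal sketch of a component integration test, assuming JUnit 5; CartRepository and CheckoutService are hypothetical components. Unlike a component test, no stubs are used - two real components are wired together and the test checks the data flow across their interface:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.Test;

class CartRepository {                    // real component A
    private final List<Double> items = new ArrayList<>();
    void add(double price) { items.add(price); }
    List<Double> all() { return items; }
}

class CheckoutService {                   // real component B, talks to A
    private final CartRepository cart;
    CheckoutService(CartRepository cart) { this.cart = cart; }
    double total() {
        return cart.all().stream().mapToDouble(Double::doubleValue).sum();
    }
}

class CheckoutIntegrationTest {
    @Test
    void totalReflectsItemsStoredThroughTheRepository() {
        CartRepository cart = new CartRepository();   // no mock - real instance
        cart.add(10.0);
        cart.add(2.5);
        // what is tested is the A -> B interaction, not either module alone
        assertEquals(12.5, new CheckoutService(cart).total(), 1e-9);
    }
}
```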
SYSTEM INTEGRATION TESTING
• Focuses on INTERACTIONS AND INTERFACES between SYSTEMS, PACKAGES AND MICROSERVICES
• ! May include interactions with interfaces provided by EXTERNAL ORGANIZATIONS (e.g. web service providers)
because devs can’t control the external interfaces, this creates a whole range of testing problems -
e.g. defects in code created by the external organization that block testing, or difficulties with preparing the test environment (one workaround is to virtualize the external interface - see the sketch after this card)
• system integration testing can take place after system testing or in parallel with ongoing system testing
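A minimal sketch of that workaround - virtualizing the external service with a local fake. Only the JDK’s built-in HttpServer and HttpClient are used; the /rates endpoint and its payload are invented for illustration:

```java
import com.sun.net.httpserver.HttpServer;

import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FakeExternalServiceDemo {
    public static void main(String[] args) throws Exception {
        // local fake standing in for the external organization's web service
        HttpServer fake = HttpServer.create(new InetSocketAddress(8089), 0);
        fake.createContext("/rates", exchange -> {
            byte[] body = "{\"EUR\":1.08}".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        fake.start();

        // the system under test would be pointed at this URL instead of the real provider
        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create("http://localhost:8089/rates")).build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());

        fake.stop(0);
    }
}
```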
INTEGRATION TESTING GOALS
TYPICAL OBJECTIVES:
1. MITIGATING RISKS arising from component/system interactions
2. CHECKING COMPLIANCE of the functional and non-functional behaviour of component/system interfaces with the design and specifications
3. Building CONFIDENCE IN THE QUALITY OF THE INTERFACES used by components/systems
4. Detecting DEFECTS IN INTERFACES and COMMUNICATION PROTOCOLS
5. PREVENTING DEFECTS from reaching higher levels of testing
TEST BASIS OF INTEGRATION TESTING
• any kind of DOCUMENTATION DESCRIBING THE INTERACTION OR COOPERATION OF INDIVIDUAL COMPONENTS OR SYSTEMS
Examples of work products that can be used as test basis for integration testing:
1. Software and system design
2. Sequence diagrams
3. Interface/communication protocol specifications
4. Use cases
5. Architecture at the component and system level
6. Workflows
7. Definitions of external interfaces
TEST OBJECTS OF INTEGRATION TESTING
Typical objects:
1. Application programming interfaces (APIs) that provide communication between components
2. Interfaces that provide communication between systems (e.g. system-to-system, system-to-database, microservice-to-microservice, etc.)
3. Communication protocols between components and systems
TYPICAL DEFECTS AND FAILURES DETECTED IN COMPONENT INTEGRATION TESTING
- Incorrect and missing data or incorrect data coding
- Incorrect sequencing or incorrect synchronization of interface calls
- Incompatible interfaces
- Communication errors between components
- Failure to handle or incorrect handling of communication errors between components
- Incorrect assumptions about the meaning, units, or boundaries of data transferred between components
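The last defect class above (wrong assumptions about the meaning, units or boundaries of exchanged data) is easy to picture in code. In this hypothetical sketch one component reports METERS while the other expects KILOMETERS; the integration test pins down the conversion that forms the interface contract:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class DistanceSensor {                    // component A: reports meters
    double distance() { return 1500.0; }
}

class DistanceLabel {                     // component B: expects kilometers
    String render(double km) { return km + " km"; }
}

class UnitContractTest {
    @Test
    void sensorOutputIsConvertedBeforeDisplay() {
        double meters = new DistanceSensor().distance();
        // omitting the division (passing meters straight through) is
        // exactly the "incorrect assumptions about units" defect
        assertEquals("1.5 km", new DistanceLabel().render(meters / 1000.0));
    }
}
```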
TYPICAL DEFECTS AND FAILURES DETECTED IN SYSTEM INTEGRATION TESTING
- Inconsistent message structures sent between systems
- Incorrect or missing data or incorrect data coding
- Incompatible interfaces
- Inter-system communication errors
- Failure to handle or improper handling of inter-system communication errors
- Incorrect assumptions about the meaning, units, or boundaries of data transferred between systems
- Failure to comply with mandatory security regulations
SPECIFIC APPROACHES AND RESPONSIBILITIES IN INTEGRATION TESTING
• BOTH COMPONENT AND SYSTEM INTEGRATION TESTING FOCUS ON THE INTEGRATION ITSELF
- i.e. on the communication between module A and module B, not on the functionality of the individual modules; the same applies to systems
• component integration testing - usually BY DEVELOPERS; system integration - BY TESTERS who are familiar with system architecture
TEST ENVIRONMENT FOR INTEGRATION TESTING
AS SIMILAR AS POSSIBLE TO THE TARGET OR PRODUCTION ENVIRONMENT
• a very large number of failures arise from the system’s interactions with the environment in which it is embedded
INTEGRATION STRATEGIES and INTEGRATION TESTING STRATEGIES
• PLANNING - both the tests and the integration strategy should be planned PRIOR to building the components or systems
-> to maximise efficiency
According to the older ISTQB syllabus, these strategies can be listed as:
1. STRATEGIES BASED ON THE SYSTEM ARCHITECTURE
2. FUNCTIONAL TASK-BASED STRATEGIES
3. STRATEGIES BASED ON TRANSACTION PROCESSING SEQUENCES
4. STRATEGIES BASED ON OTHER ASPECTS OF THE SYSTEM