CHAPTER 2 PART 3 TEST TYPES Flashcards
TEST OBJECTIVES DEPENDING ON TEST TYPES
• evaluation of functional quality characteristics such as completeness, correctness and appropriateness
• evaluation of nonfunctional quality characteristics, including parameters such as RELIABILITY, PERFORMANCE, SAFETY, COMPATIBILITY, USABILITY
• determining whether the structure or architecture of a component or system is correct, complete and in accordance with specifications
4 TEST TYPES ACCORDING TO FOUNDATION LEVEL (there are more)
• FUNCTIONAL TESTING
• NONFUNCTIONAL TESTING
• WHITE-BOX TESTING
• BLACK-BOX TESTING
FUNCTIONAL TESTING
Functional testing of a system - the execution of tests that evaluate the functions the system should perform
WHAT SYSTEM SHOULD DO
FUNCTIONAL REQUIREMENTS described in:
business requirements specification, user stories, use cases, system requirements specifications
• performed AT ALL TEST LEVELS, but DIFFERENT TESTS IN DIFFERENT LEVELS
• takes into account software behaviour described in documents external to the system —> so test conditions and test cases are typically derived using black-box techniques
OBJECTIVES OF FUNCTIONAL TESTING
VERIFY:
1. Functional CORRECTNESS
2. Functional APPROPRIATENESS
3. Functional COMPLETENESS
TEST LEVELS VS FUNCTIONAL TESTING
TEST LEVEL - SAMPLE TEST BASIS - SAMPLE TEST
- COMPONENT — COMPONENT SPECIFICATION —verification of the correctness of the component’s calculation of the tax amount
- COMPONENT INTEGRATION — COMPONENT DIAGRAMS (architecture design) — verification of the data transfer between the login component and the authorization component
- SYSTEM — USER STORIES, USE CASES — verification of the correct implementation of business process “pay tax”
- SYSTEM INTEGRATION — SPECIFICATION OF THE SYSTEM-DATABASE INTERFACE — verification of the correctness of sending a query to the database and receiving its result
- ACCEPTANCE — USER MANUAL — validate that the description of the functionality contained in the manual is consistent with the actual business process realised by the system
FUNCTIONAL COVERAGE
• degree to which a specific type of functional item has been tested
• expressed as a percentage of items of a given type covered by testing
• tracking the relationship between test cases, their results and functional requirements — calculating what percentage of requirements have been covered by testing and identifying gaps in coverage
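The traceability idea above can be sketched in a few lines. This is a minimal illustration with hypothetical requirement and test-case IDs (REQ-*, TC-*), not a real tool: coverage is the percentage of requirements exercised by at least one test case, and the gap set is what remains untested.

```python
# Hypothetical data: functional coverage as the percentage of
# requirements exercised by at least one executed test case.
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

# Traceability: which requirements each executed test case covers.
traceability = {
    "TC-1": {"REQ-1"},
    "TC-2": {"REQ-1", "REQ-3"},
}

covered = set().union(*traceability.values())
coverage = len(covered & requirements) / len(requirements) * 100
gaps = requirements - covered  # requirements with no covering test

print(f"Functional coverage: {coverage:.0f}%")
print(f"Uncovered requirements: {sorted(gaps)}")
```

Here two of four requirements are covered, so coverage is 50% and REQ-2 and REQ-4 are reported as gaps.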
NONFUNCTIONAL TESTING
Evaluate CHARACTERISTICS OF SYSTEMS AND SOFTWARE, such AS:
- USABILITY
- PERFORMANCE
- SECURITY
Nonfunctional testing - HOW A SYSTEM BEHAVES
• performed at all test levels
8 QUALITY CHARACTERISTICS IN NONFUNCTIONAL TESTING
- FUNCTIONAL SUITABILITY
functional completeness, functional correctness, functional appropriateness
- PERFORMANCE EFFICIENCY
time behaviour, resource utilisation, capacity
- COMPATIBILITY
co-existence, interoperability
- USABILITY
appropriateness recognizability, learnability, operability, user error protection, user interface aesthetics, accessibility
- RELIABILITY
maturity, availability, fault tolerance, recoverability
- SECURITY
confidentiality, integrity, non-repudiation, authenticity, accountability
- MAINTAINABILITY
modularity, reusability, analysability, modifiability, testability
- PORTABILITY
adaptability, installability, replaceability
SMART goals for NONFUNCTIONAL TESTING
• results must be precise - expressible in terms of well-defined metrics
• the SMART acronym specifies the hallmarks of properly defined requirements and tests:
Specific
Measurable
Attainable
Realistic
Time-bound
WHITE-BOX TESTING
Tests are derived based on - INTERNAL STRUCTURE OR IMPLEMENTATION OF A GIVEN SYSTEM
• INTERNAL STRUCTURE
code - architecture - workflows - data flow within the system
STRUCTURAL COVERAGE IN WHITE-BOX TESTING
Degree to which a specific type of structural item has been tested, expressed as the percentage of items of a given type covered by tests
Coverage = A/B * 100%
A - number of structural elements covered by tests
B - number of all identified structural elements
Usually expressed in percentage
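The Coverage = A/B * 100% formula above translates directly into code. A minimal sketch with hypothetical branch IDs (B1..B8): six of eight identified branches were executed by tests, giving 75% branch coverage.

```python
def structural_coverage(covered_items, all_items):
    """Coverage = A / B * 100%, where A = structural elements covered
    by tests and B = all identified structural elements."""
    a = len(set(covered_items) & set(all_items))
    b = len(all_items)
    return a / b * 100

# Hypothetical example: 6 of 8 identified branches executed by tests.
branches = [f"B{i}" for i in range(1, 9)]
executed = ["B1", "B2", "B3", "B5", "B6", "B8"]
print(structural_coverage(executed, branches))
```

The same function works for any structural item type from the table that follows (statements, branches, paths, API calls), as long as the elements can be enumerated.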
WHITE-BOX TESTING AT DIFFERENT LEVELS
Test level —— example structure to be covered
COMPONENT — source code (e.g. statements, branches, decisions, paths)
COMPONENT INTEGRATION —— call graph (i.e. instances of one function calling another), a set of API functions
SYSTEM —— Business process model
SYSTEM INTEGRATION —— call graph at the level of services offered by communicating systems
ACCEPTANCE —— tree model of the menu structure
SPECIAL SKILLS/KNOWLEDGE FOR WHITE-BOX TESTING
• knowledge of code construction (enabling, e.g. the use of tools to measure code coverage)
• knowledge of software architecture
BLACK-BOX TESTING
Based on SPECIFICATIONS (i.e. information external to the test object)
VERIFICATION of the system's BEHAVIOUR, making sure it conforms to the behaviour described in the specification
TEST LEVELS VS TEST TYPES
• independent of each other
• any test type at any level but there is a tendency for some test types to be used at certain levels
White-box testing is usually combined with nonfunctional testing
Black-box testing is typically (but not always) functional testing
CONFIRMATION TESTING (re-testing)
• performed after the occurrence of a failure and the repair of the associated defect
- VERIFYING THAT DEFECT HAS BEEN REPAIRED
• typically the retest is the same test that revealed the defect, but it can be a different test if the reported defect was a lack of functionality
MINIMUM - executing steps that caused the failure earlier
CONFIRMATION TESTING
DEFECT DETECTION —-> DEFECT REPAIR —-> CONFIRMATION TESTING
REGRESSION TESTING
CORRECT OPERATION ——> CHANGE IN CODE —> REGRESSION TEST
• checking whether a change (fix or modification) made to one component of the code inadvertently affected the behaviour of other parts (same component, other components, the system, other systems); also takes into account changes in the environment (introduction of a new version of the operating system or data management system)
REGRESSION (regress - deterioration), unintended side effects
REGRESSION TESTING - PROCESS
• re-executing test to detect regressions
•SCHEDULED - after another functionality is added or at the end of each iteration
• good candidate for AUTOMATION
OPTIMISATION OF REGRESSION TESTING
Needed when the number of regression tests grows so large that the suite can no longer be run overnight
PRIORITIZATION
most important and most relevant first
REDUCTION
removing some tests from the regression suite; the removal criterion can be based, e.g., on how often a test detects defects
MIXED STRATEGY
most important ones on a scheduled basis, less important ones less frequently
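The mixed strategy above can be sketched as a simple selection rule. This is an illustration with invented test IDs and a made-up `priority` field, not a real framework: high-priority tests run every cycle, the rest only every Nth cycle.

```python
# Sketch of a mixed regression strategy: always run high-priority
# tests; run lower-priority tests only every Nth cycle.
def select_regression_tests(suite, cycle, every_nth=3):
    selected = []
    for test in suite:
        if test["priority"] == "high":
            selected.append(test["id"])      # run in every cycle
        elif cycle % every_nth == 0:
            selected.append(test["id"])      # run less frequently
    return selected

suite = [
    {"id": "TC-1", "priority": "high"},
    {"id": "TC-2", "priority": "low"},
    {"id": "TC-3", "priority": "high"},
]
print(select_regression_tests(suite, cycle=1))  # high-priority only
print(select_regression_tests(suite, cycle=3))  # full suite
```

In practice the priority would come from risk analysis or defect-detection history rather than a hard-coded field, and selection like this is exactly what makes regression suites good automation candidates.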
MAINTENANCE TESTING
DURING MAINTENANCE PHASE - AFTER PRODUCT/SOFTWARE WAS RELEASED
• maintenance testing- after any changes in maintenance phase
PURPOSE: verify that the change was implemented successfully and detect possible defects
+ ALWAYS WITH REGRESSION TESTING
SCHEDULED AND UNSCHEDULED (after hot fixes)
At multiple levels
COMMON SITUATIONS WHEN SOFTWARE NEEDS CHANGES/CORRECTIONS during the MAINTENANCE PHASE
• Need to patch the system due to the discovery of a hacker attack
• adding, removing or modifying program features
• fixing the defect that caused the failure reported by the user
• need to archive data due to software decommissioning
• need to move the system to a new environment, caused e.g. by upgrading the version of the operating system
Improvement of the required nonfunctional quality characteristics of the software or system throughout its life cycle -> PERFORMANCE - COMPATIBILITY - RELIABILITY - SECURITY - PORTABILITY
SCOPE OF MAINTENANCE TESTING
• RISK LEVEL ASSOCIATED WITH CHANGE (e.g. extent to which the changed software area communicates with other components or systems)
• SIZE OF EXISTING SYSTEM
• SIZE OF THE CHANGE MADE
4 CATEGORIES OF CHANGES THAT TRIGGER MAINTENANCE TESTING
- MODIFICATION
- mostly planned enhancements
- corrective/emergency changes
- changes to the operational environment (e.g. planned operating system or database upgrades)
- software updates for COTS software
- patches that fix security defects and vulnerabilities
- UPGRADE OR MIGRATION
- transition from one platform to another
- data conversion testing (when migrating data from another application to a maintained system)
- RETIREMENT
when software is withdrawn from use
- testing migration or archiving of data
- testing of recovery procedures after archiving
- regression testing to ensure that functionality remaining in use continues to work
- IN INTERNET OF THINGS (IoT)
- after introduction of completely new or modified elements in the system (e.g. hardware devices or software services)
- INTEGRATION TESTING AT VARIOUS LEVELS
- SECURITY
IMPACT ANALYSIS
• EVALUATION OF CHANGES MADE TO MAINTAINED VERSION
- intended effects
- expected or potential side effects
- identification of areas of the system that will be affected by the changes
- impact of the change on existing tests
• CONDUCTED BEFORE A CHANGE IS MADE TO DETERMINE IF THE CHANGE SHOULD BE MADE
IMPACT ANALYSIS - DIFFICULTIES
• specifications (e.g. business requirements, user stories, architecture) are outdated or unavailable
• test cases have not been documented or are outdated
• bidirectional traceability between tests and the test basis has not been established
• tool support is nonexistent or inadequate
• people involved do not have the knowledge of the field or the system in question
• too little attention has been paid to the system's quality characteristics, in particular maintainability