CHAPTER 2 PART 3 TEST TYPES Flashcards

1
Q

TEST OBJECTIVES DEPENDING ON TEST TYPES

A

• evaluation of functional quality characteristics, such as completeness, correctness and appropriateness
• evaluation of nonfunctional quality characteristics, including parameters such as RELIABILITY, PERFORMANCE, SAFETY, COMPATIBILITY, USABILITY
• determining whether the structure or architecture of a component or system is correct, complete and in accordance with the specifications

2
Q

4 TEST TYPES ACCORDING TO THE FOUNDATION LEVEL SYLLABUS (there are more)

A

• FUNCTIONAL TESTING
• NONFUNCTIONAL TESTING
• WHITE-BOX TESTING
• BLACK-BOX TESTING

3
Q

FUNCTIONAL TESTING

A

Functional testing of a system - the execution of tests that evaluate the functions the system should perform
WHAT THE SYSTEM SHOULD DO
FUNCTIONAL REQUIREMENTS described in:
business requirements specifications, user stories, use cases, system requirements specifications

• performed AT ALL TEST LEVELS, but with DIFFERENT TESTS AT DIFFERENT LEVELS
• takes into account software behaviour described in documents external to the system —> so test conditions and test cases are typically derived using black-box techniques

4
Q

OBJECTIVES OF FUNCTIONAL TESTING

A

VERIFY:
1. Functional CORRECTNESS
2. Functional APPROPRIATENESS
3. Functional COMPLETENESS

5
Q

TEST LEVELS VS FUNCTIONAL TESTING
TEST LEVEL - SAMPLE TEST BASIS - SAMPLE TEST

A
  1. COMPONENT — COMPONENT SPECIFICATION — verification of the correctness of the component’s calculation of the tax amount (see the unit-test sketch after this list)
  2. COMPONENT INTEGRATION — COMPONENT DIAGRAMS (architecture design) — verification of the data transfer between the login component and the authorization component
  3. SYSTEM — USER STORIES, USE CASES — verification of the correct implementation of the business process “pay tax”
  4. SYSTEM INTEGRATION — SPECIFICATION OF THE SYSTEM-DATABASE INTERFACE — verification of the correctness of sending a query to the database and receiving its result
  5. ACCEPTANCE — USER MANUAL — validate that the description of the functionality contained in the manual is consistent with the actual business process realised by the system
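
A minimal pytest sketch of the component-level functional test from item 1 above; the calculate_tax function and the 19% rate are hypothetical, not taken from any real specification:

  import pytest

  # Hypothetical component under test: flat 19% tax, rounded to 2 decimal places.
  def calculate_tax(amount: float, rate: float = 0.19) -> float:
      return round(amount * rate, 2)

  # Component-level functional test: checks the correctness of the calculation
  # against expected values that would come from the component specification.
  @pytest.mark.parametrize("amount, expected", [
      (100.00, 19.00),
      (0.00, 0.00),
      (33.33, 6.33),
  ])
  def test_tax_amount_is_correct(amount, expected):
      assert calculate_tax(amount) == expected
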
6
Q

FUNCTIONAL COVERAGE

A

• degree to which a specific type of functional item has been tested
• expressed as a percentage of the items of a given type covered by testing
• tracking the relationship between test cases, their results and functional requirements — calculating what percentage of requirements have been covered by testing and identifying gaps in coverage (see the sketch below)
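
A minimal sketch of how functional coverage could be computed from such a traceability mapping; the requirement IDs and the mapping itself are hypothetical:

  # Hypothetical traceability: requirement ID -> test cases covering it.
  traceability = {
      "REQ-001": ["TC-01", "TC-02"],
      "REQ-002": ["TC-03"],
      "REQ-003": [],            # gap: no test covers this requirement
  }

  covered = [req for req, tests in traceability.items() if tests]
  coverage = len(covered) / len(traceability) * 100
  print(f"Functional coverage: {coverage:.0f}%")                  # 67%
  print("Gaps:", [req for req, tests in traceability.items() if not tests])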

7
Q

NONFUNCTIONAL TESTING

A

Evaluates CHARACTERISTICS OF SYSTEMS AND SOFTWARE, such as:
- USABILITY
- PERFORMANCE
- SECURITY

Nonfunctional testing - HOW WELL THE SYSTEM BEHAVES

• performed at all test levels

8
Q

8 QUALITY CHARACTERISTICS IN NONFUNCTIONAL TESTING

A
  1. FUNCTIONAL SUITABILITY
    functional completeness, f. correctness, f. appropriateness
  2. PERFORMANCE EFFICIENCY
    time behaviour, resource utilisation, capacity
  3. COMPATIBILITY
    co-existence, interoperability
  4. USABILITY
    appropriateness recognizability - learnability - operability - user error protection - user interface aesthetics - accessibility
  5. RELIABILITY
    maturity - availability - fault tolerance - recoverability
  6. SECURITY
    confidentiality - integrity - non-repudiation - authenticity - accountability
  7. MAINTAINABILITY
    modularity - reusability - analysability - modifiability - testability
  8. PORTABILITY
    adaptability - installability - replaceability
9
Q

SMART goals for NONFUNCTIONAL TESTING

A

• results must be precise - expressible in terms of well-defined metrics (a measurable example follows below)
• the hallmarks of properly defined requirements and tests are captured by the acronym:
Specific
Measurable
Attainable
Realistic
Time-bound
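
A minimal sketch of what a measurable (SMART) nonfunctional check could look like; the process_order function and the 200 ms threshold are assumptions used only for illustration:

  import time

  def process_order(order):
      time.sleep(0.05)          # hypothetical operation under test

  # Assumed measurable requirement: "95% of orders complete within 200 ms".
  def test_order_processing_time():
      durations = []
      for _ in range(20):
          start = time.perf_counter()
          process_order({"id": 1})
          durations.append(time.perf_counter() - start)
      durations.sort()
      p95 = durations[int(len(durations) * 0.95) - 1]
      assert p95 < 0.200, f"95th percentile {p95:.3f}s exceeds 0.200s"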

10
Q

WHITE-BOX TESTING

A

Tests are derived from the INTERNAL STRUCTURE OR IMPLEMENTATION OF A GIVEN SYSTEM
• INTERNAL STRUCTURE
code - architecture - workflows - data flow within the system

11
Q

STRUCTURAL COVERAGE IN WHITE-BOX TESTING

A

Degree to which a specific type of structural item has been tested, expressed as the percentage of items of a given type covered by tests

Coverage = (A / B) * 100%
A - number of structural elements covered by tests
B - number of all identified structural elements
Usually expressed as a percentage (see the sketch below)
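
A direct translation of the formula above into code; the branch names are hypothetical:

  def structural_coverage(covered_elements, all_elements):
      # Coverage = A / B * 100%, A = covered elements, B = all identified elements.
      return len(set(covered_elements)) / len(set(all_elements)) * 100

  branches_identified = ["b1", "b2", "b3", "b4"]
  branches_executed = ["b1", "b3", "b4"]
  print(f"Branch coverage: {structural_coverage(branches_executed, branches_identified):.0f}%")   # 75%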

12
Q

WHITE-BOX TESTING AT DIFFERENT LEVELS
Test level —— example structure to be covered

A

COMPONENT — source code (e.g. statements, branches, decisions, paths) (see the branch-coverage sketch after this list)

COMPONENT INTEGRATION —— call graph (i.e. instances of one function calling another), a set of API functions

SYSTEM —— Business process model

SYSTEM INTEGRATION —— call graph at the level of services offered by communicating systems

ACCEPTANCE —— tree model of the menu structure
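
A minimal sketch of component-level white-box tests aiming at branch coverage of a small function; shipping_cost and its thresholds are hypothetical, and a tool such as coverage.py would report the percentage achieved:

  def shipping_cost(weight_kg: float) -> float:
      # One decision, two branches: the condition must be evaluated both True and False.
      if weight_kg > 10:
          return 25.0
      return 10.0

  # One test per branch outcome gives 100% branch coverage of this function.
  def test_heavy_parcel_branch():
      assert shipping_cost(12) == 25.0

  def test_light_parcel_branch():
      assert shipping_cost(5) == 10.0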

13
Q

SPECIAL SKILLS/KNOWLEDGE FOR WHITE-BOX TESTING

A

• knowledge of code construction (enabling, e.g. the use of tools to measure code coverage)
• knowledge of software architecture

14
Q

BLACK-BOX TESTING

A

Based on SPECIFICATIONS (i.e. information external to the test object)

VERIFICATION of the system's BEHAVIOUR, making sure it conforms to the behaviour described in the specification (a small example follows below)
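
A minimal sketch of a black-box test derived purely from a hypothetical specification statement ("orders of 100 or more receive a 10% discount"), without looking at the implementation:

  # The implementation is treated as a black box; only its interface is exercised.
  def apply_discount(order_total: float) -> float:
      return order_total * 0.9 if order_total >= 100 else order_total

  # Test cases derived from the specification, including the boundary value 100.
  def test_discount_applied_at_boundary():
      assert apply_discount(100.0) == 90.0

  def test_no_discount_below_boundary():
      assert apply_discount(99.99) == 99.99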

15
Q

TEST LEVELS VS TEST TYPES

A

• independent of each other
• any test type can be applied at any level, but there is a tendency for some test types to be used at certain levels

White-box testing is often paired with nonfunctional testing
Black-box testing is typically associated with functional testing, but not always

16
Q

CONFIRMATION TESTING (re-testing)

A

• performed after a failure has occurred and the associated defect has been repaired
- VERIFYING THAT THE DEFECT HAS BEEN REPAIRED
• the retest is typically the same test that revealed the defect, but it can be a different test if the reported defect was a missing functionality
MINIMUM - executing the steps that caused the failure earlier (see the sketch below)
CONFIRMATION TESTING
DEFECT DETECTION —-> DEFECT REPAIR —-> CONFIRMATION TESTING
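
A minimal sketch of a confirmation test: the exact steps that produced the original failure are re-executed after the repair; the login function and the defect description are hypothetical:

  # Hypothetical defect report: login() failed when the username contained spaces.
  def login(username: str, password: str) -> bool:
      cleaned = username.strip()          # the repaired code path
      return bool(cleaned) and password == "secret"

  # Confirmation test: re-executes the steps that caused the failure earlier.
  def test_login_with_spaces_in_username_no_longer_fails():
      assert login("  alice  ", "secret") is True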

17
Q

REGRESSION TESTING

A

CORRECT OPERATION ——> CHANGE IN CODE —> REGRESSION TEST

• checking whether a change (fix or modification) made to one component of the code has inadvertently affected the behaviour of other parts (the same component, other components, the system, other systems); also takes into account changes in the environment (e.g. introduction of a new version of the operating system or database management system)

REGRESSION (regress - deterioration): unintended side effects

18
Q

REGRESSION TESTING - PROCESS

A

• re-executing tests to detect regressions
• SCHEDULED - after new functionality is added or at the end of each iteration
• good candidate for AUTOMATION (see the sketch below)
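
A minimal sketch of an automated regression test selected via a pytest marker; the marker name and the function under test are hypothetical (the marker would normally be registered in pytest.ini):

  import pytest

  # Existing behaviour that must keep working after unrelated changes.
  def total_price(items):
      return sum(price for _, price in items)

  @pytest.mark.regression          # hypothetical marker used to select the suite
  def test_total_price_unchanged():
      assert total_price([("book", 20.0), ("pen", 2.5)]) == 22.5

  # Typical automated run, e.g. nightly:  pytest -m regression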

19
Q

OPTIMISATION OF REGRESSION TESTING
Needed when the number of regression tests grows so large that they cannot all be run, e.g. in a single overnight run

A

PRIORITIZATION
the most important and most relevant tests are run first (see the sketch below)
REDUCTION
removing some tests from the regression suite; the removal criterion can be based e.g. on how often a test detects defects
MIXED STRATEGY
the most important tests run on a scheduled basis, the less important ones less frequently
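
A minimal sketch of the prioritization strategy: order the suite by an importance score and run the most relevant tests first; the metadata fields and values are hypothetical:

  # Hypothetical regression-suite metadata: business priority (1 = highest)
  # and how many defects each test has found in the past.
  suite = [
      {"name": "test_checkout", "priority": 1, "defects_found": 7},
      {"name": "test_profile", "priority": 3, "defects_found": 0},
      {"name": "test_payment", "priority": 1, "defects_found": 4},
  ]

  # Most important and historically most effective tests first.
  ordered = sorted(suite, key=lambda t: (t["priority"], -t["defects_found"]))
  for test in ordered:
      print("run:", test["name"])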

20
Q

MAINTENANCE TESTING

A

DURING THE MAINTENANCE PHASE - AFTER THE PRODUCT/SOFTWARE HAS BEEN RELEASED

• maintenance testing - performed after any change made in the maintenance phase

PURPOSE: verify that the change was implemented successfully and detect possible defects (side effects)
+ ALWAYS WITH REGRESSION TESTING

SCHEDULED AND UNSCHEDULED (after hot fixes)

At multiple test levels

21
Q

COMMON SITUATIONS WHEN SOFTWARE NEEDS CHANGES/CORRECTIONS during the MAINTENANCE PHASE

A

• Need to patch the system due to the discovery of a hacker attack
• adding, removing or modifying program features
• fixing the defect that caused the failure reported by the user
• need to archive data due to software decommissioning
• need to move the system to a new environment, caused e.g. by upgrading the version of the operating system

Improvement of the required nonfunctional quality characteristics of the software or system throughout its life cycle -> PERFORMANCE - COMPATIBILITY - RELIABILITY - SECURITY - PORTABILITY

22
Q

SCOPE OF MAINTENANCE TESTING

A

• RISK LEVEL ASSOCIATED WITH CHANGE (e.g. extent to which the changed software area communicates with other components or systems)
• SIZE OF EXISTING SYSTEM
• SIZE OF THE CHANGE MADE

23
Q

4 CATEGORIES OF CHANGES THAT TRIGGER MAINTENANCE TESTING

A
  1. MODIFICATION
    • mostly planned enhancements, corrective/emergency changes, changes to the operational environment (e.g. planned operating system or database upgrades)
    • software updates for COTS software
    • patches that fix security defects and vulnerabilities
  2. UPGRADE OR MIGRATION
    • transition from one platform to another
    • data conversion testing (when migrating data from another application to a maintained system)
  3. RETIREMENT
    when software is withdrawn from use
    • testing migration or archiving of data
    • testing of recovery procedures after archiving
    • regression testing to ensure that functionality remaining in use continues to work
  4. INTERNET OF THINGS (IoT) SYSTEMS
    • after introduction of completely new or modified elements in the system (e.g. hardware devices or software services)
    • INTEGRATION TESTING AT VARIOUS LEVELS
    • SECURITY
24
Q

IMPACT ANALYSIS

A

• EVALUATION OF THE CHANGES MADE TO A MAINTAINED VERSION, including:
- intended effects
- expected or potential side effects
- identification of the areas of the system that will be affected by the change
- impact of the change on existing tests (see the traceability sketch below)

• CONDUCTED BEFORE A CHANGE IS MADE, TO DETERMINE WHETHER THE CHANGE SHOULD BE MADE
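
A minimal sketch of how bidirectional traceability can support impact analysis by listing the tests affected by a changed component; the component names and test IDs are hypothetical:

  # Hypothetical traceability: component -> tests that exercise it.
  component_to_tests = {
      "login": ["TC-10", "TC-11"],
      "authorization": ["TC-11", "TC-12"],
      "reporting": ["TC-20"],
  }

  changed_components = {"login", "authorization"}

  # Impact analysis output: tests that must be re-run or revised after the change.
  affected = sorted({tc for comp in changed_components
                     for tc in component_to_tests.get(comp, [])})
  print("Affected tests:", affected)    # ['TC-10', 'TC-11', 'TC-12']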

25
Q

IMPACT ANALYSIS - DIFFICULTIES

A

• specifications (e.g. business requirements, user stories, architecture) are outdated or unavailable
• test cases have not been documented or are outdated
• bidirectional traceability between tests and the test basis has not been established
• tool support is nonexistent or inadequate
• people involved do not have the knowledge of the field or the system in question
• too little attention has been paid to the system's maintainability as a quality characteristic