Intro Flashcards
BS 7925-1
Testing involves executing software with the intent to identify errors and ensure the application meets specified requirements.
IEEE Definition
Testing evaluates systems or components, manually or automatically, to verify that they satisfy requirements or identify discrepancies between expected and actual outcomes.
ISEB Syllabus
Testing measures software quality by identifying defects, encompassing both functional (what the software does) and non-functional (how it performs) aspects.
Entities in the Testing Problem
- P (Implementation): The program as actually developed and built.
- S (Specification): Requirements or goals defining the system’s correct behavior.
- O (Observation): Outputs and effects of the system under testing.
- T (Testing): Process of verifying that P aligns with S by examining O.
Testing Axioms
- Bugs cannot be fully eliminated
- Exhaustive testing is impossible
- Testing is risk-based and context-dependent
- Testing starts early in the SDLC
- Finding bugs often uncovers more
Testing is risk-based and context-dependent
Strategies vary between critical systems like medical software and non-critical systems.
Testing starts early in the SDLC
Catch bugs early to minimize the cost and effort of fixing them later.
Testing Principles
- Traceability: Every test should correspond to a requirement
- Risk Prioritization: Test high-risk, high-impact areas first to get the most value from limited testing effort.
- Pareto Principle: 80% of issues stem from 20% of the code.
- Diversity of Techniques: No single method can uncover all types of bugs; use a mix of testing strategies (ex. white box, black box)
- Bug Fix Timing: Fixing defects early reduces cost and complexity
- Pesticide Paradox: Regularly change and enhance tests to ensure continued effectiveness in finding bugs.
Additional Principles
- Sensitivity: A test is more effective if it fails consistently when an issue exists
- Intentions: Clearly define what each test aims to achieve to prevent ambiguity
- Partition: Break down large problems into smaller, manageable sections
- Feedback: Improve the testing process continuously based on past and present outcomes.
Error
Human mistake that leads to an incorrect implementation
Fault/Defect/Bug
Specific problem in the program (ex. incorrect logic or data)
Failure
Visible manifestation of a fault during program execution
Levels of Testing
- Unit testing - Verifies individual components or modules in isolation
- Integration testing - Ensures different components work together correctly
- System testing - Evaluates the entire application as a whole against requirements
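A minimal sketch of the unit level, using Python's `unittest` and a hypothetical `apply_discount` function standing in for the component under test (the function and its rules are illustrative, not from the source):

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical component under test: reduce price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class TestApplyDiscount(unittest.TestCase):
    """Unit tests: the component is exercised in isolation."""

    def test_typical_discount(self):
        self.assertAlmostEqual(apply_discount(100.0, 20.0), 80.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(50.0, 0.0), 50.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150.0)
```

Run with `python -m unittest <file>`. Integration and system tests would instead exercise this function together with the surrounding pricing and checkout components.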
The Testing Process
- Test Planning
- Test Design
- Environment Setup
- Execution
- Problem Reporting
- Exit Criteria
Test Planning
Establish entry/exit criteria, test strategies, tools, schedules, and resources, and introduce problem tracking and reporting mechanisms.
Test Design
Review system requirements, architecture, and testability. Define test conditions, required test data and specific test cases with preconditions and steps
Environment Setup
Configure hardware, software, network and database environments needed for testing
Execution
Run test cases, record outcomes (Pass/Fail/Not executed) and perform regression testing to make sure fixes don’t break existing functionality.
Problem Reporting
Document with description, steps to reproduce, severity and priority, and potential new test ideas.
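The fields above can be captured as a simple record; this is a sketch with hypothetical field names and example values, not a prescribed report format:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProblemReport:
    """Minimal bug report: the fields listed in the flashcard above."""
    description: str
    steps_to_reproduce: List[str]
    severity: str                       # e.g. "critical", "major", "minor"
    priority: str                       # e.g. "high", "medium", "low"
    new_test_ideas: List[str] = field(default_factory=list)

report = ProblemReport(
    description="Discount applied twice at checkout",
    steps_to_reproduce=["Add item to cart", "Apply coupon", "View total"],
    severity="major",
    priority="high",
    new_test_ideas=["Apply the same coupon twice in one session"],
)
```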
Exit Criteria
Define conditions for ending testing: coverage threshold met, no critical faults remaining, testing within time and budget constraints.
Test Case
Input, execution steps, and expected outcome
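These three parts can be made concrete as a small record; a sketch assuming a hypothetical `add` function under test:

```python
from dataclasses import dataclass
from typing import Any, Callable, Tuple

@dataclass
class Case:
    """One test case: input, an execution step, and the expected outcome."""
    name: str
    inputs: Tuple               # input values
    action: Callable[..., Any]  # execution step
    expected: Any               # expected outcome

    def run(self) -> bool:
        """Execute the step and compare actual against expected."""
        return self.action(*self.inputs) == self.expected

def add(a: int, b: int) -> int:  # hypothetical function under test
    return a + b

case = Case("adds two integers", (2, 3), add, expected=5)
print(case.run())  # True when the actual outcome matches the expected one
```

A test suite is then just a collection of such cases, and an adequacy criterion judges how well that collection covers the testing requirements.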
Test Specification
Requirements satisfied by one or more test cases
Test Suite
Collection of test cases
Adequacy Criterion
Measure of how effectively a test suite meets testing requirements.
Types of Coverage Analysis
- Statement (Every line of code executes)
- Branch (Every decision point’s branches are tested)
- Condition (Tests logical conditions for all T/F combinations)
- Path (Tests all potential execution paths)
- Data (Assesses how input data affects execution)
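The gap between statement and branch coverage shows up even on a tiny illustrative function:

```python
def classify(x: int) -> str:
    label = "non-negative"
    if x < 0:
        label = "negative"
    return label

# classify(-1) alone executes every statement above (100% statement
# coverage), yet the decision `x < 0` never evaluates to False, so
# branch coverage stays incomplete until classify(5) is also run.
assert classify(-1) == "negative"
assert classify(5) == "non-negative"
```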
Balance Between Effort and Effectiveness
Spending 40 hours of testing that finds 10 bugs is a better return than 80 hours that finds 12: the second 40 hours yields only 2 additional bugs, a clear case of diminishing returns.
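Using the figures above, one simple way to compare the two options is defects found per hour of effort:

```python
def defects_per_hour(defects: int, hours: float) -> float:
    """Crude effectiveness measure: defects found per hour of testing."""
    return defects / hours

print(defects_per_hour(10, 40))  # 0.25 defects/hour
print(defects_per_hour(12, 80))  # 0.15 defects/hour: lower yield despite double the effort
```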