Software Testing Theory Flashcards
Validation vs. verification
Validation — confirmation by examination and through the provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.
Verification — confirmation by examination and through the provision of objective evidence that specified requirements have been fulfilled.
In short: validation asks “are we building the right product?”, while verification asks “are we building the product right?”.
Seven testing principles
- Testing shows the presence of defects, not their absence
- Exhaustive testing is impossible
- Early testing saves time and money
- Defects cluster together
- Beware of the pesticide paradox
- Testing is context dependent
- Absence-of-errors is a fallacy
Error vs. defect vs. failure
Error — a human action that produces an incorrect result.
Defect — any state that deviates from expectations based on the requirements specification, project documentation, user documentation, standards, and others; or from someone’s perception or experience. Anomalies can be found during reviews, testing, analysis, compilation, or when using the software or browsing documentation.
Failure — a deviation of the module or system from the expected behavior or result.
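The chain is easiest to see in code. A minimal sketch, assuming a hypothetical discount function: the developer’s mistake is the error, the wrong formula left in the code is the defect, and the wrong result at runtime is the failure.

```python
def apply_discount(price: float, percent: float) -> float:
    # Error: the developer mistyped the formula (a human mistake).
    # Defect: the code now divides by 10 instead of 100.
    return price - price * percent / 10

# Failure: at runtime the system deviates from the expected result.
expected, actual = 80.0, apply_discount(100.0, 20.0)
print(f"expected {expected}, got {actual}")  # failure: got -100.0
```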
Low-level test case vs. high-level test case
Low-level test cases (also known as concrete test cases) are those with specific (implementation-level) input data and expected results. The logical operators from high-level test cases are converted into concrete values that correspond to the objectives of those operators.
High-level test cases (also known as abstract or logical test cases) are those without specific (implementation-level) input data and expected results. Logical operators are used; actual values are not yet defined or available.
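A minimal sketch of the difference, using pytest and a hypothetical `login` function (the module name and credentials are illustrative):

```python
# High-level (abstract) test case, as it might appear in a test design:
#   "Logging in with valid credentials of an existing user shows the dashboard."
#   (logical conditions only, no concrete values)

from myapp.auth import login  # hypothetical module

# The same case as a low-level (concrete) test case:
def test_login_with_valid_credentials_shows_dashboard():
    # Concrete input data replaces the logical conditions.
    result = login(username="jan.kowalski", password="S3cret!23")
    # A concrete expected result replaces the abstract outcome.
    assert result.page == "dashboard"
```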
Essential parts of a test case
unique id — a special number usually assigned by the test case management software.
name — a short name identifying the test case and describing what it concerns.
description — a short description of the test case: the tested functionality and the related defect, user story, or requirement.
preconditions — the required state of the software and environment before testing.
steps to execute — a verbal description of the activities to be performed in the test case.
expected result — the expected result of the step/test case.
exit condition — the required state of the software and environment at the end of the tests.
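These parts map naturally onto a record type. A minimal sketch in Python (the field names are illustrative, not tied to any particular test management tool):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    unique_id: str              # usually assigned by the test case management software
    name: str                   # short, descriptive name
    description: str            # tested functionality, user story, requirement
    preconditions: list[str]    # required state of software/environment before testing
    steps_to_execute: list[str] # activities to perform
    expected_result: str        # expected outcome of the steps
    exit_condition: str         # required state at the end of the tests

tc = TestCase(
    unique_id="TC-101",
    name="Login with valid credentials",
    description="Covers user story US-12: user authentication",
    preconditions=["User account exists", "User is logged out"],
    steps_to_execute=["Open the login page", "Enter credentials", "Click 'Log in'"],
    expected_result="The dashboard page is displayed",
    exit_condition="User session is active",
)
```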
What are the most common test case tools?
TestRail, Jira
Common parts of a bug report
title — it should be short and informative. It should describe exactly what does not work and indicate where the error occurs. Avoid vague phrases such as “the form does not display”, “the system does not respond”, or “the button does not work”. A good example: “The registration form is not displayed after logging in to the site”.
environment — information about the environment: the operating system, the browser and its version, and the version of the application in which the failure occurred.
priority — determines how quickly the defect should be fixed, from the highest (critical, blocking the application) to the lowest (no noticeable effect on the application).
reproducibility — how easily the error can be reproduced, e.g. it occurs in 100% of cases, or very rarely, e.g. in 1 out of 10 attempts.
preconditions — conditions that must be met in order to reproduce a bug.
test data — a set of test data that was used when the bug appeared.
steps to reproduce — steps to follow to reproduce the error.
actual result — the actual result observed while executing the test case.
expected result — expected result based on documentation/requirements.
screenshot/video/log — an additional attachment that documents the bug. It can be not only a screenshot or video but also, for example, a .log file containing the application’s event log.
Test activities/tasks
- Test planning (a goal is defined, a testing technique is selected, and a test schedule is formulated)
- Test analysis (defining test conditions, specifying various requirements, user stories, and use cases)
- Test design (designing test cases, identifying necessary test data, designing the test environment)
- Test implementation (creating test sets and automated test scripts, building the test environment, preparing test data)
- Test execution (performing tests, comparing actual and expected results, analyzing failures, reporting defects, documenting test results)
- Test completion (checking if all defects have been handled, creating a summary test report and forwarding it to the stakeholders, archiving the test environment and test data)
- Test monitoring and control (a continuous process of comparing actual progress against the plan, adjusting the test scope where needed, and informing the stakeholders about the status)
Software development models
- Waterfall aka cascade (sequential model)
- V model (sequential model)
- Rational Unified Process aka RUP (iterative model)
- Scrum/Agile (incremental model)
- Kanban (incremental model)
- Spiral model (incremental model)
Test levels
- Unit aka component tests
- Unit aka component integration tests
- System tests
- System integration tests
- Acceptance tests
What is smoke testing?
Smoke testing is the practice of testing the fundamental and core functions of a software build in the early phases of development to identify serious issues that might block further testing or delay the product’s release.
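A minimal smoke-test sketch with pytest, assuming a hypothetical Flask-style application factory `create_app`; only the most crucial paths are checked:

```python
import pytest
from myapp import create_app  # hypothetical application factory

@pytest.fixture
def client():
    # Build the app and a test client for each test.
    return create_app().test_client()

def test_application_starts(client):
    # The core "does it even run" check.
    assert client.get("/").status_code == 200

def test_login_page_is_reachable(client):
    # A critical function must at least be reachable.
    assert client.get("/login").status_code == 200
```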
Alpha and beta testing
Alpha testing is performed at the site of the software development organization, but not by the development team: the tests are performed by potential or existing customers or by independent testers.
Beta testing is performed by current or potential customers in their own locations. Beta testing may or may not be preceded by alpha testing.
CI/CD meaning
Continuous Integration/Continuous Delivery — a software development practice of frequently and regularly delivering code to the repository and verifying each change by building the project and running unit and/or integration tests.
What are the types of tests?
- Functional tests
- Non-functional tests (reliability, performance, load, stress, security, compatibility, and usability tests)
- White-box tests
- Black-box tests
What are the main testing techniques?
- Black-box (aka specification-based) test techniques include use case testing, equivalence partitioning, boundary value analysis, decision tables, and state transition testing (see the sketch after this list)
- White-box (aka structure-based) test techniques include statement coverage (executing every executable statement) and decision/branch coverage (testing every decision/branch in the program both ways)
- Experience-based (exploratory, checklist, error guessing, defect injection/fault attack)
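As an example of the black-box techniques above, here is a minimal sketch of equivalence partitioning combined with boundary value analysis in pytest, assuming a hypothetical `is_valid_age` function that accepts ages 18-130:

```python
import pytest
from myapp.validation import is_valid_age  # hypothetical: valid range is 18-130

# Equivalence partitions: below 18 (invalid), 18-130 (valid), above 130 (invalid).
# Boundary value analysis picks values on and just outside each boundary.
@pytest.mark.parametrize("age, expected", [
    (17, False),   # lower boundary - 1 (invalid partition)
    (18, True),    # lower boundary (valid partition)
    (130, True),   # upper boundary (valid partition)
    (131, False),  # upper boundary + 1 (invalid partition)
])
def test_age_boundaries(age, expected):
    assert is_valid_age(age) == expected
```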
What is stress testing?
Stress testing is a type of performance test that checks how the system/module behaves at or beyond the expected load (e.g. number of concurrent users), or with limited or lacking resources such as processor, memory, or disk space.
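A minimal stress-test sketch: fire far more concurrent requests than the expected load at a hypothetical endpoint and watch the error rate and latency degrade (the URL and numbers are illustrative; assumes the `requests` package is installed):

```python
import concurrent.futures
import time
import requests

URL = "http://localhost:8000/api/orders"  # hypothetical endpoint

def call_once(_):
    start = time.perf_counter()
    try:
        ok = requests.get(URL, timeout=5).status_code == 200
    except requests.RequestException:
        ok = False
    return ok, time.perf_counter() - start

# 200 workers x 2000 calls: deliberately beyond the expected load.
with concurrent.futures.ThreadPoolExecutor(max_workers=200) as pool:
    results = list(pool.map(call_once, range(2000)))

errors = sum(1 for ok, _ in results if not ok)
print(f"errors: {errors}/{len(results)}, "
      f"max latency: {max(t for _, t in results):.2f}s")
```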
What is usability testing?
Tests that check how easy the software is to use, how easily users learn to use it, and how convenient it is for end users.