Software Testing Flashcards
red-green-refactor testing
- fail - write a test and make sure it fails, before implementing
- pass - write an implementation, to make the test pass
- refactor + extend - refactor the system (and the tests) while keeping the tests green
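A minimal sketch of one cycle in TypeScript — the `slugify` function and its test are invented for illustration:

```typescript
// RED: write the test first; it fails before slugify is implemented.
function testSlugify(): void {
  if (slugify("Hello World") !== "hello-world") {
    throw new Error("slugify should lowercase and hyphenate");
  }
}

// GREEN: the simplest implementation that makes the test pass.
function slugify(title: string): string {
  return title.toLowerCase().split(" ").join("-");
}

// REFACTOR: clean up (e.g., switch to a regex) while the test stays green.
testSlugify();
```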
Unit Tests
- FAST and ISOLATED
- test a small facet of the code
Acceptance/End-to-end Tests
- SLOW and !ISOLATED
- done at the end with the user/person who will use the product
System Tests
- SLOW and !ISOLATED
- simulates user
- ties together integration tests and acceptance tests
Integration Tests
- SLOWER than unit tests
- MORE ISOLATED than acceptance tests
Smoke Tests
- try to expose a fault as quickly as possible, making it possible to defer running large swaths of unnecessary tests for a system that is already known to be broken
A/B Testing
- for production environments
- comparing 2 versions
- usually used for business decisions rather than functionality
4 phase tests
1) set up
2) execute the CUT/SUT (code/system under test)
3) evaluate the output
4) tear down
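The four phases can be sketched as comments in a single test — the `Counter` class here is a hypothetical CUT:

```typescript
// Hypothetical CUT: a simple in-memory counter.
class Counter {
  private n = 0;
  increment(): void { this.n += 1; }
  value(): number { return this.n; }
  reset(): void { this.n = 0; }
}

function testIncrement(): void {
  const counter = new Counter();   // 1) set up
  counter.increment();             // 2) execute the CUT
  if (counter.value() !== 1) {     // 3) evaluate the output
    throw new Error("expected counter to be 1");
  }
  counter.reset();                 // 4) tear down
}
testIncrement();
```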
Given-When-Then
1) start from some given state, where the system’s configuration is understood
2) The key action of the test is the when; this is often described in the test name using plain language (e.g., it('should be able to parse a document that has UTF-16 characters')).
3) The then step involves observing the output to ensure its correctness.
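The three steps map directly onto the body of a test; `parseCsvLine` below is a hypothetical function used only to illustrate the pattern:

```typescript
// Hypothetical CUT for illustration.
function parseCsvLine(line: string): string[] {
  return line.split(",").map((field) => field.trim());
}

function test_givenCsvLine_whenParsed_thenFieldsAreTrimmed(): void {
  // Given: a known starting state — a CSV line with stray whitespace.
  const line = "a, b ,c";
  // When: the key action of the test — parsing the line.
  const fields = parseCsvLine(line);
  // Then: observe the output to ensure its correctness.
  if (fields.join("|") !== "a|b|c") throw new Error("fields not trimmed");
}
test_givenCsvLine_whenParsed_thenFieldsAreTrimmed();
```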
TDD biggest strength
you are able to ensure that your system is structured in a testable way
4 properties of testability
controllability
observability
isolatability
and automatability
controllability
- the ability to control the inputs and dependencies of the code under test
- refactoring for controllability often makes test cases more succinct
- red flag:
- when a constructor contains new statements
- because when a constructor creates its own objects, you as the test writer don't have the opportunity to substitute those objects
- example: looking up an element by id directly; should pass in the jQuery object instead
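A sketch of the red flag and its fix, using an invented `Clock` dependency — injecting the dependency instead of constructing it makes the class controllable from a test:

```typescript
// Red flag (not controllable): the constructor news up its own dependency:
//   class Report { private clock = new SystemClock(); ... }
// Controllable version: the dependency is passed in, so a test can swap it.
interface Clock {
  now(): Date;
}

class Report {
  constructor(private clock: Clock) {}
  header(): string {
    return `Generated ${this.clock.now().toISOString()}`;
  }
}

// In a test, inject a fixed clock; the output is now deterministic.
const fixedClock: Clock = { now: () => new Date("2020-01-01T00:00:00Z") };
const report = new Report(fixedClock);
```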
Observability
- the ability to observe a value so that a test can check it
- (refactor) often these issues can be resolved by returning data to the caller that might otherwise just be passed further down a call chain
- (i.e., refactor to return what you want to check)
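A sketch of that refactor with invented names — returning the intermediate value lets a test check it directly instead of intercepting the next call in the chain:

```typescript
// Hard to observe: the total would be passed straight down the call chain:
//   function process(order: Order): void { ship(computeTotal(order)); }

// Observable refactor: return the value to the caller so a test can see it.
interface Order {
  items: number[];
}

function computeTotal(order: Order): number {
  return order.items.reduce((sum, price) => sum + price, 0);
}

// A test can now check the total without stubbing out ship().
const total = computeTotal({ items: [3, 4, 5] });
```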
Isolatability
- split up code so that subparts can be tested on their own
- also test as a whole, but splitting is the main idea
automatability
- easier to automate if you pass in objects instead of requiring manual use of the software to test
Glass Box Testing Faults
- confirmation bias: knowing the implementation tempts you to write tests that confirm it works rather than tests that try to break it
pros of using coverage
- cheap to compute, in contrast to reasoning about test quality by hand
- it is actionable
types of coverage
- line
- statement: differs from line coverage because it measures whether every statement on a line executed
- branch: both outcomes of every branch point
- path: all program paths (ie all possible combinations)
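A tiny invented function where the coverage types differ — one call with both inputs positive hits every line, but branch coverage also needs the false outcomes, and path coverage needs all four combinations:

```typescript
// classify(1, 1) gives 100% line coverage, but only the true outcome of
// each branch; branch coverage needs the false outcomes too, and path
// coverage needs all four true/false combinations.
function classify(x: number, y: number): string {
  let label = "";
  if (x > 0) { label += "x+"; }  // branch 1
  if (y > 0) { label += "y+"; }  // branch 2
  return label;
}
```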
Boundary Value Analysis
BVA: 4 values for boolean testing
true, false, null, undefined
BVA: numbers
- Number.NEGATIVE_INFINITY
- -Number.MAX_VALUE
- Number.POSITIVE_INFINITY
- Number.MAX_VALUE
- undefined
- null
- you can have null checks enabled to cover null and undefined
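These boundary values can be driven through a CUT in a loop — `isValidTemperature` is a hypothetical function that should accept only finite numbers:

```typescript
// BVA boundary values for a numeric input (names from the JS Number API).
const boundaryValues: unknown[] = [
  Number.NEGATIVE_INFINITY,
  -Number.MAX_VALUE,
  Number.POSITIVE_INFINITY,
  Number.MAX_VALUE,
  undefined,
  null,
];

// Hypothetical CUT: should reject non-finite or missing input.
function isValidTemperature(t: unknown): boolean {
  return typeof t === "number" && Number.isFinite(t);
}

for (const v of boundaryValues) {
  // Only the finite extremes (±Number.MAX_VALUE) should be accepted.
  const expected = v === -Number.MAX_VALUE || v === Number.MAX_VALUE;
  if (isValidTemperature(v) !== expected) {
    throw new Error(`boundary check failed for ${String(v)}`);
  }
}
```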
Equivalence Class Partitioning (ECP)
- inputs are decomposed into classes; make sure to test at least one input from each class
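A sketch with an invented `grade` function — the input domain splits into four classes, and one representative from each is enough:

```typescript
// Hypothetical CUT: grade a percentage score.
function grade(score: number): string {
  if (score < 0 || score > 100) return "invalid";
  return score >= 50 ? "pass" : "fail";
}

// Classes: invalid-low, fail, pass, invalid-high — one representative each.
const representatives: Array<[number, string]> = [
  [-10, "invalid"],
  [25, "fail"],
  [75, "pass"],
  [150, "invalid"],
];
for (const [input, expected] of representatives) {
  if (grade(input) !== expected) throw new Error(`grade(${input}) wrong`);
}
```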
state transition testing
- identify the states or modes that the program will assume at runtime
- determine how the program flows between states and test the behaviour during these transitions
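A minimal sketch with an invented two-state connection — each test exercises one transition (or a deliberate non-transition):

```typescript
// Hypothetical state machine: a connection that can be open or closed.
type State = "closed" | "open";
type Event = "open" | "close";

function transition(state: State, event: Event): State {
  if (state === "closed" && event === "open") return "open";
  if (state === "open" && event === "close") return "closed";
  return state; // events that don't apply leave the state unchanged
}

// One test per transition between states.
if (transition("closed", "open") !== "open") throw new Error("closed->open");
if (transition("open", "close") !== "closed") throw new Error("open->closed");
if (transition("closed", "close") !== "closed") throw new Error("self-loop");
```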
User Acceptance Testing
- also called use-case testing / user-story testing
- tries to simulate interaction with the system
- can also simulate both untrained and adversarial users
Input Partitioning: pros/cons
- pros:
  - avoids confirmation bias
  - even though black box -> systematic
  - simplifies large domains
- cons:
  - dependent on the specification
  - inputs can be less interesting than outputs (output partitioning needed)
  - defects can arise at boundaries (combine input partitioning with BVA)
Output Partitioning
- identify inputs that exercise each of the desired outputs
Output Partitioning: pros/cons
- pros:
  - focuses on observable outputs
  - can encourage a different approach than input partitioning
- cons:
  - usually fewer outputs than inputs (might drive easier tests)
  - the effort required to derive inputs that achieve a given output can be challenging
fuzz testing
Fuzz testing automates parts of the test writing process by generating inputs programmatically, often with some randomness
1) generates an input
2) executes the SUT with the generated input
3) observes whether a crash occurred
4) if no crash, repeats from step 1
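The loop above can be sketched as a random fuzzer — `parseAge` is a deliberately brittle hypothetical SUT whose thrown error stands in for a crash:

```typescript
// Hypothetical SUT: throws (our stand-in for a crash) on non-numeric input.
function parseAge(input: string): number {
  const n = Number(input);
  if (Number.isNaN(n)) throw new Error("crash: not a number");
  return n;
}

function randomString(len: number): string {
  const chars = "0123456789abc ";
  let s = "";
  for (let i = 0; i < len; i++) {
    s += chars[Math.floor(Math.random() * chars.length)];
  }
  return s;
}

// Returns a crashing input, or null if none was found.
function fuzz(iterations: number): string | null {
  for (let i = 0; i < iterations; i++) {
    const input = randomString(5);  // 1) generate an input
    try {
      parseAge(input);              // 2) execute the SUT
    } catch {
      return input;                 // 3) crash observed: report the input
    }
  }                                 // 4) no crash: repeat from step 1
  return null;
}
```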
common use for fuzz testing
- test that the system does not crash
- used to find segmentation faults or other types of runtime errors
types of fuzzing
1) random fuzzing: tests completely at random (usually specifies type, can at times be TOO random)
2) generator-based fuzzing: developer writes code that generates random input BUT they are guaranteed to be valid
3) mutational fuzzing: defining mutations—operations that transform one input into another, slightly different input
Mutation Testing
Mutation operators are used to create mutant versions of the code under test. Mutants tend to be simple: for example changing bounds checks (e.g., a >= arr.length to a > arr.length), or boolean values (e.g., if (isReady) {} to if (!isReady) {}).
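The bounds-check mutant from the card, written out — a good test suite "kills" the mutant by containing at least one test whose result differs between the original and the mutant:

```typescript
// Original code under test.
function inBounds(a: number, arr: number[]): boolean {
  return a >= 0 && a < arr.length; // original check
}

// Mutant produced by the operator that changes < to <=.
function inBoundsMutant(a: number, arr: number[]): boolean {
  return a >= 0 && a <= arr.length; // mutated check
}

// The boundary input a === arr.length distinguishes them: a suite that
// tests this input kills the mutant; a suite that doesn't lets it survive.
const data = [10, 20, 30];
const killsMutant = inBounds(3, data) !== inBoundsMutant(3, data);
```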