Software testing Flashcards
testing enables us to
- find defects in the product
- verify the product is fit for purpose
- measure the quality of the product
- mitigate risks associated with the product
- build confidence in the product
why is a software testing methodology needed?
To meet legal, contractual, and industry-specific needs, etc.
- to find defects in the product
positive testing
- verifies correct operation with valid input ("does it work?"); if an error occurs, the test fails
negative testing
- ensures application handles invalid or unexpected user behaviour
- try to break the code
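The two cards above can be sketched with a toy function and plain assertions; `divide()` is a hypothetical example, not part of any real product under test:

```python
# A minimal sketch of positive vs negative testing, assuming a
# hypothetical divide() function as the unit under test.

def divide(a, b):
    """Divide a by b, rejecting division by zero."""
    if b == 0:
        raise ValueError("cannot divide by zero")
    return a / b

# Positive test: valid input, verifies correct operation.
assert divide(10, 2) == 5

# Negative test: invalid input, verifies the error is handled
# (i.e. we deliberately try to break the code).
try:
    divide(10, 0)
except ValueError:
    pass  # expected: invalid input was rejected cleanly
else:
    raise AssertionError("negative test failed: no error raised")
```

In a real suite the same idea is usually expressed with a framework such as pytest (`pytest.raises(ValueError)`), but the pass/fail logic is identical.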
- finding defects
- verify product fit for purpose
Project delivers to a set specification: the specification must be correct, and the product must meet it
Referred to as:
Validation (is the specification correct, i.e. are we building the right product?)
Verification (does the product meet the specification, i.e. are we building the product right?)
- Measuring quality of product
Degree a system meets:
- specified requirements
- user/customer needs and expectations
can be objectively and subjectively measured
How testing enhances quality short and long term
ST
- testing finds defects, fixes defects, confirms fixes, software has fewer defects
LT
- finds defects, defect analysis identifies causes, process improvement prevents recurrence, future systems have fewer defects
- mitigating risk associated with product
Risks are categorised by impact and probability, and testing effort is organised by risk level
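Categorising by impact and probability can be sketched as a simple scoring scheme; the risk descriptions and 1–5 scales below are illustrative assumptions:

```python
# A minimal sketch of risk-based prioritisation: each risk is scored
# as impact x probability, and testing effort is ordered by that score.

risks = [
    # (description, impact 1-5, probability 1-5)
    ("payment processing fails", 5, 3),
    ("typo on help page",        1, 4),
    ("data loss on crash",       5, 2),
]

# Organise by risk: highest impact x probability first.
prioritised = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for desc, impact, prob in prioritised:
    print(f"score {impact * prob:2d}: {desc}")
```

Highest-scoring risks get tested first, which is the "use risks and priorities to focus testing" idea from the exhaustive-testing principle later in these notes.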
- Building confidence in the product
Test coverage
Rate of defect discovery
Number of known defects
Percentage of test cases passed
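The confidence metrics listed above are simple ratios; the run figures below are hypothetical:

```python
# A rough sketch of two of the confidence metrics above, computed
# from made-up test-run figures.

cases_total = 200      # test cases planned
cases_executed = 180   # test cases actually run
cases_passed = 171     # of those run, how many passed

coverage = cases_executed / cases_total * 100
pass_rate = cases_passed / cases_executed * 100

print(f"test coverage: {coverage:.0f}%")  # 90%
print(f"pass rate: {pass_rate:.0f}%")     # 95%
```

Rate of defect discovery and number of known defects come straight from the incident log rather than from a formula.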
Remember
Testing itself does not change the quality of the system (it detects defects, which are then repaired)
Tests should measure functional and non-functional characteristics
Provides a learning opportunity for future projects
Needs to be team wide incorporating different departments
Test basis to test case process
Test basis
- Requirements specification
- source for creation of test cases
Test condition:
- an item or event of a component or system that could be verified by one or more test cases (e.g. a function, feature, quality attribute, or structural element)
- i.e. something that it is possible to test
Test Case:
- a set of input values, execution preconditions, expected results and execution post-conditions, developed for a particular test condition
example:
input - “The Beatles, A Day in the Life”
expected output: a list of websites relating to The Beatles
Characteristics of a good requirement
- Unambiguous
- Testable
- Clear
- Correct
- Understandable
- Feasible
- Independent
- Atomic
- Necessary
- Implementation-free
write good requirement cards
fundamental test process
- Test Planning and Control
- Test Analysis and Design
- Test Implementation and Execution
- Evaluating Exit Criteria and Reporting
- Test Closure
Test planning and control
- define testing objectives
- comparing planned progress and actual progress
- allows the test process to be compared against previous and future versions
Test analysis and design
- Reviewing test basis
- evaluate testability of test basis
- identifying and prioritising test conditions
- designing and prioritising high-level test cases
- identifying necessary test data
- designing test environment set up
- identifying any required testing infrastructure and tools
- creating bi-directional traceability (from test basis to case and case to basis)
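Bi-directional traceability can be sketched as a pair of mappings; the requirement and test-case IDs here are illustrative assumptions:

```python
# A minimal sketch of bi-directional traceability: each test basis
# item (requirement) maps to its test cases, and the reverse mapping
# is derived so questions can be answered in both directions.

basis_to_cases = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
}

# Derive test case -> requirement (case to basis).
case_to_basis = {
    tc: req
    for req, cases in basis_to_cases.items()
    for tc in cases
}

print(case_to_basis["TC-002"])  # REQ-001
```

With both directions available you can answer "which cases cover this requirement?" and "why does this test case exist?".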
Test implementation and execution
- Finalising and prioritising test cases
- Finalising the test data
- Developing and prioritising test procedures
- Creating the test data
- Preparing test harnesses
- Writing automated test scripts
- Creating test suites from the test procedures for efficiency
- Verifying the test environment has been set up correctly
- Verifying traceability between test basis and test cases
- Executing test procedures in planned sequence
o Manually or using test execution tools
- Logging the outcome (results) of each test execution
- Comparing actual results with expected results
- Recording discrepancies as incidents
- Analysing incidents for causes
- Repeating test activities (with necessary regression)
Evaluating exit criteria and reporting
Checking logs against exit criteria specified in Test Planning
(e.g. Are there any critical/high risk defects remaining?
Have all high risk areas been completely tested?)
* Assessing if more tests are required (How successful were our tests?)
* Assessing if exit criteria need changing
* Writing a test summary for stakeholders
Test closure
Checking planned deliverables have been delivered
* Closing incident reports (failed tests have an answer/action)
* Raising change records for remaining open incidents
* Documenting the acceptance of the system (acceptance testing)
* Finalising and archiving test-ware and test environment
* Analysing lessons learned (post-mortem follow up meetings)
* Using information gathered to improve test maturity
(retrospective)
Seven principles of software testing
- Testing shows the presence of defects
- Exhaustive testing is impossible
- Early testing
- defect clustering
- The pesticide paradox
- Testing is context dependent
- The absence of errors fallacy
- Testing shows the presence of defects
- Testing shows defects but cannot prove there are no defects
- Even if no defects are found, not a proof of correctness
Exhaustive testing is impossible
- Not possible to include all testing combinations
- use risks and priorities to focus testing efforts (risk analysis)
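The impossibility of exhausting all input combinations is easy to see with a small count; the field sizes below are illustrative assumptions:

```python
# Why exhaustive testing is impossible: even a small form with a few
# independent fields has a combinatorial number of input combinations.

field_values = {
    "country": 195,
    "browser": 6,
    "language": 40,
    "date": 366,
}

total = 1
for n in field_values.values():
    total *= n

print(f"{total:,} combinations for just four fields")
# -> 17,128,800 combinations for just four fields
```

Adding free-text fields or sequences of user actions pushes the count far beyond anything runnable, which is why risk analysis is used to focus effort instead.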
- Early testing
- Testing should start as early as possible to save time and resources
- Based on defined objectives
- Defect clustering
- Small number of modules usually have most defects
- can be due to complex code or challenging requirements
- external factors such as changing laws and business pressures
- teams having more skill in one area than another
- The pesticide paradox
- If the same tests are repeated over and over, they will eventually find no new bugs
- Overcome by reviewing test cases regularly
- create new and different tests to exercise different areas of the software