1. Fundamentals of Testing Flashcards
Typical objectives of testing
7
- Prevent defects by evaluating work products (requirements, user stories, design, code)
- Verify if specified requirements are fulfilled.
- Check test object is complete & works as expected.
- Build confidence in quality.
- Find defects/failures to reduce risk of inadequate quality.
- Provide enough information for stakeholders to make decisions (especially on quality).
- Comply with or verify object’s compliance with contractual, legal or regulatory requirements.
Differentiate testing from debugging
Testing = finding and showing failures caused by defects.
Debugging = dev activity finding, analysing and fixing the defects.
(Confirmation testing checks whether defects have been resolved)
(Typically testers test and developers debug, but in Agile and some other lifecycles testers may be involved in debugging)
Give examples of why testing is necessary
4
- Detecting defects in requirements or user stories reduces the risk of incorrect or untestable features being developed.
- Testers working with system designers = better understanding of design and how to test it - reduces risk of fundamental design defects and enables early test identification.
- Testers working with developers = better understanding of code and how to test it - reduces risk of defects in code and tests.
- Testers verifying and validating software before release detects failures and supports debugging. Increases likelihood software meets stakeholder needs and requirements.
Describe the relationship between testing and quality assurance
Quality management: umbrella containing QA and QC
Quality assurance: Prevents defects by ensuring processes used to manage/create deliverables work and are followed. Process oriented. Proactive.
Quality control: includes testing. Determines whether the end result is as expected - detects problems by inspecting and testing. Product oriented. Reactive.
Because QA cares about the whole process being carried out properly, it supports proper testing; testing contributes to quality as part of QC.
Distinguish between error, defect and failure
- Error: human action that produces incorrect result. Caused by time pressure, inexperience etc.
- Defect: Imperfection or deficiency in a work product where it does not meet its requirements or specifications. Caused by errors in requirements, errors in programming etc.
- Failure: Event where component or system does not perform required function within specified limits. Caused by defects in code, environmental factors etc.
(Not all unexpected test results are failures: false positives report failures/defects that are not really there, while false negatives fail to detect defects that are)
Distinguish between root cause of a defect and its effects.
Root cause is the earliest actions or conditions that created a defect. The effect is the end result.
E.g.
Root cause: Story owner misunderstands something
Error: Story was not written correctly
Defect: Story is ambiguous
Defect: Code calculates improperly
Failure: Incorrect payment made to customer
Effect: Customer complains
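As an illustration only (the payment rule, amounts and boundary are invented, not from the syllabus), a minimal Python sketch of how an ambiguous story can become a code defect that produces an observable failure:

```python
# Hypothetical example: the ambiguous story said "apply a 10% discount
# over 100"; the developer read "over" as ">=", the business meant ">".

def payment_due(amount: float) -> float:
    # Defect: boundary implemented as >= instead of >
    if amount >= 100:
        return round(amount * 0.90, 2)
    return round(amount, 2)

def test_no_discount_at_exactly_100():
    # Intended requirement: no discount at exactly 100.
    assert payment_due(100.00) == 100.00  # fails -> the observed failure

if __name__ == "__main__":
    test_no_discount_at_exactly_100()
```

Running this raises an AssertionError - the failure that the customer would experience as an incorrect payment.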
Explain the 7 testing principles
• Testing shows the presence of defects not their absence.
(Can’t prove that there are no defects, can reduce probability of undiscovered defects remaining)
• Exhaustive testing is impossible.
(Impossible to test every possible combination of inputs and preconditions - instead focus test effort based on risk analysis, test techniques and priorities)
• Early testing saves time and money.
(Static and dynamic testing ASAP in the dev life cycle, helps reduce/eliminate costly changes.)
• Defects cluster together.
(Few modules usually contain most defects. Predicted/observed clusters are factors in risk analysis and focussing test efforts.)
• Beware of the pesticide paradox.
(Repeating the same tests over and over eventually stops finding new defects, like pesticide or antibiotic resistance. Changing test data or writing new tests may detect new defects. In automated regression testing, though, this repetition has a beneficial outcome: relatively few regression defects. See the parameterized test sketch after this list.)
• Testing is context dependent.
(Different software (high safety medical vs. mobile game app) and different methodologies (agile vs. sequential) have different testing.)
• Absence of errors is a fallacy.
(Perfection is impossible. Fallacy to think finding and fixing defects automatically makes software good. Software might still be inferior, badly designed etc.)
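A minimal sketch, assuming pytest and an invented leap-year function (neither is part of the syllabus), of one way to counter the pesticide paradox: vary the test data instead of re-running only the original case:

```python
# Hypothetical example: repeatedly running only the 2024 case would stop
# finding new defects; adding varied inputs (boundaries, unusual values)
# gives the "pesticide" a chance to work again.
import pytest

def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

@pytest.mark.parametrize("year, expected", [
    (2024, True),    # the original, oft-repeated case
    (1900, False),   # century boundary - new data may expose new defects
    (2000, True),    # 400-year rule
    (2023, False),
])
def test_is_leap_year(year, expected):
    assert is_leap_year(year) == expected
```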
Explain the impact of context on test practices.
Context affects an organisation's test process. For example, it influences the coverage criteria used, which in turn act as key performance indicators (KPIs) to drive demonstration of the achievement of test objectives.
Contextual factors can include:
• Software development lifecycle
• Methodologies used
• Test levels/types being considered
• Product and project risks
• Business domain
• Operational constraints (budgets, resources, timescale, complexity, contractual/regulatory requirements)
• Organisational policies and practices
• Required internal and external standards
List standard test activities
7
TEST PLANNING
TEST MONITORING AND CONTROL
TEST ANALYSIS
TEST DESIGN
TEST IMPLEMENTATION
TEST EXECUTION
TEST COMPLETION
Describe and list tasks within test planning
Define objectives, decide approach for meeting objectives within constraints imposed by context.
Describe and list tasks within test monitoring and control
Monitoring - Ongoing comparison of actual progress against planned progress using monitoring metrics defined in the test plan.
Control - Taking actions necessary to meet objectives of plan. (Supported by evaluation of exit criteria, progress against plan is communicated to stakeholders in progress reports)
Describe and list tasks within test analysis
(What to test)
Determines what to test in terms of measurable coverage criteria. The test basis is analysed to identify testable features and define associated test conditions.
Major activities:
• Analysing the test basis appropriate to the test level being considered.
- Requirement specifications (business/functional req, stories, use cases, similar work products)
- Design and implementation information (system/software architecture, design specs, call flow graphs)
- Implementation of component or system itself (code, database metadata/queries, interfaces)
- Risk analysis reports (considering functional, non-functional and structural aspects)
• Evaluating the test basis and items to identify defects.
- Such as ambiguities, omissions, inconsistencies, inaccuracies, contradictions and superfluous statements.
• Identifying features and sets of features to be tested.
• Defining and prioritising test conditions for each feature.
- Based on analysis of test basis, characteristics, business/technical factors, risks
• Capturing bi-directional traceability between each element of the test basis and associated test conditions.
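As a rough sketch only (the requirement and condition IDs are invented, and real projects would normally keep this in a test management tool rather than code), bi-directional traceability between test basis elements and test conditions could be pictured like this:

```python
# Hypothetical traceability between test basis elements (requirement IDs)
# and the test conditions derived from them.
from collections import defaultdict

# Forward traceability: test basis element -> test conditions
requirement_to_conditions = {
    "REQ-001": ["TC-001", "TC-002"],   # e.g. "valid login" conditions
    "REQ-002": ["TC-003"],             # e.g. "password reset" condition
}

# Backward traceability derived from the forward mapping
condition_to_requirements = defaultdict(list)
for req, conditions in requirement_to_conditions.items():
    for cond in conditions:
        condition_to_requirements[cond].append(req)

# Impact analysis: which conditions must be revisited if REQ-001 changes?
print(requirement_to_conditions["REQ-001"])      # ['TC-001', 'TC-002']
# Coverage check: which requirement does TC-003 cover?
print(condition_to_requirements["TC-003"])       # ['REQ-002']
```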
Describe and list tasks within test design
(How to test)
Test conditions are elaborated into high-level test cases, sets of test cases, and other testware.
- Designing and prioritising test cases and sets thereof.
- Identifying necessary test data to support conditions and cases.
- Designing test environment and identifying any needed infrastructure/tools.
- Capturing bi-directional traceability between test basis, conditions, and cases.
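A hedged sketch of what a high-level (logical) test case designed from a test condition might record - the fields, IDs and login scenario are assumptions for illustration, and concrete values are deliberately deferred to test implementation:

```python
# Hypothetical structure for a high-level test case; abstract test data
# needs are noted now and pinned down later, during test implementation.
from dataclasses import dataclass, field

@dataclass
class HighLevelTestCase:
    case_id: str
    condition_id: str          # traceability back to the test condition
    preconditions: list[str]
    action: str
    expected_result: str
    priority: str = "medium"
    test_data_needs: list[str] = field(default_factory=list)

login_lockout = HighLevelTestCase(
    case_id="CASE-007",
    condition_id="TC-001",
    preconditions=["user account exists", "account has two failed attempts"],
    action="attempt login with an invalid password",
    expected_result="account is locked and a lockout message is shown",
    priority="high",
    test_data_needs=["a user account near the lockout threshold"],
)
```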
Describe and list tasks within test implementation
(Do we now have everything in place to run the tests?)
Testware necessary for execution is created/completed, including sequencing the test cases into test procedures.
- Developing and prioritising test procedures (and possibly creating automated test scripts).
- Creating test suites out of procedures (and automated scripts if any)
- Arranging test suites within an execution schedule to result in efficient test execution.
- Building the test environment (inc. test harnesses, service virtualisation, simulators) and verifying everything needed is set up correctly.
- Preparing test data and ensuring it’s properly loaded into test environment.
- Verifying and updating bi-directional traceability between test basis, conditions, cases, procedures and suites.
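A minimal sketch, with invented procedure and suite names, of arranging test procedures into suites and an execution schedule so that a cheap, high-value suite runs first and a broken build fails fast:

```python
# Hypothetical suites built from test procedures; the ordering rule
# (smoke first, deeper suites later) is one common choice, not the only one.
smoke_suite = ["proc_login", "proc_view_balance"]
payments_suite = ["proc_single_payment", "proc_recurring_payment"]
regression_suite = smoke_suite + payments_suite + ["proc_report_export"]

execution_schedule = [
    ("smoke", smoke_suite),
    ("payments", payments_suite),
    ("regression", regression_suite),
]

for suite_name, procedures in execution_schedule:
    print(f"Suite {suite_name}: {procedures}")
```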
Describe and list tasks within test execution
Test suites are run in accordance with the execution schedule.
- Recording IDs & versions of test item, object, tool, testware.
- Executing tests (manually & with tools).
- Comparing actual and expected results.
- Analysing anomalies to establish likely causes (failures due to code defects, false positives)
- Reporting defects based on failures observed.
- Logging outcome of test execution.
- Repeating test activities - either as a result of action taken for an anomaly or as part of planned testing (e.g. execution of a corrected test, confirmation/regression testing)
- Verifying and updating bi-directional traceability between the test basis, conditions, cases, procedures and results.
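As an illustrative sketch only (the add_vat test object and case IDs are invented), the core loop of test execution: run each case, compare actual with expected results, and log the outcome so it can be traced back to the test case and, through it, the condition and test basis:

```python
# Hypothetical execution loop: run, compare, log.
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

def add_vat(net: float) -> float:          # the test object (invented)
    return round(net * 1.20, 2)

test_cases = [
    {"id": "CASE-010", "input": 100.00, "expected": 120.00},
    {"id": "CASE-011", "input": 0.00,   "expected": 0.00},
]

for case in test_cases:
    actual = add_vat(case["input"])
    outcome = "PASS" if actual == case["expected"] else "FAIL"
    logging.info("%s: expected=%s actual=%s -> %s",
                 case["id"], case["expected"], actual, outcome)
    # A FAIL would then be analysed: genuine failure, test defect (false
    # positive) or environment problem - and reported/repeated accordingly.
```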