Foundation Level 1 Flashcards
Software testing
a set of activities to discover defects and evaluate the quality of software artifacts.
Test Objectives
- Evaluating work products such as requirements, user stories, designs, and code
- Ensuring required coverage of a test object
- Reducing the level of risk of inadequate software quality
- Verifying whether specified requirements have been fulfilled
- Verifying that a test object complies with contractual, legal, and regulatory requirements
- Providing information to stakeholders to allow them to make informed decisions
- Building confidence in the quality of the test object
- Validating whether the test object is complete and works as expected by the stakeholders
Debugging
concerned with finding causes of the failure (defects), analyzing these causes, and eliminating them
Typical debugging process
- Reproduction of a failure
- Diagnosis (finding the root cause)
- Fixing the cause
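In miniature, the three steps might look like this (the function and its off-by-one defect below are hypothetical, purely for illustration):

```python
# Hypothetical miniature of the debugging process.

def last_item(items):
    # 1) Reproduction: last_item([1, 2, 3]) raised IndexError.
    # 2) Diagnosis: the index len(items) is one past the end of the list
    #    (an off-by-one defect rooted in a zero-based-indexing mistake).
    # 3) Fix: use len(items) - 1 (or simply items[-1]).
    return items[len(items) - 1]

assert last_item([1, 2, 3]) == 3  # the reproducing check now passes
```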
Quality control (QC)
a product-oriented, corrective approach that focuses on those activities supporting the achievement of appropriate levels of quality (testing is a major form of QC)
Quality assurance (QA)
a process-oriented, preventive approach that focuses on the implementation and improvement of processes
Test results in QA and QC
- In QC they are used to fix defects
- In QA they provide feedback on how well the development and test processes are performing
A root cause
a fundamental reason for the occurrence of a problem (e.g., a situation that leads to an error). Root causes are identified through root cause analysis, which is typically performed when a failure occurs or a defect is identified.
Errors, Defects, Failures
Human beings make errors (mistakes), which produce defects (faults, bugs), which in turn may result in failures
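A tiny hypothetical example of the chain: the author's mistake (error) introduces a defect into the code, and executing the defect produces a failure.

```python
# Hypothetical illustration: error -> defect -> failure.

def average(numbers):
    # Defect: hard-coded divisor 2 instead of len(numbers), caused by
    # the author's error (mistake) while writing the code.
    return sum(numbers) / 2

# Failure: the observable wrong result when the defective code runs.
print(average([1, 2, 3]))  # prints 3.0, but 2.0 was expected
```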
Testing principles
- Testing shows the presence, not the absence, of defects
- Exhaustive testing is impossible
- Early testing saves time and money
- Defects cluster together
- Tests wear out
- Testing is context dependent
- Absence-of-defects fallacy (in addition to verification, validation should also be carried out)
Test Activities and Tasks
often implemented iteratively or in parallel; they need to be tailored to the system and the project
- Test planning
- Test monitoring and control
- Test analysis
- Test design
- Test implementation
- Test execution
- Test completion
The way testing is carried out depends on
- Stakeholders (needs, expectations, requirements, willingness to cooperate, etc.)
- Team members (skills, knowledge, level of experience, availability, training needs, etc.)
- Business domain (criticality of the test object, identified risks, market needs, specific legal regulations, etc.)
- Technical factors (type of software, product architecture, technology used, etc.)
- Project constraints (scope, time, budget, resources, etc.)
- Organizational factors (organizational structure, existing policies, practices used, etc.)
- Software development lifecycle (engineering practices, development methods, etc.)
- Tools (availability, usability, compliance, etc.)
Traceability
- provides information to assess product quality, process capability, and project progress against business goals.
- maintained between test basis elements, the testware associated with these elements (e.g., test conditions, risks, test cases), test results, and detected defects.
- The coverage criteria can function as key performance indicators to drive the activities that show to what extent the test objectives have been achieved
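As a sketch (the identifiers and IDs below are illustrative assumptions, not from the syllabus), bi-directional traceability can be kept as two mirrored mappings, with a simple coverage figure derived from them:

```python
# Minimal sketch of bi-directional traceability between test basis
# elements (requirements) and test cases; all IDs are made up.
requirement_to_tests = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
}

# Derive the reverse direction, so a failed test case can be traced
# back to the requirement(s) it covers.
test_to_requirements = {}
for req, tests in requirement_to_tests.items():
    for tc in tests:
        test_to_requirements.setdefault(tc, []).append(req)

# Coverage as a simple indicator: share of requirements with >= 1 test.
covered = sum(1 for tests in requirement_to_tests.values() if tests)
print(test_to_requirements["TC-102"])  # ['REQ-001']
print(f"Requirements coverage: {covered / len(requirement_to_tests):.0%}")  # 100%
```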
The test management role
- takes overall responsibility for the test process, test team and leadership of the test activities.
- mainly focused on the activities of test planning, test monitoring and control and test completion.
NB: depending on the context, the test management role can be performed by a team leader, a test manager, a development manager, etc.
The testing role
mainly focused on the activities of test analysis, test design, test implementation and test execution.
Generic Skills Required for Testing
- Testing knowledge (techniques)
- Thoroughness, carefulness, curiosity, attention to details, being methodical
- Good communication skills, active listening, being a team player
- Analytical thinking, critical thinking, creativity
- Technical knowledge
- Domain knowledge (to be able to understand and to communicate with end users/business representatives)
Independence of Testing
- A certain degree of independence makes the tester more effective at finding defects due to differences between the author’s and the tester’s cognitive biases
- Work products can be tested by their author (no independence), by the author’s peers from the same team (some independence), by testers from outside the author’s team but within the organization (high independence), or by testers from outside the organization (very high independence)
The main benefit of independence of testing
- independent testers are likely to recognize different kinds of failures and defects compared to developers because of their different backgrounds, technical perspectives, and biases.
- an independent tester can verify, challenge, or disprove assumptions made by stakeholders during specification and implementation of the system.
Drawbacks of independent testers
- Independent testers may be isolated from the development team, which may lead to a lack of collaboration, communication problems, or an adversarial relationship with the development team.
- Developers may lose a sense of responsibility for quality.
- Independent testers may be seen as a bottleneck or be blamed for delays in release.
Test planning work products
- test plan
- test schedule
- risk register
- entry and exit criteria
Risk register
a list of risks together with risk likelihood, risk impact and information about risk mitigation
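A minimal sketch of what one register entry might hold (the field names and values are assumptions for illustration):

```python
# Illustrative risk register entry; fields mirror the definition above.
risk_register = [
    {
        "risk": "Payment service unavailable under peak load",
        "likelihood": "medium",  # risk likelihood
        "impact": "high",        # risk impact
        "mitigation": "Run performance tests against a staging environment",
    },
]
```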
Test monitoring and control work products
- test progress reports
- documentation of control directives
- risk information
Test analysis work products
- (prioritized) test conditions
- defect reports regarding defects in the test basis
Test design work products
- (prioritized) test cases
- test charters
- coverage items
- test data requirements
- test environment requirements
Examples of test environment elements
- stubs
- drivers
- simulators
- service virtualizations
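For example, a stub replaces a real dependency of the test object and returns canned responses; a minimal sketch (all names here are hypothetical):

```python
# Hypothetical test stub: stands in for a real exchange-rate service.

class ExchangeRateServiceStub:
    def get_rate(self, currency):
        return 1.25  # canned response instead of a live service call

def price_in_usd(amount_eur, rate_service):
    # Test object: depends on a rate service, real or stubbed.
    return amount_eur * rate_service.get_rate("EUR")

# The test object can now be exercised without the real service.
assert price_in_usd(100, ExchangeRateServiceStub()) == 125.0
```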
Test implementation work products
- test procedures
- automated test scripts
- test suites
- test data
- test execution schedule
- test environment elements
Test execution work products
- test logs
- defect reports
Test completion work products
- test completion report
- action items for improvement of subsequent projects or iterations
- documented lessons learned
- change requests
Test execution rate
number of executed tests / total number of tests
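A quick worked example (the numbers are illustrative):

```python
executed, total = 45, 60
print(f"Test execution rate: {executed / total:.0%}")  # 75%
```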
Test planning
defining the test objectives and then selecting an approach that best achieves the objectives within the constraints imposed by the overall context.
Test monitoring and control
- ongoing checking of all test activities
- the comparison of actual progress against the plan
Test analysis
- analyzing the test basis to identify testable features and to define and prioritize associated test conditions, together with the related risks and risk levels
- The test basis and the test objects are also evaluated to identify defects they may contain and to assess their testability
- identifying features and sets of features to be tested
- defining and prioritizing test conditions for each feature based on analysis of the test basis
NB: answers the question “what to test?” in terms of measurable coverage criteria.
Test design
- Designing and prioritizing test cases
- Identifying test data to support test conditions and test cases
- Designing the test environment and identifying required infrastructure and tools
- Creating bi-directional traceability between the test basis and test cases
NB: answers the question "how to test?"
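As an illustration of designing test cases from a test condition, here is boundary value analysis applied to a hypothetical eligibility rule (ages 18 to 65; the rule is an assumed test basis, not from the syllabus):

```python
# Hypothetical test design via boundary value analysis.
def is_eligible(age):
    return 18 <= age <= 65

# Test cases at and around the boundaries: (input, expected result).
test_cases = [(17, False), (18, True), (65, True), (66, False)]

for age, expected in test_cases:
    assert is_eligible(age) == expected
```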
Test implementation
- Developing and prioritizing test procedures and creating automated test scripts (see the sketch after this list)
- Creating test suites from the test procedures
- Arranging the test suites within the test execution schedule
- Building the test environment and verifying that everything has been set up correctly
- Preparing test data and ensuring it is properly loaded into the test environment
- Verifying and updating bi-directional traceability between the test basis, test conditions, test cases, test procedures, and test suites
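A minimal sketch of an automated test script, using Python's standard unittest module (the function under test is a made-up example):

```python
import unittest

def apply_discount(price, percent):
    # Hypothetical test object.
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_ten_percent_discount(self):
        # Expected result taken from the (assumed) test basis.
        self.assertEqual(apply_discount(200.0, 10), 180.0)

if __name__ == "__main__":
    unittest.main()
```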
Test execution
- Recording the IDs and versions of the test items, test objects, test tools, and testware
- Executing tests either manually or with a tool
- Comparing actual results with expected results (see the example after this list)
- Analyzing anomalies to establish their likely causes
- Reporting defects based on the failures observed
- Logging the outcome of test execution
- Repeating test activities either as a result of action taken for an anomaly or as part of planned testing
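One execution step in miniature, comparing an actual result with the expected result and logging the outcome (the IDs and values are illustrative):

```python
# Illustrative comparison and logging for a single test case.
expected = "HTTP 200"
actual = "HTTP 500"  # in practice, captured from the test object

outcome = "PASS" if actual == expected else "FAIL"
print(f"TC-042: expected={expected!r} actual={actual!r} -> {outcome}")
# The FAIL is then analyzed and, if a failure is confirmed, reported
# as a defect.
```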
Test completion
- Checking whether all defect reports are closed, and entering change requests or product backlog items for any defects that remain unresolved at the end of test execution
- Creating a test summary report to be communicated to stakeholders
- Finalizing and archiving the test environment, test data, test infrastructure, and other testware
- Handing over testware to the maintenance team, other project teams, and/or other stakeholders who could benefit from its use
- Analyzing lessons learned from the completed test activities to determine changes needed for future iterations, releases, and projects
- Using the information gathered to improve test process maturity
Test Basis
All documents from which the requirements of a component or system can be inferred. The documentation on which the test cases are based.
If a document can be amended only by way of formal amendment procedure, then the test basis is called a frozen test basis.
Test charter
A statement of test objectives, and possibly test ideas about how to test. Test charters are used in exploratory testing.
Error
Human action that produces an incorrect result.