Foundation Level 2 Flashcards
A software development lifecycle (SDLC)
- an abstract, high-level representation of the software development process
- defines how different development phases and types of activities performed within this process relate to each other, both logically and chronologically
Examples of SDLC models
- sequential development models (e.g., waterfall model, V-model),
- iterative development models (e.g., spiral model, prototyping)
- incremental development models (e.g., Unified Process)
Impact of the Software Development Lifecycle on Testing
- Scope and timing of test activities (e.g., test levels and test types)
- Level of detail of test documentation
- Choice of test techniques and test approach
- Extent of test automation
- Role and responsibilities of a tester
Good testing practices, independent of SDLC model
- For every software development activity, there is a corresponding test activity => quality control
- Different test levels have specific and different test objectives => comprehensive, no redundancy
- Test analysis and design for a given test level begins during the corresponding development phase of the SDLC => principle of early testing
- Testers are involved in reviewing work products as soon as drafts of this documentation are available => supports shift left
Testing as a Driver for Software Development (Approaches)
- Test-Driven Development (TDD)
- Acceptance Test-Driven Development (ATDD)
- Behavior-Driven Development (BDD)
Test-Driven Development (TDD)
- Directs the coding through test cases
- Tests are written first, then the code is written to satisfy the tests
- The tests and code are refactored
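The TDD cycle above (test first, code second, refactor) can be sketched as a minimal Python example; the `add` function is hypothetical:

```python
# Hypothetical TDD example: the test is written first (red) and fails
# until the production code below is implemented (green).

def test_add():
    # Step 1: write the failing test first
    assert add(2, 3) == 5
    assert add(-1, 1) == 0

# Step 2: write just enough code to make the test pass
def add(a, b):
    return a + b

# Step 3: refactor the tests and the code while keeping the test green
test_add()
```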
Acceptance Test-Driven Development (ATDD)
- Derives tests from acceptance criteria as part of the system design process
- Tests are written before the part of the application is developed to satisfy the tests
Behavior-Driven Development (BDD)
- Expresses the desired behavior of an application with test cases written in a simple form of natural language, which is easy to understand by stakeholders – usually using the Given/When/Then format.
- Test cases are then automatically translated into executable tests
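A Given/When/Then scenario can be mapped to an executable test; a BDD tool (e.g., Cucumber or behave) would generate this mapping automatically, while the sketch below does it by hand with a hypothetical `Account` class:

```python
# Hand-written sketch of a Given/When/Then scenario as an executable test.

class Account:
    """Hypothetical domain object used for illustration."""
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        self.balance -= amount

def test_withdrawal_reduces_balance():
    # Given an account with a balance of 100
    account = Account(balance=100)
    # When the user withdraws 30
    account.withdraw(30)
    # Then the remaining balance is 70
    assert account.balance == 70

test_withdrawal_reduces_balance()
```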
Challenges & Risks of DevOps (test design perspective)
- The DevOps delivery pipeline must be defined and established
- CI / CD tools must be introduced and maintained
- Test automation requires additional resources and may be difficult to establish and maintain
Benefits of DevOps for testing
- Fast feedback on the code quality, and whether changes adversely affect existing code
- CI promotes a shift-left approach in testing by encouraging developers to submit high quality code accompanied by component tests and static analysis
- Promotes automated processes like CI/CD that facilitate establishing stable test environments
- Increased visibility of non-functional quality characteristics
- Automation through a delivery pipeline reduces the need for repetitive manual testing
- The risk of regression is minimized due to the scale and range of automated regression tests
Shift-Left Approach
- an approach where testing is performed earlier in the SDLC
- it does not mean that testing later in the SDLC should be neglected
- may require additional effort earlier in the process, but is expected to save effort and/or costs later in the process
Good practices that illustrate how to achieve a “shift-left” in testing
- Reviewing the specification from the perspective of testing. These review activities on specifications often find potential defects, such as ambiguities, incompleteness, and inconsistencies
- Writing test cases before the code is written and having the code run in a test harness during code implementation
- Using CI (and, even better, CD), as it comes with fast feedback and automated component tests that accompany source code when it is submitted to the code repository
- Completing static analysis of source code prior to dynamic testing, or as part of an automated process
- Performing non-functional testing starting at the component test level, where possible. This is a form of shift-left as these non-functional test types tend to be performed later in the SDLC when a complete system and a representative test environment are available
Benefits of retrospectives for testing include
- Increased test effectiveness / efficiency
- Increased quality of testware
- Team bonding and learning
- Improved quality of the test basis
- Better cooperation between development and testing
Hot fix
an unplanned release/deployment, typically to fix an urgent defect
Categories of maintenance
- Corrective (fixing defects)
- Adaptive (adapting to changes in the environment)
- Perfective (improving performance or maintainability)
Testing the changes to a system in production includes:
- evaluating the success of the implementation of the change
- checking for possible regressions in parts of the system that remain unchanged
The scope of maintenance testing depends on
- The degree of risk of the change
- The size of the existing system
- The size of the change
The triggers for maintenance and maintenance testing can be classified as
- Modifications, such as planned enhancements (i.e., release-based), corrective changes or hot fixes.
- Upgrades or migrations of the operational environment, such as from one platform to another, which can require tests associated with the new environment as well as of the changed software, or tests of data conversion when data from another application is migrated into the system being maintained.
- Retirement, such as when an application reaches the end of its life. When a system is retired, this can require testing of data archiving if long data-retention periods are required.
Confirmation testing
confirms that an original defect has been successfully fixed.
Regression testing
- confirms that no adverse consequences have been caused by a change, including a fix that has already been confirmation tested.
- Regression testing may not be restricted to the test object itself but can also be related to the environment
- It is advisable first to perform an impact analysis to optimize the extent of the regression testing. Impact analysis shows which parts of the software could be affected.
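The difference between the two can be sketched in Python: the confirmation test reproduces the originally reported defect, while the regression tests re-check behavior that the fix must not have broken. The `slugify` function and the defect (unstripped trailing hyphens) are hypothetical:

```python
def slugify(title):
    # Fixed version: trailing hyphens produced by trailing whitespace
    # are now stripped (this was the reported defect).
    return "-".join(title.lower().split()).strip("-")

def test_confirmation_trailing_space_defect_fixed():
    # Confirmation test: reproduces the originally reported defect.
    assert slugify("Hello World ") == "hello-world"

def test_regression_unchanged_behavior():
    # Regression tests: existing behavior that must remain unchanged.
    assert slugify("Hello World") == "hello-world"
    assert slugify("One") == "one"

test_confirmation_trailing_space_defect_fixed()
test_regression_unchanged_behavior()
```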
Test levels: definition
- groups of test activities that are organized and managed together
- an instance of the test process, performed in relation to software at a given stage of development
Test types
- groups of test activities related to specific quality characteristics
- most of those test activities can be performed at every test level.
Test levels example
- Component testing
- Component integration testing
- System testing
- System integration testing
- Acceptance testing
Component testing
- components in isolation
- normally performed by developers in their development environments
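Isolation is typically achieved with test doubles; in the sketch below, a hypothetical `OrderService` is tested without its real payment gateway by substituting a stub:

```python
# Component testing in isolation: the real payment gateway is replaced
# by a stub so the test does not depend on other components.
# All names here are hypothetical.

class StubGateway:
    def charge(self, amount):
        return True  # always succeeds in the test environment

class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        if amount <= 0:
            return "rejected"
        return "paid" if self.gateway.charge(amount) else "failed"

def test_order_service_in_isolation():
    service = OrderService(StubGateway())
    assert service.place_order(50) == "paid"
    assert service.place_order(0) == "rejected"

test_order_service_in_isolation()
```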
Component integration testing
- testing the interfaces and interactions between components
- heavily dependent on the integration strategy (e.g., bottom-up, top-down, or big-bang)
System testing
overall behavior and capabilities of an entire system or product, often including functional testing of end-to-end tasks and the non-functional testing of quality characteristics
NB: using simulation of sub-systems is possible
System integration testing
focuses on testing the interfaces between the system under test and other systems and external services. System integration testing requires suitable test environments, preferably similar to the operational environment
Acceptance testing
focuses on validation and on demonstrating readiness for deployment, i.e., that the system fulfills the user’s business needs
NB: the testers should ideally be the intended users
The main forms of acceptance testing
- user acceptance testing (UAT)
- operational acceptance testing
- contractual and regulatory acceptance testing
- alpha testing and beta testing.
Test levels are distinguished by the following non-exhaustive list of attributes
- Test object
- Test objectives
- Test basis
- Defects and failures
- Approach and responsibilities
Test Types
- functional testing
- non-functional testing
- black box testing
- white box testing
Functional testing
evaluates the functions that a component or system should perform.
The main objectives are checking functional completeness, functional correctness, and functional appropriateness.
Non-functional software quality characteristics
- Performance efficiency
- Compatibility
- Usability
- Reliability
- Security
- Maintainability
- Portability