Chapter 2 Testing Throughout the SDLC Flashcards

1
Q

Black-box testing

A

Testing based on an analysis of the specification of the component or system. Synonym: specification-based testing

-derives tests from documentation external to the test object.

-main objective is checking the system’s behavior against its specifications.

-supports both functional and non-functional testing.
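As a minimal sketch of the idea (a hypothetical example, not from the syllabus): the tests below are derived purely from a written specification, "orders of 100 or more get a 10% discount", without inspecting the implementation.

```python
# Black-box sketch: tests come from the spec, not from the code.
# apply_discount and its threshold are hypothetical examples.

def apply_discount(total: float) -> float:
    """Test object: its internals are irrelevant to a black-box tester."""
    return total * 0.9 if total >= 100 else total

def test_no_discount_below_threshold():
    # Spec: orders under 100 get no discount
    assert apply_discount(99.0) == 99.0

def test_discount_at_threshold():
    # Spec: orders of 100 or more get a 10% discount
    assert apply_discount(100.0) == 90.0

test_no_discount_below_threshold()
test_discount_at_threshold()
print("black-box checks passed")
```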

2
Q

Acceptance testing

A

A TEST LEVEL that focuses on determining whether to accept the system. See also: user acceptance testing

-focuses on validation and on demonstrating readiness for deployment, which means that the system fulfills the user’s business needs.

-Ideally, acceptance testing should be performed by the intended users.

3
Q

Component integration testing

A

The integration testing of components. Synonyms: module integration testing, unit integration testing

(also known as module integration testing or unit integration testing)

-focuses on testing the interfaces and interactions between components.

-heavily dependent on the chosen integration strategy: bottom-up, top-down, or big-bang.

Bottom-Up Integration Testing:
Definition: In bottom-up integration, testing begins with the lower-level components (often the foundational units) and progressively moves upward by integrating higher-level components.

Top-Down Integration Testing:
Definition: In top-down integration, testing starts from the top-most or higher-level modules and works its way down toward the lower-level components.

Big-Bang Integration Testing:
Definition: In the big-bang approach, all or most of the components are integrated at once, and the system is tested as a whole.

4
Q

Component testing

A

A test level that focuses on individual hardware or software components. Synonyms: module testing, unit testing

-(also known as unit testing) focuses on testing components in isolation.

-often requires specific support, such as test harnesses or unit test frameworks.

-normally performed by developers in their development environments.
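A minimal sketch of such support, using Python's built-in unittest framework as the test harness; the component under test (a hypothetical word_count function) is exercised in isolation.

```python
# Component (unit) testing sketch; word_count is a hypothetical component.
import unittest

def word_count(text: str) -> int:
    """Component under test, isolated from the rest of the system."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_multiple_words(self):
        self.assertEqual(word_count("testing in isolation"), 3)

# Run the suite programmatically, as a developer would in their environment.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all tests passed:", result.wasSuccessful())
```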

5
Q

Functional testing

A

Testing performed to evaluate if a component or system satisfies functional requirements.

-evaluates the functions that a component or system should perform.

-The functions are “what” the test object should do.

-The main objective is checking the functional completeness, functional correctness and functional appropriateness.

6
Q

A test level that focuses on interactions between components or systems

A

Integration testing

7
Q

Testing the changes to an operational system or the impact of a changed environment to an operational system.

A

Maintenance testing

8
Q

Testing performed to evaluate that a component or system complies with non-functional requirements

A

Non-functional testing

9
Q

An approach to performing testing and quality assurance activities as early as possible in the software development lifecycle

A

Shift-left

10
Q

System integration testing

A

The integration testing of systems

-focuses on testing the interfaces between the system under test, other systems, and external services.

-requires suitable test environments, preferably similar to the operational environment.

integrate: combine one thing with another so that they become a whole

interface: a point where two systems, subjects, organizations, etc. meet and interact

11
Q

System testing

A

A test level that focuses on verifying that a system as a whole meets specified requirements

-focuses on the overall behavior and capabilities of an entire system or product,

-often including functional testing of end-to-end tasks and the non-functional testing of quality characteristics.

-For some non-functional quality characteristics, it is preferable to test them on a complete system in a representative test environment (e.g., usability).

usability: The degree to which a component or system can be used by specified users to achieve specified goals in a specified context of use.
-Using simulations of subsystems is also possible.

-System testing may be performed by an independent test team, and is related to specifications for the system.

12
Q

A specific instantiation of a test process. Synonym: test stage

A

Test level

13
Q

Test type

A

A group of test activities based on specific test objectives aimed at specific characteristics of a component or system.

-groups of test activities related to specific quality characteristics.

-most of these test activities can be performed at every test level.

Examples:
Functional Testing: To verify that the software functions as expected.

Performance Testing: To check how the software performs under load or stress.

Security Testing: To find vulnerabilities and ensure the system is secure.

Usability Testing: To ensure the software is easy to use.

14
Q

White-box testing

A

Testing based on an analysis of the internal structure of the component or system. Synonyms: clear-box testing, code-based testing, glass-box testing, logic-coverage testing, logic-driven testing, structural testing, structure-based testing

-is structure-based and derives tests from the system’s implementation or internal structure (e.g., code, architecture, work flows, and data flows).

-main objective is to cover the underlying structure with tests to an acceptable level.
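A minimal structural sketch (hypothetical example): the function below has two branches, so structure-based (branch) coverage demands at least one test per branch, regardless of what any external specification says.

```python
# White-box sketch: tests are derived from the code's internal structure.
# grade and its threshold are hypothetical examples.

def grade(score: int) -> str:
    if score >= 50:      # branch 1: True path
        return "pass"
    return "fail"        # branch 2: False path

assert grade(50) == "pass"   # covers the True branch
assert grade(49) == "fail"   # covers the False branch
print("both branches covered")
```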

15
Q

Examples of SDLC models include:

A

sequential development models

iterative development models

and incremental development models

16
Q

sequential development models

A

e.g., waterfall model, V-model

A type of software development lifecycle model in which a complete system is developed in a linear way of several discrete and successive phases with no overlap between them.

work best when requirements are clear and unlikely to change.

-in the initial phases testers typically participate in requirement reviews, test analysis, and test design.

-The executable code is usually created in the later phases, so dynamic testing cannot be performed early in the SDLC.

17
Q

iterative development models

A

(e.g., spiral model, prototyping),

A type of software development lifecycle model in which the component or system is developed through a series of repeated cycles.

allow for revisions and continuous improvements, focusing on user feedback and risk management.

18
Q

incremental development models

A

(e.g., Unified Process).

A type of software development lifecycle model in which the component or system is developed through a series of increments.

break down the project into smaller increments, enabling faster delivery of functional modules.

-Defining requirements, designing software, and testing are done in phases, where in each phase a piece of the system is added.

19
Q

Some activities within software development processes can also be described by more detailed software development methods and Agile practices.

A

-acceptance test-driven development (ATDD)

-behavior-driven development (BDD)

-domain-driven design (DDD)

-extreme programming (XP)

-feature-driven development (FDD)

-Kanban

-Lean IT

-Scrum

-test-driven development (TDD)

20
Q

Testing must be adapted to the SDLC to succeed. The choice of the SDLC impacts the:

A

Scope and timing of test activities (e.g., test levels and test types)

Level of detail of test documentation

Choice of test techniques and test approach

Extent of test automation

Role and responsibilities of a tester

21
Q

Iterative and incremental models:

A

-each iteration delivers a working prototype or product increment.

-in each iteration both static and dynamic testing may be performed at all test levels.

-Frequent delivery of increments requires fast feedback and extensive regression testing

22
Q

Agile software development:

A

-assumes that change may occur throughout the project.

-relies on lightweight work product documentation and extensive test automation to make regression testing easier.

-most of the manual testing tends to be done using experience-based test techniques that do not require extensive prior test analysis and design.

23
Q

Good testing practices, independent of the chosen SDLC model, include the following:

A
  1. For every software development activity, there is a corresponding test activity:

This principle emphasizes that testing should be integrated into every step of the software development process, ensuring quality control. For example, if developers are writing code (a development activity), there should be corresponding unit tests for that code. This ensures that quality is checked continuously and not just at the end of development.

  2. Different test levels have specific and different test objectives:

The idea is to make testing comprehensive by having clear objectives for each level, while avoiding redundancy (i.e., avoiding testing the same things at multiple levels unnecessarily).

  3. Test analysis and design for a given test level begins during the corresponding development phase of the SDLC:

This refers to the practice of early test planning. The process of designing tests for a specific test level (e.g., unit tests, system tests) should begin early in the development phase of the SDLC, aligning with the principle of early testing. Early testing means identifying and planning for potential issues before they occur, which helps prevent defects from being introduced in the first place.
Focus: Test planning and design.

  4. Testers are involved in reviewing work products as soon as drafts of this documentation are available:

This encourages involving testers early in the process to review drafts of requirements, designs, or any other documentation produced during development. This is part of the shift-left strategy, which aims to shift testing and defect detection earlier in the process (left on the timeline of the SDLC). Detecting defects early is more cost-effective and helps maintain higher quality throughout development.
Focus: Reviewing and validating early-stage documents.

24
Q

test levels

A

-groups of test activities that are organized and managed together

Test levels refer to the stages of testing, such as unit testing, integration testing, system testing, and acceptance testing.
Each test level serves a different purpose:

Unit Testing: Ensures individual components function correctly.

Integration Testing: Ensures different components work together.

System Testing: Ensures the whole system functions as expected.

Acceptance Testing: Ensures the system meets business requirements.

25
Q

Test-Driven Development (TDD):

A

a test-first approach

Directs the coding through test cases (instead of extensive software design)

Tests are written first, then the code is written to satisfy the tests, and then the tests and code are refactored
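A minimal sketch of one red/green cycle (hypothetical example): the test for a slugify function exists before any implementation, then just enough code is written to make it pass.

```python
# TDD sketch: the test is written first ("red"), then the minimal
# implementation ("green"); slugify is a hypothetical example.

# Step 1 (red): this test is written before any implementation exists,
# so it initially fails with a NameError.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): the minimal code needed to satisfy the test.
def slugify(title: str) -> str:
    return title.lower().replace(" ", "-")

# Step 3 would be refactoring both tests and code while keeping them green.
test_slugify()
print("red/green cycle complete")
```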

26
Q

Acceptance Test-Driven Development (ATDD)

A

Derives tests from acceptance criteria as part of the system design process

Tests are written before the part of the application is developed to satisfy the tests


27
Q

Behavior-Driven Development (BDD):

A

Expresses the desired behavior of an application with test cases written in a simple form of natural language, which is easy to understand by stakeholders – usually using the Given/When/Then format.

Test cases are then automatically translated into executable tests
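A minimal sketch of the format (hypothetical example): tools such as Cucumber or behave translate Given/When/Then scenarios into executable step code; here the structure is mimicked in a plain test with comments.

```python
# BDD-style sketch: the Given/When/Then structure of a scenario,
# expressed as comments in an executable test. Account is a
# hypothetical example.

class Account:
    def __init__(self, balance: float):
        self.balance = balance

    def withdraw(self, amount: float) -> None:
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def test_withdrawal_reduces_balance():
    # Given an account with a balance of 100
    account = Account(balance=100)
    # When the user withdraws 30
    account.withdraw(30)
    # Then the remaining balance is 70
    assert account.balance == 70

test_withdrawal_reduces_balance()
print("scenario passed")
```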

28
Q

DevOps definition

A

is an organizational approach aiming to create synergy by getting development (including testing) and operations to work together to achieve a set of common goals.

DevOps promotes team autonomy, fast feedback, integrated toolchains, and technical practices like continuous integration (CI) and continuous delivery (CD). This enables the teams to build, test and release high-quality code faster through a DevOps delivery pipeline.

29
Q

Benefits of DevOps:

A

Fast feedback on the code quality, and whether changes adversely affect existing code

CI promotes a shift-left approach in testing (see section 2.1.5) by encouraging developers to submit high quality code accompanied by component tests and static analysis

Promotes automated processes like CI/CD that facilitate establishing stable test environments

Increases the view on non-functional quality characteristics (e.g., performance, reliability)

Automation through a delivery pipeline reduces the need for repetitive manual testing

The risk of regression is minimized due to the scale and range of automated regression tests

30
Q

DevOps risks:

A

The DevOps delivery pipeline must be defined and established


CI / CD tools must be introduced and maintained


Test automation requires additional resources and may be difficult to establish and maintain

31
Q

There are some good practices that illustrate how to achieve a “shift-left” in testing, which include:

A

Reviewing the specification from the perspective of testing. These review activities on specifications often find potential defects, such as ambiguities, incompleteness, and inconsistencies

Writing test cases before the code is written and have the code run in a test harness (a framework that allows tests to run during code development) during code implementation

Using CI and even better CD as it comes with fast feedback and automated component tests to accompany source code when it is submitted to the code repository

Completing static analysis of source code prior to dynamic testing, or as part of an automated process

Performing non-functional testing starting at the component test level, where possible. This is a form of shift-left as these non-functional test types tend to be performed later in the SDLC when a complete system and a representative test environment are available

32
Q

Static analysis

A

involves examining the source code for potential issues (like syntax errors, security vulnerabilities, or coding standard violations) without actually running the code.

This analysis is often done before dynamic testing (which tests the program while it runs), or as part of an automated process in CI.
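A toy sketch of the idea (hypothetical example): the source is examined via its syntax tree without ever being executed, here flagging calls to eval(), a common security concern.

```python
# Static-analysis sketch: inspect code without running it.
# find_eval_calls is a hypothetical checker built on the stdlib ast module.
import ast

def find_eval_calls(source: str) -> list[int]:
    """Return the line numbers of eval() calls found in the source."""
    tree = ast.parse(source)  # parse, but never execute, the code
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id == "eval"
    ]

code = "x = 1\ny = eval('2 + 2')\n"
print(find_eval_calls(code))  # the eval() call sits on line 2
```

Real static analysis tools (linters, security scanners) apply the same principle with far larger rule sets, typically as an automated CI step.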

33
Q

Non-functional tests (such as performance, reliability, or security tests) are typically performed at the end of the SDLC when the entire system is built.

A

Non-functional tests

34
Q

2.1.6. Retrospectives and Process Improvement
Typical benefits for testing include:

A

Increased test effectiveness / efficiency (e.g., by implementing suggestions for process improvement)

Increased quality of testware (e.g., by jointly reviewing the test processes)

Team bonding and learning (e.g., as a result of the opportunity to raise issues and propose improvement points)

Improved quality of the test basis (e.g., as deficiencies in the extent and quality of the requirements could be addressed and solved)

Better cooperation between development and testing (e.g., as collaboration is reviewed and optimized regularly)

35
Q

Retrospectives

A

(also known as “post-project meetings” and project retrospectives)

are often held at the end of a project or an iteration, at a release milestone, or can be held when needed.

The timing and organization of the retrospectives depend on the particular SDLC model being followed.

In these meetings the participants (not only testers, but also e.g., developers, architects, product owner, business analysts) discuss:

What was successful, and should be retained?

What was not successful and could be improved?

How to incorporate the improvements and retain the successes in the future?

The results should be recorded and are normally part of the test completion report (see section 5.3.2). Retrospectives are critical for the successful implementation of continuous improvement and it is important that any recommended improvements are followed up.

36
Q

includes all artifacts created and used in the testing process (e.g., test cases, test scripts, test data).

A

Testware

37
Q

refers to the documents or sources (like requirements, specifications, or designs) that testing is based on

A

test basis

38
Q

test-first approaches

A

TDD, ATDD, BDD

39
Q

test level description

A

-Each test level is an instance of the test process, performed in relation to software at a given stage of development, from individual components to complete systems or, where applicable, systems of systems.

  • Test levels are related to other activities within the SDLC.

-the exit criteria of one level are part of the entry criteria for the next level

	-In some iterative models, this may not apply Development activities may span through multiple test levels. Test levels may overlap in time.
40
Q

a point where two systems, subjects, organizations, etc. meet and interact.

A

interface

41
Q

usability testing vs

acceptance testing vs

user acceptance testing

A

usability: The degree to which a component or system can be used by specified users to achieve specified goals in a specified context of use.

acceptance testing:
A test level that focuses on determining whether to accept the system.

user acceptance testing (UAT):
A type of acceptance testing performed to determine if intended users accept the system

42
Q

The main forms of acceptance testing are:

A

–user acceptance testing (UAT),
definition: A type of acceptance testing performed to determine if intended users accept the system

–operational acceptance testing
definition:
A type of acceptance testing performed to determine if operations and/or systems administration staff can accept a system.

–contractual and regulatory acceptance testing,
definition:
a type of testing performed to ensure that a system or software meets specific contractual requirements and complies with relevant regulations or legal standards before it is delivered or put into use

–alpha testing:
Definition: Alpha testing is an internal testing phase performed by the development team, QA team, and sometimes internal stakeholders before the software is released to external users.

–beta testing:
Definition: Beta testing is an external testing phase performed by real users in a real-world environment to gather feedback and identify any remaining bugs before the final release.

43
Q

list of test levels

A

Component testing

Component Integration testing

system testing

system integration testing

acceptance testing

44
Q

Test levels are distinguished by the following non-exhaustive list of attributes, to avoid overlapping of test activities:

A

Test object
Purpose: By specifying the test object at each level, the testing activities are focused on distinct portions of the software to ensure all areas are covered.

Test objectives
Purpose: Clearly defined objectives for each test level ensure that testing activities at each stage focus on achieving specific outcomes, avoiding duplication of efforts.

Test basis
Purpose: By using different test bases at each level, testing ensures that each phase addresses distinct sets of requirements and conditions, reducing redundancy and gaps in coverage.

Defects and failures
Purpose: Understanding the types of defects and failures expected at each level helps target the test activities appropriately to catch problems specific to that stage of development.

Approach and responsibilities
Purpose: Defining distinct approaches and responsibilities ensures that each level of testing is conducted by the right team, using the appropriate methodology, avoiding overlap, and ensuring clear accountability.

45
Q

test object vs
test basis

A

test object: The work product to be tested.

The test object refers to the actual item that is being tested.
It could be:
A software application or system
A specific module, feature, or function within a system
Hardware components, if applicable
APIs, databases, or any other component subject to testing

test basis:The body of knowledge used as the basis for test analysis and design.

46
Q

what are the four test types?

A

Functional testing

Non-functional testing

Black-box testing

White-box testing

47
Q

Functional appropriateness vs
Functional completeness vs
Functional correctness

A

Functional appropriateness: The degree to which the functions facilitate (make easier) the accomplishment of specified tasks and objectives
Focus: User experience, ease of use, and the relevance of the system’s functions in helping users achieve their goals.
Key Focus: How well the functions of the system align with user needs and simplify task completion.

Functional completeness: the degree to which the set of functions covers all the specified tasks and user objectives
Focus: Ensuring that the system provides all necessary functionality to meet the user’s requirements without omissions (being left out).
Key Focus: Ensuring the system has all the required features and functionalities for task completion.

Functional correctness: The degree to which a component or system provides the correct results with the needed degree of precision.
Focus: Ensuring the system functions correctly and produces expected results without errors.
Key Focus: Verifying that the system behaves as expected, performing tasks accurately and delivering correct outcomes.

48
Q

Non-functional testing details

A

-evaluates attributes other than functional characteristics of a component or system.

-Non-functional testing is the testing of “how well the system behaves”.

-The main objective of nonfunctional testing is checking the non-functional software quality characteristics.

It is sometimes appropriate for non-functional testing to start early in the life cycle (e.g., as part of reviews and component testing or system testing).

-Many non-functional tests are derived from functional tests: the same functional test is reused, but it checks that, while performing the function, a non-functional constraint is satisfied (e.g., that a function performs within a specified time, or that a function can be ported to a new platform).

-The late discovery of non-functional defects can pose a serious threat to the success of a project.

-Non-functional testing sometimes needs a very specific test environment, such as a usability lab for usability testing.

49
Q

classification of the non-functional software quality characteristics:

A

Performance efficiency
-The degree to which a component or system uses time, resources and capacity when accomplishing its designated functions.

Compatibility
-The degree to which a component or system can exchange information with other components or systems, and/or perform its required functions while sharing the same hardware or software environment.

Usability
-The degree to which a component or system can be used by specified users to achieve specified goals in a specified context of use.

Reliability
-The degree to which a component or system performs specified functions under specified conditions for a specified period of time.

Security
-The degree to which a component or system protects its data and resources against unauthorized access or use and secures unobstructed access and use for its legitimate users.

Maintainability
-The degree to which a component or system can be modified by the intended maintainers.


Portability
-The degree to which a component or system can be transferred from one hardware, software or other operational or usage environment to another.

50
Q

Usability Lab

A

A controlled environment in which real users test the system in realistic scenarios to evaluate ease of use.

51
Q

Confirmation testing confirms that an original defect has been successfully fixed.

Depending on the risk, one can test the fixed version of the software in several ways, including:

A

-executing all test cases that previously failed due to the defect

-adding new tests to cover any changes that were needed to fix the defect

-when time or money is short when fixing defects, confirmation testing might be restricted to simply exercising the steps that should reproduce the failure caused by the defect and checking that the failure does not occur.
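A minimal sketch of this (hypothetical example): parse_price once failed on comma decimal separators; after the fix, the previously failing test is re-executed as the confirmation test, and a new test covers the change.

```python
# Confirmation-testing sketch; parse_price and its defect are
# hypothetical examples.

def parse_price(text: str) -> float:
    # Fixed version: the original defect was that "1,99" (comma as
    # decimal separator) raised ValueError.
    return float(text.replace(",", "."))

# This exact test failed before the fix; re-running it against the
# fixed version is the confirmation test.
assert parse_price("1,99") == 1.99

# New test added to cover the change made for the fix.
assert parse_price("2.50") == 2.5

print("defect fix confirmed")
```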

52
Q

regression testing

A

-confirms that no adverse consequences have been caused by a change, including a fix that has already been confirmation tested.

-These adverse consequences could affect the same component where the change was made, other components in the same system, or even other connected systems.

-Regression testing may not be restricted to the test object itself but can also be related to the environment.

-It is advisable first to perform an impact analysis to optimize the extent of the regression testing.

Impact analysis shows which parts of the software could be affected.

-Regression test suites are run many times and generally the number of regression test cases will increase with each iteration or release, so regression testing is a strong candidate for automation.

-Automation of these tests should start early in the project. Where CI is used, such as in DevOps (see section 2.1.4), it is good practice to also include automated regression tests.

-Depending on the situation, this may include regression tests on different levels.

53
Q

Confirmation testing and/or regression testing for the test object are needed on all test levels when?

A

if defects are fixed and/or changes are made on these test levels.

54
Q

different categories of maintenance:

A

-corrective (fixing defects)

-adaptive (adapting to changes in the environment)

-perfective (improving performance or maintainability)

55
Q

maintenance testing

A

Testing the changes to an operational system or the impact of a changed environment to an operational system.

-can involve planned releases/deployments and unplanned releases/deployments (hot fixes).

-Impact analysis may be done before a change is made, to help decide if the change should be made, based on the potential consequences in other areas of the system

56
Q

The scope of maintenance testing typically depends on:

A

The degree of risk of the change

The size of the existing system

The size of the change

57
Q

maintenance

A

The process of modifying a component or system after delivery to correct defects, improve quality characteristics, or adapt to a changed environment.

Maintenance refers to the activities involved in updating, modifying, or enhancing a software system after it has been deployed or delivered.

58
Q

Triggers for maintenance and maintenance testing can be classified as:

A

Modifications, such as planned enhancements (i.e., release-based), corrective changes or hot fixes.


Upgrades or migrations of the operational environment, (such as from one platform to another), which can require tests associated with the new environment as well as of the changed software, or tests of data conversion when data from another application is migrated into the system being maintained


Retirement, such as when an application reaches the end of its life. When a system is retired, this can require testing of data archiving if long data-retention periods are required. Testing of restore and retrieval procedures after archiving may also be needed in the event that certain data is required during the archiving period.

59
Q

Detailed design, code, and data model

A

Examples of the test basis for component testing

60
Q

Software and system design, sequence diagrams, interface and communication protocol specifications, use cases, and external interface definitions are examples of

A

Examples of the test basis for component integration testing

61
Q

Software and system requirement specifications (functional and non-functional), use cases, epics and user stories, and system and user manuals

A

Examples of the test basis for system testing

62
Q

User or business requirements, regulations, legal contracts, and standards, use cases, system requirements, system or user documentation, and installation procedures

A

Examples of the test basis for acceptance testing

63
Q

The degree to which a component or system can be transferred from one hardware, software or other operational or usage environment to another

A

Portability

64
Q

A type of acceptance testing performed to determine the compliance of the test object.

-is performed to ensure that a product, system, or software complies with the relevant laws, regulations, and industry standards imposed by regulatory authorities. This type of testing is mandatory in sectors such as healthcare, finance, telecommunications, automotive, and aerospace, where non-compliance can lead to legal issues, fines, or product recalls.

A

Regulatory acceptance testing

65
Q

A type of acceptance testing performed to verify whether a system satisfies its contractual requirements

A

Contractual acceptance testing

-ensures that the product or system meets the terms and conditions specified in a contract between the vendor (provider) and the client (customer). This testing confirms that the product fulfills the functional, non-functional, and performance requirements agreed upon in the contract. It is usually conducted before the client formally accepts the product and approves final payment.

66
Q

A type of acceptance testing performed to determine if intended users accept the system.

A

UAT:
stands for User Acceptance Testing

67
Q

A type of acceptance testing performed to determine if operations and/or systems administration staff can accept a system. (Synonym: production acceptance testing)

A

OAT: stands for Operational Acceptance Testing

It is a type of testing conducted to ensure that a system is ready for deployment from an operational perspective. OAT focuses on validating the non-functional aspects of the system, such as its stability, reliability, performance, and operational readiness, rather than the functional requirements.

68
Q

module testing

A

component testing:
A test level that focuses on individual hardware or software components.

synonym: unit testing

69
Q

“To check the functional completeness, functional correctness, and functional appropriateness”

What levels of testing are you MOST LIKELY performing?

A

i. Component testing

ii. Component integration testing

iii. System testing

iv. System integration testing

v. Acceptance testing

All the four test types(Functional testing, Non-functional testing, Black-box testing, White-box testing) can be applied to all test levels, although the focus will be different at each level. Different test techniques can be used to derive test conditions and test cases for all the mentioned test types

70
Q

“To check the degree to which a component or system performs specified functions under specified conditions for a specified period of time”

What type of testing are you MOST LIKELY performing?

A

non-functional testing

71
Q

When a system is scheduled for retirement, which type of maintenance testing is MOST likely to be performed?

A

data migration testing

When a system is retired, this can require testing of data migration, which is a form of maintenance testing

72
Q

According to the ISTQB Glossary, the word ‘Confirmation testing’ is synonymous with which of the following words?

A

re-testing