Chapter 2 Flashcards

1
Q

Software Development Lifecycle Models

A

Describes the types of activity performed at each stage in a software development project, and how the activities relate to one another logically and chronologically.

2
Q

Characteristics of good testing

A
  • For every development activity, there is a corresponding test activity
  • Each test level has test objectives specific to that level
  • Test analysis and design for a given test level begin during the corresponding development activity
  • Testers participate in discussions to define and refine requirements and design, and are involved in reviewing work products
3
Q

Sequential model

A
  • Sequential flow of activities
  • Any phase in the development process should begin only when the previous phase is complete
  • In theory there is no overlap of phases, though in practice it is beneficial to have early feedback from the following phase
4
Q

Incremental Development

A
  • The software is built and delivered in pieces
  • Software grows incrementally
  • The size of these feature increments varies
  • The feature increments can be as small as a single change to a user interface screen
5
Q

Iterative

A
  • Groups of features
  • Fixed duration
  • Each iteration delivers working software which is a growing subset of the overall set of features
  • Fast feedback from the customer
6
Q

Waterfall Model

A
  • Activities are completed one after another
  • Test activities only occur after all other development activities have been completed

7
Q

V-model

A
  • Sequential
  • Early testing: test planning, analysis, and design begin during the corresponding development activities
  • The customer doesn’t see the software until acceptance testing
  • Each test level involves planning, designing, and execution
8
Q

Rational Unified Process (RUP)

A
  • Each iteration tends to be relatively long (two or three months)
  • Iterative and Incremental model
9
Q

Scrum

A
  • Agile
  • Each iteration tends to be relatively short (days, a few weeks)
  • Feature increments are correspondingly small
10
Q

Kanban

A
  • Can be implemented with or without fixed-length iterations
  • Incremental model

11
Q

Spiral or prototyping

A
  • Experimental increments
  • Increments may be heavily reworked or even abandoned in subsequent development work

12
Q

Test Levels

A
  • Test levels are groups of test activities that are organized and managed together
  • Each test level is an instance of the test process
  • Test levels are related to other activities within the software development lifecycle
  • For every test level, a suitable test environment is required.

Test levels are characterized by:

  • Specific objectives
  • Test basis, referenced to derive test cases
  • Test object (what is being tested)
  • Typical defects and failures
  • Specific approaches and responsibilities
13
Q

Component Testing or Unit Testing

A
  • Focuses on components that are separately
  • Reduce risk-find defects-prevent defects
  • Behavior
  • TEST BASIS→ Detailed design-code-data model component specification
  • TEST OBJECTS→ Components - Units Modules - Code - Data Structures - Classes - database modules
  • DEFECTS AND FAILURES→ Incorrect functionality - Data flow problems - Incorrect code and logic
  • Done by the developer - TDD

TDD ⇒ Fail → Pass → Refactor

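A minimal sketch of the TDD cycle in Python, assuming a hypothetical `add` function (not from the syllabus): the test is written first and fails, the simplest code is written to make it pass, and the code is then refactored while the test keeps passing.

```python
import unittest

# Step 2 (Pass): the simplest implementation that makes the test below pass.
# Step 3 (Refactor): the code is then cleaned up while the test keeps passing.
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    # Step 1 (Fail): this test was written first and initially failed,
    # because add() did not exist yet.
    def test_add_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

if __name__ == "__main__":
    unittest.main()
```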
14
Q

Integration Testing

A
  • Tests the interactions between components or systems
  • System integration testing takes place after system testing, not after component integration testing
  • If integrating module A with module B, tests should focus on the communication between the modules, not the functionality of the individual modules
  • If integrating system X with system Y, tests should focus on the communication between the systems, not the functionality of the individual systems
  • Typically the responsibility of testers; uses black-box testing
  • OBJECTIVES → Reduce risk - find defects - prevent defects
  • TEST BASIS → Software and system design - sequence diagrams - use cases - workflows
  • TEST OBJECTS → Subsystems - APIs - microservices - interfaces - databases - infrastructure
  • DEFECTS AND FAILURES → Failures in communication between components - incorrect or missing data - interface mismatches
  • Approaches: top-down - bottom-up - simulators - stubs - drivers (see the stub sketch below)
  • Integration should normally be incremental
  • This is one reason that we use continuous integration, where software is integrated on a component-by-component basis
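A minimal sketch of component integration testing with a stub, assuming hypothetical `CheckoutService` and `PaymentClient` components: the test exercises the communication across the interface, replacing the real dependency with a stub that returns canned responses.

```python
import unittest

# Hypothetical component under integration: it depends on a payment client.
class CheckoutService:
    def __init__(self, payment_client):
        self.payment_client = payment_client

    def checkout(self, amount):
        # The interaction under test: data passed across the interface.
        response = self.payment_client.charge(amount)
        return response["status"] == "ok"

# Stub standing in for the real payment component: returns canned responses
# so the test focuses on the communication, not the payment logic itself.
class PaymentClientStub:
    def charge(self, amount):
        return {"status": "ok", "amount": amount}

class TestCheckoutIntegration(unittest.TestCase):
    def test_checkout_communicates_with_payment_client(self):
        service = CheckoutService(PaymentClientStub())
        self.assertTrue(service.checkout(100))

if __name__ == "__main__":
    unittest.main()
```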
15
Q

Integration Testing

A

  • Component integration testing focuses on the interactions and interfaces between integrated components. It is performed after component testing and is generally automated. In iterative and incremental development, component integration tests are usually part of the continuous integration process.
  • System integration testing focuses on the interactions and interfaces between systems, packages, and microservices. It can also cover interactions with, and interfaces provided by, external organizations (e.g., web services). In this case, the developing organization does not control the external interfaces, which can create various challenges for testing (e.g., ensuring that test-blocking defects in the external organization’s code are resolved, arranging for test environments, etc.). System integration testing may be done after system testing or in parallel with ongoing system testing.

16
Q

System Testing

A
  • Focuses on the behavior and capabilities of a whole system
  • Often considering the end-to-end tasks the system can perform
  • Produces information that is used by stakeholders to make release decisions
  • The test environment should ideally correspond to the final target or production environment
  • Focus is on the overall system
  • Independent testers typically carry out system testing
  • OBJECTIVES → Reduce risk - find defects - prevent defects
  • TEST BASIS → SRS - Risk analysis reports - use cases - epics - user stories
  • TEST OBJECTS → Applications - operating systems - system under test - system configuration
  • DEFECTS AND FAILURES → System failure - Incorrect data flow - unexpected behavior
17
Q

Acceptance Testing

A
  • Like system testing
  • Focuses on the behavior and capabilities of a whole system or product
  • May produce information to assess the system readiness for deployment and use by the customer (end-user)
  • Defects may be found during acceptance testing, but finding defects is often not an objective.
  • OBJECTIVES → Establish confidence - validate system - verify behavior
  • TEST BASIS → Business processes - requirements - use cases - installation procedures - risk analysis reports - regulations - contracts
  • TEST OBJECTS → System under test - recovery systems - hot sites - forms - reports
  • DEFECTS AND FAILURES → System workflow - Business rules - Contract - Non-functional failures
  • Operational Acceptance Testing
18
Q

Which testing level is primarily focused on building confidence rather than finding defects?

A

Acceptance testing

19
Q

If you need to add system integration testing as a test level for a particular project, what testing level should it directly follow?

A

→ System Testing

20
Q

Functional Testing

A
  • Involves tests that evaluate functions that the system should perform
  • Functional requirements may be described in work products such as:
    • User stories
    • Use cases
    • Functional specifications
    • They may also be undocumented
  • What the system should do
  • Should be performed at all test levels, though the focus differs at each level
  • BLACK BOX → black-box techniques may be used to derive test conditions and test cases for the functionality of the component or system (see the sketch below)
  • Functional coverage is the extent to which some type of functional element has been exercised by tests
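A minimal black-box sketch, using the standard library’s `calendar.isleap` as the test object: the test cases are derived from the leap-year specification (what the function should do), not from its internal structure.

```python
import unittest
from calendar import isleap  # standard-library function used as the test object

class TestLeapYearFunctionality(unittest.TestCase):
    # Each case comes from the specification, not from reading the code.
    def test_year_divisible_by_four_is_leap(self):
        self.assertTrue(isleap(2024))

    def test_century_year_is_not_leap(self):
        self.assertFalse(isleap(1900))

    def test_year_divisible_by_400_is_leap(self):
        self.assertTrue(isleap(2000))

if __name__ == "__main__":
    unittest.main()
```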
21
Q

Non-Functional Testing

A
  • How well the system behaves
  • Usability, performance, efficiency or security
  • Can be done at all test levels
22
Q

White-box Testing

A
  • Derives tests based on the system’s internal structure or implementation
  • Internal structure may include code, architecture, workflows, and/or data flows within the system
  • We look at the internal structure of the system
  • Structural coverage is the percentage of the type of element being covered that has been exercised by tests
  • Code coverage is based on the percentage of component code that has been tested (see the branch-coverage sketch below)
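A minimal white-box sketch, assuming a hypothetical `classify` function: the tests are derived from the code’s internal structure so that every branch is exercised; a coverage tool such as coverage.py would report full branch coverage of `classify` for these two tests.

```python
import unittest

# Hypothetical code under test with two branches.
def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

class TestClassifyBranches(unittest.TestCase):
    # One test per branch, derived from reading the code, not a specification.
    def test_negative_branch(self):
        self.assertEqual(classify(-1), "negative")

    def test_non_negative_branch(self):
        self.assertEqual(classify(0), "non-negative")

if __name__ == "__main__":
    unittest.main()
```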
23
Q

Which of the following is mostly true regarding structural testing?

A

→ Deriving tests based on the system internal structure

24
Q

Which of the following is a correct definition of structural coverage?

A

→ The extent to which some type of structural element has been exercised by tests

25
Q

Change-related Testing

A

Testing should be done to confirm that the changes have corrected the defect or implemented the functionality correctly, and have not caused any unforeseen adverse consequences

26
Q

Confirmation Testing

A
  • After a defect is fixed, the software should be retested
  • The purpose is to confirm whether the original defect has been successfully fixed (see the sketch below)
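A minimal sketch of a confirmation test, assuming a hypothetical `parse_price` function and defect: the exact input that originally exposed the defect is re-executed to confirm the fix.

```python
import unittest

# Hypothetical function that previously crashed on inputs with a currency
# symbol; the fix strips the symbol before converting to a number.
def parse_price(text):
    return float(text.replace("$", "").strip())

class TestPriceDefectFixed(unittest.TestCase):
    def test_price_with_currency_symbol(self):
        # This exact input used to raise ValueError before the fix.
        self.assertEqual(parse_price("$19.99"), 19.99)

if __name__ == "__main__":
    unittest.main()
```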

27
Q

Regression Testing

A
  • Involves running tests to detect unintended side effects of changes
  • Changes include changes to the environment
  • Regression test suites are run many times and generally evolve slowly, so regression testing is a strong candidate for automation (see the sketch below)
  • Automation of these tests should start early in the project
  • Performed at all test levels
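A minimal sketch of an automated regression suite, assuming a hypothetical `slugify` function: the same tests are re-run unchanged after every modification to detect unintended side effects elsewhere in the code.

```python
import unittest

# Hypothetical function whose behavior has been stable for several releases.
def slugify(text):
    return "-".join(text.lower().split())

class RegressionSuite(unittest.TestCase):
    # Re-run unchanged after every change or release; a failure here signals
    # an unintended side effect rather than a new feature defect.
    def test_existing_behavior_still_holds(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_whitespace_handling_unchanged(self):
        self.assertEqual(slugify("  extra   spaces "), "extra-spaces")

if __name__ == "__main__":
    unittest.main()
```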
28
Q

Which of the following is an important characteristic of tests used for regression testing?

A

→ They are reusable for multiple releases with little maintenance.

29
Q

Maintenance Testing

A
  • Once deployed to production, software and systems need to be maintained
  • Focuses on testing the changes to the system
  • Can involve planned releases and unplanned releases
  • Triggers: Modification → Migration → Retirement
30
Q

Impact Analysis

A
  • Evaluates the changes that were made for a maintenance release to identify the intended consequences as well as expected and possible side effects of the change
  • Impact analysis can also help to identify the impact of a change on existing tests
  • The side effects and affected areas in the system need to be tested for regressions
  • May be done before a change is made, to help decide if the change should be made
31
Q

Impact analysis helps to decide

A

→ How much regression testing should be done