Chapter 1 Fundamentals of testing Flashcards

1
Q

Activities designed to evaluate the quality of a component or system.

A

Quality control

2
Q

The degree to which a work product satisfies stated and implied needs of its stakeholders.

A

quality

3
Q

Activities focused on providing confidence that quality requirements will be fulfilled. Abbreviation: QA; see also: quality management

A

quality assurance

4
Q

The activity that derives and specifies test cases from test conditions

A

test design

5
Q

The activity that develops and applies corrective actions to get a test project on track when it deviates from what was planned. See also: test management

A

test control

7
Q

The activity that runs a test on a component or system producing actual results.

A

test execution

8
Q

The activity that prepares the testware needed for test execution based on test analysis and design.

A

test implementation

9
Q

The activity that checks the status of testing activities, identifies any variances from planned or expected, and reports status to stakeholders. See also: test management

A

test monitoring

10
Q

The work product to be tested. See also: test item

A

test object

11
Q

The activity of establishing or updating a test plan

A

test planning

12
Q

A sequence of test cases in execution order, and any associated actions that may be required to set up the initial preconditions and any wrap-up activities post-execution.

A

test procedure

13
Q

The process within the software development lifecycle that evaluates the quality of a component or system and related work products. See also: quality control
-overall process that ensures software quality

A

testing

14
Q

Work products produced during the test process for use in planning, designing, executing, evaluating and reporting on testing.

-products or materials created to support and execute the testing process

A

testware

15
Q

Confirmation by examination that a work product matches a stakeholder’s needs

Question Addressed: “Are we building the right product?”

A

validation

16
Q

Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled

Question Addressed: “Are we building the product right?”

A

verification

17
Q

The body of knowledge used as the basis for test analysis and design.

-It is documentation.

A

test basis

18
Q

A testable aspect of a component or system identified as a basis for testing.

-If the test basis specifies "The system must authenticate users with a valid username and password," a test condition might be "Testing valid user login" or "Testing login with invalid credentials."

A

test condition

19
Q

A set of preconditions, inputs, actions (where applicable), expected results and postconditions, developed based on test conditions.

A

test case
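A test case can be expressed directly as an automated check. A minimal sketch in Python, assuming a hypothetical test object `apply_discount` and a hypothetical discount rule:

```python
def apply_discount(price, percent):
    """Hypothetical test object: reduce a price by a percentage."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_standard_rate():
    # Precondition: a known base price
    price = 100.00
    # Input/action: apply a 15% discount
    actual = apply_discount(price, 15)
    # Expected result: derived from the test condition, not from the code
    assert actual == 85.00

test_apply_discount_standard_rate()
```

Note that the expected result comes from the test basis (the specified rule), not from reading the implementation.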

20
Q

The activity that identifies test conditions by analyzing the test basis

A

test analysis

21
Q

A human error which produces an incorrect result (MISTAKE)

A

Error

22
Q

An imperfection or deficiency in a work product where it does not meet its requirements or specifications, or impairs its intended use. (BUG, FAULT)

A

Defect

23
Q

An event in which a component or system does not perform a required function within specified limits.

-It is an event, and it occurs only when code is executed.

-Failures can also be caused by environmental conditions, such as when radiation or electromagnetic fields cause defects in firmware.

A

Failure
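The error → defect → failure chain can be made concrete with a toy example (the `average` function is hypothetical): a human error while coding introduces a defect, and executing the defective code triggers a failure.

```python
def average(numbers):
    # Defect: a programmer's mistake (error) left an off-by-one here,
    # dividing by len(numbers) + 1 instead of len(numbers).
    return sum(numbers) / (len(numbers) + 1)

# Executing the defective code triggers a failure: the actual result
# deviates from the required result.
actual = average([2, 4, 6])   # produces 3.0
expected = 4.0                # required: (2 + 4 + 6) / 3
print(actual == expected)     # False -> the failure is observed
```

The defect exists in the code whether or not it runs; the failure only occurs when the defective code is executed with inputs that expose it.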

24
Q

A source of a defect such that if it is removed, the occurrence of the defect type is decreased or removed

-Testers don't typically find the root cause

A

Root Cause

25
Q

The process of evaluating a component or system without executing it, based on its form, structure, content, or documentation.

A

static analysis
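Static analysis evaluates the code's form without running it. A minimal sketch using Python's standard `ast` module (the defective source fragment is hypothetical):

```python
import ast

# A defective source fragment: incomplete expression (syntax defect).
source = "def f(x):\n    return x +\n"

def analyze(src):
    """Evaluate the code based on its form, without executing it."""
    try:
        ast.parse(src)  # parsed and inspected, never run
        return []
    except SyntaxError as e:
        return [f"line {e.lineno}: {e.msg}"]

findings = analyze(source)
print(findings)  # one finding, produced without ever executing the code
```

Real static analysis tools (linters, type checkers) work the same way in principle: they inspect structure and content rather than observing runtime behavior.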

26
Q

Testing that does not involve the execution of a test item.

A

static testing

-includes reviews and static analysis

-Static testing can directly find defects in the test object

-Static testing finds defects (bugs, faults) directly and can reveal early what might otherwise go wrong later

-When static testing identifies a defect, debugging is concerned with removing it (here debugging involves only fixing the cause, not reproduction or diagnosis)

27
Q

Testing that involves the execution of the test item

A

dynamic testing

-uses different types of test techniques and test approaches to derive test cases

-Testing can trigger failures that are caused by defects in the software

-Dynamic testing causes (triggers) failures

-The typical debugging process in this case involves:

Reproduction of the failure

Diagnosis (finding and analyzing the root cause)

Fixing the cause

28
Q

The typical test objectives are:

A

Evaluating work products such as requirements, user stories, designs, and code

Triggering failures and finding defects
Ensuring required coverage of a test object

Reducing the level of risk of inadequate software quality

Verifying whether specified requirements have been fulfilled

Verifying that a test object complies with contractual, legal, and regulatory requirements

Providing information to stakeholders to allow them to make informed decisions

Building confidence in the quality of the test object

Validating whether the test object is complete and works as expected by the stakeholders

29
Q

quality control

A

Activities designed to evaluate the quality of a component or system.

Testing is a form of quality control (QC).

QC is a product-oriented, corrective approach that focuses on those activities supporting the achievement of appropriate levels of quality.

QC includes formal methods (model checking and proof of correctness), simulation, and prototyping.

Test results are used to fix defects.

30
Q

Who is preferably responsible for confirmation testing after debugging?

A

the same tester who identified the original defect

31
Q

quality assurance

A

Activities focused on providing confidence that quality requirements will be fulfilled.

QA is a process-oriented, preventive approach that focuses on the implementation and improvement of processes.

QA applies to both the development and testing processes.

QA is the responsibility of everyone on a project.

Test results provide feedback on how well the development and test processes are performing.

32
Q

Model Checking

A

Model checking is a formal verification technique used in software engineering and computer science to automatically verify if a system (often described as a mathematical model or state machine) satisfies certain specifications or properties. It checks whether the behavior of the system adheres to a given set of requirements, typically expressed in temporal logic (such as Linear Temporal Logic, LTL, or Computation Tree Logic, CTL).

33
Q

Testing Principles

A
  1. Testing shows the presence, not the absence of defects.
    -Rather than attempting to test exhaustively, test techniques, test case prioritization, and risk-based testing should be used to focus test efforts.
  2. Exhaustive testing is impossible.
    -Exhaustive testing is a test approach in which the test suite comprises all combinations of input values and preconditions.
  3. Early testing saves time and money.
    -To find defects early, both static testing and dynamic testing should be started as early as possible.
  4. Defects cluster together.
    -Pareto principle.
  5. Tests wear out.
    -In some cases, repeating the same tests can have a beneficial outcome, e.g., in automated regression testing.
  6. Testing is context dependent.
  7. Absence-of-defects fallacy.
    -In addition to verification, validation should also be carried out.
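
Why exhaustive testing is impossible can be shown with simple arithmetic; a sketch assuming a hypothetical form with just four independent 32-bit integer inputs:

```python
# Exhaustive testing would need every combination of input values.
values_per_field = 2 ** 32
fields = 4
combinations = values_per_field ** fields  # (2^32)^4 = 2^128

print(f"{combinations:.3e} combinations")
# Even at a billion tests per second, this vastly exceeds the age
# of the universe (~4.3e17 seconds).
seconds_needed = combinations / 1e9
print(f"{seconds_needed:.3e} seconds")
```

This is why risk-based prioritization and test techniques are used to select a small, high-value subset of the input space.
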
34
Q

The set of interrelated activities comprising test planning, test monitoring and control, test analysis, test design, test implementation, test execution, and test completion.

-The test process can be tailored to a given situation based on various factors

A

test process

35
Q

Test Activities and Tasks of test process

A

—Test planning
-Estimating the test effort is part of test planning
work product:
test plan, test schedule, risk register, and entry and exit criteria (see section 5.1). Risk register is a list of risks together with risk likelihood, risk impact and information about risk mitigation (see section 5.2). Test schedule, risk register and entry and exit criteria are often a part of the test plan.

—Test monitoring and control.
work product: test progress reports (see section 5.3.2), documentation of control directives (see section 5.3) and risk information (see section 5.2).

—Test analysis
-“what to test?”
work product:
(prioritized) test conditions (e.g., acceptance criteria, see section 4.5.2), and defect reports regarding defects in the test basis (if not fixed directly).

—Test design
-“how to test?”
defining the test data requirements,
designing the test environment
identifying any other required infrastructure and tools.
work product: test cases, test charters, coverage items, test data requirements and test environment requirements.

—Test implementation
work product:
test procedures, automated test scripts, test suites, test data, test execution schedule, and test environment elements. Examples of test environment elements include: stubs, drivers, simulators, and service virtualization.

—Test execution:
work product:
test logs, and defect reports (see section 5.5).

—Test completion
work product:
test completion report (see section 5.3.2), action items for improvement of subsequent projects or iterations, documented lessons learned, and change requests (e.g., as product backlog items).

36
Q

test plan

A

A test plan describes the objectives, resources and processes for a test project.

Documents the means and schedule for achieving test objectives

Helps to ensure that the performed test activities will meet the established criteria

Serves as a means of communication with team members and other stakeholders

Demonstrates that testing will adhere to the existing test policy and test strategy (or explains why the testing will deviate from them)

Forces the testers to confront future challenges related to risks, schedules, people, tools, costs, effort, etc.

37
Q

The typical content of a test plan includes:

A

Context of testing (e.g., scope, test objectives, constraints, test basis)

Assumptions and constraints of the test project

Stakeholders (e.g., roles, responsibilities, relevance to testing, hiring and training needs)

Communication (e.g., forms and frequency of communication, documentation templates)

Risk register (e.g., product risks, project risks)

Test approach (e.g., test levels, test types, test techniques, test deliverables, entry criteria and exit criteria, independence of testing, metrics to be collected, test data requirements, test environment requirements, deviations from the organizational test policy and test strategy)

Budget and schedule

38
Q

(test planning) In iterative SDLCs, typically two kinds of planning occur:

A

release planning and iteration planning

39
Q

Test monitoring

A

concerned with gathering information about testing.

This information is used to assess test progress and to measure whether the test exit criteria or the test tasks associated with the exit criteria are satisfied, such as meeting the targets for coverage of product risks, requirements, or acceptance criteria.

Test monitoring gathers a variety of metrics to support the test control and test completion.

Common test metrics include:

Project progress metrics (e.g., task completion, resource usage, test effort)

Test progress metrics (e.g., test case implementation progress, test environment preparation progress, number of test cases run/not run, passed/failed, test execution time)

Product quality metrics (e.g., availability, response time, mean time to failure)

Defect metrics (e.g., number and priorities of defects found/fixed, defect density, defect detection percentage)

Risk metrics (e.g., residual risk level)

Coverage metrics (e.g., requirements coverage, code coverage)

Cost metrics (e.g., cost of testing, organizational cost of quality)

40
Q

Test control

A

uses the information from test monitoring to provide, in a form of the control directives, guidance and the necessary corrective actions to achieve the most effective and efficient testing.

examples:
Reprioritizing tests when an identified risk becomes an issue

Re-evaluating whether a test item meets entry criteria or exit criteria due to rework

Adjusting the test schedule to address a delay in the delivery of the test environment

Adding new resources when and where needed

41
Q

test completion

A

Test completion collects data from completed test activities to consolidate experience, testware, and any other relevant information. Test completion activities occur at project milestones such as when a test level is completed, an agile iteration is finished, a test project is completed (or cancelled), a software system is released, or a maintenance release is completed.

42
Q

Essential Skills and Good Practices in Testing:

Generic Skills Required for Testing:

While being generic, the following skills are particularly relevant for testers:

A

Testing knowledge (to increase effectiveness of testing, e.g., by using test techniques)

Thoroughness, carefulness, curiosity, attention to details, being methodical (to identify defects, especially the ones that are difficult to find)

Good communication skills, active listening, being a team player (to interact effectively with all stakeholders, to convey information to others, to be understood, and to report and discuss defects)

Analytical thinking, critical thinking, creativity (to increase effectiveness of testing)

Technical knowledge (to increase efficiency of testing, e.g., by using appropriate test tools)

Domain knowledge (to be able to understand and to communicate with end users/business representatives)

43
Q

whole-team approach

A

any team member with the necessary knowledge and skills can perform any task, and everyone is responsible for quality.

The team members share the same workspace (physical or virtual), as co-location facilitates communication and interaction.

The whole team approach improves team dynamics, enhances communication and collaboration within the team, and creates synergy by allowing the various skill sets within the team to be leveraged for the benefit of the project.

44
Q

Independence of Testing

A

Work products can be tested by their author (no independence),

by the author’s peers from the same team (some independence),

by testers from outside the author’s team but within the organization (high independence),

by testers from outside the organization (very high independence).

45
Q

Defects can be found in documentation, such as:

A

requirements specification or a test script,

in source code,

in a supporting artifact such as a build file.

46
Q

—Test planning work product

A

work product:
test plan, test schedule, risk register, and entry and exit criteria (see section 5.1). Risk register is a list of risks together with risk likelihood, risk impact and information about risk mitigation (see section 5.2). Test schedule, risk register and entry and exit criteria are often a part of the test plan.

47
Q

—Test monitoring and control
work product

A

work product: test progress reports (see section 5.3.2), documentation of control directives (see section 5.3) and risk information (see section 5.2).

48
Q

—Test analysis
-“what to test?”
work product:

A

work product:
(prioritized) test conditions (e.g., acceptance criteria, see section 4.5.2), and defect reports regarding defects in the test basis (if not fixed directly).

49
Q

—Test design work product:

A

work product: test cases, test charters, coverage items, test data requirements and test environment requirements.

50
Q

—Test implementation
work product:

A

work product:
test procedures, automated test scripts, test suites, test data, test execution schedule, and test environment elements. Examples of test environment elements include: stubs, drivers, simulators, and service virtualization.

51
Q

—Test execution:
work product:

A

work product:
test logs, and defect reports (see section 5.5).

52
Q

—Test completion
work product:

A

work product:
test completion report (see section 5.3.2), action items for improvement of subsequent projects or iterations, documented lessons learned, and change requests (e.g., as product backlog items).

53
Q

A component or tool that temporarily replaces another component and controls or calls a test item in isolation.


A

driver

54
Q

A type of test double providing predefined responses.

A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component

A

stub
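A stub replaces a called component with predefined responses. A minimal hand-rolled sketch in Python (the gateway and checkout names are hypothetical):

```python
class PaymentGatewayStub:
    """Stub: replaces the real payment gateway that the test item calls,
    returning a predefined response instead of contacting a server."""
    def charge(self, amount):
        return {"status": "approved", "amount": amount}

def checkout(cart_total, gateway):
    # Test item: depends on (calls) the gateway component.
    result = gateway.charge(cart_total)
    return result["status"] == "approved"

# The stub lets us test checkout() in isolation from the real dependency.
print(checkout(49.99, PaymentGatewayStub()))  # True
```

A driver is the mirror image: instead of standing in for a component the test item calls, it calls the test item itself (a test runner is a common example).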

55
Q

A component or system used during testing which behaves or operates like a given component or system

A

simulator

56
Q

A technique to enable virtual delivery of services which are deployed, accessed and managed remotely.

A

service virtualization

57
Q

In addition to evaluating coverage, good traceability makes it:

A

possible to determine the impact of changes

facilitates test audits

helps meet IT governance criteria.

58
Q

Traceability of test cases to requirements can verify that the

A

requirements are covered by test cases.
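Requirements-to-test-case traceability can be sketched as a simple mapping; the requirement IDs and test case names below are hypothetical:

```python
# Traceability matrix: each requirement -> the test cases covering it.
traceability = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],  # no covering test case
}

# A coverage check falls out of the mapping directly.
uncovered = [req for req, cases in traceability.items() if not cases]
print(uncovered)  # ['REQ-003'] -> a coverage gap the matrix reveals
```

Real test management tools maintain the same relationship, typically also linking test results and risks to the same items.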


59
Q

Traceability of test results to risks can be used to evaluate the level of

A

residual risk in a test object.

60
Q

Traceability provides information to assess

A

product quality

process capability

project progress against business goals.

61
Q

The test management role is mainly focused on the activities of:

A

test planning

test monitoring and control

test completion.

62
Q

The testing role is mainly focused on the activities of:

A

test analysis

test design

test implementation

test execution

63
Q

Contains hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test

A

Test Environment
