Chapter 1 Fundamentals of testing Flashcards

1
Q

Quality control

A

Activities designed to evaluate the quality of a component or system.

2
Q

quality

A

The degree to which a work product satisfies stated and implied needs of its stakeholders.

3
Q

quality assurance

A

Activities focused on providing confidence that quality requirements will be fulfilled. Abbreviation: QA; see also: quality management

4
Q

test design

A

The activity that derives and specifies test cases from test conditions

5
Q

test control

A

The activity that develops and applies corrective actions to get a test project on track when it deviates from what was planned. See also: test management

7
Q

test execution

A

The activity that runs a test on a component or system producing actual results.

8
Q

test implementation

A

The activity that prepares the testware needed for test execution based on test analysis and design.

9
Q

test monitoring

A

The activity that checks the status of testing activities, identifies any variances from planned or expected, and reports status to stakeholders. See also: test management

10
Q

test object

A

The work product to be tested. See also: test item

11
Q

test planning

A

The activity of establishing or updating a test plan

12
Q

test procedure

A

A sequence of test cases in execution order, and any associated actions that may be required to set up the initial preconditions and any wrap-up activities post-execution.
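
A minimal sketch of what a test procedure can look like in code (plain Python; the setup, test case, and wrap-up names are illustrative, not from the syllabus): the test cases run in a fixed order, preceded by setup of the initial preconditions and followed by wrap-up activities.

```python
# Illustrative test procedure: an ordered sequence of test cases with
# setup (initial preconditions) and wrap-up (post-execution) steps.

def set_up():
    """Establish the initial preconditions (e.g., create a test user)."""
    return {"users": {"alice": "s3cret"}}

def test_valid_login(env):
    """Test case 1: a registered username/password pair is accepted."""
    assert env["users"].get("alice") == "s3cret"

def test_unknown_user_rejected(env):
    """Test case 2: an unknown user is not present in the user store."""
    assert "mallory" not in env["users"]

def wrap_up(env):
    """Wrap-up activity after execution (e.g., remove the test data)."""
    env["users"].clear()

if __name__ == "__main__":
    env = set_up()
    for test_case in (test_valid_login, test_unknown_user_rejected):  # execution order
        test_case(env)
    wrap_up(env)
    print("test procedure finished")
```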

13
Q

testing

A

The process within the software development lifecycle that evaluates the quality of a component or system and related work products. See also: quality control
-the overall process that evaluates and helps improve software quality

14
Q

testware

A

Work products produced during the test process for use in planning, designing, executing, evaluating and reporting on testing.

-products or materials created to support and execute the testing process

15
Q

validation

A

Confirmation by examination that a work product matches a stakeholder’s needs

Question Addressed: “Are we building the right product?”

16
Q

verification

A

Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled

Question Addressed: “Are we building the product right?”

17
Q

test basis

A

The body of knowledge used as the basis for test analysis and design.

-It is documentation (e.g., requirements specifications, design documents, user stories)

18
Q

test condition

A

A testable aspect of a component or system identified as a basis for testing.

-If the test basis specifies "The system must authenticate users with a valid username and password," a test condition might be "testing valid user login" or "testing login with invalid credentials."
19
Q

test case

A

A set of preconditions, inputs, actions (where applicable), expected results and postconditions, developed based on test conditions.
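
To make the structure concrete, here is a minimal sketch of a test case derived from the login test condition above (plain Python; authenticate() is a hypothetical stand-in for the real test object):

```python
# Illustrative test case for the test condition "valid user login".

def authenticate(users, username, password):
    """Hypothetical test object: returns True if the credentials match."""
    return users.get(username) == password

def test_valid_user_login():
    # Precondition: a registered user exists
    users = {"alice": "s3cret"}
    # Input and action: attempt to log in with valid credentials
    result = authenticate(users, "alice", "s3cret")
    # Expected result: authentication succeeds
    assert result is True
    # Postcondition: the user data is unchanged
    assert users == {"alice": "s3cret"}

if __name__ == "__main__":
    test_valid_user_login()
    print("test case passed")
```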

20
Q

test analysis

A

The activity that identifies test conditions by analyzing the test basis

21
Q

Error

A

A human action that produces an incorrect result (also called a mistake)

22
Q

Defect

A

An imperfection or deficiency in a work product where it does not meet its requirements or specifications or impairs its intended use (also called a bug or fault).

Defects can be found in documentation such as a requirements specification or a test script, in source code, or in a supporting artifact such as a build file.

23
Q

Failure

A

An event in which a component or system does not perform a required function within specified limits.

A failure is an event: it can only occur when code is executed.

Failures can also be caused by environmental conditions, such as when radiation or an electromagnetic field causes defects in firmware.
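
The error-defect-failure chain can be illustrated with a small sketch (hypothetical code, not from the syllabus): the programmer's mistake (error) introduces a defect into the code, and a failure occurs only when the defective code is executed with input that exposes it.

```python
# Illustrative error -> defect -> failure chain.

def average(values):
    # Error (mistake): the programmer assumed there are always two values.
    # Defect: dividing by a hard-coded 2 instead of len(values).
    return sum(values) / 2

if __name__ == "__main__":
    # No failure here: this input happens not to expose the defect.
    print(average([4, 6]))       # prints 5.0, which is correct
    # Failure here: executing the defective code with three values
    # produces a wrong actual result (expected 5.0, got 7.5).
    print(average([3, 5, 7]))    # prints 7.5, which is incorrect
```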

24
Q

Root Cause

A

A source of a defect such that if it is removed, the occurrence of the defect type is decreased or removed

-Testers do not find the root cause (that is part of debugging)

25
Q

static analysis

A

The process of evaluating a component or system without executing it, based on its form, structure, content, or documentation.
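
As a small illustration (an assumed example, not part of the syllabus), Python's standard-library ast module can evaluate source code based on its form and structure without executing it, for instance to flag bare except: clauses:

```python
# Illustrative static analysis: inspect source code without executing it.
import ast

SOURCE = '''
def read_config(path):
    try:
        return open(path).read()
    except:            # bare except hides real problems
        return ""
'''

tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    # A bare "except:" clause has no exception type attached.
    if isinstance(node, ast.ExceptHandler) and node.type is None:
        print(f"line {node.lineno}: bare 'except:' clause found")
```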

26
Q

static testing

A

Testing that does not involve the execution of a test item.

-includes reviews and static analysis

-Static testing can directly find defects in the test object

-Static testing finds defects (bugs, faults) directly, without executing the test object, and can reveal problems that might otherwise only show up later

-When static testing identifies a defect, debugging is concerned with removing it (debugging then involves only fixing the cause, not reproducing or diagnosing a failure)

27
Q

dynamic testing

A

Testing that involves the execution of the test item.

-uses different types of test techniques and test approaches to derive test cases

-Dynamic testing can trigger failures that are caused by defects in the software

-The typical debugging process in this case involves:
Reproduction of the failure
Diagnosis (finding and analyzing the root cause)
Fixing the cause

28
Q

The typical test objectives are:

A

Evaluating work products such as requirements, user stories, designs, and code

Triggering failures and finding defects
Ensuring required coverage of a test object

Reducing the level of risk of inadequate software quality

Verifying whether specified requirements have been fulfilled

Verifying that a test object complies with contractual, legal, and regulatory requirements

Providing information to stakeholders to allow them to make informed decisions

Building confidence in the quality of the test object

Validating whether the test object is complete and works as expected by the stakeholders

29
Q

quality control

A

Activities designed to evaluate the quality of a component or system.

Testing is a form of quality control (QC).

QC is a product-oriented, corrective approach that focuses on those activities supporting the achievement of appropriate levels of quality.

Other forms of QC include formal methods (model checking and proof of correctness), simulation, and prototyping.

In QC, test results are used to fix defects.

30
Q

Who is preferably responsible for confirmation testing after debugging?

A

the same tester who identified the original defect

31
Q

quality assurance

A

Activities focused on providing confidence that quality requirements will be fulfilled.

QA is a process-oriented, preventive approach that focuses on the implementation and improvement of processes.

QA applies to both the development and testing processes, and is the responsibility of everyone on a project.

In QA, test results provide feedback on how well the development and test processes are performing.

32
Q

Model Checking

A

Model checking is a formal verification technique used in software engineering and computer science to automatically verify if a system (often described as a mathematical model or state machine) satisfies certain specifications or properties. It checks whether the behavior of the system adheres to a given set of requirements, typically expressed in temporal logic (such as Linear Temporal Logic, LTL, or Computation Tree Logic, CTL).
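
As a toy sketch of the idea (not an ISTQB example; the traffic-light model and property are invented), explicit-state model checking of a safety property can be done by exhaustively exploring every reachable state of a model and checking an invariant in each one, reporting a counterexample if the invariant can be violated:

```python
# Toy explicit-state model checker: exhaustively explore all reachable
# states of a small traffic-light model and check the safety property
# "the two directions are never green at the same time".
from collections import deque

CYCLE = {"green": "yellow", "yellow": "red", "red": "green"}

def successors(state):
    ns, ew = state
    # In this (deliberately uncoordinated) model either direction may
    # advance its light independently.
    yield (CYCLE[ns], ew)
    yield (ns, CYCLE[ew])

def invariant(state):
    ns, ew = state
    return not (ns == "green" and ew == "green")

def model_check(initial):
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return state            # counterexample: invariant violated
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None                     # invariant holds in every reachable state

if __name__ == "__main__":
    counterexample = model_check(("green", "red"))
    if counterexample:
        # Because the model lacks coordination, the checker finds the
        # unsafe state ('green', 'green').
        print("property violated in reachable state:", counterexample)
    else:
        print("property holds in every reachable state")
```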

33
Q

Testing Principles

A
  1. Testing shows the presence, not the absence of defects.
  2. Exhaustive testing is impossible.
    -Rather than attempting to test exhaustively, test techniques, test case prioritization, and risk-based testing should be used to focus test efforts (see the sketch after this list).
    -Exhaustive testing is a test approach in which the test suite comprises all combinations of input values and preconditions.
  3. Early testing saves time and money.
    -To find defects early, both static testing and dynamic testing should be started as early as possible.
  4. Defects cluster together.
    -Pareto principle.
  5. Tests wear out.
    -In some cases, repeating the same tests can have a beneficial outcome, e.g., in automated regression testing.
  6. Testing is context dependent.
  7. Absence-of-defects fallacy.
    -In addition to verification, validation should also be carried out.
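
To see why exhaustive testing is impossible in practice, here is a back-of-the-envelope sketch (the field names and value counts are invented for illustration) showing how quickly the number of input combinations explodes:

```python
# Why exhaustive testing is impractical: even a small form has an
# enormous number of input combinations (all figures invented).
field_value_counts = {
    "username": 10**6,   # plausible distinct usernames
    "password": 10**8,   # plausible distinct passwords
    "language": 30,      # supported locales
    "remember_me": 2,    # checkbox on/off
}

combinations = 1
for count in field_value_counts.values():
    combinations *= count

print(f"input combinations: {combinations:,}")  # 6,000,000,000,000,000
# Even at 1,000 test executions per second this would take roughly
# 190,000 years, before considering preconditions and sequences.
```
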
34
Q

test process

A

The set of interrelated activities comprising test planning, test monitoring and control, test analysis, test design, test implementation, test execution, and test completion.

-The test process can be tailored to a given situation based on various factors

35
Q

Test Activities and Tasks of test process

A

—Test planning
-Estimating the test effort is part of test planning
work product:
test plan, test schedule, risk register, and entry and exit criteria (see section 5.1). Risk register is a list of risks together with risk likelihood, risk impact and information about risk mitigation (see section 5.2). Test schedule, risk register and entry and exit criteria are often a part of the test plan.

—-Test monitoring and control.
work product: test progress reports (see section 5.3.2), documentation of control directives (see section 5.3) and risk information (see section 5.2).

—Test analysis
-“what to test?”
work product:
(prioritized) test conditions (e.g., acceptance criteria, see section 4.5.2), and defect reports regarding defects in the test basis (if not fixed directly).

—Test design
-“how to test?”
Test design also includes defining the test data requirements, designing the test environment, and identifying any other required infrastructure and tools.
work product: test cases, test charters, coverage items, test data requirements and test environment requirements.

—Test implementation
work product:
test procedures, automated test scripts, test suites, test data, test execution schedule, and test environment elements. Examples of test environment elements include: stubs, drivers, simulators, and service virtualization.

—Test execution :
work product:
test logs, and defect reports (see section 5.5).

—Test completion
work product:
test completion report (see section 5.3.2), action items for improvement of subsequent projects or iterations, documented lessons learned, and change requests (e.g., as product backlog items).

36
Q

test plan

A

A test plan describes the objectives, resources and processes for a test project.

Documents the means and schedule for achieving test objectives

Helps to ensure that the performed test activities will meet the established criteria

Serves as a means of communication with team members and other stakeholders

Demonstrates that testing will adhere to the existing test policy and test strategy (or explains why the testing will deviate from them)

Forces the testers to confront future challenges related to risks, schedules, people, tools, costs, effort, etc.

37
Q

The typical content of a test plan includes:

A

Context of testing (e.g., scope, test objectives, constraints, test basis)

Assumptions and constraints of the test project

Stakeholders (e.g., roles, responsibilities, relevance to testing, hiring and training needs)

Communication (e.g., forms and frequency of communication, documentation templates)

Risk register (e.g., product risks, project risks)

Test approach (e.g., test levels, test types, test techniques, test deliverables, entry criteria and exit criteria, independence of testing, metrics to be collected, test data requirements, test environment requirements, deviations from the organizational test policy and test strategy)

Budget and schedule

38
Q

(test planning) In iterative SDLCs, typically two kinds of planning occur:

A

release planning and iteration planning

39
Q

Test monitoring

A

concerned with gathering information about testing.

This information is used to assess test progress and to measure whether the test exit criteria or the test tasks associated with the exit criteria are satisfied, such as meeting the targets for coverage of product risks, requirements, or acceptance criteria.

Test monitoring gathers a variety of metrics to support test control and test completion.

Common test metrics include:

Project progress metrics (e.g., task completion, resource usage, test effort)

Test progress metrics (e.g., test case implementation progress, test environment preparation progress, number of test cases run/not run, passed/failed, test execution time)

Product quality metrics (e.g., availability, response time, mean time to failure)

Defect metrics (e.g., number and priorities of defects found/fixed, defect density, defect detection percentage; see the worked sketch after this list)

Risk metrics (e.g., residual risk level)

Coverage metrics (e.g., requirements coverage, code coverage)

Cost metrics (e.g., cost of testing, organizational cost of quality)
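
As a worked sketch of two of the defect metrics named above (all figures invented), defect density is typically expressed per size unit such as KLOC, and defect detection percentage (DDP) compares the defects found by testing with the total of those plus the defects that escaped to a later phase or to users:

```python
# Worked example of two common defect metrics (all figures invented).
defects_found_in_testing = 45
defects_found_after_release = 5
lines_of_code = 30_000

# Defect density: defects per thousand lines of code (KLOC).
defect_density = defects_found_in_testing / (lines_of_code / 1000)
print(f"defect density: {defect_density:.1f} defects/KLOC")  # 1.5

# Defect detection percentage (DDP): share of all known defects that
# testing found before release.
ddp = defects_found_in_testing / (defects_found_in_testing + defects_found_after_release)
print(f"defect detection percentage: {ddp:.0%}")  # 90%
```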

40
Q

Test control

A

uses the information from test monitoring to provide, in the form of control directives, guidance and the necessary corrective actions to achieve the most effective and efficient testing.

examples:
Reprioritizing tests when an identified risk becomes an issue

Re-evaluating whether a test item meets entry criteria or exit criteria due to rework

Adjusting the test schedule to address a delay in the delivery of the test environment

Adding new resources when and where needed

41
Q

test completion

A

Test completion collects data from completed test activities to consolidate experience, testware, and any other relevant information. Test completion activities occur at project milestones such as when a test level is completed, an agile iteration is finished, a test project is completed (or cancelled), a software system is released, or a maintenance release is completed.

42
Q

Essential Skills and Good Practices in Testing:

Generic Skills Required for Testing:

While being generic, the following skills are particularly relevant for testers:

A

Testing knowledge (to increase effectiveness of testing, e.g., by using test techniques)

Thoroughness, carefulness, curiosity, attention to details, being methodical (to identify defects, especially the ones that are difficult to find)

Good communication skills, active listening, being a team player (to interact effectively with all stakeholders, to convey information to others, to be understood, and to report and discuss defects)

Analytical thinking, critical thinking, creativity (to increase effectiveness of testing)

Technical knowledge (to increase efficiency of testing, e.g., by using appropriate test tools)

Domain knowledge (to be able to understand and to communicate with end users/business representatives)

43
Q

whole-team approach

A

any team member with the necessary knowledge and skills can perform any task, and everyone is responsible for quality.

The team members share the same workspace (physical or virtual), as co-location facilitates communication and interaction.

The whole team approach improves team dynamics, enhances communication and collaboration within the team, and creates synergy by allowing the various skill sets within the team to be leveraged for the benefit of the project.

44
Q

Independence of Testing

A

Work products can be tested by their author (no independence),

by the author’s peers from the same team (some independence),

by testers from outside the author’s team but within the organization (high independence),

by testers from outside the organization (very high independence).