Fundamentals of Testing Flashcards

1
Q

error

mistake

A

A human action that produces an incorrect result.

2
Q

defect
bug
fault

A

A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.

3
Q

failure

A

Deviation of the component or system from its expected delivery, service or result.
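The three terms above form a chain: a person's error introduces a defect into the code, and the defect, if executed, may cause a failure. A minimal sketch (the function and values are illustrative, not from any real system):

```python
# A programmer's error (mistake) introduces a defect: the function
# uses subtraction where addition was intended.
def add(a, b):
    return a - b  # defect: incorrect statement

# The defect causes a failure only when the faulty code is executed
# and the observed result deviates from the expected one.
expected = 5
actual = add(2, 3)
print(actual == expected)  # False: a failure has been observed
```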

4
Q

quality

A

The degree to which a component, system or process meets specified requirements
and/or user/customer needs and expectations.

5
Q

risk

A

A factor that could result in future negative consequences; usually expressed as impact
and likelihood.

6
Q

software

A

Computer programs, procedures, and possibly associated documentation and data
pertaining to the operation of a computer system.

7
Q

testing

A

The process consisting of all lifecycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products
to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.

8
Q

exhaustive testing

complete testing

A

A test approach in which the test suite comprises all combinations of input values and preconditions.

9
Q

system

A

A collection of components organized to accomplish a specific function or set of
functions.

10
Q

Testing is context dependent (principle)

A

Testing is done differently in different contexts. For example, safety-critical software is
tested differently from an e-commerce site.

11
Q

Testing shows presence of defects (principle)

A

Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.

12
Q

Exhaustive testing is impossible (principle)

A

Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, we use risks and priorities to focus testing efforts.
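A back-of-the-envelope calculation shows why this principle holds even for a tiny interface (the throughput figure is an illustrative assumption):

```python
# A function taking two 32-bit integer parameters has 2**64 input
# combinations. Even at an assumed billion test executions per
# second, exhaustive testing would take centuries.
combinations = 2 ** 64
tests_per_second = 10 ** 9
seconds_per_year = 60 * 60 * 24 * 365
years = combinations / (tests_per_second * seconds_per_year)
print(f"{years:.0f} years")  # roughly 585 years
```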

13
Q

Early testing (principle)

A

Testing activities should start as early as possible in the software or system development life cycle and should be focused on defined objectives.

14
Q

Defect clustering (principle)

A

A small number of modules contain most of the defects discovered during prerelease testing or show the most operational failures.

15
Q

Pesticide paradox (principle)

A

If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new bugs. To overcome this ‘pesticide paradox’, the test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to potentially find more defects.

16
Q

Absence-of-errors fallacy (principle)

A

Finding and fixing defects does not help if the system built is unusable and does not fulfill the users’ needs and expectations.

17
Q

code

A

Computer instructions and data definitions expressed in a programming language or in
a form output by an assembler, compiler or other translator.

18
Q

debugging

A

The process of finding, analyzing and removing the causes of failures in software.

19
Q

test objective

A

A reason or purpose for designing and executing a test.

20
Q

requirement

A

A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document.

21
Q

test case

A

A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.
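The parts of this definition map directly onto a typical automated test case. A sketch using Python's built-in unittest (the Account class and its behaviour are hypothetical):

```python
import unittest

# System under test (hypothetical): a minimal bank account.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

class WithdrawTestCase(unittest.TestCase):
    def setUp(self):
        # execution precondition: an account with a known balance
        self.account = Account(balance=100)

    def test_withdraw_reduces_balance(self):
        self.account.withdraw(30)                   # input value
        self.assertEqual(self.account.balance, 70)  # expected result

    def test_overdraw_is_rejected(self):
        with self.assertRaises(ValueError):         # expected result
            self.account.withdraw(200)

if __name__ == "__main__":
    unittest.main()
```

Each test method exercises one test condition (normal withdrawal, overdraft rejection) against a specific expected result.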

22
Q

test basis

A

All documents from which the requirements of a component or system can be inferred. The documentation on which the test cases are based. If a document can be amended only by way of formal amendment procedure, then the test basis is called a frozen test basis.

23
Q

review

A

An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal
review, technical review, inspection, and walkthrough.

24
Q

re-testing

confirmation testing

A

Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.

25
Q

exit criteria

A

The set of generic and specific conditions, agreed upon with the stakeholders for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing.

26
Q

incident

A

Any event occurring that requires investigation.

27
Q

regression testing

A

Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed.

28
Q

test condition
test requirement
test situation

A

An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element.

29
Q

coverage

test coverage

A

The degree, expressed as a percentage, to which a specified coverage item has been
exercised by a test suite.
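The percentage is a simple ratio of coverage items exercised to coverage items defined. A sketch (the branch identifiers are hypothetical):

```python
# Coverage = exercised coverage items / all coverage items, as a
# percentage. Here the coverage items are hypothetical branch IDs.
all_branches = {"b1", "b2", "b3", "b4"}
exercised = {"b1", "b2", "b4"}

coverage = len(exercised & all_branches) / len(all_branches) * 100
print(f"{coverage:.0f}% branch coverage")  # 75% branch coverage
```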

30
Q

test data

A

Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test.

31
Q

test execution

A

The process of running a test on the component or system under test, producing actual result(s).

32
Q

test log

A

A chronological record of relevant details about the execution of tests.

33
Q

test plan

A

A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing
tasks, who will do each task, degree of tester independence, the test environment, the test
design techniques and entry and exit criteria to be used, and the rationale for their choice,
and any risks requiring contingency planning. It is a record of the test planning process.

34
Q

test strategy

A

A high-level description of the test levels to be performed and the testing within those levels for an organization or programme (one or more projects).

35
Q

test summary report

A

A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria.

36
Q

testware

A

Artifacts produced during the test process required to plan, design, and execute
tests, such as documentation, scripts, inputs, expected results, set-up and clear-up
procedures, files, databases, environment, and any additional software or utilities used in
testing.

37
Q

test policy

A

A high level document describing the principles, approach and major objectives of the organization regarding testing.

38
Q

test planning

A

The activity of establishing or updating a test plan.

39
Q

test approach

A

The implementation of the test strategy for a specific project. It typically includes the decisions made that follow based on the (test) project’s goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria and test types to be performed.

40
Q

test control

A

A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned. See also test management.

41
Q

test monitoring

A

A test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare the actuals to that which was planned. See also test management.

42
Q

test design

test design specification

A

A document specifying the test conditions (coverage items) for a test item, the detailed test approach and identifying the associated high level test cases.

43
Q

test procedure

test procedure specification

A

A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script.

44
Q

test suite

A

A set of several test cases for a component or system under test, where the postcondition of one test is often used as the precondition for the next one.
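A sketch of that chaining using Python's unittest, where a suite fixes the execution order of its test cases (the login scenario and shared state are illustrative):

```python
import unittest

state = {"logged_in": False}

class LoginTest(unittest.TestCase):
    def test_login(self):
        state["logged_in"] = True            # postcondition of this test...
        self.assertTrue(state["logged_in"])

class ProfileTest(unittest.TestCase):
    def test_view_profile(self):
        # ...serves as the precondition of the next one.
        self.assertTrue(state["logged_in"])

# The suite runs the test cases in the order they were added.
suite = unittest.TestSuite()
suite.addTest(LoginTest("test_login"))
suite.addTest(ProfileTest("test_view_profile"))
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```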

45
Q

independence of testing

A

Separation of responsibilities, which encourages the accomplishment of objective testing.

46
Q

result

test result

A

The consequence/outcome of the execution of a test. It includes outputs to screens, changes to data, reports, and communication messages sent out.

47
Q

actual result

A

The behavior produced/observed when a component or system is tested.

48
Q

expected result

A

The behavior predicted by the specification, or another source, of the
component or system under specified conditions.
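The two terms pair up in every test run: the expected result is predicted from the specification, the actual result is observed from the code. A minimal sketch (the rounding requirement is hypothetical):

```python
# Specification (hypothetical): prices are formatted to two decimals.
def format_price(value):
    return f"{value:.2f}"

expected = "3.14"               # predicted from the specification
actual = format_price(3.14159)  # behaviour observed under test
print(actual == expected)       # the test passes when they match
```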

49
Q

test progress report

A

A document summarizing testing activities and results, produced at regular intervals, to report progress of testing activities against a baseline (such as the original test plan) and to communicate risks and alternatives requiring a decision to management.