QA Terms Flashcards

1
Q

CMMI

A

Capability Maturity Model Integration

2
Q

Capability Maturity Model Integration (CMMI)

A

a proven set of global best practices that drives business performance through building and benchmarking key capabilities - project management, quality management, and engineering all glued together by process management

3
Q

review

A

an evaluation of a life-cycle work product or project status to determine if there are any deviations from planned results and to recommend improvement

4
Q

peer reviews

A

human-based testing rather than computer-based testing

5
Q

anomaly, defect

A

any condition that deviates from expectations based on requirements, specification, design documents, standards, plans, and so on, or from someone’s experiences

6
Q

Project Manager Review

A

normally, a weekly project meeting with the development staff, called by the project manager

7
Q

Milestone Review

A

Reviews held at meaningful points in the project’s schedule; these milestones are documented in the project management plan, tracked, and reviewed.

8
Q

Buddy Check

A

an informal verification technique in which the life-cycle work product is examined by the author and one other person

9
Q

Circulation Reviews

A

A review that takes on attributes of both a buddy check and a walkthrough; it can be informal or formal; the life-cycle work product is passed around to each reviewer, who reviews it and either attaches comments, questions, and recommendations directly to the life-cycle work product or places them in a separate document.

10
Q

Inspections

A

a formal verification technique in which life-cycle work products are examined in detail by a group of peers for the explicit purpose of detecting and identifying defects

11
Q

Inspection Team

A

Usually consists of four people: moderator, programmer, designer, test specialist

12
Q

Walkthrough

A

a less formal verification technique in which life-cycle work products are examined by a group of peers for the purpose of finding defects, omission, and contradictions; typically led by the author of the work

13
Q

Peer Ratings

A

a technique of evaluating anonymous programs in terms of their overall quality, maintainability, extensibility, usability, and clarity; the purpose of this technique is to provide programmer self-evaluation

14
Q

Bug Record

A

Provides clear and complete information about a bug, including details about the environment and the specific steps the developer can use to reproduce the issue

15
Q

Black-Box Testing

A

Testing without concern about the internal behavior and structure of the program, instead concentrating on finding circumstances in which the program does not behave according to its specifications

16
Q

Equivalence Partitioning (Equivalence Classes)

A

a testing technique in which a set of test conditions is divided into groups or sets that can be considered the same
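
A minimal sketch of the idea, assuming a hypothetical field that accepts integers from 1 to 999 (the range, class, and method names are illustrative, not from any particular specification): the input domain splits into one valid class and two invalid classes, so one representative value per class is enough.

// Hypothetical example: a field specified to accept integers 1..999.
// Equivalence classes: valid (1..999), invalid-low (< 1), invalid-high (> 999).
public class EquivalencePartitioningExample {
    static boolean accepts(int value) {
        return value >= 1 && value <= 999;   // assumed specification
    }

    public static void main(String[] args) {
        // One representative test case per equivalence class:
        System.out.println(accepts(500));    // valid class       -> true
        System.out.println(accepts(0));      // invalid, too low  -> false
        System.out.println(accepts(1000));   // invalid, too high -> false
    }
}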

17
Q

Boundary Value Analysis

A

A black box test design technique in which test cases are designed based on boundary values, those situations directly on, above, and beneath the edges of input and output equivalence classes.
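
Continuing the same hypothetical 1..999 field from the previous card's sketch, boundary-value tests sit directly on, just below, and just above each edge of the valid class (the values are chosen for illustration only).

// Boundary values for the assumed 1..999 specification.
public class BoundaryValueExample {
    static boolean accepts(int value) {
        return value >= 1 && value <= 999;   // assumed specification
    }

    public static void main(String[] args) {
        int[] boundaryInputs = {0, 1, 2, 998, 999, 1000};
        for (int input : boundaryInputs) {
            System.out.println(input + " -> " + accepts(input));
        }
        // Expected: 0 -> false, 1 -> true, 2 -> true,
        //           998 -> true, 999 -> true, 1000 -> false
    }
}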

18
Q

Cause-Effect Graphing

A

a black-box testing technique that aids in selecting, in a systematic way, a high-yield set of test cases and that has the benefit of pointing out incompleteness and ambiguities in the specification.

19
Q

Symbol for Identity function on Cause-Effect Graph

A

single path

20
Q

Symbol for NOT function on Cause-Effect Graph

A

zigzag path

21
Q

Symbol for OR function on Cause-Effect Graph

A

V

22
Q

Symbol for AND function on Cause-Effect Graph

A

inverted “V”

23
Q

Identity function

A

if (a = 1) { b = 1 } else { b = 0 }

24
Q

Not function

A

if (a = 1) { b = 0 } else { b = 1 }

25
Q

Or function

A

if (a = 1 or b = 1 or c = 1) { d = 1 } else { d = 0 }

26
Q

And function

A

if (a = 1 and b = 1) { c = 1 } else { c = 0 }
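
Taken together, a small sketch restating the four cause-effect graph functions from the cards above as runnable code, with causes and effects modeled as booleans (1 = true, 0 = false); the class and method names are illustrative only.

// Cause-effect graph functions expressed as boolean methods.
public class CauseEffectFunctions {
    static boolean identity(boolean a) { return a; }                          // effect is 1 exactly when a is 1
    static boolean not(boolean a) { return !a; }                              // effect is 1 exactly when a is 0
    static boolean or(boolean a, boolean b, boolean c) { return a || b || c; } // effect is 1 if any cause is 1
    static boolean and(boolean a, boolean b) { return a && b; }               // effect is 1 only if every cause is 1

    public static void main(String[] args) {
        System.out.println(identity(true));          // true
        System.out.println(not(true));               // false
        System.out.println(or(false, true, false));  // true
        System.out.println(and(true, false));        // false
    }
}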

27
Q

Error Guessing (process)

A

Enumerate a list of possible errors or error-prone situations and then write test cases based on the list
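
As an illustration only (the routine and the guessed inputs are assumptions, not from any particular specification), an error-guessing list for a method that parses an integer from text might look like this:

// Hypothetical error-guessing list for integer parsing.
public class ErrorGuessingExample {
    public static void main(String[] args) {
        String[] guessedErrorCases = {
            "",            // empty input
            " 42 ",        // leading/trailing blanks
            "-0",          // negative zero
            "2147483648",  // one past Integer.MAX_VALUE
            "12.5",        // decimal point where an integer is expected
        };
        for (String input : guessedErrorCases) {
            try {
                System.out.println("\"" + input + "\" -> " + Integer.parseInt(input));
            } catch (NumberFormatException e) {
                System.out.println("\"" + input + "\" -> " + e.getClass().getSimpleName());
            }
        }
    }
}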

28
Q

Logic Coverage

A

Tests that exercise all decision point outcomes at least once, and ensure that all statements or entry points are executed at least once.

29
Q

Equivalence Partitioning

A

Defines condition or error classes to help reduce the number of tests to a finite, manageable set

30
Q

Boundary value analysis

A

Tests each edge condition of an equivalence class (output and input)

31
Q

Cause-effect graphing

A

Produces Boolean graphical representations of potential test case results to aid in selecting efficient and complete test cases

32
Q

Error Guessing

A

Produces test cases based on intuitive and expert knowledge of test team members to define potential software errors to facilitate efficient test case design

33
Q

White-Box Testing

A

Testing that is concerned with the degree to which test cases exercise or cover the logic (source code) of the program

34
Q

Logic (Statement) Coverage Testing (WBT)

A

If the tester backs completely away from path testing, it may seem that a worthy goal would be to execute every statement in the program at least once (a weak criterion).

35
Q

Decision (Branch) Coverage Testing (WBT)

A

Testing in which one must write enough test cases that each decision has a true and a false outcome at least once.

36
Q

Condition Coverage (WBT)

A

Testing in which one writes enough test cases to ensure that each condition in a decision takes on all possible outcomes at least once.

37
Q

Decision/Condition Coverage (WBT)

A

Testing in which there are sufficient test cases such that each condition in a decision takes on all possible outcomes at least once, each decision takes on all possible outcomes at least once, and each point of entry is invoked at least once.

38
Q

Multiple Condition Coverage (WBT)

A

Testing in which there are sufficient test cases such that all possible combinations of condition outcomes in each decision and all points of entry are invoked at least once.
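
A worked sketch contrasting the white-box coverage criteria in the cards above, using an assumed two-condition decision in the spirit of the classic example; the non-short-circuit '&' keeps both conditions evaluated on every run, which matches the textbook treatment (with '&&', short-circuiting changes what a coverage tool can observe).

// One decision containing two conditions: (a > 1) and (b == 0).
public class CoverageExample {
    static int classify(int a, int b) {
        int result = 0;
        if (a > 1 & b == 0) {   // '&' = non-short-circuit boolean AND
            result = 1;
        }
        return result;
    }

    public static void main(String[] args) {
        // Statement coverage:           (2, 0) alone reaches every statement.
        // Decision (branch) coverage:   (2, 0) takes the true branch, (1, 1) the false branch.
        // Condition coverage:           (2, 0) and (1, 1) give each condition both outcomes.
        // Decision/condition coverage:  that same pair also covers both decision outcomes here.
        // Multiple condition coverage:  all four combinations, e.g. (2, 0), (2, 1), (1, 0), (1, 1).
        System.out.println(classify(2, 0));   // 1
        System.out.println(classify(1, 1));   // 0
    }
}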

39
Q

Module (Unit) Test-Case Design (WBT)

A

The design of a test based on a specification for the module (defining the module’s input and output parameters and its function) and the module’s source code.

40
Q

Module (Unit) Testing

A

Testing which focuses on testing smaller units of the program first, rather than initially testing the whole program

41
Q

Non-Incremental (Big-Bang) Testing

A

An approach to integration testing in which each module is tested independently and then all modules are combined at once to form the program

42
Q

Incremental Testing

A

A way of integration testing in which first you test each module of the software individually, then continue testing by adding another module to it, then another, etc. This can be done either top-down, bottom-up, or sandwich.

43
Q

Function Testing

A

The process of attempting to find discrepancies between the program and the external specification.

44
Q

System Testing

A

Testing meant to compare the system or program to its original objectives; NOT a process of testing the functions of the complete system or program (as in Function Testing); considered Black-Box Testing

45
Q

Acceptance Testing

A

The process of comparing the program to its initial requirements and the current needs of its end users; Performed by the program’s customer or end user and normally is not considered the responsibility of the development organization

46
Q

Installation Testing

A

Testing meant to find errors during the installation process; performed by the organization that produced the system.

47
Q

Regression Testing

A

Performed after making a functional improvement or repair to a program, its purpose is to determine whether the change has regressed other aspects of the program.

48
Q

Usability Testing

A

A black-box testing technique that involves actual users or customers of the product and that seeks to verify that the implementation’s approach works for the user base. Tests should be created by establishing practical, real-world, repeatable exercises for each user to conduct.

49
Q

Component Testing

A

tests the interactive software parts for reasonable selection and user feedback

50
Q

Test User Selection

A

A complete usability testing protocol usually involves multiple tests from the same users as well as tests from multiple users

51
Q

User Recall

A

how much of what a user learns about software operation is retained from session to session

52
Q

Think-Aloud Protocol

A

A procedure in which participants are asked to say out loud what they are thinking while performing the assigned software testing tasks.

53
Q

Remote User Testing

A

Testing conducted by the user at the user’s business, where the software may ultimately be applied

54
Q

Hallway Intercept

A

Testing that involves random users for software with a general target market

55
Q

Debugging

A

a two-step process that begins with determination of the nature and location of the suspected error and then fixing it

56
Q

Debugging by Brute Force

A

A popular method of debugging that requires little thought but is inefficient and generally unsuccessful. Includes three categories: examining a storage dump, scattering print statements throughout the program, and using automated debugging tools.

57
Q

Debugging by Induction

A

The use of clues (i.e. symptoms of the error and/or the results of one or more test cases) and relationships among the clues to determine where the error lies.

58
Q

Debugging by Deduction

A

The use of the processes of elimination and refinement to debug a program.

59
Q

Debugging by Backtracking

A

The process of stepping backwards through the logic of a program until the programmer finds the point at which the logic went astray.

60
Q

Debugging by Testing

A

The use of test cases specifically for debugging, to cover only a single condition or a few conditions for scrutiny.

61
Q

Error Analysis

A

The examination of the exact location of the error, the developer of the code, the preventive measures taken to avoid those errors in the future, etc.

62
Q

Agile Development

A

A software development methodology that delivers functionality in rapid iterations, measured in weeks, requiring frequent communication, development, testing, and delivery; that is customer-centric; and that welcomes change during the process.

63
Q

Extreme Programming

A

A software process that helps developers create high-quality code rapidly

64
Q

XP Planning

A

Identifying your customer’s application requirements and designing user stories (or use cases) that meet them.

65
Q

XP Testing

A

Continuous unit testing comprises the bulk of the testing effort although acceptance testing also falls under this principle.

66
Q

Extreme Unit Testing

A

Testing in which all code modules must have primary tests before coding begins, and these primary tests must be defined and created before coding the module.

67
Q

Extreme Acceptance Testing

A

Testing which determines whether the application meets its functional and usable requirements, and which is created by the customers during the design/planning phases.

68
Q

assertFalse()

A

Checks whether the parameter supplied causes the method to return an incorrect Boolean value

69
Q

primeCheck()

A

Checks the input value against a calculated list of numbers divisible only by itself and 1

70
Q

checkArgs()

A

Asserts that the input value is a positive integer

71
Q

main()

A

Provides the entry point into the application
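
The four methods above come from a prime-checking example; below is a minimal sketch of how they might fit together, assuming JUnit 4 (only assertFalse and assertTrue are standard library calls; the class name, signatures, and the trial-division primeCheck are illustrative rather than the original implementation).

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class PrimeCheckSketch {
    // checkArgs(): asserts that the input value is a positive integer
    static int checkArgs(String arg) {
        int value = Integer.parseInt(arg);
        if (value < 1) {
            throw new IllegalArgumentException("input must be a positive integer");
        }
        return value;
    }

    // primeCheck(): true only if the value is divisible solely by itself and 1
    static boolean primeCheck(int value) {
        if (value < 2) return false;
        for (int i = 2; i * i <= value; i++) {
            if (value % i == 0) return false;
        }
        return true;
    }

    // main(): provides the entry point into the application
    public static void main(String[] args) {
        System.out.println(primeCheck(checkArgs(args[0])));
    }

    // Extreme unit testing: these tests are written before the module is coded.
    @Test
    public void nonPrimeIsRejected() {
        assertFalse(primeCheck(9));   // assertFalse fails the test if the value is true
    }

    @Test
    public void primeIsAccepted() {
        assertTrue(primeCheck(7));
    }
}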

72
Q

Scrum

A

a short team meeting to discuss progress and work; common across all methodologies

73
Q

Automated Testing

A

Tests which provide the immediate feedback required by rapid development

74
Q

Code Inspections

A

a set of procedures and error-detection techniques for group code reading

75
Q

buddy check

A

an informal verification technique in which the life-cycle work product is examined by the author and one other person

76
Q

circulation review

A

Taking on the attributes of both buddy checks and walkthroughs, these are either formal or informal reviews in which the life-cycle work product is circulated to each reviewer, who attaches comments, questions, and recommendations directly to the life-cycle work product or places them in a separate document.

77
Q

technical review

A

a formal team evaluation of a life-cycle work product to identify any discrepancies from specifications and standards, determine its suitability for use, and provide recommendations after the examination of various alternatives

78
Q

inspections

A

a formal verification technique in which life-cycle work products are examined in detail by a group of peers for the explicit purpose of detecting and identifying defects; author cannot act as the leader or as the moderator

79
Q

walkthroughs

A

a less formal verification technique in which life-cycle work products are examined by a group of peers for the purpose of finding defects, omissions, and contradictions; normally led by the author or the producer of the material being reviewed.

80
Q

structured walkthroughs

A

a more formal verification technique than a walkthrough, using many of the concepts/objectives of an inspection

81
Q

testing

A

a quality control function in that it is used to verify the functionality and performance of life-cycle work products or product components as they move through the product life cycle

82
Q

unit testing

A

a process of testing the individual components, subsystems, hardware components such as programmable logic arrays, and software components such as subprograms, subroutines, or procedures; focuses on white-box (glass-box) testing and on exercising statements, branches, and paths through discrete pieces of code.

83
Q

Integration Testing

A

verifies that separate systems can work together, passing data back and forth correctly

84
Q

Systems Testing

A

Testing which measures and determines what the system’s capabilities are; it ends when the system capabilities have been measured and enough of the problems have been corrected to give confidence that acceptance testing is ready to be executed

85
Q

Test Coverage Analysis

A

the process of finding areas of a program not exercised by a set of test cases, creating additional test cases to increase coverage, and determining a quantitative measure of code coverage that serves as an indirect measure of quality

86
Q

statement coverage

A

a measure of whether each executable statement is encountered

87
Q

block coverage

A

a measure of whether each executable statement is encountered, like statement coverage except that the unit of code measured is each sequence of non-branching statements

88
Q

decision coverage

A

the measure of whether Boolean expressions tested in control structures are evaluated to both true and false

89
Q

condition coverage

A

the measure of the true or false outcome of each Boolean subexpression; similar to decision coverage, but has better sensitivity to the control flow

90
Q

multiple condition coverage

A

the measure of whether every possible combination of Boolean subexpression values occurs; requires very thorough testing in languages with short-circuit operators

91
Q

path coverage

A

the measure of whether each of the possible paths in each function has been followed
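
An illustrative sketch only (names and values are assumptions): two sequential decisions give 2 x 2 = 4 paths through the method, so path coverage needs four test cases even though decision coverage can be reached with two; the '||' in the second decision also shows the short-circuit effect mentioned under multiple condition coverage.

// Two sequential decisions -> four distinct paths.
public class PathCoverageExample {
    static String label(int x, int y) {
        String s = "";
        if (x > 0) {            // decision 1
            s += "x+";
        }
        if (y > 0 || x > 10) {  // decision 2: '||' short-circuits, so 'x > 10'
            s += "y+";          // is evaluated only when 'y > 0' is false
        }
        return s;
    }

    public static void main(String[] args) {
        // The four paths: (true,true), (true,false), (false,true), (false,false)
        System.out.println(label(1, 1));     // "x+y+"
        System.out.println(label(1, -1));    // "x+"
        System.out.println(label(-1, 1));    // "y+"
        System.out.println(label(-1, -1));   // ""
    }
}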

92
Q

bench testing

A

the insertion of a product component into a test loop where all of the variables can be independently controlled, measured, and recorded

93
Q

system validation

A

an end-to-end process that is needed to ensure that the completed and integrated system will operate as needed in the environment for which it was intended; a measure of customer satisfaction, given the customer’s operational need and profile