QA Terms Flashcards
CMMI
Capability Maturity Model Integration
Capability Maturity Model Integration (CMMI)
a proven set of global best practices that drives business performance through building and benchmarking key capabilities - project management, quality management, and engineering all glued together by process management
review
an evaluation of a life-cycle work product or project status to determine if there are any deviations from planned results and to recommend improvement
peer reviews
human-based testing rather than computer-based testing
anomaly, defect
any condition that deviates from expectations based on requirements, specification, design documents, standards, plans, and so on, or from someone’s experiences
Project Manager Review
Normally, a weekly project meeting with the development staff, called by the project manager
Milestone Review
Represent meaningful points in the project’s schedule and are documented in the project management plan, tracked, and reviewed.
Buddy Check
an informal verification technique in which the life-cycle work product is examined by the author and one other person
Circulation Reviews
A review that takes on attributes of both a buddy check and a walkthrough; it can be informal or formal; the life-cycle work product is passed to each reviewer, who reviews it and either attaches comments, questions, and recommendations directly on the life-cycle work product or places them in a separate document.
Inspections
a formal verification technique in which life-cycle work products are examined in detail by a group of peers for the explicit purpose of detecting and identifying defects
Inspection Team
Usually consists of four people: moderator, programmer, designer, test specialist
Walkthrough
a less formal verification technique in which life-cycle work products are examined by a group of peers for the purpose of finding defects, omission, and contradictions; typically led by the author of the work
Peer Ratings
a technique of evaluating anonymous programs in terms of their overall quality, maintainability, extensibility, usability, and clarity; the purpose of this technique is to provide programmer self-evaluation
Bug Record
Provide clear and complete information about a bug, including details about the environment and specific steps that the developer can use to reproduce the issue
Black-Box Testing
Testing without concern about the internal behavior and structure of the program, instead concentrating on finding circumstances in which the program does not behave according to its specifications
Equivalence Partitioning (Equivalence Classes)
a testing technique in which a set of test conditions is divided into groups or sets that can be considered the same
Boundary Value Analysis
A black box test design technique in which test cases are designed based on boundary values, those situations directly on, above, and beneath the edges of input and output equivalence classes.
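A minimal Python sketch of equivalence partitioning and boundary value analysis together, using a hypothetical input field that accepts integers from 1 to 100 (the field and its range are assumptions, not from the flashcards):

```python
def accepts(value):
    """Hypothetical module under test: valid inputs are integers 1..100 inclusive."""
    return 1 <= value <= 100

# Equivalence partitioning: three classes -- below range (invalid),
# in range (valid), above range (invalid) -- one representative each.
partition_cases = {-5: False, 50: True, 200: False}

# Boundary value analysis: inputs directly on, just below, and just
# above the edges of the valid equivalence class.
boundary_cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}

for value, expected in {**partition_cases, **boundary_cases}.items():
    assert accepts(value) == expected, value
```

Three partition cases stand in for an effectively infinite input domain; the boundary cases add the edge inputs where off-by-one defects most often hide.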
Cause-Effect Graphing
a black-box testing technique that aids in selecting, in a systematic way, a high-yield set of test cases and that has the benefit of pointing out incompleteness and ambiguities in the specification.
Symbol for Identity function on Cause-Effect Graph
single path
Symbol for NOT function on Cause-Effect Graph
zigzag path
Symbol for OR function on Cause-Effect Graph
V
Symbol for AND function on Cause-Effect Graph
inverted “V”
Identity function
if (a == 1) { b = 1 } else { b = 0 }
Not function
if (a == 1) { b = 0 } else { b = 1 }
Or function
if (a == 1 or b == 1 or c == 1) { d = 1 } else { d = 0 }
And function
if (a == 1 and b == 1) { c = 1 } else { c = 0 }
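The four cause-effect graph primitives can be sketched as runnable Boolean functions; this Python version models causes and effects as 0/1 values to match the flashcard pseudocode (function names are my own):

```python
def identity(a):
    """Identity: the effect equals the cause."""
    return 1 if a == 1 else 0

def not_fn(a):
    """NOT: the effect is the inverse of the cause."""
    return 0 if a == 1 else 1

def or_fn(a, b, c):
    """OR: the effect is 1 if any cause is 1."""
    return 1 if (a == 1 or b == 1 or c == 1) else 0

def and_fn(a, b):
    """AND: the effect is 1 only if every cause is 1."""
    return 1 if (a == 1 and b == 1) else 0

# Quick truth-table checks for each primitive.
assert identity(1) == 1 and identity(0) == 0
assert not_fn(1) == 0 and not_fn(0) == 1
assert or_fn(0, 0, 1) == 1 and or_fn(0, 0, 0) == 0
assert and_fn(1, 1) == 1 and and_fn(1, 0) == 0
```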
Error Guessing (process)
Enumerate a list of possible errors or error-prone situations and then write test cases based on the list
Logic Coverage
Tests that exercise all decision point outcomes at least once, and ensure that all statements or entry points are executed at least once.
Equivalence Partitioning
Defines valid and invalid condition (or error) classes so that an otherwise infinite input domain can be covered by a finite number of tests
Boundary value analysis
Tests each edge condition of an equivalence class (output and input)
Cause-effect graphing
Produces Boolean graphical representations of potential test case results to aid in selecting efficient and complete test cases
Error Guessing
Produces test cases based on intuitive and expert knowledge of test team members to define potential software errors to facilitate efficient test case design
White-Box Testing
Testing that is concerned with the degree to which test cases exercise or cover the logic (source code) of the program
Logic (Statement) Coverage Testing (WBT)
If the tester backs completely away from path testing, it may seem that a worthy goal would be to execute every statement in the program at least once. (Weak criterion)
Decision (Branch) Coverage Testing (WBT)
Testing in which one must write enough test cases that each decision has a true and a false outcome at least once.
Condition Coverage (WBT)
Testing in which one writes enough test cases to ensure that each condition in a decision takes on all possible outcomes at least once.
Decision/Condition Coverage (WBT)
Testing in which there are sufficient test cases such that each condition in a decision takes on all possible outcomes at least once, each decision takes on all possible outcomes at least once, and each point of entry is invoked at least once.
Multiple Condition Coverage (WBT)
Testing in which there are sufficient test cases such that all possible combinations of condition outcomes in each decision and all points of entry are invoked at least once.
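The coverage criteria above can be contrasted on a single hypothetical function with one decision containing two conditions (the function and its inputs are illustrative, not from the flashcards):

```python
def classify(a, b):
    """Hypothetical unit under test: one decision, two conditions."""
    if a > 1 and b == 0:
        return "taken"
    return "not taken"

# Decision coverage: the decision evaluates true once and false once.
decision_suite = [(2, 0), (1, 1)]

# Condition coverage: each condition (a > 1, b == 0) takes both outcomes --
# yet the decision itself is false in BOTH cases, so the "taken" branch
# is never exercised. This is why condition coverage alone can be weak.
condition_suite = [(2, 1), (1, 0)]
for a, b in condition_suite:
    assert classify(a, b) == "not taken"

# Multiple condition coverage: all four combinations of the two conditions.
multiple_suite = [(2, 0), (2, 1), (1, 0), (1, 1)]

assert classify(2, 0) == "taken"
assert classify(1, 1) == "not taken"
```

The `condition_suite` illustrates the classic pitfall: each condition is tested both ways, but the decision's true outcome is never reached.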
Module (Unit) Test-Case Design (WBT)
The design of a test based on a specification for the module (defining the module’s input and output parameters and its function) and the module’s source code.
Module (Unit) Testing
Testing which focuses on testing smaller units of the program first, rather than initially testing the whole program
Non-Incremental (Big-Bang) Testing
A way of integration testing in which each module is first tested independently, and then all of the modules are combined and tested at once
Incremental Testing
A way of integration testing in which first you test each module of the software individually, then continue testing by adding another module to it, then another, etc. This can be done either top-down, bottom-up, or sandwich.
Function Testing
The process of attempting to find discrepancies between the program and the external specification.
System Testing
Testing meant to compare the system or program to its original objectives, NOT to test the functions of the complete system or program (as Function Testing does); considered black-box testing
Acceptance Testing
The process of comparing the program to its initial requirements and the current needs of its end users; Performed by the program’s customer or end user and normally is not considered the responsibility of the development organization
Installation Testing
Testing meant to find errors during the installation process; performed by the organization that produced the system.
Regression Testing
Performed after making a functional improvement or repair to a program, its purpose is to determine whether the change has regressed other aspects of the program.
Usability Testing
A black-box testing technique that involves actual users or customers of the product and seeks to verify that an implementation's approach works for the user base. Tests should be created by establishing practical, real-world, repeatable exercises for each user to conduct.
Component Testing
tests the interactive software parts for reasonable selection and user feedback
Test User Selection
A complete usability testing protocol usually involves multiple tests from the same users as well as tests from multiple users
User Recall
how much of what a user learns about software operation is retained from session to session
Think-Aloud Protocol
A procedure in which participants are asked to say out loud what they are thinking while performing the assigned software testing tasks.
Remote User Testing
Testing conducted by the user at the user's business, where the software may ultimately be applied
Hallway Intercept
Testing that involves random users for software with a general target market
Debugging
a two-step process: first determining the nature and location of the suspected error, then fixing it
Debugging by Brute Force
A popular method of debugging that requires little thought but is inefficient and generally unsuccessful; falls into three categories: storage dumps, scattering print statements, and automated debugging tools.
Debugging by Induction
The use of clues (i.e. symptoms of the error and/or the results of one or more test cases) and relationships among the clues to determine where the error lies.
Debugging by Deduction
The use of the processes of elimination and refinement to debug a program.
Debugging by Backtracking
The process of stepping backwards through the logic of a program until the programmer finds the point at which the logic went astray.
Debugging by Testing
The use of test cases specifically for debugging, to cover only a single condition or a few conditions for scrutiny.
Error Analysis
The examination of the exact location of the error, the developer of the code, the preventive measures taken to avoid those errors in the future, etc.
Agile Development
A software development methodology that delivers functionality in rapid iterations, measured in weeks, requiring frequent communication, development, testing, and delivery; that is customer-centric; and that welcomes change during the process.
Extreme Programming
A software process that helps developers create high-quality code rapidly
XP Planning
Identifying your customer’s application requirements and designing user (or case) stories that meet them.
XP Testing
Continuous unit testing comprises the bulk of the testing effort although acceptance testing also falls under this principle.
Extreme Unit Testing
Testing in which all code modules must have primary tests before coding begins, and these primary tests must be defined and created before coding the module.
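A minimal test-first sketch of extreme unit testing, using Python's `unittest` as a stand-in for JUnit; the `word_count` module is hypothetical, and under this discipline the test class would be written before the module is coded:

```python
import unittest

def word_count(text):
    """Hypothetical module under test: count whitespace-separated words."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    # Primary tests: defined and created before coding the module.
    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_multiple_words(self):
        self.assertEqual(word_count("quality assurance matters"), 3)

# Run the suite programmatically and require that every test passes.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

Because the tests exist first, the module is "done" exactly when the suite goes green, and the suite doubles as a regression check for later changes.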
Extreme Acceptance Testing
Testing which determines whether the application meets its functional and usable requirements, and which is created by the customers during the design/planning phases.
assertFalse()
Asserts that the Boolean expression supplied as its parameter evaluates to false; used to check whether a parameter causes the method under test to return an incorrect Boolean value
primeCheck()
Checks the input value against a calculated list of numbers divisible only by itself and 1
checkArgs()
Asserts that the input value is a positive integer
main()
Provides the entry point into the application
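The four method cards above appear to describe a JUnit-style prime-checking example application; a hypothetical Python sketch of such an application (all names and the sieve implementation are assumptions):

```python
def compute_primes(limit):
    """Sieve of Eratosthenes: numbers divisible only by themselves and 1."""
    flags = [True] * (limit + 1)
    primes = []
    for n in range(2, limit + 1):
        if flags[n]:
            primes.append(n)
            for multiple in range(n * n, limit + 1, n):
                flags[multiple] = False
    return primes

def prime_check(candidate, primes):
    """Check the input value against the calculated list of primes."""
    return candidate in primes

def check_args(argv):
    """Assert that a single positive-integer argument was supplied."""
    if len(argv) != 1 or not argv[0].isdigit():
        raise ValueError("expected a single positive integer argument")
    return int(argv[0])

def main(argv):
    """Entry point: report whether the argument is prime."""
    value = check_args(argv)
    return prime_check(value, compute_primes(max(value, 2)))
```

For example, `main(["7"])` returns `True` and `main(["8"])` returns `False`, while `main(["abc"])` raises the argument-validation error.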
Scrum
a short daily team meeting to discuss progress and work; such stand-up meetings are common across all agile methodologies
Automated Testing
Tests which provide the immediate feedback required by rapid development
Code Inspections
a set of procedures and error-detection techniques for group code reading
buddy check
an informal verification technique in which the life-cycle work product is examined by the author and one other person
circulation review
taking on the attributes of both buddy checks and walkthroughs, these are either formal or informal reviews in which the life-cycle work product is circulated to each reviewer, who attaches comments, questions, and recommendations directly on the life-cycle work product or places them in a separate document.
technical review
a formal team evaluation of a life-cycle work product to identify any discrepancies from specifications and standards, determine its suitability for use, and provide recommendations after the examination of various alternatives
inspections
a formal verification technique in which life-cycle work products are examined in detail by a group of peers for the explicit purpose of detecting and identifying defects; author cannot act as the leader or as the moderator
walkthroughs
a less formal verification technique in which life-cycle work products are examined by a group of peers for the purpose of finding defects, omissions, and contradictions; normally led by the author or the producer of the material being reviewed.
structured walkthroughs
a more formal verification technique than a walkthrough, using many of the concepts/objectives of an inspection
testing
a quality control function in that it is used to verify the functionality and performance of life-cycle work products or product components as they move through the product life cycle
unit testing
a process of testing the individual components, subsystems, hardware components such as programmable logic arrays, and software components such as subprograms, subroutines, or procedures; focuses on white box or glass box testing, exercising statements, branches, and paths through discrete pieces of code.
Integration Testing
verify that separate systems can work together passing data back and forth correctly
Systems Testing
Testing that measures and determines the system's capabilities; it ends when those capabilities have been measured and enough of the problems have been corrected to give confidence that acceptance testing is ready to be executed
Test Coverage Analysis
the process of finding areas of a program not exercised by a set of test cases, creating additional test cases to increase coverage, and determining a quantitative measure of code coverage that serves as an indirect measure of quality
statement coverage
a measure of whether each executable statement is encountered
block coverage
a measure of whether each executable statement is encountered, like statement coverage except that the unit of code measured is each sequence of non-branching statements
decision coverage
the measure of whether Boolean expressions tested in control structures are evaluated to both true and false
condition coverage
the measure of the true or false outcome of each Boolean subexpression; similar to decision coverage, but has better sensitivity to the control flow
multiple condition coverage
the measure of whether every possible combination of Boolean subexpression occurs; requires very thorough testing in languages with short-circuit operators
path coverage
the measure of whether each of the possible paths in each function has been followed
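The coverage measures above diverge quickly once decisions occur in sequence; this hypothetical function has two independent decisions, so decision coverage needs only two tests while path coverage needs all four combinations (the function is illustrative):

```python
def adjust(x, y):
    """Hypothetical unit under test: two sequential, independent decisions."""
    if x > 0:        # decision 1
        x = x * 2
    if y > 0:        # decision 2
        y = y + 1
    return x + y

# Decision (branch) coverage: two tests suffice -- one takes both
# branches, one takes neither.
decision_suite = [(1, 1), (-1, -1)]

# Path coverage: all four combinations of the two decision outcomes,
# since each decision can independently be true or false.
path_suite = [(1, 1), (1, -1), (-1, 1), (-1, -1)]

assert adjust(1, 1) == 4      # both branches taken: 1*2 + (1+1)
assert adjust(-1, -1) == -2   # neither branch taken: -1 + -1
```

With n sequential decisions the path count grows as 2^n, which is why path coverage is rarely attainable on real programs while branch coverage remains practical.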
bench testing
the insertion of a product component into a test loop where all of the variables can be independently controlled, measured, and recorded
system validation
an end-to-end process that is needed to ensure that the completed and integrated system will operate as needed in the environment for which it was intended; a measure of customer satisfaction, given the customer’s operational need and profile