CAST Full Deck Flashcards
Acceptance Criteria
A key prerequisite for test planning is a clear understanding of what must be accomplished for the test project to be deemed successful.
Acceptance Testing
The objective of acceptance testing is to determine throughout the development cycle that all aspects of the development process meet the user’s needs.
Act
If your checkup reveals that the work is not being performed according to plan or that results are not as anticipated, devise measures for appropriate action. (Plan-Do-Check-Act)
Access Modeling
Used to verify that data requirements (represented in the form of an entity-relationship diagram) support the data demands of process requirements (represented in data flow diagrams and process specifications).
Active Risk
Risk that is deliberately taken on. For example, the choice to develop a new product that may not be successful in the marketplace.
Actors
Interfaces in a system boundary diagram. (Use Cases)
Alternate Path
Additional testable conditions derived from the exceptions and alternative courses of the Use Case.
Affinity Diagram
A group process that takes large amounts of language data, such as a list developed by brainstorming, and divides it into categories.
Analogous
The analogy model is a nonalgorithmic costing model that estimates the size, effort, or cost of a project by relating it to another similar completed project. Analogous estimating takes the actual time and/or cost of a historical project as a basis for the current project.
Analogous Percentage Method
A common method for estimating test effort is to calculate the test estimate as a percentage of previous test efforts using a predicted size factor (SF) (e.g., SLOC or FPA).
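As a sketch, the percentage scaling can be expressed as simple arithmetic; the project sizes and hours below are hypothetical figures, not prescribed values.

```python
# Hedged sketch of the analogous percentage method: scale a historical
# project's test effort by the predicted size factor (SF). All numbers
# here are illustrative assumptions.
def analogous_test_estimate(historical_test_hours, historical_size, new_size):
    """Scale a past project's test effort by the ratio of project sizes."""
    size_factor = new_size / historical_size   # e.g., ratio of SLOC or function points
    return historical_test_hours * size_factor

# A past project of 10,000 SLOC took 400 test hours; the new one is 15,000 SLOC.
estimate = analogous_test_estimate(400, 10_000, 15_000)
print(estimate)  # 600.0
```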
Application
A single software product that may or may not fully support a business function.
Appraisal Costs
Resources spent to ensure a high level of quality in all development life cycle stages, including conformance to quality standards and delivery of products that meet the user’s requirements/needs. Appraisal costs include the cost of in-process reviews, dynamic testing, and final inspections.
Appreciative or Enjoyment Listening
One automatically switches to this type of listening when a situation is perceived as humorous or when an illustrative example of a situation is being given. This listening type helps in understanding real-world situations.
Assumptions
A thing that is accepted as true.
Audit
This is an inspection/assessment activity that verifies compliance with plans, policies, and procedures, and ensures that resources are conserved. Audit is a staff function; it serves as the “eyes and ears” of management.
Backlog
Work waiting to be done; for IT this includes new systems to be developed and enhancements to existing systems. To be included in the development backlog, the work must have been cost-justified and approved for development. A product backlog in Scrum is a prioritized feature list containing short descriptions of all functionality desired in the product.
Baseline
A quantitative measure of the current level of performance.
Benchmarking
Comparing your company’s products, services, or processes against best practices, or competitive practices, to help define superior performance of a product, service, or support process.
Benefits Realization Test
A test or analysis conducted after an application is moved into production to determine whether it is likely to meet the originating business case.
Black-Box Testing
A test technique that focuses on testing the functionality of the program, component, or application against its specifications without knowledge of how the system is constructed; usually data or business process driven.
Bottom-Up
Begin testing from the bottom of the hierarchy and work up to the top. Modules are added in ascending hierarchical order. Bottom-up testing requires the development of driver modules, which provide the test input, call the module or program being tested, and display the test output.
Bottom-Up Estimation
In this technique, the cost of each single activity is determined with the greatest level of detail at the bottom level and then rolls up to calculate the total project cost.
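The roll-up can be sketched as a simple sum over a work breakdown; the activities and hour figures below are hypothetical.

```python
# Bottom-up estimation sketch: cost each lowest-level activity, then roll
# the figures up to a project total. Activity names and hours are assumed.
activities = {
    "design tests": 16,     # hours
    "build test data": 8,
    "execute tests": 24,
    "report defects": 6,
}

total_hours = sum(activities.values())
print(total_hours)  # 54
```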
Boundary Value Analysis
A data selection technique in which test data is chosen from the “boundaries” of the input or output domain classes, data structures, and procedure parameters. Choices often include the actual minimum and maximum boundary values, the maximum value plus or minus one, and the minimum value plus or minus one.
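A minimal sketch of the technique for a numeric input domain, using the boundary choices named above (minimum/maximum plus or minus one):

```python
def boundary_values(minimum, maximum):
    """Return the classic boundary test points for a numeric input range."""
    return [minimum - 1, minimum, minimum + 1,
            maximum - 1, maximum, maximum + 1]

# For a hypothetical input field that accepts values 1..100:
print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]
```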
Brainstorming
A group process for generating creative and diverse ideas.
Branch Combination Coverage
Branch Condition Combination Coverage is a very thorough structural testing technique, requiring 2^n test cases to achieve 100% coverage of a condition containing n Boolean operands.
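The 2^n growth can be demonstrated by enumerating every truth-value combination for the operands:

```python
from itertools import product

def condition_combinations(n):
    """All truth-value combinations for n Boolean operands: 2**n test cases."""
    return list(product([False, True], repeat=n))

# A condition with 3 operands, e.g. A and (B or C), needs 2**3 = 8 cases.
cases = condition_combinations(3)
print(len(cases))  # 8
```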
Branch/Decision Testing
A test method that requires that each possible branch on each decision point be executed at least once.
Bug
A general term for all software defects or errors.
Calibration
This indicates the movement of a measure so it becomes more valid, for example, changing a customer survey so it better reflects the true opinions of the customer.
Candidate
An individual who has met eligibility requirements for a credential awarded through a certification program, but who has not yet earned that certification through participation in the required skill and knowledge assessment instruments.
Causal Analysis
The purpose of causal analysis is to prevent problems by determining the problem’s root cause. It shows the relation between an effect and its possible causes in order to eventually find the root cause of the issue.
Cause and Effect Diagrams
A cause and effect diagram visualizes results of brainstorming and affinity grouping through major causes of a significant process problem.
Cause-Effect Graphing
Cause-effect graphing is a technique which focuses on modeling the dependency relationships between a program’s input conditions (causes) and output conditions (effects). CEG is considered a Requirements-Based test technique and is often referred to as Dependency modeling.
Certificant
An individual who has earned a credential awarded through a certification program.
Certification
A voluntary process instituted by a nongovernmental agency by which individual applicants are recognized for having achieved a measurable level of skill or knowledge. Measurement of the skill or knowledge makes certification more restrictive than simple registration, but much less restrictive than formal licensure.
Change Management
Managing software change is a process. The process is the primary responsibility of the software development staff. They must assure that the change requests are documented, that they are tracked through approval or rejection, and then incorporated into the development process.
Check
Check to determine whether work is progressing according to the plan and whether the expected results are obtained. Check for performance of the set procedures, changes in conditions, or abnormalities that may appear. As often as possible, compare the results of the work with the objectives.
Check Sheets
A check sheet is a technique or tool to record the number of occurrences over a specified interval of time; a data sample to determine the frequency of an event.
Checklists
A series of probing questions about the completeness and attributes of an application system. Well-constructed checklists prompt evaluation of areas that are prone to problems. Checklists both limit the scope of the test and direct the tester to the areas in which there is a high probability of a problem.
Checkpoint Review
Held at predefined points in the development process to evaluate whether certain quality factors (critical success factors) are being adequately addressed in the system being built. Independent experts conduct the reviews as early as possible for the purpose of identifying problems.
Client
The customer that pays for the product received and receives the benefit from the use of the product.
CMMI-Dev
A process improvement model for software development.
Specifically, CMMI for Development is designed to compare an organization’s existing development processes to proven best practices developed by members of industry, government, and academia.
Coaching
Providing advice and encouragement to an individual or individuals to promote a desired behavior.
COCOMO II
The best recognized software development cost model is the Constructive Cost Model II (COCOMO® II). COCOMO® II is an enhancement of the original COCOMO® model, extending it to include a wider collection of techniques and technologies. It provides support for object-oriented software, business software, software created via spiral or evolutionary development models, and software using COTS application utilities.
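The core effort equation can be sketched as follows. The calibration constants A = 2.94 and B = 0.91 are the published COCOMO II values, but the scale-factor sum and effort multiplier defaults below are illustrative assumptions, not a calibrated estimate.

```python
# Simplified COCOMO II effort sketch: PM = A * Size**E * (effort multipliers),
# where E = B + 0.01 * (sum of scale factors). Default values here are
# assumptions for illustration only.
def cocomo2_effort(ksloc, scale_factor_sum=18.97, effort_multiplier=1.0):
    """Return estimated effort in person-months for a project of `ksloc` KSLOC."""
    A, B = 2.94, 0.91                      # published calibration constants
    E = B + 0.01 * scale_factor_sum        # exponent from the scale factors
    return A * ksloc ** E * effort_multiplier

pm = cocomo2_effort(10)  # hypothetical 10 KSLOC project, nominal factors
print(round(pm, 1))
```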
Code Comparison
One version of source or object code is compared to a second version. The objective is to identify those portions of computer programs that have been changed. The technique is used to identify those segments of an application program that have been altered as a result of a program change.
Common Causes of Variation
Common causes of variation are typically due to a large number of small random sources of variation. The sum of these sources of variation determines the magnitude of the process’s inherent variation due to common causes; the process’s control limits and current process capability can then be determined.
Compiler-Based Analysis
Most compilers for programming languages include diagnostics that identify potential program structure flaws. Many of these diagnostics are warning messages requiring the programmer to conduct additional investigation to determine whether or not the problem is real. Problems may include syntax problems, command violations, or variable/data reference problems. These diagnostic messages are a useful means of detecting program problems, and should be used by the programmer.
Complete Test Set
A test set containing data that causes each element of a prespecified set of Boolean conditions to be true. In addition, each element of the test set causes at least one condition to be true.
Completeness
The property that all necessary parts of an entity are included. Often, a product is said to be complete if it has met all requirements.
Complexity-Based Analysis
Based upon applying mathematical graph theory to programs and preliminary design language specifications (PDLs) to determine a unit’s complexity. This analysis can be used to measure and control complexity when maintainability is a desired attribute. It can also be used to estimate the test effort required and identify paths that must be tested.
Compliance Checkers
A program that parses source code looking for violations of company standards. Statements that contain violations are flagged. Company standards are rules that can be added, changed, and deleted as needed.
Comprehensive Listening
Designed to get a complete message with minimal distortion. This type of listening requires a lot of feedback and summarization to fully understand what the speaker is communicating.
Compromise
An intermediate approach – Partial satisfaction is sought for both parties through a “middle ground” position that reflects mutual sacrifice. Compromise evokes thoughts of giving up something, therefore earning the name “lose-lose.”
Condition Coverage
A white-box testing technique that measures the number of, or percentage of, decision outcomes covered by the test cases designed. 100% condition coverage would indicate that every possible outcome of each decision had been executed at least once during testing.
Condition Testing
A structural test technique where each clause in every condition is forced to take on each of its possible values in combination with those of other clauses.
Configuration Management
Software Configuration Management (CM) is a process for tracking and controlling changes in the software. The ability to maintain control over changes made to all project artifacts is critical to the success of a project. The more complex an application is, the more important it is to control change to both the application and its supporting artifacts.
Configuration Management Tools
Tools that are used to keep track of changes made to systems and all related artifacts. These are also known as version control tools.
Configuration Testing
Testing of an application on all supported hardware and software platforms. This may include various combinations of hardware types, configuration settings, and software versions.
Consistency
The property of logical coherence among constituent parts. Consistency can also be expressed as adherence to a given set of rules.
Consistent Condition Set
A set of Boolean conditions such that complete test sets for the conditions uncover the same errors.
Constraints
A limitation or restriction. Constraints are those items that will likely force a dose of “reality” on a test project. The obvious constraints are test staff size, test schedule, and budget.
Constructive Criticism
A process of offering valid and well-reasoned opinions about the work of others, usually involving both positive and negative comments, in a friendly manner rather than an oppositional one.
Control
Control is anything that tends to cause the reduction of risk. Control can accomplish this by reducing harmful effects or by reducing the frequency of occurrence.
Control Charts
A statistical technique to assess, monitor and maintain the stability of a process. The objective is to monitor a continuous repeatable process and the process variation from specifications. The intent of a control chart is to monitor the variation of a statistically stable process where activities are repetitive.
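A minimal sketch of the control-limit calculation, using the common centre-line plus-or-minus three standard deviations rule; the sample data is hypothetical.

```python
from statistics import mean, pstdev

# Hypothetical samples of a repeatable process measure (e.g., defects per build).
samples = [12, 14, 11, 13, 15, 12, 13, 14, 12, 13]

centre = mean(samples)                    # centre line of the chart
sigma = pstdev(samples)                   # population standard deviation
ucl = centre + 3 * sigma                  # upper control limit
lcl = centre - 3 * sigma                  # lower control limit

# Points outside the control limits signal special-cause variation.
out_of_control = [x for x in samples if not (lcl <= x <= ucl)]
print(out_of_control)  # [] -> process is statistically stable
```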
Control Flow Analysis
Based upon graphical representation of the program process. In control flow analysis, the program graph has nodes, which represent a statement or segment possibly ending in an unresolved branch. The graph illustrates the flow of program control from one segment to another as illustrated through branches. The objective of control flow analysis is to determine potential problems in logic branches that might result in a loop condition or improper processing.
Conversion Testing
Validates the effectiveness of data conversion processes, including field-to-field mapping, and data translation.
Corrective Controls
Corrective controls assist individuals in the investigation and correction of causes of risk exposures that have been detected.
Correctness
The extent to which software is free from design and coding defects (i.e., fault-free). It is also the extent to which software meets its specified requirements and user objectives.
Cost of Quality (COQ)
Money spent beyond expected production costs (labor, materials, equipment) to ensure that the product the customer receives is a quality (defect free) product. The Cost of Quality includes prevention, appraisal, and failure costs.
COTS
Commercial Off the Shelf (COTS) software products that are ready-made and available for sale in the marketplace.
Coverage
A measure used to describe the degree to which the application under test (AUT) is tested by a particular test suite.
Coverage-Based Analysis
A metric used to show the logic covered during a test session, providing insight to the extent of testing. The simplest metric for coverage would be the number of computer statements executed during the test compared to the total number of statements in the program. To completely test the program structure, the test data chosen should cause the execution of all paths. Since this is not generally possible outside of unit test, general metrics have been developed which give a measure of the quality of test data based on the proximity to this ideal coverage. The metrics should take into consideration the existence of infeasible paths, which are those paths in the program that have been designed so that no data will cause the execution of those paths.
Critical Listening
The listener is performing an analysis of what the speaker said. This is most important when it is felt that the speaker is not in complete control of the situation, or does not know the complete facts of a situation.
Critical Success Factors
Critical Success Factors (CSFs) are those criteria or factors that must be present in a software application for it to be successful.
Customer
The individual or organization, internal or external to the producing organization that receives the product.
Customer’s/User’s of Software View of Quality
Fit for use.
Cyclomatic Complexity
The number of decision statements, plus one.
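As a rough sketch of this count, one can tally decision statements in Python source with the standard `ast` module; real complexity tools also count Boolean operators and other constructs, so this is a simplified approximation.

```python
import ast

def cyclomatic_complexity(source):
    """Approximate cyclomatic complexity: decision statements plus one.
    (Simplified: real tools also count boolean operators, try/except, etc.)"""
    decisions = (ast.If, ast.For, ast.While, ast.IfExp)
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, decisions) for node in ast.walk(tree))

code = """
def grade(score):
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    return "C"
"""
print(cyclomatic_complexity(code))  # 3: two if statements, plus one
```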
Damaging Event
Damaging Event is the materialization of a risk to an organization’s assets.
Data Dictionary
Provides the capability to create test data to test validation for the defined data elements. The test data generated is based upon the attributes defined for each data element. The test data will check both the normal variables for each data element as well as abnormal or error conditions for each data element.
Data Flow Analysis
In data flow analysis, we are interested in tracing the behavior of program variables as they are initialized and modified while the program executes.
DD (Decision-to-Decision) Path
A path of logical code sequence that begins at a decision statement or an entry and ends at a decision statement or an exit.
Debugging
The process of analyzing and correcting syntactic, logic, and other errors identified during testing.
Decision Analysis
This technique is used to structure decisions and to represent real-world problems by models that can be analyzed to gain insight and understanding. The elements of a decision model are the decisions, uncertain events, and values of outcomes.
Decision Coverage
A white-box testing technique that measures the number of, or percentage of, decision directions executed by the test case designed. 100% decision coverage would indicate that all decision directions had been executed at least once during testing. Alternatively, each logical path through the program can be tested. Often, paths through the program are grouped into a finite set of classes, and one path from each class is tested.
Decision Table
A tool for documenting the unique combinations of conditions and associated results in order to derive unique test cases for validation testing.
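A small sketch of deriving test cases from a decision table; the loan-approval rule and condition names are hypothetical.

```python
from itertools import product

# Hypothetical rule: a loan is approved only when income is sufficient AND
# credit is good. Each unique condition combination becomes one test case.
conditions = ["income_ok", "credit_ok"]

decision_table = []
for values in product([True, False], repeat=len(conditions)):
    row = dict(zip(conditions, values))
    row["approve"] = all(values)       # expected result under this rule
    decision_table.append(row)

for row in decision_table:
    print(row)
# 4 rows; only income_ok=True, credit_ok=True yields approve=True.
```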
Decision Trees
This provides a graphical representation of the elements of a decision model.
Defect
Operationally, it is useful to work with two definitions of a defect:
• From the producer’s viewpoint, a defect is a product requirement that has not been met, or a product attribute possessed by a product or a function performed by a product that is not in the statement of requirements that defines the product;
• From the customer’s viewpoint, a defect is anything that causes customer dissatisfaction, whether in the statement of requirements or not.
A defect is an undesirable state. There are two types of defects: process and product.
Defect Management
Process to identify and record defect information whose primary goal is to prevent future defects.
Defect Tracking Tools
Tools for documenting defects as they are found during testing and for tracking their status through to resolution.
Deliverables
Any product or service produced by a process. Deliverables can be interim or external. Interim deliverables are produced within the process but never passed on to another process. External deliverables may be used by one or more processes. Deliverables serve as both inputs to and outputs from a process.
Design Level
The design decomposition of the software item (e.g., system, subsystem, program, or module).
Desk Checking
The most traditional means for analyzing a system or a program. Desk checking is conducted by the developer of a system or program. The process involves reviewing the complete product to ensure that it is structurally sound and that the standards and requirements have been met. This tool can also be used on artifacts created during analysis and design.
Detective Controls
Detective controls alert individuals involved in a process so that they are aware of a problem.
Discriminative Listening
Directed at selecting specific pieces of information and not the entire communication.
Do
Create the conditions and perform the necessary teaching and training to ensure everyone understands the objectives and the plan. (Plan-Do-Check-Act)
The procedures to be executed in a process. (Process Engineering)
Driver
Code that sets up an environment and calls a module for test. A driver causes the component under test to exercise its interfaces. As you move up the hierarchy, drivers are replaced with the actual components.
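A minimal driver sketch for bottom-up testing: it supplies the test input, calls the unit under test, and displays the output. The `calculate_discount` function is a hypothetical stand-in for the real module being tested.

```python
# Hypothetical unit under test (in bottom-up testing, a real bottom-level module).
def calculate_discount(amount, rate):
    return amount * (1 - rate)

def driver():
    """Driver module: provides input, calls the unit, displays the output."""
    test_inputs = [(100.0, 0.1), (200.0, 0.25)]
    results = []
    for amount, rate in test_inputs:
        result = calculate_discount(amount, rate)
        print(f"calculate_discount({amount}, {rate}) -> {result}")
        results.append(result)
    return results

driver()
```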
Dynamic Analysis
Analysis performed by executing the program code. Dynamic analysis executes or simulates a development phase product, and it detects errors by analyzing the response of a product to sets of input data.
Dynamic Assertion
A dynamic analysis technique that inserts into the program code assertions about the relationship between program variables. The truth of the assertions is determined as the program executes.
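A small sketch of the idea using Python's built-in `assert` statement; the bank-balance example is hypothetical.

```python
# Dynamic assertions: statements about relationships between program
# variables are embedded in the code and checked as it executes.
def withdraw(balance, amount):
    assert amount >= 0, "amount must be non-negative"
    new_balance = balance - amount
    # The assertion's truth is evaluated at run time, on real data.
    assert new_balance <= balance, "balance must not increase on withdrawal"
    return new_balance

print(withdraw(100, 30))  # 70
```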
Ease of Use and Simplicity
These are functions of how easy it is to capture and use the measurement data.
Effectiveness
Effectiveness means that the testers completed their assigned responsibilities.
Efficiency
Efficiency is the amount of resources and time required to complete test responsibilities.
Empowerment
Giving people the knowledge, skills, and authority to act within their area of expertise to do the work and improve the process.
Entrance Criteria
Required conditions and standards for work product quality that must be present or met for entry into the next stage of the software development process.
Environmental Controls
Environmental controls are the means which management uses to manage the organization.
Equivalence Partitioning
The input domain of a system is partitioned into classes of representative values so that the number of test cases can be limited to one-per-class, which represents the minimum number of test cases that must be executed.
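A minimal sketch for a hypothetical age field that is valid from 18 to 65: the domain splits into three classes, and one representative value per class suffices.

```python
# Equivalence partitioning sketch: one representative test value per class.
# The age field and its 18..65 valid range are illustrative assumptions.
partitions = {
    "below_valid": 10,   # invalid class: under 18
    "valid": 40,         # valid class: 18..65
    "above_valid": 70,   # invalid class: over 65
}

def is_valid_age(age):
    return 18 <= age <= 65

for name, value in partitions.items():
    print(name, value, is_valid_age(value))
```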
Error or Defect
A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition.
Error Guessing
Test data selection technique for picking values that seem likely to cause defects. This technique is based upon the theory that test cases and test data can be developed based on the intuition and experience of the tester.
Exhaustive Testing
Executing the program through all possible combinations of values for program variables.
Exit Criteria
Standards for work product quality, which block the promotion of incomplete or defective work products to subsequent stages of the software development process.
Exploratory Testing
The term “Exploratory Testing” was coined in 1983 by Dr. Cem Kaner. Dr. Kaner defines exploratory testing as “a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project.”
Failure Costs
All costs associated with defective products that have been delivered to the user and/or moved into production. Failure costs can be classified as either “internal” failure costs or “external” failure costs.
File Comparison
Useful in identifying regression errors. A snapshot of the correct expected results must be saved so it can be used for later comparison.
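A sketch of the comparison step using the standard `difflib` module; the snapshot contents are hypothetical.

```python
import difflib

# Snapshot of correct expected results saved from the last good run,
# compared against the current run's output (both hypothetical).
expected = ["total=100", "status=OK"]
actual = ["total=100", "status=FAILED"]

diff = list(difflib.unified_diff(expected, actual, lineterm=""))
for line in diff:
    print(line)
# Any diff output flags a potential regression error.
```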
Fitness for Use
Meets the needs of the customer/user.
Flowchart
Pictorial representations of data flow and computer logic. It is frequently easier to understand and assess the structure and logic of an application system by developing a flowchart than by attempting to understand narrative descriptions or verbal explanations. The flowcharts for systems are normally developed manually, while flowcharts of programs can be produced.
Force Field Analysis
A group technique used to identify both driving and restraining forces that influence a current situation.
Formal Analysis
Technique that uses rigorous mathematical techniques to analyze the algorithms of a solution for numerical properties, efficiency, and correctness.
FPA
Function Point Analysis (FPA): a sizing method in which the program’s functionality is measured by the number of ways it must interact with the users.
Functional System Testing
Functional system testing ensures that the system requirements and specifications are achieved. The process involves creating test conditions for use in evaluating the correctness of the application.
Functional Testing
Application of test data derived from the specified functional requirements without regard to the final program structure.
Gap Analysis
This technique determines the difference between two variables. A gap analysis may show the difference between perceptions of importance and performance of risk management practices. The gap analysis may show discrepancies between what is and what needs to be done. Gap analysis shows how large the gap is and how far the leap is to cross it. It identifies the resources available to deal with the gap.
Happy Path
Generally used within the discussion of Use Cases, the happy path follows a single flow uninterrupted by errors or exceptions from beginning to end.
Heuristics
Experience-based techniques for problem solving, learning, and discovery.
Histogram
A graphical description of individually measured values in a data set that is organized according to the frequency or relative frequency of occurrence. A histogram illustrates the shape of the distribution of individual values in a data set along with information regarding the average and variation.
Incremental Model
The incremental model approach subdivides the requirements specifications into smaller buildable projects (or modules). Within each of those smaller requirements subsets, a development life cycle exists which includes the phases described in the Waterfall approach.
Incremental Testing
Incremental testing is a disciplined method of testing the interfaces between unit-tested programs as well as between system components. It involves adding unit-tested programs to a given module or component one by one, and testing each resultant combination.
Infeasible Path
A sequence of program statements that can never be executed.
Influence Diagrams
Provides a graphical representation of the elements of a decision model.
Inherent Risk
Inherent Risk is the risk to an organization in the absence of any actions management might take to alter either the risk’s likelihood or impact.