ABA_Complete_Corrected_Flashcards
Abative effect (of a motivating operation)
A decrease in the current frequency of behavior that has been reinforced by some stimulus, object, or event whose reinforcing effectiveness depends on the same motivating operation.
For example, food ingestion abates (decreases the current frequency of) behavior such as opening the fridge that has been reinforced by food.
Abolishing operation (AO)
A motivating operation that decreases the reinforcing effectiveness of a stimulus, object, or event.
For example, the reinforcing effectiveness of food is abolished as a result of food ingestion.
Adjunctive Behaviors
Behavior that occurs as a collateral effect of a schedule of periodic reinforcement for other behavior; time-filling or interim activities (e.g. doodling, idle talking, drinking, smoking) that are induced by schedules of reinforcement during times when reinforcement is unlikely to be delivered.
(Also called schedule-induced behavior)
Alternative Schedule (ALT)
Provides an opportunity for reinforcement on two or more simultaneously available basic schedules of reinforcement. The first schedule completed provides reinforcement, regardless of which schedule component is met first.
With an alt FR 50/FI 5-min schedule, reinforcement is delivered whenever either of these two conditions has been met: (a) 50 correct responses, provided the 5-minute interval of time has not elapsed; or (b) the first response after the elapse of 5 minutes, provided that fewer than 50 responses have been emitted.
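A minimal sketch of the alt FR 50/FI 5-min rule just described (the function name and parameters are illustrative, not from any standard library); reinforcement is delivered when either component is satisfied first:

```python
def alt_fr_fi_met(responses, seconds_elapsed, fr=50, fi_seconds=300):
    """Alternative schedule: reinforce when EITHER the ratio or the interval requirement is met."""
    fr_met = responses >= fr and seconds_elapsed < fi_seconds   # (a) 50 responses before 5 minutes elapse
    fi_met = seconds_elapsed >= fi_seconds and responses >= 1   # (b) first response after 5 minutes elapse
    return fr_met or fi_met

print(alt_fr_fi_met(responses=50, seconds_elapsed=240))  # True: the FR component was met first
print(alt_fr_fi_met(responses=12, seconds_elapsed=305))  # True: the FI component was met first
```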
Antecedent
An environmental condition or stimulus change existing or occurring prior to a behavior of interest.
Antecedent Manipulation
Altering the environment before a behavior occurs in ways that encourage desired behaviors and reduce problem behaviors.
Antecedent Stimulus Class
A set of stimuli that share a common relationship. All stimuli in an antecedent stimulus class evoke the same operant behavior, or elicit the same respondent behavior.
Applied Behavior Analysis (ABA)
The science in which tactics derived from the principles of behavior are applied to improve socially significant behavior and experimentation is used to identify the variables responsible for the improvement in behavior.
Arbitrary Stimulus Class
Antecedent stimuli that evoke the same response but do not resemble each other in physical form or share a relational aspect such as bigger or under (e.g., peanuts, cheese, coconut milk, and chicken breasts are members of an arbitrary stimulus class if they evoke the response “sources of protein”).
(Compare to feature stimulus class.)
Artifact
An outcome or result that appears to exist because of the way it is measured but in fact does not correspond to what actually occurred.
Autoclitic
The autoclitic relation involves two interlocking levels of verbal behavior emitted in one utterance. One level is a primary response (e.g., “The ice is solid”), while the other type is the secondary autoclitic response (e.g., adding “I think”). Autoclitic behavior benefits the listener by providing additional information regarding the primary response.
Automatic contingencies
Skinner (1957) used ‘automatic’ to identify circumstances in which behavior is evoked, shaped, maintained, or weakened by environmental variables occurring without direct manipulation by other people.
Automatic Reinforcement
Reinforcement that occurs independent of the social mediation of others.
(e.g., scratching an insect bite relieves the itch).
Automaticity of Reinforcement
Refers to the fact that behavior is modified by its consequences irrespective of the person’s awareness; a person does not have to recognize or verbalize the relation between her behavior and a reinforcing consequence, or even know that a consequence has occurred, for reinforcement to “work.”
Aversive stimulus
In general, an unpleasant or noxious stimulus; more technically, a stimulus change or condition that functions (a) to evoke a behavior that has terminated it in the past, (b) as a punisher when presented following behavior, and/or (c) as a reinforcer when withdrawn following behavior.
Avoidance Contingency
A contingency in which a response prevents or postpones the presentation of a stimulus.
Bar Graph
A simple and versatile graphic format for summarizing behavioral data; shares most of the line graph’s features except that it does not have distinct data points representing successive response measures through time. (Also called a histogram.)
Behavior
That portion of an organism’s interaction with its environment that involves movement of some part of the organism.
(See also operant behavior, respondent behavior, response, and response class.)
Behavior Change Tactic
A technologically consistent method for changing behavior derived from one or more principles of behavior (e.g., response cost is derived from the principle of negative punishment); possesses sufficient generality across subjects, settings, and/or behaviors to warrant its codification and dissemination.
Behavioral Momentum
Describes the resistance to change in a behavior’s rate of responding following an alteration in reinforcement conditions. The momentum metaphor has also been used to describe the effects produced by the high-probability (high-p) request sequence. Behavioral momentum is based on the idea that people are more likely to engage in a behavior if they have experienced success leading up to it.
Behavioral Contrast
Behavioral contrast is a change in the rate of responding in one component of a multiple schedule of reinforcement that results from changing the reinforcement conditions in another component. A multiple schedule means there are two or more conditions (components) with different rules for getting reinforcement, and each condition has a clear signal (like a color or sound) to tell the person (or animal) which schedule is in place.
A simple example:
Let’s say a child is working under a multiple schedule with two components:
Component A: Reinforcement every 2 minutes for good behavior (signaled by a green light)
Component B: Reinforcement every 5 minutes (signaled by a red light)
If you stop giving reinforcement in Component A (extinction), you might notice that the child responds more in Component B, even though you didn’t change anything in B. That increase is behavioral contrast.
Behavior-altering effect (of a motivating operation)
Either (a) an increase in the current frequency of behavior that has been reinforced by some stimulus, object, or event, called an evocative effect; or (b) a decrease in the current frequency of behavior that has been reinforced by some stimulus, object, or event, called an abative effect. For example, the current frequency of behavior that has been reinforced with food, such as opening the fridge, is evoked (increased) or abated (decreased) by food deprivation or food ingestion, respectively.
Behaviorism
The philosophy of a science of behavior; there are various forms of behaviorism. (See also methodological behaviorism and radical behaviorism.)
Bidirectional Naming (BiN)
BiN is when a child learns a word by hearing it (listener) or saying it (tact), and can automatically do the other without being directly taught. It means the speaker and listener skills are connected, showing two-way learning.
Example of BiN:
A child is shown a zebra and someone says,
“Look, it’s a zebra!”
The child has never said the word “zebra” before, but they hear it and learn to identify it (listener behavior).
Later, when the child sees a zebra in a book, they point and say, “Zebra!”
—They tacted it without being directly taught to do so.
That’s BiN:
They learned the listener skill, and the speaker skill emerged without training.
If it worked the other way (they learned to say “zebra” first and then could find it when asked), that’s also BiN.
Bonus Response Cost
A procedure for implementing response cost in which the person is provided a reservoir of reinforcers that are removed in predetermined amounts contingent on the occurrence of the target behavior. In other words, the individual receives a reserve of bonus reinforcers up front, and a portion of that reserve is taken away as a consequence for each occurrence of the inappropriate behavior.
Example of Bonus Response Cost:
A teacher gives a student 5 extra tokens at the start of the day (these are bonus tokens—they didn’t earn them yet).
Every time the student shouts out without raising their hand, the teacher takes away 1 token.
At the end of the day, the student can trade their remaining tokens for a reward (like extra playtime or a small prize).
🔁 Why it’s bonus response cost:
The tokens were given in advance as a “reserve.”
The student loses from the bonus, not from what they earned.
Celeration
The change (acceleration or deceleration) in rate of responding over time; based on count per unit of time (rate); expressed as a factor by which responding is accelerating or decelerating (multiplying or dividing); displayed with a trend line on a Standard Celeration Chart. Celeration is a generic term without specific reference to accelerating or decelerating rates of response.
Example of Celeration:
A child is learning to label animals.
In Week 1, they correctly label 2 animals per minute.
In Week 2, they label 4 animals per minute.
In Week 3, they label 8 animals per minute.
That’s an acceleration in the rate of responding.
We can say the celeration is ×2 per week—because the rate is doubling each week.
This trend would be shown on a Standard Celeration Chart with a steep upward trend line.
🧠 Key takeaway:
Celeration shows how fast the behavior is changing, not just how fast it is happening.
(See also Standard Celeration Chart.)
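A quick arithmetic check of the ×2-per-week celeration in the example above (the weekly rates come from the example; the variable names are illustrative):

```python
# Correct labels per minute measured in Weeks 1-3 (from the example above).
weekly_rates = [2, 4, 8]

# Celeration: the factor by which the response rate multiplies (or divides) each week.
celerations = [later / earlier for earlier, later in zip(weekly_rates, weekly_rates[1:])]
print(celerations)  # [2.0, 2.0] -> responding is accelerating by x2 per week
```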
Chained Schedule (Chain)
A schedule of reinforcement in which the response requirements of two or more basic schedules must be met in a specific sequence before reinforcement is delivered; a discriminative stimulus is correlated with each component of the schedule.
Cumulative Record
A type of graph on which the cumulative number of responses emitted is represented on the vertical axis; the steeper the slope of the data path, the greater the response rate.
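A minimal sketch of how a cumulative record is built (the per-minute counts are made-up illustration data): each plotted value adds the newest count to the running total, so steeper segments mean higher response rates:

```python
# Responses observed in successive 1-minute bins (illustrative data).
responses_per_minute = [3, 5, 0, 2, 7]

cumulative_record = []
running_total = 0
for count in responses_per_minute:
    running_total += count            # the cumulative count never decreases
    cumulative_record.append(running_total)

print(cumulative_record)  # [3, 8, 8, 10, 17] -> a flat segment (8, 8) means no responding that minute
```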
Cumulative Recorder
A mechanical or electronic device that produces a cumulative record by marking responses as they occur over time.
Class expansion
A new member is added to a demonstrated stimulus equivalence class as the result of teaching a new conditional discrimination.
Example of Class Expansion:
A child has an equivalence class for “dog” made up of the spoken word “dog,” a picture of a dog, and the written word DOG.
The child is then taught one new conditional discrimination: matching the Spanish word “perro” to the picture of a dog.
Without any further teaching, the child now also matches “perro” to the spoken word “dog” and to the written word DOG.
🧠 What makes it “class expansion”?
Teaching a single new conditional discrimination added a new member (“perro”) to the existing equivalence class, and that new member now relates to every other member of the class without direct teaching of each relation.
Class merger
Independent equivalence classes are combined as the result of teaching a new but interrelated conditional discrimination.
Example of Class Merger:
A child has two separate equivalence classes:
Class 1: the spoken word “dog,” a picture of a dog, and the written word DOG.
Class 2: the spoken word “perro,” a different dog picture, and the written word PERRO.
The child is then taught one new, interrelated conditional discrimination—matching the written word DOG to the written word PERRO.
Without further teaching, members of the two classes now go together (e.g., the child matches “perro” to the picture from Class 1).
🧠 Why is this class merger?
Two previously independent equivalence classes have combined into one larger class as a result of teaching a single new conditional discrimination that interrelates them. Relations emerge that were never directly taught—generative responding.
Class-specific reinforcement
A match-to-sample procedure in which not only is the correct comparison choice conditional on the sample stimulus, but the type of consequence delivered is, too; class-specific consequences themselves become members of the equivalence classes.
Example of Class-Specific Reinforcement:
You’re teaching a child to identify animals vs. vehicles.
When the child correctly labels an animal, you give a high-five and say “You’re right, that’s an animal!”
When the child correctly labels a vehicle, you roll a toy car toward them and say, “Yes, that’s a vehicle!”
The type of reinforcement depends on the response class—different reinforcers are used for different correct answers.
🧠 Why is this class-specific reinforcement?
Because the reinforcement is specific to each class of behavior (animals vs. vehicles), not just a general reward for being right. This helps strengthen the child’s ability to discriminate between response classes.
Codic
A type of verbal behavior where the form of the response is under the functional control of a verbal stimulus with point-to-point correspondence, but without formal similarity. There is also a history of generalized reinforcement.
Example of a Codic:
A student sees the written word “cat” and says aloud, “cat.”
They are reading the word, but their spoken response matches the text (has point-to-point correspondence), and the stimulus is visual (the written word), not auditory.
🧠 Why is this a codic?
Point-to-point correspondence? ✅ Yes (the spoken word matches the written word).
Same sense mode? ❌ No (visual → vocal, so the input and output are in different sensory modalities).
This makes it a codic, specifically textual behavior.
2 Types of Codics
- Textual Codic - Reading a written word and responding verbally.
- Transcription Codic - Hearing a spoken word and responding by writing it down.
Compound Schedule of Reinforcement
A schedule of reinforcement consisting of two or more elements of continuous reinforcement (CRF), the four intermittent schedules of reinforcement (FR, VR, FI, VI), differential reinforcement of various rates of responding (DRH, DRL), and extinction. The elements from these basic schedules can occur successively or simultaneously and with or without discriminative stimuli; reinforcement may be contingent on meeting the requirements of each element of the schedule independently or in combination with all elements.
Example of a Compound Schedule:
A child is working in an ABA session where two different schedules are combined:
FR5 (Fixed Ratio 5): They get a token every 5 correct responses.
FI 2 min (Fixed Interval 2 minutes): They can only exchange their tokens for a break every 2 minutes.
So, they’re earning tokens based on how many responses they make (FR5), but they can only access the backup reinforcer (like a break or toy) after a fixed time passes (FI 2 min).
🧠 Why is this a compound schedule?
Because it combines two or more basic reinforcement schedules (in this case, FR and FI) that are in effect at the same time or in sequence.
Compound verbal discrimination
Involves two or more verbal SDs (convergent multiple control) that each independently evoke behavior; when they occur together in the same antecedent configuration, a different SD is generated and a more specific behavior is evoked.
Example of Compound Verbal Discrimination:
An instructor says:
“Touch the big red car.”
There are:
A big red car
A small red car
A big blue car
The child must attend to both the size (“big”) and color (“red”) to choose the correct item.
🧠 Why is this compound verbal discrimination?
Because the correct response depends on understanding two or more verbal cues together (“big” and “red”). The child has to discriminate based on the combination of these verbal stimuli—not just one.
Concept
A stimulus class whose members share a common feature.
Concurrent Schedule (Conc)
A schedule of reinforcement in which two or more contingencies of reinforcement (elements) operate independently and simultaneously for two or more behaviors.
Example 1 of a Concurrent Schedule:
In an ABA session, a child has two choices available at the same time:
Option 1: Press a button to hear music — on a VI 1-minute schedule (music plays about once every minute).
Option 2: Spin a fidget spinner — on an FR 3 schedule (they get access to the spinner after every 3 presses).
The child can freely choose between these two options at any time, and their choice will influence how often each behavior is reinforced.
🧠 Why is this a concurrent schedule?
Because two or more reinforcement schedules are available at the same time, and the learner chooses which one to respond to. It’s great for studying preference, matching law, and choice behavior.
Example 2 A Concurrent Schedule with Token Boards:
A student has two token boards to choose from during work time:
Board A: Earn 5 tokens (FR5) to play with the iPad for 2 minutes.
Board B: Earn 10 tokens (FR10) to play outside for 5 minutes.
Both boards are available at the same time, and the student can switch between them at will.
They might work quickly on Board A for faster iPad breaks or choose Board B when they really want to go outside, even though it takes more effort.
Conditional Discrimination
Performance in a match-to-sample procedure in which discrimination between the comparison stimuli is conditional on the sample stimulus present on each trial.
Example of Conditional Discrimination:
An instructor places three cards on the table:
🍎 Apple
🍌 Banana
🍇 Grapes
Then they say:
“Touch fruit that is red.”
The child must select 🍎 Apple — not just because it’s a fruit, but because it’s red.
✅ Why is this a conditional discrimination?
Because the correct response (choosing the apple) depends on an additional condition — in this case, the color “red”. The child has to discriminate based on both the object category and the condition given in the instruction.
Conditioned Motivating Operation (CMO)
A motivating operation whose value-altering effect depends on a learning history; that is, a motivating variable that alters the reinforcing effectiveness of other stimuli as a result of the organism’s learning history. For example, because of the relation between locked doors and keys, having to open a locked door is a CMO that makes keys more effective as reinforcers and evokes behavior that has obtained such keys.
Example of a CMO (specifically a CMO-T, Transitive):
A child is working on a tablet that suddenly locks and shows a pop-up that says, “Enter password.”
Now, the tablet is not accessible unless the password is entered.
This makes the password (or the person who knows it) more valuable, even though it wasn’t before.
So the child mands (asks), “Can you help me unlock it?”
🧠 Why is this a CMO?
The locked tablet creates a new need (getting the password).
It increases the value of something previously neutral (the password or help unlocking).
It evokes behavior (asking for help) to gain access to the reinforcer (tablet use).
This is a CMO-T, because something in the environment (locked screen) has made another stimulus (password/help) necessary to access the reinforcer.
Conditioned Negative Reinforcer
A previously neutral stimulus change that functions as a negative reinforcer because of prior pairing with one or more negative reinforcers.
Example of a Conditioned Negative Reinforcer:
A student hears a loud alarm sound in the classroom when the timer goes off.
They learn that if they press a button quickly, the sound stops.
Now, just hearing the alarm motivates the student to press the button immediately to make it go away.
🧠 Why is this a conditioned negative reinforcer?
The alarm sound is learned to be unpleasant (not naturally aversive like pain or hunger).
The removal of the sound strengthens the behavior (pressing the button).
The behavior increases in the future because it removes a conditioned aversive stimulus.
Conditioned Punisher
A stimulus change whose presentation functions as punishment as a result of a person’s conditioning history.
Example of a Conditioned Punisher:
A child touches a toy they’re not supposed to, and the teacher says “No!” in a firm voice.
At first, the word “No” didn’t have much meaning.
But over time, the child learns that “No” is often followed by loss of access to fun things or disapproval.
Now, just hearing “No!” causes the child to stop the behavior immediately.
🧠 Why is this a conditioned punisher?
The word “No” has become aversive through learning (it’s been paired with negative consequences).
It decreases the future likelihood of the behavior it follows.
It’s not naturally punishing — its effect is due to conditioning.
Conditioned Reflex
A learned stimulus–response functional relation consisting of an antecedent stimulus (e.g., sound of refrigerator door opening) and the response it elicits (e.g., salivation); each person’s repertoire of conditioned reflexes is the product of his or her history of interactions with the environment (ontogeny).
🔁 Example of a Conditioned Reflex:
A child hears the sound of the freezer door opening and starts salivating because they’ve learned it means ice cream is coming.
Originally, the freezer sound didn’t cause any reaction.
But after repeated pairings with getting ice cream, the sound now triggers salivation automatically.
🧠 Why is this a conditioned reflex?
It’s a learned (conditioned) response — salivating to a sound that wasn’t naturally triggering before.
The sound (conditioned stimulus) has been paired with ice cream (unconditioned stimulus).
Now, the sound alone evokes the reflexive response (salivation).
(See also respondent conditioning and unconditioned reflex.)
Conditioned Reinforcer
A stimulus change that functions as a reinforcer because of prior pairing with one or more other reinforcers.
⭐ Example of a Conditioned Reinforcer:
A therapist gives a child a token each time they complete a task.
The token itself isn’t valuable at first, but over time, the child learns that 5 tokens = playtime with a favorite toy.
Now, the child is motivated to earn tokens, even before getting the actual toy.
🧠 Why is this a conditioned reinforcer?
The token becomes reinforcing because it’s consistently paired with access to a primary reinforcer (like playtime).
It gains its value through learning, not biology.
It increases the likelihood of the behavior it follows.
Conditioned Stimulus (CS)
The stimulus component of a conditioned reflex; a formerly neutral stimulus change that elicits respondent behavior only after it has been paired with an unconditioned stimulus (US) or another CS.
🔔 Example of a Conditioned Stimulus:
A child hears a bell ring right before snack time every day.
At first, the bell means nothing.
But after several pairings with snack time, the bell alone makes the child feel excited and start looking for food.
🧠 Why is this a conditioned stimulus?
The bell was originally neutral.
It was repeatedly paired with an unconditioned stimulus (snack/food).
Now, the bell alone triggers a learned (conditioned) response, like excitement or salivation.
Conjunctive Schedule (Conj)
A schedule of reinforcement that is in effect whenever reinforcement follows the completion of response requirements for two or more schedules of reinforcement.
🔗 Example of a Conjunctive Schedule:
A child is working in an ABA session and must:
Complete 5 math problems (FR5)
AND Work for at least 2 minutes (FI 2 min)
before earning a break.
Even if they finish the 5 problems quickly, they must still wait until 2 minutes have passed.
If 2 minutes pass but they’ve only done 3 problems, they still have to finish all 5.
🧠 Why is this a conjunctive schedule?
Because two different schedules (FR and FI) must be met together for reinforcement to be delivered.
Think of it as “this AND that” — both requirements must be completed.
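A minimal sketch of the FR 5 AND FI 2-min example above (the values mirror the example; the function name is illustrative):

```python
def conj_fr_fi_met(responses, seconds_elapsed, fr=5, fi_seconds=120):
    """Conjunctive schedule: BOTH the ratio and the interval requirement must be met."""
    return responses >= fr and seconds_elapsed >= fi_seconds

print(conj_fr_fi_met(responses=5, seconds_elapsed=90))   # False: finished the problems, but 2 minutes have not passed
print(conj_fr_fi_met(responses=3, seconds_elapsed=130))  # False: time is up, but the problems are not
print(conj_fr_fi_met(responses=5, seconds_elapsed=130))  # True: break earned
```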
Consequence
A stimulus change that follows a behavior of interest. Some consequences, especially those that are immediate and relevant to current motivational states, have significant influence on future behavior; others have little effect.
Contextual Control
The situation or context in which a stimulus (or stimulus class) occurs determines its function. More specifically: a type of stimulus control requiring three levels of antecedent stimuli, such that the functions of the stimuli in a conditional discrimination vary depending on the context. Contextual control training requires a five-term contingency. It allows for the same stimuli to be members of more than one equivalence class, depending on the context.
🧠 Example of Contextual Control:
A child sees a ball.
On the playground, they say, “Let’s play!” and kick it.
In the classroom, they say, “That’s a circle.” when asked about shapes.
In clean-up time, they pick it up and put it in the bin.
✅ Why is this contextual control?
Because the same stimulus (the ball) evokes different behaviors depending on the context or setting.
The environmental cues determine which response is appropriate.
Contingency
Refers to dependent and/or temporal relations between operant behavior and its controlling variables.
✅ (Simplified Definition):
A contingency means that a behavior and a consequence are connected — the consequence depends on the behavior happening first.
It’s basically an “if–then” relationship:
If the behavior happens, then the consequence follows.
Example:
If a child says “please,” then they get the toy.
The toy is contingent on saying “please.”
Contingency-shaped behavior
Behavior acquired by direct experience with contingencies.
Example:
A child touches a hot stove and immediately feels pain. They learn not to touch the stove again through direct experience with the consequence.
🧠 Why is this contingency-shaped?
Because the behavior (touching the stove) was shaped by direct contact with the consequence (pain).
No one told the child what would happen — they learned through experience.
Contingent
Describes reinforcement (or punishment) that is delivered only after the target behavior has occurred.
Contingent Exercise (positive punishment technique)
Contingent exercise is a behavior reduction procedure in which the learner is required to perform a physical activity (like jumping jacks or push-ups) immediately after engaging in a problem behavior.
The exercise is not related to the function of the behavior — it serves as a punishment procedure meant to reduce future occurrences of the target behavior.
🧠 Key Points:
The exercise is contingent (dependent) on the problem behavior.
It’s a punisher — intended to decrease the behavior.
It must be implemented ethically and only under professional supervision (e.g., by a BCBA with consent).
📌 Examples:
After hitting another student, a child is immediately required to do 10 jumping jacks.
A teen throws materials off the table during class. The teacher has them do 5 wall push-ups before returning to work.
A student curses loudly during a group activity, and the instructor has them run one lap around the gym.
Contingent Observation (negative punishment technique)
A procedure for implementing time-out in which the person is repositioned within an existing setting such that observation of ongoing activities remains, but access to reinforcement is lost.
👀 Example of Contingent Observation:
During a group game, a child hits another child instead of taking turns.
As a consequence, the teacher has the child sit just outside the game area where they can see and hear the others playing, but cannot participate for 3 minutes.
After that, the child is invited to rejoin the group.
🧠 Why is this contingent observation?
The child is temporarily removed from access to reinforcement (the game),
But they are still able to observe others receiving reinforcement appropriately.
It’s a non-exclusionary form of time-out designed to reduce problem behavior while promoting observational learning.
Continuous Reinforcement (CRF) (positive reinforcement technique)
A schedule of reinforcement that provides reinforcement for each occurrence of the target behavior.
Nonverbal Controlling Stimulus
Audience.
Copying text
An elementary verbal operant involving a written response that is evoked by a written verbal discriminative stimulus.
Count
A simple tally of the number of occurrences of a behavior.
Dead Metaphor
Tact.
Dependent Variables in ABA
The target behavior in an ABA experiment, or more precisely a measurable dimensional quantity of that behavior. Another way of saying it is the dependent variable is dependent on (i.e. a function of) the independent variable(s) manipulated by the investigator. The Behavior we want to change.
Deprivation
The state of an organism with respect to how much time has elapsed since it has consumed or contacted a particular type of reinforcement; also refers to a procedure for increasing the effectiveness of a reinforcer (e.g. withholding a person’s access to a reinforcer for a specified period prior to the session).
Derived stimulus relations
Responding indicating a relation (e.g. same as, opposite, different from, better than) between two or more stimuli that emerges as an indirect function of related instruction or experience. (Also called emergent stimulus relations.)
Determinism
The assumption that the universe is a lawful and orderly place in which phenomena occur in relation to other events and not in a willy-nilly, accidental fashion.
Differential Reinforcement of Diminishing Rates (DRD)
A schedule of reinforcement in which reinforcement is provided at the end of a predetermined interval contingent on the number of responses emitted during the interval being fewer than a gradually decreasing criterion based on the individual’s performance in previous intervals (e.g., fewer than five responses per 5 minutes, fewer than four responses per 5 minutes, fewer than three responses per 5 minutes).
Differential Reinforcement of High Rates (DRH)
A schedule of reinforcement in which reinforcement is provided at the end of a predetermined interval contingent on the number of responses emitted during the interval being greater than a gradually increasing criterion based on the individual’s performance in previous intervals (e.g., more than three responses per 5 minutes, more than five responses per 5 minutes, more than eight responses per 5 minutes).
Differential Reinforcement of low rates (DRL)
A schedule of reinforcement in which reinforcement (a) follows each occurrence of the target behavior that is separated from the previous response by a minimum interresponse time (IRT), or (b) is contingent on the number of responses within a period of time not exceeding a predetermined criterion. Practitioners use DRL schedules to decrease the rate of behaviors that occur too frequently but should be maintained in the learner’s repertoire.
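A minimal sketch of the spaced-responding (IRT-based) variant described in (a); the response times and the 60-second minimum IRT are illustrative:

```python
# Seconds at which the target behavior occurred during a session (illustrative data).
response_times = [0, 45, 120, 150, 260]
MIN_IRT = 60  # reinforce a response only if it follows the previous one by at least 60 s

reinforced = [
    later - earlier >= MIN_IRT
    for earlier, later in zip(response_times, response_times[1:])
]
print(reinforced)  # [False, True, False, True] -> only adequately spaced responses produce reinforcement
```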
Dimensions (measurement)
Dimensions refer to the different ways behavior can be quantified to ensure accurate data collection and analysis. These dimensions help describe how a behavior occurs in terms of count, time, and intensity.
7 Dimensions of ABA
Applied, behavioral, analytic, technological, conceptually systematic, effective, and generality (Baer, Wolf, & Risley, 1968).
Discrete trial
Any operant whose response rate is controlled by a given opportunity to emit the response. Each discrete response occurs when an opportunity to respond exists. Discrete trial, restricted operant, and controlled operant are synonymous technical terms.
Discriminated Avoidance
A contingency in which responding in the presence of a signal prevents the onset of a stimulus from which escape is a reinforcer.
Discriminated Operant
An operant that occurs more frequently under some antecedent conditions than under others.
Discriminative Stimulus (SD)
A stimulus that signals the availability of reinforcement for a specific behavior. It increases the likelihood that a particular behavior will occur because the individual has learned that engaging in that behavior in the presence of the SD leads to reinforcement.
Discriminative Stimulus for Punishment (SDP)
A stimulus in the presence of which a given behavior has been punished and in the absence of which that behavior has not been punished; as a result of this history, the behavior occurs less often in the presence of the SDP than in its absence.
Distorted Tact
A tact that has been altered by external reinforcement rather than being purely under the control of the stimuli it describes. This can happen when social reinforcement encourages exaggeration, underreporting, or fabrication.
Duplic
A type of verbal behavior where the form of the response is under the functional control of a verbal stimulus with point-to-point correspondence and formal similarity, and a history of generalized reinforcement.
Echoic
An elementary verbal operant involving a vocal response that is evoked by a vocal verbal SD that has formal similarity between an auditory verbal stimulus and an auditory verbal response product, and a history of generalized reinforcement.
Elementary verbal operants
Michael’s (1982) term for Skinner’s (1957) taxonomy of five different types of speaker behavior (i.e., expressive language) distinguished by their antecedent controlling variables and related history of consequences: mand, tact, intraverbal, duplic, and codic.
Emergent stimulus relations
Stimulus relations that are not taught directly but emerge as an indirect function of related instruction or experience. (Also called derived stimulus relations.)
Empiricism
The practice of observing, measuring, and objectively analyzing behavior using data. It means that conclusions and decisions are based on evidence from direct observation rather than personal opinions or assumptions.
Environment
The environment refers to everything surrounding an individual that can influence their behavior. This includes people, objects, settings, and events that occur before or after a behavior. The environment plays a crucial role in shaping, maintaining, or changing behavior.
Equivalence-class formation
The emergence of accurate responding to untrained and nonreinforced stimulus–stimulus relations following the reinforcement of responses to some stimulus–stimulus relations. Requires successful performances on three types of probe trials—reflexivity, symmetry, and transitivity—in the absence of reinforcement. (Sometimes called stimulus equivalence.)
Errorless Learning
A teaching method in Applied Behavior Analysis (ABA) that minimizes mistakes by providing immediate prompts and gradually fading them over time. The goal is to help learners develop the correct response without experiencing frustration from repeated errors.
Escape Contingency
A contingency in which a response terminates (produces escape from) an ongoing stimulus.
Escape extinction
A behavior reduction strategy that prevents a person from escaping or avoiding a demand after engaging in problem behavior. The goal is to teach that engaging in challenging behavior will no longer allow them to escape or avoid a task, leading to a decrease in that behavior over time.
Establishing Operation (EO)
An establishing operation (EO) is defined by Michael (1987) as an environmental event, operation, or stimulus condition which affects an organism by momentarily altering (1) the reinforcing effectiveness of other events, and (2) the strength of that part of the organism’s repertoire that has been reinforced by those other events.
Evocative Effect (of a motivating operation)
An increase in the current frequency of behavior that has been reinforced by some stimulus, object, or event whose reinforcing effectiveness depends on the same motivating operation. For example, food deprivation evokes (increases the current frequency of) behavior such as opening the fridge that has been reinforced by food.
Exclusion (training)
A procedure for building new arbitrary conditional discriminations based on the robust finding that learners will select a novel comparison stimulus over a known one in the presence of a novel sample.
Exclusion time-out
Negative Punishment Technique
Experiment
An experiment is a controlled comparison of some measure of the phenomenon of interest (the dependent variable) under two or more different conditions in which only one factor at a time (the independent variable) differs from one condition to another.
Experimental Analysis of Behavior (EAB)
An Experimental Analysis of Behavior (EAB) experiment is essentially a controlled study where researchers systematically compare how a specific behavior (dependent variable) changes when manipulating only one environmental factor (independent variable) at a time across different conditions, allowing them to identify functional relationships between the environment and the behavior being observed.
Experimental Design
The particular type and sequence of conditions in a study arranged so that meaningful comparisons of the effects of the presence and absence (or different values) of the independent variable can be made.
Explanatory Fiction
A fictitious variable that often is simply another name for the observed behavior that contributes nothing to an understanding of the variables responsible for developing or maintaining the behavior. Explanatory fictions are the key ingredient in “a circular way of viewing the cause and effect of a situation.”
Extinction
The discontinuing of reinforcement of a previously reinforced behavior (i.e., responses no longer produce reinforcement); the primary effect is a decrease in the frequency of the behavior until it reaches a prereinforcement level or ultimately ceases to occur.
Extinction burst
An increase in the frequency of responding when an extinction procedure is initially implemented.
Extinction-induced variability
Phenomenon in which diverse and novel forms of behavior are sometimes observed during the extinction process.
Fading
A procedure for transferring stimulus control in which features of an antecedent stimulus (e.g., shape, size, position, color) controlling a behavior are gradually changed to a new stimulus while maintaining the current behavior; stimulus features can be faded in (enhanced) or faded out (reduced).
Feature stimulus class
Stimuli that share common physical forms or structures (e.g., made from wood, four legs, round, blue) or common relative relationships (e.g., bigger than, hotter than, higher than, next to).
Fixed Interval (FI)
Schedule of Positive Reinforcement
Fixed Ratio (FR)
Schedule of Positive Reinforcement
Formal similarity
Formal similarity occurs when the controlling verbal stimulus and the response product are (1) in the same sense mode (both are visual, or both are auditory, or both are tactile, etc.) and (2) physically resemble each other (e.g., hear “bear,” say “bear”).
Free-operant avoidance
A contingency in which a response at any time during an interval prior to the scheduled onset of an aversive stimulus delays the presentation of that stimulus.
Generalized Conditioned Punisher
A stimulus change that has been paired with numerous forms of unconditioned and conditioned punishers becomes a generalized conditioned punisher. Reprimands (“No!” “Don’t do that!”) and disapproving gestures (e.g., scowls, head shakes, frowns) are generalized conditioned punishers for many people because they have been paired repeatedly with a wide range of unconditioned or conditioned punishers (e.g., burned finger, loss of privileges).
Habilitation
Habilitation (adjustment) occurs when a person’s repertoire has been changed such that short- and long-term reinforcers are maximized and short- and long-term punishers are minimized.
Habituation
A decrease in responsiveness to repeated presentations of a stimulus; most often used to describe a reduction of respondent behavior as a function of repeated presentation of the eliciting stimulus over a short span of time; some researchers suggest that the concept also applies to within-session changes in operant behavior.
Higher-order Conditioning
Development of a conditioned reflex by pairing of a neutral stimulus (NS) with a conditioned stimulus (CS).
Higher-order operant class
A class of operants that includes other operant classes within it; its members share a common functional relation to the environment rather than a common topography (e.g., generalized imitation), and contingencies applied to the higher-order class can govern the behavior of its member classes.
Incidental Bidirectional Naming
The spontaneous emergence of both listener and speaker behaviors (i.e., pointing to an object when its name is said and verbally labeling an object) without any direct teaching or reinforcement for either behavior, simply by being exposed to the names of novel items in a natural environment; essentially, learning to both understand and say a new word without being explicitly taught to do so.
Independent Variable in ABA
The aspect of the environment that the experimenter manipulates. Another way of saying it is the intervention or treatment being manipulated by the researcher (e.g., a specific behavioral strategy). Also called: experimental variable, intervention, or treatment. It’s what is done to change the dependent variable.
Indirect or Unrelated Reinforcer
A situation in which a behavior is reinforced not directly by an immediate, tangible consequence it produces, but through a mediated process—often social interaction or a rule—so that the reinforcer is delivered by another person or bears no functional relation to the response it follows. Example: A child tacts “car” and is reinforced with a Skittle.
Interresponse time
A measure of the amount of time that elapses between two consecutive instances of a behavior. Like response latency, IRT is a measure of temporal locus because it identifies when a specific instance of behavior occurs with respect to another event (i.e., the previous response).
Intermittent Schedule of Reinforcement (INT)
A contingency of reinforcement in which some, but not all, occurrences of the behavior produce reinforcement.
Intraverbal
Elementary Verbal Operant
Joint Control
A phenomenon in which two separate, but interrelated forms of a person’s own verbal behavior, combine to acquire stimulus control of a response that would not have occurred in the absence of either.
Lag Schedule
A lag schedule of reinforcement is one method to increase response variability. Reinforcement on a lag schedule is contingent on a response differing in some predetermined way (e.g., different topography, different sequence) from one or more responses that preceded it. With a Lag 1 schedule, each response that differs from the prior response produces reinforcement. Reinforcement on a Lag 2 schedule is contingent on a response being different from the previous 2 responses, a Lag 3 requires a response to differ from the previous 3 responses, and so on. To produce reinforcement on a lag infinity schedule, a response must differ from all prior responses.
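A minimal sketch of a Lag 2 check on response topographies (the response labels and helper name are illustrative):

```python
def lag_met(current, previous_responses, lag=2):
    """Lag N schedule: reinforce only if the current response differs from each of the previous N responses."""
    return all(current != prior for prior in previous_responses[-lag:])

history = ["wave", "clap"]
print(lag_met("wave", history))  # False: repeats one of the last 2 responses
print(lag_met("jump", history))  # True: differs from both of the last 2 responses -> reinforcement
```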
Language
Verbal Behavior
Latency
A measure of the elapsed time between the onset of a stimulus and a subsequent response. How long it takes the client to respond after they hear the question.
Level (line graph)
The overall value on the vertical axis around which a series of behavioral measures converge; on a line graph, level is examined together with trend and variability to describe behavior change.
Limited Hold
When a limited hold is added to an interval schedule, reinforcement remains available for a finite time following the elapse of the FI or VI interval. The participant will miss the opportunity to receive reinforcement if a targeted response does not occur within the time limit. For example, on an FI 5-min schedule with a limited hold of 30 seconds, the first correct response following the elapse of 5 minutes is reinforced, but only if the response occurs within 30 seconds after the end of the 5-minute interval. If no response occurs within 30 seconds, the opportunity for reinforcement has been lost and a new interval begins. The abbreviation LH identifies interval schedules using a limited hold (e.g., FI 5-min LH 30-sec, VI 3-min LH 1-min). Limited holds with interval schedules typically do not change the overall response characteristics of FI and VI schedules beyond a possible increase in rate of response.
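A minimal sketch of the FI 5-min LH 30-sec example above (the function name is illustrative):

```python
def fi_lh_reinforced(response_time_s, fi_seconds=300, limited_hold=30):
    """The first response after the FI elapses is reinforced only if it falls within the limited hold."""
    return fi_seconds <= response_time_s <= fi_seconds + limited_hold

print(fi_lh_reinforced(310))  # True: responded 10 s after the 5-minute interval elapsed
print(fi_lh_reinforced(340))  # False: the 30-s hold expired, so the opportunity was lost
```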
Magnitude
The force or intensity with which a response is emitted; provides important quantitative parameters used in defining and verifying the occurrence of some response classes. Responses meeting those criteria are measured and reported by one or more fundamental or derivative measures such as frequency, duration, or latency. (Sometimes called amplitude.)
Mand
Elementary Verbal Operant
Matching Law
The matching law addresses response allocation to choices available with concurrent schedules of reinforcement. Typically, the rate of responding is proportional to the rate of reinforcement received from each choice alternative.
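In its proportional form for two alternatives, the matching law is commonly written as:

```latex
\frac{B_1}{B_1 + B_2} = \frac{R_1}{R_1 + R_2}
```

where B1 and B2 are the rates of responding allocated to the two alternatives and R1 and R2 are the rates of reinforcement obtained from each.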
Measurement
The process of applying quantitative labels to describe and differentiate objects and natural events. Measurement in applied behavior analysis involves three steps: (a) identifying the behavior to be measured, (b) defining the behavior in observable terms, and (c) selecting an appropriate observation and data-recording method.
Metaphorical Extension
Tact
Metonymical extension
Tact
Mixed Schedule of Reinforcement (MIX)
A mixed schedule of reinforcement (mix) uses a procedure identical to that of the multiple schedules, except no discriminative stimuli signal the presence of the independent component schedules. For example, with a mix FR 10/FI 1 schedule, reinforcement sometimes occurs after the completion of 10 responses and sometimes occurs with the first response after a 1-minute interval from the preceding reinforcement.
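A minimal sketch of the mix FR 10/FI 1 example above; the component in effect alternates, but nothing signals which one is operating (the function name and random alternation are illustrative):

```python
import random

def mix_fr_fi_met(responses, seconds_elapsed, component=None):
    """Mixed schedule: components alternate with no discriminative stimulus signaling which is in effect."""
    component = component or random.choice(["FR 10", "FI 1-min"])  # unsignaled alternation
    if component == "FR 10":
        return responses >= 10
    return seconds_elapsed >= 60 and responses >= 1

print(mix_fr_fi_met(responses=10, seconds_elapsed=20, component="FR 10"))    # True: ratio requirement met
print(mix_fr_fi_met(responses=1, seconds_elapsed=65, component="FI 1-min"))  # True: first response after 1 min
```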
MO Unpairing
The relationship between the CMO-S and an effective MO can generally be weakened by two kinds of MO unpairing: presenting the previously neutral stimulus (now CMO-S) without the effective MO or presenting the MO as often in the absence of the CMO-S as in its presence.
What weakens the relation between a CMO-S and an effective MO?
Two kinds of MO unpairing: presenting the previously neutral stimulus (now CMO-S) without the effective MO or presenting the MO as often in the absence of the CMO-S as in its presence.
What is a Multiple Schedule (MULT)?
A multiple schedule presents two or more basic schedules of reinforcement in an alternating, usually random, sequence.
What is a Naturalistic Developmental Behavioral Intervention (NDBI)?
An evidence-based approach to care for toddlers and young children diagnosed with ASD, combining developmental and behavioral sciences.
What is Negative Punishment?
Occurs when the removal of an already present stimulus immediately following a behavior results in a decrease in the future frequency of the behavior.
What is Negative Reinforcement?
A contingency in which the occurrence of a response is followed immediately by the termination, reduction, postponement, or avoidance of a stimulus, leading to an increase in similar responses.
What is Overcorrection?
A behavior reduction tactic where, contingent on each occurrence of the problem behavior, the learner engages in effortful behavior related to the problem behavior.
What is the Premack Principle?
States that making the opportunity to engage in a high-probability behavior contingent on a low-frequency behavior will function as reinforcement for the low-frequency behavior.
What is a Post Reinforcement Pause?
A brief period of inactivity or pause in responding that happens immediately after receiving reinforcement, commonly observed in fixed ratio schedules.
What is Ratio Strain?
Can result from abrupt increases in ratio requirements when moving from denser to thinner reinforcement schedules, leading to avoidance, aggression, and unpredictable pauses.
What is Recovery from Punishment?
The occurrence of a previously punished type of response without its punishing consequence.
What is Respondent Behavior?
The response component of a reflex; behavior that is elicited by antecedent stimuli.
What is Response Cost?
The response-contingent loss of a specific number of positive reinforcers that decreases the frequency of similar responses in the future.
What is Reinforcement?
A basic principle of behavior describing a response-consequence functional relation where a response is followed by a stimulus change resulting in similar responses occurring more often.
What is a Reinforcer?
A stimulus change that increases the future frequency of behavior that immediately precedes it.
What is Resurgence?
The reoccurrence of a previously reinforced behavior when reinforcement for an alternative behavior is terminated or decreased.
What is Rule-governed behavior?
Behavior controlled by a rule, enabling human behavior to come under the indirect control of temporally remote or improbable consequences.
What is Schedule of Reinforcement?
A rule specifying the environmental arrangements and response requirements for reinforcement.
What is Stimulus Control?
Occurs when a behavior happens more often due to a specific cue or signal in the environment.
What is Stimulus Fading?
A method of gradually reducing the intensity of a stimulus to promote learning.
What is stimulus control transfer?
It allows an individual to perform a desired behavior based on relevant cues in everyday situations, promoting generalization of skills.
What is a stimulus delta (SΔ)?
A stimulus in the presence of which a given behavior has not produced reinforcement, or has produced reinforcement of lesser quality, in the past.
What is stimulus discrimination?
Responding differently in the presence of different stimuli; a behavior occurs more often in the presence of a stimulus correlated with reinforcement (the SD) than in the presence of a stimulus correlated with no reinforcement (the stimulus delta), as a result of differential reinforcement.
What is stimulus equivalence?
The emergence of accurate responding to untrained and nonreinforced stimulus–stimulus relations following the reinforcement of responses to some stimulus–stimulus relations; demonstrated by reflexivity, symmetry, and transitivity. (See also equivalence-class formation.)
What is stimulus fading?
A method of transferring stimulus control that involves highlighting a physical dimension of a stimulus to increase the likelihood of a correct response and then gradually diminishing the exaggerated dimension.
What is a stimulus preference assessment?
A variety of procedures used to determine the stimuli that a person prefers and their relative preference values.
What is Task Analysis for Chaining (TA)?
The process of breaking a complex skill or behavior chain into smaller, teachable units, yielding an ordered sequence of steps; mastery of the steps can be assessed with the single-opportunity or multiple-opportunity method before chaining instruction begins.
What is temporal extent?
Every instance of behavior occurs during some amount of time, meaning the duration of behavior can be measured.
What is temporal locus?
It refers to when a behavior occurs with respect to other events and the amount of time that elapses between two consecutive instances of a response class.
What does ‘terminate specific reinforcer contact’ refer to?
It is a negative punishment technique.
What is textual in the context of verbal operants?
It refers to the elementary verbal operant.
What is a total verbal episode?
It refers to verbal behavior.
What is a trend in behavior analysis?
The overall direction (increasing, decreasing, or zero) taken by a data path on a line graph; described in terms of its direction, degree, and the extent of variability around it.
What is an unconditioned punisher?
A stimulus that functions as punishment without having been paired with any other punishers.
What is a Variable Interval (VI)?
It is a schedule of positive reinforcement.
What is a Variable Ratio (VR)?
It is also a schedule of positive reinforcement.
What does variability refer to in behavior analysis?
The frequency and extent to which multiple measures of behavior yield different outcomes; on a line graph, high variability appears as data points scattered widely around the level and trend.
Time Delay (Prompting)
A procedure for transferring stimulus control from contrived response prompts to naturally existing stimuli that begin with the simultaneous presentation of the natural stimulus and response prompt. After several correct responses, a delay is introduced between the stimulus and the response prompt until the student emits the unprompted correct response. Time delay is considered an “errorless learning” technique as students make few or no errors transitioning from the contrived prompt to the instructional stimulus. (See also constant time delay and progressive time delay.)
Constant Time Delay
A procedure for transferring stimulus control from contrived response prompts to naturally existing stimuli. After the student has responded correctly to several 0-sec delay trials, presentation of the response prompt follows the instructional stimulus by a predetermined and fixed delay (usually 3 or 4 seconds) on all subsequent trials.
Progressive Time Delay
A procedure for transferring stimulus control from contrived response prompts to naturally existing stimuli that starts with simultaneous presentation of the natural stimulus and the response prompt (i.e., 0-sec delay). The number of 0-sec trials depends on the task difficulty and the functioning level of the participant. Following the simultaneous presentations, the time delay is gradually and systematically extended.