Domain B Flashcards

1
Q

3 types of stimulus receptors

A

PIE

proprioceptors- receive stimulation from joints, tendons, muscles; e.g. posture, balance, movement (internal)

interoceptors- receive stimulation from organs e.g. headache, fast heart rate, hunger (internal)

exteroceptors- 5 senses (external)

2
Q

stimulus classes

A

FFFAT

Formal- share a certain feature (e.g. things that are green, things that are round, bad smells, loud noises)

Feature- share common features across infinite possible topographies (e.g. dog breeds, cats/dogs/lions, bike/car/bus, chair/table/couch)

Functional- same effect on behavior; have an immediate yet temporary effect on behavior (e.g. things that provide warmth: heater, blanket, sweater. sour taste: lemon, spoiled milk. loud noise: drum, horn, police car)

Arbitrary- physically different, but evoke the same response (e.g. synonyms, fruit)

Temporal- related by their place in time (e.g. SDs and MOs both occur prior to behavior; reinforcement and punishment both occur after behavior)

3
Q

2 primary types of behavior. Name and compare them.

A

respondent behavior- reflexive, involuntary; elicited without prior learning; phylogenic

operant behavior- behavior controlled by its consequences; emitted and/or evoked; ontogenic

4
Q

respondent behavior is __________.

a. elicited
b. evoked
c. emitted

A

elicited- unlearned responses; reflexes where the unconditioned response bears a one-to-one relationship to the unconditioned stimulus (e.g. gagging, salivating, fear reactions, blushing; uncontrollable reflexes)

5
Q

operant behavior is __________.

a. elicited
b. evoked
c. emitted

A

emitted and/or evoked

emitted- produced; not an automatic response (e.g. answering questions in class)

evoked- learned responses (through learning history and consequences)

6
Q

classical and pavlovian conditioning, stimulus-stimulus pairing (S-S), conditioned stimulus-conditioned response (CS-CR)

A

respondent conditioning

7
Q

ABC, 3-term contingency, behavioral, contingency, stimulus-response-stimulus (S-R-S) model

A

operant conditioning

8
Q

critical part of operant conditioning

A

consequences

9
Q

all ABA strategies are derived from these 3 principles

A

reinforcement
punishment
extinction

10
Q

a response becomes more frequent in the future if previously followed by a reinforcer within __________ seconds.

A

0-3 seconds

11
Q

__________ maintains behavior that is already occurring, makes antecedent-stimulus conditions relevant, and is a linear concept.

A

reinforcement (it also increases behavior)

12
Q

2 types of negative reinforcement

A

escape

avoidance: 2 types; discriminated avoidance & free-operant avoidance

13
Q

2 types of avoidance

A

discriminated- the arrival of a bad thing is signaled (think: warning); has an SD that signals availability of negative reinforcement (e.g. news traffic warning)

free-operant- no SD to signal the availability of negative reinforcement ; avoiding the bad thing without an SD (e.g. avoiding traffic between 4-7pm without checking for traffic information)

14
Q

type I reinforcement; SR+ = __________

type II reinforcement; SR- = __________

SP= __________

SDP= __________

type I punishment; SP+ = __________

type II punishment; SP- = __________

A

positive reinforcement

negative reinforcement

punisher

discriminative stimulus for punishment

positive punishment

negative punishment

15
Q

punisher vs aversive stimulus?

A

punisher- a stimulus change that decreases future frequency of behavior that immediately precedes it

aversive stimulus- an unpleasant stimulus that may or may not impact future behavior

16
Q

behavior is defined by its __________, not __________!

A

function, NOT topography!

17
Q

when behavior is evoked, shaped, or maintained by environmental variables delivered without another person’s mediation

A

automatic reinforcement contingency

18
Q

automatic reinforcement contingencies: describe each

positive automatic reinforcement
negative automatic reinforcement
positive automatic punishment
negative automatic punishment

A

positive automatic reinforcement- behavior is immediately followed by the presentation/addition of a stimulus produced by the behavior itself (not delivered by another person), which increases the future frequency of that behavior (e.g. turning on the air conditioner to feel cool air)

negative automatic reinforcement- when behavior is immediately followed by reduction or removal of a stimulus which increases future behavior (scratching mosquito bite to relieve itch)

positive automatic punishment- behavior is immediately followed by presentation or addition of a stimulus that decreases future behavior (flicking rubber band to wrist whenever cursing to decrease cursing)

negative automatic punishment- behavior is followed immediately by the reduction or removal of a desirable stimulus, which decreases future behavior (e.g. removing nice nail polish whenever biting nails to decrease nail biting)

19
Q

turning on air conditioner when feeling hot, and continuing to do when needing cool air is an example of __________

A

positive automatic reinforcement

20
Q

scratching a mosquito bite to relieve the itch is an example of __________

A

negative automatic reinforcement

21
Q

snapping a rubber band on your wrist each time you curse resulting in cursing less is an example of __________

A

positive automatic punishment

22
Q

forcing yourself to remove your nice nail polish every time you bite your nails is an example of __________

A

negative automatic punishment

23
Q

automaticity vs automatic reinforcement and punishment

A

automaticity refers to how reinforcement and punishment affect behavior without a person’s awareness.

automatic reinforcement and punishment refer to the behavior producing its own reinforcement or punishment, with no social mediation.

24
Q

when antecedents and consequences are presented by another person

A

socially mediated contingency

25
Q

primary reinforcer, unlearned

A

unconditioned reinforcers (UCR)

26
Q

secondary reinforcer, learned; stimulus-stimulus pairing

A

conditioned reinforcer (CR)

NS + 1 or more UR or CR= CR

27
Q

Doesn’t depend on MOs for effectiveness because it’s been paired with an unlimited number of unconditioned and conditioned reinforcers

A

generalized conditioned reinforcers (GCSR)

28
Q

examples of ___________ include money, tokens, social praise

A

generalized conditioned reinforcers (GCSR)

29
Q

examples of __________ include reprimands (warnings), stern tone of voice, fines, red marks on grades, head shakes, scowls, frowns

A

generalized conditioned punishers (GCSP)

30
Q

Reinforcers and punishers are defined by their __________ on behavior.

A

effect

31
Q

Reinforcement is delivered every time the target behavior occurs (aka FR-1); used for the acquisition phase of novel behaviors

A

continuous reinforcement (CRF)

32
Q

when some (not all) instances of target behavior are reinforced; used for maintaining behaviors that have already been established; transition from artificial->natural reinforcement

A

intermittent schedules of reinforcement (INT)

33
Q

0 reinforcement; absence of reinforcement

A

extinction

34
Q

4 schedules of intermittent reinforcement

A

FI
FR
VI
VR

35
Q

number of needed correct responses for reinforcement to be delivered

A

ratio

36
Q

an amount of time needs to pass before reinforcement is available for one correct response

A

interval

37
Q

a constant ratio (number of responses) or time criterion

A

fixed

38
Q

a ratio or time requirement that varies, but averages around the scheduled criterion

A

variable
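
a worked arithmetic sketch (illustrative numbers, not from this deck): on a VR 5 schedule, the ratio requirements might be 3, 7, 5, 4, and 6 responses; each requirement varies, but they average to the scheduled criterion:

\[
\frac{3 + 7 + 5 + 4 + 6}{5} = 5
\]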

39
Q

strongest basic schedule of intermittent reinforcement

A

VR

40
Q

__________ produce a post-reinforcement pause

A

FR & FI

hint:
for FR, the client automatically pauses after completing the ratio (e.g. after earning all their tokens). This does not occur with VR because of the uncertainty of reinforcement.

for FI, the client pauses during the earlier part of the interval (think: finishing a paper by its due date, then taking a pause from paper writing before starting the next one)

41
Q

low ratio requirements produce a __________ response rate, while higher ratios produce a __________ response rate.

A

low ratio = lower response rate
high ratio= higher response rate

42
Q

a supervisor randomly checking in with an employee every few days to provide reinforcement for consistent effort is an example of __________

A

VI schedule of reinforcement

43
Q

when rate of responding gradually accelerates toward the end of the interval (slow start but accelerates with maximum rate at end of interval)

A

FI scallop

44
Q

in FI, larger intervals produce __________ rates of response and shorter intervals produce __________ rates of response. Larger FI requirement = __________ post-reinforcement pause and vice versa.

A

larger intervals produce lower rates of response

shorter intervals produce higher rates of response

larger FI requirement = longer post-reinforcement pauses

45
Q

constant, stable rate of response; produces very few pauses in responding; low to moderate rate of response

A

VI

46
Q

4 COMPOUND schedules of reinforcement

Concurrent
Multiple
Chained
Mixed

hint: call me! call me!

A
47
Q

4 VARIATIONS of basic intermittent schedules

Lag
Progressive ratio
Delays
Differential

explain each.

A

Lag- reinforcement is delivered for a response that is different in topography from a previously reinforced response (e.g. reinforcing any name for purple other than “purple,” such as “eggplant,” “plum,” “lavender”); increases response variability

Progressive ratio- schedule is thinned until client responding stops (higher expectation, aka FR3->6->9 until breaking point)

Delays- reinforcement for correct responding is delayed

Differential reinforcement of rates of responding- DRH, DRD, DRL

48
Q

3 extinction variations

positive reinforcement extinction
automatic reinforcement extinction
negative reinforcement extinction

explain each.

A

positive reinforcement extinction- withholding attention or access when the behavior occurs (use for reducing bxs with an attention or access to tangibles function); socially mediated

automatic reinforcement extinction- aka sensory extinction; withholding access to automatic reinforcement (use for bx with an automatic reinforcement function, e.g. client flicks lights on and off = disconnect the lightbulb)

negative reinforcement extinction- aka escape extinction; withholding access to escape (e.g. follow through on demands after the behavior occurs); socially mediated

remember, REINFORCEMENT is being put on extinction, NOT BEHAVIOR.

49
Q

consequence in which reinforcement that previously followed a behavior is withheld, contingent on that behavior

A

operant extinction

50
Q

consequence in which a preferred stimulus is taken away, contingent on behavior, decreasing future frequency of that behavior

A

negative punishment

51
Q

when neutral stimuli (NS) achieve the capacity to elicit respondent behaviors (reflexes) that are typically elicited by specific unconditioned stimuli (US)

A

respondent conditioning

52
Q

operant vs respondent

A

operant = CONSEQUENCE-based contingency

respondent= ANTECEDENT CONTROL (Pavlov, antecedent stimulus pairing & unpairing) think: respondent=reflex based, elicited

53
Q

behavior that occurs more frequently in the presence of a specific stimulus (SD)

A

discriminated operant

54
Q

stimulus in the presence of which a given behavior has not produced reinforcement in the past and likely won’t in the future

A

S-delta (triangle shape); e.g. if dehydrated, coffee is s-delta and water is SD.

55
Q

5 factors affecting stimulus control

Attending (pre-)
Stimulus salience
Differential consequences
Repertoire (client’s)
Over-selective control

A

Attending (pre-)- prerequisite behaviors needed for learning

Stimulus salience- stimulus stands out

Differential consequences- a specific consequence must follow the response in the presence of that stimulus

Repertoire (client’s)-what pre-requisite skills are needed?; ensure that the response is in the client’s repertoire

Over-selective control- when focusing on a minor feature of a stimulus interferes with a complete understanding of that stimulus (e.g. focusing on the pull tab of a can of soda and not the can as a whole, completely missing its shape and color; not fully understanding all its features)

56
Q

_____ and _____ can impact stimulus salience and can prevent stimulus control.

A

masking and overshadowing

masking e.g. being able to play guitar alone but not out in public (stimulus is being masked by other competing stimuli)

overshadowing e.g. the TV is on while studying; the TV overshadows the study materials, making them less salient (aka the TV makes the study materials stand out less)

57
Q

when an irrelevant stimulus controls the correct behavior and the relevant stimulus does not

A

faulty stimulus control

58
Q

a child saying “mommy” to her mother and not to other women is an example of __________

A

stimulus discrimination

59
Q

2 types of generalization

stimulus generalization
response generalization

A

stimulus generalization-when antecedent stimuli other than the original SD evoke the same response as the original SD (e.g. saying “green” when seeing darker or lighter greens); ***multiple stimuli evoke same 1 response

response generalization- when a person exhibits novel untrained behaviors that are functionally equivalent to the trained target response. (e.g. after learning “hello”, child says “hey” “hi” “what’s up?”; taught to ask “I want water”->points to water); form of response/behavior changes; ***multiple responses to 1 stimulus

60
Q

stimulus generalization has _____ stimulus control within. stimulus discrimination has _____ stimulus control between/across.

A

stimulus generalization = loose stimulus control

stimulus discrimination= tight stimulus control

61
Q

extent to which behavior remains in the client’s repertoire over time, beyond treatment; lasting change in behavior

A

response maintenance

62
Q

motivating operations

establishing operations have a _____ value-altering effect and _____ behavior altering effect.

abolishing operations have a _____ value-altering effect and _____ behavior altering effect.

A

EO = reinforcer-establishing, evocative effect (an in-the-moment increase in behavior that has been reinforced by the now more valuable stimulus); think: DEPRIVATION

AO = reinforcer-abolishing, abative effect (an in-the-moment decrease in frequency of behavior that’s been reinforced by the stimulus which isn’t currently valuable); think: SATIATION

63
Q

an in-the-moment change (increase or decrease) in how effective a stimulus will be as a reinforcer; aka how valuable a reinforcer is at a given moment

A

value-altering effect

64
Q

an in-the-moment change (increase or decrease) in the occurrence of all behavior that has been reinforced by a stimulus; aka how behavior changes to access a reinforcer in the moment

A

behavior-altering effect

65
Q

an internal or external antecedent event or condition that has an influence on the occurrence of a specific behavior

A

setting event

66
Q

when a consequence for a behavior in the presence of an MO changes the behavior evoked by the MO in the future

A

function-altering effect

key words: MO that affects behavior in the FUTURE (e.g. burning tongue on hot tea would result in not drinking hot tea as quickly next time. asking for help and receiving it would increase likelihood of asking for help in the future)

67
Q

events, operations, and stimulus conditions with value-altering motivating effects that are unlearned

A

unconditioned motivating operations

e.g. basic human needs (food, water, sleep, oxygen, sex, relief from extreme temperatures and pain)

68
Q

behavior-altering effect vs function-altering effect?

A

behavior-altering effect- an antecedent context changes the current, in-the-moment occurrence of a behavior

function-altering effect- a consequence shapes future behavior, evoking a different/new response in the future

69
Q

MOs that change the value of other stimuli, objects, or events due to conditioning (individual learning history); creates an in-the-moment change in frequency of any behavior associated with those other stimuli, objects, or events.

A

conditioned motivating operations (CMOs)

70
Q

Although a charged phone seems to be the reinforcer when cell phone battery is dead, needing to charge it makes the charger more valuable as a reinforcer in that moment. This is an example of __________

A

conditioned motivating operations (CMOs)

71
Q

Three types of CMOs

A

Surrogate CMO ; CMO-S
Reflexive CMO ; CMO-R
Transitive CMO ; CMO-T

72
Q

NS becomes an MO by being paired with UMO or CMO, acquiring the same value-altering and behavior-altering effect as the original MO. NS + MO (UMO or CMO)= NS becomes CMO

A

Surrogate CMO (CMO-S)

think: increased value to access due to repeated pairing

73
Q

a stimulus signal that comes before the onset of an aversive stimulus (removal/avoidance of the aversive stimulus acts as negative reinforcement); can also precede improving conditions

A

Reflexive CMO (CMO-R)

think: signal, evokes avoidance/escape behaviors

74
Q

environmental variable that establishes another stimulus or event as a reinforcer or punisher

A

Transitive CMO (CMO-T)

think: increase in access behaviors that solve a problem; results in access to reinforcers (UR or CR)

75
Q

the sound of a lunch bell at school, which has been repeatedly paired with mealtimes, now triggers a similar motivating effect to hunger, even if the individual isn’t physically hungry.

A

example of CMO-S

NS bell + UMO hunger = bell becomes CMO-S

76
Q

you see a blanket and want to lie down and sleep even if you’re not tired

A

example of CMO-S

NS blanket + UMO sleep deprivation/fatigue = blanket becomes CMO-S

the blanket has been paired with sleep routines and now evokes sleep-related behaviors.

77
Q

a worker sees their supervisor approaching and is motivated to appear busy/productive.

A

example of CMO-R

CMO-R: the supervisor’s presence signals a potential worsening condition (a reprimand) if the worker appears idle

78
Q

a driver accelerates when a traffic light turns yellow.

A

example of CMO-R

CMO-R: the yellow light signals the worsening condition of being stopped at the red light, motivating the driver to act quickly.

79
Q

you want to watch TV. In order to do so, you need to find the remote that seems to always be missing.

A

example of CMO-T

wanting to watch TV (CMO) establishes the need for a remote (CMO-T), increasing the likelihood of access behaviors to find the remote.

80
Q

sleep deprivation establishes the need for a bed, dim lights, a blanket, quiet.

A

example of CMO-T

sleep deprivation (UMO) establishes the need for a bed, dim lights, a blanket, quiet (CMO-T).

81
Q

all UMOs act as a __________ because the UMO establishes the unconditioned reinforcing effectiveness and the __________ evokes the necessary behaviors needed to obtain the unconditioned reinforcer.

A

CMO-T, CMO-T (transitive CMO)

82
Q

stimulus control vs motivating operations?

A

stimulus control = a change in responding based on the SD

motivating operation = environmental variable which affects the frequency of behavior by altering the value of a reinforcer; creates a state of deprivation or satiation and has two effects, value-altering and behavior-altering

Stimulus control: think SD
MO: think value-altering & behavior-altering

similarity: they are both antecedent variables.

83
Q

rule-governed behavior vs contingency shaped behavior?

A

rule-governed behavior= behavior that is controlled by a verbal statement (rule) and not an immediate consequence

contingency-shaped behavior= learned behaviors from direct contact with contingencies/consequences immediately following that behavior

84
Q

putting neosporin on a painful cut and immediately relieving the sting increases neosporin use in the future

A

contingency shaped behavior

85
Q

if a person brushes their teeth and flosses 2x per day, they’ll have good oral health

A

rule-governed behavior; if the person follows this verbal statement, they are changing their behavior without an immediate consequence (no reinforcement or punishment)

86
Q

verbal behavior vs vocal behavior?

A

verbal= all communicative response forms

vocal= vocal talking

87
Q

point-to-point correspondence vs formal similarity

A

point-to-point correspondence= when the parts of the controlling stimulus match the parts of the verbal response, aka the content is identical (e.g. echoic, transcription, motor imitation, codics, duplics) (e.g. RBT writes “cat” and client finger spells “cat”; the content is identical)

formal similarity= controlling stimulus and verbal response share the same sense mode (both spoken, both written, both signed) and physically resemble each other (e.g. RBT says “cat” and client echoes “cat”; same mode and same form)

88
Q

6 verbal operants: list and describe

A

EMITTT

Echoic- vocal imitation; a type of duplic (verbal behavior that shares formal similarity and point-to-point correspondence with its controlling stimulus); a critical behavior cusp because it exposes learners to reinforcement produced through vocalizations; shaped by reinforcing approximations

Mand- speaker requests what they want/need; first verbal operant humans learn; should be the first one taught

Intraverbal- a verbal response to a verbal SD that does NOT have point-to-point correspondence with that verbal SD. (e.g. “what’s your favorite song?” “a cow goes ___” “old mcdonald had a ___” “What do you wear on your feet?”)

Tact- speaker names non-verbal SD (5 senses) as well as private experiences (thoughts, emotions)

Textual- speaker reads presented words (out loud or in head); codic (shares point-to-point correspondence but not formal similarity);

Transcription- speaker converts spoken words to written words; codic (shares point-to-point correspondence but not formal similarity);

89
Q

4 tact extensions (higher level tacts)

A

SMMG (Studying Makes Me Grim)

Solecistic- tact delivered using poor or nonstandard language; slang; based on associations that are indirect/opposite (e.g. “you speak good,” or calling a cold drink hot)

Metaphorical- metaphors/figurative language (her heart is black as coal)

Metonymical- tact based on novel stimuli that share no relevant features of the original stimulus, but some irrelevant or related feature has acquired stimulus control (e.g. looking at a stanley cup and saying “water” or saying “mailman” when seeing a mail truck)

Generic- aka stimulus generalization; tact based on a novel stimulus that shares all the applicable features of the original stimulus (e.g. calling Doughnut Plant Dunkin Donuts, signing “Skittles” when shown an M&M, saying “house” when seeing an apartment)

90
Q

non-verbal response evoked by a verbal SD because of a history of GCSR; receptive language

e.g. “come here” client goes to speaker

“touch airplane.” client touches airplane

A

listener responding

91
Q

verbal behavior that modifies one’s own verbal behavior

e.g. I’m hungry BECAUSE I HAVEN’T EATEN YET (descriptive- explaining the reason for verbal behavior)

I want TWO cookies (quantifying- giving numeric value)

I am hungrier THAN you. (relational-making comparisons)

I am EXTREMELY hungry. I want to eat RIGHT NOW. (qualifying- evoked by motivational factors)

A

autoclitic

92
Q

2 types of autoclitics:

autoclitic mand
autoclitic tact

explain each.

A

autoclitic mand- INSTRUCTS the listener to take action in some way (e.g. DON’T BE OFFENDED, but I have to go. I’M SURE you’ll love this. I NEED the blue cup, not the red one. PLEASE pass me the pencil.)

autoclitic tact- controlled by some nonverbal aspect of the main response; shows self-awareness & self-editing (e.g. I KNOW the car is parked outside. I SEE the big dog. It’s DEFINITELY a red car. It MIGHT be raining outside.)

93
Q

verbal behavior that is controlled by ONE source (e.g. a tact is only controlled by a nonverbal SD and a mand is only controlled by an MO)

A

pure verbal behavior

94
Q

multiple control exists in 2 forms:

convergent control
divergent control

explain each.

A

convergent control- MULTIPLE STIMULI work together to evoke ONE VERBAL RESPONSE; exists in most instances of verbal behavior; occurs with MOs and SDs (verbal & non-verbal)
mand example- stimuli: water in sight + thirst (MO) + verbal prompt “what do you want?” -> person says “water”
tact example- stimuli: sight of a dog + hearing the question “what do you see?” -> person says “dog”
intraverbal example- stimuli hearing “twinkle twinkle” + prior knowledge of song -> person says “little star”

divergent control- ONE STIMULUS evokes MULTIPLE VERBAL RESPONSES (mand, tact, intraverbal)
e.g.
antecedent: being asked “what do you like to eat?”; responses: pad see ew, penne vodka, ice cream.
antecedent: feeling hungry/food deprivation; responses: ordering food, making food, asking someone to cook for you.

think:
CONVERGENT=antecedent stimuli CONverge to form 1 response

&
DIVERGENT= 1 antecedent stimulus DIverges to form multiple responses

95
Q

an audience in the presence of which the speaker’s behavior increases (e.g. teacher speaking to students in classroom)

A

positive audience

96
Q

an audience in the presence of which the speaker’s behavior decreases (e.g. employees discussing work dissatisfaction and their boss appears)

A

negative audience

97
Q

using efficient approaches to teaching that promote a client’s ability to apply learned skills to form novel and untrained responses and connections, without having every possible relation or skill explicitly taught

A

promoting emergent relations

98
Q

accurate responding to untrained and non-reinforced stimulus-stimulus relations, following the training of some correct stimulus-stimulus relations

A

emergent relations

99
Q
  1. teach prerequisite and foundational skills- establishes a foundation for more complex behaviors to emerge (e.g. discrimination, receptive language, labelling, joint attention)
  2. promote generalization- teaching in a way that connects similar concepts result in emergence of new relations (e.g. multiple exemplar training, negative teaching examples)
  3. equivalence-based instruction- teaching different combinations of related stimuli leads to other derived relations emerging
  4. incorporate relational frame theory (RFT)- teaching various relational frames (e.g. same, opposite, symbolic)
  5. natural environment training- spontaneous responding is more likely to occur
  6. lag schedules of reinforcement- promotes variability in responding which can lead to new untrained responses
A

strategies for promoting emergent relations

100
Q

when stimulus relations emerge independent of training, due to an experience with learning something similar; a skill that has already been acquired enables or speeds up other skills being acquired, without direct training or reinforcement history

aka using previously learned skills to generate new, untrained responses

A

generative performance

101
Q

indirect learning is typically a product of __________

A

stimulus and response generalization

102
Q

responding differently to unlearned combinations of stimuli that were taught in different contexts

e.g. learning to tact “blue fish” and “green bird” and without training correctly tacting “blue bird”

A

recombinative generalization in emergent tact relations

think recombining previously learned elements into new, untrained appropriate ways

103
Q

taught using direct or incidental training, or through forming relational frames (relational frames e.g. same, opposite, symbolic)

A

emergent mand relations

104
Q

when both speaker and listener behavior are established for the same stimulus.

e.g. child is taught to tact “sun” and then is able to touch a picture of the sun when told to touch sun.

client learns to respond “food” to the verbal SD “pizza” and without training responds “pizza” to the verbal SD “food”

A

common bi-directional naming (C-BiN) in emergent tact relations

think: symmetry

105
Q

for establishing __________; established between at least 2 verbal stimuli; training an AB intraverbal relation produces the reverse BA intraverbal relation

e.g. when asked “what do you drive?” client responds “car”, and vice versa (client hears “car” and responds “what do you drive?”)

A

emergent intraverbal behavior

106
Q

tendency of behavior with a strong history of reinforcement to persist (resistance to change) despite challenges

key strategy: using high-probability tasks to build compliance before introducing low-probability tasks

goal: increase compliance and reduce resistance

A

behavioral momentum

107
Q

allocation of behavior occurs in direct proportion to the reinforcement received for each option

A

matching law

key concept:
more reinforcement ->more behavior toward that option; behavior goes where reinforcement flows

goal: predict and influence choice behavior
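
a worked equation sketch (the standard proportional form of the matching law, not stated on this card): with B1 and B2 as the response rates allocated to two options and R1 and R2 as the reinforcement rates obtained from them,

\[
\frac{B_1}{B_1 + B_2} = \frac{R_1}{R_1 + R_2}
\]

e.g. if option 1 yields 30 reinforcers per hour and option 2 yields 10, responding is predicted to be allocated about 30/(30+10) = 75% to option 1.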

108
Q

a client must complete a task to receive reinforcement, but since there are other more fun activities available in his room, rate of task completion decreases

A

example of matching law

key concept:
more reinforcement ->more behavior toward that option

109
Q

physically copying the behavior of a model; must occur without prior learning history to be considered this

A

imitation

110
Q

4 criteria for imitation

A

MFIC (Mother Figure In Charge)

Model - any physical movement; 2 types: planned (contrived to teach client) & unplanned (occur naturally in daily life e.g. seeing people get in line and copying their behavior)

Formal similarity- model and behavior must look alike physically and be in the same sense mode

Immediacy- an imitative behavior must immediately follow the model, within a few seconds; if not immediate, it’s not imitation

Controlled relation- model must be the controlling variable for imitation;

111
Q

when a novel model evokes an imitative response

A

generalized imitation

112
Q

imitation vs observational learning

A

imitation- replicates exact behavior

observational learning- learns from outcomes or generalizes behavior based on observation; learning that occurs through indirect contact with consequences experienced by other people

113
Q

5 steps for imitation training

Assess
Select
Pretest
Sequence
Implement

A
  1. Assess and teach missing prerequisite skills for at least 3 sessions; assess attending skills, gross and fine motor skills, skill deficits that could hinder imitative responses, and whether there are interfering challenging behaviors
  2. Select 25 behavior models; gross and fine motor; train one at a time. Move on to complex sequences after mastery of simpler models
  3. Pretest models at least 3 times to evaluate target models and identify models the client can already imitate
  4. Sequence models from simplest to hardest, based on pretest results
  5. Implement: run imitation training; 4 steps
    -pre-assessment to evaluate current level and progress
    -training- repeatedly present models from pre-assessment
    -post-assessment- assess how well the client can do with previously learned targets
    -probe novel targets at the end of each imitation training session
114
Q
  1. teach attending skills
  2. teach imitation skills and program for generalized learning
  3. teach discrimination of consequences
A

teaching observational learning skills
