Cooper Chapter 13 - Schedules of Reinforcement Flashcards

1
Q

adjunctive behaviors

A

Behavior that occurs as a collateral effect of a schedule of periodic reinforcement for other behavior; time-filling or interim activities (e.g., doodling, idle talking, smoking, drinking) that are induced by schedules of reinforcement during times when reinforcement is unlikely to be delivered (also called schedule-induced behavior)

2
Q

alternative schedule (alt)

A

Provides reinforcement when the response requirements of any of two or more simultaneously available component schedules are met

3
Q

chained schedule (chain)

A

A schedule of reinforcement in which the response requirements of two or more basic schedules must be met in a specific sequence before reinforcement is delivered; a discriminative stimulus is correlated with each component of the schedule

4
Q

compound schedule of reinforcement

A

A schedule of reinforcement consisting of two or more elements of continuous reinforcement (CRF), the four intermittent schedules of reinforcement (FR, VR, FI, VI), differential reinforcement of various rates of responding (DRH, DRL), and extinction. The elements from these basic schedules can occur successively or simultaneously and with or without discriminative stimuli; reinforcement may be contingent on meeting the requirements of each element of the schedule independently or in combination with all elements.

5
Q

concurrent schedule (conc)

A

A schedule of reinforcement in which two or more contingencies of reinforcement (elements) operate independently and simultaneously for two or more behaviors.

6
Q

conjunctive schedule (conj)

A

A schedule of reinforcement that is in effect whenever reinforcement follows the completion of response requirements for two or more schedules of reinforcement

7
Q

continuous reinforcement (CRF)

A

A schedule of reinforcement that provides reinforcement for each occurrence of the target behavior

8
Q

differential reinforcement of diminishing rates (DRD)

A

A schedule of reinforcement in which reinforcement is provided at the end of a predetermined interval contingent on the number of responses emitted during the interval being fewer than a gradually decreasing criterion based on the individual’s performance in previous intervals (e.g. fewer than 5 responses per 5 minutes, fewer than 4 responses per 5 minutes, fewer than 3 responses per 5 minutes)

9
Q

differential reinforcement of high rates (DRH)

A

A schedule of reinforcement in which reinforcement is provided at the end of a predetermined interval contingent on the number of responses emitted during the interval being greater than a gradually increasing criterion based on the individual’s performance in previous intervals (e.g. more than 3 responses per 5 minutes, more than 5 responses per 5 minutes, more than 8 responses per 5 minutes)

10
Q

differential reinforcement of low rates (DRL)

A

A schedule of reinforcement in which reinforcement (a) follows each occurrence of the target behavior that is separated from the previous response by a minimum interresponse time (IRT) or (b) is contingent on the number of responses within a period of time not exceeding a predetermined criterion. Practitioners use DRL schedules to decrease the rate of behaviors that occur too frequently but should be maintained in the learner’s repertoire.
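
A minimal Python sketch of variant (a), the IRT-based rule; the 10-second criterion is hypothetical, not from the text:

def drl_irt_reinforce(seconds_since_previous_response, minimum_irt_s=10.0):
    # Reinforce only if this response is separated from the previous one
    # by at least the minimum interresponse time (IRT).
    return seconds_since_previous_response >= minimum_irt_s

print(drl_irt_reinforce(12.0))  # True: IRT long enough, response is reinforced
print(drl_irt_reinforce(4.0))   # False: responding too quickly, no reinforcement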

11
Q

fixed interval (FI)

A

A schedule of reinforcement in which reinforcement is delivered for the first response emitted following the passage of a fixed duration of time since the last response was reinforced (e.g. on an FI 3-min schedule, the first response following the passage of 3 minutes is reinforced)

12
Q

fixed ratio (FR)

A

A schedule of reinforcement requiring a fixed number of responses for reinforcement (e.g. on an FR 4 schedule, reinforcement follows every fourth response)

13
Q

intermittent schedule of reinforcement (INT)

A

A contingency of reinforcement in which some, but not all, occurrences of the behavior produce reinforcement

14
Q

lag reinforcement schedule

A

A schedule of reinforcement in which reinforcement is contingent on a response being different in some specified way (e.g. different topography) from the previous response (e.g. Lag 1) or a specified number of previous responses (e.g. Lag 2 or more)

15
Q

limited hold

A

A situation in which reinforcement is available only during a finite time following the elapse of an FI or VI interval; if the target response does not occur within the time limit, reinforcement is withheld and a new interval begins (e.g. on an FI 5-min schedule with a limited hold of 30 seconds, the first correct response following the elapse of 5 minutes is reinforced only if that response occurs within 30 seconds after the end of the 5-minute interval)
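
A minimal Python sketch of the FI 5-min plus 30-second limited hold example; the function name and structure are illustrative, not from the text:

def reinforce_response(seconds_since_last_reinforcer, interval_s=300, hold_s=30):
    # Reinforcement is available only from the end of the FI interval
    # until the limited hold expires.
    return interval_s <= seconds_since_last_reinforcer <= interval_s + hold_s

print(reinforce_response(310))  # True: within the 30-s hold after the 5-min interval
print(reinforce_response(345))  # False: the hold has expired, so a new interval begins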

16
Q

matching law

A

The allocation of responses to choices available on concurrent schedules of reinforcement; rates of responding across choices are distributed in proportions that match the rates of reinforcement received from each choice alternative.
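
For two concurrently available alternatives, the proportional form of the matching law is commonly written as

B_1 / (B_1 + B_2) = r_1 / (r_1 + r_2)

where B_1 and B_2 are the response rates allocated to the two alternatives and r_1 and r_2 are the reinforcement rates obtained from them (standard notation, not taken from this card).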

17
Q

mixed schedule of reinforcement (mix)

A

A compound schedule of reinforcement consisting of two or more basic schedules of reinforcement (elements) that occur in an alternating, usually random, sequence; no discriminative stimuli are correlated with the presence or absence of each element of the schedule, and reinforcement is delivered for meeting the response requirements of the element in effect at any time

18
Q

multiple schedule (mult)

A

A compound schedule of reinforcement consisting of two or more basic schedules of reinforcement (elements) that occur in alternating, usually random, sequence; a discriminative stimulus is correlated with the presence or absence of each element of the schedule, and reinforcement is delivered for meeting the response requirements of the element in effect at any time

19
Q

postreinforcement pause

A

The absence of responding for a period of time following reinforcement; an effect commonly produced by fixed interval (FI) and fixed ratio (FR) schedules of reinforcement

20
Q

progressive schedule of reinforcement

A

A schedule that systematically thins each successive reinforcement opportunity independent of the individual’s behavior; progressive ratio (PR) and progressive interval (PI) schedules are thinned using arithmetic or geometric progressions.

21
Q

progressive-ratio (PR) schedule of reinforcement

A

A variation of the fixed ratio (FR) schedule of reinforcement that increases the ratio requirements incrementally within the session; PR schedule requirements are changed using (a) arithmetic progressions to add a constant number to each successive ratio or (b) geometric progressions to add successively a constant proportion of the preceding ratio
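
A minimal Python sketch of the two progression rules; the starting ratios, step size, and proportion below are hypothetical:

def arithmetic_pr(start=2, step=2, n=5):
    # Arithmetic progression: add a constant number to each successive ratio.
    ratios = [start]
    for _ in range(n - 1):
        ratios.append(ratios[-1] + step)
    return ratios

def geometric_pr(start=4, proportion=0.5, n=5):
    # Geometric progression: add a constant proportion of the preceding ratio.
    ratios = [start]
    for _ in range(n - 1):
        ratios.append(ratios[-1] + ratios[-1] * proportion)
    return ratios

print(arithmetic_pr())  # [2, 4, 6, 8, 10]
print(geometric_pr())   # [4, 6.0, 9.0, 13.5, 20.25]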

22
Q

ratio strain

A

A behavioral effect associated with abrupt increases in ratio requirements when moving from denser to thinner reinforcement schedules; common effects include avoidance, aggression, and unpredictable pauses or cessation in responding.

23
Q

schedule of reinforcement

A

A rule specifying the environmental arrangements and response requirements for reinforcement; a description of a contingency of reinforcement.

24
Q

schedule thinning

A

Changing a contingency of reinforcement by gradually increasing the response ratio or the extent of the time interval; it results in a lower rate of reinforcement per responses, time, or both.

25
Q

tandem schedule (tand)

A

A schedule of reinforcement identical to the chained schedule except, like the mix schedule, the tandem schedule does not use discriminative stimuli with the elements in the chain.

26
Q

variable interval (VI)

A

A schedule of reinforcement that provides reinforcement for the first correct response following the elapse of variable durations of time occurring in a random or unpredictable order. The mean duration of the intervals is used to describe the schedule (e.g. on a VI 10m schedule, reinforcement is delivered for the first response following an average of 10 minutes since the last reinforced response, but the time that elapses following the last reinforced response might range from 30 seconds or less to 25 minutes or more.)

27
Q

variable ratio (VR)

A

A schedule of reinforcement requiring a varying number of responses for reinforcement. The number of responses required varies around a random number; the mean number of responses required for reinforcement is used to describe the schedule (e.g. on a VR 10 schedule an average of 10 responses must be emitted for reinforcement, but the number of responses required following the last reinforced response might range from 1 to 30 or more)
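
A minimal Python sketch of one way to generate VR 10 response requirements; drawing uniformly from 1 to 19 is an assumption for illustration (applied VR schedules often use a preset list of ratios instead):

import random

def vr_requirement(low=1, high=19):
    # A uniform draw from 1 to 19 has a mean of 10, so successive
    # requirements vary unpredictably around an average of 10 (VR 10).
    return random.randint(low, high)

requirements = [vr_requirement() for _ in range(1000)]
print(sum(requirements) / len(requirements))  # close to 10 over many draws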

28
Q

Behavior that occurs as a collateral effect of a schedule of periodic reinforcement for other behavior; time-filling or interim activities (e.g., doodling, idle talking, smoking, drinking) that are induced by schedules of reinforcement during times when reinforcement is unlikely to be delivered (also called schedule-induced behavior)

A

adjunctive behaviors

29
Q

Provides reinforcement when the response requirements of any of two or more simultaneously available component schedules are met

A

alternative schedule (alt)

30
Q

A schedule of reinforcement in which the response requirements of two or more basic schedules must be met in a specific sequence before reinforcement is delivered; a discriminative stimulus is correlated with each component of the schedule

A

chained schedule (chain)

31
Q

A schedule of reinforcement consisting of two or more elements of continuous reinforcement (CRF), the four intermittent schedules of reinforcement (FR, VR, FI, VI), differential reinforcement of various rates of responding (DRH, DRL), and extinction. The elements from these basic schedules can occur successively or simultaneously and with or without discriminative stimuli; reinforcement may be contingent on meeting the requirements of each element of the schedule independently or in combination with all elements.

A

compound schedule of reinforcement

32
Q

A schedule of reinforcement in which two or more contingencies of reinforcement (elements) operate independently and simultaneously for two or more behaviors.

A

concurrent schedule (conc)

33
Q

A schedule of reinforcement that is in effect whenever reinforcement follows the completion of response requirements for two or more schedules of reinforcement

A

conjunctive schedule (conj)

34
Q

A schedule of reinforcement that provides reinforcement for each occurrence of the target behavior

A

continuous reinforcement (CRF)

35
Q

A schedule of reinforcement in which reinforcement is provided at the end of a predetermined interval contingent on the number of responses emitted during the interval being fewer than a gradually decreasing criterion based on the individual’s performance in previous intervals (e.g. fewer than 5 responses per 5 minutes, fewer than 4 responses per 5 minutes, fewer than 3 responses per 5 minutes)

A

differential reinforcement of diminishing rates (DRD)

36
Q

A schedule of reinforcement in which reinforcement is provided at the end of a predetermined interval contingent on the number of responses emitted during the interval being greater than a gradually increasing criterion based on the individual’s performance in previous intervals (e.g. more than 3 responses per 5 minutes, more than 5 responses per 5 minutes, more than 8 responses per 5 minutes)

A

differential reinforcement of high rates (DRH)

37
Q

A schedule of reinforcement in which reinforcement (a) follows each occurrence of the target behavior that is separated from the previous response by a minimum interresponse time (IRT) or (b) is contingent on the number of responses within a period of time not exceeding a predetermined criterion. Practitioners use DRL schedules to decrease the rate of behaviors that occur too frequently but should be maintained in the learner’s repertoire.

A

differential reinforcement of low rates (DRL)

38
Q

A schedule of reinforcement in which reinforcement is delivered for the first response emitted following the passage of a fixed duration of time since the last response was reinforced (e.g. on an FI 3-min schedule, the first response following the passage of 3 minutes is reinforced)

A

fixed interval (FI)

39
Q

A schedule of reinforcement requiring a fixed number of responses for reinforcement (e.g. on an FR 4 schedule, reinforcement follows every fourth response)

A

fixed ratio (FR)

40
Q

A contingency of reinforcement in which some, but not all, occurrences of the behavior produce reinforcement

A

intermittent schedule of reinforcement (INT)

41
Q

A schedule of reinforcement in which reinforcement is contingent on a response being different in some specified way (e.g. different topography) from the previous response (e.g. Lag 1) or a specified number of previous responses (e.g. Lag 2 or more)

A

lag reinforcement schedule

42
Q

A situation in which reinforcement is available only during a finite time following the elapse of an FI or VI interval; if the target response does not occur within the time limit, reinforcement is withheld and a new interval begins (e.g. on an FI 5-min schedule with a limited hold of 30 seconds, the first correct response following the elapse of 5 minutes is reinforced only if that response occurs within 30 seconds after the end of the 5-minute interval)

A

limited hold

43
Q

The allocation of responses to choices available on concurrent schedules of reinforcement; rates of responding across choices are distributed in proportions that match the rates of reinforcement received from each choice alternative.

A

matching law

44
Q

A compound schedule of reinforcement consisting of two or more basic schedules of reinforcement (elements) that occur in an alternating, usually random, sequence; no discriminative stimuli are correlated with the presence or absence of each element of the schedule, and reinforcement is delivered for meeting the response requirements of the element in effect at any time

A

mixed schedule of reinforcement (mix)

45
Q

A compound schedule of reinforcement consisting of two or more basic schedules of reinforcement (elements) that occur in alternating, usually random, sequence; a discriminative stimulus is correlated with the presence or absence of each element of the schedule, and reinforcement is delivered for meeting the response requirements of the element in effect at any time

A

multiple schedule (mult)

46
Q

The absence of responding for a period of time following reinforcement; an effect commonly produced by fixed interval (FI) and fixed ratio (FR) schedules of reinforcement

A

postreinforcement pause

47
Q

A schedule that systematically thins each successive reinforcement opportunity independent of the individual’s behavior; progressive ratio (PR) and progressive interval (PI) schedules are thinned using arithmetic or geometric progressions.

A

progressive schedule of reinforcement

48
Q

A variation of the fixed ratio (FR) schedule of reinforcement that increases the ratio requirements incrementally within the session; PR schedule requirements are changed using (a) arithmetic progressions to add a constant number to each successive ratio or (b) geometric progressions to add successively a constant proportion of the preceding ratio

A

progressive-ratio (PR) schedule of reinforcement

49
Q

A behavioral effect associated with abrupt increases in ratio requirements when moving from denser to thinner reinforcement schedules; common effects include avoidance, aggression, and unpredictable pauses or cessation in responding.

A

ratio strain

50
Q

A rule specifying the environmental arrangements and response requirements for reinforcement; a description of a contingency of reinforcement.

A

schedule of reinforcement

51
Q

Changing a contingency of reinforcement by gradually increasing the response ratio or the extent of the time interval; it results in a lower rate of reinforcement per responses, time, or both.

A

schedule thinning

52
Q

A schedule of reinforcement identical to the chained schedule except, like the mix schedule, the tandem schedule does not use discriminative stimuli with the elements in the chain.

A

tandem schedule (tand)

53
Q

A schedule of reinforcement that provides reinforcement for the first correct response following the elapse of variable durations of time occurring in a random or unpredictable order. The mean duration of the intervals is used to describe the schedule (e.g. on a VI 10m schedule, reinforcement is delivered for the first response following an average of 10 minutes since the last reinforced response, but the time that elapses following the last reinforced response might range from 30 seconds or less to 25 minutes or more.)

A

variable interval (VI)

54
Q

A schedule of reinforcement requiring a varying number of responses for reinforcement. The number of responses required varies around a random number; the mean number of responses required for reinforcement is used to describe the schedule (e.g. on a VR 10 schedule an average of 10 responses must be emitted for reinforcement, but the number of responses required following the last reinforced response might range from 1 to 30 or more)

A

variable ratio (VR)