Chapter 6 Flashcards

1
Q

schedule of reinforcement

A

a program or rule that determines which occurrence of a response is followed by the reinforcer.

2
Q

ratio schedule

A

a schedule in which reinforcement depends only on the number of responses the organism has to perform

3
Q

continuous reinforcement

A

a schedule in which each response results in delivery of the reinforcer

4
Q

intermittent reinforcement

A

situations in which responding is only reinforced some of the time

5
Q

fixed ratio schedule

A

a schedule in which there is a fixed ratio between the number of responses and the number of reinforcers

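As an illustration of how the response-to-reinforcer ratio works, here is a minimal Python sketch of an FR 10 schedule; the ratio size and number of responses are made up for the example.

```python
# Minimal fixed-ratio (FR) schedule sketch: on FR 10, every 10th response
# produces the reinforcer. The ratio size and number of responses are
# illustrative, not from any particular experiment.
def make_fr_schedule(ratio):
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count == ratio:   # ratio requirement completed
            count = 0
            return True      # reinforcer delivered
        return False
    return respond

respond = make_fr_schedule(10)
reinforcers = sum(respond() for _ in range(30))
print(reinforcers)  # 3 reinforcers for 30 responses on FR 10
```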
6
Q

cumulative record

A

a way of representing responding over time in which the total (cumulative) number of responses is plotted against time, so that the slope of the record shows the rate of responding

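A small sketch of how a cumulative record is drawn (hypothetical response times, matplotlib assumed): each response raises the line by one step, so steep segments mean rapid responding and flat segments mean a pause.

```python
# Cumulative record sketch: plot the running total of responses against time.
# The response times below are invented for illustration.
import matplotlib.pyplot as plt

response_times = [1, 2, 3, 4, 10, 11, 12, 13, 20, 21]        # seconds
cumulative_counts = list(range(1, len(response_times) + 1))   # 1, 2, 3, ...

plt.step(response_times, cumulative_counts, where="post")
plt.xlabel("Time (s)")
plt.ylabel("Cumulative responses")  # slope = response rate; flat = pause
plt.show()
```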
7
Q

post-reinforcement pause

A

the zero rate of responding that typically occurs just after reinforcement on a fixed ratio schedule (can also be thought of as a pre-ratio pause)

8
Q

ratio run

A

the high and steady rate of responding that completes each ratio requirement

9
Q

with higher ratio requirements, ____ post-reinforcement pauses occur

A

with higher ratio requirements, longer post-reinforcement pauses occur

10
Q

ratio strain

A

the disruption of responding that occurs if the ratio requirement is suddenly increased a great deal; the animal is likely to pause periodically before completing the ratio requirement.

11
Q

variable-ratio schedule

A

the number of responses required for the reinforcer varies from one occasion to the next (e.g., on a VR 10 schedule an average of 10 responses is required, such as 7 on one occasion and 13 on the next)

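One way to picture a VR 10 schedule in code: the requirement changes from one reinforcer to the next, but the requirements average about 10. The sampling method used here is only an assumption for illustration, not a standard procedure.

```python
# Illustrative VR 10 schedule: the ratio requirement varies from reinforcer
# to reinforcer but averages about 10. Sampling uniformly from 5-15 is an
# assumption made for this sketch.
import random

random.seed(0)
requirements = [random.randint(5, 15) for _ in range(1000)]
print(requirements[:5])                       # individual requirements vary
print(sum(requirements) / len(requirements))  # mean is close to 10
```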
12
Q

pauses in VR compared to FR

A

predictable pauses in the rate of responding are less likely with VR schedules than with FR schedules; organisms respond at fairly steady rates on VR schedules

13
Q

Interval schedules

A

a response is reinforced only if it occurs after a certain amount of time has passed

14
Q

fixed interval schedule

A

the amount of time that has to pass before a response is reinforced is constant from one trial to the next.

15
Q

fixed-interval scallop

A

The increase in response rate toward the end of each fixed interval

16
Q

variable interval schedule

A

the time required to set up the reinforcer varies from one trial to the next. The subject has to respond to obtain the reinforcer that has been set up, but now the set-up time is not as predictable.

17
Q

limited hold

A

a limited amount of time that the reinforcer remains available in FI or VI schedules

18
Q

rates of responding in FR and FI compared to VR and VI

A

FR and FI schedules produce high rates of responding just before the delivery of the next reinforcer.

VR and VI schedules both maintain steady rates of responding without predictable pauses

19
Q

What does the Reynolds experiment tell you about wages?

A

You can get employees to work harder for the same pay if the wages are provided on a ratio rather than an interval schedule

20
Q

inter-response time

A

the interval between successive responses

A participant with mostly short IRTs is responding at a high rate; a participant with mostly long IRTs is responding at a low rate.
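A quick worked example with hypothetical numbers: the overall response rate is roughly the reciprocal of the mean inter-response time, so mostly short IRTs imply a high rate and mostly long IRTs a low rate.

```python
# Response rate is roughly 1 / (mean IRT). IRT values are hypothetical.
short_irts = [0.5, 0.4, 0.6, 0.5]   # seconds; mostly short IRTs
long_irts = [4.0, 5.0, 6.0, 5.0]    # seconds; mostly long IRTs

high_rate = 1 / (sum(short_irts) / len(short_irts))  # 2.0 responses per second
low_rate = 1 / (sum(long_irts) / len(long_irts))     # 0.2 responses per second
print(high_rate, low_rate)
```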

21
Q

IRTs and schedules

A

Interval schedules differentially reinforce long IRTs and, thus, result in lower rates of responding than ratio schedules

22
Q

T or F - there are higher response rates on ratio schedules

A

True

23
Q

feedback function

A

the relation between an organism's rate of responding and the rate of reinforcement it obtains. For a ratio schedule, the feedback function is an increasing linear function with no limit.

Interval schedules have an upper limit on the number of reinforcers a participant can earn per unit of time.
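A rough Python sketch of the contrast (the schedule parameters, and the simple cap used for the interval case, are assumptions for illustration): on a ratio schedule the obtained reinforcement rate grows linearly with response rate without bound, while an interval schedule cannot pay off faster than its programmed maximum.

```python
# Feedback function sketch (hypothetical parameters).
# Ratio schedule: reinforcers per minute grow linearly with responses per minute.
# Interval schedule: reinforcers per minute cannot exceed the schedule's maximum;
# the hard cap used here is a crude simplification of the real feedback function.
def ratio_feedback(resp_per_min, ratio=10):
    return resp_per_min / ratio                  # no upper limit

def interval_feedback(resp_per_min, interval_s=30):
    max_reinf_per_min = 60 / interval_s          # e.g., VI 30 s tops out at 2/min
    return min(resp_per_min, max_reinf_per_min)

for r in (10, 50, 200):
    print(r, ratio_feedback(r), interval_feedback(r))
```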

24
Q

concurrent schedule

A

a complex reinforcement procedure in which the participant can choose any one of two or more simple reinforcement schedules that are available simultaneously. Concurrent schedules allow for the measurement of direct choice between simple schedule alternatives.

25
Q

matching law

A

a rule for instrumental behavior, proposed by R.J. Herrnstein, which states that the relative rate of responding on a particular response alternative equals the relative rate of reinforcement for that response alternative

relies on the fact that the relative rates of responding match the relative rates of reinforcement
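In symbols, the matching law says B_A / (B_A + B_B) = r_A / (r_A + r_B). A tiny numerical check with hypothetical session totals:

```python
# Matching law check with hypothetical totals: the relative rate of responding
# on alternative A equals the relative rate of reinforcement earned on A.
resp_a, resp_b = 75.0, 25.0      # responses on alternatives A and B
reinf_a, reinf_b = 30.0, 10.0    # reinforcers earned on A and B

relative_responding = resp_a / (resp_a + resp_b)        # 0.75
relative_reinforcement = reinf_a / (reinf_a + reinf_b)  # 0.75
print(relative_responding, relative_reinforcement)      # equal: matching holds
```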

26
Q

response bias

A

response bias occurs when the response alternatives require different amounts of effort or if the reinforcer provided for one response is much more attractive than the reinforcer for the other response

27
Q

undermatching

A

less sensitivity to the relative rate of reinforcement than predicted by the matching law
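Undermatching is commonly described with the generalized matching law, in which the response ratio equals a bias term times the reinforcement ratio raised to a sensitivity exponent; a sensitivity below 1 is undermatching. A small hypothetical illustration:

```python
# Generalized matching law sketch: response ratio = bias * (reinforcement ratio) ** s.
# s = 1 means perfect matching; s < 1 means undermatching. Values are hypothetical.
def predicted_response_ratio(reinf_ratio, sensitivity=1.0, bias=1.0):
    return bias * reinf_ratio ** sensitivity

reinf_ratio = 4.0  # alternative A is reinforced 4 times as often as B
print(predicted_response_ratio(reinf_ratio, sensitivity=1.0))  # 4.0 (matching)
print(predicted_response_ratio(reinf_ratio, sensitivity=0.8))  # ~3.0 (undermatching)
```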

28
Q

molar theories

A

molar theories explain aggregates of responses; they deal with the distribution of responses and reinforcers in choice situations over an entire experimental session

29
Q

molecular theories

A

explain matching relations by focusing on what happens at the level of individual responses

30
Q

molecular maximizing

A

according to molecular theories of maximizing, organisms always choose whichever response alternative is most likely to be reinforced at a given moment in time

31
Q

molar maximizing

A

molar theories of maximizing assume that organisms distribute their responses among various alternatives so as to maximize the amount of reinforcement they earn over the long run.

32
Q

melioration

A

a mechanism for achieving matching by responding so as to improve the local rates of reinforcement for response alternatives

33
Q

concurrent-chain schedule of reinforcement

A

a complex reinforcement procedure in which the participant is permitted to choose during the first link which of several simple reinforcement schedules will be in effect in the second link. Once a choice has been made, the rejected alternatives become unavailable until the start of the next trial. Concurrent-chain schedules allow for the study of choice with commitment.

34
Q

conditioned reinforcer

A

a stimulus that becomes an effective reinforcer because it is present when the primary reinforcer is provided

35
Q

delayed discounting

A

decrease in the value of a reinforcer as a function of how long one has to wait to obtain it
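One commonly used form of the discounting function is hyperbolic, V = A / (1 + kD); the amount, delays, and k value below are hypothetical.

```python
# Hyperbolic delay discounting sketch: V = A / (1 + k * D).
# The amount, delays, and discounting parameter k are hypothetical.
def discounted_value(amount, delay, k=0.1):
    return amount / (1 + k * delay)

for delay in (0, 10, 30, 60):
    print(delay, round(discounted_value(100, delay), 1))
# the reinforcer's value drops from 100 with no delay to about 14.3 at delay 60
```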