Chapter 6 Flashcards
schedule of reinforcement
a program or rule that determines which occurrence of a response is followed by the reinforcer.
ratio schedule
in this schedule, reinforcement depends only on the number of responses the organism performs
continuous reinforcement
a schedule in which each response results in delivery of the reinforcer
intermittent reinforcement
situations in which responding is only reinforced some of the time
fixed-ratio schedule
a schedule in which there is a fixed ratio between the number of responses and the number of reinforcers
cumulative record
a graph of the cumulative number of responses as a function of time; the slope of the record shows the rate of responding
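A cumulative record is easy to sketch numerically: total responses to date are plotted against time, so a steeper slope means a higher response rate. A minimal illustration with made-up response timestamps:

```python
# Hypothetical data: the times (in seconds) at which responses occurred.
response_times = [1, 2, 3, 5, 8, 12, 13, 14, 15]

# The cumulative record pairs each response time with the running total.
cumulative = [(t, i + 1) for i, t in enumerate(response_times)]
for t, total in cumulative:
    print(f"t={t:>2}s  total responses={total}")

# Response rate over an interval = rise / run of the record.
early_rate = 3 / 3   # 3 responses in the first 3 seconds -> 1 response/s
late_rate = 4 / 3    # 4 responses over the 3 s from t=12 to t=15
```

The flat stretch between t=8 and t=12 would appear as a horizontal segment, i.e., a pause in responding.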
post-reinforcement pause
the zero rate of responding that typically occurs just after reinforcement on a fixed ratio schedule (can also be thought of as a pre-ratio pause)
ratio run
the high and steady rate of responding that completes each ratio requirement
with higher ratio requirements, ____ post-reinforcement pauses occur
with higher ratio requirements, longer post-reinforcement pauses occur
ratio strain
if the ratio requirement is increased too abruptly, responding becomes disrupted and the animal is likely to pause periodically before completing the ratio requirement.
variable-ratio schedule
the number of responses required for the reinforcer varies from one occasion to the next (VR 10 = an average of 10 responses, e.g., 7 and 13)
pauses in VR compared to FR
predictable pauses in the rate of responding are less likely with VR schedules than with FR schedules; organisms respond at fairly steady rates on VR schedules
Interval schedules
a response is reinforced only if it occurs after a certain amount of time has passed
fixed-interval schedule
the amount of time that has to pass before a response is reinforced is constant from one trial to the next.
fixed-interval scallop
The increase in response rate toward the end of each fixed interval
variable-interval schedule
the time required to set up the reinforcer varies from one trial to the next. The subject has to respond to obtain the reinforcer that has been set up, but now the set-up time is not as predictable.
limited hold
a limited amount of time that the reinforcer remains available in FI or VI schedules
rates of responding in FR and FI compared to VR and VI
FR and FI schedules produce high rates of responding just before the delivery of the next reinforcer.
VR and VI schedules both maintain steady rates of responding without predictable pauses
What does the Reynolds experiment tell you about wages?
You can get employees to work harder for the same pay if the wages are provided on a ratio rather than an interval schedule
inter-response time
the interval between successive responses
a participant who has mostly short IRTs is responding at a high rate; a participant who has mostly long IRTs is responding at a low rate
IRTs and schedules
Interval schedules differentially reinforce long IRTs and, thus, result in lower rates of responding than ratio schedules
T or F - there are higher response rates on ratio schedules
True
feedback function
the relationship between an organism's rate of responding and the rate of reinforcement the schedule delivers. For a ratio schedule, the feedback function is an increasing linear function with no limit; interval schedules place an upper limit on the number of reinforcers a participant can earn, no matter how fast the participant responds.
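The contrast between ratio and interval feedback functions can be sketched numerically (the schedule parameters here are made up for illustration, and the interval function is a simple approximation):

```python
def ratio_feedback(response_rate, n=10):
    """FR/VR n: reinforcers per minute grow linearly with response
    rate (responses per minute), with no upper limit."""
    return response_rate / n

def interval_feedback(response_rate, interval_s=30):
    """VI 30 s (approximation): reinforcement rate is capped at
    60/30 = 2 reinforcers per minute, however fast the organism responds."""
    return min(response_rate, 60 / interval_s)

for rate in (1, 10, 100):
    print(rate, ratio_feedback(rate), interval_feedback(rate))
```

On the ratio schedule, responding 10x faster earns 10x the reinforcers; on the interval schedule, the payoff flattens once responding exceeds the schedule's ceiling.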
concurrent schedule
a complex reinforcement procedure in which the participant can choose any one of two or more simple reinforcement schedules that are available simultaneously. Concurrent schedules allow for the measurement of direct choice between simple schedule alternatives.
matching law
a rule for instrumental behavior, proposed by R.J. Herrnstein, which states that the relative rate of responding on a particular response alternative equals the relative rate of reinforcement for that response alternative
in other words, relative rates of responding match relative rates of reinforcement
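In its simplest two-alternative form, the matching law states B1/(B1+B2) = r1/(r1+r2), where B is responses on an alternative and r is reinforcers earned on it. A quick check with made-up numbers:

```python
# Matching law sketch (hypothetical reinforcement rates):
r1, r2 = 40, 20        # reinforcers per hour on the two alternatives
B_total = 90           # total responses the participant makes

# Predicted allocation of responding under strict matching:
B1 = B_total * r1 / (r1 + r2)
B2 = B_total * r2 / (r1 + r2)

print(B1, B2)  # twice the reinforcement -> twice the responding
```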
response bias
response bias occurs when the response alternatives require different amounts of effort or if the reinforcer provided for one response is much more attractive than the reinforcer for the other response
undermatching
less sensitivity to the relative rate of reinforcement than predicted by the matching law
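Response bias and undermatching are commonly expressed with the generalized matching law, log(B1/B2) = s * log(r1/r2) + log(b), where the exponent s captures sensitivity (s < 1 = undermatching) and b captures bias. A sketch with made-up values:

```python
def response_ratio(r1, r2, s=1.0, b=1.0):
    """Predicted B1/B2 under the generalized matching law.
    s < 1 models undermatching; b != 1 models response bias."""
    return b * (r1 / r2) ** s

print(response_ratio(40, 20))          # strict matching: ratio of 2
print(response_ratio(40, 20, s=0.8))   # undermatching: ratio shrinks toward 1
```

With s = 0.8 the predicted response ratio falls below 2, i.e., choice is less extreme than the reinforcement ratio would predict.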
molar theories
molar theories explain behavior in terms of aggregates of responses; they deal with the distribution of responses and reinforcers in choice situations during an entire experimental session
molecular theories
explain matching relations by focusing on what happens at the level of individual responses
molecular maximizing
according to molecular theories of maximizing, organisms always choose whichever response alternative is most likely to be reinforced at a given moment in time
molar maximizing
molar theories of maximizing assume that organisms distribute their responses among various alternatives so as to maximize the amount of reinforcement they earn over the long run.
melioration
a mechanism for achieving matching by responding so as to improve the local rates of reinforcement for response alternatives
concurrent-chain schedule of reinforcement
a complex reinforcement procedure in which the participant is permitted to choose during the first link which of several simple reinforcement schedules will be in effect in the second link. Once a choice has been made, the rejected alternatives become unavailable until the start of the next trial. Concurrent-chain schedules allow for the study of choice with commitment.
conditioned reinforcer
a stimulus that becomes an effective reinforcer through its association with a primary reinforcer
delay discounting
decrease in the value of a reinforcer as a function of how long one has to wait to obtain it
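Delay discounting is often modeled with a hyperbolic function, V = A / (1 + kD), where A is the reinforcer's amount, D the delay, and k a discount-rate parameter (the value of k below is made up for illustration):

```python
def discounted_value(amount, delay, k=0.1):
    """Present value of a reinforcer of size `amount` delivered
    after `delay` time units, using hyperbolic discounting."""
    return amount / (1 + k * delay)

print(discounted_value(100, 0))    # no delay: full value
print(discounted_value(100, 30))   # long delay: value sharply reduced
```

The steeper the discounting (larger k), the more an immediate small reward can outweigh a delayed large one.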