Exam 3 Re-Do Flashcards
E.L. Thorndike’s studies of learning started as an attempt to understand _______.
animal intelligence
The law of effect says that _____.
behavior is a function of its consequences
The training procedure Thorndike used in his famous experiments with cats is best described as _____.
discrete trial
The free operant procedure is most associated with ______.
Skinner
Studies of delayed reinforcement document the importance of _______.
contiguity
The level of deprivation is less important when the reinforcer used is a/an ______ reinforcer.
secondary
The one thing that all reinforcers have in common is that they ______.
strengthen behavior
All of the following are useful tips for shaping behavior except _____.
never back up
Shaping is the reinforcement of successive _________.
approximations of a desired behavior
Schlinger and Blakely found that the reinforcing power of a delayed reinforcer could be increased by ______.
preceding the reinforcer with a stimulus
The reappearance of previously effective behavior during extinction is called _____.
resurgence
Negative reinforcement is also called ___________.
escape-avoidance training
Thorndike plotted the results of his puzzle box experiments as graphs. The resulting curves show a/an ______ with succeeding trials.
decrease in time
Operant learning is sometimes called ________ learning.
instrumental
Clark Hull’s explanation of reinforcement assumes that reinforcers ______.
reduce a drive
Money is a good example of a _____ reinforcer.
generalized
Often the initial effect of an extinction procedure is an increase in the behavior called a/an extinction _______.
burst
According to ______ theory, schoolchildren are eager to go to recess because they have been deprived of the opportunity to exercise.
response deprivation
Resurgence may help account for _____.
regression
_____ is a neurotransmitter that seems to be important in reinforcement.
Dopamine
John spent his summer picking cantaloupes for a farmer. The farmer paid John a certain amount for every basket of cantaloupes picked. John worked on a _______.
fixed ratio schedule
The schedule to use if you want to produce the most rapid learning of new behavior is _____.
CRF
Bill spends his summer in the city panhandling. Every day he takes a position on a busy corner and accosts passersby saying, “Can you spare some change?” Most people ignore him, but every now and then someone gives him money. Bill’s reinforcement schedule is best described as a ______.
variable ratio schedule
Refer to George’s Pigeons. George is using a procedure called ______.
stretching the ratio
Refer to George’s pigeons. Things are going pretty well for George until he jumps from reinforcing every tenth response to reinforcing every 50th response. At this point, the pigeon responds erratically and nearly stops responding entirely. George’s pigeon is suffering from ____.
ratio strain
Stanley wants to determine which of two reinforcement schedules is more attractive to rats. He trains a rat to press a lever for food, and then puts the rat into an experimental chamber containing two levers. Pressing one lever produces reinforcement on an FR 10 schedule; pressing the other lever produces reinforcement on an FI 10” schedule. Lever pressing is on a ______.
concurrent schedule
Refer to pigeon study. You predict that the bird will peck_____.
the red disk about twice as often as the green disk.
Refer to pigeon study. The principle that allows you to predict the behavior of the pigeon is called the ______.
matching law
A reduction in response rate following reinforcement is called a _____.
postreinforcement pause
The schedule that is likely to produce a cumulative record with scallops is the _____.
FI schedule
One explanation for the PRE implies that the effect is really an illusion. This is the _____.
response unit hypothesis
CRF is synonymous with ____.
FR 1
In schedules research, VD stands for _____.
variable duration
Shirley trains a rat to press a lever and then reinforces lever presses on an FR 10 schedule when a red light is on, and an FI 10” schedule when a green light is on. In this case, lever pressing is on a ____.
multiple schedule
Studies of choice involve _____.
concurrent schedules
The study of reinforcement schedules suggests that the behavior we call stick-to-itiveness is largely the product of ____.
reinforcement history
Your text reports the case of a man who apparently made hundreds of harassing phone calls. The man’s behavior was most likely on a/an _____.
VR schedule
A schedule that does not require the performance of a particular behavior is the _____.
FT schedule
In a _____ schedule, reinforcement is contingent on the continuous performance of a behavior for some period of time.
fixed duration
_____ is an excellent schedule for producing a high rate of behavior.
DRH
Of the following explanations, the one that is most satisfactory from the standpoint of science is ____.
Harry has stick-to-itiveness because his parents praised him for sticking with projects
Operant learning may also be referred to as _____.
instrumental learning
All of the following are recognized kinds of reinforcers except ____.
classical
Thorndike emphasized that we learn mainly from ____.
success
T/F One everyday example of a VR schedule is the lottery.
True
T/F When a response is placed on extinction, there is often an increase in emotional behavior.
True
T/F One effect of the extinction procedure is an increase in the variability of behavior.
True
T/F Unexpected reinforcers produce more dopamine than expected reinforcers.
True
T/F Negative reinforcement and punishment are synonyms.
False
T/F Skinner generally used a free operant procedure in his research.
True