Chapter 5 - Textbook Flashcards

1
Q

Three fundamental elements of the instrumental conditioning paradigm:

A

(1) the instrumental response
(2) the reinforcer or goal event
(3) the relation between the instrumental response and the goal event

2
Q

Instrumental behavior:

A

Behavior that occurs because it was previously effective in producing certain consequences

3
Q

consequences of an action can determine whether

A

you make that response again

4
Q

factors responsible for goal-directed behavior are difficult to isolate without:

A

experimental manipulation

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
6
Q

Laboratory and theoretical analyses of instrumental conditioning began in earnest with the work of the American psychologist

A

E. L. Thorndike.

7
Q

Thorndike’s original intent was to study

A

animal intelligence

8
Q

Thorndike devised a series of __ for his experiments

A

puzzle boxes

9
Q

Thorndike’s training procedure consisted of:

A

placing a hungry animal (often a young cat) in the puzzle box with some food left outside in plain view of the animal
The task for the animal was to learn how to get out of the box and get the food.

10
Q

Thorndike’s careful empirical approach was a significant advance in

A

the study of animal intelligence

11
Q

to Thorndike many aspects of behavior seemed

A

rather unintelligent

12
Q

Thorndike interpreted the results of his studies as

A

Reflecting the learning of a new S–R association

13
Q

Thorndike formulated the

A

law of effect.

14
Q

The law of effect states that:

A

if a response R in the presence of a stimulus S is followed by a satisfying event, the association between the stimulus S and the response R is strengthened. If the response is followed by an annoying event, the S–R association is weakened.

15
Q

according to the law of effect, what is learned is:

A

an association between the response and the stimuli present at the time of the response.

16
Q

the consequence of the response is

A

not one of the elements in the association

17
Q

The satisfying or annoying consequence simply serves to:

A

strengthen or weaken the association between the preceding stimulus and response.

18
Q

Thorndike’s law of effect involves:

A

S–R learning

19
Q

Once learned, habitual responses occur because:

A

they are triggered by an antecedent stimulus and not because they result in a desired consequence

21
Q

Discrete-trial procedures are similar to the method Thorndike used in that

A

each training trial begins with putting the animal in the apparatus and ends with removal of the animal after the instrumental response has been performed.

22
Q

Behavior in a runway can be quantified by measuring:

A

RUNNING SPEED: how fast the animal gets from the start box to the goal box.

23
Q

running speed typically increases with:

A

repeated training trials

24
Q

Another common measure of behavior in runways is

A

response latency

25
Q

latency:

A

the time it takes the animal to leave the start box and begin running down the alley.

26
Q

Typically, latencies become __ as training progresses

A

shorter

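
The two runway measures on the cards above (latency and running speed) are simple ratios of time and distance. A minimal illustrative sketch in Python; the function name, timestamps, and alley length are all invented for the example:

```python
# Illustrative only: computing the two discrete-trial runway measures,
# response latency and running speed, from per-trial timestamps.
# All names and numbers here are invented for this sketch.

def runway_measures(t_door_open, t_leave_start, t_reach_goal, alley_length_m):
    """Return (latency_s, speed_m_per_s) for one runway trial."""
    # Latency: time to leave the start box (typically shortens with training).
    latency = t_leave_start - t_door_open
    # Running speed: alley length over traversal time (typically increases
    # with repeated training trials).
    speed = alley_length_m / (t_reach_goal - t_leave_start)
    return latency, speed

lat, spd = runway_measures(0.0, 2.5, 6.5, 1.2)
print(lat, spd)  # 2.5 0.3
```
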
27
Q

T maze consists of:

A

a start box and alleys arranged in the shape of a T. A goal box is located at the end of each arm of the T.

28
Q

Because the T maze has two choice arms, it can be used to:

A

study more complex questions

29
Q

free-operant procedures allow

A

the animal to repeat the instrumental response over and over, without constraint, and without being taken out of the apparatus until the end of the experimental session.

30
Q

the free-operant method was invented by

A

B. F. Skinner (1938) to study behavior in a more continuous manner than is possible with mazes.

31
Q

Skinner (Figure 5.4) was interested in analyzing in the laboratory a form of behavior that would be representative of

A

all naturally occurring ongoing activity

32
Q

Skinner recognized that before behavior can be experimentally analyzed:

A

a measurable unit of behavior must be defined

33
Q

Skinner proposed the concept of

A

the operant as a way of dividing behavior into meaningful measurable units.

34
Q

operant response is defined in terms of:

A

the effect that the behavior has on the environment

35
Q

Activities that have the same environmental effect are considered to be:

A

instances of the same operant response

36
Q

Behavior is not defined in terms of particular muscle movements but in terms of

A

how the behavior operates on the environment.

37
Q

any response that is required to produce a desired consequence is:

A

an instrumental response because it is “instrumental” in producing a particular outcome.

38
Q

food-delivery device is called:

A

the food magazine.

39
Q

After enough pairings of the sound of the food magazine with food delivery, the sound elicits

A

a classically conditioned approach response.

40
Q

preliminary phase of conditioning is called

A

magazine training.

41
Q

After magazine training, the rat is ready to

A

learn the required operant response.

42
Q

sequence of training steps is called:

A

response shaping

43
Q

shaping of a new operant response requires:

A

training components or approximations to the final behavior.

44
Q

Successful shaping of behavior involves __ components.

A

three

45
Q

What are the three components involved in shaping?

A

(1) you have to clearly define the final response you want the trainee to perform
(2) you have to clearly assess the starting level of performance, no matter how far it is from the final response you are interested in.
(3) you have to divide the progression from the starting point to the final target behavior into appropriate training steps or successive approximations

46
Q

the execution of the training plan for shaping involves two complementary tactics:

A

(1) reinforcement of successive approximations to the final behavior
(2) withholding reinforcement for earlier response forms
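
The two complementary tactics above can be sketched as a moving reinforcement criterion: reinforce any response that meets the current approximation, then demand a little more. This is a toy model only, not a real training protocol; the "response magnitude" values, criterion, and step size are invented:

```python
# Toy sketch of shaping: reinforce successive approximations to a target
# response while withholding reinforcement for earlier, cruder forms.
# The response magnitudes, criterion, and step size are invented.

def shape(responses, start_criterion, target, step):
    """Reinforce each response that meets the current criterion, then
    tighten the criterion toward the target (the next approximation)."""
    criterion = start_criterion
    reinforced = []
    for magnitude in responses:
        if magnitude >= criterion:
            reinforced.append(magnitude)               # reinforce this form
            criterion = min(target, criterion + step)  # demand more next time
        # below-criterion responses earn nothing: earlier forms extinguish
    return reinforced, criterion

# e.g., lever presses of gradually increasing force (arbitrary units)
hits, final_criterion = shape([1, 2, 2, 4, 3, 6, 8, 9],
                              start_criterion=2, target=8, step=2)
print(hits, final_criterion)  # [2, 4, 6, 8, 9] 8
```
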

47
Q

in shaping, we are teaching the rat how to:

A

combine familiar responses into a new activity.

48
Q

Instrumental conditioning often involves

A

the construction, or synthesis, of a new behavioral unit from preexisting response components that already occur in the organism’s repertoire

49
Q

Instrumental conditioning can also be used to produce responses

A

unlike anything the trainee ever did before.

50
Q

The creation of new responses by shaping depends on

A

the inherent variability of behavior

51
Q

In contrast to discrete-trial techniques for studying instrumental behavior, free-operant methods permit:

A

continuous observation of behavior over long periods.

52
Q

With continuous opportunity to respond, the organism, rather than the experimenter, determines

A

the frequency of its instrumental response

53
Q

free-operant techniques provide a special opportunity to observe:

A

changes in the likelihood of behavior over time
54
Q

Measures of response latency and speed that are commonly used in discrete-trial procedures

A

do not characterize the likelihood of repetitions of a response.

55
Q

__ has become the primary measure in studies that employ free-operant procedures.

A

Response rate
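
Response rate is just a count of responses divided by session time. A minimal sketch; the function name and numbers are illustrative only:

```python
# Minimal sketch: response rate, the primary free-operant measure,
# as responses per minute over a session. Names are illustrative.

def response_rate_per_min(response_times_s, session_length_s):
    """Number of responses divided by session duration in minutes."""
    return len(response_times_s) / (session_length_s / 60.0)

# 30 responses in a 10-minute (600 s) session
print(response_rate_per_min(list(range(30)), 600.0))  # 3.0
```
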

56
Q

In all instrumental conditioning situations, the participant makes

A

a response and thereby produces an outcome or consequence.

57
Q

negative reinforcement

A

the instrumental response turns off an aversive stimulus.

58
Q

Instrumental behavior is ___ by punishment and ___ by negative reinforcement.

A

Instrumental behavior is decreased by punishment and increased by negative reinforcement.

59
Q

In omission training or negative punishment, the instrumental response results in

A

the removal of a pleasant or appetitive stimulus

60
Q

Negative punishment is often preferred over positive punishment as a method of discouraging human behavior because:

A

it does not involve delivering an aversive stimulus.

61
Q

the essence of instrumental behavior is that it is controlled by

A

its consequences.

62
Q

instrumental conditioning fundamentally involves three elements

A

the instrumental response, the outcome of the response (the reinforcer), and the relation or contingency between the response and the outcome

63
Q

Thorndike described instrumental behavior as involving the

A

stamping in of an S–R association

64
Q

Skinner wrote about behavior being

A

strengthened or reinforced

65
Q

both thorndike and skinner emphasized that:

A

reinforcement increases the likelihood that the instrumental response will be repeated in the future.

66
Q

Thorndike and Skinner were partially correct in saying that

A

responding becomes more stereotyped with continued instrumental conditioning

67
Q

“A behavior cannot be reinforced by a reinforcer if

A

it is not naturally linked to that reinforcer in the repertoire of the animal”

68
Q

“A behavior cannot be reinforced by a reinforcer if it is not naturally linked to that reinforcer in the repertoire of the animal” (p. 78).
This type of natural linkage was first observed by

A

Thorndike.

69
Q

Thorndike used the term ___ to explain his failures to train scratching and yawning as instrumental responses

A

belongingness

70
Q

according to the concept of belongingness:

A

certain responses naturally belong with the reinforcer because of the animal’s evolutionary history

71
Q

The effectiveness of the procedure in increasing an instrumental response will depend on

A

the compatibility of that response with the preexisting organization of the feeding system.

72
Q

the nature of other responses that emerge during the course of training (or instinctive drift) will depend on:

A

the behavioral components of the feeding system that become activated by the instrumental conditioning procedure.

73
Q

According to the behavior systems approach, we should be able to predict which responses will increase with food reinforcement by

A

studying what animals do when their feeding system is activated in the absence of instrumental conditioning.

74
Q

Another way to diagnose whether a response is a part of a behavior system is to

A

perform a classical conditioning experiment. Through classical conditioning, a CS elicits components of the behavior system activated by the US. If instinctive drift reflects responses of the behavior system, responses akin to instinctive drift should be evident in a classical conditioning experiment.

75
Q

The effectiveness of a reinforcer depends not only on its quality and quantity but also on

A

what the subject received previously.

76
Q

positive and negative behavioral contrast effects.

A

Speaking loosely, a large reward is treated as especially good after reinforcement with a small reward, and a small reward is treated as especially poor after reinforcement with a large reward

77
Q

Behavioral-contrast effects can occur either because of

A

(1) a shift from a prior reward magnitude
(2) an anticipated reward

78
Q

The hallmark of instrumental behavior is that it produces and is controlled by

A

its consequences

79
Q

two types of relationships between a response and a reinforcer

A

(1) temporal relation
(2) causal relation, or response–reinforcer contingency

80
Q

The temporal relation refers to

A

the time between the response and the reinforcer. A special case of the temporal relation is temporal contiguity.

81
Q

Temporal contiguity refers to

A

the delivery of the reinforcer immediately after the response.

82
Q

The response– reinforcer contingency refers to

A

the extent to which the instrumental response is necessary and sufficient to produce the reinforcer.

83
Q

since the early work of Grice (1948), learning psychologists have correctly emphasized that instrumental conditioning requires providing

A

the reinforcer immediately after the occurrence of the instrumental response.

84
Q

Why is instrumental conditioning so sensitive to a delay of reinforcement?

A

A major culprit is the credit-assignment problem. With delayed reinforcement, it is difficult to figure out which response deserves the credit for the delivery of the reinforcer. As I pointed out earlier, behavior is an ongoing, continual stream of activities.

85
Q

couple of ways to overcome the credit-assignment problem:

A

(1) provide a secondary or conditioned reinforcer immediately after the instrumental response, even if the primary reinforcer cannot occur until sometime later
(2) mark the target instrumental response in some way to make it distinguishable from the other activities of the organism

86
Q

The effectiveness of a marking procedure was first demonstrated by

A

David Lieberman and his colleagues

87
Q

The response–reinforcer contingency refers to

A

the extent to which the delivery of the reinforcer depends on the prior occurrence of the instrumental response.

88
Q

In studies of delayed reinforcement, there is a perfect causal relation between the response and the reinforcer, but

A

learning is disrupted

A perfect causal relation between the response and the reinforcer is not sufficient to produce vigorous instrumental responding.

89
Q

Skinner’s Superstition Experiment

A

The role of contiguity versus contingency in instrumental learning became a major issue with Skinner’s superstition experiment

90
Q

Skinner’s explanation of superstitious behavior rests on the idea of

A

accidental, or adventitious, reinforcement. Adventitious reinforcement refers to the accidental pairing of a response with delivery of the reinforcer.

91
Q

Reinterpretation of the Superstition Experiment: Skinner’s bold claim that temporal contiguity, rather than response–reinforcer contingency, is most important for instrumental conditioning was challenged by subsequent empirical evidence

A

Staddon and Simmelhag (1971) repeated Skinner’s experiment. However, Staddon and Simmelhag made more extensive and systematic observations. They defined a variety of responses, such as orienting to the food hopper, pecking the response key, wing flapping, turning in quarter circles, and preening. They then recorded the frequency of each response according to when it occurred during the interval between successive free deliveries of food.

92
Q

data from Staddon and Simmelhag revealed:

A

Terminal responses: some of the responses occurred predominantly toward the end of the interval between successive reinforcers.

Interim responses: other activities increased in frequency after the delivery of food and then decreased as the time for the next food delivery drew closer.

93
Q

Explanation of the Periodicity of Interim and Terminal Responses

A

Staddon and Simmelhag (1971) suggested that terminal responses are species-typical responses that reflect the anticipation of food as time draws closer to the next food presentation.
They viewed interim responses as reflecting other sources of motivation that are prominent early in the interfood interval, when food presentation is unlikely.

94
Q

Investigated the effects of exposure to uncontrollable shock on subsequent escape-avoidance learning in dogs:

A

Overmier and Seligman (1967) and Seligman and Maier (1967),

95
Q

The Triadic Design

A

Learned-helplessness experiments are usually conducted using the triadic design

96
Q

learned-helplessness hypothesis

A

assumes that during exposure to uncontrollable shocks, animals learn that the shocks are independent of their behavior—that there is nothing they can do to control the shocks. Furthermore, they come to expect that reinforcers will continue to be independent of their behavior in the future.

97
Q

learning deficit (in learned helplessness hypothesis) occurs for two reasons:

A

(1) expectation of lack of control reduces the motivation to perform an instrumental response
(2) even if they make the response and get reinforced in the conditioning phase, the previously learned expectation of lack of control makes it more difficult for the subjects to learn that their behavior is now effective in producing reinforcement.

98
Q

It is important to distinguish the learned-helplessness hypothesis from the learned- helplessness effect:

A

The effect is the pattern of results obtained with the triadic design (disruption of instrumental conditioning caused by prior exposure to inescapable shock). The learned-helplessness effect has been replicated in numerous studies and is a firmly established finding. By contrast, the learned-helplessness hypothesis is an explanation or interpretation of the effect, which has been provocative and controversial since its introduction (LoLordo & Overmier, 2011).

99
Q

Alternatives to the Helplessness Hypothesis:

A

(1) activity deficit hypothesis
(2) attention deficit hypothesis

100
Q

According to the activity deficit hypothesis

A

animals in Group Y show a learning deficit following exposure to inescapable shock because inescapable shocks encourage animals to become inactive or freeze
