Instrumental learning Flashcards

1
Q

early work

A

Animal psychologists were studying instrumental learning before Pavlov’s work became known:

  • Small (rats in a scaled-down Hampton Court maze)
  • Thorndike (cats in puzzle boxes)
2
Q

instrumental conditioning

A

Stimulus S (e.g. being in the box) –> response R, treated like a reflex.

When the animal encountered a certain discriminative stimulus S, it emitted the response R.

Reinforcement established the link between S and R.

3
Q

instrumental conditioning procedures

A

Positive reinforcement: R –> appetitive outcome delivered (more R)

Punishment: R –> aversive outcome delivered (less R)

Negative reinforcement: R –> aversive outcome removed or avoided (more R)

Omission training: R –> appetitive outcome withheld (less R)

4
Q

reward and reinforcement

A

Thorndike’s Law of Effect: animals repeat actions that lead to a satisfying state of affairs; this strengthening is called reinforcement.

Hull: reinforcement is due to drive reduction, hence the animal will work for food if it is hungry, for water if it is thirsty, and so on.

5
Q

schedules of reinforcement

A

Extinction applies to instrumental conditioning too – stop giving reinforcers and the response ceases.
But we can get away with reinforcing only some of the responses the subject emits, and still get stable conditioned responding.
A schedule of reinforcement is a rule for deciding which responses we reinforce.
Different schedules lead to different, highly predictable patterns of response, instantly recognisable on a cumulative record.

6
Q

sample schedules and their effects

A

Continuous reinforcement, CRF – reinforce every response

Fixed ratio, FR – reinforce every nth response. Pause after each reinforcer, followed by fast responding
- The animal sates if given too many reinforcers; delivering fewer is one reason this method is useful

Variable ratio, VR – reinforce every nth response on average. Continuous fast responding
- The animal cannot predict when the next reinforcer will come

Fixed interval, FI – reinforce the first response after time t has elapsed since the last reinforcer. Pause after each reinforcer, followed by a gradually increasing response rate
- Responding slows after reinforcement, then speeds up as the interval nears its end

Variable interval, VI – same as FI but with a variable time period. Continuous moderate response rate
- Produces a more stable pattern of responding
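
Since a schedule is just a rule for deciding which responses earn a reinforcer, these schedule types can be written directly as decision rules. A minimal Python sketch – the function names, the closure style, and the exponential VI intervals are illustrative choices, not from the lecture:

```python
import random

# Each schedule is a function: given the time t (seconds) of the
# current response, it answers whether THIS response is reinforced.

def fixed_ratio(n):
    """FR(n): reinforce every nth response (FR1 = continuous reinforcement)."""
    count = 0
    def schedule(t):
        nonlocal count
        count += 1
        if count >= n:
            count = 0
            return True
        return False
    return schedule

def variable_ratio(n):
    """VR(n): reinforce each response with probability 1/n,
    so on average every nth response is reinforced."""
    def schedule(t):
        return random.random() < 1.0 / n
    return schedule

def fixed_interval(interval):
    """FI(t): reinforce the first response made at least `interval`
    seconds after the previous reinforcer."""
    last = 0.0
    def schedule(t):
        nonlocal last
        if t - last >= interval:
            last = t
            return True
        return False
    return schedule

def variable_interval(mean_interval):
    """VI(t): like FI, but each required wait is drawn at random
    (here exponentially) around the mean."""
    last = 0.0
    wait = random.expovariate(1.0 / mean_interval)
    def schedule(t):
        nonlocal last, wait
        if t - last >= wait:
            last = t
            wait = random.expovariate(1.0 / mean_interval)
            return True
        return False
    return schedule

# Usage: a subject responding every 2 s for 10 minutes on VI30.
vi30 = variable_interval(30.0)
reinforcers = sum(vi30(float(t)) for t in range(0, 600, 2))
print(reinforcers)  # roughly 600 / 30 = 20 reinforcers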

7
Q

ratio schedules

A

reinforcement depends on number of responses

  • if n = 1, this is continuous reinforcement
  • if n > 1, this is partial or intermittent reinforcement
  • fixed ratio schedule, e.g. FR10 (exactly 10 responses per reinforcer)
  • variable ratio schedule, e.g. VR10 (the average number of responses required is 10)
8
Q

interval schedules

A

reinforcement depends on a time interval

Fixed interval, e.g. FI4

Variable interval, e.g. VI2 (again, the number gives the average interval) – the most widely used schedule

9
Q

instrumental learning: can it be explained as form of Pavlovian conditioning?

A

US = reward, e.g. food, freedom

UR = natural response, e.g. eating, approach

CS = starting condition, e.g. the start box of a maze, the inside of a puzzle box, the sight of the lever

CR = approach

So when the rat “learns to press” the lever, it may simply find the lever attractive (stimulus substitution) and bump into it for that reason. Is the apparent learning of the response simply an artifact of Pavlovian conditioning?

10
Q

omission schedule

A

Distinguishing between Pavlovian and instrumental conditioning: under an omission schedule the reinforcer is withheld whenever the response is made, so a truly instrumental response should die away, while a response maintained purely by Pavlovian stimulus–reward pairing will persist even though it now costs the animal reward.

see notes

11
Q

Grindley’s bidirectional control

A

Grindley’s subjects (guinea pigs) had a switch on either side of the head.

If the animal touched the right-hand switch when the buzzer sounded, it got carrot; turning the other way, nothing happened.

The contingency was then reversed – can it learn to turn the other way? If it can, it is learning the action itself.

The fact that the animals will learn to turn their heads left or right when the buzzer has the same relationship with reward establishes that this is not Pavlovian conditioning.

12
Q

contemporary issue

A

Actions and Habits – is all instrumental learning the same?

We shall see that the answer is no.

In some circumstances the S->R account seems to be the correct one.

In others, there is clear evidence that the animal has some expectancy of an outcome and modifies its behaviour accordingly.

13
Q

Adams

A

Adams provided evidence that animals have some representation of the outcome in instrumental learning: if the outcome is devalued, they respond less in extinction.

Rats first press a lever for food pellets.

Devaluation (Dev) group: the pellets are then paired with lithium chloride, which makes the rats ill, so they no longer want them.

Non-paired (Non) group: pellets and lithium chloride are given on different days, so they are not paired.

Both groups are then returned to the box and tested in extinction (no pellets given).

Result: the Dev group presses the lever less, and does not re-acquire the response as quickly afterwards.

see notes

Adams could show that rats responded less in extinction for an outcome that had been made aversive, but only if they had not been overtrained (100 trials); if they had been overtrained (500 trials), they continued to respond.

Overtrained: after 500 trials performance is no longer improving, but the extra experience makes the response habitual. The overtrained Dev group still presses the lever as much as before devaluation – responding even though the reward is no longer wanted.

These overtrained animals are exhibiting what Adams and Dickinson called a habit – something an S->R account would expect, where the current value of the outcome has no impact on the probability of making the response in the presence of the discriminative stimulus.
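
One way to see the distinction these results point to is to contrast the two accounts in code. In this toy sketch (my illustration, not Adams and Dickinson’s model; all names and numbers are made up), a goal-directed controller looks up the current value of the outcome the response earns, while a habit controller uses a response strength cached during training, so devaluation only affects the former:

```python
# Toy contrast between goal-directed (response -> outcome) control and
# habitual (S -> R) control. Values are illustrative, not data.

outcome_value = {"pellet": 1.0}          # current value of each outcome
response_outcome = {"press": "pellet"}   # what the rat knows pressing earns
habit_strength = {"press": 1.0}          # S->R strength stamped in by training

def goal_directed_value(response):
    # Computed from the CURRENT outcome value, so devaluation matters.
    return outcome_value[response_outcome[response]]

def habit_value(response):
    # Cached strength: the current outcome value plays no role.
    return habit_strength[response]

outcome_value["pellet"] = 0.0            # pair pellets with LiCl: devalued

print(goal_directed_value("press"))      # 0.0 -> the 100-trial rat stops
print(habit_value("press"))              # 1.0 -> the overtrained rat keeps pressing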

14
Q

Colwill and Rescorla (1990)

A

Colwill and Rescorla showed that some representation of the outcome is involved in determining performance.

A rat in a box can either press a lever or pull a chain.

Under a light: lever –> food, chain –> sucrose water (the two outcomes are equally valuable).

Under a tone: lever –> sucrose water, chain –> food.

So which outcome a response earns is conditional on the stimulus presented.

The sucrose water is then paired with lithium chloride, so the rats no longer like it – it is devalued.

Test in extinction:

see notes

Under the light the rats pull the chain less (the chain’s outcome was the one paired with lithium chloride); under the tone they press the lever less.

The rats are basing their decisions on the outcome each response would provide.

see notes
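
The design is essentially a stimulus-conditional response–outcome table, which makes the prediction easy to express in code. A sketch under assumed illustrative values (the dictionaries and numbers are mine, not the paper’s):

```python
# Stimulus-conditional response -> outcome mapping from the design.
mapping = {
    "light": {"lever": "food", "chain": "sucrose water"},
    "tone":  {"lever": "sucrose water", "chain": "food"},
}
value = {"food": 1.0, "sucrose water": 0.0}   # sucrose water devalued with LiCl

def preferred_response(stimulus):
    # Pick the response whose outcome is currently worth more.
    responses = mapping[stimulus]
    return max(responses, key=lambda r: value[responses[r]])

print(preferred_response("light"))   # 'lever' – the chain would earn sucrose water
print(preferred_response("tone"))    # 'chain' – the lever would earn sucrose water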

15
Q

Dickinson and the castaway’s dilemma

A

Dickinson distinguished between Actions, which require knowledge of the expected outcome, and Habits of the S->R kind. He then set about testing this idea by proposing what he now terms the “Castaway’s Dilemma”.

In this, someone cast away on a desert island is hungry but manages to find and eat coconuts. Then they become thirsty, and there is no water available – what do they do?

The answer is pretty obvious – they drink coconut milk – but would an animal have the ability to learn this?

16
Q

castaway’s dilemma - transferred to lab

A

see notes

In the lab version, hungry rats learned two responses, one earning food pellets and the other sugar water; they were then made thirsty and tested in extinction.

They (Dawson and Dickinson) found no difference in the performance of the two actions.

Both actions were performed more than in a control group that had not been made thirsty, but they interpreted this as general activation of the available responses by thirst (which seems perfectly reasonable).

There was no sign of any outcome-specific activation of an action: thirst energised behaviour, but did not direct the animal to the right response.

see notes

17
Q

Dickinson and Wyatt (1997)

A

They can solve the Castaway’s dilemma!

The animals now respond more for the sugar water under thirst.

But only if you let the animal learn that one of the reinforcers (in this case sugar water) is valuable under the new drive state (thirst) before the test.

This was the new idea they incorporated in their revised design.

see notes

The animal needs to know that sugar water is good when it is thirsty.

Given both sugar water and food while thirsty, it finds the sugar water better – and can then solve the problem.

see notes

18
Q

analysis

A

These results show that incentive learning is needed to support drive-related action on the basis of the available outcomes.

Tony Dickinson has argued, on the basis of these results, for a model of instrumental performance that requires inference.

Thus the animal is postulated to reason (see the sketch after this list):

  1. I’m thirsty
  2. If I pull the chain I get sugar water
  3. Sugar water is good when I’m thirsty
  4. I’ll pull the chain then.

And for this inference to be possible, each step in the chain has to be available, so the animal must have learned every one of them.

This is not a reflex: the animal is pulling together separate pieces of knowledge.
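
The role of incentive learning can be made concrete with a toy rendering of this inference chain (my own illustration; the dictionaries and values are assumptions, not Dickinson’s model). The crucial point is that step 3 – the outcome’s value under the current drive state – must itself have been learned through direct experience:

```python
drive = "thirst"                                  # 1. I'm thirsty
response_outcome = {"chain": "sugar water"}       # 2. chain -> sugar water
# Incentive values are known only for (outcome, drive) pairs the animal
# has actually experienced; so far it has only drunk sugar water hungry.
incentive = {("sugar water", "hunger"): 1.0}

def will_perform(response):
    outcome = response_outcome[response]
    # 3. Is this outcome known to be good under the current drive?
    return incentive.get((outcome, drive), 0.0) > 0   # 4. then act

print(will_perform("chain"))                      # False: dilemma unsolved

# Dickinson and Wyatt's addition: let the animal drink sugar water
# while thirsty before the test (incentive learning).
incentive[("sugar water", "thirst")] = 1.0
print(will_perform("chain"))                      # True: pull the chain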