LECTURES 7-12 Flashcards

1
Q

STIMULUS CONTROL

A

The extent to which stimuli that precede or accompany operant behaviour come to control the rate or probability of that behaviour

2
Q

GUTTMAN & KALISH (1956)

A

Trained pigeons to peck a single S+ (a 580 nm light); generalization testing showed decremental gradients around S+.

3
Q

JENKINS & HARRISON (1960)

A

Is stimulus control innate or learned? Two groups of pigeons pecked a white key.

Group 1 (SST): VI schedule with the 100 Hz tone sounding continuously and food always available.

Group 2 (IDDT): VI schedule while the tone was on; extinction when the tone was off.

Group 2 discriminated much more sharply than Group 1. Experience is necessary for stimulus control (sharp generalization gradients) to develop.

4
Q

PETERSON (1962)

A

Early rearing experiment - two groups of ducklings: one raised in a normal colour environment, the other raised in a monochromatic yellow environment. Both groups were trained to peck a yellow key. Generalization testing found normal (decremental) gradients for the normal group and flat gradients for the monochromatic group. It appears prior experience is necessary - the behaviour is learned.

5
Q

GENERAL PROCESS APPROACH TO LEARNING

A

Behavioural principles of learning are common across all species.

6
Q

EVOLUTIONARILY PREPARED

A

Associations or behaviours that fit the animal's natural environment and so are easily learned, e.g. associating a taste with feeling sick

7
Q

EVOLUTIONARILY UNPREPARED

A

Associations or behaviours that do not fit the animal's natural environment and so are learned only with difficulty, e.g. associating a light with feeling sick

8
Q

HEARST AND KORESKO (1968)

A

4 groups of pigeons on SST using line orientation. S+ was a vertical white line on a black key with food on a VI schedule. Test done in extinction.

  • 2 days of training: very flat gradient
  • 4 days: steeper
  • 7 days: steeper still, with more responses
  • by 14 days: the gradient has shifted up the graph and shows a large peak at the training stimulus, with a decremental gradient on either side

9
Q

DIFFERENTIAL REINFORCEMENT

A

Arranging different reinforcement schedules for different stimuli and different responses.

10
Q

DISCRIMINATION TRAINING

A

One response is reinforced; the other is extinguished, reinforced at a lower rate, or punished.
Expect greater discrimination if S+ and S- lie along the same dimension in the generalization test.

11
Q

INTRA DIMENSIONAL DISCRIMINATION TRAINING

A

Where S+ and S- lie along the same stimulus dimension in the generalization test (e.g. two wavelengths, as in Hanson, 1959)

12
Q

INTER DIMENSIONAL DISCRIMINATION TRAINING

A

Where S+ and S- do not lie along the same stimulus dimension in the generalization test (e.g. a line on a key vs a plain key, as in Honig et al., 1963)

13
Q

POSITIVE PEAK SHIFT

A

The greatest responding in the generalization test occurs not at S+ but at another stimulus on the opposite side of S+ from S-.

14
Q

HANSON (1959)

A

Control: SST with a 550 nm S+.
Experimental: intradimensional discrimination training, 550 nm S+ and 560 nm S-.
Both groups received a generalization test: the control group showed a large peak at S+, while the experimental group showed a peak shifted away from S- (positive peak shift).

15
Q

ABSOLUTE THEORY OF STIMULUS CONTROL

A

Relies on the absolute values of S+ and S-; predicts no peak shift, just generalization of responding to wavelengths around S+ and around S-, each controlling behaviour independently.

16
Q

RELATIONAL THEORY OF STIMULUS CONTROL

A

The animals are learning something about the relationship between the stimuli, i.e. not just that S+ looks different from S-, but that S+ is greener than S-. They are learning a rule.

17
Q

KÖHLER

A

Chickens - simultaneous choice task. One light grey card (S+) and one dark grey card (S-). Pecking S+ knocks it down, revealing food; pecking S- does nothing.
S- was then swapped for an S++ card even lighter than S+. If the chickens were responding according to the absolute theory, they should have continued to peck S+, as this signals reinforcement. Instead they responded according to the relational theory and pecked S++, indicating they had learnt to ‘peck the lighter card’.

18
Q

SPENCE (1937)

A

An absolute theory of stimulus control. Proposed that an excitatory gradient forms around S+ and an inhibitory gradient around S-, and that observed behaviour is the sum of these gradients. Accounts for peak shift: at stimuli just beyond S+ on the side away from S-, a considerable amount of excitation still generalizes from S+ but little inhibition generalizes from S- (it is so far away), so net responding peaks there rather than at S+.
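
A minimal numerical sketch of Spence's summation account (in Python; illustrative only - the Gaussian gradient shapes, widths and heights below are assumptions, not values from the lecture). Two hypothetical gradients are centred on Hanson-style training stimuli (S+ = 550 nm, S- = 560 nm); their sum peaks on the far side of S+ from S-, i.e. a peak shift.

import numpy as np

wavelengths = np.arange(520, 601)   # hypothetical test wavelengths (nm)
s_plus, s_minus = 550, 560          # Hanson-style training stimuli

# Assumed Gaussian excitatory and inhibitory gradients (heights and widths made up)
excitation = 1.0 * np.exp(-((wavelengths - s_plus) ** 2) / (2 * 15 ** 2))
inhibition = 0.8 * np.exp(-((wavelengths - s_minus) ** 2) / (2 * 15 ** 2))

# Observed behaviour = excitatory gradient plus the (negative) inhibitory gradient
net = excitation - inhibition

print("Net gradient peaks at", wavelengths[np.argmax(net)], "nm")
# Prints a wavelength below 550 nm: the peak has shifted away from S-.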

19
Q

HONIG, BONEAU, BURNSTEIN & PENNYPACKER (1963)

A

Two groups of pigeons, interdimensional discrimination training; the stimuli were a black line on a white key vs a plain white key (presence/absence training), with S+ and S- swapped between the groups.
With the training line at 90°, the line-S+ group showed a peak in responding there with a decremental (excitatory) gradient, while the line-S- group showed a minimum in responding there with an incremental (inhibitory) gradient.

20
Q

BRELAND AND BRELAND

A

Operant techniques - trained pigs to pick up coins and deposit them in a piggy bank across the room, and chickens to stand on platforms for 15 s.
Over time the animals developed different behaviours - the chickens danced and the pigs rooted/buried the coins. These are species-specific, intrinsic behaviours.

21
Q

INSTINCTIVE DRIFT

A

Initially reinforcement increased the target behaviour as expected, but eventually more learning about the predictability of food began to evoke species-typical food-getting behaviour. This instinctive behaviour interfered with performing the operant response.

22
Q

Principles of Operant Conditioning

A

Shape the response, follow it with a reinforcer, continue reinforcement until the response happens reliably in the presence of the discriminative stimulus

23
Q

STADDON & SIMMELHAG (1971)

A

Replicated Skinner (1948): presented pigeons with food every 12 seconds and, again, the pigeons engaged in repetitive behaviour.
Unlike in Skinner's study, the pigeons all ultimately engaged in the same patterns of behaviour.

24
Q

AUTOSHAPING

A

Repeatedly pairing a neutral stimulus (e.g. a key light) with an unconditioned stimulus (e.g. food) comes to elicit the response (key pecking) without it being directly hand-shaped.

26
Q

PALYA & ZACNY (1980)

A

Showed that pigeons' spot pecking happens directly after the presentation of food, like the foraging that happens in the natural environment. Changing the time of feeding subsequently changes the time of spot pecking to match.

27
Q

REINSTATEMENT

A

First the response produced the food; now the food produces the response.

28
Q

EQUIPOTENTIALITY

A

Getting the same learning whatever response, stimulus or reinforcer is used.

29
Q

LAWICKA (1964)

A

Tone location easily learned, tone frequency not so easily learned.

30
Q

RANDALL & ZENTALL (1997)

A

Showed pigeons have a win-stay bias - they will repeat the behaviour that just received the reinforcer.

32
Q

HERRNSTEIN (1961)

A

Herrnstein’s hyperbola - a prediction of the rate of responding (per minute) as a function of the rate of reinforcement.

33
Q

RATES

A

How frequently one type of activity is being engaged in. Not choice itself, but something that occurs as a result of choice.

34
Q

B = kR / (R + Re)

A

Response rate = (maximum response rate x reinforcer rate) / (reinforcer rate + extraneous reinforcer rate)

B and R: variables
k and Re: parameters
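
A minimal sketch of the hyperbola in Python (illustrative only - the values of k and Re below are made-up parameters, not fitted estimates from any experiment):

# Herrnstein's hyperbola: B = kR / (R + Re)
def herrnstein(R, k=100.0, Re=20.0):
    """Predicted response rate B for reinforcer rate R (same time units throughout)."""
    return k * R / (R + Re)

for R in (10, 50, 100, 300):
    print(R, round(herrnstein(R), 1))   # 33.3, 71.4, 83.3, 93.8

# Response rate climbs steeply at low reinforcer rates and levels off towards k
# (the maximum response rate) once R is large relative to Re.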

35
Q

STRICT MATCHING

A

Proportion of responses matches the proportion of reinforcers obtained for making that response
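
A worked example with illustrative numbers (not from the lecture): if key 1 yields 40 reinforcers per hour and key 2 yields 20, strict matching predicts B1 / (B1 + B2) = R1 / (R1 + R2) = 40 / (40 + 20) = 2/3, i.e. about two-thirds of responses should be made on key 1.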

36
Q

CONGER & KILLEEN (1974)

A

Students and confederates - one gave plenty of social reinforcement and the other very little. Students were found to spend more time talking to the former, i.e. allocation of response depended on the proportion of reinforcers.

37
Q

VOLLMER & BOURRET (2000)

A

Basketball shots - 3 point attempts vs 2 point attempts. Proportion of attempts matches the proportion of successful shots made.

39
Q

DOUGHERTY & LEWIS (1992)

A

3 experimentally naive horses. Measured lever presses; changing the proportion of reinforcers delivered for lever 1 changed the proportion of responses to that lever.

40
Q

CATANIA & REYNOLDS (1968)

A

6 pigeons on VI schedules, with reinforcement rates from 10 to 300 food deliveries per hour. Looked at how closely the observed rate of responding matched the hyperbolic prediction, measured as VAC (variance accounted for) - the higher the value, the better the fit.
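
The lecture does not spell out the formula, but VAC is presumably the usual proportion-of-variance measure: VAC = 1 - (sum of squared deviations of the data from the hyperbolic prediction) / (sum of squared deviations of the data from their mean), so a VAC of 1 means a perfect fit. A quick Python sketch with made-up observed and predicted rates:

import numpy as np

observed  = np.array([30.0, 70.0, 85.0, 92.0])   # hypothetical response rates
predicted = np.array([33.3, 71.4, 83.3, 93.8])   # hypothetical hyperbola predictions

vac = 1 - np.sum((observed - predicted) ** 2) / np.sum((observed - observed.mean()) ** 2)
print(round(vac, 3))   # about 0.992: the hyperbola accounts for most of the variance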

41
Q

CATANIA & REYNOLDS (1968B)

A

3 male pigeons on concurrent schedules. The reinforcer rate on key 1 affects responding on both key 1 and key 2 (choice).