Unit 6: Ch 10 Flashcards

1
Q

What are the two kinds of schedules of reinforcement?

A

Simple and complex

2
Q

CRF

A

Continuous Reinforcement (FR 1)

3
Q

Schedule of reinforcement

A

A schedule of reinforcement is a rule describing the delivery of reinforcement. A particular kind of reinforcement schedule tends to produce a particular pattern and rate of performance, and these schedule effects are remarkably reliable.

4
Q

Run rate

A

The run rate is the rate at which a behaviour occurs once it has begun, that is, the rate at which the organism performs once it has resumed work after reinforcement.

5
Q

Post-reinforcement pause

A

After reinforcement there may be a pause, called a post-reinforcement pause. Its length is influenced by the amount of work required for each reinforcement, which correlates with the ratio of responses to reinforcement: pauses are longer on an FR 100 schedule than on an FR 20 schedule.

6
Q

Fixed-ratio schedule of reinforcement

A

A Fixed-Ratio (FR) schedule reinforces behaviour after it has occurred a fixed number of times. The naming convention is FR #, where # is the number of times the behaviour must occur for each reinforcement.
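A minimal sketch of the FR rule in code (illustrative only; the class and method names are my own, not from the text):

```python
# Illustrative sketch of a fixed-ratio (FR) schedule: reinforcement is
# delivered after every `ratio`-th response. Names are hypothetical.
class FixedRatio:
    def __init__(self, ratio: int):
        self.ratio = ratio  # e.g. 5 for an FR 5 schedule
        self.count = 0      # responses since the last reinforcement

    def respond(self) -> bool:
        """Record one response; return True if it earns a reinforcer."""
        self.count += 1
        if self.count >= self.ratio:
            self.count = 0
            return True
        return False

fr5 = FixedRatio(5)
results = [fr5.respond() for _ in range(10)]
# Every fifth response is reinforced:
# [False, False, False, False, True, False, False, False, False, True]
```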

7
Q

Variable-ratio schedule

A

In a Variable-Ratio (VR) schedule of reinforcement, the number of times an activity must be performed before reinforcement varies around an average.
The VR response pattern has fewer and shorter post-reinforcement pauses than FR, and therefore more behaviour in a given period.
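A companion sketch of the VR rule (illustrative; drawing the requirement uniformly around the mean is my own assumption, since real VR schedules can use other distributions):

```python
import random

# Illustrative sketch of a variable-ratio (VR) schedule: the required
# number of responses varies around an average. The requirement is drawn
# uniformly from 1..(2*mean - 1), so its expected value is `mean`.
class VariableRatio:
    def __init__(self, mean: int):
        self.mean = mean
        self.required = self._draw()
        self.count = 0

    def _draw(self) -> int:
        return random.randint(1, 2 * self.mean - 1)

    def respond(self) -> bool:
        """Record one response; return True if it earns a reinforcer."""
        self.count += 1
        if self.count >= self.required:
            self.count = 0
            self.required = self._draw()  # new requirement after each payoff
            return True
        return False

vr5 = VariableRatio(5)  # on average, every 5th response is reinforced
```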

8
Q

Fixed-interval schedule

A

A Fixed-Interval (FI) schedule of reinforcement reinforces behaviour the first time it occurs after a constant, fixed interval.
The FI response pattern shows a scalloped shape: since FI doesn't reinforce steady performance, it doesn't produce a steady run rate, and there is no reinforcement for continuing to perform until near the end of the interval.
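A matching sketch of the FI rule (illustrative; keeping time with a wall clock is my own simplification):

```python
import time

# Illustrative sketch of a fixed-interval (FI) schedule: only the FIRST
# response after `interval_s` seconds have elapsed since the last
# reinforcement is reinforced; earlier responses earn nothing, which is
# why steady responding isn't required and the record scallops.
class FixedInterval:
    def __init__(self, interval_s: float):
        self.interval_s = interval_s
        self.last_reinforced = time.monotonic()

    def respond(self) -> bool:
        now = time.monotonic()
        if now - self.last_reinforced >= self.interval_s:
            self.last_reinforced = now
            return True
        return False

fi60 = FixedInterval(60.0)  # FI 60: first response after 60 s pays off
```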

9
Q

Fixed-duration (FD) schedule

A

In Fixed-Duration (FD) schedules of reinforcement, reinforcement depends on the continuous performance of a behaviour for a period of time.

10
Q

Variable-duration schedule

A

In Variable-Duration (VD) schedules of reinforcement, the required period for performing the behaviour varies around an average.

11
Q

Which schedules aren’t affected by the behaviour of the subject?

A

Fixed-Time (FT) and Variable-Time (VT) schedules of reinforcement aren’t contingent on any behaviour by the subject. The reinforcer is delivered regardless of what the subject does.

12
Q

Ratio strain

A

Ratio strain happens when stretching the ratio is done too quickly or too far, causing the tendency to perform to break down.

13
Q

Thinning

A

Thinning is a type of shaping in which the reinforcement schedule is gradually stretched so that fewer and fewer reinforcements are delivered for the behaviour.

14
Q

Stretching the ratio

A

Stretching the ratio means to progressively increase the ratio of behaviour to reinforcement.

15
Q

Partial reinforcement effect

A

The Partial-Reinforcement Effect (PRE) is the finding that behaviours that have been on an intermittent schedule are more resistant to extinction than behaviours that have been on continuous reinforcement.

16
Q

What are the 4 hypotheses to explain PRE?

A
  • Discrimination Hypothesis
  • Frustration Hypothesis
  • Sequential Hypothesis
  • Response Unit Hypothesis
17
Q

Discrimination hypothesis

A

The Discrimination Hypothesis says that extinction takes longer after intermittent reinforcement because it is harder to discriminate between extinction and an intermittent schedule than between extinction and continuous reinforcement. On, say, an FR 30 schedule, it takes 30 responses before you even start to question whether something has changed, and longer still to stop trying.

18
Q

Frustration hypothesis

A

The Frustration Hypothesis is a discrimination-based account. It explains the PRE by saying that extinction takes longer after intermittent reinforcement because, in effect, lever pressing while frustrated has been reinforced. If you have always been reinforced for a behaviour and then never are, you experience frustration, and escaping that frustration (by quitting) is negatively reinforcing. But if you have been intermittently reinforced, you have repeatedly been reinforced for performing the behaviour while frustrated, so frustration becomes an S+ for lever pressing (and the frustration during extinction then cues still more lever pressing, and so on). In short: the stimuli present during training become an S+ for the behaviour, and that S+ is inside the subject (frustration).

19
Q

Sequential hypothesis

A

The Sequential Hypothesis is also discrimination-based; it focuses on differences in the sequence of cues during training. With continuous reinforcement, the behaviour is followed by reinforcement every time, so reinforcement becomes an S+ for the behaviour. With intermittent reinforcement, some responses are followed by reinforcement and some are not, and that sequence of reinforced and nonreinforced responses itself becomes an S+ for the behaviour, since the subject is used to performing the behaviour some number of times (or for some length of time) before reinforcement. The thinner the reinforcement schedule, the more resistant the behaviour is to extinction. In short: the stimuli present during training become an S+ for the behaviour, and that S+ is outside the subject (the sequence of reinforcement and nonreinforcement).

20
Q

Response unit hypothesis

A

The Response Unit Hypothesis is quite different and doesn't rely on discrimination. Instead, it redefines the unit of behaviour that is actually being reinforced. Typically we define the reinforced behaviour as, for example, pressing the lever, and under CRF that is correct: a single press is enough to produce reinforcement. Under other schedules, however, more is involved in earning the reinforcer. On an FR 3 schedule, for example, the unit of behaviour is three lever presses, not one; on a VR schedule, the unit of behaviour is a range of response runs rather than a single press. Measured in these larger units, the apparently greater persistence under intermittent reinforcement shrinks or disappears.

21
Q

Matching law

A

The matching law is a formula stating that the distribution of behaviour between alternatives matches the distribution of available reinforcement. The subject will start by moving back and forth between the two alternatives, but eventually settles into spending more time on the side associated with the richer reinforcement schedule.

22
Q

Complex schedules of reinforcement

A

• Multiple: two or more schedules alternate, each signalled by a distinctive stimulus.
• Mixed: same, but with no stimulus to "tip off" which schedule you're on.
• Chain: two or more schedules in sequence; behaviour is reinforced only at the end of the chain.
• Tandem: same, but with no clear mark between the end of one schedule and the beginning of the next.

23
Q

The only schedule with more than one subject

A

cooperative

24
Q

The only schedule with more than 1 available at a time

A

concurrent

25
Q

Herrnstein’s formula

A

Herrnstein's formula uses a reformulation of the matching law:

B_A / (B_A + B_B) = r_A / (r_A + r_B)

Bs are the behaviours, rs are the reinforcement rates.
Example: if r_A is 10 and r_B is 30, then 25% of behaviour and time should go to A, and 75% to B.
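A quick check of that arithmetic (illustrative snippet, not from the text):

```python
# Matching law: the share of behaviour to each side equals its share
# of the total reinforcement. With r_A = 10 and r_B = 30:
r_A, r_B = 10, 30
share_A = r_A / (r_A + r_B)  # 0.25 -> 25% of behaviour/time to A
share_B = r_B / (r_A + r_B)  # 0.75 -> 75% of behaviour/time to B
```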

26
Q

Herrnstein’s formula for multiple choice

A

Add "O" for any Other behaviour; otherwise it is the same as the regular Herrnstein formula.
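Written out, assuming it mirrors the two-choice formula above (the card itself only names the "O" term):

B_A / (B_A + B_O) = r_A / (r_A + r_O)

where B_O and r_O are the rate of, and the reinforcement for, all other behaviour.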

27
Q

Cumulative record with scallops

A

The schedule that is likely to produce a cumulative record with scallops is the fixed-interval (FI) schedule.

28
Q

The explanation of the PRE (Partial-Reinforcement Effect) that puts greatest emphasis on internal cues

A

Frustration hypothesis

29
Q

Schedule effects

A

The term schedule effects refers to the pattern and rate of performance produced by a particular reinforcement schedule.