Chapter 6 Flashcards

1
Q

WHEN reinforcement is delivered after a behaviour, it will influence (3)

A

(1) Whether or not that behaviour is learned
(2) How it is learned
(3) How it is maintained

2
Q

A schedule of reinforcement is:

A

The rule that determines how and when a response will be reinforced

3
Q

What did Skinner reason about schedules of reinforcement?

A

Skinner reasoned that the form of this contingency would control the pattern of behaviour

4
Q

In IC (instrumental conditioning), a contingency is learned:

A

If S, then R –> O

5
Q

Schedule:

A

The pattern of behavioral contingency

ex:
If 10 Rs, then O
If 10 minutes, and then R, then O

6
Q

What are the two types of reinforcement rates:

A

(1) Continuous reinforcement
(2) Partial Reinforcement

7
Q

Continuous reinforcement:

A

Reinforcing every correct response

8
Q

Which reinforcement rate is the most efficient way to condition a new response?

A

Continuous reinforcement

9
Q

Which reinforcement rate is rare in real life?

A

continuous reinforcement

10
Q

Partial reinforcement:

A

Reinforcing some, but not all, responses

11
Q

Which reinforcement rate is more effective at maintaining or increasing the rate of response?

A

Partial reinforcement

12
Q

In partial reinforcement schedules, different schedules produce (2):

A

(1) Distinct rates and patterns of responses
(2) Varying degrees of resistance to extinction

13
Q

Two basic types of partial reinforcement schedules:

A

ratio and interval

14
Q

Ratio types (a type of partial reinforcement schedule) require:

A

a certain number of responses be made before one is reinforced

15
Q

Interval types (a type of partial reinforcement schedule) require:

A

A certain amount of time must pass before a reinforcer is given

16
Q

What are the two basic CATEGORIES (not to be mistaken with types) of partial reinforcement schedules?

A

(1) fixed
(2) variable

we can have fixed ratio, variable ratio, fixed interval, variable interval

17
Q

Define simple schedules of reinforcement:

A

A single factor determines which occurrence of the response is reinforced

18
Q

continuous reinforcement

A

Each and every response leads to reinforcement

The association is learned very fast

Rare in real life

19
Q

In which case is learning stronger: partial reinforcement or continuous reinforcement?

A

partial reinforcement

20
Q

In ratio schedules, the delivery of reinforcement depends on:

A

the number of responses performed

21
Q

In __ , a “ratio” between “work” and “reinforcement” is established

A

Ratio schedules

22
Q

In fixed ratio (FR) schedules, a reinforcement is given if:

A

the subject completes a PRE-SET number of responses

23
Q

Continuous reinforcement (CRF) can also be referred to as:

A

Fixed-ratio-1 or FR1

24
Q

In fixed ratio (FR) schedules, every __ produces an __

A

In fixed ratio schedules, every X Rs produces 1 O

25
Q

In a cumulative recorder, the slope =

A

the rate of responding

26
Q

Define ratio strain:

A

A pause during the ratio run, following a sudden, significant increase in the ratio requirement (e.g., FR5 to FR50)

27
Q

Post-reinforcement pause is synonymous with:

A

Pre-ratio pause

28
Q

In variable ratio schedules (VR), the number of responses required to get each reinforcer is:

A

NOT fixed –> it varies around an average

29
Q

In variable ratio (VR) schedules, the reinforcer is __

A

less predictable
-there is less likelihood of regular pauses in responding

30
Q

In variable ratio schedules, the numerical value of the ratio indicates :

A

the average number of responses required per reinforcer

31
Q

In variable ratio schedules, every __ produces __ but __

A

In variable ratio schedules, every X Rs produces 1 O, but X changes with each reinforcer
– identified by average number of Rs per O

32
Q

In fixed ratio schedules, every __ produces __

A

In fixed ratio schedules, every X Rs produces 1 O

33
Q

Describe the behaviour of a line depicting a fixed ratio schedule on a graph (3):

A

(1) Steady responding (upward line) until reinforcement

(2) Post-reinforcement pause (flat line): time out from responding after each reward

(3) The higher the ratio, the longer the pause after each reward

34
Q

In fixed ratio schedules, the __ ratio, the __ pause after each reward

A

In fixed ratio schedules, the HIGHER the ratio, the LONGER the pause after each reward

35
Q

In variable ratio, every __ produces __ , but ___

A

In variable ratio (VR), every X Rs produces 1 O, but X varies around the mean

36
Q

What is the behaviour of a line depicting a variable ratio on a graph?

A

Constant and high rate of responding

37
Q

In interval schedules, responses are reinforced only if:

A

They occur after a certain amount of TIME has passed

38
Q

What is the behaviour of a line depicting a fixed interval schedule on a graph:

A

Fixed interval scallop: as the end of the interval approaches, the rate of responding increases

39
Q

Define the fixed interval scallop:

A

As the end of the interval approaches, the rate of responding increases

40
Q

In a fixed interval schedule, after __, __ produces __

A

In a fixed interval schedule, after Y seconds, 1 R produces 1 O

41
Q

In a fixed and variable interval schedule, what is the effect of behaviour before the interval is up?

A

Behaviour before the interval expires has no consequence

42
Q

In a variable interval schedule (VI), a response is reinforced only if it:

A

occurs more than a variable amount of time after the delivery of an earlier reinforcer

43
Q

In the variable interval schedule, the reinforcer is:

A

less predictable, thus the subject shows a steady rate of response

44
Q

In a variable interval schedule (VI) after _, _ produces _ but _

A

In a variable interval schedule, after Y seconds, 1 R produces 1 O, but Y randomly changes after each O

45
Q

What is the behaviour on a line depicting a variable interval schedule:

A

steady but low rate of responding

46
Q

What are the four basic schedules of reinforcement?

A
  1. Fixed-ratio
  2. Variable-ratio
  3. Fixed-interval
  4. Variable-interval
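The four basic schedules can be sketched as simple "does this response produce the outcome O?" rules; a minimal illustration in Python (function and parameter names are my own, not from the chapter):

```python
import random

# Minimal sketch of the four basic schedules (names are illustrative).
# Each function answers: "does this response produce the outcome O?"

def fixed_ratio(n, response_count):
    # FR-n: every n-th response produces 1 O
    return response_count % n == 0

def variable_ratio(mean_n, rng):
    # VR-n: each response pays off with probability 1/mean_n,
    # so the requirement varies around the mean
    return rng.random() < 1.0 / mean_n

def fixed_interval(interval, now, last_o_time):
    # FI-t: only the first response after t seconds have elapsed produces 1 O
    return now - last_o_time >= interval

def variable_interval(required_wait, now, last_o_time):
    # VI-t: like FI, but required_wait is redrawn around a mean after each O
    return now - last_o_time >= required_wait

# FR-5: only the 5th, 10th, ... responses are reinforced
print([fixed_ratio(5, c) for c in range(1, 11)])
```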
47
Q

Fixed schedules, regular reinforcers:

A

FR: pauses
FI: scallops

48
Q

Variable schedules, variable reinforcers:

A

VR: generally highest rate
VI: generally lowest rate

49
Q

Why are VR and VI so different?

A

VR –> more responses = more reinforcers (got to play to win)

VI –> more responses does not equal more reinforcers (only need to check in)

50
Q

The VR schedule produces both the:

A

The highest rate of responding and the greatest resistance to extinction

51
Q

__ produce slower rates of responding:

A

Interval schedules (both FI and VI)

52
Q

What is the response rate for a fixed-ratio reinforcement?

A

very high

53
Q

What is the pattern of responses for fixed-ratio?

A

Steady responses with low ratio. Brief pause after each reinforcement with very high ratio.

54
Q

What is the resistance to extinction for fixed-ratio?

A

The higher the ratio, the more resistant to extinction

55
Q

What is the response rate for variable-ratio?

A

Highest response rate (of the 4)

56
Q

What is the pattern of responses for variable-ratio:

A

constant response pattern, no pauses

57
Q

what is the resistance to extinction of variable ratio?

A

Most resistant to extinction (of the 4)

58
Q

What is the response rate of fixed-interval:

A

Lowest response rate

59
Q

What is the pattern of responses for fixed-interval:

A

Long pause after reinforcement followed by gradual acceleration
SCALLOPS

60
Q

What is the resistance to extinction of fixed interval:

A

the longer the interval, the more resistant to extinction

61
Q

What is the response rate of variable interval:

A

moderate

63
Q

What is the pattern of response for variable interval:

A

stable, uniform response

64
Q

What is the resistance to extinction of variable interval:

A

more resistant to extinction than fixed-interval schedule with same average interval

65
Q

What are two similarities of ratio vs interval schedules:

A

(1) In both the FR and FI schedules, there is a typical pause after each reinforcer, and an increased rate of response just before the delivery of the next reinforcer
(2) In both VR and VI schedules, the response rate is steady, without predictable pauses

66
Q

What is a difference of ratio vs intervals schedules:

A

Different rate of response even when reinforcement frequency is similar

67
Q

What was Reynolds’ (1975) study on Ratio vs Interval Schedules? (describe)

A

-Compared rate of key pecking on VR and VI schedules
-Reinforcement to pigeons on the VI schedule was controlled by the responses of pigeons on the VR schedule (therefore, nearly identical reinforcement rates)

68
Q

What was the result of Reynolds' study on Ratio vs Interval Schedules?

A

Same frequency and distribution of reinforcers, but different rates of responding

69
Q

What was the interpretation of Reynolds' study on Ratio vs Interval Schedules (2)?

A

(1) Different underlying motivation
(2) Behaviour is not dependent on just the rate of reinforcement

70
Q

Why would different schedules with similar rates of reinforcement produce different rates of behaviour (2)?

A

(1) Reinforcement of inter-response times (IRT)

(2) Feedback Function

71
Q

Define reinforcement of inter-response times (IRT)?

A

The faster you respond, the more likely you are to receive reinforcement

Variable ratio schedules reinforce shorter IRTS

72
Q

Variable ratio schedules reinforce __ IRTS

A

shorter IRTs

73
Q

Interval schedules favour:

A

long IRTs
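Why interval schedules favour long IRTs can be sketched probabilistically. The exponential "set-up" model below is my assumption for illustration, not the chapter's: on a VI schedule the reinforcer becomes available at unpredictable times, so the longer the pause (IRT) before a response, the more likely a reinforcer has set up by the time the response occurs:

```python
import math

def vi_reinforcement_prob(irt_seconds, mean_interval):
    # Probability a reinforcer became available during a pause of
    # irt_seconds, assuming exponentially distributed set-up times
    return 1 - math.exp(-irt_seconds / mean_interval)

# Long pauses are far more likely to be reinforced than short ones
print(vi_reinforcement_prob(1, 30) < vi_reinforcement_prob(60, 30))  # True
```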

74
Q

Feedback function:

A

– The relationship between the response rate and the rate of reinforcement over an entire experimental session (larger on a VR schedule)

– Higher responding will persist if it is reinforced MORE than low responding

– A ratio schedule yields more reinforcement with more responding: an increasing linear function with no limit
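The two feedback functions can be sketched numerically (parameter values hypothetical): on a ratio schedule the reinforcement rate rises linearly with the response rate, while on an interval schedule it is capped at one reinforcer per interval:

```python
# Hypothetical feedback functions (rates per minute)

def ratio_feedback(response_rate, n):
    # FR-n or VR-n: reinforcement rate grows linearly with responding,
    # with no upper limit
    return response_rate / n

def interval_feedback(response_rate, t_minutes):
    # FI-t or VI-t: at most one reinforcer per interval, so the function
    # flattens at 1/t no matter how fast the subject responds
    return min(response_rate, 1.0 / t_minutes)

# Doubling responding doubles reinforcement on the ratio schedule only
print(ratio_feedback(30, 10), ratio_feedback(60, 10))      # 3.0 6.0
print(interval_feedback(30, 2), interval_feedback(60, 2))  # 0.5 0.5
```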

75
Q

Define choice behaviour:

A

The voluntary act of selecting, from two or more alternatives, the one that is preferred

76
Q

Our everyday choices are influenced by numerous factors (4):

A
1. Reinforcer: quality, quantity
2. Behaviour: type of response, schedule of reinforcement (i.e., effort)
3. Available alternatives
4. Delay in reinforcement
77
Q

define concurrent schedule:

A

2 schedules of reinforcement are in effect at the same time
-the subject is free to switch from one response key to the other

78
Q

concurrent schedule allows for:

A

Continuous measurement of choice
(because the organism is free to switch between options at any time)

79
Q

What is the implication of Herrnstein’s matching law for interval schedules:

A

Implication: rate of a particular response does not depend on the rate of reinforcement of that response alone

80
Q

What is the implication of an operant response?

A

An operant response must compete with all other possible behaviours for an individual’s time

Thus, it is impossible to predict how a reinforcer will affect a behaviour without taking into account the context (all the other reinforcers that are simultaneously available for all other behaviours)

81
Q

What is a concurrent-chain schedule of reinforcement?

A

choosing one option makes the other unavailable

two stages: choice link, terminal link

82
Q

self control:

A

choosing a large delayed reward over an immediate small reward

83
Q

What are the two models of self control?

A

(1) direct-choice procedure
(2) concurrent-chain procedure

84
Q

Value discounting function:

A

V = M / (1+kD)

85
Q

In the value discounting function, __ is directly related to __ and inversely related to __

A

-Value of a reinforcer (V) is directly related to reward magnitude (M) and inversely related to reward delay (D)

86
Q

k is the:

A

Discounting rate parameter that indicates how rapidly reward value declines as a function of delay
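A worked example of the value discounting function V = M / (1 + kD); the k values here are hypothetical:

```python
def discounted_value(M, k, D):
    # Hyperbolic value discounting: V = M / (1 + k*D)
    return M / (1 + k * D)

# A $1000 reward delayed by 0, 10, and 100 days at k = 0.05 per day
for D in (0, 10, 100):
    print(D, round(discounted_value(1000, 0.05, D), 1))  # 1000.0, 666.7, 166.7

# A steeper discounting rate (more impulsive, e.g. k = 0.5) collapses
# value much faster: at only 10 days the $1000 is already worth ~$167
print(round(discounted_value(1000, 0.5, 10), 1))
```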

87
Q

What did Madden et al. study?

A

Self-control in heroin addicts vs non-dependent subjects

$1000 in the future vs a smaller amount of money now

88
Q

addicts are more impulsive:

A

they have a steeper discounting function

89
Q

Petry and Casarella showed that:

A

Addicts are more impulsive: they have a steeper discounting function

90
Q

Impulsivity and delay discounting affect many other human behaviours (3):

A
- Slower reward discounting associated with higher grades
- Faster reward discounting associated with unsafe sex
- Slower delay discounting with age (less impulsivity with age)
91
Q

Training self control (2):

A

1. Training with delayed reinforcement increases likelihood of choosing larger delayed reward in the future
2. Pre-commitment: make a decision to choose larger delayed alternative in advance, in a manner that is difficult or impossible to change later on

92
Q

5 steps to modify your own behaviour:

A
1. Identify the target behaviour
2. Gather and record baseline data
3. Plan your behaviour modification program
4. Choose your reinforcers
5. Set the reinforcement conditions and begin recording and reinforcing your progress
93
Q

Schedules of reinforcement define whether:

A

The O follows every R, is available after some number of Rs, or is available only after some time interval

94
Q

When multiple responses are reinforced under a VI schedule, the matching law predicts that organisms will:

A

allocate time among those responses based on the relative rates of reinforcement for each response
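The matching prediction can be sketched directly from the relative reinforcement rates (the rates below are hypothetical):

```python
def matching_allocation(reinforcement_rates):
    # Herrnstein's matching law: the proportion of behaviour allocated to
    # each option matches that option's share of total reinforcement
    total = sum(reinforcement_rates)
    return [r / total for r in reinforcement_rates]

# Two concurrent VI keys paying 40 and 20 reinforcers per hour:
# the organism is predicted to allocate 2/3 vs 1/3 of its responding
print(matching_allocation([40, 20]))
```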
