Chapter 6 Flashcards

1
Q

schedule of reinforcement

A

a rule determining whether a response will be followed by a reinforcer

2
Q

schedules influence how a response is ______ and ________

A

learned; maintained

3
Q

simple schedules definition and 4 types

A

a single factor determines the occurrence of the reinforcer

Ratio (fixed and variable)

Interval (fixed and variable)

4
Q

Ratio schedule

A

reinforcement depends on the number of responses performed (response accumulation)

5
Q

continuous reinforcement

A

every occurrence of the response is followed by the reinforcer

6
Q

partial reinforcement

A

response is reinforced only some of the time

7
Q

fixed ratio (FR)

A

the number of responses required to produce the reinforcer is fixed (a constant ratio of responses to reinforcers)

8
Q

which one produces the most vigorous responding: continuous reinforcement (FR1) or partial reinforcement (FR50)?

A

FR50

9
Q

3 FR characteristics

A

post-reinforcement pause

ratio run

ratio strain

10
Q

post-reinforcement pause

A

decrease in responding just after a reinforcer

11
Q

ratio run

A

a high steady rate of responding that completes the ratio (usually between reinforcers)

12
Q

ratio strain

A

a rapid increase in the FR requirement results in long pre-reinforcement pauses

when the requirement is increased, the animal stops and takes breaks, usually resuming after a while

the higher the requirement, the more likely you are to see ratio strain

13
Q

reinforcement is predictable: T/F?

A

False; it is unpredictable (as on variable schedules)!

14
Q

Variable ratio (VR)

A

a varying number of responses is required for each reinforcement

the average number of responses = the VR value

15
Q

characteristics of VR when compared to FR

A

fewer post-reinforcement pauses

fewer ratio runs

more resistance to ratio strain

16
Q

interval schedules

A

responses are reinforced only if they occur after a certain amount of time has elapsed

a response is still required (the reinforcer is not delivered automatically)

17
Q

fixed interval schedule (FI) and example

A

the time between reinforcers is constant

ex. washing clothes in a washing machine: once started, it tells you exactly how long it will take

18
Q

variable interval (VI) and example

A

the time between reinforcers is variable/not constant

the average = the VI value; you won't know the average until it has run multiple times

ex. calling to see if your car is fixed: sometimes 45 minutes, other times 3 hours

19
Q

2 characteristics of FI

A

responses cluster just before reinforcer delivery, aka the FI scallop
- ex. first-years studying the night before an exam

depends upon the ability to perceive time
- ex. visual time cues increase scalloping; a modern example is Google Calendar

20
Q

3 characteristics of VI

A

VI schedules support steady, stable rates of response

once the time has passed, the next response will be reinforced

limited hold

21
Q

limited hold

A

in some instances, a restriction can be placed on the length of time a reinforcer will be available

ex. a surfer waiting for the perfect wave: pass up too many waves holding out for a better one and you may miss your chance to surf!

22
Q

inter-response time (IRT) and what happens if short vs. long IRTs are reinforced

A

the interval between successive responses

if short IRTs are reinforced, response rate increases

if long IRTs are reinforced, response rate decreases

23
Q

response-rate schedules and example

A

requires a certain number of responses at a specified rate

ex. assembly line:

too fast = piss off others
too slow = shut down line
just right = team player

24
Q

2 types of response-rate schedules

A

differential reinforcement of high rates (DRH)

differential reinforcement of low rates (DRL)

25
Q

differential reinforcement of high rates (DRH) and example

A

responses are reinforced only if the required number accumulates before a given time

encourages a high rate of responding

ex. at DRH 12, a rat must press the lever ≥ 12 times/min in order to be reinforced

26
Q

differential reinforcement of low rates (DRL) and example

A

responses are reinforced only if they are withheld for a given time

encourages a low rate of responding; a way of looking at self-control

ex. at DRL 3, a pigeon must peck ≤ 3 times/min for reinforcement
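Not part of the original cards: a minimal Python sketch of how the DRH 12 and DRL 3 criteria above could be checked. It assumes responses are simple per-minute counts (real procedures track exact response timing); the function names are illustrative.

```python
# Hypothetical sketch: reinforcement decisions under DRH and DRL criteria,
# assuming responses are counted over one-minute windows.

def drh_reinforced(responses_per_min: int, requirement: int = 12) -> bool:
    """DRH: reinforce only if responding meets or exceeds the requirement (e.g., DRH 12)."""
    return responses_per_min >= requirement

def drl_reinforced(responses_per_min: int, limit: int = 3) -> bool:
    """DRL: reinforce only if responding stays at or below the limit (e.g., DRL 3)."""
    return responses_per_min <= limit

print(drh_reinforced(14))  # True: the rat pressed >= 12 times/min
print(drl_reinforced(5))   # False: the pigeon pecked more than 3 times/min
```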

27
Q

Spot Check: In research methods you need to answer at least 5 questions and hand them in at the end of each week to get full credit for the class. This is an example of:

a fixed ratio schedule

a variable ratio schedule

a fixed interval schedule

a response-rate schedule

A

a response-rate schedule, specifically DRH

28
Q

2 common techniques for studying choice

A

Skinner box

concurrent schedules

29
Q

relative rate of responding

A

RR

RRa for key A = Ra / (Ra + Rb)

RRb for key B = Rb / (Ra + Rb)
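Not from the original card: a worked sketch of the relative-rate calculation, assuming Ra and Rb are response totals (or rates) recorded on keys A and B.

```python
# Relative rate of responding: each key's responses as a share of all responses.
def relative_rates(ra: float, rb: float) -> tuple[float, float]:
    total = ra + rb
    return ra / total, rb / total

rr_a, rr_b = relative_rates(75, 25)
print(rr_a, rr_b)  # 0.75 0.25 -> RRa > 0.5, so key A is preferred (see next card)
```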
30
Q

meaning of RRa > 0.5?

A

schedule A is preferred over schedule B

31
Q

meaning of RRa = 0.5?

A

schedules A and B are equally preferred

32
Q

relative rate of reinforcement

A

rr

calculated the same way as RR:

rra for key A = ra / (ra + rb)

rrb for key B = rb / (ra + rb)
33
Q

we choose/respond (more/less) to things that are reinforced

A

more

34
Q

matching law

A

relative rate of responding (RR) on a given alternative is approximately equal to the relative rate of reinforcement (rr) earned on that alternative

35
Q

matching law equation

A

RRa = rra

equivalently: Ra / Rb = ra / rb
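A worked example, not from the deck: checking the matching law RRa = rra on hypothetical response and reinforcer counts.

```python
# Matching law: relative responding should track relative reinforcement.
Ra, Rb = 80, 20   # hypothetical responses on keys A and B
ra, rb = 40, 10   # hypothetical reinforcers earned on keys A and B

RRa = Ra / (Ra + Rb)   # relative rate of responding on A
rra = ra / (ra + rb)   # relative rate of reinforcement on A

print(RRa, rra)            # 0.8 0.8 -> responding matches reinforcement
print(Ra / Rb == ra / rb)  # True: the ratio form Ra/Rb = ra/rb also holds
```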

36
Q

matching law is affected by 3 variables…

A

sensitivity (s)

bias (b)

reinforcer value

37
Q

Sensitivity (s) in matching law and the equation

A

tendency to choose a particular schedule, despite loss of reinforcement
Ra / Rb = (ra / rb)^s

38
Q

undermatching in sensitivity

A

choice responding is less extreme than predicted; s < 1.0

ex. the matching law predicts a 2:1 ratio, but observed choice is less than 2:1 (e.g., 1:1)

39
Q

overmatching in sensitivity

A

choice responding is more extreme than predicted; s > 1.0

ex. the matching law predicts a 2:1 ratio, but observed choice is greater than 2:1 (e.g., 3:1)

40
Q

bias (b) in matching law and equation

A

tendencies to certain responses and/or reinforcers (not about schedules!)

if b > 1.0 = more preferred

if b < 1.0 = less preferred

Ra / Rb = b (ra / rb)^s
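An illustrative sketch (values are assumptions, not from the deck) of the full equation Ra/Rb = b(ra/rb)^s, showing how sensitivity and bias shift the predicted response ratio.

```python
# Generalized matching law: response ratio = bias * (reinforcement ratio) ** sensitivity
def predicted_response_ratio(ra: float, rb: float, b: float = 1.0, s: float = 1.0) -> float:
    return b * (ra / rb) ** s

print(predicted_response_ratio(2, 1))          # 2.0: perfect matching (b = 1, s = 1)
print(predicted_response_ratio(2, 1, s=0.5))   # ~1.41: undermatching (s < 1.0)
print(predicted_response_ratio(2, 1, s=1.5))   # ~2.83: overmatching (s > 1.0)
print(predicted_response_ratio(2, 1, b=1.5))   # 3.0: bias toward alternative A (b > 1)
```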
41
Q

response vs. reinforcer bias

A

response: how would you rather respond?

ex. drinks at the bar or drinks at home?

reinforcer: what would you rather receive?

ex. Diet Coke or Dr Pepper?

42
Q

have to learn biases by testing (ideally) ______ organisms

43
Q

reinforcer value in matching law (3 features)

A

reinforcer features influence rate of responding (R)

ex. amount

ex. palatability

ex. immediacy

44
Q

basketball matching law example

A

in basketball, players choose:
- RRa (3-pointers): farther away but worth more points (rra)
- RRb (2-pointers): easier but worth fewer points (rrb)

shot allocation (RR) was roughly proportional to the shooting percentage (rr) of those shots; teams that don't match tend to lose

45
Q

3 levels of choice

A

molecular

melioration

molar

46
Q

levels of choice: molecular

A

individual responses (choosing A > B)

47
Q

levels of choice: melioration

A

we respond so as to improve local rates of reinforcement

48
Q

levels of choice: molar

A

sum of responses (all choices, no matter A or B)

49
Q

molecular vs molar maximizing

A

molecular: choosing the response that is best at a single point in time

molar: choosing the response that will maximize reinforcement over the long run

50
Q

lab examples of molecular and molar maximizing

A

molecular: what key light does a pigeon choose to peck in a single instant

molar: how many lever presses does a rat make on 2 levers over 3 days

51
Q

melioration: local rate definition

A

the response rate computed over only the time a subject devotes to a particular alternative

ex. a rat presses a lever 60 times in a 60-minute session, but all presses occur in the first 30 minutes

overall rate (molar): 1/min, BUT local rate (over the first 30 min): 2/min
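A quick check, not in the original card, of the overall vs. local rate arithmetic, assuming all 60 presses fall in the first 30 minutes.

```python
# Overall (molar) rate vs. melioration's local rate for the lever-press example.
presses = 60
session_min = 60   # full session length
active_min = 30    # time actually spent on this alternative

print(presses / session_min)  # 1.0 press/min: overall (molar) rate
print(presses / active_min)   # 2.0 presses/min: local rate
```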

52
Q

Spot check question:

It’s girl-scout cookie season. We all have our favorites. You prefer samoas, your friend is a thin mints lover. These preferences are examples of response biases.

true or false

A

false! response bias is about how you would respond (how you would get the cookies), while reinforcer bias is about which reinforcer you would rather receive; flavor preferences are reinforcer biases

53
Q

concurrent-chain schedule

A

a procedure for testing choice and self-control

consists of a choice link and a terminal link

54
Q

concurrent-chain schedule: terminal link

A

the second link; reinforced. The choice made here leads to a schedule of reinforcement

55
Q

concurrent-chain schedule: choice link

A

the first link; not reinforced

once an alternative is chosen, the subject is committed to that choice

56
Q

self-control and the test used to measure it

A

choosing a large delayed reward over an immediate small reward

marshmallow test

57
Q

in a concurrent-chain schedule, does a pigeon choose a delayed large reward or an immediate small reward?

A

delayed large

58
Q

how to quantify self-control

A

value discounting function

59
Q

value discounting function and equation

A

the value of the reinforcer is reduced by how long you have to wait for it

V = M / (1 + kD)
60
Q

V = M / (1 + kD)

A

V = value of reinforcer

M = reward magnitude

k = decay parameter; tells you how steeply delay reduces the reinforcer's value

D = reward delay
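Not from the deck: a minimal sketch of V = M / (1 + kD) with arbitrary assumed values, showing that V falls as the delay D grows, and falls faster when k is larger (this anticipates the k cards below).

```python
# Hyperbolic value discounting: V = M / (1 + k * D)
def discounted_value(M: float, k: float, D: float) -> float:
    return M / (1 + k * D)

for D in (0, 5, 10, 20):
    # small k (shallow decay, more self-control) vs. large k (steep decay)
    print(D, round(discounted_value(10, 0.1, D), 2), round(discounted_value(10, 1.0, D), 2))
# At D = 0, V = M (full value); a larger k makes value decay much more steeply.
```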

61
Q

in the value discounting function: as D increases, the value of the reward …

A

decreases

62
Q

In the value discounting function, what does D = 0 (so that V = M) mean?

A

you receive the reward immediately, at its full (undiscounted) value

63
Q

consequences of the VDF (value discounting function) and three terms

A

as reward value decays over time, choice shifts:

T0 (onset): the reward value for "large" is greater; no decay yet

T1 (early): the immediate small reward is preferred because the large reward's value decays with delay, like a "direct choice"

T2 (late): at long delays, the large reward retains more value and is preferred, like a "concurrent schedule"
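An illustrative sketch with assumed magnitudes and delays (not from the deck): comparing the discounted values of a small-sooner and a large-later reward at different distances from the rewards, the preferred option flips, mirroring the "direct choice" vs. "concurrent schedule" cases above.

```python
# Preference reversal under hyperbolic discounting (all numbers are assumptions).
def value(M: float, k: float, D: float) -> float:
    return M / (1 + k * D)

k = 0.5
small_M, large_M = 4, 10
gap = 4  # the large reward arrives this long after the small one

for wait in (0, 1, 5, 10):  # time remaining until the small reward is available
    v_small = value(small_M, k, wait)
    v_large = value(large_M, k, wait + gap)
    print(wait, round(v_small, 2), round(v_large, 2),
          "large" if v_large > v_small else "small")
# With the small reward imminent (wait = 0), it wins the "direct choice";
# chosen far in advance, the large delayed reward retains more value and wins.
```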

64
Q

the longer we wait = _________ the likelihood of choosing the large reward

A

greater

65
Q

in the VDF, what does a small/large k signify? what study showed this?

A

small k = shallow discounting function, increased self-control

large k = steep discounting function, decreased self-control

Madden et al. compared heroin users with a control group to observe self-control; heroin users had a larger k = less self-control. It is debated whether this is learned from the environment or a genetic predisposition.

66
Q

can we teach self-control?