Chapter 10 Flashcards

1
Q

Reinforcement (punishment) Schedule

A

Arrangements that specify which instances of the target behaviour will be reinforced (punished).

2
Q

Continuous Reinforcement (CRF)

A

All instances of the target behaviour are reinforced.

e.g., talking produces sound of one’s voice.

3
Q

Intermittent Reinforcement

A

Some, but not all, instances of the target behaviour are reinforced.

e.g., playing a slot machine sometimes results in a small win

4
Q

Why study reinforcement schedules?

A

Improved prediction and control of target behaviour under conditions of learning, maintenance, and extinction.

5
Q

Cumulative Records

A

A way of visually displaying responding over time under a reinforcement schedule; the steeper the slope of the record, the higher the response rate.

6
Q

Fixed Ratio Schedule

A

Reinforcer given for a certain number of instances of target behaviour.

Number constant from one reinforcer to the next.

Example: At basketball practice, coach allows player to take break after making five free throws.

Effects

  • Post-reinforcement pause: a pause after each reinforcement, followed by a high rate of responding until the next reinforcer.
  • High resistance to extinction (RTE), though lower than under VR
  • Requires constant monitoring
  • Ratio strain: a decrease in responding caused by increasing the FR requirement too quickly.
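The FR rule above can be expressed as a small simulation. This is an illustrative sketch, not anything from the chapter; the function name `fr_schedule` and the free-throw framing are borrowed from the example for flavour.

```python
def fr_schedule(requirement):
    """Fixed-ratio schedule: every `requirement`-th response is reinforced.

    Returns a callable invoked once per response; it reports whether
    that response produced a reinforcer.
    """
    count = 0

    def respond():
        nonlocal count
        count += 1
        if count == requirement:
            count = 0          # counter resets after reinforcement
            return True        # reinforcer delivered
        return False

    return respond

# FR 5: the coach allows a break after every five made free throws
free_throw = fr_schedule(5)
outcomes = [free_throw() for _ in range(10)]   # responses 5 and 10 reinforced
```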
7
Q

A token system was used to improve socialization at a nursing home for geriatric patients. As one part of the token economy, the patients received a special token if they told a joke to another patient. If they accumulated four of the joke-telling tokens, they could exchange the tokens for several free videos. The response is telling a joke. Consider two possibilities. First, the reinforcer is the token. Second, the reinforcer is the video.

A
Continuous Reinforcement (CRF)
  • reinforcer (token) delivered for all instances of the target behaviour (tell joke)

Fixed Ratio

  • reinforcer (video) delivered for certain number of instances of target behaviour (tell joke)
  • number constant from one reinforcer to the next
  • constant number determines FR value (FR 4)
8
Q

Variable Ratio Schedule

A

Reinforcer delivered for certain number of instances of target behaviour.

Number not constant from one reinforcer to the next.

Example: play chess and occasionally win.

Effects

  • high steady rate
  • no post reinforcement pause
  • high RTE
  • requires constant monitoring
  • ratio strain
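A variable-ratio version of the same kind of sketch draws a new requirement after each reinforcer. The uniform draw over 1 to 2×mean−1 is just one illustrative way to make the requirements average out to the VR value; none of this comes from the chapter.

```python
import random

def vr_schedule(mean_requirement, rng):
    """Variable-ratio schedule: reinforce after a varying number of
    responses whose long-run average is `mean_requirement`.

    Sketch only: each requirement is drawn uniformly from
    1 .. 2 * mean_requirement - 1, so requirements average to the mean.
    """
    requirement = rng.randint(1, 2 * mean_requirement - 1)
    count = 0

    def respond():
        nonlocal count, requirement
        count += 1
        if count >= requirement:
            count = 0
            requirement = rng.randint(1, 2 * mean_requirement - 1)
            return True        # reinforcer delivered; new requirement drawn
        return False

    return respond

rng = random.Random(42)        # seeded so the run is reproducible
pull = vr_schedule(6, rng)     # VR 6: pays off, on average, every 6th response
wins = sum(pull() for _ in range(6000))   # expect roughly 6000 / 6 = 1000 wins
```

Note that no single requirement matters; only the average defines the VR value.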
9
Q

Bertha wrote sex advice columns for several women’s magazines. Sometimes she would submit a column and it would be accepted for publication right away. Sometimes she would submit several columns before one was accepted for publication. On the average, she got one story in six accepted. Publication is a reinforcer for Bertha’s behaviour. The response is submitting a story.

A

Variable ratio (VR)

  • reinforcer (publication acceptance) delivered for certain number of instances of target behaviour (submit story)
  • number varies from one reinforcer to the next
  • average number determines VR value (VR 6)
10
Q

Fixed Interval Schedule

A

Reinforcer delivered for 1st instance of target behaviour after certain time elapses.

Time constant from one reinforcer to the next.

Constant time determines FI value.

Responses during the interval have no effect.

Example: your favorite TV show occurs at 7:00 pm every Thursday, and your video recorder is set up to record the show each time it occurs. Because 1 week must elapse before you can be reinforced for watching your favorite show, we would call the schedule an FI 1-week schedule.

Effects

  • FI scalloping: responding occurs at low levels following reinforcement and increases as end of interval approaches
  • moderate RTE
  • requires constant monitoring after interval completion
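The FI contingency can be sketched with an explicit simulated clock: only the first response at or after the interval boundary is reinforced, and the interval restarts at reinforcement. A minimal sketch under those assumptions (not from the chapter):

```python
def fi_schedule(interval):
    """Fixed-interval schedule: reinforce the first response made once
    `interval` time units have elapsed since the last reinforcer.

    Responses earlier in the interval have no effect.
    """
    available_at = interval

    def respond(now):
        nonlocal available_at
        if now >= available_at:
            available_at = now + interval   # interval restarts at reinforcement
            return True
        return False

    return respond

watch = fi_schedule(7)                # FI 7-day: the weekly TV show
days = [1, 3, 6, 7, 8, 13, 14, 15]    # days on which a response occurs
results = [watch(d) for d in days]    # only days 7 and 14 are reinforced
```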
11
Q

The annual moonlight party was going on at Upstairs. Every hour, the manager would scan the dance floor; for any couple dancing, he would give a rose to the woman and a cigar to her partner. The response is dancing and the reinforcers are flowers and cigars.

A

Fixed Interval (FI)

  • reinforcer (flowers and cigars) delivered for 1st instance of target behaviour (dancing) after certain time elapses
  • time constant from one reinforcer to the next
  • constant time (1 hour) determines FI value (FI 1-hr)
  • responses during the interval have no effect
12
Q

Variable Interval Schedule

A

• Reinforcer delivered for 1st instance of target behaviour after certain time elapses
• Time not constant from one reinforcer to next
• Average time determines VI value
• Responses during the interval have no effect
• Example: Because messages on one’s telephone answering machine or email messages on one’s computer can be left at unpredictable times, checking one’s answering machine for messages and one’s computer for email are examples of VI schedules.

Effects
• Relatively moderate and constant response rate
• High RTE
• Requires constant monitoring after interval completion
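The VI case is the same clock-based sketch with a randomly drawn delay. The uniform draw over 0 to 2×mean is one illustrative way to make the delays average out to the VI value; the answering-machine framing is from the example, the code is not.

```python
import random

def vi_schedule(mean_interval, rng):
    """Variable-interval schedule: the first response after a varying
    delay is reinforced; delays average `mean_interval`.

    Sketch only: each delay is drawn uniformly from
    0 .. 2 * mean_interval, so delays average to the mean.
    """
    available_at = rng.uniform(0, 2 * mean_interval)

    def respond(now):
        nonlocal available_at
        if now >= available_at:
            available_at = now + rng.uniform(0, 2 * mean_interval)
            return True
        return False

    return respond

rng = random.Random(1)           # seeded for reproducibility
check = vi_schedule(60, rng)     # VI 60-min: messages arrive about hourly
# check every 5 simulated minutes across a 40-hour (2400-minute) work week
hits = sum(check(t) for t in range(0, 2400, 5))   # roughly 2400 / 60 reinforcers
```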

13
Q

Giovani had moved to a new town and enjoyed receiving snapchats from his friends back home. He would check his phone at work throughout the day for any new snapchats. Snapchats arrived for him sporadically, about one per hour. Receiving a snapchat is the reinforcer. The behaviour is checking his phone.

A

Variable Interval (VI)

  • reinforcer (snapchat) delivered for 1st instance of target behaviour (check phone) after certain time elapses
  • time varies from one reinforcer to the next
  • average time (1 hour) determines VI value (VI 1-hr)
  • responses during the interval have no effect
14
Q

Limited-hold

A

Scheduled reinforcer for instance of target behaviour remains available only for a certain period of time.

If no response occurs during that time, reinforcer is canceled.

Example: Looking at the TV produces the sight of a goal scored (a very brief limited hold).

Effect

  • typically increases response rate when added to interval schedule
  • VI/LH has high RTE
  • FI/LH has moderate RTE
  • does not require constant monitoring
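A limited hold on an FI schedule can be sketched as a clock-based window: a reinforcer becomes available every `interval` and is cancelled `hold` units later if unclaimed (as in the hourly five-minute scan in the next card). This clock-window model is one illustrative reading, not the chapter's own code.

```python
def fi_lh_schedule(interval, hold):
    """Fixed-interval schedule with a limited hold: a reinforcer becomes
    available every `interval` time units on the clock and remains
    available only for `hold` units; unclaimed reinforcers are cancelled.
    """
    claimed = set()                  # interval numbers already reinforced

    def respond(now):
        cycle, offset = divmod(now, interval)
        # the window for cycle k is [k * interval, k * interval + hold)
        if cycle >= 1 and offset < hold and cycle not in claimed:
            claimed.add(cycle)
            return True
        return False

    return respond

dance = fi_lh_schedule(60, 5)        # FI 1-hr with LH 5-min
minutes = [30, 61, 63, 66, 121, 130]       # times (in minutes) a response occurs
results = [dance(t) for t in minutes]      # only 61 and 121 fall inside a window
```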
15
Q

District was bumping on Saturday nights. Every hour, DJ combo beats would scan the dance floor for five minutes. For any couple headbanging, he would give a rose to the woman and a cigar to her partner. The response is headbanging and the reinforcers are flowers and cigars.

A

Fixed Interval (FI)

  • reinforcer (flowers and cigars) delivered for 1st instance of target behaviour (headbanging) after certain time elapses
  • time constant from one reinforcer to the next
  • constant time (1 hour) determines FI value (FI 1-hr)
  • responses during the interval have no effect

Limited-hold

  • scheduled reinforcer (flowers and cigar) for 1st instance of target behaviour (headbanging) remains available only for a certain period of time
  • if no response occurs during that time, reinforcer is cancelled
  • restricted time (5 min) determines LH value (LH 5-min)
16
Q

Fixed Duration Schedule

A

• Reinforcer delivered when target behaviour occurs continuously for certain time
• Time constant from one reinforcer to the next
• Constant time determines FD value
• Example: a worker who is paid by the hour might be considered to be on an FD schedule

Effects
• Long periods of continuous behaviour
• Moderate RTE

17
Q

Variable Duration Schedule

A

• Reinforcer delivered when target behaviour occurs continuously for certain time
• Time not constant from one reinforcer to next
• Average time determines VD value
• Example: waiting for traffic to clear before crossing a busy street

Effects
• Long periods of continuous behaviour
• High RTE
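Duration schedules reinforce sustained behaviour rather than response counts. A minimal sketch of the FD case, assuming time is sampled in discrete steps (a VD variant would simply re-draw `duration` randomly after each reinforcer); the hourly-pay framing is from the example, the code is illustrative:

```python
def fd_schedule(duration):
    """Fixed-duration schedule: reinforce when the target behaviour has
    occurred continuously for `duration` consecutive time steps.

    Any interruption resets the clock to zero.
    """
    run = 0

    def tick(behaving):
        nonlocal run
        run = run + 1 if behaving else 0
        if run == duration:
            run = 0              # the duration clock restarts after reinforcement
            return True
        return False

    return tick

work = fd_schedule(4)            # paid after every 4 consecutive units of work
# 1 = working, 0 = off task; the break at step 4 resets the clock
record = [work(b) for b in [1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1]]
```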