Chapter 5 Review from textbook Flashcards

1
Q

In instrumental conditioning, constraints are placed on the opportunity to gain reward

A

in operant conditioning, there are no constraints, and the animal or person can freely respond to obtain reinforcement

2
Q

Primary reinforcements possess innate reinforcing ability

A

secondary reinforcements develop the capacity to reinforce instrumental or operant behavior

3
Q

The reinforcing property of a secondary reinforcement is determined by the following three things

A

(1) the strength of the primary reinforcement it is associated with
(2) the number of pairings of the primary and secondary reinforcements
(3) the delay between primary and secondary reinforcement

4
Q

A positive reinforcement is an event that has reinforcing properties

A

a negative reinforcement is the removal of an unpleasant event

5
Q

Shaping involves reinforcing a response that occurs at a high rate and then

A

changing the contingency so that closer and closer approximations to the final behavior are reinforced

6
Q

A fixed number of responses are necessary to produce reinforcement on FR schedules

A

an average number of responses lead to reinforcement on VR schedules

7
Q

On an FR schedule, responding stops following reinforcement; this is called a postreinforcement pause.

A

After the pause, responding resumes at the rate present before reinforcement

8
Q

On a VR schedule, response rate is high, with only an

A

occasional postreinforcement pause.

9
Q

The first response occurring after a specified interval of time produces reinforcement on an FI schedule

A

an average interval of time elapses between available reinforcements on a VI schedule; the length of time varies from one interval to the next

10
Q

On an FI schedule, responding stops after reinforcement, with the rate of response increasing slowly as the time approaches when reinforcement will once more become available

A

the behavior characteristic of an FI schedule is called the scallop effect

11
Q

The scallop effect does not occur on VI schedules

A

there is no pause following reinforcement on a VI schedule

12
Q

The response requirement is high within a specified amount of time with a DRH schedule

A

and low with a DRL schedule

13
Q

A DRO schedule requires

A

an absence of response during the specified time period

14
Q

An instrumental or operant response is learned rapidly if

A

reward immediately follows the response

15
Q

The performance of the instrumental or operant response is higher with a larger rather than smaller reward

A

which is due to the greater motivational impact of a large reward

16
Q

A shift from large to small reward leads to a rapid decrease in response

A

a shift from small to large reward causes a significant increase in response

17
Q

The negative contrast (or depression) effect is a lower level of performance when

A

the reward magnitude is shifted from high to low than when the reward magnitude is always low

18
Q

The positive contrast (or elation) effect is a higher level of performance when the reward magnitude is shifted from

A

low to high than when the reward magnitude is always high

19
Q

The frequency and intensity of an instrumental or operant behavior declines during

A

extinction when that behavior no longer produces reward

20
Q

Nonreward can increase the intensity of instrumental or operant behavior when the

A

memory of nonreward has been conditioned to elicit the contingent behavior

21
Q

Nonreward eventually leads to an inhibition of the instrumental or operant response as well as

A

the elicitation of escape responses from the nonrewarded situation

22
Q

The environmental cues present during nonreward can develop aversive properties

A

which motivates escape from situations associated with nonreward

23
Q

Partial rather than continuous reinforcement leads to

A

a slower extinction of the instrumental or operant behavior

24
Q

The partial reinforcement effect is caused by

A

conditioned persistence and/or the memory of nonreinforcement associated with the appetitive behavior

25
Q

Contingent reinforcement has been useful in many real-world situations

A

to increase appropriate behaviors and decrease inappropriate behaviors

26
Q

There are three stages of contingency management:

A

(1) assessment
(2) contracting
(3) implementation

27
Q

In the assessment stage

A

the baseline levels of appropriate and inappropriate behaviors, the situations in which these behaviors occur, and potential reinforcers are determined

28
Q

In the contracting stage

A

the precise relationship between the operant response and the reinforcement is decided

29
Q

Implementation of the contingency contract involves

A

providing reinforcement contingent upon the appropriate response, upon the absence of the inappropriate behavior, or both.

30
Q

Contingency management has successfully modified many different behaviors

A

including inadequate living skills, drug use, poor study habits, antisocial responses, and energy consumption