test 4 Flashcards

1
Q
  1. Define each of the following schedules of reinforcement and describe the typical response
    pattern associated with each schedule (you may draw the pattern of responding in place of
    describing):
A

a. Fixed ratio – reinforcement is delivered after a fixed number of responses. Ex: a rat must press the lever 5 times to get the food (FR 5). This produces a high rate of response with a post-reinforcement pause after each reinforcer.
b. Variable ratio – the behavior is reinforced after an unpredictable number of responses that varies around an average. This results in a high, steady rate of responding. Ex: a slot machine – you don’t know whether the next response will be reinforced.
c. Fixed interval – the first response after a fixed amount of time is reinforced. Responses made during the interval have no effect, and the reward becomes available at a consistent time. This produces a scallop pattern: responding accelerates as the end of the interval approaches.
d. Variable interval – the first response after an unpredictable amount of time is reinforced. This produces a moderate, steady rate of responding. Ex: an employer checks on your work at random times.
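The ratio-schedule delivery rules above can be sketched as code (a minimal illustration to make the contingencies concrete; the function names and the uniform draw used for VR are my own assumptions, not from the cards):

```python
import random

def fixed_ratio(n):
    """FR n: reinforce every nth response (e.g., FR 5)."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count == n:
            count = 0
            return True   # reinforcer delivered
        return False
    return respond

def variable_ratio(mean, rng):
    """VR mean: reinforce after a random number of responses that
    varies around `mean` (here drawn uniformly from 1..2*mean-1)."""
    required = rng.randint(1, 2 * mean - 1)
    count = 0
    def respond():
        nonlocal count, required
        count += 1
        if count >= required:
            count = 0
            required = rng.randint(1, 2 * mean - 1)
            return True
        return False
    return respond

# FR 5: exactly every 5th lever press pays off
press = fixed_ratio(5)
print([press() for _ in range(10)])
# [False, False, False, False, True, False, False, False, False, True]
```

On the FR side the payoff is strictly periodic, which is what makes the post-reinforcement pause predictable; on the VR side the next payoff is never predictable, which is why responding stays high and steady.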

2
Q

Schedules of reinforcement: fixed ratio

A

Reinforcement is delivered after a fixed number of responses. Ex: a rat must press the lever 5 times to get the food (FR 5). This produces a high rate of response with a post-reinforcement pause after each reinforcer.

3
Q

Schedules of reinforcement: variable ratio

A

A behavior is reinforced after an unpredictable number of responses. This results in a high, steady rate of responding. Ex: a slot machine – you don’t know whether the next response will be reinforced.

4
Q

Schedules of reinforcement: fixed interval

A

The first response after a fixed amount of time is reinforced. Responses made during the interval have no effect, and the reinforcer simply waits once the interval has elapsed. This makes the reward available at a consistent time.

5
Q

Schedules of reinforcement: variable interval

A

Reinforcement is provided for the first response after a random time interval. Ex: an employer checks on your work at random times.

6
Q

Provide an example of one of the above schedules of reinforcement. Explain why it is either fixed or variable and why it is either a ratio or interval schedule.

A

Ex: someone receives a candy cane for every tree they decorate. This is a fixed ratio schedule because the reinforcer is delivered after a set number of responses (every tree decorated), not after a period of time.

7
Q

How might a researcher use a progressive ratio schedule to explore the reinforcement efficacy of a particular drug?

A

A progressive ratio (PR) schedule increases the response requirement after each reinforcer. The researcher measures the breaking point – the highest ratio the subject will complete before it stops responding.
For drugs, the breaking point indexes reinforcement efficacy: the more responses an animal will emit for a dose of the drug before quitting, the more reinforcing (and potentially more addictive) the drug. Comparing breaking points across drugs shows which ones are the strongest reinforcers.

8
Q

What is ratio strain?

A

This is a breakdown in behavior caused by increasing the response requirement too quickly or setting it too high (burnout).

9
Q
  1. Describe the drive reduction theory of reinforcement. What is a major difficulty with this
    theory? What is incentive motivation?
A

An event is reinforcing to the extent that it is associated with a reduction in a physiological drive. Ex: food is a reinforcer for going to the cafeteria because it reduces hunger.
Problem with the theory: not all reinforcers reduce a physiological drive. Ex: secondary reinforcers such as being cheered on.

Incentive motivation – motivation derived from a property of the reinforcer itself rather than from an internal drive state. Example: playing a video game for fun.

10
Q
  1. Outline the Premack principle. Give an example of the Premack principle.
A

The Premack principle provides an objective way to determine whether something can be used as a reinforcer. Reinforcers are viewed as behaviors rather than stimuli: a high-probability (high-frequency) behavior can be used to reinforce a low-probability behavior.
Ex – a child who hates school. Mom says to do the homework first (low-probability behavior), and then they can go to the park (a high-probability behavior they would do on their own).

11
Q

Outline the response deprivation hypothesis. Describe how the response deprivation
hypothesis differs from the Premack principle.

A

A behavior can serve as a reinforcer when access to that behavior is restricted and its frequency falls below its preferred (baseline) level of occurrence.
Ex: if a rat normally runs freely on the wheel and we restrict wheel access to 10 minutes, running becomes a reinforcer.

The response deprivation hypothesis differs from the Premack principle in that even a low-probability behavior can reinforce a high-probability behavior, provided access to it is restricted below baseline; in the Premack principle, only a high-probability behavior can reinforce a low-probability one.

12
Q

Define adjunctive behavior. What other term is used to refer to this class of behaviors?

A

One behavior is strengthened through intermittent reinforcement, and a different behavior emerges (at an excessive level) as a side effect.
Also known as schedule-induced behavior.

13
Q

What is schedule-induced polydipsia, and what is the typical procedure for inducing it in rats?

A

Schedule-induced polydipsia is exaggerated, excessive drinking. The typical procedure for inducing it in rats: the exaggerated drinking emerges when food pellets are presented under a fixed time schedule.

14
Q

List the characteristics of adjunctive behaviors.

A
  1. Typically occurs on FI or FT schedules of reinforcement, immediately following consumption of the intermittent reinforcer, during which time another reinforcer is not available
  2. Affected by level of deprivation for the scheduled reinforcer; the greater the level of deprivation for the reinforcer, the stronger the adjunctive behavior
  3. Opportunity to engage in an adjunctive behavior can serve as a reinforcer for another behavior
  4. There seems to be an optimal time interval between reinforcers for development of adjunctive behavior, often in the range of 1 to 3 minutes
15
Q

What is a generalization gradient and what does the shape of a typical generalization gradient look like?

A

A generalization gradient shows the relationship between response probability and stimulus value.
The probability of response is highest for the stimulus that has signaled reinforcement (the SD), lower for stimuli that are close but not identical to the SD, and low for stimuli far removed from the discriminative stimulus.
The typical gradient is therefore a peaked, roughly bell-shaped curve centered on the SD.

16
Q

What is peak shift and how is this related to the generalization gradient?

A

Peak shift – the peak of the generalization gradient shifts to the side of the SD away from the stimulus that signals extinction. The strongest responding is thus no longer at the SD itself but at a stimulus displaced away from the SΔ.

17
Q

Define stimulus control. What would be an example of stimulus control of behavior at a
hockey game and at a school?

A

Stimulus control means that a stimulus or event preceding the occurrence of an operant alters the probability of the response. The behavior is triggered by the presence or absence of a stimulus.
Hockey game: getting rowdy every time your team scores. The behavior depends on whether they get a goal or not.
School: quietly listening in class. This depends on whether the teacher is talking or not.

18
Q

Define stimulus generalization and stimulus discrimination as they occur in operant
conditioning.

A

Stimulus generalization – the ability to behave in a new situation in a way that was learned in similar situations.
Ex: a dog that drools when it hears a click might generalize the response to a similar sound, such as a beep.
Stimulus discrimination – the ability to tell the difference between one stimulus and another.
Example – if a pigeon is reinforced for pecking a green disc but not a red one, it will learn the difference between the two and peck only when the disc is green.

19
Q

How might a bird owner use stimulus control to eliminate a parrot’s tendency to squawk for long periods of time? How might a novelist use stimulus control to facilitate the act of writing?

A

The owner can establish a stimulus that signals whether squawking will be reinforced, so that squawking extinguishes when the signal is absent. The novelist can arrange a stimulus – such as turning on a particular lamp – that signals the act of writing will later be reinforced, bringing writing under stimulus control.

20
Q

What is a continuous reinforcement schedule (CRF)?

A

Each specified response is reinforced.
* Ideal for strengthening a newly learned behavior.

21
Q

Intermittent (partial)
Reinforcement Schedule

A
  • Only some responses are
    reinforced.
  • Useful for teaching
    persistence.
22
Q

What is a rich (dense) fixed ratio schedule?

A

A schedule with a very low response requirement – e.g., a rat can easily obtain a food pellet with only two lever presses (FR 2).

23
Q

Fixed ratio lean schedule

A

A schedule with a very high response requirement – e.g., a rat must press the lever 100 times (FR 100) to obtain a food pellet.

24
Q

fixed ratio stretching the ratio

A

“Stretching the ratio” – increasing the response requirement from a very rich schedule to a very lean schedule.
– Must be done gradually (otherwise ratio strain can result).

25
Q

Variable interval (VI) schedules

A

The length of the interval changes unpredictably from one reinforcement to the next.
* The interval lengths on a VI schedule vary around an average (e.g., VI 1 minute).
* Produces a moderate, steady rate of responding and no post-reinforcement pause.

26
Q

Interpretations of the PRP

A

The PRP may be explained at the molar or the molecular level.
* The PRP is a function of the interreinforcement interval (IRI).
* Fixed-interval schedules: the PRP is approximately half the IRI.
* Fixed-ratio schedules: the increase in PRP as the ratio increases may be a function of the ratio size, the IRI, or both.

27
Q

Molar Interpretation of Pausing

A

PRPs are normally distributed, with a range over the length of the interval and a mean equal to one-half the IRI. Reinforcement is maximized by IRI distributions with this shape.
During initial training most PRPs are small, but as the organism learns the schedule of reinforcement the PRPs become larger. Averaging all PRPs gives the appearance of the FI scallop.

28
Q

Adjunctive Behavior

A

An excessive pattern of behavior that emerges as a by-product of an intermittent schedule of reinforcement for some other behavior.
* One behavior is strengthened through intermittent reinforcement, and a different behavior emerges as a side effect.

29
Q

Adjunctive Behavior as Displacement Activity

A

Displacement activity is an activity that emerges when one is confronted by conflict or prevented from obtaining a goal.
* Formerly viewed as a way of releasing pent-up energy.
* Now viewed as encouraging a diversified range of behaviors that may prove useful in a particular setting.
* For example, it might help an animal (or a person) remain in a situation where a reinforcer might eventually become available.
– This might partially account for the tendency of many students to snack or sip coffee or tea (or even just water) while studying.

30
Q
  • Drive reduction theory –
A

An event is reinforcing to the extent that it is associated with a reduction in a physiological drive.
– e.g., food is a reinforcer for going to the cafeteria because it reduces a hunger drive.
– Problem: some reinforcers don’t seem to reduce a physiological drive.

31
Q

Incentive motivation –

A

Motivation derived from a property of the reinforcer rather than from an internal drive state (e.g., playing a video game for the fun of it).

32
Q

Behavioral Bliss Point Approach

A

An organism with free access to alternative activities will distribute its behavior in such a way as to maximize overall reinforcement.
* But activities are often not freely available.
– e.g., most people have to work for a living and are therefore limited in how much they can indulge in more reinforcing activities.
* The organism must then distribute its behavior in such a way as to draw as close to maximization as possible.

33
Q
  • Controlling stimulus:
A

A stimulus or event that precedes the occurrence of an operant, altering the probability of the response.

34
Q

Discriminative stimulus (SD)

A

A controlling stimulus that sets the occasion for reinforcement of an operant.

35
Q
• S-delta (SΔ) or extinction stimulus
A

A controlling stimulus that sets the occasion for nonreinforcement (extinction) of an operant.

36
Q

Stimulus Discrimination

A

The process by which organisms learn to emit a specific behaviour in the presence of some stimuli and not in the presence of other stimuli.

37
Q

Development of Stimulus Control

A
  • Stimulus discrimination training
  • Requires one behaviour and two antecedent stimulus conditions (the SD and the SΔ)
  • Responses that occur in the presence of the SD are reinforced
  • Responses that occur in the presence of the SΔ are not reinforced (or result in a lesser amount or quality of reinforcement)
38
Q

Multiple schedule

A

Multiple schedule: two or more simple schedules are presented one after the other, and each is accompanied by its own controlling stimulus.

39
Q

How do we know when a stimulus is exerting
stimulus control?

A

By computing a discrimination index (ID): a measure of the control of an antecedent stimulus that ranges from 0 (no control) to 1 (complete control).

  • ID = (SD rate)/(SD rate + SΔ rate)
  • ID = 0.50: the rate of response is the same in both the SD and the SΔ
  • ID > 0.50: the rate is higher in the SD than in the SΔ
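The ID formula can be checked with a couple of lines (a sketch; the response rates are made-up numbers for illustration):

```python
def discrimination_index(sd_rate, sdelta_rate):
    """ID = SD rate / (SD rate + S-delta rate)."""
    return sd_rate / (sd_rate + sdelta_rate)

# Equal responding in SD and S-delta -> no discrimination
print(discrimination_index(20, 20))  # 0.5

# Responding almost entirely in SD -> strong stimulus control
print(discrimination_index(45, 5))   # 0.9
```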
40
Q

Behavioral contrast

A

Behavioral contrast: refers to a negative correlation between the response rates in the changed and unchanged components of a multiple (MULT) schedule. When one goes up, the other goes down (an inverse relation).

41
Q

Positive contrast:

A

Positive contrast: occurs when the rate of response increases in the unchanged component with a decrease in response rate in the altered or manipulated component.

42
Q

Negative contrast

A
  • Negative contrast: Occurs when the rate of response decreases in the
    unchanged component with an increase in response rate in the
    altered or manipulated component.
43
Q

Behavioral Contrast: Relative Rates of Reinforcement

A
  • Behavioral contrast results from changes in relative rates of reinforcement.
  • On a two-component schedule, the relative rate of reinforcement in the unchanged component increases when the rate of reinforcement decreases in the altered component.
    – Original: Component A – VI 15 s; Component B – VI 30 s.
    – Changed: Component A – VI 15 s; Component B – VI 60 s (the relative rate of reinforcement increased in component A compared to component B).
  • As the relative rate of reinforcement increases on the unchanged component, so does the rate of response.