Psy exam 3 Flashcards

1
Q

Primary (unconditioned) reinforcer

A

a consequence that functions as a reinforcer because it is important in sustaining the life of the individual or the continuation of the species. Examples: food, water, sleep, oxygen.

2
Q

Secondary (conditioned) reinforcer

A

consequences that function as reinforcers only after learning has occurred (they were neutral stimuli before learning).

3
Q

Generalized conditioned reinforcers

A

a conditioned reinforcer that has been associated with many different reinforcers. Examples: money, tokens.

4
Q

Token Economy

A

a system of formal contingencies for earning and exchanging reinforcers, based on conditioned reinforcers.

5
Q

Tokens

A

conditioned reinforcers that can be accumulated and exchanged for other reinforcers at a later time.

6
Q

Backup reinforcer

A

The reinforcer provided after the conditioned reinforcer signals a reduction in the delay to its delivery. Use an effective backup reinforcer: the more preferred it is, the more effective the conditioned reinforcer will be. Backup reinforcers are used in token economies; the token is the conditioned reinforcer, and the item or activity obtained by turning in the tokens is the backup reinforcer.
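
A minimal sketch (assuming a hypothetical target behavior and exchange menu, not anything from the course materials) of how these pieces fit together in a token economy: the token is the conditioned reinforcer, it accumulates, and it is later exchanged for a backup reinforcer.

```python
# Illustrative token-economy sketch; the menu items and prices are hypothetical.
class TokenEconomy:
    def __init__(self, exchange_menu):
        # exchange_menu maps each backup reinforcer to its token price
        self.exchange_menu = exchange_menu
        self.tokens = 0

    def deliver_token(self, target_behavior_observed):
        """Award one token (conditioned reinforcer) contingent on the target behavior."""
        if target_behavior_observed:
            self.tokens += 1

    def exchange(self, backup_reinforcer):
        """Trade accumulated tokens for a backup reinforcer, if enough have been earned."""
        price = self.exchange_menu[backup_reinforcer]
        if self.tokens >= price:
            self.tokens -= price
            return backup_reinforcer   # backup reinforcer delivered
        return None                    # not enough tokens yet


economy = TokenEconomy({"extra recess": 3, "computer time": 5})
for _ in range(4):                               # four target behaviors observed
    economy.deliver_token(target_behavior_observed=True)
print(economy.exchange("extra recess"))          # -> extra recess
```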

7
Q

Shaping

A

reinforcement of successive approximations to a terminal behavior. Start with an initial behavior and work through intermediate behaviors until the end goal, the terminal behavior, is reached.
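
A rough sketch of this process, assuming hypothetical 0-100 "approximation scores" and placeholder observe_response / deliver_reinforcer helpers: only responses that meet the current criterion are reinforced, and the criterion is raised step by step until it matches the terminal behavior (this is also the differential-reinforcement idea on the next card).

```python
import random

def observe_response():
    """Stand-in for observing the learner; returns an approximation score (0-100)."""
    return random.randint(0, 100)

def deliver_reinforcer():
    print("reinforcer delivered")

criterion = 20   # initial behavior: a loose approximation is good enough at first
terminal = 100   # terminal behavior: the end goal
step = 20        # how much each intermediate criterion tightens

for _ in range(1000):
    if criterion > terminal:
        break                    # terminal behavior has been reached and reinforced
    response = observe_response()
    if response >= criterion:
        deliver_reinforcer()     # reinforce the successive approximation
        criterion += step        # require a closer approximation next time
    # responses below the current criterion (including ones that met earlier,
    # looser criteria) go unreinforced, i.e., they are placed on extinction
```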

8
Q

How is differential reinforcement used in shaping?

A

Differential reinforcement involves reinforcing the desired behavior while extinguishing previously reinforced behaviors: an unconditioned or conditioned reinforcer is presented only for responses that more closely approximate the terminal behavior.

9
Q

Punisher

A

a contingent consequence that DECREASES the future probability of a behavior below its pre-punishment level. (A reinforcer increases future probability)

10
Q

Positive punishment

A

Presentation of a stimulus contingent on a response that results in a decrease in the frequency or future probability of the response. Something aversive is ADDED. Common examples are pain, a loud noise, or a reprimand such as “no” or “stop”.

11
Q

Negative punishment

A

the contingent REMOVAL, reduction, or prevention of a reinforcer, the effect of which decreases the future probability of the behavior below its no-punishment level. Something preferred is removed after a behavior. Negative punishment is not extinction. Examples: fines, time-out, being ignored.

12
Q

Punishment is more effective when…

A

-it is introduced at moderate to high intensity
-punishers are delivered immediately after the response
-the consequence is delivered every time the behavior occurs
-no one else is reinforcing the behavior
-reinforcement for alternative responses is provided
-effective and appropriate punishers are selected
-punishment is combined with other interventions
-side effects are watched for
-response prompts and reinforcement for alternative behaviors are provided
-data are recorded and graphed daily

13
Q

Response cost

A

Something preferred is removed contingent on a behavior. Examples: a parking ticket, late fees for something overdue, loss of tokens.

14
Q

Overcorrection

A

Restoring the environment to how it was prior to the behavior occurring. Can include restitution: making the environment better than it was before.

15
Q

Continuous reinforcement

A

a reinforcer follows every response. Example: hand raising during class.

16
Q

Establishing operations (EO)

A

Value altering = an increase in the reinforcing effectiveness of a stimulus.
Behavior altering = an increase in behaviors that have been reinforced by that stimulus.

17
Q

example of an EO

A

you haven’t eaten anything since breakfast, which will increase the value of food as a reinforcer.

18
Q

Abolishing operations (AO)

A

Value altering = a decrease in the reinforcing effectiveness of a stimulus.
Behavior altering = a decrease in behaviors that have been reinforced by that stimulus.

19
Q

Noncontingent reinforcement (NCR)

A

a reinforcer is delivered regardless of responding; delivery is based on the passage of time rather than on behavior.

20
Q

Fixed Ratio schedule (FR)

A

reinforcer is contingent on a FIXED number of responses; the response requirement stays the same. For example, every 3rd time a child touches his nose, you reinforce the behavior. This produces a STAIR-STEP pattern of responding.

21
Q

Variable Ratio schedule (VR)

A

reinforcer is contingent on a VARIABLE number of responses; the response requirement varies around an average. For example, about every 5th time your dog gives you his paw, you reinforce the behavior. This creates HIGH and STEADY rates of responding (the highest rate of the basic schedules).
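
A small sketch, with made-up schedule values (FR 3, VR 5), of how the two ratio schedules differ response by response: the FR requirement never changes, while the VR requirement varies around an average.

```python
import random

class FixedRatio:
    """FR n: every nth response produces the reinforcer."""
    def __init__(self, n):
        self.n = n
        self.count = 0

    def record_response(self):
        self.count += 1
        if self.count >= self.n:          # fixed requirement met
            self.count = 0
            return True                   # reinforcer delivered
        return False


class VariableRatio:
    """VR n: on average every nth response produces the reinforcer."""
    def __init__(self, mean_n):
        self.mean_n = mean_n
        self.requirement = random.randint(1, 2 * mean_n - 1)
        self.count = 0

    def record_response(self):
        self.count += 1
        if self.count >= self.requirement:
            self.count = 0
            self.requirement = random.randint(1, 2 * self.mean_n - 1)  # new, varied requirement
            return True
        return False


fr3 = FixedRatio(3)       # e.g., reinforce every 3rd nose touch
vr5 = VariableRatio(5)    # e.g., reinforce about every 5th paw shake
print([fr3.record_response() for _ in range(9)])   # True on responses 3, 6, 9
```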

22
Q

Fixed Interval Schedule (FI)

A

reinforcer delivered for the first response after a FIXED amount of time. For example, FI 4 min: the first response AFTER the 4-minute interval gets reinforced. This creates a scallop pattern of responding.

23
Q

Variable Interval Schedule (VI)

A

reinforcer delivered for the first response after a VARIABLE amount of time. For example, VI 4 min: the first response after ABOUT a 4-minute interval gets reinforced. This creates moderate, stable rates of responding.
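
A matching sketch for the interval schedules: here the passage of time, not the response count, sets up the reinforcer, and the first response after the interval elapses is the one reinforced. The 4-minute values echo the FI 4 min / VI 4 min examples above but are otherwise hypothetical.

```python
import random
import time

class FixedInterval:
    """FI t: the first response AFTER t seconds have elapsed is reinforced."""
    def __init__(self, seconds):
        self.seconds = seconds
        self.start = time.monotonic()

    def record_response(self):
        if time.monotonic() - self.start >= self.seconds:
            self.start = time.monotonic()    # interval restarts after reinforcement
            return True                      # reinforcer delivered
        return False                         # responses during the interval do nothing


class VariableInterval:
    """VI t: the first response after roughly t seconds (varying around t) is reinforced."""
    def __init__(self, mean_seconds):
        self.mean_seconds = mean_seconds
        self.start = time.monotonic()
        self.current = random.uniform(0, 2 * mean_seconds)

    def record_response(self):
        if time.monotonic() - self.start >= self.current:
            self.start = time.monotonic()
            self.current = random.uniform(0, 2 * self.mean_seconds)   # new, varied interval
            return True
        return False


fi4 = FixedInterval(seconds=240)            # FI 4 min
vi4 = VariableInterval(mean_seconds=240)    # VI 4 min
print(fi4.record_response())                # False right away; True only after 4 minutes
```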

24
Q

Breakpoints and how to use them to determine reinforcer efficacy

A

the maximum amount of behavior a reinforcer will maintain. If the breakpoint for soda is 6 responses but the breakpoint for tacos is 20 responses, TACOS are the more effective reinforcer because the behavior was maintained longer than it was for soda.
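
A tiny worked version of the soda/taco comparison (the numbers are the hypothetical ones from the example): whichever reinforcer maintains the larger breakpoint is the more effective one.

```python
# Breakpoints from the example: the maximum responding each reinforcer maintained.
breakpoints = {"soda": 6, "tacos": 20}
more_effective = max(breakpoints, key=breakpoints.get)
print(more_effective)   # -> tacos (larger breakpoint = more effective reinforcer)
```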

25
Q

What makes reinforcers more effective?

A
  1. Quality: varies from person to person.
  2. Size: large enough to be effective, but not so large that satiation occurs.
  3. Contingency: the reinforcer is delivered contingent on the target behavior.
  4. Immediacy: the reinforcer is delivered immediately after the behavior.
26
Q

Reinforcement vs Punishment

A

Reinforcement is an increase in operant behavior as a function of its consequences; responding becomes more probable in the future. Punishment is a decrease in operant behavior as a function of its consequences; responding becomes less probable in the future.

27
Q

Positive punishment procedures

A

-Contingent directives: any time someone does something they shouldn’t, new demands are given.
-Response interruption and redirection (RIRD): the behavior is interrupted, and demands incompatible with the problem behavior are delivered.
-Overcorrection
-Positive practice

28
Q

Negative punishment procedures

A

-Time out (non-exclusionary and exclusionary)
-Response cost

29
Q

Ratio vs Interval schedules

A

Ratio = NUMBER of responses
Interval = passage of TIME

30
Q

Fixed ratio (FR) pattern

A

break and run / stair-step: once responding starts, it occurs at a fairly steady rate.

31
Q

Variable ratio (VR) pattern

A

high, stable rates of responding. Creates the highest rate.

32
Q

Fixed interval (FI) pattern

A

scallop: low levels of responding after the reinforcer is delivered, and responding gradually increases as the interval elapses.

33
Q

Variable interval (VI) pattern

A

moderate, stable rates of responding.

34
Q

What steps can you take to break a bad habit?

A
  1. Identify the habit and a behavior to replace it.
  2. Identify the antecedents that evoke the habit.
  3. Replace those antecedents with stimuli that evoke your replacement behavior.
  4. Set a relatively low goal for the replacement behavior.
  5. Contact the intrinsic reinforcers for the replacement behavior.
  6. Gradually increase your goal!
35
Q

How are habits formed?

A

They are formed when an operant response has been reinforced many times in the presence of the same antecedent stimulus.