Ch. 6 Flashcards

1
Q

Operant behaviors are influenced by their ___.

Elicited behavior is a function of what (precedes/follows) it; operant behavior is a function of what (precedes/follows) it.

Another name for operant conditioning is ____ conditioning.

Operant conditioning is ___

Classically conditioned behaviors are said to be ___ by the stimulus, while operant behaviors are said to be __ by the organism.

A
  1. Consequences
  2. Precedes, follows
  3. instrumental
  4. a type of learning in which the future frequency (or probability) of a behavior is affected by its consequences. Note that this is essentially a restatement of Thorndike’s law of effect.
  5. elicited, emitted
    emitted is used to indicate that operant behavior appears to have a more voluntary, flexible quality to it compared to elicited behavior, which is generally more reflexive and automatic.
    Does this mean that operant behavior actually is voluntary? Not necessarily. Insofar as such behavior comes to be controlled by the consequences that follow the behavior, it can be argued that the sense of voluntariness, or “freedom of choice,” that accompanies such behavior is merely an illusion.
2
Q

Thorndike’s Law of Effect

A

According to Thorndike’s law of effect, behaviors leading to a satisfying state of affairs are strengthened, or “stamped in,” while behaviors leading to an unsatisfying or annoying state of affairs are weakened, or “stamped out.”

Thus, the extent to which the consequences of a behavior are satisfying or annoying determines whether the behavior will be repeated.

3
Q

Thorndike (1898)

A
  • Interested in animal intelligence; thought it could be properly assessed only through systematic investigation.
  • Thorndike was not suggesting that animals could not in some ways be intelligent, but rather that we should not accept anecdotes as fact, nor should we assume that animals behaving in a particular way are doing so for intelligent reasons.
  • Argued for caution in interpreting animal behavior.

Based on his research with cats, Thorndike formulated his famous law of effect, which states that behaviors that lead to a satisfying state of affairs are strengthened, while behaviors that lead to an unsatisfying state of affairs are weakened.

According to Thorndike, behaviors that worked were stamped in, while behaviors that did not work were stamped out.

4
Q

Thorndike’s puzzle box with the cat

A

In a typical experiment, a hungry cat was enclosed in a puzzle box and a dish of food was placed outside the box.

To reach the food, the cat had to learn how to escape from the box by stepping on a treadle that opened a gate.

  • The first time the cat was placed in the puzzle box, several minutes passed before it accidentally stepped on the treadle and opened the gate.
  • Over repeated trials, it learned to escape the box more quickly.
  • There was, however, no sudden improvement in performance as would be expected if the cat had experienced a “flash of insight” about how to solve the problem.
  • Rather, it seemed as though the response that worked (stepping on the treadle) was gradually strengthened, while responses that did not work (e.g., clawing at the gate, chewing on the cage) were gradually weakened
  • Thorndike suspected that a similar process governed all learning, and on this basis he formulated his famous law of effect.
5
Q

Skinner’s Selection by Consequences

A
  • Believed that behavior could best be analyzed as though it were a reflex. He also realized, like Pavlov, that a scientific analysis of behavior required finding a procedure that yielded regular patterns of behavior.
  • Without such regularity, which could be achieved only in a well-controlled environment, it would be difficult to discover the underlying principles of behavior.
  • Devised such a controlled environment: the “Skinner box.”
  • Known as a “free operant” procedure because the rat freely responds with a particular behavior (like pressing a lever) for food, and it may do so at any rate.
  • The experimenter controls the contingencies within the operant chamber, but the animal is not forced to respond at a particular time.
  • This contrasts with other procedures for studying animal learning, such as maze learning, in which the experimenter initiates each trial by placing the rat in the start box.
6
Q

The Skinner box evolved out of Skinner’s quest for a procedure that would, among other things, yield __ patterns of behavior.

In the original version of the Skinner box, rats earn food by ___ a ___; in another version, pigeons earn a few seconds of access to food by ___ at an illuminated plastic disc known as a ___.

A
  1. regular
  2. pressing, lever, pecking, response key
7
Q

Skinner’s procedures are also known as __ ___ procedures in that the animal controls the rate at which it earns food.

Skinner originally thought all behavior could be explained in terms of ______, but he eventually decided that
this type of behavior could be distinguished from another, seemingly more voluntary type of behavior known as ___ behavior.

A
  1. free, operant
  2. reflexes (or respondent behavior), operant
8
Q

With the evolution of the Skinner box,

A

Skinner’s beliefs about the nature of behavior also changed.

He abandoned the notion that all behavior could be analyzed in terms of reflexes and, along with other learning theorists, came to believe that behaviors can be conveniently divided into two categories.

  1. One category consists of involuntary, reflexive-type behaviors, which, as Pavlov demonstrated, can often be classically conditioned to occur in new situations.
    — Skinner referred to such behavior as respondent behavior.
  2. The other category, which Skinner called operant behavior, consists of behaviors that seem more voluntary in nature and are controlled by their consequences rather than by the stimuli that precede them.
    — It was this type of behavior that Thorndike had studied in his puzzle box experiments and upon which he had based his law of effect. It was this type of behavior that most interested Skinner as well.
    He spent the rest of his life investigating the basic principles of operant conditioning and applying those principles to important aspects of human behavior
9
Q

Skinner’s definition of operant conditioning differs from Thorndike’s law of effect in that it is (more/less) mentalistic.

Skinner, however, was dissatisfied with Thorndike’s mentalistic description of consequences ___

A
  1. Less
  2. As being either satisfying or annoying.

Satisfaction and annoyance are internal states inferred from the animal’s behavior.

Skinner avoided any speculation about what the animal (or person) might be thinking or feeling and reworded the law of effect to emphasize the effect of the consequence on the future probability of the behavior.

10
Q

Skinner’s principle of operant conditioning bears a striking resemblance to Darwin’s evolutionary principle of natural selection.

A

According to the principle of natural selection, members of a species that inherit certain adaptive characteristics are more likely to survive and propagate, thereby passing that characteristic on to offspring.

Thus, over many generations, the frequency of those adaptive characteristics within the population will increase and become well established.

Similarly, according to the principle of operant conditioning, behaviors that lead to favorable outcomes are more likely to be repeated while those that do not lead to favorable outcomes are less likely to be repeated.

Thus, operant conditioning is sort of a mini-evolution of an organism’s behaviors, in which behaviors that are adaptive (lead to favorable outcomes) become more frequent while behaviors that are nonadaptive (do not lead to favorable outcomes) become less frequent.
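
The “mini-evolution” idea above can be illustrated with a toy simulation (a sketch added purely for illustration; the behavior names, numbers, and update rule are invented assumptions, not from the text): behaviors that produce favorable outcomes are strengthened and come to dominate the repertoire, while behaviors that do not are weakened.

```python
import random

# Toy illustration: each behavior has a "strength" (relative frequency).
# The adaptive behavior is reinforced (+1); nonadaptive behaviors are
# weakened (-1, floored at 1). All values here are arbitrary assumptions.
strengths = {"press_lever": 5, "claw_gate": 5, "chew_cage": 5}
adaptive = {"press_lever"}  # only this behavior produces food

random.seed(0)
for trial in range(200):
    # Emit a behavior with probability proportional to its current strength
    behaviors = list(strengths)
    weights = [strengths[b] for b in behaviors]
    emitted = random.choices(behaviors, weights=weights)[0]
    # The consequence "selects": strengthen if adaptive, weaken otherwise
    if emitted in adaptive:
        strengths[emitted] += 1
    else:
        strengths[emitted] = max(1, strengths[emitted] - 1)

# After many trials, the adaptive behavior dominates the repertoire
assert strengths["press_lever"] > strengths["claw_gate"]
assert strengths["press_lever"] > strengths["chew_cage"]
```

The analogy to natural selection is in the update rule: nothing "decides" which behavior wins; the consequences alone shift the frequencies over repeated trials.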

11
Q

Operant conditioning is similar to the principle of natural selection in that behaviors that are ___ tend to increase in
frequency, while behaviors that are ___ tend to decrease in frequency.

The difference is that operant conditioning deals with changes at the level of a(n) ___ while the principle of natural selection deals with changes at the level of a(n) ___

A
  1. adaptive, nonadaptive
  2. individual, species.
12
Q

The process of operant conditioning involves the following three components:

(1) A ____ that produces a certain consequence,

(2) a ____ that serves to either increase or decrease the future likelihood of the response that preceded it, and

(3) a ____ stimulus that precedes the response and signals that a certain consequence is now available.

A

(1) response
(e.g., lever pressing produces a food pellet)

(2) consequence
(e.g., the consequence of a food pellet increases the rat’s tendency to again press the lever)

(3) discriminative stimulus
(e.g., a tone that signals to the rat that a lever press will now produce food).

13
Q

operant behavior

A

is a class of emitted responses that result in certain consequences; these consequences then affect the future frequency (or probability) of those responses.

  • Such responses are also simply called operants.
  • The probability or frequency of the response is often referred to as the strength of the behavior (probability, frequency, and strength should be regarded as equivalent).
  • Note that, for the sake of simplicity, the definitions and examples will focus on whether the consequence increases or decreases the future frequency of a behavior.

— However, there are obvious limits on how far such increases can progress, such that a consequence sometimes serves simply to maintain responding at a certain frequency. The manner in which certain patterns of reinforcement maintain a certain frequency of responding is known as a schedule of reinforcement.


14
Q

Operant behavior is usually defined as a(n) ___ of responses rather than a specific response.

Remember that the terms response and behavior are essentially equivalent; however, behaviorists tend to use the term ____ when referring to a specific
instance of a behavior.

A
  1. Class
    For example, there are many ways a rat can press a lever for food: hard or soft, quick or slow, right paw or left paw. All of these responses are effective in depressing the lever and producing food; therefore, they all belong to the same class of responses known as “lever presses.”
    — Defining operants in terms of classes has proven fruitful because it is easier to predict the occurrence of a class of responses than it is to predict the exact response that will be emitted at a particular point in time.
    For example, it is easier to predict that a hungry rat will press a lever to obtain food than it is to predict exactly how it will press the lever on any particular occasion.
  2. Response
15
Q

reinforcers

A

are those consequences that strengthen
behavior; that is, they increase its frequency or increase its probability.

A reinforcer is an event that follows a behavior and increases the future frequency of that behavior.

16
Q

Punishers

A

Punishers are those consequences that weaken a behavior; that is, they decrease its frequency or probability.

A punisher is an event that follows a behavior and decreases the future frequency of that behavior.

17
Q

The terms reinforcement and punishment refer to

A

the process or procedure whereby the future occurrence of a behavior is strengthened or weakened by its consequences.

Ex: Strengthening a roommate’s tendency toward cleanliness by thanking them when they clean the bathroom is an example of reinforcement, while the thanks itself is a reinforcer.

Ex: Eliminating a dog’s tendency to jump up on visitors by scolding it when it does so is an example of punishment, while the scolding itself is a punisher

18
Q

Diagrams of operant conditioning procedures generally use the following symbols.

A

Reinforcers are usually given the symbol SR (which stands for reinforcing stimulus), and punishers are given the symbol SP (which stands for punishing stimulus).

Lever press (R) -> Food pellet (SR)
The food pellet is a reinforcer because it follows the lever press and increases the future probability of the rat pressing the lever.

Tell a joke (R) -> Person frowns (SP)
The frown is a punisher because it follows the joke, and the future probability of Jonathan’s joke telling decreases.

19
Q

Note that, from a behavior analysis perspective, it is technically incorrect to say that a person or animal has been reinforced or punished.

A

Rather, it is the behavior that has been reinforced or punished.

Only the behavior increases or decreases in frequency.

There is value, however, in emphasizing the effect of the consequence on behavior.

If you want a child to stop doing something, should you tell them that their behavior displeases you or that they displease you?

Similarly, when your roommate does something that bothers you, will it be more constructive to tell them that their behavior disturbs you or that they disturb you?

Is it easier for people to change their behavior or to change who they are?

20
Q

reinforcers and punishers

A

Reinforcer and punisher both refer to the specific consequence used to strengthen or weaken a behavior.

Note, too, that reinforcers and punishers are formally defined entirely by their effect on behavior.

Thus, the safest bet is to define consequences as reinforcers and punishers in relation to their effect on behavior and not in relation to how pleasant or unpleasant they seem.

Ex: A teacher might yell at their students for being disruptive, and as a result the students become more (not less) disruptive. Although the teacher is trying to punish the disruptive behavior, the yelling is actually having the opposite effect. By definition, therefore, the yelling is a reinforcer because it is causing the disruptive behavior to increase in frequency (perhaps because disruptive students find that other students admire them if they upset the teacher).

21
Q

Weakening a behavior through the withdrawal of reinforcement for that behavior is known as ___. In general, this is a ___ process than punishment.

A
  1. extinction
  2. slower
22
Q

Clayton stopped using the toaster after he received a shock while doing so. This is an example of ____. Manzar stopped
using the toaster after it no longer made good toast. This is an example of ___.

A
  1. punishment
  2. extinction
23
Q

discriminative stimulus

A

is a stimulus in the presence of which responses are reinforced and in the absence of which they are not reinforced.

In other words, a discriminative stimulus is a signal that indicates that a response will be followed by a reinforcer.

A discriminative stimulus is said to “set the occasion for” the behavior, meaning that its presence makes the response more likely to occur.

This is similar to occasion setting in classical conditioning; note, however, that an SD does not elicit behavior in the manner of a CS or US in classical conditioning.

Ex:
Tone (SD): Lever press (R) -> Food pellet (SR)

24
Q

Discriminative stimulus example:

A

If Susan always laughs at Jonathan’s jokes, then he is more likely to tell her a joke.

The sight of Susan is an SD for Jonathan’s behavior of telling jokes.

Susan (SD): Tell her a joke (R) -> She laughs (SR)

The presence of Susan does not automatically elicit the behavior of joke telling in Jonathan; rather, he is simply more likely to tell a joke in her presence.

Therefore, rather than saying that the SD elicits the behavior, we say that the person or animal emits the behavior in the presence of the SD.

25
Q

three-term contingency

A

can also be viewed as consisting of an antecedent event (a preceding event), a behavior, and a consequence (which can be remembered by the initials ABC).

Another way of thinking about the three-term contingency is that you notice something (Susan), do something (tell Susan a joke), and get something (Susan laughs at your joke) that then increases the likelihood of your repeating that response.
Antecedent -> Behavior -> Consequence
Susan (SD): Tell her a joke (R) -> She laughs (SR)

26
Q

The operant conditioning procedure usually consists of three components:

A

(1) A discriminative stimulus
(2) an operant response, and
(3) a consequence

27
Q

discriminative stimulus for punishment

A

A stimulus in the presence of which a response is punished is called a discriminative stimulus for punishment. It can be given the symbol SDp.

For example, if a water bottle signals that meowing will result in being sprayed with water (rather than being fed), a cat will quickly learn to stop meowing whenever it sees the water bottle.

Water bottle (SDp): Meow (R) -> Get sprayed (SP)

28
Q

discriminative stimulus for extinction

A

which is a stimulus that signals the absence of reinforcement.

A discriminative stimulus may also signal the occurrence of extinction; that is, it signals the nonavailability of a previously available reinforcer.

Typically given the symbol SΔ (pronounced “S-delta”).

Ex:

Tone (SD): Lever press (R) -> Food pellet (SR)
Buzzer (SΔ): Lever press (R) -> No food

29
Q

A bell that signals the start of a round in a boxing match and therefore serves as an ___ for the operant response of beginning to box may also serve as a(n) ___ for a fear response.

This is an example of how the processes of ___ conditioning and ___ conditioning often overlap.

A
  1. SD
  2. CS
  3. classical, operant
30
Q

positive reinforcement and punishment

A

The word positive, when combined with the word reinforcement or punishment, means only that the behavior is followed by the presentation of something.

— The word positive, when combined with the word reinforcement or punishment, does not mean that the consequence is good or pleasant.

— In positive reinforcement and positive punishment, the word positive means only that the behavior has resulted in something being presented or added.

— SR+ for positive reinforcement,

— SP+ for positive punishment,

— The word reinforcement, of course, means that the behavior will be more likely to occur in the future.

— The word punishment means that the behavior will be less likely to occur in the future.

31
Q

Negative reinforcement and punishment

A

The word negative, when combined with the word reinforcement or punishment, means only that the behavior is followed by the removal of something.

— Similarly, the term negative, when combined with the word reinforcement or punishment, does not mean that the consequence is bad or unpleasant.

— In negative reinforcement and negative punishment, the word negative means only that the behavior has resulted in something being removed or subtracted.

— SR- for negative reinforcement,

— SP- for negative punishment

— The word reinforcement, of course, means that the behavior will be more likely to occur in the future.

— The word punishment means that the behavior will be less likely to occur in the future.

32
Q

To determine which type of contingency is involved in a particular instance, ask yourself the following two questions:

A

(1) Does the consequence consist of something being presented or withdrawn?

—If the consequence consists of something being presented, then it is a positive contingency; if the consequence consists of something being withdrawn, then it is a negative contingency;

(2) Does the consequence serve to strengthen or weaken future occurrences of the behavior?

—If it strengthens future occurrences of the behavior, then we are dealing with reinforcement; if it weakens future occurrences of the behavior, then we are dealing with punishment.
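
The two questions above amount to a simple decision table, which can be sketched as a small function (the function name and argument strings are illustrative assumptions, not standard terminology):

```python
def classify_contingency(consequence: str, effect: str) -> str:
    """Classify an operant contingency using the two questions above.

    consequence: "presented" or "withdrawn" (question 1)
    effect: "strengthens" or "weakens" future behavior (question 2)
    """
    sign = {"presented": "positive", "withdrawn": "negative"}[consequence]
    kind = {"strengthens": "reinforcement", "weakens": "punishment"}[effect]
    return f"{sign} {kind}"

# A food pellet is presented and lever pressing increases in frequency:
print(classify_contingency("presented", "strengthens"))  # positive reinforcement
# Attention is withdrawn and the behavior decreases in frequency:
print(classify_contingency("withdrawn", "weakens"))      # negative punishment
```

Working through the four combinations this way makes it harder to confuse, say, positive punishment with negative reinforcement, since each label is derived from the two answers rather than from how pleasant the consequence seems.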

33
Q

Negative reinforcement definition

A

is the removal of a stimulus (one that is usually considered unpleasant or aversive) following a response, which then leads to an increase in the future strength of that response.

Loosely speaking, the behavior results in the prevention or removal of something the person or animal hates, so they are more likely to behave that way in the future.

Ex:

Turn on the heater (R) -> Escape the cold (SR)
Does the person turn on the heater to escape the cold (negative reinforcement) or to obtain warmth (positive reinforcement)? Either interpretation would be correct.

34
Q

Negative reinforcement involves two types of behavior:

A

Escape behavior results in the termination (stopping) of an aversive stimulus.

—A person getting rained on can stop this from happening by opening the umbrella.

Avoidance is similar to escape except that avoidance behavior occurs before the aversive stimulus is presented and therefore prevents its delivery.

—by opening the umbrella before stepping out into the rain, the person avoids getting rained on.

35
Q

Positive Reinforcement definition

A

consists of the presentation of a stimulus (one that is usually considered pleasant or rewarding) following a response, which then leads to an increase in the future strength of that response.

Loosely speaking, the behavior results in the delivery of something the person or animal likes, so they are more likely to behave that way in the future.

36
Q

When you reached toward the dog, it nipped at your hand. You quickly pulled your hand back.

As a result, it now nips at your hand whenever you reach toward it.

The consequence for the dog’s behavior of nipping consisted of the ___
of a stimulus (namely, your hand), and its behavior of nipping subsequently ___ in frequency; therefore this is an example of ___ reinforcement.

A
  1. Removal.
  2. increased
  3. negative
37
Q

Positive Punishment

A

consists of the presentation of a stimulus (one that is usually considered unpleasant or aversive) following a response, which then leads to a decrease in the future strength of that response.

Loosely speaking, the behavior results in the delivery of something the person or animal hates, so the subject is less likely to behave that way in the future.

People frequently confuse positive punishment with negative reinforcement. One reason for this is the fact that many behaviorists use the term negative reinforcer to refer to an aversive (unpleasant) stimulus and the term positive reinforcer to refer to an appetitive (pleasant) stimulus.

38
Q

Negative Punishment

A

consists of the removal of a stimulus (one that is usually considered pleasant or rewarding) following a response, which then leads to a decrease in the future strength of that response.

Loosely speaking, the behavior results in the removal of something the person or animal likes, so the subject is less likely to behave that way in the future.

39
Q

Example of negative punishment: Jonathan’s girlfriend, who is quite jealous, completely ignored him (withdrew her attention from him) when she observed him having a conversation with another woman at a party.

As a result, he stopped talking to the other women at the party.

A

Jonathan talks to other women (R) -> His girlfriend ignores him (SP)

Jonathan’s behavior of talking to other women at parties was negatively punished.

It is punishment in that the frequency with which he talked to other women at the party declined, and it is negative punishment because the consequence that produced that decline was the withdrawal of his girlfriend’s attention.

But what contingencies are operating on the girlfriend’s behaviour?

When she ignored him, he stopped talking to other women at the party.

Given that this occurred, she might ignore him at future parties if she again sees him talking to other women.

If so, her behavior has been negatively reinforced by the fact that it was effective in getting him to stop doing something she disliked.

If we diagram this interaction from the perspective of each person, we get the following:

For Jonathan:
I talk to other women (R) -> My girlfriend ignores me (SP)

For his girlfriend:
I ignore Jonathan (R) -> He stops talking to other women (SR)

As you can see, a reduction in one person’s behavior as a result of punishment can negatively reinforce the behavior of the person who implemented the punishment.

This is the reason we are so often enticed to use punishment: punishment is often successful in immediately getting a person to stop behaving in ways that we dislike.

That success then reinforces our tendency to use punishment in the future, which of course can create major problems in the long run.

In interpersonal relationships, people too often attempt to change each other’s behavior through the use of aversive consequences, such as complaining, when positive reinforcement for appropriate behavior might work just as well or better.

That is why it is so valuable to learn about the processes of reinforcement and punishment; once we become aware of them, we are then more likely to react in ways that are potentially more beneficial and less harmful.

40
Q

Although many people believe that the key to a great relationship is open communication, research has shown that a much more important element is ___

A

the ratio of positive (pleasant) interactions to negative (aversive) interactions.

In fact, one of the best predictors of a successful marriage is when the positives outweigh the negatives by a ratio of about five to one.

41
Q

Immediate Reinforcement

A

In general, the more immediate the reinforcer, the stronger its effect on the behavior.

Suppose, for example, that you wish to reinforce a child’s behavior of playing quietly by giving him a treat. The treat should ideally be given while the quiet period is still in progress. If, instead, you deliver the treat several minutes later, while he is engaged in some other behavior (e.g., banging a stick on the floor), you might inadvertently reinforce that behavior rather than the one you wish to reinforce.

42
Q

Delayed Reinforcement

A

It has been suggested that delayed reinforcers do not function in the same manner as immediate reinforcers.

Rather, the effectiveness of delayed reinforcers in humans is largely dependent on the use of instructions or rules to bridge the gap between the
behavior and the consequences.

43
Q

primary reinforcer

A

A primary reinforcer (also called an unconditioned reinforcer) is an event that is innately reinforcing; that is, it is an unlearned reinforcer.

Loosely speaking, primary reinforcers are things we are born to like rather than learn to like, and we therefore have an innate tendency to find those events to be reinforcing.

Associated with basic physiological needs, and their effectiveness is closely tied to a state of deprivation.

Examples of primary reinforcers are food, water, proper temperature (neither too hot nor too cold), and sexual contact with a desired partner.

Note that an event can function as both a primary reinforcer and a secondary reinforcer.

44
Q

secondary reinforcer

A

A secondary reinforcer (also called a conditioned reinforcer) is an event that is reinforcing because it has been associated with some other reinforcer.

Loosely speaking, secondary reinforcers are those events that we have learned to like because they have become associated with other things we like.

Much of our behavior is directed toward obtaining secondary reinforcers, such as good grades, fine clothes, and a nice car. Because of our experiences with these events, they can function as effective reinforcers for our current behavior.

Conditioned stimuli (CSs) that have been classically conditioned using appetitive unconditioned stimuli (USs) can also function as secondary reinforcers.

For example, suppose that the sound of a metronome has been paired with food to produce a classically conditioned response of salivation:

The metronome, through its association with food, can now be used as a secondary reinforcer for an operant response such as lever pressing.

Discriminative stimuli associated with reinforcers can likewise function as secondary reinforcers.

Just as stimuli that are associated with reinforcement can become secondary reinforcers, so can behaviors that are associated with reinforcement.

45
Q

generalized reinforcer

A

A generalized reinforcer (also known as a generalized secondary reinforcer) is a type of secondary reinforcer that has been associated with several other reinforcers.

Ex: money and social attention (especially with young children, to the point that they may act out to obtain it)

Generalized reinforcers are often used in behavior modification programs, such as a “token economy.”

46
Q

intrinsic reinforcement

A

Is reinforcement provided by the mere act of performing the behavior.

Running because it “feels good” is an example of an intrinsically motivated activity.

47
Q

Extrinsic reinforcement

A

is the reinforcement provided by some consequence that is external to, or additional to, the behavior (with the consequence itself known as an extrinsic reinforcer).

Running to lose weight is an example of an extrinsically motivated activity.

48
Q

What happens if you are given an extrinsic reinforcer for an activity that is already intrinsically reinforcing?

A

Some research indicates that experiences like this can decrease intrinsic interest.

However, other researchers have found that extrinsic rewards have no effect on intrinsic interest, or actually produce an increase in intrinsic interest.

Unfortunately, despite these mixed findings, it is the damaging effects of extrinsic rewards on intrinsic motivation that are most often presented to the public.

49
Q

Cameron and Pierce (1994)

A

Conducted a meta-analysis of 96 well-controlled experiments that examined the effects of extrinsic rewards on intrinsic motivation.

A meta-analysis is a statistical procedure that combines the results of several separate studies, thereby producing a more reliable overall assessment of the variable being studied.

The results indicated that extrinsic rewards usually have little or no effect on intrinsic motivation.

Extrinsic rewards can occasionally undermine intrinsic motivation,

but only when the reward is expected (i.e., the person has been instructed beforehand that they will receive a reward),

the reward is tangible (e.g., it consists of money rather than praise),

or the reward is given for simply performing the activity (and not for how well it is performed).

In fact, verbal rewards, such as praise, and tangible rewards given for high-quality performance increase intrinsic motivation.

They concluded that extrinsic rewards can be safely applied in most circumstances and that the limited circumstances in which they decrease intrinsic motivation are easily avoided.

50
Q

Natural reinforcers

A

Are reinforcers that are typically provided for a certain behavior; that is, they are an expected consequence of the behavior within that setting.

51
Q

Contrived (or artificial) reinforcers

A

are reinforcers that have been deliberately arranged to modify a
behavior; they are not a typical consequence of the behavior within that setting.

In applied behavior analysis, although one might initially use contrived consequences to develop a behavior, the hope is that the behavior will become trapped by the natural consequences (contingencies) associated with that behavior.

52
Q

To distinguish between intrinsic versus extrinsic reinforcers and natural versus contrived reinforcers,

A

remember that the former is concerned with the extent to which the behavior itself is reinforcing

while the latter is concerned with the extent to which a reinforcer has been artificially imposed so as to manipulate a behavior.

53
Q

Shaping

A

is the gradual creation of new behavior through reinforcement of successively closer approximations to that behavior.

Ex: With our rat, we could begin by delivering food whenever it stands near the lever. As a result, it begins standing near the lever more often. We then deliver food only when it is facing the lever, at which point it starts facing the lever more often.

In a similar manner, step-by-step, we reinforce touching the lever, then placing a paw on the lever, and then pressing down on the lever. When the rat finally presses down on the lever with enough force, it closes the microswitch that activates the food magazine.

The rat has now earned a reinforcer on its own. After a few more experiences like this, the rat begins to reliably press the lever to earn food. By reinforcing successively closer approximations to the target behavior, we have managed to teach the rat an entirely new behavior.

Most of our behaviors have, to some extent, been learned or modified through shaping.

For example, when children first learn to eat with a knife and fork, parents might praise even very poor attempts. Over time, though, they expect better and better performance before offering praise.
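
The successive-approximations procedure can be sketched as a toy numerical simulation (entirely illustrative; the force values, increments, and learning rule are invented assumptions, not from the text): the criterion for reinforcement is gradually raised toward the target response, and each reinforced response shifts the animal's typical behavior toward the criterion.

```python
import random

random.seed(1)

# Toy shaping sketch: the "rat" emits a press force that varies randomly
# around its current typical force. Responses meeting the current criterion
# are reinforced, which nudges the typical force upward; the criterion is
# then raised (a successively closer approximation to the target).
typical_force = 0.0
target = 10.0      # force needed to close the microswitch
criterion = 1.0    # start by reinforcing even weak approximations

for step in range(500):
    response = typical_force + random.uniform(-1.0, 2.0)
    if response >= criterion:
        # Reinforced responses shift behavior toward the reinforced form
        typical_force += 0.2
        # Raise the bar toward the target behavior
        criterion = min(target, criterion + 0.2)

assert criterion == target        # we now require a full-force press
assert typical_force >= target    # and the rat reliably produces one
```

The key design point mirrors the text: reinforcing only the final target behavior from the start would almost never pay off (a 10-unit press is far outside the initial response range), whereas a moving criterion keeps reinforcement frequent enough to pull behavior along step by step.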

54
Q

In clicker training with dogs,

A

the click is a secondary reinforcer that has been established by first pairing it with food, which is a primary reinforcer.

The advantage of using the click as a reinforcer is that it can be delivered immediately.

The use of a clicker can also help prevent the animal from becoming satiated on the reinforcer.