Chapter summaries

Summary of Chapter 6
A positive reinforcer is anything that, when presented immediately following a behavior, causes the behavior to increase
in frequency. The principle of positive reinforcement states that, if someone in a given situation does something that is
immediately followed by a positive reinforcer, then that person is more likely to do the same thing the next time he or
she encounters a similar situation. We are frequently influenced by positive reinforcement every working hour of our
lives. The presentation of a positive reinforcer immediately after a response strengthens that response. The removal of an
aversive stimulus immediately after a response also strengthens that response, but that would be an instance of negative
reinforcement (or escape conditioning), not positive reinforcement.
Positive reinforcement is most effective when (1) the behavior to be increased is identified specifically; (2) the
reinforcer chosen is effective for the person being reinforced; (3) the individual being reinforced has been deprived of the
reinforcer; (4) a large reinforcer is used; (5) the individual being reinforced is instructed about the reinforcement program;
(6) the reinforcer is presented immediately after the desirable behavior; (7) the reinforcer is presented contingent upon
the desired behavior; and (8) when the programmed reinforcer is no longer used, a natural reinforcer follows the desired
behavior. After a behavior has been appropriately modified in a reinforcement program, the next step is to wean the individual from the program so that natural reinforcers will take over the maintenance of the behavior. A part of the weaning
process often involves schedule thinning, in which the programmed reinforcer or reinforcers are gradually removed.
There are several possible pitfalls regarding the principle of positive reinforcement. The unaware-misapplication
pitfall occurs when someone unknowingly reinforces an undesirable behavior. The partial-knowledge-misapplication
pitfall occurs when someone knows a little about positive reinforcement but does not apply it effectively, such as presenting a reinforcer noncontingently. The failure-to-apply pitfall involves missing opportunities to reinforce desirable
behaviors because of lack of understanding of positive reinforcement. The inaccurate-explanation-of-behavior pitfall
involves using positive reinforcement as an oversimplified explanation of behavior, or attempting to explain behavior by
inappropriately giving people a label.

Unconditioned reinforcers, such as food for a hungry person, are stimuli that are reinforcing without prior learning.
Conditioned reinforcers, such as your favorite book, are stimuli that were not originally reinforcing but have become
reinforcers by being paired or associated with other reinforcers; these other reinforcers are called backup reinforcers. Tokens, such as
money, are conditioned reinforcers that can be accumulated and exchanged for a variety of backup reinforcers. A behavior modification program in which one or more individuals can earn tokens and exchange them for backup reinforcers
is called a token system or a token economy; the term token economy usually refers to a token system implemented with more than one individual.
A conditioned reinforcer that is paired with a single backup reinforcer is called a simple conditioned reinforcer, and
a stimulus that is paired with more than one kind of backup reinforcer is called a generalized conditioned reinforcer.
Money and praise are examples of generalized conditioned reinforcers. Factors influencing the effectiveness of conditioned reinforcement include (a) the strength of the backup reinforcers (the stronger the better); (b) the variety of
backup reinforcers—the greater variety the better; (c) the number of pairings with a backup reinforcer—the more the
better; and (d) whether pairings of conditioned and backup reinforcers continue at least intermittently. One pitfall of
conditioned reinforcement is the pairing of a conditioned reinforcer with a stimulus meant to decrease problem behavior such as a reprimand. This can cause the latter stimulus to be a reinforcer because it provides attention that the individual may not otherwise be receiving. A second pitfall is ceasing to at least occasionally pair a conditioned reinforcer
with a backup reinforcer, which will cause the conditioned reinforcer to lose its value.

Summary of Chapter 10
A schedule of reinforcement is a rule specifying which occurrences of a given behavior, if any, will be reinforced. A continuous reinforcement (CRF) schedule is one in which each instance of a particular response is reinforced. For example,
each time you turn on a water tap you are reinforced by the appearance of water. An intermittent schedule of reinforcement is one in which a behavior is reinforced only occasionally, rather than every time the behavior occurs. Intermittent
schedules have four advantages over CRF, including (a) slower satiation of the reinforcer with the reinforced individual;
(b) greater resistance to extinction of the reinforced behavior; (c) more consistent work patterns by the reinforced individual; and (d) greater likelihood of the reinforced behavior persisting when transferred to reinforcers in the natural
environment.
There are six commonly used intermittent schedules for increasing and maintaining behavior. In a fixed-ratio (FR)
schedule, a reinforcer occurs each time a fixed number of instances of a particular response has occurred. For example, at a basketball
practice, a coach allows a player to take a break after making five free-throw shots. With a variable-ratio (VR) schedule, a
reinforcer occurs after a certain number of instances of a particular response, and the number varies unpredictably from one reinforcer to the next.
For example, when a person plays a slot machine, his or her behavior is reinforced by winning after an unpredictable
number of tries. With a fixed-interval schedule with a limited hold (FI/LH) schedule, a reinforcer is presented for the first
instance of a particular response after a fixed interval of time, provided the response occurs within a fixed time after the
fixed interval has passed. For example, for buses that run on a regular schedule, catching the bus is reinforced on an FI/
LH schedule. A variable-interval schedule with a limited hold (VI/LH) operates like an FI/LH schedule, except that the
first instance of a particular response is reinforced after a variable period of time and within the LH.
An example of a response reinforced on a VI/LH is telephoning a friend whose line is busy. With a fixed-duration (FD)
schedule, a reinforcer is presented only if a behavior occurs continuously for a fixed period of time. An example is being
paid by the hour for working at a job. With a variable-duration (VD) schedule, a reinforcer is presented only if a behavior
occurs continuously for a period of time, and the length of the required period changes unpredictably from reinforcer to reinforcer. An example of a VD schedule is waiting for traffic to clear before crossing a busy street. Concurrent schedules of
reinforcement are when each of two or more behaviors is reinforced on different schedules at the same time. For example,
a student might have the choice of watching a TV show, or surfing the Net, or doing homework.
A common pitfall of intermittent reinforcement occurs when one is attempting to extinguish a problem behavior
but ends up reinforcing the behavior intermittently and making the behavior more persistent.
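
The schedule rules above are mechanical enough to express as code. The following Python sketch is illustrative only; the function names and numbers are mine, not the text's:

```python
import random

def fixed_ratio(n):
    """FR n: a reinforcer follows every n-th response."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count == n:
            count = 0
            return True   # reinforcer delivered
        return False
    return respond

def variable_ratio(mean, rng=random):
    """VR: a reinforcer follows a number of responses that varies
    unpredictably around `mean` from one reinforcer to the next."""
    state = {"count": 0, "target": rng.randint(1, 2 * mean - 1)}
    def respond():
        state["count"] += 1
        if state["count"] >= state["target"]:
            state["count"] = 0
            state["target"] = rng.randint(1, 2 * mean - 1)
            return True
        return False
    return respond

def fi_lh(interval, hold):
    """FI/LH: the first response after `interval` time units is
    reinforced, but only if it occurs within the `hold` window."""
    def reinforced(response_time):
        return interval <= response_time <= interval + hold
    return reinforced

# FR 5: the coach's free-throw rule -- the fifth shot earns the break
fr5 = fixed_ratio(5)
print([fr5() for _ in range(5)])   # [False, False, False, False, True]
```

A VI/LH rule would look like `fi_lh` with the interval redrawn unpredictably after each reinforcer.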

Summary of Chapter 14
Chapter 10 described schedules of reinforcement for increasing and maintaining rates of operant responses. This chapter describes differential reinforcement schedules to decrease rates of operant responses. Limited-responding differential reinforcement of low rates (DRL) is a schedule in which a reinforcer is presented only if a particular response occurs
at a low rate. A spaced-responding DRL requires that a specified behavior not occur during a specified interval, and after
that interval has passed, an instance of that behavior must occur for a reinforcer to occur. A differential reinforcement
of zero responding (DRO) is a schedule in which a reinforcer is presented only if a specified response does not occur
during a specified period of time. A differential reinforcement of incompatible behavior (DRI) is a schedule in which
reinforcers are withheld for a specific behavior and an incompatible behavior is reinforced. A differential reinforcement
of alternative behavior (DRA) is a schedule that involves the extinction of a problem behavior combined with reinforcing a behavior that is topographically dissimilar to but not necessarily incompatible with the problem behavior. A pitfall
unique to DRL is the tendency to unknowingly reinforce a desirable behavior on a DRL schedule, causing the desired
behavior to occur at a lower-than-desired rate.
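
As decision rules, the interval-based schedules above reduce to simple time comparisons. A minimal Python sketch, with illustrative behaviors and numbers of my own:

```python
def dro(response_times, interval):
    """DRO: deliver the reinforcer only if the target response did not
    occur at any point during the interval [0, interval)."""
    return not any(0 <= t < interval for t in response_times)

def spaced_drl(last_response_time, now, min_gap):
    """Spaced-responding DRL: a response emitted at `now` is reinforced
    only if at least `min_gap` time units have elapsed since the
    previous response."""
    return (now - last_response_time) >= min_gap

# DRO 10: no target response during a 10-minute interval -> reinforce
print(dro([], 10))            # True
print(dro([3.5], 10))         # False
# spaced DRL: a response is reinforced only after a 60-second pause
print(spaced_drl(0, 75, 60))  # True
```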

Summary of Chapter 24
A behavioral program in which groups of individuals can earn tokens for a variety of desirable behaviors and can
exchange the earned tokens for backup reinforcers is called a token economy. In this chapter, we described the steps typically used to set up and manage a token economy. These steps include (1) deciding on the target behaviors; (2) taking
baselines and keeping data on the target behaviors; (3) selecting the type of token to use; (4) selecting backup reinforcers; (5) managing the backup reinforcers; (6) identifying available help; (7) training and monitoring staff and helpers;
(8) handling potential problems; and (9) preparing a manual.
Once the token economy has been successful, it is usually necessary to use schedule thinning to transfer the
target behaviors to reinforcers in the natural environment, and thus to wean the participants from tokens. The first
schedule thinning method is to eliminate tokens gradually by (a) making the token delivery schedule more intermittent;
(b) decreasing the number of behaviors that earn tokens; or (c) increasing the delay between the target behavior and
token delivery. The second schedule thinning method is to decrease the value of the tokens gradually by (a) decreasing
the amount of backup reinforcement that a token can purchase; or (b) increasing the delay between token acquisition
and the purchase of backup reinforcers.
Contingency management systems (CMSs) are a category of token economies in which individuals obtain tokens
in the form of points, vouchers, or money for decreasing their intake of harmful substances as specified in their contingency contract. Some CMSs make use of a deposit contract: individuals provide a small deposit, which is paid
back when testing shows that the intake of the harmful substance has decreased to the amount agreed upon in their
contract.
The importance of ethical considerations in the design and application of token economies was discussed. As
with all powerful technologies, there are possibilities for abuses, which must be avoided. One important precaution is to
ensure that the system is completely open to public scrutiny, provided such openness is approved by the individuals in
the program or their advocates.
In addition, some important areas of research for the study of token economies, including types of behaviors
reinforced, types of tokens used, variety of backup reinforcers, token training procedure, schedule of token delivery,
schedule of store time, and token exchange schedule, were outlined. Further research in these areas would lead to a better
understanding of the best practices for the development and use of token economies.
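
Steps (3) to (5) above amount to bookkeeping: tokens earned per behavior, prices for backup reinforcers, and a running balance. A minimal, illustrative Python sketch; the behaviors, prices, and amounts are invented for the example:

```python
class TokenEconomy:
    """Minimal token-economy ledger for one participant."""

    def __init__(self, earn_rules, store_prices):
        self.earn_rules = earn_rules        # target behavior -> tokens earned
        self.store_prices = store_prices    # backup reinforcer -> token cost
        self.balance = 0

    def record(self, behavior):
        """Deliver tokens contingent on a recorded target behavior."""
        self.balance += self.earn_rules.get(behavior, 0)

    def exchange(self, item):
        """Exchange tokens for a backup reinforcer, if affordable."""
        price = self.store_prices[item]
        if self.balance >= price:
            self.balance -= price
            return True
        return False

econ = TokenEconomy(
    earn_rules={"completed homework": 3, "tidied room": 2},
    store_prices={"30 min of TV": 4},
)
econ.record("completed homework")
econ.record("tidied room")
print(econ.exchange("30 min of TV"), econ.balance)   # True 1
```

Schedule thinning could then be modeled by gradually lowering the `earn_rules` values or raising the `store_prices` over time.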

Summary of Chapter 9
Shaping is the development of a new operant behavior by the reinforcement of successive approximations of that
behavior and the extinction of earlier approximations of that behavior until a new behavior occurs. Shaping consists of
three stages, including (a) specification of the target behavior, such as Frank jogging a quarter mile each day; (b) identification of a starting behavior, such as Frank walking around his house once; and (c) reinforcing the starting behavior
and closer and closer approximations to the target behavior until the target behavior occurs, such as Frank having a
beer for walking around his house once, and then twice, and then more and more until he walked a quarter mile to earn
the beer. Shaping can be used to increase five dimensions of behavior, one dimension at a time, including topography,
frequency, duration, latency, and intensity. Factors influencing the effectiveness of shaping include (1) identifying a
specific target behavior by clearly specifying the details of the five relevant dimensions; (2) choosing a starting behavior
that occurs often enough to be reinforced within the session time and is an approximation of the target behavior; (3) planning
ahead an outline of the plausible successive approximations from the starting behavior to the target behavior;
and (4) following several rules of thumb for moving through the shaping steps successfully. Three common pitfalls of shaping
include (a) unknowingly applying it to develop undesirable behavior of friends, family members, acquaintances, and
others; (b) failing to apply it to develop desirable behavior of others; and (c) using labels, rather than lack of shaping, to
attempt to explain behavior deficiencies.
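
Stage (c), reinforcing closer and closer approximations, can be pictured as a sequence of criteria along one dimension of the behavior. An illustrative Python sketch using Frank's laps; the step size is my own:

```python
def shaping_criteria(start, target, step):
    """List the successive approximation criteria, from the starting
    behavior to the target, along a single dimension (here, laps)."""
    criteria = []
    current = start
    while current < target:
        criteria.append(current)
        current = min(current + step, target)
    criteria.append(target)
    return criteria

# Frank: one lap around the house, then two, ... up to five laps
# (roughly a quarter mile); meeting each criterion earns the reinforcer,
# and earlier approximations are then placed on extinction.
print(shaping_criteria(start=1, target=5, step=1))   # [1, 2, 3, 4, 5]
```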

Summary of Chapter 13
A behavioral chain is a consistent sequence of stimuli and responses that occur closely to each other in time and in
which the last response is typically followed by a reinforcer. A behavior sequence with breaks between responses, such
as studying for an exam and then writing the exam and eventually getting a grade, is not a behavioral chain.
With the total-task presentation method, an individual attempts all of the steps from the beginning to the
end of the chain on each trial and continues with total-task trials until the chain is mastered. With the backward
chaining method, the last step is taught first, and then the next-to-last step is taught and linked to the last step, then
the third-from-last step is taught and linked to the last two steps, and so on until the entire chain is mastered. With
the forward chaining method, the initial step is taught first, then the first and second steps are taught and linked
together, then the first three steps are taught and linked together, and so on until the entire chain is mastered. When
teaching chains to persons with developmental disabilities, the total-task presentation method is recommended for
four reasons.
Behavior shaping, fading, and chaining are sometimes called gradual change procedures, because each involves
progressing gradually through a series of steps to produce a new behavior (shaping), or new stimulus control over a
behavior (fading), or a new sequence of stimulus–response steps (chaining). Factors that influence the effectiveness
of behavioral chaining include (a) the appropriateness of the task analysis; (b) the possibility of independent use of
prompts (such as written prompts, picture prompts, or self-instructions) by the learner; (c) whether or not a preliminary modeling prompt is used; (d) the training of the behavioral chain; and (e) the provision of ample social and other
reinforcers. A common pitfall of chaining is the development of an adventitious chain in which at least one component is superstitious and nonfunctional, such as frequently saying “uh” while talking. Another pitfall is that, with some
chains, such as overeating, all the components are involved in producing a reinforcer—e.g., excess food—but all are
undesirable.
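
The order in which steps are introduced is the whole difference between the three methods; the chain itself is the same. An illustrative Python sketch, using a hypothetical tooth-brushing task analysis of my own:

```python
def total_task_trials(steps):
    """Total-task presentation: every trial attempts the whole chain."""
    return [list(steps)]

def backward_chaining_trials(steps):
    """Backward chaining: teach the last step first, then link each
    earlier step onto the steps already mastered after it."""
    return [steps[i:] for i in range(len(steps) - 1, -1, -1)]

def forward_chaining_trials(steps):
    """Forward chaining: teach the first step, then the first two
    linked together, and so on."""
    return [steps[:i + 1] for i in range(len(steps))]

chain = ["pick up brush", "apply toothpaste", "brush teeth"]
print(backward_chaining_trials(chain)[0])   # ['brush teeth']
print(forward_chaining_trials(chain)[0])    # ['pick up brush']
```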

Summary of Chapter 11
Any situation in which operant behavior occurs can be analyzed in terms of three sets of events: (a) the antecedent stimuli
that are present just prior to the occurrence of the behavior; (b) the behavior itself; and (c) the immediate consequences of
the behavior. A stimulus in the presence of which a response will be reinforced is referred to as a discriminative stimulus
or SD. A stimulus in the presence of which a response will not be reinforced is referred to as an extinction stimulus or S∆.
Operant stimulus generalization refers to the procedure of reinforcing a response in the presence of a stimulus or
situation and the effect of the response becoming more probable, not only in the presence of that stimulus or situation
but also in the presence of another stimulus or situation. When there is limited physical similarity between two or more
members of a common-element stimulus class, a child may not show unlearned stimulus generalization between the
two stimuli until some learning has occurred, and generalization then would be referred to as learned stimulus generalization. In cases where we show learned stimulus generalization between items that are completely dissimilar, those
items would be referred to as a stimulus equivalence class.
We identified the following four factors that determine the effectiveness of stimulus discrimination: (1) choosing distinct
SDs and S∆s; (2) minimizing the opportunities for error; (3) maximizing the number of trials; and (4) describing the contingencies. A common pitfall of stimulus discrimination is failing to reinforce responses to SDs and accidentally reinforcing responses to S∆s.
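The three-term contingency and the effect of discrimination training can be sketched with a toy linear-operator learning model (illustrative only; the model, stimulus names, and numbers are invented, not from the chapter). A response in the presence of the SD is reinforced; the same response in the presence of the S∆ is not, so responding becomes differential.

```python
SD, S_DELTA = "phone ringing", "phone silent"

def consequence(antecedent, behavior):
    """Three-term contingency: antecedent -> behavior -> consequence."""
    if behavior == "pick up phone" and antecedent == SD:
        return "reinforced"          # e.g., hearing the caller's voice
    return "not reinforced"          # responses to the S-delta are extinguished

def update(p, reinforced, lr=0.2):
    """Nudge response probability toward 1 if reinforced, toward 0 if not."""
    target = 1.0 if reinforced else 0.0
    return p + lr * (target - p)

p = {SD: 0.5, S_DELTA: 0.5}          # start indifferent to both stimuli
for _ in range(30):                  # repeated discrimination-training trials
    for s in (SD, S_DELTA):
        p[s] = update(p[s], consequence(s, "pick up phone") == "reinforced")

print(p)  # responding is now likely given the SD, unlikely given the S-delta
```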

9
Q

Summary of Chapter 8
The principle of operant extinction states that (a) if an individual, in a given situation, emits a previously reinforced
behavior and that behavior is not followed by a reinforcer, (b) then that person is less likely to emit that behavior when
next encountering a similar situation. This was illustrated in the lead case involving Gregory; his yelling for dessert was
eliminated when it no longer was reinforced. Operant extinction decreases a behavior using a different procedure than
punishment and forgetting. To maximize the effectiveness of operant extinction, the behavior modifier should (1) control the reinforcers for the behavior to be extinguished; (2) combine extinction of a behavior with positive reinforcement
for a desirable alternative behavior, taking care that the alternative behavior does not itself undergo extinction, which could
lead to resurgence or reappearance of the original behavior; (3) adjust the setting in which an extinction procedure is
carried out to (a) minimize the influence of alternative reinforcers on the undesirable behavior to be extinguished and
(b) maximize the chances of the behavior modifier persisting with the program; (4) inform the individual, whose behavior is being extinguished, about the consequences for the undesirable behavior and the desirable alternative behavior;
(5) make sure that the undesirable behavior is not being intermittently reinforced; (6) be prepared for the undesirable
behavior to get worse (an extinction burst) before it gets better (by decreasing); (7) be prepared for the extinction procedure to cause aggression as a side effect; and (8) be prepared for the reappearance of an extinguished behavior following
a break, called spontaneous recovery. A common pitfall of operant extinction is the unaware application of extinction
of desirable behaviors of friends, family, acquaintances, and others.
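The basic effect of withholding a reinforcer can be sketched with the same kind of toy linear-operator model (illustrative only; the function and numbers are invented, not from the chapter): each emission of the behavior that goes unreinforced moves the response probability toward zero.

```python
# Toy model of operant extinction: the behavior is emitted but the
# reinforcer is withheld on every trial.

def extinguish(p, trials, lr=0.3):
    """Return the response probability after each unreinforced emission."""
    history = []
    for _ in range(trials):
        p = p + lr * (0.0 - p)   # reinforcer withheld; probability decays
        history.append(round(p, 3))
    return history

print(extinguish(0.9, 10))
```

Note that this smooth decline deliberately omits two effects the summary warns about: the extinction burst (the behavior first getting worse) and spontaneous recovery after a break.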

10
Q

Summary of Chapter 15
A punisher is an immediate consequence of an operant behavior that causes that behavior to decrease in frequency.
The principle of punishment states that if someone emits an operant behavior that is immediately followed by a punisher, then that person is less likely to emit that behavior again when she or he next encounters a similar situation.
There are four main types of punishers. Physical punishers activate pain receptors. Reprimands are strong negative
verbal stimuli. A timeout is a period of time during which an individual loses the opportunity to earn reinforcers.
There are two types of timeout: exclusionary and non-exclusionary. Response cost is the removal of a specified
amount of a reinforcer. Punishment is most effective when: (1) a desirable alternative behavior is strengthened; (2)
the causes of the undesirable behavior are minimized; (3) an effective punisher is used; (4) the antecedents (including stated rules) for the undesirable behavior and the desirable alternative behavior are clearly presented; and (5)
the punisher is delivered effectively, which includes (a) immediate presentation following the undesirable behavior,
(b) presentation following every instance of the undesirable behavior, (c) no pairing of the punisher with positive
reinforcement, and (d) the person delivering the punisher remains calm while doing so. Five potential harmful side
effects of punishment include the person being punished showing (1) aggressive behavior, (2) emotional behavior,
(3) escape and avoidance behavior, (4) no new behavior, and (5) modeling of punishment. A sixth harmful side
effect is that the person applying punishment might overuse it. Two opposing views on the use of punishment were
described, namely, the right-to-effective-treatment position and the freedom-from-harm position. Two common
pitfalls of punishment are (a) some people apply punishment without being aware that they are doing so; and (b)
some people think that they are applying punishment but they are actually applying reinforcement to an undesirable
behavior.
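Response cost, the removal of a specified amount of a reinforcer, is easiest to see in a token economy. The sketch below is illustrative only (the class, names, and amounts are invented, not from the chapter): tokens are earned for the desirable alternative behavior and a fixed fine is deducted contingent on the undesirable behavior.

```python
# Illustrative token economy with a response-cost contingency.

class TokenEconomy:
    def __init__(self, tokens=10):
        self.tokens = tokens

    def reinforce(self, amount=1):
        """Tokens earned contingent on the desirable alternative behavior."""
        self.tokens += amount

    def response_cost(self, amount=2):
        """A specified amount of the reinforcer removed, immediately
        following the undesirable behavior; balance never goes negative."""
        self.tokens = max(0, self.tokens - amount)

bank = TokenEconomy()
bank.reinforce()        # desirable alternative behavior strengthened
bank.response_cost()    # undesirable behavior immediately fined
print(bank.tokens)      # 9
```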

11
Q

Summary of Chapter 16
The principle of escape conditioning, also called negative reinforcement, states that the removal of aversive stimuli
immediately after the occurrence of a behavior will increase the likelihood of that behavior. Many of our everyday
behaviors are influenced by escape conditioning, such as squinting in the presence of a bright light. The principle of
avoidance conditioning states that if a behavior prevents an aversive stimulus from occurring, then the frequency of that
behavior will increase. Avoidance conditioning often involves a warning stimulus. Avoidance
responses may occur because they enable us to escape from a warning stimulus, anxiety, or unpleasant thoughts. One
pitfall of escape conditioning is that it may lead a person to unknowingly reinforce someone else's undesirable behavior. Another pitfall
is the inadvertent establishment of conditioned aversive stimuli, which an individual will then respond to escape or
avoid.
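The distinction between the two contingencies can be sketched in a small illustrative example (the scenario is invented, not from the chapter): an escape response removes an aversive stimulus that is already present, while an avoidance response, emitted during a warning stimulus, prevents the aversive stimulus from occurring at all.

```python
# One trial of an escape/avoidance situation.

def trial(aversive_present, warning_present, responded):
    if aversive_present and responded:
        return "escape: aversive stimulus terminated"
    if warning_present and responded:
        return "avoidance: aversive stimulus never occurs"
    if warning_present or aversive_present:
        return "aversive stimulus continues or follows"
    return "no aversive event"

# Escape: bright light already hitting the eyes; squinting removes it.
print(trial(aversive_present=True, warning_present=False, responded=True))
# Avoidance: squinting upon turning toward the window (warning stimulus).
print(trial(aversive_present=False, warning_present=True, responded=True))
```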
