Learning and Memory Project Flashcards

1
Q
Pavlov found that requiring dogs to make difficult discriminations between similar stimuli caused which of the following?
A. superstitious behavior
B. overshadowing
C. latent inhibition
D. experimental neurosis
A

Answer D is correct. While conducting studies on stimulus discrimination, Pavlov found that requiring the dogs to make difficult discriminations caused them to become extremely agitated and uncharacteristically aggressive, and he referred to these behaviors as experimental neurosis.

2
Q

Classical Conditioning Procedure

A

Classical conditioning helps explain involuntary, automatic responses to stimuli.

Classical Conditioning Procedure: Classical conditioning was initially described by Ivan Pavlov, and his most famous studies involved training dogs to respond to a ringing bell with salivation. In these studies, meat powder was an unconditioned stimulus (US) because it automatically elicited the unconditioned response (UR) of salivation from the dogs. The ringing bell began as a neutral stimulus because it did not automatically elicit salivation; however, after being repeatedly presented with meat powder, the ringing bell became a conditioned stimulus (CS) that elicited a conditioned response (CR) of salivation when presented alone.

The magnitude of a CR is always less than the magnitude of the UR. It’s affected, however, by the number of times the CS and US are paired during conditioning trials, with the magnitude increasing (up to a point) as the number of pairings increases. In addition, the strength and speed of acquisition of a CR depend on the procedure that’s used: (a) When using delay conditioning, presentation of the CS precedes and overlaps presentation of the US. Of the conditioning procedures, delay conditioning is most effective, and a delay of about one-half second between presentation of the CS and US is optimal. (b) When using trace conditioning, the CS is presented and terminated just before the US is presented. (c) When using simultaneous conditioning, the CS and US are presented and terminated at about the same time. (d) When using backward conditioning, the US is presented before the CS. Backward conditioning is usually ineffective, which suggests that it’s the contingency of stimuli (i.e., that the occurrence of the US depends on the occurrence of the CS) that accounts for classical conditioning.
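The growth of CR magnitude with repeated pairings can be sketched with the Rescorla-Wagner learning rule (the model cited later in this deck for blocking). This is an illustrative Python sketch, not part of the original card; the parameter values are arbitrary.

```python
# Illustrative sketch (assumed parameters): the Rescorla-Wagner rule,
# V <- V + alpha * (lambda - V), makes associative strength V (a proxy
# for CR magnitude) rise with each CS-US pairing but level off below
# the asymptote -- i.e., "increasing (up to a point)".

def acquisition_curve(pairings, alpha=0.3, asymptote=1.0):
    """Associative strength after each CS-US pairing (arbitrary units)."""
    v, curve = 0.0, []
    for _ in range(pairings):
        v += alpha * (asymptote - v)  # learning slows as V nears the asymptote
        curve.append(v)
    return curve

curve = acquisition_curve(10)
# Early pairings produce large gains, later pairings smaller ones,
# and V never exceeds the asymptote.
```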

3
Q

Classical Extinction

A

Classical Extinction: Classical extinction occurs when, after the CS is repeatedly presented without the US, the CS no longer produces a CR.

4
Q

Spontaneous Recovery

A

Spontaneous Recovery: Spontaneous recovery occurs when a CR returns after it was apparently extinguished. According to Pavlov, spontaneous recovery confirms that an extinguished CR is suppressed rather than eliminated.

5
Q
The most effective conditioning procedure is __________ conditioning.
A. simultaneous
B. backward
C. delay
D. trace
A

Answer C is correct. Of the conditioning procedures, delay conditioning is most effective and involves presenting the CS so that it precedes and overlaps presentation of the US.

6
Q

Stimulus Discrimination

A

Stimulus Discrimination: Stimulus discrimination is the opposite of stimulus generalization and is the ability to discriminate between the CS and similar stimuli. Pavlov found that, when dogs were exhibiting stimulus generalization, they could be taught to discriminate between similar stimuli. For example, if a dog was conditioned to salivate in response to a 2000-Hz tone and subsequently also salivated in response to a 2100-Hz tone, discrimination training would involve repeatedly presenting the 2000-Hz tone with meat powder and the 2100-Hz tone without meat powder. As the result of this discrimination training, the dog would be able to discriminate between the two stimuli and salivate only in response to the 2000-Hz tone.

While conducting studies on stimulus discrimination, Pavlov found that requiring dogs to make difficult discriminations had unexpected consequences. For example, in one study, Pavlov attempted to train dogs to salivate in response to a circle but not to ellipses. When discrimination became very difficult (when an ellipse was very similar to a circle), the dogs became extremely agitated and uncharacteristically aggressive. Pavlov referred to these unusual behaviors as experimental neurosis and concluded that they were caused by a conflict between excitatory and inhibitory processes in the central nervous system.

7
Q

Pavlov found that a CS elicits:
A. a similar amount of salivation as the US regardless of the conditioning procedure used.
B. a slightly greater amount of salivation than the US when delay conditioning was used.
C. a smaller amount of salivation than the US regardless of the conditioning procedure used.
D. a smaller or larger amount of salivation than the US, depending on which conditioning procedure was used.

A

Answer C is correct. Pavlov found that a CS (e.g., a ringing bell) always elicited a response of less intensity or magnitude than the response elicited by the US (e.g., meat powder) regardless of the number of pairings of the CS and US or which conditioning procedure was used.

8
Q
One explanation for __________ is that the second neutral stimulus does not provide any new information about the occurrence of the unconditioned stimulus.
A. stimulus discrimination
B. blocking
C. latent inhibition
D. higher-order conditioning
A

Answer B is correct. According to Rescorla and Wagner (1972), blocking occurs because the initial neutral stimulus (the conditioned stimulus) already predicts the occurrence of the unconditioned stimulus. Consequently, an association between the second neutral stimulus and the US is not made because the second neutral stimulus provides redundant information about the US.

9
Q

When using second-order conditioning:
A. the CS acts as a US.
B. the US acts as a CS.
C. the CS is presented with more than one US.
D. stimuli similar to the original CS are paired with the US.

A

Answer A is correct. Second-order conditioning involves using the initial CS (e.g., a ringing bell) as a US by pairing it with a second neutral stimulus (e.g., a blinking light) so that the second neutral stimulus also becomes a CS and elicits a CR (e.g., salivation) when presented alone.

10
Q

Latent Inhibition

A

Latent Inhibition: Latent inhibition occurs when pre-exposure to a neutral stimulus alone on multiple occasions prior to conditioning trials reduces the likelihood that the stimulus will become a CS and elicit a CR when it’s subsequently paired with a US.

11
Q

Higher-Order Conditioning

A

Higher-Order Conditioning: Higher-order conditioning involves treating a CS (e.g., a ringing bell) as an unconditioned stimulus and pairing it with a neutral stimulus (e.g., a blinking light) so that the neutral stimulus also becomes a CS and elicits the CR (e.g., salivation) when presented alone. Note that, when higher-order conditioning involves a second CS, it’s also referred to as second-order conditioning; when it involves a third CS, it’s also referred to as third-order conditioning; etc.

12
Q

Compound Conditioning

A

Compound Conditioning: Compound conditioning occurs when two or more stimuli are presented together and includes blocking and overshadowing. Blocking occurs when conditioning trials are first used to establish an association between one neutral stimulus (e.g., a ringing bell) and a US (e.g., meat powder) by repeatedly pairing presentation of the neutral stimulus with the US. Then, when the neutral stimulus becomes a CS and elicits a CR when presented alone, it’s repeatedly presented simultaneously with a second neutral stimulus (e.g., a flashing light) just before presenting the US. In this situation, classical conditioning of the first neutral stimulus blocks classical conditioning of the second neutral stimulus, and the second neutral stimulus never becomes a CS. Blocking apparently occurs because the second neutral stimulus does not provide any new information about the occurrence of the US (Rescorla & Wagner, 1972).

Overshadowing occurs when two neutral stimuli are, from the start, repeatedly presented together before the US. In this situation, the two stimuli will elicit a CR when presented together; however, when each stimulus is presented alone, the more salient (stronger) stimulus produces a CR, but the less salient (weaker) stimulus does not. Research has found that this occurs even though the less salient stimulus can become a CS when it’s paired by itself with the US. In other words, the failure of the less salient stimulus to become a CS is not due to its low salience but to being overshadowed by the more salient stimulus when the two stimuli are presented together during conditioning trials.
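The Rescorla-Wagner account of blocking can be sketched in a few lines of Python. This is an illustrative simulation with assumed (arbitrary) learning parameters, not part of the original card.

```python
# Illustrative sketch: in the Rescorla-Wagner model, all stimuli present
# on a trial share a single prediction error (asymptote - total V). Once
# the bell fully predicts the US, compound bell+light trials leave almost
# no error for the light to absorb, so the light never becomes a CS.

def condition(strengths, stimuli, trials, alpha=0.3, asymptote=1.0):
    """Update associative strengths for the stimuli present on each trial."""
    for _ in range(trials):
        error = asymptote - sum(strengths[s] for s in stimuli)
        for s in stimuli:
            strengths[s] += alpha * error
    return strengths

v = {"bell": 0.0, "light": 0.0}
condition(v, ["bell"], trials=30)           # phase 1: bell alone with US
condition(v, ["bell", "light"], trials=30)  # phase 2: compound with US
# v["bell"] ends near 1.0; v["light"] stays near 0.0 (blocking).
```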

13
Q

Interventions That Use Extinction

A

Interventions based on classical conditioning decrease or eliminate an undesirable behavior using extinction or counterconditioning.

Interventions That Use Extinction: Interventions that use classical extinction to alter a behavior include exposure with response prevention, cue exposure therapy, implosive therapy, and eye movement desensitization and reprocessing.

  1. Exposure with Response Prevention: Exposure with response prevention uses classical extinction to eliminate an anxiety response and is based on two assumptions. The first assumption is that anxiety-arousing stimuli that do not ordinarily elicit anxiety (e.g., elevators, crowds, white rats) become conditioned stimuli and begin to elicit anxiety because, at some time in the past, they were paired with an unconditioned stimulus that naturally elicited anxiety. The second assumption is that the conditioned fear response never extinguishes because the person avoids the conditioned stimulus in order to avoid experiencing fear. As an example, a person who has a fear of elevators may have acquired this fear because he was in an elevator when there was an earthquake that automatically elicited a fear response. In this situation, the elevator was the CS and the earthquake was the US and, as a result of the pairing of the CS and US and subsequent stimulus generalization, the person is now afraid of all elevators. In this situation, the person always takes the stairs instead of the elevator to avoid feeling fear, which deprives him of opportunities for extinguishing his conditioned fear response. When using exposure with response prevention, the client is exposed to the feared (conditioned) stimulus while preventing the client from making his/her usual avoidance response. As a result, the conditioned response is extinguished.

Exposure with response prevention can be conducted in vivo (in real life settings), in virtual reality, or in imagination, and it can take the form of flooding or graded exposure: Flooding involves sustained exposure to stimuli that elicit the most intense levels of anxiety during all exposure sessions, while graded exposure (also known as graduated exposure) involves progressive exposure to anxiety-arousing stimuli, beginning with the least anxiety-arousing stimulus and gradually progressing to stimuli that produce increasingly greater levels of anxiety. Although flooding can be effective (especially for treating agoraphobia), graded exposure is more acceptable to clients who resist experiencing high levels of anxiety. Also, to be effective, each exposure session should not end until the client has experienced a substantial decrease in anxiety (Clark & Beck, 2010).

14
Q

Exposure with Response Prevention

A

Exposure with Response Prevention: Exposure with response prevention uses classical extinction to eliminate an anxiety response and is based on two assumptions. The first assumption is that anxiety-arousing stimuli that do not ordinarily elicit anxiety (e.g., elevators, crowds, white rats) become conditioned stimuli and begin to elicit anxiety because, at some time in the past, they were paired with an unconditioned stimulus that naturally elicited anxiety. The second assumption is that the conditioned fear response never extinguishes because the person avoids the conditioned stimulus in order to avoid experiencing fear. As an example, a person who has a fear of elevators may have acquired this fear because he was in an elevator when there was an earthquake that automatically elicited a fear response. In this situation, the elevator was the CS and the earthquake was the US and, as a result of the pairing of the CS and US and subsequent stimulus generalization, the person is now afraid of all elevators. In this situation, the person always takes the stairs instead of the elevator to avoid feeling fear, which deprives him of opportunities for extinguishing his conditioned fear response. When using exposure with response prevention, the client is exposed to the feared (conditioned) stimulus while preventing the client from making his/her usual avoidance response. As a result, the conditioned response is extinguished.

Exposure with response prevention can be conducted in vivo (in real life settings), in virtual reality, or in imagination, and it can take the form of flooding or graded exposure: Flooding involves sustained exposure to stimuli that elicit the most intense levels of anxiety during all exposure sessions, while graded exposure (also known as graduated exposure) involves progressive exposure to anxiety-arousing stimuli, beginning with the least anxiety-arousing stimulus and gradually progressing to stimuli that produce increasingly greater levels of anxiety. Although flooding can be effective (especially for treating agoraphobia), graded exposure is more acceptable to clients who resist experiencing high levels of anxiety. Also, to be effective, each exposure session should not end until the client has experienced a substantial decrease in anxiety (Clark & Beck, 2010).

15
Q

Cue Exposure Therapy

A

Cue Exposure Therapy: Cue exposure therapy (CET) is a type of exposure with response prevention that’s used to treat substance use disorders. It involves exposing a client to cues (conditioned stimuli) associated with a substance while prohibiting him/her from using the substance. Doing so weakens the strength of the relationship between the cues and substance use, apparently as the result of extinction or habituation (Sher, Talley, Littlefield, & Martinez, 2011). Cues include internal and external triggers for substance use such as craving for the substance, interpersonal conflicts, and the sight of the substance. CET is often effective when used alone, but there’s evidence that its effectiveness increases when it’s combined with training in coping strategies to use when faced with cues, such as reminding oneself about the negative consequences of substance use and engaging in alternative activities (Spiegler, 2015).

16
Q

Implosive Therapy

A

Implosive Therapy: Implosive therapy (Stampfl & Levis, 1967) is a type of exposure that’s always conducted in imagination and incorporates psychodynamic elements. When using implosive therapy, the therapist encourages the client to exaggerate his/her image of the feared object or event in order to elicit a high level of anxiety and embellishes the scene being imagined by the client with psychodynamic conflicts that are believed to underlie the client’s anxiety (e.g., conflicts related to sexuality, hostility, or rejection).

17
Q

EMDR

A

Eye Movement Desensitization and Reprocessing (EMDR): EMDR was originally developed as an intervention for PTSD (Shapiro, 2001) but is now also used to treat phobias and several other disorders. It combines exposure to trauma-related imagery, exposure to negative cognitions related to the feared event, rehearsal of adaptive cognitions, and rapid lateral eye movements, and it’s based on the assumption that eye movements facilitate the mental processing of traumatic memories. Despite evidence for its effectiveness, it’s not clear if the beneficial effects of EMDR depend on eye movements. Based on their meta-analysis of the research, Davidson and Parker (2013) conclude that eye movements do not add to the effectiveness of EMDR and that its benefits are due to repeated exposure to the feared event. In contrast, other investigators have argued that exposure does not adequately explain its effects (e.g., Rogers & Silver, 2002).

18
Q

Interventions That Use Counterconditioning

A

Interventions That Use Counterconditioning: These interventions include systematic desensitization and aversion therapy.

  1. Systematic Desensitization: Systematic desensitization was developed by Joseph Wolpe as a treatment for phobic anxiety and involves three steps: (1) The client learns deep muscle relaxation or another procedure that produces a state of relaxation. (2) The client and therapist create an anxiety hierarchy, which is a list of stimuli that cause low to high levels of anxiety. (3) The client imagines the stimuli included in the hierarchy and uses the relaxation procedure while doing so. This step begins with the least anxiety-arousing stimulus and, once the client can maintain a relaxed state while imagining that stimulus, he or she imagines the next stimulus in the hierarchy and practices the relaxation procedure while doing so. This procedure continues until the client can maintain a relaxed state while imagining the most anxiety-arousing stimulus.

As described by Wolpe, systematic desensitization uses counterconditioning (which he referred to as reciprocal inhibition) and involves replacing an undesirable anxiety response with an incompatible and more desirable relaxation response. From this perspective, the anxiety-arousing stimulus is the conditioned stimulus (CS) and the procedure that naturally produces relaxation is the unconditioned stimulus (US); and, as a result of pairing presentation of the CS and US, the CS produces relaxation rather than anxiety. Research using the dismantling strategy (which involves comparing the individual components of a treatment) has found, however, that the effectiveness of systematic desensitization is due to classical extinction. From this perspective, anxiety-arousing stimuli are conditioned stimuli that, at some time in the past, were paired with an unconditioned stimulus that naturally produced anxiety, and systematic desensitization extinguishes the anxiety response to these stimuli by repeatedly presenting them without that unconditioned stimulus.

  2. Aversion Therapy: Aversion therapy is also known as aversive counterconditioning and is used to treat addictions and other self-reinforcing behaviors. When using aversion therapy, stimuli associated with the problem behavior are paired with an unconditioned stimulus that naturally produces an unpleasant response that’s incompatible with the reinforcing response. As a result, stimuli associated with the problem behavior become conditioned stimuli and produce the unpleasant response rather than the self-reinforcing response. For example, when aversion therapy is used to treat a client’s fetish, presentation of the fetish object might be paired with electric shock to the client’s arm. In this situation, the fetish object is the CS, the electric shock is the US, and the pain caused by the electric shock is the UR. As the result of pairing the fetish object with electric shock, the fetish object produces a conditioned response (CR) of pain rather than sexual arousal.

When aversion therapy is conducted in imagination rather than in vivo (with real stimuli), it’s known as covert sensitization. When using covert sensitization, sessions often end by having the client imagine a relief scene. This involves having the client imagine facing a stimulus associated with the problem behavior but refraining from engaging in the behavior and, as a result, experiencing a sense of relief or other positive sensation.

19
Q

E. L. Thorndike

A

Operant conditioning is useful for understanding the factors that contribute to the acquisition, maintenance, and cessation of voluntary behaviors.

E. L. Thorndike: Thorndike (1898) studied learning by placing hungry cats in a wooden crate (“puzzle box”) that required them to make a certain response in order to escape and obtain food placed outside the crate. He found that the cats engaged in random behaviors until they accidentally performed the behavior that opened a door and allowed them to escape. Although the cats did not immediately perform the successful behavior when they were returned to the crate, the amount of time between being placed in the crate and performing the behavior gradually decreased over subsequent trials. Based on these results, Thorndike concluded that the cats learned how to escape through a process of trial-and-error and that the likelihood that behaviors would recur depended on their consequences. According to his law of effect, behaviors that are followed by satisfying consequences are likely to occur again, while behaviors that are followed by dissatisfying consequences are less likely to be repeated.

20
Q

B. F. Skinner

A

B. F. Skinner: Skinner’s (1938) theory of operant conditioning extends Thorndike’s work and proposes that whether or not a voluntary behavior is emitted depends on how it “operates” on the environment – i.e., whether it produces reinforcement or punishment. Skinner also distinguished between positive and negative reinforcement and positive and negative punishment to indicate whether the behavior is followed by the application or removal of a stimulus: (a) Positive reinforcement occurs when a behavior increases or is maintained at its current level because a stimulus is applied following the behavior. Example: An employee works overtime because he’s paid extra for doing so. (b) Negative reinforcement occurs when a behavior increases or is maintained because a stimulus is removed following the behavior. Example: A child straightens her room because her parents stop nagging her when she does so. (c) Positive punishment occurs when a behavior decreases because a stimulus is applied following the behavior. Example: A child stops teasing the family dog because his parents always yell at him whenever he does so. (d) Negative punishment occurs when a behavior decreases because a stimulus is taken away following the behavior. Example: An adolescent stops swearing because one dollar is deducted from his weekly allowance whenever he swears.

When determining if a scenario presented in an exam question describes positive or negative reinforcement or punishment, a good strategy is to first determine whether the behavior is increasing or being maintained or is decreasing. This will indicate if the scenario is describing reinforcement or punishment. The next step is to determine if something is being applied or taken away following the behavior. When something is being applied, the scenario is describing positive reinforcement or positive punishment; when something is being taken away, it’s describing negative reinforcement or negative punishment. Alternatively, a mnemonic that helps some candidates is to associate each consequence with a different word and determine which word best fits the scenario: positive reinforcement = reward, negative reinforcement = relief, positive punishment = pain, negative punishment = loss.
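The two-step strategy above can be restated as a tiny decision function. This hypothetical Python helper is not from the card; it just encodes the strategy (and the reward/relief/pain/loss mnemonic) as code.

```python
# Hypothetical helper: step 1 checks whether the behavior increases or
# is maintained (reinforcement) vs. decreases (punishment); step 2 checks
# whether a stimulus is applied (positive) vs. removed (negative).

MNEMONIC = {
    "positive reinforcement": "reward",
    "negative reinforcement": "relief",
    "positive punishment": "pain",
    "negative punishment": "loss",
}

def classify_consequence(behavior, stimulus):
    """behavior: 'increases' or 'decreases'; stimulus: 'applied' or 'removed'."""
    kind = "reinforcement" if behavior == "increases" else "punishment"
    sign = "positive" if stimulus == "applied" else "negative"
    return f"{sign} {kind}"

classify_consequence("increases", "removed")  # negative reinforcement (relief)
classify_consequence("decreases", "applied")  # positive punishment (pain)
```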

21
Q

Operant Conditioning Terms

A

Other Operant Conditioning Terms and Procedures: For the exam, you want to be familiar with several other terms and procedures associated with operant conditioning:

  1. Operant Extinction: To extinguish a behavior that has been reinforced, reinforcement is withheld every time the behavior occurs. Although termination of reinforcement eventually results in a decrease or cessation of the behavior, it often initially produces a temporary increase in the behavior, which is referred to as an extinction burst. Also, when reinforcement has been provided for two different behaviors and reinforcement for one behavior is terminated, the other behavior is likely to increase, and this is referred to as behavioral contrast.
22
Q

Reinforcement Schedules

A

Reinforcement Schedules: The acquisition of a behavior is fastest when it’s reinforced on a continuous schedule, which means that the behavior is reinforced every time it occurs. However, because the continuous schedule is associated with satiation (loss of the reinforcer’s reinforcing value) and with rapid extinction of the behavior when reinforcement is stopped, the optimal procedure is to start with a continuous schedule and then switch to an intermittent (partial) schedule when the behavior is occurring at the desired level. There are four intermittent schedules:

(a) When using a fixed interval (FI) schedule, reinforcement is consistently provided after a fixed period of time regardless of how many times the behavior occurs during each interval. A rat on an FI-20 schedule is reinforced with a food pellet every 20 seconds regardless of whether it presses a lever once or multiple times during a 20-second interval. FI schedules produce a low rate of responding, with responses being made shortly before the end of each interval.
(b) When using a variable interval (VI) schedule, reinforcement is provided after intervals of varying and unpredictable lengths. A rat on a VI-20 schedule will be reinforced, on average, after 20 seconds, but the length of the interval varies – first after 15 seconds, then after 25 seconds, then after 20 seconds, etc. Again, as long as the rat presses the lever at least once during an interval, it will receive a food pellet at the end of the interval. VI schedules produce a steady but relatively low rate of responding.
(c) When using a fixed ratio (FR) schedule, reinforcement is consistently provided after a specific number of responses. A rat on an FR-10 schedule receives a food pellet after every ten lever presses. FR schedules produce a steady and relatively high rate of responding.
(d) When using a variable ratio (VR) schedule, reinforcement is provided after a variable number of responses. A rat on a VR-10 schedule will be reinforced, on average, after every ten lever presses, but the exact number of responses varies – first after 10 responses, then after 8 responses, then after 12 responses, etc. Of the four intermittent schedules, VR schedules produce the highest rate of responding and the greatest resistance to extinction.
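The four schedules can be sketched as simple rules deciding when a response earns reinforcement. This is an illustrative sketch (the FR-10, FI-20, and VR-10 values match the card's examples; the function names are invented).

```python
import random

# Illustrative sketch: ratio schedules count responses, while interval
# schedules count elapsed time since the last reinforcer.

def fr_reinforced(response_number, ratio=10):
    """FR-10: every 10th response is reinforced."""
    return response_number % ratio == 0

def fi_reinforced(seconds_since_last_reward, interval=20):
    """FI-20: the first response after 20 seconds is reinforced."""
    return seconds_since_last_reward >= interval

def vr_requirement(mean=10, spread=2):
    """VR-10: the next reinforcer comes after ~10 responses, varying each time."""
    return random.randint(mean - spread, mean + spread)

# FR-10: the 10th press pays off, the 9th does not.
# FI-20: a press at 25 seconds pays off, a press at 15 seconds does not.
```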

23
Q

Thinning

A

Thinning: Reducing the amount of reinforcement for a behavior is referred to as thinning. Switching from a continuous to an intermittent schedule of reinforcement or from an FR-10 to an FR-20 schedule are examples of thinning. Thinning the reinforcement schedule once a behavior reaches its desired level helps increase resistance to extinction.

24
Q

Matching Law

A

Matching Law: According to the matching law, when two or more behaviors are concurrently reinforced on different schedules, the rate of performing each behavior is proportional to the frequency of reinforcement. As an example, when a rat is reinforced on a VI-30 schedule for pressing lever #1 and on a VI-15 schedule for pressing lever #2, the rat will press lever #2 about twice as often as it presses lever #1. The matching law also predicts that the rate of responding will match the magnitude of reinforcement. When a rat is reinforced on a VI-30 schedule for pressing either of two levers, but pressing lever #1 provides access to food for 10 seconds and pressing lever #2 provides access to food for five seconds, the rat will press lever #1 about twice as often as it presses lever #2.
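The matching law's proportionality can be checked with simple arithmetic. This illustrative snippet converts the card's VI schedules into reinforcement rates (an assumption: a VI-15 schedule delivers reinforcement about twice as often per minute as a VI-30 schedule).

```python
# Illustrative sketch: the matching law predicts that the share of
# responding devoted to each option equals that option's share of
# total reinforcement (whether frequency or magnitude).

def matching_shares(reinforcement_rates):
    """Predicted proportion of responses allocated to each option."""
    total = sum(reinforcement_rates)
    return [rate / total for rate in reinforcement_rates]

# VI-30 -> about 2 reinforcers/minute; VI-15 -> about 4 reinforcers/minute.
shares = matching_shares([2, 4])
# shares[1] is twice shares[0]: lever #2 is pressed about twice as often.
```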

25
Q

Types of Positive Reinforcers

A

Types of Positive Reinforcers: Primary reinforcers (e.g., food, water) are inherently reinforcing because they satisfy needs that are related to basic survival. In contrast, secondary reinforcers (e.g., praise, tokens) are neutral stimuli that become reinforcing because of their association with primary reinforcers. When secondary reinforcers are associated with a variety of back-up (primary) reinforcers, they are referred to as generalized reinforcers (also known as generalized secondary reinforcers and generalized conditioned reinforcers). Money is a generalized reinforcer because it can be exchanged for a variety of back-up reinforcers.

26
Q

Superstitious Behavior

A

Superstitious Behavior: As described by Skinner, superstitious behavior occurs when a behavior increases because it was accidentally reinforced. In one study, Skinner delivered food to pigeons every 15 seconds regardless of what they were doing. As a result, the pigeons formed an association between whatever behaviors they happened to be performing just before food was delivered and the delivery of food, and they performed those behaviors toward the end of subsequent 15-second intervals.

27
Q

Stimulus Control

A

Stimulus Control: A behavior is brought under stimulus control when it occurs in the presence of one stimulus but not another stimulus. For example, rats might learn that, when a light is blinking and they press a lever, a food pellet will be delivered; but, when the light is not blinking and they press the lever, a food pellet will not be delivered. In this situation, the blinking light is a positive discriminative stimulus (also referred to as just the discriminative stimulus or SD) because it signals that reinforcement will be delivered. In contrast, the non-blinking light is a negative discriminative stimulus (also referred to as the S-delta stimulus) because it signals that reinforcement will not be delivered. Stimulus control is an example of two-factor learning, which combines operant and classical conditioning: Performance of a particular behavior is due to positive reinforcement (operant conditioning). Performance of the behavior in the presence of a positive discriminative stimulus but not in the presence of a negative discriminative stimulus is the result of discrimination training (classical conditioning).

28
Q

Prompts

A

Prompts: Prompts are cues that help initiate the performance of a behavior and include providing cues, instructions, or physical guidance. When the behavior is reinforced, prompts become associated with the reinforcement and act as positive discriminative stimuli. “Finish your homework and you can play video games” is an example of a prompt that acts as a positive discriminative stimulus. Gradually removing a prompt once the behavior is at the desired level is referred to as fading.

29
Q

Stimulus Generalization

A

Stimulus Generalization: Stimulus generalization in operant conditioning is the same as stimulus generalization in classical conditioning and occurs when stimuli similar to the positive discriminative (conditioned) stimulus elicit the same response.

30
Q

Response Generalization

A

Response Generalization: In some situations, providing reinforcement for a specific behavior not only increases that behavior but also increases the likelihood that similar behaviors will occur, and this is referred to as response generalization. Response generalization is occurring when a young child who’s praised for sharing a toy with another child starts sharing other toys with other children.

31
Q

Escape and Avoidance Conditioning

A

Escape and Avoidance Conditioning: Escape and avoidance conditioning are applications of negative reinforcement. Escape conditioning occurs when a behavior is performed because it allows the individual to escape an unpleasant stimulus. As an example, a dog might escape an electric shock that’s being applied to the floor of its cage by jumping over a barrier to get to the other side of the cage where electric shock is not being applied. In this situation, the dog’s “jumping over the barrier” behavior is negatively reinforced.

Avoidance conditioning is the result of two-factor learning and occurs when a stimulus signals that an unpleasant stimulus is about to be applied and a behavior occurs because it allows the individual to avoid the unpleasant stimulus. A dog might learn that a blinking light signals that electric shock is about to be applied to the floor of its cage, and it jumps over the barrier as soon as the light starts to blink to avoid being shocked. In this situation, the blinking light has become associated with electric shock and is a conditioned stimulus (classical conditioning). And jumping over the barrier as soon as the light starts blinking is the result of negative reinforcement (operant conditioning).

32
Q

Habituation

A

Habituation: Habituation refers generally to the gradual decline in the frequency or magnitude of a response. In the context of operant conditioning, habituation is identified as one of the reasons why punishment does not have good long-term effects – i.e., over time, punishment tends to become less effective because the person habituates (becomes accustomed) to it. One of the dangers of punishment is that it can escalate to a dangerous (abusive) level if the person delivering the punishment continues to increase its intensity as it becomes less effective.

33
Q

Shaping:

A

Shaping involves reinforcing successive approximations to the desired behavior. As an example, Lovaas and his colleagues (e.g., Lovaas, Berberich, Perloff, & Schaeffer, 1966) used shaping to teach imitative speech to children with schizophrenia or autism. This involved first reinforcing a child with food and praise for making any vocalization, then reinforcing the child only for making any vocalization after the teacher said a word, then only for making sounds that were similar to the teacher’s word, and then only for imitating the teacher’s word. Once the child was able to imitate the initial word, the teacher modeled other words and then phrases and sentences for the child to imitate.

34
Q

Chaining

A

Chaining: Chaining is used to establish a complex behavior that consists of separate responses (links in the behavior chain). Chaining begins with a task analysis to identify the individual responses that make up the behavior chain and can be forward or backward. Forward chaining involves teaching each response separately, beginning with the first response in the behavior chain and, once that response is mastered, teaching the next response. This process continues until the individual is performing the desired behavior. Backward chaining involves beginning with the last response in the chain and then teaching the second to last response, etc. For both types of chaining, the individual is reinforced after he/she masters each new response and after he/she performs the new response with all of the previously learned responses in the correct sequence. As an example of forward chaining, teaching a child to brush her teeth would involve first teaching her to pick up the toothpaste with her left hand and open it with her right hand, then to pick up the toothbrush with her right hand, then to squeeze toothpaste on the toothbrush and put down the toothpaste, then to use the toothbrush to brush her teeth, etc.

Note that the difference between shaping and chaining is that, with shaping, only the final target behavior is important and the other behaviors that were reinforced during training are no longer important or evident once the individual can perform the final behavior. In contrast, with chaining, each response in the behavior chain is important and evident when the individual performs the target behavior.

35
Q

Premack Principle

A

Premack Principle: When using the Premack principle (Premack, 1965), a high frequency or preferred behavior is used as reinforcement for a low frequency or less preferred behavior to increase the low frequency behavior. A parent is using the Premack principle when her child loves to play video games but hates doing homework and she tells the child he can play video games only after he’s finished his homework.

36
Q

Overcorrection

A

Overcorrection: Overcorrection is classified as a type of positive punishment because it involves applying a penalty following an undesirable behavior in order to reduce or eliminate that behavior. It consists of two procedures that can be used alone or together: Restitution involves having the individual correct the consequences of his/her behavior and restore the environment to a better state. For example, if a child becomes angry and knocks over several chairs in the classroom, he would be required to pick up the chairs and appropriately position those chairs and all of the other chairs in the room. Positive practice involves practicing alternative appropriate behaviors that are similar to the desirable behavior. The child who knocked over chairs might be required to straighten up the entire classroom. Both procedures may include providing verbal instructions and/or physical (manual) guidance.

37
Q

Response Cost

A

Response Cost: Response cost is a type of negative punishment that involves removing a specific reinforcer temporarily or permanently after a behavior occurs to decrease or eliminate that behavior. A parent is using response cost when her child loses 15 minutes of video game time whenever he acts aggressively toward his younger sister.

38
Q

Time Out

A

Time Out: Time out is also known as time out from positive reinforcement and time out from generalized reinforcement and involves removing all sources of reinforcement following an undesirable behavior for a brief period of time. Having a preschool child sit in the corner of the classroom facing the wall after she engages in an undesirable behavior is an example of time out. Several factors impact the effectiveness of time out (Zastrow & Kirst-Ashman, 2007): The individual should be informed about what behaviors will result in a time out and how long the time out will last. The time out should be applied immediately after the behavior occurs and should be applied consistently to that behavior. The length of the time out should be brief (one to ten minutes), but it should not end until the individual is no longer engaging in the behavior. Time out should be combined with positive reinforcement for appropriate behaviors.

39
Q

Extinction

A

Extinction: Operant extinction involves withholding reinforcement from a behavior that was previously reinforced. A parent would be using extinction if, in the past, she usually paid attention to her young child whenever he whines but decides to stop paying attention to him after realizing that doing so increases his whining. For extinction to be effective, the removal of reinforcement must be consistent and should be combined with reinforcement for appropriate behaviors.

Differential reinforcement combines extinction and positive reinforcement and is used to eliminate or weaken a target (undesirable) behavior and increase an alternative (desirable) behavior or an alternative form of the target behavior. There are several types of differential reinforcement: (a) Differential reinforcement of incompatible behavior (DRI) involves reinforcing an individual when he/she engages in a specified desirable behavior that’s incompatible with the undesirable behavior. A teacher is using DRI when a child who frequently gets out of his seat during class is reinforced for each 30-minute interval that he stays in his seat. (b) Differential reinforcement of alternative behavior (DRA) involves reinforcing the individual when he/she engages in one or more specified alternative (but not necessarily incompatible) behaviors rather than the undesirable behavior. A teacher is using DRA to reduce a student’s shouting out in class by reinforcing the student for raising her hand instead of shouting whenever she wants to say something in class. (c) Differential reinforcement of other behavior (DRO) involves reinforcing the individual for engaging in any behaviors other than the undesirable behavior. A parent is using DRO to reduce the hand-flapping of a child with autism spectrum disorder when she ignores the child’s hand-flapping and reinforces the child for engaging in any appropriate activities during 30-minute intervals. (d) Unlike DRI, DRA, and DRO, differential reinforcement of low rates of behavior (DRL) doesn’t involve reinforcing a desirable behavior but, instead, reinforcing the individual when he/she engages in the target behavior at or less than a specified low rate. An elementary school teacher is using DRL to reduce the number of times a child interrupts the class by asking questions when she reinforces the child for every 60-minute interval that the child asks three or fewer questions.

40
Q

Multi-Store Model of Memory

A

Multi-Store Model of Memory: The multi-store model (Atkinson & Shiffrin, 1968) describes memory as consisting of three levels: sensory memory, short-term memory, and long-term memory.

(a) Sensory memory is capable of storing a large amount of incoming sensory information but does so for a very brief period, with the duration being determined by the type of information. For example, information remains in iconic (visual) sensory memory for about one-half second and remains in echoic (auditory) sensory memory for about two seconds.
(b) When we pay attention to incoming sensory information, it’s transferred to short-term memory, which stores a limited amount of information for only about 20 seconds unless the information is rehearsed (repeated over and over again). Short-term memory consists of memory span and working memory: Memory span is the storage capacity of short-term memory and is also known as primary memory. According to Miller (1956), the capacity of short-term memory is 7 ± 2 units of information but can be expanded by “chunking” information. For example, to maintain 14 words in short-term memory, you can chunk the 14 words into 7 word pairs. Working memory is responsible for processing and manipulating information that’s in short-term memory. For instance, it’s working memory that allows you to mentally solve simple math problems.
(c) Information is transferred from short-term to long-term memory when it’s encoded, for example, by relating it to something that’s already stored in long-term memory. The capacity and duration of long-term memory are believed to be unlimited, and it’s divided into recent (secondary) and remote (tertiary) memory. Recent long-term memory contains memories that have been stored from minutes to years and, of the types of memory, is most adversely affected in adulthood by increasing age. Remote long-term memory contains memories that have been stored for years to decades.
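
The chunking idea described above can be sketched in a few lines of code. This is only an illustration of the arithmetic (14 items grouped into 7 two-item chunks, which fits within the 7 ± 2 estimate); the word list and the pair-based grouping are hypothetical examples, not taken from Miller’s paper:

```python
# Sketch: grouping 14 words into 7 two-word chunks so the list fits
# within Miller's 7 +/- 2 capacity estimate for short-term memory.
# The word list below is a made-up example.

def chunk(items, size):
    """Group a flat list into consecutive chunks of the given size."""
    return [items[i:i + size] for i in range(0, len(items), size)]

words = [
    "sun", "flower", "rain", "coat", "book", "case", "tooth",
    "brush", "door", "bell", "fire", "place", "snow", "ball",
]

pairs = chunk(words, 2)
print(len(words))   # 14 separate items exceeds the 7 +/- 2 estimate
print(len(pairs))   # 7 chunks fits within it
```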

The multi-store model has been used to explain the serial position effect, which occurs when a person is asked to memorize a list of unrelated words and then asked to recall as many words as possible in any order either immediately or after a brief delay. When asked to recall the words immediately, a person is likely to exhibit both primacy and recency effects. In other words, the person will recall more words at the beginning and end of the list than words in the middle of the list because words from the beginning of the list have been stored in long-term memory, while words from the end of the list are still in short-term memory. In contrast, when asked to recall the words after a brief delay, the person will exhibit only a primacy effect because words from the end of the list are no longer in short-term memory.

41
Q

Baddeley’s Model of Working Memory

A

Baddeley’s Model of Working Memory: According to Baddeley (2000; Repovs & Baddeley, 2006), the working memory aspect of short-term memory consists of a central executive and three subsystems – a phonological loop, a visuo-spatial sketchpad, and an episodic buffer. The central executive controls the three subsystems by directing attention to relevant information and coordinating other cognitive processes (e.g., the processes required to mentally solve math problems). The phonological loop is responsible for the temporary storage of verbal information, while the visuo-spatial sketchpad is responsible for the temporary storage of visual and spatial information. The episodic buffer integrates verbal, visual, and spatial information and links working memory to long-term memory.

42
Q

Long-Term Memory

A

Long-Term Memory: In addition to categorizing long-term memories as recent or remote, they can be classified as procedural or declarative. Procedural memories are also referred to as nondeclarative memories and are memories for learned skills and actions such as remembering how to ride a bicycle or play the saxophone. Declarative memories are divided into two types – semantic and episodic: Semantic memories are memories for facts, concepts, and other kinds of knowledge – e.g., remembering that the multi-store model distinguishes between sensory, short-term, and long-term memory. Episodic (autobiographical) memories are memories for personally experienced events – e.g., remembering where you went on your last vacation.

Long-term memories are also classified as retrospective or prospective. Retrospective memories are memories for events that occurred in the past, while prospective memories are memories for events that will occur in the future. Remembering that you have a doctor’s appointment in two weeks is a prospective memory.

Finally, long-term memories are sometimes categorized as implicit or explicit. Implicit memory is often used as a synonym for procedural memory and refers to memories that are recalled without conscious effort. However, some investigators note that implicit memory includes memories of both learned skills and actions (procedural memory) and conditioned responses. Explicit memory is often used as a synonym for declarative memory and includes semantic and episodic memories.

43
Q

Explanations for Forgetting

A

Explanations for Forgetting: Trace decay theory and interference theory have been used to explain why information is forgotten. According to trace decay theory, memories create physical changes in the brain that deteriorate over time when they’re not rehearsed or recalled. Trace decay theory has not been well-supported by the research.

In contrast, interference theory has received consistent research support and attributes forgetting to the disruption of memories by previously or more recently acquired information. It distinguishes between two types of interference – proactive and retroactive:

(a) Proactive interference occurs when previously learned information interferes with the ability to learn or recall new information. For example, proactive interference has occurred when memorization of a list of words in the past interferes with your ability to learn or recall a new list of words.
(b) Retroactive interference occurs when newly acquired information interferes with the ability to recall previously acquired information. Retroactive interference has occurred when you’re unable to recall the first list of words you memorized after you memorize a second list of words.

Note that proactive and retroactive interference are most likely to occur when previously and more recently acquired information are similar. For example, interference is more likely to occur when a student is taught one way to solve a math problem and is taught an alternative way to solve the same kind of problem two weeks later than when the student is taught different strategies for solving two entirely different kinds of math problems.

44
Q

Techniques for Improving Memory

A

Techniques for Improving Memory: Techniques for improving memory include elaborative rehearsal, mnemonics (memory aids), encoding specificity, and practice testing.

  1. Elaborative Rehearsal: Elaborative rehearsal involves making new information meaningful by, for example, relating it to something you already know or generating personally meaningful examples. Encoding information by making it meaningful is referred to as semantic encoding and is considered the best way to ensure that information is transferred to and can be retrieved from long-term memory (e.g., Craik & Lockhart, 1972).
  2. Verbal Mnemonics: Acronyms and acrostics are verbal mnemonics that are useful for memorizing a list of words. When creating an acronym, the first letter of each word to be memorized is used to create a word or pronounceable non-word. For example, OCEAN can be used to remember the “Big Five” personality traits – openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism. When using an acrostic, a phrase or sentence consisting of familiar words is created, with each word in the phrase or sentence beginning with the first letter of one of the words to be memorized. As an example, a student might use the phrase PLEASE EXCUSE MY DEAR AUNT SALLY to memorize the order of operations in a math problem – parentheses, exponents, multiplication, division, addition, and subtraction.
  3. Visual Imagery Mnemonics: The keyword method and method of loci are mnemonics that use visual imagery. The keyword method is useful for paired associate learning. When using this method to remember that “pato” is the Spanish word for “duck,” you might create an image of a duck with a pot on its head. The method of loci is useful for memorizing a list of words and involves linking an image of each word to a specific object in a familiar location. To remember what you need to buy at the grocery store, you might create a visual image that links each item to an object in your living room – an apple balanced on top of a lamp, a loaf of bread on the seat of a chair, etc. When you want to recall the items while at the store, you mentally walk around your living room and observe each item.
  4. Encoding Specificity: According to the encoding specificity hypothesis, retrieval from long-term memory is maximized when the conditions at the time of learning (encoding) new information are the same as the conditions at the time of recall. According to this hypothesis, this occurs because learning conditions serve as retrieval cues. Encoding specificity includes context-dependent learning, which refers to learning and retrieving information in the same environment, and state-dependent learning, which refers to learning and retrieving information while in the same psychological or physiological state.
  5. Practice Testing: Practice testing refers to practice recalling information from long-term memory during learning and includes practicing recall by using flashcards and taking practice tests. The positive impact of practice testing is referred to as the testing effect and has been confirmed by numerous studies. For example, Dunlosky et al.’s (2013) review of the research led them to conclude that, of 10 learning techniques, practice testing and distributed practice have the greatest utility. According to the mediator effectiveness hypothesis, practice testing improves memory by generating effective mediators (cues) that facilitate the future retrieval of information (Pyc & Rawson, 2010).
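
The practice-testing procedure described in item 5 can be sketched as a minimal flashcard drill. This is only an illustration of the drill procedure under assumed mechanics (missed cards are re-queued until answered correctly); the card contents and the re-queue rule are hypothetical, and the sketch does not model the testing effect itself:

```python
# Sketch of a flashcard practice-testing drill: each card is quizzed,
# and cards answered incorrectly are re-queued so retrieval is
# practiced again. Card contents are hypothetical examples.

from collections import deque

def drill(cards, answer):
    """Quiz each (question, correct) card; re-queue misses until every
    card is answered correctly. Returns the total retrieval attempts.
    Assumes the learner eventually answers each card correctly."""
    queue = deque(cards)
    attempts = 0
    while queue:
        question, correct = queue.popleft()
        attempts += 1
        if answer(question) != correct:
            queue.append((question, correct))  # practice the miss again
    return attempts

cards = [
    ("Capacity estimate for short-term memory span?", "7 plus or minus 2"),
    ("Who described latent learning?", "Tolman"),
]

# Simulated learner that always retrieves the correct answer.
perfect_learner = dict(cards).get
print(drill(cards, perfect_learner))  # 2 attempts, one per card
```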

45
Q

Tolman’s Latent Learning

A

Cognitive Learning Theories: Cognitive theories of learning focus on the mental processes that are responsible for the acquisition of information and skills.

  1. Tolman’s Latent Learning: Tolman derived his theory of latent learning from research with rats in mazes. In one study, Tolman and Honzik (1930) compared the performance of three groups of rats: Rats in Group 1 were reinforced with food when they reached the end of the maze every day, rats in Group 2 were never given reinforcement, and rats in Group 3 were not given food when they reached the end of the maze until the eleventh day. These investigators found that the performance of the Group 1 rats gradually improved in terms of speed and errors from the first day through the end of the study, the performance of the Group 2 rats remained the same throughout the study (they wandered randomly through the maze), and the performance of the Group 3 rats was similar to the performance of the Group 2 rats on days one through eleven but, beginning on day twelve, was similar to the performance of the Group 1 rats. Based on these results, Tolman concluded that the Group 3 rats had formed cognitive maps of the maze in the first 10 days without receiving reinforcement for doing so, which allowed them to be indistinguishable from the Group 1 rats once they received reinforcement. He attributed the formation of cognitive maps to latent learning, which occurs without reinforcement or being demonstrated in observable behaviors.

46
Q

Kohler’s Insight Learning

A

Kohler’s Insight Learning: Kohler’s theory of insight learning was influenced by Gestalt psychology and was based on his research with chimpanzees. In one study, Kohler (1925) placed Sultan the chimpanzee in a cage that contained a box and had a banana hanging from the ceiling. Sultan quickly noticed the banana (which was out of his reach) and paced back and forth several times. Suddenly, he moved the box beneath the banana, climbed on the box, and retrieved the banana. Kohler concluded that Sultan was thinking about the problem when he was pacing back and forth and suddenly had insight into the solution to the problem (i.e., had an “a-ha” experience).

47
Q

Bandura’s Social Cognitive Theory

A

Bandura’s Social Cognitive Theory: Bandura’s (1986) social cognitive theory was derived from his research on observational learning. In one series of studies, children observed an adult model act either aggressively or nonaggressively toward a clown (“bobo”) doll. Bandura found that children who observed the aggressive model were more likely than those who observed the nonaggressive model to subsequently act aggressively toward the doll and that this occurred whether the children observed a live model, a filmed model, or a filmed cartoon model. In addition, among children who observed an aggressive model, boys and girls were more likely to imitate a model of the same gender, boys acted more aggressively than girls toward the doll, and providing the children with a reward for acting aggressively reduced the gender difference in aggressiveness.

Based on the results of his research, Bandura concluded that observational learning depends on four mediational processes – attention, retention, production, and motivation: First, the learner must notice and pay attention to the model’s behavior. Second, the learner must store the information about the model’s behavior in memory. Third, the learner must be capable of imitating the model’s behavior. And, fourth, the learner must be motivated to perform the behavior. According to Bandura, reinforcement increases motivation, but it can take the form of self-reinforcement, external reinforcement (given to the learner by someone else), or vicarious reinforcement (given to the model).

Research evaluating the application of Bandura’s theory to the treatment of phobias has found that modeling is most effective when it uses guided participation, which is also known as participant modeling. It involves having the client observe a model gradually approach the feared stimulus in steps and having the client perform each step with assistance from the model. In addition, coping models who are initially apprehensive about approaching the feared stimulus but gradually overcome their fear are more effective than mastery models who initially approach the feared stimulus without hesitation.