Chapter 6 Flashcards

1
Q

Learning

A

A relatively permanent change in behavior or knowledge that results from experience. Learning involves acquiring knowledge and skills through experience. Like reflexes and instincts, learning allows an organism to adapt to its environment; but unlike instincts and reflexes, learned behaviors arise from change and experience rather than being innate.

2
Q

Instincts

A

Innate behaviors that are triggered by a broader range of events than reflexes, such as aging and the change of seasons. They are more complex patterns of behavior, involve movement of the organism as a whole (e.g., sexual activity and migration), and involve higher brain centers.

3
Q

Reflexes

A

A motor or neural reaction to a specific stimulus in the environment. Reflexes tend to be simpler than instincts, involve the activity of specific body parts and systems (e.g., the knee-jerk reflex and the contraction of the pupil in bright light), and involve more primitive centers of the central nervous system (e.g., the spinal cord and the medulla).

4
Q

Associative Learning

A

Associative learning occurs when an organism makes connections between stimuli or events that occur together in the environment. You will see that associative learning is central to all three basic learning processes discussed in this chapter; classical conditioning tends to involve unconscious processes, operant conditioning tends to involve conscious processes, and observational learning adds social and cognitive layers to all the basic associative processes, both conscious and unconscious.

5
Q

Classical Conditioning

A

In classical conditioning, also known as Pavlovian conditioning, organisms learn to associate events—or stimuli—that repeatedly happen together. We experience this process throughout our daily lives. For example, you might see a flash of lightning in the sky during a storm and then hear a loud boom of thunder. The sound of the thunder naturally makes you jump (loud noises have that effect by reflex). Because lightning reliably predicts the impending boom of thunder, you may associate the two and jump when you see lightning. Psychological researchers study this associative process by focusing on what can be seen and measured—behaviors. Researchers ask: if one stimulus triggers a reflex, can we train a different stimulus to trigger that same reflex?

6
Q

Stimulus Discrimination

A

When an organism learns to respond differently to various stimuli that are similar, it is called stimulus discrimination. In classical conditioning terms, the organism demonstrates the conditioned response only to the conditioned stimulus. Pavlov’s dogs discriminated between the basic tone that sounded before they were fed and other tones (e.g., the doorbell), because the other sounds did not predict the arrival of food. Similarly, Tiger, the cat, discriminated between the sound of the can opener and the sound of the electric mixer. When the electric mixer is going, Tiger is not about to be fed, so she does not come running to the kitchen looking for food.

7
Q

Stimulus Generalization

A

When an organism demonstrates the conditioned response to stimuli that are similar to the conditioned stimulus, it is called stimulus generalization, the opposite of stimulus discrimination. The more similar a stimulus is to the conditioned stimulus, the more likely the organism is to give the conditioned response. For instance, if the electric mixer sounds very similar to the electric can opener, Tiger may come running after hearing its sound. But if you do not feed her following the electric mixer sound, and you continue to feed her consistently after the electric can opener sound, she will quickly learn to discriminate between the two sounds (provided they are sufficiently dissimilar that she can tell them apart).

8
Q

Watson and Behaviorism

A

John B. Watson is considered the founder of behaviorism. Behaviorism is a school of thought that arose during the first part of the 20th century, which incorporates elements of Pavlov’s classical conditioning (Hunt, 2007). In stark contrast with Freud, who considered the reasons for behavior to be hidden in the unconscious, Watson championed the idea that all behavior can be studied as a simple stimulus-response reaction, without regard for internal processes. Watson argued that in order for psychology to become a legitimate science, it must shift its concern away from internal mental processes because mental processes cannot be seen or measured. Instead, he asserted that psychology must focus on outward observable behavior that can be measured.

According to Watson, human behavior, just like animal behavior, is primarily the result of conditioned responses. Whereas Pavlov’s work with dogs involved the conditioning of reflexes, Watson believed the same principles could be extended to the conditioning of human emotions (Watson, 1919). Thus began Watson’s work with his graduate student Rosalie Rayner and a baby called Little Albert. Through their experiments with Little Albert, Watson and Rayner (1920) demonstrated how fears can be conditioned.

9
Q

Taste Aversion

A

A type of conditioning in which an interval of several hours may pass between the conditioned stimulus (something ingested) and the unconditioned stimulus (nausea or illness). Here's how it works: between classes, you and a friend grab a quick lunch from a food cart on campus. You share a dish of chicken curry and head off to your next class. A few hours later, you feel nauseous and become ill. Although your friend is fine and you determine that you have intestinal flu (the food is not the culprit), you've developed a taste aversion: the next time you are at a restaurant and someone orders curry, you immediately feel ill. Even though the chicken dish is not what made you sick, you have been conditioned to be averse to a food after a single negative experience.

10
Q

Spontaneous Recovery

A

The return of a previously extinguished conditioned response following a rest period.
What happens when learning is not used for a while—when what was learned lies dormant? Pavlov found that when he repeatedly presented the bell (conditioned stimulus) without the meat powder (unconditioned stimulus), extinction occurred; the dogs stopped salivating to the bell. However, after a couple of hours of rest from this extinction training, the dogs again began to salivate when Pavlov rang the bell.

11
Q

Operant Conditioning

A

In operant conditioning, organisms learn to associate a behavior and its consequence. A pleasant consequence makes that behavior more likely to be repeated in the future. For example, Spirit, a dolphin at the National Aquarium in Baltimore, does a flip in the air when her trainer blows a whistle; the consequence is that she gets a fish.

The target behavior is followed by reinforcement or punishment to either strengthen or weaken it, so that the learner is more likely to exhibit the desired behavior in the future.

12
Q

Positive Reinforcement

A

A desirable stimulus is added to increase the likelihood of a behavior. For example, you tell your five-year-old son, Jerome, that if he cleans his room, he will get a toy. Jerome quickly cleans his room because he wants a new art set.

Positive reinforcement as a learning tool is extremely effective. It has been found that one of the most effective ways to increase achievement in school districts with below-average reading scores was to pay the children to read.

13
Q

Negative Reinforcement

A

An undesirable stimulus is removed to increase the likelihood of a behavior. For example, car manufacturers use the principles of negative reinforcement in their seatbelt systems, which go "beep, beep, beep" until you fasten your seatbelt. The annoying sound stops when you exhibit the desired behavior, increasing the likelihood that you will buckle up in the future.

14
Q

Positive Punishment

A

An undesirable stimulus is added to decrease the likelihood of a behavior. An example of positive punishment is scolding a student to get the student to stop texting in class. In this case, a stimulus (the reprimand) is added in order to decrease the behavior (texting in class).

15
Q

Negative Punishment

A

A desirable stimulus is removed to decrease the likelihood of a behavior. For example, when a child misbehaves, a parent can take away a favorite toy. In this case, a stimulus (the toy) is removed in order to decrease the behavior.

16
Q

Primary reinforcer

A

Reinforcers that have innate reinforcing qualities. These kinds of reinforcers are not learned. Water, food, sleep, shelter, sex, and touch, among others, are primary reinforcers. Pleasure is also a primary reinforcer. Organisms do not lose their drive for these things. For most people, jumping in a cool lake on a very hot day would be reinforcing, and the cool lake would be innately reinforcing—the water would cool the person off (a physical need) as well as provide pleasure.

17
Q

Partial Reinforcement

A

In partial reinforcement, also referred to as intermittent reinforcement, the person or animal does not get reinforced every time they perform the desired behavior. There are several different types of partial reinforcement schedules. These schedules are described as either fixed or variable, and as either interval or ratio.

18
Q

Fixed Interval

A

Fixed refers to the number of responses between reinforcements, or the amount of time between reinforcements, which is set and unchanging. Interval means the schedule is based on the time between reinforcements.
Description: Reinforcement is delivered at predictable time intervals (e.g., after 5, 10, 15, and 20 minutes).
Result: Moderate response rate with significant pauses after reinforcement.
Example: A hospital patient uses patient-controlled, doctor-timed pain relief.

In a fixed interval schedule, behavior is rewarded after a set amount of time. For example, June undergoes major surgery in a hospital. During recovery, she is expected to experience pain and will require prescription medications for pain relief. June is given an IV drip with a patient-controlled painkiller. Her doctor sets a limit: one dose per hour. June pushes a button when pain becomes difficult, and she receives a dose of medication. Since the reward (pain relief) only occurs on a fixed interval, there is no point in exhibiting the behavior when it will not be rewarded.

19
Q

Variable Ratio

A

Variable refers to the number of responses or amount of time between reinforcements, which varies or changes. Ratio means the schedule is based on the number of responses between reinforcements.
Description: Reinforcement is delivered after an unpredictable number of responses (e.g., after 1, 4, 5, and 9 responses).
Result: High and steady response rate.
Example: Gambling.

In a variable ratio schedule, the number of responses needed for a reward varies. This is the most powerful partial reinforcement schedule. An example of the variable ratio reinforcement schedule is gambling. Imagine that Sarah—generally a smart, thrifty woman—visits Las Vegas for the first time. She is not a gambler, but out of curiosity she puts a quarter into the slot machine, and then another, and another. Nothing happens. Two dollars in quarters later, her curiosity is fading, and she is just about to quit. But then, the machine lights up, bells go off, and Sarah gets 50 quarters back. That's more like it! Sarah gets back to inserting quarters with renewed interest, and a few minutes later she has used up all her gains and is $10 in the hole. Now might be a sensible time to quit. And yet, she keeps putting money into the slot machine because she never knows when the next reinforcement is coming. She keeps thinking that with the next quarter she could win $50, or $100, or even more. Because most types of gambling operate on a variable ratio schedule, people keep trying and hoping that the next time they will win big. This is one of the reasons that gambling is so addictive—and so resistant to extinction.

20
Q

Cognitive map

A

A mental picture of the layout of one's environment, such as a maze.
Tolman placed hungry rats in a maze with no reward for finding their way through it. He also studied a comparison group that was rewarded with food at the end of the maze. As the unreinforced rats explored the maze, they developed a cognitive map of its layout.

21
Q

Observational Learning

A

In observational learning, we learn by watching others and then imitating, or modeling, what they do or say.

22
Q

Models

A

The individuals performing the imitated behavior are called models. Research suggests that this imitative learning involves a specific type of neuron, called a mirror neuron (Hickock, 2010).