Chapter 4: Types of Learning Flashcards

1
Q

Types of learning

A
  • Non-associative
  • Associative
  • Observational (social)
2
Q

Non-associative learning

A

Learning about a single stimulus
- Habituation and Sensitization

3
Q

Associative learning

A

Learning a relationship between two stimuli, or between a stimulus and a response
- Classical and Operant/Instrumental Conditioning

4
Q

Observational (social) learning

A

Learning by watching how others behave or by instruction
- Modeling and Vicarious Learning

5
Q

Non-associative learning: Habituation

A
  • A decline in the intensity of an organism’s response to a stimulus once that stimulus becomes familiar
  • In contrast to DISHABITUATION: renewed sensitivity to any change in the parameters of a stimulus to which one has previously habituated
6
Q

Non-associative learning: Sensitization

A
  • If an organism is repeatedly exposed to a biologically relevant stimulus, it can become sensitized to it; that is, it becomes more responsive to the stimulus than would otherwise be expected.
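As a toy illustration of the two non-associative processes above, not taken from the cards: habituation can be pictured as a response that shrinks with each repeated presentation of a familiar stimulus, while sensitization is a response that grows with repeated presentation of a biologically relevant one. The function names, decay/growth factors, and ceiling below are arbitrary assumptions chosen only to show the contrasting trends.

```python
# Toy illustration (not from the flashcards): response magnitude across
# repeated stimulus presentations. Decay/growth factors are arbitrary.

def habituation(initial_response, presentations, decay=0.7):
    """Response declines as the stimulus becomes familiar."""
    return [round(initial_response * decay**i, 2) for i in range(presentations)]

def sensitization(initial_response, presentations, growth=1.3, ceiling=10.0):
    """Response grows (up to a ceiling) for a biologically relevant stimulus."""
    return [round(min(initial_response * growth**i, ceiling), 2)
            for i in range(presentations)]

print(habituation(5.0, 6))     # response shrinks toward zero
print(sensitization(5.0, 6))   # response grows toward the ceiling
```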
7
Q

Diagram showing the difference between classical and operant conditioning

A
8
Q

Overview of classical conditioning

A
  • Ivan Pavlov (1849–1936)
  • Received the Nobel Prize for his work on the digestive system of dogs
  • We know him best for his work on what he called “Conditioned Reflexes”
  • A previously neutral stimulus comes to elicit a response through its association with a stimulus that always elicits that response.
9
Q

Components of classical conditioning

A
10
Q

Appetitive conditioning

A

An appetitive conditioning procedure is one in which the US is desirable or appetitive.

11
Q

Aversive conditioning

A

An aversive conditioning procedure is one in which the US is undesirable or aversive.
- Food aversions

12
Q

Basic principles of classical conditioning

A
  • Acquisition
  • Extinction
  • Spontaneous recovery
  • Generalization
  • Discrimination
  • Second‐order conditioning
13
Q

How is behavior changed from the US to the UR?

A
14
Q

Graph showing acquisition period of classical conditioning

A
15
Q

Extinction

A

- ‘Undoing’ classical conditioning
- Extinction: repeated presentations of the CS in the absence of the US
- Results in decreased CR activity

16
Q

Graph showing second portion of classical conditioning graph (CS alone)

A
17
Q

Spontaneous recovery

A
  • After extinction, the subject is removed from the context where the CS was presented and is later put back in that context
  • A spontaneous increase in CR activity will occur, even in the absence of the US
18
Q

Last part of classical conditioning graph: spontaneous recovery

A
19
Q

Watson: the father of behaviorism

A
20
Q

Stimulus discrimination

A

Fine discrimination among stimulus conditions: learning what will and what won’t elicit the CR

21
Q

What factors are important in classical conditioning?

A

Contiguity:
- The optimal time interval between presentation of the CS and occurrence of the US is about ½ second to a few seconds.

Contingency:
- The CS should reliably lead to the US.
- The US should only rarely appear without the CS appearing first.

Why?
- The organism needs to learn that the CS is a signal for the US (see the sketch after this card).
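One way to make the contingency requirement concrete, as a minimal sketch that is not part of the original cards: treat contingency as the difference between the probability of the US given the CS and the probability of the US without the CS, estimated from trial counts. The function name and the counts in the example are illustrative assumptions.

```python
# Illustrative sketch (not from the flashcards): contingency as
# P(US | CS) - P(US | no CS), estimated from hypothetical trial counts.

def contingency(us_with_cs, cs_trials, us_without_cs, no_cs_trials):
    """Return P(US | CS) - P(US | no CS); a clearly positive value means
    the CS is a reliable signal that the US is coming."""
    p_us_given_cs = us_with_cs / cs_trials
    p_us_given_no_cs = us_without_cs / no_cs_trials
    return p_us_given_cs - p_us_given_no_cs

# Hypothetical counts: the US follows the CS on 45 of 50 CS trials,
# but appears on only 2 of 50 trials in which no CS was presented.
print(contingency(45, 50, 2, 50))   # 0.86 -> strong positive contingency

# If the US is just as likely without the CS, contingency is near zero
# and little conditioning is expected.
print(contingency(25, 50, 25, 50))  # 0.0
```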

22
Q

Higher/second order conditioning

A
  • Sometimes, a conditioned stimulus does not become directly associated with an unconditioned stimulus.
  • Instead, the conditioned stimulus becomes associated with other conditioned stimuli that are already associated with the unconditioned stimulus.
  • Once an association between a CS and US is well learned so that it consistently produces a CR, the CS itself can take on value. For example, we value money because of its associations, not because of its physical characteristics.
23
Q

Compensatory responses

A
24
Q

The Law of Effect

A
25
Q

Other principles that maximize learning (law of recency and law of exercise)

A

Law of Recency
- The most recent response in a situation is the one most likely to recur.

Law of Exercise
- Stimulus-response associations are strengthened through repetition.

26
Q

Operant/instrumental conditioning

A
27
Q

Reinforcement and punishment: impacts

A
28
Q

Reinforcement and punishment: examples with a dog

A
29
Q

Shaping

A
  • Method used to reward successive approximations to the goal behavior.
  • Uses a previously neutral cue as a reinforcer.

Successive approximations: we “shape” the animal to do a difficult or complex task
- Begin with simple tasks
- Begin with the first step of many

30
Q

Types of reinforcers

A

Primary reinforcer
- Food, water, sleep, sex

Conditioned (secondary) reinforcer:
- Predicts or helps create conditions that increase the availability of primary reinforcers
- Defined after we see what patterns of behavior accompany its presence
- Examples: money, praise

Not all events or responses are reinforcing under all conditions; there is a lot of variability.

31
Q

Superstitious behavior

A

Rare or odd behaviors are repeated if they are accidentally reinforced
- Mistaken beliefs about causal relationships

Skinner demonstrated such behavior
- He set a dispenser to deliver food to animals in an operant chamber at fixed time intervals. The pigeons associated whatever behavior they were engaged in at the time the food was dispensed with the delivery of the food, and the likelihood of those behaviors occurring increased.

32
Q

Discriminative stimuli

A

When will a behavior result in the desired outcome? When will it fail?
- Control over the outcome is a critical factor

  • Animals are capable of more sophisticated discriminatory behavior than is generally appreciated
33
Q

Types of reinforcement schedules

A

Continuous vs. Partial Reinforcement
- Which will be easier to extinguish?

Ratio: based on the NUMBER of behaviors that have occurred
- Each “nth” behavior is reinforced: each 10th, each 4th, or each 20th, etc. (regardless of how much time has elapsed between behaviors)

Interval: based on the AMOUNT OF TIME that has elapsed since the last reinforced behavior
- Every ten minutes, reinforcement occurs for the next behavior (regardless of how many behaviors were produced during those ten minutes)

(See the sketch after the interval-schedule cards below.)

34
Q

Ratio schedules

A

  • Fixed-ratio schedules (FR)
  • Variable-ratio schedules (VR)

35
Q

Fixed-ratio schedule (FR)

A

A schedule in which the participant is required to perform X number of responses to obtain a reinforcer (where X does not change from one trial to the next)

36
Q

Variable-ratio schedules (VR)

A

A schedule in which the participant is required to perform some number of responses to obtain a reinforcer, but that number changes from one trial to the next (the schedule can be described by the mean number of responses required per reinforcer)

37
Q

Interval schedules

A
  • Fixed-interval schedules (FI)
  • Variable-interval schedules (VI)
38
Q

Fixed-interval schedules (FI)

A

A schedule in which a fixed amount of time since the last reinforcer delivery must elapse before the next response will deliver a reinforcer.

39
Q

Variable-interval schedules (VI)

A

A schedule in which the time between reinforcers is different on each trial, so no fixed amount of time must pass before the next response will deliver a reinforcer.
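As a minimal sketch of how the four schedules differ, not part of the original cards, the functions below answer the question “should this response be reinforced?” for each schedule. The function and variable names, and the example parameter ranges, are illustrative assumptions.

```python
# Illustrative sketch (not from the flashcards): deciding whether a given
# response earns a reinforcer under each of the four basic schedules.
import random

def fixed_ratio(responses_since_reinforcer, n):
    """FR-n: every nth response is reinforced, regardless of elapsed time."""
    return responses_since_reinforcer >= n

def variable_ratio(responses_since_reinforcer, required):
    """VR: the required number of responses is drawn anew for each trial;
    only its mean (e.g. VR-10) describes the schedule."""
    return responses_since_reinforcer >= required

def fixed_interval(seconds_since_reinforcer, interval):
    """FI: the first response after `interval` seconds is reinforced,
    no matter how many responses occurred in the meantime."""
    return seconds_since_reinforcer >= interval

def variable_interval(seconds_since_reinforcer, required_wait):
    """VI: like FI, but the required wait is drawn anew for each trial;
    only its mean (e.g. VI-60 s) is fixed."""
    return seconds_since_reinforcer >= required_wait

# Drawing the changing requirements for the variable schedules (assumed
# distributions; the means are 10 responses and 60 seconds respectively):
vr_requirement = random.randint(1, 19)
vi_requirement = random.uniform(0, 120)
```

Continuous reinforcement corresponds to FR-1; the partial schedules are all of the others.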

40
Q

Graph showing cumulative response with different patterns and rates

A
41
Q

There is more to learning than associations

A
  • Does a change in behavior necessarily mean learning has occurred?
  • Is there an underlying change in insight? In comprehension?

Edward Tolman (1886–1959) demonstrated “latent learning” using operant conditioning
- Rats explored a maze with no reward
- Later, under conditions of reward, they could demonstrate the formation of a “cognitive map”
- This indicated that learning had taken place, not “mere” conditioning

42
Q

Cognitive elements of operant conditioning: latent learning

A
43
Q

Cognitive elements of operant conditioning: cognitive maps

A
44
Q

Evolutionary elements of operant conditioning

A
  • Foraging animals explore their environment (even places that are not reinforcing).
  • Each species (including humans) is biologically predisposed to learn some things more readily than others in ways consistent with evolutionary history.
45
Q

Observational (social) learning

A
  • Learning that takes place by watching the actions of others

Albert Bandura studied the observational learning of aggression using the Bobo doll experiment
- Children imitated adult behaviors and were sensitive to the consequences of their aggressive behavior

46
Q

Observational learning: diffusion chain

A
  • Individuals initially learn a behavior by observing another individual perform that behavior, and then serve as models from which other individuals learn the behavior
47
Q

Observational learning in animals

A
  • Pigeons learn through observation how to get reinforced for pecking behavior.
  • Rhesus monkeys learned to fear snakes through an observational diffusion chain (a biological predisposition to fear snakes).
  • Chimpanzees learned through observation to use a novel tool, although children (using the same paradigm) showed greater learning of the function of the tool.
  • Enculturation hypothesis
48
Q

Observational learning: role of mirror neurons

A
  • Mirror neurons fire during observational learning in humans as well as in other animal species.
  • If the appropriate neurons fire when another organism is seen performing an action, this could indicate an awareness of intentionality, or that the animal is anticipating a likely course of future actions.
  • fMRI studies show that the same brain areas are activated when one engages in a task and when one observes another engaging in that task.
49
Q

Diagram of Pavlov’s apparatus

A
50
Q

Diagram explaining Pavlov’s classical conditioning

A
51
Q

Diagram and graph showing acquisition, extinction, and spontaneous recovery

A
52
Q

Classical conditioning: what is acquisition?

A

The formation of an association between a conditioned stimulus (here, a metronome) and an unconditioned stimulus

53
Q

Classical conditioning: what is extinction?

A
  • Normally, after standard Pavlovian conditioning, the metronome (CS) leads to salivation (CR) because the animal learns to associate the metronome with the food (US).
  • If the metronome is presented several times and food does not arrive, the animal eventually learns that the metronome is no longer a good predictor of food.
  • Because of this new learning, the animal’s salivary response gradually disappears.
  • This process is known as extinction.
  • The conditioned response is extinguished when the conditioned stimulus no longer predicts the unconditioned stimulus.
54
Q

Classical conditioning: what is spontaneous recovery?

A
  • Suppose that, some time after extinction, the metronome is set in motion. Starting the metronome again will produce the conditioned response of salivation.
  • Through such spontaneous recovery, the extinguished CS again produces a CR.
  • This recovery is temporary, however.
  • It will fade quickly unless the CS is again paired with the US.
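The acquisition, extinction, and spontaneous recovery cards above describe how the strength of the CR rises and falls over trials. The purely illustrative simulation below sketches that curve; it is not from the cards, and it assumes a simple “move a fraction of the way toward the target on each trial” update rule plus an arbitrary partial restoration of strength after a rest period to stand in for spontaneous recovery. All names and parameter values are assumptions.

```python
# Illustrative sketch (not from the flashcards): a toy model of how CR
# strength changes across acquisition, extinction, and spontaneous recovery.
# The learning rule and parameter values are assumptions for illustration.

def run_phase(strength, target, trials, rate=0.3):
    """On each trial, move associative strength a fraction of the way
    toward `target` (1.0 when the CS is paired with the US, 0.0 when the
    CS is presented alone)."""
    history = []
    for _ in range(trials):
        strength += rate * (target - strength)
        history.append(round(strength, 3))
    return strength, history

strength = 0.0

# Acquisition: CS + US pairings -> CR strength climbs toward a maximum.
strength, acq = run_phase(strength, target=1.0, trials=10)

# Extinction: CS alone -> CR strength declines.
strength, ext = run_phase(strength, target=0.0, trials=10)

# Spontaneous recovery: after a rest, some responding returns temporarily
# (modeled here, as an assumption, by restoring part of the lost strength),
# then fades again unless the CS is re-paired with the US.
strength = 0.4 * max(acq)
strength, rec = run_phase(strength, target=0.0, trials=5)

print("acquisition:", acq)
print("extinction:", ext)
print("spontaneous recovery:", rec)
```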
55
Q

Stimulus generalization

A
  • Stimulus generalization occurs when stimuli similar but not identical to the CS produce the CR. Generalization is adaptive because in nature the CS is seldom experienced repeatedly in an identical way.
  • Slight differences in variables, such as background noise, temperature, and lighting, lead to slightly different perceptions of the CS. As a result of these different perceptions, animals learn to respond to variations in the CS.
56
Q

Stimulus discrimination

A
  • This occurs when an animal learns to differentiate between two similar stimuli if one is consistently associated with the US and the other is not.
  • Pavlov and his students demonstrated that dogs can learn to make very fine distinctions between similar stimuli. For example, dogs can learn to detect subtle differences in tones of different frequencies.
57
Q

Graph showing stimulus generalization and discrimination

A