Unit 6: Learning (Chapter 6) Flashcards

1
Q

Learning

A

An enduring change in behaviour resulting from prior experience.

2
Q

Associative learning

A

A form of learning that involves making connections between stimuli and behavioural responses.

3
Q

Nonassociative learning

A

A form of learning that involves a change in the magnitude of an elicited response with repetition of the eliciting stimulus.

4
Q

Habituation

A

A form of nonassociative learning by which an organism becomes less responsive to a repeated stimulus.

Brain’s version of the “cry wolf” effect.

5
Q

Sensitization

A

A form of nonassociative learning by which an organism becomes more sensitive, or responsive, to a repeated stimulus.

6
Q

Dishabituation

A

The recovery of a response that has undergone habituation, typically as a result of presentation of a novel stimulus.

7
Q

Classical conditioning

A

A passive form of learning by which an association is made between a reflex-eliciting stimulus (ex: a shock) and other stimuli (ex: a sound).

8
Q

Unconditioned stimulus (US)

A

A stimulus that produces a reflexive response without prior learning.

Ex: Food.

9
Q

Unconditioned response (UR)

A

The response that is automatically generated by the unconditioned stimulus (US).

Ex: Salivation.

10
Q

Conditioned stimulus (CS)

A

A stimulus that has no prior positive or negative association but comes to elicit a response after being associated with the unconditioned stimulus.

Ex: Sight of Pavlov/food.

11
Q

Conditioned response (CR)

A

A response that occurs in the presence of the conditioned stimulus after an association between the conditioned and the unconditioned stimulus is learned.

Ex: Salivation (same as UR in this case).

12
Q

Acquisition

A

The initial learning of an association between the unconditioned stimulus and the conditioned stimulus during classical conditioning.

13
Q

Generalization

A

The tendency to respond to stimuli that are similar to the conditioned stimulus, so that learning is not tied too narrowly to specific stimuli. Can be adaptive.

Ex: Pavlov’s dogs salivating at a sound that is similar to the original CS, but louder, longer, or lower in pitch.

14
Q

Discrimination

A

Learning to respond to a particular stimulus but not to others, thus preventing overgeneralizations.

15
Q

Extinction

A

An active learning process whereby the conditioned response is weakened when the conditioned stimulus is frequently presented in the absence of the unconditioned stimulus.

Ex: Pavlov continuing to ring the bell (CS) but no longer bringing the food (US) would result in the dogs no longer salivating at its sound.

16
Q

Spontaneous recovery

A

The reappearance of an extinguished behaviour after a delay.

Ex: A smoker whose habit was tied to a daily cup of coffee may cave in and smoke again when drinking coffee after having quit.

17
Q

Contiguity

A

Closeness in time.

18
Q

Contingency

A

Predictiveness.

19
Q

Contingent reinforcement

A

Reinforcement is only given when a specific behaviour occurs.

20
Q

Noncontingent reinforcement

A

Reinforcement is delivered on a fixed-interval schedule, independent of the actions the organism is engaging in.

21
Q

Blocking

A

A classical conditioning phenomenon whereby a previously learned association with one conditioned stimulus prevents learning of an association with a second stimulus, because the second stimulus adds no further predictive value.
- Adaptive because it helps us learn the true causal associations between events and filter out irrelevant stimuli. Associations are only made to events that are informative!

Ex: Rats first conditioned to associate a tone with shock do not later learn to fear a light presented together with the tone before the shock.
Schizophrenia is associated with a lack of blocking ability (sensory overload).

22
Q

Preparedness

A

The species-specific biological predisposition to learn some associations more quickly than other associations.

23
Q

Conditioned taste aversion

A

A classically conditioned response where individuals are more likely to associate nausea with food than with other environmental stimuli.

Ex: Rats avoiding food they had consumed before having been made sick by researchers.

24
Q

Operant conditioning

A

A mechanism by which our behaviour acts as an instrument or a tool to change the environment and, as a result, voluntary behaviours are modified.

25
Q

Law of effect

A

Edward Thorndike: The idea that behaviour is a function of its consequences; actions that are followed by positive outcomes are strengthened, while behaviours followed by negative outcomes are weakened.

26
Q

ABCs of operant conditioning

A

Antecedent
Behaviour
Consequence

27
Q

Reinforcement

A

A consequence that increases the likelihood of a behaviour being repeated.

28
Q

Punishment

A

A consequence that decreases the likelihood of a behaviour being repeated.

29
Q

Primary reinforcers

A

A consequence that is innately pleasurable and/or satisfies some sort of biological need. These do not have to be learned.

Think of primates (instincts) => primary.
Ex: Food, drink, warmth, sex.

30
Q

Secondary reinforcers (or conditioned reinforcers)

A

A learned pleasure that acquires value through experience because of its association with primary reinforcers (e.g., it can be exchanged for primary reinforcers).

Ex: Money can be used to purchase food, drinks, etc.; “Good dog!”

31
Q

Positive reinforcement

A

The presentation of a stimulus (positive = adding), leading to an increase in the frequency of the behaviour (reinforcing it).

Ex: Giving a child a sticker for putting in hard work. Usually something desirable.

32
Q

Negative reinforcement

A

The removal of a stimulus (negative = removing), leading to an increase in the frequency of the behaviour (reinforcing it).

Ex: Drinking water to not feel thirsty, taking pain medication to relieve (remove) a headache, cleaning your room so your mom will stop scolding you.

33
Q

Positive punishment

A

The presentation of a stimulus (positive = adding), leading to a decrease in the frequency of a behaviour (since the person behaving in such a way has been punished).

Ex: Being scolded by your teacher for having been on your cellphone; burning your hand and not touching the stove while it’s hot again.

34
Q

Negative punishment

A

The removal of a stimulus (negative = removing), leading to a decrease in the frequency of a behaviour (since the person behaving in such a way has been punished).

Ex: Teacher taking away a cellphone. Getting a fine (removal of money).

35
Q

Premack principle / relativity theory of reinforcement

A

The idea that activities individuals frequently engage in can be used to reinforce activities that they are less inclined to do.

36
Q

Shaping

A

The process by which random behaviours are gradually changed into a desired target behaviour. This is done by the reinforcement of successive approximations.

Ex: Skinner’s box, trying to get the rats to press the lever by gradually rewarding them for certain actions that eventually led them to do so.

37
Q

Instinctive drift

A

An animal’s reversion to evolutionarily derived instinctive behaviours instead of demonstrating newly learned responses.

38
Q

Immediate reinforcement

A

Reinforcing a behaviour immediately after it occurs helps establish a strong association between response & consequence.

39
Q

Delayed reinforcement

A

If there is a delay between response & reinforcement, the association will be weaker.

40
Q

Delay discounting

A

Tendency to devalue delayed outcomes. Explains why we might be more impulsive.

Ex: Going out the night before a midterm without fully considering how it might affect our GPA.
Ex 2: Feeling more strongly about a test if you get it back the day after vs. caring less if you get it back a month afterwards.

41
Q

Continuous reinforcement schedule

A

A reinforcement schedule in which a behaviour is rewarded every time it is performed.

42
Q

Partial reinforcement schedules

A

A reinforcement schedule in which a behaviour is rewarded only some of the time.

43
Q

Fixed-ratio schedule

A

A reinforcement schedule in which a specific number of behaviours are required before a reward is given. Its graph makes a scallop-shaped pattern, though less distinctly than the fixed-interval schedule.

Ex: A worker receives $1 for every 10 pieces they make.

44
Q

Variable-ratio schedule

A

A reinforcement schedule in which an average number of behaviours are required before a reward is given.

Ex: Rats keep pressing the lever; how people stay stuck at slot machines (lack of predictability leads to high-frequency behaviour).

45
Q

Fixed-interval schedule

A

A reinforcement schedule based on a fixed amount of time before a reward is given. Its graph makes a distinct scallop-shaped pattern.

Ex: Studying right before an exam, and not studying again until right before the next one.

46
Q

Reinforcement schedules from best to worst (most to least increased responding)

A
  1. Fixed-ratio
  2. Variable-ratio
  3. Variable-interval
  4. Fixed-interval

BUT variable schedules are more resistant to extinction!

47
Q

Variable-interval schedule

A

A reinforcement schedule based on an amount of time between rewards that varies around a constant average.

48
Q

Superstitious conditioning

A

A form of operant conditioning in which a behaviour is learned because it was coincidentally reinforced, but has no actual relationship with reinforcement.

Ex: Students wearing a “lucky” shirt for an exam because they wore it the last time they aced one. A pigeon turning before food appears because that is what it happened to be doing the first time it received food.

49
Q

Latent learning

A

Learning that occurs without any incentive or any clear motivation to learn.

50
Q

Insight learning

A

A form of learning that occurs without trial and error, and thus without clear reinforcement.

51
Q

Observational learning

A

A form of learning in which a person observes and imitates behaviour from a model.

52
Q

Imitation

A

The purposeful copying of a goal-directed behaviour.

53
Q

Social learning theory

A

A theory of how people’s cognitions, behaviours, and dispositions are shaped by observing and imitating the actions of others.

Ex: Bobo doll study.

54
Q

Mirror neurons

A

Neurons that are active both when performing an action and when the same action is observed in others.

55
Q

Cultural transmission

A

The transfer of information from one generation to another that is maintained not by genetics, but by teaching and learning.

56
Q

Vertical transmission

A

Transmission of knowledge and skills from parent to offspring.

57
Q

Horizontal transmission

A

The transmission of skills between peers.

58
Q

Diffusion chain

A

A process in which individuals learn a behaviour by observing a model and then serve as models from which other individuals can learn.

59
Q

Learning

A

Process through which experience (any effect of the environment that we can sense) can affect behaviour at a future time.
▪ Allows us to adjust to our environments

60
Q

What are the two basic types of learning?

A

Nonassociative learning and associative learning.

61
Q

Nonassociative learning

A

A type of learning where the strength of a response to a stimulus changes with repeated exposure to the same stimulus. Includes two types:
▪ Habituation
▪ Sensitization

62
Q

Habituation

A

Reduction in response to a repeated stimulus that is unchanging and harmless.
NOT the same as sensory adaptation: sensory adaptation occurs at the level of the sensory receptor, whereas habituation is a higher-level brain process (meaning it lasts longer and can be recovered).

Ex: New parents adjust how often they go check on their baby when it starts to cry within the first few months.

63
Q

Dishabituation

A

Reappearance of a response that had diminished due to habituation, usually triggered by the introduction of a new stimulus.

64
Q

Sensitization

A

Form of nonassociative learning by which a stimulus leads to an increased response over time.
▪ May have evolved to help us notice and focus on potentially harmful stimuli in our surroundings.

Ex: Watching horror movie and then jumping at every small noise afterwards.

65
Q

Dual-process theory of nonassociative learning

A

Habituation and sensitization are both always at work. Which one “wins out” depends on factors like our state of arousal.
▪ When aroused, sensitization is more potent than habituation (Ex: during an exam, one may feel as though the clock is very loud).
▪ When relaxed, habituation is more potent than sensitization (Ex: bored in a lecture, one may tune out the clock).

66
Q

Associative learning

A

Form of learning that involves making connections among stimuli and behaviours (i.e., if A happens, B is likely to follow).
Two types:
▪ Classical conditioning;
▪ Operant conditioning.

67
Q

Classical conditioning

A

Form associations between pairs of stimuli.

Ex: Ivan Pavlov, who studied salivatory reflex in dogs. He controlled the signals that would precede food, and observed their effects on the dogs’ salivary reflex.

68
Q

Operant conditioning

A

Form associations between behaviours and their consequences.

69
Q

Reflex

A

Simple, automatic response to a stimulus.

Ex: Salivation in response to food.

70
Q

Unconditioned stimulus (US)

A

Stimulus that produces a reflexive response without prior learning.

71
Q

Unconditioned response (UR)

A

Response automatically generated by unconditioned stimulus.

72
Q

Conditioned stimulus (CS)

A

Stimulus that comes to elicit a reflexive response only because of its previous pairing with the unconditioned stimulus.

73
Q

Conditioned response

A

Reflexive response elicited by the conditioned stimulus due to the prior pairing of the conditioned stimulus with the unconditioned stimulus.

Ex: Cringing in pain at the sight or sound of the dentist’s office.

74
Q

Principles of learning

A
  • Acquisition
  • Extinction
  • Spontaneous recovery
  • Generalization
  • Discrimination
75
Q

Acquisition

A

Initial learning of an association between the unconditioned and conditioned stimuli during classical conditioning.

Ex: Pavlov operationalized learning as the amount of saliva the dogs produced. The fully learned association showed as a plateau on a graph.

76
Q

Extinction

A

Reduction of a learned response that occurs when the US no longer follows the CS. This is useful for getting rid of an undesired response (“new learning overrides the old learning”) but does not mean that the learned response has been entirely forgotten.
- There is additional evidence that extinction training does not return the animal to its original state.

On a graph, this shows up as the amount of saliva declining to a plateau at 0.

77
Q

Spontaneous recovery

A

Reappearance of CR after periods of rest during extinction training.
-Relearning an association after extinction happens more rapidly than the original conditioning session.

78
Q

Generalization

A

Tendency to respond to stimuli similar to the original CS.
▪ More likely to occur when the similarity between the two stimuli is greater (e.g., tones of similar frequency; after being attacked by a lion, all cats may scare you).
Adaptive value:
▪ Efficient: can respond to new but similar situations without needing to learn each one individually.
▪ Survival advantage: can respond to potential threats or opportunities that resemble past experiences. However, may contribute to emergence of conditions like PTSD.

79
Q

Discrimination

A

Counteracts generalization. Learned ability to distinguish between stimuli (i.e., learning to respond to a particular stimulus but not to similar stimuli).

Ex: Dogs taught to salivate to the sight of a black square would also salivate to the sight of a gray square (generalization). After a series of trials in which presentations of the black square are always followed by food and presentations of the gray square are never followed by food, they will stop salivating to the gray square.

80
Q

Contingency and contiguity

A

For classical conditioning to occur, CS has to consistently precede the US (contingency), and the CS and US have to be presented together close in time (contiguity).
- Conditioning occurs mainly when new stimulus helps predict arrival of unconditioned stimulus → US = relevant for survival like food or danger.
- Timing determines extent to which new stimulus will be valuable for making predictions about arrival of unconditioned stimulus.

Ex: a sound before the shock vs. at the same time vs. only part of the time (hard to relate them to one another).

81
Q

Behaviourism

A

Behaviour should be understood in relation to observable events in the environment rather than in terms of unobservable mental processes. Leading figure: John Watson.

Ex: Demonstrated how an 11-month-old infant (“Little Albert”) could be conditioned to fear a white rat.

82
Q

Beneficence principle

A

Researchers must maximize benefits and minimize harm to participants. The Little Albert study violated this principle: harm in the form of psychological distress, with no steps taken to mitigate the harm.

83
Q

Counterconditioning

A

Technique used to replace undesirable response to a stimulus with more desirable one.

Ex: Exposing Albert to the rat or white furry objects while giving him a toy or a treat.

84
Q

Neural substrates

A

Fear conditioning involves the amygdala (also involved in conditioned reward).
Recall that amygdala is responsible for processing emotional significance of stimuli
* Well-positioned for creating CS-US links: Connections to memory-related brain structures as well as structures involved in mediating reflexes and autonomic responses (interactions with brainstem).

85
Q

Adaptive value of classical conditioning

A

Classical conditioning allows organisms to learn to prepare for biologically significant events → negative, painful, threatening.

Ex: Conditioned stimulus (CS) preceding a painful or startling event can trigger fear or bodily reactions that help avoid or brace for the event.
Ex: Appetitive stimuli trigger salivation to prepare for the arrival of food.

86
Q

Drug tolerance

A

Decline in the physiological and behavioural effects of a drug taken repeatedly.
- In part due to conditioning.
- Cues associated with drug delivery produce a conditioned response opposite to the drug’s effect → Ex: sight of the needle, the place where you take it…
- Explains why overdoses are more common in unfamiliar environments → no cues to have your body compensate.

87
Q

Preparedness

A

Organisms are biologically predisposed to learn some associations more quickly than others (dependent on the species).

Ex: Conditioned taste aversion

88
Q

Conditioned taste aversion

A

Tendency to avoid food after experiencing illness following its consumption.
- Humans (and rats) are more ready to form associations between taste and illness than visual/auditory stimuli and illness.

Ex: Rats and water experiment, where rats formed an aversion to the “tasty water”.

89
Q

Preparedness

A

The predisposition to learn certain associations is species-specific, shaped by the natural capacities of the species.

Ex: Birds usually identify food sources using visual cues and therefore develop visual stimulus-illness associations more readily than rats.

90
Q

Operant conditioning

A

Type of associative learning process wherein the consequence of a behavioural response affects the likelihood of that response being repeated.
- An association is made between a voluntary response (e.g., pulling a lever) and a consequence (e.g., food).

91
Q

Edward Thorndike’s Puzzle Box Experiments

A
  • Used “puzzle boxes” (i.e., a kind of kitty escape room) to study how cats make associations between their voluntary behaviours and outcomes.
  • Cat’s goal: escape box to obtain food
  • In the beginning: various actions performed in attempt to open box.
  • With successive trials: begin to perform action that leads to escape more and more frequently.
92
Q

Law of effect

A

Behaviours followed by satisfying outcomes are more likely to be repeated, whereas behaviours followed by unsatisfying outcomes are less likely to be repeated.

93
Q

Skinner’s box

A

B. F. Skinner designed apparatus for studying learning: cage with lever that animal can press to produce effect (e.g., obtain food pellet)
- Allows animal to respond at any time and as many times as needed.
- Learning process can be operationalized as changes in rate of responses (lever presses).

94
Q

Antecedents

A

Situation or stimulus that precedes the behaviour and sets the stage for the behaviour to happen.

Ex: Light signals availability of food.

95
Q

Behaviour

A

The voluntary action that takes place (the operant response).

Ex: Pressing a lever.

96
Q

Consequences

A

The stimuli presented after the behaviour that either increase or decrease the likelihood that the behavior will be repeated.

97
Q

Example: Teaching a child to share

A

Reinforcement
Goal: Increase sharing
Positive:
- Money, candy
- Praise
Negative:
- Not have to do homework
- Take away chore

Punishment
Goal: Decrease toy-hoarding
Positive:
- Hit the kid (… not great)
- Yelling
- Adding chores
Negative:
- Take away screen time
- Remove door to room

98
Q

Shaping of operant responses

A

Operant conditioning procedure in which successively closer approximations to the desired response are reinforced until the desired response finally occurs (reinforcement of successive approximations).
- Can be used to teach animals complex behaviours.

Ex: Clicker for dogs.
Ex: Reward the rat first for walking towards the lever, then for touching it, then for pressing it…

99
Q

Instinctive drift

A

Animal’s reversion to evolutionarily derived instinctual behaviours instead of demonstrating behaviour learned through conditioning.

Ex: Raccoons “washing” coins they had been taught to place in a piggy bank. Pigs rooting with the coin (digging as they do for food or communication) instead of putting it into a piggy bank.

100
Q

Mentalizing

A

Understanding others’ mental states.

101
Q

Components of social learning theory

A
  • Attention = models that get our attention are more likely to elicit imitation
  • Retention = must be able to retain a memory of what the model did
  • Reproduction = must be able to reproduce (re-create) what the model did
  • Motivation = must be motivated to reproduce the behaviour. Reinforcement and punishment do not have to be experienced directly:
    Vicarious reinforcement = increase in a behaviour due to the observer witnessing the model being reinforced for that behavior.
    Vicarious punishment = decrease in a behaviour due to the observer witnessing the model being punished for that behavior.