Chapter 6 - Slides 21 to 44 Flashcards

1
Q

What three concepts describe how behaviours are learned, unlearned, and potentially re-emerge?

A

Acquisition, extinction and spontaneous recovery.

2
Q

Describe the concept of acquisition.

A

-The initial stage of learning, when an organism learns to associate two stimuli.
-The organism learns to connect a neutral stimulus with an unconditioned stimulus.
-Through repeated pairings, the neutral stimulus begins to elicit the conditioned response, eventually becoming a conditioned stimulus capable of eliciting the conditioned response by itself.

3
Q

Describe the concept of extinction.

A

-A decrease in the conditioned response when the unconditioned stimulus is no longer presented with the conditioned stimulus.
-When presented with the conditioned stimulus alone, the dog, cat, or other organism would show a weaker and weaker response, and finally no response. In classical conditioning terms, there is a gradual weakening and disappearance of the conditioned response.

4
Q

Describe the concept of spontaneous recovery.

A

-The return of a previously extinguished conditioned response following a rest period.

5
Q

Define stimulus generalization.

A

The process by which the conditioned response (CR) is observed even though the conditioned stimulus (CS) is slightly different from the original one used during acquisition.

6
Q

Define stimulus discrimination.

A

The capacity to distinguish between similar but distinct stimuli.

7
Q

Define habituation.

A

Repeated or prolonged exposure to a stimulus results in a gradual reduction in responding.

8
Q

How did Pavlov's work affect psychology?

A

-John B. Watson was influenced by Pavlov.
-It gave a great boost to behaviourism.
-Watson's goal: if people come into the world as blank slates, then we can shape who they are and control them through conditioning.

9
Q

What was the research that John Watson did?

A

The Little Albert Study
1. Albert was presented with neutral stimuli (a rabbit, a dog, cotton wool, a white rat, etc.).
2. Watson then paired a loud, frightening sound with the white rat every time Little Albert touched it.
3. After repeated pairings, Little Albert became fearful of the rat alone.

-Little Albert demonstrated stimulus generalization: he became afraid of other furry things, including a rabbit, a furry coat, and even a Santa Claus mask.
-This showed that emotions could become conditioned responses.
10
Q

What was a more ethical study that followed?

A

Mary Cover Jones ran a study with three-year-old Peter, who already showed a fear of rabbits.
* She successfully eliminated the fear response through conditioning.
* This was a precursor to behavioral therapy.

11
Q

Are certain things easier to condition?

A

Yes; some associations form more readily than others, e.g. clown phobias, fear of spiders, love, and taste aversions linked to vomiting.

12
Q

Are there evolutionary elements of classical conditioning?

A

Yes! Adaptive behaviours allow us to survive.

13
Q

Define biological preparedness.

A

A propensity (an inclination or natural tendency to behave in a particular way) for learning particular kinds of associations over others.

14
Q

Give an example of taste aversion.

A

For example, rats cannot throw up, so they can simply die if they ingest the wrong foods. As a result, they develop taste aversions (a strong dislike or disinclination) towards things that are bad for them, giving them a heightened wariness of such foods.

15
Q

Is everything we learn based on reflexes?

A

No. For example, if the teacher asks a question, will we all raise our hands? No! We also learn through reward and punishment, e.g. through social acceptance.

16
Q

Define operant conditioning.

A

-Organisms learn to associate a behaviour with its consequence. A pleasant consequence makes that behaviour more likely to be repeated in the future.

-Operant behaviour is behaviour that an organism produces that has some impact on the environment. The term was coined by B.F. Skinner and demonstrated using the operant chamber, or Skinner box.

17
Q

What is the law of effect?

A

First proposed by psychologist Edward Thorndike: behaviors that are followed by consequences that are satisfying to the organism are more likely to be repeated, and behaviors that are followed by unpleasant consequences are less likely to be repeated.

18
Q

Operant conditioning centres on reinforcement and punishment. In the example of the rat pressing a lever to get food, does the rat initially know what the lever does?

A

No, not at all. It discovers the connection by chance.

19
Q

In operant conditioning, what do positive and negative mean?

A

Positive (+): you are adding something.
Negative (-): you are taking something away.

20
Q

In operant conditioning, what do reinforcement and punishment mean?

A

Reinforcement: increasing a behaviour
Punishment: decreasing a behaviour

21
Q

True or false? All reinforcers (positive or negative) increase the likelihood of a behavioural response. All punishers (positive or negative) decrease the likelihood of a behavioural response

A

true!

22
Q

In terms of the rat and the lever example, please identify the things corresponding to the following terms:
-positive reinforcement
-negative reinforcement
-positive punishment
-negative punishment

A

positive reinforcement= food
negative reinforcement= remove shock
positive punishment= shock (electricity)
negative punishment= remove food

23
Q

Why are we evolutionarily primed to fear?

A

Fear is adaptive: it keeps us safe.
-Fearing many things keeps you safe, but it may not make you happy (evolution does not care about our happiness).

24
Q

Define positive reinforcement + give an example

A

A desirable stimulus is added to increase a behaviour (e.g. telling a child, Jerome, that if he cleans his room, he will get a toy).

25
Q

Define negative reinforcement + give an example.

A

An undesirable stimulus is removed to increase a behaviour (e.g. seatbelt warning systems, which go "beep, beep, beep" until you fasten your seatbelt; buckling up removes the annoying sound).

26
Q

Define punishment.

A

Punishment always decreases a behavior.

27
Q

Define positive punishment + give an example.

A

You add an undesirable stimulus to decrease a behaviour. An example of positive punishment is scolding a student to get the student to stop texting in class.

28
Q

Define negative punishment + give an example.

A

You remove a pleasant stimulus to decrease a behavior. For example, when a child misbehaves, a parent can take away a favorite toy.

29
Q

In his operant conditioning experiments, Skinner often used what approach?

A

Shaping

30
Q

Define shaping + give an example.

A

Learning that results from the reinforcement of successive steps (successive approximations) toward a final desired behaviour.

e.g. teaching a dog to lie down

31
Q

Why is shaping needed?

A

Shaping is needed because it is extremely unlikely that an organism will display anything but the simplest of behaviours spontaneously. In shaping, behaviours are broken down into many small, achievable steps.

32
Q

Define primary reinforcers.

A

Primary reinforcers are reinforcers that have innate reinforcing qualities. These kinds of reinforcers are not learned. Water, food, sleep, shelter, sex, and touch, among others, are primary reinforcers. Pleasure is also a primary reinforcer. Organisms do not lose their drive for these things.

33
Q

Define secondary reinforcers.

A

A secondary reinforcer has no inherent value and only has reinforcing qualities when linked with a primary reinforcer. Praise, linked to affection, is one example of a secondary reinforcer, as when you called out “Great shot!” every time Sydney made a goal. Another example, money, is only worth something when you can use it to buy other things—either things that satisfy basic needs (food, water, shelter—all primary reinforcers) or other secondary reinforcers.

34
Q

Name two concepts linked to schedules of reinforcement.

A

Continuous reinforcement and partial reinforcement (intermittent reinforcement).

35
Q

Define continuous reinforcement.

A

When ALL of the responses are followed by reinforcement.
* Quickest learning, but leads to habituation and easy extinction.
-The organism receives a reinforcer each time it displays the behaviour.

36
Q

Why can habituation be a bad thing?

A

When everything is rewarded, nothing feels like a reward.

37
Q

Define Partial reinforcement (Intermittent reinforcement).

A

When only some of the responses made are followed by reinforcement.
* Takes longer to learn, but is more resistant to extinction.

-The person or animal does not get reinforced every time they perform the desired behavior.

38
Q

What are the different types of partial reinforcement?

A

Fixed VS variable.
Interval VS ratio.

39
Q

Define each:
Fixed VS variable.
Interval VS ratio.

A

Fixed: refers to the number of responses between reinforcements, or the amount of time between reinforcements, which is set and unchanging.

Variable: refers to the number of responses or amount of time between reinforcements, which varies or changes.

Interval: schedule is based on the time between reinforcements.

Ratio: schedule is based on the number of responses between reinforcements.

40
Q

Now differentiate…
Fixed interval schedule (FI)
Variable interval schedule (VI)
Fixed ratio schedule (FR)
Variable ratio schedule (VR)

A
-Fixed interval schedule (FI): reinforcements are presented at fixed time periods, provided the appropriate response is made.
-Variable interval schedule (VI): behaviour is reinforced based on an average time that has elapsed since the last reinforcement.
-Fixed ratio schedule (FR): reinforcement is delivered after a specific number of responses have been made.
-Variable ratio schedule (VR): delivery of reinforcement is based on a particular average number of responses.
41
Q

True or False?
Intermittent Reinforcement Schedule
* Different schedules of reinforcement produce different rates of responding.
* These lines represent the amount of responding that occurs under each type of reinforcement.

A

true! (MUST LOOK AT GRAPH IN NOTES)

42
Q

What is an everyday example of a reinforcement schedule?

A

Gambling (a variable ratio schedule).

43
Q

What is an example of superstitious behaviour?

A

Playing badminton:
-flipping the racket around before a serve
-in the belief that it stops you losing the point

44
Q

Did Watson and Skinner focus on cognition or behaviour?

A

Strict behaviorists like Watson and Skinner focused exclusively on studying behavior rather than cognition

45
Q

How can operant conditioning be described?

A

As a means-end relationship.

46
Q

Define latent learning.

A

A condition in which something is learned but is not manifested as a behavioural change until some time in the future.
-Something we learn but not for that moment; we may not even know about it.

47
Q

What did Edward Chace Tolman believe in?

A

Tolman’s experiments with rats demonstrated that organisms can learn even if they do not receive immediate reinforcement. This finding was in conflict with the prevailing idea at the time that reinforcement must be immediate in order for learning to occur, thus suggesting a cognitive aspect to learning.

48
Q

Define Cognitive map.

A

A mental representation of the physical features of the environment.

49
Q

Describe Tolman’s experiment.

A

Tolman placed hungry rats in a maze with no reward for finding their way through it. He also studied a comparison group that was rewarded with food at the end of the maze. As the unreinforced rats explored the maze, they developed a cognitive map: a mental picture of the layout of the maze (Figure 6.15). After 10 sessions in the maze without reinforcement, food was placed in a goal box at the end of the maze. As soon as the rats became aware of the food, they were able to find their way through the maze quickly, just as quickly as the comparison group, which had been rewarded with food all along. This is known as latent learning: learning that occurs but is not observable in behavior until there is a reason to demonstrate it.

50
Q

What would be a human example of a cognitive map?

A

A road closure forces a detour: both routes eventually get you home, because you hold a mental map of the area.

51
Q

Define observational learning.

A

Learning by watching others and then imitating them.