Exam 3 Flashcards

1
Q

Describe behavioural therapy.

A

Behavioural therapy is based entirely on classical and operant conditioning. If a phobia is the result of classical conditioning (i.e. learning), then classical conditioning (CC) procedures should be effective in curing the disorder.

2
Q

What is systematic desensitization for phobias?

A

Developed by Wolpe (1958) and still relevant today. It is based on counter conditioning (not extinction) and has four phases.

3
Q

Describe phase 1 of counter conditioning.

A
  • Construction of an anxiety hierarchy.
  • Client writes down 10-15 anxiety-inducing scenes related to the phobia.
    Then, the client rank-orders the scenes (creating a hierarchy).
4
Q

Describe phase 2 of counter conditioning.

A
  • Relaxation training.
  • The client is classically conditioned to relax.
  • A state of bodily calm and relaxation is induced by having the client alternately tense and relax muscle groups.
  • This pairs the word “relax” with the physical response of relaxation (20 min/session).
  • After training, the person is able to relax on command.
5
Q

Describe phase 3 of counter conditioning.

A
  • i.e. condition a new CR that is counter to the old CR of fear.
  • Note: a person cannot be relaxed and fearful at the same time, so relaxation will be our new CR.
  • Client is instructed to relax and imagine the lowest scene in the hierarchy.
  • Before any anxiety is experienced/can develop, the client is instructed to relax.
  • It is critical that the client does not become anxious while imagining the scene, so the image is very brief.
  • After the scene (CS) has ended, the client is instructed to relax (CR).
  • Over trials, increase the amount of time the scene is imagined.
  • When the scene can be imagined for relatively long periods with no anxiety (only relaxation), move to the next scene in the hierarchy and repeat the process.
6
Q

Describe phase 4 of counter conditioning.

A

Assessment:

  • The person encounters the phobic object.
  • If the person feels relaxed = success.
  • If the person feels fear = fail.
  • 90% success rate.
  • Occasional relapses after 1-3 years, but these are easily handled by additional desensitization.
7
Q

Who founded operant conditioning?

A

Burrhus Frederic Skinner (1904-1990).

8
Q

Explain the premise of operant conditioning:

A
  • All behaviour takes place in a setting and produces outcomes.
    E.g. sticking a finger into a coin-return slot: finding coins is one outcome; finding no coins is another.
  • An association forms between the behaviour and the outcome.
  • This is called conditioning (learning) → operant conditioning (instrumental conditioning).
  • Through operant conditioning, that behaviour is made more or less probable.
9
Q

What are the two types of outcomes or environmental events?

A

1: Those that increase the probability of behaviour;
2: Those that decrease the probability of behaviour.

10
Q

What is a reinforcer?

A

Events that increase the probability of behaviour. There are two types.

11
Q

What are the two types of reinforcers?

A

Positive reinforcers and negative reinforcers.

12
Q

What is a positive reinforcer?

A

Any event (or stimulus or outcome) which immediately follows behaviour and which increases the probability of that behaviour.

13
Q

What is a negative reinforcer?

A

Any event or stimulus which stops when a behaviour is given, and which increases the probability of that behaviour.
- Withdrawal symptoms are negative reinforcers: they go away when you take the drug.

14
Q

What are the three events that Skinner recognized that decrease the probability of behaviour?

A

Punishment
Omission
Extinction

15
Q

What is punishment?

A
  • Positive punishment: Any event or stimulus which immediately follows a behaviour and which decreases the probability of that behaviour.
16
Q

What is omission?

A

Negative punishment - Behaviour is followed by the removal of a positive reinforcer.

17
Q

What is extinction?

A

When a positively reinforced behaviour is no longer reinforced (ever).

18
Q

How did Premack break the circular definition of a reinforcer?

A

An event which is of greater value to an O (organism) can positively reinforce an event of lesser value.

To apply this, calculate the amount of time an O spends on a variety of activities under conditions of free access.

Assumption: the more time spent on an activity, the greater its value to the O.

19
Q

Example of Premack:

A
Time spent on various activities:
TV - 15%
Music - 10%
XBOX - 20%
Internet - 12%
Phone - 30%
Homework - 0%

Phone should reinforce all other activities; internet should reinforce homework and music; everything should reinforce homework. (A sketch of this ordering rule follows.)
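
As a minimal sketch in Python (the `can_reinforce` helper is illustrative, not from the notes), with time share under free access standing in for value:

```python
# Time share under free access is taken as a proxy for an activity's value.
time_share = {"TV": 0.15, "Music": 0.10, "XBOX": 0.20,
              "Internet": 0.12, "Phone": 0.30, "Homework": 0.00}

def can_reinforce(reinforcer, target):
    """Premack: a higher-valued activity can positively reinforce a lower-valued one."""
    return time_share[reinforcer] > time_share[target]

print(can_reinforce("Phone", "XBOX"))      # True: phone can reinforce anything else
print(can_reinforce("Homework", "Music"))  # False: homework can reinforce nothing
```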

20
Q

Describe Premack’s take on positive punishment:

A

An event which is of lesser value to an O can punish an event of higher value. So, in our example, homework can punish listening to music. Don’t watch TV and then do homework; do homework and then watch TV.

21
Q

Describe the experiment for Premack:

A

Thirsty rats → drink 80% of the time and run 20% of the time.
So, thirsty rats were placed on a motorized wheel with a drinking tube close by.

  • Contingency: after 15 licks (drinks) there were 5 wheel turns (i.e. rats forced to run).
  • Premack predicts running will punish the drinking; therefore the thirsty rats will drink less.
  • Results: thirsty rats greatly decreased their drinking.
    Conclusion: Running punished drinking.
22
Q

What is the application for the results of the Premack experiment?

A

Behaviours that are reinforced become intrinsically reinforcing.
So, a kid comes home from school, does homework immediately, and immediately afterward gets TV + apple-and-cheese snacks + praise. Over enough trials, the kid will say that they do homework after coming home from school because they like it.

23
Q

According to Skinner, if you want to change behaviour, all you have to do is change the environmental contingencies. That is…

A

a) remove the reinforcers maintaining the undesirable behaviour (i.e. omission and extinction)
AND
b) give positive reinforcement dependent (contingent) upon the desired behaviour (at least some of the time).

24
Q

What is a universal positive reinforcer?

A

Praise.

25
Q

What are some of the great things about praise?

A
  • It is always with you.
  • It can be given immediately.
  • Giving praise is free; it doesn’t cost anything.
  • No psychological repercussions for using praise. Kid won’t hate you.
  • It works (e.g. trucker example).
  • It doesn’t weigh anything.
26
Q

What are the two reasons that Skinner was against punishment?

A

1: It doesn’t extinguish behaviour; it only suppresses the behaviour in the presence of the punisher (punishment needs surveillance).
2: It can lead to negative psychological repercussions.

27
Q

If you are to punish, punish the way nature does. How does nature punish?

A
  1. Punish the first undesirable behaviour.
  2. Punish every time.
  3. Maximum intensity the first time. If it is a reasonable stimulus, then it is appropriate at full strength the first time and every time.
  4. “Burns” quickly - once you administer it, it’s over.
  5. Punish the behaviour, not the person (no judgement, no emotion).
  6. Be consistent (i.e. all employees equally, all children equally, etc.).
28
Q

What are the different types of reinforcement schedules?

A
  • Continuous reinforcement schedules
  • Partial reinforcement schedules (which subdivide into ratio and interval schedules)
29
Q

Describe the continuous reinforcement schedule

A

Positive reinforcement is given after each occurrence of the behaviour.

30
Q

Describe partial reinforcement schedules

A
  • Behaviour does not have to be reinforced each time it occurs for it to be acquired or sustained.
  • When behaviour is reinforced intermittently, it is called partial reinforcement.
  • The reinforcement is still contingent upon the desired behaviour.
  • Reinforcement follows the behaviour some of the time, and reinforcement is never given in the absence of the behaviour.
31
Q

What are the two types of partial reinforcement schedules?

A

Ratio schedule

Interval schedule

32
Q

Describe ratio schedules and their two types

A

Reinforcement is given according to the number of behaviours or responses demonstrated.
Two types:
a) Fixed ratio (FR)
b) Variable ratio (VR)

33
Q

Describe fixed ratio

A
  • Reinforcement is delivered after a constant, or fixed, number of behaviours. Therefore a fixed number of behaviours is necessary to produce reinforcement.
  • If 10 responses are required, it’s called an FR-10 schedule; if 250 responses are required: FR-250.
34
Q

Describe variable ratio (VR)

A
  • Reinforcement is delivered after a variable, or average, number of behaviours.
  • If, on average, 20 responses are required for reinforcement, it’s called a VR-20 schedule.
  • With a VR-20 schedule, reinforcement may be delivered after 5 responses, 40 responses, 8 responses, 23 responses, etc., so long as the average (mean) number of responses = 20.
35
Q

Describe interval schedules and their two types.

A
  • Reinforcement is delivered according to the amount of time since the last reinforcement, plus a single response after this time interval has ended (written example in notes).
    Note:
  • Only the first response after the time interval has ended is reinforced.
  • The next time interval begins immediately after this response + reinforcement.
  • Responses during the time interval do nothing.

Two types: Fixed interval (FI) and variable interval (VI)

36
Q

Describe fixed interval (FI) schedule

A
  • The time interval is fixed, or constant. If the time interval is two minutes, we call that an FI-2 min schedule. If 4 hours, FI-4 hours. If the time interval is 7 days, we call that an FI-7 days schedule.
  • Cleaning-room parenting example (allowance).
37
Q

Describe variable interval (VI) schedule

A
  • Same as before, except now the time interval varies (around a mean amount of time).
  • If the average or mean amount of time for the interval is 4 hours, we call it a VI-4 hour schedule.
  • Thus, one interval might be 6 hours long, another 118 minutes, another 50 minutes long, etc., so long as the average of the time intervals = 4 hours.
    E.g. fishing:
  • Number of casts
  • How often a fish swims by is critical and ranges from a few minutes to hours (a variable time interval). (All four schedules are sketched below.)
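
As a minimal sketch, the four partial schedules can be written as decision rules answering one question per response: does this response earn reinforcement? The class names and the uniform/integer draws for the variable schedules are illustrative assumptions, not from the notes.

```python
import random

class FR:
    """Fixed ratio (FR-n): reinforce after exactly n responses."""
    def __init__(self, n):
        self.n, self.count = n, 0

    def respond(self, now=0.0):
        self.count += 1
        if self.count >= self.n:
            self.count = 0
            return True                  # deliver reinforcement
        return False

class VR(FR):
    """Variable ratio (VR-n): the requirement varies, averaging n responses."""
    def __init__(self, mean):
        self.mean = mean
        super().__init__(random.randint(1, 2 * mean - 1))

    def respond(self, now=0.0):
        if super().respond(now):
            self.n = random.randint(1, 2 * self.mean - 1)  # draw next requirement
            return True
        return False

class FI:
    """Fixed interval (FI-t): only the first response after t time units
    since the last reinforcement is reinforced."""
    def __init__(self, t):
        self.t, self.last = t, 0.0

    def respond(self, now):
        if now - self.last >= self.t:
            self.last = now              # the next interval starts here
            return True
        return False                     # responses during the interval do nothing

class VI(FI):
    """Variable interval (VI-t): like FI, but t varies around a mean."""
    def __init__(self, mean):
        self.mean = mean
        super().__init__(random.uniform(0, 2 * mean))

    def respond(self, now):
        if super().respond(now):
            self.t = random.uniform(0, 2 * self.mean)       # draw next interval
            return True
        return False

# E.g. an FR-10 schedule yields exactly 10 reinforcements per 100 responses:
schedule = FR(10)
print(sum(schedule.respond() for _ in range(100)))          # 10
```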
38
Q

What type of reinforcement is best?

A

Partial reinforcement schedules lead to stronger conditioning (a higher response rate) than continuous reinforcement.

39
Q

Explain extinction as it applies to operant theory

A

Operant theory suggests that since continuous reinforcement schedules (CRFs) provide more reinforcements, CRFs should result in stronger S-R associations than partial reinforcement schedules (PRFs).

Therefore, extinction should take longer (be delayed) following CRF.

BUT - what actually happens during extinction?

  • Extinction is delayed (takes longer) following PRF.
  • Extinction is faster following CRF.

This is known as Humphreys’ paradox.

40
Q

Expand on Humphreys’ paradox, and name two hypotheses that attempt an explanation:

A
  • Also known as the partial reinforcement effect (PRE).
  • Why should an R that is only partially reinforced be stronger (i.e. more resistant to extinction) than an R that has been reinforced each time it occurred?

Two hypotheses to explain Humphreys’ paradox:
A) Discrimination Hypothesis
B) Generalized Decremental Hypothesis

41
Q

Describe the Discrimination Hypothesis

A
  • For an O’s behaviour to change during extinction, the O must be able to discriminate the change in reinforcement contingencies.

Note: a switch from CRF to extinction is highly discriminable.

  • A switch from FR or FI to extinction is less discriminable.
  • A switch from VR or VI to extinction is even less discriminable (because with VR and VI, the O experiences occasional long stretches of unreinforced Rs).

Thus, the discrimination hypothesis predicts extinction will be fastest following CRF, followed by fixed reinforcement schedules, followed by variable reinforcement schedules (which would be the slowest to extinguish).

42
Q

Explain the Generalized Decremental hypothesis

A

“Generalized decrement” refers to the decrease in responding in a S generalization test as the test S becomes less and less similar to the training S.

43
Q

Explain the Stimulus Generalization test:

A

(diagram in notes)
During extinction, predict strong responding if the S during extinction is similar to the training S, and weak responding if the S during extinction is different from the training S.
* One important class of S is the number of consecutive unreinforced Rs that were followed by reinforcement.

So, after CRF, 10 unreinforced Rs during extinction result in a S situation quite unlike training. Therefore, predict a large generalized decrement; thus, predict responding will soon stop.

After, say, FR-50 training, even after 100 unreinforced responses during extinction, the S situation is not that different from training. Therefore, predict responding to continue (because the generalized decrement is small).

Note: Capaldi agrees that the O can discriminate the change from training to extinction, but says it is not the discrimination that controls behaviour. Rather, the S controls the behaviour.

44
Q

Do both hypotheses make the same predictions?

A

Yes. Both state that extinction should be fastest following CRF and slowest following VI or VR training. The data confirm these predictions.

But which is correct?

45
Q

Describe the experiment that investigated which hypothesis is correct.

A

Phase I: Group 1 - PRF; Group 2 - CRF
Phase II: Group 1 - Extended CRF (so that before extinction, both groups had the same amount of CRF training); Group 2 - CRF
Phase III: Group 1 - Extinction; Group 2 - Extinction

The discrimination hypothesis predicts that the two groups will extinguish at the same rate because they both have the same discrimination task.

Capaldi predicts that there is enough S generalization…

Result: During extinction, Group 1 responds more than Group 2. This supports the generalized decrement hypothesis.

46
Q

What is learned during operant conditioning?

A

It is believed that S - R associations are made.

S - R theory has 3 assumptions.

47
Q

Describe the first assumption of S-R theory.

A

Associations form between stimuli and responses.
E.g. Thorndike’s puzzle box: a wooden cage with a latch for the door. Place the cat inside and close the door. A dish of salmon sits just outside the puzzle box. The cage is the S.

S(cage) - R(meow)
S(cage) - R(pace)
S(cage) - R(bat latch) (salmon contiguous with this R), i.e. the door opens and the cat eats the salmon.

The reinforcement of the salmon strengthens the S-R association S(cage) - R(bat latch).

48
Q

Describe the second assumption of S-R theory.

A

Reinforcement is the mechanism that strengthens the S-R association.

So, the S(cage) - R(bat latch) association is strengthened (learned) by the salmon reinforcer.

49
Q

Describe the third assumption of S-R theory.

A

The O’s behaviour is mechanized (no anticipation or expectation of reinforcement).

So, cat should be “surprised” each time it finds the salmon.

50
Q

Describe the latent learning experiment

A
  • Tolman and Honzik (1930)
  • Complex maze, 14 choice points
  • Rats given 1 trial a day in the maze
  • 3 groups of rats:

Group 1: Never fed in the maze (control) - removed when they reached the empty goal box.
Group 2: Received food reinforcement in the goal box on every trial.
Group 3: For the first 10 trials, no food in the goal box; for trials 11-17, food reinforcement in the goal box.

Performance measure: # of errors (wrong turns) the rat made (therefore, the lower the #, the higher the performance, i.e. the learning).

51
Q

What was found in the latent learning experiment?

A

The latent learning experiment refutes S-R theory (which says that reinforcement is the mechanism which strengthens S-R associations).

Result: once food was introduced on trial 11, Group 3’s errors dropped immediately to the level of Group 2, showing the rats had already learned the maze without reinforcement.

Conclusion: Reinforcement is not necessary for the learning of an operant response.
Reinforcement is necessary for the performance of an operant response.

52
Q

Explain the Irrelevant Incentive Learning experiment.

A

Phase I: Satiated (full) rats in a T-maze with food in the goal box (rats don’t prefer either arm of the maze; assume S-R associations are not formed because there is no reinforcement).

Phase II: Food-deprive the rats and place them in the T-maze.

Result: rats make a beeline for the food (i.e. go directly to it).
Learning occurred in the absence of reinforcement.
This refutes S-R theory.

53
Q

Describe Contrast Effects as they relate to proving/disproving S-R theory

A

According to the S-R model, all reinforcements should have the same effect, i.e. strengthen S-R associations.
- Thus, experience with other reinforcements should not affect behaviour (except to strengthen the S-R associations).

54
Q

Describe the experiment by Tinklepaugh for contrast effects

A

Monkey subjects.
Phase I: Discrimination task…
The banana-under-1-of-2-cups game.
- The banana is always under the cup on the left.
- The monkey quickly learns and always chooses the cup on the left (and gets banana reinforcement).
Phase II: Test:
- Secretly switch the banana for lettuce (which monkeys also like to eat).
- S-R prediction: if an S(cups) - R(lift left cup) association is learned, the switch to lettuce should go unnoticed.

Result: the monkey flips out (surprised and frustrated - refuses to accept the lettuce).
Conclusion: an R-Reinforcement association was learned.

This refutes the S-R model, which states that only S-R associations are formed.
The monkey developed an expectation.

55
Q

What are the overall conclusions of the latent learning experiment, the irrelevant incentive learning experiment, and the contrast effects experiment?

A

The results are consistent with the view that Os can associate any events that occur together,
including: S-S, S-Rf, R-S, R-Rf, S-R (Rf = reinforcement).

It is tough to maintain a strict S-R perspective.
* Performance in operant conditioning seems to result from R-Rf associations.

56
Q

What is inhibitory training in operant conditioning called?

A

Discrimination procedure.

57
Q

Explain the discrimination procedure (inhibitory training in operant conditioning)

A

2 types of trials:
i) Red/food
ii) Green (no food)
Successive discrimination - the red and green lights appear on successive trials (but the order can be red, green, green, red, green, red, red, red, green, green, etc.)

In operant conditioning we can also have simultaneous discrimination training, where both stimuli are presented on the same trials:
Left cup/food
Right cup (no food) ← the right cup became an associative inhibitor.

58
Q

What are the two theories of stimulus control of operant behaviour?

A

A) Spence’s theory of generalization (also called absolute theory)
B) Relational theory of stimulus control

59
Q

Describe Spence’s theory of generalization (absolute theory) for S control

A
  • Adopted and extended Pavlov’s view:
    i.e. a gradient of excitation forms around the S+ (training S), and excitation decreases as the S becomes less similar to the S+,
    AND
    a gradient of inhibition forms around the S- (training S), and inhibition decreases (responding increases) as the S becomes less similar to the S-.

(Gradients are an inherent property of the nervous system.)

If the S+ and S- are in the same stimulus dimension (intradimensional, e.g. they are both lights), then the excitatory and inhibitory gradients add together algebraically.
The net associative strength of any S within that S dimension = its excitatory strength + its inhibitory strength (see the sketch below).
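
As a minimal numerical sketch of the algebraic summation, assuming Gaussian-shaped gradients (the shapes, widths, and amplitudes here are illustrative assumptions, using the 550 nm / 555 nm stimuli from the pigeon experiment below):

```python
import numpy as np

# Illustrative Gaussian gradients: excitation around S+ = 550 nm,
# inhibition around S- = 555 nm (widths and amplitudes are assumptions).
wavelength = np.linspace(500, 600, 1001)            # stimulus dimension (nm)
excitation = np.exp(-(wavelength - 550) ** 2 / (2 * 10 ** 2))
inhibition = -0.6 * np.exp(-(wavelength - 555) ** 2 / (2 * 10 ** 2))
net = excitation + inhibition                       # algebraic summation

peak = wavelength[net.argmax()]
i_splus = np.abs(wavelength - 550).argmin()         # index of the S+
print(f"Net gradient peaks near {peak:.0f} nm")     # ~546 nm: away from the S-
print(net.max() > net[i_splus])                     # True: a novel S beats the S+
```

The net peak lands below 550 nm (away from the S-) and exceeds responding at the S+ itself: exactly the peak shift described on the next card.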

60
Q

What is the “remarkable” prediction that Spence posits in his Theory of Generalization (absolute theory) of S control?

A
  • A novel S will yield higher responding than the training S.
  • Following intradimensional discrimination training, Spence predicts a peak shift.
    I.e. the O will respond more to a novel S than to the S+.
    (The peak shift is in a direction away from the S- while staying close to the S+.)
61
Q

Explain the experiment conducted to examine Spence’s Generalization theory (absolute theory) for S control

A
2 groups of pigeons:
Phase I
Group 1:
i) S(550 nm)/Rf (only if the O responds)
ii) S(555 nm) alone
Group 2:
S(550 nm)/Rf (only if the O responds)

Phase II
S generalization test (both groups)

Predictions: pg 4, green handout
Results: pg 5, green handout

62
Q

Describe the relational theory of S control

A
  • Köhler (1939)
  • Stimuli are not responded to in absolute terms but, rather, relative to one another.
    i.e. the O learns the relationship between 2 stimuli (e.g. brighter, sweeter, warmer, etc.)
63
Q

Explain the experiment that investigated relational theory of S control

A
  • Used chickens
  • Simultaneous intradimensional discrimination task

Phase I
Simultaneous presentation of 2 stimuli:
S+ = light grey card, S- = dark grey card
Response = approach
Reward = food

Phase II
Test: simultaneous presentation of the S+ and a novel S
S+ = light grey card
Novel S = very light grey card

Köhler says that the O learns the relationship between the S+ and the S-.
Prediction: approach the novel S.
Result: the chickens approached the novel card more than the S+.
The chickens learned the relationship between the stimuli, that is, approach the lighter-coloured card. This is called transposition.

64
Q

Explain transposition

A

The O has transferred the relational rule to a new pair of stimuli.
But this could also be explained as a peak shift (Spence’s absolute theory).
So we need a different experiment to distinguish between these two theories.

65
Q

Explain the experiment used to distinguish between Generalization theory and relational theory (intermediate size problem)

A
  • Used chimps
  • Stimuli: 9 squares of different sizes (from 9 sq. in. to 27 sq. in.)

Phase I
Training - simultaneous intradimensional discrimination training
- Chimps were always simultaneously presented with squares 1, 5, & 9
- Reinforced for choosing square #5 (the intermediate size)

Phase II
Presented with different sets of 3 squares
- Spence (generalization/absolute theory) predicts that the chimps will choose the square closest in size to square #5.
- Relational theory predicts that the chimps will choose the intermediate-sized square.
- (Reinforced regardless of choice.)

Results: support relational theory, not absolute theory. The chimps chose the intermediate-sized square. (The two choice rules are sketched below.)
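
As a minimal sketch, the two theories reduce to two different choice rules; the square sizes and test set below are illustrative assumptions:

```python
# Hypothetical sizes for squares #1-9, spanning 9 to 27 sq. in.
sizes = {i: 9 + 2.25 * (i - 1) for i in range(1, 10)}
trained_size = sizes[5]                       # square #5 was reinforced in training

def absolute_choice(presented):
    """Spence: choose the square closest in size to the trained square."""
    return min(presented, key=lambda i: abs(sizes[i] - trained_size))

def relational_choice(presented):
    """Relational theory: choose the intermediate-sized square of the set."""
    return sorted(presented, key=sizes.get)[1]

test_set = [5, 7, 9]
print(absolute_choice(test_set))    # 5 - closest to the trained size
print(relational_choice(test_set))  # 7 - the intermediate square (what chimps chose)
```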

66
Q

What are the overall conclusions from the research on generalization theory and relational theory?

A
  • Results from successive discrimination support Spence’s absolute theory.
  • Results from simultaneous discrimination support relational theory.

Conclusion: both absolute cues and relational cues are “features” of the stimuli in discrimination tasks.

  • The O responds according to the most salient features:
    i.e. during simultaneous discrimination, relational cues overshadow absolute cues → transposition.
    During successive discrimination training, the absolute cues overshadow the relational cues → peak shift.
67
Q

Human applications:

How do we treat phobias using O.C.?

A

Phobias are not irrational; they are learned.

- Use a procedure called “response prevention”, or “flooding”.

68
Q

Explain the Response Prevention (flooding) procedure.

A
  • Based on extinction.
  • “Force” the client to experience the feared S (or CS) in the absence of the “aversive” consequence (or US).
  • This should extinguish the fear.

Procedure:

  • The client creates a hierarchy of feared stimuli.
  • Expose the client to the most feared S first.
  • Ensure that this exposure is really, really long, i.e. flood the client with the S. In fact, the initial session cannot be terminated until a fear reduction is observed.
    Why? The client is initially extremely fearful. If the session ends prior to fear reduction, the fear (or phobic reaction) may actually increase. Once the fear subsides, the session can be terminated.
  • Repeat the process until the most feared stimulus no longer elicits fear → go to the next S in the hierarchy and repeat the process.
69
Q

How do parents get it wrong?

A

1: Timing is wrong.
2: They don’t make positive reinforcement contingent on behaviour.
   Two aspects of contingency are not followed:
   - The positive reinforcer immediately follows the desired behaviour (at least with some probability).
   - The positive reinforcer never follows the absence of the behaviour.
3: They think that punishment expels a behaviour.
4: They forgive the first instance/first few trials.
5: They don’t reinforce desired behaviour.
6: They expect 1-trial learning.
7: They think that once a behaviour is learned it no longer needs to be reinforced.
8: They don’t pay attention to whether something actually is a positive or negative reinforcer.
9: They think that what they say is what matters - it is not what is said that counts, but what the O experiences.
10: They take “no” for an answer.