Exam 2: Chapter 4-6 Flashcards

1
Q

Know the classical conditioning (CC) associative learning paradigm and be able to identify the following in an example: US, CS, UR, CR.

A

Classical conditioning represents an association between the CS and the US

example:
Pavlov Dog

Innate reflex:
US –> UR

US: dog food
UR: salivation

Training:
CS: bell ringing before food

Conditioned:
CS –> CR

2
Q

CC represents a LEARNED association between the _____ and the _____.

A

CS and US

3
Q

CC leverages what kind of behavior?

A

Innate, AKA reflexive, AKA involuntary!

4
Q

Understand the difference between appetitive conditioning (approach) and aversive conditioning (avoidance)

A
  • Appetitive Conditioning: new reflex prepares to obtain the US
  • Aversive Conditioning: new CS->CR reflex helps avoid noxious US
5
Q

Be able to explain the eyeblink conditioning experiment, understanding that the CR can occur before the US in anticipation.

A

US: a puff of air
UR: blink eyes
CS: tone
CR: blink eyes before air puff

Initially, CS does not do anything but after pairing US and CS the bunny/human will produce CR before the US.

6
Q

What is a conditioned compensatory response? How does this maintain homeostasis? How does this contribute to tolerance?

A

Conditioned compensatory response is a CR that is the opposite of the UR, helping to balance/correct for the US-UR reflex.

- Inject adrenaline (US) -> heart rate increase (UR)
- Repeat procedure in same testing chamber (CS).
- Eventually, CS comes to produce a decrease in heart rate (CR) that helps maintain homeostasis (balance) against expected adrenaline injection.
- We observe this as tolerance, as the testing chamber evokes a CR that weakens the overall effects of the drug.

7
Q

What is extinction? What does it mean to say it can occur “in context”?

A

Breaking the association between the CS and US can extinguish the new CS->CR reflex:
- Present the CS alone repeatedly.
- Initially, the CS evokes strong CRs.
- With repetition, however, the CS becomes less effective, similar to the beginning of training.

Extinction doesn’t erase the CS-US connection, just inhibits it:
- Stress, new context, and/or passage of time can make the CS effective again!

8
Q

What is compound conditioning? Overshadowing? Blocking?

A
  • When two cues appear together in a conditioning experiment, a paradigm known as compound conditioning occurs.
  • One stimulus can overshadow another if the two occur together. Overshadowing occurs when the more salient cue within a compound acquires a far greater share of attention and learning than the less salient one.
  • Blocking demonstrates that classical conditioning occurs only when a cue is BOTH a useful AND a non-redundant predictor of the future.
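Blocking falls out naturally of the Rescorla-Wagner error-correction rule covered in card 9. A minimal Python sketch (the learning rate, US strength, and trial counts are illustrative assumptions, not values from the text):

```python
# Blocking under the Rescorla-Wagner rule: the US prediction error is
# shared by every cue present, so a pre-trained cue A leaves almost no
# error for a redundant added cue B to absorb.

def train(v, cues, alpha=0.3, lam=1.0, trials=20):
    """Update associative strengths in v for the given compound of cues."""
    for _ in range(trials):
        error = lam - sum(v[c] for c in cues)  # surprise, given ALL cues
        for c in cues:
            v[c] += alpha * error              # cues share the correction
    return v

v = {"A": 0.0, "B": 0.0}
train(v, ["A"])           # Phase 1: A alone is paired with the US
train(v, ["A", "B"])      # Phase 2: compound A+B; A already predicts the US
# v["B"] stays near zero: B is "blocked" despite perfect pairing with the US.
```

Because A already predicts the US when B is introduced, there is almost no surprise left for B to explain, so no learning accrues to it.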
9
Q

Be able to differentiate between the Rescorla-Wagner (US modulation) and Mackintosh models (CS modulation) of CC.
Which is better for explaining error-correction learning? Latent inhibition?

A
  • US modulation – in which learning changes processing of the US (e.g., Rescorla-Wagner Model) EXPECTATION/SURPRISE
    –> Error-correction learning: Errors on each trial lead to small changes in performance that seek to reduce the error on the next trial.
  • CS modulation – postulates that whether a stimulus enters into an association is determined by how the CS is processed, i.e., how much attention it receives (e.g., Mackintosh Model) LIMITED ATTENTION
    –> Sometimes there is a reduction in learning about a stimulus if there has been prior exposure to that stimulus without any consequence. This is latent inhibition.
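The error-correction idea behind the Rescorla-Wagner model can be sketched in a few lines of Python (the alpha and lambda values here are illustrative assumptions):

```python
# Rescorla-Wagner error-correction learning: on each trial, the associative
# strength V moves a fraction (alpha) of the prediction error (lam - V)
# toward the asymptote lam set by the US.

def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Return the associative strength V after each CS-US pairing."""
    v = 0.0
    history = []
    for _ in range(trials):
        error = lam - v      # surprise: actual US minus expected US
        v += alpha * error   # small change that reduces next trial's error
        history.append(v)
    return history

vs = rescorla_wagner(10)
# Learning is fast early (big errors) and slows near the asymptote
# (small errors), producing the classic negatively accelerated curve.
```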
10
Q

Which interstimulus interval (timing) is best for CC learning? What happens when the US is presented before the CS? Understand delay vs. trace conditioning.

A
  • Delay conditioning produces the best learning: the CS is presented continuously and stays on until the US arrives (the US itself lasting a shorter time), so CS and US overlap
  • "Backwards" conditioning: US then CS - no learning!
  • Trace conditioning: the CS is presented and ends before the US begins, leaving a gap (a memory "trace") between them
11
Q

What does it mean to say that some associations are easier to make than others? What is a conditioned taste aversion (CTA)?

A

Some associations are innately easier to make:
- When tone + taste paired with poison, only taste provokes CR
- When tone + taste paired with shock, only tone provokes CR

Seems we have some innate preferences for forming associations that can override statistical correlations
- E.g., conditioned taste aversion: poison group, tone + taste –> poisoning, tone = ?; shock group: tone + taste –> shock, taste = ?

12
Q

What experiment (conducted by Thompson in the 1980s) showed that CC partly occurs in the cerebellum?

A
  • In the 1980s, Thompson and associates discovered that eyeblink conditioning in rabbits depends on the cerebellum.
13
Q

What two locations in the cerebellum are most likely to store the CS-US association? What evidence do scientists have to implicate these areas?

A

CS–US association may be stored in:
- Purkinje cells of the cerebellar cortex.
- Cerebellar interpositus nucleus (eyeblink CR pathway).

  • Recordings from the cerebellum before and after learning have helped demonstrate where the memory is stored.
    • Shutting off the Purkinje inhibition (double negative!) of the interpositus enables the CS to generate CRs (shown via electrophysiological recording in the cerebellum).
14
Q

What happens to associative learning if there is damage to the interpositus nucleus? What if there is damage to the Purkinje cells?

A
  • Interpositus nucleus damage = destroys existing CRs and prevents new ones from being learned
  • Purkinje cell damage = disrupts the timing of the learned CR
15
Q

What role does the hippocampus play in CC? (hint: eliminates latent inhibition and other paradigms that depend on changes in the CS – i.e., Mackintosh model vs. R-W model).

A
  • CS modulation (Mackintosh) occurs in the hippocampus and medial temporal lobe
  • US modulation (Rescorla-Wagner) occurs in the cerebellum
16
Q

Understand the role of classical conditioning and tolerance in drug addiction.

A
  • Drug tolerance increases with use in the same drug-taking context (situation-specific).
  • Even subtle changes (e.g., to the drug's taste) can override tolerance and strengthen the drug's effects.
    • This increases the possibility of overdose.

In rats given a large heroin dose for the first time:
- 96 percent fatally overdosed.
In rats given a small heroin dose before the larger dose in a different location:
- 64 percent fatally overdosed.
In rats given a small dose before the larger dose in the same location:
- Only 32 percent overdosed.

17
Q

How can cue-exposure therapy help addicts kick the habit? What happens to the association between the CS-CR in cue-exposure therapy?

A
  • Bouton (2000) suggests that therapists conduct cue-exposure therapy:
    • In different contexts, including home.
    • Over varying time lengths.

In cue-exposure therapy, the CS-CR association is put through extinction: drug cues (CS) are presented repeatedly without the drug (US).

18
Q

What role does extinction play in reducing addiction?

A

Extinction of the CS-CR association is the mechanism behind cue-exposure therapy: repeated exposure to drug cues without the drug weakens the conditioned craving response.

19
Q

Review Thorndike’s Law of Effect from Chapter 1.

A

Thorndike's Law of Effect: behaviors followed by satisfying consequences are strengthened (more likely to be repeated), while behaviors followed by discomfort are weakened. In practice: when you want a behavior to increase, use reinforcement; when you want a behavior to decrease, use punishment.

20
Q

Define and understand the relationships between the discriminative stimulus (SD), the response (R) and the outcome in operant (instrumental) conditioning (O).

A
  • In the presence of the discriminative stimulus (SD), the response (R) produces the outcome (O): SD -> R -> O
21
Q

What does the SD indicate about contingencies?

A
  • Discriminative Stimulus (SD): tells us which contingencies are in effect
22
Q

What is a habit slip? What evidence indicates that the response (R) is not just a rote motor program?

A

Some SD -> R behavior is so strong that it happens automatically, no matter what other options are available.

This is called a habit slip.

  • If you’ve ever woken up late and jumped up to get ready for class only to remember that it is Saturday, then I am talking about you.

Initially, it was thought to be a rote motor program.

  • However, if normal motor program is blocked, animal will use other methods to achieve same ends.

R is not a single behavior but a class of behaviors producing an effect. Some cognitive psychologists would call it a goal or intention.

23
Q

What is the effect of reinforcement on behavior? Punishment? What is the difference between a positive or negative consequence?

A
  • Reinforcement increases behavior
  • Punishment decreases behavior

Positive Reinforcement
- If it leads to positive effects, do it more
Negative Reinforcement (escape)
- If it ends/avoids a negative effect, do it more

Positive Punishment
- If it leads to negative effects, do it less

Negative Punishment (omission)
- If it ends/avoids a positive effect, do it less

24
Q

Define the acquisition and extinction phases of OC.

A

Training – contingency is introduced: If S, R -> O

Acquisition – animal discovers contingency, rate of R increases

Extinction – contingency is eliminated (R -> __), rate of R decreases

25
Q

Be able to fully describe the Skinner box experimental paradigm, including how you can shape complex behaviors (chaining) with successive approximations.

A
  • Note that the animal is ‘free’ in the chamber, with no experimenter intervention (free-operant paradigm)

Shaping
  • Shaping through successive approximation builds a complex R incrementally

Chaining
- Chaining builds complex R sequences by linking together SD->R->O conditions

Shaping and chaining can be used together to train animals to complete incredibly complex behaviors
Both techniques require skill and patience from the trainer
- Keep animal motivated and interested
- Select proper training sequence–can’t move too fast

26
Q

What are the major differences between OC and CC? Be able to identify which is being used to promote learning in example situations. look over chapter examples of differences between OC and CC

A

Classical Conditioning:

  • Environment operates on the animal’s innate behavior
  • Stimulus evokes response (CS -> CR)
  • Stimulus is followed by O regardless of the response
  • Animal learns CS (SD) predicts US (O)

Operant Conditioning:

  • Animal operates on the environment
  • Stimulus evokes a response to produce an outcome (SD -> R -> O)
  • Animal connects context, behavior, and outcome
27
Q

What is the difference between a discrete-trial paradigm and a free-operant paradigm?

A

Have to repeat trials over and over, resetting animal and device.
I.e., trials were controlled by the experimenter.
- This is known as a discrete-trials paradigm.

  • Note that animal is ‘free’ in the chamber, no experimenter intervention (free-operant paradigm)
28
Q

What are primary and secondary reinforcers? What is the drive reduction theory? What does it mean to say that reinforcers can have negative contrast?

A
  • A stimulus, such as food, water, sex, or sleep, that has innate biological value to the organism and can function as a reinforcer, is known as a primary reinforcer.
  • Drive reduction theory states that organisms have innate drives to obtain primary reinforcers and that learning is driven by the biological need to reduce those drives.
  • A stimulus (such as money or tokens) that has no intrinsic biological value but that has been paired with primary reinforcers or that provides access to primary reinforcers is known as a secondary reinforcer.
  • negative contrast - water to sugar water, no longer want water as a reward
29
Q

What are four things about punishment that can make it less effective than reinforcement? How can differential reinforcement of alternative behaviors help?

A

Punishment can be effective, but …
1. Punishment leads to more variable behavior.
2. Discriminative stimuli for punishment can encourage cheating.
3. Concurrent reinforcement can undermine the punishment.
4. Initial intensity matters.

  • Rather than delivering punishment every time, use differential reinforcement of alternative behaviors (DRA): a method to decrease the frequency of unwanted behaviors by instead reinforcing a preferred alternative behavior.
30
Q

How does timing of reinforcement affect learning in OC?

A
  • The closer in time the consequence follows the behavior, the better the learning
31
Q

Know detail regarding the four main schedules of reinforcement (FR, FI, VR, VI) including some examples. What is a post-reinforcement pause? Which schedule creates the “strongest learning”? What did Skinner mean by “strongest learning/behavior”?

A

Fixed Ratio (FR) - Every X Responses produces 1 Outcome
- ex. ice cream stamp card
- If 1 R = 1 O, then continuous reinforcement schedule
- If 2 R = 1 O, then partial reinforcement schedule

  • Post-reinforcement pause (flat line on the cumulative record): a time out from responding after each reward

Variable Ratio (VR) - Every X Rs produces 1 O, but X changes with each reinforcer
- ex. gambling
- Produces the "strongest learning"

Fixed Interval (FI) - After Y seconds, 1 R produces 1 O
example: paycheck, looking at the clock
–> behavior before interval expires has no consequences

FI scallop:
- At beginning of interval, little/no responding
- Increases to rapid rate of responding before interval expiration

Variable Interval (VI) - After Y seconds, 1 R produces 1 O, but Y changes after each O. Behavior before interval expires has no consequence.
- ex. checking your texts

  • Skinner meant that the “strongest learning/behavior” are the hardest to extinguish
32
Q

Know the matching law of “choice” behavior, behavioral economics, and bliss point.

A
  • Matching law of choice behavior—response rates to concurrent schedules often correspond to the rate of reinforcement for each schedule.
  • Behavioral economics—the study of how organisms distribute their time and effort among possible behaviors.
  • Bliss point—the ideal distribution for the organism; provides maximum subjective value.
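Under the matching law, predicted response allocation can be computed directly from the reinforcement rates. A minimal sketch (the reinforcement rates below are made-up numbers for illustration):

```python
# Matching law: the proportion of responses allocated to each concurrent
# schedule tends to equal the proportion of reinforcement it delivers.

def matching_shares(reinforcement_rates):
    """Predicted share of responses for each concurrent schedule."""
    total = sum(reinforcement_rates)
    return [r / total for r in reinforcement_rates]

# Two concurrent VI schedules delivering 40 vs. 10 reinforcers per hour:
shares = matching_shares([40, 10])
# Predicted allocation: 80% of responses to the richer schedule and 20%
# to the leaner one (40/50 and 10/50).
```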
33
Q

What is the Premack Principle (AKA the response deprivation hypothesis)?

A

The Premack Principle: Responses as Reinforcers
- By restricting the ability to execute a response, you can make the opportunity to perform that response reinforcing.

34
Q

What role does the dorsal striatum play in OC? Of what structure is the dorsal striatum a part? If the dorsal striatum is lesioned, what happens to OC associative learning?

A
  • Within the basal ganglia, the dorsal striatum (caudate nucleus and putamen) seems to play an important role in SD -> R learning.
  • If the dorsal striatum is lesioned, there is no learning with discriminative stimuli (e.g., if light on, then lever press gets food).
  • Simple R -> O relationships, though, are still learned correctly (e.g., press lever, get food).
35
Q

What role does the orbitofrontal cortex play in OC?

A

The orbitofrontal cortex may be an important brain substrate for storing R->O associations.
- helps organisms choose particular responses based on the expected outcomes

36
Q

What is the difference between wanting (motivation) and liking (hedonics)? Can you have one without the other?

A
  • Only when “wanting” and “liking” signals are both present will the arrival of the reinforcer evoke responding and strengthen the SD-> R value.
  • Motivational value: (Da)
    how much we “want” a reinforcer and how hard we are willing to work to obtain it
  • Hedonic value: (μ)
    the subjective “goodness” of a reinforcer, or how much we “like” it

example: No matter how much we may “like” chocolate cake, most of us will not be very motivated to obtain more if we have just eaten three large slices!
- We can have liking without wanting (satiation) and wanting without liking (addiction)

37
Q

What is one of the “pleasure centers” in the brain? What neurotransmitter is used by this center to stimulate powerful reinforcement?

A
  • One of the “pleasure centers” is the ventral tegmental area (VTA) in the brainstem.
    • The VTA is the center for dopamine neuromodulation.
  • VTA stimulation = powerful reinforcement
38
Q

How does the VTA signal wanting and by stimulating what other regions of the brain with what neurotransmitter? [hint: SNc (SD -> R); orbitofrontal cortex (R -> O); DA]. What is the incentive salience hypothesis of wanting?

A

Wanting:

Some dopamine (DA) neurons in the VTA have axons that extend to the:
- substantia nigra pars compacta (SNc), part of the basal ganglia/striatum (SD -> R)
- orbitofrontal cortex (R -> O)

Makes this an ideal place to start looking at how the brain signals motivation.

  • Incentive Salience Hypothesis—dopamine motivates learners to work for reinforcement.

Liking:

Endogenous opioids (endorphins) may mediate hedonic value or “liking.”

  • Opiates (heroin, morphine) bind to the brain’s natural opiate receptors.
  • Opiates may provide information about “liking” that helps stimulate VTA’s “wanting” system.
39
Q

How does the brain signal liking and with what chemicals?

A
  • Endogenous opioids (endorphins) may mediate hedonic value or “liking.”
  • Opiates (heroin, morphine) bind to the brain’s natural opiate receptors.
  • Opiates may provide information about “liking” that helps stimulate VTA’s “wanting” system.
40
Q

What two regions in the brain are thought to be involved in punishment signaling?

A
  • The insular cortex (the taste center of the brain: gustation, disgust), and specifically the dorsal posterior insula, helps us determine the subjective disliking of painful physiological and psychological stimuli.
  • The dorsal anterior cingulate cortex (dACC) may help determine the motivational value of punishers (or of no reward... or less reward... remember negative contrast?), and of pain in general.
41
Q

Understand the difference between pathological and behavioral addiction. What initiates addiction? What maintains it?

A
  • Pathological addiction - a strong habit maintained despite harmful consequences.
    • Involves craving the "high" (euphoria) and avoiding withdrawal
    • Seeking pleasure involves positive reinforcement
    • Avoiding pain involves negative reinforcement
  • Although liking a drug may help initiate addiction, the incentive salience hypothesis suggests that addiction is maintained by "wanting" the drug.
  • Behavioral addiction - addiction to certain behaviors, rather than drugs.
    • Also produces euphoria.
    • Understanding drug addiction may help us understand/treat behavioral addictions.
  • Damaging the insula can eliminate addiction.
42
Q

What happens at the dopaminergic synapses with methamphetamines? Cocaine? How about with naltrexone?

A
  • Cocaine blocks the reuptake of dopamine -> dopamine lingers in the synapse, producing the euphoric high.
  • Methamphetamines cause more dopamine to be released into the synapse -> more dopamine signaling.
  • Naltrexone blocks the brain's opioid receptors, blunting the drug's "liking" signal; it can help treat heroin addicts and compulsive gamblers.
43
Q

What cognitive-behavioral treatments can help with addiction?

A

(Cognitive) behavior therapies:
- e.g., extinction, distancing, reinforcement of alternative behaviors (DRA), delayed reinforcement (imposing a fixed delay)
- Based on instrumental conditioning principles

44
Q

What is generalization?

A

Generalization: transferring past experiences to new situations

  • generality: applying the past broadly; generalizing
  • learned specificity -> a narrowing of the generalization gradient (less generality)
45
Q

What is discrimination?

A

Discrimination: the perception of differences between stimuli

  • specificity: applying the past narrowly; discrimination
  • discrimination training - providing two different consequences for stimuli initially treated by an animal as similar