Decision making Flashcards
1
Q
Expected value
A
- Average outcome if a scenario is repeated many times
- Calculated from the probabilities and values of the possible outcomes (see the worked example below)
- To maximize the average outcome, choose the option with the greatest expected value
- Difficult to apply to non-monetary decisions
- Doesn’t explain the actual choices of actual people
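A worked example of the calculation; the gamble (10% chance of $100) and the sure $15 are hypothetical numbers, not from the card:

```python
# Expected value = sum over outcomes of probability * value.
gamble = [(0.10, 100.0), (0.90, 0.0)]   # hypothetical: 10% chance of $100
sure_thing = [(1.00, 15.0)]             # hypothetical: guaranteed $15

def expected_value(outcomes):
    """Sum probability-weighted values over all possible outcomes."""
    return sum(p * v for p, v in outcomes)

print(expected_value(gamble))      # 10.0
print(expected_value(sure_thing))  # 15.0
# An expected-value maximizer takes the sure $15, since 15 > 10.
```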
2
Q
Prospect Theory (Kahneman & Tversky, 1979)
A
- People do not make decisions based on expected values, objective probabilities, and absolute outcomes
- Instead, people make decisions based on subjective utility, decision weights, and relative outcomes
3
Q
Subjective utility
A
- People transform objective value into subjective utility
● Utility = usefulness or desirability of an outcome
4
Q
Diminishing marginal utility
A
- Subjective utility increases more slowly than objective value, especially at large values (illustrated in the sketch below)
- $10 is subjectively worth about twice as much as $5, but $10,000,000 is not subjectively worth twice as much as $5,000,000
- Individual differences: Bill Gates vs. me
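A minimal sketch of diminishing marginal utility; the logarithmic form is an illustrative assumption (a classic textbook choice), not a function the card specifies:

```python
import math

def utility(value):
    # Concave utility: each extra dollar adds less subjective value.
    return math.log(value)

# At small stakes, doubling the amount still adds substantial utility...
print(utility(10) / utility(5))                  # ~1.43
# ...but at large stakes, doubling barely moves subjective utility.
print(utility(10_000_000) / utility(5_000_000))  # ~1.04
```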
5
Q
Loss aversion
A
- Losses loom larger than gains
- Losing $20 feels worse than winning $20 feels good
- Individual differences in sensitivity to loss
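One common quantitative form of loss aversion is the prospect-theory value function; the sketch below uses the median parameter estimates (alpha = 0.88, lambda = 2.25) reported by Tversky & Kahneman (1992), which this card does not itself cite:

```python
def value(x, alpha=0.88, lam=2.25):
    # Concave for gains; steeper (lam > 1) and convex for losses.
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

print(value(20))   # ~14.0: subjective pleasure of winning $20
print(value(-20))  # ~-31.4: subjective pain of losing $20
# The loss looms roughly 2.25 times larger than the equivalent gain.
```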
6
Q
Decision weight
A
- People transform objective probability into subjective decision weights (see the weighting-function sketch below)
- Small probabilities (but greater than 0%) are overweighted
● 1% feels like much more than 0%
● 51% feels about the same as 50%
- Large probabilities (but less than 100%) are underweighted
● 99% feels like a lot less than 100%
● 50% feels about the same as 51%
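One standard way to model these distortions is the Tversky & Kahneman (1992) probability weighting function; using it here, with their gamma = 0.61 estimate for gains, is an illustrative assumption:

```python
def weight(p, gamma=0.61):
    # Overweights small probabilities, underweights large ones,
    # and compresses differences in the middle of the range.
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

print(weight(0.01))  # ~0.06: a 1% chance is weighted like ~6%
print(weight(0.50))  # ~0.42: mid-range probabilities feel flattened
print(weight(0.99))  # ~0.91: 99% feels well short of certainty
```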
7
Q
Reference dependence (Framing effect)
A
- People make decisions based on gains and losses relative to a point of reference, not based on absolute outcomes
- Changing the way a question is asked to create a different point of reference leads to different valuations and thus different choices (see the sketch below)
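A sketch of how shifting the reference point flips the valuation of the same final state; it reuses the hedged value function from the loss-aversion card, and the $50/$30 scenario is hypothetical:

```python
def value(x, alpha=0.88, lam=2.25):
    # Prospect-theory value function (Tversky & Kahneman, 1992 estimates).
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

# Hypothetical scenario: you start with $50 and end with $30.
print(value(30))   # ~20.0 -> "you keep $30" (reference point $0: a gain)
print(value(-20))  # ~-31.4 -> "you lose $20" (reference point $50: a loss)
# Identical final wealth, different reference points, opposite reactions.
```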
8
Q
Reward prediction error (RPE)
A
RPE = Actual Reward - Expected Reward
RPE > 0 (Better than expected)
RPE = 0 (As expected)
RPE < 0 (Worse than expected)
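The definition translates directly into code; the reward values below are hypothetical, chosen to hit all three cases:

```python
def rpe(actual_reward, expected_reward):
    # Reward prediction error: positive when outcomes beat expectations.
    return actual_reward - expected_reward

print(rpe(1.0, 0.5))  #  0.5 -> better than expected
print(rpe(0.5, 0.5))  #  0.0 -> as expected
print(rpe(0.0, 0.5))  # -0.5 -> worse than expected
```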
9
Q
Dopamine & reinforcement learning
A
- We are continuously predicting expected future reward
- We take actions to maximize future reward
- When we receive information that violates our expectations, it generates a reward prediction error
- As a result, we update our predictions, which may alter our actions (a minimal update loop is sketched below)
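A minimal delta-rule sketch (in the style of Rescorla-Wagner / temporal-difference learning) of the predict-observe-update loop described above; the learning rate and reward values are illustrative assumptions:

```python
expected = 0.0  # current prediction of reward
alpha = 0.1     # learning rate: how strongly each RPE updates the prediction

for trial in range(5):
    actual = 1.0                 # a reward is delivered on every trial
    rpe = actual - expected      # reward prediction error
    expected += alpha * rpe      # update the prediction toward the outcome
    print(f"trial {trial}: RPE = {rpe:.3f}, new expectation = {expected:.3f}")

# The RPE shrinks toward zero as the reward becomes fully predicted,
# mirroring the dopamine recordings on the Schultz et al. (1997) card.
```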
10
Q
Dopamine pathways in human brain
A
- Midbrain dopamine neurons project to the basal ganglia, prefrontal cortex, and many other areas
11
Q
Schultz, Dayan, & Montague (1997)
A
- Task: the monkey must touch a lever when a light appears in order to receive drops of juice
- Before learning: dopamine neurons are activated after the delivery of the reward
- After training: onset of the light causes a phasic burst of activity in dopamine neurons
- After training: when the expected reward is omitted, dopamine neurons decrease firing for a short period at the expected time of reward, so activity drops below baseline
12
Q
Reinforcement learning & addiction
A
- Opioids physiologically trigger the release of dopamine
- This dopamine release is misinterpreted as a positive reward prediction error signal
- Thus, opioids “hijack” the reinforcement learning mechanism
13
Q
Iowa gambling task
A
- Goal: win as much money as possible by drawing cards from four decks; the “good” decks yield small wins but even smaller losses, and the “bad” decks yield large wins but even larger losses
- Compared control participants and patients with damage to the ventromedial prefrontal cortex (VMPFC)
- Patients with VMPFC damage could not generate anticipatory emotions
- Patients with VMPFC damage tend to pick from the “bad” decks throughout the session, because the draw of the large wins isn’t cancelled out by anticipatory feelings of dread about the potential for large losses
- Patients overemphasized immediate reward over long-term outcomes (temporal discounting)
14
Q
Prefrontal cortex functions
A
- All central to decision making and the selection of actions
- Maintenance and updating of goals
- Inhibition of prepotent actions
- Shifting between rules, sets, and tasks
- Monitoring and adjusting performance
- Integrating multiple sources of value
15
Q
Rostral and caudal prefrontal cortex
A
- Rostral: Complex, abstract, long timeframe
- Caudal: Simple, concrete, short timeframe