Decision Making Flashcards
Decision
a choice one makes after considering different alternatives
Expected Value (Rational Choice)
● Average outcome if a scenario is repeated many times
● Calculated using probabilities and values of possible outcomes
● Example: a gamble
● 75% chance of winning $200,
● 25% chance of winning $0.
● EV = (.75 × $200) + (.25 × $0) = $150
In order to maximize average outcome _______
Choose option with greatest expected value
● Option A: You win $125.
● EV = 1.0 × $125 = $125
● Option B: 25% chance you win $400, 75% chance you win $0.
● EV = (.25 × $400) + (.75 × $0) = $100
● Therefore, you should choose Option A.
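A minimal sketch of the same arithmetic in Python, covering both the $150 gamble and the A-versus-B comparison above (the expected_value helper is just illustrative, not from the notes):

```python
def expected_value(outcomes):
    """Sum of probability x payoff over all possible outcomes."""
    return sum(p * v for p, v in outcomes)

# The gamble from the definition card: 75% chance of $200, 25% chance of $0
print(expected_value([(0.75, 200), (0.25, 0)]))      # 150.0

# Option A (sure $125) vs. Option B (25% chance of $400)
ev_a = expected_value([(1.0, 125)])                  # 125.0
ev_b = expected_value([(0.25, 400), (0.75, 0)])      # 100.0
print("Choose A" if ev_a > ev_b else "Choose B")     # Choose A
```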
Advantages of using Expected Value
● Clear prescription for “correct” choices
● Leads people, on average, to maximize monetary gains given what they know about the world
● Keeps people’s decisions internally consistent
Problems with using Expected Value
● Difficult to apply for non-monetary decisions
● Doesn’t explain actual choices by actual people
Kahneman & Tversky Developed ______ Theory
Prospect
What is Prospect Theory?
A descriptive approach to decision making in which we focus on HOW we decide
What is Rational Choice Theory?
A prescriptive approach to decision making in which we focus on how we SHOULD decide, using expected values
Behavioural Economics
People do not make decisions based on expected values, probabilities, and absolute outcomes.
People make decisions based on subjective utility, decision weights, and relative outcomes.
Utility
Usefulness or desirability of an outcome
Loss Aversion
Refers to people’s tendency to prefer avoiding losses to acquiring equivalent gains: it is better to not lose $5 than to find $5.
● Losses loom larger than gains
● Losing $20 feels worse than winning $20 feels good
Subjective Utility - prospect
Individual differences in sensitivity to loss
Utility Function (see notes Decision 1 for graph)
- Diminishing marginal utility: the utility function flattens as amounts grow (each additional dollar adds less utility)
- Loss aversion: the utility function is steeper for losses than for gains
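As a rough sketch, the standard prospect-theory value function captures both properties; the parameters below are the ones Tversky & Kahneman (1992) estimated (curvature ≈ 0.88, loss multiplier ≈ 2.25), and individuals vary around them:

```python
def subjective_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: flattens for larger gains
    (diminishing marginal utility) and is steeper for losses
    (loss aversion)."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

print(subjective_value(20))    # ~14.0: how good winning $20 feels
print(subjective_value(-20))   # ~-31.4: losing $20 looms larger
```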
Decision Weight - prospect
People transform objective probability into subjective decision weights
Small probabilities (but greater than 0%) are overweighted
● 1% feels like much more than 0%
● 51% feels about the same as 50%
Large probabilities (but less than 100%) are underweighted
● 99% feels like a lot less than 100%
● 50% feels about the same as 51%
Decision Weight Graph (see notes Decision part 1)
- Underweighting of large probabilities
- Overweighting of small probabilities
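One common way to model this transformation is Tversky & Kahneman’s (1992) probability weighting function, sketched below; γ ≈ 0.61 is their estimate for gains and is used purely for illustration:

```python
def decision_weight(p, gamma=0.61):
    """Maps objective probability p to a subjective decision weight:
    small probabilities are overweighted, large ones underweighted."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

for p in (0.0, 0.01, 0.5, 0.99, 1.0):
    print(p, round(decision_weight(p), 3))
# 0.01 -> ~0.055 (1% feels like much more than 0%)
# 0.99 -> ~0.912 (99% feels like a lot less than 100%)
```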
Influence of ‘gain’ framing on decision making
When outcomes are framed as gains, people tend to select the safe option:
A. You gain $200. Net: $200
instead of
B. 33% chance you gain $600. Net: $600
66% chance you gain $0. Net: $0
Influence of ‘loss’ framing on decision making
When the same outcomes are framed as losses (starting from a $600 endowment, as the Net column shows), choices split roughly 50/50 between:
A. You lose $400. Net: $200
and
B. 33% chance you lose $0. Net: $600
66% chance you lose $600. Net: $0
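A quick arithmetic check (assuming, as the Net column implies, a $600 starting endowment in the loss version) shows the two problems are identical in net outcomes; only the reference point changes:

```python
# Gain frame: reference point $0.  Loss frame: reference point $600.
gain_safe   = 0 + 200                    # "You gain $200"  -> net $200
loss_safe   = 600 - 400                  # "You lose $400"  -> net $200
gain_gamble = [0 + 600, 0 + 0]           # nets $600 or $0
loss_gamble = [600 - 0, 600 - 600]       # nets $600 or $0

# Same money either way -- yet the two frames elicit different choices.
assert gain_safe == loss_safe and gain_gamble == loss_gamble
```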
Frame Effect - prospect
People make decisions based on gains and losses relative to a point of reference, not based on absolute outcomes.
● Changing the way a question is asked to create a different point of reference leads to different valuations and thus different choices.
People make decisions based on which three individual factors?
Subjective utilities
● Diminishing marginal utility & loss aversion
Decision weights
● Underweight large probabilities & overweight small probabilities
Relative outcomes
● Reference dependence, gain & loss framing
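Putting the three pieces together, here is an illustrative (not definitive) prospect-theory evaluation of the framing example, reusing the value and weighting functions sketched earlier:

```python
def subjective_value(x, alpha=0.88, lam=2.25):
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def decision_weight(p, gamma=0.61):
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def prospect_value(outcomes):
    """Weight each outcome's subjective value by its decision weight;
    outcomes are (probability, gain or loss relative to the reference point)."""
    return sum(decision_weight(p) * subjective_value(x) for p, x in outcomes)

print(prospect_value([(1.0, 200)]))                 # ~106:  sure gain
print(prospect_value([(0.33, 600), (0.66, 0)]))     # ~93:   gain-frame gamble
print(prospect_value([(1.0, -400)]))                # ~-438: sure loss
print(prospect_value([(0.33, 0), (0.66, -600)]))    # ~-319: loss-frame gamble
# The model prefers the sure thing in the gain frame (106 > 93)
# but the gamble in the loss frame (-319 > -438), matching the cards above.
```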
Reinforcement Learning
We perform actions in the world and experience the resulting outcomes as good (reward) or bad (lack of reward or punishment).
How do we make predictions and compare the real outcome with the predicted outcome?
“actual” outcome vs. “expected” outcome
- “violation of expectations” -> “adjustment of behavior”
- meeting expectations -> maintenance of behavior
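A minimal sketch of this predict-compare-adjust loop as a simple error-driven (Rescorla-Wagner/delta-rule) update; the 0.1 learning rate is arbitrary:

```python
def update_expectation(expected, actual, learning_rate=0.1):
    """Compare the actual outcome with the expected one and adjust.
    A nonzero prediction error ("violation of expectations") shifts the
    prediction; a zero error leaves behavior unchanged."""
    prediction_error = actual - expected
    return expected + learning_rate * prediction_error

expected = 0.0
for trial in range(5):
    expected = update_expectation(expected, actual=1.0)
    print(round(expected, 3))   # 0.1, 0.19, 0.271, ... climbs toward 1.0
```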
Which part of the brain measures reward/pleasure?
The midbrain (though the midbrain does more than just process reward)
● Activity of midbrain dopamine neurons is related to reward
● But dopamine neurons do more than simply report occurrence of reward
● They code deviations from predictions about time and magnitude of reward