Decision-making Flashcards
reasoning
drawing new conclusions from existing information, a prerequisite for making decisions
a higher-order process that involves other areas of cognition (attention, memory, perception)
premises
estimates about whether certain facts about the world (propositions) are true or not
propositions
any statement that can be true or false
refer to properties of the external world
deduction
conclusion follows logically from premises
using general theories to reason about specific observations (reasoning toward information, making predictions based on theories)
formal system for generating statements that will be true if the rules of the system are followed
induction
generalizing from a set of information and extending it to make an informed guess, involves interpretation (forming a hypothesis based on evidence - could be false)
making predictions for the future based on what happened in the past (could become a heuristic if we over-generalize)
basis of learning - applying learned rules to new situations, language learning, making associations
syllogism
conclusion is derived from two or more propositional statements (deductive reasoning)
premises are presumed to be true, determining if the premises support the conclusion based on logical structure, not content
each premise shares a term with the conclusion and both premises share a middle term that is not present in the conclusion
categorical syllogism
three statements - two premises and a conclusion
fallacy
an invalid syllogism
what is the difference between a valid syllogism and truth?
valid only indicates that the conclusion follows logically from the premises
whether or not a syllogism is true depends on whether the premises are true
decision-making
choosing a specific course of behavioural actions from among multiple possibilities
expected utility theory (EUT)
people will choose the option with the highest expected value (people are rational and pursue the logical course of action related to their goals)
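EUT's core claim can be sketched as a quick expected-value comparison (the payoffs and probabilities below are illustrative, not from the cards):

```python
# Expected value = sum of probability * payoff over all outcomes.
def expected_value(outcomes):
    """Each outcome is a (probability, payoff) pair."""
    return sum(p * v for p, v in outcomes)

safe   = [(1.0, 40)]               # guaranteed $40
gamble = [(0.5, 100), (0.5, 0)]    # 50/50 chance of $100 or nothing

# EUT predicts the rational chooser takes the higher expected value:
# the gamble (EV = 50) over the sure $40.
best = max([safe, gamble], key=expected_value)
```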
what is the problem with the expected utility hypothesis?
people are often irrational and can be induced to make systematic errors
neuroeconomics
combines economics, psychology, neuroscience to understand why humans make the choices they do
belief bias
tendency to rate syllogisms as valid because their conclusions seem believable
bringing in our prior knowledge instead of reasoning based on structure - people have difficulty reasoning with syllogisms in which logical validity interferes with truth (and people are unwilling to accept unbelievable syllogisms)
when are people more likely to fall for the belief bias?
when made to work through syllogisms quickly - when they have more time to evaluate validity, they are less likely to determine it based on believability
atmosphere effect
people rate a syllogism as valid when the qualifiers (some, no, all) in the premises match those in the conclusion
mental models
mental simulation of the world based on the description of the syllogism (visualizations of the sentence to see if it breaks down)
explains why syllogisms with negative statements are difficult to reason with: it is hard to imagine the absence of something
conditional/hypothetical syllogism
conditional claim, rule that relates two propositions (if P, then Q - where P is the antecedent proposition and Q is the consequent proposition)
modus ponens
affirming the antecedent: if the antecedent is true, then the consequent must also be true
(It is Tuesday, therefore I have class)
modus tollens
denying the consequent: if the consequent is false, then the antecedent must also be false
(I do not have class, therefore it is not Tuesday)
affirming the consequent
if the consequent is true, then the antecedent is also true - INVALID
I have class, therefore it is Tuesday (you could have class on other days)
denying the antecedent
if the antecedent is false, then the consequent is also false - INVALID
It is not Tuesday, therefore I do not have class (you could have class on other days)
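The validity of the four conditional forms above can be checked mechanically with a truth table: a form is valid iff the conclusion is true in every row where all premises are true (a minimal sketch; the helper names are my own):

```python
from itertools import product

# "If P then Q" is false only when P is true and Q is false.
implies = lambda p, q: (not p) or q

def valid(premises, conclusion):
    # Valid iff the conclusion holds in every truth assignment
    # where all premises hold.
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

modus_ponens  = valid([implies, lambda p, q: p],     lambda p, q: q)      # valid
modus_tollens = valid([implies, lambda p, q: not q], lambda p, q: not p)  # valid
affirm_conseq = valid([implies, lambda p, q: q],     lambda p, q: p)      # invalid
deny_anteced  = valid([implies, lambda p, q: not p], lambda p, q: not q)  # invalid
```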
confirmation bias
tendency to look for information that supports a hypothesis rather than evidence that falsifies it
generalization
type of inductive reasoning, we extrapolate about a limited set of observations to draw a conclusion about a broader population/category
statistical syllogism
type of induction, we infer something about an individual based on observations from a group
argument from analogy
type of induction, we assume that two things share a set of properties, so they must share a different property
one-shot learning
a concept is learned from a single example (requires inductive reasoning)
Bayesian inference
mathematical model for incorporating existing beliefs (prior) with new data to make an educated inference (may be unconscious)
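A minimal sketch of Bayes' rule for a single binary hypothesis (the base rate and test accuracies are invented for illustration):

```python
def posterior(prior, p_data_given_h, p_data_given_not_h):
    """P(H | data) by Bayes' rule for a binary hypothesis H."""
    p_data = p_data_given_h * prior + p_data_given_not_h * (1 - prior)
    return p_data_given_h * prior / p_data

# Prior belief: 1% base rate. New data: a test with a 90% hit rate
# and a 5% false-positive rate comes back positive.
p = posterior(0.01, 0.90, 0.05)   # ~0.15, far below the 90% hit rate
```

Note how strongly the small prior pulls the answer down; jumping straight to 90% is the base-rate neglect covered later in the deck.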
heuristics
inferential system not based on mathematics or logic, but mental shortcuts that allow us to skip careful deliberation of evidence in order to draw an inference
System 1 vs. System 2
S1: quick, automatic, relies on heuristics - can result in impulsive, emotional, optimistic judgments (following first impressions and intuition), uses the limbic system
S2: deliberate and logical, takes cognitive resources - turning to S2 to make decisions can help avoid mistakes, uses the frontal cortex
availability heuristic
tendency to rely on information that comes to mind more quickly (salience) when trying to make a decision (ease of retrieval = judged to be more frequent)
what makes events more salient for heuristics?
examples are personal (do you know someone who died in a car accident = more frequent)
reported in the news (more coverage = more frequent, media tends to report sensational stories)
affect heuristic
tendency to overestimate the frequency of events which generate strong emotional reactions (like a sense of dread - people are afraid of sharks because they inspire dread)
anchoring and adjustment heuristic
tendency for people to focus and rely on initial pieces of information
even if the initial information is unrelated (spinning a wheel, then guessing the number of countries)
impact of the starting point on a Likert scale
representativeness heuristic
tendency to rely on the fact that a person/object conforms to a specific category while neglecting other information/reasoning
related to stereotypes, overuse of schemas, prior knowledge which we use to infer something about an exemplar - use similarity to group to judge membership
conjunction fallacy
people assume that two specific conditions are more probable than either single condition
based on the representativeness heuristic (because the exemplar matches our schema, we violate a basic probability rule)
base-rate neglect
people ignore the underlying probability of an event in favour of other evidence
bias based on the representativeness heuristic
cultural cognition
people hold beliefs about risks consistent with their social and moral values
results of experiment of cultural cognition by Kahan et al.
those with hierarchical and individualistic views tended to rate both climate change and nuclear energy production as smaller risks than those with egalitarian/communitarian views did (their worldviews also influenced what they thought the scientific assessment of these issues was)
people tend to ignore contradicting information or become even more entrenched in their views
optimism bias
people tend to overestimate the probability of positive events happening to them
not present in depressed people (and related to severity of symptoms - mild = no optimism bias, severe = pessimism bias)
evidence for the optimism bias
people underestimated the likelihood of being infected by COVID compared to the general population (and this was related to their underusing preventative measures)
loss aversion
people prefer to avoid losing something vs. not gaining something of equal value (the amount you could win has to be almost double what you could stand to lose for people to take the risk)
endowment effect
people place a higher value on things they own over things they don’t own yet
IKEA effect
tendency for people to value items they created/built over items they bought or received
status quo bias
tendency to leave things as they are rather than making a change
framing effects
people make decisions based on how the question is framed (positive - people saved or negative - people die) - shows irrationality/inconsistency in decision-making (if classical economic theories are correct, people should make the same decision no matter the framing)
people are risk-averse when options are framed as gains (go with the safe option)
people are risk-seeking when options are framed as losses (can tolerate uncertainty)
integral emotions
those directly related to the decision (like feeling anxious about the decision to ask someone out) - can be useful but can also lead to sub-optimal choices
incidental emotions
those not related to the decision, but happen to be the state of the person at the time of making the decision
basis of retail therapy
People are more likely to make a purchase when in a negative mood - and this purchase can have a lasting positive effect
effect of negative incidental emotions on decision-making
People in a negative mood are more motivated to change something - if you own something, you want to get rid of it and if you don’t own something, you want to get it
ultimatum game and results
proposer must offer a way to split a sum of money to the responder; in case of rejection, both get nothing
rationally, the responder should accept any proposal (any money is better than none), but in reality, they reject any proposal below a 7:3 ratio (the lowball offer is so unfair that they want to punish the proposer)
Strong emotions (outrage) affect decision-making
brain regions in decision-making
sensation and perception, STM and LTM, attention
ventromedial prefrontal cortex
ventromedial prefrontal cortex (vmPFC) role in decision-making
lesions lead to the person not taking into account long-term effects of decisions (they are able to reason about socially and morally correct behaviours, but do not choose them)
important in delay gratification
somatic marker hypothesis
- vmPFC is involved in associating emotional reactions with certain behaviours (vmPFC could elicit the emotional states that might occur based on the outcome of a choice)
- the emotion will become an integral part of what decision you make (without the vmPFC you might make more rash, impulsive, riskier decisions because of a stunted emotional response)
delay gratification and associated research
put off short-term reward for the long-term reward
Stanford Marshmallow Experiment: have one marshmallow now or two later - ability to wait could predict better academic success, less drug use, lower levels of aggression and rejection by peers
nudge theory and examples of how to use biases
encouraging people to make certain behavioural decisions by introducing small changes in their environment
making enrolment opt-out instead of opt-in - using the status quo bias so people stay with the default choice
loss aversion: adding a surcharge for using a grocery store bag is more effective than giving a discount to people who bring their own
bias
deviations from rationality (errors) that are caused by using heuristics - systematically inaccurate choices that don’t reflect the current situation
three categories of heuristics
- bias how we interpret information
- bias how we judge frequency of events
- bias how we make predictions
regression to the mean
when a process is somewhat random (doesn’t have a perfect correlation), extreme values will get closer to the mean when measured a second time
related to reward and punishment in reinforcement learning
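Regression to the mean falls out of any score that mixes a stable component with independent noise; a small simulation under that assumed model (illustrative parameters):

```python
import random

random.seed(0)
skill = [random.gauss(0, 1) for _ in range(10_000)]   # stable component
test1 = [s + random.gauss(0, 1) for s in skill]       # score = skill + noise
test2 = [s + random.gauss(0, 1) for s in skill]       # fresh noise on retest

# Select the top 5% of scorers on test 1 ...
cutoff = sorted(test1)[int(0.95 * len(test1))]
top = [i for i, t in enumerate(test1) if t >= cutoff]

# ... and compare that group's averages across the two tests.
mean1 = sum(test1[i] for i in top) / len(top)
mean2 = sum(test2[i] for i in top) / len(top)
# mean2 < mean1: the extreme group drifts back toward the overall mean.
```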
illusory correlation
when events co-occur, we tend to see a causal relationship between them even when there is none
bounded rationality and its effect on people
people are limited by environmental constraints (time) and individual constraints (cognitive resources, working memory, attention)
so people are satisficers: look for ‘good enough’ solutions to conserve cognitive resources (basis of why we use heuristics)
ecological rationality
sees heuristics not as ‘good enough’ (bounded rationality), but as the optimal approach - they were developed under certain conditions because they give rise to better solutions than other strategies
example of ecological rationality
investing - the heuristic 1/N (equally dividing assets) is a better strategy than other complex optimization algorithms (which take longer, more effort, and don’t necessarily produce better results)
Which is not true of heuristics and biases?
(a) the conjunction fallacy arises because of the availability heuristic
(b) regression to the mean only happens when there is a non-perfect correlation
(c) heuristics can sometimes give the right answer
(d) people use heuristics because we are boundedly rational
A: the conjunction fallacy arises from people using the representativeness heuristic
types of decision-making
perceptual: objective (externally-defined) criteria for making your choice
value-based: subjective (internally-defined) criteria for making your choice (depends on motivational state and goals)
risk
taking an action despite the outcome being uncertain
specific to value-based decision-making
ambiguity
you have incomplete information, so you don’t know the specific consequences
consequences of individual differences in risk-taking
high: addiction and impulsivity
low: stagnant living
most people are risk averse
three risk profiles
risk averse: has a positive risk premium
risk neutral: has zero risk premium
risk seeking: has a negative risk premium
risk premium
(expected gains of the risk option) - (expected gains of the certain option)
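The formula above, applied to the three risk profiles with illustrative numbers (the $40/$50/$55 certain options are invented):

```python
def risk_premium(risky_ev, certain_ev):
    # (expected gains of the risky option) - (expected gains of the certain option)
    return risky_ev - certain_ev

gamble_ev = 0.5 * 100 + 0.5 * 0   # 50/50 shot at $100 -> EV = $50

averse  = risk_premium(gamble_ev, 40)   # accepts a sure $40: premium = +10
neutral = risk_premium(gamble_ev, 50)   # indifferent at $50:  premium = 0
seeking = risk_premium(gamble_ev, 55)   # needs a sure $55:    premium = -5
```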
what is irrational about risk?
risk preferences aren’t irrational (expected utility theory can account for these individual differences), but people are inconsistent in their preferences which is irrational (a bias)
behavioural economics
arose because of the discrepancy between how people should act (classical economic theories) and the irrational decisions we actually make - looks at how people do act
the framing effect
inconsistent risk preference when risks are framed in terms of gains or losses
risk averse when options framed as gains (safety option)
risk seeking when options are framed as losses (tolerate uncertainty)
prospect theory
an alternative to expected utility theory; stems from loss aversion and rests on two functions:
shape of the utility function (how we process losses and gains)
shape of the probability weighting function (unlikely vs. likely events, how we understand probabilities and risk)
utility
subjective value assigned to an object (context-dependent) - assigned as a function of current state (reference point), not an absolute value - deviations from the reference point determine risk preferences
utility function
describes how people map money to satisfaction - steeper for losses than gains (the dissatisfaction of losing $1 is greater than the satisfaction of gaining $1)
an extra dollar doesn’t cause the same amount of satisfaction based on the reference point (reference is $0, extra $1 causes a lot vs. reference is $1M, extra $1 doesn’t matter)
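A prospect-theory-style value function makes this concrete; the exponent and loss multiplier below are common textbook estimates (alpha ≈ 0.88, lambda ≈ 2.25), not numbers from this deck:

```python
ALPHA = 0.88    # diminishing sensitivity: each extra dollar matters less
LAMBDA = 2.25   # loss aversion: losses loom about twice as large as gains

def value(outcome, reference=0):
    x = outcome - reference              # gains/losses measured from the reference point
    if x >= 0:
        return x ** ALPHA                # concave branch for gains
    return -LAMBDA * ((-x) ** ALPHA)    # steeper branch for losses

# Losing $1 hurts more than gaining $1 pleases: |value(-1)| = 2.25 > value(1) = 1.0
```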
probability weighting function
we overestimate the likelihood of extreme events (dying in a car crash) and underestimate the frequency of likely events (dying of cancer)
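A standard one-parameter weighting function (Tversky and Kahneman's form with gamma ≈ 0.61, a textbook value rather than one from the cards) reproduces this over/underweighting:

```python
GAMMA = 0.61

def weight(p):
    """Decision weight assigned to objective probability p."""
    num = p ** GAMMA
    return num / (num + (1 - p) ** GAMMA) ** (1 / GAMMA)

# Rare events are overweighted, common events underweighted:
# weight(0.01) is about 0.055 > 0.01, while weight(0.90) is about 0.71 < 0.90
```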
Fourfold Pattern
depicts risk preferences based on probability weighting and utility functions
high probability + framed in terms of losses = risk seeking
low probability + framed in terms of losses = risk averse
high probability + framed in terms of gains = risk averse
low probability + framed in terms of gains = risk seeking
neuroimaging evidence for framing effects
Ps made choice between options framed in terms of gains and losses (risky outcome and safe option)
activity in the frontal cortex when you don’t fall into the framing trap vs. activity in the amygdala when you do - emotional response may underlie framing effects
prediction error
difference between what you predicted would happen and what actually happened
can be positive (unexpected good outcome) or negative (unexpected bad outcome) and they drive reinforcement learning
link between prediction errors and decision-making
unexpected outcomes (in sports or weather - win/loss, sun/clouds) changed affect accordingly - and mood predicted risky decision-making in gambling (happy = more likely to gamble)
which of the following about prospect theory is false?
(a) It tells us how people do act instead of how they should act
(b) the asymmetry of the utility function is made to account for the framing effect
(c) people tend to overestimate the probability of rare events and underestimate the probability of common events
(d) utility is an immutable property and is reference-independent
D - utility is context-dependent and subject to anchoring and adjustment heuristics (reference point)
loss aversion in neuroscience
people with bilateral amygdala lesions were tested on a gamble that people usually decline because of loss aversion (equal chance of losing a small amount or winning a larger amount) - the lesion patients took the gamble
amygdala may mediate loss aversion (removing a fear of poor outcomes)
Someone goes to a park every day and sees dogs wearing top hats. They conclude that all dogs that visit this park wear top hats. This is an example of - reasoning because you are reasoning - information
(a) inductive; toward
(b) inductive; from
(c) deductive; toward
(d) deductive; from
B - working from observations to generalize
types of syllogisms
All statements: “All A are B”
Negative statements: “No A is B” = “no B is A”
Some statements: “Some A are B” (at least one, possibly all)
which types of syllogisms are difficult to reason with
Negative statements: mental model theory (difficult to imagine the absence of something)
Some statements: they can represent many structures
omission bias + trolley problem
tendency to judge harmful actions as worse than equally harmful inactions (withholding feels less bad than doing, partly because inaction is harder to classify) - even if the outcome is the same
example of difficulty reasoning with negative information
people have difficulty choosing the “action” option in the trolley problem - but people with frontal lobe damage have no trouble with it
The atmosphere effect occurs when people -
(a) determine what conclusions follow from certain statements
(b) visualize sentences and mentally explore them
(c) rate a conclusion as true when there is similar phrasing in the premises and conclusion
C is correct
A = deductive reasoning
B = mental model theory
Wason task
people asked to test a conditional statement by determining which cards to flip over
they tend to follow the confirmation bias (choosing cards that could confirm the rule) rather than the falsification principle (looking for falsifying information)
familiarity effects on Wason task
Wason task with real-world situations = people are better at choosing the right cards
Heuristics are to biases as - are to - ?
(a) mental shortcuts; rules
(b) mental shortcuts; outcomes
(c) outcomes; mental shortcuts
(d) specification; generalization
B
Ellen is afraid of sharks but not of drowning. Her perceived risk of sharks will be - prevalence rates and her perceived risk of drowning will be - prevalence rates.
(a) lower than; higher than
(b) higher than; lower than
(c) the same as; the same as
(d) the same as; lower than
B - higher media coverage of shark attacks than drownings and affect heuristic
effect of the availability heuristic on our perception of our challenges
we think we’ve had it harder than others because we can remember the challenges we’ve had to overcome but not others’ = stronger availability
gambler’s fallacy
false belief that a predicted outcome of an independent event depends on past outcomes (assuming sequential events are linked when they aren’t)
people keep investing after losses on the stock market
belief in a “winning streak”
loan officers and U.S. judges more likely to deny applications after granting the previous one, and vice versa
how does the availability heuristic bias our judgments?
(a) makes judgments more accurate when information is available
(b) we confuse the frequency of what we can remember with actual occurrences
(c) facilitates ambiguous decisions
(d) we always recall things more easily when they occur frequently
B
post-mortem vs. pre-mortem
post: learning from failures
pre: anticipate and prevent mistakes by putting a plan in place