Reasoning and Decision-Making Flashcards
Probability judgments often deviate from the dictates of probability theory
What are the 2 theoretical approaches that explain this?
1) Heuristics and biases
2) Ecological rationality
Apparent biases may be rational responses given the ecology of the human decision-maker
This is known as…?
Ecological rationality
Simplifying strategies that reduce effort but are prone to bias/error
This is known as…?
Heuristics and biases
Are people generally good or bad at working with probability?
Bad
Define Heuristics and Biases
When people base their intuitive probability and frequency judgments on simple, experience-based strategies (“heuristics”) which are right (or good enough) most of the time
But which will sometimes lead to biased or illogical responses
When people base their intuitive probability and frequency judgments on simple, experience-based strategies (“heuristics”) which are right (or good enough) most of the time
But which will sometimes lead to biased or illogical responses
This is known as…?
Heuristics and Biases
What are the 3 types of Heuristics and Biases?
1) Availability
2) Representativeness
3) Anchoring
1) Availability
2) Representativeness
3) Anchoring
Are these part of Ecological rationality or Heuristics and Biases?
Heuristics and Biases
What view falls within a broader view of cognition that posits two systems:
1) A fast, associative, automatic “System 1”
2) A slower, deliberative “System 2” that uses sequential processing and rule-based operations.
Heuristics and Biases
Heuristics and biases falls within a broader view of cognition that posits two systems
What are they?
1) A fast, associative, automatic “System 1”
2) A slower, deliberative “System 2” that uses sequential processing and rule-based operations.
Describe the Availability Heuristic
Judgments are based on the ease with which relevant instances come to mind
Simply = If an event happens a lot then it will be easy to think of many past instances, so basing judgments on availability is sensible.
Judgments are based on the ease with which relevant instances come to mind
Simply = If an event happens a lot then it will be easy to think of many past instances, so basing judgments on availability is sensible.
This is known as…?
a. Availability
b. Representativeness
c. Anchoring
a. Availability
Overestimating the chances that a shark will attack you when you swim in the ocean is an example of…?
a. Availability
b. Representativeness
c. Anchoring
a. Availability
Judging frequency by ease of recall, because things that happen frequently are easier to bring to mind
This is known as…?
a. Availability
b. Representativeness
c. Anchoring
a. Availability
What are the 2 ways the availability heuristic can lead to bias?
- Our experience of past events does not reflect their true frequencies
- Events are easy to recall for some reason other than their frequency of occurrence
- Our experience of past events does not reflect their true frequencies
- Events are easy to recall for some reason other than their frequency of occurrence
These are sources of bias in…?
a. Availability
b. Representativeness
c. Anchoring
a. Availability
Describe the study by Lichtenstein et al. (1978) in estimating causes of death
List 2 points
1) Ps estimated the number of US deaths per year due to 40 causes
2) The causes ranged from very rare (e.g., botulism, with one death per 100 million people) to very common (e.g., stroke: 102 000 deaths per 100 million people)
Describe the results of the study by Lichtenstein et al. (1978) in estimating causes of death
Ps over-estimated rare causes and underestimated common causes
Ps over-estimated rare causes and underestimated common causes of death
Why does this occur?
Because rare events/causes of death get a lot of attention in the media
But common causes of death happen so frequently that the media rarely covers them (it would be too repetitive), so they are less available in memory
Describe the study by Tversky & Kahneman (1973) on the effect of memory
List 3 points
1) Ps listened to a list of 39 names comprising either:
19 famous women and 20 less famous men
OR
19 famous men and 20 less famous women
2) Some Ps had to write down as many names as they could recall
3) Others were asked whether the list contained more names of men or of women
Describe the results of the study by Tversky & Kahneman (1973) on the effect of memory
List 2 points
1) In the recall task, Ps retrieved more of the famous names (12.3 out of 19) than the non-famous names (8.4 out of 20). That is, famous names were more available.
2) 80 out of 99 (81%) Ps judged the gender category that contained more famous names to be more frequent.
(e.g., the people given a list of 19 famous men and 20 less famous women reported that there were more men than women in the list)
1) In the recall task, Ps retrieved more of the famous names (12.3 out of 19) than the non-famous names (8.4 out of 20). That is, famous names were more available.
2) 80 out of 99 (81%) Ps judged the gender category that contained more famous names to be more frequent.
(e.g., the people given a list of 19 famous men and 20 less famous women reported that there were more men than women in the list)
What do these results suggest?
List 2 points
1) It seems that people made their proportion estimates by assessing the ease with which examples of each category come to mind
2) When one category was easier to retrieve (via the fame manipulation) it was judged more frequent, even when it was actually experienced less often
When one category was easier to retrieve (via the fame manipulation) it was judged more frequent, even when it was actually experienced less often
This is an example of…?
a. Availability
b. Representativeness
c. Anchoring
a. Availability
When people mistakenly judge the conjunction of two events (“A and B”) to be more probable than one of the events alone
This is known as…?
Conjunction fallacy
When subjective probability estimates violate this principle: the probability of event “A” cannot be less than the probability of the conjunction “A and B”
This is known as…?
Conjunction fallacy
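The principle being violated can be stated compactly (standard probability theory, independent of any particular study): every outcome in which both A and B occur is also an outcome in which A occurs, so the conjunction can never be the more probable event.

$$
(A \cap B) \subseteq A \quad\Rightarrow\quad P(A \cap B) \le P(A)
$$

In the word-counting item described next, seven-letter words ending in ING are a subset of seven-letter words with “n” as the penultimate letter, so the first count can never exceed the second.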
Describe Tversky and Kahneman’s (1983) study on conjunction fallacy
List 2 points
1) Ps were asked: In four pages of a novel (about 2,000 words), how many words would you expect to find that have the form:
a. seven-letter words that end with ING
b. seven-letter words with “n” as the penultimate letter
2) All ING words have N as the penultimate letter, so the number of N words must be at least as large as the number of ING words.
Describe the results of Tversky and Kahneman’s (1983) study on conjunction fallacy
List 2 points
1) Ps estimated, on average, 13.4 words ending in ING
2) But Ps only estimated 4.7 words with N as the penultimate letter
1) Ps estimated, on average, 13.4 words ending in ING
2) But Ps only estimated 4.7 words with N as the penultimate letter
What do these results suggest?
List 2 points
1) People may be basing their judgments on the mental availability of relevant instances
2) It is easy to think of “ing” words (e.g., by thinking of words that rhyme) but we are less accustomed to retrieving words based on their penultimate letter, so N words are harder to retrieve and thus seem rarer.
Under what circumstance will the conjunction fallacy problem disappear?
If/when participants apply a more systematic mental search strategy
Define the representativeness heuristic
Judgments are based on the extent to which an outcome (or item) is representative of the process or category in question
Simply = Judgments of probability are based on assessments of similarity
Judgments are based on the extent to which an outcome (or item) is representative of the process or category in question
Simply = Judgments of probability are based on assessments of similarity
This is known as…?
The representativeness heuristic
According to the representativeness heuristic, when estimating a probability – for example, how likely it is that a person belongs to a particular category or the probability that an observed sample was drawn from a particular population
What do people assess?
The similarity between the outcome and the category (or between the sample and the population)
When estimating a probability – for example, how likely it is that a person belongs to a particular category or the probability that an observed sample was drawn from a particular population – people assess the similarity between the outcome and the category (or between the sample and the population)
This is known as…?
a. Availability
b. Representativeness
c. Anchoring
b. Representativeness
Judgments of probability are based on assessments of similarity
This is known as…?
a. Availability
b. Representativeness
c. Anchoring
b. Representativeness
“an assessment of the degree of correspondence between a sample and a population, an instance and a category, an act and an actor or, more generally, between an outcome and a model.” (Tversky & Kahneman, 1983, p. 295).
This applies to…?
a. Availability
b. Representativeness
c. Anchoring
b. Representativeness
Suppose that you meet a new person at a party and try to estimate the probability that they have tried internet dating.
The idea is that you base your judgment on the similarity between the person and your stereotype of internet-daters – that is, on the extent to which the person is representative of the category “people who have tried internet dating”
This is an example of…?
a. Availability
b. Representativeness
c. Anchoring
b. Representativeness
Describe Kahneman and Tversky’s (1973) study on base rate neglect
List 5 points
1) Ps were told that a panel of psychologists had interviewed a number of engineers and lawyers and produced character sketches of each person.
2) They were told that 5 such descriptions had been randomly selected and that they should rate, from 0-100, the likelihood that each sketch described one of the engineers
3) Ps assess the extent to which a given description (e.g., the sketch of “Jack”) is similar to each of the two categories – lawyers and engineers
(To the extent that Jack is more similar to the stereotypical engineer, he is more likely to be judged an engineer)
4) Because this assessment of similarity is independent of the prevalence of lawyers and engineers in the population, the resulting probability judgment is independent of the base rates for these two professions
5) Some participants were told that the population from which the descriptions were drawn consisted of 30 engineers and 70 lawyers. Others were told that the population comprised 70 engineers and 30 lawyers. That is, Kahneman and Tversky manipulated the base rates for the two possible outcomes.
Describe the results of Kahneman and Tversky’s (1973) study on base rate neglect
1) On average, Ps judged the probability that Jack is an engineer to be about 55% when told that the population from which the descriptions were drawn consisted of 70 engineers and 30 lawyers
2) On average, Ps judged the probability that Jack is an engineer to be about 50% when told that the population consisted of 30 engineers and 70 lawyers
Define base rate neglect
People’s tendency to ignore relevant statistical information in favor of case-specific information
1) On average, Ps judged the probability that Jack is an engineer to be about 55% when told that the population from which the descriptions were drawn consisted of 70 engineers and 30 lawyers
2) On average, Ps judged the probability that Jack is an engineer to be about 50% when told that the population consisted of 30 engineers and 70 lawyers
What do these results suggest?
The personality description might provide some information about Jack’s likely occupation, but this should be combined with information about the number of engineers and lawyers in the population from which his description was randomly drawn.
However, people largely ignored these base probabilities and judged based only on the description of Jack
People’s tendency to ignore relevant statistical information in favor of case-specific information
This is known as…?
Base rate neglect
Describe Kahneman and Tversky’s (1973) study on base rate neglect part 2
List 4 points
1) Ps were given a list of 9 academic subject areas (e.g., computer science, medicine, law, humanities).
2) The prediction group was told that the sketch of Tom was prepared by a psychologist during Tom’s final year in high school, and that Tom is now a graduate student. They were asked to rank the 9 academic subjects by the probability that Tom is specialising in that topic, based on his description
Simply = Rank each subject by likelihood Tom is specialising in it
3) The base-rate group was not shown the Tom W. sketch but “consider[ed] all first year graduate students in the US today” and indicated the percentage of students in each of the 9 subject areas – that is, they estimated the base rates for each subject area.
Simply = Estimate what percentage of all students study this topic
4) The representativeness group ranked the 9 subject areas by the degree to which Tom W. “resembles a typical graduate student” in that subject area
Simply = Rank each subject by how representative Tom is of a typical student
Describe the results of Kahneman and Tversky’s (1973) study on base rate neglect part 2
List 2 points
1) Across the 9 subjects, probability judgments (mean likelihood rank) were strongly positively correlated with representativeness judgments (mean similarity rank; r = .97) but negatively correlated with mean estimated base rate (r = -.65)
2) Predictions were based on how representative people perceive Tom W. to be of the various fields, and ignored the prior probability that a randomly-selected student would belong to those fields (base rate neglect).
Judgments based on representativeness will be largely independent of …?
Base rates
Judgments based on representativeness will be largely dependent on base rates
True or False?
False
So, judgments based on representativeness will be largely independent of base rates
Define the Anchor-and-Adjust heuristic
Judgments are produced by adjusting from a (potentially irrelevant) starting value (anchor), and these adjustments are often insufficient
Judgments are produced by adjusting from a (potentially irrelevant) starting value (anchor), and these adjustments are often insufficient
This is known as…?
Anchor-and-Adjust heuristic
A pervasive cognitive bias that causes us to rely too heavily on information that we received early in the decision-making process
This is known as…?
Anchoring
What is anchoring?
A pervasive cognitive bias that causes us to rely too heavily on information that we received early in the decision-making process
Describe Tversky and Kahneman’s (1974) study on anchoring
List 2 points
1) Experimenters spun a wheel of fortune that landed on 10 (a low anchor) for one group of participants and on 65 (a high anchor) for another group.
2) Ps were asked whether the percentage of African countries in the United Nations was more or less than the anchor, and then asked for their best estimate of the true value.
e.g. Is the percentage of African countries in the UN:
a. More or less than 65%? What is your best estimate?
b. More or less than 10%? What is your best estimate?
Describe the results of Tversky and Kahneman’s (1974) study on anchoring
List 2 points
1) The median estimate was 25% in the low anchor condition (more or less than 10%) and 45% in the high anchor condition (more or less than 65%)
2) Ps judgments were pulled towards the anchor values
Simply = Random anchor values can influence your estimates
Describe Chapman and Johnson’s (1999) study on anchoring
List 2 points
1) Ps wrote down the last 2 digits of their social security number and treated it as a probability (e.g., “34%”).
2) Ps were asked whether the probability that a Republican would win the 1996 US Presidential Election was more or less than this probability (e.g. 34%), prior to giving their best estimate of the true probability
Describe the results of Chapman and Johnson’s (1999) study on anchoring
The larger the anchor number, the larger the best estimate, with a correlation of r = 0.45.
e.g. 14% vs 54% = Ps with 54% as their anchor made larger estimates
How do we use an anchor when forming a judgment?
As an initial estimate of the target value, from which we adjust in the right direction
We use the anchor as an initial estimate of the target value and adjust from that starting point in the right direction
Why does this produce biased judgments?
Because the adjustment is effortful, we often adjust insufficiently and so our judgment is biased towards the anchor value
If your anchor value is above the true value, your adjusted estimate value would be…?
Above the true value
If your anchor value is below the true value, your adjusted estimate value would be…?
Below the true value
What 3 things have little effect on the size of anchoring effects?
1) Incentives
2) Warnings
3) Cognitive capacity
What are the 3 mechanisms that contribute towards anchoring effects?
1) The idea that consideration of the anchor as a possible value for the estimated quantity activates relevant semantic knowledge
(e.g., when considering 12% as a possible probability for the probability of a Republican win, we call to mind relevant info about the state of the economy, public perceptions of the candidates etc; this activated knowledge then shapes or biases our final estimate)
2) The idea that an anchor value changes our perception of the magnitude of other candidate values (e.g., if we’ve just been thinking about a 12% probability, 50% seems quite large; if we’ve been considering 88%, 50% seems quite small).
3) The idea that externally-presented anchors may be seen as a “hint” or suggestion, even if they are ostensibly uninformative (after all, doesn’t the fact that the experimenter is getting me to consider a number generated by a wheel of fortune suggest that they want me to be influenced by it in some way?)
What effect does warning people about anchoring effects and/or giving them an incentive to be accurate have?
It has little effect on the extent to which people anchor on the provided value, which doesn’t fit with the idea that the anchoring effect reflects a “lazy” or “intuitive” judgment system that can be over-ridden by effortful deliberation
Some researchers treat bias as error that results from
“____” processing
Lazy
Which 3 heuristics lead judgments to violate rational prescriptions because the human judge has adopted a quick-and-dirty strategy rather than a more effortful consideration of relevant material?
1) Availability
2) Representativeness
3) Anchor and Adjust
What are the 2 types of Ecological rationality?
1) Natural Frequencies
2) Misperception of randomness
1) Natural Frequencies
2) Misperception of randomness
These are types of…?
a. Heuristics
b. Ecological rationality
b. Ecological rationality
What is the natural frequency effect?
When people are better able to solve descriptive Bayesian inference tasks when represented as joint frequencies obtained through natural sampling, known as natural frequencies, than as conditional probabilities
When people are better able to solve descriptive Bayesian inference tasks when represented as joint frequencies obtained through natural sampling, known as natural frequencies, than as conditional probabilities
This is known as…?
Natural frequency effect
People are more likely to perform well in a task that involves:
a. Probabilities, e.g. 70% of people
b. Frequencies, e.g. 5 out of 10 people
b. Frequencies, e.g. 5 out of 10 people
Some researchers argue that people do much better at probability tasks when the information is presented in a way that matches our supposed “evolved” cognitive capacities for handling this kind of information.
In particular, it has been argued that humans evolved to process frequencies (counts) obtained by sampling the environment, rather than normalised probabilities
This is known as…?
Natural frequency effect
Describe Eddy’s (1982) study on the natural frequency effect
Ps (physicians) were asked to estimate the probability that a woman has breast cancer given a positive test result, based on the statistics (%) given in the description
e.g. 32% of people
Describe the results of Eddy’s (1982) study on the natural frequency effect
List 4 points
1) 95 out of 100 gave estimates between 0.70 and 0.80, when the actual answer was only 0.075. Only 8% gave the correct answer.
2) An estimate of 80% demonstrates the inverse fallacy: it confuses the probability of a positive test result given the presence of cancer, p(positive|cancer), with the probability of cancer given the positive test result, p(cancer|positive)
3) These probabilities are not the same: the chances of cancer given a positive test depend on the base rate (prior probability) of cancer in the population.
4) A positive test is more likely to indicate cancer when cancer is widespread than when it is very rare.
But the physicians (and most people) tend to ignore this base rate information
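To make the inverse fallacy concrete, here is a minimal sketch of the Bayes-rule calculation. The numbers are illustrative assumptions (1% prevalence, 80% hit rate, 9.6% false-positive rate), not necessarily the exact figures from Eddy’s problem, but they land close to the 0.075 answer cited above.

```python
# Illustrative (assumed) numbers, not necessarily Eddy's exact figures
p_cancer = 0.01              # base rate: p(cancer)
p_pos_given_cancer = 0.80    # hit rate: p(positive | cancer)
p_pos_given_healthy = 0.096  # false-positive rate: p(positive | no cancer)

# Total probability of a positive test (law of total probability)
p_pos = p_pos_given_cancer * p_cancer + p_pos_given_healthy * (1 - p_cancer)

# Bayes' rule: p(cancer | positive) is NOT the same as p(positive | cancer)
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos
print(round(p_cancer_given_pos, 3))  # ~0.078: low, because the base rate is low
```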
What did Gigerenzer et al. argue on the natural frequency effect?
List 2 things
1) Probability theory and normalised probabilities are relatively recent (cultural) inventions
2) We are much better at tracking natural frequencies (because humans did not evolve to handle normalised probabilities and complex statistics)
According to Gigerenzer et al., probabilities only make sense when conceived as …?
Long-run frequencies; it does not make sense to talk about the probability of a one-off event
(e.g., that a given person has a disease)
Gigerenzer argues that humans evolved to keep track of _______, estimated over time by “natural sampling”
(i.e., encountering different types of events and remembering the number of times they occur)
Event frequencies
1) Probability theory and normalised probabilities are relatively recent (cultural) inventions
2) We are much better at tracking natural frequencies (because humans did not evolve to handle normalised probabilities and complex statistics)
What does this relate to?
a. Availability
b. Natural Frequencies
c. Anchor and adjust
d. Misperception of randomness
b. Natural Frequencies
Describe Hoffrage & Gigerenzer’s (1998) study on natural frequencies
Ps (physicians) were asked to calculate the probability that a woman has breast cancer, based on frequencies (counts) given in the description
e.g. 10 out of 1000 people
Describe the results of Hoffrage & Gigerenzer’s (1998) study on natural frequencies
46% of physicians answered correctly when the description used the natural frequency format, compared with only 8% who answered correctly with the original (probability) wording of the task
46% of physicians answered correctly when the description used the natural frequency format, compared with only 8% who answered correctly with the original (probability) wording of the task
Why is the calculation easier with natural frequencies than with normalised probabilities?
Because the computation reduces to: true positives divided by the total number of positives
There is no need to keep track of the base rate separately (it is implicit in the counts), which also helps explain base-rate neglect when problems are presented in standard probability format
46% of physicians answered correctly when the description used the natural frequency format, compared with only 8% who answered correctly with the original (probability) wording of the task
Why is the original (normalised probability) format more difficult?
Because the use of normalised probabilities (which necessitate the explicit incorporation of base rates/priors) deviates from how we “naturally” evaluate chance
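Recasting the same illustrative numbers as natural frequencies shows why the count format is easier (again a sketch under the assumed figures above, not the exact wording Hoffrage and Gigerenzer used): out of 1000 women, about 10 have cancer and roughly 8 of them test positive, while roughly 95 of the 990 without cancer also test positive.

```python
# Same illustrative scenario as above, expressed as counts out of 1000 women
true_positives = 8    # women with cancer who test positive (about 80% of 10)
false_positives = 95  # women without cancer who test positive (about 9.6% of 990)

# With natural frequencies the answer is just true positives / all positives;
# the base rate never has to be handled as a separate term
p_cancer_given_pos = true_positives / (true_positives + false_positives)
print(round(p_cancer_given_pos, 3))  # ~0.078, matching the Bayes-rule calculation
```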
What is the alternative explanation for why the natural frequency format makes the task easier?
That the frequency format simply clarifies the set relations between the various event categories; any manipulation which achieves this will have the same beneficial effect
What is gambler’s fallacy?
A belief that a random event is less or more likely to happen based on the results from a previous series of events
A belief that a random event is less or more likely to happen based on the results from a previous series of events
This is known as…?
Gambler’s fallacy
Gambler’s fallacy is part of…?
a. Availability
b. Natural Frequencies
c. Anchor and adjust
d. Misperception of randomness
d. Misperception of randomness
Describe Croson and Sundali’s (2005) study on gambler’s fallacy
List 3 points
1) Examined roulette betting patterns from a Nevada casino
2) They focused on “even money” outcomes (where the two outcomes are equally likely, such as “red or black” or “odd or even”; if you bet on the right outcome, you get back twice what you staked)
3) Looked at bets as a function of the number of times that an outcome had occurred in a row (e.g., a streak of two would mean that the last two spins both came up red or both came up black)
Describe the results of Croson and Sundali’s (2005) study on gambler’s fallacy
List 3 points
1) As the run-length increased, people were increasingly likely to bet that the next outcome would be the opposite of the streak
2) After a run of 6 or more, 85% of bets were that the streak would end, even though this probability remains fixed at .50.
3) Only 15% of bets were that the streak would continue
Look at this scenario:
The number 53 in an Italian lottery had not come up in 2 years
People bet more than 2.4 billion GBP hoping that number 53 would finally show up
But past draws have no bearing on future ones: 53 was no more likely to come up than any other number
This is an example of…?
Gambler’s fallacy
Describe the findings of Tversky and Kahneman’s (1971) study on gambler’s fallacy and representativeness
List 2 points
1) People expect a “local” sequence to be representative of the underlying process
2) i.e. I know that a coin should, in the long run, produce equal numbers of heads and tails, so I expect any sequence of coin tosses to have this property. A run of heads means that a tails outcome will make the local sequence more representative of the data-generating process
Simply = HHHHHT is not as representative as HTHTTH
The gambler’s fallacy is often attributed to the _______ heuristic
a. Availability
b. Anchoring
c. Representativeness
c. Representativeness
I know that a coin should, in the long run, produce equal numbers of heads and tails, so I expect any sequence of coin tosses to have this property. A run of heads means that a tails outcome will make the local sequence more representative of the data-generating process
Simply = HHHHHT is not as representative as HTHTTH
This is an example of…?
When the gambler’s fallacy is attributed to the representativeness heuristic
Describe Ayton and Fischer’s (2004) study on misperception of randomness with past experience
List 2 points
1) Ps watched TV and ate Skittles, taking them from a bowl and not putting them back
2) Each time a Skittle was taken, Ps had to estimate the probability of picking a specific colour (e.g., blue) from the bowl, i.e., sampling without replacement
Many physical processes involve sampling with replacement, which results in a consistent probability for a given outcome the more times that it has occurred
True or False?
False
Many physical processes involve sampling without replacement, which results in diminishing probability for a given outcome the more times that it has occurred
If you rummage blindly in your cutlery drawer for spoons, removing the spoons as you find them, then the probability that the next item will be a spoon _______ as your hunt progresses.
a. Decreases
b. Stays the same
c. Increases
a. Decreases
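A minimal sketch of the cutlery-drawer example, with assumed contents (say 4 spoons among 10 items): each time a spoon is found and removed, the probability that the next blind grab is a spoon shrinks.

```python
# Assumed drawer contents: 4 spoons among 10 items, sampled without replacement
spoons, items = 4, 10

while spoons > 0:
    print(f"p(next item is a spoon) = {spoons}/{items} = {spoons / items:.2f}")
    spoons -= 1  # a spoon is found and removed...
    items -= 1   # ...so the drawer shrinks too

# Prints 0.40, 0.33, 0.25, 0.14: the probability diminishes as the hunt progresses
```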
What did Ayton and Fischer’s (2004) study on misperception of randomness with past experience conclude?
List 2 points
1) The gambler’s fallacy reflects inappropriate generalisation of past experience (most everyday sampling is without replacement)
2) People treat random mechanical outcomes as if they, too, involved sampling without replacement
What do people mistakenly treat random mechanical outcomes as involving?
Sampling without replacement
With an infinitely long sequence of coin flips, all sequences of a given length occur with _______
e.g., the sequence HHHH will occur with the same frequency as HHHT, so believing that a run of heads means it’ll be tails next time is indeed a fallacy
Equal probability
What are the 3 types of misperception of randomness?
1) Past experience
2) Memory constraints
3) Gambler’s fallacy
The sequence HHHH will occur with the same frequency as HHHT, so believing that a run of heads means it’ll be tails next time is a ______
Fallacy
Hahn and Warren (2009) looked at the properties of a fair coin under realistic conditions
What were their conclusions?
List 2 points
1) People only ever see finite sequences
2) People can only hold a short subsection of a sequence in memory (e.g., the last 4 outcomes)
Hahn and Warren (2009) noted that humans do not experience or remember …?
a. Short sequences
b. Long sequences
b. Long sequences
Describe Hahn and Warren’s (2009) study and findings on memory constraints
List 2 points
1) Simulated 10,000 sequences of 10 coin flips
2) The pattern HHHH only appeared in about 25% of the sequences, whereas HHHT occurred in about 35% of the simulated samples.
Simply = If we had 10,000 people each of whom had experienced 10 flips of a fair coin, it would be perfectly reasonable for more of them to expect a sequence HHH to end with a T than with another H.
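A sketch of the kind of simulation described above (my own minimal version, not Hahn and Warren’s code): generate 10,000 sequences of 10 fair-coin flips and count how many contain HHHH versus HHHT anywhere. The exact percentages depend on how occurrences are counted, but HHHT reliably shows up in more sequences than HHHH.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible
n_sequences, n_flips = 10_000, 10

count_hhhh = count_hhht = 0
for _ in range(n_sequences):
    seq = "".join(random.choice("HT") for _ in range(n_flips))
    count_hhhh += "HHHH" in seq  # does the pattern occur anywhere in the sequence?
    count_hhht += "HHHT" in seq

# Occurrences of HHHH cluster (a run of five heads contains it twice), so fewer
# sequences contain it at least once than contain the non-self-overlapping HHHT
print(f"HHHH appears in {count_hhhh / n_sequences:.0%} of sequences")  # about a quarter
print(f"HHHT appears in {count_hhht / n_sequences:.0%} of sequences")  # noticeably more
```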
Hahn and Warren (2009) noted that for shorter sequences, the probability of encountering HHHT and HHHH are…?
a. Not equal
b. Equal
a. Not equal
The pattern HHHH only appeared in about 25% of the sequences, whereas HHHT occurred in about 35% of the simulated samples, when flipping a coin
What does this suggest?
If we had 10,000 people each of whom had experienced 10 flips of a fair coin, it would be perfectly reasonable for more of them to expect a sequence HHH to end with a T than with another H.
True or False?
The brain is lazy
False
The brain is not lazy
The supposed “fallacies” of human judgment and decision-making are often perfectly rational given the finite and imperfect information afforded by the environment and our limited mental capacities
Simply = The brain can think rationally and make careful judgements, but it has limited working-memory capacity and only ever experiences finite sequences
What are the 2 biases associated with the availability heuristic?
1) Conjunction Fallacy
2) Under- and over-estimation of event frequencies
1) Conjunction Fallacy
2) Under- and over-estimation of event frequencies
These are biases of…?
a. Anchoring
b. Availability
c. Representativeness
b. Availability
1) Base Rate Neglect
2) Gambler’s Fallacy
These are biases of…?
a. Anchoring
b. Availability
c. Representativeness
c. Representativeness
What are the 2 biases associated with the representativeness heuristic?
1) Base Rate Neglect
2) Gambler’s Fallacy
Identify the 2 different theoretical approaches to errors in probability judgment
1) Heuristics & Biases
2) Ecological Rationality
Bayes’ theorem tells us how to update an estimate based on ______ and _______
Existing and new knowledge
Bayes’ theorem tells us how to __________ based on existing and new knowledge
Update an estimate
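A standard statement of the theorem (general probability theory, not tied to any one study), writing H for the hypothesis being estimated and D for the new data:

$$
P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)}, \qquad
P(D) = P(D \mid H)\,P(H) + P(D \mid \neg H)\,P(\neg H)
$$

P(H) is the prior based on existing knowledge (the base rate in the examples above), P(D|H) is the likelihood of the new evidence, and P(H|D) is the updated (posterior) estimate. Ignoring P(H) is base rate neglect; confusing P(H|D) with P(D|H) is the inverse fallacy.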
True or False?
Human probability judgments always follow the laws of probability
False
Human probability judgments do not always follow the laws of probability, and these deviations illuminate the judgment process
One broad framework posits the use of simplifying strategies that reduce effort at the expense of sometimes introducing bias
This is known as…?
Heuristics
People sometimes simplify judgments by substituting an easier-to-evaluate entity for the target dimension
Give 2 examples of this
1) The availability heuristic
2) The representativeness heuristic
Judgments often assimilate towards ______ values
Anchor
We can also consider probability judgments in their ecological context
How?
One idea is that humans evolved to process frequencies, not normalised probabilities, although this interpretation of frequency-format effects is debatable
The gambler’s fallacy reflects ecological experience with …?
Different types of generating processes
The gambler’s fallacy reflects _______ experience with different generating processes.
Ecological