Judgement Flashcards
What theories do subjective probability judgements form part of?
Subjective Expected Utility theories of normative decision-making
People make judgements on the basis of…
…heuristics
What are heuristics?
Mental rules of thumb (shortcuts)
Help us to get to a reasonable estimate of the likelihood of something happening
Don’t always work
What is the availability heuristic?
Judging probability by ease with which things come to mind
What are the pros of the availability heuristic?
- how often events actually happen is usually a good predictor of how easily they come to mind, so availability usually tracks real frequency
Frequency isn’t the only thing that can affect the availability of info. What can this lead to?
Biased judgements
Which research highlights the limitations of the availability heuristic?
Tversky & Kahneman (1974) – asked students whether ‘r’ appears more often in the 1st or the 3rd position of English words –> most pps said 1st position because it is easier to recall words from memory by their 1st letter than by their 3rd letter (in fact, ‘r’ occurs more often in the 3rd position)
Slovic, Fischhoff & Lichtenstein (1979) – pps were given a list of causes of death & had to estimate the frequencies of each cause –> pps overestimated ones that were heavily reported in newspapers (murder, fires) & underestimated ones that were rarely reported (emphysema, stomach cancer)
Why is Slovic, Fischhoff & Lichtenstein’s (1979) study good?
It uses real source data - we know the correct frequencies of the causes of death & can compare them to pps’ estimates
What is the representativeness heuristic?
An assessment of the degree of correspondence between an instance & a category
What helps to drive our likelihood judgement?
How much a set of circumstances looks like another set that has been previously experienced
What are limitations of the representativeness heuristic?
X other things can influence representativeness – it doesn’t always work, is tied to stereotypes
X it doesn’t rely on all the info out there, only a small sub-set –> this can affect our judgements
X can lead to the conjunction fallacy
What is the conjunction fallacy?
When we assume that specific conditions are more probable than a single general one
The conjunction of 2 events can’t be more probable than either of its constituent events
Which researcher/s studied the conjunction fallacy?
Tversky & Kahneman (1983) - the Linda problem
What did Tversky & Kahneman’s (1983) study involve?
Pps were shown a description about ‘Linda’…
“Linda is 31 years old, single, outspoken & very bright. She majored in philosophy. As a student, she was deeply concerned about issues of discrimination & social justice, & also participated in anti-nuclear demonstrations. Put these statements about Linda in order of probability…”
a) Linda is active in the feminist movement
b) Linda is a bank teller
c) Linda is a bank teller & active in the feminist movement
What did Tversky & Kahneman (1983) find?
Pps rated Linda as a feminist quite highly, a bank teller quite low, & a feminist + bank teller in the middle
This breaks the rules of probability because feminist bank tellers are a sub-set of ‘bank tellers’ – it must be more probable that she is a bank teller than a feminist bank teller
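A minimal sketch of the conjunction rule, using made-up illustrative probabilities (not figures from Tversky & Kahneman):

```python
# Conjunction rule: P(A and B) <= min(P(A), P(B)).
# The numbers below are invented purely for illustration.
p_bank_teller = 0.05           # P(Linda is a bank teller), assumed
p_feminist_given_teller = 0.5  # P(feminist | bank teller), assumed

# The conjunction is the single event weighted by a conditional probability <= 1,
# so it can never exceed P(bank teller) on its own.
p_teller_and_feminist = p_bank_teller * p_feminist_given_teller

print(p_bank_teller)          # 0.05
print(p_teller_and_feminist)  # 0.025 (always <= 0.05)
```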
What is ‘anchoring’?
A cognitive bias
When we rely too heavily on the first piece of info we are given (the ‘anchor’) when making decisions
What researchers studied anchoring? (x2)
Tversky & Kahneman (1974)
Lichtenstein et al. (1978)
What did Tversky & Kahneman’s (1974) study involve?
Pps had to estimate the answer to sums under time pressure, either…
a) 1x2x3x4x5x6x7x8
b) 8x7x6x5x4x3x2x1
What did Tversky & Kahneman (1974) find?
Mean estimates:
Group a) = 512
Group b) = 2,250
(the correct answer is 8! = 40,320, so both groups underestimated badly)
Pps anchored on the first few values they multiplied & didn’t adjust their estimates upwards sufficiently
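A quick check of the arithmetic (the two mean estimates are the ones from the flashcard above):

```python
import math

true_product = math.factorial(8)  # 1x2x...x8 = 8! = 40,320
ascending_estimate = 512          # mean estimate, group a) 1x2x3x4x5x6x7x8
descending_estimate = 2250        # mean estimate, group b) 8x7x6x5x4x3x2x1

# Both groups anchored on the first numbers they saw and adjusted upwards
# insufficiently, so both estimates fall far below 40,320.
print(true_product, ascending_estimate, descending_estimate)
```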
What did Lichtenstein et al.’s (1978) study involve?
Pps estimated the frequencies of 40 causes of death; they were given an answer, either…
a) 50,000 deaths by motor vehicle accidents
b) 1,000 deaths by electrocution
What did Lichtenstein et al. (1978) find?
Group a)’s estimates were higher overall – they started from the high anchor & adjusted downwards insufficiently
Group b)’s estimates were lower overall – they started from the low anchor & adjusted upwards insufficiently
What are Bayesian reasoning problems?
Reasoning problems that we solve using Bayes’ theorem
Involves updating probabilities on the basis of additional info
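A minimal sketch of that update rule (Bayes’ theorem for a hypothesis H and evidence E); the function name and example numbers are just for illustration:

```python
def bayes_update(prior_h: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H | E) given the base rate of H and the two likelihoods."""
    # Total probability of the evidence E
    p_e = prior_h * p_e_given_h + (1 - prior_h) * p_e_given_not_h
    return prior_h * p_e_given_h / p_e

# Example: a prior of 0.3, updated by evidence twice as likely under H.
print(bayes_update(0.3, 0.8, 0.4))  # ~0.462
```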
What can Bayesian reasoning be influenced by?
Representativeness
What is the base rate fallacy?
If we are presented with base rate info (generic info) & specific info, we tend to ignore the base rate info & focus on the specific info
Kahneman & Tversky (1973) told pps that there were 30 engineers & 70 lawyers in a sample. Pps were then given a personality description (written to be stereotypical of an engineer) & asked what the probability was that the person described was an engineer.
What did pps do?
Pps focused on the stereotypical/specific info & not on the base rate info (the actual proportion of engineers in the sample)
–> said there was a high chance of the description being about an engineer
Kahneman & Tversky (1973) found that even when pps were given a completely uninformative (neutral) description, with the same 30/70 split…
…pps still tended to say there was a 50% chance of the person being an engineer (ignoring the base rate info, which would make the normative answer 30%)
What type of info do we tend to get over-influenced by?
Stereotypical info
Info that is high in representativeness
What is an example of a ‘probability format’ problem?
“The probability of breast cancer is 1% for women at age 40 who have routine screenings. If she has breast cancer, there is an 80% probability that she will get a positive mammography. If she doesn’t have breast cancer, there is a 9.6% probability that she will get a positive mammography. A woman in this age group had a positive mammography in routine screening, what is the probability that she actually has breast cancer?”
How do pps normally respond to this probability format problem?
Most common answer = 70-80%
Normative Bayesian answer = 7.8%
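Where the 7.8% comes from, using the figures given in the problem above (a worked Bayes’ theorem calculation, not part of the original study materials):

```python
base_rate   = 0.01   # P(breast cancer)
hit_rate    = 0.80   # P(positive mammography | cancer)
false_alarm = 0.096  # P(positive mammography | no cancer)

# P(positive) = P(pos | cancer)P(cancer) + P(pos | no cancer)P(no cancer)
p_positive = hit_rate * base_rate + false_alarm * (1 - base_rate)

# P(cancer | positive) by Bayes' theorem
p_cancer_given_positive = hit_rate * base_rate / p_positive
print(round(p_cancer_given_positive, 3))  # 0.078, i.e. ~7.8%
```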
Why do pps respond like this?
Pps focus on the hit rate (80%) at the expense of the base rate info (1%)
Misunderstandings are common in naive pps, expert pps are not as susceptible. True/false?
False
Misunderstandings are common in both naïve pps & experts (e.g. medical students, Casscells et al., 1978)
How can we improve Bayesian reasoning?
We can change the way we present info - use a frequency format instead to improve understanding of probabilities (Gigerenzer & Hoffrage, 1995)
Give an example of a frequency format problem.
“10/1,000 women at age 40 who have routine screenings have breast cancer. 8/10 women with breast cancer will get a positive mammography. 95/990 women without breast cancer will also get a positive mammography. In a sample of women at age 40 who got a positive mammography in routine screening, how many do you actually expect to have breast cancer?”
Why is the frequency format easier to understand than the probability format?
We can see the actual numbers of women with & without cancer who test positive, so the totals are easier to work out
Far more pps get the answer correct: 8/103 women (≈7.8%)
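The same calculation in natural-frequency terms, showing why the answer (8/103) can simply be read off the counts:

```python
total_women        = 1000
with_cancer        = 10   # 10/1,000 have breast cancer
positive_if_cancer = 8    # 8/10 of them get a positive mammography
positive_if_not    = 95   # 95/990 without cancer also test positive

# Everyone who tests positive is either a true positive or a false positive.
total_positive = positive_if_cancer + positive_if_not  # 103 women
answer = positive_if_cancer / total_positive
print(f"{positive_if_cancer}/{total_positive} = {answer:.3f}")  # 8/103 = 0.078
```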
Cosmides & Tooby (1996) claim that frequency format is easier because…
…natural frequencies are more like the kind of info we encounter in our everyday environment & have evolved with, compared to single-event probabilities (which are a relatively recent invention)
Probability info is not always bad & frequency info is not always good.
Who said this?
Girotto & Gonzalez (2001)
Probability info is not always bad.
Give an example of a ‘chances format’ problem.
“Chances of breast cancer in women age 40 who have routine screenings are 10 chances in 1,000. 8 of the 10 chances of having breast cancer are associated with a positive mammography. 95 of the remaining 990 chances of not having breast cancer are associated with a positive mammography. A woman is tested now. Out of a total of 1,000 chances, she will have ___ chances of a positive mammography, of which ___ chances will be associated with having breast cancer.”
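The blanks can be filled in the same way as the frequency version, because the chances are counts out of 1,000 that add up in the same tree-like structure (a sketch of the arithmetic, not taken from Girotto & Gonzalez’s materials):

```python
# Chances out of 1,000 for the woman being tested now.
cancer_and_positive    = 8   # 8 of the 10 breast-cancer chances give a positive mammography
no_cancer_and_positive = 95  # 95 of the 990 no-cancer chances give a positive mammography

total_positive_chances = cancer_and_positive + no_cancer_and_positive
print(total_positive_chances)  # 103 chances of a positive mammography...
print(cancer_and_positive)     # ...of which 8 are associated with having breast cancer
```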
What are ‘chances’?
Chances are single-event probabilities that can be broken down like frequencies
They are well-structured
Frequency info isn’t always good.
What problems might make frequency format problems impossible to solve? (x2)
1) Defective frequency
2) Multiple samples
Give an example of a defective frequency problem.
“5/100 applicants on the list were accepted. 3/5 applicants who were accepted passed the oral test. 88/95 applicants who were rejected didn’t pass the oral test. Among 100 applicants who passed the oral test, the proportion that get accepted is ___ out of ___.”
- we don’t have enough frequency info to create a tree diagram/answer the question
- people may think they understand the problem but they can’t answer it
Give an example of a problem with multiple samples.
“5/100 applicants were accepted. 80/100 applicants who passed the oral test were accepted. 20/100 applicants who were rejected also passed the oral test. Among 100 applicants who pass the oral test, the proportion that get accepted is ___ out of ___.”
- applicants are drawn from different samples so would be put onto separate tree diagrams that you can’t put together to answer the question
How else might we improve Bayesian reasoning?
- it is the structure, not the format, that helps us answer questions
- this also applies to other types of statistical reasoning tasks (e.g. cumulative risk problems)
- if you give people training on tree diagrams, they are better at answering the questions
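One way to picture the ‘structure’ point: the frequencies from the mammography problem fit a simple partitioning tree, and the answer falls out of the leaves (a hypothetical representation for illustration, not a method from the studies cited):

```python
# A nested dict as a frequency tree for the mammography problem.
tree = {
    "1000 women": {
        "cancer (10)":     {"positive": 8,  "negative": 2},
        "no cancer (990)": {"positive": 95, "negative": 895},
    }
}

# Read the answer off the leaves: positives with cancer / all positives.
positives = [branch["positive"] for branch in tree["1000 women"].values()]
print(f"8 out of {sum(positives)}")  # 8 out of 103
```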
How do we make judgements about probabilities?
Johnson-Laird et al. (1999) - it may relate to how we consider the alternatives
What experiment did Johnson-Laird et al. (1999) do on naive probability?
They told pps: “In a box there is a black marble, or a red marble, or both. What is the probability that there is a black marble (with or without another marble) in the box? ___%”
–> pps tended to say 67% (a 2/3 probability)
What is a limitation of Johnson-Laird et al.’s (1999) study?
The problem is ill-specified (there is no info about probabilities) – in the absence of info, we assume that all possible alternatives are equiprobable
What does ‘all possible alternatives are equiprobable’ mean?
All alternatives are just as likely
How do we answer Johnson-Laird et al.’s (1999) question?
We think about the different states of affairs that could be the case on the basis of the description – 3 mental models: 1) black, 2) red, 3) black & red
In 2 of these 3 models there is a black marble in the box, so we say there is a 2/3 probability of a black marble
We total up the proportion of imagined situations in which the event happens –> as we do this, things like availability will have an influence (e.g. if states of affairs involving black marbles come to mind readily, we will rate them as more likely)
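A small sketch of the equiprobability idea: enumerate the mental models consistent with the description, treat them as equally likely, and count the ones containing the event (illustrative code only, not Johnson-Laird et al.’s model):

```python
from fractions import Fraction

# The three states of affairs (mental models) consistent with
# "there is a black marble, or a red marble, or both" in the box.
models = [{"black"}, {"red"}, {"black", "red"}]

# Assume all models are equiprobable and count those containing a black marble.
favourable = sum(1 for m in models if "black" in m)
print(Fraction(favourable, len(models)))  # 2/3
```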