MDM 1 Flashcards

1
Q

What is critical to understanding evidence?

A

The counterfactual: what would have happened otherwise

2
Q

Experimentation is the best way to test causality. What are its three features?

A

Independent variable, dependent variable, random assignment

3
Q

Correlation...

A

...does not equal causation

4
Q

Internal validity

A

Did random assignment happen?

5
Q

Threats to internal validity

A

1) Random assignment didn't happen, 2) small number of randomly assigned units, 3) attrition

6
Q

External validity

A

Can you generalize?

7
Q

Random Assignment VS Random Sampling

A
RA = random assignment of units to different conditions (ensures internal validity)
RS = random sampling of units to include in the study (supports external validity)

8
Q

Reasons not to experiment

A

Too expensive
Benefit is low
Physically, legally, or ethically impossible

9
Q

Chance

A

Chance is streakier and lumpier than we would expect.

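A quick simulation (my sketch, not course material) of how streaky fair coin flips really are:

```python
# Sketch: chance is streakier than intuition expects. Across 100 fair coin
# flips, the longest run of identical outcomes averages about 7 in a row.
from itertools import groupby
import random

random.seed(3)
longest_runs = []
for _ in range(1000):
    flips = [random.choice("HT") for _ in range(100)]
    longest_runs.append(max(len(list(g)) for _, g in groupby(flips)))

print(sum(longest_runs) / len(longest_runs))  # averages around 7
```
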
10
Q

Gambler’s Fallacy

A

The mistaken belief that chance will correct itself (e.g., expecting tails after a run of heads)

11
Q

Small sample?

A

Greater variance, less reliable, more influenced by chance

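A minimal sketch (assuming numpy) of why small samples swing more by chance:

```python
# Sketch: the spread of sample means shrinks as n grows (standard error
# scales as 1/sqrt(n)), so small samples are far less reliable.
import numpy as np

rng = np.random.default_rng(4)
for n in (5, 50, 500):
    means = rng.normal(0, 1, (10_000, n)).mean(axis=1)
    print(n, round(float(means.std()), 3))  # ~0.447, ~0.141, ~0.045
```
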
12
Q

Calculating the odds of a pattern AFTER you notice it

A

Wrong: some pattern was bound to appear somewhere, so its after-the-fact odds are misleading

13
Q

P-Hacking

A

1) Stop data collection as soon as p < .05

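A simulation sketch (mine, assuming numpy and scipy) of why stopping at p < .05 is p-hacking:

```python
# Sketch: repeatedly test a TRUE null hypothesis, peeking after every 10
# observations and stopping as soon as p < .05. The false-positive rate
# lands several times above the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, false_positives = 2000, 0

for _ in range(n_sims):
    data = []
    for _ in range(20):                    # up to 20 peeks
        data.extend(rng.normal(0, 1, 10))  # 10 more null observations
        if stats.ttest_1samp(data, 0).pvalue < 0.05:
            false_positives += 1           # "significant" by luck alone
            break

print(false_positives / n_sims)  # roughly 0.2, not 0.05
```
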
14
Q

Regression to mean

A

Performance = f(skill, luck)

Low (or high) luck is followed by average luck, so extreme performances tend to be followed by more average ones.

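A small simulation sketch (assuming numpy) of the card's formula:

```python
# Sketch: performance = skill + luck. Round-1 stars look worse in round 2
# because skill carries over but luck resets to average.
import numpy as np

rng = np.random.default_rng(1)
skill = rng.normal(0, 1, 10_000)
round1 = skill + rng.normal(0, 1, 10_000)
round2 = skill + rng.normal(0, 1, 10_000)    # same skill, fresh luck

stars = round1 > np.percentile(round1, 90)   # round-1 top 10%
print(round(float(round1[stars].mean()), 2)) # high: skill plus good luck
print(round(float(round2[stars].mean()), 2)) # lower: luck regressed to the mean
```
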
15
Q

Availability Bias

A

We judge frequency and likelihood by what comes easily to mind; what is in front of us dominates

16
Q

Hindsight bias

A

"I knew it all along": after the outcome, it feels as if it was predictable (class example: the Clinton election)

17
Q

Anchoring

A

An anchor seems informative; if you have no idea and the anchor seems plausible, it acts as a default

18
Q

Curse of knowledge

A

Once you know something, it is hard to simulate not knowing it (class demos: tapping a tune, running turn)

19
Q

Attitude projection

A

Assuming others share your attitudes (think about the class survey)

20
Q

Bias of perspective

21
Q

Motivated Reasoning

A

f(desire, ambiguity): it takes both a desired conclusion and enough ambiguity in the evidence

22
Q

Can I vs Must I

A

"Can I believe this?" (for conclusions I like)
"Must I believe this?" (for conclusions I don't like)

23
Q

Motivated reasoning is different from...

A

conscious corruption (which is why punishments don't work)

24
Q

Causes of overconfidence

A

Availability, anchoring, failure to appreciate chance, biases of knowledge and perspective, motivated reasoning, and rewards for overconfidence

25
Q

Overconfidence = good and bad?

A

Good = motivation; bad = everything else (especially decision making)

26
Q

Intuition is good when...

A

Feedback is near-perfect, immediate, and unbiased, and the environment doesn't change in an important way

27
Q

Is X greater than Y? How much?

A

Amazon stocks, Ravens football, elephant weight

28
Q

Bootstrapping

A

Have the judge make decisions, calculate the weights the judge implicitly uses, then apply those weights consistently to make predictions.

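A toy sketch (mine; the data and weights are fabricated for illustration) of bootstrapping a judge:

```python
# Sketch: fit a linear model TO the judge, then replace the judge with the
# model. The model applies the judge's own implicit weights without the
# judge's inconsistency, so it predicts the outcome better than the judge.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
X = rng.normal(size=(n, 3))                                    # 3 cues
y = X @ np.array([0.6, 0.3, 0.1]) + rng.normal(0, 1, n)        # true outcome
judge = X @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 1.5, n)  # policy + noise

w_hat, *_ = np.linalg.lstsq(X, judge, rcond=None)  # recover the judge's weights
model = X @ w_hat                                  # apply them consistently

print(np.corrcoef(judge, y)[0, 1])  # the judge's own validity (~0.21 here)
print(np.corrcoef(model, y)[0, 1])  # the bootstrapped model's (~0.55 here)
```
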
29
Q

Reliability vs. validity

A

Reliability = how much do two predictions correlate with each other? Validity = how much does a prediction correlate with the outcome? It is difficult to have validity without reliability, and reliability will be higher than validity.

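A sketch (assuming numpy) of why reliability can exceed validity when judges share biases:

```python
# Sketch: two judges see the same valid signal plus a shared bias. The shared
# bias makes them agree with each other (reliability) more than either
# agrees with the truth (validity).
import numpy as np

rng = np.random.default_rng(5)
n = 5000
outcome = rng.normal(size=n)
shared_bias = rng.normal(size=n)                 # error both judges share
judge1 = outcome + shared_bias + rng.normal(size=n)
judge2 = outcome + shared_bias + rng.normal(size=n)

print(np.corrcoef(judge1, judge2)[0, 1])   # reliability, ~0.67
print(np.corrcoef(judge1, outcome)[0, 1])  # validity, ~0.58
```
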
30
Q

Why is human judgment unreliable?

A

1) Fatigue/distraction 2) Incidental factors (mood, framing, order, etc.) 3) Random error that leads to inconsistent weighting

31
Q

Steps - random statistical model

A

1) Determine the sign of each regression coefficient 2) Standardize all variables 3) Choose each coefficient at random over the range 0 to 1 4) Use these coefficients to make predictions

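A sketch of the four steps (assuming numpy; data fabricated for illustration):

```python
# Sketch: Dawes-style random linear model. Correct signs + standardized
# variables + random weight magnitudes still predict surprisingly well.
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 3))                             # 3 predictor cues
y = X @ np.array([0.5, 0.3, -0.4]) + rng.normal(0, 1, 300)

signs = np.array([np.sign(np.corrcoef(X[:, j], y)[0, 1]) for j in range(3)])  # step 1
Xz = (X - X.mean(axis=0)) / X.std(axis=0)                 # step 2: standardize
w = signs * rng.uniform(0, 1, 3)                          # step 3: random 0-1 weights
pred = Xz @ w                                             # step 4: predict

print(np.corrcoef(pred, y)[0, 1])  # solidly positive despite random weights
```
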
32
Q

Human judges often use?

A

Useless information

33
Q

Caveats to models

A

1) The data used must reflect the range of actual possibilities 2) There must be a quantifiable criterion 3) Statistical models don't account for "soft" variables, which can matter 4) Diversity is difficult

34
Q

How to overcome algorithm aversion?

A

Give people some control over the algorithm's output (people are less tolerant of algorithm mistakes than of human mistakes)

35
Q

Methods of combining opinions

A

1) Group deliberation 2) Averaging opinions 3) Prediction markets

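A minimal sketch (assuming numpy) of why simple averaging works:

```python
# Sketch: independent, unbiased guesses have errors that partly cancel, so
# the average of the crowd beats the typical individual.
import numpy as np

rng = np.random.default_rng(7)
truth = 100.0
guesses = truth + rng.normal(0, 20, 50)   # 50 noisy, independent guesses

print(np.abs(guesses - truth).mean())     # typical individual error, ~16
print(abs(guesses.mean() - truth))        # error of the averaged opinion, ~2
```
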
36
Q

Problems with discussion

A

- Bias
- Opinions are not voiced
- Anchoring on one idea
- Opinions are not independent

37
Q

Ingredients for eliciting information. Make sure group members are:

A

- Knowledgeable
- Diverse/unbiased
- Independent
- Motivated to contribute (remove the risk!)

38
Q

Adding options can...

A

Drastically change decisions!

39
Q

Prospect Theory

A

1) Losses weigh heavier than gains 2) Risk-averse in gains; risk-seeking in losses 3) People are sensitive to arbitrary reference points

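A sketch of a textbook prospect-theory value function (the parameters alpha = 0.88 and lambda = 2.25 are commonly cited defaults, not from this deck):

```python
# Sketch: concave for gains, convex and ~2.25x steeper for losses, all
# measured from a reference point of 0.
def value(x, alpha=0.88, lam=2.25):
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

print(value(100))   # ~57.6: diminishing sensitivity to gains
print(value(-100))  # ~-129.6: the same-sized loss hurts ~2.25x more
print(value(50) + value(50))  # ~62.5: two segregated gains beat one gain of 100
```
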
40
Q

Rules for Gains/Losses

A

- Integrate losses
- Segregate gains
- Integrate small losses with large gains

41
Q

Endowment effect

A

Selling prices tend to be about 2x greater than buying prices

42
Q

Probability weighting

A

People overweight very small probabilities (e.g., .01) but underweight mid-to-high probabilities (e.g., .8-.9)

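A sketch of a standard one-parameter weighting function (Tversky & Kahneman's 1992 form; gamma = 0.61 is an assumed textbook value):

```python
# Sketch: small probabilities get inflated, mid-to-high ones get deflated.
def weight(p, gamma=0.61):
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

print(weight(0.01))  # ~0.06: tiny probabilities feel bigger than they are
print(weight(0.90))  # ~0.71: near-certainties feel less certain than they are
```
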
43
Q

Steps for designing interventions

A

1) Identify the specific behavior 2) Ask why this is happening 3) Ask what environmental factors influence it 4) Change the environment

44
Q

Why do defaults work/matter?

A

1) Mechanical reasons (decision costs, forgetting, etc.) 2) Belief-based reasons: social norm, recommendation 3) Psychological reasons: risk aversion, regret, lack of reasons for making an active choice

45
Q

Why don't people act to help at times?

A

Diffusion of responsibility