MDM 1 Flashcards

1
Q

What is critical to understanding evidence?

A

The counterfactual: what would have happened otherwise

2
Q

Experimentation = Best way to test causality - three features?

A

Independent variable, dependent variable, random assignment

3
Q

Correlation..

A

Does not equal causation

4
Q

Internal validity

A

Did random assignment happen?

5
Q

Threats to internal validity

A

1) Random assignment didn't happen, 2) Small number of randomly assigned units, 3) Attrition

6
Q

External validity

A

Can you generalize?

7
Q

Random Assignment VS Random Sampling

A

RA = Random assignment of units to different conditions (ensures internal validity)
RS = Random sampling of units to include in the study

8
Q

Reasons not to experiment

A

Too expensive
Benefit is low
Physically, legally, or ethically impossible

9
Q

Chance

A

Chance is streakier and lumpier than we expect

10
Q

Gambler’s Fallacy

A

Chance will correct itself

11
Q

Low sample size?

A

Greater variance, less reliable, more chance
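A small simulation (hypothetical fair-coin data) makes the card concrete: the observed proportion of heads scatters far more widely across small samples than across large ones.

```python
import random
import statistics

random.seed(0)  # reproducible

def sample_proportions(n, trials=2000):
    """Proportion of heads in `trials` fair-coin samples of size n each."""
    return [sum(random.random() < 0.5 for _ in range(n)) / n
            for _ in range(trials)]

small = sample_proportions(10)    # n = 10 per sample
large = sample_proportions(1000)  # n = 1000 per sample

# Small samples are far noisier: more variance, more "chance".
print(statistics.stdev(small))  # spread of the small-sample estimates
print(statistics.stdev(large))  # roughly 10x smaller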

12
Q

Calculating the odds of a pattern AFTER you notice it

A

WRONG

13
Q

P-Hacking

A

1) Stop data collection as soon as p < .05
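A sketch of why stopping "when significant" is p-hacking: simulate a truly fair coin, run a two-sided z-test after every 10 flips, and stop the moment p < .05. The peek schedule and sample sizes here are made up for illustration.

```python
import random

random.seed(1)

def peeking_experiment(max_n=500, peek_every=10):
    """Flip a fair coin, testing after every `peek_every` flips.
    Returns True if any interim z-test reaches |z| > 1.96 (p < .05)."""
    heads = 0
    for n in range(1, max_n + 1):
        heads += random.random() < 0.5
        if n % peek_every == 0:
            z = (heads - n / 2) / (n / 4) ** 0.5
            if abs(z) > 1.96:
                return True  # "significant" -- stop and report
    return False

# The null is true in every experiment, yet far more than 5% "find" an effect.
false_positive_rate = sum(peeking_experiment() for _ in range(1000)) / 1000
print(false_positive_rate)  # well above the nominal 0.05
```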

14
Q

Regression to mean

A

Performance = f(skill, luck)

Extreme (bad or good) luck is followed by average luck
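A quick simulation of the card's formula (all numbers hypothetical): select the worst round-1 performers and they "improve" in round 2 with no change in skill, purely because extreme luck is not repeated.

```python
import random
import statistics

random.seed(2)

N = 10_000
skill = [random.gauss(0, 1) for _ in range(N)]
round1 = [s + random.gauss(0, 1) for s in skill]  # performance = skill + luck
round2 = [s + random.gauss(0, 1) for s in skill]  # same skill, fresh luck

worst = sorted(range(N), key=lambda i: round1[i])[:1000]  # bottom 10% in round 1
r1 = statistics.mean(round1[i] for i in worst)
r2 = statistics.mean(round2[i] for i in worst)
print(r1, r2)  # r2 is much closer to the overall mean of 0
```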

15
Q

Availability Bias

A

What is in front of us (what comes easily to mind) is all there is

16
Q

Hindsight bias

A

Outcomes seem predictable after the fact (Clinton example)

17
Q

Anchoring

A

An anchor seems informative; if you have no idea and it seems plausible, it acts as a default

18
Q

Curse of knowledge

A

Tapping tone, running turn

19
Q

Attitude projection

A

think about class survey

20
Q

Bias of perspective

A

McNamara

21
Q

Motivated Reasoning

A

f(desire, ambiguity)

22
Q

Can I vs Must I

A

Can I believe it? (for conclusions I like)
Must I believe it? (for conclusions I don't like)

23
Q

MR is different from

A

conscious corruption (punishments don’t work)

24
Q

Causes of overconfidence

A

Availability, anchoring, failure to appreciate chance, biases of knowledge and perspective, motivated reasoning, rewards for overconfidence

25
Q

Overconfidence = good and bad

A

Good = motivation Bad = everything else (especially decision making)

26
Q

Intuition Good when…

A

Near-perfect, immediate, unbiased feedback, and the environment doesn't change in an important way

27
Q

Is X greater than Y? How much?

A

Amazon stocks, Ravens football, elephant weight

28
Q

Bootstrapping

A

Judge makes decisions. Calculate the weights implicit in those decisions. Apply the weights consistently to make predictions.
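The steps above can be sketched with two hypothetical cues (GPA and a test score): simulate a noisy judge, recover the judge's implicit weights by least squares, then predict with the noise-free model of the judge.

```python
import random

random.seed(3)

# Hypothetical judge: ratings = 0.6*GPA + 0.4*test + inconsistency noise.
n = 200
gpa = [random.gauss(0, 1) for _ in range(n)]
test = [random.gauss(0, 1) for _ in range(n)]
judge = [0.6 * g + 0.4 * t + random.gauss(0, 0.5)
         for g, t in zip(gpa, test)]

# Least squares for judge ~ w1*gpa + w2*test (2x2 normal equations;
# no intercept since every variable is centered at 0).
sgg = sum(g * g for g in gpa)
stt = sum(t * t for t in test)
sgt = sum(g * t for g, t in zip(gpa, test))
sgj = sum(g * j for g, j in zip(gpa, judge))
stj = sum(t * j for t, j in zip(test, judge))
det = sgg * stt - sgt * sgt
w1 = (sgj * stt - sgt * stj) / det
w2 = (sgg * stj - sgt * sgj) / det
print(w1, w2)  # close to the judge's implicit 0.6 and 0.4

def bootstrapped_score(g, t):
    """The model of the judge: same weights, none of the noise."""
    return w1 * g + w2 * t
```

The model typically outpredicts the judge it was built from, because it applies the same weights every time.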

29
Q

Reliability vs validity

A

Rel = how much do 2 predictions correlate with each other? Val = how much does the prediction correlate with the outcome? Difficult to have validity without reliability, and reliability will be higher than validity.

30
Q

Why is human judgement unreliable?

A

1) Fatigue/distraction
2) Incidental factors (mood, framing, order, etc.)
3) Random error that leads to inconsistent weighting

31
Q

Steps - random statistical model

A

1) Determine sign of each regression coefficient
2) Standardize all variables
3) Choose each coefficient at random over the range 0 to 1
4) Use these coefficients to make predictions
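A sketch of these steps on made-up data: even random positive weights on standardized cues predict the outcome, because getting the signs and the variables right does most of the work.

```python
import random
import statistics

random.seed(4)

def standardize(xs):
    m, s = statistics.mean(xs), statistics.stdev(xs)
    return [(x - m) / s for x in xs]

def corr(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Hypothetical outcome driven by two cues, both with positive sign.
n = 500
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]
y = [a + b + random.gauss(0, 1) for a, b in zip(x1, x2)]

# Steps 1-4: signs known (+, +), standardize, draw random
# coefficients in [0, 1], predict.
z1, z2 = standardize(x1), standardize(x2)
c1, c2 = random.random(), random.random()
pred = [c1 * a + c2 * b for a, b in zip(z1, z2)]
print(corr(pred, y))  # strongly positive despite the random weights
```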

32
Q

Human judges often use?

A

Useless info

33
Q

Caveat to models

A

1) Data used must reflect the range of actual possibilities
2) Must have a quantifiable criterion
3) Stat models don't account for "soft" variables, which can matter
4) Diversity is difficult

34
Q

How to overcome algorithm aversion?

A

Give people some control - people are less tolerant of algorithm mistakes

35
Q

Methods of combining opinions

A

1) Group deliberations
2) Averaging opinions
3) Prediction markets
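Why simple averaging works (a minimal sketch with made-up numbers): independent, unbiased errors cancel, so the average guess beats the typical individual guess.

```python
import random
import statistics

random.seed(5)

truth = 100  # hypothetical true value (e.g., a weight being estimated)
guesses = [truth + random.gauss(0, 20) for _ in range(50)]  # independent, unbiased

avg_error = abs(statistics.mean(guesses) - truth)                 # error of the average guess
typical_error = statistics.mean(abs(g - truth) for g in guesses)  # average individual error
print(avg_error, typical_error)  # the average guess is far more accurate
```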

36
Q

Problems with discussion

A
  • Bias
  • Opinions are not voiced
  • Anchoring on one idea
  • Opinions are not independent
37
Q

Ingredients for eliciting information. Make sure groups are?

A
  • Knowledgeable
  • Diverse/unbiased
  • Independent
  • Motivated to contribute - remove the risk!
38
Q

Adding options can

A

Drastically change decisions!

39
Q

Prospect Theory

A

1) Losses weigh heavier than gains
2) Risk-averse in gains; risk-seeking in losses
3) people are sensitive to arbitrary ref. points

40
Q

Rules for Gains/Losses

A

Integrate losses
Segregate gains
Integrate small losses with large gains

41
Q

Endowment effect

A

Selling prices tend to be ~2x greater than buying prices

42
Q

Probability weighting

A

people overweight very small probabilities (.01), but underweight mid-high probabilities (.8-.9)
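One standard way to formalize this card is the Tversky-Kahneman (1992) probability weighting function; gamma = 0.61 is their median estimate for gains.

```python
def weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

print(weight(0.01))  # ~0.055: a 1% chance is felt as ~5.5%
print(weight(0.90))  # ~0.71: a 90% chance is felt as ~71%
```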

43
Q

Steps for designing interventions

A

ID the specific behavior
Ask why this is happening
Ask what environmental factors influence it
Change the environment

44
Q

Why defaults work/matter?

A

1) Mechanical reasons (decision costs, forgetting, etc.)
2) Belief-based: social norm, recommendation
3) Psychological reasons: risk aversion, regret, lack of reasons for making an active choice

45
Q

Why don’t people act to help at times?

A

Diffusion of responsibility