Bayes II Flashcards

1
Q

At which level does Bayesian estimation (with theta) operate?

A

At the level of the parameter.

2
Q

uninformative model

A

reflects the idea that all values of the proportion are equally likely

3
Q

Is Bayesian statistics (as used here) about means or proportions?

A

proportions

4
Q

What does a right-skewed distribution show?

A

That values below 0.5 are more plausible.

5
Q

What does a left-skewed distribution show?

A

That values above 0.5 are more plausible.

6
Q

marginal likelihood

A

The average quality of the model's predictions, over all parameter values.

7
Q

likelihood

A

quality of the prediction for this specific value

8
Q

And via the marginal likelihood and the likelihood, what do we look at?

A

How well each value predicts the data, compared with the model as a whole. Values that predict well get a boost; values that do not predict well get no boost.

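Side note: a minimal Python sketch of the likelihood vs. marginal-likelihood idea. The data (8 successes out of 10) and the uniform grid prior are assumptions for illustration, not course numbers.

```python
import numpy as np
from scipy.stats import binom

# Assumed example data: 8 successes out of 10 trials (illustration only)
k, n = 8, 10

# Grid of theta values with a uniform (uninformative) prior
theta = np.linspace(0.001, 0.999, 999)
prior = np.ones_like(theta) / len(theta)

# Likelihood: quality of the prediction for each specific theta value
likelihood = binom.pmf(k, n, theta)

# Marginal likelihood: average quality of the predictions over all values
marginal_likelihood = np.sum(prior * likelihood)

# Values whose likelihood beats the marginal likelihood get a boost (ratio > 1)
print(binom.pmf(k, n, 0.8) / marginal_likelihood)  # ~3.3   -> boost
print(binom.pmf(k, n, 0.2) / marginal_likelihood)  # ~0.0008 -> no boost
```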
9
Q

L > M mnemonic

A

L comes earlier in the alphabet than M, so L > M (the likelihood must be larger than the marginal likelihood for a value to get a boost).

10
Q

In what kind of graph can you read off the marginal likelihood?

A

likelihood on the y-axis
number of successes on the x-axis

11
Q

How do you do a model comparison (= Bayes factor)?

A

marginal likelihood of model 1 / marginal likelihood of model 2

→ the data are … times more likely under model 1 than under model 2

12
Q

What is the Bayes factor?

A

Marginal likelihood of model 1 / marginal likelihood of model 2!

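Side note: a tiny numeric illustration of this ratio; the marginal likelihoods 0.08 and 0.02 are made-up numbers.

```python
# Made-up marginal likelihoods for two models (illustration only)
marginal_likelihood_m1 = 0.08
marginal_likelihood_m2 = 0.02

bayes_factor_12 = marginal_likelihood_m1 / marginal_likelihood_m2
print(bayes_factor_12)  # 4.0 -> the data are 4 times more likely under model 1
```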
13
Q

At which level does Bayesian hypothesis testing (with H1) operate?

A

At the level of the hypothesis.

14
Q

What is the Bayesian estimation formula?

A

P(θ | data) = P(θ) × P(data | θ) / P(data)

Know this WELL!!!

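Side note: a minimal grid sketch of this equation in Python, assuming a uniform prior and binomial data of 8 successes out of 10 (illustrative numbers only).

```python
import numpy as np
from scipy.stats import binom

k, n = 8, 10                                # assumed example data
theta = np.linspace(0.001, 0.999, 999)      # grid of parameter values
prior = np.ones_like(theta) / len(theta)    # P(theta): uniform prior
likelihood = binom.pmf(k, n, theta)         # P(data | theta)
marginal = np.sum(prior * likelihood)       # P(data)

posterior = prior * likelihood / marginal   # P(theta | data)
print(posterior.sum())                      # ~1.0, a proper distribution
print(theta[np.argmax(posterior)])          # most plausible value, ~0.8
```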
15
Q

What is the formula for Bayesian hypothesis testing?

A

p(H1 | data) / p(H0 | data) = [p(H1) / p(H0)] × [p(data | H1) / p(data | H0)]

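Side note: a worked one-liner with assumed numbers (prior odds of 1, i.e. both hypotheses equally plausible beforehand, and BF10 = 5).

```python
# Assumed numbers for illustration
prior_odds = 1.0         # p(H1) / p(H0)
bayes_factor_10 = 5.0    # p(data | H1) / p(data | H0)

posterior_odds = prior_odds * bayes_factor_10   # p(H1 | data) / p(H0 | data)
print(posterior_odds)    # 5.0 -> H1 is now 5 times more plausible than H0
```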
16
Q

prior odds = (formula)

A

p(H1)/p(H0)

17
Q

What do the prior odds show?

A

How plausible one hypothesis is compared with another hypothesis, before seeing the data!

18
Q

E.g. what are the prior odds if you think Ha is 5 times more likely? And what if H0 is 5 times more likely?

A

First option: prior odds = 5
Second option: prior odds = 0.20

19
Q

predictive updating factor interpretation

A

how well did the alternative hypothesis predict the data, compared to how well the null hypothesis predicted the data?

= the same as the Bayes factor

20
Q

From which level to which level do you go during hypothesis generation?

A

theory to prediction -> deduction

21
Q

when is the prior distribution truncated

A

If the hypothesis is one-sided.
Then all the values for which H1 predicts nothing get a density equal to 0.

22
Q

Predictive updating factor in hypothesis testing, compared with the estimation equation

A

Marginal likelihood (of H1) / likelihood (at the single value tested by H0)

So this is the other way around compared with the Bayesian estimation equation!!!!

23
Q

Predictive updating factor formula

A

p(data|H1) / p(data|H0)

24
Q

Explanation of the PUF formula

A

average likelihood over all values predicted by H1 / average likelihood across all values predicted by H0

25
Q

Savage-Dickey density ratio

A

prior density / posterior density

26
Q

interpretation of Savage-Dickey ratio

A

prior > posterior: evidence for H1

posterior > prior: evidence for H0

So for evidence for H1, the posterior density must be LOWER.

27
Q

So which density must be lower for evidence for H1?

A

The posterior density must be lower than the prior density.

28
Q

The PUF in hypothesis testing is also called

A

BF10

29
Q

If they both take the same step from a different starting point…

A

the BF is still the same

30
Q

Interpretation of BF10 = 20

A

the data are 20 times more likely under H1 than under H0

31
Q

Interpretation of BF10 = 1

A

the data are equally likely under H1 and under H0

32
Q

BF 1-3

A

anecdotal

33
Q

BF 3-10

A

moderate

34
Q

BF 10-30

A

strong

35
Q

BF 30-100

A

very strong

36
Q

BF > 100

A

extreme

37
Q

So what are the classification names?

A

anecdotal - moderate - strong - very strong - extreme

38
Q

How do you construct the posterior distribution?

A

a = a + number of successes in the observed data
b = b + number of failures in the observed data

So suppose you start with a = 1, b = 1 and you observe 4 successes and 2 failures:
a = 5, b = 3

39
Q

At which theta value do you look for Savage-Dickey?

A

0.5 → compare the prior vs the posterior density there
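Side note: a short Python sketch combining the two cards above, using the same illustrative numbers (Beta(1, 1) prior, 4 successes, 2 failures) and checking the Savage-Dickey ratio at theta = 0.5.

```python
from scipy.stats import beta

# Prior Beta(a, b) and observed data (numbers from the card above)
a, b = 1, 1
successes, failures = 4, 2

# Posterior: a = a + successes, b = b + failures  ->  Beta(5, 3)
a_post, b_post = a + successes, b + failures

# Savage-Dickey: compare prior and posterior density at theta = 0.5
prior_density = beta.pdf(0.5, a, b)                # 1.0 for Beta(1, 1)
posterior_density = beta.pdf(0.5, a_post, b_post)  # ~1.64

bf10 = prior_density / posterior_density           # prior / posterior, ~0.61
print(bf10)  # < 1: posterior is higher than prior at 0.5, so evidence for H0 here
```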

40
Q

What happens when you test two-sided?

A

If you test two-sided, you spread out your bets.

Since the marginal likelihood is weighted (averaged) over all values, the likelihood of theta is then compared with more values.

So you get less of a boost for the values that predicted the data better than average.

→ the marginal likelihood becomes lower than with one-sided testing!
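Side note: a Python sketch of the "spread your bets" idea, comparing the marginal likelihood under a two-sided uniform prior with one truncated to theta > 0.5, for assumed data of 8 successes out of 10.

```python
import numpy as np
from scipy.stats import binom

k, n = 8, 10                                    # assumed example data
theta = np.linspace(0.001, 0.999, 999)
likelihood = binom.pmf(k, n, theta)

# Two-sided: uniform prior over all theta values
prior_two_sided = np.ones_like(theta) / len(theta)
ml_two_sided = np.sum(prior_two_sided * likelihood)

# One-sided: prior truncated to theta > 0.5 (density 0 elsewhere), renormalised
prior_one_sided = np.where(theta > 0.5, 1.0, 0.0)
prior_one_sided /= prior_one_sided.sum()
ml_one_sided = np.sum(prior_one_sided * likelihood)

# When the data fall on the predicted side, the one-sided marginal likelihood
# is higher, i.e. the two-sided one is lower (~0.09 vs ~0.18 here)
print(ml_two_sided, ml_one_sided)
```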

41
Q

So what is lower with two-sided testing compared to one-sided?

A

marginal likelihood

42
Q

How do you see in the graph whether it is one-sided or two-sided?

A

One-sided: truncated, the graph suddenly jumps up.
Two-sided: a straight (flat) line, all values have the same density.

43
Q

parsimony

A

When both models predict the data equally well, but one (the one-sided model) is more specific and therefore receives more winnings = a higher marginal likelihood.

44
Q

What is the interpretation of a high BF10?

A

It does not immediately mean that H1 is correct, only that it is at least better than H0! So it really is all very relative.

45
Q

In the Bayesian framework we keep updating our beliefs.
It does not matter if we update them all at once, or one data point at a time ("Today's posterior is tomorrow's prior").

A

OK

46
Q

sequential analysis

A

plot how the BF evolves as we accumulate knowledge
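Side note: a minimal sequential-analysis sketch, assuming a made-up sequence of successes/failures and a one-sided H1 (theta > 0.5, uniform on that range) against H0: theta = 0.5; BF10 is recomputed after every new data point.

```python
import numpy as np
from scipy.stats import binom

# Made-up data sequence: 1 = success, 0 = failure (illustration only)
data = [1, 1, 0, 1, 1, 1, 0, 1]

theta = np.linspace(0.501, 0.999, 500)    # grid for one-sided H1: theta > 0.5
prior = np.ones_like(theta) / len(theta)

bf10_over_time = []
for i in range(1, len(data) + 1):
    k, n = sum(data[:i]), i
    ml_h1 = np.sum(prior * binom.pmf(k, n, theta))  # marginal likelihood under H1
    ml_h0 = binom.pmf(k, n, 0.5)                    # likelihood under H0: theta = 0.5
    bf10_over_time.append(ml_h1 / ml_h0)

print(np.round(bf10_over_time, 2))  # plot these values against n to see the BF evolve
```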

47
Q

Bayesian Hypothesis testing is another form of updating beliefs: we
compare the predictions made by 2 different hypotheses (or,
models) to update our beliefs about which hypothesis is better

A

OK

48
Q

The Bayes factor is central: it is the predictive updating factor of our
beliefs about hypotheses.

It is the ratio of each hypothesis’ “predictive quality”, measured by
their marginal likelihoods: the average likelihood of all values of the
parameter predicted by each respective hypothesis.

A

OK

49
Q

The Bayes factor is a relative metric! Both hypotheses can predict
very poorly: the Bayes factor tells you which did the least poorly
The Bayes factor can be monitored as evidence accumulates
We can investigate the effects of the prior distribution

A

oke