Lecture 5 Flashcards

1
Q

bayesian inference =

A

the outcome of a learning process that is governed by relative predictive success

2
Q

bayesian learning cycle

A

prior knowledge → prediction → data → prediction error → knowledge update → prior knowledge → …

3
Q

what does theta always represent

A

the hypothesis: something that is unknown and that we wish to learn about

4
Q

what is bayes rule

A

p(θ | data) = p(θ) × p(data | θ) / p(data)

posterior beliefs about parameters = prior beliefs about parameters × predictive updating factor
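A minimal numeric sketch of Bayes' rule, assuming a made-up coin-flip example with theta restricted to a small grid of candidate values (the grid, the prior, and the data are invented for illustration, not taken from the lecture):

```python
import numpy as np
from scipy.stats import binom

theta = np.array([0.25, 0.50, 0.75])     # candidate values of theta (assumed grid)
prior = np.array([1/3, 1/3, 1/3])        # p(theta): uniform prior beliefs
k, n = 8, 10                             # assumed data: 8 successes in 10 trials

likelihood = binom.pmf(k, n, theta)      # p(data | theta) for each candidate value
p_data = np.sum(prior * likelihood)      # p(data): likelihood averaged over the prior
posterior = prior * likelihood / p_data  # p(theta | data) = p(theta) * p(data | theta) / p(data)
print(posterior)                         # posterior beliefs, summing to 1
```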

5
Q

which 2 components make up bayes rule

A

support and predictive success

6
Q

formula for support =

A

p(θ | data) / p(θ)

7
Q

formula for predictive success =

A

p(data | θ) / p(data)

8
Q

what do you compare for support

A

what you believed about theta before seeing the data versus what you know about theta after seeing the data.

if the data increased the plausibility of a value of theta, the data provided support for that value; if not, there is less support for it.

9
Q

what do you look at for predictive success

A

what is the probability of the observed data? how surprising are the observed data on average versus how surprising are they given a particular value of theta?

10
Q

what does bayes rule say

A

that support is the same as predictive success
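A small check, using the same made-up grid example as in the Bayes' rule sketch above, that support and predictive success are numerically identical for every value of theta:

```python
import numpy as np
from scipy.stats import binom

theta = np.array([0.25, 0.50, 0.75])          # assumed grid of candidate values
prior = np.full(3, 1/3)                       # p(theta)
likelihood = binom.pmf(8, 10, theta)          # p(data | theta) for 8 successes in 10 trials
p_data = np.sum(prior * likelihood)           # p(data)
posterior = prior * likelihood / p_data       # p(theta | data)

support = posterior / prior                   # what happened to our beliefs
predictive_success = likelihood / p_data      # how well each theta predicted the data
print(np.allclose(support, predictive_success))  # True: Bayes' rule equates the two
```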

11
Q

wat is surprise in statistics

A

a bad thing, you want the data to be predictable

12
Q

so less surprising data =

A

predictive success, which means there is support

13
Q

what is the difference between support and predictive success

A

support = what happens to our beliefs
predictive success = how surprising are the data, how good was our prediction?

14
Q

what is a mnemonic for this

A

surprise lost is credibility gained.

15
Q

what is the dotted line

A

prior distribution on theta
often a flat line because it is convenient, meaning all values are equally likely

16
Q

what is the solid (non-dotted) line

A

posterior distribution

17
Q

what do you see when you compare the prior and the posterior, for example

A

that the values below …. (where the dotted and solid lines cross) have become less likely, and the values that lie between those crossing points have become more likely

18
Q

the area under the curve…

A

needs to be one: so if one area becomes less likely, another area must become more likely

19
Q

in a continuous distribution, theta can take on any value from …

A

0 to 1

20
Q

what is the probability in a continuous distribution

A

not the height, but the area under the curve
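A hedged sketch of that distinction, assuming a Beta(9, 3) posterior (for example 8 successes and 2 failures on a uniform prior; the numbers are invented for illustration):

```python
from scipy.stats import beta

posterior = beta(9, 3)                          # assumed continuous posterior for theta
print(posterior.cdf(1.0) - posterior.cdf(0.5))  # probability that theta > 0.5: an area under the curve
print(posterior.pdf(0.75))                      # the height at 0.75 is a density, not a probability
```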

21
Q

what does the height tell you

A

it does not really have an interpretation on its own. but you can look at how plausible a value was a priori and then how plausible it is a posteriori. the ratio between these two → how much more likely is that value in the posterior? that is the increase in our belief

22
Q

how do you compute that ratio

A

the upper value divided by the lower one:
posterior / prior
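A short sketch of that ratio, again assuming a uniform Beta(1, 1) prior and a made-up Beta(9, 3) posterior; the evaluation point 0.75 is an arbitrary choice:

```python
from scipy.stats import beta

prior = beta(1, 1)        # uniform prior on theta (the flat dotted line)
posterior = beta(9, 3)    # assumed posterior after 8 successes and 2 failures
value = 0.75
print(posterior.pdf(value) / prior.pdf(value))  # how much more plausible theta = 0.75 has become
```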

23
Q

meaning of p(data)

A

probability of the data
if you assume a uniform prior distribution (any proportion is equally likely a priori), then you predict that every number of successes is also equally likely. so all outcomes are equally likely a priori if you assume that all values of theta are equally likely a priori: your predictions are very broad

24
Q

if you assume a specific theta (such as 0.50)…

A

you get much more specific predictions, peaked around e.g. 0.5.
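A sketch contrasting the two prediction cards above: under a uniform prior every number of successes out of n is (up to numerical integration error) equally likely, while a fixed theta = 0.5 concentrates the predictions; n = 10 is an assumed example:

```python
import numpy as np
from scipy.stats import binom
from scipy.integrate import quad

n = 10
ks = np.arange(n + 1)

# p(k successes) under a uniform prior on theta: average the likelihood over [0, 1]
p_uniform = [quad(lambda t, k=k: binom.pmf(k, n, t), 0, 1)[0] for k in ks]
print(np.round(p_uniform, 3))              # all about 1/11: very broad predictions

# p(k successes) under a fixed theta = 0.5: sharply peaked around k = 5
print(np.round(binom.pmf(ks, n, 0.5), 3))
```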

25
Q

how would you explain p(θ | data) / p(θ)

A

you compare the prior distribution to the posterior distribution, and then you can see how your beliefs have changed.

26
Q

principle of parsimony

A

preferring simple explanations over complex ones, unless the data force you to abandon the simple explanation

27
Q

formula for posterior belief about theta =

A

p(θ | data)

28
Q

formula for prior belief about theta =

A

p(θ)

29
Q

formula for change in belief =

A

p(θ | data) / p(θ)

30
Q

formula for predictive adequacy of theta =

A

p(data | θ)

31
Q

formula for average predictive adequacy =

A

p(data)

32
Q

formula for relative predictive adequacy of theta =

A

p(data | θ) / p(data)

33
Q

with what kind of distribution do you know more

A

with a peaked, narrower distribution

34
Q

if the posterior distribution is narrower than the prior distribution, this indicates that the data have … the uncertainty

A

reduced

35
Q

ratio a=

A

the probability of x being lower than [hypothesis], before you have seen the data

36
Q

ratio b=

A

the probability of x being lower than [hypothesis], after you have seen the data

37
Q

c =

A

the single most believable number, the highest point, the most likely value

38
Q

d=

A

how much more plausible is value c compared to d (= the hypothesized value)?
ratio d indicates that the value of …. is … times more probable than the value of [hypothesis]

39
Q

interval e=

A

a central 95% credible interval which indicates that one can be 95% confident (the posterior probability is 95%) that Bob's true IQ falls within the interval ranging from … to …
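A minimal sketch of such an interval; the normal posterior with mean 105 and sd 7 for Bob's IQ is invented for illustration, not the lecture's actual numbers:

```python
from scipy.stats import norm

posterior = norm(loc=105, scale=7)               # assumed posterior for Bob's IQ
lower, upper = posterior.ppf(0.025), posterior.ppf(0.975)
print(round(lower, 1), round(upper, 1))          # central 95% credible interval
```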

40
Q

what does it mean when a model underfits the data

A

it is too simple: it fails to capture the patterns in the data

41
Q

what is the consensus about variation

A

variation is random until the contrary is shown

42
Q

simple models tend to make..

A

precise predictions
complex models need to spread out their predictions

43
Q

which model is better supported by the data

A

the model that predicted the data best

44
Q

formula for posterior beliefs about hypotheses =

A

p(H1|data)/p(H0|data)

45
Q

why is the predictive updating factor so important

A

because it is the only part everyone can agree on: everyone's prior and posterior beliefs may differ, but we can agree on which model predicted the data better

46
Q

so what is the predictive updating factor

A

which model predicted the data best
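A minimal numeric sketch (numbers made up, not from the lecture) of how the predictive updating factor connects prior and posterior beliefs about two models: posterior odds = prior odds × Bayes factor. Two researchers with different prior odds can still agree on the Bayes factor itself.

```python
prior_odds = 1.0        # p(H1) / p(H0) before seeing the data (researchers may disagree here)
bayes_factor = 4.0      # p(data | H1) / p(data | H0): assumed, H1 predicted the data 4x better
posterior_odds = prior_odds * bayes_factor   # p(H1 | data) / p(H0 | data)
print(posterior_odds)
```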

47
Q

bf 1-3

A

anecdotal

48
Q

bf 3-10

A

moderate

49
Q

bf 10-30

A

strong

50
Q

bf 30-100

A

very strong

51
Q

bf >100

A

extreme

52
Q

4 advantages of the bayes factor

A
  • quantifies evidence instead of forcing an all-or-nothing decision (you can simply report a Bayes factor of 3 and let people interpret for themselves how convincing that is)
  • discriminates ‘evidence of absence’ from ‘absence of evidence’
  • allows evidence to be monitored as data accumulate
  • applies to data from the real world, for which no sampling plan can be articulated
53
Q

evidence of absence =

A

there is evidence for the null hypothesis: the data are much more likely under the null hypothesis.

54
Q

absence of evidence =

A

bayes factor near one: there is no evidence either way (roughly comparable to a non-significant result)

55
Q

Allows evidence to be monitored as data accumulate: what is different here compared to frequentism?

A

frequentism: with more data you also have to adjust the alpha level and so on; with bayes you are simply learning from the data

56
Q

what is the interpretation of BF01 = 3

A

the observed data are 3 times more likely under the null hypothesis than under H1

57
Q

so what is NOT a good interpretation of BF01 = 3

A

after seeing the data, H0 is now 3 times more likely than H1. this is only correct when H0 and H1 were equally likely a priori
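A hedged numeric check of the card above (the prior probabilities are made up): the same BF01 = 3 leads to different posterior beliefs about H0 depending on the prior odds, so "H0 is now 3 times more likely" only follows when the hypotheses started out equally likely.

```python
bf01 = 3.0   # the data are 3 times more likely under H0 than under H1

for prior_h0 in (0.5, 0.2):                       # equal vs. skeptical prior probability of H0 (assumed)
    prior_odds = prior_h0 / (1 - prior_h0)        # p(H0) / p(H1)
    posterior_odds = prior_odds * bf01            # p(H0 | data) / p(H1 | data)
    posterior_h0 = posterior_odds / (1 + posterior_odds)
    print(prior_h0, round(posterior_h0, 2))       # 0.75 only when the hypotheses started 50/50
```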

58
Q

which error goes with confusing those different interpretations of BF01 = 3

A

the fallacy of transposing the conditional: Pr(A|B) ≠ Pr(B|A)

59
Q

so what is the overarching idea of the bayes factor

A

the bayes factor quantifies relative predictive performance: it shows the degree to which the data should shift your opinion, but it does not quantify that opinion itself!

60
Q

Every confirmatory instance should increase
your confidence in the general law

A

okay

61
Q

Simple models make daring
predictions; if these come
true, the simple model is
rewarded.

A

okay

62
Q

if all n subjects show a success, what is the bayes factor in favor of the general law

A

n + 1
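A small sketch of where the n + 1 comes from, under the assumption (following the "general law" cards above) that the law corresponds to theta = 1 and the alternative spreads its predictions with a uniform prior on theta; n = 10 is an arbitrary choice:

```python
from scipy.integrate import quad

n = 10                                            # assumed number of confirmatory observations
p_data_law = 1.0                                  # p(data | theta = 1): the law predicts every success
p_data_vague = quad(lambda t: t ** n, 0, 1)[0]    # p(data | uniform prior) = integral of theta^n = 1 / (n + 1)
print(p_data_law / p_data_vague)                  # Bayes factor for the law: approximately n + 1 = 11
```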