Chapter 3 Monte Carlo Approximation Flashcards

1
Q

Explain the concept of Monte Carlo approximation

A

Monte Carlo experimentation is the use of simulated random numbers to estimate functions of a probability distribution. It allows us to approximate quantities of interest that are hard to calculate exactly.

2
Q

How does the Monte Carlo method approximate the posterior?

A

We sample S independent theta values from the posterior distribution. The empirical distribution of these S samples approximates the posterior; the larger the sample size, the more accurate the approximation.
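A minimal sketch of this idea, assuming an illustrative Beta(a + y, b + n - y) posterior with a = 1, b = 1, y = 12 successes in n = 20 trials (all values hypothetical, chosen only for demonstration):

```r
set.seed(42)
S <- 10000                        # number of independent posterior draws
theta <- rbeta(S, 1 + 12, 1 + 8)  # S samples from the Beta(13, 9) posterior

# The empirical mean approximates the exact posterior mean 13/22;
# the error shrinks as S grows.
mc_mean <- mean(theta)
exact_mean <- 13 / 22
abs(mc_mean - exact_mean)
```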

3
Q

What should you be cautious of when dealing with tail probabilities estimated by Monte Carlo inference?

A

If events are very rare, a lot of data may be needed to estimate the probability accurately, as tail probabilities are often small.

4
Q

What theorem/result allows us to treat quantities approximated by Monte Carlo inference as approximately equal to the exact posterior values?

A

The law of large numbers

5
Q

What effect does the sample size have on Monte Carlo inference?

A

The bigger the sample size, the more accurate the approximation: it gets closer to the true value.

6
Q

How do we estimate credible intervals using Monte Carlo simulation?

A

Using rbeta() to sample from the posterior and then applying the quantile() function to the samples.
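A sketch of the simulated interval, again assuming an illustrative Beta(13, 9) posterior:

```r
set.seed(1)
theta <- rbeta(10000, 13, 9)      # draws from the beta posterior

# 90% equal-tail credible interval estimated from the draws
quantile(theta, c(0.05, 0.95))
```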

7
Q

What are the parameters of the quantile() function?

A

quantile(sample, c(q1,q2))

8
Q

How can the exact solution of a beta credible interval be found?

A

beta_interval(% of interval, c(a, b), color = crcblue)
Or
qbeta(quantile, a, b)
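For the same illustrative Beta(13, 9) posterior, the exact qbeta() interval can be compared directly with the Monte Carlo one; the two agree closely for large sample sizes:

```r
# exact 90% equal-tail interval from the beta quantile function
exact <- qbeta(c(0.05, 0.95), 13, 9)

# Monte Carlo approximation of the same interval
set.seed(1)
approx <- quantile(rbeta(10000, 13, 9), c(0.05, 0.95))
```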

9
Q

What is the interpretation of the posterior odds for a beta posterior?

A

For a beta posterior, with odds o = theta/(1 - theta): if o > 1 then theta > 0.5, and if o < 1 then theta < 0.5.

10
Q

Why are the posterior odds much easier to calculate using Monte Carlo inference?

A

The odds distribution p(o|y) has a very complicated formula, so direct calculation is tricky; simulation sidesteps it.

11
Q

How do we find p(o|y)?

A

Substitute theta in terms of the odds into the posterior distribution p(theta | y): theta = o/(1 + o). Sub this into the posterior and multiply by dtheta/do (the change-of-variables Jacobian).
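Written out, the standard change-of-variables step is:

```latex
\theta = \frac{o}{1+o}, \qquad
\frac{d\theta}{do} = \frac{1}{(1+o)^2}, \qquad\text{so}\qquad
p(o \mid y) = p_\theta\!\left(\frac{o}{1+o} \,\middle|\, y\right)\frac{1}{(1+o)^2}.
```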

12
Q

How do you draw 1000 independent samples from p(o|y), the posterior distribution of the odds?

A

Draw 1000 theta samples from the beta posterior, then transform each one to the odds scale: odds = theta/(1 - theta).

13
Q

If the variable odds is a sample from p(o|y), how do you estimate the posterior probability that the odds are less than 1 in Monte Carlo inference?

A

mean(odds<1)

14
Q

If the variable odds is a sample from p(o|y), how do you estimate the posterior mean?

A

mean(odds)

15
Q

If the variable odds is a sample from p(o|y), how do you estimate a 95% posterior credible interval?

A

quantile(odds, c(0.025,0.975))
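The three odds summaries from the cards above, in one sketch (illustrative Beta(13, 9) posterior):

```r
set.seed(1)
theta <- rbeta(10000, 13, 9)
odds <- theta / (1 - theta)

mean(odds < 1)                    # P(o < 1 | y), i.e. P(theta < 0.5 | y)
mean(odds)                        # posterior mean of the odds
quantile(odds, c(0.025, 0.975))   # 95% credible interval for the odds
```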

16
Q

What is the exact posterior predictive distribution for a beta prior and binomial likelihood (beta posterior), and why is Monte Carlo normally used instead?

A

Y_tilde | Y = y is Beta-Binomial(m, a + y, b + n - y)

This is a complex distribution, so Monte Carlo approximation is usually easier.
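The beta-binomial pmf can be written directly in base R (no extra packages); a sketch with the illustrative posterior Beta(13, 9) and future sample size m = 10:

```r
# pmf of Beta-Binomial(m, a, b): choose(m, k) * B(a + k, b + m - k) / B(a, b)
dbetabinom <- function(k, m, a, b) {
  choose(m, k) * beta(a + k, b + m - k) / beta(a, b)
}

probs <- dbetabinom(0:10, 10, 13, 9)
sum(probs)   # sums to 1, as a valid pmf should
```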

17
Q

What is the R command to find the beta-binomial posterior predictive distribution?

A

prob <- pbetap(c(a + y, b + n - y), m, 0:m), using pbetap() from the LearnBayes package, which stores the predictive probabilities in prob

18
Q

What is the R command to find the set of values of the beta-binomial posterior predictive distribution that covers a certain probability mass?

A

discint(pred_distribution, 0.9)
or
discint(pred_distribution, 0.8)
etc

19
Q

How do we simulate the posterior predictive distribution in R?

A

Draw theta samples from the beta posterior, then draw from the binomial likelihood using each sampled theta as the success probability.
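The two-step simulation as a sketch (illustrative posterior Beta(13, 9), future sample size m = 10):

```r
set.seed(1)
S <- 10000
theta <- rbeta(S, 13, 9)          # step 1: draw theta from the posterior
y_tilde <- rbinom(S, 10, theta)   # step 2: draw y_tilde | theta ~ Binomial(10, theta)
# y_tilde is now a sample from the posterior predictive distribution
```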

20
Q

Explain model checking

A

Does the model fit the data well? Key idea: the observed data should be similar, in some sense, to the data predicted from the Bayesian model.

21
Q

What do we assume in our Bayesian prediction by the Monte Carlo method to carry out model checking?

A

Sample theta from the Beta(a + y, b + n - y) distribution, then use this theta to sample y_tilde from Binomial(m, theta). We require m = n so we can check whether the observed data are similar to predicted simulated data of the same size.

22
Q

What conclusion should be drawn from model checking

A

If the observed data y are different from the samples of y_tilde from the posterior predictive distribution, we have evidence that the Bayesian model is not a good fit for the data.

23
Q

Discuss the use of a histogram for model checking

A

Plot a histogram of the samples of y_tilde; the observed y value should sit near the middle of the graph. If it does, it is consistent with the simulated replicate data from the predictive distribution.

24
Q

Discuss the use of tail probabilities to comment on model checking

A

If P(y > y_tilde | y) or 1 - P(y > y_tilde | y) is very small, it suggests the model does not describe y very well.
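A sketch of such a check with the running illustrative numbers (observed y = 12 of n = 20, posterior Beta(13, 9), replicate size m = n = 20):

```r
set.seed(1)
theta <- rbeta(10000, 13, 9)
y_rep <- rbinom(10000, 20, theta)   # replicate data sets of size m = n = 20

mean(y_rep >= 12)   # upper-tail probability at the observed y
mean(y_rep <= 12)   # lower-tail probability
# neither tail probability should be very small for a well-fitting model
```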

25
Q

What is the basic requirement to be able to use Monte Carlo?

A

We need to be able to sample independently from the posterior distribution.