Chapter 2 Bayesian Inference Flashcards

1
Q

What are the qualities/assumptions of a binomial experiment

A
  1. The experiment is repeated for n trials
  2. Each trial outcome is binary - either a success or a failure
  3. The probability of success, p, is the same for each trial
  4. The trials are independent of each other
2
Q

What is the likelihood

A

The probability of the observed data, viewed as a function of the parameter theta

3
Q

How do we denote the prior of theta

A

p(theta)

4
Q

How do we denote the likelihood function for the data as a function of theta

A

p(y|theta)
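
For instance, with the binomial model, the likelihood of some hypothetical data (y = 7 successes in n = 10 trials) can be evaluated over a grid of theta values in R:
  # hypothetical data: y = 7 successes in n = 10 trials
  theta <- seq(0.01, 0.99, by = 0.01)
  lik <- dbinom(7, size = 10, prob = theta)   # p(y | theta) as a function of theta
  plot(theta, lik, type = "l")                # likelihood curve over theta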

5
Q

How do we denote the posterior of theta

A

p(theta|y)

6
Q

What is the relationship between the beta distribution and the uniform distribution

A

Beta(1,1) is the same as the uniform distribution on (0,1); in other words, the uniform distribution is a special case of the beta family

7
Q

How to find the pdf of Beta(a,b) in R

A

dbeta(theta,a,b)
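
A minimal sketch with made-up shape parameters a = 3 and b = 2:
  a <- 3; b <- 2                 # hypothetical shape parameters
  dbeta(0.5, a, b)               # density of Beta(3, 2) at theta = 0.5
  curve(dbeta(x, a, b), 0, 1)    # plot the whole pdf over (0, 1)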

8
Q

How to find the cdf of Beta(a,b) in R, that is, P(theta <= x)

A

pbeta(theta,a,b)
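
For example, with the same hypothetical Beta(3, 2):
  pbeta(0.5, 3, 2)   # P(theta <= 0.5) under Beta(3, 2)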

9
Q

How to generate a random sample of size 5 from the Beta(a,b) distribution

A

rbeta(5,a,b)
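
For example, with the same hypothetical Beta(3, 2):
  set.seed(1)       # for reproducibility
  rbeta(5, 3, 2)    # five random draws from Beta(3, 2)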

10
Q

How to find quantiles of the Beta(a,b) distribution in R, i.e. the value of theta whose cdf equals a given probability q

A

qbeta(q,a,b)
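
For example, with the same hypothetical Beta(3, 2):
  qbeta(0.25, 3, 2)            # 25th percentile of Beta(3, 2)
  qbeta(c(0.25, 0.75), 3, 2)   # 25th and 75th percentiles at once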

11
Q

What does beta.select() do and how to use it

A

Lets us specify two quantiles and finds the beta curve that matches those quantiles
Ex: beta.select(list(x=0.55, p=0.5), list(x=0.8, p=0.9)) returns 3.06, 2.56, so we choose a = 3.06 and b = 2.56
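
A sketch of the call above; it assumes the LearnBayes package (where beta.select() lives) is installed:
  library(LearnBayes)
  # prior median 0.55 and prior 90th percentile 0.8
  beta.select(list(p = 0.5, x = 0.55), list(p = 0.9, x = 0.8))
  # returns approximately 3.06 2.56, i.e. a = 3.06 and b = 2.56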

12
Q

What does beta_interval(x, shape_par=c(a,b)) do

A

beta_interval(0.5, shape_par=c(a,b)) plots the prior distribution and shades the middle 50% of its area; the interval endpoints are the 25th and 75th percentiles, which can also be calculated with the quantile function
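
A usage sketch, assuming beta_interval() comes from the ProbBayes package and using the hypothetical shape parameters from the previous card:
  library(ProbBayes)                 # assumed source of beta_interval()
  beta_interval(0.5, c(3.06, 2.56))  # shade the middle 50% of the Beta(3.06, 2.56) prior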

13
Q

What prior and sampling distribution give rise to a beta posterior

A

A beta prior and a binomial sampling distribution
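
A worked sketch of the update, with a hypothetical Beta(3, 2) prior and y = 7 successes in n = 10 trials; the posterior is Beta(a + y, b + n - y):
  a <- 3; b <- 2       # hypothetical prior Beta(a, b)
  n <- 10; y <- 7      # hypothetical data: y successes in n trials
  curve(dbeta(x, a + y, b + n - y), 0, 1)            # posterior: Beta(10, 5)
  curve(dbeta(x, a, b), 0, 1, add = TRUE, lty = 2)   # prior, for comparison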

14
Q

Define conjugate prior

A

A class of priors is conjugate for a sampling model p(y|theta) if the prior and the posterior are from the same class of distributions

15
Q

When plotted, what will be the difference between the prior and the posterior

A

The posterior will have smaller variance, since more information goes into forming the distribution. The prior mean and posterior mean can be quite similar if the prior assumptions were accurate

16
Q

How can we estimate the posterior mean from the sample proportion and the prior mean

A

The posterior mean lies between the prior mean and the sample proportion. For a Beta(a,b) prior and y successes in n trials, the posterior mean (a+y)/(a+b+n) is a weighted average of the prior mean a/(a+b) and the sample proportion y/n, with weights (a+b)/(a+b+n) and n/(a+b+n); a numeric check is sketched below
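
A quick numeric check with the same hypothetical prior Beta(3, 2) and data y = 7, n = 10:
  a <- 3; b <- 2; n <- 10; y <- 7
  (a + y) / (a + b + n)                  # posterior mean, here 10/15
  w <- (a + b) / (a + b + n)             # weight on the prior mean
  w * a / (a + b) + (1 - w) * y / n      # same value as a weighted average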

17
Q

When answering a hypothesis question, how do we check whether a claim about theta is a reasonable assumption

A

Find the posterior probability using the pbeta function, which gives P(theta <= x)
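
For example, to assess a claim such as theta <= 0.5 under the hypothetical Beta(10, 5) posterior from the earlier sketch:
  pbeta(0.5, 10, 5)   # posterior probability that theta <= 0.5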

18
Q

Define 95% Bayesian coverage for theta

A

This coverage is achieved by an interval [L(y), U(y)], based on the observed data Y = y, if P(L(y) < theta < U(y) | Y = y) = 0.95

19
Q

Define frequentist coverage

A

This coverage is achieved by a random interval [L(Y), U(Y)] if, before the data are observed, P(L(Y) < theta < U(Y) | theta) = 0.95

20
Q

What is the probability statement defining a 100x(1-alpha)% credible interval

A

P(theta is in the interval [theta_(alpha/2), theta_(1-alpha/2)] | Y = y) = 1 - alpha, where theta_(alpha/2) and theta_(1-alpha/2) are the alpha/2 and 1-alpha/2 posterior quantiles of theta
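
A sketch for a 95% credible interval (alpha = 0.05) under the same hypothetical Beta(10, 5) posterior:
  alpha <- 0.05
  qbeta(c(alpha / 2, 1 - alpha / 2), 10, 5)   # equal-tail 95% credible interval for theta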

21
Q

What is the difference between an HPD region and a credible interval

A

All points in the HPD region have higher posterior density than points outside the region, which is not necessarily the case for a credible interval. Also, the HPD region can consist of more than one interval if the posterior is multimodal

22
Q

What does HPD stand for

A

Highest posterior density

23
Q

Explain what the predictive distribution is

A

The conditional distribution of Y_tilde (which is a future unobserved value) given the data y_1, ..., y_n. It is written p(Y_tilde | y_1, ..., y_n)
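
For a single future Bernoulli trial with a beta posterior, the predictive probability of a success is the posterior mean of theta; a sketch with the hypothetical Beta(10, 5) posterior:
  a_post <- 10; b_post <- 5      # hypothetical posterior Beta(10, 5)
  a_post / (a_post + b_post)     # P(Y_tilde = 1 | y_1, ..., y_n)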

24
Q

What is significant about independence when considering Y_tilde

A

Y_tilde is not independent of the data. Once we have the data we learn more about theta, which in turn tells us more about Y_tilde