Lecture 2 Flashcards
What are the two types of priors?
What is a conjugate prior?
How do you derive the likelihood, posterior density, and posterior mean of a Bernoulli distribution?
On the exam the Bernoulli dist. formula is given
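A sketch of the standard conjugate derivation, assuming a Beta(a, b) prior (the usual conjugate choice; check the slides for the exact prior used in the lecture):

```latex
% Likelihood for n Bernoulli(\theta) observations with s = \sum_i y_i successes:
p(y \mid \theta) = \theta^{s}(1-\theta)^{n-s}
% Beta(a, b) prior kernel:
p(\theta) \propto \theta^{a-1}(1-\theta)^{b-1}
% Posterior kernel = prior kernel \times likelihood:
p(\theta \mid y) \propto \theta^{a+s-1}(1-\theta)^{b+n-s-1}
% i.e. \theta \mid y \sim \mathrm{Beta}(a+s,\; b+n-s), with posterior mean
E[\theta \mid y] = \frac{a+s}{a+b+n}
```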
How do you derive the post. kernel of a normal distribution with known variance for n = 1?
Assume a prior of N(m_prior, v_prior)
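A sketch of the n = 1 case, writing the known data variance as σ² (a notation assumption; the lecture may use a different symbol):

```latex
% Posterior kernel = prior kernel \times likelihood:
p(\mu \mid y) \propto \exp\!\Big(-\tfrac{(\mu - m_{prior})^2}{2 v_{prior}}\Big)
               \exp\!\Big(-\tfrac{(y - \mu)^2}{2\sigma^2}\Big)
% Completing the square in \mu gives a normal kernel with
v_{post} = \Big(\tfrac{1}{v_{prior}} + \tfrac{1}{\sigma^2}\Big)^{-1},
\qquad
m_{post} = v_{post}\Big(\tfrac{m_{prior}}{v_{prior}} + \tfrac{y}{\sigma^2}\Big)
```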
How do you derive the posterior when the prior is non-informative, i.e. v_prior → ∞?
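In this limit the prior carries no weight and the posterior reduces to the likelihood; a sketch, again writing the known variance as σ²:

```latex
% As v_{prior} \to \infty, 1/v_{prior} \to 0, so
v_{post} = \Big(\tfrac{1}{v_{prior}} + \tfrac{1}{\sigma^2}\Big)^{-1} \to \sigma^2,
\qquad
m_{post} = v_{post}\Big(\tfrac{m_{prior}}{v_{prior}} + \tfrac{y}{\sigma^2}\Big) \to y
% i.e. \mu \mid y \sim N(y, \sigma^2)
```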
How do you derive the post. kernel of a normal distribution with known variance for n > 1? Thus, what is the posterior mean?
With prior N(m_prior, v_prior).
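A sketch of the n > 1 result, writing the known variance as σ² and the sample mean as ȳ (notation assumed, not taken from the cards):

```latex
% The n observations contribute a normal kernel in \mu with mean \bar{y}
% and variance \sigma^2/n, so completing the square gives
v_{post} = \Big(\tfrac{1}{v_{prior}} + \tfrac{n}{\sigma^2}\Big)^{-1},
\qquad
m_{post} = v_{post}\Big(\tfrac{m_{prior}}{v_{prior}} + \tfrac{n\bar{y}}{\sigma^2}\Big)
```

The posterior mean is thus a precision-weighted average of the prior mean and the sample mean.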
Derive the likelihood of a normal distribution with mean μ and unknown variance 1/h (h is the precision).
Note the formula for the normal distribution is given on the exam
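A sketch of the likelihood for n i.i.d. observations from N(μ, 1/h):

```latex
p(y \mid \mu, h) = \prod_{i=1}^{n} \Big(\tfrac{h}{2\pi}\Big)^{1/2}
                   \exp\!\Big(-\tfrac{h}{2}(y_i - \mu)^2\Big)
                 \propto h^{n/2} \exp\!\Big(-\tfrac{h}{2}\sum_{i=1}^{n}(y_i - \mu)^2\Big)
```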
Derive the joint posterior density kernel of μ and h.
For a normal distribution with mean μ and unknown variance 1/h, under the prior p(μ, h) ∝ 1/h.
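A sketch: multiply the prior kernel 1/h by the likelihood kernel of the normal model with precision h:

```latex
p(\mu, h \mid y) \propto h^{-1} \cdot h^{n/2}
                  \exp\!\Big(-\tfrac{h}{2}\sum_{i=1}^{n}(y_i-\mu)^2\Big)
                = h^{n/2 - 1} \exp\!\Big(-\tfrac{h}{2}\sum_{i=1}^{n}(y_i-\mu)^2\Big)
```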
What is the kernel of the conditional posterior of h given mu (and y)?
Just give the formula for calculating it.
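A sketch: read the joint kernel as a function of h only, holding μ fixed, and recognize a Gamma kernel:

```latex
p(h \mid \mu, y) \propto h^{n/2 - 1}
    \exp\!\Big(-h \cdot \tfrac{1}{2}\sum_{i=1}^{n}(y_i-\mu)^2\Big)
% which is the kernel of
h \mid \mu, y \sim \mathrm{Gamma}\Big(\tfrac{n}{2},\;
    \text{rate} = \tfrac{1}{2}\sum_{i=1}^{n}(y_i-\mu)^2\Big)
```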
What is the Gibbs sampling algorithm?
The full algorithm might be somewhat unclear, so here is a simplification:
For i in range(n):
    Sample each parameter in θ from its conditional posterior, plugging in the most recent draws of the other parameters.
    E.g. for the normal model: sample μ from p(μ|h, y) using the previous draw of h,
    then sample h from p(h|μ, y) using the newly drawn μ.
end
Our estimate of θ is then the mean of the draws (after burn-in).
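A minimal runnable sketch of this Gibbs sampler for the normal model with unknown mean μ and precision h, assuming the improper prior p(μ, h) ∝ 1/h, under which the full conditionals are μ|h, y ~ N(ȳ, 1/(n·h)) and h|μ, y ~ Gamma(n/2, rate = ½Σ(yᵢ − μ)²). The function name and defaults are illustrative, not from the lecture:

```python
import math
import random


def gibbs_normal(y, n_iter=5000, burn_in=1000, seed=0):
    """Gibbs sampler for y_i ~ N(mu, 1/h) under the prior p(mu, h) ∝ 1/h.

    Full conditionals (standard results for this model):
      mu | h, y ~ N(ybar, 1/(n*h))
      h  | mu, y ~ Gamma(n/2, rate = sum((y_i - mu)^2)/2)
    Returns the posterior-mean estimates of (mu, h) after burn-in.
    """
    rng = random.Random(seed)
    n = len(y)
    ybar = sum(y) / n
    mu, h = ybar, 1.0          # arbitrary starting values
    draws = []
    for i in range(n_iter):
        # Sample mu from p(mu | h, y), using the previous draw of h.
        mu = rng.gauss(ybar, math.sqrt(1.0 / (n * h)))
        # Sample h from p(h | mu, y), using the newly drawn mu.
        rate = sum((yi - mu) ** 2 for yi in y) / 2.0
        # random.gammavariate takes (shape, scale); scale = 1/rate.
        h = rng.gammavariate(n / 2.0, 1.0 / rate)
        if i >= burn_in:       # discard the burn-in draws
            draws.append((mu, h))
    mu_hat = sum(d[0] for d in draws) / len(draws)
    h_hat = sum(d[1] for d in draws) / len(draws)
    return mu_hat, h_hat
```

On simulated data from N(2, 1), `mu_hat` should land near 2 and `h_hat` near 1, since h is the inverse of the variance.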
What is burn-in? Why is it used?