Sampling Flashcards
What are the two problems we often want to solve with sampling using Monte Carlo methods?
- Generate samples for simulation (generative models) or representation of distributions
- Compute expectations (solve integrals), e.g. mean/variance, marginal likelihood, prediction
How can we approximate the expectation of f(x) using Monte Carlo?
Let x_s be samples from p(x); then
E[f(x)] = ∫ f(x) p(x) dx ≈ (1/S) Σ_s f(x_s)
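A minimal sketch of this estimator, assuming NumPy and choosing f(x) = x² with p = N(0, 1) purely for illustration (the true expectation is then Var(x) = 1):

```python
import numpy as np

# Monte Carlo estimate of E[f(x)] for f(x) = x^2 under p(x) = N(0, 1).
# The true value is Var(x) = 1, so the estimate should land near 1.
rng = np.random.default_rng(0)
S = 100_000
x_s = rng.standard_normal(S)   # S samples from p(x)
estimate = np.mean(x_s ** 2)   # (1/S) * sum_s f(x_s)
print(estimate)
```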
How can we approximate predictions using Monte Carlo sampling?
Let θ_s be samples; then
p(y|x) = ∫ p(y|x,θ) p(θ) dθ ≈ (1/S) Σ_s p(y|x,θ_s)
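A toy sketch of this predictive average, with an assumed setup: a coin where p(y = heads | θ) = θ and a hypothetical Beta(8, 4) distribution standing in for the samples' distribution over θ (both invented for illustration):

```python
import numpy as np

# Monte Carlo prediction: for a coin with p(y=1 | theta) = theta,
# average p(y=1 | theta_s) over samples theta_s ~ Beta(8, 4)
# (a made-up distribution over theta, used only as an example).
rng = np.random.default_rng(0)
S = 100_000
theta_s = rng.beta(8, 4, size=S)   # samples of theta
p_heads = np.mean(theta_s)         # (1/S) * sum_s p(y=1 | theta_s)
print(p_heads)                     # close to the Beta mean 8/12
```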
Is the Monte Carlo estimator for expectation biased?
No, it is unbiased.
What happens to the variance of the Monte Carlo estimator for expectation when the number of samples, S, increases?
It decreases proportionally to 1/S, so the standard error shrinks as 1/√S.
How can we use Monte Carlo to sample discrete values?
Sample u from U(0,1), assign each outcome an interval of [0,1) whose length equals its probability, and return the outcome whose interval contains u.
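The interval trick above can be sketched as follows (assuming NumPy; the three-outcome distribution is an arbitrary example):

```python
import numpy as np

# Sample discrete outcomes by partitioning [0, 1) into intervals whose
# lengths match the outcome probabilities (a cumulative-sum lookup).
rng = np.random.default_rng(0)
probs = np.array([0.2, 0.5, 0.3])   # p(outcome = 0, 1, 2)
cdf = np.cumsum(probs)              # interval boundaries: [0.2, 0.7, 1.0]
u = rng.uniform(size=100_000)       # draws from U(0, 1)
samples = np.searchsorted(cdf, u)   # map each u to its interval / outcome
freqs = np.bincount(samples, minlength=3) / len(samples)
print(freqs)                        # empirical frequencies, close to probs
```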
What is the rejection sampling algorithm?
- Assume that sampling from p(x) is difficult, but evaluating p_hat(x) = Z p(x) is easy.
- Cover the complex distribution p_hat(x) with a simple proposal distribution q(x)
- Find a constant k such that k q(x) ≥ p_hat(x) for all x
- Draw a sample x_s from q(x)
- Draw a sample u_0 from the uniform distribution U[0, k q(x_s)]
- Reject if u_0 > p_hat(x_s), else retain.
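A sketch of these steps under assumed choices: target p_hat(x) = exp(-x²/2) (an unnormalized standard normal), proposal q = Uniform(-5, 5) so q(x) = 0.1 on that range, and k = 10 so that k·q(x) = 1 ≥ p_hat(x) everywhere:

```python
import numpy as np

# Rejection sampling: target p_hat(x) = exp(-x^2 / 2), proposal
# q = Uniform(-5, 5) with density 0.1, envelope constant k = 10.
rng = np.random.default_rng(0)

def p_hat(x):
    return np.exp(-x ** 2 / 2)

k, q_density = 10.0, 0.1
x_s = rng.uniform(-5, 5, size=100_000)              # draws from q(x)
u_0 = rng.uniform(0, k * q_density, size=x_s.size)  # u_0 ~ U[0, k*q(x_s)]
accepted = x_s[u_0 <= p_hat(x_s)]                   # retain if u_0 <= p_hat(x_s)
print(accepted.mean(), accepted.std())              # close to 0 and 1
```

Note the low acceptance rate (roughly a quarter of the draws survive here), which illustrates the second problem listed below.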
What are the problems of rejection sampling?
- k might be difficult to estimate
- We might reject most samples
What is the idea behind importance sampling?
Instead of throwing away samples, weight them by w_s = p(x_s)/q(x_s). So E_p[f(x)] = E_q[f(x) w(x)] ≈ (1/S) Σ_s f(x_s) w_s
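A sketch of this weighting, with assumed choices made for illustration: estimate E_p[x²] for p = N(0, 1) using samples from a wider proposal q = N(0, 2²):

```python
import numpy as np

# Importance sampling: estimate E_p[x^2] for p = N(0, 1) using samples
# from q = N(0, 2^2), reweighted by w = p(x) / q(x). True value is 1.
rng = np.random.default_rng(0)

def normal_pdf(x, mu, sigma):
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

S = 100_000
x_s = rng.normal(0, 2, size=S)                       # draws from q(x)
w_s = normal_pdf(x_s, 0, 1) / normal_pdf(x_s, 0, 2)  # importance weights
estimate = np.mean(x_s ** 2 * w_s)                   # (1/S) * sum_s f(x_s) * w_s
print(estimate)
```

Choosing a proposal with heavier tails than p keeps the weights bounded; the reverse choice is one way the "all weight on a single sample" failure below arises.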
Is importance sampling unbiased?
Yes, provided q(x) > 0 wherever p(x) > 0.
What are the disadvantages of importance sampling?
- Breaks down if we do not have enough samples (almost all weight can land on a single sample)
- Requires evaluating the true p(x)
- Does not scale to high-dimensional problems
What is the idea behind MCMC (Markov Chain Monte Carlo)?
Instead of generating independent samples, use a proposal distribution q that depends on the previous sample:
p(x_{t+1} | x_t) = T(x_{t+1} | x_t), where T is called a transition operator. Example:
T(x_{t+1} | x_t) = N(x_{t+1} | x_t, σ² I)
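One standard way to turn this Gaussian random-walk transition into a chain that targets an unnormalized p_hat is the Metropolis algorithm (an assumption here, since the cards do not name a specific MCMC method). A sketch with p_hat(x) = exp(-x²/2) as the example target:

```python
import numpy as np

# Metropolis sampler (one common MCMC algorithm, assumed for this sketch)
# with the Gaussian random-walk transition from the card:
# propose x' ~ N(x_t, sigma^2), accept with prob min(1, p_hat(x')/p_hat(x_t)).
rng = np.random.default_rng(0)

def p_hat(x):
    return np.exp(-x ** 2 / 2)   # unnormalized standard normal target

sigma, T = 1.0, 50_000
chain = np.empty(T)
x = 0.0
for t in range(T):
    x_prop = x + sigma * rng.standard_normal()    # x' ~ N(x_t, sigma^2)
    if rng.uniform() < p_hat(x_prop) / p_hat(x):  # Metropolis accept step
        x = x_prop                                # accept, else keep x_t
    chain[t] = x

print(chain[5_000:].mean(), chain[5_000:].std())  # approx 0 and 1 after burn-in
```

Discarding the first few thousand iterations ("burn-in") gives the chain time to reach its equilibrium distribution before averaging.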
Name a necessary property for the MCMC transition operator.
The Markov property:
p(x_{t+1} | x_t, x_{t-1}, …, x_0) = p(x_{t+1} | x_t)
What are the properties of the samples from MCMC?
They are not independent, but once the chain has reached its equilibrium distribution they come from the target, so we can still take the average.
What are the four different ways a Markov chain might behave?
1) Diverge
2) Converge to a single point
3) Converge to a limit cycle
4) Converge to an equilibrium distribution