8) Models with latent variables and missing data Flashcards

1
Q

What is the difference between the complete data log-likelihood and the observed data log-likelihood in models with missing data?

A
The complete data log-likelihood is the log-likelihood we would have if both the observed data Dx and the missing (latent) data Dy were known: log p(Dx, Dy | θ). The observed data log-likelihood marginalises the missing data out: log p(Dx | θ) = log Σ_Dy p(Dx, Dy | θ). Only the observed data version can actually be evaluated and maximised, but the complete data version is typically much easier to work with, which is what the EM algorithm exploits.
2
Q

What is the observed data log-likelihood?

A
The log of the marginal probability of the observed data Dx, with the missing data Dy summed (or integrated) out: ℓ(θ) = log p(Dx | θ) = log Σ_Dy p(Dx, Dy | θ). Maximising it with respect to θ gives the MLEs.
3
Q

How can we obtain probabilistic predictions about the latent variables after estimating the marginal model?

A
By Bayes' theorem: with the estimated parameters θ̂ plugged in, the posterior probability of each latent state is p(Dy | Dx, θ̂) ∝ p(Dx | Dy, θ̂) p(Dy | θ̂), which gives a probability distribution over the latent variables for each observation.
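A minimal sketch of such a posterior computation, assuming an already fitted two-component 1-D Gaussian mixture with unit variances (the parameter values `pi_hat` and `mu_hat` are hypothetical, chosen only for illustration):

```python
import numpy as np

# Hypothetical fitted parameters: mixing weight and component means
# of a two-component, unit-variance 1-D Gaussian mixture.
pi_hat = 0.5
mu_hat = np.array([-2.0, 3.0])

def posterior(x):
    # Bayes' theorem: p(z = k | x, theta_hat) ∝ p(x | z = k, theta_hat) p(z = k | theta_hat)
    prior = np.array([pi_hat, 1.0 - pi_hat])
    lik = np.exp(-0.5 * (x - mu_hat) ** 2) / np.sqrt(2.0 * np.pi)
    w = prior * lik
    return w / w.sum()  # normalise so the latent-state probabilities sum to 1

print(posterior(-2.0))  # mass concentrates on the first latent state
```

An observation near one component mean gets almost all its posterior mass on that latent state; observations between the means get genuinely probabilistic (soft) assignments.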
4
Q

What is the EM algorithm?

A

1) E-step: use Bayes’ theorem to compute, for each latent variable state, its probability given the observed data and the current parameters θ. This softly imputes the missing data Dy.
2) M-step: form the expected complete data log-likelihood under the E-step probabilities and maximise it with respect to θ to update the parameter estimates.
The two steps are iterated until the parameter estimates converge.
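The two steps can be sketched for a toy two-component 1-D Gaussian mixture with known unit variances (the data and initial values are made up for illustration; only the mixing weight and means are estimated):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: two 1-D Gaussian clusters; the latent variable is cluster membership.
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])

# Initial guesses for theta = (mixing weight pi, component means mu).
pi = 0.5
mu = np.array([-1.0, 1.0])

for _ in range(50):
    # E-step: Bayes' theorem gives the posterior probability of each latent
    # state given the observed data and the current parameters.
    dens = np.exp(-0.5 * (x[:, None] - mu) ** 2)   # N(x | mu_k, 1), up to a constant
    w = dens * np.array([pi, 1.0 - pi])
    r = w / w.sum(axis=1, keepdims=True)           # responsibilities

    # M-step: maximise the expected complete data log-likelihood in theta;
    # for this model the update is a responsibility-weighted average.
    pi = r[:, 0].mean()
    mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)

print(pi, mu)  # estimates settle near the true cluster weights and means
```

For this model both updates are available in closed form, which is the typical reason EM is preferred over optimising the observed data log-likelihood directly.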

5
Q

Does the EM algorithm provide approximate or exact estimates for the observed data log-likelihood?

A

The EM algorithm leads to exactly the same estimates as if the observed data log-likelihood were optimised directly: each iteration can only increase the observed data log-likelihood, and its fixed points are stationary points of that likelihood. The EM algorithm is therefore not an approximation; it is just a different way to find the MLEs.
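The monotone-ascent property can be checked numerically. A minimal sketch, reusing the same hypothetical two-component unit-variance Gaussian mixture, which records the observed data log-likelihood at each EM iteration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy data: 1-D observations from two unit-variance Gaussian clusters.
x = np.concatenate([rng.normal(-2.0, 1.0, 150), rng.normal(3.0, 1.0, 150)])

pi = 0.5
mu = np.array([-1.0, 1.0])
lls = []
for _ in range(30):
    dens = np.exp(-0.5 * (x[:, None] - mu) ** 2) / np.sqrt(2.0 * np.pi)
    w = dens * np.array([pi, 1.0 - pi])
    # Observed data log-likelihood at the current theta: sum of log marginals.
    lls.append(np.log(w.sum(axis=1)).sum())
    r = w / w.sum(axis=1, keepdims=True)           # E-step
    pi = r[:, 0].mean()                            # M-step
    mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)

# The recorded sequence never decreases: EM climbs the observed data
# log-likelihood itself, not an approximation to it.
print(lls[0], lls[-1])
```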
