3) Maximum Likelihood Estimation Flashcards

1
Q

What are the likelihood function and the log-likelihood function?

A

The likelihood function L_n(θ) = ∏ p(x_i | θ) is the joint density of the observed data, viewed as a function of the parameters θ. The log-likelihood function l_n(θ) = log L_n(θ) = Σ log p(x_i | θ) is its logarithm, which turns the product into a sum and has the same maximizer.
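As a minimal sketch (assuming an i.i.d. N(μ, 1) sample; the data and model here are illustrative, not from the deck), both functions can be computed directly:

```python
import math

# Hypothetical i.i.d. sample, assumed drawn from N(mu, 1)
x = [1.2, 0.8, 1.5]

def likelihood(mu):
    # L_n(mu) = product of the individual N(mu, 1) densities
    prod = 1.0
    for xi in x:
        prod *= math.exp(-(xi - mu) ** 2 / 2) / math.sqrt(2 * math.pi)
    return prod

def log_likelihood(mu):
    # l_n(mu) = sum of the log densities; same maximizer as L_n
    return sum(-(xi - mu) ** 2 / 2 - 0.5 * math.log(2 * math.pi) for xi in x)

# The log of the product equals the sum of the logs
assert abs(math.log(likelihood(1.0)) - log_likelihood(1.0)) < 1e-9
```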
2
Q

What is the score function?

A

The first derivative of the log-likelihood function with respect to the parameters: S_n(θ) = ∂ l_n(θ) / ∂θ.

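As a worked example (a standard textbook case, not taken from the deck): for an i.i.d. N(μ, σ²) sample with known σ², the score with respect to μ is

```latex
S_n(\mu) = \frac{\partial}{\partial \mu} \sum_{i=1}^{n} \log p(x_i \mid \mu)
         = \sum_{i=1}^{n} \frac{x_i - \mu}{\sigma^2},
```

which is zero exactly when μ equals the sample mean.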
3
Q

What steps are involved in finding the Maximum Likelihood Estimate (MLE)?

A
  • The condition for the MLE is S_n(θ̂_ML) = 0, indicating the point where the log-likelihood function's slope is zero, suggesting a maximum or minimum.
  • Confirming a maximum:
  • Scalar θ: ensure the second derivative of the log-likelihood is negative, confirming a maximum at θ̂_ML.
  • Vector θ: compute the Hessian matrix at the MLE. The MLE is confirmed if this matrix is negative definite (all eigenvalues are negative), ensuring the log-likelihood is locally concave.
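A small numerical sketch of these steps, assuming an Exponential(λ) model with made-up data (the sample and closed forms are illustrative, not from the deck):

```python
import math

# Hypothetical sample, assumed drawn from an Exponential(lambda) model
x = [0.5, 1.2, 0.3, 2.0, 0.9]
n = len(x)

def log_lik(lam):
    # l_n(lambda) = n*log(lambda) - lambda * sum(x)
    return n * math.log(lam) - lam * sum(x)

# Step 1: set the score S_n(lambda) = n/lambda - sum(x) to zero
lam_hat = n / sum(x)

# Step 2 (scalar case): second derivative -n/lambda^2 is negative, so it's a maximum
second_deriv = -n / lam_hat**2
assert second_deriv < 0

# Sanity check: the log-likelihood at lam_hat beats nearby values
assert log_lik(lam_hat) > log_lik(lam_hat * 0.9)
assert log_lik(lam_hat) > log_lik(lam_hat * 1.1)
```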
4
Q

How can cross-entropy and KL divergence be approximated for large sample sizes?

A

For large sample size n, the cross-entropy H(F, G_θ) is approximated by the empirical cross-entropy, in which the expectation is taken with respect to the empirical distribution F̂_n rather than F.

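Written out (the standard empirical-distribution approximation, under an i.i.d. assumption):

```latex
H(F, G_{\theta}) = -\mathbb{E}_{F}\!\left[\log g_{\theta}(X)\right]
\;\approx\; -\mathbb{E}_{\hat{F}_n}\!\left[\log g_{\theta}(X)\right]
= -\frac{1}{n} \sum_{i=1}^{n} \log g_{\theta}(x_i).
```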
5
Q

How is maximizing the log-likelihood related to minimizing KL divergence for large sample sizes?

A

For large n, the empirical cross-entropy equals the negative average log-likelihood, -l_n(θ)/n. Since D_KL(F ‖ G_θ) = H(F, G_θ) - H(F) and the entropy H(F) does not depend on θ, minimizing the KL divergence over θ is asymptotically equivalent to maximizing the log-likelihood.
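The relationship can be written out as (a standard derivation, under an i.i.d. assumption):

```latex
D_{\mathrm{KL}}(F \,\|\, G_{\theta})
= H(F, G_{\theta}) - H(F)
\;\approx\; -\frac{1}{n} \sum_{i=1}^{n} \log g_{\theta}(x_i) - H(F)
= -\frac{1}{n}\, l_n(\theta) - H(F),
```

so the θ that maximizes l_n(θ) also (approximately) minimizes the KL divergence, because H(F) is a constant in θ.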
6
Q

What does it mean for an estimator to be consistent?

A

An estimator θ̂_n is consistent if it converges in probability to the true parameter value θ_0 as the sample size n → ∞: with more data, the estimate gets arbitrarily close to the truth with high probability.
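A quick simulation sketch (illustrative, not from the deck; uses the sample mean as a consistent estimator of a normal mean, with a fixed random seed):

```python
import random

random.seed(0)
true_mean = 3.0

def sample_mean(n):
    # Sample mean of n draws from N(true_mean, 1) -- a consistent estimator
    data = [random.gauss(true_mean, 1.0) for _ in range(n)]
    return sum(data) / n

# With a large sample, the estimate settles near the true value
est = sample_mean(100_000)
assert abs(est - true_mean) < 0.05
```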
7
Q

Does the method of maximum likelihood produce consistent estimates?

A

Yes, the method of maximum likelihood in general produces consistent estimates (under standard regularity conditions).

8
Q

Why are MLEs considered asymptotically unbiased?

A

Maximum Likelihood Estimates (MLEs) are considered asymptotically unbiased because, with sufficient data, the MLE converges to the true parameter value, so any bias vanishes as n → ∞.

9
Q

How does maximum likelihood estimation connect to least squares estimation in a normal distribution model?

A

In a normal distribution model, finding the mean by maximum likelihood estimation is equivalent to least squares estimation: maximizing the log-likelihood over the mean minimizes the sum of squared residuals.

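A minimal numerical check of this equivalence (illustrative data, with σ² fixed at 1 for the sketch):

```python
import math

# Hypothetical sample
x = [1.0, 2.5, 0.5, 3.0]
n = len(x)

# Least squares: the mu minimizing sum((x_i - mu)^2) is the sample mean
mu_ls = sum(x) / n

# Gaussian log-likelihood in mu (sigma^2 = 1)
def log_lik(mu):
    return -n / 2 * math.log(2 * math.pi) - sum((xi - mu) ** 2 for xi in x) / 2

# Locate the maximizer numerically on a fine grid
candidates = [i / 1000 for i in range(0, 4001)]
mu_ml = max(candidates, key=log_lik)

# The maximum likelihood mean and the least squares mean coincide
assert abs(mu_ml - mu_ls) < 1e-3
```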
10
Q

What is the observed Fisher information?

A

The negative curvature of the log-likelihood at the MLE:
J_n(θ̂_ML), the derivative of the negative score function (equivalently, the negative second derivative of the log-likelihood) evaluated at the MLE.

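A minimal sketch for a Bernoulli(p) model (the data are made up; the closed form J(p) = s/p² + (n−s)/(1−p)² follows from differentiating the log-likelihood l(p) = s·log p + (n−s)·log(1−p) twice and negating):

```python
# Hypothetical coin-flip data, assumed Bernoulli(p)
x = [1, 0, 1, 1, 0, 1]
n, s = len(x), sum(x)

# MLE: the sample proportion
p_hat = s / n

# Observed Fisher information: J(p) = -l''(p) = s/p^2 + (n-s)/(1-p)^2
J = s / p_hat**2 + (n - s) / (1 - p_hat) ** 2

# Positive J means negative curvature of l at the MLE: locally concave, a maximum
assert J > 0
```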
11
Q

What is the relationship between observed and expected Fisher information?

A

The expected Fisher information I_n(θ) is the expectation of the observed information under the model, I_n(θ) = E[J_n(θ)]. The observed information J_n(θ̂_ML) is its data-based counterpart; the two agree asymptotically, and the observed information is often used as a plug-in estimate of the expected information.