Unit 7 Flashcards

1
Q

Note

A

Worth reading pages 1-4 for understanding

2
Q

If α is an unknown parameter, how is the likelihood of the data denoted?

A

The joint density f(X1,…,Xn|α), which depends on α.
We then define the likelihood as a function of α:

L(α) = f(X1,…,Xn|α)

3
Q

How can we simplify L(α) once we assume that the Xi are independent?

A

We can then write the likelihood as the product of the individual densities:

L(α) = f(X1|α)…f(Xn|α)
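As a small sketch of this product form (the deck never fixes a distribution, so the Exponential(α) density and the made-up observations below are assumptions for illustration only):

```python
import math

def exp_density(x, alpha):
    # Hypothetical example density: f(x|alpha) = alpha * exp(-alpha * x) for Exponential(alpha)
    return alpha * math.exp(-alpha * x)

def likelihood(data, alpha):
    # Under independence, L(alpha) = f(X1|alpha) * ... * f(Xn|alpha)
    L = 1.0
    for x in data:
        L *= exp_density(x, alpha)
    return L

data = [0.5, 1.2, 0.8]        # made-up observations
print(likelihood(data, 1.0))  # product of the three individual densities
```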

4
Q

What is the idea of MLE?

A

To choose the estimate of α that makes the likelihood L(α) as large as possible!

I.e. to make the observed data as likely as possible!

5
Q

How do we optimise L(α)?

A

L = objective function
α = choice variable

Differentiate L with respect to α, set the derivative equal to 0 to obtain the estimate of α, and check with the second-order conditions that it is a maximum point
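A sketch of this recipe for one assumed model (Exponential(α), which the deck does not specify; the data are made up): differentiate log L, set it to zero, and check the second-order condition.

```python
import math

def log_likelihood(data, alpha):
    # For assumed Exponential(alpha) data: logL(alpha) = n*log(alpha) - alpha * sum(x_i)
    return len(data) * math.log(alpha) - alpha * sum(data)

data = [0.5, 1.2, 0.8]  # made-up observations

# First-order condition: d logL / d alpha = n/alpha - sum(x_i) = 0  =>  alpha_hat = n / sum(x_i)
alpha_hat = len(data) / sum(data)

# Second-order condition: d^2 logL / d alpha^2 = -n/alpha^2 < 0, so alpha_hat is a maximum.
# Numerical sanity check: logL is lower at nearby values of alpha.
print(alpha_hat)
print(log_likelihood(data, alpha_hat) > log_likelihood(data, alpha_hat + 0.1))
```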

6
Q

What can we do to avoid algebraic mess?

A

Since log(x) increases with x, L(α) attains its maximum at the same value of α as log L(α); therefore we can take logs of both sides and maximise log L(α) instead - the log-likelihood function
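A quick numerical check of the monotone-transform argument (again assuming hypothetical Exponential(α) data and a made-up sample): L(α) and log L(α) peak at the same α on a grid of candidates.

```python
import math

def likelihood(data, alpha):
    # L(alpha) = product of assumed Exponential(alpha) densities
    L = 1.0
    for x in data:
        L *= alpha * math.exp(-alpha * x)
    return L

data = [0.5, 1.2, 0.8]                  # made-up observations
grid = [0.1 * k for k in range(1, 50)]  # candidate alpha values

# Because log is strictly increasing, both searches pick the same alpha.
best_L = max(grid, key=lambda a: likelihood(data, a))
best_logL = max(grid, key=lambda a: math.log(likelihood(data, a)))
print(best_L, best_logL)
```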

7
Q

What can we say about MLE estimators in large samples?

A

They are approximately normally distributed with mean θ (the true value of the parameter) and variance equal to the CRLB (Cramér-Rao Lower Bound) (you don't need to know how to calculate it)

8
Q

How do we know MLEs are asymptotically unbiased?

A

As n → ∞, E(θ̂) → θ
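A small Monte Carlo sketch of asymptotic unbiasedness (assuming Exponential(θ) data, for which the MLE of the rate is n/Σxᵢ; the distribution, θ value, and function names are all assumptions): the average estimate moves toward θ as n grows.

```python
import random

def mean_mle(n, theta, reps=2000, seed=0):
    # Average the MLE theta_hat = n / sum(x_i) over many simulated Exponential(theta) samples
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        sample_sum = sum(rng.expovariate(theta) for _ in range(n))
        total += n / sample_sum
    return total / reps

theta = 2.0
small_n_avg = mean_mle(5, theta)    # noticeably biased upward for small n
large_n_avg = mean_mle(200, theta)  # E(theta_hat) -> theta as n -> infinity
print(small_n_avg, large_n_avg)
```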

9
Q

See the optional material on pp. 11-14, plus the summary and other examples from p. 15 onwards

A

Now
