Lecture 10 Flashcards

1
Q

State the assumptions and the theorem establishing the asymptotic normality (AN) of beta hat when the classic regression example y_i = beta'z_i + u_i is extended to time series.

A
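
A sketch of the standard time-series asymptotic-normality result, for orientation (the notation and exact conditions here are generic and may differ from the lecture's Theorem 33):

```latex
% Generic AN sketch for y_t = \beta' z_t + u_t (standard conditions,
% not necessarily identical to the lecture's Theorem 33).
\[
\sqrt{n}\,(\hat\beta - \beta) \;\xrightarrow{d}\; N\!\left(0,\; Q^{-1}\,\Omega\,Q^{-1}\right),
\qquad
Q = \mathbb{E}[z_t z_t'], \qquad
\Omega = \mathbb{E}[u_t^2\, z_t z_t'],
\]
assuming $(z_t, u_t)$ stationary and ergodic,
$\mathbb{E}[u_t \mid z_t, z_{t-1}, u_{t-1}, \dots] = 0$ (martingale-difference errors),
finite fourth moments, and $Q$ nonsingular.
```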
2
Q

State two possible relaxations of the conditions in Theorem 33 (AN for time series).

A
3
Q

State the conditions under which we can rely on the finite-sample distribution of beta hat in our classic example.

A
4
Q

State two conditions in our classic example under which we cannot rely on finite-sample distributions.

A
5
Q

Is there any way of “going around” the problem so that finite-sample distributions can still be employed? Describe in words and by using the classic setup.

A
6
Q

State Lyapunov’s CLT and describe how it relates to the Lindeberg-Levy CLT.

A
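
A standard statement (notation may differ from the lecture's):

```latex
% Lyapunov CLT: independent, not necessarily identically distributed.
\[
\text{Let } X_1, X_2, \dots \text{ be independent with }
\mathbb{E}[X_i] = \mu_i,\ \operatorname{Var}(X_i) = \sigma_i^2 < \infty,\quad
s_n^2 = \sum_{i=1}^n \sigma_i^2 .
\]
\[
\text{If for some } \delta > 0:\quad
\lim_{n\to\infty} \frac{1}{s_n^{2+\delta}} \sum_{i=1}^n
\mathbb{E}\big[\,|X_i - \mu_i|^{2+\delta}\,\big] = 0,
\quad\text{then}\quad
\frac{1}{s_n}\sum_{i=1}^n (X_i - \mu_i) \;\xrightarrow{d}\; N(0,1).
\]
```

Relation to Lindeberg-Levy: Lindeberg-Levy requires i.i.d. observations with finite variance; Lyapunov drops the identical-distribution requirement at the cost of a $(2+\delta)$-moment condition.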
7
Q

Set up the CLT problem in the time-series framework, specifically for MA(1), with all the appropriate assumptions (Example 28).

A
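
A common version of this setup (details may differ from the lecture's Example 28). The x_t are identically distributed but not independent, since x_t and x_{t+1} share ε_t, so Lindeberg-Levy does not apply directly; nevertheless:

```latex
\[
x_t = \varepsilon_t + \theta\,\varepsilon_{t-1},
\qquad \varepsilon_t \sim \text{i.i.d.}\,(0, \sigma^2),
\]
\[
\sqrt{n}\,\bar{x}_n \;\xrightarrow{d}\; N\!\big(0,\; \sigma^2 (1+\theta)^2\big),
\]
% the limiting variance is the long-run variance:
\[
\gamma_0 + 2\gamma_1
= \sigma^2(1+\theta^2) + 2\sigma^2\theta
= \sigma^2(1+\theta)^2 .
\]
```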
8
Q

In the time-series CLT of Example 28, what is the first term of the series we are interested in? Show that it converges. Given that convergence, what remains to be shown for the rest of the sequence?

A
9
Q

What key assumption do we make in Example 28, and how does it let us move forward with the second part of the main sequence we are interested in?

A
10
Q

In the proof of Example 28 (CLT for MA(1)), what normalization are we then interested in so that finite-sample statistics can be used?

A
11
Q

In the proof of Example 28 (CLT for MA(1)), what is x_t^*? Show how to get there, why it is useful, and why we cannot apply Lindeberg-Levy to this sequence.

A

-> uncorrelated, but not independent.

12
Q

In the MA context, write the new sequence after the appropriate normalization, list all the assumptions, and show how the sequence satisfies them in relation to the L-F CLT.

A
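
Assuming "L-F" abbreviates Lindeberg-Feller, a standard statement for a triangular array (notation may differ from the lecture's):

```latex
% Lindeberg-Feller CLT for a triangular array.
\[
\text{Let } \{X_{n,i} : 1 \le i \le n\} \text{ be independent within each row, with }
\mathbb{E}[X_{n,i}] = 0,\quad
\sum_{i=1}^n \mathbb{E}[X_{n,i}^2] \to \sigma^2 > 0 .
\]
\[
\text{If for every } \epsilon > 0:\quad
\sum_{i=1}^n \mathbb{E}\big[ X_{n,i}^2\,
\mathbf{1}\{|X_{n,i}| > \epsilon\} \big] \to 0
\quad\text{(Lindeberg condition)},
\]
\[
\text{then}\quad \sum_{i=1}^n X_{n,i} \;\xrightarrow{d}\; N(0, \sigma^2).
\]
```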
13
Q

In the proof of Example 28 (CLT for MA(1)), write the sequence we want to check against the Lyapunov CLT and show how you would verify its conditions.

A
14
Q

Restate the definition of X_n and the intermediate conclusion regarding it in the proof of Example 28 (CLT for MA(1)), and show how the proof concludes from that point forward.

A
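
An illustrative simulation of the MA(1) CLT conclusion (not from the lecture; the parameter values are made up): the variance of sqrt(n) times the sample mean should be close to the long-run variance sigma^2 (1 + theta)^2.

```python
import numpy as np

# Simulate an MA(1): x_t = eps_t + theta * eps_{t-1}, eps_t ~ i.i.d. N(0, 1).
# With sigma = 1 the long-run variance is (1 + theta)^2.
rng = np.random.default_rng(0)
theta, n, reps = 0.5, 500, 4000

eps = rng.standard_normal((reps, n + 1))   # one extra lag per replication
x = eps[:, 1:] + theta * eps[:, :-1]       # MA(1) sample paths, shape (reps, n)
stats = np.sqrt(n) * x.mean(axis=1)        # sqrt(n) * sample mean, one per path

print(round(stats.var(), 2))               # close to (1 + theta)^2 = 2.25
```

The simulated variance lands near 2.25 rather than near gamma_0 = 1 + theta^2 = 1.25, which is exactly the point: the serial correlation at lag 1 enters the limiting variance.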
15
Q

Set up the errors in the problem of obtaining the asymptotic distribution of the unfeasible GLS estimator of beta hat in the classic regression problem with AR(1) errors.

A
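
A standard version of this setup, which may differ in detail from the lecture's: in y_t = beta'z_t + u_t, let the errors follow an AR(1) and apply the quasi-differencing (GLS) transform.

```latex
\[
u_t = \rho\,u_{t-1} + \varepsilon_t,
\qquad |\rho| < 1,
\qquad \varepsilon_t \sim \text{i.i.d.}\,(0, \sigma^2),
\]
\[
y_t - \rho\,y_{t-1} \;=\; \beta'\,(z_t - \rho\,z_{t-1}) + \varepsilon_t,
\]
% OLS on the transformed data has i.i.d. errors; the estimator is
% "unfeasible" because the transform requires the unknown \rho.
```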
16
Q

In the AR(1) setting for the asymptotic distribution of the unfeasible GLS estimator of beta hat in the classic regression problem, write the sequence we are interested in, in terms of rho and x. Why is the estimator called unfeasible?

A
17
Q

In the AR(1) setup, explain in words the strategy for obtaining the asymptotic distribution, and why we cannot rely on the Lindeberg-Levy CLT.

A
18
Q

In the AR(1) setup, show that the first term, and then the second term, of the series we are interested in converges.

A
19
Q

Conclude by bringing together the two intermediate results to obtain the asymptotic distribution of the unfeasible GLS estimator of beta hat in the AR(1) setting of the classic regression problem.

A