Pinboard Flashcards

1
Q

what is a random walk

A

the accumulation of error terms from a stationary series: y_t = y_{t-1} + e_t, so y_t = e_1 + e_2 + … + e_t
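
A minimal numpy sketch of this definition (hypothetical data: 500 standard-normal errors):

```python
import numpy as np

rng = np.random.default_rng(0)
e = rng.normal(size=500)  # stationary series of error terms e_t
y = np.cumsum(e)          # random walk: y_t = y_{t-1} + e_t = e_1 + ... + e_t
```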

2
Q

what is the static model

A

y_t = β_0 + β_1 z_t + u_t

3
Q

what is the finite distributed lag model

A

y_t = β_0 + β_1 z_t + β_2 z_{t-1} + … + u_t

4
Q

what is a stochastic process

A

sequence of random variables indexed by time

5
Q

what does weakly stationary mean

A

Mean, variance, and covariances are stable: the mean and variance are constant over time, and the covariance between y_t and y_{t-j} depends only on the distance j between the two terms

6
Q

what is an AR(1) model

A

Autoregressive:

y_t = θ y_{t-1} + ε_t
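
A short simulation sketch of this model (θ = 0.8 is an assumed example value satisfying the stability condition):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n = 0.8, 500                   # assumed |theta| < 1
eps = rng.normal(size=n)
y = np.empty(n)
y[0] = eps[0]
for t in range(1, n):
    y[t] = theta * y[t - 1] + eps[t]  # y_t = theta*y_{t-1} + eps_t
```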

7
Q

what is an MA(1) model

A

Moving average:

y_t = ε_t + α ε_{t-1}
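
The corresponding MA(1) simulation sketch (α = 0.5 assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.5
eps = rng.normal(size=501)
y = eps[1:] + alpha * eps[:-1]  # y_t = eps_t + alpha*eps_{t-1}
```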

8
Q

what is weak dependence

A

correlations between terms of the time series become smaller and smaller as the gap between them grows. y_t is weakly dependent if Corr(y_t, y_{t-j}) → 0 as j → ∞ (asymptotically uncorrelated)

9
Q

what is the correlogram equation

A

ρ_j = Cov(y_t, y_{t-j}) / Var(y_t) = γ_j / γ_0
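
A sample-analogue sketch of this formula (a hypothetical helper function; statsmodels' acf computes a comparable quantity):

```python
import numpy as np

def correlogram(y, max_lag):
    """Sample autocorrelations rho_j = gamma_j / gamma_0."""
    y = np.asarray(y, dtype=float)
    d = y - y.mean()
    gamma0 = (d ** 2).mean()                          # gamma_0: sample variance
    return np.array([(d[j:] * d[:-j]).mean() / gamma0
                     for j in range(1, max_lag + 1)])  # gamma_j / gamma_0
```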

10
Q

what is the variance part of the correlogram equation, γ_0: (ρ_j = Cov(y_t, y_{t-j}) / Var(y_t) = γ_j / γ_0)

A

Var: γ_0 = E((y_t − μ)^2)

11
Q

what is the autocovariance part of the correlogram equation, γ_j: (ρ_j = Cov(y_t, y_{t-j}) / Var(y_t) = γ_j / γ_0)

A

Autocov: γ_j = E((y_t − μ)(y_{t-j} − μ))

12
Q

what does the fact that E(e_t^2) = σ^2 mean

A

it is the variance of e_t when the expected value is 0 (can derive it: Var(e_t) = E(e_t^2) − (E(e_t))^2 = E(e_t^2))

13
Q

what does efficient mean

A

smallest variance

14
Q

what does consistent mean

A

plim(α̂) = α: the estimator converges in probability to the true parameter value as the sample size grows

15
Q

what does a unit root mean

A

y_t = θ y_{t-1} + e_t

Unit root: θ = 1

16
Q

what is a way of showing e_t and e_s are serially uncorrelated when E(e_t) = 0

A

E(e_t e_s) = 0 for t ≠ s (from Cov(e_t, e_s) = E(e_t e_s) − E(e_t)E(e_s) with E(e_t) = 0)

17
Q

what is the stability condition

A

|θ|<1

18
Q

how do you test the order of integration

A

check whether the series is weakly stationary → check whether the mean and variance are constant over time → then check whether the covariance between y_t and y_{t-j} depends only on j; if the level fails but the first difference passes, the series is integrated of order one, I(1)

19
Q

what is the test for serial correlation

A

OLS of y_t on x_t to get β̂_1 → form the residuals û_t → regress û_t on û_{t-1} and x_t, … to get ρ̂ → F test of H0: ρ = 0
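
A sketch of these steps (hypothetical helper; y and x are assumed 1-D arrays of equal length, and with a single lag the F test reduces to a t test on ρ̂):

```python
import numpy as np
import statsmodels.api as sm

def serial_corr_test(y, x):
    X = sm.add_constant(x)
    u = sm.OLS(y, X).fit().resid                    # OLS y_t on x_t, form residuals
    Z = sm.add_constant(np.column_stack((u[:-1], x[1:])))
    aux = sm.OLS(u[1:], Z).fit()                    # regress u_t on u_{t-1} and x_t
    return aux.tvalues[1], aux.pvalues[1]           # test of rho = 0 (t^2 = F here)
```

For the general multi-lag case, statsmodels provides statsmodels.stats.diagnostic.acorr_breusch_godfrey.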

20
Q

what is the unit root test

A

∆y_t = c + (θ − 1) y_{t-1} + e_t, with γ = θ − 1 → Dickey-Fuller test of γ = 0 against adjusted critical values. DF = γ̂ / se(γ̂) = γ̂ / Var(γ̂)^{1/2}
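
A sketch of the manual regression plus the library version (the simulated random walk is an assumed example; adfuller applies the adjusted critical values for you):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=300))          # simulated random walk (true unit root)

# manual DF regression: dy_t on a constant and y_{t-1}
res = sm.OLS(np.diff(y), sm.add_constant(y[:-1])).fit()
DF = res.tvalues[1]                          # gamma_hat / se(gamma_hat); compare to DF CVs

# library version: maxlag=0, autolag=None gives the plain (non-augmented) DF test
stat, pvalue, *_ = adfuller(y, regression="c", maxlag=0, autolag=None)
```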

21
Q

How do you do the Breusch-Pagan test for homoskedasticity

A

Null of homoskedasticity H0: E(u_i^2 | x_i) = σ^2, i.e. the variance is not a function of the explanatory variables. We can't observe u_i^2, so replace it with the squared OLS residuals û_i^2 and test H0: δ_1 = δ_2 = … = δ_k = 0 in

û_i^2 = δ_0 + δ_1 x_1i + δ_2 x_2i + … + δ_k x_ki + ε_i

Let R^2_{û^2} be the R^2 from this regression. The Breusch-Pagan statistic is n·R^2_{û^2}, n the sample size. Under the null of homoskedasticity, n·R^2_{û^2} →d χ^2_k, and the null is rejected if n·R^2_{û^2} is larger than the critical value of the χ^2_k distribution. No specific alternative has to be specified.
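
A sketch using statsmodels' implementation (the simulated heteroskedastic data and coefficient values are assumptions for illustration):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 2, size=(n, 2))                 # k = 2 regressors (assumed)
u = rng.normal(scale=1.0 + x[:, 0], size=n)        # heteroskedastic errors
y = 1.0 + x @ np.array([0.5, -0.3]) + u

X = sm.add_constant(x)
resid = sm.OLS(y, X).fit().resid
lm, lm_pval, _, _ = het_breuschpagan(resid, X)     # lm = n * R^2 of u_hat^2 on X
```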

22
Q

In the Breusch-Pagan test, do you expect a high or low R^2 under the null of homoskedasticity: (H0: δ_1 = δ_2 = … = δ_k = 0 in û_i^2 = δ_0 + δ_1 x_1i + δ_2 x_2i + … + δ_k x_ki + ε_i)

A

R^2 should be small under the null, because none of the variation in u is explained by the regressors

23
Q

what is the definition of heteroskedasticity

A

the conditional variance of the error term in the linear model differs across values of the explanatory variables:
E(u_i^2 | x_i) = Var(y_i | x_i) = σ^2(x_i),
a function of the explanatory variables

24
Q

what is the equation for heteroskedasticity

A

E(u_i^2 | x_i) = Var(y_i | x_i) = σ^2(x_i),

a function of the explanatory variables

25
Q

what does robust mean

A

allows for heteroskedasticity

26
Q

what does less noise do

A

improves efficiency

27
Q

how does the weighted least squares method work (in words)

A

more noise = less weight,
less noise = more weight;
downweighting noisy observations improves efficiency
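
A sketch in statsmodels (the data-generating process and noise scales σ_i are assumptions; the weights are 1/σ_i^2, so less noise gets more weight):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(1, 5, size=n)
sigma = 0.5 * x                                # heteroskedastic: noise grows with x
y = 1.0 + 2.0 * x + rng.normal(scale=sigma)    # assumed true beta0=1, beta1=2

res = sm.WLS(y, sm.add_constant(x), weights=1.0 / sigma ** 2).fit()
```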

28
Q

what is the variance (words and equation that matches words)

A

the sum of squared distances of each term from the mean (μ), divided by the number of terms in the distribution; equivalently, the mean of the squared terms minus the square of the mean:
σ^2 = Σ(X − μ)^2 / N = Σ X^2 / N − μ^2

29
Q

what is the variance formula

A

Var(X)=E((X-E(X))^2) = E(X^2)-(E(X))^2
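
A quick numeric check of the two equivalent forms (simulated data, arbitrary assumed mean and scale):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=100_000)
lhs = ((x - x.mean()) ** 2).mean()       # E((X - E(X))^2)
rhs = (x ** 2).mean() - x.mean() ** 2    # E(X^2) - (E(X))^2
assert np.isclose(lhs, rhs)
```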

30
Q

what is homoskedasticity

A

the conditional variance of u given x_1, …, x_k is constant: Var(u | x_1, …, x_k) = σ^2

31
Q

what is the total sum of squares (TSS)

A

Σ(y_i − ȳ)^2,
a measure of the total sample variation in the y: how spread out the y_i are in the sample.
Dividing the TSS by n − 1 gives the sample variance

32
Q

what is the model sum of squares (MSS)

A

Σ(ŷ_i − ȳ)^2.

The sample variation in the fitted values ŷ_i (using the fact that the mean of the ŷ_i equals ȳ)

33
Q

what is the residual sum of squares (RSS or SSR)

A

Σ û_i^2.

RSS measures the sample variation in the residuals û_i

34
Q

what is the relationship between the RSS, the MSS, and the TSS

A

the total variation in y can be expressed as the sum of the explained variation and the unexplained variation:
TSS = MSS + RSS

35
Q

what is the R^2 test

A

R^2 = MSS / TSS = 1 − RSS / TSS

36
Q

if TSS = MSS + RSS, what is the F stat that allows for testing whether all parameters, apart from the constant, are zero

A

F = [(TSS − RSS)/k] / [RSS/(n − k − 1)]

= [MSS/k] / [RSS/(n − k − 1)] ~ F(k, n − k − 1)
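
A sketch verifying the decomposition and the F statistic against statsmodels (simulated data; k = 2 regressors and the coefficients are assumed):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, k = 200, 2
x = rng.normal(size=(n, k))
y = 1.0 + x @ np.array([0.5, -0.3]) + rng.normal(size=n)

res = sm.OLS(y, sm.add_constant(x)).fit()
tss = ((y - y.mean()) ** 2).sum()
rss = (res.resid ** 2).sum()
mss = tss - rss                                   # TSS = MSS + RSS
F = (mss / k) / (rss / (n - k - 1))               # should match res.fvalue
assert np.isclose(F, res.fvalue) and np.isclose(mss / tss, res.rsquared)
```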

37
Q

what is R^2 a measure of

A

measure of how much variation in y is explained by variables in model

38
Q

what is the Gauss-Markov property

A

under the additional assumption of homoskedasticity, the OLS estimator β̂ is efficient (smallest variance) and is the BLUE (best linear unbiased estimator) of β, with variance Var(β̂) = σ^2 (X'X)^{-1}
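
A sketch of this variance formula, estimating σ^2 by s^2 = RSS/(n − k − 1) (simulated data with assumed coefficients; the result matches statsmodels' default nonrobust covariance):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, k = 200, 2
X = sm.add_constant(rng.normal(size=(n, k)))
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=n)

res = sm.OLS(y, X).fit()
s2 = (res.resid @ res.resid) / (n - k - 1)       # estimate of sigma^2
var_beta = s2 * np.linalg.inv(X.T @ X)           # sigma^2 (X'X)^{-1}
assert np.allclose(var_beta, res.cov_params())
```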

39
Q

what is the covariance equation

A

Cov(X, Y) = E((X − E(X))(Y − E(Y)))

= E(XY) − E(X)E(Y)

40
Q

Correlation equation

A

ρ_{xy} = Cov(X, Y) / (σ_x σ_y)
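
A numeric check of the covariance and correlation formulas (simulated correlated pair; the 0.6 loading is an assumed example value):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=5_000)
y = 0.6 * x + rng.normal(size=5_000)
cov = ((x - x.mean()) * (y - y.mean())).mean()    # E((X-E(X))(Y-E(Y)))
rho = cov / (x.std() * y.std())                   # Cov(X,Y) / (sigma_x * sigma_y)
assert np.isclose(rho, np.corrcoef(x, y)[0, 1])
```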

41
Q

what is the zero conditional mean assumption

A

the random error u satisfies E(u | x_1, …, x_k) = 0 → needed for consistency

42
Q

what does no perfect collinearity mean

A

the matrix X has full (column) rank: no explanatory variable is an exact linear combination of the others