Simple regression model Flashcards
SLR. 1
Population model is linear in parameters: y = β0 + β1x + u
SLR. 2
Random sampling: {(xi, yi): i = 1, …, n} is a random sample from the population model (observations are independent across i)
SLR. 3
Sample variation in x
SLR. 4
Zero conditional mean: E(u|x) = 0, i.e. u is mean independent of x
SLR. 5
Homoskedasticity: Var(u|x) = σ² (constant error variance)
SST
Total sum of squares: Σ(yi − ȳ)², the squared deviations of the observations from the mean
SSE
Explained sum of squares: Σ(ŷi − ȳ)², the squared deviations of the fitted values from the mean
SSR
Sum of squared residuals: Σûi² = Σ(yi − ŷi)², the squared differences between the observed and fitted values
SST =
SSR + SSE
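The decomposition SST = SSE + SSR can be checked numerically. A minimal sketch, using made-up illustrative data and the closed-form OLS estimates:

```python
# Numerical check of SST = SSE + SSR on a small illustrative sample.

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n

# OLS slope and intercept from the closed-form solutions
b1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
b0 = ybar - b1 * xbar

fitted = [b0 + b1 * x for x in xs]

sst = sum((y - ybar) ** 2 for y in ys)                      # total
sse = sum((yhat - ybar) ** 2 for yhat in fitted)            # explained
ssr = sum((y - yhat) ** 2 for y, yhat in zip(ys, fitted))   # residual

print(abs(sst - (sse + ssr)) < 1e-9)  # True: SST = SSE + SSR
```

Note the identity only holds for OLS fitted values (it relies on the residuals being orthogonal to the fitted values), not for an arbitrary line.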
Error
Deviation of the observed value from the true population value
Hence the error term is unobservable
Residual
Deviation of the observed value from the fitted (estimated) value
Hence the residual is observable
Perfect collinearity
An explanatory variable is an exact linear function of another (or is constant), so OLS cannot be computed
ûi =
yi − ŷi = yi − β̂0 − β̂1xi
(the residual, completely broken down)
F.O.C. for β̂0
−2 Σ(yi − β̂0 − β̂1xi) = 0
F.O.C. for β̂1
−2 Σ xi(yi − β̂0 − β̂1xi) = 0
What is the OLS estimator trying to do?
Minimise the SSR: differentiate it with respect to β̂0 and β̂1 and set the derivatives to zero (the first order conditions)
Finding β̂0
Drop the −2: Σyi − nβ̂0 − β̂1Σxi = 0. Divide everything by n: ȳ − β̂0 − β̂1x̄ = 0, so β̂0 = ȳ − β̂1x̄ (bar = sample mean)
Finding β̂1
Drop the −2 from the second F.O.C.
We know β̂0 = ȳ − β̂1x̄, so plug that in
Left with Σxi((yi − ȳ) − β̂1(xi − x̄)) = 0
Σxi(yi − ȳ) = β̂1 Σxi(xi − x̄)
Σxi(yi − ȳ) =
Σ(xi − x̄)(yi − ȳ)
Average value of x
(1/n) Σxi = x̄
therefore Σxi = nx̄
Σ(xi − x̄)(yi − ȳ) =
useful trick
= Σxi(yi − ȳ) − x̄ Σ(yi − ȳ)
= Σxi(yi − ȳ) − x̄nȳ + x̄nȳ
= Σxi(yi − ȳ)
Σ(xi − x̄)² =
= Σ(xi − x̄)(xi − x̄)
= Σxi(xi − x̄) − x̄ Σ(xi − x̄)
= Σxi(xi − x̄) − x̄nx̄ + x̄nx̄
= Σxi(xi − x̄)
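The two "useful trick" identities above can be checked numerically. A minimal sketch on made-up illustrative data (the values themselves carry no meaning):

```python
# Numerical check of the two identities:
#   Σxi(yi − ȳ) = Σ(xi − x̄)(yi − ȳ)
#   Σ(xi − x̄)² = Σxi(xi − x̄)
# Data below is arbitrary and purely illustrative.

xs = [1.0, 2.0, 4.0, 7.0]
ys = [0.5, 1.5, 3.0, 6.0]

xbar = sum(xs) / len(xs)
ybar = sum(ys) / len(ys)

lhs1 = sum(x * (y - ybar) for x, y in zip(xs, ys))
rhs1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))

lhs2 = sum((x - xbar) ** 2 for x in xs)
rhs2 = sum(x * (x - xbar) for x in xs)

print(abs(lhs1 - rhs1) < 1e-9, abs(lhs2 - rhs2) < 1e-9)  # True True
```

Both hold because Σ(yi − ȳ) = 0 and Σ(xi − x̄) = 0, exactly as in the derivation.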
β̂1 =
Σ(xi − x̄)(yi − ȳ) / Σ(xi − x̄)²
Only assumption needed to calculate the OLS estimator β̂1
Σ(xi − x̄)² > 0 (some sample variation in x)
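As a sanity check that the closed-form β̂1 really minimises the SSR, a small sketch on illustrative data, comparing the formula's answer against nearby candidate slopes:

```python
# Compute the OLS slope and intercept from the closed-form solutions,
# then verify no nearby slope gives a smaller SSR (data is illustrative).

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 1.8, 3.1, 3.9]

n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n

# β̂1 = Σ(xi − x̄)(yi − ȳ) / Σ(xi − x̄)²,  β̂0 = ȳ − β̂1x̄
b1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
b0 = ybar - b1 * xbar

def ssr(a, b):
    """Sum of squared residuals for candidate intercept a and slope b."""
    return sum((y - a - b * x) ** 2 for x, y in zip(xs, ys))

best = ssr(b0, b1)
print(all(best <= ssr(b0, b1 + d) for d in (-0.1, -0.01, 0.01, 0.1)))  # True
```

Perturbing the slope in either direction can only raise the SSR, since (β̂0, β̂1) is the global minimiser of a convex quadratic.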
PRF
Population regression function - True relationship between x and y
SRF
Sample regression function
ŷ = β̂0 + β̂1x
3 mathematical properties that hold in any sample of data (the first two follow from the two F.O.C.s)
Σûi = 0
Σxiûi = 0
(x̄, ȳ) is always on the regression line
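All three properties can be verified on any sample. A minimal sketch using illustrative data:

```python
# Check the three algebraic properties of OLS:
#   residuals sum to zero, residuals are orthogonal to x,
#   and (x̄, ȳ) lies on the fitted line. Data is illustrative.

xs = [1.0, 2.0, 3.0, 5.0]
ys = [2.0, 2.5, 4.0, 6.5]

n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
b0 = ybar - b1 * xbar

resid = [y - b0 - b1 * x for x, y in zip(xs, ys)]

print(abs(sum(resid)) < 1e-9)                              # Σûi = 0
print(abs(sum(x * u for x, u in zip(xs, resid))) < 1e-9)   # Σxiûi = 0
print(abs((b0 + b1 * xbar) - ybar) < 1e-9)                 # line passes through (x̄, ȳ)
```

All three print True: they are algebraic consequences of the F.O.C.s, holding in every sample regardless of the SLR assumptions.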
Unbiasedness assumptions
E(β̂1) = β1 and E(β̂0) = β0 (OLS is unbiased under SLR.1–SLR.4)
Var(x)
(1/(n−1)) Σ(xi − x̄)²
ZCM and homoskedasticity in terms of y
E(y|x) = β0 + β1x
Var(y|x) = σ²
y = β0 + β1x + u (Level-Level)
If x changes by one unit, we expect y to change by β1 units
ln(y) = β0 + β1x + u (Log-Level)
If x changes by one unit, we expect y to change by (100 × β1)%
y = β0 + β1 ln(x) + u (Level-Log)
If x increases by 1%, we expect y to change by β1/100 units
ln(y) = β0 + β1 ln(x) + u (Log-Log)
If x changes by 1%, we expect y to change by β1% (β1 is an elasticity)
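The Log-Level approximation "100 × β1 %" is exact only for small β1; the exact multiplicative effect of a one-unit change in x is exp(β1). A minimal sketch with a hypothetical coefficient:

```python
# Log-Level interpretation: if ln(y) = b0 + b1*x, a one-unit increase in x
# multiplies y by exp(b1) ≈ 1 + b1, i.e. roughly a 100*b1 % change.
import math

b0, b1 = 1.0, 0.05  # hypothetical coefficients, for illustration only
y_at = lambda x: math.exp(b0 + b1 * x)

pct_change = 100 * (y_at(3.0) - y_at(2.0)) / y_at(2.0)
print(round(pct_change, 2))  # 5.13 — close to the 100*b1 = 5 approximation
```

For larger coefficients the gap widens (e.g. b1 = 0.5 gives an exact change of about 64.9%, not 50%), which is why the percentage reading is an approximation.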