Chapter 4 Flashcards
We adopt the least-squares criterion
THE METHOD OF ORDINARY LEAST SQUARES
We want to minimize the sum of the squared residuals
THE METHOD OF ORDINARY LEAST SQUARES
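The criterion on this card can be written out explicitly; a minimal sketch in standard two-variable notation (the fitted line Ŷᵢ = β̂₁ + β̂₂Xᵢ is assumed, since the card does not state it):

```latex
\min_{\hat\beta_1,\,\hat\beta_2}\ \sum_{i=1}^{n}\hat{u}_i^{\,2}
  = \sum_{i=1}^{n}\left(Y_i - \hat\beta_1 - \hat\beta_2 X_i\right)^2
```

Setting the partial derivatives with respect to β̂₁ and β̂₂ to zero yields the normal equations from which the OLS estimators are solved.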
Three Statistical Properties of OLS Estimators
I. The OLS estimators are expressed solely in terms of the observable quantities (i.e., X and Y). Therefore they can easily be computed.
II. They are point estimators (not interval estimators). Given the sample, each estimator provides only a single (point) value of the relevant population parameter.
III. Once the OLS estimates are obtained from the sample data, the sample regression line can be easily obtained.
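Property I can be sketched directly: the estimators need nothing beyond the observed X and Y. The data below are made up purely for illustration.

```python
import numpy as np

# Hypothetical sample data (only X and Y are needed -- Property I).
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# OLS estimators in deviation form:
#   beta2_hat = sum(x_i * y_i) / sum(x_i^2), with x_i = X_i - mean(X), y_i = Y_i - mean(Y)
#   beta1_hat = mean(Y) - beta2_hat * mean(X)
x = X - X.mean()
y = Y - Y.mean()
beta2_hat = (x * y).sum() / (x ** 2).sum()
beta1_hat = Y.mean() - beta2_hat * X.mean()

print(beta1_hat, beta2_hat)
```

Each run on a given sample yields a single pair of numbers (Property II), and those two numbers fully determine the sample regression line (Property III).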
The properties of the regression line
- It passes through the sample means of Y and X.
- The mean value of the estimated Ŷᵢ is equal to the mean value of the actual Y.
- The mean value of the residuals ûᵢ is zero.
- The residuals ûᵢ are uncorrelated with the predicted Yᵢ.
- The residuals ûᵢ are uncorrelated with Xᵢ; that is, Σ ûᵢXᵢ = 0.
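These properties can be checked numerically on any fitted sample. A minimal sketch (the data are invented for illustration):

```python
import numpy as np

# Hypothetical sample data.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Fit by OLS (deviation-form formulas).
x = X - X.mean()
y = Y - Y.mean()
b2 = (x * y).sum() / (x ** 2).sum()
b1 = Y.mean() - b2 * X.mean()

Y_hat = b1 + b2 * X          # predicted values
u_hat = Y - Y_hat            # residuals

# 1. The line passes through the sample means of Y and X.
print(np.isclose(b1 + b2 * X.mean(), Y.mean()))
# 2. Mean of the estimated Y equals the mean of the actual Y.
print(np.isclose(Y_hat.mean(), Y.mean()))
# 3. The residuals sum (and hence average) to zero.
print(np.isclose(u_hat.sum(), 0.0))
# 4. The residuals are uncorrelated with the predicted Y.
print(np.isclose((u_hat * Y_hat).sum(), 0.0))
# 5. The residuals are uncorrelated with X: sum(u_hat_i * X_i) = 0.
print(np.isclose((u_hat * X).sum(), 0.0))
```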
The Classical Linear Regression Model: The Assumptions Underlying the Method of Least Squares
Assumption 1: Linear regression model.
Assumption 2: X values are fixed in repeated sampling
Assumption 3: Zero mean value of disturbance u.
Assumption 4: Homoscedasticity or equal variance of uᵢ.
Assumption 5: No autocorrelation between the disturbances.
Assumption 6: Zero covariance between uᵢ and Xᵢ; formally, E(uᵢXᵢ) = 0.
Assumption 7: The number of observations n must be greater than the number of parameters to be estimated.
Assumption 8: Variability in X values.
Assumption 9: The regression model is correctly specified.
Assumption 10: There is no perfect multicollinearity.
The regression model is linear in the parameters.
Assumption 1: Linear regression model.
Values taken by the regressor X are considered fixed in repeated samples. More technically, X is assumed to be nonstochastic.
Assumption 2: X values are fixed in repeated sampling
Given the value of X, the mean, or expected, value of the random disturbance term uᵢ is zero. Technically, the conditional mean value of uᵢ is zero. Symbolically, we have E(uᵢ | Xᵢ) = 0.
Assumption 3: Zero mean value of disturbance u.
Given the value of X, the variance of uᵢ is the same for all observations. That is, the conditional variances of uᵢ are identical. Symbolically, we have
var(uᵢ | Xᵢ) = E[uᵢ − E(uᵢ | Xᵢ)]² = E(uᵢ² | Xᵢ)    because of Assumption 3
where var stands for variance.
Assumption 4: Homoscedasticity or equal variance of uᵢ.
Given any two X values, Xᵢ and Xⱼ (i ≠ j), the correlation between any two uᵢ and uⱼ (i ≠ j) is zero.
Assumption 5: No autocorrelation between the disturbances.
cov(uᵢ, Xᵢ) = E{[uᵢ − E(uᵢ)][Xᵢ − E(Xᵢ)]}
= E[uᵢ(Xᵢ − E(Xᵢ))]    since E(uᵢ) = 0
= E(uᵢXᵢ) − E(Xᵢ)E(uᵢ)    since E(Xᵢ) is nonstochastic
= E(uᵢXᵢ)    since E(uᵢ) = 0
= 0    by assumption
Assumption 6: Zero covariance between uᵢ and Xᵢ; formally, E(uᵢXᵢ) = 0.
Alternatively, the number of observations n must be greater than the number of explanatory variables.
Assumption 7: The number of observations n must be greater than the number of parameters to be estimated.
The X values in a given sample must not all be the same. Technically, var(X) must be a finite positive number.
Assumption 8: Variability in X values.
Alternatively, there is no specification bias or error in the model used in empirical analysis.
Assumption 9: The regression model is correctly specified.
That is, there are no perfect linear relationships among the explanatory variables.
Assumption 10: There is no perfect multicollinearity.