Chapter 4 Flashcards

1
Q

We adopt the least-squares criterion

A

THE METHOD OF ORDINARY LEAST
SQUARES

2
Q

We want to minimize the sum of the squared residuals

A

THE METHOD OF ORDINARY LEAST
SQUARES

3
Q

Three Statistical Properties of
OLS Estimators

A

I. The OLS estimators are expressed solely in terms of the observable quantities (i.e., X and Y). Therefore they can easily be computed.

II. They are point estimators (not interval estimators). Given the sample, each estimator provides only a single (point) value of the relevant population parameter.

III. Once the OLS estimates are obtained from the sample data, the sample regression line can be easily obtained.
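
A minimal sketch of these three properties, assuming numpy and a small made-up sample (the data and variable names are hypothetical): the estimators come directly from the observed X and Y, give single point values, and immediately yield the sample regression line.

    import numpy as np

    # Hypothetical sample data: any observed X and Y will do.
    X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

    # OLS estimators expressed solely in terms of X and Y:
    # slope = sum((X - Xbar)(Y - Ybar)) / sum((X - Xbar)^2)
    x, y = X - X.mean(), Y - Y.mean()
    beta2_hat = (x * y).sum() / (x ** 2).sum()
    beta1_hat = Y.mean() - beta2_hat * X.mean()

    # Point estimates (single values, not intervals)...
    print(beta1_hat, beta2_hat)

    # ...and the sample regression line follows at once.
    Y_hat = beta1_hat + beta2_hat * X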

4
Q

The properties of the
regression line

A
  1. It passes through the sample means of Y and X.
  2. The mean value of the estimated Y (= Ŷ_i) is equal to the mean value of the actual Y.
  3. The mean value of the residuals û_i is zero.
  4. The residuals û_i are uncorrelated with the predicted Y_i.
  5. The residuals û_i are uncorrelated with X_i; that is, Σ û_i X_i = 0.
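
A short numerical check of these properties (a sketch reusing the hypothetical data above; nothing is specific to that sample, since the properties hold for any OLS fit):

    import numpy as np

    X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

    x = X - X.mean()
    b2 = (x * (Y - Y.mean())).sum() / (x ** 2).sum()
    b1 = Y.mean() - b2 * X.mean()
    Y_hat = b1 + b2 * X              # fitted values
    u_hat = Y - Y_hat                # residuals

    print(np.isclose(b1 + b2 * X.mean(), Y.mean()))  # 1: passes through the means
    print(np.isclose(Y_hat.mean(), Y.mean()))        # 2: mean of Y_hat = mean of Y
    print(np.isclose(u_hat.mean(), 0.0))             # 3: residuals average to zero
    print(np.isclose((u_hat * Y_hat).sum(), 0.0))    # 4: uncorrelated with Y_hat
    print(np.isclose((u_hat * X).sum(), 0.0))        # 5: sum of u_hat * X = 0

All five checks print True.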
5
Q

The Classical Linear
Regression Model: The Assumptions Underlying the Method of Least Squares

A

Assumption 1: Linear regression model.
Assumption 2: X values are fixed in repeated sampling
Assumption 3: Zero mean value of disturbance u_i.
Assumption 4: Homoscedasticity or equal variance of u_i.
Assumption 5: No autocorrelation between the disturbances.
Assumption 6: Zero covariance between u_i and X_i; that is, E(u_i X_i) = 0.
Assumption 7: The number of observations n must be greater than the number of parameters to be estimated.
Assumption 8: Variability in X values.
Assumption 9: The regression model is correctly specified.
Assumption 10: There is no perfect multicollinearity.

6
Q

The regression model is linear in the parameters.

A

Assumption 1: Linear regression model.

7
Q

Values taken by the regressor X are considered fixed in repeated samples. More technically, X is assumed to be nonstochastic.

A

Assumption 2: X values are fixed in repeated sampling

8
Q

Given the value of X, the mean, or expected, value of the random disturbance term u_i is zero. Technically, the conditional mean value of u_i is zero. Symbolically, we have E(u_i | X_i) = 0.

A

Assumption 3: Zero mean value of disturbance u.

9
Q

Given the value of X, the variance of u_i is the same for all observations. That is, the conditional variances of u_i are identical. Symbolically, we have

var(u_i | X_i) = E[u_i − E(u_i | X_i)]² = E(u_i² | X_i) (because of Assumption 3) = σ²

where var stands for variance.

A

Assumption 4: Homoscedasticity or equal variance of u_i.
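
A small simulation sketch of the contrast (the parameter values are assumed for illustration): homoscedastic disturbances show the same variance at every X, while heteroscedastic ones do not.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.repeat([1.0, 5.0, 10.0], 100_000)   # three fixed X values

    # Homoscedastic: var(u | X) = sigma^2 = 4 for every X.
    u_homo = rng.normal(0.0, 2.0, size=X.size)

    # Heteroscedastic: the spread grows with X (violates Assumption 4).
    u_het = rng.normal(0.0, 0.5 * X)

    for x0 in (1.0, 5.0, 10.0):
        m = X == x0
        print(x0, round(u_homo[m].var(), 2), round(u_het[m].var(), 2))

The first variance column stays near 4 at every X; the second climbs with X.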

10
Q

Given any two X values, X_i and X_j (i ≠ j), the correlation between any two u_i and u_j (i ≠ j) is zero.

A

Assumption 5: No autocorrelation between the disturbances.

11
Q

cov(u_i, X_i) = E{[u_i − E(u_i)][X_i − E(X_i)]}
              = E[u_i (X_i − E(X_i))]         since E(u_i) = 0
              = E(u_i X_i) − E(X_i) E(u_i)    since E(X_i) is nonstochastic
              = E(u_i X_i)                    since E(u_i) = 0
              = 0                             by assumption

A

Assumption 6: Zero covariance between u_i and X_i, or E(u_i X_i) = 0.

12
Q

Alternatively, the number of observations n must be greater than the number of explanatory variables.

A

Assumption 7: The number of observations n must be greater than the number of parameters to be estimated.

13
Q

The X values in a given sample must not all be the same. Technically, var(X) must be a finite positive number.

A

Assumption 8: Variability in X values.

14
Q

Alternatively, there is no specification bias or error in the model used in empirical analysis

A

Assumption 9: The regression model is correctly specified.

15
Q

That is, there are no perfect linear relationships among the explanatory variables

A

Assumption 10: There is no perfect multicollinearity.

16
Q

Given the assumptions of the classical linear regression model, the least-squares estimators, in the class of unbiased linear estimators, have minimum variance, that is, they are BLUE.

A

Gauss-Markov Theorem

17
Q

An estimator, say the OLS estimator β̂₂, is said to be a

A

best linear unbiased estimator (BLUE)

18
Q

An estimator, say the OLS estimator β̂₂, is said to be a best linear unbiased estimator (BLUE) of β₂ if the following hold:

A
  1. It is linear, that is, a linear function of a random variable, such as the dependent variable Y in the regression model.
  2. It is unbiased, that is, its average or expected value, E(β̂₂), is equal to the true value, β₂.
  3. It has minimum variance in the class of all such linear unbiased estimators; an unbiased estimator with the least variance is known as an efficient estimator.
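
A Monte Carlo sketch of these three properties (the true parameters and noise level are assumed for illustration): the OLS slope is a fixed linear combination of the Y values, averages out to the true β₂ over repeated samples, and has a smaller variance than a rival linear unbiased estimator.

    import numpy as np

    rng = np.random.default_rng(42)
    X = np.linspace(0.0, 10.0, 50)       # fixed in repeated sampling
    beta1, beta2 = 1.0, 2.0              # assumed true parameters
    x = X - X.mean()
    w = x / (x ** 2).sum()               # beta2_hat = sum(w_i * Y_i): linear in Y

    ols, rival = [], []
    for _ in range(5_000):
        Y = beta1 + beta2 * X + rng.normal(0.0, 1.0, size=X.size)
        ols.append((w * Y).sum())                      # OLS slope
        rival.append((Y[-1] - Y[0]) / (X[-1] - X[0]))  # two-point slope: also linear, unbiased

    print(np.mean(ols), np.mean(rival))   # both near 2.0: unbiasedness
    print(np.var(ols) < np.var(rival))    # True: OLS has minimum variance (efficiency)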
19
Q

that is, a linear function of a random variable, such as the dependent variable Y in the regression model.

A
  1. It is linear,
20
Q

that is, its average or expected value, E(β̂₂), is equal to the true value, β₂.

A

It is unbiased,

21
Q

an unbiased estimator with the least variance is known as an efficient estimator.

A

It has minimum variance in the class of all such linear unbiased estimators;

22
Q
  • TSS:
  • ESS:
  • RSS:
A

total sum of squares
explained sum of squares
residual sum of squares
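
A quick sketch (reusing the hypothetical sample from earlier cards) of the decomposition TSS = ESS + RSS, which also previews the coefficient of determination:

    import numpy as np

    X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

    x = X - X.mean()
    b2 = (x * (Y - Y.mean())).sum() / (x ** 2).sum()
    b1 = Y.mean() - b2 * X.mean()
    Y_hat = b1 + b2 * X

    TSS = ((Y - Y.mean()) ** 2).sum()      # total sum of squares
    ESS = ((Y_hat - Y.mean()) ** 2).sum()  # explained sum of squares
    RSS = ((Y - Y_hat) ** 2).sum()         # residual sum of squares

    print(np.isclose(TSS, ESS + RSS))      # True: the decomposition holds
    print(ESS / TSS)                       # r^2, the coefficient of determination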

23
Q

is the sample correlation coefficient

A

r

24
Q

Some of the properties of r

A
  1. It can be positive or negative, the sign depending on the sign of the term in the numerator of (3.5.13), which measures the sample covariation of the two variables.
  2. It lies between the limits of −1 and +1; that is, −1 ≤ r ≤ 1.
  3. It is symmetrical in nature; that is, the coefficient of correlation between X and Y (r_XY) is the same as that between Y and X (r_YX).
  4. It is independent of the origin and scale; that is, if we define X*_i = aX_i + c and Y*_i = bY_i + d, where a > 0, b > 0, and c and d are constants, then r between X* and Y* is the same as that between the original variables X and Y.
  5. If X and Y are statistically independent (see Appendix A for the definition), the correlation coefficient between them is zero; but if r = 0, it does not mean that the two variables are independent. In other words, zero correlation does not necessarily imply independence. [See Figure 3.11(h).]
  6. It is a measure of linear association or linear dependence only; it has no meaning for describing nonlinear relations. Thus in Figure 3.11(h), Y = X² is an exact relationship yet r is zero. (Why?)
  7. Although it is a measure of linear association between two variables, it does not necessarily imply any cause-and-effect relationship, as noted in Chapter 1.
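
A brief sketch checking three of these properties with numpy's corrcoef (the data are made up): symmetry (property 3), invariance to origin and scale (property 4), and r ≈ 0 for the exact nonlinear relation Y = X² (property 6).

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=1_000)
    Y = 2.0 * X + rng.normal(size=1_000)

    r_xy = np.corrcoef(X, Y)[0, 1]
    r_yx = np.corrcoef(Y, X)[0, 1]
    print(np.isclose(r_xy, r_yx))          # property 3: r_XY = r_YX

    X_star = 3.0 * X + 7.0                 # a > 0, c constant
    Y_star = 0.5 * Y - 2.0                 # b > 0, d constant
    print(np.isclose(np.corrcoef(X_star, Y_star)[0, 1], r_xy))  # property 4

    Z = np.linspace(-1.0, 1.0, 1_001)      # symmetric around zero
    print(np.corrcoef(Z, Z ** 2)[0, 1])    # ~0 even though Y = X^2 exactly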
25
Q

an unbiased estimator with the least variance is known as

A

an efficient estimator.

26
Q

var:
se:
σ²:
σ:
σ̂:

A
  • variance
  • standard error
  • the constant homoscedastic variance of u_i
  • the standard error of the estimate
  • OLS estimator of σ
27
Q

How to minimize the sum of the squared residuals

A

Through a good model or theory

28
Q

The opposite of homoscedasticity

A

heteroscedasticity

29
Q

The coefficient of determination

A

A measure of how much of the variation in Y is explained by X: r² = ESS/TSS, which lies between 0 and 1.
30
Q

How much of the variation in variable Y is explained by variable X

A

The coefficient of determination, r².