The Simple Linear Regression Model Flashcards

1
Q

What are the SLRM and the error term?

A

A regression model that estimates the relationship between one independent variable and one dependent variable using a straight line.
y = β0 + β1x + u

The variable u is called the error term or random disturbance (it "disturbs" an otherwise stable relationship). It represents unobserved factors other than x that affect y.
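As a minimal sketch, data from this model can be simulated as follows (the sample size and the true values β0 = 1, β1 = 2 are illustrative choices):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    beta0, beta1 = 1.0, 2.0       # illustrative true parameters
    x = rng.normal(size=n)        # observed explanatory variable
    u = rng.normal(size=n)        # error term: unobserved factors
    y = beta0 + beta1 * x + u     # the SLRM: y = β0 + β1·x + u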

2
Q

What is the Conditional mean assumption?

A

E [u | x] = 0

It states that the unobserved factors are mean-independent of the explanatory variable: the average value of u is the same for every value of x. For example, in a regression of wage on education, average ability (part of u) must not vary with the level of education.

3
Q

What is the population regression function (PRF)?

A

The conditional mean assumption implies that:

E [y | x] = E [β0 + β1x + u | x] = β0 + β1x + E [u | x] = β0 + β1x

This means that the average value of the dependent variable can be expressed as a linear function of the explanatory variable.

4
Q

What are the orthogonality conditions?

A

E [u | x] = 0

By the law of iterated expectations, this implies:

E [u] = 0
E [xu] = E [x·E [u | x]] = 0

Substituting u = y − β0 − β1x gives:

E [y − β0 − β1x] = 0
E [x (y − β0 − β1x)] = 0

Orthogonality here means that x and u are uncorrelated (E [xu] = 0). These two population moment conditions are what allow us to estimate the coefficients β0 and β1.

5
Q

What is the method of moments approach?

A

The idea behind Method of Moments (MoM) estimation is that, to find a good estimator, we should make the population and sample moments match as closely as we can. That is, we choose the parameters so that, for example, the first population moment E [x] equals the first sample moment ¯x.

The equations given by the orthogonality conditions, with population moments replaced by sample moments, can be used to obtain good estimators of the parameters β0 and β1:

(1/n)·Σxi(yi − ˆβ0 − ˆβ1xi) = 0
and
(1/n)·Σ(yi − ˆβ0 − ˆβ1xi) = 0

(The resulting estimates are numerically identical to the OLS estimates.)
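A sketch of the moment-matching computation on illustrative simulated data; since the two sample moment conditions are linear in ˆβ0 and ˆβ1, they can be solved as a 2×2 system:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=500)
    y = 1.0 + 2.0 * x + rng.normal(size=500)   # illustrative data: β0 = 1, β1 = 2

    # rearranged sample moment conditions:
    #   b0 + b1·mean(x)          = mean(y)
    #   b0·mean(x) + b1·mean(x²) = mean(x·y)
    A = np.array([[1.0, x.mean()],
                  [x.mean(), (x * x).mean()]])
    c = np.array([y.mean(), (x * y).mean()])
    b0_hat, b1_hat = np.linalg.solve(A, c)
    print(b0_hat, b1_hat)   # close to the true values 1 and 2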

6
Q

What are the OLS estimators?

A

Using the basic properties of the summation operator, we obtain the ordinary least squares (OLS) estimates of β0 and β1:

1. ˆβ0 = ¯y − ˆβ1·¯x
and
2. ˆβ1 = [(1/n)·Σ(xi − ¯x)(yi − ¯y)] / [(1/n)·Σ(xi − ¯x)^2]

i.e., the sample covariance between x and y divided by the sample variance of x.

These parameters are chosen to minimize the sum of squared residuals.
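A sketch of the closed-form computation on the same kind of illustrative data:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=500)
    y = 1.0 + 2.0 * x + rng.normal(size=500)   # illustrative data: β0 = 1, β1 = 2

    # sample covariance of x and y divided by sample variance of x
    b1_hat = ((x - x.mean()) * (y - y.mean())).mean() / ((x - x.mean()) ** 2).mean()
    b0_hat = y.mean() - b1_hat * x.mean()
    print(b0_hat, b1_hat)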

7
Q

How do we minimize the SSR?

A

We choose ˆβ0 and ˆβ1 to make the sum of squared residuals minimal:

SSR(ˆβ0, ˆβ1) = Σ(ˆui^2) = Σ(yi − ˆβ0 − ˆβ1xi)^2

The first-order conditions (FOC) for the minimum are found by setting the partial derivatives of SSR equal to zero:

(∂/∂ˆβ0) SSR(ˆβ0, ˆβ1) = −2·Σ(yi − ˆβ0 − ˆβ1xi) = 0

(∂/∂ˆβ1) SSR(ˆβ0, ˆβ1) = −2·Σ(yi − ˆβ0 − ˆβ1xi)·xi = 0
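A sketch checking the minimization numerically: a general-purpose minimizer applied to the SSR should land on the closed-form OLS values (scipy is assumed available; the data are illustrative):

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    x = rng.normal(size=500)
    y = 1.0 + 2.0 * x + rng.normal(size=500)   # illustrative data

    def ssr(b):
        # sum of squared residuals at candidate (β0, β1)
        return np.sum((y - b[0] - b[1] * x) ** 2)

    res = minimize(ssr, x0=np.zeros(2))
    print(res.x)   # matches the closed-form OLS estimates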

8
Q

What are the so-called normal equations?

A

Setting the FOC equations equal to zero provides the so-called normal equations:

  1. n·ˆβ0 + ˆβ1·Σxi = Σyi
  2. ˆβ0·Σxi + ˆβ1·Σ(xi^2) = Σxi·yi

Solving this two-equation system ultimately gives us:

  1. ˆβ0 = ¯y − ˆβ1·¯x
  2. ˆβ1 = [(1/n)·Σ(xi − ¯x)(yi − ¯y)] / [(1/n)·Σ(xi − ¯x)^2]
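A sketch solving the normal equations symbolically (sympy is assumed available; Sx, Sy, Sxx and Sxy stand for Σxi, Σyi, Σxi^2 and Σxi·yi):

    import sympy as sp

    b0, b1, n, Sx, Sy, Sxx, Sxy = sp.symbols('b0 b1 n S_x S_y S_xx S_xy')

    # normal equations: n·b0 + b1·Σxi = Σyi  and  b0·Σxi + b1·Σxi² = Σxi·yi
    sol = sp.solve([n * b0 + b1 * Sx - Sy,
                    b0 * Sx + b1 * Sxx - Sxy], [b0, b1])
    print(sol[b1])   # (n·Sxy − Sx·Sy)/(n·Sxx − Sx²), the covariance/variance form
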
9
Q

What are the algebraic properties of the OLS?

A
  1. Σˆui = 0 (the OLS residuals sum to zero)
  2. Σxi·ˆui = 0 (the sample covariance between the regressor and the residuals is zero)
  3. ¯y = ˆβ0 + ˆβ1·¯x (the point (¯x, ¯y) lies on the OLS regression line)
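A sketch verifying these three properties on illustrative simulated data:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=500)
    y = 1.0 + 2.0 * x + rng.normal(size=500)

    b1_hat = ((x - x.mean()) * (y - y.mean())).mean() / ((x - x.mean()) ** 2).mean()
    b0_hat = y.mean() - b1_hat * x.mean()
    u_hat = y - b0_hat - b1_hat * x                  # OLS residuals

    print(u_hat.sum())                               # ≈ 0
    print((x * u_hat).sum())                         # ≈ 0
    print(y.mean() - (b0_hat + b1_hat * x.mean()))   # ≈ 0
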
10
Q

What are the SST, SSE, and SSR?

A

The total sum of squares, the explained sum of squares, and the residual sum of squares:

SST = Σ(yi − ¯y)^2
SSE = Σ(ˆyi − ¯y)^2
SSR = Σ(ˆui^2)

SST measures the total sample variation in y, SSE the part explained by the regression, and SSR the part left unexplained. It is straightforward to show that:
SST = SSE + SSR
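A sketch verifying the decomposition on the same kind of illustrative data:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=500)
    y = 1.0 + 2.0 * x + rng.normal(size=500)

    b1_hat = ((x - x.mean()) * (y - y.mean())).mean() / ((x - x.mean()) ** 2).mean()
    b0_hat = y.mean() - b1_hat * x.mean()
    y_hat = b0_hat + b1_hat * x

    sst = ((y - y.mean()) ** 2).sum()
    sse = ((y_hat - y.mean()) ** 2).sum()
    ssr = ((y - y_hat) ** 2).sum()
    print(sst, sse + ssr)   # equal up to floating-point error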

11
Q

What is the R-square?

A

R^2 = SSE/SST = 1 − SSR/SST

It is the ratio of the explained variation to the total variation.

It is interpreted as the fraction of the sample variation in y that is explained by x.

A value of R^2 that is nearly equal to zero indicates a poor fit of the OLS line: very little of the variation in the yi is captured by the variation in the ˆyi .

It can be shown that R^2 is equal to the square of the sample correlation coefficient between yi and ˆyi.
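A sketch computing R^2 both ways on illustrative simulated data:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=500)
    y = 1.0 + 2.0 * x + rng.normal(size=500)

    b1_hat = ((x - x.mean()) * (y - y.mean())).mean() / ((x - x.mean()) ** 2).mean()
    b0_hat = y.mean() - b1_hat * x.mean()
    y_hat = b0_hat + b1_hat * x

    r2 = ((y_hat - y.mean()) ** 2).sum() / ((y - y.mean()) ** 2).sum()   # SSE/SST
    corr = np.corrcoef(y, y_hat)[0, 1]
    print(r2, corr ** 2)   # equal up to floating-point error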
