Lesson Plan Flashcards
Building and Estimating a Model (5 steps; a code sketch follows the list)
- ) Hypothesize a relation between X variables and Y
- ) Use sample data to estimate the unknown parameters in the model
- ) Compute the standard errors of the parameters
- ) Statistically evaluate the usefulness of the model
- ) Use the model for prediction
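A minimal sketch of the five steps in Python, assuming statsmodels is available; the model, variable names, and simulated numbers are illustrative, not from the lesson:

```python
# Sketch of the five steps on simulated data (illustrative only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Step 1: hypothesize a relationship, here Y = B0 + B1*X + e.
x = rng.uniform(0, 10, size=100)
y = 2.0 + 1.5 * x + rng.normal(0, 1.0, size=100)  # "true" B0=2, B1=1.5

# Step 2: use the sample data to estimate the parameters by OLS.
X = sm.add_constant(x)          # adds the intercept column
results = sm.OLS(y, X).fit()
print("estimates (B0, B1):", results.params)

# Step 3: standard errors of the parameters.
print("standard errors:", results.bse)

# Step 4: evaluate the model (t-statistics, overall F, R^2).
print("t-stats:", results.tvalues, "F:", results.fvalue, "R^2:", results.rsquared)

# Step 5: use the model for prediction at a new X value (here X = 5).
print("prediction at X=5:", results.predict([1.0, 5.0]))
```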
When do we say there is heteroskedasticity in the data?
The variance of e changes when X changes (the variance of e is not constant and does not equal a single σ^2 for all observations)
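One conventional diagnostic (my addition, not from the card) is the Breusch–Pagan test in statsmodels, which asks whether the squared residuals move systematically with X; a sketch on simulated heteroskedastic data:

```python
# Sketch: flagging heteroskedasticity with the Breusch-Pagan test.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(1)
x = rng.uniform(1, 10, size=200)
# The error's spread grows with x, so Var(e) is not constant.
y = 1.0 + 2.0 * x + rng.normal(0, 0.5 * x)

res = sm.OLS(y, sm.add_constant(x)).fit()

# Null hypothesis: homoskedasticity. A small p-value suggests the
# residual variance changes with X.
lm_stat, lm_pval, f_stat, f_pval = het_breuschpagan(res.resid, res.model.exog)
print("Breusch-Pagan LM p-value:", lm_pval)
```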
What are the assumptions about the error term?
- ) E[e] = 0
- ) the variance of e does not change when X changes
- ) the probability distribution of e is normal. (Generally, it is difficult to know whether this holds)
- ) The value of e_i is independent of e_j for any two observations i and j. For example, if any two people in the room work for the same company, their errors in a regression of income on parents' income would likely be correlated. (Quick checks for these assumptions are sketched below.)
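A sketch of quick, conventional diagnostics for these four assumptions on a fitted model's residuals; the particular tests chosen here are standard options, not prescribed by the card:

```python
# Sketch: quick checks of the error-term assumptions via residuals.
import numpy as np
import statsmodels.api as sm
from scipy import stats
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 200)
y = 3.0 + 0.8 * x + rng.normal(0, 1.0, 200)
e = sm.OLS(y, sm.add_constant(x)).fit().resid

# 1) E[e] = 0: OLS residuals average to ~0 by construction.
print("mean of residuals:", e.mean())

# 2) Constant variance: compare residual spread for low vs. high x.
print("variances:", e[x < np.median(x)].var(), e[x >= np.median(x)].var())

# 3) Normality: a small p-value casts doubt on the normality assumption.
print("normality p-value:", stats.normaltest(e).pvalue)

# 4) Independence: Durbin-Watson near 2 suggests uncorrelated errors
# (mainly meaningful when observations have a natural ordering).
print("Durbin-Watson:", durbin_watson(e))
```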
Test whether B1 is “significantly” different from zero
1.) Estimate σ^2, the variance of the error term
2a.) Compute a confidence interval around Bhat1, or
2b.) Compute the t-statistic for the difference between Bhat1 and 0
(both are worked by hand in the sketch below)
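Both routes, worked by hand with the standard bivariate OLS formulas on simulated data (the numbers are illustrative):

```python
# Sketch: testing whether B1 differs from zero, by hand.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 50
x = rng.uniform(0, 10, n)
y = 1.0 + 0.7 * x + rng.normal(0, 2.0, n)

# Bivariate OLS estimates.
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

# 1) Estimate sigma^2 from the residuals (n - 2 degrees of freedom).
resid = y - (b0 + b1 * x)
sigma2_hat = np.sum(resid ** 2) / (n - 2)
se_b1 = np.sqrt(sigma2_hat / np.sum((x - x.mean()) ** 2))

# 2a) 95% confidence interval around Bhat1: does it contain zero?
t_crit = stats.t.ppf(0.975, df=n - 2)
print("95% CI:", (b1 - t_crit * se_b1, b1 + t_crit * se_b1))

# 2b) t-statistic for Bhat1 vs. 0 and its two-sided p-value.
t_stat = b1 / se_b1
print("t:", t_stat, "p-value:", 2 * stats.t.sf(abs(t_stat), df=n - 2))
```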
Total Sum of Squares (TSS)
the sum of squares of the deviations from the mean of the dependent variable
Explained Sum of Squares
the sum of squares of the deviations from the mean of the predictions (i.e. predictions of the dependent variable Y in this case)
Residual Sum of Squares (RSS)
the sum of squares of the residuals (i.e. the errors in the predictions)
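A sketch computing all three sums for a fitted line, and checking the identity the next card describes (TSS = ESS + RSS holds for OLS with an intercept); the data are simulated:

```python
# Sketch: the three sums of squares for a fitted line.
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(0, 10, 100)
y = 2.0 + 1.2 * x + rng.normal(0, 1.5, 100)

b1, b0 = np.polyfit(x, y, deg=1)   # least-squares slope and intercept
y_hat = b0 + b1 * x

tss = np.sum((y - y.mean()) ** 2)      # deviations of Y from its mean
ess = np.sum((y_hat - y.mean()) ** 2)  # deviations of predictions from the mean
rss = np.sum((y - y_hat) ** 2)         # residuals (prediction errors)

print(tss, ess + rss)  # equal: TSS = ESS + RSS for OLS with an intercept
```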
How is the Total Sum of Squares (TSS) partitioned into two parts?
This partitioning is called variance decomposition. We fit a regression line and see how much of the variation can be explained by the fitted relationship (the explained sum of squares, ESS) and how much is left over (the residual sum of squares, RSS).
the smaller the RSS, relative to TSS, the better the regression line appears to fit.
R^2 is defined as the …
fraction of the total variability in Y that is explained by the regression model. R^2 is a measure of “goodness of fit” and is (more formally) called the “coefficient of determination.”
R^2 =
(variation in Y that is explained by X) / (total variation in Y) = ESS / TSS
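A small sketch computing R^2 as 1 - RSS/TSS (equivalent to ESS/TSS for OLS with an intercept), and confirming that in the bivariate case it equals the squared correlation between X and Y:

```python
# Sketch: R^2 two ways, plus the squared-correlation check.
import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 100)
y = 1.0 + 0.9 * x + rng.normal(0, 2.0, 100)

b1, b0 = np.polyfit(x, y, deg=1)
y_hat = b0 + b1 * x

tss = np.sum((y - y.mean()) ** 2)
rss = np.sum((y - y_hat) ** 2)

r2 = 1 - rss / tss   # equals ESS / TSS for OLS with an intercept
print("R^2:", r2)
print("corr(X, Y)^2:", np.corrcoef(x, y)[0, 1] ** 2)  # same number
```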
OLS (Ordinary Least Squares, also known as minimizing squared errors, which is what we’ve been doing) gives us
Betas that minimize the RSS (keeping the squared residuals as small as possible), thus giving the largest possible R^2
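A sketch of what “minimizing the RSS” produces: the betas that solve the normal equations (X'X)b = X'y. Perturbing them can only increase the RSS; the data here are simulated:

```python
# Sketch: the RSS-minimizing betas from the normal equations.
import numpy as np

rng = np.random.default_rng(6)
x = rng.uniform(0, 10, 100)
y = 4.0 - 0.5 * x + rng.normal(0, 1.0, 100)

X = np.column_stack([np.ones_like(x), x])  # intercept column plus x

# Solve (X'X) b = X'y; these are the OLS betas.
betas = np.linalg.solve(X.T @ X, X.T @ y)
print("Bhat0, Bhat1:", betas)

# Any other betas yield a larger RSS (and hence a smaller R^2).
rss = np.sum((y - X @ betas) ** 2)
rss_perturbed = np.sum((y - X @ (betas + 0.1)) ** 2)
print(rss < rss_perturbed)  # True
```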
Adjusted R^2
adjusts for the number of independent variables (i.e. predictors) in the regression.
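The standard adjustment formula (a known result, not quoted from the card) is adjusted R^2 = 1 - (1 - R^2)(N - 1)/(N - k - 1), with N observations and k predictors. A sketch of why it matters, using pure-noise predictors:

```python
# Sketch: adjusted R^2 penalizing pure-noise predictors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n, k = 30, 5
X = sm.add_constant(rng.normal(size=(n, k)))  # k noise predictors
y = rng.normal(size=n)                        # y unrelated to X

res = sm.OLS(y, X).fit()
adj_by_hand = 1 - (1 - res.rsquared) * (n - 1) / (n - k - 1)
print("R^2:", res.rsquared)                   # misleadingly positive
print("adjusted:", res.rsquared_adj, "by hand:", adj_by_hand)  # agree
```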
When moving to multivariate regressions the interpretation of R^2 will be…
“what fraction of the variation in Y can be explained by the X variables.”
What are the two ways to use a bivariate prediction equation?
- ) To estimate the mean value of Y for a population with a given X.
- ) To estimate the value of Y for an individual with a given X. (Both uses are contrasted in the sketch below.)
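A sketch contrasting the two uses with statsmodels: the confidence interval for the mean of Y at a given X versus the wider interval for an individual's Y (the column names are statsmodels' own; the data are simulated):

```python
# Sketch: mean prediction vs. individual prediction at the same X.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
x = rng.uniform(0, 10, 100)
y = 2.0 + 1.0 * x + rng.normal(0, 2.0, 100)
res = sm.OLS(y, sm.add_constant(x)).fit()

pred = res.get_prediction([[1.0, 5.0]]).summary_frame(alpha=0.05)
# mean_ci_*: interval for the population mean of Y at X = 5 (narrow).
# obs_ci_*:  interval for one individual's Y at X = 5 (wider).
print(pred[["mean", "mean_ci_lower", "mean_ci_upper",
            "obs_ci_lower", "obs_ci_upper"]])
```

The obs_ci interval comes out wider than the mean_ci interval, which matches the later card: we should be less confident for an individual than for a group average.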
When there is only one X variable explaining Y, then r =
the square root of R^2, with the sign of the slope Bhat1 (this is the correlation between X and Y)
We should be less confident with our answer for an individual than for a …
group average
Why should we include more than one variable in our regressions?
- ) To improve the precision of our predictions of Y. By lowering the standard error of the regression model, we also lower the standard errors of our coefficients and thus have more precisely estimated coefficients.
- ) To avoid biased estimates of the coefficients (“omitted variable bias”; see the simulation sketch after this list)
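A simulation sketch of omitted variable bias: X2 affects Y and is correlated with X1, so regressing Y on X1 alone pushes X2's effect into Bhat1 (the coefficients are made up for illustration):

```python
# Sketch: omitted variable bias with two correlated predictors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 5000
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(size=n)              # x2 correlated with x1
y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)

short = sm.OLS(y, sm.add_constant(x1)).fit()    # omits x2
full = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()

print("Bhat1 omitting x2:", short.params[1])    # ~4.4, biased (true B1 = 2)
print("Bhat1 including x2:", full.params[1])    # ~2.0
```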
If N is large and k is small, R^2 and adjusted R^2 will be…
very similar
If N is small or k is large, R^2 and adjusted R^2 will be…
very different. In these cases you should use adjusted R^2
It is impossible to estimate a model where…
k > N. This is because there are not enough degrees of freedom in the data to estimate all k parameters.
If we don’t pass the F test…
we should be very concerned about the validity of our model or its predictions (the F test itself is sketched below)
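A sketch of the overall F test with statsmodels, whose null hypothesis is that all slope coefficients are zero; a large p-value is the “failing” case the card warns about:

```python
# Sketch: the overall F test (null: all slope coefficients are zero).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 100
X = sm.add_constant(rng.normal(size=(n, 3)))
y = rng.normal(size=n)   # no real relationship to X

res = sm.OLS(y, X).fit()
# A large p-value means we cannot reject the null that the X variables
# jointly explain nothing, so the model's predictions are suspect.
print("F:", res.fvalue, "p-value:", res.f_pvalue)
```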