Section 4 Flashcards
In a multiple regression model, how many explanatory variables are there and how many parameters to estimate are there?
K-1 explanatory variables
K parameters to estimate
What does βj where j=2…k represent?
Each β (other than β1, the intercept) is a PARTIAL slope coefficient
Therefore β2 measures the change in the mean of Y per unit change in X2 (ceteris paribus, i.e. holding the other regressors constant)
What is the technique for minimising sum of squares in multiple regression analysis?
Take the partial derivatives ∂S/∂βi, set them all equal to 0 and solve the resulting equations simultaneously
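Setting those derivatives to zero yields the "normal equations" X'Xβ = X'y, and solving them gives the OLS estimates. A minimal numpy sketch (the data and coefficient values are made up for illustration):

```python
import numpy as np

# Toy data: intercept plus two regressors, so k = 3 parameters to estimate.
rng = np.random.default_rng(0)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

# Setting each dS/dbeta_i = 0 (S = sum of squared residuals) gives the
# normal equations X'X b = X'y; solving them is exactly the OLS estimator.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)  # should be close to beta_true
```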
Which assumption must be modified to ensure that multiple regression OLS estimators are still BLUE?
Assumption 4 must be modified: it now must be extended across all explanatory variables in the model, so EACH regressor is uncorrelated with the error
Which assumption must be added to ensure that multiple regression OLS estimators are still BLUE?
No exact collinearity can exist between any of the variables (i.e. no exact linear relationship between any of the regressors)
Another name for exact collinearity?
Perfect collinearity
Note: exact collinearity is very rare
How are the sampling distributions for the OLS estimators distributed?
Normally:
β(hat)j ~ N(βj, σ^2_β(hat)j)
(Note the variance is the sampling variance of β(hat)j, not σ^2 itself)
How do we know the sampling distributions of the OLS estimators are unbiased?
The means equal the true (but unknown) parameter values: E[β(hat)j] = βj
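Unbiasedness can be checked by simulation: averaging β(hat) over many repeated samples recovers the true β. A sketch with made-up values:

```python
import numpy as np

# Monte Carlo check of unbiasedness: the average of the OLS estimates
# over many samples should sit at the true parameter values.
rng = np.random.default_rng(1)
n, reps = 50, 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # regressors held fixed
beta_true = np.array([2.0, 3.0])
estimates = np.empty((reps, 2))
for r in range(reps):
    y = X @ beta_true + rng.normal(size=n)  # fresh error draw each sample
    estimates[r] = np.linalg.solve(X.T @ X, X.T @ y)
print(estimates.mean(axis=0))  # close to [2, 3]
```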
What is the use of the R(bar)^2 statistic?
The R^2 statistic is often used to compare models with the same dependent variable, to see which explains it better. However, R^2 never decreases as more explanatory variables are added, so it can lead to incorrect conclusions about which model is better. To penalise the use of extra explanatory variables we use the R(bar)^2 (adjusted R^2) statistic
The R(bar)^2 statistic will only increase if new variables ADD to the analysis
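The adjustment is R(bar)^2 = 1 − (1 − R^2)(n − 1)/(n − k). A sketch comparing the two statistics when an irrelevant regressor is added (data are made up):

```python
import numpy as np

def r2_and_adjusted(y, X):
    """R^2 and adjusted R(bar)^2 for an OLS fit of y on X (X includes intercept)."""
    n, k = X.shape
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    resid = y - X @ beta
    rss = resid @ resid
    tss = ((y - y.mean()) ** 2).sum()
    r2 = 1 - rss / tss
    r2_bar = 1 - (1 - r2) * (n - 1) / (n - k)  # penalises extra regressors
    return r2, r2_bar

rng = np.random.default_rng(2)
n = 60
x = rng.normal(size=n)
y = 1 + 2 * x + rng.normal(size=n)
X1 = np.column_stack([np.ones(n), x])
X2 = np.column_stack([X1, rng.normal(size=n)])  # add an irrelevant regressor
r2_a, r2bar_a = r2_and_adjusted(y, X1)
r2_b, r2bar_b = r2_and_adjusted(y, X2)
# R^2 cannot fall when a regressor is added; R(bar)^2 can.
```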
What test statistic, with how many degrees of freedom, would be used for a test involving one parameter? (Eg. βj=βj*) how would you do a test of significance?
T statistic, n-k DofF
Sub in βj*=0
See 4.4.1 to learn how to do it
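The mechanics: t = (β(hat)j − βj*)/se(β(hat)j), compared with the t distribution on n − k degrees of freedom. A numpy/scipy sketch with illustrative numbers (not the example from 4.4.1):

```python
import numpy as np
from scipy import stats

# Test of significance: H0: beta_j = 0, t distributed with n - k d.o.f.
rng = np.random.default_rng(3)
n = 80
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.8, 0.0]) + rng.normal(size=n)
k = X.shape[1]
beta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta
s2 = resid @ resid / (n - k)             # unbiased estimate of sigma^2
var_beta = s2 * np.linalg.inv(X.T @ X)   # estimated var-cov matrix of beta(hat)
se = np.sqrt(np.diag(var_beta))
t_stat = beta / se                       # (beta_hat - 0) / standard error
crit = stats.t.ppf(0.975, df=n - k)      # two-tailed 5% critical value
# Reject H0 for regressor j when |t_stat[j]| > crit.
```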
Give two examples of testing a linear restriction?
Testing if Σparameters=1
Or
Testing if β2=β3
How would you go about testing if β2=β3 (LINEAR RESTRICTION) via a hypothesis test? (H0? Distribution of it? Test statistic?)
H0: β2-β3=0
Distribution: β(hat)2 − β(hat)3 ~ N(β2 − β3, σ^2_β(hat)2 + σ^2_β(hat)3 − 2cov(β(hat)2, β(hat)3))
SEE 4.4.2
(T statistic)
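The test above can be sketched in numpy/scipy, pulling the variances and covariance from the estimated var-cov matrix (the data are made up, with β2 = β3 true by construction):

```python
import numpy as np
from scipy import stats

# t test of the linear restriction H0: beta2 - beta3 = 0.
rng = np.random.default_rng(4)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0, 2.0]) + rng.normal(size=n)  # beta2 = beta3 holds
k = X.shape[1]
beta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta
s2 = resid @ resid / (n - k)
V = s2 * np.linalg.inv(X.T @ X)  # estimated var-cov matrix of beta(hat)
diff = beta[1] - beta[2]
# var(b2 - b3) = var(b2) + var(b3) - 2 cov(b2, b3)
se_diff = np.sqrt(V[1, 1] + V[2, 2] - 2 * V[1, 2])
t_stat = diff / se_diff                     # t with n - k d.o.f. under H0
p_val = 2 * stats.t.sf(abs(t_stat), df=n - k)
```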
What test statistic, with how many degrees of freedom, would be used for the testing of joint restrictions? (Eg. β2=β4=0)
F statistic with q DofF for the numerator and n-k for the denominator, where q = number of restrictions, n = sample size and k = total number of β parameters
See 4.4.3 and learn it!
See more complex example for tests of joint restrictions in notes
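One standard route (a sketch with made-up data, not the notes' example) compares restricted and unrestricted residual sums of squares: F = ((RSS_R − RSS_U)/q) / (RSS_U/(n − k)):

```python
import numpy as np
from scipy import stats

# F test of the joint restriction H0: beta2 = beta4 = 0 in a model with
# intercept + two slopes (here q = 2 restrictions leave intercept only).
rng = np.random.default_rng(5)
n = 120
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([1.0, 1.5, -1.0]) + rng.normal(size=n)
k = X.shape[1]

def rss(y, X):
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    r = y - X @ beta
    return r @ r

rss_u = rss(y, X)          # unrestricted model
rss_r = rss(y, X[:, [0]])  # restricted model: both slopes set to 0
q = 2
F = ((rss_r - rss_u) / q) / (rss_u / (n - k))
crit = stats.f.ppf(0.95, q, n - k)  # 5% critical value, F(q, n-k)
# Reject H0 when F > crit.
```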
Thing to remember when doing an F-test on joint restrictions?
No need to choose between one- and two-tailed tests: the F-test is always upper-one-tailed, so at the 5% level of significance simply use the 5% critical-value table