L11 - Testing Linear Restrictions 2 Flashcards
How do you write Linear Restrictions in Matrix Form for a bivariate model?
- Rθ = r, where R is a matrix of coefficients, θ is a vector of parameters and r is a vector of restrictions
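A minimal sketch, assuming a bivariate model $y_t = \beta_1 + \beta_2 x_t + u_t$ and the hypothetical single restriction $\beta_2 = 1$ (chosen only for illustration):

```latex
% R is 1 x 2 (one restriction, two parameters), theta stacks the parameters, r is 1 x 1
R\theta = r:\qquad
\begin{pmatrix} 0 & 1 \end{pmatrix}
\begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix}
=
\begin{pmatrix} 1 \end{pmatrix}
```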
How do you write Linear Restrictions in Matrix Form for a multivariate model?
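The same structure applies, with one row of R per restriction. A sketch assuming the model $y_t = \beta_1 + \beta_2 x_{2t} + \beta_3 x_{3t} + \beta_4 x_{4t} + u_t$ and the hypothetical restrictions $\beta_2 = 1$ and $\beta_3 + \beta_4 = 0$:

```latex
% R is m x k (m restrictions, k parameters); each row encodes one restriction
R\theta = r:\qquad
\begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \end{pmatrix}
\begin{pmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \\ \beta_4 \end{pmatrix}
=
\begin{pmatrix} 1 \\ 0 \end{pmatrix}
```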
What are the three classical approaches to testing restrictions?
- Likelihood ratio
- Wald
- Lagrange Multiplier
The classical approaches to testing can be explained best in the context of maximum likelihood estimation.
What is the Notation needed for a maximum Likelihood Problem with a single parameter?
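A minimal sketch of the standard notation, assuming a single parameter θ:

```latex
% L(\theta): likelihood;  \ell(\theta) = \ln L(\theta): log-likelihood
\hat{\theta} = \arg\max_{\theta} \ell(\theta)  % unrestricted maximum likelihood estimate
\tilde{\theta}: \text{the restricted estimate, obtained by imposing } H_0
m: \text{the number of restrictions under } H_0
```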
What is the Likelihood Ratio approach when testing restrictions?
The Likelihood Ratio test is based on the difference between the log-likelihoods of the restricted and unrestricted cases –> vertical difference
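In symbols, with $\ell(\cdot)$ the log-likelihood, $\hat{\theta}$ the unrestricted MLE and $\tilde{\theta}$ the restricted estimate:

```latex
LR = 2\left[\ell(\hat{\theta}) - \ell(\tilde{\theta})\right] \sim \chi^2(m) \text{ under } H_0
% the vertical distance between the two points on the log-likelihood curve
```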
What is the Wald approach when testing restrictions?
The Wald test is based on the difference between the restricted and unrestricted parameter estimates –> horizontal difference
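In symbols, for a single restriction $\theta = \theta_0$ (the matrix form covers the general case $R\theta = r$):

```latex
W = \frac{(\hat{\theta} - \theta_0)^2}{\widehat{\mathrm{Var}}(\hat{\theta})} \sim \chi^2(1)
% general case: W = (R\hat{\theta} - r)'\,[R\,\widehat{\mathrm{Var}}(\hat{\theta})\,R']^{-1}(R\hat{\theta} - r) \sim \chi^2(m)
% only the unrestricted model needs to be estimated
```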
What is the Lagrange Multiplier approach when testing restrictions?
The Lagrange multiplier test is based on the slope of the log-likelihood function at the restricted parameter value
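In symbols, with $S(\theta) = \partial \ell / \partial \theta$ the score and $I(\theta)$ the information, both evaluated at the restricted estimate:

```latex
LM = \frac{S(\tilde{\theta})^2}{I(\tilde{\theta})} \sim \chi^2(m) \text{ under } H_0
% only the restricted model needs to be estimated;
% the slope S is zero at the unrestricted maximum, so a steep slope at \tilde{\theta} is evidence against H_0
```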
How are all three test statistics distributed?
All three test statistics follow a chi-squared distribution under the null hypothesis with degrees of freedom equal to the number of restrictions.
What is the Wald Test an example of?
where k is the number of parameters to be estimated, including the intercept and slope coefficients
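A standard result consistent with this card: in the classical linear regression model, the F-test is a Wald-type test, with statistic

```latex
F = \frac{(RRSS - URSS)/m}{URSS/(T - k)} \sim F(m,\, T - k) \text{ under } H_0
% m: number of restrictions, T: sample size, k: number of parameters (intercept + slopes)
```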
What is the Likelihood Ratio Test similar to?
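A standard answer, assuming normally distributed errors in the linear regression model: the LR test is similar to the F-test, since both compare the restricted and unrestricted residual sums of squares. The LR statistic can then be written as

```latex
LR = T \ln\!\left(\frac{RRSS}{URSS}\right) \sim \chi^2(m) \text{ under } H_0
% follows from substituting the concentrated log-likelihoods into 2[\ell(\hat{\theta}) - \ell(\tilde{\theta})]
```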
How can you convert the t-test into a chi-squared distribution?
- You square it; this gives a chi-squared distribution with 1 degree of freedom (since you are only testing a single value of β –> thus only one restriction)
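In symbols, assuming the usual t-ratio for a single restriction $\beta = \beta^*$:

```latex
t = \frac{\hat{\beta} - \beta^*}{SE(\hat{\beta})} \sim t(T - k)
\;\Rightarrow\; t^2 \sim F(1,\, T - k)
% as T grows, F(1, T-k) converges to \chi^2(1), the asymptotic Wald distribution
```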
Is the F-statistic positive or negative?
Restrictions always increase the residual sum of squares (the unrestricted estimate is already at the minimum).
Therefore the F-statistic is always positive: RRSS − URSS ≥ 0, so the numerator can never be negative.
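A quick worked example with hypothetical numbers (RRSS = 120, URSS = 100, m = 2 restrictions, T = 60 observations, k = 10 parameters):

```latex
F = \frac{(120 - 100)/2}{100/(60 - 10)} = \frac{10}{2} = 5 > 0
% RRSS >= URSS always, so F can never be negative
```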
How can a Regression Model be Misspecified?
A regression model may be misspecified for a number of reasons:
- An incorrect choice of functional form –> we assume it is linear, but the relationship may not be
- Omitted variable bias –> when we set up the model we may not have the correct set of X variables –> left out relevant variables
- Inclusion of irrelevant variables
- Measurement error in the regressors –> X variables not measured correctly (e.g. taking logs of variables with zero or negative values –> the log is undefined, so the regressor cannot be measured)
- Correlation of the independent variables with the errors
How can you test if there is an Omitted Variable Bias?
The error term disappears in the third line of the derivation because the expected value of the error term is equal to 0 (see the sketch after this list)
- There are only two conditions under which we can have an unbiased estimator:
- If β2 = 0
- If E(Xi1Xi2) = 0 (they are uncorrelated)
If either of these conditions holds, we still have an unbiased OLS estimator
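A sketch of the derivation behind this card, assuming the true model is $y_i = \beta_1 X_{i1} + \beta_2 X_{i2} + u_i$ but we estimate $y_i = \beta_1 X_{i1} + v_i$:

```latex
\hat{\beta}_1 = \frac{\sum X_{i1} y_i}{\sum X_{i1}^2}
            = \beta_1
            + \beta_2 \frac{\sum X_{i1} X_{i2}}{\sum X_{i1}^2}
            + \frac{\sum X_{i1} u_i}{\sum X_{i1}^2}
% taking expectations (conditional on the regressors), the error term drops out
% in the third line because E(u_i) = 0:
E(\hat{\beta}_1) = \beta_1 + \beta_2 \frac{\sum X_{i1} X_{i2}}{\sum X_{i1}^2}
% unbiased only if \beta_2 = 0 or the X's are uncorrelated, i.e. E(X_{i1} X_{i2}) = 0
```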
How can you test for Inclusion of Irrelevant Variables?
Variance of β1(hat) > variance of the OLS estimator from the correctly specified model –> the denominator of the variance of β1(hat) is multiplied by (1 − r²), where r is the correlation between the regressors (always between −1 and 1); this factor is at most 1, so the denominator is smaller, allowing the variance of β1(hat) to be bigger
Normally we want a variance as small as possible, but when we include irrelevant variables it is greater than the variance of the correctly specified OLS estimator, so the estimator is unbiased but inefficient
- The degree of inefficiency depends on the correlation of the right-hand-side variables –> the more correlated they are, the higher the inefficiency (see the sketch below)
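A sketch of the variance comparison in the two-regressor case, assuming $x_2$ is the irrelevant variable and $r_{12}$ is the sample correlation between $x_1$ and $x_2$:

```latex
% correctly specified model (x_2 excluded):
\mathrm{Var}(\hat{\beta}_1^{OLS}) = \frac{\sigma^2}{\sum_i (x_{i1} - \bar{x}_1)^2}
% over-specified model (irrelevant x_2 included):
\mathrm{Var}(\hat{\beta}_1) = \frac{\sigma^2}{\sum_i (x_{i1} - \bar{x}_1)^2 \,(1 - r_{12}^2)}
% since 0 <= 1 - r_{12}^2 <= 1, the denominator shrinks and the variance grows;
% the more correlated the regressors, the larger the inefficiency
```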