B15 Inference with Regression Models Flashcards
Is the t-statistic generated by the computer valid in a one-tailed hypothesis test?
The t-statistic itself is valid, but the p-value reported with it is not: the software automatically assumes a two-tailed test, so the reported p-value must be adjusted for a one-tailed alternative.
The null hypothesis should be rejected in a two-tailed test when the p-value is greater than the significance level α.
False; it should be rejected when the p-value is smaller than the significance level. The p-value is the probability of getting a result at least as extreme as the observed one, assuming the null hypothesis is true, so a p-value below α means the result is more unlikely than the test allows.
Can we use the computer-generated p-value and t-statistic if we want to test whether a coefficient is nonzero?
Yes, because H0: βj = 0 against Ha: βj ≠ 0 is exactly the two-tailed test the software performs by default.
How should we use a computer-generated p-value in a one-tailed hypothesis test?
Divide it in half to get the p-value for one side. This only works when the estimated coefficient lies on the side claimed by the alternative hypothesis; otherwise the one-tailed p-value is 1 minus half the reported value.
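A minimal sketch of the halving rule, assuming a hypothetical coefficient with a reported t-statistic and residual degrees of freedom (all numbers are illustrative):

```python
from scipy import stats

# Hypothetical software output for one coefficient (illustrative numbers)
t_stat = 2.25     # reported t-statistic, positive estimate
df = 27           # residual degrees of freedom, n - k - 1

# The software's two-tailed p-value
p_two = 2 * stats.t.sf(abs(t_stat), df)

# One-tailed p-value for Ha: beta > 0 -- halving is valid here only
# because the estimate (and hence t) lies on the alternative's side
p_one = p_two / 2
print(f"two-tailed p = {p_two:.4f}, one-tailed p = {p_one:.4f}")
```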
What is the F-statistic?
The mean square regression divided by the mean square error: F = MSR/MSE = (SSR/k) / (SSE/(n − k − 1)), where k is the number of regressors and n the number of observations. A large F means a large portion of the variation in the dependent variable is explained by the regression model, and thus that the model is useful.
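A minimal numpy sketch of the formula on made-up data (the two-regressor model and every number are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 2                              # observations, regressors
X = rng.normal(size=(n, k))
y = 1.0 + 2.0 * X[:, 0] - X[:, 1] + rng.normal(size=n)

# OLS fit of y on an intercept and the regressors
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ beta

SSE = np.sum((y - y_hat) ** 2)            # error sum of squares
SSR = np.sum((y_hat - y.mean()) ** 2)     # regression sum of squares

F = (SSR / k) / (SSE / (n - k - 1))       # MSR / MSE
print(f"F = {F:.2f}")                     # large F: the model explains a lot
```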
In a test of joint significance, the p-value of the F-statistic is the same as that of the t-statistic.
False in general. Only in a model with a single regressor does the F-statistic's p-value equal the two-tailed t-test's (there F = t²); the F-statistic is therefore redundant unless you do a joint test of significance.
The F-test tests whether all the slope coefficients are nonzero.
False; it tests H0: all slope coefficients are zero against the alternative that at least one of them is nonzero, i.e., it checks whether at least one slope is significant.
How is a model restricted in a partial F-test?
You drop the variables you suspect are not significant to form the restricted model, then compare it with the unrestricted model via the linear-restrictions F-statistic: F = ((SSE_R − SSE_U)/q) / (SSE_U/(n − k − 1)), where q is the number of dropped variables; see the sketch below.
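A sketch of that comparison on made-up data, where x2 and x3 are deliberately irrelevant so the restriction should be acceptable (the dropped variables and all numbers are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 60
X = rng.normal(size=(n, 3))                      # regressors x1, x2, x3
y = 1.0 + 2.0 * X[:, 0] + rng.normal(size=n)     # x2, x3 truly irrelevant

def sse(A, y):
    """Sum of squared errors of an OLS fit of y on the columns of A."""
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sum((y - A @ beta) ** 2)

ones = np.ones((n, 1))
sse_u = sse(np.hstack([ones, X]), y)             # unrestricted: x1, x2, x3
sse_r = sse(np.hstack([ones, X[:, :1]]), y)      # restricted: x2, x3 dropped

q, k = 2, 3                                      # restrictions, regressors
F = ((sse_r - sse_u) / q) / (sse_u / (n - k - 1))
p = stats.f.sf(F, q, n - k - 1)
print(f"partial F = {F:.2f}, p = {p:.3f}")       # large p: restriction holds
```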
What are residual plots?
Scatterplots of a regression's residuals, usually with the fitted values (or one of the regressors) on the x-axis and the residuals on the y-axis; the greater a point's distance from the x-axis, the greater the error for that observation.
What if you find patterns in the residual plot of a regression function?
It is a sign that an assumption of the OLS estimator is violated, for example that the dependent variable's relationship with an independent variable is nonlinear; see the sketch below.
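A matplotlib sketch of such a plot; the data is made up with a deliberately nonlinear signal so that fitting a straight line leaves a visible U-shaped pattern:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
x = rng.uniform(0, 3, size=100)
y = x ** 2 + rng.normal(scale=0.3, size=100)   # true relation is nonlinear

# Fit a (misspecified) straight line y = b0 + b1*x
A = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
fitted = A @ beta
residuals = y - fitted

plt.scatter(fitted, residuals)
plt.axhline(0, linestyle="--")                 # zero-error reference line
plt.xlabel("fitted values")
plt.ylabel("residuals")
plt.title("U-shaped residuals: the linearity assumption is violated")
plt.show()
```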
Multicollinearity makes it difficult to attribute an effect to a specific variable.
True, at least when it is strong; if it is perfect, it breaks the OLS estimator entirely because the coefficients can no longer be uniquely estimated.
What counts as severe multicollinearity?
A common rule of thumb: an absolute correlation of at least 0.8 between two regressors.
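A quick numpy check of that rule of thumb on hypothetical regressors, one of which is built to be collinear with another:

```python
import numpy as np

rng = np.random.default_rng(3)
x1 = rng.normal(size=200)
x2 = 0.9 * x1 + 0.3 * rng.normal(size=200)   # deliberately collinear with x1
x3 = rng.normal(size=200)

corr = np.corrcoef(np.column_stack([x1, x2, x3]), rowvar=False)

# Flag regressor pairs whose absolute correlation reaches the 0.8 cutoff
for i in range(corr.shape[0]):
    for j in range(i + 1, corr.shape[1]):
        if abs(corr[i, j]) >= 0.8:
            print(f"x{i+1}, x{j+1}: r = {corr[i, j]:.2f}  <- severe")
```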
What effect does changing variability have on the OLS estimator, and how can it be detected?
If the variance of the error term changes with the regressors (heteroskedasticity), the usual standard errors become misleading, and with them the t- and F-tests. You can detect it with a residual plot: look for a spread of errors that rises or falls with the fitted values, as in the check below.
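A numeric stand-in for that eyeballing, on made-up data whose error spread grows with the regressor (everything here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(1, 10, size=200)
y = 2.0 + 0.5 * x + rng.normal(scale=0.3 * x)   # error spread grows with x

A = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
fitted = A @ beta
residuals = y - fitted

# If |residual| trends up or down with the fitted values, the spread
# is not constant -- the rising/falling pattern the card describes
r = np.corrcoef(fitted, np.abs(residuals))[0, 1]
print(f"corr(fitted, |residual|) = {r:.2f}")     # clearly positive here
```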
The OLS estimator becomes biased when the variability changes with the regressors.
False; the coefficient estimates stay unbiased, it is only the standard errors (and the tests built on them) that become unreliable.
Give an example of correlated observations.
Time series data such as GDP, employment, and asset returns. Correlated observations do not make the OLS estimator biased, but they often shrink the reported standard errors, making the model seem stronger than it is.
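One simple check, not on the card but a common companion to it: the lag-1 autocorrelation of the residuals, computed here on made-up AR(1)-style errors:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
t = np.arange(n, dtype=float)

# Build AR(1) errors so consecutive observations are correlated
e = np.zeros(n)
for i in range(1, n):
    e[i] = 0.8 * e[i - 1] + rng.normal()
y = 1.0 + 0.05 * t + e

A = np.column_stack([np.ones(n), t])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
res = y - A @ beta

# Lag-1 autocorrelation of residuals; near zero would suggest independence
rho1 = np.corrcoef(res[:-1], res[1:])[0, 1]
print(f"lag-1 residual autocorrelation = {rho1:.2f}")
```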