Final Flashcards
We do not need the assumption that the error term is normally distributed to perform valid statistical inference if the other multiple linear regression model assumptions hold and we have a large sample.
True
Heteroskedasticity causes the OLS estimator to be biased.
False
Heteroskedasticity causes the OLS estimator to be inconsistent.
False
Heteroskedasticity causes the usual estimator of the variance of the OLS estimator to be inconsistent.
True
Heteroskedasticity-robust standard errors are valid only if the sample size is large.
True
Heteroskedasticity-robust standard errors are always larger than the usual standard errors.
False
When the error term in a regression model is heteroskedastic, the OLS estimator is not the best linear unbiased estimator (BLUE).
True
With a large sample size, heteroskedasticity-robust standard errors are valid even if the error term is homoskedastic.
True
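A minimal sketch in Python (statsmodels, simulated data; variable names are illustrative) contrasting the usual and heteroskedasticity-robust (HC1) standard errors:

```python
# Minimal sketch (simulated data): usual vs. heteroskedasticity-robust (HC1) standard errors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0, 10, n)
u = rng.normal(0, 1 + 0.5 * x)             # error variance grows with x -> heteroskedasticity
y = 1 + 2 * x + u

X = sm.add_constant(x)
usual = sm.OLS(y, X).fit()                 # usual (homoskedasticity-only) standard errors
robust = sm.OLS(y, X).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

print(usual.bse)   # may be invalid under heteroskedasticity
print(robust.bse)  # valid in large samples, whether or not the errors are homoskedastic
```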
Classical measurement error in the dependent variable does not cause bias in the OLS estimator, although it does increase the variance of the OLS estimator.
True
Under the classical measurement error assumption, measurement error in an explanatory variable causes attenuation bias.
True
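A minimal simulation sketch (Python/numpy/statsmodels, hypothetical data) showing attenuation bias from classical measurement error in a regressor:

```python
# Minimal sketch: classical measurement error in a regressor attenuates the OLS slope toward zero.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
x_star = rng.normal(0, 2, n)           # true, unobserved regressor
y = 3 + 1.5 * x_star + rng.normal(0, 1, n)
x_obs = x_star + rng.normal(0, 2, n)   # observed with classical measurement error

b_true = sm.OLS(y, sm.add_constant(x_star)).fit().params[1]
b_obs = sm.OLS(y, sm.add_constant(x_obs)).fit().params[1]
print(b_true)  # roughly 1.5
print(b_obs)   # shrunk toward zero: roughly 1.5 * Var(x*)/(Var(x*)+Var(e)) = 1.5 * 4/8 = 0.75
```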
If x is correlated with x* and x is uncorrelated with the error term, u, then we say that x is a good proxy for x*.
False
The F test is not useful in detecting functional form misspecification. Instead, one should use RESET or the Davidson-MacKinnon test.
False
Functional form misspecification is when the model does not properly account for the relationship between the dependent and explanatory variables, often because the appropriate explanatory variables are not observed.
False
RESET is useful in detecting functional form misspecification as well as general omitted variable bias.
False
Removes serial correlation via an iterative process
Cochrane-Orcutt / Prais-Winsten
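A minimal sketch of an iterative FGLS fit for AR(1) errors, in the spirit of Cochrane-Orcutt, using statsmodels' GLSAR on simulated data:

```python
# Minimal sketch: iterative FGLS for AR(1) errors (Cochrane-Orcutt-style) via statsmodels GLSAR.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
x = rng.normal(0, 1, n)
e = np.zeros(n)
for t in range(1, n):                  # AR(1) errors with rho = 0.7
    e[t] = 0.7 * e[t - 1] + rng.normal(0, 1)
y = 1 + 2 * x + e

X = sm.add_constant(x)
model = sm.GLSAR(y, X, rho=1)          # rho=1 means one autoregressive lag in the errors
res = model.iterative_fit(maxiter=10)  # alternates estimating rho and re-fitting (iterative process)
print(model.rho, res.params)
```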
Tests for Functional Form Misspecification
Ramsey RESET
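A minimal sketch of the RESET test on a deliberately misspecified (linear-only) model, assuming a statsmodels release that provides linear_reset:

```python
# Minimal sketch: Ramsey RESET on a model that wrongly omits a squared term.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import linear_reset  # assumes a recent statsmodels release

rng = np.random.default_rng(3)
n = 400
x = rng.uniform(0, 5, n)
y = 1 + 2 * x + 0.8 * x**2 + rng.normal(0, 1, n)   # true relationship is quadratic

res_linear = sm.OLS(y, sm.add_constant(x)).fit()   # misspecified: linear in x only
reset = linear_reset(res_linear, power=3, use_f=True)
print(reset)  # small p-value -> evidence of functional form misspecification
```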
Tests for Heteroskedasticity
Breusch-Pagan test or White test
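A minimal sketch running both tests on simulated heteroskedastic data (Python/statsmodels); the null hypothesis for each is homoskedasticity:

```python
# Minimal sketch: Breusch-Pagan and White tests; H0 is homoskedasticity for both.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan, het_white

rng = np.random.default_rng(4)
n = 500
x = rng.uniform(0, 10, n)
y = 1 + 2 * x + rng.normal(0, 1 + 0.5 * x)   # heteroskedastic errors

X = sm.add_constant(x)
res = sm.OLS(y, X).fit()
bp_stat, bp_pval, _, _ = het_breuschpagan(res.resid, X)
w_stat, w_pval, _, _ = het_white(res.resid, X)
print(bp_pval, w_pval)  # small p-values -> reject homoskedasticity
```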
Highly persistent series do not give biased estimates.
False
Eliminates high persistence in a time series
First Difference
Any result from a highly persistent series is spurious.
True
Tests for unit root
Dickey-Fuller
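A minimal sketch applying the (augmented) Dickey-Fuller test to a simulated random walk and to its first difference (Python/statsmodels):

```python
# Minimal sketch: (augmented) Dickey-Fuller test on a random walk and on its first difference.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(5)
y = np.cumsum(rng.normal(0, 1, 500))   # random walk: unit root, highly persistent

stat_level, pval_level, *_ = adfuller(y)          # H0: unit root; expect a large p-value
stat_diff, pval_diff, *_ = adfuller(np.diff(y))   # first difference is weakly dependent
print(pval_level, pval_diff)
```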
Finding a relationship between two or more trending variables simply because each is growing over time.
Spurious Regression Problem
The null hypothesis of the Dickey-Fuller test
unit root
How to correct for a spurious regression problem in a time series
Add a time trend
How do you account for seasonality
Include seasonal dummy variables and perform a joint F test on all of them to check for significance.
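A minimal sketch with hypothetical quarterly data: seasonal dummies plus a joint F test of their significance:

```python
# Minimal sketch: quarterly seasonal dummies plus a joint F test that they are all zero.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 120
df = pd.DataFrame({
    "quarter": pd.Categorical(np.tile([1, 2, 3, 4], n // 4)),
    "x": rng.normal(0, 1, n),
})
df["y"] = 2 + 1.5 * df["x"] + (df["quarter"] == 4).astype(float) * 3 + rng.normal(0, 1, n)

res = smf.ols("y ~ x + C(quarter)", data=df).fit()   # quarter 1 is the base season
# Joint F test that all seasonal dummy coefficients are zero
print(res.f_test("C(quarter)[T.2] = 0, C(quarter)[T.3] = 0, C(quarter)[T.4] = 0"))
```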
The long run propensity (LRP) in a finite distributed lag model is the average of all the coefficients on the included lags of the variable of interest plus the value of the contemporaneous variable of interest.
False
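A minimal sketch of a finite distributed lag model on simulated data, computing the LRP as the sum (not the average) of the contemporaneous and lag coefficients:

```python
# Minimal sketch: in y_t = a + d0*z_t + d1*z_{t-1} + d2*z_{t-2} + u_t, the LRP is d0 + d1 + d2.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 300
df = pd.DataFrame({"z": rng.normal(0, 1, n)})
df["z_l1"] = df["z"].shift(1)
df["z_l2"] = df["z"].shift(2)
df["y"] = 1 + 0.5 * df["z"] + 0.3 * df["z_l1"] + 0.2 * df["z_l2"] + rng.normal(0, 1, n)

res = smf.ols("y ~ z + z_l1 + z_l2", data=df).fit()   # rows with missing lags are dropped
lrp = res.params["z"] + res.params["z_l1"] + res.params["z_l2"]
print(lrp)  # roughly 0.5 + 0.3 + 0.2 = 1.0
```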
The Dickey-Fuller test can be used to determine if there is evidence that the specified time series is not highly persistent.
True
What would be an appropriate procedure to correct the standard errors when serial correlation is present in a time series regression model?
Cochrane-Orcutt Estimation
The Cochrane-Orcutt estimation procedure should be used when regressing a highly persistent time series on another highly persistent time series in order to obtain unbiased parameter estimates.
False. So what is Cochrane-Orcutt used for?
Serially correlated errors cause the OLS estimator to be biased and inconsistent.
False
Regressing a highly persistent time series on another highly persistent time series produces spurious results.
True
First differencing can be used to render a highly persistent time series weakly dependent.
True
Both first-differenced estimation and fixed-effects estimation can be used to estimate causal effects if the unobserved factors that are correlated with the independent variable of interest change over time.
False
Fixed-effects estimation can be used to estimate causal effects if the unobserved factors that are correlated with the dependent variable of interest are time invariant.
True
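A minimal sketch of fixed effects via the within (entity-demeaning) transformation on simulated panel data, where the unobserved effect is time invariant and correlated with x:

```python
# Minimal sketch: fixed effects via the within (entity-demeaning) transformation,
# which removes time-invariant unobserved heterogeneity.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(8)
n_units, n_periods = 50, 6
ids = np.repeat(np.arange(n_units), n_periods)
a = np.repeat(rng.normal(0, 2, n_units), n_periods)   # unobserved, time-invariant effect
x = 0.5 * a + rng.normal(0, 1, n_units * n_periods)   # correlated with the fixed effect
y = 1 + 2 * x + a + rng.normal(0, 1, n_units * n_periods)

df = pd.DataFrame({"id": ids, "x": x, "y": y})
demeaned = df[["x", "y"]] - df.groupby("id")[["x", "y"]].transform("mean")
res = sm.OLS(demeaned["y"], demeaned["x"]).fit()       # no constant needed after demeaning
print(res.params)  # close to the true effect of 2; pooled OLS would be biased upward
```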
If the average value of the outcome variable is different for the treated and control groups before the treatment, difference-in-differences estimation will not be able to provide an unbiased estimate of the effect.
False
The validity of difference-in-differences estimation depends on the assumption that the change in the treated and control groups would have been the same had it not been for the treatment.
True
Under which conditions is the difference-in-differences estimator not able to provide an unbiased estimate of the effect?
Some other factor that changes over time affects the outcome for only the treated group.
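A minimal sketch of difference-in-differences as OLS with treated, post, and their interaction, on simulated data; the treated main effect absorbs pre-treatment level differences, so validity rests on the common-trends assumption above:

```python
# Minimal sketch: difference-in-differences as OLS with treated, post, and their interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n = 2000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    "post": rng.integers(0, 2, n),
})
effect = 3.0
df["y"] = (5 + 2 * df["treated"] + 1 * df["post"]    # group and time level differences are fine
           + effect * df["treated"] * df["post"]     # treatment effect in the post period
           + rng.normal(0, 1, n))

res = smf.ols("y ~ treated * post", data=df).fit()
print(res.params["treated:post"])  # DiD estimate of the effect, roughly 3
```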
With panel data, estimation in first differences and fixed-effects estimation are computationally identical.
False
If the dependent variable is a binary variable, the error term is obviously not normally distributed. This may result in biased OLS estimates.
False
To model whether there are increasing or decreasing returns to a particular independent variable, one should include an interaction term.
False
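A minimal sketch on simulated data: a quadratic term, rather than an interaction, captures increasing or decreasing returns:

```python
# Minimal sketch: diminishing returns are captured by a quadratic term, not an interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
n = 500
x = rng.uniform(0, 10, n)
y = 1 + 2 * x - 0.1 * x**2 + rng.normal(0, 1, n)   # diminishing returns to x

df = pd.DataFrame({"x": x, "y": y})
res = smf.ols("y ~ x + I(x**2)", data=df).fit()
print(res.params)  # negative coefficient on x^2 -> decreasing returns; positive -> increasing
```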
What is the null hypothesis for the Breusch-Pagan Test?
H0: homoskedasticity