Linear 22/23 Flashcards
What is the primary goal of the OLS method?
To minimize the sum of squared residuals and find the best-fitting line.
What is the formula for β in an OLS regression of yt on yt-1?
β = Cov(yt, yt-1) / Var(yt-1)
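The formula above can be checked numerically. A minimal sketch, assuming a simulated AR(1) series (all numbers hypothetical):

```python
import numpy as np

# Simulate an AR(1) process y_t = 0.5 * y_{t-1} + e_t (hypothetical example)
rng = np.random.default_rng(0)
n = 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()

y_lag, y_cur = y[:-1], y[1:]
# OLS slope: beta_hat = Cov(y_t, y_{t-1}) / Var(y_{t-1})
beta_hat = np.cov(y_cur, y_lag, ddof=1)[0, 1] / np.var(y_lag, ddof=1)
```

With 500 observations, beta_hat should land close to the true coefficient of 0.5.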
what does it mean for an estimator to be BLUE?
Best Linear Unbiased Estimator: it has the smallest variance among all linear and unbiased estimators
why is E[et|yt-1]=0 important in regression
it ensures that the error term does not systematically vary with the independent variable, making the estimator unbiased
what is homoscedasticity, and why is it necessary?
it means constant variance of the error term, necessary for valid standard errors
what is the impact of violating the no-autocorrelation assumption?
it causes inefficient OLS estimators and incorrect inference due to underestimated standard errors
why is normality of errors important?
it ensures that the finite-sample distribution of the parameter estimates is normal, making hypothesis testing valid
what are the steps in hypothesis testing?
- state H0 and H1
- calculate test statistic
- determine critical values or p-value
- Make a decision (reject/fail to reject H0)
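The four steps above can be sketched with a one-sample t-test (all data and numbers hypothetical):

```python
import numpy as np

# Step 1: state H0: mean = 0 against H1: mean != 0 (hypothetical sample)
rng = np.random.default_rng(1)
x = rng.normal(loc=1.0, scale=1.0, size=100)

# Step 2: compute the test statistic
t_stat = x.mean() / (x.std(ddof=1) / np.sqrt(len(x)))
# Step 3: compare against the two-sided 5% critical value (df = 99, approx. 1.984)
t_crit = 1.984
# Step 4: decision
reject_h0 = abs(t_stat) > t_crit
```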
what does H0: a=0 signify in CAPM?
it tests whether there is an abnormal return (alpha); CAPM implies a=0
what do the different values of the Durbin-Watson statistic indicate?
DW = 2: no serial correlation
DW < 2: positive serial correlation
DW > 2: negative serial correlation
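The DW statistic itself is simple to compute from residuals; a sketch with simulated residuals (hypothetical data):

```python
import numpy as np

def durbin_watson(resid):
    # DW = sum((e_t - e_{t-1})^2) / sum(e_t^2); ranges from 0 to 4
    diff = np.diff(resid)
    return np.sum(diff ** 2) / np.sum(resid ** 2)

rng = np.random.default_rng(2)
white = rng.standard_normal(1000)        # uncorrelated residuals -> DW near 2
ar = np.zeros(1000)
for t in range(1, 1000):                 # positive autocorrelation -> DW well below 2
    ar[t] = 0.8 * ar[t - 1] + rng.standard_normal()

dw_white = durbin_watson(white)
dw_ar = durbin_watson(ar)
```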
how does serial correlation affect OLS estimates?
Makes OLS inefficient, underestimates error variance, and invalidates hypothesis testing
what is serial correlation in regression models?
it occurs when residuals et are correlated with et-1 or other lags
does serial correlation affect OLS unbiasedness?
No, OLS remains unbiased but loses efficiency and proper inference properties
under what conditions does the Gauss-Markov theorem hold?
when the assumptions of linearity, no perfect multicollinearity, zero conditional mean, homoscedasticity, and no autocorrelation are met
is normality of errors necessary for large samples?
No, the Central Limit Theorem ensures that the sampling distribution of the OLS estimates is approximately normal
why is a t-test used to test a=0 in CAPM?
to determine if the abnormal return (alpha) is significantly different from zero
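That t-test can be sketched with simulated returns (all numbers hypothetical; the true alpha is set to zero, so the test should not reject):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 250
r_m = rng.normal(0.005, 0.04, n)                  # market excess returns
r = 0.0 + 1.2 * r_m + rng.normal(0.0, 0.02, n)   # asset excess returns, true alpha = 0

# OLS of r on a constant and r_m
X = np.column_stack([np.ones(n), r_m])
coef, *_ = np.linalg.lstsq(X, r, rcond=None)
resid = r - X @ coef
s2 = resid @ resid / (n - 2)
cov = s2 * np.linalg.inv(X.T @ X)
t_alpha = coef[0] / np.sqrt(cov[0, 0])           # |t| < 1.96 -> alpha not significant
```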
what’s the difference between finite and asymptotic properties of an estimator?
Finite properties apply to small samples, while asymptotic properties apply as sample size approaches infinity
how does multicollinearity affect OLS?
it does not bias the estimator but inflates standard errors, reducing statistical significance
why must the variance of yt-1 in the OLS formula be > 0?
without variance in yt-1, β cannot be calculated: the denominator Var(yt-1) of the slope formula would be zero
what are the type 1 and type 2 errors in hypothesis testing?
type 1: rejecting H0 when it is true
type 2: failing to reject H0 when it is false
what methods can be used to adjust for serial correlation?
use robust standard errors (e.g. Newey-West) or models like ARIMA
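A minimal Newey-West (HAC) standard-error sketch, assuming simulated data with an autocorrelated regressor and autocorrelated errors (all names and numbers hypothetical):

```python
import numpy as np

def newey_west_se(X, resid, lags):
    # HAC covariance: sandwich estimator with Bartlett-weighted autocovariances
    XtX_inv = np.linalg.inv(X.T @ X)
    Xe = X * resid[:, None]
    S = Xe.T @ Xe                        # lag-0 term
    for l in range(1, lags + 1):
        w = 1 - l / (lags + 1)           # Bartlett kernel weight
        G = Xe[l:].T @ Xe[:-l]
        S += w * (G + G.T)
    return np.sqrt(np.diag(XtX_inv @ S @ XtX_inv))

rng = np.random.default_rng(4)
n = 500
x = np.zeros(n); e = np.zeros(n)
for t in range(1, n):                    # AR(1) regressor and AR(1) errors
    x[t] = 0.9 * x[t - 1] + rng.standard_normal()
    e[t] = 0.9 * e[t - 1] + rng.standard_normal()
y = 1.0 + 2.0 * x + e

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
# Conventional OLS standard errors (assume no autocorrelation)
s2 = resid @ resid / (n - 2)
ols_se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))
hac_se = newey_west_se(X, resid, lags=5)
```

With positive serial correlation in both the regressor and the errors, the HAC standard errors come out larger than the conventional ones, which illustrates the "underestimated standard errors" problem on the cards above.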
what does a significant alpha in regression imply for investors?
Evidence of abnormal returns not explained by market risk (β)
how can residual plots help detect serial correlation?
Patterns in residuals, such as trends or cycles, indicate autocorrelation
what is the difference between homoscedasticity and heteroscedasticity?
homoscedasticity: constant error variance
heteroscedasticity: error variance depends on the independent variable
what are the assumptions needed for OLS to be BLUE?
Linearity, random sampling, no multicollinearity, zero conditional mean, homoscedasticity, no autocorrelation
what is the sampling distribution of an estimator?
the distribution of the estimator across random samples, reflecting its variability
linear in parameters
the stochastic process follows a linear model
No Perfect Collinearity
no independent variable is a perfect linear combination of the others
zero conditional mean of error terms
The expected value of the error term, given the explanatory variables for all time periods, is 0; the error term is uncorrelated with each explanatory variable in each time period
Strict Exogeneity
the error term is uncorrelated with the explanatory variables in every time period (past, present, and future)
Weak exogeneity
Weak exogeneity requires the structural error to have zero conditional expectation given the present and past regressor values, allowing errors to correlate with future regressor realizations
homoscedasticity
conditional on X, the variance of et is constant: Var(et|X) = σ²
No serial correlation
Conditional on X, the errors in two different time periods are uncorrelated
Autocorrelation
Conditional on X, the errors in two different time periods are correlated
Normality of error terms
The errors are independent of X and are independently and identically distributed as Normal(0, σ²)