Quants Flashcards

1
Q

What is Regression Analysis?

A

A statistical process in which we infer the influence of one or more (independent) variables on a single (dependent) variable, or predict a dependent variable (the criterion) based on other independent variables (the predictors).

2
Q

Simple linear regression vs Multiple linear regression?

A

Simple linear regression has one dependent variable and one independent variable; multiple linear regression has a single dependent variable and two or more independent variables.

3
Q

What should be an Analyst’s focus?

A

The heavy computational work is done by statistical software like Excel, Python, R, etc.
An analyst should focus on:
A) Specifying the model correctly,
B) Interpreting the output of the software.

4
Q

Uses of Multiple Linear Regression?

A

A) To identify relationships between variables
B) To test existing theories
C) To forecast/predict a criterion

5
Q

What is the general form of the regression equation? What is the intercept coefficient and what are the slope coefficients?

A

Yi = b0 + b1X1i + b2X2i + … + bkXki + ei
b0 is the intercept coefficient; it represents the expected value of Y (the criterion) when all the predictors are zero.
b1, b2, …, bk are the partial (regression) slope coefficients, which measure how much the criterion changes when that independent variable changes by one unit, holding all other independent variables constant. We'll always have k slope coefficients, where k = number of independent variables.
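
A minimal sketch (e.g., in Python with statsmodels) of how this equation translates into an estimated regression; the data and coefficient values below are made up purely for illustration:

```python
# Estimating Yi = b0 + b1*X1i + b2*X2i + ei with OLS (synthetic, assumed data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                      # two predictors: X1, X2
y = 1.5 + 0.8 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=0.5, size=100)

X_const = sm.add_constant(X)                       # adds the column of 1s for the intercept b0
fit = sm.OLS(y, X_const).fit()
print(fit.params)                                  # estimates of [b0, b1, b2]
```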

6
Q

Assumptions under multiple linear regression?

A

There are 5 in total:
A) Linearity - The relationship between the criterion and each of the predictors is linear. (The regression line should fit through the data points without a systematic pattern.)
B) Homoskedasticity - The variance of the error terms is the same for all observations. (In a plot with the predicted criterion on the X-axis and the residuals on the Y-axis, the residuals should stay within a constant band.)
C) Independence of errors - The observations are independent of one another; regression residuals are uncorrelated across observations.
D) Normality - The error terms are normally distributed. (In a normal Q-Q plot, deviations from the diagonal beyond +/-2 standard deviations indicate that the distribution is fat-tailed.)
E) Independence of independent variables - The independent variables are not random, and there is no exact linear relationship between two or more of the independent variables or combinations of the independent variables.

7
Q

What is the goodness of fit?

A

Goodness of fit shows us how well a particular regression model fits the given data.

8
Q

What is the simplest measure for goodness of fit?

A

R^2, or R-squared, is the simplest measure to check/determine the goodness of fit.
In a simple regression model, R^2 (the coefficient of determination) is a measure of the goodness of fit of an estimated regression to the data.

9
Q

How do we calculate the coefficient of determination?

A

R^2 = (sum of squares regression)/(sum of squares total) = (explained variation)/(total variation)
The numerator is the sum of [(Y-hat_i) - (Y-bar)]^2 and the denominator is the sum of [(Y_i) - (Y-bar)]^2, where Y-hat_i is the predicted Y value, Y_i is the actual Y value, and Y-bar is the average value of Y.
*Notice the denominator isn't based on the regression model.
R^2 ranges from 0 (lowest) to 1 (highest); the higher, the better.
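
A hedged sketch of that formula in Python; the y and y_hat arrays are hypothetical, and SSR/SST equals R^2 only when y_hat comes from an OLS fit with an intercept:

```python
# R^2 computed directly from the sums of squares (illustrative numbers only).
import numpy as np

y = np.array([3.0, 5.0, 7.0, 9.0])        # actual Y values (Y_i)
y_hat = np.array([3.2, 4.8, 7.1, 8.9])    # predicted Y values (Y-hat_i)
y_bar = y.mean()                          # average Y (Y-bar)

ss_regression = np.sum((y_hat - y_bar) ** 2)   # explained variation
ss_total = np.sum((y - y_bar) ** 2)            # total variation (not based on the model)
r_squared = ss_regression / ss_total
```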

10
Q

R^2 works well with simple regression model, but what is the problem with multiple linear regression?

A

As we add predictors to our model, R^2 increases even if the extra variation they explain is not statistically significant (i.e., they have no real explanatory power).
This leads to an overfitting problem, giving us an overly complex model.

11
Q

So, how do we estimate the goodness of fit for a multiple linear regression model?

A

We use adjusted R^2.

12
Q

How to calculate adjusted R^2?

A

Adjusted R^2 = 1 - [(n - 1)/(n - k - 1)] * (1 - R^2), where n = no. of observations and k = no. of predictors.
*(n - k - 1) is the degrees of freedom.
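
A minimal sketch of the formula; the R^2, n, and k values are assumed example numbers:

```python
# Adjusted R^2 = 1 - [(n - 1)/(n - k - 1)] * (1 - R^2)
def adjusted_r_squared(r_squared: float, n: int, k: int) -> float:
    """n = number of observations, k = number of predictors; (n - k - 1) = degrees of freedom."""
    return 1 - (n - 1) / (n - k - 1) * (1 - r_squared)

print(adjusted_r_squared(0.72, n=60, k=4))   # ~0.70, slightly below the raw R^2
```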

13
Q

What happens to adjusted R^2 when we add new predictors in our regression model?

A

Adjusted R^2 increases if the absolute value of the new coefficient's t-statistic is greater than 1, and
Adjusted R^2 decreases if the absolute value of the new coefficient's t-statistic is less than 1.

14
Q

Additional remarks about Adjusted R^2?

A

Adjusted R^2 can be negative (whereas R^2 has a lower bound of zero).
A high adjusted R^2 means that the model is a good fit, but it doesn't mean that the model is well specified (i.e., that it uses all the right predictors and that the predictors are in the correct form).

15
Q

What are the shortcomings of Adjusted R^2?

A

In multiple regression, there is no neat interpretation of adjusted R^2 (unlike R^2 in simple regression, which is explained variation/total variation).
It doesn't indicate whether the coefficients are significant or whether the predictions are biased.
It's also not generally suitable for testing the significance of the model's fit (for which we explore the ANOVA further, calculating the F-statistic and other goodness-of-fit metrics).

16
Q

What are the other metrics we use beyond adjusted R^2?

A

A) AIC (Akaike's information criterion) - preferred if the model is used for prediction purposes
B) BIC (Schwarz's Bayesian information criterion) - preferred when the best goodness of fit is the goal
*For both, the lower the number, the better the model.

17
Q

How do we test if the coefficients in a regression model are significant or not?

A

By hypothesis testing! *The null always assumes the coefficient value is zero.
For any single coefficient (like b0, b1, or b2), the testing is the same as in Level 1.
The t-stat is calculated and reported in the regression output; we compare it with a critical value and reject the null (which says the coefficient is zero) if the t-stat exceeds it.

18
Q

What is a Joint F-test?

A

It is used to jointly test a subset of variables in multiple regression.

19
Q

What are the two types of Joint F-test?

A

Restricted model vs unrestricted model -> the unrestricted model has all the coefficients.
The restricted (nested) model sets two or more coefficients to zero (to see whether adding those variables provides any statistical value).
The number of coefficients we set to zero is called 'q'.

20
Q

How is the F-statistic calculated for the restricted model?

A

Using the formula:
F = {[(sum of squared errors, restricted model) - (sum of squared errors, unrestricted model)]/q} / [(sum of squared errors, unrestricted model)/(n - k - 1)]
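
A sketch of that formula as a small helper; every input value below is a hypothetical example:

```python
# Joint F-statistic for testing q restrictions (coefficients set to zero).
def joint_f_stat(sse_restricted: float, sse_unrestricted: float,
                 q: int, n: int, k: int) -> float:
    """q = number of coefficients set to zero; k = predictors in the unrestricted model."""
    numerator = (sse_restricted - sse_unrestricted) / q
    denominator = sse_unrestricted / (n - k - 1)
    return numerator / denominator

print(joint_f_stat(sse_restricted=120.0, sse_unrestricted=100.0, q=2, n=60, k=5))  # ~5.4
```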

21
Q

What is the General linear F-test?
How is the F-Stat calculated for the General linear F-test?

A

It is an extension of the joint F-test, where we test the significance of the whole regression equation.
The null is that all slope coefficients are zero; the alternative is that at least one is not equal to zero.
Here,
*F-stat = mean regression sum of squares / mean squared error = MSR/MSE
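
A sketch of the MSR/MSE calculation; the ANOVA inputs are assumed example values:

```python
# Overall (general linear) F-statistic from ANOVA quantities.
def overall_f_stat(ss_regression: float, ss_error: float, n: int, k: int) -> float:
    msr = ss_regression / k            # mean regression sum of squares
    mse = ss_error / (n - k - 1)       # mean squared error
    return msr / mse

print(overall_f_stat(ss_regression=80.0, ss_error=100.0, n=60, k=5))   # ~8.6
```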

22
Q

Imp points about F test?

A

We reject the null if the F-stat value exceeds the given critical value.
All F-tests are one-tailed, whereas the t-test of any single coefficient is a two-tailed test.

23
Q

What is Forecasting using multiple regression?

A

It's using the estimated coefficients together with assumed values for the predictors to produce a predicted value of the criterion.
Basically, the regression coefficients are estimated from the data and the predictor values are assumed (forecast).

24
Q

What are the principles we need to adhere to when specifying a model?

A

A) Model should be grounded in economic reasoning (Choice of Variables should have economic reasoning)
B) Model should be parsimonious (Each variable chosen should play an essential role or we should have as few X variables as possible)
C) Model should perform well out of sample
D) Model function should be appropriate (If nonlinear relationship of predictors, then use appropriate nonlinear terms)
E) Model should satisfy regression assumptions

25
Q

Reasons for Failures in Regression Functional Form?

A

A) Omitted Variables
B) Inappropriate form of variables (Ignoring a nonlinear relationship between Criterion and a Predictor)
C) Inappropriate variable scaling (One or more variables may need to be transformed before regression for example, using common-sized balance sheets instead of actual numbers from financial statements when comparing different companies)
D) Inappropriate data pooling (combining data from different populations or regimes that shouldn't be pooled together)

26
Q

How do we test for Heteroskedasticity?

A

We use the Breusch-Pagan (BP) Test.

27
Q

What are the steps of a BP test?

A

A) Null = there is no (conditional) heteroskedasticity
B) We run the normal regression and obtain the residuals
C) Then we regress the squared residuals on the independent variables (X)
D) Then we calculate the BP test stat as n * R^2, using the R^2 from that residual regression (it follows a chi-square distribution with k degrees of freedom)
E) Compare with the critical value. If the test stat > critical value, then we reject the null, and vice versa.
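
A hedged sketch of those steps with statsmodels (synthetic data purely for illustration; statsmodels also provides statsmodels.stats.diagnostic.het_breuschpagan, which packages the same test):

```python
# Manual Breusch-Pagan steps on a made-up regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
X = sm.add_constant(rng.normal(size=(200, 2)))
y = X @ np.array([1.0, 0.5, -0.2]) + rng.normal(size=200)

fit = sm.OLS(y, X).fit()                          # step B: run the normal regression
aux = sm.OLS(fit.resid ** 2, X).fit()             # step C: regress squared residuals on X
bp_stat = len(y) * aux.rsquared                   # step D: BP stat = n * R^2
# Step E: compare bp_stat with the chi-square critical value (df = number of predictors).
```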

28
Q

What do we need to know about P-value from level 1?

A

A higher t-stat corresponds to a lower p-value.
We compare this p-value to the level of significance in order to decide if we have to reject the null or not.
If the p-value is less than the level of significance, we reject the null hypothesis.

29
Q

How do we correct conditional Heteroskedasticity?

A

By computing robust standard errors (a.k.a. heteroskedasticity-consistent standard errors or White-corrected standard errors).
*The t-stat comes down and the p-value increases.
Why? Because the robust standard errors are typically larger than the uncorrected ones, which understated the true standard error.
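
A sketch of requesting White-corrected standard errors in statsmodels; the data are synthetic and 'HC1' is just one of several heteroskedasticity-consistent variants:

```python
# OLS with heteroskedasticity-consistent (robust) standard errors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
X = sm.add_constant(rng.normal(size=(200, 2)))
y = X @ np.array([1.0, 0.5, -0.2]) + rng.normal(size=200) * np.abs(X[:, 1])  # heteroskedastic noise

plain_fit = sm.OLS(y, X).fit()
robust_fit = sm.OLS(y, X).fit(cov_type="HC1")
print(plain_fit.bse, robust_fit.bse)   # robust SEs are typically larger -> lower t-stats
```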

30
Q

What are the consequences of serial correlation (autocorrelation)?

A

If a predictor is a lagged value of the criterion (a lagged dependent variable) -> both the coefficient estimates and the SE estimates are invalid.
If no predictor is a lagged value of the criterion -> only the SE estimates are invalid (the coefficient estimates remain valid).

31
Q

How do we test for Autocorrelation?

A

We use the Breusch-Godfrey (BG) test and the Durbin-Watson (DW) test.
The DW test only tests for first-order serial correlation.
*The BG test can be used to test whether there is serial correlation in the model's residuals up to lag p.
Just like heteroskedasticity, the null is that there is no serial correlation and the alternative is that there is.
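
A hedged sketch of both diagnostics as exposed by statsmodels (synthetic data for illustration):

```python
# Durbin-Watson and Breusch-Godfrey diagnostics on a made-up regression.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(3)
X = sm.add_constant(rng.normal(size=(200, 2)))
y = X @ np.array([1.0, 0.5, -0.2]) + rng.normal(size=200)

fit = sm.OLS(y, X).fit()
dw = durbin_watson(fit.resid)                                    # ~2 suggests no first-order SC
bg_stat, bg_pvalue, _, _ = acorr_breusch_godfrey(fit, nlags=4)   # tests residuals up to lag 4
```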

32
Q

How is serial correlation corrected?

A

By robust standard errors, specifically serial-correlation consistent standard errors (e.g., Newey-West standard errors), which also correct for heteroskedasticity.
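
A sketch of requesting such standard errors via statsmodels' HAC (Newey-West) option; the data and the maxlags choice are illustrative assumptions:

```python
# OLS with serial-correlation (and heteroskedasticity) consistent standard errors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
X = sm.add_constant(rng.normal(size=(200, 2)))
y = X @ np.array([1.0, 0.5, -0.2]) + rng.normal(size=200)

hac_fit = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
print(hac_fit.bse)   # Newey-West standard errors
```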

33
Q

What are the consequences of Multicollinearity?

A

The standard errors are inflated* (unlike heteroskedasticity and serial correlation, where the reported standard errors are understated).
Thus, when the SEs are high, the t-stats are small and we fail to reject the null even when we should.

34
Q

What are the classic symptoms for multicollinearity?

A

A high R^2 and a significant F-stat,
while the t-stats for the individual slope coefficients aren't significant.

35
Q

How do we test for multicollinearity?

A

Variance Inflation Factor (VIF) is used to quantify/test for multicollinearity issues.
A VIF exists for each independent variable.
Basically, we regress each predictor against the remaining predictors and from that regression, we get R^2.
Using that R^2, VIF = 1/(1 - R^2).
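
A sketch of that idea using statsmodels' variance_inflation_factor, with synthetic, deliberately collinear predictors for illustration:

```python
# VIF for each predictor (regress it on the others; VIF = 1/(1 - R^2)).
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(5)
x1 = rng.normal(size=200)
x2 = 0.9 * x1 + rng.normal(scale=0.3, size=200)    # x2 is highly correlated with x1
X = sm.add_constant(np.column_stack([x1, x2]))

vifs = [variance_inflation_factor(X, i) for i in range(1, X.shape[1])]  # skip the constant
print(vifs)   # values well above 1 flag potential multicollinearity
```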

36
Q

Imp points about VIF?

A

VIF = 1 is the best possible case and indicates there is no multicollinearity.
VIF from 1-5 is fine, from 5-10 requires further investigation and above 10 is a problem.

37
Q

How do we correct for multicollinearity?

A

A) Excluding one or more of the predictors
B) Using a different proxy for one of the predictors
C) Increasing the sample size