6. Regression Assumptions, Diagnostics and Influential Cases Flashcards
How many assumptions are there for multiple linear regression?
9 mathematical
+2 design
=11
What are the 2 design assumptions of multiple linear regression?
Independence (each participant only 1 score on each IV)
Interval Scale on IV and DV (or dichotomous IV)
What are the 9 mathematical assumptions of multiple linear regression
Normality (6 sub-assumptions)
No multicollinearity (3 ways to check)
Linearity
Normal distribution of residuals
Independent Residuals
Residuals unrelated to predictors
Homogeneity of Variance
What are the 6 tests of normality?
Symmetry
Modality
Skew
Kurtosis
Outliers
Shapiro-Wilk
What do you check with the assumption of symmetry?
Mean = Median = Mode
What do you check for in modality?
Only 1 most frequently occurring score (Unimodal not multi/bimodal)
What do you check for in skew and kurtosis?
Skew / SE Skew (and Kurtosis / SE Kurtosis); these are z-scores, so values beyond ±1.96 indicate significant skew or kurtosis
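As a sketch, the skew-to-SE ratio above can be computed by hand. This assumes the common large-sample approximation SE(skew) ≈ √(6/N); SPSS uses a slightly more exact formula, so treat this as illustrative only (the skew value and N below are hypothetical):

```python
import math

def skew_z(skew: float, n: int) -> float:
    """z-score for skewness: skew divided by its standard error,
    using the large-sample approximation SE(skew) = sqrt(6 / N)."""
    return skew / math.sqrt(6 / n)

# hypothetical sample: skew = 0.8 with N = 150
z = skew_z(0.8, 150)
print(abs(z) > 1.96)  # significant skew at the .05 level
```

The same logic applies to kurtosis, using the SE of kurtosis in the denominator.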
What constitutes an outlier?
95% of cases should have standardized scores within ±1.96
No more than 1% of cases should be beyond ±2.58
If there are more, they are outliers
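A minimal sketch of this outlier screen on a list of standardized (z) scores; the scores below are made up for illustration:

```python
def outlier_check(z_scores):
    """Proportion of standardized scores inside +/-1.96 and beyond +/-2.58.
    Roughly 95% should fall inside the first bound; only a small
    fraction should exceed the second."""
    n = len(z_scores)
    within_196 = sum(abs(z) <= 1.96 for z in z_scores) / n
    beyond_258 = sum(abs(z) > 2.58 for z in z_scores) / n
    return within_196, beyond_258

# hypothetical standardized scores
scores = [0.1, -0.5, 1.2, -1.9, 0.3, 2.7, -0.8, 0.0, 1.5, -1.1]
within, beyond = outlier_check(scores)
```

Here `within` is 0.9 and `beyond` is 0.1, so the single score of 2.7 would be flagged for inspection.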
What do you check for in the Shapiro-Wilk statistic?
That it is not significant (p > .05)
What are the 3 checks for multicollinearity?
Pearson correlations between the IVs (very high correlations, e.g. r > .9, suggest a problem)
Tolerance (values below .1 indicate a problem)
VIF (values above 10 indicate a problem)
What does VIF stand for?
Variance inflation factor
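Since VIF is defined as 1 / (1 − R²), where R² comes from regressing one IV on the remaining IVs, the calculation is a one-liner; the R² values below are hypothetical:

```python
def vif(r_squared: float) -> float:
    """Variance inflation factor: 1 / (1 - R^2), where R^2 is from
    regressing one predictor on all the other predictors.
    Tolerance is the reciprocal, 1 / VIF."""
    return 1 / (1 - r_squared)

# hypothetical R^2 values from regressing each IV on the others
low = vif(0.5)   # 2.0 -- fine
high = vif(0.9)  # ~10 -- at the common rule-of-thumb danger threshold
```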
Where and what for do you look to check whether the residuals have a normal distribution?
In the P-P plot and histogram of the residuals:
Mean of residuals = 0
No skew (snaking) and no kurtosis (sag)
No outliers in the histogram
Why are the residual statistics so important?
Because if they aren't normally distributed then we can't say that 68% of cases will fall within ±1 RMSE of the regression line
How do we check linearity and why do we check it?
Using the Pearson correlation between each IV and the DV
because if the IV is not related to the DV then it can’t be a good predictor
How is the Independence of Residuals tested?
Using the Durbin-Watson
What does the Durbin Watson show?
The independence of residuals
When reading the Durbin-Watson, what are we looking for to meet our assumption of independent residuals?
Values between 1.5 and 2.5
The statistic actually ranges from 0 (strong positive autocorrelation) through 2 (no autocorrelation) to 4 (strong negative autocorrelation)
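The Durbin-Watson statistic itself is just the ratio of squared successive residual differences to squared residuals, so it can be sketched by hand; the residual series below are made up for illustration:

```python
def durbin_watson(residuals):
    """d = sum((e_t - e_{t-1})^2) / sum(e_t^2).
    d = 2 means no autocorrelation; values toward 0 indicate positive
    autocorrelation, values toward 4 indicate negative autocorrelation."""
    num = sum((residuals[t] - residuals[t - 1]) ** 2
              for t in range(1, len(residuals)))
    den = sum(e ** 2 for e in residuals)
    return num / den

# residuals that always keep the same sign -> d = 0 (strong positive)
d_pos = durbin_watson([1, 1, 1, 1])
# residuals that flip sign every step -> d pushed toward 4 (negative)
d_neg = durbin_watson([1, -1, 1, -1, 1, -1])
```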
What do we look at to test homogeneity of variance?
The scatterplot of standardized predicted value against the studentized residual
(don’t want any funnelling or patterns)
What is it called if there is funnelling (and an unequal distribution on either side of x= 0) where the assumption of homogeneity is not met?
Heteroscedasticity
What is evidence of homoscedasticity (i.e. the homogeneity assumption being met)?
No pattern or funnelling
Equal distribution on either side of x=0 (divide graph in half)
How is the ‘Residuals unrelated to predictors’ assumption checked?
By obtaining a Pearson correlation between each IV and the unstandardized residuals (RES_1)
What should the correlation between the predictors and the residuals be?
Approximately 0 and non-significant (p > .05)
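A minimal pure-Python sketch of this check: correlate an IV with the saved residuals. Both series below are hypothetical stand-ins for real IV scores and the saved RES_1 variable:

```python
import math

def pearson_r(x, y):
    """Plain Pearson correlation: covariance over the product of
    the (un-normalized) standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical IV scores and unstandardized residuals (RES_1)
iv = [2, 4, 6, 8]
res = [0.1, -0.2, 0.2, -0.1]
r = pearson_r(iv, res)  # should be near 0 if the assumption holds
```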
What should we do if the assumptions are violated?
Question the validity of the model and caution about the interpretations
What three violations of assumptions cause the most problems for linear regression?
Normality (especially on the DV)
Homogeneity of Variance
Presence of outliers
What are the 3 options if normality is violated?
Transformation: NO (if sig. skew)
Bootstrap: YES (less biased)
Outliers: check FIRST (check influence)
What are considered extreme cases in a data set?
> 2SD from the mean
Why are outliers a problem?
They affect the value of the estimated regression coefficients = biased model
Where are the problem cases located in SPSS output?
Casewise Diagnostics
What should you look at to determine the amount of influence the outliers are having?
Studentized residuals (Y − Ŷ, i.e. error)
Influential-case statistics
What does it mean if a case has a large residual?
It doesn’t fit the model well and should be checked as a possible outlier
What are the 3 types of residuals?
Unstandardized
Standardized
Studentized (most precise)
What are the 8 statistics that can be used to assess the influence of a particular case on a model?
Adjusted predicted value
Deleted residual and the studentized deleted residual
DFFit and standardized DFFit
Cook's distance
Leverage
Mahalanobis distances
DFBeta and standardized DFBeta
Covariance ratio
What is the rule for Adj Pred Value?
It should be approximately equal to the predicted value
What is the rule for the studentized deleted residual?
Within the range of -2 to 2
What is the rule for Mahalanobis Distance?
When:
N = 500: 25+ = bad
N = 100 and k = 3: 15+ = bad
N = 30 and k = 2: 11+ = bad
What is the rule for Cook’s distance?
1.0+ = bad
Close to 0 = good
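One standard formulation computes Cook's D for a case from its (internally) studentized residual and its leverage; the sketch below assumes that formulation, and the residual, leverage and k values are hypothetical:

```python
def cooks_distance(stud_resid: float, leverage: float, k: int) -> float:
    """Cook's D from studentized residual r and leverage h:
    D = (r^2 / (k + 1)) * (h / (1 - h)),
    where k + 1 is the number of estimated parameters
    (k slopes plus the intercept)."""
    p = k + 1
    return (stud_resid ** 2 / p) * (leverage / (1 - leverage))

# hypothetical case with a large residual and high leverage
d = cooks_distance(stud_resid=3.0, leverage=0.5, k=2)
print(d > 1.0)  # flagged as influential under the D > 1 rule
```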
What is the rule for Leverage values?
A value more than 2× the average leverage value indicates a problem case
Average leverage value = (k + 1) / n
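The 2×-average rule above is easy to sketch; the k, n and leverage values below are hypothetical:

```python
def average_leverage(k: int, n: int) -> float:
    """Average leverage for k predictors and n cases: (k + 1) / n."""
    return (k + 1) / n

def high_leverage(h: float, k: int, n: int) -> bool:
    """Flag a case whose leverage exceeds twice the average leverage."""
    return h > 2 * average_leverage(k, n)

# hypothetical model: 3 predictors, 100 cases -> average leverage 0.04
flag_a = high_leverage(0.10, k=3, n=100)  # 0.10 > 0.08 -> flagged
flag_b = high_leverage(0.05, k=3, n=100)  # 0.05 <= 0.08 -> not flagged
```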
What is the rule for the covariance ratio?
If above the upper end of the range (> 1 + [3(k + 1)/n]): DON'T DELETE the case
If below the lower end of the range (< 1 − [3(k + 1)/n]): deleting the case may improve the model
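The covariance-ratio bounds are just 1 ± 3(k + 1)/n; a sketch computing them for a hypothetical model (k = 2 predictors, n = 30 cases):

```python
def cvr_bounds(k: int, n: int):
    """Lower and upper rule-of-thumb bounds for the covariance ratio:
    1 - 3(k + 1)/n and 1 + 3(k + 1)/n."""
    half_width = 3 * (k + 1) / n
    return 1 - half_width, 1 + half_width

# hypothetical model with 2 predictors and 30 cases
lower, upper = cvr_bounds(k=2, n=30)  # cases with CVR outside (0.7, 1.3)
                                      # deserve a closer look
```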
What is the rule for DFFit?
It depends on the range of the outcome scale (e.g. 0-1 vs 1-100): judge DFFit relative to that range, and the closer to 0 the better (on a 0-1 scale a value of 0.5 is terrible, but on a 1-100 scale it's nothing)
What is the rule for the SD DFFit?
Should be between -2 and 2
What is the rule for SD Df Beta?
If beyond ±2 = bad
What should we do if we remove the outliers?
Run the regression again and compare the new and old
What should happen if the outliers have been correctly removed?
The RMSE should shrink
The R² should get larger
Assumptions should be closer to being met