Hierarchical Multiple Regression Flashcards

1
Q

rules for hierarchical MR/when to use

A

provides the best prediction of the criterion variable from a set of predictors

predicted values fall on the regression line passing through the scatterplot

the regression line minimises the residuals (the differences between actual and predicted scores)

2
Q

what does the regression model do?

A

statistically predicts the outcome based on the correlations between the predictors and the outcome

3
Q

regression model is a combination of…

A

coefficients (weights) for each predictor, chosen to maximise the variance explained by the predictors

4
Q

regression equation

A

Ŷ = b0 + b1X1 + b2X2 + … + error

Ŷ is the predicted criterion/outcome score
b0 is the intercept
b1 is the coefficient associated with predictor X1
X1 is the first predictor
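As a worked illustration of the equation above, with hypothetical values for the intercept, coefficients, and a single case's predictor scores:

```python
# Hypothetical illustration of Ŷ = b0 + b1X1 + b2X2 + …
b0 = 2.0          # intercept (hypothetical value)
b = [0.5, 1.5]    # coefficients for two predictors (hypothetical)
x = [4.0, 2.0]    # one case's scores on the two predictors

# Predicted score: intercept plus each coefficient times its predictor value
y_hat = b0 + sum(bi * xi for bi, xi in zip(b, x))
print(y_hat)  # 2.0 + 0.5*4.0 + 1.5*2.0 = 7.0
```

The error term is the gap between this predicted score and the case's actual score; it is not part of the prediction itself.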

5
Q

What does multicollinearity do to beta errors and R and what does this mean?

A

increases the standard errors of the beta coefficients, limits the size of R, and makes it difficult to determine the importance of individual predictors

6
Q

addressing issues with multicollinearity

A

check for errors in data entry and coding

reduce the number of predictor variables

delete predictors highly related to other predictors

7
Q

what is an outlier

A

a case with a value substantially different from the rest of the data

8
Q

impact of outliers

A

outliers can have a large impact on the results of a regression analysis

9
Q

how are outliers detected?

A

scatterplot and residual plots

10
Q

identifying outliers on SPSS tables

A

residual statistics table (standardised residuals, Cook's distance)

11
Q

what is a standardised residual

A

a residual rescaled by dividing it by the standard deviation of the residuals

12
Q

what do standardised residuals do?

A

help identify cases where the predicted score is quite different from the actual score

13
Q

identifying standardised residuals

A

standardised residuals falling outside ±3.3 indicate outliers
(Tabachnick & Fidell, 2007)
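A minimal sketch of the rescaling described above, using hypothetical data (SPSS produces the same statistic in its residual statistics table). The last case is constructed to sit far from the line:

```python
import numpy as np

# Toy data (hypothetical): one predictor, one criterion
x = np.array([1., 2., 3., 4., 5., 6., 7., 8.])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1, 5.9, 7.2, 30.0])  # last case is an outlier

# Fit a simple least-squares line: slope b1, intercept b0
b1, b0 = np.polyfit(x, y, 1)
residuals = y - (b0 + b1 * x)

# Standardised residual = residual / standard deviation of the residuals;
# values beyond ±3.3 would be flagged under Tabachnick & Fidell's cut-off
std_resid = residuals / residuals.std(ddof=1)
print(std_resid.round(2))
```

Note that a single extreme case also pulls the fitted line toward itself, which is why SPSS reports Cook's distance alongside the standardised residuals.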

14
Q

what does Cook's distance show

A

the influence that deleting a case would have on the model as a whole

15
Q

value for Cook's distance and meaning

A

values > 1 are large, suggesting a potentially influential case
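A sketch of how Cook's distance can be computed by hand (toy data; SPSS reports it in the residual statistics table). The last case is constructed to be influential, so its value exceeds the > 1 cut-off:

```python
import numpy as np

# Hypothetical data: the last case has high leverage and sits off the trend
x = np.array([1., 2., 3., 4., 5., 6., 7., 20.])
y = np.array([1.2, 1.9, 3.1, 4.2, 4.8, 6.1, 7.0, 2.0])
X = np.column_stack([np.ones_like(x), x])      # design matrix with intercept

# Ordinary least-squares fit and residuals
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta

# Hat (leverage) matrix and mean squared error
H = X @ np.linalg.inv(X.T @ X) @ X.T
h = np.diag(H)
p = X.shape[1]                                 # number of estimated parameters
mse = (e @ e) / (len(y) - p)

# Cook's distance: how much deleting each case would change the fitted model
cooks_d = (e**2 / (p * mse)) * (h / (1 - h)**2)
print(cooks_d.round(3))
```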

16
Q

brief explanation of how hierarchical MR is done in SPSS

A

sequential
predictors are entered into the equation in a specified order (blocks)
each block is assessed on what it adds to the prediction
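The block-wise entry above can be sketched outside SPSS. A minimal Python illustration with simulated data (all variable names hypothetical): block 1 enters a control variable, block 2 adds the predictor of interest, and the gain in R² is what the second block adds to the prediction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
control = rng.normal(size=n)                  # block 1: control variable
predictor = rng.normal(size=n)                # block 2: predictor of interest
criterion = 0.5 * control + 0.8 * predictor + rng.normal(size=n)

def r_squared(X, y):
    """R^2 from an OLS fit, with an intercept column added."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_block1 = r_squared(control[:, None], criterion)                       # model 1
r2_block2 = r_squared(np.column_stack([control, predictor]), criterion)  # model 2
r2_change = r2_block2 - r2_block1             # what block 2 adds to prediction
print(round(r2_block1, 3), round(r2_block2, 3), round(r2_change, 3))
```

Because model 2 contains everything in model 1, its R² can never be lower; R² change isolates the contribution of the newly entered block.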

17
Q

steps in analysing hierarchical MR

A

check assumptions are met
assess model overall
evaluate predictor variables

formally report results

18
Q

check assumptions are met step

A

sample size - N > 50 + 8m (where m = number of predictors)
multicollinearity - tolerance > .1, VIF < 10
normality - histogram of residuals
linearity - normal P-P plot
homoscedasticity - equal variance of residuals

outliers - residual statistics table: standardised residuals within ±3.3, Cook's distance < 1
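Some of these checks can be reproduced by hand. A Python sketch (simulated predictors, hypothetical names) of the sample-size rule and the tolerance/VIF calculation, where each predictor's VIF is 1/(1 − R²) from regressing it on the other predictors, and tolerance = 1/VIF:

```python
import numpy as np

# Simulated predictor matrix; x2 is deliberately correlated with x1
rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = 0.9 * x1 + 0.4 * rng.normal(size=200)
x3 = rng.normal(size=200)
X = np.column_stack([x1, x2, x3])

n, m = X.shape
print("sample size OK:", n > 50 + 8 * m)       # N > 50 + 8m rule of thumb

def vif(X, j):
    """VIF for predictor j: regress it on the other predictors."""
    others = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
    beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
    resid = X[:, j] - others @ beta
    yj = X[:, j]
    r2 = 1 - (resid @ resid) / ((yj - yj.mean()) @ (yj - yj.mean()))
    return 1 / (1 - r2)

vifs = [vif(X, j) for j in range(m)]
tolerances = [1 / v for v in vifs]
print([round(v, 2) for v in vifs])
```

The correlated pair (x1, x2) shows inflated VIFs relative to the independent x3, which is exactly what the SPSS collinearity statistics flag.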

19
Q

assess model overall step

A

variables entered/removed table - shows which predictors are included in each model
model summary - adjusted R2 = proportion of variance explained
R2 change - additional variance explained when each new block is entered
ANOVA table - F(df regression, df residual) = F value, p < .001
the F value is read from the F column; the degrees of freedom in brackets come from the regression and residual rows

20
Q

evaluating predictors step

A

coefficients table
regression equations

21
Q

formally reporting the results of hierarchical MR

A

A hierarchical MR was used to assess the ability of the predictor (name) to predict the criterion (name) after controlling for (other predictor variables).

Preliminary analyses were conducted to ensure no violation of the assumptions of normality, linearity, multicollinearity, homoscedasticity and sample size (state whether assumptions were met or not).

(name predictor variables) were entered in model 1, explaining (adj. R2)% of the variance in (criterion variable) (adj. R2 = ), F( , ) = , p < .001 (state which variables were/were not significant predictors of the criterion variable, p < .001).

After entry of (name another predictor variable) in model 2, the model explained (more/less) variance,
F( , ) = , p < .001.
The total variance in (criterion) explained by the model as a whole was (adj. R2)% (adj. R2 = ) (state significant/non-significant predictors in model 2, p < .001).

Then interpret the results - explain what they mean in words.