Week 7 - SMR/HMR Flashcards

1
Q

• List the two types of tests in multiple regression (MR) 


A

Standard

Hierarchical

2
Q

• Explain the different questions that are addressed in bivariate regression (x1)

A

o Does the predictor account for significant variance in the DV?

3
Q

• Explain the different questions that are addressed in multiple regression (x5)

A

Extending on bivariate regression
Do predictors jointly account for significant variance in the DV?
• F test of Model R2
For each IV: does it uniquely account for variance in the DV?
• t-test of β for each IV
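A minimal sketch of both tests on hypothetical data (statsmodels shown as one possible tool; names and data are illustrative):

```python
# Hypothetical illustration: the model F-test and per-predictor t-tests in MR
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=100), rng.normal(size=100)
y = 0.5 * x1 + 0.3 * x2 + rng.normal(size=100)

X = sm.add_constant(np.column_stack([x1, x2]))  # intercept + 2 predictors
fit = sm.OLS(y, X).fit()

print(fit.rsquared, fit.fvalue, fit.f_pvalue)  # F test of Model R2 (joint test)
print(fit.tvalues, fit.pvalues)                # t-test of each coefficient (unique contribution)
```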

4
Q

• Explain (in words) what R2 represents in each of the following cases: 

o Bivariate regression

o Multiple regression with uncorrelated predictors
o Multiple regression with correlated predictors
(x2)

A

• Strength of overall relationship between criterion and predictor/s
Variance accounted for by the IV/set of IVs

5
Q

What is r2 in bivariate regression? (x3)

A

coefficient of determination -
Proportion of variance in one variable that is explained by variance in another
(like eta2 in ANOVA - SSeffect/SStotal)
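A hypothetical worked example: if r = .50 between two variables, then r2 = .25, i.e. 25% of the variance in one is explained by variance in the other.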

6
Q

What is error variance in bivariate regression? (x2)

A

1 - r2

SSresidual/SSy
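Continuing the hypothetical example above: with r2 = .25, error variance = 1 - .25 = .75, so 75% of the DV variance is left unexplained.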

7
Q

• Conceptually speaking (i.e. in words), how is shared variance determined in:
o Multiple regression with uncorrelated predictors
 (x1)

A

There isn’t any.

So R2 is just r2 for each IV/predictor added together
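A hypothetical worked example: if IV1 has r2 = .09 and IV2 has r2 = .16 with the DV, and the two IVs are uncorrelated, R2 = .09 + .16 = .25.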

8
Q

• Conceptually speaking (i.e. in words), how is shared variance determined in:
o Multiple regression with correlated predictors (x3)

A

It’s the overlap of the predictors with each other
So you have to account for it, so it doesn’t get counted twice when calculating R2 (which measures non-redundant variance)
(can’t just add, as you do for uncorrelated)

9
Q

• Explain the difference between a zero-order (Pearson’s) correlation, a partial correlation and a semi-partial correlation 
(x1, x1, x1)

A

Zero-order - simple correlation between one predictor and the criterion; only 1 source of variance in the DV is considered, so no shared variance is removed

Partial: relationship between predictor 1 and the criterion, with the variance shared with predictor 2 partialled out of BOTH the DV and the IV

Semi-partial: relationship between predictor 1 and the criterion, with the variance predictor 1 shares with predictor 2 partialled out of predictor 1 only (not the DV)
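A minimal numpy sketch (hypothetical data) of all three - the only difference between partial and semi-partial is whether predictor 2’s variance is removed from both variables or from predictor 1 only:

```python
# Zero-order, semi-partial and partial correlations via residualising
import numpy as np

rng = np.random.default_rng(1)
x2 = rng.normal(size=200)
x1 = 0.6 * x2 + rng.normal(size=200)            # predictors correlated on purpose
y = 0.5 * x1 + 0.4 * x2 + rng.normal(size=200)

def residualise(a, b):
    """Residuals of a after removing the linear effect of b."""
    design = np.column_stack([np.ones_like(b), b])
    coefs, *_ = np.linalg.lstsq(design, a, rcond=None)
    return a - design @ coefs

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

zero_order = corr(y, x1)                                 # x2 ignored entirely
semi_partial = corr(y, residualise(x1, x2))              # x2 removed from x1 only
partial = corr(residualise(y, x2), residualise(x1, x2))  # x2 removed from BOTH
print(zero_order, semi_partial, partial)
```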

10
Q

• In a Venn diagram representing one criterion and three predictors, indicate how shared variance between predictors 1 and 2 would be represented, and how the unique variance of predictor 3 would be represented. 
(x2)

A

1 and 2 would overlap with each other, and the DV

While 3 would only overlap the DV

11
Q

• List the 4 key differences between the structure of ANOVA tests and MR tests 


A

No test of overall model in ANOVA - auto in MR
Main effect of IV in ANOVA disregards effect of other variables - MR tests unique variance in each (controls for other variables)
Interactions auto in ANOVA - hard in MR (need MMR)
ANOVA = Fs, effect sizes for IVs/interaction, plus follow-ups
*MR = R2 F-test, β t-tests, plus follow-ups

12
Q

What is semi-partial correlation squared (sr2)? (x3)

A

Proportion of variance in the DV uniquely accounted for by IV1, out of the total DV variance

  • A / (A + B + C + D)
  • (where A is IV1’s unique variance, B is unaccounted-for DV variance, and C and D are variance shared between IV2 and the DV)
13
Q

What is the partial correlation squared (pr2)? (x3)

A

Proportion of residual variability in DV accounted for by IV1, after IV2 variance removed

  • A / (A + B)
  • (where A is unique IV1 variance, B is unaccounted-for DV variance)
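A hypothetical worked example contrasting the two: if the Venn areas are A = .10, B = .50, C = .15 and D = .25 (so A + B + C + D = 1.00 of the DV variance), then sr2 = .10/1.00 = .10, while pr2 = .10/(.10 + .50) ≈ .17.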
14
Q

• Explain the difference between the semi-partial correlation squared (spr2/sr2) and the partial correlation squared (pr2)


A

Partial is like partial eta2 - the leftover variability in the DV that the IV accounts for
Semi-partial is like eta2 - the bit of total DV variability that is uniquely due to the IV
*the go-to effect size for regression

15
Q

• Identify the linear model for a multiple regression analysis with 2 predictors (x2), and explain what b1, b2, and a represent


A

Ŷ = b1X1 + b2X2 + a
Ŷ - predicted score is still a function of slopes, X scores and constant
b1 - slope of plane relative to x1-axis
b2 - slope of plane relative to x2-axis
a - the constant (y-intercept, when x1 and x2 = 0)
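A hypothetical worked example: with b1 = 2, b2 = -1 and a = 5, a case scoring X1 = 3 and X2 = 4 has Ŷ = 2(3) + (-1)(4) + 5 = 7.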

16
Q

• Explain why means and standard deviations are not as critical for interpreting MR results as they are for t-tests and ANOVAs 


A

Because you’re not interpreting/comparing means, but the direction and strength of the relationships between variables
Although the SD still tells us the change in the DV for every SD change in its IV

17
Q

• Explain what Cronbach’s is (x2), and what values of this index we would ideally like to have 
(x2)

A

index of internal consistency (reliability) for a continuous scale
*how well items “hang together”
Use scales with high reliability ( > .70) if available – less error variance
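A minimal sketch of the usual alpha formula on hypothetical item data (alpha = k/(k-1) × (1 - sum of item variances / variance of the total score)):

```python
# Cronbach's alpha from a respondents-by-items matrix (hypothetical data)
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                            # number of items
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
latent = rng.normal(size=150)
items = latent[:, None] + rng.normal(scale=0.8, size=(150, 5))  # 5 items that "hang together"
print(cronbach_alpha(items))  # want > .70
```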

18
Q

• Define validities (x1), and identify their ideal levels (i.e. high or low) 
(x1)

A

Relationship between IV and DV

High - show a strong association, yay!

19
Q

• Define collinearities (x1), and identify their ideal levels (i.e. high or low) 
(x1)

A

Relationships between IVs

Low, because the higher they are, the more chance of redundant IVs

20
Q

• Explain the principle of parsimony (x2)

A

The simplest explanation for the data is best

i.e. predictors each explain different variance in the model

21
Q

How does parsimony relate to validities and collinearities 
(x2)

A

More parsimonious when high validity (predictor/criterion relationships)
And low collinearity (predictor relationships)

22
Q

• Define R (x1), R2 (x1), Radjusted, and R2adjusted (x1) in the context of multiple regression

A

Multiple correlation coefficient (R) is bivariate correlation between criterion (Y) and best linear combination of the predictors (Ŷ)
R2 - square R to find variance in Y accounted for by composite (Ŷ)
The adjusted versions attempt to correct positive bias by adjusting for sample size (makes little difference once N is 30+)
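A hypothetical worked example of one common adjustment, R2adj = 1 - (1 - R2)(N - 1)/(N - p - 1): with R2 = .25, N = 30 and p = 2 predictors, R2adj = 1 - (.75)(29/27) ≈ .19.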

23
Q

• Identify the test used to assess the overall variance explained (x1, plus explain x1), and explain what a significant result means (x1)

A

F-test: MSregression/MSerror
*variance accounted for (R2/p) divided by variance not accounted for ((1 - R2)/(N - p - 1))
The relationship between predictors (as a group) and criterion is different from zero
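A hypothetical worked example: with R2 = .25, p = 2 predictors and N = 30, F = (.25/2) / (.75/27) = .125/.0278 = 4.5, on (2, 27) degrees of freedom.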

24
Q

• If the sr values for the predictors are known, explain how to work out the unique variance (x1) explained by each predictor and the variance shared between all predictors (x1)

A

Square each semi-partial correlation (sr) to get that predictor’s unique variance (sr2)

Shared variance = R2 - sum of all sr2
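A hypothetical worked example: with sr values of .30 and .20, the unique variances are sr2 = .09 and .04; if model R2 = .20, shared variance = .20 - (.09 + .04) = .07.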

25
Q

• Explain the difference between a zero-order coefficient, a first-order coefficient, and a second-order coefficient (x3)

A

Zero-order - doesn’t take other predictors into account
First-order - 1 other taken into account
Second-order - 2 accounted for

26
Q

• Identify the type of coefficient that SPSS provides in its output 
(x2)

A

Highest order possible -

Auto control of all other variables

27
Q

• Identify the test used to assess predictor importance, and explain what a significant result means


A

Use a t-test - beta divided by its standard error

That the variable contributes significantly to prediction of DV, independently of other predictors

28
Q

• Define a partial regression coefficient and its symbol/notation (x1, x1)
But… (x2)

A

Importance of the predictor, after correlation with other predictors has been adjusted for
b
Can’t use relative magnitude of b - scale-bound 
*Importance of given b depends on unit/variability of measure

29
Q

• Define a standardised regression coefficient and its symbol (x2, x1)

A

Rough estimate of relative contribution of predictors
*b multiplied by the SD of the IV, divided by the SD of the DV
β (beta)
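A hypothetical worked example of the conversion: with b = 2, SD of the IV = 1.5 and SD of the DV = 6, β = 2 × 1.5/6 = .50.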

30
Q

• Explain why standardised coefficients can be more useful than unstandardised coefficients (x1, but remember… x1) 


A

Can compare predictors within the study

but not across studies - different sample SDs

31
Q

• Explain the key difference between standard multiple regression (SMR) and hierarchical multiple regression (HMR) 
(x2)

A

SMR - simultaneous entry of predictors, shared variance never ‘owned’
HMR - sequential, shared variance goes to previous block

32
Q

• Explain how model R2 and b’s are assessed in SMR (x2), and at each step of HMR 
(x2)

A

SMR - b’s are evaluated for contribution beyond that of all others, R2 assessed in 1 step
HMR - b evaluated for unique contribution, controlling for other IVs in current and earlier steps, but not later ones
*R2 takes more than 1 step

33
Q

• In HMR, explain the difference (if any) between R2 and R2change: 

o at Step 1 of the analysis (i.e. Model 1)
o at Step 2 of the analysis (i.e. Model 2)
(x2)

A

R2 and R2change are the same at step 1 - the block and the total account for the same variance
At step 2, R2 becomes the previous step’s R2 plus the R2 change between steps 1 and 2
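A minimal sketch (hypothetical data, statsmodels as one possible tool) of how R2 and R2change line up across two blocks:

```python
# HMR by hand: R2 at each step, and the R2 change credited to block 2
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
control = rng.normal(size=120)                    # block 1 (e.g. a control variable)
predictor = 0.4 * control + rng.normal(size=120)  # block 2
y = 0.5 * control + 0.3 * predictor + rng.normal(size=120)

step1 = sm.OLS(y, sm.add_constant(control)).fit()
step2 = sm.OLS(y, sm.add_constant(np.column_stack([control, predictor]))).fit()

r2_change_1 = step1.rsquared                   # at step 1, R2 change = R2
r2_change_2 = step2.rsquared - step1.rsquared  # increment owned by block 2
print(step1.rsquared, step2.rsquared, r2_change_2)
```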

34
Q

• Identify the two rationales for the order in which predictors may be entered in HMR 
(plus explain/eg)

A

Partial out effect of control variable
• Like ANCOVA: predictor(s) at step 1 like covariate
Build sequential model according to theory
• e.g., personality measure entered at step 1, attitudinal measure entered at step 2

35
Q

• Explain the link between results at Step 2 of HMR (e.g. where predictor A was entered at Step 1, and predictors B and C were entered at Step 2), and the results of an SMR (e.g. which includes predictors A, B, and C in the model)
(x1)

A

At Step 2, R and R2 are the same as for the full SMR

36
Q

• Explain the similarities and differences in the structure of SMR and HMR (with 2 steps) tests 


A

R2:
*At step 1, model fit of IVs added so far
*At step 2, same F/p-value as SMR
Coefficients:
*Step 1, beta/p-value of unique contribution at that step
*Step 2: same as you’d get in SMR (so step 1 beta/p-value drops)

37
Q

• Explain what statistics need to be reported in SMR and HMR tests 
(x4, x5)

A
SMR:
o	Table of M, SD, r  
o	Model R2 with F test
o	Each IV’s β with p values
o	Any relevant follow-ups

HMR:
o Table of M, SD, r
o Each block R2 change with F change test
o IVs’ βs with p-values from each block as entered
o Final model R2 with F test
o Any relevant follow-ups

38
Q

• Explain what multicollinearity and singularity are, and why they are a problem 
(x1, x1, x2)

A

Multicollinearity is when predictors are highly correlated
Singularity is when they’re explaining the same thing (completely redundant)
Instability of regression coefficients:
*More Type 1 AND Type 2 error, and results can change by moving single scores around
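A minimal sketch (hypothetical data) of screening for this with variance inflation factors, one common diagnostic:

```python
# Nearly redundant predictors produce large VIFs
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(4)
x1 = rng.normal(size=100)
x2 = 0.95 * x1 + rng.normal(scale=0.2, size=100)  # nearly singular with x1

X = sm.add_constant(np.column_stack([x1, x2]))
vifs = [variance_inflation_factor(X, i) for i in range(1, X.shape[1])]
print(vifs)  # values above ~10 (a common rule of thumb) flag multicollinearity
```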

39
Q

• List the main assumptions re distribution of residuals in multiple regression analyses 
(x1, x2, x2, x2)

A

Conditional Y (y-hat) values normally distributed around regression line
Homoscedasticity: variance of Y values are constant across different values of Ŷ (homogeneity of variance)
• Similar DV variance at all levels of continuous IVs
No linear relationship between Ŷ and errors of prediction
• Error the same across different y-hat values
Independence of errors
• Can’t tell anything about error at one point from error on another – different people, different scores
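A minimal sketch (hypothetical data) of the usual eyeball check for these assumptions, a residuals-vs-predicted plot:

```python
# Residuals vs Y-hat: want an even, patternless band around zero
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt

rng = np.random.default_rng(5)
x = rng.normal(size=100)
y = 0.5 * x + rng.normal(size=100)
fit = sm.OLS(y, sm.add_constant(x)).fit()

plt.scatter(fit.fittedvalues, fit.resid)
plt.axhline(0, linestyle="--")
plt.xlabel("Predicted (Y-hat)")
plt.ylabel("Residual")
plt.show()  # a fan shape suggests heteroscedasticity; curvature suggests nonlinearity
```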

40
Q

• List the main assumptions re predictor/criterion scales/scores in multiple regression analyses 
(x1, x1, x2, x1)

A

Variables are normally distributed
Linear relationship between predictors and criterion
Predictors are not singular (extremely highly correlated)
• If predictors are highly related, they form a singularity – IVs collapsing in on each other
Measured using a continuous scale (interval or ratio)

41
Q

What is the effect of correlated IVs on betas in MR?

A

bs and βs (coefficients) measure the unique contribution of each IV to variance left in the DV after controlling for other IVs (i.e., they test the partial IV : DV relationship)
When IVs are correlated, the effects of each IV depend heavily on what other predictors are in the model.
Variables can pop in/out of significance, even change direction (+ vs – βs) when you add other variables.
*betas can be a lot stronger than the r, weaker, or even in the opposite direction

42
Q

Why do we have to plan carefully (based on theory) the order of entry in MR? (think effect of correlated IVs… ) (x2)

A

If you just mess around, the instability associated with correlated IVs can produce both Type 1 and Type 2 error
And you will not know which.

43
Q

What is homoscedasticity? (x2)

What is the effect of violating this assumption of regression? (x1)

A

Homogeneity of variance of residuals
*residuals at each level of predictors should have same variance
Violating it invalidates significance tests