Multiple Regression Flashcards

1
Q

Prediction

A

Not an experiment
Uses survey data
Looks at relationships between things
Built on correlation analysis

Regressions always have at least 1 predictor (simple linear regression) or more (multiple regression)

2
Q

Variable terms

A

Criterion = y
Analogous to the DV (how it is labelled in SPSS)

Predictor(s) = x
Analogous to the IV(s)

3
Q

One predictor

A

Y = mX + c (equivalently Y = bX + a) gives the line of best fit
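
Nothing in the deck specifies software, but as a minimal sketch of fitting that line (made-up data, numpy's polyfit standing in for SPSS):

```python
import numpy as np

# Made-up scores, purely illustrative
x = np.array([1, 2, 3, 4, 5], dtype=float)   # predictor
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])     # criterion

# Degree-1 polyfit returns the slope (b/m) and intercept (a/c)
b, a = np.polyfit(x, y, 1)
print(f"line of best fit: Y = {b:.2f}X + {a:.2f}")
```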

4
Q

Multiple predictors

A

Two predictors: Y = b1X1 + b2X2 + a
Three predictors: Y = b1X1 + b2X2 + b3X3 + a

(b1X1) = predictor 1 term
(b2X2) = predictor 2 term
b shows the strength of the relationship
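
A minimal sketch of fitting the two-predictor version (hypothetical data; numpy least squares rather than SPSS):

```python
import numpy as np

# Hypothetical data for two predictors
X1 = np.array([1, 2, 3, 4, 5], dtype=float)
X2 = np.array([2, 1, 4, 3, 5], dtype=float)
Y  = np.array([4.0, 4.5, 8.1, 8.0, 11.2])

# Design matrix: [X1, X2, 1] so the last coefficient is the intercept a
A = np.column_stack([X1, X2, np.ones_like(X1)])

# Least-squares solution of Y = b1*X1 + b2*X2 + a
(b1, b2, a), *_ = np.linalg.lstsq(A, Y, rcond=None)
print(f"Y = {b1:.2f}*X1 + {b2:.2f}*X2 + {a:.2f}")
```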

5
Q

Regression

A

Equation/formula/model

In correlation/simple regression, looking at the association between the outcome measure & 1 predictor

In multiple regression, looking at association between criterion & set of predictors

Model is full set of predictors & how you put them together to try & predict criterion (outcome measure)

6
Q

The Prediction

A

R = correlation strength (-1 to 1, with 1 being a perfect positive relationship)
The same as ‘r’ in simple correlation
Adjusted R^2 = variance explained

r/r^2 = correlation or simple regression
R/R^2 = multiple regression

7
Q

Covariance

A

Covariance = how much 2 variables vary jointly
Also affected by the SDs
Tells you how much overlapping variance there is between the 2 factors
The SDs tell you how much variance there is for each individual factor

Divide the covariance by the SDs to get r/R
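
A minimal sketch of that last step (made-up data): dividing the covariance by the product of the two SDs recovers Pearson's r:

```python
import numpy as np

# Made-up data, purely illustrative
x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([2.0, 4.1, 5.9, 8.2, 9.8])

cov = np.cov(x, y)[0, 1]                              # joint variability
r = cov / (np.std(x, ddof=1) * np.std(y, ddof=1))     # scale out each SD

print(round(r, 4), round(np.corrcoef(x, y)[0, 1], 4))  # the two values agree
```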

8
Q

r^2/R^2

A

r/R × r/R

Gives you the proportion of variance in the criterion variable that is explained by the regression

9
Q

Adjusted R^2

A

Takes account of sampling bias (i.e. sample size; offsets the over-optimism of a very small sample)
Controls for the effects of sample size

Tells you what proportion of the variance in the criterion variable you can predict (explain) with your predictor or model
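
The card doesn't give the formula, but the standard adjustment is 1 − (1 − R²)(n − 1)/(n − k − 1); a sketch:

```python
def adjusted_r2(r2: float, n: int, k: int) -> float:
    """Shrinks R^2 more when the sample (n) is small or predictors (k) are many."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# R^2 = .40 looks less impressive once you know n was only 20 with 3 predictors
print(adjusted_r2(0.40, n=20, k=3))   # ~0.29
```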

10
Q

Use of regression

A

Good for things that can’t be studied experimentally, e.g. variables that are hard to manipulate or unethical to manipulate

A stats test (an ANOVA F-test) is used to see if the prediction is better than chance
Reported like a normal ANOVA; if significant, the prediction is better than chance
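
The model F that SPSS reports relates to R² by a standard identity (not stated on the card); a sketch with n participants and k predictors:

```python
def model_f(r2: float, n: int, k: int) -> float:
    """F-test of whether the regression predicts better than chance."""
    return (r2 / k) / ((1 - r2) / (n - k - 1))

# Degrees of freedom for the write-up's F(a, b): a = k, b = n - k - 1
print(model_f(0.40, n=63, k=2))   # F(2, 60) = 20.0
```
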
11
Q

Beta

A
B = strength of the relationship between an x & the y value
Beta = normalised/standardised version of B; shows effect size. As beta gets bigger, t gets bigger (t does the same job as beta but is less important)

Predictors need to be standardised so they can be compared to one another; beta does this

B’s can go below -1 & above 1, but betas can’t

Beta shows the most useful predictors (the ones that explain the most variance in the criterion)
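
Beta follows from B by rescaling with the two SDs (a standard relationship; the numbers below are hypothetical):

```python
def standardise(b: float, sd_x: float, sd_y: float) -> float:
    """Convert an unstandardised B into a standardised beta."""
    return b * sd_x / sd_y

# Hypothetical coefficient: B = 2.5 with predictor SD 1.2 and criterion SD 6.0
print(standardise(2.5, sd_x=1.2, sd_y=6.0))   # beta = 0.5
```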

12
Q

How to write it up

A

A standard multiple regression was conducted with {DV} as the criterion variable & {IV’s} as the predictor variables

The model accounted for {R^2}% (adjusted) of the variance

A significant model emerged (F(a,b)=c, p=d).

Significant contributions were made by _____ (predictor _), beta=a, t=b, p=c, and ___ (predictor _), beta=a, t=b, p=c.
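
Purely to show how the blanks get filled in, a sketch where every number and variable name is made up:

```python
# Made-up results for illustration only
dv, ivs = "exam score", "revision hours & sleep quality"
r2_adj, df1, df2, f_val = 29, 2, 60, 20.00

print(f"A standard multiple regression was conducted with {dv} as the criterion "
      f"variable & {ivs} as the predictor variables. The model accounted for "
      f"{r2_adj}% (adjusted) of the variance. A significant model emerged "
      f"(F({df1},{df2}) = {f_val:.2f}, p < .001).")
```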

13
Q

General notes

A

Only report p values to 3 dp, everything else to 2 dp

Italicise all stats letters in Latin alphabet

Don’t italicise Greek letters e.g. beta

14
Q

Assumptions

A

1) More predictors = more participants needed (at least 15 participants per predictor)
2) Do the predictors correlate with the DV?
3) Do the IV’s correlate too much (> 0.8) with each other? (multicollinearity; see the check sketched after this list)
4) Do you have an unequal spread of error? (heteroscedasticity; you want similar residuals throughout, similar to homogeneity of variance)
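
As flagged in point 3, a minimal sketch of checking pairwise predictor correlations against the 0.8 rule of thumb (made-up data):

```python
import numpy as np

# Made-up predictor matrix: rows = participants, columns = predictors
X = np.array([[1.0, 2.0, 1.5],
              [2.0, 2.1, 3.0],
              [3.0, 4.2, 2.5],
              [4.0, 3.9, 5.0],
              [5.0, 6.1, 4.5]])

corr = np.corrcoef(X, rowvar=False)   # pairwise correlations between predictors

k = corr.shape[0]
for i in range(k):
    for j in range(i + 1, k):
        if abs(corr[i, j]) > 0.8:     # the card's rule of thumb
            print(f"predictors {i} & {j}: r = {corr[i, j]:.2f} (multicollinearity risk)")
```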

15
Q

Additivity & linearity

A

The outcome variable is linearly related to each predictor

If you have several predictors then their combined effect is described by adding their effects together

If this assumption is not met then your model is invalid

16
Q

Outliers can bias a parameter estimate

A

Assuming linear effects (a similar spread of variance across the data set) helps to tackle this

17
Q

Predictor variables

A

Typically regressions use interval-level data
But regression is a robust form of analysis & can handle ordinal or categorical predictors
If you use a categorical predictor, your regression just becomes an ANOVA
With ordinal data, be careful you’re not violating the assumption of a linear relationship between x & y

18
Q

Categorical variables

A

Binary categorical variables are easier to use

You still cannot use a categorical outcome variable in normal linear regression

19
Q

Dummy variables

A

Rather than using nominal categories in our binary categorical data, we need to code one category as 0 & the other as 1

Then we can look at the outcome variable as this predictor changes from 0 to 1; this lets us generate a beta score for categorical data
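
A minimal sketch of why 0/1 coding works (made-up groups): the slope b becomes the mean difference between the two categories:

```python
import numpy as np

# Made-up binary predictor: 0 = one category, 1 = the other
group = np.array([0, 0, 0, 1, 1, 1], dtype=float)
score = np.array([4.0, 5.1, 4.5, 7.9, 8.2, 7.5])   # criterion

# Fit score = b*group + a: a is the mean of the 0 group,
# b is the change in the outcome as the predictor moves from 0 to 1
b, a = np.polyfit(group, score, 1)
print(f"a = {a:.2f}, b = {b:.2f}")
```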