Statistics week 5 - 11 Flashcards

1
Q

what are the 3 stages to interpreting SPSS output from a two-way factorial ANOVA

A
  1. the ANOVA itself - the tests of between-subjects effects
  2. if a main effect is significant AND the IV has more than 2 levels, then check the post hoc results
  3. if the interaction result is significant, THEN follow up with profile plots, interpreting the main effect of the IV levels and their interaction (parallel lines indicate no interaction)
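For reference, a minimal sketch (not part of the original deck) of the same workflow outside SPSS, using Python's statsmodels; the data and the column names score, iv_a and iv_b are made up:

```python
# Stage 1 (the ANOVA table) and stage 3 (cell means for a profile plot)
# for a two-way factorial design. Data and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "score": [4, 5, 6, 7, 3, 4, 8, 9, 5, 6, 7, 8],
    "iv_a":  ["low", "low", "low", "low", "high", "high",
              "high", "high", "low", "low", "high", "high"],
    "iv_b":  ["x", "x", "y", "y", "x", "x",
              "y", "y", "x", "y", "x", "y"],
})

# Stage 1: main effects of each IV plus their interaction.
model = ols("score ~ C(iv_a) * C(iv_b)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Stage 3: cell means; plotting these gives the profile plot
# (non-parallel lines suggest an interaction).
print(df.groupby(["iv_a", "iv_b"])["score"].mean().unstack())
```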
2
Q

Assumptions of two-way independent ANOVAs

A
  • normality
  • Homogeneity of variance (variance in the DV should be equivalent across conditions) (tested with Levene's test, no correction; see the sketch below)
  • Independence of observations
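A minimal sketch of the Levene's test check in Python with scipy, on three made-up groups:

```python
# Levene's test for homogeneity of variance. A significant p-value (< .05)
# means the equal-variance assumption is violated.
from scipy import stats

group1 = [4, 5, 6, 5, 4, 6]
group2 = [7, 9, 6, 8, 7, 9]
group3 = [5, 5, 6, 4, 6, 5]

stat, p = stats.levene(group1, group2, group3)
print(f"Levene's W = {stat:.3f}, p = {p:.3f}")
```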
3
Q

non-parametric equivalent for factorial ANOVAs

A

there isn't one.

BUT factorial ANOVAs are really robust, so only serious violations of the assumptions would be a problem

4
Q

difference between partial eta squared and eta squared,
and why partial eta squared is used in factorial two-way ANOVA

A

eta squared is SSM/SST, which in one-way ANOVAs is the same as SSM/(SSM+SSR).

But in two-way ANOVAs this is not true, because SST (the total sum of squares) includes the variance from all IVs and their interaction. Partial eta squared instead involves only one effect: SS_effect/(SS_effect + SS_residual).

i.e. because a factorial design has multiple IVs, a separate effect-size measure for each individual effect is necessary
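A worked sketch with made-up sums of squares, showing why the two measures diverge in a two-way design:

```python
# Hypothetical sums of squares from a two-way ANOVA output.
ss_a = 30.0            # main effect of IV A
ss_b = 50.0            # main effect of IV B
ss_interaction = 20.0
ss_residual = 100.0
ss_total = ss_a + ss_b + ss_interaction + ss_residual  # 200.0

# Classical eta squared for IV A: effect over ALL variance,
# so it shrinks as the other effects grow.
eta_sq_a = ss_a / ss_total                      # 0.15

# Partial eta squared for IV A: effect over effect + error only,
# ignoring variance explained by IV B and the interaction.
partial_eta_sq_a = ss_a / (ss_a + ss_residual)  # ~0.23
print(eta_sq_a, partial_eta_sq_a)
```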

5
Q

post hoc tests are relevant when

A

main effect of IV is significant and IV has more than 2 levels.

6
Q

difference in assumptions for repeated-measures compared to independent ANOVA.
How is this assessed

A

sphericity of covariance
assessed via Mauchly's test and corrected via Greenhouse-Geisser

only when the IV has more than 2 levels

7
Q

The range within which 95% of scores in a
normally distributed population fall

formula

A

95% of population values fall within:
μ ± 1.96 × SD

8
Q

t formula

A

t = x̄D / ESE

(the mean difference divided by its estimated standard error)
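A minimal sketch computing a paired t by hand and checking it against scipy, assuming hypothetical pre/post scores:

```python
# Paired t: mean of the difference scores divided by their standard error.
import numpy as np
from scipy import stats

pre  = np.array([10, 12, 9, 11, 13, 10])
post = np.array([12, 14, 9, 13, 14, 12])

d = post - pre
t_by_hand = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

t_scipy, p = stats.ttest_rel(post, pre)
print(t_by_hand, t_scipy, p)  # the two t values match; df = n - 1 = 5
```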

9
Q

df for paired t-test

A

df = (n − 1)

10
Q

To calculate degrees of freedom for an
independent t-test

A

df = n_total − 2

11
Q

theory behind how F is calculated

e.g. written out variance formula

A

F = variance between IV levels / (variance within IV levels − variance due to individual diffs)

12
Q

Components of the F calculation for
ANOVAs, as provided in SPSS output

A

SSM + SSR = SST

SSM / dfM = MSM (mean square of model)

SSR / dfR = MSR (mean square residual)

F = MSM / MSR
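A minimal sketch building F from these components by hand and checking it against scipy, using three made-up groups:

```python
# One-way ANOVA F assembled from its SPSS-style components.
import numpy as np
from scipy import stats

groups = [np.array([4., 5., 6.]), np.array([7., 8., 9.]), np.array([5., 6., 7.])]
grand_mean = np.concatenate(groups).mean()

# SSM: squared distance of each group mean from the grand mean, weighted by n.
ssm = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# SSR: squared distance of each score from its own group mean.
ssr = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_m = len(groups) - 1                            # k - 1
df_r = sum(len(g) for g in groups) - len(groups)  # N - k

msm, msr = ssm / df_m, ssr / df_r
print("F by hand:", msm / msr)
print("F scipy:  ", stats.f_oneway(*groups).statistic)
```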

13
Q

To calculate degrees of freedom for a
bivariate correlation

A

df = N − 2

14
Q

R^2 Formula
(measure of effect size): the variance in
the outcome variable that is explained by the
regression model, expressed as a proportion
of total variance

A

R² = SSM / SST
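A quick illustration with made-up data, using scipy's linregress (for a simple bivariate regression, r² equals SSM/SST):

```python
# R^2 for a simple bivariate regression: the squared correlation of x and y.
from scipy import stats

x = [1, 2, 3, 4, 5, 6]
y = [2, 4, 5, 4, 6, 7]

result = stats.linregress(x, y)
print("r =", result.rvalue, "R^2 =", result.rvalue ** 2)
```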

15
Q

SSR =

A

sum of squares residual.

take the difference between each individual participant's score and their group mean. square these and add them. (within-groups differences)

16
Q

SSM =

A

take the difference between each group mean and the grand mean. square these, multiply by the number of scores in each group, and add. (between-groups model)

17
Q

MSm =

A

mean square model.

= SSm / dfm

18
Q

MSr

A

mean square residual

= SSr / dfr

19
Q

diff between repeated measures and independent groups factorial ANOVA

A

in repeated measures there is no variance due to individual differences in the error term (so within-group variance is smaller)

20
Q

Marginal means =

A

mean score for a single IV level, averaged across the levels of the other IV

21
Q

what does a significant interaction suggest

A

the effect of IV A on the DV is dependent on the level of IV B

22
Q

strength of bivariate linear correlations

A

.1–.3 = weak
.4–.6 = moderate
.7–.9 = strong

23
Q

what do inferential statistics measure

A

they infer the probability that we would have observed a relationship of this magnitude when in fact H0 is true.

e.g. we accept a 5% risk of a type 1 error / false positive

24
Q

Parametric assumptions of Bivariate linear relationship

A
  • Both variables must be continuous (if both are ordinal (categorical), then use non-parametric); can be used for Likert scales if they have 6 or 7 points
  • Related pairs (each participant has an x and a y)
  • Absence of outliers
  • Linearity (the scatterplot shows a straight rather than curved line)
25
Q

non-parametric equivalent of bivariate linear correlation

A

Spearman's Rho

26
Q

what is covariance

A

the variance shared between the x and y variables

27
Q

what does Pearson's r value represent

A

the ratio of the covariance to the separate variances

28
Q

when talking about the relative strength of a relationship, you must report

A

R^2

29
Q

if R^2 is .45, what does this mean (bivariate correlation)

A

45% of the variance is shared by the x and y variables

30
Q

partial correlation purpose

A

allows for examination of a relationship without the influence of a 3rd variable

31
Q

in partial correlation: look at the difference between the correlation when z is not partialled out and when it is partialled out. what does it suggest if the correlation decreases but remains significant? and if it does not decrease?

A

if it decreases but remains significant: the relationship between x and y was partially explained by z (but may still be influenced by another variable). if it does not decrease: the relationship was not influenced by z.

32
Q

Regression model purpose

A

describes the relationship between x and y, allowing an estimate of how much y will change as a result of a change in x

33
Q

regression model: y = ? x = ?

A

y = outcome variable (or dependent/criterion variable)
x = predictor variable (or independent/explanatory variable)

34
Q

why use a regression model

A
  • strength of the relationship between x and y
  • can predict the value of y if you know x

35
Q

assumption of a regression model result

A

assumes y is dependent on x (does not infer causality)

36
Q

what is the F ratio of the regression model comparing

A

compares the simplest model (the average score as the line of best fit; SST) vs the best model (the regression line; SSR). the difference between the two (SSM) reflects the improvement in prediction
37
Q

The larger the SSM, the

A

the bigger the improvement (in the prediction model)

38
Q

assumptions of multiple regression

A
  • sample size
  • linearity
  • absence of outliers
  • multicollinearity (predictors can't be highly correlated with one another)
  • normal P-P plot of the regression residuals
  • scatterplot of the regression residuals rectangularly distributed = homoscedasticity
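A minimal sketch of checking the multicollinearity assumption via variance inflation factors in statsmodels (the predictors x1 and x2 are simulated and deliberately correlated):

```python
# VIF quantifies how much a predictor is explained by the other predictors;
# values well above ~5-10 flag multicollinearity trouble.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = x1 * 0.9 + rng.normal(scale=0.3, size=100)  # strongly tied to x1
X = sm.add_constant(np.column_stack([x1, x2]))

for i, name in enumerate(["x1", "x2"], start=1):  # skip the constant column
    print(name, variance_inflation_factor(X, i))
```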
39
Q

what does hierarchical regression ask

A

does adding new predictor variables allow you to explain additional variance in the outcome variable? it examines the influence of predictor variables on the outcome variable after "partialling out" the influence of other variables

40
Q

in a regression model, beta represents

A

the standardized slope

41
Q

what are the different models in hierarchical regression?

A

model 1 (the predictor(s) to be controlled for)
model 2 (all predictors)

42
Q

change statistics for model 1 of hierarchical regression

A

compares the simplest model (b = 0) with model 1 (same job as a standard regression)

43
Q

change statistics for model 2 of hierarchical regression

A

compares model 1 to model 2; tells us about the explanatory power of x after the effects of z are controlled for.
ΔR^2 = how much additional variance in y is explained by x, after the effects of z are controlled for
ΔF = a measure of how much the model has improved the prediction of y, relative to the level of inaccuracy of the model
Δp < .05 indicates that x explains a significant proportion of y after z is partialled out
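A minimal sketch of these change statistics in Python with statsmodels (the variables x, y, z are hypothetical, simulated data):

```python
# Hierarchical regression: does adding x improve the prediction of y
# after controlling for z?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100
z = rng.normal(size=n)
x = rng.normal(size=n)
y = 0.5 * z + 0.4 * x + rng.normal(size=n)

model1 = sm.OLS(y, sm.add_constant(z)).fit()                        # control only
model2 = sm.OLS(y, sm.add_constant(np.column_stack([z, x]))).fit()  # add x

delta_r2 = model2.rsquared - model1.rsquared
delta_f, delta_p, _ = model2.compare_f_test(model1)
print(f"delta R^2 = {delta_r2:.3f}, delta F = {delta_f:.2f}, delta p = {delta_p:.4f}")
```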
44
Q

higher risk of what error in non-parametric statistics

A

type 2 (false negative): the risk of failing to reject the null when it is false, e.g. saying it's not significant when actually it is

45
Q

Independent t-test non-parametric equivalent

A

Mann-Whitney U test
remember, Mann is independent of Whitney

46
Q

paired t-test non-parametric equivalent

A

Wilcoxon T test

47
Q

1-way independent ANOVA non-parametric equivalent

A

Kruskal-Wallis test

48
Q

1-way RM ANOVA non-parametric equivalent

A

Friedman test
remember, Layla Friedman is repeated measures
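A minimal sketch of scipy's versions of the four tests above, run on small made-up samples:

```python
# The non-parametric equivalents from cards 45-48, in scipy.
from scipy import stats

a = [3, 5, 4, 6, 7]
b = [8, 9, 7, 10, 9]
c = [5, 6, 5, 7, 6]

print(stats.mannwhitneyu(a, b))          # independent t-test equivalent
print(stats.wilcoxon(a, b))              # paired t-test equivalent
print(stats.kruskal(a, b, c))            # 1-way independent ANOVA equivalent
print(stats.friedmanchisquare(a, b, c))  # 1-way RM ANOVA equivalent
```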
49
Q

factorial design non-parametric equivalent

A

none exists

50
Q

how are repeated-measures designs shown to be normally distributed

A

the DV difference scores should be normally distributed, between each paired level of the IV

51
Q

Normality assumption can be assessed with what test

A

the Shapiro-Wilk test; use it to decide between parametric and non-parametric
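A quick illustration with scipy on a made-up sample:

```python
# Shapiro-Wilk normality check: p < .05 suggests the data depart from
# normality, so consider a non-parametric test instead.
from scipy import stats

scores = [4.2, 5.1, 4.8, 5.5, 4.9, 5.0, 4.6, 5.3, 4.7, 5.2]
stat, p = stats.shapiro(scores)
print(f"W = {stat:.3f}, p = {p:.3f}")
```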
52
Q

Pearson's correlation coefficient non-parametric equivalent

A

Spearman's Rho: used when N > 20
Kendall's Tau: used when N < 20

53
Q

parametric or non-parametric when thinking about scale

A

if the variable is measured on an ordinal scale, use non-parametric, e.g. if the intervals between values are not constant

54
Q

partial correlation and regression non-parametric equivalent

A

none exists

55
Q

what tests analyse categorical data

A

one-variable chi-square
chi-square test of independence (two variables)
these have no parametric equivalents
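A minimal sketch of both chi-square tests in scipy, on made-up counts:

```python
# One-variable chi-square (goodness of fit) and the chi-square test of
# independence, on hypothetical category counts.
from scipy import stats

# One variable: are 40/25/15 observations evenly spread over 3 categories?
print(stats.chisquare([40, 25, 15]))

# Two variables: a 2x2 contingency table of observed counts.
observed = [[20, 30],
            [35, 15]]
chi2, p, dof, expected = stats.chi2_contingency(observed)
print(chi2, p, dof)
```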