Statistics week 5 - 11 Flashcards
what are the 3 stages to interpreting SPSS output from a two-way factorial ANOVA
- ANOVA itself - tests of between-subjects effects table
- if main effects are significant AND have more than 2 levels then check Post Hoc results
- If interaction result is significant, THEN follow up with Profile plots, interpreting main effect of IV levels and their interaction (parallel lines indicates no interaction)
Assumptions of two-way independent ANOVAs
- normality
- Homogeneity of variance (variance in DV should be equivalent across conditions) (tested with Levene's; no correction available).
- Independence of observations
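The homogeneity-of-variance check above can be sketched in plain Python. This is a minimal, hand-rolled version of Levene's test (a one-way ANOVA on each score's absolute deviation from its group mean); the example scores are made up:

```python
from statistics import mean

def levene_W(*groups):
    """Levene's test statistic: a one-way ANOVA run on each score's
    absolute deviation from its group mean. Large W suggests the
    group variances differ (homogeneity violated)."""
    k = len(groups)                       # number of groups
    N = sum(len(g) for g in groups)       # total number of scores
    # transform every score into its absolute deviation from the group mean
    z = [[abs(x - mean(g)) for x in g] for g in groups]
    z_bar = mean([d for grp in z for d in grp])   # grand mean of deviations
    between = sum(len(grp) * (mean(grp) - z_bar) ** 2 for grp in z)
    within = sum((d - mean(grp)) ** 2 for grp in z for d in grp)
    return ((N - k) / (k - 1)) * between / within

# same spread in both groups, different means -> W is 0 (variances equal)
w_equal = levene_W([4, 5, 6, 5], [14, 15, 16, 15])
# very different spreads -> large W (variances unequal)
w_unequal = levene_W([1, 2, 9, 8], [5, 5, 6, 6])
```

In practice you would read the p-value for this statistic from SPSS's Levene's test row rather than compute it by hand.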
non parametric equivalent for factorial ANOVAs
there isn't one.
BUT factorial ANOVAs are really robust, and only serious violations would be a problem
difference between partial eta squared and eta squared,
and why partial eta squared is used in two-way factorial ANOVA
eta squared is SSM/SST, which in a one-way ANOVA is the same as SSM/(SSM+SSR)
But in two-way ANOVAs this is not true, because SST (the total sum of squares) includes the variance from every effect. Partial eta squared uses only the effect of interest: SSeffect/(SSeffect + SSR)
i.e. because there are multiple effects in a factorial design, a separate effect-size measure for each one is necessary
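A minimal sketch of the difference, using made-up sums of squares for a two-way design (the SS values are hypothetical, and the interaction term is ignored for simplicity):

```python
# hypothetical SS values for a two-way ANOVA (illustration only)
ss_a, ss_b, ss_resid = 20.0, 30.0, 50.0
ss_total = ss_a + ss_b + ss_resid       # interaction ignored for simplicity

# classic eta squared for effect A: its share of ALL the variance
eta_sq_a = ss_a / ss_total              # 20 / 100 = 0.20

# partial eta squared for effect A: B's variance is excluded
partial_eta_sq_a = ss_a / (ss_a + ss_resid)   # 20 / 70 ~ 0.286
```

Note how partial eta squared for A is larger, because the variance explained by B no longer sits in the denominator.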
post hoc tests are relevant when
main effect of IV is significant and IV has more than 2 levels.
difference in assumptions for repeated measures compared to independent (ANOVA)
How is this assessed
sphericity of covariance
assessed via Mauchly's test and corrected via Greenhouse-Geisser
only when IV has more than 2 levels
The range within which 95% of scores in a
normally distributed population fall
formula
95% population values fall:
x̄ ± 1.96 × SD
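As a quick sketch (the mean of 100 and SD of 15 are made-up, IQ-style numbers):

```python
def normal_95_range(mean, sd):
    """Range containing ~95% of scores in a normally distributed
    population: mean +/- 1.96 standard deviations."""
    return (mean - 1.96 * sd, mean + 1.96 * sd)

# hypothetical IQ-style scale: mean 100, SD 15 -> roughly (70.6, 129.4)
low, high = normal_95_range(100, 15)
```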
t formula
t = x̄D / SE
(the mean of the difference scores divided by their standard error)
df for paired t-test
df = (N − 1)
To calculate degrees of freedom for an
independent t-test
df = Ntotal − 2
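The t and df formulas on these cards can be sketched in Python (the difference scores below are made up):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(diffs):
    """Paired t: mean of the difference scores over its standard error.
    stdev() uses the sample (n-1) formula, as these cards assume."""
    se = stdev(diffs) / sqrt(len(diffs))   # standard error of the mean diff
    t = mean(diffs) / se
    df = len(diffs) - 1                    # paired: df = N - 1
    return t, df

def independent_df(n1, n2):
    """Independent t-test: df = Ntotal - 2."""
    return n1 + n2 - 2

t, df = paired_t([2, 1, 3, 2, 2])   # toy difference scores
```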
theory behind how F is calculated
e.g. written out variance formula
F =
variance between IV levels /
(variance within IV levels − variance due to individual diffs)
Components of the F calculation for
ANOVAs, as provided in SPSS output
SSM + SSR = SST
SSM / dfM = MSM (mean square of model)
SSR / dfR = MSR (mean square residual)
F = MSM / MSR
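The same arithmetic, using made-up SS and df values of the kind SPSS prints in its ANOVA table:

```python
# hypothetical values as they might appear in SPSS output
ss_m, ss_r = 60.0, 40.0     # model and residual sums of squares
df_m, df_r = 2, 27          # e.g. 3 groups, 30 participants

ss_t = ss_m + ss_r          # SSM + SSR = SST
ms_m = ss_m / df_m          # mean square model
ms_r = ss_r / df_r          # mean square residual
f = ms_m / ms_r             # F = MSM / MSR
```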
To calculate degrees of freedom for a
bivariate correlation
df = N − 2
R^2 Formula
(measure of effect size): the variance in
the outcome variable that is explained by the
regression model, expressed as a proportion
of total variance
R² = SSM / SST
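A minimal sketch of R² from a simple least-squares fit; the data and the `r_squared` helper name are hypothetical:

```python
from statistics import mean

def r_squared(x, y):
    """Fit a simple least-squares line and return SSM / SST:
    the proportion of total variance explained by the model."""
    xb, yb = mean(x), mean(y)
    slope = (sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
             / sum((xi - xb) ** 2 for xi in x))
    intercept = yb - slope * xb
    pred = [intercept + slope * xi for xi in x]
    ss_m = sum((p - yb) ** 2 for p in pred)   # variance explained by model
    ss_t = sum((yi - yb) ** 2 for yi in y)    # total variance
    return ss_m / ss_t

r2_perfect = r_squared([1, 2, 3, 4], [2, 4, 6, 8])   # perfect line -> 1.0
r2_noisy = r_squared([1, 2, 3, 4], [2, 4, 5, 9])     # imperfect -> below 1
```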
SSR =
sum of squares residual.
take the difference between each individual participant's score and their group mean, square these, and add them up. (within-groups differences)
SSM =
take the difference between each individual's group mean and the grand mean, square, and add. (between-groups / model)
MSm =
mean square model.
= SSm / dfm
MSr =
mean square residual.
= SSr / dfr
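Putting the four definitions together, a hand computation for a toy one-way design (the group scores are made up):

```python
from statistics import mean

def anova_parts(groups):
    """Compute SSM, SSR and F for a one-way design by hand."""
    scores = [x for g in groups for x in g]
    grand = mean(scores)
    # SSM: each individual's group mean vs the grand mean
    ss_m = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # SSR: each score vs its own group mean
    ss_r = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_m = len(groups) - 1
    df_r = len(scores) - len(groups)
    ms_m, ms_r = ss_m / df_m, ss_r / df_r   # MSM and MSR
    return ss_m, ss_r, ms_m / ms_r          # last value is F

ss_m, ss_r, f = anova_parts([[1, 2, 3], [4, 5, 6]])
```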
diff between repeated measures and independent groups factorial ANOVA
the error term contains no variance due to individual differences (within-groups variance is smaller)
Marginal means =
mean score for single IV level
what does a significant interaction suggest
effect of IV A on the DV is dependent on the level of IV B
strength of bivariate linear correlations
.1-.3 = weak
.4 - .6 = moderate
.7 - .9 = strong
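A small helper reflecting these bands (how to treat the boundary gaps, and the "negligible" label below .1, are my assumptions rather than part of the card):

```python
def correlation_strength(r):
    """Label the magnitude of a correlation coefficient using the
    conventional bands: ~.1-.3 weak, ~.4-.6 moderate, ~.7-.9 strong.
    Band boundaries here are a judgment call."""
    r = abs(r)          # sign shows direction, not strength
    if r >= 0.7:
        return "strong"
    if r >= 0.4:
        return "moderate"
    if r >= 0.1:
        return "weak"
    return "negligible"   # label below .1 is an assumption, not from the card

label = correlation_strength(-0.85)   # "strong": sign is ignored
```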
what do inferential statistics measure
they infer the probability of observing a relationship of this magnitude when in fact H0 is true.
e.g. we accept a 5% risk of a type 1 error / false positive
Parametric assumptions of Bivariate linear relationship
- Both variables must be continuous (if both are ordinal (categorical), use a non-parametric test); can be used for Likert scales with 6 or 7 points
- Related pairs (each participant has both an x and a y score)
- Absence of outliers
- Linearity (scatterplot shows a straight, not curved, line)