L6: Repeated measures and Mixed ANOVA Flashcards
What is a repeated-measures design?
It’s a design where you have multiple levels of an independent variable (IV), and you measure each participant’s score on the dependent variable (DV) at every level of the IV.
You do this for every participant, so everyone is tested under every condition.
One-way repeated measures ANOVA
What is a One-way repeated measures (O-RM) ANOVA?
How many variables/levels does it have?
A statistical test that analyses the variance explained by the model while reducing the error term by removing the between-participant variance (individual differences). It has (see the sketch after this list):
- 1 dependent/outcome variable
- 1 independent/predictor variable
- 2 or more levels
- All with same subjects
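The course uses JASP, but as a concrete illustration, here is a minimal sketch of the same test in Python with the pingouin package (my choice of tool, not the lecture’s; the data and column names are made up):

```python
import pandas as pd
import pingouin as pg

# Long format: one row per participant x condition (4 subjects, 3 levels).
data = pd.DataFrame({
    "subject":   [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "condition": ["A", "B", "C"] * 4,
    "score":     [5, 7, 9, 4, 6, 7, 6, 8, 10, 5, 5, 8],
})

# One-way repeated-measures ANOVA; correction="auto" applies a
# sphericity correction when Mauchly's test suggests a violation.
res = pg.rm_anova(data=data, dv="score", within="condition",
                  subject="subject", correction="auto", detailed=True)
print(res)
```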
One-way repeated measures ANOVA
What are the assumptions for O-RM ANOVA?
- Uni- or multivariate (referring to independent variable)
- Continuous dependent variable
- Normally distributed
- Shapiro-Wilk
- Q-Q plots
- Equality of variance of the within-group differences
- Mauchly’s test of sphericity
- Always met when having only 2 groups
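To make the normality checks from the list above concrete, a quick sketch with scipy and matplotlib (my tooling choice, not the course’s; the scores are simulated):

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
scores = rng.normal(loc=50, scale=10, size=30)  # toy data

# Shapiro-Wilk: p < .05 suggests a deviation from normality.
w, p = stats.shapiro(scores)
print(f"Shapiro-Wilk: W = {w:.3f}, p = {p:.3f}")

# Q-Q plot: points should fall roughly on the straight line.
stats.probplot(scores, dist="norm", plot=plt)
plt.show()
```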
What is the assumption of sphericity in simple terms?
Simple meaning
Sphericity is about assuming that the relationship between scores in pairs of treatment conditions is similar.
It can be likened to the assumption of homogeneity of variance in between-group designs.
What is the actual assumption of sphericity?
The assumption is that the variances of difference scores between pairs of treatment levels are equal.
It is tested by Mauchly’s test
You need at least three groups for the assumption to be an issue
Why do you need at least three groups for sphericity to be an issue?
Because it is about the variance of difference scores.
With only two conditions you have just one set of difference scores, and therefore only one variance.
You need at least three conditions for sphericity to be an issue.
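A tiny numpy sketch of the idea (toy data I made up): with three conditions there are three sets of difference scores, each with its own variance, and sphericity holds when those variances are roughly equal.

```python
import numpy as np

# Rows = participants, columns = conditions A, B, C (made-up scores).
scores = np.array([[5, 7, 9],
                   [4, 6, 7],
                   [6, 8, 10],
                   [5, 5, 8]])
A, B, C = scores.T

# One variance per pair of conditions; Mauchly's test formally
# checks whether these are equal (JASP does this for you).
for name, diff in [("A-B", A - B), ("A-C", A - C), ("B-C", B - C)]:
    print(name, "variance of differences:", diff.var(ddof=1))
```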
How do you tell if sphericity is violated?
Look at the p-value given by JASP: if it’s significant (< 0.05), then the assumption IS violated.
If it’s not significant, then it isn’t violated.
If it’s violated, apply corrections.
What are the two correction methods?
How do they work?
Greenhouse-Geisser and Huynh-Feldt corrections.
They adjust the degrees of freedom (like the Welch test does).
Use them when the assumption of sphericity is violated.
They apply a correction that is proportionate to the extent of the violation (pretty cool)
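In symbols (my notation, the lecture doesn’t spell it out): both corrections estimate a sphericity index ε between 1/(k - 1) and 1 (where 1 means no violation at all), and the degrees of freedom get multiplied by it, so dfM becomes ε(k - 1) and dfR becomes ε(k - 1)(n - 1). The worse the violation, the smaller ε, the fewer degrees of freedom, and the more conservative the test.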
When should you use the corrections?
- Johnny doesn’t really talk about when you should use them, but the book says you should just always use them since they apply a proportionate correction.
- I say don’t use them unless your assumption is violated, and then use both corrections.
- If one correction is just significant and the other isn’t, don’t cherry-pick the significant one; talk about both.
!!! The stuff above is based on this lecture. I’m watching the non-parametric one right now, and Johnny said to just apply it. I’m keeping the text so you can see my hard work, but just apply the corrections.
In which variance does the experimental effect appear in RM designs?
The effect of the experiment shows up in the within-participant variance rather than between-group variance.
In independent designs, the variance created by individual differences in performance is the residual sum of squares (SSR).
The types of variances are the same as in independent designs, but the difference is where they come from.
Look at picture 6.1
We will go through each variance in the tree diagram
What is the total sum of squares?
An underscore means subscript
Look at picture 6.3.
It is SST
It explains how much variance there is in your data
The grand variance is the variance of all scores when we ignore which group they belong to.
N-1 is just your degrees of freedom
I tried using underscores for the sums of squares, but they kept getting italicized, so SS plus a letter means sum of squares: when you see SSR, it’s the residual sum of squares, not the course.
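In case the picture is missing: the formula (the book’s version, which I’m assuming matches picture 6.3) is SST = grand variance x (N - 1), where the grand variance is the variance of all N scores pooled together.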
What is the within-participant sum of squares?
SSW
It tells you how much variance is explained by individuals’ performances under different conditions
Look at picture 6.4
n_i refers to the number of observations within an individual, and n refers to the number of individuals.
So (n_i - 1)n is just the degrees of freedom.
s_i refers to an individual’s variance (the variance of that person’s scores across conditions).
In independent designs, this is just the residuals (SSR)
In RM designs, you’re interested in the variation within an individual
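Reconstructing the pictured formula in this deck’s notation (the book’s version, which I’m assuming matches picture 6.4): SSW = sum over all individuals of s_i^2 x (n_i - 1). In words: take the variance of each person’s scores across conditions, weight it by that person’s degrees of freedom, and add everything up.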
What is the model sum of squares?
SSM
Picture 6.5
It tells you how much variance is explained by our manipulation.
n_k refers to the condition size (the number of scores in condition k), and X_k is the mean of that condition, which gets compared to the grand mean.
Degrees of freedom is just k - 1
You do this for each condition/group, and sum the results.
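Again reconstructing the picture (the book’s version, assumed to match picture 6.5): SSM = sum over the k conditions of n_k x (X_k - X_grand)^2, i.e. each condition mean’s squared distance from the grand mean, weighted by the condition size.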
What is the residual sum of squares?
!!! SSR is the same as SSerror, they aren’t different!!!
SSR
It tells you how much variance cannot be explained by the model
If you know SSW and SSM, you can calculate it
SSR = SSW - SSM
The degrees of freedom for the residuals are calculated in the same way: dfR = dfW - dfM.
What is the between-participant sum of squares?
SSB
It represents the individual differences between cases.
SSB = SST - SSW
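To tie the whole tree together, a small numpy sketch (my own toy data and code, not from the lecture) that computes every sum of squares above and checks that SST = SSB + SSW and SSW = SSM + SSR:

```python
import numpy as np

# Rows = participants, columns = conditions (made-up scores).
scores = np.array([[5., 7., 9.],
                   [4., 6., 7.],
                   [6., 8., 10.],
                   [5., 5., 8.]])
n, k = scores.shape          # n participants, k conditions
N = scores.size              # total number of scores
grand_mean = scores.mean()

# SST: grand variance times (N - 1).
SST = scores.var(ddof=1) * (N - 1)

# SSW: each person's variance across their own scores, times (k - 1).
SSW = (scores.var(axis=1, ddof=1) * (k - 1)).sum()

# SSM: each condition mean's squared distance from the grand mean,
# weighted by the condition size.
SSM = (n * (scores.mean(axis=0) - grand_mean) ** 2).sum()

SSR = SSW - SSM              # residual (error) sum of squares
SSB = SST - SSW              # between-participant sum of squares

print(f"SST={SST:.2f} SSW={SSW:.2f} SSM={SSM:.2f} "
      f"SSR={SSR:.2f} SSB={SSB:.2f}")
print("SST = SSB + SSW:", np.isclose(SST, SSB + SSW))
print("SSW = SSM + SSR:", np.isclose(SSW, SSM + SSR))
```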