L6: Repeated measures and Mixed ANOVA Flashcards

1
Q

What is a repeated-measures design?

A

It’s a design in which you have multiple levels of an independent variable (IV), and you measure each participant’s score on the dependent variable (DV) at every level of the IV.
You do this for every participant.

2
Q

One-way repeated measures ANOVA

What is a One-way repeated measures (O-RM) ANOVA?
How many variables/levels does it have?

A

A statistical test that analyses the variance explained by the model while reducing the error term by removing variance due to individual differences between participants.

  • 1 dependent/outcome variable
  • 1 independent/predictor variable
    • 2 or more levels
  • All with same subjects
3
Q

One-way repeated measures ANOVA

What are the assumptions for O-RM ANOVA?

A
  • Uni- or multivariate (referring to independent variable)
  • Continuous dependent variable
  • Normally distributed
    • Shapiro-Wilk
    • Q-Q plots
  • Equality of variance of the within-group differences
    • Mauchly’s test of sphericity
    • Always met when having only 2 groups
4
Q

What is the assumption of sphericity in simple terms?

A

Simple meaning
Sphericity is about assuming that the relationship between scores in pairs of treatment conditions is similar.
It can be likened to the assumption of homogeneity of variance in between-group designs.

5
Q

What is the actual assumption of sphericity?

A

The assumption is that the variances of difference scores between pairs of treatment levels are equal.
It is tested by Mauchly’s test
You need at least three groups for the assumption to be an issue
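
To make this concrete, a sketch with three hypothetical conditions A, B and C (made-up labels, not from the lecture): sphericity means

\[ \operatorname{Var}(A - B) \approx \operatorname{Var}(A - C) \approx \operatorname{Var}(B - C) \]

If one of these difference-score variances is much larger than the others, sphericity is violated.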

6
Q

Why do you need at least three groups for sphericity to be an issue?

A

Because it is about the variance of difference scores.
With only two conditions you have just one set of difference scores, and therefore only one variance of differences.
You need at least three conditions for sphericity to be an issue.

7
Q

How do you tell if sphericity is violated?

A

Look at the p-value given by JASP: if it’s significant (< 0.05), then the assumption IS violated.
If it’s not significant, then it isn’t violated.

If it’s violated, apply corrections.

8
Q

What are the two correction methods?
How do they work?

A

Greenhouse-Geisser and Huynh-Feldt corrections.
They adjust the degrees of freedom (like the Welch test does).
Use them when the assumption of sphericity is violated.
They apply a correction that is proportionate to the extent of the violation (pretty cool).
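
A sketch of how both corrections work (ε below stands for the Greenhouse-Geisser or Huynh-Feldt estimate of sphericity that JASP reports):

\[ df_{model}^{corrected} = \varepsilon (k - 1), \qquad df_{error}^{corrected} = \varepsilon (k - 1)(n - 1) \]

The F-value itself stays the same; only the degrees of freedom (and therefore the p-value) change. ε = 1 means sphericity holds perfectly; the smaller ε gets, the bigger the violation and the harsher the correction.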

9
Q

When should you use the corrections?

A
  • Johnny doesn’t really talk about when you should use them, but the book says you should just always use them since they apply a proportionate correction.
  • I say don’t use them unless your assumption is violated, and then use both corrections.
  • If the result is significant with one correction but not with the other, don’t cherry-pick the significant one; talk about both.

!!! The stuff above is based on this lecture. I’m watching the non-parametric one right now, and Johnny said to just apply it. I’m keeping the text so you can see my hard work, but just apply the corrections.

10
Q

In which variance does the experimental effect appear in RM designs?

A

The effect of the experiment shows up in the within-participant variance rather than between-group variance.
In independent designs, the within-group variance is the residual sum of squares (SSR): the variance created by individual differences in performance.
The types of variances are the same as in independent designs, but the difference is where they come from.

Look at picture 6.1
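
Picture 6.1 isn’t reproduced here, but the partition it shows (assuming it follows the book’s tree diagram) can be sketched as:

\[ SS_T = SS_B + SS_W, \qquad SS_W = SS_M + SS_R \]

i.e. the total variance splits into between-participant variance (individual differences, which are set aside) and within-participant variance, and the within-participant part splits into the experimental effect (model) and the residual error.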

11
Q

We will go through each variance in the tree diagram

What is the total sum of squares?

An underscore means subscript

A

Look at picture 6.3.
It is SST.
It tells you how much variance there is in your data in total.
The grand variance is the variance of all scores when we ignore which group they belong to.
N − 1 is just your degrees of freedom.

I tried doing underscores for the sums of squares, but it kept italicizing it, so SS(x) means sum of squares; so when you see SSR, it’s the residual sum of squares, not the course.
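
Picture 6.3 isn’t shown here; a sketch of the usual formula (assuming it matches the one in the book), based on the grand variance of all N scores:

\[ SS_T = s_{grand}^2 (N - 1), \qquad df_T = N - 1 \]

where N is the total number of scores across all participants and conditions.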

12
Q

What is the within-participant sum of squares?

A

SSW
It tells you how much variance is explained by individuals’ performances under different conditions
Look at picture 6.4
n_i refers to the number of observations within each individual (i.e. the number of conditions), and n refers to the number of individuals.
So (n_i − 1)n is just the degrees of freedom.
s_i refers to an individual’s variance (the variance of that person’s scores across the conditions).

In independent designs, this is just the residuals (SSR)
In RM designs, you’re interested in the variation within an individual
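
Picture 6.4 isn’t shown here; a sketch of the formula (assuming it matches the book’s), summing each person’s variance weighted by their degrees of freedom:

\[ SS_W = \sum_{i=1}^{n} s_i^2 (n_i - 1), \qquad df_W = n(n_i - 1) \]

For example, with n = 8 participants each measured under n_i = 3 conditions, df_W = 8 × 2 = 16.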

13
Q

What is the model sum of squares?

A

SSM
Picture 6.5
It tells you how much variance is explained by our manipulation.
n_k refers to the number of scores in condition k, and X_k is the mean of that condition.
The degrees of freedom are just k − 1.
You compute this for each condition/group and sum the results.
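
Picture 6.5 isn’t shown here; a sketch of the formula (assuming it matches the book’s), comparing each condition mean with the grand mean:

\[ SS_M = \sum_{k} n_k (\bar{x}_k - \bar{x}_{grand})^2, \qquad df_M = k - 1 \]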

14
Q

What is the residual sum of squares?

A

!!! SSR is the same as SSerror, they aren’t different!!!

SSR
It tells you how much variance cannot be explained by the model
If you know SSW and SSM, you can calculate it

SSR = SSW - SSM
The degrees of freedom for the residuals are calculated in the same way: DFresidual = DFwithin − DFmodel.
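
A sketch of how the residual degrees of freedom work out from the earlier cards (this also matches the (n − 1)(k − 1) formula used for DFerror later on):

\[ df_R = df_W - df_M = n(k - 1) - (k - 1) = (n - 1)(k - 1) \]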

15
Q

What is the between-participant sum of squares?

A

SSB
It represents the individual differences between cases.
SSB = SST - SSW

16
Q

What is the model mean square?

A

MSmodel
It represents the average variation explained by the model
MSmodel = SSmodel/DFmodel
(Reminder, DFmodel = k-1)

17
Q

What is the error/residual mean square?

A

MSerror
It is a gauge of the average variation explained by extraneous variables
MSerror = SSerror/DFerror

18
Q

How can you calculate DFerror?

A

DFerror = (n-1)(k-1)
n = number of participants (i.e. the number of measurements per condition)
k = number of groups

You can also cheat, and do DFwithin - DFmodel

19
Q

What is the F-statistic?

A

The ratio of the variation explained by the model to the variation explained by unsystematic factors.

F = MSmodel/MSerror
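
A small worked example with made-up numbers (purely illustrative, not from the lecture): suppose n = 10 participants, k = 3 conditions, SSM = 20 and SSR = 30.

\[ df_M = k - 1 = 2, \qquad df_R = (n - 1)(k - 1) = 18 \]
\[ MS_M = 20/2 = 10, \qquad MS_R = 30/18 \approx 1.67, \qquad F = MS_M / MS_R \approx 6.0 \]

You would then look up (or let JASP report) the p-value for F(2, 18) = 6.0.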

20
Q

Effect size

A

General measures
You can look at both eta squared and omega squared.
Eta squared is a measure of explained variance, but if you report explained variance, use partial eta squared.
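
For reference, a sketch of the partial eta squared formula (the standard definition; omega squared has a more involved formula for RM designs, which is usually left to the software):

\[ \eta_p^2 = \frac{SS_M}{SS_M + SS_R} \]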

21
Q

How do you report O-RM results?

A

PAGE 589 HAS THE EXAMPLES, for exam.
The (sphericity correction name) estimate of the departure from sphericity was x. The (dependent variable) was (not significantly/significantly) affected by the (independent variable), F(df1, df2) = (F-statistic), p = (x), ω² = (x). In short, the training/treatment was (successful/unsuccessful).

22
Q

What is a factorial RM ANOVA?
How many variables/levels does it have?

A

Analyses the variance explained by the model while reducing the error term by removing variance due to individual differences between participants.

  • 1 dependent/outcome variable
  • 2 or more independent/predictor variables
    • 2 or more levels
  • All with same subjects
23
Q

What are its assumptions?

A

It’s the exact same as one-way, but I’ll put them again for convenience.

  • Uni- or multivariate
  • Continuous dependent variable
  • Normally distributed
    • Shapiro-Wilk
    • Q-Q plots
  • Equality of variance of the within-group differences
    • Mauchly’s test of sphericity
    • Always met when having only 2 groups
24
Q

How is it different to one-way?

A

Not very different; the only difference is in JASP, pretty much.
There are also interaction effects, since you’ve now got more than one independent variable.
Otherwise, it’s pretty much the same; you just have to look at more p-values.

25
Q

How do you report factorial RM ANOVA results?

A

It’s on pages 606-607; there are different variants depending on your result, but each one is long, so I’m not going to break them down here.

26
Q

What is a Mixed RM ANOVA?

A

The mixed ANOVA analyses the variance explained by the model while reducing the error term by removing variance due to individual differences between participants.

  • 1 dependent/outcome variable
  • 1 or more independent/predictor variables with the same subjects (repeated measures)
    • 2 or more levels
  • 1 or more independent/predictor variables with different subjects (between subjects)
    • 2 or more levels
27
Q

What does it mean by different subjects?

A

You have your main independent variables (repeated-measures factors that everyone is tested on).
Then you have your between-subjects factor (a box in JASP), which has multiple levels, and each person is only tested on one of them.
(For example, if gender is your between-subjects factor, you can now compare the effect of your repeated-measures IV on the DV between genders.)

28
Q

What are its assumptions?

A

The same as prior 2 RM ANOVAs

  • Uni- or multivariate
  • Continuous dependent variable
  • Normally distributed
    • Shapiro-Wilk
    • Q-Q plots
  • Equality of variance of the within-group differences
    • Mauchly’s test of sphericity
    • Always met when having only 2 groups
29
Q

Time points in lecture for JASP examples

A

JASP DEMONSTRATION FOR ONE-WAY RM ANOVA
37:53 -> 48:43

JASP DEMONSTRATION FOR FACTORIAL RM ANOVA
51:35 -> 1:03:11

JASP DEMONSTRATION FOR MIXED RM ANOVA
1:03:40 -> 1:10:45

30
Q

How are repeated measures ANOVA similar to paired samples t-test?

A

They both take into account that people have inherent differences on the latent variable, so you have to look person by person.
A regular t-test doesn’t, and just compares all the results to each other.

If you take the t-statistic from a paired-samples t-test and square it, it’s the same value as the F-statistic you would get from an RM ANOVA on the same data (with two conditions).
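
A sketch of that relationship for the two-condition case with n participants (standard result, not from the lecture):

\[ F(1, n - 1) = t^2 \]

where t is the paired-samples t-statistic with df = n − 1.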