Lecture 4: One Way & repeated measures ANOVA Flashcards

1
Q

What’s a one way ANOVA?

A

ANOVA stands for Analysis of Variance
One way ANOVAs are parametric tests that allow us to look at differences between 3 or more conditions
They are the parametric equivalents of the tests you learned about in PY1124:
Kruskal-Wallis
Friedman’s

2
Q

What different types of one way ANOVA are there?

A
Essentially two:
Within subjects/repeated measures
Between subjects/independent groups
They both compare scores when you have 3 or more conditions
As usual Jamovi treats them differently:
They use different menu commands
The output is different
3
Q

What do ANOVAs do?

A

Essentially they do what it says on the tin
They analyse variance!
This goes back to what we said before about variance
You end up with an F statistic that tells you about the ratio of between condition to within condition variance
Like the t statistic:

F = between-condition variance (differences between conditions) / within-condition (error/unexplained) variance
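The lecture runs this in Jamovi, but the same ratio can be sketched in Python with SciPy's `f_oneway` (a minimal sketch, assuming SciPy is installed; the three groups of scores are invented example data):

```python
# Minimal sketch: one-way between-subjects ANOVA with SciPy.
# The three groups of scores below are made-up example data.
from scipy import stats

group_a = [4, 5, 6, 5, 4]
group_b = [7, 8, 9, 8, 7]
group_c = [4, 6, 5, 5, 6]

# f_oneway returns the F statistic (the between/within variance ratio)
# and the associated p value
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f_stat, p_value)
```

Because group_b sits well above the other two groups, the between-condition variance dwarfs the within-condition variance, so F comes out much larger than 1.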
4
Q

What the F?

A

A big F statistic means there are bigger differences between the conditions than within the conditions
A small F statistic means there are bigger differences within the conditions than between the conditions
Which is why F < 1 is sometimes reported: it’ll never be significant

5
Q

Same as the t-test, just more complicated

A

Really, this is the same as the t-test: the F statistic gives you the ratio of between condition/group differences to within condition/group differences
Like the cats and dogs example, but since we can now compare more than two conditions we can add another animal: maybe we can get our shot-putting team to try throwing cats, dogs, and guinea pigs to see how far they travel

6
Q

So what does ANOVA do?

A

Essentially calculate how much variance there is BETWEEN conditions (differences between dogs & cats, cats & guinea pigs, and dogs & guinea pigs)

Then calculate how much variance there is WITHIN conditions (differences between dogs and dogs, cats and cats, and guinea pigs and guinea pigs)

Then look at the ratio of the two sources of variance…
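The three steps above can be sketched by hand with NumPy (a sketch only, assuming NumPy is installed; the throwing distances are invented example data):

```python
# Hand-rolled sketch of the variance partition a one-way ANOVA performs.
# The distances below are invented example data for the three animals.
import numpy as np

dogs = np.array([4.0, 5.0, 6.0, 5.0, 4.0])
cats = np.array([7.0, 8.0, 9.0, 8.0, 7.0])
pigs = np.array([4.0, 6.0, 5.0, 5.0, 6.0])
groups = [dogs, cats, pigs]

grand_mean = np.concatenate(groups).mean()

# Step 1 - BETWEEN-condition variance: how far each group mean
# sits from the grand mean, weighted by group size
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)

# Step 2 - WITHIN-condition variance: spread of scores around
# their own group mean (dogs vs dogs, cats vs cats, ...)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

# Step 3 - the ratio of the two sources of variance
df_between = len(groups) - 1                            # k - 1
df_within = sum(len(g) for g in groups) - len(groups)   # N - k
f_stat = (ss_between / df_between) / (ss_within / df_within)
print(f_stat)
```

Dividing each sum of squares by its degrees of freedom turns it into a mean square, and F is simply the mean square between divided by the mean square within.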

7
Q

Post hoc tests and planned comparisons…

A

What you may have noticed is that the analysis showed there was a difference between dogs, cats, and guinea pigs
BUT not which of these groups differed from which

In the old days you’d calculate three t-tests to make the following comparisons:
Dogs vs. cats
Cats vs. guinea pigs
Dogs vs. guinea pigs
Trouble is, the chances of making a type 1 error (saying there’s a difference when there isn’t one) increase with each comparison you make from the same data set
So you’d make a ‘Bonferroni correction’ to the value of p accepted as statistically significant

8
Q

Bonferroni what?

A

Fair enough; like I said, the chances of making a type 1 error increase with each comparison you make
Here we have three comparisons to make, so we divide the alpha (𝛼) value (the value of p required for a difference to be statistically significant: usually 𝛼 = .05) by the number of comparisons
You divide your normal 𝛼 = .05 by the number of comparisons you’re making, so for the current experiment 𝛼 = .05/3 = .017
So you’d need a p value of less than .017 for the differences to be considered statistically significant (𝛼 = .017)
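The Bonferroni-corrected pairwise comparisons can be sketched in Python (a sketch only, assuming SciPy is installed; the scores are invented example data):

```python
# Sketch: three pairwise t-tests with a Bonferroni-corrected alpha.
# The scores below are invented example data for the three animals.
from itertools import combinations
from scipy import stats

scores = {
    "dogs": [4, 5, 6, 5, 4],
    "cats": [7, 8, 9, 8, 7],
    "pigs": [4, 6, 5, 5, 6],
}

comparisons = list(combinations(scores, 2))  # the three pairs
alpha = 0.05 / len(comparisons)              # Bonferroni: .05 / 3 = .017

results = {}
for a, b in comparisons:
    t, p = stats.ttest_ind(scores[a], scores[b])
    results[(a, b)] = p < alpha              # significant after correction?
    print(a, "vs", b, "significant:", results[(a, b)])
```

Each pairwise p value is compared against the corrected 𝛼 = .017 rather than the usual .05, which keeps the overall chance of a type 1 error near 5% across all three tests.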
