WEEK 4: One-Way ANOVA (Related) Flashcards
Learning Objectives:
- Be able to report the results of a one-way ANOVA
- Understand and be able to explain why a Bonferroni correction is necessary
- Understand and be able to explain the concepts of ‘equal variance’ and ‘sphericity’
- Understand and be able to explain the difference between a within and between participants design from the perspective of the calculation used in each type of ANOVA
Introducing the One-Way Related ANOVA
So far we’ve looked at the between participants ANOVA; of course there is also a within participants version.
Let’s take the same data, but now instead of each column representing a separate group, each row represents a participant who has written an essay in silence, whilst listening to classical music, and whilst listening to rock music.
Now looking at ROWS (separate participants) rather than COLUMNS (separate groups)
Run Through of how it works
Say participant 1 scored 25 in their essay when they wrote it in silence, 20 when listening to classical music, and 21 when listening to rock music, etc.
We calculate a between-conditions variance in a similar way to the independent ANOVA.
The variance is calculated in the same way, by working out the difference between the grand mean (the condition means added up and divided by 3, or by however many conditions there are) and each condition mean.
When we had the separate groups design this value consisted of the treatment effect plus the individual differences plus the sampling error. Now we don’t need to take the individual differences into account because we have the same people in each condition, and this gives us a greater chance of finding an effect if there is one.
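As a rough illustration of that calculation (working outside SPSS in Python, with made-up numbers rather than the lecture data), the between-conditions sum of squares and mean square can be worked out from a participants × conditions table like this:

```python
import numpy as np

# Hypothetical scores: rows = participants, columns = silence, classical, rock
scores = np.array([
    [25, 20, 21],
    [30, 24, 22],
    [28, 26, 20],
    [32, 23, 25],
])

n, k = scores.shape                     # number of participants, number of conditions
grand_mean = scores.mean()              # mean of every score in the table
condition_means = scores.mean(axis=0)   # one mean per condition

# Between-conditions sum of squares: squared distance of each condition mean
# from the grand mean, weighted by the number of participants
ss_conditions = n * np.sum((condition_means - grand_mean) ** 2)
ms_conditions = ss_conditions / (k - 1)  # mean square for the effect
print(ss_conditions, ms_conditions)
```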
Within-groups variance vs within-participant variance
Again, for the independent ANOVA we also calculated a within group variance by taking the mean for each group and working out the difference between each data point and the group mean.
For the within design the calculation involves finding the average for each participant across the conditions and calculating the difference between the participant’s mean and their score in each condition.
Of course each of these differences is squared, and a ‘mean square’ is calculated from the sum of squares.
Just like the between-conditions variance, you now don’t need to take individual differences into account because you have the same people taking part in each condition.
Despite the new layout of the data the fundamentals of the ANOVA are the same: we take a measure of the effect (the differences between the conditions) and divide it by the amount of error we have measured within participants.
Again, we no longer have the problem of individual differences because each person took part in each condition. Our effect now includes JUST the treatment effect and any sampling error, so the only error we have to take into account within participants is the SAMPLING ERROR, because each person acts as their own control.
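Continuing the same made-up example, here is a minimal sketch of the rest of the related ANOVA: the individual differences (each participant’s overall level) are stripped out of the error term, so what is left on the bottom of the F ratio is just the sampling error.

```python
from scipy import stats

participant_means = scores.mean(axis=1)                          # one mean per participant
ss_total = np.sum((scores - grand_mean) ** 2)
ss_subjects = k * np.sum((participant_means - grand_mean) ** 2)  # individual differences
ss_error = ss_total - ss_conditions - ss_subjects                # sampling error only

df_conditions = k - 1
df_error = (k - 1) * (n - 1)
ms_error = ss_error / df_error

F = ms_conditions / ms_error                  # effect divided by error
p = stats.f.sf(F, df_conditions, df_error)    # p value for the F ratio
print(F, p)
```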
What the ANOVA is looking for…
> Is there a difference between any of your conditions?
If the ANOVA is significant, do the (Bonferroni-corrected) follow-up t tests.
If the ANOVA is not significant, don’t do the t tests.
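If you wanted to run those follow-up tests outside SPSS, here is a hedged sketch (re-using the made-up scores above; the condition labels are only illustrative) of paired t tests with a simple Bonferroni correction, i.e. multiplying each p value by the number of comparisons:

```python
from itertools import combinations
from scipy import stats

labels = ["silence", "classical", "rock"]   # hypothetical condition names
n_comparisons = k * (k - 1) // 2            # 3 pairwise comparisons for 3 conditions

for a, b in combinations(range(k), 2):
    t, p = stats.ttest_rel(scores[:, a], scores[:, b])  # paired (related) t test
    p_bonf = min(p * n_comparisons, 1.0)                 # Bonferroni-corrected p value
    print(f"{labels[a]} vs {labels[b]}: t = {t:.2f}, corrected p = {p_bonf:.3f}")
```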
Assumptions of a related one-way ANOVA
Sphericity
> Continuous (scale) dependent variable
> Normal distribution
> No outliers (extreme scores)
> Sphericity - equal variances of the differences between conditions
Just like the independent ANOVA, the related ANOVA requires the same assumptions to be met, but instead of requiring equal variances within groups we now require sphericity: the variances of the differences between the conditions should be equal.
Sphericity is a concept that is similar to the idea of equal variances but instead of looking at within group differences we look at differences between conditions.
Sphericity Example
For example, if we take 3 conditions:
- calculate the difference between each participant’s scores in condition A and condition B
- then calculate the differences between conditions A & C and B & C; this gives us the variance of the difference scores for each pair of conditions, which we can compare for equality.
If the variances of the differences between each pair of conditions are roughly equal then we have not violated sphericity.
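As a quick way of seeing the idea in practice (again re-using the made-up scores above), you can compute the variance of the difference scores for each pair of conditions and check by eye that they are roughly equal:

```python
from itertools import combinations

# Variance of the difference scores for each pair of conditions
for a, b in combinations(range(k), 2):
    diffs = scores[:, a] - scores[:, b]
    print(f"conditions {a} vs {b}: variance of differences = {diffs.var(ddof=1):.2f}")
```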
Mauchly’s Test of Sphericity
Unlike the independent ANOVA, we do not need to tick a specific box to get SPSS to check sphericity for us; it will be calculated anyway using a test called MAUCHLY’S test of sphericity.
In the output we want this test to be NOT significant! If it is significant then we do not have homogeneity in the differences between conditions and we have violated the assumption of sphericity.
Related One-Way ANOVA Output
Look at the Greenhouse-Geisser lines (there are two: one for the factor, one for the error). D & R suggest that you report the GG line. We have some sums of squares, some df and some mean squares, just like in the independent ANOVA, and we also have an F value and a significance value.
Top box = between conditions variance (effect)
Bottom box = within participants variance (error)
If the result of your test of sphericity is significant you should report the GG line at the bottom.
If the result of your test of sphericity is not significant you can report the top line, which assumes sphericity is not violated.
We also need to look at the box of output that represents the error, but all we’re interested in here is the df, because we need to know our error df in order to report the result.
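SPSS works the Greenhouse-Geisser correction out for you, but as a sketch of where it comes from (re-using the made-up scores and the F value from above), the GG epsilon is estimated from the covariance of the conditions and simply shrinks both df before the p value is looked up:

```python
# Greenhouse-Geisser epsilon from the double-centred covariance of the conditions
cov = np.cov(scores, rowvar=False)          # k x k covariance matrix of the conditions
centre = np.eye(k) - np.ones((k, k)) / k    # centring matrix
dc = centre @ cov @ centre                  # double-centred covariance
epsilon = np.trace(dc) ** 2 / ((k - 1) * np.sum(dc ** 2))

# The GG line in the output multiplies both df by epsilon before finding p
gg_p = stats.f.sf(F, epsilon * df_conditions, epsilon * df_error)
print(epsilon, gg_p)
```

Epsilon is 1 when sphericity holds perfectly and gets smaller the more it is violated, which is why the corrected p value is more conservative.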
Violations of assumptions for a related 1-way ANOVA
If you violate assumptions use a non-parametric ANOVA (to be covered next week)
Although ANOVAs are robust to violations of distribution, outliers and type of data, if you violate the assumption of equal variances (or, here, sphericity) you would be advised to conduct the non-parametric version that we’ll cover next week.
Reporting the result of a related 1-way ANOVA
A one way within participants ANOVA showed that there was a significant effect of music on the quality of essays (F(df between, df within) = [F value], p = [p value]).
Bonferroni corrected post hoc tests revealed that [condition 1] resulted in improved performance (M = ___, SD = ___ ) compared to [condition 2] (M = ___, SD = ___; t(df) = [t value], p = [p value]). The comparison between [condition 1] and [condition 3] (M = ___, SD = ___ ) was also significant (t(df) = [t value], p = [p value]).
However, the contrast between [condition 2] and [condition 3] was not significant (t(df) = [t value], p = [p value]).
We can say with 95% certainty that in the population [DV] will improve by [lower CI] to [upper CI] points if written whilst [condition 1 compared to 2], and by [lower CI] to [upper CI] points if written whilst [condition 1 compared to 3].
This suggests that _______________________