ANOVA Flashcards
What does ANOVA stand for?
ANalysis Of VAriance
When would you use ANOVA?
Parametric Data
3 or more groups/conditions
Can ANOVA be used for independent groups or repeated measures design?
BOTH
What is a type I error?
When we say there is an effect when there isn't one (a false positive).
What is a type II error?
When we say there isn't an effect when there actually is one (a false negative)! oops
Why do we use ANOVA instead of doing lots of t-tests?
t-test doesn't allow more than 2 conditions of an IV.
It also inflates the overall type I error rate.
What is the overall type I error rate called?
the familywise error rate.
What is the calculation for familywise error rate?
1 − (0.95)^n
n = the exponent and stands for the number of comparisons e.g. 3
If you have 3 groups what is the familywise error?
1 − (0.95)^3 = .14
If the familywise error rate is .14 instead of .05 what does this tell us?
The likelihood of making a type I error increases from 5% to 14%.
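The familywise error calculation from the cards above can be sketched in a few lines of Python (the per-comparison alpha of .05 comes from the cards; the helper function name is just for illustration):

```python
# Familywise error rate: the chance of at least one type I error
# across n independent comparisons, each run at the given alpha.
def familywise_error(n_comparisons, alpha=0.05):
    return 1 - (1 - alpha) ** n_comparisons

fwer_3 = familywise_error(3)  # 3 comparisons -> about .14
fwer_6 = familywise_error(6)  # 6 comparisons -> about .26
```

This shows why running many t-tests inflates the type I error rate: the more comparisons, the larger the result.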
What does ANOVA control for?
It controls for type I error rate.
ANOVA is a _____ test.
Parametric
ANOVA tests whether the ____ of one or more IVs has a statistically significant influence on the value of the _____.
Manipulation
DV
Our alternative hypothesis for any ANOVA will be that the ____ differ.
means
ANOVA takes into account how a set of scores ____ around the means for each condition or group.
vary
What are the 3 causes of variability?
Treatment effects- effect of the IV
Individual Differences- within group variability e.g. the level to which individuals in a group differ even though they have received the same treatment
Random error- experimental error e.g. all skilled participants randomly end up in same group
The 3 causes of variability are split into 2 groups - what are these groups and how are they split?
Systematic Variance:
1. Treatment Effects
Experimental Variance:
- Individual differences
- Random errors
ANOVA _____ out the different causes of variance in a data set.
Partitions out!
What Variance does ANOVA calculate?
Total variability = systematic Variance + Experimental Variance
Systematic Variance
Experimental Variance/ Error Variance
In ANOVA what do we use to calculate variance?
Sum of Squares! SS
How do we calculate sum of squares?
SS = ∑x² − (∑x)²/N, where x = individual scores and N = sample size
After we calculate the SS how do we calculate variance?
We divide SS by df
SS/df
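The SS and SS/df cards can be checked by hand in Python (the scores are made-up toy data):

```python
# Sum of squares via the computational formula on the card:
# SS = sum(x^2) - (sum(x))^2 / N, then variance = SS / df.
scores = [4, 5, 6, 7, 8]  # toy data, made up for illustration
N = len(scores)

ss = sum(x**2 for x in scores) - sum(scores)**2 / N
variance = ss / (N - 1)  # df = N - 1 for a single sample
```

For these scores, SS = 190 − 900/5 = 10 and the variance is 10/4 = 2.5, which matches Python's built-in `statistics.variance`.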
What does ANOVA not tell us?
It tells us there's a significant difference in means BUT doesn't tell us which ones differ!
If we have significant ANOVA results what is important to do?
Follow up tests!
to see where the differences lie!
ANOVA is a parametric test, what does this mean about our data?
> Homogeneity of variance (equal variance across samples)
Normal distribution
Interval/ratio data
at least 10 people
Independent variables are also called _____ and their values are called ______.
Factors
Levels :)
What ANOVA do you use if you have 3 independent groups?
one-way independent ANOVA
Before we run an ANOVA we must check _____.
Assumptions
What are the 3 assumptions we must check?
> Normality of data
Ratio/interval scale
Homogeneity of variance
How do we check for Normality?
Run the Shapiro-Wilk test.
If the Shapiro-Wilk test is:
p > .05
what does this mean?
Your data is normally distributed.
Apart from Shapiro-Wilk, what else would tell you about Normality?
Distribution Curve
Histogram
How do we check for homogeneity of variance?
Levene’s Test of Homogeneity :)
If assumptions are violated in an independent groups ANOVA what do we do?
Conduct a Kruskal-Wallis Test
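The Kruskal-Wallis test is in SciPy as `scipy.stats.kruskal` (the three groups below are made-up toy data):

```python
from scipy import stats

# Non-parametric alternative to one-way independent ANOVA,
# used when assumptions are violated. Toy data, made up.
g1, g2, g3 = [1, 2, 3], [4, 5, 6], [7, 8, 9]
h_stat, p_value = stats.kruskal(g1, g2, g3)
```

It works on ranks rather than raw scores, which is why it doesn't need normality or homogeneity of variance.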
What are the disadvantages of independent groups design for ANOVA?
> More participants, potentially more expensive
> Does not partition out the 2 sources of error/experimental variance
What is randomisation?
When you randomly allocate participants to groups.
What’s the symbol for ANOVA?
F
the F ratio!
What is the F ratio?
systematic variance/ experimental error variance
Can also be referred to as experimental variance / error variance, just to be confusing.
If systematic variance is larger than experimental error variance what is the F ratio likely to be?
Significant
F = 10/1 = 10 =big!
If systematic variance is smaller than experimental error variance what is the F ratio likely to be?
Not significant!
F = 1/10 = 0.1 = small :(
The larger the F ratio, the greater the ______ variance relative to the error variance.
systematic
The larger the F ratio, the more ___ the result.
significant
How do we calculate how much variability there is between all of the scores?
Get the Total Sum of Squares (SS total)
How do we calculate how much of this variability can be explained by the IV?
Get the Treatment Sum of Squares
(SS treatment)
How do we calculate how much of this variability can’t be explained?
Get the error sum of squares.
ss error
After getting the SS treatment and SS error what do we calculate?
Mean Squares Treatment
Mean Squares error
What is this the formula for?
SS treatment/ df treatment
Mean Squares Treatment
What is this the formula for?
SS error/ df error
Mean Squares error
How do you calculate the df treatment?
Number of groups - 1
How do you calculate the df error?
The no. of participants - the number of conditions
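The last few cards (SS total, SS treatment, SS error, df, mean squares, F ratio) can be pulled together in one hand-rolled sketch; the three groups below are made-up toy data:

```python
# Hand-rolled one-way independent ANOVA, following the cards:
# SS total -> SS treatment -> SS error -> mean squares -> F ratio.
groups = [[3, 4, 5], [6, 7, 8], [9, 10, 11]]  # toy data, 3 groups

all_scores = [x for g in groups for x in g]
N = len(all_scores)
grand_mean = sum(all_scores) / N

# Total variability of all scores around the grand mean.
ss_total = sum((x - grand_mean) ** 2 for x in all_scores)
# Variability explained by the IV (group means vs grand mean).
ss_treatment = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                   for g in groups)
# Unexplained variability.
ss_error = ss_total - ss_treatment

df_treatment = len(groups) - 1  # number of groups - 1
df_error = N - len(groups)      # participants - conditions

ms_treatment = ss_treatment / df_treatment
ms_error = ss_error / df_error
f_ratio = ms_treatment / ms_error
```

Note that SS treatment + SS error adds back up to SS total, which is exactly the "partitioning" idea from the earlier cards.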
F distribution is _______. Values are ______ and we don’t ___ the p value for 1 tailed hypotheses.
Non-symmetrical
positive
halve
If you get a non significant result in Levene’s test of equality of variances what does this mean?
The group variances are approx. equal wooo :)
How do we get the F ratio from the SPSS table?
divide group mean square by error mean square
What is partial eta squared?
The effect size for ANOVA
What is considered a small, medium and large effect size in ANOVA?
small = .01 medium= .09 large = .25
How do you report an ANOVA?
F(df treatment, df error) = F ratio, p < .xxx (if significant, report effect size: ηp² = .xx)
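A quick end-to-end check with SciPy's `scipy.stats.f_oneway`, formatted roughly like the reporting card above (groups are made-up toy data):

```python
from scipy import stats

# Three independent groups (toy data, made up for illustration).
g1 = [3, 4, 5, 4]
g2 = [6, 7, 8, 7]
g3 = [9, 10, 11, 10]

f_stat, p_value = stats.f_oneway(g1, g2, g3)

df_treatment = 3 - 1   # groups - 1
df_error = 12 - 3      # participants - groups
report = f"F({df_treatment}, {df_error}) = {f_stat:.2f}, p = {p_value:.3f}"
print(report)
```

The dfs printed here are the same ones the earlier cards compute by hand, so the output slots straight into the reporting format.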
What is the difference between repeated measures ANOVA and independent groups ANOVA?
repeated measures ANOVA = 3 or more conditions.
Independent groups ANOVA = 3 or more groups
In a repeated measures ANOVA if the data is non-parametric, what test should you use?
Friedman Test
Friedman Test is an extension of the ….
Wilcoxon Signed Ranks.
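The Friedman test is in SciPy as `scipy.stats.friedmanchisquare`, which takes one argument per condition (the data below are made-up scores for 4 participants across 3 conditions):

```python
from scipy import stats

# Non-parametric alternative to repeated measures ANOVA.
# Toy data: 4 participants, each measured in 3 conditions.
cond1 = [1, 2, 1, 3]
cond2 = [4, 5, 6, 5]
cond3 = [7, 8, 9, 8]

chi2_stat, p_value = stats.friedmanchisquare(cond1, cond2, cond3)
```

Like Kruskal-Wallis, it ranks the scores (here within each participant), so it suits non-parametric repeated measures data.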
What are the ANOVA advantages of a repeated measures study?
requires less participants = less expensive
scores for each participant are related
We can calculate and remove the variance due to individual differences :) - can’t do this for independent groups design.
What are the ANOVA disadvantages of repeated measures study?
Scores from the same participant are related rather than independent, so an extra assumption applies to them.
This extra assumption is called sphericity!
What extra assumption in repeated measures ANOVA do we need to consider?
Sphericity
What is sphericity?
RM
It assumes the variances of the differences between conditions are equal
What can a violation of sphericity do?
RM
Invalidate the F ratio
What test do we use to test for sphericity?
(RM)
Mauchly’s Test of Sphericity.
What does it mean when Mauchly’s test of sphericity is:
p >.05
(RM)
Sphericity is not violated and we can carry on!
What does it mean when Mauchly’s test of sphericity is:
p < .05
Sphericity is violated :(
What do we do when sphericity has been violated?
RM
Report the “Greenhouse-Geisser Epsilon Correction”
What does the Greenhouse Geisser do?
RM
Corrects the degrees of freedom, reducing the chance of making a type I error.
Apart from sphericity what is another issue with repeated measures designs?
Order effects!
performance on one condition can change performance on another condition
How are order effects resolved?
RM
BY COUNTERBALANCING
What is counterbalancing?
RM
Person 1 = 1 2 3
Person 2 = 2 1 3
Person 3 = 3 1 2
or by randomly presenting conditions across trials
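Full counterbalancing, like the Person 1/2/3 example above, just means covering the possible condition orders; a minimal sketch with the standard library (condition labels are made up):

```python
from itertools import permutations

# All possible orders of 3 conditions for full counterbalancing.
conditions = ["A", "B", "C"]
orders = list(permutations(conditions))
# 3! = 6 orders; spread participants across them so that no
# single order dominates and order effects cancel out.
```

With more conditions full counterbalancing grows factorially, which is why random presentation of conditions (the other option on the card) is often used instead.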
In repeated measures ANOVA- Effect of IV is found in the _____ participants variance.
within!
independent = between group variance
What is variance caused by?
Systematic error- effect of IV
Experimental Error -
individual differences and random error
In RM ANOVA what can we do with error variance?
We can split it in 2 because we can calculate (and remove) the variance due to individual differences, leaving only the random/residual error part!
What is the F ratio in RM?
systematic variance/ random error variance
We want F to be as ____ as possible for it to be significant.
Large
The smaller the random/residual error- the ____ the F ratio.
Bigger
How do you calculate df treatment?
number of conditions/groups- 1
How do you calculate df error?
no. of participants- number of conditions
How do you calculate the mean square?
sum of squares / its df (e.g. MS treatment = SS treatment / df treatment; MS error = SS error / df error)
How do you calculate the F ratio on the SPSS table?
mean square treatment / mean square error
What is an a priori comparison?
Comparison chosen before data has been collected.
What is a post hoc comparison?
Chosen by the experimenter after the data has been collected.
If we are making multiple comparisons, you need to control for…
Type I errors
What is another term for controlling type I errors?
controlling alpha inflation
What are the 2 ways to control for type I errors?
Alpha controlled per comparison (probability of making a Type I error on any given comparison) e.g. .05
Alpha controlled familywise (probability of making at least one Type I error on a set of comparisons
What is the chance of making at least one Type I error across 6 comparisons?
Familywise error = 1 − (1 − α)^6
1 − (0.95)^6 = .26
A 26% chance of making at least one type I error.
Before conducting Bonferroni-corrected t-tests, what do we not need to do?
We don't have to apply the Bonferroni correction ourselves (i.e. divide the significance value (.05) by the number of t-tests being conducted) BECAUSE SPSS does it for us :)
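If you did have to apply the Bonferroni correction by hand, it's a one-liner (the number of tests is just an example value):

```python
# Bonferroni correction by hand: divide alpha by the number of
# comparisons so the familywise error rate stays at about .05.
alpha = 0.05
n_tests = 3  # e.g. all pairwise t-tests for 3 groups
corrected_alpha = alpha / n_tests  # each test judged against ~.0167
```

Equivalently, some software multiplies each p value by the number of tests and compares it against the original .05; the two approaches are the same decision rule.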
What do post hoc comparisons allow?
all possible pairwise comparisons to be tested
When would you do a post hoc comparison?
When you get a SIGNIFICANT ANOVA
If we avoid type I error what’s likely?
More chance of type II error
For independent ANOVA what follow up test is used?
Tukey HSD Test
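Recent SciPy versions (1.7+) include a Tukey HSD implementation as `scipy.stats.tukey_hsd`; a minimal sketch with made-up toy data:

```python
from scipy import stats

# Tukey HSD follow-up after a significant independent ANOVA.
# Requires SciPy >= 1.7. Toy data, made up for illustration.
g1 = [3, 4, 5, 4]
g2 = [6, 7, 8, 7]
g3 = [9, 10, 11, 10]

result = stats.tukey_hsd(g1, g2, g3)
# result.pvalue is a 3x3 matrix of pairwise p-values:
# entry [i][j] compares group i with group j.
```

Each off-diagonal p value tells you whether that particular pair of group means differs, which is exactly the "where do the differences lie?" question the ANOVA itself leaves open.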
For repeated measures, what follow up test is used?
Bonferroni corrected tests