WEEK 4: One-Way Anova (Independent) Flashcards
Lecture Overview:
> From t-test to ANOVA
1-way ANOVA: a conceptual approach
Following up a significant ANOVA result
Learning Objectives:
- Be able to report the results of a 1 way ANOVA
- Understand and be able to explain why a Bonferroni correction is necessary
- Understand and be able to explain the concepts of ‘equal variance’ and ‘sphericity’
- Understand and be able to explain the difference between a within and between participants design from the perspective of the calculation used in each type of ANOVA
From T-Tests to ANOVAs
Why are ANOVAs necessary?
T-test:
1 IV with 2 conditions or groups
1) Music, 2) Silence
1) Caffeine, 2) Placebo
ANOVA:
1 Factor with more than 2 levels
1) Rock music, 2) Classical music, 3) Silence
1) Caffeine, 2) Alcohol, 3) Placebo
Until now you’ve been referring to an IV with 2 conditions or 2 groups. With an ANOVA the language changes a little and we now talk about a ‘Factor’ with a number of levels.
IV (e.g. music) = Factor; Conditions (e.g. rock) = Levels
ANOVAs are necessary because if we were to just use t-tests, multiple tests would have to be conducted. If we had 3 levels to test, we could treat the silence condition as a control and do 2 t-tests, or we could do 3 t-tests to also test classical music directly against rock music.
With a cut off of .05 (p value) as our criterion for ‘significance’ we are likely to see an effect that doesn’t represent the population in 1 out of every 20 cases (5% of the time) and we won’t know if that is the first comparison we do or the last!
This means…
If we were to complete 3 separate t-tests…
t test 1 = 5% chance of error
t test 2 = 5% chance of error
t test 3 = 5% chance of error
So each of our comparisons has a 5% chance of being erroneous, and because we are testing the same samples repeatedly, the chances of thinking we have an effect when we don’t accumulate: with 3 t-tests the familywise error rate is roughly 15% (exactly 1 − .95^3 ≈ 14%).
With 10 t-tests we would make a type I error roughly 40% of the time (1 − .95^10 ≈ .40; the additive ‘5% per test’ shortcut gives 50%).
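The additive ‘5% per test’ figure is an approximation; for independent tests the exact familywise rate is 1 − (1 − α)^k. A minimal sketch (the test counts are just the ones from the examples above):

```python
# Familywise Type I error rate for k independent tests at alpha = .05.
# The additive "5% per test" figure is an approximation; the exact
# rate for independent tests is 1 - (1 - alpha) ** k.
alpha = 0.05

for k in (1, 3, 10):
    exact = 1 - (1 - alpha) ** k
    print(f"{k} tests: additive approx {k * alpha:.0%}, exact {exact:.1%}")
```

The exact rate (about 14% for 3 tests, 40% for 10) stays below the additive figure, but is still far above the intended 5%.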
We fix this using bonferroni correction…
Bonferroni correction
If we divide the .05 by the number of comparisons that we do, we get ourselves back to a total of roughly .05 probability of making a type I error overall.
.05 / 3 ≈ .017
If you had 4 t tests what would you divide .05 by?… 4
What would the alpha be with 10 t-tests?… .05 / 10 = .005
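The division above can be sketched directly, using the comparison counts from the examples:

```python
# Bonferroni correction: divide the .05 criterion by the number of
# comparisons so the overall Type I error risk stays near .05.
alpha = 0.05

for n_comparisons in (3, 4, 10):
    corrected = alpha / n_comparisons
    print(f"{n_comparisons} comparisons -> corrected alpha = {corrected:.4f}")
```

This prints corrected criteria of .0167, .0125 and .0050 respectively.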
So we could just conduct a number of t-tests and lower our cut-off criterion for significance. That would be OK if you just had 3 t-tests to run, but if you had 9 or 10, for a start you’d end up with a criterion for significance that was extremely low, and secondly you could find that nothing is significant.
So to avoid having to run multiple t-tests, you can put all your conditions into one test that we call an ANOVA.
Why we use ANOVAs: Basic terms
Basically, without an ANOVA, the more conditions you have the more t-tests need to be conducted, so the more the criterion for accepting significance (usually a p value of .05) needs to be lowered, making it harder to say results are significant.
1-Way Independent ANOVA
Introduction
> The ANOVA outcome indicates whether there is a difference between any of your conditions
t tests are conducted following a significant ANOVA
If the ANOVA is not significant t tests are not conducted
This analysis will test all your variables against each other and tell you whether it is likely that any of your comparisons are significant
So if the result of the ANOVA is significant, some or all of your comparisons are likely to be significant, BUT if the ANOVA is not significant then your comparisons won’t be significant (once the number of comparisons is taken into account).
The aim of stats tests
Remember the aim of stats tests is to divide the effect by the error and produce a ratio that tells us whether the effect we have is bigger than the error
The ANOVA does exactly this
The ANOVA results in a calculation that takes the effect (the difference between the groups) and divides it by the error (the difference within the groups).
Variance Recap:
It is the variability of the data, how spread out the data is around a certain point.
Calculated by determining how much each score differs from the mean average of the sample, squaring each value, then adding them all up and dividing by the number of scores (squaring accounts for there being both negative and positive values)
- Dividing by n gives the variance in the sample (when using whole population)
- Dividing by n-1 gives an estimate of variance in the population when working with a sample of a population (mean square)
It is difficult to see how variance values relate to the measure you have (the dependent variable), so you take the square root in order to get back to the units you started with before squaring everything - this is the standard deviation
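The recap above can be written out step by step; the scores are invented purely for illustration:

```python
# Sample variance and standard deviation, following the recap:
# deviations from the mean, squared, summed, divided by n - 1
# (the population estimate), then square-rooted for the SD.
scores = [4.0, 7.0, 6.0, 3.0, 5.0]  # invented data

mean = sum(scores) / len(scores)
squared_devs = [(x - mean) ** 2 for x in scores]   # squaring removes the signs
variance = sum(squared_devs) / (len(scores) - 1)   # n - 1: estimate for the population
sd = variance ** 0.5                               # back in the DV's original units

print(mean, variance, sd)  # 5.0 2.5 1.58...
```

Dividing by n instead of n − 1 would give the variance of the sample itself rather than the population estimate.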
Independent One-Way ANOVA
Between groups variance
So with an ANOVA, variance is calculated in the same way but both between the groups and within the groups.
The ‘between groups variance’ is the difference between the groups, and we assume that this is due to the manipulation that we have applied to the groups (i.e. asking them to complete a task under different conditions).
We assume that any differences found here are due to the fact that Ps did different things in each of the groups
This is also known as the ‘treatment effect’
- the effect (difference in performance between groups) we expect to see as a result of manipulation of the IV/ factor
REMEMBER…
Elements of error in between participants design
If you have a between-participants design, your between-groups variance contains the treatment effect, but it also contains individual differences (because you have different people in the groups) and potential sampling error, which could result from sampling an unintended population or simply from the fact that we use a sampling methodology at all.
Between group variance
Calculation
- You calculate the grand mean by working out the mean of the group means (add up the three group/condition means and divide by 3)
- The between-groups variance is based on the difference between each group mean and the grand mean
There is another form of variance in an ANOVA which is referred to as the within group variance.
Within group variance
Calculation
- This is the variance that you have between the scores of each participant in each of your groups and the group mean.
In your within groups variance you also have individual differences and potential sampling error.
The F-ratio for an Independent ANOVA
Calculation:
So in the case of our independent ANOVA we’ve got the treatment effect, individual diffs and any sampling error in our between groups variance
And we’ve got individual diffs & sampling error in our within groups variance also.
We therefore have individual diffs and error included in both sets of variance but the between groups variance also contains the treatment effect
F = between-groups variance / within-groups variance
F = (Treatment effect + Ind. diff. + Error variance) / (Ind. diff. + Error variance)
SO, if there is a treatment effect, the sum of the factors on top should be greater than the bottom; if the value after dividing is less than one, there is more error than effect.
So like the t-value in t-tests, if the F-value is less than one it’s telling you that you have more error than effect.
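The whole calculation can be sketched by hand for a tiny invented data set (three groups of three scores; the group labels are hypothetical):

```python
# Independent one-way ANOVA F-ratio by hand: between-groups variance
# (treatment effect + error) over within-groups variance (error only).
groups = [
    [4.0, 5.0, 6.0],  # e.g. rock (invented data)
    [7.0, 8.0, 9.0],  # e.g. classical
    [4.0, 6.0, 5.0],  # e.g. silence
]

n_total = sum(len(g) for g in groups)
grand_mean = sum(sum(g) for g in groups) / n_total

# Between groups: each group mean's deviation from the grand mean
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
df_between = len(groups) - 1

# Within groups: each score's deviation from its own group mean
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
df_within = n_total - len(groups)

f_ratio = (ss_between / df_between) / (ss_within / df_within)
print(f"F({df_between}, {df_within}) = {f_ratio:.2f}")  # F(2, 6) = 9.00
```

Here the between-groups variance (9.0) is nine times the within-groups variance (1.0), so F is well above 1: more effect than error.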
All of these values can in fact be found on our output (“ANOVA”)
Reporting an ANOVA
F(between df, within df) = [F value], p = [p value]
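As a sketch of producing that report string in practice, assuming SciPy is available (`stats.f_oneway` is SciPy's one-way independent ANOVA; the data are invented):

```python
# One-way independent ANOVA with SciPy, reported in the
# F(between df, within df) = [F value], p = [p value] format.
from scipy import stats

rock      = [4, 5, 6, 5, 4]   # invented data
classical = [7, 8, 9, 8, 7]
silence   = [4, 6, 5, 5, 6]

f_value, p_value = stats.f_oneway(rock, classical, silence)

k, n = 3, 15                  # number of groups, total participants
df_between = k - 1            # between df = k - 1
df_within = n - k             # within df = N - k
print(f"F({df_between}, {df_within}) = {f_value:.2f}, p = {p_value:.3f}")
```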
What is the 1-way Independent ANOVA doing?
> Tests the null hypothesis that all groups are the same - no difference between groups, no effect in population
Omnibus test (another name for it, because it’s a test of a lot of things at once)
- Is there an overall effect? (more effect than error)
- A significant result (low p-value, F-value greater than 1) indicates a low probability that these differences would be observed if there were no effect in the population
Doesn’t tell us where the differences come from…
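Following up, as described earlier, means pairwise t-tests judged against a Bonferroni-corrected criterion. A sketch assuming SciPy is available (group names and scores are invented):

```python
# Post-hoc pairwise independent t-tests after a significant ANOVA,
# judged against a Bonferroni-corrected alpha.
from itertools import combinations
from scipy import stats

groups = {
    "rock":      [12, 15, 14, 10, 13],  # invented data
    "classical": [18, 20, 17, 19, 16],
    "silence":   [11, 9, 12, 10, 8],
}

pairs = list(combinations(groups, 2))
corrected_alpha = 0.05 / len(pairs)  # 3 comparisons -> ~.0167

for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    verdict = "significant" if p < corrected_alpha else "not significant"
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4f} ({verdict})")
```

Only the comparisons that survive the corrected criterion tell you where the overall ANOVA effect comes from.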