Week 2 ANOVA (Textbooks) Flashcards
To supplement the lecture content and underpin our knowledge of ANOVA. First slides from Andy Field, Pallant, and G & W.
Rather than run an ANOVA, why don’t we simply use multiple t-tests when we have more than 2 groups/conditions?
- Because of an inflated Type I error rate: each time we run a test, the probability of falsely rejecting the null hypothesis is 5%; across the 3 t-tests needed for 3 groups this rises to 14.3%
- ANOVA holds the risk of a Type I error at 5%
- With 5 conditions rather than 3, the error rate rises to about 40% across the 10 pairwise t-tests!
What is the name for this error rate across statistical tests conducted on the same experimental data?
*Familywise error rate
or
*Experimentwise error rate
What is the equation for the familywise error rate?
familywise error = 1 - (0.95)^n
n = number of tests (pairwise comparisons) conducted
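To sanity-check the figures quoted above, here is a minimal Python sketch (the helper name familywise_error and the group counts are just illustrative):

```python
from math import comb

def familywise_error(n_tests, alpha=0.05):
    """Probability of at least one Type I error across n_tests independent tests."""
    return 1 - (1 - alpha) ** n_tests

# 3 groups -> 3 pairwise t-tests; 5 groups -> 10 pairwise t-tests
for k in (3, 5):
    n_tests = comb(k, 2)
    print(f"{k} groups, {n_tests} t-tests: familywise error = {familywise_error(n_tests):.3f}")
```

This reproduces the 14.3% and roughly 40% figures from the card above.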
In a nutshell, what does ANOVA do?
- ANOVA tells us whether 3 or more means are the same
- so it tests the null hypothesis that all group means are equal
- ANOVA tests for an overall experimental effect
- ANOVA is a special type of regression
The F ratio of ANOVA tells us if there is an overall experimental effect, what next?
Post hoc tests, such as Tukey’s HSD, tell us where the significance lies
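As an illustration, a Tukey HSD post hoc test can be run in Python with statsmodels; a minimal sketch, assuming invented scores and group labels:

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical scores for three conditions (illustrative data only)
scores = np.array([4, 5, 6, 5, 4,    # control
                   7, 8, 6, 9, 8,    # treatment A
                   5, 6, 5, 7, 6])   # treatment B
groups = np.repeat(['control', 'treat_a', 'treat_b'], 5)

# Tukey's HSD compares every pair of groups while holding the
# familywise Type I error rate at alpha = .05
result = pairwise_tukeyhsd(endog=scores, groups=groups, alpha=0.05)
print(result.summary())
```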
So, what does ANOVA do?
ANOVA compares the ratio of systematic variance to unsystematic variance in an experimental study (using the F ratio)
*The F ratio assesses how well a regression model can predict an outcome compared to the error within that study
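For a concrete picture, a one-way ANOVA returning this F ratio can be run with scipy; a minimal sketch using made-up data for three groups:

```python
from scipy import stats

# Hypothetical scores for three independent groups (illustrative only)
control = [4, 5, 6, 5, 4]
treat_a = [7, 8, 6, 9, 8]
treat_b = [5, 6, 5, 7, 6]

# F = systematic (between-groups) variance / unsystematic (within-groups) variance
f_stat, p_value = stats.f_oneway(control, treat_a, treat_b)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```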
Why is it important to have a control group in social science experiments?
- They act as a baseline (reference point) for the other groups
- When comparing results, we can see how different each treatment condition is from the control/baseline group, to determine whether that treatment is effective
What are Andy’s wise words regarding the difference between balanced (equal) and unbalanced (unequal) groups when running ANOVA’s?
*In unbalanced designs, it is important to have a fairly large number of cases to ensure that the estimates of the regression coefficients are reliable.
Post hoc tests are designed to reduce the likelihood of a type 1 error rate, but what is the downside according to Pallant?
With post hoc tests the approach is stricter, making it more difficult to obtain statistically significant differences
How do G & W partition the degrees of freedom for an independent measures ANOVA?
df total = N - 1
df between = k - 1
df within treatments = N - k
(where N = total number of scores and k = number of treatment conditions; df between + df within = df total)
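A tiny worked example of the partition (the values of k and n per group are arbitrary):

```python
# Hypothetical design: k = 3 treatment conditions, 10 participants each
k, n_per_group = 3, 10
N = k * n_per_group            # 30 scores in total

df_total = N - 1               # 29
df_between = k - 1             # 2
df_within = N - k              # 27

# The two components always add back up to the total
assert df_total == df_between + df_within
print(df_total, df_between, df_within)
```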
According to G & W what do SSbetween & MSwithin provide?
- SSbetween provides a measure of how much difference there is between the treatment conditions
- MSwithin provides a measure of how much difference is expected simply by chance (the variability within each treatment); the F ratio compares the two
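A minimal sketch of computing these quantities by hand from the standard formulas (the scores below are invented for illustration; in practice they would come from the data):

```python
import numpy as np

# Hypothetical scores for three treatment conditions (illustrative only)
groups = [np.array([4, 5, 6, 5, 4]),
          np.array([7, 8, 6, 9, 8]),
          np.array([5, 6, 5, 7, 6])]

all_scores = np.concatenate(groups)
N, k = all_scores.size, len(groups)
grand_mean = all_scores.mean()

# Between treatments: how far each group mean sits from the grand mean
ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
# Within treatments: variability of scores around their own group mean
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

ms_between = ss_between / (k - 1)   # df between = k - 1
ms_within = ss_within / (N - k)     # df within  = N - k

F = ms_between / ms_within
print(f"SS_between = {ss_between:.2f}, MS_within = {ms_within:.2f}, F = {F:.2f}")
```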
What are the H0 and H1 for ANOVA?
- The Null hypothesis (H0) is that in the general population there are no mean differences among the treatments
- The H1 (Experimental hypothesis) is that at least one mean is different from another
What are the effect sizes for Cohen’s d?
- d = 0.2: Small effect (mean difference around 0.2 standard deviations)
- d = 0.5: Medium effect (mean difference around 0.5 standard deviations)
- d = 0.8: Large effect (mean difference around 0.8 standard deviations)
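A short sketch of computing Cohen’s d for two groups using the pooled standard deviation (the function name cohens_d and the sample scores are my own illustrative choices, not from the textbook):

```python
import numpy as np

def cohens_d(x, y):
    """Standardised mean difference using the pooled standard deviation."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = x.size, y.size
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

print(cohens_d([7, 8, 6, 9, 8], [4, 5, 6, 5, 4]))
```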
When is ANOVA considered to be robust?
When n is large (20-30+ in each group)
Violations of the normality assumption then have little effect on its accuracy (p. 93)
Characteristics of large sample sizes in ANOVA
With large and equal groups, ANOVA is also robust against heterogeneous variances (p. 93)