Mod 11, Hypothesis Testing 3 Samples/ ANOVA Flashcards
ANOVAS ON MULTIPLE GROUPS DESIGN
One-way because we’re splitting people based on one predictor with multiple levels
One IV with three or more levels, and a continuous DV
Basically just an extension of the two-group design
Still comparing the means of 3 or more groups to see whether they differ
Running ANOVA on two groups
You can run an ANOVA with two groups and you’ll get the same result as a t-test
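A quick way to check this (a minimal sketch using SciPy; the two groups here are made-up example scores):

```python
from scipy import stats

# Two made-up groups of scores (illustration only)
group_a = [4, 5, 6, 5, 7]
group_b = [6, 7, 8, 7, 9]

t_stat, t_p = stats.ttest_ind(group_a, group_b)  # independent-samples t-test
f_stat, f_p = stats.f_oneway(group_a, group_b)   # one-way ANOVA on the same two groups

print(t_stat**2, f_stat)  # the F value equals t squared
print(t_p, f_p)           # and the p-values match
```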
General H0 and Ha in ANOVA
Null hypothesis (H0): all population means are equal
Alternative hypothesis (Ha): at least one of the means is different from the others
Important consideration when comparing 3 different means
Need to consider that there are multiple reasons the averages could differ from each other: there is variation within each group that makes it hard to tell whether there is an effect, and there is also variation between groups. WHEN DOING AN ANOVA, INSTEAD OF LOOKING AT DIFFERENCES BETWEEN PAIRS OF MEANS, WE LOOK AT THE OVERALL MEAN AND COMPARE EACH GROUP’S MEAN TO THAT
Grand mean
The overall mean of all scores; in an ANOVA each group’s mean is compared to this
ANOVA as a Ratio
ANOVA is a ratio: it compares how much the sample means differ from the overall (grand) mean to how much individual scores differ from their own sample means (on average)
If there is more between-groups than within-groups variability, we can reject the null hypothesis (that the samples all come from the same population) and conclude the samples are NOT all from the same population
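A minimal sketch of that ratio computed by hand (the three groups are made-up numbers, just to show the between-to-within comparison):

```python
import numpy as np

# Three made-up groups of scores
groups = [np.array([4, 5, 6, 5]),
          np.array([6, 7, 8, 7]),
          np.array([5, 5, 6, 6])]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()  # the grand mean

# Between-groups: how far each group mean sits from the grand mean
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)

# Within-groups: how far individual scores sit from their own group mean
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_between = len(groups) - 1               # k - 1
df_within = len(all_scores) - len(groups)  # N - k

F = (ss_between / df_between) / (ss_within / df_within)
print(F)  # large F = more between-groups than within-groups variability
```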
F-Statistic
F = between-groups variability / within-groups variability
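Written out with the usual mean-square notation (standard ANOVA notation, where k is the number of groups and N the total number of scores; not from the card itself):

$$F = \frac{MS_{\text{between}}}{MS_{\text{within}}} = \frac{SS_{\text{between}} / (k - 1)}{SS_{\text{within}} / (N - k)}$$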
Between group variability vs within group variability
Between-groups variability: variability due to the IV/YOUR MANIPULATION plus variability due to error (measurement error, etc.); still need to acknowledge there is error there too
Within-groups variability: variability due to error (measurement error) only
Assumptions about error in between and within groups
Assumptions: assuming no confounds, the error contribution to between-groups and within-groups variability should be about the same
F stat when the IV has had no effect and why
If the IV has no effect, our F statistic will be approximately 1 (the between-groups error and within-groups error are very similar, and dividing a number by itself gives 1)
If the F value is close to 1: very little evidence that the IV caused an effect
Higher F: stronger evidence that the IV caused an effect
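A small simulation to illustrate this (all groups are drawn from the same population, so the IV has no effect by construction; the numbers are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

f_values = []
for _ in range(5000):
    # Three groups sampled from the SAME population (no effect of the IV)
    groups = [rng.normal(loc=50, scale=10, size=20) for _ in range(3)]
    f_stat, _ = stats.f_oneway(*groups)
    f_values.append(f_stat)

print(np.mean(f_values))  # close to 1 (slightly above, since F can't go below 0)
```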
Assumptions of the F statistic
ASSUMPTIONS: samples are independent of each other; each sample comes from a normally distributed population; each population has the same variance (homogeneity of variance). The F test is fairly robust against violations of homogeneity of variance
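One common way to check the equal-variance assumption is Levene's test (a sketch with made-up groups, using scipy.stats.levene):

```python
from scipy import stats

group_a = [4, 5, 6, 5, 7]
group_b = [6, 7, 8, 7, 9]
group_c = [5, 5, 6, 6, 7]

# Levene's test: null hypothesis is that all groups have equal variances
stat, p = stats.levene(group_a, group_b, group_c)
print(stat, p)  # a small p-value would suggest the homogeneity assumption is violated
```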
The Tukey Procedure
Tells us which population means are significantly different: ONLY DONE WHEN WE HAVE A SIGNIFICANT F TEST
Done after rejection of equal means in ANOVA
Allows pairwise comparisons: compares each absolute mean difference to a critical range
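A sketch of running the Tukey procedure after a significant F test, using statsmodels' pairwise_tukeyhsd (scores and labels are made up for illustration):

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Made-up scores and their group labels
scores = np.array([4, 5, 6, 5, 7,  6, 7, 8, 7, 9,  5, 5, 6, 6, 7])
labels = ['a'] * 5 + ['b'] * 5 + ['c'] * 5

# Compares every pair of group means against the Tukey critical range
result = pairwise_tukeyhsd(endog=scores, groups=labels, alpha=0.05)
print(result.summary())  # one row per pair: mean difference, adjusted p, reject yes/no
```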
Q Value
Q = critical value from the Studentized Range Distribution, using the denominator (within-groups) degrees of freedom (dfD)
k
number of groups in a set (the number of groups being compared)
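A sketch of looking up Q, assuming a recent SciPy (scipy.stats.studentized_range, available from SciPy 1.7 on); the k and dfD values here are just examples:

```python
from scipy.stats import studentized_range

k = 3      # number of groups
df_d = 12  # denominator (within-groups) degrees of freedom

# Critical Q at alpha = .05 from the Studentized Range Distribution
q_crit = studentized_range.ppf(0.95, k, df_d)
print(q_crit)
```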
Conservative vs Liberal tests
Conservative tests: more likely to miss an effect that is there (more likely to have a Type II error), but less likely to have a Type I error
Liberal tests: less likely to miss an effect that is there, but more likely to have a Type I error
Fisher’s Least Significant Difference (LSD): MOST LIBERAL
Tukey: moderately liberal
Bonferroni: moderately conservative
Scheffe: extremely conservative
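As a sketch of how a conservative correction works, here is a Bonferroni adjustment applied to pairwise t-tests (made-up data; the adjustment simply multiplies each p-value by the number of comparisons):

```python
from itertools import combinations
from scipy import stats

groups = {
    'a': [4, 5, 6, 5, 7],
    'b': [6, 7, 8, 7, 9],
    'c': [5, 5, 6, 6, 7],
}

pairs = list(combinations(groups, 2))  # all pairwise comparisons
n_comparisons = len(pairs)

for g1, g2 in pairs:
    _, p = stats.ttest_ind(groups[g1], groups[g2])
    p_bonf = min(p * n_comparisons, 1.0)  # Bonferroni: multiply p by number of tests
    print(g1, g2, round(p, 4), round(p_bonf, 4))
```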