ANOVA Flashcards
When would you run a one-way ANOVA or a repeated measures ANOVA, and what does ANOVA stand for?
- assessment of differences among 3 or more independent groups
- assessment of 3 or more data sets from the same participant at different times, under different conditions.
- analysis of variance
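A minimal sketch of running a one-way ANOVA in Python with scipy (the package and the scores below are not part of these cards; the data are made up for illustration):

```python
# One-way ANOVA on three independent groups (made-up scores).
from scipy import stats

group_a = [4, 5, 6, 5, 7]
group_b = [6, 7, 8, 7, 9]
group_c = [9, 9, 10, 8, 11]

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> the group means differ somewhere
```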
Why would you use an ANOVA rather than several t-tests / what are the disadvantages of using several t-tests?
- an ANOVA can test several independent variables against a dependent variable in a single analysis - a t-test can't
- running several t-tests inflates the Type I error rate - rejecting the null hypothesis when it is actually true
- the familywise error rate increases as a function of the number of tests (see the sketch below)
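A quick illustration of how the familywise error rate grows with the number of tests, assuming each test is run at α = 0.05:

```python
# Familywise error rate for m independent tests each run at alpha = 0.05:
# P(at least one Type I error) = 1 - (1 - alpha)^m.
alpha = 0.05
for m in (1, 3, 6, 10):   # e.g. 3 pairwise t-tests for 3 groups, 6 for 4 groups
    fwer = 1 - (1 - alpha) ** m
    print(f"{m:2d} tests -> familywise error rate = {fwer:.3f}")
# Even 3 tests push the error rate to about 0.14 instead of 0.05.
```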
Considering the theory of ANOVA, what do SSt, SSm and SSr stand for?
- total sum of squares (SSt) - the total variability in the scores (each score's deviation from the grand mean)
- model sum of squares (SSm) - how much of that variability is explained by the model we fit to the data
- residual sum of squares (SSr) - how much variability cannot be explained by the model / residual error
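A small sketch (made-up scores; numpy is an assumption, not part of these cards) showing how SSt partitions into SSm + SSr:

```python
import numpy as np

groups = [np.array([4, 5, 6, 5, 7]),
          np.array([6, 7, 8, 7, 9]),
          np.array([9, 9, 10, 8, 11])]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()

# SSt: variability of every score around the grand mean.
SSt = ((all_scores - grand_mean) ** 2).sum()

# SSm: variability of the group means around the grand mean, weighted by group size.
SSm = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)

# SSr: variability of each score around its own group mean.
SSr = sum(((g - g.mean()) ** 2).sum() for g in groups)

print(SSt, SSm, SSr, np.isclose(SSt, SSm + SSr))  # SSt = SSm + SSr
```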
How does an ANOVA test determine if there are significant differences between groups?
- compares variances using an F-ratio
- the F-ratio = MSm / MSr
- it splits the variance into what can be explained by the model (the experimental manipulation) and the residual variance / error (what cannot be explained by the model)
- the F-ratio is assessed against a critical value based on the degrees of freedom
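A minimal sketch of forming the F-ratio and checking it against the critical value with scipy; the sums of squares and group sizes here are assumed numbers for illustration:

```python
from scipy import stats

SSm, SSr = 40.0, 24.0            # assumed sums of squares for illustration
k, N = 3, 15                     # 3 groups, 15 scores in total

df_model, df_error = k - 1, N - k
MSm = SSm / df_model             # variance explained by the model
MSr = SSr / df_error             # residual (error) variance
F = MSm / MSr                    # F-ratio

F_crit = stats.f.ppf(0.95, df_model, df_error)   # critical value at alpha = 0.05
p = stats.f.sf(F, df_model, df_error)            # p-value for the obtained F
print(f"F = {F:.2f}, critical F = {F_crit:.2f}, p = {p:.4f}")  # F > F_crit -> significant
```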
What are mean squares (mean squared differences)?
- sums of squares divided by their degrees of freedom, i.e. corrected for the number of observations
- mean squares are used rather than raw sums of squares to remove the effect of group size and to account for the different number of scores in each group
What does an F-ratio of less than 1 mean?
- it can never be significant, as the model explains less variance than the error/residual variance
- the F-test compares the obtained F-ratio with a critical F-ratio
What does an ANOVA tell us?
- tests the null hypothesis (the means are the same) against the experimental hypothesis (the means differ)
- it is an omnibus test - it tests for an overall difference between groups
- it tells us the group means differ but doesn't tell us exactly which means differ
What are the assumptions for carrying out a one-way ANOVA?
- normal distribution in the population (Kolmogorov-Smirnov test if n > 50, Shapiro-Wilk if n < 50)
- homogeneity of variance
- scores in various groups are independent
- data measured at interval or ratio level
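A hedged sketch of checking the normality assumption per group with scipy (the scores are made up; which test applies depends on the sample size, as above):

```python
import numpy as np
from scipy import stats

# Small sample (n < 50): Shapiro-Wilk.
group_a = [4, 5, 6, 5, 7, 6, 5, 4, 6, 7]
w_stat, p_value = stats.shapiro(group_a)
print(f"Shapiro-Wilk W = {w_stat:.3f}, p = {p_value:.3f}")  # p > 0.05 -> no evidence against normality

# Larger sample (n > 50): Kolmogorov-Smirnov against a normal distribution
# with the sample's own mean and SD (illustrative only).
big_sample = np.random.default_rng(0).normal(10, 2, size=100)
ks_stat, ks_p = stats.kstest(big_sample, "norm",
                             args=(big_sample.mean(), big_sample.std(ddof=1)))
print(f"K-S D = {ks_stat:.3f}, p = {ks_p:.3f}")
```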
What is homogeneity of variance and how would you test for it?
- the variability in the samples is similar or equal
- rule of thumb - the largest sample SD should not be more than twice the smallest sample SD
- tested with Levene's test - if p > 0.05 we can assume homogeneity of variance
- if not, we should consider a data transformation or a non-parametric test (sketch below)
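A minimal sketch of Levene's test with scipy (made-up scores):

```python
from scipy import stats

group_a = [4, 5, 6, 5, 7]
group_b = [6, 7, 8, 7, 9]
group_c = [9, 9, 10, 8, 11]

stat, p_value = stats.levene(group_a, group_b, group_c)
print(f"Levene statistic = {stat:.3f}, p = {p_value:.3f}")
# p > 0.05 -> homogeneity of variance can be assumed;
# p < 0.05 -> consider a data transformation or a non-parametric test.
```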
What are the non-parametric alternatives for the one-way ANOVA when homogeneity of variance cannot be assumed, and for the repeated measures ANOVA when sphericity cannot be assumed?
- Kruskal-Wallis H test
- Friedman's test
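A minimal sketch of both non-parametric alternatives using scipy (made-up scores):

```python
from scipy import stats

# Kruskal-Wallis H test: 3+ independent groups (alternative to the one-way ANOVA).
h_stat, p_kw = stats.kruskal([4, 5, 6, 5], [6, 7, 8, 7], [9, 9, 10, 8])
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_kw:.4f}")

# Friedman test: 3+ related conditions from the same participants
# (alternative to the repeated measures ANOVA); each list is one condition.
chi2, p_fr = stats.friedmanchisquare([8, 9, 7, 10], [6, 7, 7, 8], [5, 4, 6, 5])
print(f"Friedman chi-square = {chi2:.2f}, p = {p_fr:.4f}")
```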
How does the Bonferroni correction control for familywise error?
- recalculates the critical alpha value as α/n
- n is the number of pairwise comparisons - the alpha value is adjusted in accordance with the number of comparisons
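A quick worked example of the correction (numbers assumed):

```python
# Bonferroni correction: divide alpha by the number of pairwise comparisons.
alpha = 0.05
k = 3                              # number of groups
n_comparisons = k * (k - 1) // 2   # 3 pairwise comparisons for 3 groups
corrected_alpha = alpha / n_comparisons
print(corrected_alpha)             # 0.0167 - each follow-up test must beat this stricter threshold
```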
What are the benefits of using a repeated measures design?
- improved sensitivity - unsystematic variance is reduced, so the design is more sensitive to experimental effects
- economy - fewer participants are needed
What are the assumptions for a repeated measures ANOVA?
- the same as for a one-way ANOVA
- sphericity must be met - homogeneity of variance and covariance
- sphericity assumes that the variances of the differences between conditions are equal
How do you test for sphericity?
- Mauchly's test
- sphericity is met if p > 0.05 - read the "sphericity assumed" results
- if p < 0.05 sphericity is violated and a corrected value from the Greenhouse-Geisser or Huynh-Feldt rows should be read (these adjust the DF to reduce the risk of a Type I error)
What are the two degrees of freedom for ANOVA tests?
- DFmodel is the number of groups minus 1 (DFmodel = k - 1)
- DFerror is the number of participants/scores minus the number of groups (DFerror = N - k)
- to obtain the mean squares for between and within groups, divide each sum of squares by its degrees of freedom
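A short worked example of the two DF values and the resulting mean squares (all numbers assumed for illustration):

```python
# 4 groups with 10 scores each.
k = 4                      # number of groups
N = 4 * 10                 # total number of scores
df_model = k - 1           # 3
df_error = N - k           # 36

SSm, SSr = 30.0, 72.0      # assumed sums of squares
MSm = SSm / df_model       # 10.0  (between-groups mean square)
MSr = SSr / df_error       # 2.0   (within-groups / error mean square)
print(df_model, df_error, MSm, MSr)
```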