ANOVA Flashcards
What is ANOVA?
Also known as Analysis of Variance.
* It is used when there are more than two sample populations on which statistics have to be performed.
* When there are more than two groups, this method has a practical advantage over the t test.
* In the context of ANOVA, the independent variable is termed a 'factor'.
* It gives greater flexibility in designing experiments.
Multiple t tests vs. one-way ANOVA
ANOVA tests k groups (typically 3 or more) together in one model. Thus, no matter how many different means are being compared, ANOVA
uses one test with one alpha level to evaluate the mean differences, and
thereby avoids the problem of an inflated experiment-wise alpha level.
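A minimal sketch of the contrast, assuming three small hypothetical groups and using scipy for both procedures: the ANOVA is a single test with a single alpha level, while the pairwise approach needs a separate t test (and a separate chance of a Type I error) for every pair of groups.

```python
from itertools import combinations
from scipy import stats

# Hypothetical scores for three groups (assumed data, for illustration only).
groups = {
    "A": [3, 5, 4, 6, 5],
    "B": [6, 7, 5, 8, 7],
    "C": [4, 6, 5, 5, 6],
}

# One-way ANOVA: one test, one alpha level, no matter how many means.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")

# Multiple t tests: one test per pair of groups, each carrying its own alpha.
for (name_a, data_a), (name_b, data_b) in combinations(groups.items(), 2):
    t_stat, p_t = stats.ttest_ind(data_a, data_b)
    print(f"t test {name_a} vs {name_b}: t = {t_stat:.2f}, p = {p_t:.3f}")
```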
Increase of type 1 error
Each test carries a risk of a Type I error, and the more tests you do, the greater that risk becomes.
This makes it more likely to reject the null hypothesis and appear to detect an effect of the IV where
there isn't one.
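A rough simulation sketch (assumed normal data, per-test α = .05, and a true null: all three groups drawn from the same population) showing how often at least one of the three pairwise t tests rejects purely by chance:

```python
from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n_reps, false_alarms = 0.05, 5000, 0

for _ in range(n_reps):
    # Three groups from the SAME population, so any "effect" is a Type I error.
    groups = [rng.normal(loc=50, scale=10, size=20) for _ in range(3)]
    p_values = [stats.ttest_ind(a, b).pvalue for a, b in combinations(groups, 2)]
    if min(p_values) < alpha:  # any pairwise rejection counts as an experimentwise error
        false_alarms += 1

# Noticeably larger than the per-test alpha of .05.
print(f"Estimated experimentwise Type I error: {false_alarms / n_reps:.3f}")
```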
Experimentwise alpha
The probability that an experiment will produce any Type I error is called the experimentwise alpha (αEW).
When will the experimentwise alpha be larger than the per-test alpha?
When t-tests are performed freely in a multi-group experiment, the experimentwise alpha (αEW) will be larger than the alpha used for each t test.
When will the experimentwise alpha increase?
It will increase as the number of groups increases, because each additional group adds more opportunities to make a Type I error.
Experimentwise alpha = ?
αEW = 1 - (1 - α)^j
* α = significance level chosen for each individual test.
* j = number of separate tests (comparisons) performed.
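Worked example (assuming α = .05 and j = 3 separate tests): αEW = 1 - (1 - .05)^3 = 1 - .857 ≈ .14, almost three times the per-test alpha.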
Variables in a one-way ANOVA
- One categorical independent or quasi-independent variable (technical name: factor)
with at least two independent groups (technical name: levels).
- One DV: a continuous variable (e.g., achievement test scores).
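A minimal sketch of this layout with hypothetical names and values (a "teaching_method" factor with three levels and a continuous "score" DV), using pandas only for illustration:

```python
import pandas as pd

# One factor ("teaching_method") with three levels; one continuous DV ("score").
data = pd.DataFrame({
    "teaching_method": ["lecture"] * 3 + ["online"] * 3 + ["hybrid"] * 3,
    "score":           [72, 68, 75,      80, 77, 83,      74, 79, 76],
})
print(data.groupby("teaching_method")["score"].mean())
```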
Null Hypothesis
There are no differences among the populations (or
treatments). The observed differences among the sample means are caused
by random, unsystematic factors (sampling error) that differentiate one
sample from another.
Alternative hypothesis
The populations (or treatments) really do have
different means, and these population mean differences are responsible for
causing systematic differences among the sample means.
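In symbols, for k treatment conditions: H0: μ1 = μ2 = ... = μk, while H1 states that at least one population mean differs from the others.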
Between-treatments variance measures
- Systematic treatment effects
- Random unsystematic factors
Within-treatments variance measures
Differences caused by random, unsystematic factors only.
The two basic components of the analysis process
Between-treatments variance and within-treatments variance.
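The F ratio sets these two components against each other: F = MS_between / MS_within, where each mean square is the corresponding SS divided by its degrees of freedom (df_between = k - 1, df_within = N - k).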
When the F ratio is 1
there are no systematic treatment effects; the between-treatments and within-treatments variances both reflect only random, unsystematic differences.
When the F ratio numerator is greater than the denominator
When the treatment does have an effect; the systematic treatment differences add to the between-treatments variance (numerator) but not to the within-treatments variance (denominator).
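A minimal sketch (hypothetical data, with group B deliberately shifted upward) computing the two variance estimates by hand; the systematic shift inflates the numerator, so F comes out well above 1. The scipy call at the end is only a cross-check.

```python
import numpy as np
from scipy import stats

groups = [np.array([4, 5, 3, 5, 4]),   # treatment A
          np.array([8, 9, 7, 9, 8]),   # treatment B (shifted up: a real effect)
          np.array([5, 6, 4, 6, 5])]   # treatment C

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()
k, N = len(groups), all_scores.size

# Between-treatments: systematic treatment effects + random factors.
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# Within-treatments: random, unsystematic factors only.
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

ms_between = ss_between / (k - 1)
ms_within = ss_within / (N - k)
print("F =", ms_between / ms_within)                    # about 31 here
print("scipy F =", stats.f_oneway(*groups).statistic)   # should match
```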
SSB
* The sum of the squared differences between each group mean and the grand mean, with each squared difference weighted by the number of scores in its group.
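Worked example (hypothetical, equal n = 5 per group): group means of 4, 8 and 6 with a grand mean of 6 give SSB = 5(4 - 6)^2 + 5(8 - 6)^2 + 5(6 - 6)^2 = 20 + 20 + 0 = 40.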