ANOVA (Analysis of Variance) Flashcards
Assumptions underlying analysis of variance
- The measure taken is on an interval or ratio scale.
- The populations are normally distributed.
- The variances of the compared populations are the same (homogeneity of variance).
- The estimates of the population variance are independent.
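The normality and homogeneity-of-variance assumptions above can be checked on sample data before running an ANOVA. A minimal sketch using standard scipy tests (Shapiro-Wilk for normality, Levene's test for equal variances); the group scores are made-up illustrative numbers, not data from these flashcards:

```python
# Sketch: checking two ANOVA assumptions with scipy.
# The scores below are invented for illustration.
from scipy import stats

group_a = [4.1, 5.0, 4.8, 5.3, 4.6, 5.1, 4.9, 5.2]
group_b = [5.9, 6.4, 6.1, 5.7, 6.3, 6.0, 5.8, 6.2]

# Shapiro-Wilk: the null hypothesis is that the sample comes from a
# normal distribution, so a large p-value is consistent with normality.
_, p_norm_a = stats.shapiro(group_a)
_, p_norm_b = stats.shapiro(group_b)

# Levene's test: the null hypothesis is that the groups have equal
# variances (homogeneity of variance).
_, p_equal_var = stats.levene(group_a, group_b)
```

Large p-values on all three tests would give no reason to doubt the assumptions for these samples.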
ANOVA
Analysis of variance uses the
ratio of two sources of variability to test the null hypothesis
•Between group variability estimates both experimental error and treatment effects
•Within subjects variability estimates experimental error alone
•The assumptions that underlie this technique follow directly from this F-ratio.
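The F-ratio described above can be computed by hand from the two sources of variability. A minimal sketch for three equal-sized groups of made-up scores (not data from these flashcards), checked against scipy's built-in one-way ANOVA:

```python
# Sketch: the F-ratio as between groups variability over within
# groups variability. The scores are invented for illustration.
import numpy as np
from scipy import stats

groups = [np.array([4.0, 5.0, 6.0, 5.5, 4.5]),
          np.array([6.0, 7.0, 6.5, 7.5, 6.0]),
          np.array([5.0, 5.5, 6.0, 4.5, 5.0])]

k = len(groups)                      # number of treatment levels
n = sum(len(g) for g in groups)      # total number of scores
grand_mean = np.concatenate(groups).mean()

# Between groups: variability of treatment means around the grand mean
# (estimates experimental error + treatment effects).
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ms_between = ss_between / (k - 1)

# Within groups: variability of subjects treated alike
# (estimates experimental error alone).
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
ms_within = ss_within / (n - k)

F = ms_between / ms_within
F_scipy, _ = stats.f_oneway(*groups)  # same ratio via scipy
```

The hand-computed F agrees with `scipy.stats.f_oneway`, confirming the ratio is exactly the one described on this card.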
Variability and averages
Graph 1: peaks in the same place, different widths
Different variability
Same averages
Graph 2: peaks in different places, same width
Same variability
Different averages
The normal distribution is used in
statistical analysis in order to make standardized comparisons across different populations (treatments).
The kinds of parametric statistical techniques we use
assume that a population is normally distributed. This allows us to compare directly between two populations
The Normal Distribution is a mathematical function that
defines the distribution of scores in a population with respect to two population parameters
(Normal distribution)
The first parameter is the
Greek letter μ (mu). This represents the population mean.
(Normal distribution)
The second parameter is the
Greek letter σ (sigma), which represents the population standard deviation.
Different normal distributions are generated whenever
the population mean or the population standard deviation differs
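The two parameters μ and σ fully determine the distribution; they combine in the density function of the normal distribution, which can be written as:

```latex
f(x) = \frac{1}{\sigma\sqrt{2\pi}}
       \exp\!\left( -\frac{(x-\mu)^2}{2\sigma^2} \right)
```

Changing μ shifts the peak along the axis; changing σ stretches or narrows it, which is exactly why different parameter values generate different normal distributions.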
Normal distributions with different population variances and the same population mean
Peaks of different heights and widths, but all centred in the same place
•Normal distributions with different population means and the same population variance
Peaks of the same height and width, but centred in different places, overlapping one another
•Normal distributions with different population variances and different population means
Peaks of different heights and widths, centred in different places
Most samples of data are
approximately normally distributed (but not all)
When the null hypothesis (Ho) is approximately true we have the following:
There is almost a complete overlap between the two distributions of scores
The two distributions are very similar in variance and average: peaks of the same size and width, very close to overlapping completely
When the alternative hypothesis (H1) is true we have the following:
•There is very little overlap between the two distributions
Peaks of similar height and width, but far from overlapping
The crux of the problem of rejecting the null hypothesis is
the fact that we can always attribute some portion of the difference we observe among treatment parameters to chance factors
•These chance factors are known as experimental error
An experimental error
The chance factors to which some portion of any observed difference among treatment parameters can always be attributed
What are the potential contributors to an experimental error
All uncontrolled sources of variability in an experiment
There are two basic kinds of experimental error:
- individual differences error
- measurement error
In a real experiment both sources of experimental error will
influence and contribute to the scores of each subject.
•The variability of subjects treated alike, i.e. within the same treatment condition or level, provides a measure of the experimental error.
•At the same time the variability of subjects within each of the other treatment levels also offers estimates of experimental error
Estimate of treatment effects
- The means of the different groups in the experiment should reflect the differences in the population means, if there are any.
- The treatments are viewed as a systematic source of variability, in contrast to the unsystematic source of variability, experimental error.
- This systematic source of variability is known as the treatment effect.
What is the treatment effect?
Systematic source of variability
What is Partitioning?
Dividing each score's total deviation from the grand mean into a between groups component and a within subjects component
Each of the deviations from the grand mean has a specific name
AS₂₅ − T̄
Is called the total deviation
Each of the deviations from the grand mean has a specific name
Ā₂ − T̄
Is called the between groups deviation
Each of the deviations from the grand mean has a specific name
AS₂₅ − Ā₂
Is called the within subjects deviation
The between groups deviation…
Ā₂ − T̄
Represents the effects of both error and treatment
The within subjects deviation…
AS₂₅ − Ā₂
Represents the effect of error alone
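The partitioning above is an exact identity: for every score AS, the total deviation (AS − T̄) equals the between groups deviation (Ā − T̄) plus the within subjects deviation (AS − Ā). A minimal sketch verifying this, and the resulting additivity of the sums of squares, on made-up scores:

```python
# Sketch: partitioning each score's deviation from the grand mean.
# The group scores are invented for illustration.
import numpy as np

groups = [np.array([3.0, 4.0, 5.0]),
          np.array([6.0, 7.0, 8.0]),
          np.array([4.0, 5.0, 6.0])]

grand_mean = np.concatenate(groups).mean()          # T̄

ss_total = ss_between = ss_within = 0.0
for g in groups:
    group_mean = g.mean()                           # Ā
    for score in g:                                 # AS
        total = score - grand_mean                  # AS − T̄  (total deviation)
        between = group_mean - grand_mean           # Ā − T̄   (between groups)
        within = score - group_mean                 # AS − Ā  (within subjects)
        assert np.isclose(total, between + within)  # the identity holds
        ss_total += total ** 2
        ss_between += between ** 2
        ss_within += within ** 2
```

Squaring and summing the deviations gives the familiar decomposition SS_total = SS_between + SS_within, which is what makes the F-ratio on the next cards possible.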
If we consider the ratio of the between groups variability and the within groups variability
Differences among treatment means divided by differences among subjects treated alike
Then we have…
Experimental error + treatment effects divided by
Experimental error
If the null hypothesis is true then
The treatment effect is equal to zero: (experimental error + 0) / experimental error ≈ 1
If the null hypothesis is false then the
Treatment effect is greater than zero: (experimental error + treatment effect) / experimental error > 1
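The "ratio ≈ 1 under the null hypothesis" claim can be checked by simulation: draw all groups from the same normal population (zero treatment effect) many times and average the resulting F-ratios. A minimal sketch with illustrative parameter choices (3 groups, 20 subjects each, μ = 50, σ = 10):

```python
# Sketch: under H0 (no treatment effect) the F-ratio averages near 1.
# Group sizes and population parameters are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
k, n_per_group, n_sims = 3, 20, 2000

f_values = []
for _ in range(n_sims):
    # Every group comes from the SAME population: treatment effect = 0.
    groups = rng.normal(loc=50.0, scale=10.0, size=(k, n_per_group))
    grand_mean = groups.mean()
    ms_between = (n_per_group
                  * ((groups.mean(axis=1) - grand_mean) ** 2).sum()
                  / (k - 1))
    ms_within = (((groups - groups.mean(axis=1, keepdims=True)) ** 2).sum()
                 / (k * n_per_group - k))
    f_values.append(ms_between / ms_within)

mean_f = np.mean(f_values)   # close to 1, since both terms estimate error
```

Individual F values still scatter above and below 1 by chance; that scatter is exactly what the F distribution describes, and why a single F must clearly exceed 1 before the null hypothesis is rejected.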