Multiple Level IV and Continuous DV Flashcards
What are the two procedures we talked about that involve an IV with multiple levels and a continuous DV?
1. One-way between subjects ANOVA
2. One-way within subjects (repeated-measures) ANOVA
One-way between subjects ANOVA
Tests for mean differences in a between subjects design with three or more levels of an independent variable
One-way within subjects (repeated-measures) ANOVA
Tests for mean differences in a within subjects (repeated measures) design with three or more levels of an independent variable
An independent variable (IV) in ANOVA is also known as a _______
factor
What is the null hypothesis for one-way ANOVA?
All mean levels of the IV are equal (no differences exist among them)
What is the alternative hypothesis for one-way ANOVA?
At least one mean level of the IV is different from the others
What is one of the main differences between t-test calculation and ANOVA calculation?
For t-tests, means are entered into the calculation, whereas for ANOVA, variances are entered into the calculation
In ANOVA we assess…
the amount of variability (size of difference among scores) and then explain the source of that variability
After computing total variability we must partition it into two components. What are they, and why?
- Between treatments (conditions) variability
- Within treatments (conditions) variability
We do this to assess how much of the total variability is caused by between treatments variability and how much is caused by within treatments variability.
If we compare a single score drawn from each of two conditions (between conditions), these two scores could be different for three reasons…
- Treatment (condition) effect
- Individual differences
- Experimental error
If we compare two scores drawn from the same condition (within conditions) these scores could be different for two reasons…
- Individual differences
- Experimental error
**Not treatment effect, because the treatment is constant within a condition
What is the test statistic associated with ANOVA?
F-ratio
Conceptually the F-ratio is defined as…
the ratio of variance in the scores
F = variance between treatments/ variance within treatments
How do we re-express the F-ratio formula with regard to specific sources of variance for between subjects?
F = (treatment effect + individual differences + experimental error) / (individual differences + experimental error)
What helps to yield a larger F-score?
Having a large treatment effect
Having small values for individual differences and experimental error
If the null hypothesis of a one-way ANOVA is true, how is this reflected in the F-ratio formula?
The variance associated with the treatment effect should be near zero (it cannot be less than zero since we cannot have negative variance), so the F-ratio should be approximately equal to 1
If the null hypothesis of a one-way ANOVA is false, how is this reflected in the F-ratio formula?
The variance associated with the treatment effect should be greater than zero, so the F-ratio should be noticeably larger than 1
What is the numerator of the F-ratio formula?
Measures error variability (individual differences + experimental error) as well as variability arising from systematic influences (the treatment effect)
What is the denominator of the F-ratio formula?
Measures unsystematic variability and is often called the error term (individual differences + experimental error)
Analysis of variability involves two parts:
1. Analysis of sums of squares (SS)
2. Analysis of degrees of freedom (df)
In ANOVA the term for variance is…
Mean squares (MS)
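The pieces above (SS, df, and MS) combine into the F-ratio. Below is a minimal hand computation in Python for a between subjects design; the three groups of scores are hypothetical, chosen only for illustration:

```python
# Hand computation of a one-way between-subjects ANOVA.
# The three groups below are hypothetical example data.
groups = [
    [4, 5, 6, 5],       # level 1 of the factor
    [7, 8, 9, 8],       # level 2
    [10, 11, 12, 11],   # level 3
]

all_scores = [x for g in groups for x in g]
grand_mean = sum(all_scores) / len(all_scores)

# Partition total variability into between- and within-treatments parts.
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

# Degrees of freedom: k - 1 between, N - k within.
df_between = len(groups) - 1
df_within = len(all_scores) - len(groups)

# Mean squares (the ANOVA term for variance) and the F-ratio.
ms_between = ss_between / df_between
ms_within = ss_within / df_within
f_ratio = ms_between / ms_within
print(ss_between, ss_within, f_ratio)
```

Here MS between = 72/2 = 36 and MS within = 6/9, giving F = 54: the between-treatments variance is far larger than the within-treatments (error) variance.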
We conduct follow-up tests when there is…
more than two means involved (a significant omnibus F-test tells us that a difference exists, but not which means differ)
F-test is an omnibus test. What does this mean?
It is a test that evaluates a general research question
A posteriori (post hoc) tests
follow-up tests that are not based on prior planning or clear hypotheses
**Only considered appropriate when omnibus F-test is significant
A priori tests
follow-up tests that are planned and/or theoretically driven
**Allow us to test for more specific comparisons
Family-wise error
Cumulative likelihood of making a Type I error across a family of comparisons
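Assuming independent comparisons each tested at the same alpha, the family-wise rate can be sketched as 1 - (1 - alpha)^m. A small Python illustration:

```python
# Family-wise error: probability of making at least one Type I error
# across m independent comparisons, each tested at the same alpha.
def familywise_error(m, alpha=0.05):
    """Probability of at least one Type I error in m independent tests."""
    return 1 - (1 - alpha) ** m

for m in (1, 3, 6, 10):
    print(m, round(familywise_error(m), 3))
```

With 10 comparisons at alpha = .05, the chance of at least one Type I error already exceeds 40%.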
Post hoc tests control for family-wise error which is helpful but this can also be problematic. How?
The more strictly we hold down family-wise error, the more power goes down (the likelihood of making a Type II error increases)
Examples of a posteriori (post hoc) tests
Least-Significant Difference (LSD)
- Does not control for family-wise error; it is equivalent to running a separate t-test for every pair of means
Bonferroni Adjustment
- Adjusts alpha by dividing it by the number of comparisons
- Good for 3 or 4 comparisons; beyond that, power becomes very poor
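A minimal sketch of the adjustment in Python (the pairwise p-values below are hypothetical, chosen only for illustration):

```python
# Bonferroni adjustment: per-comparison alpha = overall alpha / number of comparisons.
alpha = 0.05
p_values = [0.004, 0.020, 0.047]  # hypothetical pairwise p-values
m = len(p_values)

adjusted_alpha = alpha / m  # 0.05 / 3 ~ 0.0167
significant = [p < adjusted_alpha for p in p_values]
print(adjusted_alpha, significant)
```

Note how two comparisons that would pass at alpha = .05 fail against the stricter adjusted threshold; that is the power cost the card above describes.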
Tukey Honestly Significant Difference (HSD)
- Good when testing lots of comparisons
- Strong control of family-wise error
Example of a priori tests
Planned contrast
How do we re-express the F-ratio formula with regard to specific sources of variance for within subjects?
F = (treatment effect + experimental error) / (experimental error)
Notice the lack of individual differences (these are constant, since the same subjects appear in every condition)
Why is a one-way within subjects ANOVA more powerful than a one-way between subjects ANOVA?
- Fewer sources of error
- Needs fewer subjects to attain appropriate level of precision
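The power advantage can be seen in a hand computation: the within subjects analysis measures subject-to-subject variability and removes it from the error term. A minimal Python sketch with hypothetical scores (rows are subjects, columns are the three conditions):

```python
# Hand computation of a one-way within-subjects (repeated-measures) ANOVA.
# Rows = subjects, columns = conditions (hypothetical data).
scores = [
    [3, 4, 6],
    [4, 5, 7],
    [5, 7, 8],
    [4, 6, 9],
]
n = len(scores)       # number of subjects
k = len(scores[0])    # number of conditions

all_scores = [x for row in scores for x in row]
grand_mean = sum(all_scores) / len(all_scores)
ss_total = sum((x - grand_mean) ** 2 for x in all_scores)

# Between-treatments SS: variability of the condition means.
cond_means = [sum(row[j] for row in scores) / n for j in range(k)]
ss_treatment = n * sum((cm - grand_mean) ** 2 for cm in cond_means)

# Between-subjects SS: individual differences, removed from the error term.
subj_means = [sum(row) / k for row in scores]
ss_subjects = k * sum((sm - grand_mean) ** 2 for sm in subj_means)

# Error SS is what remains after treatment and subject effects are removed.
ss_error = ss_total - ss_treatment - ss_subjects

ms_treatment = ss_treatment / (k - 1)
ms_error = ss_error / ((k - 1) * (n - 1))
f_ratio = ms_treatment / ms_error
print(round(f_ratio, 2))
```

Here SS for subjects (10.0) is pulled out of the error term, leaving SS error = 2.0; in a between subjects analysis those individual differences would have stayed in the denominator of F, shrinking the ratio.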
Non-orthogonal contrasts
Results of contrasts overlap and are NOT independent of one another
Orthogonal contrasts
The results of one contrast are completely independent of the other