Week 7 - planned comparisons and post hoc tests Flashcards
Why does the F ratio not paint the whole picture?
It only tells us there is a difference somewhere between the means. We need an analysis that helps determine where the difference(s) are
What are the two basic approaches to comparisons?
- A priori (or planned) comparisons
- Post hoc comparisons
What are a priori (or planned) comparisons?
- If we have a strong theoretical interest in certain groups and a specific, evidence-based hypothesis regarding these groups, then we can test these differences up front
- Come up with these before you do your study
- Seek to compare only groups of interest
- No real need to do the overall ANOVA; we do it because of tradition. Hence, reports often start with the F test and progress to planned comparisons
- It is better to have an a priori hypothesis than to rely on post hoc comparisons
What are post hoc comparisons?
- If you cannot predict exactly which means will differ then you should do the overall ANOVA first to see if the IV has an effect, then
- Post hoc comparisons (post hoc = after the fact/ANOVA)
- seek to compare all groups to each other to explore differences.
- Less refined – more exploratory.
What are the two types of a priori/planned comparisons?
Simple
Complex
What is a simple a priori comparison?
comparing one group to just one other group
What is a complex a priori comparison?
comparing a set of groups to another set of groups
*In SPSS we create complex comparisons by assigning weights to different groups
How do you conduct an a priori comparison (how to weight it)?
Create 2 sets of weights
- 1 for the first set of means
- 1 for the second set of means
- Assign a weight of zero to any remaining groups
- Set 1 gets positive weights
- Set 2 gets negative weights
- They must sum to 0
A simple rule that always works –> The weight for each group is equal to the number of groups in the other set
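The weighting rule above can be sketched in Python. This is a minimal illustration with hypothetical group means (the group names and values are made up for the example), comparing set 1 = {A, B} against set 2 = {C}:

```python
# Hypothetical group means for three groups
means = {"A": 10.0, "B": 12.0, "C": 15.0}

set1, set2 = ["A", "B"], ["C"]

# Rule: each group's weight equals the number of groups in the OTHER set;
# set 1 gets positive weights, set 2 gets negative weights
weights = {g: len(set2) for g in set1}       # A and B each get +1
weights.update({g: -len(set1) for g in set2})  # C gets -2

# The weights must sum to zero
assert sum(weights.values()) == 0

# The contrast value is the weighted sum of the group means
contrast_value = sum(weights[g] * means[g] for g in means)
print(weights)         # {'A': 1, 'B': 1, 'C': -2}
print(contrast_value)  # 1*10 + 1*12 - 2*15 = -8.0
```

A negative contrast value here just means the mean of set 2 exceeds the average mean of set 1; the sign follows from which set was given the positive weights.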
What are the assumptions of a priori/planned comparisons?
- Planned comparisons are subject to the same assumptions as the overall ANOVA, particularly homogeneity of variance, as we use a pooled error term.
- Fortunately, when SPSS runs the t-tests for our contrasts it gives us output for both homogeneity assumed and homogeneity not assumed
- If homogeneity is not assumed, SPSS adjusts the df of our F critical to control for any inflation of Type I error
What are orthogonal contrasts?
- One particularly useful kind of contrast analysis is where each of the contrasts tests something completely different to the other contrasts
Principle:
Once you have compared one group (e.g., A) with another (e.g., B) you don’t compare them again.
Example
Groups 1,2,3,4
Contrast 1 = 1,2 vs 3,4
Contrast 2 = 1 vs 2
Contrast 3 = 3 vs 4
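The contrasts above can be checked numerically: two contrasts are orthogonal when the sum of the products of their weights is zero. A quick sketch, using weight vectors built with the weighting rule from earlier (groups 1-4):

```python
# Weight vectors for the three contrasts over groups 1, 2, 3, 4
contrast1 = [1, 1, -1, -1]   # Contrast 1: groups 1,2 vs 3,4
contrast2 = [1, -1, 0, 0]    # Contrast 2: group 1 vs 2
contrast3 = [0, 0, 1, -1]    # Contrast 3: group 3 vs 4

def dot(a, b):
    """Sum of element-wise products; zero means orthogonal."""
    return sum(x * y for x, y in zip(a, b))

print(dot(contrast1, contrast2))  # 0
print(dot(contrast1, contrast3))  # 0
print(dot(contrast2, contrast3))  # 0
```

All three pairwise products are zero, so this set of k-1 = 3 contrasts is fully orthogonal: each one tests something the others do not.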
Cool things about orthogonal contrasts
- A set of k-1 orthogonal contrasts (where k is the number of groups) accounts for all of the differences between groups
- According to some authors, a set of k-1 planned contrasts can be performed without adjusting for the Type I error rate
Post-Hoc comparisons
- Let’s say we had good reason to believe that sleep deprivation would impact performance but did not know at exactly what level of sleep deprivation this would occur. So, we had no specific hypothesis about what difference would emerge between which conditions.
- In this case, planned comparisons would not be appropriate
- Here you would perform the overall F analysis first
- If overall F is significant, we need to perform post-hoc tests to determine where the differences actually are
What do post hoc comparisons seek to compare?
Post-hoc tests seek to compare all possible combinations of means
* This will lead to many pair-wise comparisons
* e.g., With 4 groups, 6 comparisons
* 1v2, 1v3, 1v4, 2v3, 2v4, 3v4
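The count of pairwise comparisons is just the number of ways to choose 2 groups from k, i.e. k(k-1)/2. A quick sketch generating the pairs listed above:

```python
from itertools import combinations

k = 4  # number of groups
pairs = list(combinations(range(1, k + 1), 2))

print(len(pairs))  # 6
print(pairs)       # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

Note how quickly this grows: with 6 groups there would already be 15 pairwise comparisons.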
How do post hoc comparisons increase the risk of Type I errors?
- So, as we know when we find a significant difference there is an alpha chance that we have made a Type I error.
- The more tests we conduct, the greater the Type I error rate
What is the error rate per experiment (PE)?
The total number of Type I errors we are likely to make in conducting all the tests required in our experiment.
* The PE error rate ≤ α × number of tests
* ≤ means it could be as high as that value
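The PE bound above is simple arithmetic, sketched here with the conventional α = .05 and the 6 pairwise tests from the 4-group example:

```python
alpha = 0.05   # per-test Type I error rate
n_tests = 6    # e.g., all pairwise comparisons among 4 groups

# Upper bound on the per-experiment (PE) error rate:
# PE <= alpha * number of tests
pe_upper_bound = alpha * n_tests
print(round(pe_upper_bound, 2))  # 0.3
```

So running all six pairwise tests at α = .05 could produce up to a 30% chance of at least one Type I error across the experiment, which is why post hoc procedures apply corrections.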