8. Planned Comparisons, Post Hoc Tests, Power and Effect Flashcards
what does the ANOVA tell us?
that there is a difference somewhere between the means
how do we determine where the difference(s) are?
with a priori and Post Hoc comparisons
when do you decide on an a priori test?
before conducting the analysis, in order to test a specific hypothesis
when are post hoc comparisons made?
after assessing the F ratio
when should a priori tests be used?
if we have a strong theoretical interest in certain groups and an evidence-based, specific hypothesis regarding these groups, then we can test these differences using a priori tests
what sort of tests are a priori?
planned comparisons or t-tests
what do a priori tests seek to compare?
only groups of interest
when should post hoc comparisons be used?
if we cannot predict exactly which means will differ.
what should be done before doing a post hoc comparison?
run the overall ANOVA to see if the independent variable has an effect
what does post hoc mean?
after the fact
what do post hoc comparisons seek to do?
compare all groups to each other to explore differences, thus comparing all possible combinations of means
what are the characteristics of a post hoc comparison?
less refined than planned comparisons, but more specific than the omnibus ANOVA
what is an omnibus test?
the initial F ratio
what are planned comparisons also known as?
planned contrasts
what is weighting our group means?
we assign weights, known as contrast coefficients (c), to the group means (M) to indicate which groups we wish to compare
what is the point of weighting our group means?
it is how we communicate to SPSS which group means we want to compare
how would we weight groups 1 and 2 when comparing them?
a weight (c_1) of 1 to the mean of group 1 (M_1), a weight (c_2) of -1 to the mean of group 2 (M_2), and a weight of 0 to groups 3 and 4, as they are not in the analysis we are conducting
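written out as a worked equation (an illustration in the deck's own notation; M_1 to M_4 stand for the four group means):

    contrast = c_1*M_1 + c_2*M_2 + c_3*M_3 + c_4*M_4
             = (1)(M_1) + (-1)(M_2) + (0)(M_3) + (0)(M_4)
             = M_1 - M_2

so the contrast reduces to the difference between the two means of interest, and groups 3 and 4 drop out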
true or false: weights and contrasts are the same thing?
true
what must the sum of all coefficients be when weighting?
0
why must the sum of all coefficients be 0?
because this is SPSS's way of knowing that the comparison is balanced: groups (or sets of groups) which are being compared in a hypothesis must have equal, but opposite, coefficients / weights
i.e. one group would be 1 and the other -1
what happens to the weights of groups when we are lumping them together in a hypothesis?
they must be given equal coefficients of the same sign
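for example (an illustration, not from the original cards): to compare groups 1 and 2 lumped together against group 3, leaving group 4 out, one valid set of weights is

    c_1 = 1, c_2 = 1, c_3 = -2, c_4 = 0    (sum = 1 + 1 - 2 + 0 = 0)

the lumped groups share the same sign, the group they are compared against takes the opposite sign, and the two sides balance so the coefficients sum to 0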
what coefficient must the groups not being compared be assigned?
0
what is the equation to test the significance of contrasts?
F_contrast = MS_contrast / MS_within
what is used for the error term in the F test for a contrast?
MS within from our ANOVA
how do we calculate the MS_contrast?
in a similar way to SS_between; the df is always 1
why is the df always 1 for a comparison F?
because we are only comparing two means (or two groups of means).
df = number of groups - 1
thus df = 2-1 = 1
what is the MS comparison the same as?
SS_comparison (since MS = SS / df, and the df is 1)
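a minimal Python sketch of the calculation on the last few cards (the group means, sample sizes, and MS_within are made-up illustration values, not data from this deck; SS_contrast is computed with the standard formula (sum of c_i*M_i)^2 / sum(c_i^2 / n_i)):

    weights = [1, -1, 0, 0]              # contrast coefficients (c); they sum to 0
    means = [10.2, 7.9, 8.5, 9.1]        # group means (M), hypothetical values
    ns = [15, 15, 15, 15]                # group sample sizes, hypothetical values
    ms_within = 4.3                      # error term from the overall ANOVA, hypothetical

    psi = sum(c * m for c, m in zip(weights, means))                       # contrast value
    ss_contrast = psi ** 2 / sum(c ** 2 / n for c, n in zip(weights, ns))  # SS for the contrast
    ms_contrast = ss_contrast / 1        # df_contrast is always 1, so MS_contrast = SS_contrast
    f_contrast = ms_contrast / ms_within # F_contrast = MS_contrast / MS_within
    print(f"F_contrast = {f_contrast:.2f}")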
when is the difference between two means not significant?
when:
F_observed ≤ F_critical
when is the difference between two means significant?
when:
F_observed > F_critical
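a quick sketch of that decision rule in Python, using scipy's F distribution to look up F_critical (the observed F and the within-groups df below are hypothetical values, not taken from this deck):

    from scipy.stats import f

    alpha = 0.05
    f_observed = 9.23    # hypothetical F_contrast (e.g. from the sketch above)
    df_within = 56       # hypothetical df for MS_within (N minus number of groups)
    f_critical = f.ppf(1 - alpha, dfn=1, dfd=df_within)  # contrast df is always 1
    print("significant" if f_observed > f_critical else "not significant")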
what are the assumptions for planned comparisons?
the same as the overall ANOVA:
all samples are independent
normality of the distribution
homogeneity of variance
How does SPSS help us overcome the homogeneity of variance assumption with planned comparisons?
when it runs the t-test for our contrasts, it gives us output for both homogeneity assumed and homogeneity not assumed
if homogeneity is not assumed, SPSS adjusts the df of our F critical to control for any inflation of type 1 error
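outside SPSS the same idea exists elsewhere; for example, in Python's scipy the homogeneity-not-assumed version of the t-test (Welch's test) is requested with equal_var=False (the scores below are made up for illustration):

    from scipy.stats import ttest_ind

    group1 = [10, 12, 11, 13, 9]   # made-up scores
    group2 = [7, 8, 9, 6, 8]       # made-up scores

    # equal_var=False requests Welch's correction, which adjusts the df
    # instead of assuming homogeneity of variance
    t_stat, p_value = ttest_ind(group1, group2, equal_var=False)
    print(t_stat, p_value)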
what happens with error when we find a significant difference?
there is a chance we have a type 1 error
the more tests we conduct..?
the greater the type 1 error rate
what is the error rate per comparison (PC)?
the type 1 error associated with each individual test we conduct
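to connect the last two cards (this formula is not on the cards above, but it is the standard result for c independent comparisons): if each comparison is run at the per comparison rate alpha_PC, the overall (familywise) type 1 error rate is

    alpha_familywise = 1 - (1 - alpha_PC)^c

e.g. with alpha_PC = .05 and c = 3 comparisons, 1 - (.95)^3 ≈ .14, well above .05; this is why the type 1 error rate grows as we conduct more tests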