Week Nine Flashcards
types of follow-up tests
○ A priori - decided before the analysis, to test specific hypotheses.
○ Post hoc - comparisons made after assessing the F ratio.
○ Nature of hypothesis tells you which test to use.
a priori tests
(using t tests or planned comparisons)
○ Seek to compare only the groups of interest
post hoc
- If you cannot predict exactly which means will differ, run the overall ANOVA first to see whether the IV has an effect, then follow up with:
○ Post hoc comparisons
○ Seek to compare all groups to each other to explore differences
○ Less refined - more exploratory
planned comparisons procedure (a priori)
- To do this, we weight our group means.
○ We assign weights or contrast coefficients (c) to reflect the means (M) we wish to compare.
○ In an example with 4 groups, assign group 1 a value of 1, group 2 a value of -1, and groups 3 and 4 a value of 0.
- Weights and contrast coefficients are the same thing.
- They are numbers we assign to groups to communicate to SPSS and ourselves which groups we wish to compare.
assigning weights
- The sum of all weights must be 0.
- Groups that are being compared must have equal but opposite coefficients.
- When comparing two groups, use 1 and -1.
- When comparing sets of groups, give every group within a set the same sign (e.g., 1, 1 vs -1, -1).
- Try to assign the group with the higher mean a value of 1.
- If you are comparing one group to two groups, the single group's weight will need to be +2 or -2 (e.g., 2, -1, -1), so the weights still sum to 0.
- Use whole numbers for weights (see the sketch below).
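A minimal sketch of these rules in code, using hypothetical data and the pooled error term from the ANOVA (the numbers and variable names are illustrative only, not from the lecture):

```python
import numpy as np
from scipy import stats

# Hypothetical data: four groups; compare group 1 vs group 2 (weights 1, -1, 0, 0)
groups = [np.array([5.1, 6.2, 5.8, 6.0]),
          np.array([3.9, 4.4, 4.1, 4.6]),
          np.array([5.0, 5.5, 4.8, 5.2]),
          np.array([4.7, 5.1, 4.9, 5.3])]
weights = np.array([1, -1, 0, 0])  # sum to 0; compared groups get equal but opposite weights

n = np.array([len(g) for g in groups])
means = np.array([g.mean() for g in groups])

# Pooled error term (MS within) from the one-way ANOVA
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_within = n.sum() - len(groups)
ms_within = ss_within / df_within

# Contrast value, its standard error, and the t test
psi = (weights * means).sum()
se = np.sqrt(ms_within * ((weights ** 2) / n).sum())
t = psi / se
p = 2 * stats.t.sf(abs(t), df_within)
print(f"t({df_within}) = {t:.3f}, p = {p:.4f}")
```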
planned contrasts and t
- For a contrast with 1 df, t^2 = F.
- Therefore, we can essentially compute a t statistic instead.
- It is easier to run the F test, though (see the demo below).
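A quick hypothetical demonstration of the t^2 = F identity for a two-group (1-df) comparison:

```python
import numpy as np
from scipy import stats

# Hypothetical two-group data: for a 1-df comparison, F = t^2
a = np.array([5.1, 6.2, 5.8, 6.0])
b = np.array([3.9, 4.4, 4.1, 4.6])

t, _ = stats.ttest_ind(a, b)   # independent-samples t test (equal variances)
f, _ = stats.f_oneway(a, b)    # one-way ANOVA on the same two groups

print(f"t^2 = {t ** 2:.4f}, F = {f:.4f}")  # the two values match
```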
assumptions for planned contrasts
- Subject to the same assumptions as the ANOVA.
○ Particularly homogeneity of variance, as we use a pooled error term.
○ We do not have to run the assumption checks again, though.
○ SPSS accounts for homogeneity of variance by reporting results with equal variances assumed and not assumed.
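If you did want to verify the assumption outside SPSS, here is a hypothetical sketch using Levene's test (scipy.stats.levene) on illustrative data:

```python
import numpy as np
from scipy import stats

# Hypothetical check of homogeneity of variance before running contrasts
g1 = np.array([5.1, 6.2, 5.8, 6.0])
g2 = np.array([3.9, 4.4, 4.1, 4.6])
g3 = np.array([5.0, 5.5, 4.8, 5.2])

stat, p = stats.levene(g1, g2, g3)
print(f"Levene W = {stat:.3f}, p = {p:.3f}")  # p > .05 -> variances can be treated as equal
```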
type 1 errors and comparisons
- The more tests we conduct, the greater the chance of a type I error.
- Alpha needs to be balanced, so lower the alpha rate by dividing 0.05 by the number of comparisons.
○ This is called a Bonferroni adjustment.
○ Then assess the tests using the new alpha value as the cut-off.
- Planned comparison error rate = alpha.
- The error rate per experiment (PE) is the total number of Type 1 errors we are likely to make in conducting multiple tests:
○ PE = alpha × number of tests
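A minimal sketch of these calculations with hypothetical numbers; the familywise line (1 - (1 - alpha)^k, the exact rate for independent tests) is an addition for comparison:

```python
# Hypothetical example: 4 planned comparisons at an overall alpha of .05
alpha = 0.05
n_comparisons = 4

adjusted_alpha = alpha / n_comparisons         # Bonferroni cut-off: 0.0125
pe = alpha * n_comparisons                     # error rate per experiment: 0.20

# Exact familywise Type 1 error rate if the tests were independent
familywise = 1 - (1 - alpha) ** n_comparisons  # about 0.185

print(f"adjusted alpha = {adjusted_alpha}, PE = {pe:.2f}, familywise = {familywise:.3f}")
```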
post hoc comparisons
- Need to correct the Type 1 error rate to maintain an acceptable experimentwise error rate.
- If each comparison is set at 0.05 and there are 6 comparisons, the experimentwise (EW) error rate is about 0.26 (1 - 0.95^6), which is not acceptable.
- LSD method: least significant difference (makes no adjustment; alpha rates are not adjusted).
○ You can divide 0.05 by the number of tests yourself and compare significance against that.
- The Bonferroni method will adjust for you.
- Tukey still uses 0.05 for comparisons.
- All tests except LSD adjust for multiple comparisons, so 0.05 can still be used as the cut-off.
- When using a post hoc method, you need to report ALL results (see the Tukey sketch below).
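A minimal sketch of a Tukey HSD run, assuming SciPy >= 1.8 (scipy.stats.tukey_hsd) and hypothetical data; the p values it reports are already adjusted:

```python
import numpy as np
from scipy.stats import tukey_hsd  # available in SciPy >= 1.8

# Hypothetical data: Tukey HSD compares every pair of groups
g1 = np.array([5.1, 6.2, 5.8, 6.0, 5.5])
g2 = np.array([3.9, 4.4, 4.1, 4.6, 4.2])
g3 = np.array([5.0, 5.5, 4.8, 5.2, 5.1])

res = tukey_hsd(g1, g2, g3)
print(res)  # adjusted p values for all pairs: compare each against 0.05 and report them all
```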
effect size
- A significant F tells us that there is a difference between means; the IV is having an effect.
- Planned contrasts or post hoc tests tell us where the effect is.
- These do not tell us how strong or important the effect is.
- We need a statistic that summarises the strength of the treatment effect:
○ Eta squared (η²)
○ Indicates the proportion of the total variability in the data accounted for by the effect of the IV.
○ ** Need to calculate η² manually, so remember the formula.
eta squared
- η² = t² / (t² + df) = SSbetween / SStotal
- The result says that that percentage of the variability in the DV (e.g., errors) is due to the manipulation of the IV.
- Ranges from 0 to 1. Cohen suggests:
○ 0.01 = small effect
○ 0.06 = medium
○ 0.14 = large
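A minimal sketch of the SSbetween / SStotal calculation on hypothetical data (group values are illustrative only):

```python
import numpy as np

# Hypothetical three-group data; eta squared = SS between / SS total
groups = [np.array([5.1, 6.2, 5.8, 6.0]),
          np.array([3.9, 4.4, 4.1, 4.6]),
          np.array([5.0, 5.5, 4.8, 5.2])]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()

ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_total = ((all_scores - grand_mean) ** 2).sum()

eta_sq = ss_between / ss_total
print(f"eta squared = {eta_sq:.3f}")  # 0.01 small, 0.06 medium, 0.14 large
```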
problems with eta squared
- Descriptive, not inferential, so not the best indicator of effect size in the population.
- Tends to overestimate the effect size in the population.
cohen’s d
- Eta squared does not give an effect size for follow-up tests.
- Cohen's d is useful for measuring the effect size of a comparison of two means:
○ A priori and post hoc
- Do not report Cohen's d as a negative value; use the absolute value.
cohen’s d formula
- d = (μ1 - μ2) / population SD
- d = (M1 - M2) / √MSwithin
- 0.2 = small
- 0.5 = medium
- 0.8 = large
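A minimal sketch of this formula with hypothetical data, using √MSwithin from the ANOVA as the SD estimate and reporting the absolute value:

```python
import numpy as np

# Hypothetical data: Cohen's d for a follow-up comparison of groups 1 and 2
groups = [np.array([5.1, 6.2, 5.8, 6.0]),
          np.array([3.9, 4.4, 4.1, 4.6]),
          np.array([5.0, 5.5, 4.8, 5.2])]

# Pooled error term from the overall ANOVA
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_within = sum(len(g) for g in groups) - len(groups)
ms_within = ss_within / df_within

# d = (M1 - M2) / sqrt(MSwithin); report the absolute value
d = abs(groups[0].mean() - groups[1].mean()) / np.sqrt(ms_within)
print(f"Cohen's d = {d:.2f}")  # 0.2 small, 0.5 medium, 0.8 large
```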
power
- Power is the probability of finding a significant effect when one exists.
- Power = 1 - beta.
- Power is a quantitative index of sensitivity: it tells us the probability that our experiment will detect an effect of a given size.
- Ideally, power should be > 0.80.
- Power is a design issue.
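Because power is a design issue, it is usually estimated before collecting data. A minimal sketch, assuming statsmodels is available; note that FTestAnovaPower expects Cohen's f (0.25 ≈ medium), not Cohen's d:

```python
# Hypothetical a priori power calculation for a one-way ANOVA
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()

# Total N needed for power = 0.80 with a medium effect (f = 0.25), alpha = .05, 4 groups
n_total = analysis.solve_power(effect_size=0.25, alpha=0.05, power=0.80, k_groups=4)
print(f"total N needed = {n_total:.0f}")
```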