Week 8 Flashcards
ANOVA H0
the population means of all groups are equal
Why use ANOVA instead of t-tests
t-tests can only compare two means at a time, so every pair would need its own test
t-tests cannot look at more than one IV at a time
running multiple t-tests inflates the Type I error rate
When assumptions are met, ANOVA is more powerful than t-tests for more than two groups
ANOVA allows us to evaluate all the means in a single hypothesis test, and keep our alpha at .05
familywise alpha level
probability of making at least one Type I error across a series of comparisons
decisionwise alpha level
alpha level for each comparison
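A quick illustrative calculation (the numbers are assumed, not from the cards) of how the familywise error rate grows when each comparison is run at a decisionwise alpha of .05:

# familywise Type I error rate if m independent comparisons are each run at alpha = .05
alpha_dw = 0.05
for m in (1, 3, 6, 10):                    # e.g. 3 groups = 3 pairs, 4 groups = 6 pairs
    alpha_fw = 1 - (1 - alpha_dw) ** m     # P(at least one Type I error)
    print(f"{m} comparisons: familywise alpha = {alpha_fw:.3f}")
# with only 3 comparisons, familywise alpha is already about .14, well above .05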
SST
total variability between scores
SSM
variability between group means (our model)
how much variability is accounted for by the IV
SSR
residual variability
unexplained variance due to chance (within-groups variability, also called error SS)
SST=
SSM+SSR
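A minimal Python sketch of this partition, using made-up scores for three groups (the data are purely illustrative):

import numpy as np

groups = [np.array([4.0, 5.0, 6.0]),      # hypothetical scores, group 1
          np.array([7.0, 8.0, 9.0]),      # group 2
          np.array([5.0, 6.0, 10.0])]     # group 3

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()

ss_total = ((all_scores - grand_mean) ** 2).sum()                        # SST
ss_model = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)    # SSM (between groups)
ss_resid = sum(((g - g.mean()) ** 2).sum() for g in groups)              # SSR (within groups)

print(ss_total, ss_model + ss_resid)      # the two values match: SST = SSM + SSR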
In an ANOVA, we need to determine
whether the model explains more variability than the residuals, using the F-ratio
i.e. whether SSM > SSR
F=
MS model / MS residual
= MS between treatments / MS within treatments
MSM=
SSM / df model
= SSM / (k - 1), where k = number of groups
MSR=
SSR / df residual
= SSR / (N - k), where N = total sample size
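A sketch (same made-up groups as above) computing the mean squares and F by hand and checking against scipy.stats.f_oneway; scipy is my choice of tool here, not something the cards specify:

import numpy as np
from scipy import stats

groups = [np.array([4.0, 5.0, 6.0]),
          np.array([7.0, 8.0, 9.0]),
          np.array([5.0, 6.0, 10.0])]
all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()
k, N = len(groups), len(all_scores)

ss_model = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_resid = sum(((g - g.mean()) ** 2).sum() for g in groups)

ms_model = ss_model / (k - 1)            # MSM = SSM / (k - 1)
ms_resid = ss_resid / (N - k)            # MSR = SSR / (N - k)
F = ms_model / ms_resid

F_scipy, p = stats.f_oneway(*groups)     # should match the hand-computed F
print(F, F_scipy, p)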
No treatment effect
MSR>MSM
F<1
non-significant
Treatment has an effect
MSM>MSR
F>1
significant
ANOVA assumptions
independence of observations
interval/ratio data
normality
homogeneity of variance
checking homogeneity of variance
compare the spread of boxplots across treatment groups (Levene's test gives a formal check)
if normality is violated
if n is equal across groups and reasonably large, ANOVA is robust and results are OK
if not:
transform data
use a non-parametric test (Kruskal-Wallis test)
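A hedged sketch of the non-parametric fallback named above, run on hypothetical group data with scipy (the card does not specify a tool):

from scipy import stats

g1 = [4, 5, 6, 9]          # hypothetical skewed scores
g2 = [7, 8, 9, 20]
g3 = [5, 6, 10, 30]

H, p = stats.kruskal(g1, g2, g3)   # Kruskal-Wallis: rank-based alternative to one-way ANOVA
print(H, p)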
homogeneity of variance is violated when
Levene’s test is significant (p < .05)
so Levene’s test should be non-significant for the assumption to be met
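A small check using scipy.stats.levene on hypothetical groups; a p-value above .05 would mean the homogeneity-of-variance assumption is met:

from scipy import stats

g1 = [4, 5, 6, 7]
g2 = [7, 8, 9, 10]
g3 = [5, 6, 10, 11]

W, p = stats.levene(g1, g2, g3)    # non-significant p (> .05) = variances treated as equal
print(W, p)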
If homogeneity of variance is violated
report the Brown-Forsythe or Welch F, df, and p-values instead of the standard F
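One way to get the Welch F in Python is pingouin's welch_anova; the library and the long-format data below are my assumptions, not something the card names:

import pandas as pd
import pingouin as pg

# hypothetical long-format data: one row per participant
df = pd.DataFrame({
    "group": ["a"] * 4 + ["b"] * 4 + ["c"] * 4,
    "score": [4, 5, 6, 7, 7, 8, 9, 15, 5, 6, 10, 30],
})

welch = pg.welch_anova(data=df, dv="score", between="group")
print(welch)   # table with Welch's F, adjusted df, and p-value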
If independent observations violated
use repeated measures ANOVA
for RM ANOVA, SS error =
SS within groups - SS subjects
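A minimal numpy sketch of this partition, with made-up scores for 4 participants each measured under 3 conditions:

import numpy as np

# rows = subjects, columns = conditions (hypothetical data)
X = np.array([[4.0, 6.0, 8.0],
              [5.0, 7.0, 9.0],
              [3.0, 5.0, 6.0],
              [6.0, 9.0, 11.0]])
n, k = X.shape
grand_mean = X.mean()

ss_model    = n * ((X.mean(axis=0) - grand_mean) ** 2).sum()   # between conditions (SSM)
ss_within   = ((X - X.mean(axis=0)) ** 2).sum()                # within conditions
ss_subjects = k * ((X.mean(axis=1) - grand_mean) ** 2).sum()   # between subjects
ss_error    = ss_within - ss_subjects                          # residual for RM ANOVA

F = (ss_model / (k - 1)) / (ss_error / ((k - 1) * (n - 1)))    # RM ANOVA F-ratio
print(ss_error, F)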
Within subjects/RM ANOVA assumptions
continuous DV, categorical IV
normality
homogeneity of variance between groups
post hoc tests
conducting analyses after the original a-priori hypothesis was tested and the results are known
orthogonal contrasts/comparisons
planned a priori (must have a strong, hypothesis-driven justification for comparing only specific groups)
post hoc tests
not planned or hypothesised
compare all pairs of means
post hoc Bonferroni
must use a stricter decisionwise alpha (αDW, the per-comparison Type I error rate) to accept effects as significant
Bonferroni αDW = αFW / number of tests (comparisons)
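A sketch of Bonferroni-corrected pairwise follow-ups on hypothetical data; scipy's ttest_ind is used for each pair as an illustrative choice:

from itertools import combinations
from scipy import stats

groups = {"a": [4, 5, 6, 7], "b": [7, 8, 9, 10], "c": [5, 6, 10, 11]}   # hypothetical

pairs = list(combinations(groups, 2))
alpha_fw = 0.05
alpha_dw = alpha_fw / len(pairs)           # Bonferroni: .05 / 3 comparisons ≈ .0167

for name1, name2 in pairs:
    t, p = stats.ttest_ind(groups[name1], groups[name2])
    print(name1, name2, round(p, 4), "significant" if p < alpha_dw else "ns")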
eta squared
biased effect size estimate
overestimates the proportion of variability accounted for
eta squared (η²) =
SSM/SST
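Given the sums of squares from a one-way ANOVA (the values below are hypothetical), eta squared is a one-line calculation:

ss_model, ss_resid = 26.0, 14.0          # hypothetical SSM and SSR
ss_total = ss_model + ss_resid
eta_sq = ss_model / ss_total             # proportion of total variability explained by the IV
print(round(eta_sq, 3))                  # 0.65 -> large by the benchmarks below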
Small eta squared
0.01
Medium eta squared
0.09
Large eta squared
0.25
Eta squared for repeated measures ANOVA
report partial eta squared
subject variability (SS subjects) is removed from the denominator
Omega squared (ω²)
better than eta squared, especially for smaller samples, as it is unbiased
uses more information from the data, including the df, so it is more accurate
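The card doesn't write the formula out; the standard one-way ANOVA version uses the df and residual mean square, sketched here with hypothetical numbers:

ss_model, ss_resid = 26.0, 14.0          # hypothetical sums of squares
k, N = 3, 15                             # hypothetical number of groups and total n
ss_total = ss_model + ss_resid
ms_resid = ss_resid / (N - k)

# omega squared = (SSM - df_model * MSR) / (SST + MSR)
omega_sq = (ss_model - (k - 1) * ms_resid) / (ss_total + ms_resid)
print(round(omega_sq, 3))                # a bit smaller than eta squared for the same data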
small ω²
0.01
medium ω²
0.06
large ω²
0.14
Cohen's d
degree of separation between two distributions
how far apart in standardised units the means of the two distributions are
small Cohen's d
0.20
medium Cohen's d
0.50
large Cohen's d
0.80
Cohen's d =
mean difference / pooled SD
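A sketch computing Cohen's d for two hypothetical groups, using the pooled standard deviation as the standardiser:

import numpy as np

g1 = np.array([4.0, 5.0, 6.0, 7.0])      # hypothetical group scores
g2 = np.array([7.0, 8.0, 9.0, 10.0])

n1, n2 = len(g1), len(g2)
s_pooled = np.sqrt(((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2))

d = (g2.mean() - g1.mean()) / s_pooled   # standardised mean difference
print(round(d, 2))                       # 3 / 1.29 ≈ 2.32: a very large separation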
effect sizes tell us
the magnitude of an effect, e.g. the proportion of variance accounted for (r², η², ω²) or the standardised difference between means (Cohen's d)
effect size examples
r²
eta squared (η²)
omega squared (ω²)
Cohen's d