Week 8 Flashcards

1
Q

ANOVA H0

A

the population means of all groups are identical

2
Q

Why use ANOVA instead of t-tests

A

running a separate t-test on each pair of groups inflates the Type 1 error rate
a t-test cannot look at more than one IV at a time
when assumptions are met, ANOVA is more powerful than multiple t-tests for more than two groups
ANOVA lets us evaluate all the means in a single hypothesis test and keep α at .05

3
Q

family-wise alpha level

A

probability of making at least one Type 1 error across a series of comparisons
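
To see how quickly the family-wise rate grows, here is a minimal Python sketch; the α of .05 and the assumption that the comparisons are independent are illustrative, not from the cards:

```python
# Family-wise Type 1 error rate for m independent comparisons,
# each tested at a decision-wise alpha of .05 (illustrative values)
alpha_dw = 0.05
for m in (1, 3, 6, 10):
    alpha_fw = 1 - (1 - alpha_dw) ** m
    print(f"{m} comparisons: family-wise alpha = {alpha_fw:.3f}")
```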

4
Q

decision-wise alpha level

A

the alpha level used for each individual comparison

5
Q

SST

A

total variability among all scores (around the grand mean)

6
Q

SSM

A

variability between group means (our model)
how much variability is accounted for by the IV

7
Q

SSR

A

residual variability
unexplained variance due to chance (within-groups variability, or error SS)

8
Q

SST=

A

SSM+SSR
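
A minimal Python sketch of this partition; the three groups of scores are hypothetical, only the identity SST = SSM + SSR comes from the card:

```python
import numpy as np

# Hypothetical scores for three treatment groups
groups = [np.array([4.0, 5.0, 6.0]),
          np.array([7.0, 8.0, 9.0]),
          np.array([5.0, 6.0, 10.0])]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()

# SST: total variability of all scores around the grand mean
ss_t = ((all_scores - grand_mean) ** 2).sum()
# SSM: variability of the group means around the grand mean (the model)
ss_m = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# SSR: residual (within-group) variability
ss_r = sum(((g - g.mean()) ** 2).sum() for g in groups)

print(ss_t, ss_m + ss_r)  # the two totals match
```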

9
Q

In an ANOVA, we need to determine

A

whether the model explains more variability than the residuals, using the F-ratio
i.e. whether SSM > SSR

10
Q

F=

A

MS model / MS residual
(= MS between treatments / MS within treatments)

11
Q

MSM=

A

SSM / df model
SSM / (k - 1)

12
Q

MSR=

A

SSR / df residual
SSR / (n - k)
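
Putting the mean squares and the F-ratio together, a sketch with the same kind of hypothetical data; scipy.stats.f_oneway is used only as a cross-check:

```python
import numpy as np
from scipy import stats

# Hypothetical scores for k = 3 groups
groups = [np.array([4.0, 5.0, 6.0]),
          np.array([7.0, 8.0, 9.0]),
          np.array([5.0, 6.0, 10.0])]

k = len(groups)
n = sum(len(g) for g in groups)
all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()

ss_m = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_r = sum(((g - g.mean()) ** 2).sum() for g in groups)

ms_m = ss_m / (k - 1)   # MSM = SSM / df model
ms_r = ss_r / (n - k)   # MSR = SSR / df residual
f_ratio = ms_m / ms_r

print(f_ratio)
print(stats.f_oneway(*groups).statistic)  # should agree with the hand calculation
```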

13
Q

No treatment effect

A

MSR > MSM
F < 1
non-significant

14
Q

Treatment has an effect

A

MSM > MSR
F > 1
significant

15
Q

ANOVA assumptions

A

independence of observations
interval/ratio data
normality
homogeneity of variance

16
Q

checking homogeneity of variance

A

compare boxplots across treatment groups

17
Q

if normality is violated

A

if n is equal across groups and large, ANOVA is robust and it is OK to proceed
if not:
transform the data
use a non-parametric test (the Kruskal-Wallis test)
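
A minimal sketch of the non-parametric route; the data are hypothetical, and scipy.stats.kruskal is the Kruskal-Wallis H test:

```python
from scipy import stats

# Hypothetical, skewed scores for three groups
g1 = [1, 2, 2, 3, 15]
g2 = [4, 5, 6, 7, 20]
g3 = [2, 3, 3, 4, 18]

h_stat, p_value = stats.kruskal(g1, g2, g3)
print(h_stat, p_value)  # compares the groups without assuming normality
```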

18
Q

homogeneity of variance is violated when

A

Levene’s test p < .05
so Levene’s test should be non-significant for the assumption to be met
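
Checking the assumption in code, a sketch with hypothetical data; scipy.stats.levene runs Levene's test (center='median' is its more robust variant):

```python
from scipy import stats

# Hypothetical scores for three treatment groups
g1 = [4, 5, 6, 5, 4]
g2 = [7, 8, 9, 8, 7]
g3 = [5, 6, 10, 2, 12]

stat, p = stats.levene(g1, g2, g3, center='median')
print(p)  # p >= .05 -> assumption met; p < .05 -> violated
```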

19
Q

If homogeneity of variance is violated

A

use the Brown-Forsythe or Welch F statistic, df, and p-value instead of the standard F

20
Q

If independent observations violated

A

use a repeated-measures ANOVA

21
Q

for RM ANOVA SSerror=

A

SS within groups - SS subjects

22
Q

Within subjects/RM ANOVA assumptions

A

continuous DV, categorical IV
normality
homogeneity of variance between groups

23
Q

post hoc tests

A

conducting analyses after the original a priori hypothesis has been tested and we know the results

24
Q

orthogonal contrasts/comparisons

A

planned a priori (must have a strong, justified, hypothesis-driven reason for comparing specific groups only)

25
Q

post hoc tests

A

not planned or hypothesised
compare all pairs of means

26
Q

post hoc Bonferroni

A

must use a stricter decision-wise alpha (αDW = per-comparison Type 1 error rate) to accept effects as significant
Bonferroni αDW = αFW / number of tests (comparisons)
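
A one-line check of the correction in Python (the number of comparisons is illustrative):

```python
# Bonferroni: alpha_DW = alpha_FW / number of comparisons (illustrative values)
alpha_fw = 0.05
n_comparisons = 3          # e.g. all pairwise comparisons among 3 groups
alpha_dw = alpha_fw / n_comparisons
print(alpha_dw)            # ~0.0167 -> each comparison must beat this
```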

27
Q

eta squared

A

biased effect size estimate
overestimates proportion of variability accounted for

28
Q

eta squared = η²

A

SSM/SST
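
In code, once the sums of squares are in hand (a sketch; the SS values below are hypothetical, as if read off an ANOVA table):

```python
# Hypothetical sums of squares from a one-way ANOVA table
ss_m = 24.0   # SS model (between groups)
ss_t = 60.0   # SS total

eta_squared = ss_m / ss_t
print(eta_squared)  # 0.4 -> proportion of total variability accounted for by the IV
```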

29
Q

Small eta squared

A

0.01

30
Q

Medium eta squared

A

0.09

31
Q

Large eta squared

A

0.25

32
Q

Eta squared for repeated measures ANOVA

A

report partial eta squared
the SS for subject variability is removed from the denominator

33
Q

Omega squared = ω²

A

better than eta squared, especially for smaller n, as it is unbiased
uses more information from the data (including df), so it is more accurate
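
One common formula for ω² in a one-way between-subjects ANOVA uses the model df and MSR, which is the extra information the card refers to; the sketch below applies that formula to hypothetical numbers:

```python
# Omega squared for a one-way between-subjects ANOVA
# (one common formula; the SS values, k and n are hypothetical)
ss_m, ss_r, ss_t = 24.0, 36.0, 60.0
k, n = 3, 30                      # number of groups, total sample size
ms_r = ss_r / (n - k)             # MS residual

omega_squared = (ss_m - (k - 1) * ms_r) / (ss_t + ms_r)
print(omega_squared)  # slightly smaller than eta squared = ss_m / ss_t = 0.4
```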

34
Q

small ω²

A

0.01

35
Q

medium ω²

A

0.06

36
Q

large ω²

A

0.14

37
Q

Cohen's d

A

the degree of separation between two distributions
how far apart, in standardised units, the means of the two distributions are

38
Q

small Cohen's d

A

0.20

39
Q

medium Cohen's d

A

0.50

40
Q

large Cohen's d

A

0.80

41
Q

Cohen's d =

A

mean difference / SD
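
A sketch using the pooled standard deviation as the standardiser (the data are hypothetical, and pooled SD is one common choice, not the only one):

```python
import numpy as np

# Hypothetical scores for two groups
g1 = np.array([4.0, 5.0, 6.0, 5.0, 4.0])
g2 = np.array([7.0, 8.0, 9.0, 8.0, 7.0])

mean_diff = g2.mean() - g1.mean()
# Pooled SD across the two groups
pooled_sd = np.sqrt(((len(g1) - 1) * g1.var(ddof=1) +
                     (len(g2) - 1) * g2.var(ddof=1)) /
                    (len(g1) + len(g2) - 2))

cohens_d = mean_diff / pooled_sd
print(cohens_d)
```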

42
Q

effect sizes tell us

A

the proportion of variance accounted for

43
Q

effect size examples

A

r²
eta squared (η²)
omega squared (ω²)
Cohen's d