ANOVA Flashcards

1
Q

When do we use ANOVA?

A

When we are comparing more than 2 groups

2
Q

How do we calculate a one-way BS ANOVA? (Not the formula)

A
  1. Calculate the within-group variance (experimental error)
  2. Calculate the between-group variance (treatment effect)
  3. Work out the F ratio, which tells us whether we have a significant TE (see the sketch below)
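
A minimal sketch of these three steps in Python; all scores and group sizes are made up:

```python
# Minimal one-way between-subjects ANOVA (hypothetical data).
groups = [
    [4.0, 5.0, 6.0],    # group 1
    [6.0, 7.0, 8.0],    # group 2
    [9.0, 9.0, 10.0],   # group 3
]

k = len(groups)                   # number of groups
n = sum(len(g) for g in groups)   # total N
grand_mean = sum(sum(g) for g in groups) / n

# Step 1: within-group variance (experimental error).
ss_within = sum(sum((y - sum(g) / len(g)) ** 2 for y in g) for g in groups)
ms_error = ss_within / (n - k)

# Step 2: between-group variance (treatment effect).
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ms_factor = ss_between / (k - 1)

# Step 3: the F ratio; compare against the critical F for (k - 1, n - k) df.
F = ms_factor / ms_error
print(round(F, 2))  # 18.14
```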
3
Q

How do you calculate variance?

A

∑(Y - MeanY)^2 / (n - 1)
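
A quick numeric check of this formula in Python (the scores are made up):

```python
def variance(scores):
    """Sample variance: sum of squared deviations from the mean, over n - 1."""
    mean = sum(scores) / len(scores)
    return sum((y - mean) ** 2 for y in scores) / (len(scores) - 1)

# Deviations from the mean (5.5) are -1.5, -0.5, 0.5, 1.5 -> SS = 5.0, so 5.0 / 3.
print(round(variance([4, 5, 6, 7]), 2))  # 1.67
```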

4
Q

How do we calculate F?

A

F = (TE + EE) / EE

F = MS factor / MS error

5
Q

What F would we expect if there is no TE?

A

F = 1, as F = (0 + EE) / EE

In practice F is rarely exactly 1, however, because chance factors vary

6
Q

How do we decide whether to accept or reject H0?

A

Using the sampling distribution of F - the larger the F ratio, the less frequently it would occur if H0 were true

A probability value of 0.05 means the F value occurs less than 5% of the time when H0 is true, so it is unlikely to have occurred by chance and we reject H0

This probability cut-off is referred to as the alpha level and defines the risk of a Type 1 error
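
A sketch of this decision rule in Python, assuming scipy is available; the F value and df are made up:

```python
from scipy import stats

F, df_effect, df_error = 4.26, 2, 27    # hypothetical values
p = stats.f.sf(F, df_effect, df_error)  # how often F >= this value if H0 were true

alpha = 0.05  # accepted risk of a Type 1 error
print("reject H0" if p < alpha else "fail to reject H0")
```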

7
Q

What is experimental error?

A

A combination of individual differences, researcher error and chance factors

8
Q

What are the assumptions of a one-way BS ANOVA?

A

Assumption of normality - the DV is normally distributed (check in SPSS by splitting the file and producing histograms with a normal curve)

Assumption of independence - one score is in no way related to any other score in any group

Assumption of equal variance - EE is approximately equal in each group (check using the Fmax test)

9
Q

What is the Fmax test?

A

Used to test whether the assumption of equal variance is met in a 1-way BS ANOVA

Fmax = largest variance / smallest variance

If Fmax > 3, the alpha level must be changed from 5% to 2.5%
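
A minimal sketch of this rule in Python, with made-up group variances:

```python
# Fmax test: ratio of the largest to the smallest group variance.
variances = [2.1, 3.4, 5.9]   # hypothetical sample variance for each group
f_max = max(variances) / min(variances)

# Per the rule on this card: if Fmax > 3, drop alpha from 5% to 2.5%.
alpha = 0.05 if f_max <= 3 else 0.025
print(round(f_max, 2), alpha)  # 2.81 0.05
```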

10
Q

How do you report an ANOVA?

A

F(df for effect, df for error) = F value, probability level, mean square error - e.g. with made-up numbers, F(2, 27) = 4.52, p < .05, MSE = 1.30

11
Q

What is a WS (repeated measures) ANOVA?

A

An ANOVA where the same participant serves in each condition

12
Q

What is the benefit of a WS design?

A

Minimises error variance - individual differences (ID) are eliminated

Using SSresidual as the error term makes the F ratio more sensitive, so a significant difference is more likely to be found

13
Q

What is SStotal made up of?

A

SSeffect - differences among the treatment means

SSerror - made up of SSsubject (ID) and SSresidual (other error)

So SStotal = SSeffect + SSsubject + SSresidual

14
Q

How are individual differences removed from WS design?

A

You subtract each participant's mean performance from their scores in each condition, which gives scores that are deviations from their usual performance, i.e. independent of the individual-differences variable (see the sketch below)

The relationships between the data remain the same
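
A small sketch of this subtraction in Python; the scores are made up (rows = subjects, columns = conditions):

```python
# Subtract each subject's own mean from their scores in every condition.
scores = [
    [3.0, 5.0, 7.0],    # subject 1
    [6.0, 8.0, 10.0],   # subject 2: higher overall, same pattern
]

adjusted = []
for subject in scores:
    subject_mean = sum(subject) / len(subject)
    adjusted.append([y - subject_mean for y in subject])

# Both rows become [-2.0, 0.0, 2.0]: the individual difference (overall level)
# is removed, but the relationship across conditions is preserved.
print(adjusted)
```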

15
Q

What are the assumptions of 1-way WS ANOVA?

A

Sphericity assumption - the correlations between scores in the treatment conditions are equal (tested with Mauchly's test; if it is significant you have violated the assumption and need the Greenhouse-Geisser or Huynh-Feldt correction, which makes the F test more conservative by using fewer df to calculate the MS)

Homogeneity of covariance - the variances of the differences between all combinations of pairs of conditions are equal

16
Q

What does post-hoc testing show?

A

A significant F value tells you that a significant difference exists among the treatment groups; post hoc testing is needed to determine which of the groups differ significantly

17
Q

What are the types of post hoc comparisons?

A

Pairwise - 1 v 2, 2 v 3 and 1 v 3 (see the sketch below)

Complex - av(1,2) v 3, av(2,3) v 1 and av(1,3) v 2
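
For counting purposes, a one-liner in Python that enumerates the pairwise comparisons for three groups:

```python
from itertools import combinations

print(list(combinations([1, 2, 3], 2)))  # [(1, 2), (1, 3), (2, 3)]
```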

18
Q

Why is post hoc testing better than running multiple ANOVAs?

A

The alpha level represents the probability of making a Type 1 error per comparison (alpha pc)

Post hoc testing is concerned with a family of comparisons (alpha fw), which grows with the number of comparisons: roughly the number of comparisons X alpha pc = 0.05 X n (see the sketch below)

Post hoc testing corrects this problem so that alpha fw = alpha pc, by reducing the alpha level or increasing the critical value of F
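
A sketch of these quantities in Python; the exact familywise formula, 1 - (1 - alpha)^n, is an addition to what is on the card, and the Bonferroni line anticipates the next card:

```python
alpha_pc = 0.05   # per-comparison alpha
n = 3             # number of comparisons in the family

approx_fw = n * alpha_pc            # rough familywise risk, 0.05 * n as above
exact_fw = 1 - (1 - alpha_pc) ** n  # chance of at least one Type 1 error
bonferroni = alpha_pc / n           # corrected per-comparison alpha

print(round(approx_fw, 2), round(exact_fw, 4), round(bonferroni, 4))
# 0.15 0.1426 0.0167
```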

19
Q

What are the types of post hoc?

A

Tukey (Ft) - pairwise comparisons only

Scheffé (Fs) - all comparisons

Dunnett (Fd) - control group against each experimental group

Bonferroni (Fb) - adjusts the alpha level to alpha / number of comparisons (e.g. 0.05 / 3 ≈ 0.017)

20
Q

What are the steps of post hoc?

A
  1. Calculate Fcomp for each comparison
  2. Calculate the modified F value (Ft/Fs/Fd)
  3. If Fcomp > Ft/Fs/Fd then there is a significant difference
  4. Reject H0
21
Q

What is a factorial design?

A

An ANOVA with more than 1 IV

22
Q

What are the advantages of a factorial design?

A

Economy - more information for the same amount of work

Experimental control and increased generality of results - by including a possible extraneous factor as an IV

Interaction - the effect of an IV rarely occurs in isolation

23
Q

What are the F ratios calculated in a factorial design?

A

F(main effect of A) = variability due to A / within-group variance

F(main effect of B) = variability due to B / within-group variance

F(interaction of A X B) = variability due to the interaction of A X B / within-group variance

24
Q

How do you calculate a main effect?

A

To find the separate effect of each IV on the DV, you average scores across the levels of the other factor and compare those averages, e.g. the average score for a1 v the average score for a2 (see the sketch below)
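
A tiny sketch in Python for a 2 X 2 design, with made-up cell means:

```python
# Main effect of factor A: average the cell means over the levels of B.
cell_means = [
    [4.0, 6.0],    # a1b1, a1b2
    [8.0, 10.0],   # a2b1, a2b2
]

mean_a1 = sum(cell_means[0]) / len(cell_means[0])  # 5.0
mean_a2 = sum(cell_means[1]) / len(cell_means[1])  # 9.0
print(mean_a1, mean_a2)  # compare these marginal means for the main effect of A
```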

25
Q

How can you visually tell if there is an interaction?

A

If the patterns of the data differ across conditions, i.e. the lines on a graph of the cell means are not parallel

26
Q

What is a mixed ANOVA model?

A

This type of design involves 2+ IVs where some are WS and some BS

27
Q

What are the two perspectives variability can be examined from?

A
  1. Systematic (controlled) v unsystematic (error variance)
  2. Whether the source of variance is BS or WS

28
Q

What is the difference between how F ratios are calculated in mixed ANOVA designs?

A

They are evaluated against the error term that reflects that component of the study

E.g. if A was BS and B was WS —> Fa would use the BS error term, and Fb and Fab would use the WS error term

29
Q

What does a significant interaction mean for main effects?

A

They may be misleading

30
Q

What are the follow up tests for factorial and mixed ANOVAs?

A

Keppel’s follow up strategy:

If there is a significant interaction —> do simple effects tests followed by simple comparisons (the error term differs depending on which simple effects you decide to do if it's a mixed design)

If there is not a significant interaction —> do main comparisons on significant factor

31
Q

What are simple effects tests?

A

Investigate the effect of 1 IV on the DV at each level of the other IV

Used to understand the nature of the interaction

Can follow two strategies

ANOVAs using the pooled error term to calculate F

32
Q

Why do you use the pooled error term to calculate F in simple effects and comparisons?

A

Based on all the data

Better estimate with more df

Captures total variance in data

33
Q

What are the differences in follow up tests for mixed ANOVA?

A

The two strategies differ:

> for the effect of the WS IV on the DV at each level of group (BS IV) = RM ANOVA using the pooled error term and the WS error

> for the effect of the BS IV on the DV at each level of the WS IV = BS ANOVA using the pooled error term AND the within-cell error (calculated yourself; it is the average of the variances from all cells in the design)

34
Q

What is an effect size?

A

A measure of the magnitude of the experimental effect and the strength of association between 2 or more variables.

When IVs have > 2 levels (ANOVAs) or are continuous, effect size estimates usually describe the proportion of variability accounted for by each IV

35
Q

Why do we need effect sizes?

A

Meta-analyses

Compare similar studies with different designs

Practical importance

Calculate power and estimate required sample size

36
Q

Why is effect size more reliable than F?

A

It is independent of sample size so we can compare across studies

37
Q

What are the families of effect sizes?

A

D family - standardised differences between means

R family - variations on the correlation coefficient

38
Q

How is effect size calculated in general?

A

ES = SS effect / SS total

39
Q

What are the different effect sizes?

A

Eta squared (n^2)

Partial eta squared (np^2)

Omega squared (w^2)

Partial omega squared (wp^2)

40
Q

Eta squared

A

Proportion of the total variance in the DV that is attributed to an effect

n^2 = SS effect / SS total

Problem: the value for an effect depends on the number of other factors in the ANOVA and their magnitude

41
Q

Partial eta squared

A

Resolves the n^2 problem by examining the effect of a given IV relative to only two variance components in the factorial model, rather than the total variance: that due to the IV's effect and that due to error variance

When there is 1 IV, n^2 = np^2, but when there are more, n^2 < np^2

np^2 = SS effect / (SS effect + SS error)

42
Q

Omega squared

A

Is used to obtain a relatively unbiased estimate of the variance explained in the population by an effect

Unlike n^2 it takes random error into account (MS error)

w^2 = (SS effect - (df effect X MS error)) / (SS total + MS error)

w^2 values are smaller than the corresponding n^2 values

43
Q

Partial omega squared

A

Similarly to partial eta squared, it represents an unbiased estimate of the population proportion of variance in the DV associated with an effect, after variability associated with all other effects has been removed from consideration

wp^2 = (SS effect - (df effect X MS error)) / (SS effect + ((N - df effect) X MS error))
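
A sketch computing all four effect sizes in Python with the formulas from these cards; every number is made up (the extra 10 units of SS total stand in for another factor in the design):

```python
# Hypothetical ANOVA output.
ss_effect, ss_error, ss_total = 30.0, 60.0, 100.0
df_effect, ms_error, N = 2, 2.0, 33

eta_sq = ss_effect / ss_total                        # n^2
partial_eta_sq = ss_effect / (ss_effect + ss_error)  # np^2
omega_sq = (ss_effect - df_effect * ms_error) / (ss_total + ms_error)
partial_omega_sq = (ss_effect - df_effect * ms_error) / (
    ss_effect + (N - df_effect) * ms_error
)

# n^2 = 0.3, np^2 = 0.33, w^2 = 0.25, wp^2 = 0.28 -- note w^2 < n^2, as above.
print(eta_sq, round(partial_eta_sq, 2), round(omega_sq, 2),
      round(partial_omega_sq, 2))
```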

44
Q

How do you interpret effect size?

A

Small > 0.01

Medium > 0.06

Large > 0.15 (0.14 according to Cohen)

45
Q

Why do we conduct significance testing?

A

To know if we should accept or reject H0

46
Q

What are the significance testing errors?

A

Type 1 error = rejecting the null hypothesis when it is true (the alpha level indicates the probability of this error)

Type 2 error = failing to reject the null hypothesis when it is false, i.e. there is an effect (the beta level, B, indicates the probability of this error; power = 1 - B)

47
Q

What happens if you decrease alpha?

A

Power (1-B) decreases as B increases

48
Q

What is power?

A

Ability to detect a difference when it exists

49
Q

How can you increase power?

A

Decrease EE

Use more sensitive design (WS)

Increase sample size

Increase TE

50
Q

What is the minimum that power should be?

A

80% - Cohen (1977)

This balances the alpha and beta risks, as a Type 1 error is more serious than a Type 2 error

alpha = 0.05, B = 0.2 and power = 0.8

51
Q

What are the dangers of underpowered studies?

A

Sig effects not found when they do exist

Non-significant effects can't be interpreted

Inconsistent results

Problems for replication studies

52
Q

What do you need to calculate power?

A

Sample size

Effect size (expected)

Alpha

Use SPSS