ANOVA Flashcards

1
Q

What does ANOVA stand for?

A

ANalysis Of VAriance

2
Q

When would you use ANOVA?

A

Parametric Data

3 or more groups/conditions

3
Q

Can ANOVA be used for independent groups or repeated measures designs?

A

BOTH

4
Q

What is a type I error?

A

When we conclude there is an effect when in fact there is not (a false positive).

5
Q

What is a type II error?

A

When we conclude there is no effect when in fact there is one (a false negative).

6
Q

Why do we use ANOVA instead of doing lots of t-tests?

A

A t-test cannot compare more than 2 conditions of an IV.

Running multiple t-tests also inflates the overall type I error rate.

7
Q

What is the overall type I error rate called?

A

the family wise error rate.

8
Q

What is the calculation for familywise error rate?

A

1 − (0.95)ⁿ

n = the number of comparisons (e.g. 3), written as the exponent; 0.95 is 1 − α when α = .05

9
Q

If you have 3 groups what is the family wise error?

A

1 − (0.95)³ = .14
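A minimal Python sketch of this calculation (the comparison counts are just examples):

```python
# Familywise error rate for n comparisons, each tested at alpha = .05:
# FWER = 1 - (1 - alpha)^n
def familywise_error(n_comparisons, alpha=0.05):
    return 1 - (1 - alpha) ** n_comparisons

print(round(familywise_error(3), 2))  # 0.14 for three comparisons
print(round(familywise_error(6), 2))  # 0.26 for six comparisons
```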

10
Q

If the familywise error rate is .14 instead of .05 what does this tell us?

A

The likelihood of making a type I error increases from 5% to 14%.

11
Q

What does ANOVA control for?

A

It controls for type I error rate.

12
Q

ANOVA is a _____ test.

A

Parametric

13
Q

ANOVA tests whether the ____ of one or more IVs has a statistically significant influence on the value of the _____.

A

Manipulation

DV

14
Q

Our alternative hypothesis for any ANOVA will be that the ____ differ.

A

means

15
Q

ANOVA takes into account how a set of scores ____ around the means for each condition or group.

A

vary

16
Q

What are the 3 causes of variability?

A

Treatment effects - the effect of the IV

Individual differences - within-group variability, e.g. the extent to which individuals in a group differ even though they have received the same treatment

Random error - experimental error, e.g. all the skilled participants randomly ending up in the same group

17
Q

The 3 causes of variability are split into 2 groups - what are these groups and how are they split?

A

Systematic Variance:
1. Treatment Effects

Experimental Variance:

  1. Individual differences
  2. Random errors
18
Q

ANOVA _____ out the different causes of variance in a data set.

A

Partitions out!

19
Q

What Variance does ANOVA calculate?

A

Total variability = Systematic variance + Experimental variance

i.e. ANOVA calculates:

Systematic variance

Experimental variance / error variance

20
Q

In ANOVA what do we use to calculate variance?

A

Sum of Squares! SS

21
Q

How do we calculate sum of squares?

A
SS = Σx² − (Σx)² / N
x = individual scores
N = sample size
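A quick check of this computational formula in Python against NumPy's built-in sample variance (the scores are hypothetical):

```python
import numpy as np

scores = np.array([4.0, 6.0, 5.0, 7.0, 3.0])  # hypothetical scores
N = len(scores)

# Computational formula from the card: SS = sum(x^2) - (sum(x))^2 / N
ss = np.sum(scores ** 2) - np.sum(scores) ** 2 / N

# Same thing as the sum of squared deviations from the mean
ss_check = np.sum((scores - scores.mean()) ** 2)

# Next card: variance = SS / df, with df = N - 1 for a single sample
variance = ss / (N - 1)

print(ss, ss_check, variance)    # 10.0 10.0 2.5
print(np.var(scores, ddof=1))    # 2.5 (matches)
```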
22
Q

After we calculate the SS how do we calculate variance?

A

We divide SS by df

SS/df

23
Q

What does ANOVA not tell us?

A

It tells us there is a significant difference between the means BUT it doesn't tell us which means differ!

24
Q

If we have significant ANOVA results what is important to do?

A

Follow up tests!

to see where the differences lie!

25
Q

ANOVA is a parametric test, what does this mean about our data?

A

> Homogeneity of variance (equal variance across samples)
> Normal distribution
> Interval/ratio data
> At least 10 people

26
Q

Independent variables are also called _____ and their values are called ______.

A

Factors

Levels :)

27
Q

What ANOVA do you use if you have 3 independent groups?

A

one-way independent ANOVA
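As a sketch of what this looks like outside SPSS, SciPy's f_oneway runs a one-way independent ANOVA (the three groups below are hypothetical data):

```python
from scipy import stats

# Hypothetical scores for three independent groups
group_a = [5, 7, 6, 9, 8]
group_b = [4, 5, 3, 6, 5]
group_c = [8, 9, 10, 7, 9]

f_ratio, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_ratio:.2f}, p = {p_value:.3f}")
```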

28
Q

Before we run an ANOVA we must check _____.

A

Assumptions

29
Q

What are the 3 assumptions we must check?

A

> Normality of data
> Ratio/interval scale
> Homogeneity of variance

30
Q

How do we check for Normality?

A

Run the Shapiro-Wilk test.
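A minimal sketch of the Shapiro-Wilk test in Python, using scipy.stats.shapiro on hypothetical scores:

```python
from scipy import stats

scores = [5, 7, 6, 9, 8, 4, 5, 3, 6, 5, 8, 9]  # hypothetical sample

w_stat, p_value = stats.shapiro(scores)

# Decision rule from the next card: p > .05 means no evidence against normality
if p_value > 0.05:
    print("p > .05: treat the data as normally distributed")
else:
    print("p < .05: the data deviate from a normal distribution")
```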

31
Q

If the Shapiro-Wilk test gives:
p > .05

what does this mean?

A

Your data is normally distributed.

32
Q

Apart from the Shapiro-Wilk test, what else would tell you about normality?

A

Distribution Curve

Histogram

33
Q

How do we check for homogeneity of variance?

A

Levene’s Test of Homogeneity :)
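A sketch of the same check in Python with scipy.stats.levene (hypothetical groups):

```python
from scipy import stats

group_a = [5, 7, 6, 9, 8]
group_b = [4, 5, 3, 6, 5]
group_c = [8, 9, 10, 7, 9]

stat, p_value = stats.levene(group_a, group_b, group_c)

# Non-significant result = variances are approximately equal
print("homogeneity OK" if p_value > 0.05 else "homogeneity violated")
```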

34
Q

If assumptions are violated in an independent groups ANOVA what do we do?

A

Conduct a Kruskal-Wallis Test
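For reference, the Kruskal-Wallis test is available in SciPy as scipy.stats.kruskal; a minimal sketch with hypothetical groups:

```python
from scipy import stats

group_a = [5, 7, 6, 9, 8]
group_b = [4, 5, 3, 6, 5]
group_c = [8, 9, 10, 7, 9]

h_stat, p_value = stats.kruskal(group_a, group_b, group_c)
print(f"H = {h_stat:.2f}, p = {p_value:.3f}")
```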

35
Q

What are the disadvantages of independent groups design for ANOVA?

A

> More participants, potentially more expensive

>Does not partition out the 2 sources of error/experimental variance

36
Q

What is randomisation?

A

When you randomly allocate participants to groups.

37
Q

What’s the symbol for ANOVA?

A

F

the F ratio!

38
Q

What is the F ratio?

A

systematic variance / experimental error variance

The same ratio is sometimes written as experimental variance / error variance, which can be confusing.

39
Q

If systematic variance is larger than experimental error variance what is the F ratio likely to be?

A

Significant

F = 10/1 = 10 = big!

40
Q

If systematic variance is smaller than experimental error variance what is the F ratio likely to be?

A

Not significant!

F = 1/10 = 0.1 = small :(

41
Q

The larger the F ratio the _____ the variance.

A

The greater

42
Q

The larger the F ratio, the more ___ the result.

A

significant

43
Q

How do we calculate how much variability there is between all of the scores?

A

Get the Total Sum of Squares (SS total)

44
Q

How do we calculate how much of this variability can be explained by the IV?

A

Get the Treatment Sum of Squares

(SS treatment)

45
Q

How do we calculate how much of this variability can’t be explained?

A

Get the error sum of squares.

(SS error)

46
Q

After getting the SS treatment and SS error what do we calculate?

A

Mean Squares Treatment

Mean Squares error

47
Q

What is this the formula for?

SS treatment/ df treatment

A

Mean Squares Treatment

48
Q

What is this the formula for?

SS error/ df error

A

Mean Squares error

49
Q

How do you calculate the df treatment?

A

Number of groups - 1

50
Q

How do you calculate the df error?

A

The no. of participants - the number of conditions
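Putting the last few cards together, here is a hand-rolled one-way independent ANOVA in Python on hypothetical data, showing SS treatment, SS error, the two dfs, the mean squares and the F ratio:

```python
import numpy as np

# Hypothetical data: 5 participants in each of 3 independent groups
groups = [np.array([5., 7., 6., 9., 8.]),
          np.array([4., 5., 3., 6., 5.]),
          np.array([8., 9., 10., 7., 9.])]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()
n_total, k = len(all_scores), len(groups)

# SS total = variability of every score around the grand mean
ss_total = np.sum((all_scores - grand_mean) ** 2)
# SS treatment = variability of the group means around the grand mean
ss_treatment = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# SS error = whatever is left unexplained
ss_error = ss_total - ss_treatment

df_treatment = k - 1          # number of groups - 1
df_error = n_total - k        # number of participants - number of conditions

ms_treatment = ss_treatment / df_treatment
ms_error = ss_error / df_error
f_ratio = ms_treatment / ms_error   # F = MS treatment / MS error

print(f"F({df_treatment}, {df_error}) = {f_ratio:.2f}")  # F(2, 12) = 11.92 here
```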

51
Q

F distribution is _______. Values are ______ and we don’t ___ the p value for 1 tailed hypotheses.

A

Non-symmetrical
positive
half

52
Q

If you get a non significant result in Levene’s test of equality of variances what does this mean?

A

The group variances are approximately equal, so the homogeneity assumption is met :)

53
Q

How do we get the F ratio from the SPSS table?

A

divide group mean square by error mean square

54
Q

What is partial eta squared?

A

The effect size for ANOVA

55
Q

What is considered a small, medium and large effect size in ANOVA?

A
small = .01
medium = .09
large = .25
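Partial eta squared can be worked out from the sums of squares as SS treatment / (SS treatment + SS error); a tiny sketch with hypothetical values:

```python
# Hypothetical sums of squares
ss_treatment, ss_error = 40.0, 30.0

partial_eta_sq = ss_treatment / (ss_treatment + ss_error)
print(round(partial_eta_sq, 2))  # 0.57 -> large by the benchmarks on this card
```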
56
Q

How do you report an ANOVA?

A

F(df treatment, df error) = F ratio, p < .xxx (if significant, report the effect size: ηp² = .xx)
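A tiny Python illustration of that reporting template (all numbers hypothetical):

```python
# Hypothetical values plugged into the reporting template
df_treatment, df_error = 2, 27
f_ratio, p_value, partial_eta_sq = 5.94, 0.007, 0.31

print(f"F({df_treatment}, {df_error}) = {f_ratio:.2f}, "
      f"p = {p_value:.3f}, partial eta squared = {partial_eta_sq:.2f}")
# -> F(2, 27) = 5.94, p = 0.007, partial eta squared = 0.31
```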

57
Q

What is the difference between repeated measures ANOVA and independent groups ANOVA?

A

Repeated measures ANOVA = 3 or more conditions, with the same participants in every condition.

Independent groups ANOVA = 3 or more groups, with different participants in each group.

58
Q

In a repeated measures ANOVA if the data is non-parametric, what test should you use?

A

Friedman Test
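SciPy provides this as scipy.stats.friedmanchisquare; a minimal sketch with hypothetical repeated-measures data:

```python
from scipy import stats

# Hypothetical data: the same 6 participants measured under 3 conditions
cond_1 = [5, 7, 6, 9, 8, 6]
cond_2 = [4, 5, 3, 6, 5, 4]
cond_3 = [8, 9, 10, 7, 9, 8]

chi_sq, p_value = stats.friedmanchisquare(cond_1, cond_2, cond_3)
print(f"chi-square = {chi_sq:.2f}, p = {p_value:.3f}")
```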

59
Q

Friedman Test is an extension of the ….

A

The Wilcoxon Signed-Ranks test.

60
Q

What are the ANOVA advantages of a repeated measures study?

A

Requires fewer participants = less expensive

scores for each participant are related

We can calculate and remove the variance due to individual differences :) - can’t do this for independent groups design.

61
Q

What are the ANOVA disadvantages of repeated measures study?

A

The scores come from the same participants, so they are related rather than independent. This means an extra assumption has to be met, called sphericity!

62
Q

What extra assumption in repeated measures ANOVA do we need to consider?

A

Sphericity

63
Q

What is sphericity?

RM

A

The assumption that the variances of the differences between each pair of conditions are equal.

64
Q

What can a violation of sphericity do?

RM

A

Invalidate the F ratio

65
Q

What test do we use to test for sphericity?

(RM)

A

Mauchly’s Test of Sphericity.

66
Q

What does it mean when Mauchly’s test of sphericity is:
p >.05

(RM)

A

Sphericity is not violated and we can carry on!

67
Q

What does it mean when Mauchly’s test of sphericity is:

p < .05

A

Sphericity is violated :(

68
Q

What do we do when sphericity has been violated?

RM

A

Report the “Greenhouse-Geisser Epsilon Correction”

69
Q

What does the Greenhouse Geisser do?

RM

A

It adjusts (reduces) the degrees of freedom, making the test more conservative and reducing the chance of a type I error when sphericity is violated.

70
Q

Apart from sphericity what is another issue with repeated measures designs?

A

Order effects!

performance on one condition can change performance on another condition

71
Q

How are order effects resolved?

RM

A

BY COUNTERBALANCING

72
Q

What is counterbalancing?

RM

A

Person 1 = 1 2 3
Person 2 = 2 1 3
Person 3 = 3 1 2

or by randomly presenting conditions across trials
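A minimal sketch of generating counterbalanced orders in Python (full counterbalancing over three hypothetical conditions):

```python
from itertools import permutations

conditions = ["A", "B", "C"]

# Every possible order of the three conditions; spread participants across
# these orders (or present conditions in a random order per participant)
for participant, order in enumerate(permutations(conditions), start=1):
    print(f"Participant {participant}: {' -> '.join(order)}")
```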

73
Q

In repeated measures ANOVA, the effect of the IV is found in the _____-participants variance.

A

within!

(in an independent groups design it is found in the between-groups variance)

74
Q

What is variance caused by?

A

Systematic variance - the effect of the IV

Experimental (error) variance - individual differences and random error

75
Q

In RM ANOVA what can we do with error variance?

A

We can split it in two, because we can calculate (and remove) the variance due to individual differences; only the random/residual error remains unexplained.

76
Q

What is the F ratio in RM?

A

systematic variance/ random error variance

77
Q

We want F to be as ____ as possible for it to be significant.

A

Large

78
Q

The smaller the random/residual error- the ____ the F ratio.

A

Bigger

79
Q

How do you calculate df treatment?

A

number of conditions/groups- 1

80
Q

How do you calculate df error?

A

(number of participants - 1) × (number of conditions - 1)

81
Q

How do you calculate the mean square?

A

the sum of squares for that source divided by its df, e.g. mean square error = SS error / df error

82
Q

How do you calculate the F ratio on the SPSS table?

A

mean square treatment / mean square error

83
Q

What is an a priori comparison?

A

Comparison chosen before data has been collected.

84
Q

What is a post hoc comparison?

A

Chosen by the experimenter after the data has been collected.

85
Q

If we are making multiple comparisons, you need to control for…

A

Type I errors

86
Q

What is another term for controlling type I errors?

A

controlling alpha inflation

87
Q

What are the 2 ways to control for type I errors?

A

Alpha controlled per comparison (probability of making a Type I error on any given comparison) e.g. .05

Alpha controlled familywise (probability of making at least one Type I error on a set of comparisons)

88
Q

What is the chance of making a Type I error across 6 comparisons?

Familywise error = 1 − (1 − α)⁶

A

1 − (0.95)⁶ ≈ .26

i.e. about a 26% chance of making at least one type I error.

89
Q

Before conducting Bonferroni-corrected t-tests, what do we not need to do?

A

We don't have to apply the Bonferroni correction ourselves (i.e. divide the significance value, .05, by the number of t-tests being conducted) BECAUSE SPSS does it for us :)
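For anyone curious what the manual version looks like, a sketch of the Bonferroni correction in plain Python (hypothetical p values):

```python
# Manual Bonferroni correction, shown only for illustration -
# the point of this card is that SPSS applies it for you.
alpha = 0.05
n_tests = 3

corrected_alpha = alpha / n_tests   # compare each t-test's p against this
print(corrected_alpha)              # 0.016666...

# Equivalently, multiply each raw p value by the number of tests
raw_p_values = [0.01, 0.04, 0.20]   # hypothetical
corrected_p = [round(min(p * n_tests, 1.0), 3) for p in raw_p_values]
print(corrected_p)                  # [0.03, 0.12, 0.6]
```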

90
Q

What do post hoc comparisons allow?

A

All possible pairwise comparisons between the groups/conditions to be made.

91
Q

When would you do a post hoc comparison?

A

When you get a SIGNIFICANT ANOVA

92
Q

If we avoid type I errors, what becomes more likely?

A

More chance of type II error

93
Q

For independent ANOVA what follow up test is used?

A

Tukey HSD Test
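Recent SciPy versions (1.8+) include this as scipy.stats.tukey_hsd; a minimal sketch on hypothetical groups:

```python
from scipy import stats

group_a = [5, 7, 6, 9, 8]
group_b = [4, 5, 3, 6, 5]
group_c = [8, 9, 10, 7, 9]

result = stats.tukey_hsd(group_a, group_b, group_c)
print(result)  # table of pairwise mean differences, p values and CIs
```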

94
Q

For repeated measures, what follow up test is used?

A

Bonferroni corrected tests