Week 5: Comparing Means - One-way ANOVA Flashcards

1
Q

What does ANOVA stand for?

A

Analysis of Variance

2
Q

What is the decision tree for choosing a one-way ANOVA? - (5)

A

Q: What sort of measurement? A: Continuous
Q: How many predictor variables? A: One
Q: What type of predictor variable? A: Categorical
Q: How many levels of the categorical predictor? A: More than two
Q: Same or Different participants for each predictor level? A: Different

3
Q

When is ANOVA used?

A

if you are comparing more than 2 groups

4
Q

Example of ANOVA RQ

A

Which is the fastest animal in a maze experiment - cats, dogs or rats?

5
Q

We can't do three separate t-tests (for example, to find the fastest animal in a maze experiment - cats, dogs or rats) because

A

Doing separate t-tests inflates the Type 1 error (a false positive - e.g., concluding a man is pregnant)

The repetition of multiple tests adds multiple chances of error, which may result in a larger overall α error level than the pre-set α level - familywise error

6
Q

What is familywise or experimentwise error rate?

A

The Type 1 error rate across all statistical tests conducted on the same experimental data

7
Q

Familywise error is related to

A

type 1 error

8
Q

What is the alpha level probability

A

probability of wrongly rejecting the null hypothesis (accepting the alternative when it is false) = Type 1 error

9
Q

If we conduct 3 separate t-tests to compare which is the fastest animal in the maze experiment - cats, dogs or rats - with an alpha level of 0.05 - (4)

A
  • Each test has a 5% chance of a Type 1 error (falsely rejecting H0)
  • So the probability of making no Type 1 error is 95% for a single test
  • However, across multiple tests the probability of making no Type 1 error falls: for 3 tests together it is 0.95 × 0.95 × 0.95 = 0.857
  • This means the probability of making at least one Type 1 error increases to 1 - 0.857 = 0.143 (a 14.3% familywise error rate) - see the sketch below
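
(Worked example) A minimal Python sketch of the familywise error arithmetic above; the numbers are the ones quoted on this card:

```python
# Familywise error rate for k independent tests, each at alpha = .05
alpha = 0.05
k = 3  # e.g., three pairwise t-tests for cats vs dogs vs rats

p_no_error_single = 1 - alpha           # 0.95 for a single test
p_no_error_all = p_no_error_single**k   # 0.95 * 0.95 * 0.95 = 0.857
familywise_error = 1 - p_no_error_all   # 0.143, i.e. a 14.3% chance of at least one Type 1 error

print(round(p_no_error_all, 3), round(familywise_error, 3))
```
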
10
Q

Much like the model for t-tests, we can write a general linear model for

A

ANOVA - 3 levels of categorical variable with dummy variables

11
Q

When we perform a t-test, we test the hypothesis that the two samples have the same

A

mean

12
Q

ANOVA tells us whether three or more means are the same so tests H0 that

A

all group means are equal

13
Q

An ANOVA produces an

A

F statistic or F ratio

14
Q

The F ratio produced in ANOVA is similar to t-statistic in a way that it compares the

A

amount of systematic variance in data to the amount of unsystematic variance i.e., ratio of model to its error

15
Q

ANOVA is an omnibus test which means it tests for and tells us - (2)

A

overall experimental effect

tells whether experimental manipulation was successful

16
Q

An ANOVA is an omnibus test and its F ratio does not provide specific information about which

A

groups were affected by the experimental manipulation

17
Q

Just like t-test can be represented by linear regression equation, ANOVA can be represented by a

A

multiple regression equation for three means, where the model accounts for the 3 levels of the categorical variable with dummy variables

18
Q

As compared to independent samples t-test that compares means of two groups, one-way ANOVA compares means of

A

3 or more independent groups

19
Q

In one-way ANOVA we use … … to test assumption of equal variances across groups

A

Levene’s test
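
(Note) Outside SPSS, the same assumption check could be sketched with scipy.stats.levene; the three score lists here are hypothetical:

```python
from scipy.stats import levene

# Hypothetical scores for three independent groups
group_a = [12, 15, 14, 10, 13]
group_b = [22, 25, 24, 20, 23]
group_c = [18, 17, 19, 21, 16]

stat, p = levene(group_a, group_b, group_c)  # tests equality of variances across groups
print(stat, p)  # p > .05 -> equal variances can be assumed
```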

20
Q

What is one-way ANOVA also called?

A

between-subjects or independent ANOVA (different participants in each condition)

21
Q

What does this one-way ANOVA output show?

A

Levene's test is non-significant, so equal variances are assumed

22
Q

What does this SPSS output show in one-way ANOVA?

A

F(2,42) = 5.94, p = 0.005, eta-squared = 0.22
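
(Note) The SPSS output itself is not reproduced here; as a rough illustration of where such an F and p come from, a hedged sketch with scipy.stats.f_oneway on hypothetical data:

```python
from scipy.stats import f_oneway

# Hypothetical scores for three independent groups (made-up numbers)
full_sleep    = [65, 70, 72, 68, 75]
partial_sleep = [60, 66, 64, 63, 69]
no_sleep      = [50, 55, 53, 58, 52]

f_stat, p_value = f_oneway(full_sleep, partial_sleep, no_sleep)
print(f_stat, p_value)  # reported as F(df_between, df_within) = ..., p = ...
```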

23
Q

How is effect size (eta-squared) calculated in one-way ANOVA?

A

Between groups sum of squares divided by total sum of squares

24
Q

What is the eta-squared/effect size for this SPSS output and what does this value mean? - (2)

A

830.207/3763.632 = 0.22
22% of the variance in exam scores is accounted for by the model
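
(Worked example) The eta-squared arithmetic, using the sums of squares quoted on this card:

```python
# Sums of squares quoted from the SPSS ANOVA table on the card
ss_between = 830.207   # between-groups (model) sum of squares
ss_total   = 3763.632  # total sum of squares

eta_squared = ss_between / ss_total
print(round(eta_squared, 2))  # 0.22 -> 22% of the variance explained by the model
```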

25
Q

Interpreting eta-squared, what do values of 0.01, 0.06 and 0.14 mean? - (3)

A
  1. 0.01 = small effect
  2. 0.06 = medium effect
  3. 0.14 = large effect
26
Q

What happens if the Levene’s test is significant in the one-way ANOVA?

A

then use the statistics from the Welch or Brown-Forsythe test

27
Q

The Welch and Brown-Forsythe tests make adjustments to the DF, which affects

A

the statistics you get and whether the p value is significant or not

28
Q

What does this post-hoc table of Bonferroni tests show in one-way ANOVA ? - (3)

A
  • Full sleep vs partial sleep: p = 1.00, not significant
  • Full sleep vs no sleep: p = 0.007, significant
  • Partial sleep vs no sleep: p = 0.032, significant
29
Q

Diagram of example of grand mean

A

The mean of all scores, regardless of the participant's condition

30
Q

What are the total sum of squares (SST) in one-way ANOVA?

A

difference of the participant’s score from the grand mean squared and summed over all participants

31
Q

What is model sum of squares (SSM) in one-way ANOVA?

A

difference of the model score from the grand mean squared and summed over all participants

32
Q

What is residual sum of squares (SSR) in one-way ANOVA?

A

difference of the participant’s score from the model score squared and summed over all participants

33
Q

The residuals sum of squares (SSR) tells us how much of the variation cannot be

A

explained by the model, i.e., the amount of variation caused by extraneous factors

34
Q

We divide each sum of squares by its

A

DF to calculate the corresponding mean squares

35
Q

For SST, the DF we divide by is

A

N-1

36
Q

For SSM, the DF we divide by is

A

the number of groups (parameters) in the model, k, minus 1

37
Q

For SSM if we have three groups then its DF will be

A

3-1 = 2

38
Q

For SSR, we divide by its DF to calculate it, which will be the

A

total sample size, N, minus the number of groups, k

39
Q

Formulas for dividing each sum of squares by its DF to calculate the mean squares - (3)

A
  • MST = SST / (N-1)
  • MSR = SSR / (N-k)
  • MSM = SSM / (k-1) - see the worked sketch below
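
(Worked sketch) A minimal Python sketch, on hypothetical data, of the whole partition the surrounding cards describe: SST, SSM and SSR, the mean squares, and the F ratio:

```python
import numpy as np

# Hypothetical scores for k = 3 independent groups
groups = [np.array([3, 2, 1, 1, 4]),
          np.array([5, 2, 4, 2, 3]),
          np.array([7, 4, 5, 3, 6])]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()
N, k = all_scores.size, len(groups)

sst = ((all_scores - grand_mean) ** 2).sum()                      # total sum of squares
ssm = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)  # model (between-groups) SS
ssr = sum(((g - g.mean()) ** 2).sum() for g in groups)            # residual (within-groups) SS

msm = ssm / (k - 1)   # model mean square
msr = ssr / (N - k)   # residual mean square
f_ratio = msm / msr

print(round(sst, 2), round(ssm + ssr, 2))             # SST = SSM + SSR
print(round(msm, 2), round(msr, 2), round(f_ratio, 2))
```
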
40
Q

SSM tells us the total variation that the

A

experimental manipulation explains

41
Q

What does MSM represent?

A

average amount of variation explained by the model (e.g. the systematic variation),

42
Q

What does MSR represent?

A

average amount of variation explained by extraneous variables (the unsystematic variation).

43
Q

The F ratio in one-way ANOVA can be calculated by

A

F = MSM / MSR (the model mean square divided by the residual mean square)

44
Q

If F ratio in one-way ANOVA is less than 1 then it represents a

A

non-significant effect

45
Q

Why does an F less than 1 in one-way ANOVA represent a non-significant effect?

A

An F ratio less than 1 means that MSR is greater than MSM, i.e., there is more unsystematic than systematic variance

46
Q

If F is greater than 1 in one-way ANOVA it shows the likelihood of … but doesn't tell us - (2)

A

indicates that the experimental manipulation had some effect above and beyond the effect of individual differences in performance

Does not tell us whether the F-ratio is large enough not to be a chance result

47
Q

When F statistic is large in one-way ANOVA then it tells us that the

A

MSM is greater than MSR

48
Q

To discover if the F statistic is large enough not to be a chance result in one-way ANOVA, we

A

compare the obtained value of F against the maximum value we would expect to get by chance if the group means were equal, in an F-distribution with the same degrees of freedom
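
(Note) That comparison can be sketched with scipy's F distribution; the F value and degrees of freedom below are the illustrative ones from the earlier output card:

```python
from scipy.stats import f

f_obtained = 5.94          # illustrative F from the earlier SPSS output card
df_model, df_resid = 2, 42

f_critical = f.ppf(0.95, df_model, df_resid)    # largest F expected by chance at alpha = .05
p_value = f.sf(f_obtained, df_model, df_resid)  # probability of an F this large if H0 is true

print(round(f_critical, 2), round(p_value, 4))  # F > critical (p < .05) -> significant
```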

49
Q

High values of F are rare by

A

chance

50
Q

Large values of F are more common in studies with a low number of

A

participants

51
Q

The F-ratio in one-way ANOVA tells us whether the model fitted to the data accounts for more variation than extraneous factors, but does not tell us where

A

differences between groups lie

52
Q

If F-ratio in one-way ANOVA is large enough to be statistically significant then we know

A

that one or more of the differences between means is statistically significant (e.g. either b2 or b1 is statistically significant)

53
Q

It is necessary after conducting a one-way ANOVA to carry out further analysis to find out

A

which groups differ

54
Q

The power of F statistic is relatively unaffected by

A

non-normality

55
Q

When group sizes are not equal, the accuracy of F is

A

affected by skew, and non-normality also affects the power of F in quite unpredictable ways

56
Q

When group sizes are equal, the F statistic can be quite robust to

A

violations of normality

57
Q

What tests do you do after performing a one-way ANOVA and finding significant F test? - (2)

A
  • Planned contrasts
  • Post-hoc tests
58
Q

What do post-hoc tests do? - (2)

A
  • compare all pairwise differences in mean
  • Used if no specific hypotheses concerning differences have been made (see the sketch below)
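
(Note) A hedged sketch of what "all pairwise differences" means in practice, using scipy.stats.ttest_ind over every pair of three hypothetical groups (the group names and scores are made up):

```python
from itertools import combinations
from scipy.stats import ttest_ind

# Hypothetical scores for three independent groups
groups = {
    "full_sleep":    [65, 70, 72, 68, 75],
    "partial_sleep": [60, 66, 64, 63, 69],
    "no_sleep":      [50, 55, 53, 58, 52],
}

pairs = list(combinations(groups, 2))  # every pairwise combination of groups
corrected_alpha = 0.05 / len(pairs)    # Bonferroni-adjusted criterion (see the later card)

for name_a, name_b in pairs:
    t, p = ttest_ind(groups[name_a], groups[name_b])
    print(name_a, "vs", name_b, round(p, 4), "significant:", p < corrected_alpha)
```
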
59
Q

What is the issue with post-hoc tests?

A
  • because every pairwise combination is considered the type 1 error rate increases, so normally the type 1 error rate is reduced by modifying the critical value of p
60
Q

Are post-hoc tests like a one- or two-tailed hypothesis?

A

two-tailed

61
Q

Are planned contrasts like a one- or two-tailed hypothesis?

A

One-tailed hypothesis

62
Q

What is the most common modification of the critical value of p in post-hoc tests?

A

Bonferroni correction, which divides the standard critical value of p=0.05 by the number of pairwise comparisons performed
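
(Worked example) The correction itself is simple arithmetic; dividing alpha by the number of comparisons gives the same decision as multiplying each raw p by that number, which matches how Bonferroni-adjusted p-values are usually reported in SPSS-style output (capped at 1):

```python
alpha, n_comparisons = 0.05, 3

corrected_alpha = alpha / n_comparisons       # 0.0167: criterion for each raw p
raw_p = 0.0107                                # illustrative raw pairwise p-value
adjusted_p = min(raw_p * n_comparisons, 1.0)  # equivalent: adjust p, compare against 0.05

print(round(corrected_alpha, 4), round(adjusted_p, 4))
print(raw_p < corrected_alpha, adjusted_p < alpha)  # same decision either way
```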

63
Q

Planned contrasts are used to investigate a specific

A

hypothesis

64
Q

Planned contrasts do not test for every

A

pairwise difference so are not penalized as heavily as post hoc tests that do test for every difference

65
Q

With planned contrasts you derive the hypotheses before the

A

data is collected

66
Q

Diagram of planned contrasts

A

Contrast 1 = Treatment vs control
Contrast 2 = Treatment 1 vs Treatment 2

67
Q

In planned contrasts, once a condition has been used alone in a contrast, it is

A

never used again

68
Q

In planned contrasts the number of independent contrasts you can make can be defined with

A

k (number of groups) minus 1

69
Q

How does planned contrasts work in SPSS?

A

Coefficients add to 0 for each contrast (e.g., -2 + 1 + 1 = 0), and once a group has been used alone in a contrast, the next contrasts set its coefficient to 0 (e.g., -2 to 0)
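
(Note) A small sketch checking the two properties this card describes, with hypothetical group means: each contrast's weights sum to zero, and independent contrasts are orthogonal:

```python
import numpy as np

# Contrast 1: control (-2) vs the two treatment groups (+1, +1)
# Contrast 2: treatment 1 vs treatment 2; the control's coefficient is set to 0
contrast_1 = np.array([-2, 1, 1])
contrast_2 = np.array([0, 1, -1])

print(contrast_1.sum(), contrast_2.sum())  # both 0: weights add to zero
print(np.dot(contrast_1, contrast_2))      # 0: the contrasts are independent (orthogonal)

# Hypothetical group means (control, treatment 1, treatment 2)
means = np.array([2.2, 3.2, 5.0])
print(np.dot(contrast_1, means))  # contrast 1 estimate: treatments vs control
```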

70
Q

SPSS has a lot of contrasts that are built in, but it is helpful if

A

you know what these contrasts do before entering the data, as they depend on the order in which you coded your variables

71
Q

What are weights?

A

Values we assign to the dummy variables e.g., -2 in the box

72
Q

One type of planned contrast is a polynomial contrast, which

A

tests for trends in the data and in its most basic form looks for a linear trend (i.e., group means increase proportionately)

73
Q

Polynomial contrasts can also look at more complex trends than linear, such as

A

quadratic, cubic and quartic

74
Q

What does a linear trend represent?

A

simply proportionate change in the value of the dependent variable across ordered categories

75
Q

What is a quadratic trend?

A

one change in the direction of the line (e.g. the line is curved in one place)

76
Q

What is a cubic trend?

A

two changes in the direction of the trend

77
Q

What is a quartic trend?

A

has three changes of direction

78
Q

The Bonferroni post-hoc ensures that the type 1 error is below

A

0.05

79
Q

While the Bonferroni correction reduces Type 1 error (being conservative in the Type 1 error for each comparison), it also

A

lacks statistical power (the probability of a Type II error [false negative] will be high), so it increases the chance of missing a genuine difference in the data

80
Q

What post hoc-tests to use if you have equal sample sizes and confident that your group variances are similar?

A

Use REGWQ or Tukey, as they have good power and tight control over the Type 1 error rate

81
Q

What post hoc tests to use if your sample sizes are slightly different?

A

Gabriel's procedure, because it has greater power

82
Q

What post-hoc tests to use if your sample sizes are very different?

A

If sample sizes are very different, use Hochberg's GT2

83
Q

What post-hoc test to run if Levene's test of homogeneity of variance is significant?

A

Games-Howell

84
Q

What post-hoc test to use if you want guaranteed control over the Type 1 error rate?

A

Bonferroni

85
Q

What does this ANOVA error line graph show? - (2)

A
  • Linear trend: as the dose of Viagra increases, so does the mean level of libido
  • Error bars overlap, indicating no clear between-group differences
86
Q

What does the within-groups row give details of in the ANOVA table?

A

SSR (unsystematic variation)

87
Q

The between-groups row in the ANOVA table tells us

A

SSM (systematic variation)

88
Q

What does this ANOVA table demonstrate? - (2)

A
  • Linear trend is significant (p = 0.008)
  • Quadratic trend is not significant (p = 0.612)
89
Q

When we do planned contrasts we arrange the weights such that we compare any group with a positive weight

A

with a negative weight

90
Q

What does this output show if we conduct two planned comparisons: one to test whether the control group was different to the two groups which received Viagra, and one to see whether the two doses of Viagra made a difference to libido - (2)
A

the table of weights shows that contrast 1 compares the placebo group against the two experimental groups,

contrast 2 compares the low-dose group to the high-dose group

91
Q

What does this table show if Levene's test is non-significant (equal variances assumed)?

To test the hypothesis that the experimental groups would increase libido above the levels seen in the placebo group (one-tailed)

To test another hypothesis that a high dose of Viagra would increase libido significantly more than a low dose - (2)
A

For contrast 1, we can say that taking Viagra significantly increased libido compared to the control group (p = .029/2 = .0145)

The significance of contrast 2 tells us that a high dose of Viagra increased libido significantly more than a low dose (p(one-tailed) = .065/2 = .0325)

92
Q

If making a few pairwise comparisons with an equal number of pps in each condition then use …; if making a lot then use … - (2)

A

Bonferroni
Tukey

94
Q

Assumptions of ANOVA - (5)

A
  • Independence of data
  • DV is continuous; IV categorical (3 groups)
  • No significant outliers;
  • DV approximately normally distributed for each category of the IV
  • Homogeneity of variance = Levene's test (see the sketch below)
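
(Note) A minimal sketch, on hypothetical data, of checking the normality assumption per group with scipy.stats.shapiro; homogeneity of variance would be checked with scipy.stats.levene, as in the earlier sketch:

```python
from scipy.stats import shapiro

# Hypothetical DV scores for the three categories of the IV
groups = {
    "group_a": [12, 15, 14, 10, 13, 16, 11],
    "group_b": [22, 25, 24, 20, 23, 21, 26],
    "group_c": [18, 17, 19, 21, 16, 20, 22],
}

# Approximate normality of the DV within each category of the IV
for name, scores in groups.items():
    stat, p = shapiro(scores)
    print(name, "Shapiro-Wilk p =", round(p, 3))  # p > .05 -> no evidence of non-normality
```
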
97
Q

ANOVA compares many means without increasing the chance of

A

type 1 error

98
Q

In one-way ANOVA, we partition the total variance into

A

variance explained by the model (between-groups, SSM) and residual variance within groups (SSR)