Factorial ANOVA Flashcards

1
Q

Overview of a two-way factorial design

A

A two-way factorial ANOVA involves the study of two independent variables. Such a study allows us to examine the effects of each independent variable and also the combination of the two factors. The effects of each independent variable are known as main effects. If the effect of one factor modifies the effect of the other factor, this is known as an interaction effect. For example, if you have a two-factor experiment investigating problem-solving ability, with age (young versus old) and sex (male versus female) as the two factors, you might find in your overall analysis that there is an interaction between age and sex. This indicates that the difference between male and female problem-solving ability varies as a function of the age of the subjects. Equivalently, it means that age-related differences in problem-solving ability depend on the gender of the participants. Because we have two variables interacting, both interpretations are technically correct. Which one you choose will ultimately depend on the research question that motivates your data collection and analysis.

2
Q

Example of two-way ANOVA, main effects, and interactions

A

The concepts of main effects and interactions were also covered in Research Methods A, but there is one important new concept which we will study in this topic. This is simple effects. Simple effects are examined when there is a significant interaction, and these (simple effects) help us examine more fully the nature of the interaction. Consider the following 2 x 2 design, where age and gender are the IVs and problem-solving performance is the DV (higher scores = better performance):

If we simply conducted a one-way ANOVA looking at gender differences in problem solving, we would conclude that males (M = 52.5) and females (M = 52.5) do not differ in this skill. Likewise, if we conducted a one-way ANOVA with age as the IV, we would conclude that there are no age-related differences (declines/increases) in performance: M(old) = 52.5 vs. M(young) = 52.5. However, a cursory glance at the table above indicates that such a conclusion is misguided and incomplete. Really, there is an interactive effect here: the size (and in this case, direction) of the difference in performance between males and females depends on their age. For older participants, males have considerably poorer performance scores than females in the same age bracket. On the other hand, at younger ages, males have better performance than their female counterparts. This is a classic example of an interaction effect. Whenever we find an interaction effect (usually indicated by a significant p value), we want to follow up to determine how the effect of one of our IVs differs as a function of the levels of the other IV. If our primary interest is in gender-related differences in performance, we would conduct simple effects tests: first we look at the gender difference in performance for the young participants only, and then, separately, we evaluate the gender difference in performance for the older participants only. This additional analysis (simple effects) gives us a fuller understanding of how the IV-DV relationship varies as a result of another IV.

To recap, simple effects involve an analysis of only part of a factorial design (or, put another way, only part of the overall sample). If we find an interaction effect between two IVs, we need to conduct this simple effects analysis to understand how the impact of one IV on the DV is altered as a result of the levels of the second IV. These simple effects tests are carried out in the same fashion as single-factor ANOVAs, except that you use the error term from the overall ANOVA.
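A quick Python sketch (not part of the unit materials) shows how equal marginal means can hide a crossover interaction. The cell values 45 and 60 are our own illustrative numbers, chosen only so that every marginal mean equals the 52.5 quoted above; the real cell means from the example table are not reproduced here.

```python
# Hypothetical 2 x 2 cell means (age x gender); 45/60 are illustrative values
# chosen so that all marginal means equal 52.5, as in the example above.
cells = {
    ("old", "male"): 45.0, ("old", "female"): 60.0,
    ("young", "male"): 60.0, ("young", "female"): 45.0,
}

def marginal(level, position):
    """Average the cell means sharing one level of one factor
    (position 0 = age, position 1 = gender)."""
    vals = [m for key, m in cells.items() if key[position] == level]
    return sum(vals) / len(vals)

# Both one-way analyses see identical means (52.5 vs. 52.5)...
print(marginal("male", 1), marginal("female", 1))
print(marginal("old", 0), marginal("young", 0))

# ...but the gender difference reverses direction with age (the interaction):
print(cells[("old", "male")] - cells[("old", "female")])    # negative: old males poorer
print(cells[("young", "male")] - cells[("young", "female")])  # positive: young males better
```

This is exactly why the simple effects tests above split the sample by age before comparing genders.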

3
Q

Logic of two-way ANOVA:

A

In a two-way ANOVA we want to calculate three F-ratios: one for each of the factors (called main effects); and one for the interaction of these factors (called the interaction effect). As in a one-way ANOVA, each of the F-ratios involves a comparison of two variances: one associated with a treatment (or model) effect, and the other a measure of the error (residual), and the first step to calculating these variances is our trusty sum of squares!

The logic behind the testing of each of these F-ratios is also the same as that for the simple ANOVA in Week 4. You are comparing the measure of variability provided by a treatment variance with the error (residual). For example, the F-ratio to test for the effect of Factor A would be:

F = effect of Factor A/error

If a treatment effect exists for that factor (i.e., if there is a main effect of that factor), the F-ratio will be substantially greater than 1, whereas if H0 is true, the F-ratio will be approximately 1.
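The shared logic can be sketched in a few lines of Python (the numbers are illustrative only, not taken from any data set in this topic):

```python
# Minimal sketch: every F-ratio in a two-way ANOVA is a treatment (model)
# mean square divided by the residual mean square.
def f_ratio(ms_effect, ms_residual):
    """F = MS(effect) / MS(residual); under H0 both estimate error variance."""
    return ms_effect / ms_residual

big_effect = f_ratio(30.0, 1.5)  # a real effect inflates the numerator
null_like = f_ratio(1.4, 1.5)    # under H0, F hovers around 1
print(big_effect, null_like)
```

The same function applies to Factor A, Factor B, and the A x B interaction; only the numerator changes.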

4
Q

Representation of a factorial design:

A

Before we start, we need to define the terms cell means and means for individual levels of each factor. We shall use a concrete example. Imagine that we extended our experiment on the effect of temperature on learning performance to study two age groups (‘old’ and ‘young’) separately and obtained the results tabulated below:

5
Q

Representation of a factorial design: Sums of squares

A
6
Q

Representation of a factorial design: Main effects sums of squares

A
7
Q

Representation of a factorial design: Interaction sums of squares

A

The sum of squares for the interaction is calculated in two parts:

  1. Calculation of the sums of squares for all the cell means (what Field calls SSmodel).

The formula for this is:

SSmodel = n Σ (X̄ij − X̄..)²

= 5((1 − 3)² + (4 − 3)² + (1 − 3)² + (4 − 3)² + (4 − 3)² + (4 − 3)²)

= 5(4 + 1 + 4 + 1 + 1 + 1) = 60

where the X̄ij are the cell means. As usual, the number outside the summation sign is the number of subjects contributing to each of the cell means, which is n. This gives the variability among all the cells or conditions in the experiment.
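If you want to check this arithmetic, a short Python sketch (variable names are our own) reproduces the calculation:

```python
# Reproduce the SSmodel calculation from the six cell means in this example
# (n = 5 subjects per cell; the grand mean works out to 3).
n = 5
cell_means = [1, 4, 1, 4, 4, 4]
grand_mean = sum(cell_means) / len(cell_means)
ss_model = n * sum((m - grand_mean) ** 2 for m in cell_means)
print(grand_mean, ss_model)
```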

  2. The interaction between factors A and B is the part of the between-cells variability not due to A or B, and therefore the interaction sum of squares can be calculated by subtracting SSA and SSB from SSmodel.

The formula for this is:

SSAB = SSmodel – SSA – SSB

= 60 – 30 – 15 = 15

Note that, as in a one-way, all the component sums of squares should add up to the total sums of squares, i.e.,

SSA + SSB + SSAB + SSresidual = 30 + 15 + 15 + 32 = 92 = SStotal

and this fact can be used to check calculations. Our calculations must have been correct to get these figures agreeing!
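The subtraction and the partition check can be verified with a few lines of Python (using the SS figures from this example):

```python
# Interaction SS by subtraction, then the partition check, using this
# example's figures.
ss_model, ss_a, ss_b = 60.0, 30.0, 15.0
ss_ab = ss_model - ss_a - ss_b          # SSAB = SSmodel - SSA - SSB
ss_residual = 32.0
ss_total = ss_a + ss_b + ss_ab + ss_residual
print(ss_ab, ss_total)
```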

Degrees of freedom

The next step is to calculate degrees of freedom. As usual, these are calculated by subtracting one from the number of scores being compared. The formulae are:

dfA = number of levels of factor A minus 1

= a – 1

= 2 – 1 = 1

dfB = number of levels of factor B minus 1

= b – 1

= 3 – 1 = 2

dfAB = dfA x dfB

= (a – 1)(b – 1)

= (2 – 1)(3 – 1) = 2

dfresidual = number of groups x within-groups df

= ab(n – 1)

= 2 x 3(5 – 1) = 6 x 4 = 24

dftotal = total number of scores minus 1

= N-1

= 30 – 1 = 29

Note that dfA + dfB + dfAB + dfresidual= 1 + 2 + 2 + 24 = 29 = dftotal
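The degrees-of-freedom formulae translate directly into code; this sketch reproduces the values for the 2 x 3 design with n = 5 per cell:

```python
# Degrees of freedom for the 2 x 3 design with n = 5 subjects per cell.
a, b, n = 2, 3, 5
df_a = a - 1                 # factor A
df_b = b - 1                 # factor B
df_ab = df_a * df_b          # interaction
df_residual = a * b * (n - 1)
df_total = a * b * n - 1
print(df_a, df_b, df_ab, df_residual, df_total)
```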

Mean squares

As usual, mean squares are calculated by dividing sums of squares by degrees of freedom as follows:

MSA = SSA/dfA

= 30/1 = 30

MSB = SSB/dfB

= 15/2 = 7.5

MSAB = SSAB/dfAB

= 15/2 = 7.5

MSresidual = SSresidual/dfresidual

= 32/24 = 1.33

F-ratios

Once the mean squares have been calculated, we can calculate the F-ratios for each of the main effects and the interaction by dividing each treatment mean square by MSresidual. This gives:

FA = MSA/MSresidual = 30/1.33 = 22.50

FB = MSB/MSresidual = 7.5/1.33 = 5.63

FAB = MSAB/MSresidual = 7.5/1.33 = 5.63

We can now complete the ANOVA table:

Source     SS   df   MS     F
A          30    1   30     22.50
B          15    2   7.5    5.63
AB         15    2   7.5    5.63
Residual   32   24   1.33
Total      92   29
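The MS and F columns can be regenerated from the SS and df values alone, as this Python sketch (names our own) shows:

```python
# Rebuild the MS and F columns of the ANOVA table from SS and df.
ss = {"A": 30.0, "B": 15.0, "AB": 15.0, "Residual": 32.0}
df = {"A": 1, "B": 2, "AB": 2, "Residual": 24}
ms = {src: ss[src] / df[src] for src in ss}
f = {src: ms[src] / ms["Residual"] for src in ("A", "B", "AB")}
for src in f:
    print(src, round(f[src], 2))
```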

Significance is then evaluated in the usual way by comparing the F values with the critical value found from the tables in Appendix A.3 in Field (2018) on pp. 1001-1002 (pp.894-897 in Field, 2013). For our example we find:

For the main effect of A, F(1,24) = 22.50, p < .001

For the main effect of B, F(2,24) = 5.63, p < .01

For the interaction of A and B, F(2,24) = 5.63, p < .01

So all of our major effects are significant, but remember that when an interaction is significant the main effects must be interpreted with caution. What we really need to do now in order to interpret these results is to analyse the simple effects to find out what sort of interaction is present. It is clear from the cell means that there seems to be a different effect of temperature for old and young subjects, so it makes sense to look at the simple effects of temperature for old and young subjects.

8
Q

Simple effects:

A

A simple effect is an effect of one variable at one level of the other variable, e.g., the effect of temperature for old subjects only. The calculations are virtually identical to those for a main effect in a one-way ANOVA (as if we had done the experiment only on old subjects). The one difference is that we use MSresidual from the overall design, which provides a better error estimate, with more degrees of freedom, than one based only on the older subjects' data.

To calculate the sum of squares for the simple effect of temperature (Factor B) for older people (A1), SSB at old, we use only the data for older subjects. The formula is:

(see pic)

Note that the simple effect sums of squares, SSB at A1 and SSB at A2, add up to 30 = SSB + SSAB. This shouldn’t be much of a surprise by now. You must have noticed that ANOVA is full of partitions (separate components which add up to the whole). It should also make sense that the effect of temperature in old people, plus the effect in young people, should add up to the overall effect of temperature plus its interaction with age.

To turn these sums of squares into mean squares, we divide by the degrees of freedom (I hope this is giving you a sense of déjà vu by now!). The degrees of freedom for simple effects are the same as for main effects since the same number of means are being compared.

Thus the formulas are:

dfB at A1 = b – 1

So dftemp at old = 3 – 1 = 2

dfB at A2 = b – 1

So dftemp at young = 3 – 1 = 2

The mean squares are therefore:

MStemp at old = 30 /2 = 15

MStemp at young = 0/2 = 0

And the F-ratios are:

Ftemp at old = MStemp at old/MSresidual = 15/1.33 = 11.25

Ftemp at young = MStemp at young/MSresidual = 0/1.33 = 0
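These simple-effect calculations, using the overall error term, can be sketched in Python as follows (names our own):

```python
# Simple effects of temperature (B) within each age group, tested against
# the overall error term MSresidual = 32/24 from the full design.
ms_residual = 32.0 / 24.0
ss_simple = {"old": 30.0, "young": 0.0}
df_simple = 3 - 1                      # b - 1, same df as the main effect of B
f_simple = {grp: (ss / df_simple) / ms_residual
            for grp, ss in ss_simple.items()}
print(f_simple)
# Partition check: SS(B at old) + SS(B at young) = SSB + SSAB = 30
print(sum(ss_simple.values()))
```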

We can now fill in an ANOVA table further:

Source                      SS   df   MS     F
Age (A)                     30    1   30     22.50
Temp (B)                    15    2   7.5    5.63
  Temp at old (B at A1)     30    2   15     11.25
  Temp at young (B at A2)    0    2   0      0
AB                          15    2   7.5    5.63
Residual                    32   24   1.33
Total                       92   29

We evaluate the F-ratios of the simple effects in the usual way and find a significant simple effect of temperature for old people and no effect for young people, as we expected. This, then, tells us the source of the significant interaction effect that we had found.

9
Q

Three important points about simple effects testing should be stressed:

A
  • In general, you look at simple effects only when there is a significant interaction, and/or you have a hypothesis/research question that is addressed by consulting the simple effects;
  • Always plot your data and consider what they mean before attempting to interpret interactions and subsequent simple effects tests;
  • Carrying out multiple simple effects tests raises the problem of inflated Type I error rates: a general rule is to set up, a priori, a limited set of relevant simple effects to be tested in the event of a significant interaction. If there is no a priori basis for choosing which simple effects to test then, after graphing the interaction, choose to test those which most clearly explain the nature of the interaction (generally a set in which some are significant and some are not).