Final Material Flashcards

1
Q

Describe the two-factor mixed design ANOVA

A
  • The two-factor mixed design has one between-subjects factor and one within-subjects factor
  • Mixed designs are useful when it’s impossible or inadvisable to manipulate a factor within subjects
  • Ex: the participant might be permanently changed by levels of the factor (e.g., removal of a brain region in rats) or the factor is an immutable characteristic of the participant (e.g., incurable medical condition)
2
Q

What’s the notational system for the two-factor mixed design ANOVA?

A

A: between-subject factor
B: within-subject factor
sij: subject i in group j (e.g., s23 is subject 2 in group a3)
Xijk: individual score of subject i in group j at level k of the within-subjects factor B

3
Q

How many factors can you have between subjects and within subjects in a mixed design ANOVA?

A

You can have more than one between-subjects factor and more than one within-subjects factor in a mixed design (the two-factor design is simply the most basic case)

4
Q

What are the different hypotheses for the two-factor mixed designs ANOVA?

A
  • Main effect of the between-subjects factor A
    H0A: μ1 = μ2 = … = μa
    H1A: Not all μa’s are the same
  • Main effect of the within-subjects factor B
    H0B: μ1 = μ2 = … = μb
    H1B: Not all μb’s are the same
  • Interaction between factors A and B
    H0AB: there is no AxB interaction
    H1AB: there is an AxB interaction
5
Q

What’s the F ratio formula for the between-subjects main effect of A in a two-factor mixed designs ANOVA?

A

F = MS(A) / MS(S/A)

6
Q

What’s the F ratio formula for the within-subjects main effect of B in a two-factor mixed designs ANOVA?

A

F = MS(B) / MS(BxS/A)

7
Q

What’s the F ratio formula for the within-subjects interaction effect of AxB in a two-factor mixed designs ANOVA?

A

F = MS(AB) / MS(BxS/A)
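The three F ratios above can be computed directly from the sums-of-squares partition of a balanced mixed design. A minimal NumPy sketch with hypothetical data (a between-subjects groups, n subjects per group, b within-subjects levels):

```python
import numpy as np

# Hypothetical data: a=2 between-subjects groups, n=4 subjects per group,
# b=3 within-subjects levels.  Shape: (a, n, b).
rng = np.random.default_rng(0)
Y = rng.normal(10, 2, size=(2, 4, 3))
a, n, b = Y.shape
grand = Y.mean()

mean_A = Y.mean(axis=(1, 2))   # group (factor A) means
mean_B = Y.mean(axis=(0, 1))   # within-subjects level (factor B) means
mean_AB = Y.mean(axis=1)       # cell means, shape (a, b)
mean_S = Y.mean(axis=2)        # subject means, shape (a, n)

# Sums of squares for each source of variation
SS_A = n * b * np.sum((mean_A - grand) ** 2)
SS_SwA = b * np.sum((mean_S - mean_A[:, None]) ** 2)                  # S/A
SS_B = a * n * np.sum((mean_B - grand) ** 2)
SS_AB = n * np.sum((mean_AB - mean_A[:, None] - mean_B[None, :] + grand) ** 2)
SS_BSwA = np.sum((Y - mean_AB[:, None, :] - mean_S[:, :, None]
                  + mean_A[:, None, None]) ** 2)                      # BxS/A

# Degrees of freedom
df_A, df_SwA = a - 1, a * (n - 1)
df_B, df_AB = b - 1, (a - 1) * (b - 1)
df_BSwA = a * (n - 1) * (b - 1)

# F ratios matching the flashcards
F_A = (SS_A / df_A) / (SS_SwA / df_SwA)       # MS(A) / MS(S/A)
F_B = (SS_B / df_B) / (SS_BSwA / df_BSwA)     # MS(B) / MS(BxS/A)
F_AB = (SS_AB / df_AB) / (SS_BSwA / df_BSwA)  # MS(AB) / MS(BxS/A)
```

In a balanced design the five SS components add up to the total SS, which is a handy sanity check on the computations.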

8
Q

What are the assumptions for a mixed-design ANOVA?

A
  • The mixed ANOVA combines assumptions from both the between-subjects ANOVA and the within-subjects ANOVA
    Between-subjects assumptions:
  • Normal distribution of scores
  • Homogeneity of variances (at each combination of levels of factors A and B) -> at the population level
  • Independence of observations
    Within-subjects assumption:
  • The within-subjects effects require the assumption of sphericity
  • Sphericity is tested only for the within-subjects main effect
  • Mauchly’s W test can be used to check the assumption of sphericity
  • If sphericity is violated, the same ε correction is used for both within-subjects effects
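The ε correction is usually read off software output, but the Greenhouse-Geisser ε itself is easy to compute from the covariance matrix of the within-subjects conditions. A minimal sketch (function name and data are hypothetical):

```python
import numpy as np

def gg_epsilon(scores):
    """Greenhouse-Geisser epsilon from a subjects x conditions array.

    Ranges from 1/(b - 1) (maximal violation of sphericity) to 1
    (sphericity holds), where b is the number of conditions.
    """
    S = np.cov(scores, rowvar=False)          # b x b covariance of conditions
    b = S.shape[0]
    # Double-center the covariance matrix
    Sc = (S - S.mean(axis=0, keepdims=True)
            - S.mean(axis=1, keepdims=True) + S.mean())
    lam = np.linalg.eigvalsh(Sc)              # eigenvalues of centered matrix
    return lam.sum() ** 2 / ((b - 1) * np.sum(lam ** 2))

# Hypothetical data: 10 subjects x 4 within-subjects conditions
rng = np.random.default_rng(42)
eps = gg_epsilon(rng.normal(0.0, 1.0, size=(10, 4)))
```

The corrected test multiplies both the numerator and denominator df of the within-subjects F tests by ε.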
9
Q

What are the effects of violations of the sphericity assumption?

A
  • Changes the Type I error rate
  • The consequences are not appreciable for the test of the between-subject factor, but the probability of falsely rejecting the null hypothesis increases for the within-subject effects
  • Using a more conservative F test (i.e., Greenhouse-Geisser or Huynh-Feldt corrections) solves this issue
10
Q

What are the effects of violations of the homogeneity of variances assumption?

A
  • Changes the Type I error rate
  • The consequences depend on the equality of within-group sample sizes
  • Type I error rate > α when smaller groups have higher variability (increased rate of false-positive findings)
  • Type I error rate < α when smaller groups have lower variability (lower power to detect effects)
11
Q

What are the sources of variation in a two-factor mixed design ANOVA?

A
  • Main effect of the between-subjects factor A
  • Subject variation at levels of the between-subjects factor (S/A)
  • Main effect of the within-subjects factor B
  • Interaction between the between-subjects factor and the within-subjects factor AxB
  • Interaction between the within-subjects factor and subjects nested in levels of the between-subjects factor (BxS/A)
12
Q

What measure of effect size do we use for the main effect of the between-subject factor in a two-factor mixed design ANOVA?

A

Partial omega-squared (ω²A)

13
Q

What measure of effect size do we use for the within-subject effects in a two-factor mixed design ANOVA?

A

Descriptive partial effect size measures (η²B & η²AxB)

14
Q

R² is the same as what?

A

η²

15
Q

What does omnibus mean?

A

All-encompassing; an omnibus test assesses whether all means are equal

16
Q

What’s the model?

A

Group membership (in ANOVA, the model predicts each score from its group mean, so model variation reflects between-group differences)

17
Q

Why is the F ratio or F test considered an omnibus test?

A
  • The F ratio or F test gives a global effect of the independent variable on the dependent variable (omnibus or overall test)
  • It does not tell us which pairs of means are different
  • We need to perform post hoc tests to make further inferences about which means are different
18
Q

What are post hoc tests?

A
  • Post hoc (a posteriori/unplanned) comparisons
  • Decided upon after the experiment
  • In the case of a one-way between-subjects ANOVA, used if 3 or more means were compared
  • Examples of 2 post hoc tests: Scheffé’s test & Tukey’s Honestly Significant Difference (HSD) test
19
Q

What’s Scheffé’s Test?

A
  • Post hoc test
  • Can be used if groups have different sample sizes
  • Less sensitive to departures from the assumptions of normality and equal variances in the population (violations of the assumptions)
  • It’s the most conservative test (very unlikely to reject H0)
  • This makes it a good choice if you wish to avoid Type I errors, but it has lower power to detect differences
  • If the null hypothesis is true in the population, a conservative test is desirable; if the null hypothesis is false in the population, the test has less power
  • Uses F ratio to test for a significant difference between any 2 means (e.g., H0 : μ1 = μ2)
  • But, uses a larger critical value
20
Q

How is the updated critical value for Scheffé’s Test obtained?

A
  • Obtain the critical value of F with dfM = k − 1 and dfR = N − k (in other words, obtain the critical value as usual)
  • Then multiply this value by k − 1
  • The SS for the specific comparisons need to be calculated based on which groups are being compared and the residual SS is simply the SSresidual from the main analysis
21
Q

What are the steps in Scheffé’s Test?

A
  • Calculate the SScomparison, MScomparison, and Fcomparison
  • MScomparison = SScomparison (because dfcomparison = 1)
  • Fcomparison = MScomparison / MSresidual
  • Compare the observed Fcomparison to (k − 1) × Fcritical with df = (k − 1), (N − k)
  • If the observed Fcomparison is greater, conclude that the difference between the pair of means is significantly different from 0
  • If not, the pair of means being compared is not significantly different
  • The dfcomparison will always be 1 because you’re always comparing 2 groups at a time
  • All SScomparison are calculated using the means of each level of the independent variable
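The steps above can be sketched in Python for a hypothetical three-group example (the data values are made up; the comparison contrasts groups 1 and 2):

```python
import numpy as np
from scipy import stats

# Hypothetical scores for k = 3 groups (unequal n would also work)
groups = [np.array([1., 2., 3., 2., 1., 2.]),
          np.array([11., 12., 13., 12., 11., 12.]),
          np.array([6., 7., 6., 7., 6., 7.])]
k = len(groups)
N = sum(len(g) for g in groups)
n = np.array([len(g) for g in groups])
means = np.array([g.mean() for g in groups])

# Residual SS and MS come from the omnibus one-way ANOVA
SS_res = sum(((g - g.mean()) ** 2).sum() for g in groups)
MS_res = SS_res / (N - k)

# Comparison of groups 1 and 2: coefficients c = (1, -1, 0)
c = np.array([1., -1., 0.])
psi = c @ means                           # contrast value
SS_comp = psi ** 2 / np.sum(c ** 2 / n)   # df = 1, so MS_comp = SS_comp
F_comp = SS_comp / MS_res

# Scheffé critical value: (k - 1) times the usual critical F
F_crit = (k - 1) * stats.f.ppf(0.95, k - 1, N - k)
significant = F_comp >= F_crit
```

With these made-up scores the two group means differ by 10 points, so the comparison comes out significant even against the enlarged Scheffé critical value.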
22
Q

In Scheffé’s Test, what kind of coefficients does each group receive?

A
  • The groups being compared receive coefficients of -1 and 1
  • The group(s) excluded from this particular comparison receive a coefficient of 0
  • Ex: if group 3 is excluded from the comparison, c3 = 0, c1 = 1, and c2 = −1
23
Q

Describe Tukey’s HSD test

A
  • Typically used if groups have equal sample sizes and all comparisons represent simple differences between 2 means
  • This test uses the studentized range statistic, Q
  • The observed Q value is compared against a critical value of Q for α = .05 which is associated with k and N − k
  • Call this critical value Qcrit
  • Reject H0 : μg = μg′ , when the observed Q value is greater than or equal to Qcrit, the critical value
24
Q

What does HSD mean?

A

Minimum absolute difference between 2 means required for a statistically significant difference

25
Q

Do you need to modify the formula for Tukey’s HSD test if the sample sizes are unequal?

A
  • Yes
  • This modified test is often called the Tukey-Kramer test when the sample sizes are not the same
  • If we had identical sample sizes, we would compare each group to the same HSD
  • Due to our unbalanced design, our HSD is different for each pairwise comparison
26
Q

What are the steps to Tukey’s HSD test?

A
  1. Perform ANOVA
  2. Calculate differences in means (Row - Column)
  3. Find Qcrit
  4. Calculate HSD (unequal sample sizes)
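These steps can be sketched with the Tukey-Kramer HSD formula for unequal n (hypothetical data; uses scipy.stats.studentized_range, available in SciPy 1.7+):

```python
import numpy as np
from scipy import stats

# Hypothetical scores for k = 3 groups
groups = [np.array([1., 2., 3., 2., 1., 2.]),
          np.array([11., 12., 13., 12., 11., 12.]),
          np.array([6., 7., 6., 7., 6., 7.])]
k = len(groups)
N = sum(len(g) for g in groups)
n = [len(g) for g in groups]
means = [g.mean() for g in groups]

# Step 1: omnibus ANOVA pieces needed here (MS_within and its df, N - k)
MS_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (N - k)

# Step 3: critical value of the studentized range statistic Q
q_crit = stats.studentized_range.ppf(0.95, k, N - k)

# Steps 2 & 4: mean differences vs. the Tukey-Kramer HSD (valid for unequal n)
results = {}
for i in range(k):
    for j in range(i + 1, k):
        hsd = q_crit * np.sqrt(MS_within / 2 * (1 / n[i] + 1 / n[j]))
        results[(i, j)] = abs(means[i] - means[j]) >= hsd
```

Note that with unequal sample sizes the HSD is recomputed for every pair, as the Tukey-Kramer card says; with equal n it would be the same value for all pairs.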
27
Q

What’s the difference between Scheffé’s test and Tukey’s HSD?

A
  • Scheffé’s test is more conservative than Tukey’s HSD
  • This means that Scheffé’s test will reject H0 less often than Tukey’s HSD
  • A pair of means that is not significantly different under Scheffé’s test may be significantly different under Tukey’s HSD test
28
Q

What’s the Bonferroni post hoc test?

A
  • The nominal Type I error rate for each comparison is equal to αFW/C for C comparisons
  • Use the critical value based on the adjusted Type I error rate
29
Q

What’s the Šidák post hoc test?

A

Works by adjusting the Type I error rate by the number of tests (like Bonferroni), but it is less conservative
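The Bonferroni and Šidák adjustments from the two cards above can be compared in a couple of lines (αFW and C are arbitrary example values):

```python
# Family-wise alpha and number of comparisons (arbitrary example values)
alpha_fw, C = 0.05, 6

alpha_bonf = alpha_fw / C                    # Bonferroni per-test alpha
alpha_sidak = 1 - (1 - alpha_fw) ** (1 / C)  # Šidák per-test alpha

# Šidák is slightly less conservative: its per-test alpha is a bit larger,
# so it rejects H0 slightly more often at the same family-wise rate
```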

30
Q

What’s the Dunnett post hoc test?

A
  • Used when one group (usually the control group) is compared to the other k − 1 groups
  • Requires critical values from a specialized table
31
Q

What’s the Holm post hoc test?

A

Sequential mean comparisons with a readjustment to the nominal Type I error using the Bonferroni correction
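Holm’s sequential procedure can be sketched as follows (a minimal implementation; the p-values are hypothetical):

```python
def holm(pvals, alpha=0.05):
    """Holm step-down procedure: test p-values from smallest to largest
    against alpha/m, alpha/(m-1), ..., stopping at the first failure."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break                  # all larger p-values are retained
    return reject

# Hypothetical p-values from three pairwise comparisons
decisions = holm([0.001, 0.04, 0.03])
```

Because the threshold loosens at each step, Holm rejects at least as often as a plain Bonferroni correction while still controlling the family-wise error rate.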

32
Q

What’s the Fisher-Hayter post hoc test?

A
  • Following a significant omnibus F ratio, compute a minimum absolute difference between means required for statistical significance based on Qcrit with k − 1 degrees of freedom (1 less df than required for Tukey’s HSD) and compare the 2 most discrepant means first
  • Less conservative than Tukey’s HSD
33
Q

What’s the Newman-Keuls post hoc test?

A
  • Starts with the largest mean difference, if the H0 that these means are equal is rejected, the test proceeds until H0 is retained for a pair of means (just like the Fisher-Hayter procedure)
  • The minimum absolute difference between means required for statistical significance is recalculated at each step (the number of groups that remains to be compared is reduced by 1 at each step)
34
Q

What’s the Duncan post hoc test?

A

Same procedure as Newman-Keuls but using a different critical value (the same critical value as Šidák’s post hoc test)

35
Q

What are the follow-up analyses in a two-way ANOVA?

A
  • If the interaction is significant, usually less attention is paid to the main effects, and the focus of the follow-up analysis is on patterns of individual cell means
  • If the interaction is not significant, or is significant but relatively small in size, the focus of the follow-up analyses is on comparisons between marginal means of the factors
36
Q

Describe the simple main effect comparisons of the marginal means approach to follow-up analyses for a significant two-way interaction

A
  • The marginal means for one factor are obtained by averaging over the levels of the other factor
  • The sums of squares for the contrast take into account n-mean, the number of observations that contributed to each marginal mean
  • The df = 1, so MSφA = SSφA
  • The observed F for the comparison between marginal means of a single factor is F = MSφA / MSResidual, where MSResidual comes from the ANOVA summary table for the omnibus two-way ANOVA
  • The degrees of freedom for the appropriate critical value are 1 and ab(n − 1)
  • The same computations apply for comparisons of marginal means of factor B
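A minimal sketch of a marginal-means contrast for factor A in a balanced two-way design (the data and the contrast are hypothetical):

```python
import numpy as np

# Hypothetical balanced two-way data: a=3, b=2, n=4 per cell; shape (a, b, n)
rng = np.random.default_rng(1)
Y = rng.normal(5, 1, size=(3, 2, 4))
a, b, n = Y.shape

marg_A = Y.mean(axis=(1, 2))          # marginal means of factor A

# Contrast comparing levels a1 and a2 of factor A
c = np.array([1., -1., 0.])
psi = c @ marg_A
n_mean = b * n                        # observations behind each marginal mean
SS_phi = n_mean * psi ** 2 / np.sum(c ** 2)   # df = 1, so MS_phi = SS_phi

# MS_residual from the omnibus two-way ANOVA, with df = ab(n - 1)
cell_means = Y.mean(axis=2, keepdims=True)
MS_res = np.sum((Y - cell_means) ** 2) / (a * b * (n - 1))

F_phi = SS_phi / MS_res   # compare to critical F with df 1 and ab(n - 1)
```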
37
Q

What are simple main effects follow-up analyses for a significant two-way interaction?

A
  • A simple main effect is the effect of one factor (e.g., A) at one level of the other factor (e.g., b1)
  • The appropriate follow up analyses in a two-way ANOVA depend on the results of the significance tests for each effect and the pattern of means
38
Q

When should you use the comparisons of marginal means approach when conducting follow-up analyses for a significant two-way interaction?

A
  1. If the interaction is not significant but at least one of the main effects is, the comparisons of marginal means of each factor are appropriate
  2. If the interaction is significant but it is dominated by the main effect, it would make sense to understand the main effect first and then evaluate how it changes at the levels of the other factor
39
Q

When should you use the simple main effect approach to conduct follow-up analyses for a significant two-way interaction?

A

If the significant interaction dominates the main effect, we are justified in ignoring the main effects altogether and examining the individual simple main effects separately

40
Q

How do we conduct simple main effects?

A
  • We can compute simple main effects of Factor B for the levels of Factor A
  • We can also compute simple main effects of Factor A at all levels of Factor B
  • We generally select only one of these sets of simple effects to compute (though there is no rule against computing both), based on which simple main effects are more useful, potentially revealing, and easy to explain
  • The choice is usually guided by Keppel and Wickens’s recommendations for considerations other than interpretation that can help you decide which simple effects to examine
41
Q

What are Keppel and Wickens’s recommendations for considerations other than interpretation that can help you decide which simple effects to examine?

A
  1. Choose the factor with the greatest number of levels: The simple effects of the factor with the greatest number of levels are easiest to visualize and this strategy minimizes the number of simple effects you need to compute
  2. Choose a quantitative factor: Effects of quantitative factors (e.g., amount of medication) tend to be easier to describe
  3. Choose the factor with the greatest SS for the main effect: This choice will maximize the amount of variability that the simple main effects explain and may provide more information than simple main effects for the factor that explains less variation in the DV
  4. Choose an experimentally manipulated factor: Usually it’s more interesting to study how levels of the manipulated factor affect the DV at different levels of a classification/blocking (non-manipulated) factor
42
Q

What are simple comparisons?

A
  • Simple comparisons are comparisons among means of the factor for which we identified a significant simple main effect
  • The df = 1 for simple comparisons because these tests are comparisons between 2 groups
  • Computation of F ratio:
    FSimpleComparison = MSSimpleComparison/MSResidual
  • MSResidual refers to the Residual from the two-way ANOVA summary table
43
Q

What are some tips for controlling Type I Error when conducting follow-up analyses for significant main effects?

A
  • There’s general consensus that the main effects and interaction in a two-way ANOVA count as planned comparison, so there’s no need to adjust the αFW (even though we have 3 significance tests in our ANOVA summary table)
  • When only main effects are significant, the main effect comparisons are treated as separate analyses, so the αFW is set to the desired nominal level (usually 0.05) for each set of post hoc tests
  • The individual α for each comparison can be adjusted using Bonferroni’s correction by dividing αFW by the number of post hoc comparisons conducted for that main effect (ex: with 3 levels of factor B, if we conduct 3 post hoc comparisons using marginal means of factor B, our α for each comparison would be 0.05/3 ≈ 0.017)
  • The entire set of pairwise comparisons can also be tested using Tukey’s HSD; this approach already incorporates Type I error control, so there’s no need to adjust the α for each comparison
44
Q

What are some tips for controlling Type I Error when conducting follow-up analyses for significant interactions?

A
  • The issue of Type I error adjustment for simple main effects and simple comparisons is more complex and there’s no general consensus
  • The recommendation in Keppel and Wickens is to set αFW = 0.10 for the set of simple main effects because these effects incorporate variation due to both the main effect of the factor for which we are computing simple main effects and the interaction
  • The recommendation is to set the αFW for the simple comparisons to the same level as the αFW for the simple main effect, and to use the Bonferroni correction to calculate the α of each individual comparison
45
Q

What’s the difference between post hoc tests and planned comparisons?

A
  • Post hoc tests are performed following a significant F ratio to make further inferences about which means are different
  • Unlike post hoc tests, planned comparisons are designed before the data are even collected, and the hypotheses they test are usually more specific than post hoc test hypotheses (e.g., post hoc tests may consist of all pairwise group comparisons whereas planned comparisons may focus on just a handful of meaningful comparisons instead)
  • With planned comparisons, we distinguish between orthogonal (independent) and non-orthogonal (dependent) comparisons
46
Q

What are orthogonal planned comparisons?

A
  • Hypotheses of orthogonal comparisons are independent of each other
  • We can use the coefficients from the comparisons to evaluate whether 2 comparisons are orthogonal
  • Ex: in a one-way ANOVA clinical trials example, there are 5 groups total. Groups 1-3 involve different types of medication (Med1, Med2, and Med3) and groups 4-5 involve therapy (group 4 receives CBT, and group 5 receives psychoanalysis)
  • Assume the following hypotheses are of interest to test:
    H01: (μMed1 + μMed2 + μMed3) / 3 = (μCBT + μpsychoanalysis) / 2
    H02: μCBT = μpsychoanalysis
  • We determine whether they’re orthogonal based on the coefficients assigned to the groups in each comparison
  • If the sum of the products of the corresponding coefficients is 0 -> orthogonal
  • There is a maximum of a − 1 orthogonal comparisons that can be performed for factor A
  • In balanced designs, the SS of all orthogonal contrasts adds up to SSmodel
47
Q

What are non-orthogonal planned comparisons?

A
  • If the sum of the products of the corresponding coefficients isn’t 0, then the comparisons aren’t orthogonal
  • Ex: Assume these hypotheses are of interest to test:
    H01: (μMed1 + μMed2 + μMed3) / 3 = (μCBT + μpsychoanalysis) / 2
    H03: (μMed1 + μMed2) / 2 = μMed3
  • The comparisons in this example are non-orthogonal
  • The computations for testing the null hypotheses of orthogonal and non-orthogonal contrasts are the same; the only difference is that non-orthogonal contrasts do not test independent hypotheses, so their sums of squares do not add up to SSmodel
48
Q

How can you determine if a planned comparison is orthogonal or not?

A
  • For each group (column), find the product of the coefficients from the two comparisons, then add these products up across the columns
  • If the sum = 0 -> orthogonal
  • If not = 0 -> non-orthogonal
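The coefficient-product check can be verified numerically for the five-group clinical-trials example (the third contrast, Med3 vs. CBT, is a hypothetical addition just to show a non-orthogonal pair):

```python
import numpy as np

# Coefficients for the five groups: Med1, Med2, Med3, CBT, psychoanalysis
c1 = np.array([1/3, 1/3, 1/3, -1/2, -1/2])  # medications vs. therapies (H01)
c2 = np.array([0., 0., 0., 1., -1.])        # CBT vs. psychoanalysis (H02)
c_extra = np.array([0., 0., -1., 1., 0.])   # hypothetical: Med3 vs. CBT

sum_products_12 = np.dot(c1, c2)       # 0 -> orthogonal pair
sum_products_2x = np.dot(c2, c_extra)  # nonzero -> non-orthogonal pair
```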
49
Q

The following formula: SSC1 + SSC2 = SSModel belongs to orthogonal or non-orthogonal contrasts?

A

Orthogonal (only when group sample sizes are equal)
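The additivity of orthogonal contrast SS in a balanced design can be demonstrated with made-up numbers (k = 3 groups, n = 4 scores per group):

```python
import numpy as np

# Hypothetical balanced one-way data: k = 3 groups, n = 4 scores each
groups = np.array([[2., 3., 2., 3.],
                   [5., 6., 5., 6.],
                   [8., 9., 9., 8.]])
k, n = groups.shape
means = groups.mean(axis=1)
grand = groups.mean()

SS_model = n * np.sum((means - grand) ** 2)

def ss_contrast(c):
    """SS for a contrast with equal n: n * psi^2 / sum(c^2)."""
    return n * (c @ means) ** 2 / np.sum(c ** 2)

# A complete set of k - 1 = 2 orthogonal contrasts
SS_C1 = ss_contrast(np.array([1., -1., 0.]))    # group 1 vs. group 2
SS_C2 = ss_contrast(np.array([0.5, 0.5, -1.]))  # groups 1-2 avg vs. group 3
```

With these numbers SS_C1 + SS_C2 exactly reproduces SS_model, which is the identity the flashcard states for equal group sizes.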

50
Q

What’s the denominator of the F-ratios for C1 and C2?

A

MSwithin from the one-way ANOVA