13. Simple Effects Flashcards

1
Q

What are main effects?

A

An overall, or average, effect of a condition

E.g.
Is there an effect of Treatment averaged over Hospital?
Is there an effect of Hospital averaged over Treatment?

2
Q

What are simple contrasts/effects?

A

An effect of one condition at a specific level of another

Is there an effect of Hospital for those receiving Treatment A? (…and so on for all combinations.)

3
Q

What are interactions?

A

A change in the effect of some condition as a function of another

Does the effect of Treatment differ by Hospital?
With effects coding, we can also think of this as a difference in simple effects.

4
Q

How do you find simple effects in R?

A

Use the emmeans package in R:

  • emmeans() to get the estimated marginal means of one condition at each level of the other
  • pairs() to test the simple contrasts, as sketched below
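A minimal sketch, assuming a hypothetical data frame hosp with an outcome y and factors Treatment and Hospital:

    library(emmeans)

    # hypothetical model with a Treatment x Hospital interaction
    m_int <- lm(y ~ Treatment * Hospital, data = hosp)

    # estimated marginal means of Hospital at each level of Treatment
    emm <- emmeans(m_int, ~ Hospital | Treatment)

    # simple effects: pairwise comparisons of Hospital within each Treatment
    pairs(emm)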
5
Q

What does the "marginal mean" mean?

A

It is the average of the cell means across the levels of the other condition (e.g. the mean for Treatment A averaged over Hospitals)
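A small worked sketch with made-up cell means, just to show what "average of mean values" looks like:

    # hypothetical cell means for 3 Treatments x 2 Hospitals
    cell_means <- matrix(c(10, 12,
                           14, 18,
                           11, 15),
                         nrow = 3, byrow = TRUE,
                         dimnames = list(Treatment = c("A", "B", "C"),
                                         Hospital  = c("Hosp1", "Hosp2")))

    rowMeans(cell_means)   # Treatment marginal means (averaged over Hospital)
    colMeans(cell_means)   # Hospital marginal means (averaged over Treatment)
    mean(cell_means)       # grand mean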

6
Q

What do the beta coefficients represent in an effects-coded model with interactions (i.e. using sum-to-zero contrasts)?

A

The main-effect coefficients are differences between a marginal mean and the grand mean.

The interaction coefficients then ask whether the distance of the row or column marginal from the grand mean differs depending on the level of the other condition.

7
Q

With the example of the hospital data set, what would each coefficient mean?

y_ijk = b0 + (b1E1 + b2E2) + b3E3 + (b4E13 + b5E23) + ε_ijk
where (b1E1 + b2E2) are the Treatment terms, b3E3 is the Hospital term, and (b4E13 + b5E23) are the Treatment × Hospital interaction terms.

A

b0 = Grand mean
b1 = Difference between the row marginal for Treatment A and the grand mean
b2 = Difference between the row marginal for Treatment B and the grand mean
b3 = Difference between the column marginal for Hospital 1 and the grand mean
b4 = Does the effect of Treatment A differ across Hospital 1 and Hospital 2?
b5 = Does the effect of Treatment B differ across Hospital 1 and Hospital 2?
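A minimal sketch of how these coefficients arise in R, assuming the hypothetical hosp data frame with a 3-level Treatment (A, B, C) and a 2-level Hospital, both already coded as factors:

    # sum-to-zero (effects) coding, so coefficients are deviations from the grand mean
    contrasts(hosp$Treatment) <- contr.sum(3)
    contrasts(hosp$Hospital)  <- contr.sum(2)

    m_int <- lm(y ~ Treatment * Hospital, data = hosp)
    summary(m_int)
    # (Intercept)                                 ~ b0: grand mean
    # Treatment1, Treatment2                      ~ b1, b2: Treatment A / B marginal vs grand mean
    # Hospital1                                   ~ b3: Hospital 1 marginal vs grand mean
    # Treatment1:Hospital1, Treatment2:Hospital1  ~ b4, b5: interaction terms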

8
Q

How can you find the overall effect of the interaction?

A

Fit a reduced model without the interaction and the full model with it, then compare the two with anova(); the F-test for that comparison is the overall test of the interaction (sketched below)
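A minimal sketch, reusing the hypothetical hosp data frame from the earlier sketches:

    m_add <- lm(y ~ Treatment + Hospital, data = hosp)   # reduced: no interaction
    m_int <- lm(y ~ Treatment * Hospital, data = hosp)   # full: with interaction

    anova(m_add, m_int)   # F-test for the interaction as a whole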

9
Q

What do pairwise tests compare?

A

They compare all levels of a given predictor variable with all levels of the other, i.e. every pair of group (cell) means
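For example, with emmeans (continuing with the hypothetical m_int model from the earlier sketches):

    emm_cells <- emmeans(m_int, ~ Treatment * Hospital)
    pairs(emm_cells)   # every Treatment x Hospital cell mean vs every other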

10
Q

Why do we use pairwise comparisons?

A

Sometimes we don’t have a concrete hypothesis to test; comparing everything with everything is then exploratory analysis, which can still provide useful information for the field

11
Q

What is the issue of multiple comparisons?

A

The more tests we do, the higher the chance of a Type I error (false positive)

12
Q

What is our Type I error rate in a single test?

A

Conventionally set at α = 0.05

13
Q

How does the Type I error rate change when we run multiple tests?

A

P(Type I error) = α
P(not making a Type I error) = 1 − α
P(not making a Type I error in m tests) = (1 − α)^m
P(making a Type I error in m tests) = 1 − (1 − α)^m
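A quick sketch of this in R, with α = 0.05 and a hypothetical family of m = 10 tests:

    alpha <- 0.05
    m     <- 10

    (1 - alpha)^m        # P(no Type I error in m tests)      ~ 0.60
    1 - (1 - alpha)^m    # P(at least one Type I error)       ~ 0.40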

14
Q

What is the family-wise error rate?

A

P(making a Type I error in m tests) = 1 − (1 − α)^m

The larger the family (the set of tests), the higher the chance of at least one false positive.

15
Q

What are the different types of corrections we can use to keep the type I error rate at the set alpha rate?

A

Bonferroni
Sidak
Tukey
Scheffe

16
Q

How do Bonferroni and Sidak keep the type I error rate at the set alpha rate?

A

They are conservative adjustments that treat the individual tests in the family as if they were independent.

Both move α according to how many tests (m) we are conducting:

Bonferroni: adjusted α = α / m
Sidak: adjusted α = 1 − (1 − α)^(1/m)

To use these values, we compare the exact p-value of a particular test to the adjusted α. Alternatively, we can adjust the p-value itself and then compare the adjusted p-value to the original α; this is what emmeans does (sketched below).
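A minimal sketch of both routes, with hypothetical unadjusted p-values from m = 4 tests:

    alpha <- 0.05
    m     <- 4
    p_raw <- c(0.012, 0.030, 0.048, 0.20)   # hypothetical p-values

    # adjust alpha
    alpha / m                    # Bonferroni-adjusted alpha: 0.0125
    1 - (1 - alpha)^(1 / m)      # Sidak-adjusted alpha: ~0.0127

    # or adjust the p-values and keep the original alpha (what emmeans does)
    p.adjust(p_raw, method = "bonferroni")
    1 - (1 - p_raw)^m            # Sidak-adjusted p-values (p.adjust has no Sidak method)

    pairs(emm, adjust = "sidak") # emmeans equivalent, reusing `emm` from the earlier sketch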

17
Q

How does Scheffe keep the type I error rate at the set alpha rate?

A

Scheffe calculates the p-value from the F-distribution:

  • The adjustment is related to the number of comparisons made
  • It makes the critical value of F larger for a fixed α, dependent on the number of tests

The square root of the adjusted F provides an adjusted t.
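In emmeans this is available as an adjustment option; a sketch, reusing the hypothetical emm_cells object from the earlier sketch:

    pairs(emm_cells, adjust = "scheffe")   # Scheffe-adjusted pairwise comparisons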

18
Q

How does Tukey keep the type I error rate at the set alpha rate?

A

Tukey’s HSD:

  • Compares all pairwise group means
  • Each difference is divided by the SE of the sum of the means
  • This produces a q statistic for each comparison
  • Each q is compared against a studentized range distribution
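A minimal sketch; in emmeans, pairs() applies the Tukey adjustment by default when comparing a family of means, and base R offers TukeyHSD() for aov fits (reusing the hypothetical emm_cells and hosp objects):

    pairs(emm_cells)                     # Tukey adjustment is the default here
    pairs(emm_cells, adjust = "tukey")   # or stated explicitly

    # base R equivalent on an aov fit of the same hypothetical model
    TukeyHSD(aov(y ~ Treatment * Hospital, data = hosp))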

19
Q

How does using anova() on a single model versus using the model-comparison approach yield slightly different results?

A

The sums of squares are the same
The degrees of freedom for each term are the same
F is slightly different for Treatment and Hospital (and therefore so is the p-value)

Note the main conclusions do not change.

The difference comes from the residual degrees of freedom used for the F-test: the single-model anova() tests every term against the residual df of the full model, whereas each incremental model comparison uses the residual df of the larger of the two models being compared.
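A sketch of the two routes, reusing the hypothetical hosp data frame from the earlier sketches:

    # single-model route: every term tested against the full model's residual df
    m_int <- lm(y ~ Treatment * Hospital, data = hosp)
    anova(m_int)

    # model-comparison route: each term tested against the residual df of the
    # larger of the two models being compared
    m0 <- lm(y ~ 1, data = hosp)
    m1 <- lm(y ~ Treatment, data = hosp)
    m2 <- lm(y ~ Treatment + Hospital, data = hosp)

    anova(m0, m1)      # Treatment
    anova(m1, m2)      # Hospital
    anova(m2, m_int)   # Treatment x Hospital interaction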

20
Q

What is a benefit of Bonferroni compared to Tukey?

A

One benefit of Bonferroni is that it can be applied to any set of p-values, whereas Tukey only applies when comparing the means of levels of a factor. The downside, however, is that it may be overly conservative (i.e. reduce our power to detect an effect that is truly there).