Chapter 11 part b: GLM1 Flashcards
contrasts
necessary after conducting an ANOVA to find out which groups differ
Ways to compare the groups without inflating the Type I error rate:
- break down the variance accounted for by the model into component parts (planned contrasts)
- compare every pair of groups (as if conducting several t-tests) but use a stricter acceptance criterion so that the familywise error rate does not rise above .05 (post hoc tests)
The difference between planned comparisons and post hoc tests:
- planned comparisons are done when you have specific hypotheses that you want to test,
- whereas post hoc tests are done when you have no specific hypotheses
Planned Contrasts
- used when testing specific hypotheses
- Example:
— H1: any dose of Viagra changes libido compared to the placebo group
— H2: high dose should increase libido more than a low dose
Standard Planned Comparisons
- Contrast I: compare experimental conditions to control
- Contrast II: check the difference between the experimental groups
—> with 2 experimental groups — Contrast 2: E1 vs E2
—> with 3 experimental groups — Contrast 2: E1 vs (E2, E3); Contrast 3: E2 vs E3
Rules of Planned Contrasts
I. If there is a control group, compare it against the other (experimental) groups
II. Each contrast must compare only two ‘chunks’ of variation
III. Once a group has been singled out in a contrast it can’t be used in another contrast
Number of Planned Contrasts
k-1
[ # predictor categories - 1]
Why compare only 2 chunks of variation at a time in planned contrasts?
- we can be sure that a significant result represents a difference between these 2 portions of variation
Planned Contrasts:
- If contrast I is significant, conclude that:
the average of experimental groups is significantly different from the average of control
Planned Contrasts:
- If the standard errors (SEs) are the same:
- the experimental group with the highest mean will be significantly different from the placebo group's mean
- for the experimental group with the lowest mean: run a post hoc test to determine whether it differs from placebo
Planned Contrasts: Weights
- To carry out planned contrasts, assign certain values to the dummy variables in the regression model
- the values assigned to the dummy variables are the weights
Rules for assigning weights in Planned Contrasts:
Rule I:
Compare only 2 chunks of variation and that if a group is singled out in one comparison, that group should be excluded from any subsequent contrasts
Rule 2:
assign one chunk of variation positive (+) weights and the opposite chunk negative (-) weights
Rule 3:
The sum of weights for a comparison should be 0
Rule 4:
If a group is not involved in a comparison, automatically assign it a weight of zero
Rule 5:
The weight(s) assigned to the group(s) in one chunk of variation should equal the number of groups in the opposite chunk
- e.g., with three experimental groups (-1, -1, -1), assign the control +3
Planned Contrasts: Orthogonal
Independent contrasts
- contrasts are orthogonal if the sum of the products of their weights equals 0
- multiply contrast 1's weight by contrast 2's weight for each group (including the control), then add the products
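The sum-to-zero and sum-of-products checks can be sketched in a few lines of Python (the weights below are the assumed Viagra-style coding, groups ordered control, low dose, high dose):

```python
# Check the weight rules for two planned contrasts (hypothetical example:
# groups ordered placebo, low dose, high dose).
contrast1 = [-2, 1, 1]   # experimental groups vs. control
contrast2 = [0, -1, 1]   # high dose vs. low dose (control weighted 0)

# Rule 3: weights within each contrast must sum to zero.
assert sum(contrast1) == 0 and sum(contrast2) == 0

# Orthogonality: the sum of the products of the weights is zero.
sum_of_products = sum(w1 * w2 for w1, w2 in zip(contrast1, contrast2))
print(sum_of_products)  # 0 -> the contrasts are independent
```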
Planned Contrasts: Equation
Outcome = b0 + b1·Contrast1 + b2·Contrast2
(b0: the grand mean; Contrast1, Contrast2: the weights a group receives on each contrast)
- with weights -2 (control), +1, +1 (experimental) on contrast 1, and 0, -1, +1 on contrast 2:
Control mean = grand mean - 2b1 (the control's weight on contrast 2 is 0)
Experimental group mean = grand mean + b1 + b2 (for the group weighted +1 on contrast 2)
Planned Contrast: Equation
- the b values depend on the weights given to each chunk (i.e., to the dummy variables)
Planned Contrast: Equation
- What is b1?
- the difference between the average of the experimental groups and the mean of the control group (when each weight is divided by the number of groups in its chunk)
Planned Contrast: Equation
- What is b2?
- the difference between the means of the two experimental groups, divided by the number of groups in this contrast
- contrast 2: high dose vs low dose
- b2: (mean of high dose - mean of low dose) / 2
Planned Contrast: Equation
- Why divide difference of means of each experimental group by number of groups in that contrast to obtain b2?
- because of how the regression coefficients scale with the codes: with whole-number weights the model spreads the difference across the groups in the contrast, so b is only a fraction of the raw difference (dividing each weight by the number of groups in its chunk makes b equal the genuine difference between the means)
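As an illustrative sketch (made-up data, assuming the three-group design above with each weight divided by the number of groups in its chunk), an ordinary least-squares fit recovers the grand mean as b0 and the raw mean differences as b1 and b2:

```python
import numpy as np

# Tiny made-up dataset: two observations per group, ordered
# placebo, low dose, high dose (group means are 2, 3 and 5).
y = np.array([1, 3, 2, 4, 4, 6], dtype=float)
groups = ["placebo", "placebo", "low", "low", "high", "high"]

# Divided contrast weights: dividing by the number of groups in each
# chunk makes b1 and b2 equal the raw differences between means.
c1 = {"placebo": -2 / 3, "low": 1 / 3, "high": 1 / 3}  # experimental vs. control
c2 = {"placebo": 0.0, "low": -1 / 2, "high": 1 / 2}    # high vs. low

X = np.column_stack([np.ones(6),
                     [c1[g] for g in groups],
                     [c2[g] for g in groups]])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]

print(round(b0, 3))  # 3.333 -> grand mean
print(round(b1, 3))  # 2.0   -> mean(experimental groups) - mean(placebo)
print(round(b2, 3))  # 2.0   -> mean(high) - mean(low)
```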
Non-orthogonal Contrasts
- non-independent contrasts
- disobeying Rule I: Once a group has been singled out in a contrast it can’t be used in another contrast
- Example:
- Contrast 1: Compare experimental groups against control
- Contrast 2: Compare high-dose group to control
Non-orthogonal Contrasts
- sum of product of contrasts is not 0
- not wrong
- but be careful with interpretation: the comparisons are related, and so are their p values
- hence, use a more conservative alpha
Standard Contrasts
- in most circumstances you can design your own contrasts
- standard contrasts: ready-made contrasts for comparing certain common situations
Orthogonal Standard Contrasts
- Helmert
- Difference (Reverse Helmert)
Non-orthogonal Standard Contrasts
- Deviation (first, last)
- Simple (first, last)
- Repeated
Helmert Standard Contrasts
- orthogonal
- each category (except last) is compared to the mean effect of all subsequent categories
Example:
- 3 Categories: 1 vs (2,3); 2 vs 3
- 4 Categories: 1 vs (2,3,4); 2 vs (3,4); 3 vs 4
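A small helper (a sketch of my own, not from the text) can generate Helmert weights for any number of categories and confirm they are orthogonal:

```python
import numpy as np

def helmert_weights(k):
    """Helmert contrast weights for k ordered categories:
    row j compares category j to the mean of all later categories."""
    w = np.zeros((k - 1, k))
    for j in range(k - 1):
        w[j, j] = k - j - 1  # category being singled out
        w[j, j + 1:] = -1    # all subsequent categories
    return w

w = helmert_weights(4)
print(w)  # rows: [3,-1,-1,-1], [0,2,-1,-1], [0,0,1,-1]
# Every pair of rows is orthogonal: off-diagonal dot products are all 0.
gram = w @ w.T
print(np.allclose(gram, np.diag(np.diag(gram))))  # True
```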
Difference Standard Contrasts
- Reverse Helmert
- orthogonal
- each category (except 1st) is compared to the mean effect of all previous categories
- 3 Categories: 2 vs 1; 3 vs (1,2)
- 4 Categories: 2 vs 1; 3 vs (1,2); 4 vs (1,2,3)
Deviation Standard Contrast
First
- non-orthogonal
- compare the effect of each category (except 1st) to the overall experimental effect
- 3 Categories: 2 vs (1,2,3); 3 vs (1,2,3)
- 4 Categories: 2 vs (1,2,3,4); 3 vs (1,2,3,4); 4 vs (1,2,3,4)
Deviation Standard Contrast
Last
- non-orthogonal
- compare the effect of each category (except last) to the overall experimental effect
- 3 Categories: 1 vs (1,2,3); 2 vs (1,2,3)
- 4 Categories: 1 vs (1,2,3,4); 2 vs (1,2,3,4); 3 vs (1,2,3,4)
Simple Standard Contrast
First
- non-orthogonal
- each category (except the 1st) is compared to the 1st category
- 3 Categories: 1 vs 2; 1 vs 3
- 4 Categories: 1 vs 2; 1 vs 3; 1 vs 4
Simple Standard Contrast
Last
- non-orthogonal
- each category (except the last) is compared to the last category
- 3 Categories: 3 vs 1; 3 vs 2
- 4 Categories: 4 vs 1; 4 vs 2; 4 vs 3
Repeated Standard Contrasts
- non-orthogonal
- each category (except the 1st) is compared to the previous category
- 3 Categories: 1 vs 2; 2 vs 3
- 4 Categories: 1 vs 2; 2 vs 3; 3 vs 4
Polynomial Contrasts
- trend analysis in the categorical predictor
- orthogonal
- no need to construct your own codes
- Linear trend: means increase (or decrease) proportionately across the ordered categories
- Quadratic trend: 1 change in the direction of the trend; at least 3 categories
- Cubic trend: 2 changes in the direction of the trend; at least 4 categories
- Quartic trend: 3 changes in the direction of the trend; at least 5 categories
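For three ordered doses, the standard orthogonal polynomial codes can be checked directly (a sketch; the weight values come from standard coding tables):

```python
# Orthogonal polynomial trend weights for three ordered categories
# (e.g., control, low dose, high dose), from standard coding tables.
linear = [-1, 0, 1]      # means rise (or fall) proportionately
quadratic = [1, -2, 1]   # one change of direction across categories

# Both are proper contrasts: weights sum to zero ...
assert sum(linear) == 0 and sum(quadratic) == 0
# ... and they are orthogonal: the sum of products is zero.
print(sum(l * q for l, q in zip(linear, quadratic)))  # 0
```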
- Example: Viagra - control, high, low
—> 3 categories: check if linear or quadratic
Post-Hoc
- consists of pairwise comparisons
- compares every combination of groups (levels of the predictor variable)
Post-Hoc:
How do pairwise comparisons control for family-wise error rate?
- by correcting the significance level of each test so that the overall Type I error rate remains at .05 across all comparisons
- Bonferroni Correction
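A minimal sketch of the Bonferroni correction for all pairwise comparisons among k groups (the function name is my own):

```python
from itertools import combinations

def bonferroni_alpha(k, familywise=0.05):
    """Per-comparison alpha for all pairwise comparisons among k groups."""
    n_comparisons = len(list(combinations(range(k), 2)))  # k * (k - 1) / 2
    return familywise / n_comparisons

# Three groups -> 3 pairwise tests, each judged at .05 / 3
print(round(bonferroni_alpha(3), 4))  # 0.0167
```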
Which Post-Hoc test performs best?
Depends on 3 criteria:
I- Does the test control the Type I error rate?
II- Does the test control the Type II error rate?
III- Is the test robust?
Type 1 and Type 2 error rate for Post-Hoc tests
- conservative Post-Hoc test:
- Type 1 error small
- Type 2 error high: lack power
Liberal Post-Hoc
- least significant difference (LSD)
- studentized Newman-Keuls
—> high power
Conservative Post-Hocs
- Bonferroni and Tukey's: lack power
~ Bonferroni is more powerful when the number of comparisons is small
~ Tukey's is more powerful when testing a large number of means
- REGWQ: good power and good Type I error control
Are Post Hoc tests robust?
- most post hoc tests perform relatively well under small deviations from normality
- they perform badly when group sizes and variances are NOT equal
Post Hoc tests when group sizes are slightly unequal:
- Hochberg’s GT2
- Gabriel’s
Post Hoc tests when group sizes and variances are very different:
- Tamhane’s T2
- Dunnett’s T3
- Games-Howell
- Dunnett’s C
Best Post Hocs:
- Equal Sample Sizes and Variances
- REGWQ or Tukey's
Best Post Hoc:
- guaranteed control for Type 1 error
- Bonferroni
Best Post Hoc:
- sample sizes slightly different
Gabriel’s
Best Post Hoc:
- sample sizes very different
- Hochberg’s GT2
Best Post Hoc:
- variances: unequal
- Games-Howell
- run this test in addition to any other post hoc tests, because you can rarely be sure that the assumption of homoscedasticity holds
Running One-Way ANOVA: SPSS
I. Analyze
II. Compare Means
III. One-way ANOVA
One-Way ANOVA Procedure
I. Check for assumptions, outliers, influential cases
II. Correct for outliers or other violations
III. Run One-Way ANOVA
IV. Follow-up: Contrasts
V. Calculate effect sizes
Planned Contrasts: SPSS
I. One-Way ANOVA
II. Contrasts
III. Tick Polynomial to test for trends
—> important to have coded the groups in ascending order, e.g. Control: 1, Low Viagra: 2, High Viagra: 3
IV. Degree: linear, quadratic or cubic, depending on how many categories there are in total
For planned contrasts:
—> Coefficients: add the weights in ascending group order
—> click Next to add the weights for Contrast 2
—> do NOT forget to assign 0 to any group not used in that contrast
Post-Hoc: SPSS
- either do planned contrasts or Post-Hoc tests: NOT both
I. One-Way ANOVA
II. Post-Hoc
- Always select Dunnett: it is the only post hoc test that compares each group mean to the control group's mean
- which group is your control?
III. Choose 2-sided
IV: Choose cases analysis by analysis
One-Way ANOVA: Options
- Descriptive
- Homogeneity of variances test: Levene’s
- Brown-Forsythe (concerned about unequal variances)
- Welch (concerned about unequal variances)
- means plot
- Exclude cases analysis by analysis
One-Way ANOVA: Bootstrapping
- good way to overcome bias
- if sample size is very small: do NOT select Bootstrap
Output: One-way ANOVA
- Descriptives
- Test of Homogeneity of Variances: check whether Levene's test is significant; if it is, homoscedasticity is violated
- ANOVA
> within groups: gives details on unsystematic variation within data (SSR)
Output: One-way ANOVA
- Trend Analysis
- look at the linear component:
* this comparison tests whether the means increase across groups in a linear way: if p < .05, the linear trend is significant
Output: One-way ANOVA
- Welch and Brown-Forsythe
- report these F statistics when homogeneity of variances is violated
Are we allowed to halve the significance of a two tailed test ANOVA?
No!
- when comparing more than 2 means: no directional hypothesis
Output: One-way ANOVA
- Tukey’s and REGWQ
- these tests display subsets of groups that have the same means
- if p is non-significant: the means of the groups in that subset are statistically similar
- Harmonic mean: a weighted mean sample size that takes into account the relationship between sample size and variance
- used to reduce the bias from unequal group sizes: some bias still remains
Effect Size of ANOVA: Eta Squared (η²)
- η² = R² = SSM / SST
- this measure is slightly biased
Effect Size of ANOVA: Omega Squared
- Best way
- ω² = [SSM − (dfM × MSR)] / (SST + MSR)
- small effect .01
- medium effect .06
- large effect .14
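Both effect-size formulas are easy to compute from an ANOVA table; the sums of squares below are made-up numbers, not the Viagra data:

```python
def eta_squared(ss_m, ss_t):
    """Eta squared: proportion of total variance explained by the model."""
    return ss_m / ss_t

def omega_squared(ss_m, ss_t, df_m, ms_r):
    """Omega squared: less biased population estimate of the effect size."""
    return (ss_m - df_m * ms_r) / (ss_t + ms_r)

# Hypothetical ANOVA table: SSM = 20, SSR = 24, dfM = 2, dfR = 12
ss_m, ss_r, df_m, df_r = 20.0, 24.0, 2, 12
ss_t = ss_m + ss_r
ms_r = ss_r / df_r
print(round(eta_squared(ss_m, ss_t), 3))                # 0.455
print(round(omega_squared(ss_m, ss_t, df_m, ms_r), 3))  # 0.348
```

Omega squared comes out smaller than eta squared, reflecting the correction for the bias noted above.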
Effect Size of Planned Contrasts
- r_contrast = √[t² / (t² + df)]
- eta squared criteria
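A quick sketch of the contrast effect-size formula, using the t and df values from the results reported below:

```python
from math import sqrt

def r_contrast(t, df):
    """Effect size r for a planned contrast from its t statistic."""
    return sqrt(t**2 / (t**2 + df))

# Contrast 1 of the reported results: t(12) = 2.47
print(round(r_contrast(2.47, 12), 2))  # 0.58
# Contrast 2: t(12) = 2.03
print(round(r_contrast(2.03, 12), 2))  # 0.51
```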
Reporting Results: One-Way ANOVA
- There was a significant effect of Viagra on levels of libido, F(2, 12) = 5.12, p =.025, ω = .60.
- There was a significant linear trend, F(1, 12) = 9.97, p =.008, ω =.62, indicating that as the dose of Viagra increased, libido increased proportionately.
- Planned contrasts revealed that having any dose of Viagra significantly increased libido compared to having a placebo, t(12) = 2.47, p =.029, r =.58, but having a high dose did not significantly increase libido compared to having a low dose, t(12) = 2.03, p =.065, r =.51.