Week 10 - RM ANOVA Flashcards
• Explain the 2 sets of limitations associated with between-participants designs
Each P can serve in only one cell, otherwise the independence-of-scores assumption of factorial ANOVA is violated
Any differences between Ps are treated as error
• Explain how within-participants designs can overcome the 2 limitations of between-Ps designs (x2)
Calculate any variance due to dependence/individual differences and partition it out, so it does not end up in the error term (individual-difference variance within a group may be larger than the variance between groups)
o Thus, we can reduce our error term and increase POWER
• Describe the sources of systematic variance in within-participants designs (x3)
No within-cell variance, as only 1 data point in each, so:
Between-Ps variance - individual differences (partitioned out of error first)
Within-Ps variance - between-treatment (effect) and error
• Describe the sources of error/unexplained/residual variance in within-participants designs (x2)
It’s the interaction of participant and treatment
i.e., inconsistencies in the treatment effect across Ps at each level of the treatment
• Compare the sources of error variance in within-participants ANOVA and between-participants ANOVA
In between:
o Total Variance = Between group (treatment effect) + within group (error)
In within:
o Total Variance = Between Ps/group (individual differences) + within Ps/group (treatment and error)
• Explain how between-participants variance and within-participants variance are used in within-participants ANOVA
Between-Ps variance reflects individual differences - it is partitioned out of the total first, so it inflates neither the treatment effect nor the error term
Within-Ps variance is then partitioned into between-treatment (effect) variance and error (treatment x Ps) variance
• State the formula for the calculated F ratio in 1-way within-participants ANOVA
F = MStreatment / MStreatment x Ps
o i.e., treatment variance divided by the treatment x Ps interaction variance
• Explain how the formula for the calculated F ratio in 1-way within-participants ANOVA is similar (x1) and different (x1) to the formula for calculated F in 1-way between-participants ANOVA
Still equates to MStreat/MSerror
Just had individual diffs removed
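Written side by side, in that notation:
o Between-Ps: F = MStreatment / MSwithin-groups (error still contains individual differences)
o Within-Ps: F = MStreatment / MStreatment x Ps (individual differences already partitioned out)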
• Identify the structural model for within-participants ANOVA
(x1)
And define components (x5)
Xij = μ + πi + τj + eij
For i cases and j treatments:
Xij, any DV score is a combination of:
o μ - the grand mean,
o πi - variation due to the i-th person (μi - μ) (think p for pi and Ps)
o τj - variation due to the j-th treatment (μj - μ)
o eij - error - variation associated with the i-th case in the j-th treatment; error = πτij (the Ps x treatment interaction, plus chance)
For RM ANOVA, define total variability (x1)
Deviation of each observation from grand mean
For RM ANOVA, define variability due to factor (x1)
Deviation of factor group means from grand mean (μj - μ)
For RM ANOVA, define variability due to Ps (x2)
Deviation of each participant’s mean from the grand mean (μi - μ)
For RM ANOVA, define error (x3)
Changes (inconsistencies) in effect of factor across participants
What remains after individual differences (averaged over treatment levels) have been partitioned out
(TR x P interaction)
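Putting those definitions into sum-of-squares form (a standard breakdown; n = number of Ps, j = number of treatments):
SStotal = Σ Σ (Xij - grand mean)²  [every score vs grand mean]
SSP = j x Σ (participant mean - grand mean)²  [each participant's mean vs grand mean]
SStreatment = n x Σ (treatment mean - grand mean)²  [each treatment mean vs grand mean]
SSerror = SStotal - SSP - SStreatment  [the TR x P interaction residual]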
• List the formulae for various degrees of freedom in 1-way within-participants ANOVA
n = number of Ps, N = number of observations, j = number of conditions
dftotal = nj - 1 = N - 1
dfP = n - 1
dftreatment = j - 1
dferror = (n - 1)(j - 1)
o Error df is different from between-participants ANOVA - because it is now the interaction of the participant factor x the treatment factor
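A minimal numpy sketch of the whole 1-way RM partition; the data matrix, variable names and values below are invented purely for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical scores: rows = participants (n = 5), columns = conditions (j = 3)
X = np.array([[4., 6., 8.],
              [3., 5., 9.],
              [5., 7., 7.],
              [2., 4., 6.],
              [4., 8., 9.]])
n, j = X.shape
grand = X.mean()

# Sums of squares, following the partition above
ss_total = ((X - grand) ** 2).sum()
ss_p     = j * ((X.mean(axis=1) - grand) ** 2).sum()   # between-Ps (individual differences)
ss_treat = n * ((X.mean(axis=0) - grand) ** 2).sum()   # between-treatment (effect)
ss_error = ss_total - ss_p - ss_treat                  # treatment x Ps interaction

# Degrees of freedom
df_treat, df_error = j - 1, (n - 1) * (j - 1)

# F = MStreatment / MStreatment x Ps
F = (ss_treat / df_treat) / (ss_error / df_error)
p = stats.f.sf(F, df_treat, df_error)
print(F, p)
```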
• When looking at the summary table for the omnibus test in 1-way within-participants ANOVA, explain which parts of the output you need to report (x2), and which parts you can ignore (x1)
Report df and F for treatment and error
Ignore output for between-subjects factor (estimated variance due to individual difference averaged over treatment levels)
• Explain the similarities in following up a significant main effect in 1-way between-participants ANOVA and 1-way within-participants ANOVA (x1)
If more than 2 levels, both need follow up comparisons
• Explain the differences in following up a significant main effect in 1-way between-participants ANOVA and 1-way within-participants ANOVA
Between-Ps ANOVA uses the pooled error term from the omnibus test (because homogeneity of variance is assumed)
Within-Ps ANOVA partitions out and ignores the main Ps effect - it computes an error term estimating how inconsistently participants change over the within-subjects levels
• Because we expect inconsistency in the TR effect x Ps, simple comparisons use only the data for the conditions involved in the comparison & a separate error term is calculated each time
• Explain how systematic/treatment variance and error/residual variance is partitioned in 2-way within-participants ANOVA. (x10)
Between-Ps variance
Within-Ps variance, made up of:
o Between-treatment variance
* Main effect A
* Main effect B
* Interaction A x B
o Residuals:
* A x P interaction
* B x P interaction
* A x B x P interaction
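In equation form (the standard partition for a fully within-Ps 2-way design):
SStotal = SSP + SSA + SSB + SSAxB + SSAxP + SSBxP + SSAxBxP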
• Explain the similarities/diffs in how systematic/treatment variance and error/residual variance is partitioned in 2-way within-participants ANOVA and 2-way between
In between, partitioned into treatment factors, interaction, and error
In within, still have factor and interaction, but each has own corresponding error term
• Explain how error terms are calculated in omnibus 2-way within-participants ANOVA tests
Corresponds to an interaction between the effect due to participants, and the treatment effect
• Main effect of A - error term is MSAxP (= SSAxP / dfAxP)
• Main effect of B - error term is MSBxP (= SSBxP / dfBxP)
• AxB interaction - error term is MSABxP (= SSABxP / dfABxP)
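So the omnibus F ratios are (standard notation, matching the MS labels above):
FA = MSA / MSAxP
FB = MSB / MSBxP
FAxB = MSAxB / MSABxP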
• Explain how error terms are calculated in 2-way within-participants ANOVA follow-up tests
Simple effects error term is MSA at B1xP
o The interaction between the A treatment and participants, at B1
Simple comparisons error term is MSACOMP at B1xP
o Interaction between the A treatment (only the data contributing to the comparison, ACOMP), and participants, at B1
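Written as F ratios (B1 here is just the illustrative level used above):
FA at B1 = MSA at B1 / MSA at B1 x P
FAcomp at B1 = MSAcomp at B1 / MSAcomp at B1 x P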
• Define fixed factors
You choose the levels of the IV
Eg, treatment is fixed
• Define random factors
Levels of IV chosen randomly
eg Ps - the particular participants are a random sample from a wider population of possible Ps (we want to generalise beyond the people tested)
• List the 3 assumptions of the mixed-model approach
Not dissimilar to between-participants assumptions:
Sample is randomly drawn from population
DV scores are normally distributed in the population
Compound symmetry
What are the two assumptions that make up compound symmetry
o Homogeneity of variances in levels of repeated-measures factor
o Homogeneity of covariances
(equal correlations/covariances between pairs of levels)
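For example, with 3 levels, a compound-symmetric population covariance matrix has one variance (σ²) on the diagonal and one covariance (σc) everywhere off it:
σ²  σc  σc
σc  σ²  σc
σc  σc  σ²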
• For Mauchly’s test of sphericity:
o Explain what question it tests (x3)
The compound symmetry assumption is very restrictive
It is often violated, so Mauchly’s test instead asks:
Are the main diagonal (variances) and off-diagonal (covariances) entries roughly equal?
• For Mauchly’s test of sphericity:
o Identify the statistic it uses
(x1)
o explain what a significant result means (x1)
Chi-square
That sphericity assumption is violated
• For Mauchly’s test of sphericity:
o Indicate whether it is a robust test or not (x2)
No - it often says everything is fine when sphericity violations are present in the data
• Explain when violations of sphericity matter (x1)
In all within-participants designs/factors with 3+ levels
• Explain when violations of sphericity do not matter (x2)
In between-participants designs, because treatments are unrelated (different participants in different treatments)
• The assumption of homogeneity of variance still matters though
When within-participant factors have 2 levels, because only one estimate of covariance can be computed
• Explain why the sphericity assumption is important – what implications does it have for research?
When violated, F-ratios are positively biased
• Critical values of F [based on df = (a – 1), (a – 1)(n – 1)] are too small
Therefore, probability of Type 1 error increases
• So we need to adjust critical values of F
• Explain what epsilon adjustments are (x1)
A value (ranging between 0 and 1) by which the degrees of freedom for the F test are multiplied
• Explain what epsilon adjustments do (x3)
Correct violations of sphericity
Equal to 1 when the sphericity assumption is met (no adjustment), and less than 1 when it is violated - shrinking the df and so raising the critical value of F
• Explain why epsilon adjustments are important/useful
The further epsilon is from 1, the worse the sphericity violation for that test, and the bigger the difference it makes to the critical F
• List and explain the 3 types of epsilon (x3, x4, x4)
And indicate which one is: rather liberal/lax, rather conservative, and just right
Lower-bound epsilon
o Used for conditions of maximal heterogeneity, or worst-case violation of sphericity
o Often too conservative/increases Type 2
Greenhouse-Geisser epsilon
o Size of ε depends on degree of sphericity violation
o 1 ≥ ε ≥ 1/(k-1) : varies between 1 (sphericity intact) and lower-bound epsilon (worst-case violation)
o Generally recommended – not too stringent, not too lax
Huynh-Feldt epsilon
o An adjustment applied to the GG-epsilon
o Often results in epsilon exceeding 1, in which case reset to 1
o Used when “true value” of epsilon is believed to be ≥ .75 - too liberal
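A minimal numpy sketch of the Greenhouse-Geisser (Box) epsilon, computed from the sample covariance matrix of a hypothetical n x k data matrix; the data and variable names are invented for illustration, with the lower-bound epsilon shown for comparison:

```python
import numpy as np

# Hypothetical data: rows = participants, columns = k repeated-measures levels
X = np.array([[4., 6., 8., 7.],
              [3., 5., 9., 6.],
              [5., 7., 7., 8.],
              [2., 4., 6., 5.],
              [4., 8., 9., 9.]])
n, k = X.shape

# Sample covariance matrix of the k levels, then double-centre it
S = np.cov(X, rowvar=False)
P = np.eye(k) - np.ones((k, k)) / k          # centring matrix
V = P @ S @ P

# Box / Greenhouse-Geisser epsilon: trace(V)^2 / [(k-1) * trace(V @ V)]
eps_gg = np.trace(V) ** 2 / ((k - 1) * np.trace(V @ V))
eps_lb = 1.0 / (k - 1)                        # lower-bound epsilon (worst-case violation)

# Adjusted df for the treatment F test: multiply both df by epsilon
df_treat = eps_gg * (k - 1)
df_error = eps_gg * (k - 1) * (n - 1)
print(eps_gg, eps_lb, df_treat, df_error)
```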
• List the advantages of within-participants/repeated-measures designs
More efficient
• n Ps in j treatments generate nj data points
• Simplifies procedure
More sensitive
• Estimate individual differences (SSparticipants) and remove from error term
• List the disadvantages of within-participants/repeated-measures designs
Restrictive statistical assumptions
Sequencing effects:
• Learning, practice – improved later regardless of manipulation
• Fatigue – deteriorating later regardless of manipulation
• Habituation – insensitivity to later manipulations
• Sensitisation – become more responsive to later manipulations
• Contrast – previous treatment sets standard to which react
• Adaptation – adjustment to previous manipulations changes reaction to later
• Direct carry-over – learn something in previous that alters later
• Etc!
Counterbalancing helps, but can still get treatment x order interactions
• Explain the methodology that can be employed to reduce one of the disadvantages of within-participants designs (x4)
MANOVA - gets around restrictive assumptions (spec. sphericity/compound symmetry):
Weighting DV for each level of RM IV with coefficients
(as with scores for each IV in multiple regression),
To create a predicted DV score that maximises differences across the levels of the IV
What are degrees of freedom in RM ANOVA?
Factor: number of levels -1
Factor error: (dfFactor) x (number of Ps -1)
Interaction: dfA x dfB
Interaction error: dfA x dfB x (number of Ps - 1)
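E.g., with hypothetical numbers - 10 Ps, factor A with 3 levels, factor B with 4 levels:
dfA = 2, dfAxP = 2 x 9 = 18
dfB = 3, dfBxP = 3 x 9 = 27
dfAxB = 2 x 3 = 6, dfAxBxP = 6 x 9 = 54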
What is the main issue with using MANOVA? (x2)
- Instead of adapting the model to the observed DVs, it selectively weights/discounts them according to how well they fit the existing model
- Atheoretical, over-capitalises on chance
Under what conditions would it be ok to use MANOVA? (x2)
Where you’ve used a grab bag of levels (e.g., randomly selected 4 out of 100 trials to analyse)
o When levels then have no real meaning, and some may be more error prone, e.g. emotion scales
In RM ANOVA, what is the error term for ANY effect equal to (including main effects, interactions and follow-ups)
The interaction between that effect and the effect of participants (a random factor)