Week 5 - Power/Blocking designs Flashcards
• Define Type 1 errors and Type 2 errors (x3, x3)
Type-1 = finding a significant difference in the sample that actually doesn’t exist in the population
o α
o False positives
Type-2 = finding no significant difference in the sample when one actually exists in the population
o β
o False negatives
• Define power, in both technical and useful terms (x2, x1)
The probability of correctly rejecting a false H0
o 1 - β
Degree to which we can detect treatment effects (main, interaction, simple) when they exist in the population
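A minimal sketch of the 1 - β idea, assuming a two-sided, two-sample z-test approximation (the function name and the z-test simplification are mine; real power tools like G*Power use exact t-based routines):

```python
from math import sqrt
from statistics import NormalDist

def power_two_group(d, n, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test.
    d = standardised effect size, n = participants per group."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)   # critical value for the chosen alpha
    ncp = d * sqrt(n / 2)               # where the test statistic sits if H1 is true
    # 1 - beta: probability the statistic lands beyond a critical value
    return z.cdf(ncp - z_crit) + z.cdf(-ncp - z_crit)

beta = 1 - power_two_group(0.5, 64)     # Type-2 error rate for d = .5, n = 64
```

With d = .5 and n = 64 per group this gives roughly .80 power, the conventional benchmark.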
• Identify the 3 situations in which you might care about power (plus how to calculate)
Done a study and want to report power of my significant effect.
o Calculate observed power (post hoc power)
Done a study - did not find a significant effect. But I know mean difference exists in population. How can I increase power to detect it?
o Calculate predicted power (a priori power)
Designing a study. Want to be sure of enough power to detect predicted effects.
o Calculate predicted power (a priori power)
• Identify the 4 factors that affect power, and what these effects are (i.e., under what conditions does statistical power increase)
If alpha is relaxed
If treatment effect (d) is larger (distribution means further apart)
Increase sample size - can find anything if your sample’s big enough…
Reduce error variance
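All four factors can be seen in one Monte Carlo sketch (assumes a two-sample z-test with known sigma; the function name, seed, and rep count are illustrative): raising mu_diff, raising n, relaxing alpha, or shrinking sigma (error variance) each raises the rejection rate.

```python
import random
from statistics import NormalDist, mean

def simulated_power(mu_diff, sigma, n, alpha=0.05, reps=2000, seed=1):
    """Monte Carlo power estimate for a two-sample z-test with known sigma:
    the proportion of simulated studies that reject H0."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    se = sigma * (2 / n) ** 0.5          # SE of the mean difference
    hits = 0
    for _ in range(reps):
        g1 = [rng.gauss(0, sigma) for _ in range(n)]         # control group
        g2 = [rng.gauss(mu_diff, sigma) for _ in range(n)]   # treatment group
        if abs(mean(g2) - mean(g1)) / se > z_crit:
            hits += 1
    return hits / reps
```

Comparing calls (e.g. sigma = 1 vs sigma = 2, or alpha = .10 vs .01) reproduces each of the four effects above.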
• Explain what a power estimate means (e.g., power = .41), and what the threshold is for optimal levels of power (x1, x1)
Probability of detecting the effect, e.g. a 41% chance
.80 (80% chance)
• Identify the pieces of information you need to calculate power estimates, in a priori analyses and post hoc analyses
(x4, x5)
(that you then feed into a program such as G*Power)
A priori – what N do I need to achieve .80 power?
1. Estimate of effect size
2. Estimate of error (MSerror)
o Typically found in previous research
Post hoc – how powerful was my study?
1. Effect size
2. Error (MSerror)
3. N in your study
o Get these from the dataset
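For the a priori case, a minimal sketch of "what N for .80 power?" that searches for the smallest per-group N (z-test approximation; the function name is illustrative, and G*Power's exact t-based answer can differ by a participant or two):

```python
from math import sqrt
from statistics import NormalDist

def n_for_power(d, target=0.80, alpha=0.05):
    """Smallest per-group n reaching `target` power for a
    two-sided, two-sample z-test with standardised effect size d."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    n = 2
    # power rises monotonically with n, so step up until the target is met
    while z.cdf(d * sqrt(n / 2) - z_crit) < target:
        n += 1
    return n
```

For a medium effect (d = .5) this lands near the textbook figure of ~64 per group; larger effects need far fewer participants.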
• Explain the 3 caveats (i.e., cautionary tales/qualifications) of power analyses
Effect must exist for you to find it
Large samples can detect tiny effects that are unstable/unimportant
Error variance still matters - with high error you can still miss large effects
• Identify the 4 strategies you can use to maximise power
Increase sample size - but practical/cost limits
Relax alpha - but may not be publishable
Study large effects - tricky in psych…
Reduce error - via method/design, or statistically
• Identify the strategies you can use to reduce error variance (x2)
Method/design - reliability and validity
Try to remove DV variance not due to your IV (by accounting for covariates)
• Define a control or concomitant variable (x1)
Less novel/interesting variable that is already known to explain some variance in your DV (in addition to the IV)
• Explain how to use blocking in between-participants designs (x4)
By pre-measuring Ps on the control variable,
Assigning them to groups (e.g. hi, med, lo),
And adding it as a factor in the ANOVA
(as long as the control doesn't correlate with the IV, i.e. explain the same variance)
• Explain how experimental differs from blocking designs (x2, x3)
Experimental are fully randomised:
o Ps randomly assigned to 1 level of each (and every) factor
Blocking is not:
o Ps are categorized into levels of blocking factor
o Within blocking factor Ps are randomised
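A sketch of the blocking procedure above: rank Ps on the pre-measured control variable, cut into lo/med/hi blocks, then randomise to conditions within each block (participant ids, tertile cut-points, and the function name are illustrative):

```python
import random

def block_and_assign(scores, conditions=("treatment", "control"), seed=0):
    """Blocking design: categorise Ps into lo/med/hi blocks on a
    pre-measured control variable, then randomise within each block."""
    rng = random.Random(seed)
    ranked = sorted(scores, key=scores.get)   # participant ids, low -> high score
    k = len(ranked) // 3
    blocks = {"lo": ranked[:k], "med": ranked[k:2 * k], "hi": ranked[2 * k:]}
    assignment = {}
    for block, members in blocks.items():
        members = members[:]
        rng.shuffle(members)                  # randomisation happens *inside* the block
        for i, p in enumerate(members):
            assignment[p] = (block, conditions[i % len(conditions)])
    return assignment
```

Because each block contributes Ps to every condition, the blocking factor cannot be confounded with the focal IV by the assignment itself.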
• Explain how the blocking variable fits in with the predictions made in a research study and the reporting of findings (x2)
Unlike effect of focal IV, the effect of blocking factor is not usually of interest
o Is only factored in to reduce error/increase power of test for focal IV
• List the two applications of blocking (x1, x2)
Increase power by reducing error variance (ie, MSerror)
Detecting potential confounds
o An interaction of focal/blocking factors indicates a confound - the focal IV effect is reduced
• Explain how blocking reduces error variance (x3)
Variance due to blocking factor is partitioned out,
(along with other factor, and interaction)
Thereby reducing what’s available for error
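A worked toy example of that partitioning (the scores are invented): with the blocking factor in the model, its sum of squares and the interaction's come out of what would otherwise sit in error.

```python
from statistics import mean

# Toy data: 2 IV levels x 2 blocks, 3 participants per cell
cells = {
    ("treat", "hi"):   [14, 15, 16],
    ("treat", "lo"):   [9, 10, 11],
    ("control", "hi"): [11, 12, 13],
    ("control", "lo"): [6, 7, 8],
}

scores = [y for ys in cells.values() for y in ys]
grand = mean(scores)
ss_total = sum((y - grand) ** 2 for y in scores)

def ss_factor(level_of):
    """Between-groups SS for one factor, collapsing over the other."""
    groups = {}
    for key, ys in cells.items():
        groups.setdefault(level_of(key), []).extend(ys)
    return sum(len(ys) * (mean(ys) - grand) ** 2 for ys in groups.values())

ss_iv = ss_factor(lambda k: k[0])      # focal IV
ss_block = ss_factor(lambda k: k[1])   # blocking factor
ss_cells = sum(len(ys) * (mean(ys) - grand) ** 2 for ys in cells.values())
ss_inter = ss_cells - ss_iv - ss_block # IV x block interaction

ss_error_blocked = ss_total - ss_cells     # error with blocking in the model
ss_error_unblocked = ss_total - ss_iv      # error if the block is ignored
```

Here SSerror drops from 83 to 8 once the blocking factor (and interaction) are partitioned out, so the F-test on the focal IV becomes far more powerful.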