Week 5 - Power/Blocking designs Flashcards
• Define Type 1 errors and Type 2 errors (x3, x3)
Type-1 = finding a significant difference in the sample that actually doesn’t exist in the population
o α
o False positives
Type-2 = finding no significant difference in the sample when one actually exists in the population
o β
o False negatives
• Define power, in both technical and useful terms (x2, x1)
The probability of correctly rejecting a false H0
o 1 - β
Degree to which we can detect treatment effects (main, interaction, simple) when they exist in the population
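Power can be seen concretely by simulation: if you repeatedly sample from two populations whose means genuinely differ, power is the long-run proportion of samples in which the test comes out significant. A minimal sketch in Python (numpy/scipy; the effect size and n are made-up illustration values):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
d, n, alpha, reps = 0.5, 30, 0.05, 5000  # medium effect, n = 30 per group

# H0 is truly false here, so the hit rate estimates power (1 - beta)
hits = sum(
    ttest_ind(rng.normal(0, 1, n), rng.normal(d, 1, n)).pvalue < alpha
    for _ in range(reps)
)
print(f"empirical power ~ {hits / reps:.2f}")  # ~.48 for d = .5, n = 30
```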
• Identify the 3 situations in which you might care about power (plus how to calculate)
Done a study and want to report power of my significant effect.
o Calculate observed power (post hoc power)
Done a study - did not find a significant effect. But I know mean difference exists in population. How can I increase power to detect it?
o Calculate predicted power (a priori power)
Designing a study. Want to be sure of enough power to detect predicted effects.
o Calculate predicted power (a priori power)
• Identify the 4 factors that affect power, and what these effects are (i.e., under what conditions does statistical power increase)
If alpha is relaxed
If treatment effect (d) is larger (distribution means further apart)
If sample size is increased - can find anything if your sample’s big enough…
If error variance is reduced
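Each of the four factors can be checked numerically; a sketch using statsmodels’ power routines (the baseline values are arbitrary illustrations, and reduced error variance enters through d, since d divides the mean difference by the error SD):

```python
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().power  # power(effect_size, nobs1, alpha), 2 groups

print(f"baseline (d=.4, n=30, a=.05): {power(0.4, 30, 0.05):.2f}")   # ~.33
print(f"alpha relaxed to .10:         {power(0.4, 30, 0.10):.2f}")   # ~.45
print(f"larger effect (d=.8):         {power(0.8, 30, 0.05):.2f}")   # ~.86
print(f"larger sample (n=100):        {power(0.4, 100, 0.05):.2f}")  # ~.80
# Reduced error variance shrinks d's denominator, i.e. acts like a larger d.
```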
• Explain what a power estimate means (e.g., power = .41), and what the threshold is for optimal levels of power (x1, x1)
Chance of finding the significant effect, e.g. 41%
.80 (80% chance)
• Identify the pieces of information you need to calculate power estimates, in a priori analyses and post hoc analyses
(x4, x5)
(that you then feed into a program such as G*Power)
A priori – what N to achieve .80 power?
1. Estimate of effect size
2. Estimate of error (MSerror)
• Typically found in previous research
Post hoc – how powerful was my study?
1. Effect size
2. Error (MSerror)
3. N in your study
• Get these from the dataset
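G*Power is the standard tool, but both calculations can be sketched with statsmodels, which takes a standardized effect size (d) that packages the mean difference and MSerror together; the numbers below are illustrative placeholders:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# A priori: what n per group gives .80 power for the effect size
# estimated from previous research?
n_needed = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"a priori: n ~ {n_needed:.0f} per group")  # ~64 for d = .5

# Post hoc: how powerful was my study, given its observed d and n?
obs_power = analysis.solve_power(effect_size=0.45, nobs1=25, alpha=0.05)
print(f"post hoc: power = {obs_power:.2f}")
```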
• Explain the 3 caveats (i.e., cautionary tales/qualifications) of power analyses
Effect must exist for you to find it
Large samples can make tiny effects significant - effects that are unstable/unimportant
Error variance still matters - with high error you can still miss large effects
• Identify the 4 strategies you can use to maximise power
Increase sample size - but practical costs
Relax alpha - but not publishable
Study large effects - tricky in psych…
Reduce error - via method/design, or statistically
• Identify the strategies you can use to reduce error variance (x2)
Method/design - reliability and validity
Try to remove DV variance not due to your IV (by accounting for covariates)
• Define a control or concomitant variable (x1)
Less novel/interesting variable that is already known to explain some variance in your DV (in addition to the IV)
• Explain how to use blocking in between-participants designs (x4)
Add the blocking variable as a factor in the ANOVA
(as long as the control doesn’t correlate with the IV, i.e. explain the same variance)
By pre-measuring Ps on that factor,
And assigning them to groups (e.g. hi, med, lo)
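A sketch of that workflow in Python with pandas/statsmodels (the variables and simulated data are invented for illustration; the * in the formula also fits the focal-by-block interaction used later as the confound check):

```python
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
n = 90

# Pre-measure Ps on the control variable (e.g., a baseline score)
premeasure = rng.normal(50, 10, n)

df = pd.DataFrame({
    # Assign Ps to blocks via a tertile split (hi / med / lo)
    "block": pd.qcut(premeasure, 3, labels=["lo", "med", "hi"]),
    # Randomly assign Ps to treatment levels
    "iv": rng.permutation(np.tile(["control", "treatment"], n // 2)),
})
df["dv"] = (premeasure * 0.5                  # variance due to the block
            + (df["iv"] == "treatment") * 3   # treatment effect
            + rng.normal(0, 5, n))            # residual error

# Add the blocking variable as a factor in the ANOVA
model = ols("dv ~ C(iv) * C(block)", data=df).fit()
print(anova_lm(model, typ=2))
```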
• Explain how experimental differs from blocking designs (x2, x3)
Experimental are fully randomised:
*Ps assigned to 1 level of each (and every) factor
Blocking is not:
o Ps are categorized into levels of blocking factor
o Within blocking factor Ps are randomised
• Explain how the blocking variable fits in with the predictions made in a research study and the reporting of findings (x2)
Unlike effect of focal IV, the effect of blocking factor is not usually of interest
o Is only factored in to reduce error/increase power of test for focal IV
• List the two applications of blocking (x1, x2)
Increase power by reducing error variance (ie, MSerror)
Detecting potential confounds
*An interaction of focal/blocking factors indicates confound - focal IV effect is reduced
• Explain how blocking reduces error variance (x3)
Variance due to blocking factor is partitioned out,
(along with other factor, and interaction)
Thereby reducing what’s available for error
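The partitioning is easy to see by fitting the same data with and without the blocking factor and comparing the error terms (a sketch with simulated, made-up data):

```python
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols

rng = np.random.default_rng(3)
n = 90
df = pd.DataFrame({
    "iv": rng.permutation(np.tile(["a", "b"], n // 2)),
    "block": np.repeat(["lo", "med", "hi"], n // 3),
})
block_effect = df["block"].map({"lo": -2.0, "med": 0.0, "hi": 2.0})
df["dv"] = (df["iv"] == "b") * 1.0 + block_effect + rng.normal(0, 1, n)

unblocked = ols("dv ~ C(iv)", data=df).fit()
blocked = ols("dv ~ C(iv) + C(block)", data=df).fit()

# The block-related variance leaves the error term once it is modelled
print(f"MSerror without block: {unblocked.mse_resid:.2f}")
print(f"MSerror with block:    {blocked.mse_resid:.2f}")  # much smaller
```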
• Explain what happens if the blocking variable does not reduce error variance (x3)
You are a moron!
Wrong blocking variable - it’s supposed to have a known DV relationship
(but no IV relationship)
• Explain the difference between an independent variable (IV), a control variable, and a confound variable (x2, x2, x2)
IV has significant effect, of great interest to you
o (systematic variance that you predicted)
Control variable has significant effect, which is okay
*(reduces error variance) but not novel / theoretically interesting
Confound variable has significant effect, which is not wanted
o (additional systematic variance)
• Explain how blocking can help you detect potential confounds
By showing a failure of treatment to generalise across levels of blocking factor
• List 3 advantages of blocking (x2, x1, x2)
May equate treatment groups better than a completely randomized design
• Assuming equal n for levels of blocking factor
Greater power, because error term reduced
Can check interactions of treatments and blocks
• Effects of treatments generalise?
• List the 3 disadvantages of blocking
Practical costs of introducing the blocking factor
Loss of power if blocking variable is poorly correlated with DV
What are the 4 quadrants of the signal detection table describing the reality/statistics outcomes for power?
- False alarm (alpha rate): rejecting a true null
- Jackpot (1 – beta, or power): correctly rejecting a false null
- Good save (1 – alpha): retaining a true null
- Miss (beta): retaining the null when the alternate is true
What is the (ethical) difference between the 2 ways of calculating power? (x2)
o A priori power way less dodgy, more likely to be useful
o Post hoc is a defensive use of power calculation…
What is a direct estimate of power? (x1)
How to calculate? (x1)
And interpret? (x3)
Cohen’s d
o d = (μ1 − μ0) / population SD
o Small effect: d = 0.20 (85% overlap of distributions)
o Medium effect: d = 0.50 (67% overlap)
o Large effect: d = 0.80 (53% overlap)
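Computing d from two samples uses the pooled SD in the denominator; a minimal sketch (data made up):

```python
import numpy as np

def cohens_d(x, y):
    # d = (mean1 - mean2) / pooled SD
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) +
                         (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled_sd

rng = np.random.default_rng(4)
print(cohens_d(rng.normal(0.5, 1, 500), rng.normal(0.0, 1, 500)))  # ~0.5
```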
What are 3 additional terms for blocking designs?
o Randomized block design
o Stratification
o Matched samples design