Week 5 revision Flashcards
Define Type 1 error
Rejecting a true null hypothesis - concluding there is an effect when there is none (a false positive).
Define Type 2 error
Failing to reject a false null hypothesis - missing an effect that really exists (a false negative).
Define power in technical and useful terms
The probability of correctly rejecting a false null hypothesis.
The degree to which we can detect treatment effects when they exist in the population.
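The three definitions above can be illustrated with a quick Monte Carlo sketch (a hypothetical one-sided one-sample z-test with known SD = 1; the function name is my own):

```python
import math
import random
from statistics import NormalDist, mean

def simulate_error_rates(effect=0.5, n=30, alpha=0.05, reps=2000, seed=1):
    """Estimate the Type 1 error rate (rejecting H0 on null data: a
    false positive) and power (rejecting H0 when a real effect exists,
    i.e. 1 - Type 2 error rate) by simulating one-sided z-tests."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha)

    def rejects(true_mean):
        sample = [rng.gauss(true_mean, 1) for _ in range(n)]
        z = mean(sample) * math.sqrt(n)  # z statistic, SD known to be 1
        return z > z_crit

    type1 = sum(rejects(0.0) for _ in range(reps)) / reps    # near alpha
    power = sum(rejects(effect) for _ in range(reps)) / reps
    return type1, power
```

With these settings the Type 1 rate should land near .05 and the power near .86, matching the analytic value for d = 0.5 and n = 30.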
What are the 3 situations in which you might care about power?
When designing a study.
After finishing a study without a significant effect.
After finishing a study with a significant effect.
What are the 4 factors that affect power, and what are their effects? (What conditions increase statistical power?)
Larger sample sizes increase statistical power.
A larger alpha level increases power; a smaller alpha decreases it.
Larger effect sizes increase statistical power.
Reducing error variance increases power.
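These four factors can be seen in a quick numerical sketch (a hypothetical one-sided one-sample z-test with standardized effect d; the helper name is my own):

```python
import math
from statistics import NormalDist

def z_test_power(d, n, alpha=0.05):
    # Power of a one-sided one-sample z-test: under H1 the test
    # statistic is N(d*sqrt(n), 1), so power = 1 - Phi(z_crit - d*sqrt(n)).
    nd = NormalDist()
    return 1 - nd.cdf(nd.inv_cdf(1 - alpha) - d * math.sqrt(n))

print(round(z_test_power(0.3, 30), 2))              # baseline    -> 0.50
print(round(z_test_power(0.3, 60), 2))              # larger n    -> 0.75
print(round(z_test_power(0.3, 30, alpha=0.10), 2))  # larger alpha-> 0.64
print(round(z_test_power(0.5, 30), 2))              # larger d    -> 0.86
# Reducing error variance raises the standardized effect d = mean/SD,
# so it acts through the "larger effect" route.
```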
Explain what a power estimate means (e.g., power = .41), and what the threshold is
for optimal levels of power.
A power estimate is the probability that the study will detect the effect if it truly exists - power = .41 means a 41% chance of correctly rejecting a false null hypothesis (and a 59% chance of a Type 2 error).
Threshold for optimal power = .80 by convention.
Identify the pieces of information you need to calculate power estimates, in a priori
analyses (2) and post hoc analyses (3).
A priori = estimate of effect size, estimate of error variance (MSerror).
Post hoc = effect size, MSerror, and the N in your study.
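As a sketch of the a priori case (assuming a one-sided one-sample z-test with a standardized effect size; the function name is hypothetical), the required N follows from the standard formula n = ((z₁₋α + z_power) / d)²:

```python
import math
from statistics import NormalDist

def required_n(effect_size, alpha=0.05, power=0.80):
    # A priori sample size for a one-sided one-sample z-test:
    # n = ((z_{1-alpha} + z_{power}) / d)^2, rounded up.
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha)  # 1.645 for alpha = .05
    z_power = nd.inv_cdf(power)      # 0.842 for power = .80
    return math.ceil(((z_alpha + z_power) / effect_size) ** 2)

print(required_n(0.5))  # -> 25 participants for a medium effect
```

Smaller assumed effects demand much larger samples, which is why the effect-size estimate is the critical input.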
Explain the three caveats (i.e., cautionary tales/qualifications) of power analyses.
Increasing α reduces Type 2 error but increases Type 1 error.
Reducing α reduces Type 1 error but increases Type 2 error.
An interaction means the effects are qualified by a control variable (which creates a confound).
Identify the strategies you can use to maximize power.
Sample size = increase the sample size.
Alpha level = use a larger alpha level.
Larger effects = focus on larger effects.
Error variance = reduce error variance.
What are the strategies you can use to reduce error variance? (4)
1. Improve operationalization/validity of variables.
2. Improve measurements/internal reliability of variables.
3. Improve design (e.g. by blocking).
4. Improve method of analysis (e.g. blocking or ANCOVA).
What is a control/concomitant variable?
An additional factor - well known/less novel/less interesting - that can explain common variance in the DV.
Explain how to use blocking in between-participants designs. (3)
- Pre-measure the control variable.
- Divide participants into blocks according to their scores.
- Stratify random assignment to IV levels (randomize within each block).
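The three steps above can be sketched as follows (a hypothetical helper; the participant IDs, IQ scores, and condition names are made up):

```python
import math
import random

def blocked_assignment(scores, n_blocks=2,
                       conditions=("treatment", "control"), seed=0):
    """Stratified random assignment: rank participants on the
    pre-measured control variable, cut them into blocks, then
    randomize to IV levels *within* each block so every block
    contributes equally to every condition."""
    rng = random.Random(seed)
    ranked = sorted(scores, key=scores.get)          # step 1: use pre-measures
    block_size = math.ceil(len(ranked) / n_blocks)   # step 2: form blocks
    assignment = {}
    for b in range(n_blocks):
        block = ranked[b * block_size:(b + 1) * block_size]
        rng.shuffle(block)                           # step 3: randomize in-block
        for i, participant in enumerate(block):
            assignment[participant] = (b, conditions[i % len(conditions)])
    return assignment

iq = {"p1": 95, "p2": 120, "p3": 88, "p4": 130,
      "p5": 101, "p6": 112, "p7": 99, "p8": 125}
print(blocked_assignment(iq))
```

Each block ends up split evenly across conditions, so the control variable cannot be confounded with the IV.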
Explain how blocking differs from fully randomized experimental designs.
Fully randomized designs = complete random assignment.
Blocking = stratified random assignment (randomization within blocks).
Explain how the blocking variable fits in with the predictions made in a research study and the reporting of findings.
Blocking results aren’t usually of interest, but are still reported in case of any interactions.
What are the two applications of blocking?
Increasing power by reducing error variance, e.g. blocking on IQ in a caffeine vs. problem-solving study.
Detecting potential confounds that explain systematic variation in results, e.g. experimenter effects.