Null hypothesis Flashcards
What are the different types of null effects?
Outcome not different from chance
Outcome was real but not statistically significant
- because the measures were not sensitive enough
- e.g., effects of caffeine tested with chocolate (which contains only a small amount of caffeine)
Outcome reached significance levels to reject the null hypothesis, but the size of the impact was too small to be meaningful
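The third type can be illustrated with a quick simulation (all numbers are made up: a true difference of 0.05 SD and 50,000 participants per group). The result is highly significant, yet Cohen's d stays far below even the conventional "small" benchmark of 0.2:

```python
import math
import random

random.seed(1)

# Hypothetical example: a tiny true difference (0.05 SD) tested with a huge
# sample reaches significance even though the effect is too small to matter.
n = 50_000
group_a = [random.gauss(0.00, 1) for _ in range(n)]
group_b = [random.gauss(0.05, 1) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

diff = mean(group_b) - mean(group_a)
pooled_sd = math.sqrt((var(group_a) + var(group_b)) / 2)
cohens_d = diff / pooled_sd                       # standardized effect size
t = diff / math.sqrt(var(group_a) / n + var(group_b) / n)
# normal approximation to the two-sided p-value (fine for such large samples)
p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

print(f"d = {cohens_d:.3f}, t = {t:.1f}, p = {p:.1e}")
```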
When can a null effect be useful?
- to find out whether a cheaper treatment is as effective as a more expensive one
- to falsify theoretical predictions about the presence of an effect
- experiments should always be designed so that a non-significant finding can be observed
What are some reasons for null effects?
Independent variable
- not enough between-groups difference
- within-groups variability obscured the group differences.
- there really is no difference
Dependent variable
- Weak manipulations→ lack of large differences in stimuli
- Insensitive measures→ poor dependent variables
- Ceiling and floor effects→ scaling problems
→ these can all be checked with pilot testing
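A pilot test for ceiling or floor effects can be as simple as counting how many responses land on a scale endpoint (a sketch with made-up pilot ratings; the 50% threshold is an arbitrary rule of thumb):

```python
# Sketch of a pilot-test check for ceiling/floor effects: if responses pile
# up at a scale endpoint, the measure cannot register group differences.
def endpoint_rates(scores, lo, hi):
    n = len(scores)
    floor_rate = sum(s == lo for s in scores) / n
    ceiling_rate = sum(s == hi for s in scores) / n
    return floor_rate, ceiling_rate

# Made-up pilot data on a 1-7 scale, clustered at the top of the scale
pilot = [7, 7, 6, 7, 7, 5, 7, 7, 7, 6]
floor_rate, ceiling_rate = endpoint_rates(pilot, lo=1, hi=7)
if ceiling_rate > 0.5:  # arbitrary rule-of-thumb threshold
    print(f"Possible ceiling effect: {ceiling_rate:.0%} of responses at the maximum")
```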
Within-groups variability
Measurement error
- should use reliable measurements
- measure more instances (multiple observations per participant)
Individual differences
- Change the design to a matched-groups design
- Add more participants→ reduce the impact of individual differences
Situation noise→ e.g., lighting, room conditions…
- controlling the surroundings of an experiment
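The "add more participants" point can be made concrete: the standard error of a group mean is sd/√n, so the same within-groups spread obscures a real difference less as n grows (a sketch with an assumed within-groups SD of 10):

```python
import math

# Sketch: why adding participants reduces the impact of individual
# differences. The standard error of a group mean shrinks with sqrt(n),
# so the same within-groups spread obscures a real difference less.
def standard_error(sd, n):
    return sd / math.sqrt(n)

within_sd = 10.0  # hypothetical within-groups standard deviation
for n in (25, 100, 400):
    print(f"n = {n:3d}  SE = {standard_error(within_sd, n):.2f}")
# quadrupling n halves the standard error
```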
Other causes
- Sampled participants→ representativeness, unbiased, fatigue or practice effects, ethical issues?
- Stimulus materials and equipment→ novelty of materials, same across participants, standardization
- Experimenters→ trained enough, objectivity, fatigue?
- Procedures→ same across participants, enough time for practice for novel procedures
- Constraints on study designs
What are some constraints on study design?
- limited sample sizes→ e.g., only participants free during certain hours can apply
- issues with the data collection process→ e.g., only one experimenter available on weekends
- issues with the analysis methods employed→ not all measures collected at the same timepoint
What can be done after finding a null effect?
Re-run the study with improved design details
- Advantage→ more likely to be a strong test of the null hypothesis
- Disadvantage→ time consuming
- consistent with the scientific method
Re-measure the dependent variables to reduce variability
- must be reported honestly in the write-up
- average out the variance and compare each rater to the mean
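Averaging across raters is one way to re-measure with less variability: each rater's score is the true score plus independent noise, so the mean of k raters has its noise variance divided by k (a minimal sketch with made-up ratings):

```python
# Sketch: average each participant's score across several raters to reduce
# measurement error (made-up ratings: three raters, four participants).
def rater_average(ratings_per_participant):
    return [sum(rs) / len(rs) for rs in ratings_per_participant]

ratings = [(6, 7, 8), (4, 5, 3), (9, 9, 8), (5, 6, 7)]
averaged = rater_average(ratings)
print(averaged)  # one less-noisy score per participant
```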
Constrain analyses to address portions of study without design flaws
- Advantage→ data already available
- Disadvantage→ findings from a partial report are difficult to interpret: has the null result replicated across all conditions?
- partial analyses are only appropriate if the null results are known for all conditions
Consider publishing the null effect as is
- scientists need to know all outcomes
Formula for the Bayes factor
BF10 = P(data | H1) / P(data | H0)
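As a minimal sketch of that ratio, here is BF10 computed for two point hypotheses about a coin's bias (all numbers are made up; real Bayesian analyses usually put a prior distribution on H1 rather than a single value):

```python
from math import comb

# Minimal sketch of BF10 = P(data | H1) / P(data | H0) using two point
# hypotheses about a coin's bias (hypothetical example values).
def binomial_likelihood(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

k, n = 60, 100                 # 60 heads in 100 flips
p_h0, p_h1 = 0.5, 0.6          # H0: fair coin, H1: biased toward heads
bf10 = binomial_likelihood(k, n, p_h1) / binomial_likelihood(k, n, p_h0)
print(f"BF10 = {bf10:.2f}")    # > 1 favors H1, < 1 favors H0
```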