Meta and open science Flashcards
Type 1 Error
False Positive
Null is true but we reject it
Test gives a significant result even though there is no underlying effect
Controlled by the significance threshold (alpha); lowering the threshold reduces the Type 1 error rate
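A minimal simulation sketch (the t-test, group sizes, and alpha = .05 are illustrative choices, not from the cards): when the null is true, roughly alpha of all tests still come out significant, which is why the threshold sets the Type 1 error rate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05           # significance threshold
n_experiments = 10_000
false_positives = 0

for _ in range(n_experiments):
    # Both groups come from the same distribution, so the null is true
    a = rng.normal(loc=0.0, scale=1.0, size=30)
    b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        false_positives += 1

print(f"False positive rate: {false_positives / n_experiments:.3f}")  # ~ alpha
```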
Type 2 Error
False Negative
Null is false but we fail to reject it
Test gives a non-significant result even though there is actually an underlying effect
Harder to control for
The power of an experiment tells us how likely we are to detect a true effect; it can't be adjusted statistically after the fact, but it can be improved through design choices such as increasing the sample size
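A rough sketch of how power depends on sample size (the effect size of 0.5 SD and the group sizes are assumed for illustration, not from the cards): the same true effect is detected far more often as n grows.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha = 0.05
true_effect = 0.5      # assumed difference in means, in units of the SD
n_simulations = 2_000

for n_per_group in (10, 20, 50, 100):
    detections = 0
    for _ in range(n_simulations):
        # Groups differ by the assumed true effect, so the null is false
        a = rng.normal(0.0, 1.0, size=n_per_group)
        b = rng.normal(true_effect, 1.0, size=n_per_group)
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            detections += 1
    print(f"n = {n_per_group:3d} per group -> estimated power {detections / n_simulations:.2f}")
```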
Causes of type 1 and 2 errors
Random: researchers are just unlucky
Systematic: published results have consistent errors, due to factors like publication bias or questionable research practices (QRPs)
Meta science
Using scientific methodology to study the scientific process itself
Useful for identifying strengths and weaknesses of the process and for exploring potential solutions
Distribution of published p-values
A histogram of p-values from published studies shows an unusual spike in frequency just below the significance threshold of .05
Possible causes:
- publication bias
- over-emphasis on statistical testing
- researcher degrees of freedom (see the simulation sketch after this list)
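A toy simulation of one researcher degree of freedom, under assumptions of my own (a true null, and a rule of "if p just misses .05, collect 20 more participants per group and re-test"): this kind of flexibility piles extra p-values into the bin just below .05, producing the spike described above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
alpha = 0.05
n_studies = 20_000

def run_study(use_qrp):
    # Null is true: both groups share the same distribution
    a = rng.normal(size=30)
    b = rng.normal(size=30)
    _, p = stats.ttest_ind(a, b)
    if use_qrp and alpha <= p < 0.10:
        # QRP: the result "nearly" reached significance, so collect
        # 20 more participants per group and test again
        a = np.concatenate([a, rng.normal(size=20)])
        b = np.concatenate([b, rng.normal(size=20)])
        _, p = stats.ttest_ind(a, b)
    return p

for label, use_qrp in (("honest", False), ("with QRP", True)):
    p_values = np.array([run_study(use_qrp) for _ in range(n_studies)])
    share = np.mean((p_values >= 0.04) & (p_values < 0.05))
    print(f"{label:9s}: share of p-values in [0.04, 0.05) = {share:.3f}")
```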
Reproducible replication
same analysis steps performed on same dataset consistently produces same answer
Weak check - only checking for mistakes or typos
Robust replication
different analysis performed on the same dataset produces qualitatively similar answers
Stronger check - does the effect survive different analytical choices?
Replicable replication
same analysis performed on different, independent datasets produces qualitatively similar answers
Stronger check - is the effect consistent across multiple independent datasets?
Generalisable replication
Combining replicable and robust findings allows us to form generalisable results
This is our aim - results independently validated by different analyses and different datasets
Open science
The 4 criteria are easier to meet if scientific methods are open
Easier to do science that others can replicate and generalise