critical perspectives 1 - replication Flashcards
% of studies replicated across different major psych journals
only 36% of findings replicated (Open Science Collaboration, 2015)
23% in a social psychology journal (the lowest)
replication crisis
highly cited studies in emotion research - biases in citation of studies
top 65 studies - 40 observational and 25 experimental
highly cited studies reported bigger effects than less-cited studies
a preference for big effects over the most representative or realistic results has also been found in other meta-analyses
what is replication
if you do the same study again, do you get the same result/effect?
more evidence showing the same result = more reason to believe it
if not - why not?
why is replication important - advantages of it? (5)
- protects against false positives (e.g. sampling error)
- controls for artifacts (e.g. leading or oddly worded questions)
- addresses researcher fraud (pressure to get published)
- tests whether findings generalise to different populations
- tests the same hypothesis using a different procedure (conceptual replication)
direct replication
recreate critical elements of an original study
e.g. samples, procedures, measures are kept the same
direct not exact (exact is basically impossible)
getting the same (or similar) results is an indication that the findings are accurate or reproducible
conceptual replication
test the same hypothesis using a different procedure
same/similar results = findings are robust to alternative research designs, operational definitions, and samples
way in which direct replications can be done
registered replication reports = a call for researchers to repeat a study to a high standard
labs sign up and follow the shared procedures and protocols, then pool their findings so many researchers can reach a conclusion together
results of replication attempts are published regardless of the outcome
4 reasons for non-replication
- faking
- sloppy science
- outcome switching/ p-hacking
- small samples/lack of statistical power
non-replication: faking example
Diederik Stapel - fabricated his research on human nature and published it
publishers/media preferred clear answers, but his real findings detailed the complexities of life, so he simplified them to get published
made the data fit the narrative
non-replication: “sloppy science”
nine circles of scientific hell
issues with scientific research
from 1 (least serious) to 9 (most serious):
- limbo
- overselling
- post-hoc storytelling - hypothesising after the results are known
- p-value fishing
- creative use of outliers
- plagiarism
- non-publication - though it is difficult to publish all work
- partial publication
- inventing data - faking
non-replication - outcome switching/ p-value fishing
4th circle of scientific hell
changing the outcome of interest in a study depending on the observed results
driven by the desire to obtain a p value under .05 - a significant result
e.g. a study on the effect of music on sadness finds no effect on sadness but an effect on happiness, so the original outcome (sadness) is ditched and the paper is written about happiness instead –> switched outcome of interest –> all effects should be written up, not just the significant ones
p-hacking
making analysis decisions to maximise the likelihood of a statistically significant effect, rather than on objective or scientific grounds
need to report everything in a study, not ignore non-significant results
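A minimal simulation (not from the lecture; all numbers illustrative) of why p-hacking inflates false positives: if a study measures several independent outcomes with no real effect and reports a "hit" whenever any one of them reaches p < .05, the chance of a spurious finding climbs well above the nominal 5%.

```python
import math
import random

random.seed(1)

def p_value(sample):
    # two-sided z-test against mean 0, known sd 1 (the null is true by construction)
    z = (sum(sample) / len(sample)) * math.sqrt(len(sample))
    return math.erfc(abs(z) / math.sqrt(2))

def experiment(n_outcomes, n=30):
    # measure several independent outcomes, none with a real effect
    return [p_value([random.gauss(0, 1) for _ in range(n)])
            for _ in range(n_outcomes)]

def false_positive_rate(n_outcomes, trials=2000):
    # "p-hack": count a hit if ANY outcome reaches p < .05
    hits = sum(1 for _ in range(trials)
               if min(experiment(n_outcomes)) < 0.05)
    return hits / trials

print(false_positive_rate(1))   # close to the nominal 5% error rate
print(false_positive_rate(5))   # roughly 1 - 0.95**5, i.e. over 20%
```

Testing one outcome keeps the error rate at about 5%; cherry-picking the best of five roughly quadruples it, which is why all measured outcomes must be reported.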
non-replication: small samples
small sample = less statistical power
significant effects found in small samples may therefore be false positives or inflated, and may not replicate with larger samples
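A minimal sketch (illustrative numbers, not from the lecture) of statistical power: with a real but modest effect, a small sample detects it only a minority of the time, while a larger sample detects it reliably.

```python
import math
import random

random.seed(2)

def significant(n, effect):
    # one-sample z-test, known sd 1: does this study reach p < .05?
    mean = sum(random.gauss(effect, 1) for _ in range(n)) / n
    z = mean * math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2)) < 0.05

def power(n, effect=0.3, trials=3000):
    # proportion of simulated studies that detect the (real) effect
    return sum(significant(n, effect) for _ in range(trials)) / trials

print(power(20))    # small sample: detects the true effect only ~25-30% of the time
print(power(200))   # large sample: detects it almost every time
```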
publication bias
of the 9 circles of scientific hell –> circle 7 (non-publication) and circle 8 (partial publication)
findings that are statistically significant are more likely to be published than those that are not
not sharing these findings with others is bad science
there are good reasons for some not being published, e.g. ambiguity over results
file drawer problem with publication
published studies could represent the 5% of findings that occur by chance alone - so much goes unpublished, and the unpublished non-significant results could represent the truth
form of publication bias
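A minimal simulation (hypothetical numbers) of the file drawer problem: run many studies of an effect that does not exist, "publish" only the significant ones, and roughly 5% end up in print - every one of them pure chance, yet each reporting a sizeable effect.

```python
import math
import random

random.seed(3)

def study(n=30):
    # a study of an effect that does not exist (true mean 0, sd 1)
    sample = [random.gauss(0, 1) for _ in range(n)]
    mean = sum(sample) / n
    p = math.erfc(abs(mean) * math.sqrt(n) / math.sqrt(2))
    return mean, p

results = [study() for _ in range(1000)]
# file drawer: only the "hits" are published, the rest stay in the drawer
published = [(m, p) for m, p in results if p < 0.05]

print(len(published))   # roughly 50 of 1000 - the ~5% that occur by chance
avg = sum(abs(m) for m, _ in published) / len(published)
print(round(avg, 2))    # published "effects" look sizeable, but are pure noise
```

The published literature here shows a consistent, non-trivial average effect even though the true effect is exactly zero - which is what the card means by the 5% representing chance alone.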