Psychological Literature And The Replication Crisis Flashcards
What do we know about psychology?
- Our knowledge of psychology is the sum of the contents of journals in psychology.
- We add to our knowledge of psychology by publishing more articles (reporting new experiments).
If there are problems in the psychological literature, then there are problems in our knowledge of psychology.
How does new information get published?
- New research is carried out and written up
- Submitted to a journal
- Reviewed by about two other researchers (unpaid): does it meet certain criteria, is it something new?
- If deemed good enough for that journal, then published
The “better” the research, the “better” the journal it gets published in.
Hierarchy of journals
- Top Tier: High-Impact, Prestigious Journals
- Psychological Science, Journal of Personality and Social Psychology, Science, Nature
- Highly cited and influential - of interest to a wide group of people
- Mid-Tier: Specialized and Applied Journals
- Journal of Applied Psychology, Journal of Experimental Psychology
- Emerging, Regional, or Niche Journals - tailored to particular types of research
- British Journal of Psychology, Autism
- Massive open-access journals - don’t judge new research on how novel or interesting it is; if the research is solid they will publish it. The research only has to be scientifically rigorous.
- PLoS One, Scientific Reports, Cureus
- Frontiers in Psychology, Heliyon
- Not necessarily lower quality than the tiers above
- Local Journals or Predatory Journals
- May not have rigorous peer review
Impact factors
- Journals are assessed by their impact factor: the average number of citations per article, counted over the two years after publication.
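For orientation, the two-year impact factor is just a ratio of citations to articles; a minimal sketch with made-up numbers (not real journal data):

```python
# Rough two-year impact factor calculation (illustrative, made-up numbers).
citations_in_year = 1200   # citations received this year to articles
                           # published in the previous two years
articles_published = 400   # citable articles published in those two years

impact_factor = citations_in_year / articles_published
print(impact_factor)       # 3.0 -> each recent article cited ~3 times on average
```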
Shift to open access and e-journals
- Historically, researchers submit and publish for free; universities buy (subscribe to) the journals
- Open access: researchers pay to publish, and anyone can access the papers for free
Why do academics publish research?
- To further understanding in science
- New ideas
- New findings
- To further one’s career you have to publish
- Promotion
- Grant funding
Publish or perish!
2011: the year that quietly changed psychology
- Diederik Stapel scandal - he had published a large number of papers
- The Dutch psychologist was found to have fabricated (made up) the data in a large number of studies; many of his papers were retracted. He lost his job as a psychologist and was sanctioned, as he had received a lot of funding for his research.
Daryl Bem’s ESP study
- Reported a study that appeared to show we can predict the future
- Published in a top social psychology journal
Publication of the ‘false-positive psychology’ paper
- Simmons, Nelson and Simonsohn (2011) - questionnaires asking researchers what they and their colleagues had done (e.g., data fabrication) identified large variability in how people collect and report data
- Much of the research literature may have significant results because of questionable research practices
- p-hacking
- selective description of results
The 100-replication study: ‘the replication crisis’
- Between 2011 and 2015, Brian Nosek worked with the Open Science Collaboration to replicate 100 key studies in psychology.
- 270 researchers.
- Used the original methods and stimuli.
- Only 36% of the replications produced significant findings.
So what research should we believe?
64% of the replications did not reach significance; if the original effects were all real, only around 5% (roughly the conventional error rate) should have failed.
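A back-of-the-envelope way to see why 36% is troubling (a rough sketch; the 90% power figure is an illustrative assumption, not a number from the project):

```python
# Expected numbers of significant results out of 100 replications
# under two extreme scenarios (illustrative assumptions only).
n_studies = 100
alpha = 0.05           # chance of a significant result if an effect is NOT real
assumed_power = 0.90   # assumed chance of detecting an effect that IS real

expected_if_none_real = n_studies * alpha          # ~5 significant by chance
expected_if_all_real = n_studies * assumed_power   # ~90 significant
observed = 36                                      # reported by the project

print(expected_if_none_real, expected_if_all_real, observed)
```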
Registered replication reports
- A new style of paper
- Identify a key study
- A large group targets a single key finding and replicates it in different labs
- Tests whether the observed effect replicates
- A series of studies that focus on one particular experiment
Example from modern research: Strack, Martin and Stepper (1988)
- A registered replication report was carried out on this facial feedback study
- The replication found that the original effect was not significant
- Facial feedback research based on replications shows at best a very small effect - the effect does not replicate; the original study was too small
Why was psychology particularly hit by the replication crisis?
It did hit other disciplines as well.
- Publish or perish pressures
- Small samples and underpowered studies
- Lack of replication prior to publication
- P-hacking
- Selective reporting of particular conditions
- Questionable research practices
- Lack of transparency
- Data falsification or fabrication
- and a pressure to publish surprising findings
- Publication bias
Failed attempts to replicate are rarely published and never rewarded.
Consequences of the replication crisis
- Mistrust in science
- Public perception of science is damaged
- Waste of resources
- Many failed replications not published
- New research based on dubious findings
- Impact on theory development
Questionable findings delay the proper development of theories
The Credibility Revolution
How is psychology cleaning up its act?
Better consideration of effect sizes
- The effect size is the strength of a relationship between two variables.
- A large effect needs only a small sample size to reach significance
- A small effect needs a large sample size to reach significance
- Always report effect sizes along with p values.
- r, eta squared, Cohen’s d.
- Use predicted effect sizes to choose the sample size.
- Power calculation (see the sketch below)
- Typically set the Type II error rate to .2 - not finding an effect when there really is an effect there
- Power of the experiment would be .8
The sample size required to have an 80% chance of finding a significant effect if the effect is real.
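A minimal sketch of such a power calculation in Python using statsmodels (the effect size here, Cohen’s d = 0.5, and the two-group design are assumed example values, not ones from the lecture):

```python
# A priori power analysis for a two-group (independent-samples t-test) design:
# how many participants per group give an 80% chance of detecting the effect?
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,          # predicted effect size (Cohen's d), assumed here
    alpha=0.05,               # Type I error rate
    power=0.8,                # 1 - Type II error rate (.2)
    alternative="two-sided",
)
print(round(n_per_group))     # roughly 64 participants per group
```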
Publication of failed replications
- Some journals will publish replications of previous studies regardless of results.
- E.g., Royal Society Open Science, PLoS One
Mostly large online journals
Preregistration
- Without preregistration, researchers can change the hypothesis and sample size after the data have been collected; preregistering the hypothesis and analysis plan in advance prevents this