The Replication Crisis (Social Psych) Flashcards

1
Q

What should Science be?

A
  • Should provide reliable results.
  • Should be transparent
  • Should be reproducible
  • Should be self-correcting
  • Should be trusted
  • Should be replicable
2
Q

Define what is meant by replication.

A
  • a fundamental principle of the scientific method.
  • Replicability: obtaining consistent results across studies aimed at answering the same question, each of which has obtained its own data.
3
Q

Define what is meant by reproducibility.

A
  • obtaining consistent results when the same data is analysed using the same techniques.
4
Q

Identify the two types of replication.

A

Exact/Direct Replication.
Conceptual Replication.

5
Q

What is Conceptual Replication?

A
  • where scientists try to confirm a finding using a different set of methods and measures that test the same hypotheses.
6
Q

What is Exact/Direct Replication?

A
  • where a study uses the same measures and conditions as the original study in an attempt to reproduce its results.
7
Q

Describe the Replication Crisis.

A
  • Recently, psychology has been criticised because many classic research findings do not replicate.
  • an ongoing methodological crisis in science whereby researchers find that the results of many scientific studies are difficult or impossible to replicate or reproduce on subsequent investigation.
8
Q

Describe a case relating to the replication crisis (Bem, 2011).

A
  • Title: Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect. Journal of Personality and Social Psychology, 100(3), 407.
  • Used established protocols such as affective priming and recall facilitation – methodological rigour.
  • 9 experiments with significant results – seemingly robust phenomenon but surprising (Wagenmakers et al., 2011)
  • Underwent peer review and was published in the prestigious JPSP (which rejects 90% of submissions).
9
Q

Describe problems with Bem (2011).

A
  • “It was both methodologically sound and logically insane” (Engber, 2017)
  • ESP is outside current scientific explanations of human behaviour because it contradicts fundamental principles of our current understanding of reality.
  • A study by Ritchie, Wiseman, and French (2012) indicated that Bem’s finding was not replicable: three attempts to replicate it were unsuccessful.
  • “If one had to choose a single moment that set off the “replication crisis” in psychology – an event that nudged the discipline into its present and anarchic state, where even textbook findings have been cast in doubt – this might be it: The publication, in early 2011, of Daryl Bem’s experiments…”

Engber (2017)

10
Q

Describe the Open Science Collaboration Project (2015) and Many Labs.

A
  • A team of international researchers called “Many Labs” aimed to replicate 100 studies from 3 top Psychology journals.
  • Many Labs 1: Examples of studies included:
    Does people’s belief that human behaviour is predetermined encourage cheating?

Do children blindly follow eye gaze to find hidden objects?

Is there a motion ‘after-effect’ from still photographs depicting motion?

A large percentage of studies couldn’t be replicated.

11
Q

What Questions did people have about the replication failures?

A
  • What if the replication study wasn’t identical to the original?
  • Would the findings replicate in the same culture?
  • What if the experimenters aren’t competent enough and lack the know-how to pull off the original experiment?
  • What if the replication sample sizes were too small?
  • Are there certain conditions in which the study would replicate? (these are known as ‘moderators’)
12
Q

Describe the Many Labs 2 project (Klein et al. (2018)).

A
  • Many Labs 2 project was specifically designed to address these criticisms.
  • 15,305 participants – about 60x more than the original studies.
  • Researchers worked with the scientists behind the original studies to check every detail
  • Repeated experiments many times, with volunteers from 36 different countries, to see if the studies would replicate in some cultures and contexts but not others.
  • Only a 54% success rate.
13
Q

What is the failure rate for replicating studies relating to preclinical cancer research?

A
  • 90% failure rate of replicating studies relating to preclinical cancer research.
14
Q

Identify what influences the Replication Crisis.

A
  • The incentive structure of academia, which encourages:
  • Questionable Research Practices (QRPs), such as p-hacking and HARKing.
15
Q

Describe the Incentive Structure of Academia.

A
  • Papers are required to get: JOB > GRANTS > PDRs > PRESTIGE > PROMOTION.
  • The slow pace of science helps ensure that research is done correctly.
  • But it can come into conflict with the incentive structure of academic progress, as publications—the key marker of productivity in many disciplines—depend on research findings.
  • This led to small sample sizes, lots of papers, unreliable estimates, and Type 1 errors.
  • “More speed, more haste, more stress, more waste” (Frith, 2019).
  • There was a drive for novel ‘sexy’ findings.
  • Journal criteria reflect this drive: selected papers should present “novel and broadly important data”, journals publish “cutting-edge research articles” and seek “novel research” methodologies.
16
Q

Describe problems associated with the incentive structure of academia.

A
  • leads to PUBLICATION BIAS
  • this is the phenomenon whereby significant results have a better chance of being published, are published earlier, and are published in journals with greater prestige.
17
Q

Explain what are QRPS (Questionable Research Practices).

A
  • in psychology, p-values are typically used to show that a finding is significant (p<.05)
  • the incentive structure can also lead to psychologists chasing positive findings (p<.05).
  • these ‘significant/positive’ findings were traditionally sought after by journals (remember “sexy findings”).
  • this influences researchers to engage in QRPs to find positive findings, enhancing their chances of publication/rewards.
  • QRPs are not fraud but exploit a ‘grey area’.
18
Q

Describe what is meant by Academic Misconduct/Fraud.

A
  • the explicit effort of a researcher to falsify or misrepresent data.
  • e.g.,
    Research star: 130 papers, 24 book chapters.
    Developed studies and pretended he had conducted them.
    Was dismissed, returned his PhD.
    Criminal prosecution.
    54 papers retracted so far.
19
Q

Describe what P-hacking is (QRP)

A
  • it occurs when someone excessively influences the data collection process or statistical analyses performed in order to produce a statistically significant result.
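The mechanics can be illustrated with a small simulation (a hypothetical sketch, not from the source: the one-sample test, batch sizes, and stopping rule are all illustrative). Under a true null effect, repeatedly checking for significance as data accumulate (“optional stopping”, one common form of p-hacking) pushes the false-positive rate well above the nominal 5%:

```python
# Illustrative sketch: "optional stopping" p-hacking under a true null effect.
import math
import random
import statistics

def p_value_one_sample(xs):
    """Two-sided one-sample t-test p-value against mean 0 (normal approximation)."""
    n = len(xs)
    t = statistics.mean(xs) / (statistics.stdev(xs) / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

def hacked_experiment(start=30, step=10, max_n=100):
    """Collect data in batches and test after each batch, stopping at p < .05."""
    xs = [random.gauss(0, 1) for _ in range(start)]  # true effect is zero
    while True:
        if p_value_one_sample(xs) < 0.05:
            return True  # a 'significant' finding despite no real effect
        if len(xs) >= max_n:
            return False
        xs += [random.gauss(0, 1) for _ in range(step)]

random.seed(1)
trials = 2000
fp = sum(hacked_experiment() for _ in range(trials)) / trials
print(f"False-positive rate with peeking: {fp:.0%}")  # noticeably above 5%
```

Each extra peek at the data is another chance for noise to cross the threshold, which is why preregistered sample sizes and stopping rules matter.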
20
Q

What is HARKing?

A
  • Hypothesising After Results are Known
  • e.g., running multiple analyses…
  • Silberzahn et al. (2018): Twenty-nine teams involving 61 analysts used the same data set to address the same research question: whether soccer referees are more likely to give red cards to dark-skin-toned players than to light-skin-toned players.
  • Analytic approaches varied widely across the teams. Twenty teams (69%) found a statistically significant positive effect, and 9 teams (31%) did not observe a significant relationship.
21
Q

Explain the process of HARKing.

A
  • Hypothesising AFTER the Results are Known (Kerr, 1998).
  • the process of looking at the data, then developing ‘hypotheses’.
  • Write the paper as a ‘story’
  • Increases chances of false positives (Type 1 error)
  • May mask ‘no effect’, leading to wasted resources when people try to repeat the experiment.
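The Type 1 error inflation from running many analyses can be quantified with a quick back-of-the-envelope calculation (a minimal sketch, assuming k independent tests at alpha = .05, which is a simplification):

```python
# Chance of at least one false positive across k independent tests of null data:
# P(any 'significant' result) = 1 - (1 - alpha)**k
alpha = 0.05
for k in (1, 5, 10, 20):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:>2} analyses -> {p_any:.0%} chance of a false positive")
```

With 20 looks at data containing no real effect, the chance of finding at least one ‘significant’ result is about 64%, which is why hypothesising after the results are known is so likely to canonise noise.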
22
Q

What is the Open Science Movement?

A
  • a movement developed in response to “a culture of secrecy and skepticism that has been pervasive throughout scientific research”.
  • We need to know about conceptual, methodological and analytical choices […] so that we can make more informed assessments of credibility.

(Kathawalla et al. 2020; Vazire, 2017)

23
Q

Identify the features of Open Science.

A
  • Open Data
  • Open Materials
  • Registered Reports
  • Equality, diversity & inclusion
  • Open Access
  • Collaboration
  • Open Source
  • Open Peer Review
  • Preregistration
  • Educational Resources
  • Open science can be seen as open “access”.
24
Q

What is Study Preregistration?

A
  • Researchers commit to their research predictions and methods before starting their experiments.
  • Decisions about a scientific experiment that are supposed to be made before data analysis are actually made before data analysis.
25
Q

How do Open Data and Materials relate to Psychological Science?

A
  • Open Data, Open Materials, and Preregistered are badges awarded by the journal Psychological Science for open practices.
26
Q

Describe the benefits of Open Science.

A
  • (Protzko et al. 2023)
  • Assessed the replicability of 16 novel experimental findings with optimal practices:

Optimal sample sizes

Study preregistration

Methodological and analytic transparency

  • In contrast to past replication efforts (e.g., Many Labs), 86% of studies replicated, 97% had similar effect sizes.
27
Q

Describe Protzko et al. (2023) (paper)

A
  • Reports an investigation by 4 coordinated laboratories of the prospective replicability of 16 novel experimental findings.
  • was done using current optimal practices like high statistical power, preregistration, and complete methodological transparency.
  • In contrast to past systematic replication efforts that reported replication rates averaging around 50%, replication attempts here produced the expected effects with significance testing in 86% of attempts.
28
Q

Describe the traditional publishing process of a report.

A
  • researcher plans a study > collects data > writes report > submits to journal and then peer-reviewers evaluate the paper and either accept or reject it.
  • Reviewers know the design and the results, and the researcher can’t change anything about how they’ve run the study.
29
Q

Describe the process of publishing a registered report.

A
  • The researcher writes up the introduction and methods section of a paper and includes details of the hypotheses, sample and methods.
  • Then undergoes peer-review where the reviewer can suggest changes to the study to make it stronger.
  • The paper then gets either an ‘in-principle acceptance’ or a rejection from the journal.
  • If the paper is accepted, the data collection THEN takes place (after the first review process) and the researcher re-submits the full article with the results and discussion section now in place.
  • If the researcher has STUCK to their plans, then the paper is published.
  • This means that the reviewing process is ‘results blind’ – the paper is accepted regardless of a positive or null result! How much better is that!