Replication Crisis Flashcards

1
Q

Replicability vs. Reproducibility

A

Replicability→ whether a published study’s findings can be repeated in a different lab, using the same or similar research methods

Reproducibility→ ability of a different researcher to reproduce another researcher’s published analyses, given the original dataset and computer code for the statistics used

2
Q

Why can reproducibility fail?

A
  • process reproducibility failure→ the data or software are unavailable
  • outcome reproducibility failure→ the reanalysis does not give the same result; there is an error in either the original study or the reproduction
    → the authors did not give enough information about the data-collection methods or the analysis steps
3
Q

How to enhance replicability?

A
  • better document the research methods used
  • run the study again
  • ask a lab member to replicate the study
4
Q

What are the causes of the replication crisis?

A

Ignoring or misunderstanding statistics
Null hypotheses
- HARKing (Hypothesizing After the Results are Known)
Meaning of the p-value
- Null hypothesis significance testing→ tests the null hypothesis that there is no difference between the groups; the p-value is often misinterpreted (see the sketch below)
- P-hacking
- Cherry-picking data

Publication biases
- File drawer problem→ studies with non-significant or null results are less likely to be published
- Selective reporting
- Incomplete knowledge

Falsifying data
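
A minimal sketch of what the p-value does and does not mean (illustrative example, not part of the card; assumes NumPy and SciPy are available). It simulates many experiments in which the null hypothesis is true and counts how often a t-test still returns p < .05:

```python
# Sketch: simulate experiments where the null hypothesis is TRUE
# (both groups drawn from the same population) and check how often
# a t-test nevertheless reports p < .05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments, n_per_group = 10_000, 30
false_positives = 0

for _ in range(n_experiments):
    a = rng.normal(0, 1, n_per_group)   # group A, true mean 0
    b = rng.normal(0, 1, n_per_group)   # group B, same true mean 0
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

# Expect roughly 5%: the p-value is the probability of data at least this
# extreme GIVEN the null is true, not the probability that the null is true.
print(f"False-positive rate: {false_positives / n_experiments:.3f}")
```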

5
Q

HARKing, THARKing and SHARKing?

A

HARKing (Hypothesizing After the Results are Known)→ formulating hypotheses after analysing the data
SHARKing (Secretly HARKing)→ hypotheses presented in the Introduction as if made in advance, but actually based on results from the data at hand
THARKing (Transparently HARKing)→ new hypotheses clearly presented in the Discussion section of an article

6
Q

What are different P-hacking techniques?

A

P-hacking→ the unethical and questionable practice of manipulating statistical analyses in order to reach statistically significant results (a simulation sketch follows the list)
- Stop collecting data when p < .05
- Analyze many measures, but report only those with p < .05
- Collect and analyze many conditions, but only report those with p < .05
- Add covariates to reach p < .05
- Exclude participants to reach p < .05
- Transform the data to reach p < .05
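
A minimal sketch of the "analyze many measures, report only those with p < .05" technique (illustrative numbers, not from the card; assumes NumPy and SciPy). With 10 unrelated outcome measures and no true effect, the chance that at least one comes out "significant" is roughly 40%:

```python
# Sketch: 10 outcome measures per study, all with a true null effect.
# Reporting only the measure that happens to reach p < .05 is p-hacking.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_experiments, n_per_group, n_measures = 2_000, 30, 10
at_least_one_hit = 0

for _ in range(n_experiments):
    hits = 0
    for _ in range(n_measures):
        a = rng.normal(0, 1, n_per_group)
        b = rng.normal(0, 1, n_per_group)
        _, p = stats.ttest_ind(a, b)
        hits += p < 0.05
    at_least_one_hit += hits > 0

# Theory: 1 - 0.95**10 ≈ 0.40, i.e. ~40% of null studies can "find" an effect.
print(f"Studies with at least one p < .05: {at_least_one_hit / n_experiments:.2f}")
```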

7
Q

What is Cherry-picking data?

A

Reporting only the results that support your hypotheses

8
Q

What are some solutions to the replication crisis?

A

Pre-registration of research methods = a detailed plan for the research methods of a study that is filed online (openly) ahead of data collection
- cannot be changed once filed
- not peer reviewed prior to data collection
- receives a DOI (Digital Object Identifier) to cite in the final paper

Registered report→ a detailed plan for research methods filed online that undergoes peer review prior to data collection
- the paper is provisionally accepted for publication if the authors follow through with the registered methodology

9
Q

Meta-Analysis vs. Systematic Reviews?

A

Meta-analysis→ statistical combination of the results of multiple studies addressing a similar research question (see the sketch below)
- combines multiple replications from several studies

Systematic review→ no statistical combination; a systematic search to identify all studies that meet pre-defined eligibility criteria
- includes an assessment of the validity of the findings
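
A minimal sketch of one common way to do the statistical combination: a fixed-effect (inverse-variance) meta-analysis. The effect sizes and standard errors below are made-up illustrative numbers, not from any real studies:

```python
# Sketch: fixed-effect meta-analysis. Each study is weighted by its
# precision (1 / variance); more precise studies count for more.
import math

effects = [0.30, 0.10, 0.45, 0.20]          # effect size reported by each study
std_errors = [0.15, 0.10, 0.20, 0.12]       # standard error of each estimate

weights = [1 / se**2 for se in std_errors]  # inverse-variance weights
pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the combined effect
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect = {pooled:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```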
