critical perspectives Flashcards
1
Q
replication - the crisis
A
- Open Science Collaboration (2015)
- looked at 100 studies
–> taken from quite esteemed journals
–> arguably the best studies taken from the best journals at the time
- only 36% of the studies replicated when the collaboration ran them again
–> only 23% from social psychology were replicable
- this scared psychologists
–> crisis of confidence
2
Q
cristea et al - emotion studies
A
- top 65 most-cited studies in emotion psychology
–> 40 observational
–> 25 experimental
- these showed greater effects than meta-analyses and large studies using the same questions
3
Q
what is meant by replication?
A
- doing the study again
- aim to see if the same findings are found
- more evidence we have that shows the same thing again and again, more likely we are to believe it
–> e.g. if one study finds ‘81% of people believe this’ it may be hard to believe, but if a second study finds ‘78% of people believe this’ then they both become more believable
4
Q
define replication
A
repeatedly finding the same results
5
Q
benefits of replication
A
- Protects against false positives
–> e.g. sampling error
- Controls for artifacts
–> maybe you had leading questions
–> maybe the appearance of the researcher impacted results
- Addresses researcher fraud
- Tests whether findings generalise to different populations
- Tests the same hypothesis using a different procedure
6
Q
direct replication
A
- scientific attempt to recreate the critical elements of the original study
–> samples, questions, procedures, measures
- the same or similar results are an indication that the findings are accurate and reproducible
- way of replicating the elements of the question you think are impacting results
- NOT EXACT replication
–> practically impossible in psychology
7
Q
conceptual replication
A
- test the same hypothesis using a different procedure
- same or similar results are an indication that the findings are robust to alternative research designs, operational definitions and samples
8
Q
registered replication reports - APS
A
- collection of independently conducted, direct replications of an original study, all of which follow a shared, predetermined protocol
- results of the replication attempts are published regardless of the outcome
9
Q
reasons for replication - faking
A
- Diederik Stapel
- was found to have faked and fabricated results
- hugely influential psychologist
- started off properly and fairly
–> found complicated and messy results across many variables
–> couldn’t get his paper published
–> journal editors suggested cutting out the messy bits and adding details
- started to write neater articles
–> made the data support the argument and create a narrative
10
Q
reasons for replication - sloppy science
A
- nine circles of scientific hell
1. limbo
–> seeing bad practice and not saying anything
2. overselling
–> focus on the bits of the study that worked
3. post-hoc storytelling
4. p-value fishing
–> outcome switching
5. creative outliers
–> deciding who to remove to make the data look better
6. plagiarism
7. non-publication
–> not publishing your papers
8. partial publication
9. inventing data
- the further down the circles you are, the worse the sin
11
Q
outcome switching - sloppy science
A
- part of p-value fishing
- changing the outcomes of interest in the study depending on the observed results
- an example ‘p-hacking’
–> taking decisions to maximise likelihood of a statistically significant effect
–> rather than on objective or scientific grounds
- if you run two tests, the chance of a false positive is no longer 0.05 (5%), it’s actually closer to 10%
–> if one test comes out non-significant and one significant but you IGNORE the non-significant one, you have changed your outcome of interest because of the results
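The two-test arithmetic above can be checked with a short sketch (illustrative numbers and seed only): with two independent tests each run at α = 0.05 and no real effect, the chance of at least one false positive is 1 − 0.95² ≈ 0.0975, i.e. close to 10%.

```python
import random

random.seed(1)  # illustrative seed, for reproducibility only

ALPHA = 0.05
TRIALS = 100_000

# analytic familywise error for two independent tests under the null:
# P(at least one false positive) = 1 - (1 - alpha)^2
analytic = 1 - (1 - ALPHA) ** 2

# simulation: each test is a false positive with probability alpha
hits = sum(
    1 for _ in range(TRIALS)
    if random.random() < ALPHA or random.random() < ALPHA
)
simulated = hits / TRIALS

print(f"analytic:  {analytic:.4f}")   # 0.0975, i.e. close to 10%
print(f"simulated: {simulated:.4f}")
```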
12
Q
need for replication - small samples
A
- small samples and lack of statistical power can be a problem
- you can say you have found an effect, but if you find it in only a small number of people it might not hold in a larger sample
–> needs to be replicated in a larger group to be accepted
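A minimal sketch of why small samples are fragile (all numbers illustrative: an assumed small true effect of 0.2 standard deviations, samples of 10 vs 1,000): the same underlying effect produces wildly varying estimates in small samples, so a striking “effect” found in a handful of people may be pure sampling error.

```python
import random
import statistics

random.seed(0)  # illustrative seed
TRUE_EFFECT = 0.2  # assumed small true effect, in standard-deviation units

def sample_mean(n):
    # mean of n observations drawn around the true effect (sd = 1)
    return statistics.fmean(random.gauss(TRUE_EFFECT, 1) for _ in range(n))

spreads = {}
for n in (10, 1000):
    estimates = [sample_mean(n) for _ in range(2000)]
    spreads[n] = statistics.stdev(estimates)
    print(f"n={n:4d}: estimated effect varies by about ±{2 * spreads[n]:.2f}")
```

With n = 10 the estimate swings far around the true 0.2, so single small studies can easily show large spurious effects; with n = 1,000 it barely moves.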
13
Q
need for replication - publication bias
A
- part of ‘non-publication’ and ‘partial publication’
- findings that are statistically significant are more likely to be published than those that are not
–> in general, there are good reasons for this
- but could published studies represent the 5% of findings that occur by chance alone?
–> known as ‘the file drawer problem’
- quite scary to only see significant results
14
Q
how common is sloppy science?
A
- John et al (2012)
- Surveyed over 2,000 psychologists in the US about their involvement in questionable research practices
–> failing to report all the measures or conditions
–> deciding whether to collect more data after looking to see whether the results were significant
–> selectively reporting studies that “worked”
- Concluded that the percentage of respondents who have engaged in questionable practices was surprisingly high
15
Q
peeking
A
- can’t stop study when you reach a peak
- have to decide on a set sample
- complete the entire study and the whole sample
- then calculate results
- data changes all the time, stopping at a point of your choosing doesn’t highlight the entire pattern
- might have just caught the data at a particular peak or dip
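The danger of peeking can be sketched with a simulation (assumed setup, purely illustrative: no real effect, a two-sided z-test with known sd = 1, checking after every added participant from 10 up to 100): stopping as soon as the result looks significant pushes the false-positive rate far above the nominal 5%.

```python
import random
import statistics

random.seed(42)  # illustrative seed
ND = statistics.NormalDist()

def p_value(data):
    # two-sided z-test of "mean = 0", assuming known sd = 1
    z = abs(statistics.fmean(data)) * len(data) ** 0.5
    return 2 * (1 - ND.cdf(z))

def peeking_study(start=10, max_n=100):
    # the null is true: every observation is pure noise
    data = [random.gauss(0, 1) for _ in range(start)]
    while len(data) < max_n:
        if p_value(data) < 0.05:       # peek, and stop at "significance"
            return True
        data.append(random.gauss(0, 1))
    return p_value(data) < 0.05

RUNS = 2000
rate = sum(peeking_study() for _ in range(RUNS)) / RUNS
print(f"false-positive rate with peeking: {rate:.3f}")  # well above 0.05
```

Committing to the sample size in advance and testing once, as the card says, keeps the rate at the nominal 5%.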
16
Q
is sloppy science really a problem?
A
- Simmons, Nelson and Simonsohn (2011)
- Flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates
- Tested if listening to certain music makes you younger (2 songs)
- did find a significant effect: listening to a certain song made participants 1.5 years younger
–> this is impossible
–> how?
- they only reported the parts that gave them a significant effect
–> e.g. only 2 of the songs
–> controlling for father’s age
- selective reporting made an impossible finding appear real
17
Q
reporting using 0.05
A
- 0.05 means there is a 1/20 chance of finding an effect by chance alone
- if you test 20 things, you are likely to find one “effect” purely by chance
- if you only report that one, it looks like a genuine significant effect
- if you reported the other 19 tests and said they found no effect, people would question the significance of the 20th variable
–> but people didn’t use to do this
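The 1-in-20 reasoning can be checked directly (illustrative simulation): across 20 independent tests of variables with no real effect at α = 0.05, the chance of at least one “significant” result is 1 − 0.95²⁰ ≈ 64%.

```python
import random

random.seed(7)  # illustrative seed
ALPHA, N_TESTS, TRIALS = 0.05, 20, 10_000

# analytic chance of at least one false positive among 20 independent tests
analytic = 1 - (1 - ALPHA) ** N_TESTS

at_least_one = sum(
    any(random.random() < ALPHA for _ in range(N_TESTS))
    for _ in range(TRIALS)
) / TRIALS

print(f"analytic:  {analytic:.2f}")   # about 0.64
print(f"simulated: {at_least_one:.2f}")
```

So reporting only the one “significant” variable out of 20 turns a roughly-even-odds chance accident into what looks like a real finding.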