Replicability Flashcards
rules for researchers to follow so that findings will be replicable:
- don’t selectively report only the strongest effects
- don’t remove “outliers” without good reason (not just to make effect stronger)
- don’t claim exploratory findings as hypothesized
researchers selectively report only the strongest effects
- some researchers will choose one variable, or one operationalization of a variable, out of several, based on which one gives the strongest effect
- e.g., which scale, which scoring method, which time period, etc.
- this makes Type I errors (false positives) more likely, because every extra comparison is another chance for a fluke to reach significance (see the sketch below)
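A minimal simulation sketch of why this inflates Type I errors (the setup and numbers here are hypothetical, for illustration only): even with no true effect at all, testing five measures and reporting only the strongest one yields a "significant" result far more often than the nominal 5%.

```python
# Sketch: selective reporting inflates the false-positive (Type I) rate.
# All parameters here are illustrative assumptions, not from any real study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_per_group, n_measures = 10_000, 30, 5
false_positives = 0

for _ in range(n_sims):
    p_values = []
    for _ in range(n_measures):  # five "operationalizations" of the outcome
        control = rng.normal(size=n_per_group)
        treatment = rng.normal(size=n_per_group)  # no true difference exists
        p_values.append(stats.ttest_ind(treatment, control).pvalue)
    if min(p_values) < 0.05:  # report only the strongest effect
        false_positives += 1

print(f"nominal alpha: 0.05, actual rate: {false_positives / n_sims:.3f}")
# with 5 independent measures, roughly 1 - 0.95**5 ≈ 0.23, not 0.05
```

(In real data the measures are usually correlated, which shrinks but does not remove the inflation.)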
researchers removing “outliers” without good reason
- some researchers will remove participants simply because their data make the effect weaker
- e.g., dropping someone whose extreme positive or negative z-score weakens a correlation
- (removing outliers can be legitimate when a value is so unusual that it likely reflects an error or an unrelated process, not just because it weakens the effect)
researchers claiming exploratory findings as hypothesized
- some researchers will report unexpected findings as if they were previously hypothesized
- often these unexpected findings are just flukes, so they are unlikely to be replicable (they need to be re-tested before being reported as hypothesized)
harmful effects of scientific fraud and non-replicable findings on science and society generally:
- gives wrong information to people who make important policy decisions
- puts other scientists on the "wrong track" (they waste time pursuing this work)
- leads to jobs/grants/promotions/etc. being given to undeserving scientists instead of deserving ones
- undermines public trust in genuine findings
Dan Ariely “honesty pledge” study:
- Ariely reported that when people sign an honesty pledge at the TOP of a page rather than at the bottom, they are much less likely to lie about the information on that page
- the article was cited fairly widely, and some US government agencies even recommended including "honesty pledges" at the top of tax forms
Several years after Dan Ariely’s publication what was found?
- co-authors found that they could not replicate the effect; the study was completely non-replicable
What were some big problems found with Ariely’s publication?
- an unrealistic distribution of odometer readings in the top-of-page group (uniform rather than skewed, as real mileage data would be)
- a lack of any rounded numbers (people reporting their own mileage typically round to values like the nearest 1,000)
- an apparent copy-and-paste of participant data in the bottom-of-page group (artificially doubling the sample); checks like these are sketched below
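A minimal sketch, on made-up data, of the kinds of checks that can surface these three red flags; the column name "miles_driven" and the demo values are assumptions for illustration, not details of the actual dataset.

```python
# Sketch: three quick forensic checks on self-reported mileage data.
import numpy as np
import pandas as pd

def forensic_checks(df: pd.DataFrame) -> None:
    miles = df["miles_driven"]
    # 1. real mileage is right-skewed; skewness near 0 suggests a uniform shape
    print("skewness:", round(miles.skew(), 2))
    # 2. self-reported numbers usually include many round values
    print("share ending in 000:", (miles % 1000 == 0).mean())
    # 3. exact duplicate rows can indicate copy-pasted participants
    print("duplicated rows:", df.duplicated().sum())

# made-up "suspicious" data: uniform, unrounded, with a pasted-in second copy
rng = np.random.default_rng(1)
df = pd.DataFrame({"miles_driven": rng.uniform(0, 50_000, 500).round()})
df = pd.concat([df, df], ignore_index=True)  # simulate the doubled sample
forensic_checks(df)
```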
Francesca Gino “honesty pledge” study:
- Gino provided the data for another study reported in the same Shu et al. article
- research participants were asked to sign a tax form about their travel expenses for coming to the research study and about how much money they earned in the study itself (for solving puzzles correctly)
What were some big problems found with Gino’s publication?
- participant IDs were out of sequence, and those out-of-sequence rows were exactly the participants producing the effect
- closer examination revealed that participants had been moved from one condition to another (one simple way to flag such rows is sketched below)
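A minimal sketch of one way to flag such rows, assuming (hypothetically) that the data were originally entered with participant IDs in ascending order; the column names and IDs are illustrative, not from the actual file.

```python
# Sketch: flag participant IDs that appear "too late" relative to entry order.
import pandas as pd

def flag_out_of_sequence(df: pd.DataFrame, id_col: str = "participant_id") -> pd.DataFrame:
    # a row is suspicious if its ID is smaller than the largest ID seen before it
    prior_max = df[id_col].cummax().shift(1, fill_value=-1)
    return df[df[id_col] < prior_max]

df = pd.DataFrame({
    "participant_id": [101, 102, 103, 51, 104],  # 51 breaks the sequence
    "condition": ["sign_top", "sign_top", "sign_top", "sign_top", "sign_bottom"],
})
print(flag_out_of_sequence(df))  # flags the row with ID 51
```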
What was wrong with Brian Wansink's data?
- a lot of the data were impossible values for means (e.g., a mean of 1.39 from averaging 4 people, which is impossible because a 4-person mean of whole-number responses must be a multiple of 0.25; see the check below)
- he also produced many other extremely unlikely values (e.g., ones that would require children to eat 50 carrots in a sitting)
FALSIFIED DATA
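A minimal sketch of the granularity check (in the spirit of the published "GRIM" test) that exposes such means; the function name and rounding convention are my assumptions, but the arithmetic is simply that a mean of n whole-number scores must equal some integer divided by n.

```python
# Sketch: can a reported mean arise from n whole-number responses?
def grim_consistent(mean: float, n: int, decimals: int = 2) -> bool:
    total = mean * n
    # try the integer totals closest to mean * n
    for candidate in (int(total) - 1, int(total), int(total) + 1):
        if round(candidate / n, decimals) == round(mean, decimals):
            return True
    return False

print(grim_consistent(1.39, 4))  # False: 4-person means must be multiples of 0.25
print(grim_consistent(1.25, 4))  # True: 5 / 4 = 1.25
```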
What was wrong with Nicolas Gueguen's data in the sexual-intent research publication?
- the mean rating for red (6.28) is an impossible number to get from 30 raters: the sum of 30 whole-number ratings must itself be a whole number, but 6.28 × 30 = 188.4 (see the check below)
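The same whole-number-sum argument, worked directly for the reported value (a sketch; it assumes the rating scale used whole numbers, as the flashcard's claim requires):

```python
n = 30
print(6.28 * n)          # ≈ 188.4: not a whole number, so no 30 integer ratings sum to it
print(188 / n, 189 / n)  # ≈ 6.267 and 6.3: the closest means 30 raters CAN produce
```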
What was wrong with Nicolas Gueguen's data in the hairstyle-and-helping research publication?
- every reported fraction was an exact multiple of 1/10 (there is only about a 1-in-129 chance of this happening by coincidence)
warm water and loneliness original claim:
- people who reported loneliness did tend to take warmer showers and baths
- two samples: r = .59 and r = .37
warm water and loneliness replication findings
- reexamination found that most people in the original data had actually reported taking cold showers
- the replication found zero relationship between loneliness and the warmth of baths and showers