Replicability Flashcards

1
Q

rules for researchers to follow so that findings will be replicable:

A
  • don’t selectively report only the strongest effects
  • don’t remove “outliers” without good reason (not just to make effect stronger)
  • don’t claim exploratory findings as hypothesized
2
Q

researchers selectively report only the strongest effects

A
  • some researchers will choose one variable, or one operationalization of a variable, out of several on the basis of which one gives the strongest effect
  • e.g., which scale, which scoring method, which time period, etc.
  • this makes Type I errors more likely
3
Q

researchers removing “outliers” without good reason

A
  • some researchers will remove participants simply because their data are making the effect weaker
    e.g., weakening a correlation because of a large positive or large negative z-score
    (it is sometimes okay to remove outliers, but only when they are so unusual that they clearly do not belong in the sample)
4
Q

researchers claiming exploratory findings as hypothesized

A
  • some researchers will report unexpected findings as if they had been hypothesized in advance
  • often these unexpected findings are just flukes, so they are unlikely to be replicable (they need to be re-tested before being reported)
5
Q

harmful effects on society and science of scientific fraud and non-replicable findings generally

A

  • gives wrong information to people who make important policy decisions
  • puts other scientists on the “wrong track” (they waste time pursuing this work)
  • leads to jobs/grants/promotions/etc. being given to undeserving scientists instead of deserving ones
  • undermines public trust in genuine findings

6
Q

Dan Ariely “honesty pledge” study:

A
  • Ariely reported that when people sign an honesty pledge at the TOP of a page rather than the bottom, they are much less likely to lie about the information on that page
  • the article was cited fairly widely, and some US government agencies even recommended including “honesty pledges” at the top of tax forms
7
Q

Several years after Dan Ariely’s publication, what was found?

A
  • co-authors found that they could not replicate the study (the finding was completely non-replicable)
8
Q

What were some big problems found with Ariely’s publication?

A
  • unrealistic distribution of odometer readings (data) in the top-of-page group (uniform rather than skewed)
  • lack of any rounded numbers
  • apparent copy-and-paste of participant data in the bottom-of-page group (artificially doubled the sample)
9
Q

Francesca Gino “honesty pledge” study:

A
  • provided the data for another study in the Shu et al. article
  • research participants were asked to sign a tax form about their travel expenses for coming to the research study and about how much money they earned in the study itself (for solving puzzles correctly)
10
Q

What were some big problems found with Gino’s publication?

A
  • participant IDs were out of sequence (and these were the participants producing the effect)
  • examination revealed that participants had been moved from one condition to another
11
Q

What was wrong with Brian Wansink’s data?

A
  • a lot of the data were impossible values for means (e.g., a mean of 1.39 reported from averaging 4 people, which is arithmetically impossible with whole-number responses)
  • produced many other extremely unlikely values (e.g., ones that would require children to eat 50 carrots in a sitting, etc.)
  • FALSIFIED DATA
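Impossible means like these can be caught with a GRIM-style check: if responses are whole numbers, the mean times the sample size must work out to (nearly) an integer. A minimal sketch in Python, where the function name and the rounding tolerance are illustrative, not taken from any of these papers:

```python
def grim_possible(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """GRIM-style check: could any integer total of n whole-number
    responses round to the reported mean at the given precision?"""
    target = round(reported_mean, decimals)
    nearest = round(reported_mean * n)
    # Try the integer totals closest to mean * n; if none reproduces
    # the reported mean after rounding, the mean is impossible.
    return any(
        round(total / n, decimals) == target
        for total in (nearest - 1, nearest, nearest + 1)
        if total >= 0
    )

print(grim_possible(1.50, 4))   # True: 6 / 4 = 1.50
print(grim_possible(1.39, 4))   # False: no 4-person integer total gives 1.39
print(grim_possible(6.28, 30))  # False: Gueguen's red-condition mean
```

The same check flags both the Wansink mean (1.39 from 4 people) and the 6.28 mean from 30 raters in the Gueguen card below.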
12
Q

What was wrong with Nicolas Gueguen’s data in the sexual intent research publication?

A
  • the mean rating for the red condition (6.28) is an impossible number to get from 30 raters
13
Q

What was wrong with Nicolas Gueguen’s data in the hairstyle = help research publication?

A
  • every reported fraction was in tenths (only a 1 in 129 chance of this happening)
14
Q

warm water and loneliness original claim

A

  • people who reported loneliness did tend to take warmer showers and baths
  • two samples: r = .59 and r = .37

15
Q

warm water and loneliness replication findings

A
  • most people in the original data actually reported taking cold showers
  • zero relationship was found between loneliness and the warmth of baths and showers
16
Q

ego depletion original findings

A
  • claimed that using willpower on one task causes poorer self-control on subsequent tasks
  • found university students who did not use willpower in a cookie task did better on a subsequent puzzle-solving task
17
Q

ego depletion replication findings

A
  • found that the studies with the largest samples produced the smallest effects
  • the real effect appears to be close to zero
18
Q

power poses original findings

A
  • reported that people who held two high-power poses for one minute each showed increased testosterone and decreased cortisol
19
Q

power poses replication findings

A
  • other studies with larger sample sizes found zero effect
  • it seems the original authors did some cherry-picking
20
Q

implicit association test

A
  • the IAT is widely promoted
  • the IAT has low reliability and validity
  • test-retest reliability = .40
  • for predicting prejudiced behaviour: weak validity score (.15)
  • construct validity: does it partly measure cognitive speed, etc.?
21
Q

why is it important for researchers to share their data after publishing their findings?

A
  • so researchers can verify the findings of the original study
  • allows other researchers to check for possible mistakes
  • allows other researchers to dispute the conclusions of a study