Week 7: Open Science Flashcards

1
Q

Problems with the way research happens…

A
  • Statistics are based on probability, which means you need to be objective (you can’t go ‘fishing’ for results)
  • Bad research has resulted in a ton of garbage science & findings that cannot be replicated
2
Q

Bem’s search for evidence of psychic abilities

A
  • 2010: he published his evidence of PSI in a top journal
  • 2011: the journal refused to publish failed replications because they were not exciting enough to share with the world (no one could replicate the effect, which suggests the system is fundamentally broken)
  • 2015: Nosek et al. published large-scale replication attempts for 98 different psychological research findings from 3 top journals… about half of them could not be replicated
3
Q

What creates false positives? (3)

A
  1. Incentives to publish (fraud)
  2. Questionable research practices
  3. The file drawer problem (publication bias)
4
Q

What creates false positives?

Incentives to publish (fraud)…

A
  • ‘publish or perish’
  • academics are rewarded for publishing (with jobs, grants, tenure, respect, etc.), which can motivate people to take shortcuts
  • fraud happens in all fields
  • The highest-profile case of this was Diederik Stapel…
  • he faked data for at least 30 publications over the years… he said, “I did not withstand the pressure to score, to publish, the pressure to get better in time”
5
Q

What creates false positives?

Questionable research practices

A
  • little ways to adjust your design, analysis, and reporting to get the desired effect of p < .05
  • p-value = the probability that you would get the specific pattern of results you observed in your study (or a more extreme one) even if the null hypothesis is true (the null hypothesis typically posits that there is no effect or no difference between groups)
  • so a threshold of p < .05 means you should get a false positive 1 in 20 times even when there is no real effect… but if you test the same thing 20 different ways and report only the one that ‘works’, you are picking and choosing (see the simulation below)
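
A minimal sketch of that 1-in-20 logic (not from the lecture; the group sizes, simulation counts, and the use of independent t-tests are my own assumptions):

```python
# Simulate the false-positive logic of p < .05 under a true null:
# both groups are always drawn from the same distribution, so any
# 'significant' result is a false positive by construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_tests = 2_000, 20

single_hits = 0
best_of_20_hits = 0
for _ in range(n_sims):
    # 20 independent null comparisons (a simplification of
    # 'testing the same thing 20 different ways')
    ps = [stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
          for _ in range(n_tests)]
    single_hits += ps[0] < 0.05        # one honest, pre-planned test
    best_of_20_hits += min(ps) < 0.05  # 'fishing': report the best of 20

print(f"single test: {single_hits / n_sims:.1%}")      # ~5%, as advertised
print(f"best of 20:  {best_of_20_hits / n_sims:.1%}")  # ~64% (1 - 0.95**20)
```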
6
Q

What creates false positives?

Questionable research practices example…

A
  • is the US economy affected by whether Democrats or Republicans are in office?…
  • do you look at the number of Republicans or Democrats?
  • which politicians do you look at?
  • how do you measure the US economy?
  • the results differ based on all of these choices (see the sketch below)
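
A hypothetical sketch of those forking choices (the measure names, year split, and data below are all invented noise, not the actual economics example): the same question is tested once per combination of analysis choices, and the p-values scatter.

```python
# One null dataset, many defensible analysis specifications.
# With enough specifications, one will often dip below .05 by chance.
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_years = 80
party = rng.integers(0, 2, n_years)  # which party is 'in office' each year

# several ways to 'measure the US economy' -- pure noise here,
# so no specification has a real effect to find
measures = {name: rng.normal(size=n_years)
            for name in ["gdp_growth", "unemployment", "stock_returns"]}
# two defensible answers to 'which politicians do you look at?'
subsets = {"all_years": np.ones(n_years, dtype=bool),
           "recent_only": np.arange(n_years) >= 40}

for m_name, s_name in itertools.product(measures, subsets):
    keep = subsets[s_name]
    y, g = measures[m_name][keep], party[keep]
    p = stats.ttest_ind(y[g == 0], y[g == 1]).pvalue
    note = "  <-- the one that gets reported" if p < 0.05 else ""
    print(f"{m_name:13s} x {s_name:11s}: p = {p:.3f}{note}")
```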
7
Q

What creates false positives?

Specific questionable research practices (QRPs)

A
  1. Measure the dependent variable in multiple ways… e.g. measuring age (in years, in months, ‘how old do you feel?’)… researchers measure and test them all and then only report the one that works
  2. Gradually add more observations… Simmons et al. ran the key analysis after every 10 participants collected and stopped data collection as soon as p < .05
  3. Add and drop covariates… e.g. an age effect that only works when statistically controlling for father’s age
  4. Add or drop experimental conditions… remove experimental manipulations that didn’t differ from the control condition
    - combining these gives you a 61% chance of getting p < .05 for an effect that isn’t real (see the simulation of QRP 2 below)
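
A rough simulation of QRP 2, ‘gradually add more observations’ (the step size, maximum sample size, and test are assumptions, not Simmons et al.’s exact design):

```python
# Optional stopping: peek at the data after every 10 participants per
# group and stop as soon as p < .05. The null is true by construction,
# yet the false-positive rate climbs well above the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_sims, step, n_max = 2_000, 10, 100

false_positives = 0
for _ in range(n_sims):
    a = rng.normal(size=n_max)  # control group
    b = rng.normal(size=n_max)  # 'treatment' group -- same distribution
    for n in range(step, n_max + 1, step):
        if stats.ttest_ind(a[:n], b[:n]).pvalue < 0.05:
            false_positives += 1
            break  # stop collecting as soon as the result 'works'

print(f"false-positive rate with optional stopping: "
      f"{false_positives / n_sims:.1%}")  # roughly 15-20%, not 5%
```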
8
Q

What creates false positives?

The file drawer problem (publication bias)

A
  • the p-value represents the odds that you would get a particular pattern in your data purely by chance
  • so… p = .05 means a 5% chance of getting that result purely due to chance (1 in 20 tests gives a false positive)
  • Lady Macbeth effect… 20 different labs test the same research question; usually the one that ‘worked’ gets published and the other 19 do not, and then the published one cannot be replicated (see the sketch below)
  • file drawer problem… studies showing null effects often wind up in a file drawer instead of in a journal
  • the published literature is heavily biased toward studies that ‘worked’
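
A small sketch of that 20-labs scenario (the sample sizes and the t-test are my own assumptions): every lab studies a true null effect, but only p < .05 results escape the file drawer.

```python
# 20 labs run the same null study; the file drawer swallows the rest.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_labs = 20

pvals = np.array([stats.ttest_ind(rng.normal(size=40),
                                  rng.normal(size=40)).pvalue
                  for _ in range(n_labs)])
published = pvals < 0.05  # only 'significant' results get written up

print(f"labs with p < .05: {published.sum()} of {n_labs}")
print("p-values that reach a journal:", np.round(pvals[published], 3))
print(f"chance at least one lab 'succeeds': {1 - 0.95 ** n_labs:.0%}")  # ~64%
```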
9
Q

What can help?

A
  1. Preregistration
  2. Open materials and data
  3. Journals publishing null results
10
Q

What can help?

Preregistration

A
  • means writing up a plan for exactly how you are going to handle your data and test your hypothesis before you begin, then posting it with a time stamp
  • with preregistration, the order becomes: design, plan the analysis, write up, and only then collect data
  • preregistration mitigates QRPs (it answers all of those worries, because the researchers said in advance what they were going to do, so now they have to do it)
11
Q

What can help?

Open materials and data

A
  • open sharing mitigates QRPs… readers can see and test all dependent variables, test analyses with and without covariates, and see and test all experimental conditions
  • open sharing helps open the file drawer… you can post data & materials for studies that you didn’t publish
  • potential drawbacks: ethics, quality control, and other concerns
12
Q

What can help?

Journals publishing null results

A
  • do we need to know a study’s results to evaluate it? no… registered reports (the journal reviews and accepts your paper before you collect your data)
  • publishing replications… direct replication: conducting the exact same study again to see if you obtain the same results
  • the best replication attempts use a very large sample (e.g. collected across many labs)… most journals now publish these, even if they fail