Week 4 Flashcards
Early philosophers
- First big ideas about mind and science
Enlightenment
- Growing questions about mind, mechanism, empiricism
Early psychologists
- Worked out how to study the mind experimentally
- Study perception, consciousness, intelligence
Psychoanalysts
- The importance of the unconscious, inner conflict
Behaviourists
- No more mind silliness
- Behaviour only
Cognitive revolution
- The mind is back in psychology
- Study the mind as information processing
Paradigm shift
- Dominant schools of thought about how to study the mind scientifically have changed
- Each dominant paradigm reflects the zeitgeist of its era
- Often periods of upheaval, revolution
Reproducibility
- The extent to which consistent results are observed when scientific studies are repeated
- Major demarcation between science and pseudo-science
- Scientific claims should not gain credence by virtue of status/authority of their originator
Science
- Systematic observation
- Ruthless peer review
- Considers all evidence
- Invites criticism
- Repeated results
- Limited claims
- Specific terms, operational definitions
- Engages community
- Changes with new evidence
- Follows evidence where it leads
Pseudoscience
- Anecdotal evidence
- No peer review
- Considers only positive evidence
- Dismisses criticism
- Non-repeatable results
- Grandiose claims
- Vague terms and ideas – science-y jargon
- Isolated
- Dogmatic and unyielding
- Starts with a conclusion, works back to confirm
How to collect data
- Step 1: Generate a hypothesis – is it interesting?
- Step 2: Collect some data
- Step 3: If the first study doesn't work, "fix" it by changing some variables
- Step 4: Repeat step 3 until you have enough studies to publish
Stapel – 2011
- Diederik Stapel, a prolific Dutch social psychologist, was investigated for fraud
- He often supplied the data to his grad students
- His grad students working in the lab remarked that stats for different studies showed similar means and SDs
- After investigation, he admitted the fraud; 25 published papers were found to be based on fabricated data
- 58 papers were retracted
Daryl Bem
- ESP study
- Claimed people had precognition
- Picking between pictures behind curtains
- The findings could not be reliably replicated
Common bad research practices
- Stopping data collection as soon as p < .05 (simulated after this list)
- Analysing many measures but reporting only the significant ones
- Collecting and analysing many conditions but reporting only the significant ones
- Using covariates to get significance
- Excluding participants
- Transforming data to get p<.05
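A minimal sketch of the first practice, optional stopping, assuming simulated data with no real effect; the start size, batch size, and cap below are illustrative choices, not from the lecture. Peeking at p after every batch and stopping at the first p < .05 pushes the false-positive rate well above the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def optional_stopping(n_start=10, n_max=100, step=5, alpha=0.05):
    """One simulated 'study' under the null (no real effect): peek at p
    after every batch of participants, stop as soon as p < alpha."""
    a = list(rng.normal(size=n_start))
    b = list(rng.normal(size=n_start))
    while len(a) <= n_max:
        if stats.ttest_ind(a, b).pvalue < alpha:
            return True   # "significant" – stop collecting and write it up
        a.extend(rng.normal(size=step))
        b.extend(rng.normal(size=step))
    return False          # never reached p < .05; study is abandoned

n_sims = 2_000
fp = sum(optional_stopping() for _ in range(n_sims)) / n_sims
print(f"False-positive rate with optional stopping: {fp:.1%}")  # well above 5%
```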
Open Science Collaboration
- 100 replications of studies from 3 prominent psychology journals
- Formed in 2011 with around 60 members
- Grew to 270 scientists from over 50 countries
- 97% of original studies reported significant effects
- 36% of replications had significant effects in the same direction
Replication crisis factors – how did the crisis happen
- Enormous pressure and incentive to produce many papers
- Over-valuing counter-intuitive findings as 'sexy'
- Researchers' confirmation biases encouraging questionable research practices
- Lack of accountability and transparency
Confirmation bias
- Tendency to seek out information that verifies your theory (validation)
- And not seek out information that falsifies your theory (falsification)
Drawbacks of the replication crisis
- Failed replications may be the new trend
- It creates a culture of paranoia and moral righteousness
Dan Gilbert – reaction to the crisis
- Dismissed the so-called replicators as 'shameless little bullies' and 'second stringers'
Power pose replication crisis
- Carney, Cuddy and Yap 2010
- Carney (the first author) later wrote: "As evidence has come in over these past 2+ years, my views have updated to reflect the evidence. I do not believe that power pose effects are real."
Schnall replication crisis
- One of her major papers failed to be replicated
- Schnall wrote a response commenting on damage to her career
- Roberts replied that the damage to her career mattered less than the PhDs ruined for being honest
Increase replication
- We must ensure that findings are robust and replicable
- Direct/conceptual replications should be part of the research pipeline
Beware P-hacking
- Exploiting researcher degrees of freedom (which measures, conditions, exclusions, analyses) to find a significant effect
- Can arise from implicit bias or explicit data manipulation (see the sketch below)
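A minimal sketch of one such degree of freedom, measuring many outcomes and reporting only the 'hit': the 20 measures and n = 30 per group are illustrative assumptions, and every measure is simulated under the null.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_measures, n, alpha = 20, 30, 0.05  # illustrative values

# Each simulated "study" records 20 unrelated outcome measures with no
# real effect anywhere; reporting only the best one is p-hacking.
n_sims = 5_000
hits = sum(
    any(stats.ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue < alpha
        for _ in range(n_measures))
    for _ in range(n_sims)
)
# Expected by theory: 1 - 0.95**20, i.e. about 64% of null studies
print(f"Studies with >=1 'significant' measure: {hits / n_sims:.1%}")
```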
Definitions of power
- The probability of finding an effect in your study if the effect is real (estimated by simulation below)
- The probability that a test of significance will detect a deviation from the null hypothesis, should such a deviation exist
- The probability of avoiding a Type II error
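The first definition can be checked directly by simulation; a minimal sketch, assuming a true effect of Cohen's d = 0.5, normal data, and n = 30 per group (all illustrative choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
d, n, alpha = 0.5, 30, 0.05  # true effect size (Cohen's d), per-group n

# Simulate many two-group studies where the effect is REAL; the share of
# them in which the t-test reaches p < alpha is the study's power.
n_sims = 10_000
hits = sum(
    stats.ttest_ind(rng.normal(d, 1.0, n), rng.normal(0.0, 1.0, n)).pvalue < alpha
    for _ in range(n_sims)
)
print(f"Estimated power at n={n} per group: {hits / n_sims:.2f}")  # around 0.47
```

With only 30 participants per group, a real medium-sized effect is detected less than half the time; roughly doubling n to 64 per group lifts the power to about 0.80.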
Boost your power
- Studies are often underpowered, partly due to misunderstanding of power
- Large studies are more expensive and time-consuming
- The pressure to publish more papers, more frequently, favours small, underpowered studies
How do you increase power
- Larger sample sizes (see the sketch below for solving the required n)
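For planning, the required sample size can be solved analytically; a minimal sketch using statsmodels' power solver, with d = 0.5 and 80% power as illustrative targets:

```python
# Requires statsmodels (assumed installed): pip install statsmodels
from statsmodels.stats.power import TTestIndPower

# Solve for the per-group sample size that gives 80% power to detect a
# medium effect (Cohen's d = 0.5) in a two-sided independent-samples t-test.
n_needed = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                       power=0.80, alternative='two-sided')
print(f"Per-group n for 80% power at d = 0.5: {n_needed:.0f}")  # about 64
```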
Open data principles
- Making data, materials and analysis available online
- So that others can replicate, check and reproduce your work
Confirmatory/exploratory research
- Confirmatory – hypothesis testing
- Exploratory – exploring the topic without fixed, pre-specified hypotheses
- Exploratory research is okay
- But must not be presented as confirmatory
- Should be followed by confirmatory
HARK-ing
- Hypothesising after results are known
How to conduct confirmatory research
- Decide study details a priori
- Hypotheses to test, number of subjects, conditions, DVs (dependent variables)
- Only then start recruiting
- Pre-register your study
Open science practices in teaching
- Ensure the next generation moves on from the reproducibility crisis
- Teach the importance of conducting well-powered studies
- Encourage critical evaluation of published studies in terms of open science practices
- Open science practice leads to more reliable, reproducible science
Open science practices as reviewers
- Signatories will not offer comprehensive review for, nor recommend publication of, any manuscript that fails to meet the minimum open-science requirements
- Stimuli and materials should be made publicly available
- Data should be made publicly available
- Documents containing details for interpreting any data files or analysis code should be made available
- The location of all these files should be advertised in the manuscript and all files should be hosted by a reliable third party
Incentives for pursuit of new ideas
- Publications, grant income, employment, promotion, tenure, fame
Issues with these incentives
- They are poor motivators of careful science
- May prompt bad practices in pursuit of fame
Goals for rewarding open science practices
- Be tolerant of lower output when research is done correctly
- Reward good practices for pre-registered studies regardless of outcome
- Reward good practices for high-powered studies
- More time for research
Solutions to the replicability crisis
- More replications
- Beware of p-hacking
- Boost your power
- Open data, open materials, open analysis
- Conduct pre-registered confirmatory studies
- Incorporate open science practices in teaching
- Insist on open science practice as reviewers
- Reward open science practices