Week 5 Flashcards
Conditional probability
the probability of one event occurring given that another event has occurred
P(a|b)
probability of a given b
P(b|a)
probability of b given a
prior probability
the probability of a hypothesis or event before the evidence is observed
posterior probability
the updated probability of the hypothesis or event after evidence is collected
sensitivity
probability of a true positive result
specificity
probability of a true negative result
Fallacy of transposed conditional
flipping the conditions in a conditional probability statement
we have an intuitive but incorrect tendency to think we know the probability of condition B given evidence A, P(b|a), when we have only been presented with the reverse conditional probability, P(a|b)
low prior probability
low posterior probability
high prior probability
high posterior probability
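The link between prior and posterior can be made concrete with Bayes' theorem. A minimal sketch, using a hypothetical diagnostic test (the sensitivity, specificity, and prior values are illustrative, not from the lecture):

```python
def posterior(prior, sensitivity, specificity):
    """P(condition | positive test) via Bayes' theorem.

    sensitivity = P(positive | condition)      -> true positive rate
    1 - specificity = P(positive | no condition) -> false positive rate
    """
    true_pos = sensitivity * prior
    false_pos = (1 - specificity) * (1 - prior)
    return true_pos / (true_pos + false_pos)

# Same test accuracy, different priors (hypothetical numbers):
low = posterior(prior=0.001, sensitivity=0.99, specificity=0.95)
high = posterior(prior=0.5, sensitivity=0.99, specificity=0.95)
print(low)   # low prior  -> low posterior (about 2%)
print(high)  # high prior -> high posterior (about 95%)
```

This is the point of the cards above: with a rare condition (low prior), even a positive result from an accurate test leaves the posterior probability low.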
p-value
conditional probability
probability of an event (observing data like ours) given a condition (null hypothesis is true)
not all hypotheses
are equally plausible
to evaluate plausibility
consider whether there is a strong theoretical basis for the hypothesis, and whether there are plausible mechanisms by which the hypothesis could be true
what do we need to consider when determining the likelihood of b?
the prior probability of b being true
how is a statistically significant outcome more convincing?
If the prior probability of the hypothesis is high rather than low
Bem 2011
claimed to find evidence that information can travel back in time to affect our cognition and emotion
An implausible hypothesis with no plausible mechanism by which it could work, and the effects have been difficult to replicate
what shouldn’t p values be the only thing used for?
evidence as to whether to accept or reject a hypothesis
sniff test
is the effect strong?
does it generalise to other populations? other situations?
do other researchers find these results?
what could be alternative explanations?
experimental error
inappropriate methods or analysis techniques
Straightforward replication
An identical copy of a previous study, run to see if the study's findings are valid and reliable
Different participant population replication
Do these findings extend to other populations?
Replications with different experimenters
These lead to less bias from preconceived ideas that experimenters may hold
Previous studies may have been carried out a long time ago
replicate to see if the results are still relevant
“good science”
findings are accurate and reliable when replicated in a separate study
Psychology and complexity of human behaviour make replications hard
large variability
Open Science Collaboration (2015)
of 100 psychology studies replicated, only 39 replications were successful
questionable research practices (QRPs)
Driven by publication bias and the pressure to publish significant findings
Simmons et al. (2011)
Research question: does listening to children songs induce age contrast making people feel older?
2 songs were compared
People felt older after listening to the children's song compared to the control, p=.033, showing an age-contrast effect
However, the actual study had three conditions/songs, including an older age contrast
Multiple questions asked, 10 in total
None of this was mentioned in the report
issues with QRP
Make replications difficult
False positives become far more likely, leading to pointless further studies and ineffective treatments or policies
The field of psychology and its credibility are called into question
4 common QRPs ('researcher degrees of freedom')
- choosing among dependent variables (DVs) 66.5%
- choosing sample size
- using covariates
- reporting subsets of experimental conditions
why use QRPs?
Pressure to publish significant findings
Ambiguity about decisions such as the exclusion of outliers
Confirmation bias such as preset ideas
Lindsay 2015 troubling trio to help identify QRPs
- low statistical power
- surprising results
- p value only slightly less than .05
tackling QRPs
pre-registration of studies - writing down your hypothesis, study design, and analysis plan before you collect any data
replication of one's own findings - checks reliability
open access data - allowing other researchers access to everything allows them to reproduce analyses, check analyses and conduct additional analyses
reduce publication bias against null findings
registered reports - methods and analysis plan are peer-reviewed before the study is conducted
individual differences to measure and control
personality
experience
physiological state
psychological state
genetics
environmental influences to control
temperature
context
lighting
time of day
social influence
too much control
lowers ecological validity - findings may not generalise to real life
questionable research practices (QRPs)
there are lots of decisions to make as a researcher about how best to conduct studies and analyse our data
the flexibility leaves room for QRPs to appear, we can make little changes to our study or analysis to make data ‘neater’
Common QRPs
collecting data from more participants and then checking the results again
cherry-picking particular comparisons that “work” and discarding those that don’t
adding or removing ‘covariates’
running lots of studies and only reporting those that “work”
deciding which “outliers” to discard after looking at the data
p-hacking
exploiting, perhaps unconsciously, researcher degrees of freedom until p<.05
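How exploiting degrees of freedom inflates false positives can be shown with a small simulation. A minimal sketch, under simplifying assumptions not from the lecture: a two-sided z-test with known unit variance stands in for a t-test, and the "hacked" analysis measures three DVs but reports only the one with the smallest p:

```python
import math
import random

def z_test_p(a, b):
    """Two-sided z-test p-value for a difference in means,
    assuming known unit variance (a simplification for this sketch)."""
    n = len(a)
    se = math.sqrt(2 / n)  # SE of the mean difference for two groups of size n
    z = (sum(a) / n - sum(b) / n) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
n_sims, n = 2000, 20
hits_single = hits_hacked = 0
for _ in range(n_sims):
    # The null is true: every group is drawn from the same distribution,
    # so any "significant" result is a false positive.
    control = [random.gauss(0, 1) for _ in range(n)]
    dvs = [[random.gauss(0, 1) for _ in range(n)] for _ in range(3)]
    ps = [z_test_p(dv, control) for dv in dvs]
    hits_single += ps[0] < .05   # honest: one pre-registered DV
    hits_hacked += min(ps) < .05  # p-hacked: report whichever DV "worked"

print(hits_single / n_sims)  # close to the nominal 5%
print(hits_hacked / n_sims)  # well above 5%
```

The honest analysis stays near the nominal 5% false-positive rate; picking the best of three DVs pushes it well above, which is exactly why the QRPs listed above make false positives far more likely.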
open science
open data
open source
open educational resources
open methodology
open access
open peer review