Chapter 14 Flashcards
3 key components of best practices/open science
transparency
reproducibility
replicability
reproducibility
reproducing identical results from the same data
replicability
replicating results generated from older data by collecting new data through similar procedures
what does replication give to a study
credibility
3 types of replication
direct replication
conceptual replication
replication-plus-extension
direct replication
the original study is repeated as similarly as possible to determine whether the original effect is found in the new data
conceptual replication
the same research question and same conceptual variables but different operationalizations
in replication-plus-extension, in what 2 ways can you extend the original study?
- add another level to an existing IV
- add another variable (makes it a factorial design)
what does a meta-analysis yield?
a quantitative summary of a scientific literature: an average of the effects from all studies (published and unpublished) on the same variables
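The "average of the effects" in a meta-analysis is usually a weighted average, where more precise studies count more. A minimal fixed-effect sketch, with made-up study values (the effect sizes and variances below are hypothetical, not from the chapter):

```python
# Minimal sketch of a fixed-effect meta-analysis: each study's effect
# size is weighted by the inverse of its variance, so more precise
# studies contribute more to the summary effect.
def meta_analysis(effects, variances):
    weights = [1 / v for v in variances]
    summary = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return summary

# Three hypothetical studies measuring the same effect (Cohen's d):
effects = [0.40, 0.25, 0.55]
variances = [0.02, 0.05, 0.10]
print(round(meta_analysis(effects, variances), 3))  # → 0.381
```

Note that the summary (0.381) sits closest to the most precise study's effect (0.40), which is the point of inverse-variance weighting.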
scientific literature
series of related studies conducted by different researchers who have tested similar variables
limitations to meta-analyses
null and opposite effects are rarely published, so a meta-analysis might overestimate the true effect size (the file drawer problem)
solution to the file drawer problem of meta-analyses
actively seek out unpublished data, e.g., through social media and online forums
origin of the replication crisis
in a random sample of 100 studies published in journals, only 39% could be successfully replicated
recommended reactions to the replication crisis
- ask why replication studies might fail
- ask what the best practices are to improve reproducibility
why might a study fail to replicate?
- if direct replication was used when it doesn't make sense to use it
- if the researchers relied on only 1 replication study
- questionable research practices (QRPs)
best known QRPs
underreporting null effects
p-hacking
HARKing
using small samples
how does underreporting null effects influence readers?
it makes readers think the effects are stronger than they actually are
p-hacking
when researchers try different statistical analyses or compute their variables differently than they originally intended, in hopes of obtaining a significant p value; it's often not done intentionally, because researchers can become biased without being aware of it
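Why p-hacking misleads: under the null hypothesis, each individual test has a 5% false-positive rate, but trying many analyses and keeping whichever comes out significant inflates that rate. A small sketch of the arithmetic (assuming, for simplicity, k independent tests):

```python
# Under the null, each test has a 5% chance of a false positive.
# Probability of at least one p < .05 across k independent tests:
#   1 - 0.95**k
def false_positive_rate(k):
    return 1 - 0.95 ** k

for k in (1, 5, 10, 20):
    print(k, round(false_positive_rate(k), 2))
```

With 20 analyses to pick from, the chance of finding at least one "significant" result by luck alone is well over half.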
HARKing
hypothesizing after the results are known; misleads readers about the strength of the evidence
why can a small sample size be problematic
the study's estimate is usually imprecise and hard to replicate, because it doesn't take many extreme scores to greatly influence a small data set
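The imprecision of small samples can be simulated: sample means drawn from the same population scatter much more widely when n is small. A sketch with simulated data (population values are made up):

```python
import random
import statistics

random.seed(0)
POP_MEAN, POP_SD = 100, 15  # hypothetical population parameters

def spread_of_sample_means(n, trials=2000):
    """Estimate how much sample means vary for samples of size n."""
    means = [
        statistics.mean(random.gauss(POP_MEAN, POP_SD) for _ in range(n))
        for _ in range(trials)
    ]
    return statistics.stdev(means)  # the standard error, by simulation

# Small samples give a much wider spread of estimates:
print(spread_of_sample_means(10) > spread_of_sample_means(200))  # → True
```

The spread shrinks roughly in proportion to the square root of n, which is why a handful of extreme scores can swing a small study's estimate.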
best practices for scientific studies
pre-registration
power analysis
report all analyses
report all variables measured
report all conditions
pre-registration
preregister the study’s methods, hypotheses and statistical analyses online BEFORE DATA COLLECTION
power analysis
determines the adequate sample size for the study's design; done before submission to the ethics committee
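A power analysis turns the expected effect size, significance level, and desired power into a required sample size. A sketch for a two-group comparison using the standard normal approximation (the formula and default values below are a common textbook convention, not from this chapter):

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate n per group for a two-group comparison:
    n ≈ 2 * (z_{alpha/2} + z_{power})^2 / d^2, where d is Cohen's d."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2)

print(n_per_group(0.5))  # medium effect → 63 per group
print(n_per_group(0.2))  # small effect needs a much larger sample
```

Note how the required n grows quickly as the expected effect shrinks, which is why underpowered studies of small effects are so common.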
what 2 factors are considered in external validity
-how well the results can generalize to a population of interest
-how the sample was selected (random?)
is a study fundamentally flawed if it does not use a representative sample?
NO
is a large sample more representative than a small sample?
NO
ecological validity
the extent to which a study’s tasks and manipulations are similar to what people experience in real life
mundane realism
replicating what people experience in real life
theory-testing mode
testing association or causal claims to support a theory, and external validity is NOT a priority
famous example of theory-testing mode
contact comfort theory
generalization mode
generalizing findings from the sample to other populations or contexts, external validity IS a priority
cultural psychology
a special area of generalization mode; some cultures perceive things differently than other cultures (ex: the Müller-Lyer illusion)
WEIRD sample
Western, educated, industrialized, rich, and democratic samples; be aware of studies that are based primarily on these
experimental realism
the extent to which participants experience authentic emotions and behaviors
why is the importance of a study not solely determined by external validity and mundane realism?
because a study that allows participants to experience authentic emotions and behaviors (experimental realism) can still be important