Research Methods Exam Flashcards
How do we generally investigate causal claims?
Experiments
What are the 3 criteria for establishing causation?
- Covariance
- Temporal Precedence
- Internal Validity
Explain Covariance
an association between A and B: the two variables change together, but covariance alone cannot tell us whether A causes B or B causes A
Explain Temporal Precedence
directionality, figuring out what came first; did A cause B? or did B cause A?
Explain Internal Validity
is there a third variable that is associated with both A and B independently that could interfere with causation?
What is an experiment?
the manipulation of one variable and the measurement of another
independent variables can have multiple…
conditions
what are the two types of variables?
- independent
- dependent
explain independent variables
manipulated (ex. note taking methods)
explain dependent variables
measured (ex. academic performance)
what is a control group?
receives no treatment, or an inert treatment such as a placebo
what is a treatment group?
receives the treatment
what things should you keep in mind when choosing variables and methodology?
- replicability
- generalizability
- ability to make causal claims
- outside interfering variables
why do we need experiments?
allows us to draw conclusions about causation
what is the easiest criterion to establish?
covariance
what happens when results are explained by systematic differences (confounds)?
we cannot infer causation
what are confounds?
alternative explanation for the change in the dependent variable
what are the two types of confounds?
- design confounds
- selection effects
what are design confounds?
mistakes in the design of the experiment: a second variable accidentally varies along with the independent variable
example of a confound within this claim: alcohol use increases your risk of lung cancer
individuals who use alcohol may be more likely to also smoke
confound: smoking
what is a selection effect?
the groups differ systematically before the manipulation because of how participants were selected or assigned
unsystematic variability is not the same as…
a confound; a confound varies systematically with the conditions, while unsystematic variability is random noise
whats a secondary way that selection bias occurs?
when people volunteer for a study
how do you prevent confounds?
treat all participants the same and ensure that no part of the experiment varies systematically between conditions
how do you prevent selection effects?
random assignment
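random assignment is easy to sketch in code. a minimal Python example (the participant IDs and group names here are hypothetical, not from any real study):

```python
import random

def randomly_assign(participants, groups=("control", "treatment")):
    """Shuffle participants, then deal them into groups round-robin.

    Because assignment depends only on chance, pre-existing differences
    between people tend to spread evenly across the groups.
    """
    shuffled = list(participants)  # copy so the input is untouched
    random.shuffle(shuffled)
    return {g: shuffled[i::len(groups)] for i, g in enumerate(groups)}

assignment = randomly_assign(range(1, 21))  # 20 hypothetical participant IDs
```

every participant ends up in exactly one group, and group sizes stay balanced no matter how the shuffle falls.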
what is a way to make sure groups are equal?
matched groups
why can dealing with confounds be difficult?
- time
- measuring more variables
- resources
- more complicated study designs
- lots of variables
- missed variables
what are the three types of validity?
- statistical
- construct
- external
explain statistical validity
how well the numbers support the conclusion: is the result statistically significant, and how large is the effect? assessed with statistical tests (t-test, ANOVA) and effect sizes
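the effect-size part can be made concrete. a minimal stdlib-only sketch of Cohen's d, one common effect-size measure (the exam scores below are made up for illustration):

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(a, b):
    """Effect size: the difference between two group means,
    expressed in units of their pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = sqrt(((na - 1) * stdev(a) ** 2 +
                      (nb - 1) * stdev(b) ** 2) / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled_sd

# Hypothetical exam scores from a note-taking experiment.
longhand = [78, 82, 85, 90, 88, 84]
laptop = [70, 75, 72, 80, 77, 74]
d = cohens_d(longhand, laptop)  # by convention, d >= 0.8 counts as "large"
```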
explain construct validity
operationalization of variables
explain external validity
whether the results generalize beyond the study; depends on who the participants are, their characteristics, and how they were sampled
what does construct validity ask within the note taking experiment?
- did the study do a good job of measuring academic performance?
- how well did they manipulate the note taking condition?
explain potential design confounds in the pasta experiment
- individuals in the large bowl group had tastier pasta
- what if the medium and large bowl groups ate at different times?
explain potential selection effects in the pasta experiment
individuals who received the large bowl love pasta, while those who received the medium bowl don’t
what is the difference between qualitative research and quantitative research?
quantitative: tests hypotheses or theories
qualitative: explores ideas to formulate hypotheses and theories
how is qualitative research analyzed?
summarizing, categorizing, interpreting
how is quantitative research analyzed?
math and statistical analysis
how is qualitative research expressed?
words
how is quantitative research expressed?
numbers, graphs, tables, fewer words
how is qualitative research sampled
few responses
how is quantitative research sampled?
many responses
how are qualitative questions formatted?
open-ended
how are quantitative questions formatted?
closed-ended or multiple choice
when do you use qualitative research?
to understand something
when do you use quantitative research?
confirm or test something
how do you analyze qualitative data?
- prepare and organize data
- review and explore your data
- identify recurring themes
- develop a data coding system
- assign codes to the data (labelling)
what is a mixed method of research?
combines quantitative and qualitative research to answer research questions
what do results look like in research in humanities
products of research are predominately intellectual and intangible, results contribute to an academic discipline
what are the 6 reasons research ethics matter?
- protecting participant safety
- maintaining scientific integrity
- upholding human rights and dignity
- ensuring social responsibility
- building trust in research and institutions
- complying with legal and regulatory requirements
how do we know a manipulation works?
manipulation check; an extra measure of the independent variable to confirm that the manipulation actually changed what it was meant to change
what is a null effect?
no significant difference or relationship between the things you’re studying; the independent variable did not affect the dependent variable
what is the file drawer problem?
studies that find no difference stay in the file drawer (go unpublished); as a result, no one knows that the study was ever conducted
what are the 6 benefits of pre-registering a study?
- demonstrated credibility
- lasting reproducibility
- constructive review
- increased likelihood of acceptance
- a more complete scientific record
- keeps your options open
peer review before results are known to…
align scientific values and practices
when does stage 1 of peer review occur?
between designing the study and collecting and analyzing the data
when does stage 2 of peer review occur?
between writing the report and the publishing of the report
how could we miss an effect?
obscuring factors
what are the 3 forms of obscuring factors?
- weak manipulation of the IV
- insensitive measure of the DV
- floor and ceiling effects of the IV or DV
what is weak manipulation?
the manipulation does not create a strong or clear enough difference between the groups
what is insensitive measurement?
the research tools are not precise enough to detect real differences in the dependent variable
what are floor and ceiling effects?
scores cluster at the very top (ceiling) or very bottom (floor) of the scale, hiding real differences between groups
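a ceiling effect can be shown with a toy example: a test capped at 100 compresses the scores of the stronger group, shrinking the observed gap (all numbers below are hypothetical):

```python
# True ability scores for two hypothetical groups; the test can only
# record values up to 100, so higher scores get clipped at the ceiling.
group_a_true = [95, 105, 110, 120, 98]
group_b_true = [85, 90, 95, 100, 92]
cap = 100

group_a_obs = [min(s, cap) for s in group_a_true]
group_b_obs = [min(s, cap) for s in group_b_true]

true_gap = sum(group_a_true) / 5 - sum(group_b_true) / 5  # 13.2 points
obs_gap = sum(group_a_obs) / 5 - sum(group_b_obs) / 5     # shrinks to 6.2
```

the real 13.2-point difference looks like only 6.2 points on the capped test, so an easier test (a higher ceiling) would be a more sensitive measure here.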
what are control variables?
factors that a researcher keeps constant or under control during an experiment; helps to rule out other possible explanations
what is the difference between systematic and unsystematic variables?
systematic variables are related to the study’s purpose and are controlled or accounted for, unsystematic variables are not directly linked to the study and are random or uncontrolled
what are obscuring factors?
things that make it difficult to see or understand the true relationship between variables
what is a pilot study?
a small-scale preliminary investigation conducted before the main study; helps researchers identify potential problems and refine their approach
what is P-hacking?
intentionally fishing for a p-value below 0.05, e.g., by running many analyses and only reporting the ones that reach significance
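why fishing works can be seen with a little arithmetic: under a true null, each test run at α = .05 has a 5% false-positive chance, and those chances compound across tests. a short Python sketch (assuming the tests are independent):

```python
def false_positive_rate(k, alpha=0.05):
    """Chance of at least one 'significant' result when running
    k independent tests on data where no real effect exists."""
    return 1 - (1 - alpha) ** k

false_positive_rate(1)   # 0.05 -- the advertised error rate
false_positive_rate(20)  # ~0.64 -- a false "finding" is more likely than not
```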
what is HARKing?
Hypothesizing After the Results are Known; presenting a hypothesis formed after seeing the data as if it had been made in advance