Research Methods and Study Design Flashcards
Experimental Design
a research method in which the researcher manipulates an independent variable and measures its effect on a dependent variable while controlling for other factors
Steps to good experimental design
1) select the population
2) operationalize the independent and dependent variables
3) carefully select the control and experimental groups
4) randomly sample from the population
5) randomly assign individuals to groups
6) measure the results
7) test the hypothesis
1) Selecting the population
- Objective: determine the population of interest and consider what group will be pragmatic to sample
- Common Flaws: the population is too restrictive, sampling all individuals of interest is not practical
2) Operationalize variables
- Objective: determine the independent and dependent variables, specify exactly what is meant by each, make sure the dependent variable can be measured quantitatively within the parameters of the study
- Common Flaws: insufficient rigor in the description, manipulation of the independent variable presents practical problems
Dependent Variable
variable that is measured
Independent Variable
variable manipulated by the research team
Operational definition
Specification of precisely what they mean by each variable
Reproducibility
a quality of good experimental design: experiments can be reproduced by other researchers
Quantitative
numerical
Qualitative
descriptive
3) Divide into groups
- Objective: carefully select experimental and control groups, homogenize the two groups, isolate the treatment by controlling for potential extraneous variables
- Common Flaws: the control group does not resemble the treatment group along important variables, the experiment is not double-blind, participants can guess their group assignment, allowing a placebo effect to occur
Experimental Group
group of participants that receives treatment
Control group
group of participants that acts as a point of reference and comparison
Homogeneous
describes a control group that is uniform throughout and as similar as possible to the experimental group except for the treatment
Extraneous (or confounding)
variables other than the treatment that could potentially explain the results of an experiment
Placebo effect
believing that the treatment is being administered can lead to measurable results
Double blind
neither the person administering the treatment nor the participants truly know if they are assigned to the treatment or control group
4) Random sampling
- Objective: make sure all members of the population are represented, ideally each member has an equal chance of being selected, meeting these criteria is often not possible for practical reasons
- Common Flaws: sampling is not truly random, sample does not represent the population of interest
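The sampling step can be sketched in Python; the population of numbered participant IDs below is purely hypothetical:

```python
import random

random.seed(42)  # fixed seed so the example is reproducible

# Hypothetical sampling frame: 1,000 participant IDs (illustrative only).
population = list(range(1000))

# Simple random sample: each member has an equal chance of being chosen,
# and sampling is without replacement.
sample = random.sample(population, k=100)

print(len(sample))  # 100
```

In practice the sampling frame rarely covers the whole population of interest, which is exactly the sampling-bias flaw described next.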
Sampling bias
occurs when not all members of a population are equally likely to be sampled
Selection bias
a more general category of systematic flaws in a study design that can compromise results; another example is purposefully selecting which studies to evaluate in a meta-analysis
Meta-analysis
big-picture analysis of many studies to look for trends in the data
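As a rough illustration, one common way to combine many studies is a fixed-effect meta-analysis, which pools results as an inverse-variance weighted average. The effect sizes and variances below are invented for the example:

```python
# Hypothetical effect sizes and their variances from five studies.
effects   = [0.30, 0.45, 0.20, 0.50, 0.35]
variances = [0.02, 0.05, 0.03, 0.04, 0.02]

# Fixed-effect pooling: weight each study by the inverse of its variance,
# so more precise studies count more toward the pooled estimate.
weights = [1 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

print(round(pooled, 3))
```

The pooled estimate always falls between the smallest and largest individual effect sizes.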
Attrition
another type of selection bias that occurs when participants drop out of the study. If dropout is non-random, it may introduce an extraneous variable
5) Random assignment
- Objective: individuals who have been sampled are equally likely to be assigned to treatment or control, consider matching along potential extraneous variables which have been pre-selected
- Common Flaws: groups are not properly matched, assignment is not perfectly random
Randomized block technique
researchers evaluate where participants fall along the variables they wish to equalize across experimental and control groups, then randomly assign individuals from these blocks so that the treatment and control groups are similar along the variables of interest
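The randomized block technique can be sketched in Python; the participants and their blocking variable (age group) are hypothetical:

```python
import random
from collections import defaultdict

random.seed(0)  # fixed seed so the example is reproducible

# Hypothetical participants with one blocking variable (age group).
participants = [{"id": i, "age_group": random.choice(["young", "middle", "older"])}
                for i in range(60)]

# 1) Block participants along the variable to equalize.
blocks = defaultdict(list)
for p in participants:
    blocks[p["age_group"]].append(p)

# 2) Shuffle within each block, then split each block evenly between
#    treatment and control so the groups match on the blocking variable.
treatment, control = [], []
for block in blocks.values():
    random.shuffle(block)
    half = len(block) // 2
    treatment.extend(block[:half])
    control.extend(block[half:])
```

After assignment, the treatment and control groups contain nearly equal numbers of participants from each age group, which simple randomization does not guarantee.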
6) Measurement
- Objective: make sure measurements are standardized, make sure instruments are reliable
- Common Flaws: tools are not precise enough to pick up a result, instruments used for measurements are not reliable
Reliability
means that measurement instruments produce stable and consistent results: they measure what they're supposed to (construct validity) and repeated measurements lead to similar results (replicability)
Psychometrics
study of how to measure psychological variables through testing
Response bias
another concern with surveys: the tendency of respondents to lack perfect insight into their own state and therefore provide inaccurate responses
Between-subjects design
a design in which comparisons are made between subjects in one group and subjects in another group