Chap 6: Research Strategies Flashcards
What is the difference between Research strategy, design and procedure?
Research strategy→ general approach and goals of a research study
Research design→ general plan for implementing a research strategy
- Group vs. individual
- Within-subjects vs. between-subjects
- Number of variables to include
Research procedure→ exact, step-by-step description of a specific research study
How to avoid confirmation bias in research?
- Open and transparent research atmosphere where data and experimental designs are examined and evaluated by every lab member
- Encourage and consider critical views
- All team members should examine primary data
- Design the experiment so that the results could either support or refute the hypothesis
- Set experiment standards before starting the experiment (cut-off values, results that will/won’t provide useful info)
What are three aspects to consider when picking a research design?
- Group versus individual (case study, single-subject designs)
- Same individuals versus different individuals (within-subject, between-subject designs)
- The number of variables to be included (1-level, 2-level, factorial designs)
What is the difference between correlational and descriptive research strategy?
Descriptive research strategy→ observes and describes individual variables as they exist naturally
Correlational strategy→ measures and describes the relationship between two variables
What is the difference between correlational and nonexperimental research strategy?
Nonexperimental strategy→ compares two or more groups of scores, measuring only one variable for each individual
Correlational strategy→ uses one group of participants and measures two variables for each individual
What is the difference between internal and external validity?
Internal Validity→ extent to which you can conclude that the changes in X have caused the observed changes in Y
External Validity→ extent to which your results can generalize to other settings and populations
What are the threats to external validity?
Generalizing across participants
- Selection bias→ when sampling procedure favors the selection of some individuals over others
- Volunteer bias→ volunteers may differ systematically from the general population
- Participant characteristics→ if the participants have too many characteristics in common, results may not generalize to people who differ from them
- Cross-species generalizations→ results obtained with animals may not generalize to humans
Generalizing across features of a study
- Novelty effect→ the novelty of an experimental situation can be exciting or anxiety-provoking, producing behaviour that differs from everyday settings
- Multiple treatment interference→ carry-over effects from earlier treatments (ex: fatigue, practice)
- Experimenter characteristics→ results may depend on attributes of the particular experimenter
Generalizing across features of the measures
- Sensitization→ the assessment procedure itself can alter participants' behaviour
- ex: pre-test sensitization→ taking a pretest can increase awareness and influence how participants respond to the treatment
- Self-monitoring→ the act of monitoring one's own behaviour can change that behaviour
- Choice of operational definition→ results may be constrained to one type of measurement (ex: heart rate)
- Time of measurement→ effects measured immediately after treatment may differ from effects measured later
What are the threats to internal validity?
Environmental variables (time of the day, location…)
Participant variables→ assignment bias: groups differ on characteristics such as gender, age, or IQ
Time-related variables→ arise when the same participants are tested multiple times (within-subjects designs); changes such as weather or mood between sessions can affect scores
- History→ outside events occurring between measurements can influence the overall group average
- Maturation→ changes in participants’ physical or psychological characteristics (problem for long term studies with young or old participants)
- Instrumentation→ changes in measurement instrument over course of study
- Testing effects→ repeated testing itself changes scores: practice can improve performance, mental fatigue can worsen it; carry-over effects