Exam 4 (ch 12,13,14) Flashcards
The effect of one IV on the DV depends on the level of another IV (the two IVs combine to affect the DV)
Interaction Effect
The overall effect of one IV on the DV, averaged across the levels of any other IVs
Main Effect
Study where there are two or more IVs, or factors
Factorial Design
A condition in an experiment; in a factorial design, one combination of the levels of the IVs
Cell
Age, Gender, Ethnicity (factors that are selected, but not manipulated)
Participant Variables
An experiment where researchers don’t have full control (may not be able to randomly assign participants to groups)
Quasi-Experiment
Like an IV, but researchers don’t have full control over it
Quasi-independent variable
Study that compares 2 or more existing groups
Nonequivalent Control Group Design
Study where the DV is measured repeatedly before, during, and after an intervention or event
Interrupted Time-Series Design
Study where the DV is measured before, during, and after an intervention in two or more existing groups
Nonequivalent Control Group Interrupted Time-Series Design
A design for testing treatments: one group is randomly assigned to receive therapy immediately, and the second group receives it after a time delay
Wait-List Design
Design where data are gathered in depth from only a few cases
small-N design
Small-N design where behaviors are observed for an extended baseline period before beginning the intervention
Stable-Baseline design
Small-N design where the introduction of the intervention (IV) is staggered at different times for different people, behaviors, or situations
Multiple-Baseline design
Small-N design where the researcher observes a problem behavior before and after a treatment, then discontinues the treatment to see whether the behavior returns
Reversal design
Study whose results can be reproduced when the steps are repeated
Replicable
Replication where the original study is repeated exactly
Direct Replication
Replication where the operational definitions of some variables are changed when the study is repeated
Conceptual Replication
Replication where some variables or conditions are added.
Replication-Plus-Extension
Statistical average of existing studies
Meta-Analysis
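The "statistical average" at the core of a meta-analysis is usually a weighted mean of each study's effect size. A minimal sketch, using hypothetical effect sizes and the common inverse-variance weighting scheme (more precise studies count more):

```python
# Hypothetical effect sizes (e.g., Cohen's d) and sampling variances
# from three studies -- illustration only, not real data.
effects = [0.30, 0.45, 0.20]
variances = [0.04, 0.09, 0.02]

# Fixed-effect meta-analysis: weight each study by 1 / variance,
# then take the weighted average of the effects.
weights = [1 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
print(round(pooled, 3))  # → 0.261
```

Note how the pooled estimate sits closest to the most precise study (the one with the smallest variance), not the simple average of the three numbers.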
When a meta-analysis based only on published literature overestimates the support for a theory, because studies with null results tend to go unpublished
File Drawer Problem
Researchers present a hypothesis created after seeing the results as if it had been made in advance
HARKing (Hypothesizing After the Results are Known)
adding participants after checking the results, removing outliers, or trying new analyses to obtain p < .05
p-hacking
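One p-hacking tactic on the card above — adding participants and re-testing until p < .05 — can be simulated to show why it is a problem. A sketch with made-up parameters (group sizes, step size, a simple known-variance z-test): even when there is no true effect, optional stopping pushes the false-positive rate well above the nominal 5%.

```python
import math
import random

def z_test_p(a, b):
    """Two-sided p-value comparing the means of two unit-variance normal samples."""
    z = (sum(a) / len(a) - sum(b) / len(b)) / math.sqrt(1 / len(a) + 1 / len(b))
    return math.erfc(abs(z) / math.sqrt(2))

def hacked_experiment(start_n=20, step=10, max_n=100):
    """Keep adding participants and re-testing until p < .05 or max_n is reached."""
    a = [random.gauss(0, 1) for _ in range(start_n)]  # no true group difference
    b = [random.gauss(0, 1) for _ in range(start_n)]
    while True:
        if z_test_p(a, b) < 0.05:
            return True          # declared "significant" despite no real effect
        if len(a) >= max_n:
            return False
        a += [random.gauss(0, 1) for _ in range(step)]
        b += [random.gauss(0, 1) for _ in range(step)]

random.seed(1)
runs = 2000
rate = sum(hacked_experiment() for _ in range(runs)) / runs
print(f"False-positive rate with optional stopping: {rate:.1%}")
```

Each individual test holds the false-positive rate at 5%, but taking nine peeks at the data gives nine chances to cross the threshold by luck, which is why preregistering the sample size guards against this.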
practice of freely sharing one’s data, hypotheses, and materials
Open science
Researchers provide their full data set online so others can reproduce the results
Open data
Full set of measures and manipulations are shared
Open materials
Researcher states the hypothesis publicly before collecting any data
Preregistered
Extent to which the tasks/manipulations of a study are similar to those in an actual real-world context
Ecological Validity
The study’s intent is to test a theory or claim
Theory-Testing Mode
The study’s intent is to generalize its findings to other populations
Generalization Mode
How cultural settings shape individuals and how individuals, collectively, then shape their culture
Cultural psychology
Real-world setting for a research study
Field Study
Using the methods of science without assuming that all things have natural causes
Methodological Naturalism
Design that studies each possible combination of the levels of 2 manipulated IVs
Cross-Factorial Design
Design that includes one manipulated IV and one participant variable that is not manipulated but is treated as an IV
IVxPV Design
Maximum number of IVs recommended for an ideal study
4
Factorial design where all IVs are manipulated within a single group of participants
Within-Groups Factorial Design
Factorial design where at least one IV is manipulated between groups and at least one is manipulated within groups
Mixed Factorial Design
Validity priorities for small-N designs:
1) Internal: high
2) External: can be low because samples are small
3) Construct: important
4) Statistical: not very relevant
Benefits of small-N designs:
- allow research on rare conditions
- provide rich detail
- high internal validity
Problems with small-N designs:
- limited external validity
- ethical problems (in reversal designs, treatment must be taken away)
Reasons studies fail to replicate:
- design confounds
- ceiling and floor effects
- statistical flukes
- sample differences/biases
- reviewer issues
- user errors
- lack of funding/resources
- unclear procedure in the original study
How to increase a study’s replicability:
- make common techniques more accessible
- use the exact methodology when replicating
- preregister studies
- practice open science (share original data)
- improve the peer-review process
Benefits of Meta-Analysis:
- test new questions
- discover new trends and patterns
Potential Issues of Meta-Analysis:
- publication bias
- conclusions are limited by the studies included