Ch10 Flashcards
control variable
Control variable: any variable that an experimenter holds constant
how does an experiment establish covariance, how is it indicated?
Indicated by a difference in group means
- If the independent variable doesn't have different levels, covariance can't be established
- Covariance also depends on the outcome variable: if results are the same across conditions, the study finds no covariance, because the outcome didn't vary with the level of the independent variable
Comparison group (comparison condition)
a group that provides a comparison for the outcome of interest
- Without a comparison level, the independent variable has only one level, so covariance can't be established
control group
Control group (no treatment condition): the neutral group, level of the independent variable that doesn’t receive any treatment
treatment group
receive treatment condition/ manipulation
Placebo group (placebo control group, expectancy condition):
Placebo group (placebo control group, expectancy condition): a control group given an inert (fake) treatment, such as a sugar pill
design confounds
Design confounds: when a second variable varies systematically along with the intended independent variable; it acts as an alternative explanation for the results
systematic variability
Systematic variability: when the second variable affects only one group (varies along with the independent variable)
- confound
Unsystematic variability
random or haphazard; affects both groups equally, so it is not a confound
- Can make it difficult to detect differences in your dependent variable
selection effects
Selection effects: occur when the kinds of participants in one level of the independent variable are systematically different from the participants in the other level(s)
- Can happen when experimenters let participants choose their own level (self-selection)
- Or if experimenters assign one type of person to a particular condition
Matched groups (matching, matched subjects design):
first measure participants on a variable that may affect the dependent variable, then match participants in pairs, then randomly assign one of them to each condition
-Prevents selection effects
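The matched-groups procedure above can be sketched in a few lines. This is a minimal illustration with made-up participant names and pretest scores (an assumed matching variable such as IQ), not a prescribed procedure from the chapter:

```python
import random

# Hypothetical pretest scores on a variable that may affect the
# dependent variable (names and numbers are made up for illustration)
participants = {"Ana": 112, "Ben": 98, "Cal": 111, "Dee": 99}

# Sort by the matching variable so adjacent participants form matched pairs
ordered = sorted(participants, key=participants.get)
pairs = [ordered[i:i + 2] for i in range(0, len(ordered), 2)]

treatment, control = [], []
for pair in pairs:
    random.shuffle(pair)        # random assignment within each pair
    treatment.append(pair[0])   # one member goes to the treatment group
    control.append(pair[1])     # the other goes to the control group

print(treatment, control)
```

Because each pair is split randomly between conditions, the groups start out similar on the matching variable, which is what prevents selection effects.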
Independent-groups design (between-subjects design, between-groups design):
different groups of participants placed into different levels of the independent variable
types of Ind-groups design
- Posttest-only design
- pretest/posttest design
Posttest-only design (equivalent groups posttest design)
participants are randomly assigned to independent variable groups thus rendering the groups equivalent, and are tested on the dependent variable once, after the manipulation occurs
pretest/posttest design (equivalent groups pretest/posttest design)
pretest/posttest design (equivalent groups pretest/posttest design): participants are randomly assigned, are tested on dependent variable twice- before and after exposure to independent variable
which kind of ind. groups design can show there aren't selection effects
pretest/posttest: shows there's no selection effect if the groups' pretest results are equal/similar
Within groups design (within-subjects design, repeated measures design):
only one group of participants, each person presented with all levels of the independent variable
Types of within-group design
- Concurrent measures design
- repeated measures design
Concurrent measures design
type of w/in groups where participants exposed to all levels of independent variable at same time and a single preference is the dependent variable
Repeated measures design
type of w/in groups where participants are measured on a dependent variable after exposure to each level of the independent variable
pros of w/in groups
- participant “acts as their own control”
- Gives researchers more power to notice differences between conditions
- fewer participants
power
probability that a study will show statistically significant result when an independent variable truly has an effect in the population
- Within-groups designs increase power because they reduce the unsystematic variability (noise) that may obscure the actual effect
order effects
being exposed to one condition affects how participants respond to other conditions
- threat to w/in groups design
types of order effects
- Practice effects (fatigue effects): a long sequence may lead participants to get better at a task (practice) or to get tired/bored (fatigue)
- Carryover effects: contamination from one condition/level to next
counterbalancing
presenting the levels of the independent variable to different participants in different orders (often randomly assigned)
- helps cancel out order effects
types of counterbalancing
- Full counterbalancing: all possible condition orders are represented
- Partial counterbalancing: only some condition orders are represented
- e.g., a Latin square, in which each condition appears once in each position of the sequence
Demand characteristics
Demand characteristics (experimental demand): participants pick up on cues that lead them to guess an experiment's hypothesis
Manipulation check:
an extra dependent variable that researchers insert to determine if the experimental manipulation worked
- Used often when intention is to make participants think or feel some way (ex: making jokes to manipulate amusement, + a rating on how funny the jokes were)
pilot study
Pilot study: a simple study, using a separate group of participants, completed before (or sometimes after) the study of primary interest
pro of experimenting on homogenous group
Running an experiment on a non-diverse (homogeneous) sample lessens the chance that unsystematic variability will obscure the effect of the independent variable
coen’s d + what does it show
used in experiments instead of r
- d shows the groups differences of the results on the dependent variable
- Shows difference in groups means AND how much the scores overlap (std. deviation)
-Strengths of d
0.20 = small/weak
0.50 = medium/moderate
0.80 = large/strong
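The d computation above (mean difference divided by standard deviation) can be worked through with a minimal sketch. The scores are made up for illustration, and the pooled standard deviation shown here is one common way to combine the two groups' spreads:

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical scores on the dependent variable for two groups
treatment = [7, 8, 6, 9, 8]
control   = [5, 6, 4, 6, 5]

# Pooled standard deviation combines both groups' variability
n1, n2 = len(treatment), len(control)
s1, s2 = stdev(treatment), stdev(control)
pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

# d = difference in group means, in standard deviation units
d = (mean(treatment) - mean(control)) / pooled_sd
print(round(d, 2))  # 2.4 -> a large effect by the benchmarks above
```

A larger d means the group means are further apart relative to how much the scores overlap.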
threats
- Any design confounds? Did another variable accidentally covary with the independent variable?
- In an independent-groups design: control for selection effects using random assignment or matching
- In a within-groups design: control for order effects using counterbalancing