Chapter 10: Simple Experiments Flashcards
Experiment
Researchers manipulate at least one variable and measure another
Can take place anywhere that variables can be manipulated and measured
Manipulated Variable
Variable that is controlled
Levels created by researcher
Measured Variable
Record behaviour or attitudes
Self reports, behavioural observations, or physiological measures
What happens is recorded after the manipulated variable is introduced
Independent Variable
Manipulated (causal) variable
Researcher has independence in assigning people to levels of it
Independent variable should not be confused with its levels (referred to as conditions)
On x axis
E.g. emotion (anger, happiness, neutral responses to offer)
E.g. methods of note taking (paper, computer)
Dependent Variable
Outcome variable
How a participant acts on the measured variable depends on the level of the independent variable
Less control over dependent
Y axis
Control Variable
Variable that an experimenter holds constant on purpose
Need to ensure manipulating only one thing at a time
Crucial for internal validity
E.g. watch lecture in same room, same experimenter, same video, same questions
Criteria for Causal Claims
Covariance: do results show that the causal variable is related to the outcome variable
Temporal precedence: design ensures causal variable comes before outcome variable
Internal validity: design rules out alternative explanations for the results
Covariance
No difference in dependent variable between the manipulations = no covariance
A comparison group allows comparison between levels of the independent variable and their outcomes (establishes covariance)
All experiments need comparison group (doesn’t have to be a control group)
Control Group
Level of independent variable intended to represent ‘no treatment’ or neutral condition
Treatment Group
Other levels of independent variable aside from control
Placebo Group
When control group is exposed to inert treatment
Confound
Possible alternative explanations (threats to internal validity)
Design Confound
Experimenter’s mistake in designing the independent variable
Second variable happens to vary systematically along with the intended independent variable
Accidental second variable is alternative explanation
E.g. if all written notetakers were more interested in lecture than computer notetakers
Selection Effects
Results when the kinds of participants at one level of the independent variable are systematically different from those at the other
Can also happen when experiments let participants choose which group they want to be in (e.g. ask which note taking method people want to use)
Can also happen when researchers assign one type of person to first condition and another type to second condition (e.g. assigning all women or all people who sign up early to one condition)
E.g. families assigned to intensive therapy for autism condition were more eager to make it happen, lived closer, etc
Avoid with random assignment and matched groups
Random Assignment
Each participant has equal chance of being in each condition
This distributes different types of people roughly equally across conditions
Controls for selection effects
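As a minimal sketch (participant IDs and condition names here are illustrative), random assignment can be done by shuffling the participant list and dealing participants into conditions round-robin:

```python
import random

def random_assignment(participants, conditions, seed=None):
    """Shuffle participants, then deal them into conditions round-robin,
    so each person has an equal chance of landing in each condition."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    groups = {c: [] for c in conditions}
    for i, person in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(person)
    return groups

# 20 hypothetical participants assigned to two note-taking conditions
groups = random_assignment(list(range(20)), ["paper", "computer"], seed=1)
```

Because assignment depends only on chance, any participant characteristic (eagerness, ability, sign-up time) should end up spread evenly across conditions.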
Matched Groups
Measure participants on particular variable that might matter to dependent variable (e.g. student achievement (GPA) could matter for note taking experiment)
Match participants in pairs based on similarity (e.g. two highest GPAs)
Within matched sets randomly assign conditions
Prevents selection effects (ensures equal groups)
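The matched-groups steps above can be sketched as follows (the `matched_groups` helper, student names, and GPAs are hypothetical): sort by the matching variable, form sets of adjacent participants, then randomly assign conditions within each set.

```python
import random

def matched_groups(participants, score, conditions, seed=None):
    """Rank participants on a matching variable (e.g. GPA), group adjacent
    ranks into matched sets, then randomly assign conditions within each set."""
    rng = random.Random(seed)
    ranked = sorted(participants, key=score, reverse=True)
    k = len(conditions)
    groups = {c: [] for c in conditions}
    for i in range(0, len(ranked), k):
        matched_set = ranked[i:i + k]       # e.g. the two highest GPAs
        order = conditions[:]
        rng.shuffle(order)                  # random assignment within the set
        for cond, person in zip(order, matched_set):
            groups[cond].append(person)
    return groups

students = [("a", 3.9), ("b", 3.8), ("c", 3.1), ("d", 3.0)]
groups = matched_groups(students, score=lambda s: s[1],
                        conditions=["paper", "computer"], seed=0)
```

Each matched set is split across conditions, so the groups start out equal on the matching variable while assignment within each pair is still random.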
Independent Groups Design
Separate groups of participants are placed into different levels of the independent variable
Also called between-subjects or between-groups design
Posttest-only design & pretest-posttest design
Within-Groups Design
Each person is presented with all levels of the independent variable
Also called within-subjects design
Concurrent-measures design & repeated-measures design
Posttest-Only Design
AKA equivalent groups, posttest-only design
Participants randomly assigned to independent variable groups
Tested on the dependent variable once
E.g. note taking study, knowledge tested only once
Satisfies causation criteria
Pretest/Posttest Design
AKA equivalent groups, pretest/posttest design
Tested on the dependent variable once before and once after exposure to the independent variable
E.g. test verbal score before and after a two-week mindfulness class
Used when researchers want to ensure groups are equal or to study improvement over time
Repeated-Measures Design
Participants measured on a dependent variable more than once, after exposure to each level of independent variable
E.g. Participants experienced both levels: tasting chocolate together and tasting it while confederate was looking at painting
Each participant rated the chocolate twice (told it was different chocolate even though it was the same)
Concurrent-Measures Design
Participants exposed to all levels of independent variable at roughly the same time
A single behavioural or attitudinal preference serves as the dependent variable
E.g. babies shown faces (male and female) at the same time to see which they look at longer
Levels were male and female faces, exposed at the same time
Behavioural preference was looking preference (dependent)
Order Effects
Within-group designs
Being exposed to one condition first changes how participants react to the later condition (confound)
Avoid by counterbalancing
Practice Effects
Within-group designs
Also called fatigue effects
A long sequence might lead participants to get better at the task (practice) or to get tired or bored toward the end (fatigue)
Carryover Effects
Within-groups designs
Contamination carries over from one condition to the next
E.g. first bite of chocolate is always better than later bites
Counterbalancing
Present the levels of the independent variable to participants in different sequences
Any order effects should cancel each other out when data are combined
Full Counterbalancing
Used when a within-groups experiment has only two or three levels of the independent variable
All possible condition orders are represented
Can cause an increased need for participants
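Full counterbalancing amounts to enumerating every permutation of the condition order, which is why participant needs grow so quickly. A minimal sketch (condition names reused from the emotion example above):

```python
from itertools import permutations

conditions = ["anger", "happiness", "neutral"]

# Every possible presentation order of the three conditions.
# 3 conditions -> 3! = 6 orders, so the sample size must be a
# multiple of 6 to assign equal numbers to each order.
orders = list(permutations(conditions))
```

With four conditions there are already 4! = 24 orders, which is the usual motivation for switching to partial counterbalancing.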
Partial Counterbalancing
Only some of the possible condition orders are represented
Present conditions in randomized order for every subject
Latin square: formal system to ensure that every condition appears in each position exactly once
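A simple (cyclic) Latin square can be built by rotating the condition list, so each condition appears in each serial position exactly once; the `latin_square` helper and condition labels are illustrative, and balanced Latin squares (which also control for immediate sequence) need a different construction:

```python
def latin_square(conditions):
    """Cyclic Latin square: row i starts at condition i, so every
    condition appears exactly once in each position (column)."""
    n = len(conditions)
    return [[conditions[(i + j) % n] for j in range(n)] for i in range(n)]

# Four conditions -> only 4 orders (rows) instead of 4! = 24
square = latin_square(["A", "B", "C", "D"])
```

Each participant is then assigned one row of the square as their condition order.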
Manipulation Check
Extra dependent variable that researchers insert into an experiment to confirm that their experimental manipulation worked
More likely to be used when intention is to make participants think or feel certain ways
E.g. researchers manipulate feelings of anxiety by telling some students they have to give a speech
Manipulation check can determine whether the operationalization of anxiety worked as intended
Tests construct validity
Pilot Study
Simple study using a separate group of participants
Completed before or sometimes after the study of primary interest
To confirm effectiveness of the manipulations
Tests construct validity