Lecture 15: The Experimental Research Strategy II Flashcards
removing the confound
- We can eliminate some confounding variables (ex. Change the testing room, take away the element of surprise)
- But, we cannot eliminate all confounding variables
holding the confound constant
- If we cannot remove the confounding variable, we can try to hold it constant across conditions
- Ex. caffeine, surprise, music teacher
- Holding a variable constant eliminates its potential to become a confound
- By standardizing the environment and procedures (no noise, presence of music teacher), most environmental variables are held constant
- We can also restrict the confound to a narrow range of values (ex. Only using 30-35-year-old female participants)
problems with holding the confound constant
- Overly strict control can be impractical
- There is a trade-off between standardization and external validity
- Results cannot be generalized beyond the standardized sample
using a placebo control group
- Sometimes, the experimental method itself can become a confounding variable
- To control for this, we use a placebo-control group
matching a confound across conditions
- If we cannot remove the confounding variable or hold it constant, we can try to match its levels across conditions so they are balanced
- But, matching based on fixed values can limit generalizability (a threat to external validity)
- We can use counterbalancing of variables to reduce effects due to different average values
- When averages are matched, counterbalancing other factors can also be beneficial
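One way to picture counterbalancing is as rotating condition orders across participants so that order effects average out. Below is a minimal sketch; the condition names and group size are hypothetical, not from the lecture:

```python
from itertools import permutations

def counterbalance(conditions, n_participants):
    """Cycle through every possible condition order so each order
    is used (roughly) equally often across the sample."""
    orders = list(permutations(conditions))
    return [orders[i % len(orders)] for i in range(n_participants)]

# Hypothetical two-condition experiment: half the participants get
# caffeine first, half get placebo first, balancing order effects.
schedule = counterbalance(["caffeine", "placebo"], 6)
```

With two conditions there are only two orders; with k conditions a full counterbalance needs k! orders, which is why partial schemes such as Latin squares are often used instead.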
problems with matching across conditions
- Requires a lot of time and effort
- Reduces participant sample to choose from
when is matching across conditions recommended?
for specific sets of variables that pose serious threats to internal validity
randomizing participants assigned to conditions
- Randomly assign participants to treatment conditions so that the extraneous variables related to participants will balance out across the conditions
- The aim is to disrupt any systematic relationship between the extraneous and independent variables (to prevent an extraneous variable from becoming a confounding variable)
- Powerful method for controlling many environmental and participant variables simultaneously rather than individually
randomizing participants example study
- Participants were recruited for an experiment on one of 3 testing days
- Each participant was assigned randomly to “Intervention” or “Control” conditions
- Assign each participant randomly as they appear on testing day
how does randomly assigning participants to conditions distribute extraneous variables?
using unpredictable and unbiased procedures (ex. coin toss)
the downside of randomly assigning participants to conditions
- Does not guarantee control
- It’s still possible that all people with similar backgrounds (potential CVs) are assigned to one condition
- But, with large enough numbers, randomizing makes a balanced result highly likely (groups of >=20 participants per condition)
two aspects of manipulation checks
- check the manipulation
- include an exit questionnaire
checking the manipulation
take measures of the IV to make sure your manipulation did what you intended it to do (e.g., induced a sad vs. happy mood)
exit questionnaire
tests whether the participants were aware of the manipulation(s) and purpose of the experiment
when are manipulation checks especially important?
- participant manipulations
- subtle manipulations
- placebo controls
- simulations
participant manipulations
- Difficult to know whether the manipulation worked (especially compared to environmental manipulations)
- Include a measure of the IV (e.g., mood, frustration, stress) to assess whether it worked
subtle manipulations
- Difficult to know if participants noticed
- Exit questionnaire
- Ex. “Did you notice the expression on the experimenter’s face when she gave you the instructions?”
placebo controls
- Did participants believe the placebo was real?
- Exit questionnaire
- Ex. “What treatment did you receive? Did you feel it was effective? Were you aware you were being deceived?”
simulations
- Difficult to know if participants perceive the environment as real
- Exit questionnaire
- Ex. “What did you think when the other participants answered incorrectly? To what extent did you think about the fact that you were in an experiment?”
possible reasons that an experiment didn’t work
- IV is not sensitive enough
- DV is not sensitive enough
- IV has floor or ceiling effects
- DV has floor or ceiling effects
- Measurement error
- Insufficient power
- Hypothesis is wrong
example of an IV is not sensitive enough
IV = 2 foods to test preferences: chocolate or beets
possible solutions to an IV that is not sensitive enough
include more foods
example of a DV that is not sensitive enough
2 levels of preference to rate: yes or no
possible solution to a DV that is not sensitive enough
use a rating scale (7-point)
example of an IV with floor or ceiling effects
chocolate IV is at ceiling; beets IV is at floor
possible solution to an IV with floor or ceiling effects
include test items not as preferred/avoided
example of a DV with floor or ceiling effects
“yes” response is at the ceiling and “no” response is at the floor
possible solution to a DV with floor or ceiling effects
include responses in the middle range
example of measurement error
subject variables like hunger or a noisy test environment
possible solutions to measurement error
control those variables
example of insufficient power
not enough participants to detect a true effect of the IV
possible solution to insufficient power
increase sample size
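The link between sample size and power can be illustrated with a small Monte Carlo sketch: simulate many two-group experiments with a fixed true effect and count how often a simple z-style test detects it. The effect size (0.5 SD), normality assumption, and all other numbers here are illustrative assumptions, not values from the lecture:

```python
import random
import statistics

def simulated_power(n_per_group, effect=0.5, sims=1000, crit=1.96):
    """Fraction of simulated experiments whose standardized group
    difference exceeds the two-tailed critical value."""
    detections = 0
    for _ in range(sims):
        control = [random.gauss(0, 1) for _ in range(n_per_group)]
        treated = [random.gauss(effect, 1) for _ in range(n_per_group)]
        diff = statistics.mean(treated) - statistics.mean(control)
        se = ((statistics.variance(control)
               + statistics.variance(treated)) / n_per_group) ** 0.5
        detections += abs(diff / se) > crit
    return detections / sims

random.seed(0)
small = simulated_power(10)   # underpowered: often misses the true effect
large = simulated_power(50)   # much more likely to detect the same effect
```

The same true effect goes from frequently missed to reliably detected simply by increasing the number of participants per group.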
example of a hypothesis that is wrong
people do not prefer specific foods
possible solutions to a wrong hypothesis
compare it with previous studies
threats to internal validity
- history
- maturation
- statistical regression
- selection
- experimental attrition
- testing
- instrumentation
- design contamination
history
did some other current event affect the change in the dependent variable? Did all groups experience the same current events?
example of a history threat
A prominent hip-hop star was arrested on drug charges; fans are affected differently than non-fans in how they complete a survey on interest in media arts
maturation threat
were changes in the dependent variable due to normal developmental processes?
example of a maturation threat
Children age and change more quickly than adults
statistical regression threat
did subjects come from low- or high-performing groups whose scores will naturally regress toward the mean on retesting?
example of a statistical regression threat
Two schools compared pre- and post-test reading scores after a 4-week reading program; differences within each school may arise because the low scores (more common in school 1) cannot go lower
selection threat
were the participants self-selected or assigned randomly?
example of a selection threat
adolescents who choose to see a therapist may not be representative of the population
experimental attrition threat
Did some participants drop out in unequal numbers across conditions?
example of an experimental attrition threat
students who most dislike college drop out and are not included in a survey of whether study centers were better in high schools or universities.
testing threat
Did the previous testing affect the behaviour at later testing?
example of a testing threat
administering different vaccines to the same participants over time reduces their needle anxiety; the last vaccine may be most effective due to testing effects
instrumentation threat
Did the measurement method change during the research?
example of an instrumentation threat
the interviewer gets tired and does not ask as many questions of the last participants in the study with the 2-hour interview.
design contamination threat
did participants find out something about the experimental conditions?
example of a design contamination threat
People in a 2-person study about decision-making notice there is less opportunity to win in one condition than another; they start to compete more, believing that they are being judged
threats to external validity
- unique program features
- effects of selection
- effects of environment/setting
- effects of history
unique program features
There may have been an unusually motivated set of experimenters in some conditions. Can this experiment be replicated in another lab?
effects of selection
Was the recruitment and assignment of participants to conditions successful? Can this study be replicated with different participants?
effects of environment/setting
Can these results be replicated in other labs or other environments?
effects of history
Can these results be replicated in different periods?
lab simulation
Trying to bring the real world into the lab by creating conditions within an experiment that closely duplicate the natural environment
effect of lab simulation
increases external validity
mundane realism
how close the lab environment is to the real world
experimental realism
bringing only the psychological aspects into the lab (participants immersed in simulation may behave normally, not thinking or remembering they are in an experiment)
how did original lab simulations work?
- Examination of published simulations for psychological issues (pre-virtual reality)
- Strong preference for hypothetical scenarios as independent variables (manipulations)
- Strong preference for qualitative self-reports as dependent variables
current day lab simulations
- Realistic immersive stimuli (IV)
- Quantitative response measures (DV)
VR and emotional responses
- Highly immersive virtual reality (VR) environments influence viewers' emotional responses
- More positive emotional responses when immersed in virtual nature
- Similar to responses observed in actual nature
field studies
- Trying to bring the experimental strategy into the real world to increase external validity
- Can examine behaviours that would be difficult to simulate in the lab
examples of field studies
- Kitty Genovese was attacked near her apartment in 1964. 38 neighbours reportedly witnessed the attack, but no one intervened or called the police. Bystander effects may be different in a public area (bus, metro) than in a lab
- Spatial cognition in homing pigeons; taught a rooftop loft location and released birds 1-18 km away
strength of experimental studies
both simulations and field studies allow researchers to test behaviour in a more realistic environment than laboratories
weaknesses of experimental studies
- Field studies are difficult venues for controlling all extraneous variables
- Simulations are dependent on whether the participants believe the simulation is real
perils for experimental design
- Although experimental research requires theories for framing hypotheses for testing, much current experimental research is atheoretical. Without theories, the hypotheses tend to be ad hoc (after the finding) and possibly illogical or meaningless.
- Many measurement instruments (questionnaires, rating scales, equipment) used in experimental research are not tested for their reliability and validity and can be incomparable across studies. Consequently, results generated using those instruments are also incomparable.
- Experimental research sometimes uses inappropriate research designs, such as irrelevant dependent variables, no tests for interactions, no experimental controls, and non-equivalent stimuli across conditions. Findings from such studies tend to lack internal validity.
- The conditions used in experimental research may be incomparable or inconsistent across studies. The use of inappropriate tasks for participants introduces threats to external validity (would other participants have responded differently), making comparison of findings across studies difficult.
ways to avoid the perils of experimental design
- Use pre-validated stimulus materials and tasks if available
- Conduct treatment manipulation checks (by debriefing subjects after performing the assigned task)
- Conduct pilot tests with a small sample to verify that the IV manipulation and DV measures behave as intended
- Use tasks that are simpler and familiar for the participants, instead of tasks that are complex or unfamiliar
2 types of realism
mundane and experimental realism