Chapter 9: Basic Issues in Experimental Research Flashcards
Three Essential Properties of a Well-Designed Experiment
The researcher must:
- Vary or manipulate at least one independent variable to assess its effects on participants’ responses.
- Have the power to assign participants to experimental conditions in a way that assures their initial equivalence.
- Control extraneous variables that might influence the outcome of the experiment.
Independent Variables
In an experiment, the researcher varies (or manipulates) one or more independent variables. An independent variable must have two or more levels (different values). These levels can reflect either quantitative or qualitative differences in the independent variable.
Types of Independent Variables
- Environmental Manipulations: Modifications of the participants’ physical or social environment (e.g. temperature, interaction with confederates).
- Instructional Manipulations: Vary the independent variable through the verbal instructions that participants receive.
- Invasive Manipulations: Create physical changes in the participant’s body through surgery or the administration of drugs.
Priming
Activating a concept.
Subliminal Priming
Activate a concept outside of conscious awareness.
Supraliminal Priming
Activate a concept by (for instance) reading a paragraph.
Experimental Group
Participants in an experiment who receive a nonzero level of the independent variable.
Control Group
Participants in an experiment who receive a zero level of the independent variable (or the absence of the variable of interest).
Pilot Test
A preliminary study that examines the usefulness of manipulations or measures that will be used in an experiment.
Manipulation Checks
Questions designed to determine whether the independent variable was manipulated successfully.
Participant (or Subject) Variable
A personal characteristic of research participants, such as age, gender, self-esteem, weight, or extraversion. Subject variables are not true independent variables because they are not manipulated by the researcher (may be used in a quasi-experiment).
Dependent Variable
The response being measured in a study, typically a measure of participants’ thoughts, feelings, behaviour, or physiological reactions.
Simple Random Assignment
Participants are placed in experimental conditions in such a way that every participant has an equal probability of being in any condition. Random assignment is used to make the conditions roughly equivalent at the start of the study.
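The procedure above can be sketched in a few lines of Python (function and participant names are hypothetical, for illustration only): each participant is independently given an equal chance of landing in any condition.

```python
import random

def simple_random_assignment(participants, conditions, seed=None):
    """Assign each participant to a condition with equal probability."""
    rng = random.Random(seed)  # seeding makes the assignment reproducible
    return {p: rng.choice(conditions) for p in participants}

# Example: six participants, two conditions
assignment = simple_random_assignment(
    ["P1", "P2", "P3", "P4", "P5", "P6"],
    ["treatment", "control"],
    seed=42,
)
```

Note that with small samples, simple random assignment can by chance produce unequal group sizes; this is one motivation for the matched procedure below.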
Matched Random Assignment
Participants are matched into homogeneous blocks, and then participants within each block are assigned randomly to conditions. Matched random assignment helps to ensure that the conditions will be similar along some specific dimension, such as age or intelligence.
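One common way to implement this (a sketch with hypothetical names, assuming a single numeric matching variable such as an intelligence score): rank participants on the matching variable, cut the ranking into blocks whose size equals the number of conditions, and shuffle condition labels within each block.

```python
import random

def matched_random_assignment(scores, conditions, seed=None):
    """Rank participants on the matching variable, form homogeneous
    blocks of size len(conditions), then randomly assign conditions
    within each block."""
    rng = random.Random(seed)
    ranked = sorted(scores, key=scores.get)  # order by matching variable
    k = len(conditions)
    assignment = {}
    for i in range(0, len(ranked), k):
        block = ranked[i:i + k]          # one homogeneous block
        shuffled = conditions[:]
        rng.shuffle(shuffled)            # random assignment within the block
        for participant, condition in zip(block, shuffled):
            assignment[participant] = condition
    return assignment

# Example: match on a test score, two conditions
assignment = matched_random_assignment(
    {"P1": 10, "P2": 30, "P3": 20, "P4": 40},
    ["treatment", "control"],
    seed=1,
)
```

Because each block contributes one participant to every condition, the groups end up balanced on the matching variable by construction.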
Repeated Measures (Within-Subjects) Design
An experimental design in which each participant serves in all conditions of the experiment. Repeated-measures designs eliminate the need for random assignment because every participant is tested at every level of the independent variable.
Between-Subjects Design
Designs in which each participant serves in only one experimental condition.
Advantages of Within-Subjects Designs
- More Powerful (removes error variance due to individual differences)
- Require Fewer Participants
Disadvantages of Within-Subjects Designs
Order Effects (i.e. when the effects of a particular experimental condition are contaminated by its order in the sequence of experimental conditions in which participants are tested).
Types of Order Effects
- Practice Effects: Participants’ responses are affected by completing the dependent variable many times.
- Fatigue Effects: Participants become tired or bored as the experiment progresses.
- Sensitization: Participants gradually become suspicious of the hypothesis as the experiment progresses.
Counterbalancing
Counterbalancing is used to protect against order effects and involves presenting the levels of the independent variables in different orders to different participants.
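Full counterbalancing, in which every possible order is used, can be generated mechanically (a minimal Python sketch; the function name is hypothetical):

```python
from itertools import permutations

def all_orders(levels):
    """Full counterbalancing: every possible presentation order
    of the levels of the independent variable."""
    return [list(p) for p in permutations(levels)]

# Three levels yield 3! = 6 orders; participants are distributed
# across these orders so each order is used equally often.
orders = all_orders(["A", "B", "C"])
```

With many levels the number of orders grows factorially, so in practice researchers often use a subset of orders (e.g. a Latin square) rather than full counterbalancing.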
Carryover Effects
Carryover effects occur in within-subjects designs when effects of one treatment condition are still present when the participant is tested in another condition.
Systematic (Between Groups) Variance
The part of total variance in participants’ responses that reflects differences among the experimental groups; if the independent variable has an effect on behaviour, we should see systematic differences between the scores in the various experimental conditions.
Two Sources of Systematic Variance
- Treatment Variance (Primary): The portion of the systematic variance that is due to the independent variable.
- Confound Variance (Secondary): The portion of the systematic variance that is due to extraneous variables that differ systematically between the experimental groups. Confound variance MUST be eliminated. (Confound variance is different from error variance.)
Error (Within-Groups) Variance
The portion of the total variance in participants’ responses that remains unaccounted for after systematic variance due to the independent variable is removed. It is unsystematic variance that is unrelated to the independent variable(s) under investigation in an experiment.
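The split between systematic (between-groups) and error (within-groups) variance can be made concrete with the sums of squares used in a one-way ANOVA (a minimal sketch; function name hypothetical):

```python
def partition_variance(groups):
    """Split the total sum of squares into a between-groups
    (systematic) component and a within-groups (error) component."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)

    # Total variability around the grand mean
    ss_total = sum((x - grand_mean) ** 2 for x in all_scores)

    # Between-groups: how far each group mean sits from the grand mean
    ss_between = sum(
        len(g) * ((sum(g) / len(g)) - grand_mean) ** 2 for g in groups
    )

    # Within-groups: variability of scores around their own group mean
    ss_within = sum(
        sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups
    )
    return ss_total, ss_between, ss_within

ss_total, ss_between, ss_within = partition_variance([[1, 2, 3], [4, 5, 6]])
```

The identity ss_total = ss_between + ss_within mirrors the text: total variance is the sum of systematic and error variance (with confound variance, when present, hidden inside the between-groups component).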
Internal Validity
The degree to which a study draws accurate conclusions about the effects of the independent variable on a dependent variable. To have internal validity, researchers must eliminate all potential confounds.
Threats to Internal Validity
- Biased Assignment of Participants to Conditions: Effects are due to initially nonequivalent groups rather than to the IV; this can occur when random assignment fails.
- Differential Attrition: Participants drop out of experimental conditions at different rates, making the experimental groups no longer equivalent.
- Pretest Sensitization: Completing a pretest leads participants to react differently to the IV than they would have reacted had they not been pretested.
- History Effects: Extraneous events occurring outside of the research setting have an effect on participants’ responses.
- Miscellaneous Design Confounds: Something other than the IV differs systematically between the experimental conditions.
Experimenter Expectancy Effects
Occur when a researcher’s expectations about the outcome of a study influence participants’ reactions.
Demand Characteristics
Occur when aspects of a study indicate to participants how they should respond.
Double-Blind Procedures
Used to avoid experimenter expectancy effects and demand characteristics; in a double-blind procedure, neither the participants nor the experimenter who interacts with them knows which condition the participant is in.
Placebo Effect
A physiological or psychological change that occurs as a result of the belief that an effect will occur.
Placebo Control Group
Some participants are administered an ineffective treatment (placebo). If there is a difference between the true control group and the placebo control group, we know that a placebo effect is present.
Sources of Error Variance
- Individual Differences: Pre-existing differences between people.
- Transient States: At the time of the experiment, participants differ in how they feel.
- Environmental Factors: Differences in the condition under which the study is conducted.
- Differential Treatment: Treating different participants in slightly different ways.
- Measurement Error: Unreliable measures contribute to error variance.
External Validity
The degree to which the results obtained in one study can be replicated or generalized to other samples, research settings, and procedures.