Chapter 10: Introduction to Simple Experiments Flashcards

1
Q

Experiment

A

A study in which at least one variable is manipulated and another is measured. Can be conducted in a lab or in a real-world setting.

2
Q

Manipulated variable

A

A variable in an experiment that a researcher controls, such as by assigning participants to its different levels (values). The IV in an experiment.

3
Q

Measured variable

A

A variable in a study whose levels (values) are observed and recorded. The DV in an experiment.

4
Q

Independent variable

A

In an experiment, a variable that is manipulated. Predictor/causal factor. Hint: Researcher has some “independence” in assigning people to different levels of the variable. Typically graphed on the x-axis.

5
Q

Dependent variable

A

In an experiment, the variable that is measured (aka outcome). Hint: How a participant acts on a measured variable depends on the level of the independent variable. Typically graphed on the y-axis.

6
Q

Condition

A

One of the levels of the IV in an experiment.

7
Q

Control variable

A

In an experiment, a variable that a researcher holds constant on purpose. Helps us eliminate alternative explanations, which contributes to the strength of internal validity.

8
Q

Control group/condition

A

A level of an independent variable that is intended to represent “no treatment” or a neutral condition.

9
Q

Treatment group

A

The participants in an experiment who are exposed to the level of the independent variable that involves a medication, therapy, or intervention.

10
Q

Placebo group

A

A control group in an experiment that is exposed to an inert treatment, such as a sugar pill. Also called placebo control group.

11
Q

Comparison group/condition

A

A group in an experiment whose levels on the independent variable differ from those of the treatment group in some intended and meaningful way.

12
Q

Confound

A

A general term for a potential alternative explanation for a research finding; a threat to internal validity.

13
Q

Design confound

A

A threat to internal validity in an experiment in which a second variable happens to vary systematically along with the independent variable and therefore is an alternative explanation for the results. Low internal validity, cannot make a causal claim.

14
Q

Systematic variability

A

In an experiment, a description of when the levels of a variable coincide in some predictable way with the experimental group membership, creating a potential confound.

15
Q

Unsystematic variability

A

In an experiment, a description of when the levels of a variable fluctuate independently (randomly) of experimental group membership, contributing to variability within groups.

16
Q

Selection effect/confound

A

A threat to internal validity that occurs in an independent-groups design when the kinds of participants at one level of the independent variable are systematically different from those at the other level.
Can happen when the participants get to select which condition to be in.
Random assignment fixes this! Let randomness distribute people.

17
Q

Random assignment

A

The use of a random method (such as a coin flip) to assign participants to different experimental groups. Lets randomness distribute participant characteristics evenly across groups, making the groups comparable and helping to establish internal validity.
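A minimal Python sketch of the procedure (the participant IDs, group count, and fixed seed are made up for illustration; the seed only makes the example reproducible):

```python
import random

def random_assignment(participants, n_groups, seed=None):
    """Shuffle the pool, then deal participants into groups round-robin,
    so chance (not the researcher or the participants) decides membership."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)
    return [pool[i::n_groups] for i in range(n_groups)]

# Hypothetical example: 8 participants split into treatment vs. control
treatment, control = random_assignment(range(1, 9), n_groups=2, seed=42)
```

Without the seed, group composition differs from run to run; what stays fixed is that every participant has an equal chance of landing in either group.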

18
Q

Matched groups

A

An experimental design technique in which participants who are similar on some measured variable are grouped into sets; the members of each matched set are then randomly assigned to different experimental conditions.
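The technique can be sketched in Python (the pretest scores and participant labels below are hypothetical; the key steps are sort on the matching variable, chunk into matched sets, then randomize within each set):

```python
import random

def matched_groups(scores, n_conditions, seed=None):
    """Rank participants on the matching variable, form sets of size
    n_conditions, then randomly assign each set's members to conditions."""
    rng = random.Random(seed)
    ranked = sorted(scores, key=scores.get)      # order by the measured variable
    conditions = [[] for _ in range(n_conditions)]
    for i in range(0, len(ranked), n_conditions):
        matched_set = ranked[i:i + n_conditions]
        rng.shuffle(matched_set)                 # random assignment within the set
        for group, person in zip(conditions, matched_set):
            group.append(person)
    return conditions

# Hypothetical pretest scores used as the matching variable
scores = {"P1": 10, "P2": 32, "P3": 11, "P4": 30, "P5": 21, "P6": 19}
group_a, group_b = matched_groups(scores, n_conditions=2, seed=1)
```

Each matched pair ends up split across the two conditions, so the groups start out similar on the matching variable while assignment within a pair is still random.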

19
Q

Independent-groups design

A

An experimental design in which different groups of participants are exposed to different levels of the IV, such that each participant experiences only one level of the IV. Aka between-subjects design. Examines the IV by assigning participants to INDEPENDENT GROUPS (i.e. conditions).
Includes the posttest-only design and the pretest/posttest design.

20
Q

Within-groups design

A

An experimental design in which each participant is presented with all levels of the independent variable. AKA within-subjects design. Examines the IV by looking WITHIN each participant.
Includes repeated-measures and concurrent-measures designs. Each participant is their own control, which ensures that the conditions are comparable. These designs also provide more statistical power (the ability to detect a statistically significant difference between conditions) and require fewer people, since the same participants serve in every condition/level.

21
Q

Posttest-only design

A

An experiment using an independent-groups design in which participants are tested on the dependent variable only once. This design satisfies all three criteria for causation.

22
Q

Pretest/posttest design

A

An experiment using an independent-groups design in which participants are tested on the key dependent variable twice: once before and once after exposure to the independent variable.
Allows the researcher to evaluate whether random assignment worked and to examine how the groups changed over time.

23
Q

Repeated-measures design

A

An experiment using a within-groups design in which participants respond to a DV more than once, after exposure to each level of the IV. Example: chocolate experiment

24
Q

Concurrent-measures design

A

An experiment using a within-groups design in which participants are exposed to all levels of an independent variable at roughly the same time, and a single attitudinal or behavioral preference is the dependent variable. Example: Harlow’s baby monkey experiment with wire and cloth monkey “mothers”

25
Q

Order effects

A

In a within-groups design, a threat to internal validity in which exposure to one condition changes participant responses to a later condition. Example: the baby exposed to the male and female face in a concurrent-measures design

26
Q

Practice effect

A

A type of order effect in which participants’ performance improves over time because they become practiced at the dependent measure (not because of the manipulation or treatment). A related order effect is the fatigue effect, in which performance declines because participants get tired or bored toward the end of the experiment.

27
Q

Carryover effect

A

A type of order effect in which some form of contamination carries over from one condition to the next. Example: you feel weird being watched while eating chocolate, feel better once you’re alone, and your rating of the chocolate changes as a result.

28
Q

Counterbalancing

A

In a repeated-measures experiment, presenting the levels of the independent variable to participants in different sequences to control for order effects.

29
Q

Full counterbalancing

A

A method of counterbalancing in which all possible condition orders are represented.
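In code, full counterbalancing is just enumerating every permutation of the conditions; with k conditions there are k! orders, which is why it quickly becomes impractical (the condition names here are made up):

```python
from itertools import permutations

conditions = ["dark chocolate", "milk chocolate", "white chocolate"]
orders = list(permutations(conditions))

print(len(orders))  # 3 conditions -> 3! = 6 possible sequences
# With 4 conditions there would already be 4! = 24 sequences to cover.
```

Every condition appears in every ordinal position equally often across the 6 sequences, which is exactly the property counterbalancing is after.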

30
Q

Partial counterbalancing

A

A method of counterbalancing in which some, but not all, of the possible condition orders are represented.

31
Q

Latin square

A

A formal system of partial counterbalancing to ensure that every condition in a within-groups design appears in each position at least once.
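One common construction is the cyclic Latin square, in which each row rotates the condition list by one position; a sketch in Python (the condition labels are arbitrary):

```python
def latin_square(conditions):
    """Cyclic Latin square: row i is the condition list rotated by i,
    so every condition appears in every ordinal position exactly once."""
    k = len(conditions)
    return [[conditions[(i + j) % k] for j in range(k)] for i in range(k)]

# 4 conditions -> only 4 sequences, versus the 24 full counterbalancing needs
square = latin_square(["A", "B", "C", "D"])
for row in square:
    print(row)
```

Note that a simple cyclic square balances ordinal positions but not which condition immediately precedes which; balanced Latin squares address that too.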

32
Q

Demand characteristic

A

A cue that leads participants to guess a study’s hypothesis or goals; a threat to internal validity.

33
Q

Manipulation check

A

In an experiment, an extra dependent variable researchers can include to determine how well a manipulation worked.

34
Q

Pilot study

A

A study completed before (or sometimes after) the study of primary interest, usually to test the effectiveness or characteristics of the manipulations.

35
Q

How experiments establish covariance

A

Design the IV with a comparison/control group so covariance can be shown.
Look at the results to demonstrate group mean differences.

36
Q

How experiments establish temporal precedence

A

Because the researcher manipulates the IV, they know it comes first. This is one of the major advantages of experiments over correlational designs.

37
Q

How well-designed experiments establish internal validity

A

Use control variables to demonstrate that the IV is responsible for change in the outcome.

38
Q

Which design should you pick?

A

Typically, you can trust random assignment (especially if the groups are large enough), so a pretest isn’t always necessary.
A pretest can bias participants: it may make them suspicious of what the researchers are measuring, or lead them to simply repeat their answers on the posttest (it can tip participants off to what the experiment is about).
It can also create a testing effect: participants respond differently on the posttest due to something like fatigue, choosing identical responses, etc.
Posttest-only is very powerful; use pretest/posttest if you want to double-check random assignment and the pretest won’t affect the participants.

39
Q

Interrogating causal claims with the four validities

A

Construct validity: How well were the variables measured and manipulated? Look for manipulation checks and pilot studies.
External validity: To whom or to what can the causal claim generalize? Look at the sampling/recruitment method.
Statistical validity: How well do the data support the causal conclusion? Look at the p value and effect size.
Internal validity: Are there alternative explanations for the outcome? This is the most important validity for experiments! Are there any design confounds? Selection effects? Order effects?