Chapter 10: Intro to Simple Experiments Flashcards

1
Q

What is an ‘experiment?’

A

A study in which at least one variable is manipulated and another is measured. Experiments can take place anywhere the researcher can manipulate one variable and measure another.

2
Q

What is a ‘manipulated variable?’

A

A variable in an experiment that a researcher controls, such as by assigning participants to its different levels (values).

3
Q

What is a ‘measured variable?’

A

A variable in a study whose levels (values) are observed and recorded.

4
Q

What are examples of ‘manipulated variables?’

A

A variable is manipulated when the researcher assigns participants to a particular level (value) of the variable; the participants do not get to choose which level they receive.

5
Q

What are examples of a ‘measured variable?’

A

It can take the form of records of behaviour or attitudes, such as self-reports, behavioural observations, or recorded physiological measures.

6
Q

What is an ‘independent variable?’

A

In an experiment, the variable that is manipulated. In a multiple-regression analysis, a predictor variable used to explain variance in the criterion variable.

7
Q

What are ‘conditions?’

A

The levels of the independent variable in an experiment; each level is one condition.

8
Q

What is a ‘dependent variable?’

A

In an experiment, the variable that is measured. In a multiple-regression analysis, the single outcome or criterion variable the researchers are most interested in understanding or predicting.

9
Q

What is a ‘control variable?’

A

In an experiment, a variable that a researcher holds constant on purpose. Control variables are not really variables at all, because they do not vary in an experiment; experimenters keep their levels the same for all participants.

10
Q

What is the purpose of the ‘control variable?’

A

Control variables enhance the internal validity of a study by limiting the influence of confounding and other extraneous variables. This helps establish a causal relationship between the variables of interest.

11
Q

What are the minimum requirements for a study to be an experiment?

A

At minimum, an experiment requires one manipulated variable and one measured variable.

12
Q

What is ‘covariance?’

A

Covariance measures the direction of the relationship between two variables. A positive covariance means that both variables tend to be high or low at the same time. A negative covariance means that when one variable is high, the other tends to be low.
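As a minimal sketch (Python, with hypothetical data), the sign of the sample covariance shows the direction of the relationship between two sets of scores:

```python
def covariance(xs, ys):
    # Sample covariance: average product of deviations from each mean.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    return sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / (n - 1)

# Hypothetical example: hours studied and exam scores tend to be
# high together, so the covariance comes out positive.
hours = [1, 2, 3, 4, 5]
scores = [55, 60, 70, 80, 85]
print(covariance(hours, scores))  # 20.0 (positive: variables rise together)
```

If one list were reversed, each high value would pair with a low value and the covariance would be negative.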

13
Q

What is ‘temporal precedence?’

A
Establishing that the cause (i.e., the independent variable) occurs before the effect (i.e., the outcome). One of the three criteria for causation.
14
Q

What are the three rules for causation?

A
  1. Covariance
  2. Temporal precedence
  3. Internal validity
15
Q

What is ‘internal validity?’

A

Refers to whether the effects observed in a study are due to the manipulation of the independent variable and not some other factor.

16
Q

What is a ‘comparison group?’

A

A group in an experiment whose levels on the independent variable differ from those of the treatment group in some intended and meaningful way. Also called the comparison condition.

17
Q

What is a ‘control group?’

A

A level of an independent variable that is intended to represent “no treatment” or a neutral condition. Also called control condition.

18
Q

What is a ‘treatment group?’

A

The participants in an experiment who are exposed to the level of the independent variable that involves a medication, therapy, or intervention.

19
Q

What is a ‘placebo group?’

A

A control group in an experiment that is exposed to an inert treatment, such as a sugar pill. Also called the placebo control group.

20
Q

What is a ‘confound?’

A

A general term for a potential alternative explanation for a research finding; a threat to internal validity.

21
Q

What is a ‘design confound?’

A

A threat to internal validity in an experiment in which a second variable happens to vary systematically along with the independent variable and therefore is an alternative explanation for the results.
- A mistake in designing the independent variable; it occurs when a second variable happens to vary systematically along with the intended independent variable.

22
Q

What is ‘systematic variability?’

A

In an experiment, a situation in which the levels of a variable coincide in some predictable way with experimental group membership, creating a potential confound.
- Introduces a confound

23
Q

What is ‘unsystematic variability?’

A

In an experiment, a situation in which the levels of a variable fluctuate independently of experimental group membership, contributing to variability within groups.

24
Q

What are ‘selection effects?’

A

A threat to internal validity that occurs in an independent-groups design when the kinds of participants at one level of the independent variable are systematically different from those at the other level.

25
Q

What is ‘random assignment?’

A

The use of a random method (e.g. flipping a coin) to assign participants into different experimental groups.
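The coin-flip idea can be sketched in a few lines of Python (a minimal illustration; the function name and group labels are hypothetical):

```python
import random

def random_assignment(participants, groups):
    # Shuffle the participant pool, then deal participants into the
    # groups round-robin, so group membership is determined by chance.
    pool = list(participants)
    random.shuffle(pool)
    return {g: pool[i::len(groups)] for i, g in enumerate(groups)}

assignment = random_assignment(range(20), ["treatment", "control"])
print(len(assignment["treatment"]), len(assignment["control"]))  # 10 10
```

Because the shuffle is random, any pre-existing participant differences are spread evenly across the groups on average, which is what prevents selection effects.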

26
Q

What are ‘matched groups’?

A

An experimental design technique in which participants who are similar on some measured variable are grouped into sets; the members of each matched set are then randomly assigned to different experimental conditions. Also called matching.
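The matching procedure can be sketched as follows (a minimal Python illustration; the function name and the use of a score dictionary are hypothetical):

```python
import random

def matched_groups(scores, conditions):
    # scores: mapping of participant -> score on the matching variable.
    # Rank participants on the matching variable, slice them into
    # matched sets of size len(conditions), then randomly assign the
    # members of each set to the different conditions.
    ranked = sorted(scores, key=scores.get)
    k = len(conditions)
    assignment = {c: [] for c in conditions}
    for i in range(0, len(ranked) - len(ranked) % k, k):
        matched_set = ranked[i:i + k]
        random.shuffle(matched_set)
        for condition, participant in zip(conditions, matched_set):
            assignment[condition].append(participant)
    return assignment
```

Because each matched set contributes one member to every condition, the groups start out roughly equal on the matching variable, while the within-set shuffle preserves randomness.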

27
Q

How does using matched groups prevent selection effects?

A

Matching retains the advantage of randomness: because each member of a matched set is randomly assigned to a condition, selection effects are prevented. The method also ensures the groups are equal on some important variable before the manipulation of the independent variable.

28
Q

How are design confounds and control variables related?

A

Control variables are used to eliminate potential design confounds: by holding extraneous variables constant, the experimenter prevents them from varying systematically along with the independent variable.

29
Q

Independent-groups design / Between-subjects design / Between-groups design

A

An experimental design in which different groups of participants are exposed to different levels of the independent variable, such that each participant experiences only one level of the independent variable.

30
Q

Within-groups design

A

An experimental design in which each participant is presented with all levels of the independent variable.
- Within-groups designs require fewer participants

31
Q

Posttest-only design / Equivalent groups, posttest-only design

A

An experiment using an independent-groups design in which participants are tested on the dependent variable only once.

32
Q

Pretest/posttest design / equivalent groups pretest/posttest design

A

An experiment using an independent-groups design in which participants are tested on the key dependent variable twice: once before and once after exposure to the independent variable.

33
Q

What is the difference between independent-groups and within-groups designs? Use the term levels in your answer.

A

In an independent-groups design, each participant experiences only one level of the IV; the levels of the IV are independent of one another. In a within-groups design, each participant experiences all levels of the IV.

34
Q

Describe why posttest-only and pretest/posttest designs are both independent-group designs. Explain how they differ.

A

Posttest-only: participants are randomly assigned to independent-variable groups and are tested on the dependent variable once. This design satisfies all three criteria for causation: it allows researchers to establish covariance by detecting differences in the dependent variable, it establishes temporal precedence because the independent variable comes first in time, and (with random assignment) it establishes internal validity.
Pretest/posttest: participants are randomly assigned to at least two groups and are tested on the key dependent variable twice, once before and once after exposure to the independent variable. Researchers might use a pretest/posttest design to evaluate whether random assignment made the groups equal.

35
Q

Repeated-measures design

A

An experiment using a within-groups design in which participants respond to a dependent variable more than once after exposure to each level of the independent variable.

36
Q

Concurrent-measures design

A

An experiment using a within-groups design in which participants are exposed to all levels of an independent variable at roughly the same time, and a single attitudinal or behavioural preference is the dependent variable.

37
Q

Order effects

A

In a within-groups design, a threat to internal validity in which exposure to one condition changes participant responses to a later condition.

38
Q

Practice effect / fatigue effect

A

A type of order effect in which participants’ performance changes over time: it improves because they become practiced at the dependent measure (practice effect), or declines because they become tired or bored (fatigue effect), not because of the manipulation or treatment.

39
Q

Carryover effect

A

A type of order effect in which some form of contamination carries over from one condition to the next.

40
Q

Counterbalancing

A

In a repeated-measures experiment, presenting the levels of the independent variable to participants in different sequences to control for order effects.

41
Q

Full counterbalancing

A

A method of counterbalancing in which all possible condition orders are represented.
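Enumerating all possible condition orders is exactly what `itertools.permutations` does; a minimal sketch with hypothetical condition labels:

```python
from itertools import permutations

# With three conditions there are 3! = 6 possible sequences;
# full counterbalancing uses every one of them.
conditions = ["A", "B", "C"]
orders = list(permutations(conditions))
print(len(orders))  # 6
```

Note that the number of orders grows factorially (4 conditions already need 24 orders), which is why partial counterbalancing exists.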

42
Q

Partial counterbalancing

A

A method of counterbalancing in which some, but not all, of the possible condition orders are represented.

43
Q

Latin Square

A

A formal system of partial counterbalancing that ensures every condition in a within-groups design appears in each position at least once.
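One simple way to build such a square (a sketch, not the only construction; the cyclic-rotation approach is an assumption here) is to start each row one condition later than the previous row:

```python
def latin_square(conditions):
    # Cyclic rotation: row i starts with condition i, so every
    # condition appears in every serial position exactly once
    # across the set of orders.
    n = len(conditions)
    return [[conditions[(i + j) % n] for j in range(n)] for i in range(n)]

for row in latin_square(["A", "B", "C", "D"]):
    print(row)
```

With 4 conditions this needs only 4 orders instead of the 24 required by full counterbalancing, while still placing each condition in each position.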

44
Q

Demand Characteristics

A

A cue that leads participants to guess a study’s hypotheses or goals; is a threat to internal validity.

45
Q

Is a pretest/posttest design a repeated-measures design?

A
  • In a true repeated-measures design, participants are exposed to all levels of a meaningful independent variable, and the levels can be counterbalanced.
  • In a pretest/posttest design, each participant sees only one level of the independent variable, not all levels, so it is not a repeated-measures design.
46
Q

Manipulation Check

A

In an experiment, an extra dependent variable researchers can include to determine how well a manipulation worked.

47
Q

Pilot Study

A

A study completed before (or sometimes after) the study of primary interest, usually to test the effectiveness or characteristics of the manipulations.

48
Q

How do manipulation checks provide evidence for the construct validity of an experiment’s independent variable? Why does theory matter in evaluating construct validity?

A

A manipulation check is an additional DV (dependent variable) that researchers can insert into an experiment to quantify how well an experimental manipulation worked; if the check shows the manipulation changed what it was intended to change, it provides evidence for the construct validity of the independent variable.

Theory matters in evaluating construct validity because it indicates whether the manipulation and measures are testing the right constructs, so the results actually answer the research question.

49
Q

Besides generalization to other people, what other aspect of generalization does external validity address?

A

Generalization to other situations.

50
Q

What does it mean when an effect size is large (as opposed to small) in an experiment?

A

The causal effect is stronger or more important: the independent variable caused the dependent variable to change for more of the participants in the study.

51
Q

Summarize three threats to internal validity discussed in this chapter:

A

Design confounds (a second variable varies systematically along with the independent variable), selection effects (the kinds of participants differ systematically across groups), and order effects (in within-groups designs, exposure to one condition changes responses to a later condition).