Lecture 6: Experimental Research Flashcards

1
Q

What is experimental research?

A

A research methodology in which the researcher seeks to observe how changes in one or more variables affect other variables.

It allows investigation of causal relationships.

Requires high levels of control.

2
Q

What is an independent variable (IV)?

A

The variable that is manipulated (the assumed ‘cause’)

3
Q

What is the dependent variable (DV)?

A

The outcome variable that is measured (the assumed ‘effect’)

4
Q

What are extraneous variables?

A

Any other variables that affect the relationship between the IV and DV (e.g. ones that could weaken or distort results)

5
Q

What are confounding variables (or confounds)?

A

A type of extraneous variable that systematically varies with the independent variable (i.e. one which might otherwise be responsible for the changes in the dependent variable)

6
Q

What are controls?

A

Measures built into the research design through which researchers aim to control or eliminate extraneous variables

7
Q

What are some design issues?

A

Randomisation (see the sketch below)

  • random assignment of participants and treatments to groups
  • scatters extraneous variables between conditions

Matching
  • assigning participants to conditions based on pertinent attributes
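A minimal Python sketch of random assignment, assuming 20 hypothetical participants and two conditions (all IDs and group names are illustrative, not from the lecture):

  import random

  participants = list(range(1, 21))    # 20 hypothetical participant IDs
  random.shuffle(participants)         # randomise the order
  treatment_group = participants[:10]  # first half assigned to the treatment condition
  control_group = participants[10:]    # second half assigned to the control condition

  print("Treatment:", treatment_group)
  print("Control:", control_group)

Because assignment is random, extraneous participant characteristics should be scattered roughly evenly between the two conditions.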

8
Q

What are laboratory studies?

A

Experiment conducted in an artificial setting (such as a testing laboratory)

Researcher has much greater control over extraneous variables

9
Q

What are field studies?

A

Experiment conducted in a natural setting

  • difficult to control extraneous variables
  • conditions may be naturally occurring rather than formed by the researcher
10
Q

What is a between-subjects design?

A

When comparisons are made between different groups of people

11
Q

What is a within-subjects design?

A

When comparisons are made using the same group of people (e.g. with pre- and post- designs)

12
Q

What is internal validity?

A

Whether variation in the DV can be confidently attributed to the variation in the IV

13
Q

What is external validity?

A

The extent to which the results can be generalised beyond the current study.

  • Is it likely that the same result will also be found in non-experimental settings, with other groups, etc.?
  • Does it generalise to the wider population?
14
Q

What are the threats to internal validity? (7)

A
  • Maturation effects
    • Systematic changes in the participant across time points or within the course of a study
    • Long-term maturation?
    • Short-term maturation?
  • History effects
    • Where the participant’s environment has changed across studies or within the course of a study
    • For example, brand crisis midway through longitudinal study
  • Testing
    • When an initial measure (e.g., pre-test) sensitises participants to the nature of the experiment
  • Instrumentation
    • Where the measurement instrument changes between test and re-test
  • Selection
    • Where participants are allocated to conditions in some systematic way
  • Mortality (a.k.a. Attrition)
    • Refers to participants dropping out of the study
    • Problematic if there is systematic attrition
    • Consider testing of social marketing programs on topics like safe sex, reduced speeding, quitting smoking
  • Demand effects
    • Where aspects of the experiment provide cues to participants about the researcher’s purpose, which participants then act upon
    • Hawthorne effect - unintended effects caused by participants behaving differently because they know they are being observed
15
Q

What are maturation effects?

A

A threat to internal validity

  • Systematic changes in the participant across time points or within the course of a study
  • Long-term maturation?
  • Short-term maturation?
16
Q

What are history effects?

A

A threat to internal validity

Where the participant’s environment has changed across studies or within the course of a study
  • For example, brand crisis midway through longitudinal study

17
Q

What is testing (regarding threats to internal validity)?

A

When an initial measure (e.g., pre-test) sensitises participants to the nature of the experiment

18
Q

What is instrumentation (regarding threats to internal validity)?

A

Where the measurement instrument changes between test and re-test

19
Q

What is selection (regarding threats to internal validity)?

A

Where participants are allocated to conditions in some systematic way

20
Q

What is mortality (aka attrition)?

A

A threat to internal validity

Refers to participants dropping out of the study

Problematic if there is systematic attrition

Consider testing of social marketing programs on topics like safe sex, reduced speeding, quitting smoking

21
Q

What are demand effects (regarding internal validity)?

A

Where aspects of the experiment provide cues to participants about the researcher’s purpose, which participants then act upon

22
Q

How can internal validity be improved? (3)

A
  • Control group - no treatment or filler treatment
  • Blinded and double-blinded experiments
  • Random assignment to conditions
23
Q

What are 3 threats to external validity?

A
  • Non-representative samples - e.g. reliance on convenience samples
  • Artificial stimuli - stimuli that suit the experiment, but do not represent full range of real-life stimuli. Ecological validity?
  • Artificial settings - testing in laboratory setting that does not correspond sufficiently with real-life setting
24
Q

What does X mean?

A

Exposure to IV (or a level of the IV)

25
Q

What does O mean?

A

Observation or measurement of DV

26
Q

What does R mean?

A

Random assignment of units (e.g. people to conditions)

27
Q

What is the one-shot case study (Quasi-experimental designs)?

A

A single group is exposed to some IV, and then observations are made (DV)

E.g. government announces an education policy and constituents’ voting intentions are assessed

Difficult to determine whether the IV has had any impact, or whether the observed results can be attributed to the IV

28
Q

What is the static group comparison (Quasi-experimental designs)?

A

Experimental group vs control group

Two groups are exposed to different levels of an IV (e.g. IV present, IV absent) and observations are made

The problem of exposure to a pre-test is eliminated

Problems relate to lack of group equivalence, selection bias, etc

29
Q

What is the one group, pre-test - post-test design (Quasi-experimental designs)?

A

A single group is first observed, then exposed to some IV, and then observations are made again

A pre-test baseline allows for comparison of post-test

Problems relate to maturation, history, etc

30
Q

What is the time series design (Quasi-experimental designs)?

A

A single group is pre-tested multiple times, exposed to some IV, and is then post-tested multiple times

Reduces potential problems associated with random fluctuations in performance, and helps to demonstrate how effective the IV is in impacting the DV long-term

Problems relate to history and maturation effects, testing effects, etc

Can be costly

31
Q

What is the post-test only, control group design (true experimental design)?

A

Randomisation

Two groups are randomly chosen, exposed to different levels of an IV (e.g. IV present, IV absent) and observations are made

Random assignment eliminates some problems associated with groups not initially being similar

32
Q

What is the pre-test - post-test control group design?

A

Two groups are randomly chosen, observations are made for each group (pre-test), each group is exposed to a different level of the IV (e.g. IV present, IV absent), and observations are made again (post-test)

Experimental group can be compared to control group, and also back to baseline
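A minimal Python sketch of how results from this design might be compared, using invented scores for two randomly assigned groups (all numbers and variable names are hypothetical, not from the lecture):

  # Hypothetical pre- and post-test scores on some DV (e.g. rated out of 10)
  experimental_pre  = [5, 6, 4, 5, 6]
  experimental_post = [7, 8, 6, 7, 8]   # this group was exposed to the IV
  control_pre       = [5, 5, 6, 4, 6]
  control_post      = [5, 6, 6, 5, 6]   # this group was not exposed to the IV

  def mean_change(pre, post):
      # average change from baseline (pre-test) to post-test
      return sum(after - before for before, after in zip(pre, post)) / len(pre)

  # The experimental group's change is compared back to its own baseline
  # and against the control group's change over the same period
  effect = mean_change(experimental_pre, experimental_post) - mean_change(control_pre, control_post)
  print("Estimated effect of the IV:", effect)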

33
Q

What is the Solomon four-group design?

A

Allows comparison of the first two groups as in the pre-test - post-test control group design, but addresses problems of history, maturation and testing by providing two additional groups that do not receive the pre-test

Challenges relate to having sufficient resources to execute

34
Q

What are factorial designs?

A

The manipulation of two or more IVs

Sometimes the effect of one IV differs across the levels of another IV, when the other IVs are taken into account

Other factors that we would be interested in are also likely to be involved

35
Q

The effect of one variable is known as what?

A

The main effect (e.g. main effect of gender or location)

36
Q

An effect due to a combination of variables is known as what?

A

An interaction effect

  • e.g. there is an interaction between gender and location
  • differences between the two levels of gender are not the same across each level of location (see the sketch below)
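A minimal Python sketch of the gender × location example, using invented cell means purely to illustrate main effects versus an interaction (not data from the lecture):

  # Hypothetical mean DV scores for a 2 (gender) x 2 (location) factorial design
  means = {
      ("female", "urban"): 6.0,
      ("female", "rural"): 4.0,
      ("male",   "urban"): 5.0,
      ("male",   "rural"): 5.0,
  }

  # Main effect of gender: compare averages across both levels of location
  female_avg = (means[("female", "urban")] + means[("female", "rural")]) / 2
  male_avg   = (means[("male", "urban")] + means[("male", "rural")]) / 2
  print("Main effect of gender:", female_avg - male_avg)

  # Interaction: is the gender difference the same at each level of location?
  gender_diff_urban = means[("female", "urban")] - means[("male", "urban")]
  gender_diff_rural = means[("female", "rural")] - means[("male", "rural")]
  print("Gender difference in urban locations:", gender_diff_urban)
  print("Gender difference in rural locations:", gender_diff_rural)
  print("Interaction present:", gender_diff_urban != gender_diff_rural)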
37
Q

What are test markets?

A

A controlled experimental procedure used by companies to trial a new product, promotion, strategy, etc. under realistic conditions with a limited audience. It’s a form of field experiment.

38
Q

What are the types of test markets? (4)

A
  • Standard test marketing
    • For example, company restricts distribution to a limited geographic area or stores
    • High external validity
  • Control method of test marketing
    • For example, use of forced distribution to particular stores to ensure the product is constantly in stock (e.g., paid shelf space)
  • Online test marketing
    • Panel used to test copy or respond to new product promotions
  • Virtual reality test marketing
    • Test market within virtual reality environment (e.g., Second Life)
39
Q

What are the limitations of test markets (4)?

A

Takes a long time to properly evaluate performance - a test period that is too short can over-estimate sales (trial-only purchases)

Loss of secrecy to competitors

Competitor tactics (e.g. distortion of results, imitation)

May not yield representative results if not implemented effectively (over-zealous salespeople, regularly used test markets)