Nov. 19th - Chapter 10: Research Designs for Special Circumstances Flashcards

1
Q

3 steps to identifying a causal relationship:

A
  1. Temporal precedence (cause comes first - if A causes B, then A MUST come before B)
  2. Covariation of the cause and effect
    * Presence of a cause = presence of effect
    * Absence of a cause = absence of effect
  3. Eliminate plausible alternative explanations
2
Q

Key features of true experiments that help achieve internal validity

A
  1. Manipulated Independent Variable(s)
    * At least two levels to enable comparison
  2. Measured Dependent Variable(s)
  3. Experimental control over extraneous variables
    * No confounds
    * Random assignment to conditions of IV
3
Q

What is the next best thing when a true experiment isn’t possible?

A
  1. Single-case
  2. Quasi-experiment
  3. Developmental designs
4
Q

Single Case - Types:

A
  1. ABA Reversal design
    * A = baseline
    * B = treatment
  2. ABAB Reversal design
  3. Multiple baseline design (spanning baseline/treatment over multiple days)
5
Q

Single Case Studies - when you return back to baseline, what is this called?

A

Reversal design

6
Q

Single Case Studies: Additional Points

A
  • A few Ps (not a large sample)
  • Baseline-treatment: Might add another baseline after, if ethical/possible
  • Key feature: Stagger the start of treatment to avoid a ‘history’ confound (other events might otherwise cause the change!)
  • Does everyone change behavior after treatment?
  • Useful to generalize beyond single P
7
Q

In what areas can multiple baseline measure the effects of the IV?

A
  • Across participants
  • Across behaviours (does the treatment change something else too, e.g., a dog’s barking?)
  • Across situations (changing environments - home vs. work)
8
Q

Quasi-Experiments:

What is it?

A

An attempt to get at causation when you can’t use full experimental design
* Still attempt at IV -> DV
* Just quite a bit messier!!!

9
Q

Quasi- versus True Experiments

A

True experiment
* Experimental manipulation
* Random assignment
* Experimental control

Quasi-experiment
* Often no direct manipulation
* No random assignment
* Limited control

10
Q

In what situation would you choose a quasi-experiment?

A

Want to study effect of IV on DV, BUT
* Can’t manipulate or control variables
* Can’t use random assignment
- Ethically unfeasible
- Practically unfeasible

11
Q

Quasi-Experiment Differences: Expanded

A
  • Often no direct manipulation
  • No random assignment:
    * Subjects are selected based on the values of the independent variable, rather than having the experimenter assign values of the independent variable to subjects (e.g., pre-existing groups)
  • Limited control:
    * Quasi-experiments have less internal validity than true experiments
    * A poor quasi-experimental design really tells you NOTHING!!!!
12
Q

Threats to Internal Validity

History

A
  • Something happens in the world at the same time as the onset of treatment & therefore could have caused the effect
  • What if other events cause a change?
13
Q

Threats to Internal Validity

Maturation

A
  • Ps change between pretest and posttest for some reason other than your treatment
  • What if the participants just changed on their own?
14
Q

Threats to Internal Validity

Testing

A
  • Taking the pretest changes responses on posttest (order effects)
  • What if the process of testing changed participants?
15
Q

Threats to Internal Validity

Instrument Decay

A
  • Over repeated use, treatments or measures change, making it look like your treatment had an effect
  • What if the measurement instrument changes?
16
Q

Threats to Internal Validity

How to reduce threats to internal validity:

A
  • Add a comparison group: who could you recruit to it?
  • Still no random assignment
  • Remember lower internal validity than a true experiment
17
Q

Types of Quasi-Experiments:

A
  • One-group posttest only
  • One-group pretest-posttest
  • Nonequivalent control group
  • Nonequivalent control group pretest-posttest

Multiple Repeated Measures:
* Interrupted time series

18
Q

Types of Quasi-Experiments:

Nonequivalent control group

A
  • Even with a posttest-only design, threats to internal validity (i.e., alternative explanations) remain
  • One is regression toward the mean
  • Extreme scores tend to be less extreme with repeated measurement.
19
Q

Developmental Research

Key Terms: Age Effects

A
  • Age effects: any differences caused by underlying processes, such as biological or psychological changes that occur with aging.
20
Q

Developmental Research

Key Terms: Cohort Effects

A
  • Cohort effects: Differences caused by experiences and circumstances unique to the generation/cohort to which one belongs
  • EX: generations living through WWII versus Gen Z
21
Q

Developmental Research

Key Terms: Time of Measurement Effects

A
  • Time of measurement effects: differences stemming from sociocultural, environmental, historical, or other events at the time of data collection
  • Effects are often confounded!
22
Q

Cross-sectional Design vs. Longitudinal Design

A
  • Compare groups of p’s of differing ages at a single point in time
    VERSUS
  • Observe one group of p’s repeatedly over time
23
Q

Cross-sectional - Benefits/Negatives

A

Benefits:
* COST: Less expensive and immediately yields results

Negatives:
* Must infer developmental change (problematic!)
* Difference may be due to cohort effects

24
Q

Longitudinal: Benefits/Negatives

A

Benefits:
* Evidence for developmental change

Negatives:
* Loss of participants (high attrition)
* Expensive and long!
* Measures, interests change
* Results may not generalize to other cohorts
* Possible time-of-measurement effects

25
Q

Longitudinal Designs

Sequential Designs

A
  • Longitudinal design with more than one cohort, multiple time points (mixture of both)
  • allows for cross sectional comparisons
  • allows for longitudinal comparisons
  • can investigate cohort effects

Similar cons to longitudinal designs: loss of participants (attrition), expensive and long, measures and interests change, limited generalizability, possible time-of-measurement effects
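
A minimal sketch in Python (made-up birth cohorts and measurement years, purely illustrative) of how a sequential design embeds both kinds of comparison:

```python
# Minimal sketch (hypothetical cohorts/years): a sequential design measures
# several cohorts at several time points.
cohorts = {1990: [2010, 2015, 2020],   # birth year -> years of measurement
           1995: [2015, 2020, 2025],
           2000: [2020, 2025, 2030]}

for birth_year, waves in cohorts.items():
    ages = [year - birth_year for year in waves]
    print(f"Cohort {birth_year}: measured at ages {ages}")

# Longitudinal comparison: follow one cohort (one row) across its waves.
# Cross-sectional comparison: compare cohorts measured in the same year
# (e.g., 2020 appears in every row, at a different age each time).
# Comparing same-age measurements taken in different years exposes cohort effects.
```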

26
Q

READINGS

A
27
Q

What types of research can prevent a true experiment? WHY?

A

**Applied research and evaluation research**

WHY?
* Researchers are invited to participate late in the process and **aren’t able to provide input** on the best measurement techniques
* Budget limitations rule out some forms of data collection

28
Q

Program Evaluation Research

A
  • …research on programs that are proposed and implemented to achieve some positive effect on a group of people (e.g., programs in schools, work settings, entire communities)
  • Raises many of the issues found in applied contexts
29
Q

Campbell - 5 Questions to Evaluate Program Effectiveness

A
  1. Needs Assessment
  2. Program Theory Assessment
  3. Process Evaluation
  4. Outcome Evaluation
  5. Efficiency Assessment
30
Q

5 Questions to Evaluate Program Effectiveness

Needs Assessment

A
  • Needs Assessment - are there problems that need to be addressed in a target population?
  • “What do homeless people need most?”
  • Data for needs assessments can come from surveys, interviews, and existing archival data maintained by public health, criminal justice, and other agencies
31
Q

5 Questions to Evaluate Program Effectiveness

Program Theory Assessment

A
  • Program Theory Assessment - how will the problems be addressed? Will the proposed program actually address the needs appropriately?
  • After identifying needs, a program can be designed to address them
  • Programs must be based on valid assumptions about the causes of problems and a cogent rationale for the best way to address these problems
  • Can involve researchers, service providers, and prospective program clients
  • Assessment includes articulating the rationale for how members of the target population will benefit from the program, including how they will access and use the program’s services
  • This rationale can be evaluated: will this program actually reach the target population as intended? Does it have appropriate goals?
32
Q

5 Questions to Evaluate Program Effectiveness

Process Evaluation

A
  • Process Evaluation - is the program addressing the needs appropriately? Is it being implemented appropriately?
  • When the program is under way, researchers monitor whether it’s reaching its target population, whether it’s attracting enough clients, and whether staff are providing the planned services
  • Sometimes staff haven’t received adequate training, or the services are offered in a location that is undesirable or difficult to find
  • Overall, the researcher seeks evidence that the program is doing what it’s supposed to do
  • Extremely important to avoid concluding that a program is ineffective when it is simply not being implemented properly
  • Can involve questionnaires and interviews, observational studies, and analysis of records kept by program staff
33
Q

5 Questions to Evaluate Program Effectiveness

Outcome Evaluation

A
  • Outcome Evaluation - are the intended outcomes of the program being realized? That is, are the goals being achieved?
  • To determine this, the evaluator must devise a way of measuring the outcome and then study the impact of the program on that outcome measure
  • Must know what the participants of the program are like and what they would be like had they not participated in the program
34
Q

5 Questions to Evaluate Program Effectiveness

Efficiency Assessment

A
  • Efficiency Assessment - is the cost of the program worth the outcomes?
  • Once it’s shown that a program does have its intended effect, researchers can determine whether the benefits are worth the program’s costs
  • Also, whether the resources used to implement the program might be put to some better use - is there a better way to carry it out?
35
Q

Quasi-Experimental Designs:

A
  • Address the need to study independent variables in settings in which true experimental designs are not possible
  • Resemble experiments in some ways, but ultimately lack important features of true experiments out of necessity (like control/experimental conditions)
  • As a result, they cannot be used to make causal inferences; rather, they indicate how variables are related
  • Also provide a good example of how studies that claim to use a true experimental design can fall short of that goal by inadvertently including elements that prevent them from being true experiments, and therefore DO NOT permit causal inferences
36
Q

Quasi-Experiments

It’s important to distinguish between…

A
  • 1 - Quasi-experimental designs used intentionally and out of necessity
  • 2 - Flawed experiments that purport to be true experiments but are actually quasi-experimental in nature
37
Q

Quasi-Experimental Designs

One-Group Posttest-Only Design:

A
  • Lacks a crucial element of true experiments - a control group or other source of comparison
  • There must be some sort of comparison, and random assignment to separate conditions, to enable you to interpret your results
  • With only one measurement instance, this is not an experiment that allows causal inferences between the IV and DV, because results are too open to alternative explanations; thus, it LACKS INTERNAL VALIDITY
38
Q

One-Group Posttest-Only Design - EXAMPLE

Sitting next to strangers

A

Participants -> Sit next to stranger (IV) -> Measure Time until Stranger Leaves (DV)

  • Suppose the average amount of time before people leave is 9.6 seconds
  • Without any comparison, this finding is uninterpretable: would people have stayed longer had you not sat down?
39
Q

Where can One-Group Posttest-Only Designs be seen?

A
  • As evidence for the effectiveness of a program
  • EX: employees in a company participating in a 4-hour info session, then scoring 90% on a knowledge test - without any comparison, it would be inappropriate to conclude that the session is actually educating employees
40
Q

Quasi-Experimental Designs

One-Group Pretest-Posttest

A
  • …one way to obtain a comparison is to measure participants before the manipulation (pretest) and again afterward (posttest) - (BASICALLY USING THE P’S BEFORE/AFTER STATE AS COMPARISON GROUPS)
  • An index of change from one test to the next can then be computed (see the sketch below)
  • However, this is still not a true experiment and has issues that prevent the kinds of inferences possible with a true experiment
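
A minimal sketch of the change index this design allows, using hypothetical numbers (not from the chapter):

```python
# Hypothetical one-group pretest-posttest data: the same participants
# measured before and after a program (e.g., cigarettes smoked per day).
pretest  = [20, 18, 25, 22, 19]
posttest = [15, 17, 20, 21, 14]

changes = [post - pre for pre, post in zip(pretest, posttest)]
mean_change = sum(changes) / len(changes)
print(f"Mean change: {mean_change:.1f}")

# A negative mean change suggests a reduction, but with no control group the
# design cannot rule out history, maturation, testing, or instrument decay.
```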
41
Q

One-Group Pretest-Posttest - EXAMPLE

Smoking

A

Participants -> Smoking Measure -> Relaxation Training Program -> Smoking Measure

  • If a reduction in smoking is found, you couldn’t assume that the result was due to the relaxation program based on this design
42
Q

One-Group Pretest-Posttest - WHAT’S WRONG?

A

Lack of consideration for alternative explanations - AKA threats to *internal validity*: something that allows for some other reasonable alternative explanation for changes in the dependent variable

FROM LECTURE ALREADY: History, Maturation, Testing, Instrument Decay

43
Q

Threats to internal validity CONT.

Regression Towards the Mean

A
  • May occur when participants are chosen or groups are divided based on extreme scores on the pretest, because extreme scores tend to become less extreme on repeated measurement (posttest scores are closer to the mean)
  • Provides an alternative explanation for change between pretest and posttest
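
A small simulation makes the threat concrete; the population mean, error size, and selection cutoff below are assumed values chosen only for illustration:

```python
# Regression toward the mean: participants selected for extreme pretest
# scores tend to score closer to the mean at posttest, with NO treatment.
import random

random.seed(1)
true_mean, n = 50, 1000

# Each observed score = stable true score + random measurement error.
true_scores = [random.gauss(true_mean, 10) for _ in range(n)]
pretest  = [t + random.gauss(0, 10) for t in true_scores]
posttest = [t + random.gauss(0, 10) for t in true_scores]

# Select the "extreme" group: top 10% of pretest scores.
cutoff = sorted(pretest)[int(0.9 * n)]
extreme = [i for i in range(n) if pretest[i] >= cutoff]

pre_mean  = sum(pretest[i] for i in extreme) / len(extreme)
post_mean = sum(posttest[i] for i in extreme) / len(extreme)
print(f"Extreme group pretest mean:  {pre_mean:.1f}")
print(f"Extreme group posttest mean: {post_mean:.1f} (closer to {true_mean})")
# The drop happens with no treatment at all - an alternative explanation
# for pretest-to-posttest change in groups chosen for extreme scores.
```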
44
Q

Threats to internal validity CONT.

Attrition

A
  • …participants leave the study
  • Differences in attrition (reduction) between groups may create group differences, even if groups were initially randomly assigned, offering an alternative explanation for differences between groups
45
Q

Threats to internal validity CONT.

Selection Effects

A
  • Groups are divided based on any reason other than random assignment
  • Pre-existing group differences offer an alternative explanation for differences between groups
46
Q

Threats to internal validity CONT.

Cohort Effects

A
  • …groups are divided by age. A special type of selection effect
  • Instead of differences being due to age, the unique characteristics of a particular cohort offer an alternative explanation for differences between groups
47
Q

How can we overcome threats to internal validity?

A

Can be addressed by the use of an appropriate control group (of other people)
* RE: comparison to infer causality
* When formed, the control group should be as equivalent to the treatment group as possible

If p’s in the two groups differ before manipulation, they’ll probably differ after the manipulation as well - but not necessarily because of your manipulation

48
Q

Quasi-Experimental Designs

Non-Equivalent Control Group Design:

A
  • Has a separate control group, but the participants in the two conditions are not equivalent
  • EX: when p’s aren’t randomly assigned, but instead chosen from naturally pre-existing groups
  • Selection differences: pre-existing differences between groups that are confounded with the IV, and these differences provide an alternative explanation for the results
49
Q

Quasi-Experimental Designs

Non-equivalent control group pretest-posttest design

A

…adding a pretest to Non-Equivalent Control Group Design (has a separate control group, but the participants in the two conditions are not equivalent)

50
Q

Non-equivalent control group pretest-posttest design - NOT A TRUE EXPERIMENT BECAUSE…

A
  • Assignment to groups is not random, and so the two groups may be different at the beginning
  • Even if the groups aren’t equivalent, we can look at changes in scores from pretest to posttest: if the IV has an effect, the experimental group should show a greater change than the control group (see the sketch below)
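
A minimal sketch with made-up scores, comparing pretest-to-posttest changes rather than raw posttest scores:

```python
# Non-equivalent control group pretest-posttest: the groups start at different
# levels (no random assignment), so compare each group's CHANGE, not its raw
# posttest score. All numbers are hypothetical.
treatment_pre, treatment_post = [30, 28, 35, 31], [22, 21, 27, 24]
control_pre,   control_post   = [40, 42, 38, 41], [39, 41, 37, 40]

def mean_change(pre, post):
    return sum(b - a for a, b in zip(pre, post)) / len(pre)

t_change = mean_change(treatment_pre, treatment_post)
c_change = mean_change(control_pre, control_post)
print(f"Treatment group change: {t_change:.1f}")
print(f"Control group change:   {c_change:.1f}")
print(f"Difference in changes:  {t_change - c_change:.1f}")

# A larger change in the treatment group is consistent with an IV effect,
# but selection differences can still provide alternative explanations.
```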
51
Q

Non-equivalent control group pretest-posttest design - HOW TO ADDRESS ITS ISSUES

A

researchers use a pretest to demonstrate that these groups were similar in terms of demographics and symptom severity at the start of the study

52
Q

Quasi-Experimental Designs

Interrupted Time Series Design:

A
  • …similar to one group pretest/posttest design and non-equivalent pretest/posttest design; key difference being that there are multiple pretests and multiple posttests, instead of just one
  • Typically used to examine the effects of naturally occurring manipulations in society, like the passing of laws (archival designs)
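
A minimal sketch (made-up yearly counts, loosely in the spirit of the traffic-law example below) of why a whole series is more informative than a single before/after pair:

```python
# Interrupted time series: many observations before and after the interruption.
fatalities = [610, 595, 580, 570, 560, 525, 500, 480, 465, 450]  # hypothetical
law_index = 5  # the "interruption" (e.g., a new law) takes effect at index 5

before, after = fatalities[:law_index], fatalities[law_index:]

def slope(series):
    """Average year-to-year change within a segment."""
    return (series[-1] - series[0]) / (len(series) - 1)

print(f"Mean before: {sum(before)/len(before):.0f}, slope: {slope(before):.1f}/yr")
print(f"Mean after:  {sum(after)/len(after):.0f}, slope: {slope(after):.1f}/yr")

# Looking at the whole series shows whether the post-law decline is a new trend
# or just the continuation of a downward trend that had already started.
```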
53
Q

Interrupted Time Series Design - ONTARIO LAW EXAMPLE

A
  • EX: evaluating Ontario’s crackdown on driving while drunk
  • Passed a law in 1996 that mandated an immediate 90-day suspended licence for anyone caught driving with a blood-alcohol level over the limit
  • Comparing the number of fatalities (1996) to that after the law was in place (1997), researchers found little reduction
  • Alternative: interrupted time series design, examines the traffic fatality rates over an extended period of time - both before and after the law was passed
  • With this, a steady downward trend in fatalities appears after the crackdown, but there is a caveat: the year prior to the intervention was already lower than all previous years; citizens may have heard about the pending law and begun curbing their drinking-and-driving behaviour
54
Q

Quasi-Experimental Designs

Interrupted Time Series Design: Control Series Design

A
  • Control Series Design: Interrupted time series design can be further improved upon by finding some kind of control group to create a control series design
  • Driving example: look at data from other provinces
55
Q

Single Case Experimental Designs:

AB

A
  • …the participant’s behaviour is first measured during a baseline control time period, before any manipulation of a variable or intervention
  • The manipulation is then introduced during a treatment period, and the participant’s behaviour continues to be observed
  • Changes from pre- to post-treatment offer evidence for the effectiveness of the manipulation
  • Developed so that experiments could be conducted within the context of a case study, with just a single research participant
56
Q

Single Case Experimental Designs

Reversal/Withdrawal Designs: ABA

A
  • Basic challenge is to determine if the treatment - specifically - had an effect on the DV; method takes the following form - A (baseline period) -> B (treatment period) -> A (baseline period); AKA an ABA design
  • One method is to demonstrate that the effect can be undone, or reversed, by removing the treatment
  • Some treatments produce an immediate change in behaviour; others require lengthy treatment periods before a change is observed
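
A minimal sketch (hypothetical daily counts) of how ABA phase data might be summarized:

```python
# ABA reversal design: A = baseline, B = treatment. Counts are made up.
phases = {
    "A1 (baseline)":  [12, 11, 13, 12, 14],
    "B  (treatment)": [7, 6, 5, 6, 5],
    "A2 (baseline)":  [11, 12, 12, 13, 11],
}

for name, observations in phases.items():
    print(f"{name}: mean = {sum(observations)/len(observations):.1f}")

# Behaviour improving in B and drifting back toward baseline in A2 suggests the
# treatment, not some coincidental event, produced the change.
```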
57
Q

Reversal/Withdrawal Designs: ABAB

Ethical Benefits?

A
  • Can extend to ABAB designs, where the experimental treatment is introduced a second time - to ABABAB
  • Adding more can address issues to a typical ABA reversal design: single reversal could be caused by a random fluctuation in the person’s behaviour (treatment could’ve coincided with some other happy event, like getting a new puppy, and this is what actually caused the change in symptoms)
  • ABAB+ designs give more opportunities to test these, just in case
  • Ethical benefit: ending with a second treatment phase avoids finishing the design with the withdrawal of treatment (as a plain ABA design does), which could leave the participant without a treatment that was helping
58
Q

Single Case Experimental Designs

Multiple Baseline Designs:

A
  • Researchers look for changes in behaviour after a manipulation is introduced under multiple circumstances
  • Several variations: in one version, baseline is measured across participants, with the behaviour of several participants measured over time
  • Key element is that for each participant, the manipulation is introduced at a different point in time (EX: a reward system to improve child behaviour; observing an improvement in each case only after the reward system was introduced would be evidence for the effectiveness of the manipulation) - see the sketch below
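
A minimal sketch (hypothetical participants and counts) of the staggered-onset logic:

```python
# Multiple baseline across participants: treatment starts on a different day
# for each participant. If behaviour changes only after each participant's own
# start day, a shared outside event ("history") is an unlikely explanation.
start_day = {"P1": 3, "P2": 6, "P3": 9}        # staggered treatment onsets
scores = {                                      # daily problem-behaviour counts
    "P1": [9, 8, 9, 4, 3, 3, 2, 3, 2, 2],
    "P2": [8, 9, 8, 9, 8, 9, 3, 2, 3, 2],
    "P3": [9, 9, 8, 9, 8, 8, 9, 8, 3, 2],
}

for p, daily in scores.items():
    d = start_day[p]
    baseline, treatment = daily[:d], daily[d:]
    print(f"{p}: baseline mean = {sum(baseline)/len(baseline):.1f}, "
          f"treatment mean = {sum(treatment)/len(treatment):.1f}")
```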
59
Q

Benefits of Multiple Baseline Designs:

A

Alternative explanations can be ruled out:
* Results are unlikely to be attributable to random chance or coincidental events
* By including additional p’s and starting interventions at different times, researchers have made stronger cases for the efficacy of intervention

60
Q

Replications in Single Case Designs

A
  • Single case designs have the same limitations as descriptive case studies. However, when the results observed with a single participant can be replicated with other participants, the generalizability of these results is enhanced
  • Single case can be especially valuable for applying treatment to help someone in particular improve their behaviour
    □ EX: a parent trying to reach a misbehaving child
    □ Results can also be combined using meta-analysis to reveal overall patterns
  • These designs offer systematic ways to examine hypotheses when studying only one or a few participants is either necessary or desired