Evaluation Designs Flashcards
What are the 3 main stages of evaluation?
Formative
Process
Outcome
Describe the formative stage of an evaluation
Happens before the intervention is implemented.
Tests the acceptability + feasibility of the intervention.
Mainly qualitative, e.g. focus groups and in-depth interviews
Describe the process stage of an evaluation
Happens whilst the intervention is underway.
Measures how the intervention was delivered + received.
Mixed quantitative and qualitative
Describe the outcome stage of an evaluation
Measures whether the intervention has achieved its objectives.
Mainly quantitative
What is the main purpose of an evaluation design?
To be as confident as possible that any observed changes were caused by the intervention, rather than by chance or other unknown factors.
List the criteria for inferring causality
Cause must precede the effect
Plausibility
Strength of the association
Dose-response relationship
Reversibility
Criteria for inferring causality
How is the strength of the association measured
Effect size
or
Relative risk
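A worked example may help here: relative risk is the risk of the outcome in the intervention group divided by the risk in the control group. The counts below are purely illustrative.

```python
# Hypothetical 2x2 counts (illustrative only):
events_intervention, n_intervention = 30, 200   # 30 of 200 developed the outcome
events_control, n_control = 60, 200             # 60 of 200 developed the outcome

risk_intervention = events_intervention / n_intervention  # 0.15
risk_control = events_control / n_control                 # 0.30

# Relative risk < 1 suggests the intervention reduces the outcome;
# the further from 1, the stronger the association.
relative_risk = risk_intervention / risk_control
print(relative_risk)  # 0.5 -> risk halved in the intervention group
```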
Criteria for inferring causality
What comes under the dose-response relationship
Occurs when changes in the level of a possible cause are associated with changes in the prevalence or incidence of the effect.
Criteria for inferring causality
What is meant by reversibility?
When the removal of the possible cause results in a return to baseline for the outcome.
What does high internal validity mean
High means the differences observed between the groups are attributable to the intervention tested in the trial, rather than to other factors.
Define external validity
Extent to which the results of an evaluation of an intervention can be generalised to the target or general population.
What are the types of Evaluation design
Experimental - Randomly assigned controls or comparison groups
Quasi-Experimental - Not randomly assigned controls or comparison groups
Non-experimental - No comparison or control group
Strengths to experimental evaluation design
Can infer causality with highest degree of confidence
Weaknesses to experimental evaluation design
Most resource intensive of the evaluation designs
Requires ensuring extraneous factors are minimised
Can sometimes be challenging to generalise to the “real world”
Strengths to the quasi-experimental evaluation design
Can be used when unable to randomise a control group but still allows comparison across groups +/or time
Weaknesses to the quasi-experimental evaluation design
Differences between comparison groups may be confounded
Group selection is critical
Moderate confidence in inferring causality
Strengths to the non-experimental evaluation design
Simple
Used when baseline data +/or comparison groups are not available
Good for a descriptive study
May require fewer resources
Weakness to the non-experimental evaluation design
Minimal ability to infer causality
What are the types of RCT (Experimental design)
Randomised cross-over trials
Parallel randomised trials
What is the purpose of random assignment
To best ensure the intervention is the only difference between the 2 groups.
To ensure any factors influencing the outcome are evenly distributed between the groups.
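The idea above can be sketched in a few lines: shuffle the participant list and split it in half, so that known and unknown factors tend to balance out across groups. This is a minimal sketch of simple randomisation (real trials often use block or stratified randomisation to guarantee balance); the participant labels and seed are illustrative.

```python
import random

def randomise(participants, seed=42):
    """Randomly assign participants to intervention or control groups.

    A minimal sketch of simple randomisation: shuffle the list, then
    split it in half so allocation is determined by chance alone.
    """
    rng = random.Random(seed)          # fixed seed for reproducibility
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"intervention": shuffled[:half], "control": shuffled[half:]}

# Illustrative participant labels
groups = randomise([f"P{i}" for i in range(1, 21)])
print(len(groups["intervention"]), len(groups["control"]))  # 10 10
```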
List the main threats to internal validity in RCT (Experimental designs)
Selection bias
Performance bias
Detection bias
Attrition bias
Random Error