WEEK 3 AND 4 Flashcards
What is CAUSALITY?
Best study design
Challenges
‘Makes happen’
Its presence directly impacts, changes or affects something else
It is upstream
Often we are interested in causality
But it can be very difficult to ‘prove’ a cause and effect relationship
RCTs can help assess causality, but have limitations too
Different methodological approaches may be superior
“Both practical and ethical considerations mean that causality cannot, in general, be proved in human
studies. Rather, it must be induced from demonstrated associations between an exposure and health
outcomes.
Characteristics of that association, judged against some framework, then help to assess whether that
association is or is not causal”
How can we decide if a risk factor causes a disease?
Build up a solid body of evidence
Use a checklist that helps us decide
BRADFORD HILL CRITERIA
Useful to help establish the strength of epidemiological evidence of a causal relationship
But they are not criteria that must all be fulfilled
Instead, they provide ways of examining whether cause and effect is a reasonable inference
- Temporal relationship (essential)
For a risk factor to cause a disease, it must occur/be present before the disease
- Strength (effect size)
A strong association is more likely to be causal (but the reverse is NOT true)
- Consistency (reproducibility)
Similar results are obtained in different populations with different study designs (given their varying combinations of other ‘chance’ factors)
- Analogy
Similarities with other well-established cause-effect relationships
- Specificity
The more specific an association between a factor and an effect, the greater the probability of a causal association
Rarely occurs, as most diseases have multiple causes and most exposures multiple effects
e.g. Huntington’s disease caused by a defect in a specific gene
- Reversibility
If the removal of a possible risk factor results in reduced risk of disease, then the likelihood that the association is causal is increased
- Dose-response relationship
Helps build confidence around causality, but not essential (see the sketch after this list)
- Plausibility
Must be consistent with knowledge from other sources (e.g. animal experiments) & should make biological sense
- Coherence
The suggested cause-effect relationship should be consistent with the natural history and biology of the disease and should not conflict with generally known facts
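A minimal sketch of how “strength” and “dose-response” might be examined in practice, using relative risks across exposure levels; all cohort counts below are made-up illustrative numbers, not data from the source:

```python
# Minimal sketch (hypothetical counts) of how "strength" and "dose-response"
# are typically examined: relative risk of disease at increasing exposure levels.

# exposure level -> (cases, people followed); numbers are illustrative only
cohort = {
    "none":   (20, 10_000),
    "low":    (35, 10_000),
    "medium": (60, 10_000),
    "high":   (110, 10_000),
}

baseline_cases, baseline_total = cohort["none"]
baseline_risk = baseline_cases / baseline_total

for level, (cases, total) in cohort.items():
    risk = cases / total
    rr = risk / baseline_risk  # relative risk vs the unexposed group
    print(f"{level:>6}: risk = {risk:.4f}, RR = {rr:.1f}")

# A large RR in the "high" group speaks to strength; a steady rise in RR across
# levels is the kind of dose-response gradient the criteria refer to.
```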
CAUSALITY- CHALLENGE FOR EPIDEMIOLOGISTS
Whether links between exposures and outcomes/disease can be considered causal can only
be assessed with confidence once other factors have been fully considered
The notion of cause has become more complex
Most health outcomes have multiple component causes
Distinguishing which of these are necessary or sufficient is central to preventive efforts
How far upstream should the matter of cause (& thus potential intervention) be pursued?
EXTERNAL VALIDITY
The degree to which the study findings can be applied to individuals not in the study
Two underlying concepts:
1. Generalisability
To what extent does the study sample represent the target population?
2. Applicability
Whether the results of the study can be applied to a particular sample within the population
(clinical and psychosocial factors, health status)
Applicability refers to whether or not the study can be applied to your specific clinical setting and individual patient
A research finding may be entirely valid in one setting but not another
It relates to the extent to which the results are likely to impact on practice
INTERNAL VALIDITY
Extent to which…
Degree to which the investigator draws the correct conclusion about what actually happened in
the study
Extent to which the design and conduct of the trial/study, & methods used for analysis:
eliminate the possibility of bias (50+ types)
minimise impact of other factors (confounding, interaction)
reduce likelihood of random errors & chance findings
Internal vs external validity diagram
BIAS
Selection
Information
A systematic error in the design, recruitment, data collection or analysis that results in a mistaken
estimate of the true association between the exposure and the outcome
1. Selection bias: systematic error in the selection or retention of participants
2. Information (misclassification) bias: systematic error due to inaccurate measurement or
classification of disease, exposure or other variables
Bias limits validity of study results & is rarely eliminated during analysis
Thus, KEY is the study design and methods
How to judge external validity?
Often there are inclusion or exclusion criteria for a study e.g. age, health, availability
Regardless of these criteria, the recruited sample can never be 100% representative
Individuals agreeing to participate in research studies are different from those who don’t
A matter of judgment
Age, sex, severity of disease, comorbid conditions
Similar drugs, other doses, timing, route of administration
Other outcomes (not assessed), different duration of treatment
SELECTION BIAS IN CASE CONTROL STUDIES
Inherent in the design, resulting in non-comparability between cases & controls
Cases & controls drawn from different populations
Particularly problematic in hospital settings
e.g.
Cases: A hospital-based study will fail to enrol severe cases that die before reaching hospital;
Cases not representative of all cases in the population
Controls: Study of the effects of smoking on lung cancer.
Controls may be selected from individuals hospitalised on a respiratory ward for other conditions,
which may also be related to smoking
Over-representation of smoking in the control group would under-estimate the association
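A minimal sketch of this example with made-up counts (assumed smoking rates: 70% of cases, 30% of the general population, 50% of respiratory-ward controls), showing how the odds ratio is pulled toward the null when controls over-represent the exposure:

```python
# Minimal sketch (illustrative, made-up counts) of how control selection can
# bias a case-control odds ratio toward the null.

def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio from a 2x2 table: (a/b) / (c/d)."""
    return (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)

# Suppose 70% of lung-cancer cases smoke, and 30% of the general population smokes.
population_controls = odds_ratio(70, 30, 30, 70)   # ~5.4: the "true" association

# Controls drawn from a respiratory ward, where smoking is over-represented (say 50%).
hospital_controls = odds_ratio(70, 30, 50, 50)     # ~2.3: biased toward the null

print(f"OR with population controls: {population_controls:.1f}")
print(f"OR with respiratory-ward controls: {hospital_controls:.1f}")
```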
SELECTION BIAS IN COHORT STUDIES
When can it occur?
Less problematic because exposure status identified prior to outcome occurring
Can occur when non-response rate / loss to follow-up differs between exposure groups
E.g. A study investigating the role of heavy alcohol consumption on disease A
* Heavy drinkers may be less likely to respond (non-response related to exposure)
* People with a family history of disease A may be more likely to participate (non-response related
to disease)
* The cohort prevalence and incidence rates will differ from those in the general population
* The measured association between alcohol and disease A may also be biased
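A minimal sketch of this scenario with hypothetical risks and response rates (all numbers assumed for illustration), showing how differential non-response can distort both the cohort’s incidence rates and the measured risk ratio:

```python
# Minimal sketch (made-up numbers) of how differential non-response can bias a
# cohort study. Assume a true risk ratio of 2 for heavy drinking and disease A.

true_risk = {"heavy": 0.10, "not_heavy": 0.05}        # assumed true risks
cohort_size = {"heavy": 10_000, "not_heavy": 10_000}

# Response rates: heavy drinkers respond less; people who will develop disease A
# (e.g. family history) respond more. Keys are (exposure group, becomes a case).
response = {("heavy", True): 0.5, ("heavy", False): 0.4,
            ("not_heavy", True): 0.8, ("not_heavy", False): 0.6}

measured_risk = {}
for group, risk in true_risk.items():
    cases = cohort_size[group] * risk
    non_cases = cohort_size[group] - cases
    enrolled_cases = cases * response[(group, True)]
    enrolled_non_cases = non_cases * response[(group, False)]
    measured_risk[group] = enrolled_cases / (enrolled_cases + enrolled_non_cases)
    print(f"{group}: true risk = {risk:.3f}, risk in enrolled cohort = {measured_risk[group]:.3f}")

print(f"True RR: {true_risk['heavy'] / true_risk['not_heavy']:.2f}")
print(f"Measured RR: {measured_risk['heavy'] / measured_risk['not_heavy']:.2f}")
```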
SELECTION BIAS IN RCTS
Less likely if: randomisation performed correctly, sufficiently large sample
Blinding to treatment allocation is important
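A minimal simulation sketch (assumed 30% smoking prevalence, hypothetical trial sizes) of why correct randomisation with a large enough sample keeps baseline characteristics balanced between arms:

```python
# Minimal sketch (simulation with made-up data): chance imbalance in a baseline
# characteristic between randomised arms shrinks as the sample size grows.
import random

random.seed(1)

def arm_imbalance(n_participants, smoking_prevalence=0.3):
    """Randomise 1:1 and return the difference in smoking rates between arms."""
    arms = {"treatment": [], "control": []}
    for _ in range(n_participants):
        smoker = random.random() < smoking_prevalence
        arm = random.choice(["treatment", "control"])  # simple 1:1 randomisation
        arms[arm].append(smoker)
    rates = {arm: sum(v) / len(v) for arm, v in arms.items()}
    return abs(rates["treatment"] - rates["control"])

for n in (50, 500, 5_000, 50_000):
    print(f"N = {n:>6}: smoking-rate imbalance between arms = {arm_imbalance(n):.3f}")
```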
INFORMATION BIAS
What is it?
Non-differential vs differential misclassification
Systematic errors due to inaccurate measurement or classification
1. Non-differential misclassification
Misclassification is random (all individuals have the same probability of being misclassified)
2. Differential misclassification
Misclassification of disease status is dependent upon risk factor status (or vice-versa), as illustrated in the sketch below
Includes instrumentation errors, misdiagnosis, and missing data, also…
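A minimal sketch with a hypothetical 2x2 table and assumed sensitivity/specificity values, showing that non-differential misclassification of exposure pulls the odds ratio toward the null, while differential misclassification (e.g. better recall of exposure among cases) can push it away from the true value:

```python
# Minimal sketch (hypothetical counts) of non-differential vs differential
# misclassification of exposure in a case-control study.

def odds_ratio(a, b, c, d):
    """OR for a 2x2 table: exposed/unexposed cases (a, b), exposed/unexposed controls (c, d)."""
    return (a / b) / (c / d)

def misclassify(exposed, unexposed, sensitivity, specificity):
    """Apply imperfect exposure measurement to a group's true exposed/unexposed counts."""
    measured_exposed = exposed * sensitivity + unexposed * (1 - specificity)
    measured_unexposed = exposed * (1 - sensitivity) + unexposed * specificity
    return measured_exposed, measured_unexposed

# True table: OR = (300/200) / (150/350) = 3.5
cases, controls = (300, 200), (150, 350)
print(f"True OR: {odds_ratio(*cases, *controls):.2f}")

# Non-differential: same sensitivity/specificity (0.8/0.9) in cases and controls.
nd_cases = misclassify(*cases, 0.8, 0.9)
nd_controls = misclassify(*controls, 0.8, 0.9)
print(f"Non-differential OR: {odds_ratio(*nd_cases, *nd_controls):.2f}")  # closer to 1

# Differential: cases recall exposure better (0.95) than controls (0.7) -> inflated OR.
d_cases = misclassify(*cases, 0.95, 0.9)
d_controls = misclassify(*controls, 0.7, 0.9)
print(f"Differential OR: {odds_ratio(*d_cases, *d_controls):.2f}")
```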
INFORMATION BIAS, observer, interviewer, reporting, recall
Observer bias: prior knowledge of expected outcome influences the way the information is
collected, measured or interpreted
Interviewer bias: leading questions which may systematically influence responses given
Reporting bias: individuals may selectively suppress or reveal information
e.g. people living near telecommunication towers might report adverse effects because of
hypersensitivity about cancer threat
Recall bias: when the information provided on exposure differs between those with & without the disease/outcome
Particularly problematic in case-control & retrospective cohort studies
e.g. individuals with cancer may be more likely to recall exposure to toxic chemicals than controls
INFORMATION BIAS – HAWTHORNE EFFECT
Participants may change their behaviour because they know they are being observed or studied, which can distort the measured outcomes
REASONS FOR MISCLASSIFIED INFORMATION
Information bias arises from measurement error
Poorly worded, ambiguous questions
Instruments developed in one setting are not appropriate to another setting
Incriminating or personal questions (e.g. IV drug use, sex-related)
Multiple interviewers can lead to systematic errors in misclassification
Self-reported, telephone and face-to-face surveys can provide startlingly different results
Blinding can reduce observer/interviewer bias