Epi Flashcards
Which study design controls all confounders?
RCT
Stratification
Analyses patient subgroups separately, then combines the estimates as a weighted average
Multivariable regression
Takes into account several confounders at the same time
Single pooled estimate from stratification
Mantel-Haenszel
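A minimal sketch of the Mantel-Haenszel pooled odds ratio across strata; the stratum counts below are invented for illustration:

```python
# Mantel-Haenszel pooled odds ratio. Each stratum is a 2x2 table
# (a, b, c, d): a = exposed cases, b = exposed controls,
# c = unexposed cases, d = unexposed controls.
def mantel_haenszel_or(strata):
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Two hypothetical strata, each with a stratum-specific OR of 4:
strata = [(10, 20, 5, 40), (20, 40, 10, 80)]
print(mantel_haenszel_or(strata))  # pools to 4.0
```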
Ecological fallacy
Wrongly attributing the average characteristics of a population to the individuals within it
What can you measure in cross sectional?
Prevalence
NOT incidence
What do we calculate with case control?
Odds ratio
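The odds ratio comes straight from the 2x2 table; the counts below are hypothetical:

```python
# Case-control 2x2 table (hypothetical counts):
#              cases  controls
# exposed       a=30     b=70
# unexposed     c=10     d=90
a, b, c, d = 30, 70, 10, 90
odds_ratio = (a * d) / (b * c)   # cross-product ratio
print(round(odds_ratio, 2))      # (30*90)/(70*10) = 3.86
```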
Bias in case control
Reverse causality
Selection bias
Measurement (recall and interviewer)
Bias in ecological
Selection
Measurement
Reverse causality
Trend test
Statistical test for a linear increase or decrease in risk with increasing exposure
Outcome is binary
Trend test 2 effects
Dose response effect
Threshold effect
Cohort bias
Reverse causality
Selection
Loss to follow-up
Recall
Interviewer
Strict inclusion or exclusion criteria in an RCT cause
Poor external validity
Good chance of detecting a clinically significant effect
Power more than 80%
Not achieving planned sample size
High risk of missing a clinically important effect
Can only be published if it shows evidence of an effect
Internal validity
Whether the intervention truly caused the observed outcome
Construct validity
If what you observed is what you wanted to observe
Or what you did is what you wanted to do.
Minimum effect size
The study should be big enough to detect the smallest effect that is clinically important
The probability of correctly rejecting the null hypothesis when the treatment has an effect
Power
Outcome reporting bias
A form of publication bias
Only outcomes that support the hypothesis are presented
Contamination
Cluster rcts
Interim analysis
Used if the study runs over several years
Performed by a data monitoring committee
Disadvantages of interim analyses
Open to abuse
Overestimate the treatment effect
So completed by a confidential committee independent of the study researchers
Number needed to harm
Round down
Number needed to benefit
Round up
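The rounding conventions above can be sketched as follows, assuming NNT = 1/ARR and NNH = 1/ARI (the risk differences are invented):

```python
import math

# Conservative rounding convention: round NNT (benefit) up and NNH (harm)
# down, so both err on the side of making the treatment look less favourable.
def nnt(arr):                 # arr = absolute risk reduction
    return math.ceil(1 / arr)

def nnh(ari):                 # ari = absolute risk increase
    return math.floor(1 / ari)

print(nnt(0.23))  # 1/0.23 ≈ 4.35 -> round up to 5
print(nnh(0.23))  # 1/0.23 ≈ 4.35 -> round down to 4
```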
Bias in rct
Selection
Performance
Detection
Attrition
Allocation concealment is not
Blinding
Attrition bias
Use an ITT (intention-to-treat) analysis
Patients are analysed in the groups to which they were originally allocated, regardless of whether they completed treatment
Only unbiased, unconfounded estimate of effectiveness
Reflects reality
Public health impact
Missing data can be assessed with
Sensitivity analysis
Sensitivity analysis
Primary analysis
Then repeated with missing data filled in (assumed)
If results are the same as the primary analysis then they are robust
If different then must use caution
ITT minimizes
Attrition bias
CONSORT framework
Framework for reporting trials
Forest plot boxes
Box size is proportional to study weight, drawing attention to studies with the greatest weight
Forest plot diamond
Overall summary estimate
Vertical unbroken line forest plot
Null value (line of no effect)
Data extraction done by
2+ independent observers
PRISMA statement
Guidance on what to include in systematic review
Examples of fixed effect
Mantel-Haenszel
Basics of fixed effect
Assumes one true effect; pooled estimate is a weighted average
Any deviation is due to chance or sampling error
Only looks at variation within samples
Examples of random effects
DerSimonian-Laird
Basics of random effects
Assumes heterogeneity
Accounts for both within-study and between-study variance
Wider range
Weighted average
Bigger weight to bigger studies
Weights use the inverse of the variance of treatment effect
Between study variance
Tau-squared (τ²)
Derived from Cochran's Q
Fixed effect weight
W=1/v
Random effects weight
W = 1/(V + τ²), which includes between-study variance
Random effect weights are
Smaller and closer to each other than fixed
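The contrast between fixed-effect and random-effects weights can be illustrated with made-up variances:

```python
# Comparing fixed-effect and random-effects inverse-variance weights.
# v = within-study variance, tau2 = between-study variance (tau-squared);
# all numbers here are invented for illustration.
variances = [0.04, 0.10, 0.25]
tau2 = 0.08

fixed_w = [1 / v for v in variances]            # W = 1/V
random_w = [1 / (v + tau2) for v in variances]  # W = 1/(V + tau^2)

def relative(weights):
    total = sum(weights)
    return [w / total for w in weights]

print(relative(fixed_w))   # larger studies dominate
print(relative(random_w))  # weights are closer to each other
```

Every random-effects weight is smaller than its fixed-effect counterpart, and the relative weights are more evenly spread, matching the card above.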
Fixed effect
Assumes studies are all measuring same treatment effect
Time-trade-off
Respondents are asked to choose between remaining in a state of ill health for a period of time, or being restored to perfect health but having a shorter life expectancy.
Standard gamble
Respondents are asked to choose between remaining in a state of ill health for a period of time, or choosing a medical intervention which has a chance of either restoring them to perfect health, or killing them.
Visual analogue scale
Respondents are asked to rate a state of ill health on a scale from 0 to 100, with 0 representing being dead and 100 representing perfect health. This method has the advantage of being the easiest to ask, but is the most subjective.
the cost effectiveness plane
Cost on the y-axis, effectiveness on the x-axis
Dominant
more effective and less costly (South-East)
Dominated
less effective and more costly (North-West)
Incremental cost-effectiveness
difference in cost divided by difference in effectiveness.
Up to the reader to decide whether this is cost-effective.
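A worked ICER with hypothetical costs and QALYs:

```python
# Incremental cost-effectiveness ratio, new treatment vs. standard care
# (all figures invented for illustration):
cost_new, cost_std = 12_000.0, 8_000.0   # mean cost per patient
qaly_new, qaly_std = 6.5, 6.0            # mean QALYs per patient

icer = (cost_new - cost_std) / (qaly_new - qaly_std)
print(icer)  # 4000 / 0.5 = 8000.0 per QALY gained
```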
ICER can be misleading unless…
one intervention is more expensive and more effective.
Net monetary benefit
requires knowing how much the NHS is willing to pay per QALY
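A sketch of net monetary benefit; the willingness-to-pay threshold and increments below are illustrative, not actual NHS figures:

```python
# Net monetary benefit: NMB = wtp * delta_E - delta_C.
wtp = 20_000.0     # assumed willingness to pay per QALY
delta_e = 0.5      # incremental QALYs
delta_c = 4_000.0  # incremental cost

nmb = wtp * delta_e - delta_c
print(nmb)  # 10000.0 - 4000.0 = 6000.0; positive => cost-effective at this threshold
```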
Statistical variability in economic evaluation
due to small sample sizes, high variability of costs and missing data
Cost effectiveness acceptability curve
a sensitivity analysis in economic evaluation
one way sensitivity analysis
estimates for each uncertainty varied one at a time to investigate the impact on the results.
scenario analysis
best case
worst case
sensitivity
probability of a + test in people with the disease
specificity
probability of a - test in people without the disease
SnNout
if a test has high sensitivity,
a negative result rules the disease out
SpPin
if a test has high specificity,
a positive result rules the disease in
when 2 tests are equally costly and convenient we can use the
Likelihood ratio
NPV
probability of being disease free if test result is negative
spectrum bias
the diagnostic test is evaluated only on obvious ("barn door") cases versus healthy controls
work-up bias
when the gold standard is expensive, risky or unpleasant,
only cases who test + receive the gold standard, so we underestimate the false negatives and overestimate the true positives
likelihood ratio (+)
sensitivity/(1-spec)
likelihood ratio (-)
(1-sensitivity)/spec
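The measures above can be computed from a single hypothetical 2x2 table:

```python
# Test-accuracy measures from a hypothetical 2x2 table.
tp, fn = 90, 10     # people WITH the disease (true +, false -)
fp, tn = 30, 170    # people WITHOUT the disease (false +, true -)

sensitivity = tp / (tp + fn)              # P(+ test | disease)
specificity = tn / (tn + fp)              # P(- test | no disease)
npv = tn / (tn + fn)                      # P(no disease | - test)
lr_pos = sensitivity / (1 - specificity)  # sens / (1 - spec)
lr_neg = (1 - sensitivity) / specificity  # (1 - sens) / spec

print(sensitivity, specificity, npv, lr_pos, lr_neg)
```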
likelihood ratios
the further away from the null (1) the more informative the test.
LR=1
equal to chance
LR=1.5
greater than chance