Evidence based medicine Flashcards
ANOVA
statistical test to demonstrate statistically significant differences between the means of several groups.
allows the comparison of more than just two means.
assumes that the variable is normally distributed.
works by comparing the variance of the means.
distinguishes between within group variance (the variance of the observations within each group) and between group variance (the variance between the separate sample means).
null hypothesis assumes that all of the group means are equal, in which case the between group variance should be no larger than the within group variance.
The test is based on the ratio of these two variances (known as the F statistic)
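The F statistic can be computed by hand from the two variance estimates described above; a minimal sketch in Python (the three groups of data are illustrative):

```python
from math import fsum

def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group / within-group mean square."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total number of observations
    grand_mean = fsum(x for g in groups for x in g) / n
    # Between-group sum of squares: spread of the group means around the grand mean
    ssb = fsum(len(g) * (fsum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations inside each group
    ssw = fsum((x - fsum(g) / len(g)) ** 2 for g in groups for x in g)
    msb = ssb / (k - 1)                  # between-group mean square
    msw = ssw / (n - k)                  # within-group mean square
    return msb / msw

f = one_way_anova_f([[1, 2, 3], [2, 3, 4], [5, 6, 7]])   # F = 13.0 here
```

A large F (between-group variance much bigger than within-group variance) is evidence against the null hypothesis; in practice the F statistic is compared against the F distribution with (k - 1, n - k) degrees of freedom to obtain a p-value.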
three types of association.
Spurious - an association that has arisen by chance and is not real
Indirect - the association is due to the presence of another factor (a confounding variable)
Direct - a true association not linked by a third (confounding) variable
How to establish causation
Bradford Hill Causal Criteria (1) are used; these include:
Strength - The stronger the association the more likely it is to be truly causal.
Temporality - Does the exposure precede the outcome?
Specificity - Is the suspected cause associated with a specific outcome/ disease?
Coherence - Does the association fit with other biological knowledge?
Consistency - Is the same association found in many studies?
Selection bias
Error in assigning individuals to groups leading to differences which may influence the outcome.
sampling bias - subjects are not representative of the population.
volunteer bias - those who volunteer may differ systematically from the target population, e.g. people at higher, or lower, risk may be more likely to participate in the study.
non-responder bias - non-responders may differ systematically from responders, e.g. in a dietary survey it is likely that the people who didn't respond would have poorer diets than those who did
loss to follow up bias - participants who drop out of a study may differ systematically from those who remain.
prevalence/incidence bias (Neyman bias): when a study is investigating a condition that is characterised by early fatalities or silent cases.
admission bias (Berkson’s bias): cases and controls in a hospital case control study are systematically different from one another because the combination of exposure to risk and occurrence of disease increases the likelihood of being admitted to the hospital
healthy worker effect - employed populations tend to be healthier than the general population, biasing occupational studies.
Recall bias
Difference in the accuracy of the recollections retrieved by study participants, possibly due to whether they have the disorder or not.
A particular problem in case-control studies.
Publication bias
Failure to publish results from valid studies, often as they showed a negative or uninteresting result. Important in meta-analyses where studies showing negative results may be excluded.
Work-up bias (verification bias)
Occurs when new diagnostic tests are compared with gold standard tests: clinicians may be reluctant to order the gold standard test unless the new test is positive, as the gold standard test may be invasive (e.g. tissue biopsy).
This approach can distort results, altering values such as sensitivity and specificity. Sometimes work-up bias cannot be avoided; in these cases it must be adjusted for by the researchers.
Expectation bias (Pygmalion effect)
Only a problem in non-blinded trials. Observers may subconsciously measure or report data in a way that favours the expected study outcome.
Hawthorne effect
Describes a group changing its behaviour due to the knowledge that it is being studied
Late-look bias
Gathering information at an inappropriate time e.g. studying a fatal disease many years later when some of the patients may have died already
Procedure bias
Occurs when subjects in different groups receive different treatment
Lead-time bias
Occurs when two tests for a disease are compared, the new test diagnoses the disease earlier, but there is no effect on the outcome of the disease
Clinical trial phase 0
Exploratory studies. Very small number of participants; aims to assess how a drug behaves in the human body. Used to assess pharmacokinetics and pharmacodynamics
Phase I clinical trial
Safety assessment
Determines side-effects prior to larger studies. Conducted on healthy volunteers
Phase II clinical trial
Assess efficacy
Involves a small number of patients affected by the particular disease
May be subdivided into
IIa - assesses optimal dosing
IIb - assesses efficacy
Phase III clinical trial
Assess effectiveness
Typically involves hundreds to thousands of people, often as part of a randomised controlled trial comparing the new treatment with established treatments
Phase IV clinical trial
Postmarketing surveillance
Monitors for long-term effectiveness and side-effects
Confidence interval
a range of values within which the true effect of an intervention is likely to lie
The likelihood of the true effect lying within the confidence interval is determined by the confidence level. For example a confidence interval at the 95% confidence level means that the confidence interval should contain the true effect of intervention 95% of the time.
standard error of the mean (SEM)
a measure of the spread expected for the mean of the observations - i.e. how ‘accurate’ the calculated sample mean is from the true population mean
SEM = SD / square root (n)
where SD = standard deviation and n = sample size
therefore the SEM gets smaller as the sample size (n) increases
A 95% confidence interval:
lower limit = mean - (1.96 * SEM)
upper limit = mean + (1.96 * SEM)
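The SEM and 95% confidence interval formulas above can be applied directly; a minimal sketch in Python (the sample observations are illustrative):

```python
from math import sqrt
from statistics import mean, stdev

sample = [2, 4, 4, 4, 5, 5, 7, 9]        # illustrative observations

m = mean(sample)                          # sample mean
sem = stdev(sample) / sqrt(len(sample))   # SEM = SD / sqrt(n)

# 95% confidence interval: mean +/- 1.96 * SEM
lower = m - 1.96 * sem
upper = m + 1.96 * sem
```

Note that `statistics.stdev` uses the sample (n - 1) denominator, which is the appropriate SD estimate when the population SD is unknown. As the formula shows, quadrupling the sample size halves the SEM and so narrows the confidence interval.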
Confounding
refers to a variable which is associated with both the exposure and the outcome within a study, leading to spurious results.
Correlation
used to test for association between variables
summarised by the correlation coefficient (r), which indicates how closely the points lie to a line drawn through the plotted data. For parametric data this is called Pearson's correlation coefficient and can take any value between -1 and +1
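Pearson's r can be computed directly from the paired deviations about each mean; a minimal sketch in Python (the paired data are illustrative):

```python
from math import sqrt
from statistics import mean

def pearson_r(x, y):
    """Pearson's correlation coefficient for paired samples x and y."""
    xbar, ybar = mean(x), mean(y)
    # Covariance term over the product of the two spreads
    num = sum((a - xbar) * (b - ybar) for a, b in zip(x, y))
    den = sqrt(sum((a - xbar) ** 2 for a in x) * sum((b - ybar) ** 2 for b in y))
    return num / den

r = pearson_r([1, 2, 3, 4, 5], [2, 1, 4, 3, 5])   # r = 0.8 for this data
```

A value of +1 indicates a perfect positive linear relationship, -1 a perfect negative one, and 0 no linear association.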
Regression
not used unless the two variables have first been shown to correlate
linear regression may be used to predict how much one variable changes when a second variable is changed. A regression equation may be formed, y = a + bx, where
y = the variable being calculated
a = the intercept value, when x = 0
b = the slope of the line or regression coefficient. Simply put, how much y changes for a given change in x
x = the second variable
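The coefficients a and b of the regression equation y = a + bx can be estimated by least squares; a minimal sketch in Python (the paired data are illustrative):

```python
from statistics import mean

def linear_regression(x, y):
    """Least-squares estimates of intercept a and slope b in y = a + bx."""
    xbar, ybar = mean(x), mean(y)
    # Slope b = covariance(x, y) / variance(x)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
        / sum((xi - xbar) ** 2 for xi in x)
    a = ybar - b * xbar      # the fitted line passes through (xbar, ybar)
    return a, b

a, b = linear_regression([1, 2, 3, 4], [3, 5, 7, 9])   # fits y = 1 + 2x
```

Once a and b are estimated, predicting y for a new x is a single evaluation of a + b * x; the slope b quantifies how much y changes per unit change in x, as described above.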