Evidence-Based Medicine (EBM) Flashcards
What is “odds ratio”?
Odds ratio (OR) = odds of exposure in cases / odds of exposure in the control group.
It is the ratio of the odds of developing the condition in the exposed group to the odds of developing the condition in a non-exposed control group.
A value of “1” indicates that the condition/event is equally likely to occur in both groups.
A value greater than “1” indicates the condition/event is more likely to occur in the first group (a positive association between treatment and outcome).
A value less than “1” indicates the condition/event is less likely to occur in the first group (a negative association between treatment and outcome).
The value can exaggerate risk when the outcome is common.
The OR is the value used in case-control studies.
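As a minimal sketch (Python, with hypothetical 2x2 counts, not data from any real study), the OR calculation looks like this:

```python
# Odds ratio from a 2x2 case-control table.
#                 Cases   Controls
# Exposed           a        b
# Not exposed       c        d
def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """OR = (odds of exposure in cases) / (odds of exposure in controls) = (a/c) / (b/d)."""
    return (a / c) / (b / d)

# Hypothetical counts: 40 of 100 cases were exposed vs 20 of 100 controls.
print(odds_ratio(a=40, b=20, c=60, d=80))  # (40/60) / (20/80) = 2.67 -> positive association
```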
What is “relative risk (RR)”?
Relative risk (RR)= Risk in treatment group/Risk in control group.
The risk of an event or development of a condition relative to exposure.
The risk of developing a condition when exposed compared to someone who has never been exposed.
The ratio of risk in an event in people with a specific characteristic to those without the characteristic.
Value equal to “1”= no association between treatment and outcome.
Value greater than “1” indicates a positive association between treatment and outcome.
Value less than “1” indicates a negative association between treatment and outcome.
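A minimal sketch in Python, using hypothetical trial counts (not from a real study):

```python
# Relative risk from event rates in two groups.
def relative_risk(events_treated: int, n_treated: int,
                  events_control: int, n_control: int) -> float:
    """RR = (risk in treatment group) / (risk in control group)."""
    return (events_treated / n_treated) / (events_control / n_control)

# Hypothetical trial: 20/100 events on treatment vs 25/100 on control.
print(relative_risk(20, 100, 25, 100))  # 0.8 -> value < 1, a negative association
```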
What is “relative risk reduction (RRR)”?
RRR= 1 - RR (as a proportion)
OR
RRR= 100% - RR (as a %)
For example: 1 - 0.8 = 0.2, or 20%
The proportional reduction in the rate of bad events in the experimental group compared with the control group.
The RRR is easier to visualize than the relative risk (RR).
The actual magnitude of an intervention’s effect can be misleading if one only uses the RRR, because the RRR does not differentiate between large and small absolute differences between the control group and the test group; therefore we also calculate the Absolute Risk Reduction (ARR).
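Continuing the 1 - 0.8 example above, a one-line sketch in Python:

```python
# RRR = 1 - RR, using the hypothetical RR of 0.8 from the example above.
rr = 0.8
rrr = 1 - rr
print(f"RRR = {rrr:.0%}")  # 20%; proportional, so it hides the absolute event rates
```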
What is “absolute risk reduction (ARR)”?
The absolute difference by which a therapy reduces the risk of bad outcomes between the control group and the experimental group.
ARR % = control event rate (CER) % - treatment event rate (TER) %
For example, 25% - 20% = 5%
There was a 5% lower incidence of MI in the treatment group compared to the control group.
Drug companies like to quote relative risk reduction values because they look more impressive than Absolute Risk Reduction values.
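A small sketch using the hypothetical rates above (25% vs 20%) to show why the RRR sounds better than the ARR:

```python
# ARR = CER - TER, with the hypothetical event rates from the example above.
cer = 0.25  # control event rate
ter = 0.20  # treatment (experimental) event rate
arr = cer - ter
rrr = (cer - ter) / cer
print(f"ARR = {arr:.0%}, RRR = {rrr:.0%}")  # ARR = 5% vs RRR = 20%; the RRR sounds better
```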
What is the “number needed to treat (NNT)”?
The number of patients needed to be treated to achieve one additional favourable outcome; the number of patients that need to be treated for a given amount of time to prevent one adverse outcome.
NNT= 100/Absolute Risk Reduction (ARR)
NNT= 100/5= 20
*For this example, the study was 2 years long.
Outcome= 20 patients would need to use this drug for 2 years to prevent 1 MI…
Is it cost effective?
Drug for 1 person = $200/year X 2 years = $400
Drug for 20 people = $4000/year X 2 years = $8000
Therefore, it costs $8000 to prevent one MI.
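The same worked example as a short Python sketch (the $200/year cost and 2-year duration are the hypothetical numbers from above):

```python
# NNT and the cost of preventing one event, using the example above.
arr_percent = 5                 # absolute risk reduction, %
nnt = 100 / arr_percent         # = 20 patients
cost_per_patient_year = 200     # $/year, hypothetical
study_years = 2
cost_to_prevent_one_mi = nnt * cost_per_patient_year * study_years
print(nnt, cost_to_prevent_one_mi)  # 20 patients, $8000 to prevent one MI
```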
What is “p-value”?
The probability (‘p’) that the results are due to chance… “Fluke” or “False positive”.
The smaller the p-value the more confident we are that the result is a true effect and not a false positive.
p= less than 0.05 gives statistical significance; less than a 5% chance of a fluke.
Lower p-values mean that the result is less likely due to chance.
The size of the p-value is influenced by the size of the effect and the number of people in the study.
p-values do not determine if one treatment is better than another (for example p=0.01 vs p=0.001).
You cannot say whether one treatment was worse or better than another; all you can say is that both treatments met statistical significance –> keep this in mind when looking at comparisons.
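As a sketch of where a p-value comes from in practice, here is Fisher’s exact test on a hypothetical 2x2 table using scipy (the counts are made up for illustration):

```python
# p-value for a 2x2 table via Fisher's exact test.
from scipy.stats import fisher_exact

table = [[20, 80],   # treatment group: 20 events, 80 non-events
         [35, 65]]   # control group:   35 events, 65 non-events
or_est, p_value = fisher_exact(table)
print(p_value)           # roughly 0.02-0.03 for these counts
print(p_value < 0.05)    # True -> statistically significant at the conventional 5% level
```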
What are “statistical errors”?
Studies need to include large numbers of subjects in order to reduce statistical errors.
There are two types of statistical errors:
1) Type I or Alpha errors: in this type of error you think that there is a risk or difference when there truly is none; the null hypothesis (“Ho”) is rejected when it is actually true, so it is a false positive. Such a test would have poor specificity… For example, a blood test that indicates you have a disease when in fact you do not, providing a FALSE POSITIVE result.
Null hypothesis rejected when it is true.
Type I errors produce falsely significant results.
2) Type II or Beta errors: you miss a real risk. There truly was a risk or difference and you missed it, concluding that there is no problem; the null hypothesis is not rejected when it should have been. For example, a pregnancy test that indicates you are not pregnant when in actual fact you are, providing a FALSE NEGATIVE result.
Failing to reject the null hypothesis when it is false.
A test that is highly sensitive will have low type II error and a test with poor sensitivity is likely to produce type II errors.
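A simulation sketch of both error types (all parameters here, including the 0.3 SD effect size and n = 30 per group, are arbitrary choices for illustration):

```python
# Estimating Type I (alpha) and Type II (beta) error rates with repeated t-tests.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n, trials, alpha = 30, 2000, 0.05

# Type I: both groups drawn from the SAME distribution, so the null is true.
false_pos = sum(ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue < alpha
                for _ in range(trials))
print("Type I rate:", false_pos / trials)   # ~0.05, matching alpha by construction

# Type II: a real difference in means (null is false) that small samples often miss.
false_neg = sum(ttest_ind(rng.normal(0, 1, n), rng.normal(0.3, 1, n)).pvalue >= alpha
                for _ in range(trials))
print("Type II rate:", false_neg / trials)  # high at n=30; shrinks as n grows
```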
What is a “confidence interval”?
A range where the true value is likely to lie.
EG: a 95% CI is commonly used.
When we are comparing a difference between groups and the confidence interval does not contain “0”, we can say that the results are statistically significant.
The result of a trial is a point estimate.
These point estimates will vary each time an experiment is performed.
The CI represents this variability.
Statistically significant if it does not cross zero when comparing differences between the means of the two populations.
For example, ARR= 5%, CI (2-20%)
- this would be statistically significant
- the real value falls within this range
- must determine if the lower end of the CI is clinically significant (if not, then you may not want to use this drug)
For example, ARR= 5%, CI (-10% to 30%)
- not statistically significant
- real value falls between -10 and 30
- negative usually means harmful
- is the upper end clinically significant? If yes, then we may be missing an important treatment effect
Sometimes, if something is not statistically significant, you may ask yourself whether the trial simply did not include enough people to find a difference.
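A sketch of a 95% CI for an ARR using the usual normal approximation (the counts are hypothetical, picked so the ARR is 5% as in the example above):

```python
# 95% CI for an ARR (difference in two proportions), normal approximation.
import math

def arr_ci(events_c, n_c, events_t, n_t, z=1.96):
    p_c, p_t = events_c / n_c, events_t / n_t
    arr = p_c - p_t
    se = math.sqrt(p_c * (1 - p_c) / n_c + p_t * (1 - p_t) / n_t)
    return arr, (arr - z * se, arr + z * se)

arr, (lo, hi) = arr_ci(events_c=25, n_c=100, events_t=20, n_t=100)
print(f"ARR = {arr:.0%}, 95% CI ({lo:.0%} to {hi:.0%})")
# The CI crosses 0 here -> not statistically significant; a larger trial would narrow it.
```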
What is the range of “clinical indifference”?
Not a statistical calculation.
Medical judgement as to the clinical significance of a result… Is the treatment worth it to you as a practitioner? Are the results compelling enough or is the benefit great enough for you to choose it?
Confidence intervals excluding the null value but lying within the range of clinical indifference are statistically significant but not clinically significant.
For both statistical and clinical significance the confidence interval must lie outside the range of clinical indifference AND exclude the null value.
What is “clinical significance”?
A narrower CI means more precision and also a lower p-value.
Use your judgement to determine if the results have clinical significance.
Statistical significance does not equal clinical significance.
If a result is not statistically significant but may be clinically meaningful, you may suspect that the trial was not large enough and that it should be repeated with a larger test population… The larger the study, the more power it has to detect an effect; larger studies usually have narrower confidence intervals.
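A closing sketch of that last point: the same hypothetical 5% ARR (25% vs 20% event rates), estimated at increasing sample sizes, gives a narrower CI each time:

```python
# The same 5% ARR at different (hypothetical) sample sizes per arm.
import math

p_c, p_t = 0.25, 0.20
for n in (100, 400, 1600):
    se = math.sqrt(p_c * (1 - p_c) / n + p_t * (1 - p_t) / n)
    lo, hi = (p_c - p_t) - 1.96 * se, (p_c - p_t) + 1.96 * se
    print(f"n={n} per arm: 95% CI ({lo:+.1%} to {hi:+.1%})")
# n=100 and n=400 cross 0 (not significant); by n=1600 the CI excludes 0.
```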