stats Flashcards
odds
yes/no = (a+c)/(b+d)
odds ratio
Odds treatment/Odds control
ad/bc
use in case control/cross section
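A quick numeric sketch (the 2x2 counts below are hypothetical), showing that odds treatment / odds control equals the ad/bc shortcut:

```python
# Hypothetical 2x2 table:
#              outcome yes   outcome no
# treatment        a=10          b=90
# control          c=20          d=80
a, b, c, d = 10, 90, 20, 80

odds_treatment = a / b                 # 10/90
odds_control = c / d                   # 20/80
odds_ratio = odds_treatment / odds_control

# The cross-product shortcut ad/bc gives the same value
assert abs(odds_ratio - (a * d) / (b * c)) < 1e-12
print(round(odds_ratio, 3))            # 0.444
```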
risk
yes/(yes+no)
(a+c)/(a+b+c+d)
relative risk
Risk treatment/Risk control
a/(a+b) divided by c/(c+d)
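The same hypothetical table, this time computing relative risk; note the OR (0.444 above) and RR differ unless the outcome is rare:

```python
a, b, c, d = 10, 90, 20, 80  # hypothetical 2x2 counts as before

risk_treatment = a / (a + b)   # 10/100 = 0.10
risk_control = c / (c + d)     # 20/100 = 0.20
relative_risk = risk_treatment / risk_control
print(relative_risk)           # 0.5
```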
Confidence interval
range of plausible values for some summary measure
In repeated studies 95% of the confidence intervals will cover the true value of the summary measure
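A minimal sketch of a 95% confidence interval for a proportion, using the Wald/normal approximation (the counts are made up for illustration):

```python
from statistics import NormalDist

# Hypothetical study: 30 events out of 150 subjects
events, n = 30, 150
p = events / n
z = NormalDist().inv_cdf(0.975)          # ~1.96 for a 95% interval
se = (p * (1 - p) / n) ** 0.5            # standard error of the proportion
lower, upper = p - z * se, p + z * se
print(f"{p:.2f} (95% CI {lower:.2f} to {upper:.2f})")
```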
sensitivity
proportion of diseased subjects correctly identified (true positive rate)
A/(A+C)
specificity
proportion of non-diseased subjects correctly identified (true negative rate)
D/(D+B)
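Sketch of sensitivity and specificity from a hypothetical diagnostic 2x2 table (A = true positives, B = false positives, C = false negatives, D = true negatives):

```python
# Hypothetical diagnostic 2x2 table:
#            disease +   disease -
# test +        A=90        B=30
# test -        C=10        D=170
A, B, C, D = 90, 30, 10, 170

sensitivity = A / (A + C)   # true positives / all with disease
specificity = D / (D + B)   # true negatives / all without disease
print(sensitivity, specificity)   # 0.9 0.85
```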
Positive predictive value
probability that a subject with a positive test result actually has the disease
A/(A+B)
(sens × prev) / (sens × prev + (1 − spec) × (1 − prev))
Depends on sensitivity and specificity
Depends on prevalence
Negative predictive value
probability that a subject with a negative test result does not have the disease
D/(C+D)
(spec × (1 − prev)) / ((1 − sens) × prev + spec × (1 − prev))
Depends on sensitivity and specificity
Depends on prevalence
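The two Bayes formulas above, evaluated for hypothetical test characteristics:

```python
# Hypothetical test: 90% sensitive, 85% specific, 10% prevalence
sens, spec, prev = 0.90, 0.85, 0.10

ppv = (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))
npv = (spec * (1 - prev)) / ((1 - sens) * prev + spec * (1 - prev))
print(round(ppv, 3), round(npv, 3))   # 0.4 0.987
```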
If the prevalence is low, false positives are likely (so the PPV falls), even if sensitivity is high
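A quick illustration of the prevalence effect: holding a (hypothetical) test's characteristics fixed, the PPV collapses as prevalence falls:

```python
sens, spec = 0.99, 0.95  # a very good hypothetical test

for prev in (0.50, 0.10, 0.01):
    ppv = (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))
    print(f"prevalence {prev:.0%}: PPV {ppv:.0%}")
# At 1% prevalence most positive results are false positives
```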
selection bias
the 2 groups in a study differ due to allocation flaws, drop-outs, or chance
Information bias
the information collected is incorrect
e.g. recall bias
confounding
a variable that influences both the dependent variable and independent variable, causing a spurious association
Deal with known confounders by stratifying
But how do we stratify confounders that we don't know about yet? Randomise
Intention-to-treat
- Analysing data according to the treatment assigned, rather than what treatment was actually administered
- Dropouts are included in the analysis
- Avoids selection bias due to unequal dropouts
- May underestimate treatment efficacy, as the treatment effect is diluted by:
* Dropouts
* Crossover
* Drop-ins to other treatments
type 1 error
Incorrectly finding a difference when there really isn’t one
false positive result
due to chance/bias
p value
The probability of a type 1 error due to chance
→ anything less than a 5% chance of a type 1 error (ie any p-value less than 0.05) is acceptable enough for the observed difference to be considered significant
type 2 error
Type 2 error refers to finding no difference between 2 groups, when in fact one exists
false negative result
Most commonly this is due to the sample size being too small
Power of study
The chance of a type 2 error is calculated at the outset of a study, and is denoted by beta (β)
* The power of a study is 1 − β
Power is defined as the chance of finding a significant difference between 2 groups, when such a difference exists
3 factors affect the power of a study:
* The sample size
* The magnitude of the difference between the groups being studied
* The p-value required for a significant result (ie the ‘α’)
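A rough power sketch for comparing two proportions, using the normal approximation (all numbers are hypothetical); larger n, a bigger difference, or a more lenient α would each raise the power:

```python
from statistics import NormalDist

nd = NormalDist()
p1, p2, n = 0.20, 0.30, 200   # hypothetical event rates, subjects per group
alpha = 0.05

# Standard error of the difference between the two proportions
se = (p1 * (1 - p1) / n + p2 * (1 - p2) / n) ** 0.5
z_alpha = nd.inv_cdf(1 - alpha / 2)
power = nd.cdf(abs(p1 - p2) / se - z_alpha)
print(round(power, 2))
```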
case control
Most useful for rare conditions, with relatively common exposures
patients with the disease are matched with controls
compare exposure between the disease and control groups
cohort study
A group of normal subjects is identified and followed over time to see if they develop a disease
best for diseases with relatively high incidence and short lead-time
P < 0.05
means that there is less than a 5% chance of observing the trial result due to random sampling if the null hypothesis were true
ie random sampling would produce a smaller difference than we measured more than 95% of the time
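This interpretation can be checked by simulation under the null hypothesis (the group size, shared rate, and observed difference below are all hypothetical):

```python
import random

random.seed(0)
n, true_rate = 100, 0.5   # null hypothesis: both groups share the same rate
observed_diff = 0.14      # hypothetical difference seen in the trial

# Count how often random sampling alone produces a difference
# at least as large as the one observed
trials, extreme = 10_000, 0
for _ in range(trials):
    g1 = sum(random.random() < true_rate for _ in range(n)) / n
    g2 = sum(random.random() < true_rate for _ in range(n)) / n
    if abs(g1 - g2) >= observed_diff:
        extreme += 1
p_value = extreme / trials
print(p_value)   # a value near 0.05 under these assumptions
```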
relative risk reduction
1- RR
1 − (a/(a+b) divided by c/(c+d))
This is the MARKETED benefit
NNT
1/ARR
If the effect is small, the ARR will be very low (and the NNT very high)
- The ARR takes into account the total number of patients in the study, whereas the RRR cancels it out and can give deceptively impressive results
- ARR is a BETTER measure of benefit than RRR
- The best measure is NNT
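Worked example with hypothetical event rates, showing why the RRR looks more impressive than the ARR/NNT:

```python
# Hypothetical trial: event rate 4% on treatment vs 6% on control
risk_treatment, risk_control = 0.04, 0.06

rr = risk_treatment / risk_control
rrr = 1 - rr                          # relative risk reduction (the "marketed" benefit)
arr = risk_control - risk_treatment   # absolute risk reduction
nnt = 1 / arr                         # number needed to treat

print(f"RRR {rrr:.0%}, ARR {arr:.0%}, NNT {round(nnt)}")
# A "33% relative reduction" corresponds to treating 50 patients
# for one of them to benefit
```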