Biostats Flashcards
Type I error
study incorrectly rejects a null hypothesis that is true. The probability of a type I error is denoted by α, which is the significance level of the test. A higher α increases the likelihood of a type I error and decreases the likelihood of a type II error. The main effect of a smaller sample size is to increase the probability of a type II error rather than a type I error.
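A minimal simulation sketch (made-up effect size and sample sizes, using scipy's two-sample t-test) illustrating these tradeoffs:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def error_rates(n, alpha, effect=0.5, trials=2000):
    """Estimate type I and type II error rates by simulation (illustrative values only)."""
    type1 = type2 = 0
    for _ in range(trials):
        # Null hypothesis true: both groups drawn from the same distribution
        a, b = rng.normal(0, 1, n), rng.normal(0, 1, n)
        if stats.ttest_ind(a, b).pvalue < alpha:
            type1 += 1
        # Alternative true: second group shifted by the effect size
        a, b = rng.normal(0, 1, n), rng.normal(effect, 1, n)
        if stats.ttest_ind(a, b).pvalue >= alpha:
            type2 += 1
    return type1 / trials, type2 / trials

print(error_rates(n=50, alpha=0.05))  # baseline
print(error_rates(n=50, alpha=0.10))  # higher alpha: more type I, fewer type II errors
print(error_rates(n=20, alpha=0.05))  # smaller sample: type II rate rises, type I does not
```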
Type II error in relation to null hypothesis
study fails to reject a null hypothesis (H0) that is false.
what odds ratio means
- measure of association between an exposure and an outcome. It represents the odds that an outcome (eg, major cardiovascular event) will occur in the presence of a particular exposure (eg, intensive statin therapy) compared to the odds of that outcome in a control group.
- An OR >1 means that the exposure is associated with higher odds of the outcome and an OR <1 means that the exposure is associated with lower odds of the outcome.
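A minimal sketch of the calculation from a hypothetical 2x2 table (counts are made up):

```python
# 2x2 table (hypothetical counts):
#                 outcome   no outcome
# exposed            a=30        b=70
# unexposed          c=15        d=85
a, b, c, d = 30, 70, 15, 85

odds_exposed   = a / b   # odds of the outcome with the exposure
odds_unexposed = c / d   # odds of the outcome without the exposure
odds_ratio = odds_exposed / odds_unexposed   # = (a*d) / (b*c)
print(odds_ratio)  # >1 -> exposure associated with higher odds of the outcome
```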
interpretation of negative likelihood ratio
negative likelihood ratio (LR-) indicates how much a negative test result lowers the likelihood of disease; LR- = (1 - sensitivity) / specificity. The smaller the LR-, the less likely it is that the disease is actually present after a negative result.
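A short sketch of this formula with illustrative sensitivity and specificity values:

```python
sensitivity, specificity = 0.90, 0.80   # hypothetical test characteristics

lr_negative = (1 - sensitivity) / specificity
print(lr_negative)  # 0.125; the closer to 0, the more a negative result argues against disease
```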
Pearson chi-squared test use
tests for an association between categorical variables (eg, gender and disease status)
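A minimal example using scipy's chi2_contingency with made-up counts:

```python
from scipy.stats import chi2_contingency

# Rows: male, female; columns: disease, no disease (hypothetical counts)
table = [[20, 80],
         [35, 65]]

chi2, p, dof, expected = chi2_contingency(table)
print(chi2, p)  # a small p-value suggests an association between the two categorical variables
```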
paired t-test use
test the difference between 2 paired means; patients serve as their own control (eg, mean blood pressure before and after treatment in the same subjects).
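A minimal example using scipy's ttest_rel with made-up before/after values:

```python
from scipy.stats import ttest_rel

# Hypothetical blood pressure in the same 5 patients before and after treatment
before = [150, 142, 160, 155, 148]
after  = [140, 138, 150, 149, 142]

t, p = ttest_rel(before, after)
print(t, p)  # tests whether the mean of the paired differences is 0
```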
standardized incidence ratio
- measure used to determine if the occurrence of cancer in a small population is high or low relative to an EXPECTED value derived from a larger comparison population.
- calculated by dividing observed cases (OC) by expected cases (EC); the formula is SIR = OC / EC.
standardized mortality ratio
adjusted measure of overall mortality and is calculated by dividing the observed number of deaths in the population of interest (eg, miners) by the expected number derived from the reference population (“standard”)
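Both the SIR and the SMR follow the same observed/expected pattern; a tiny sketch with made-up counts:

```python
def observed_expected_ratio(observed, expected):
    """SIR or SMR: observed events divided by the number expected from a reference population."""
    return observed / expected

print(observed_expected_ratio(observed=12, expected=8))  # 1.5 -> 50% more cases/deaths than expected
```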
verification bias + how to avoid
- Study uses gold standard testing selectively to confirm a positive (or negative) result of preliminary testing (eg, it is not feasible to biopsy everyone, so some people are screened rather than biopsied). This can result in overestimates (or underestimates) of sensitivity (or specificity).
- avoid by performing gold standard testing in a random sample of participants with negative screening results (eg, in a cervical cancer screening study)
selection bias
results from the manner in which study participants are selected or lost to follow-up. Randomization in a clinical trial reduces selection bias.
observer bias
occurs when the observer responsible for recording results is influenced by prior knowledge about participants or study details. Blinded studies usually avoid this bias by preventing observers from knowing which treatment or intervention the participants are receiving; this leads to a more objective measurement of outcomes.
contamination bias
control group unintentionally receives the treatment or intervention, thereby reducing the difference in outcomes between the control and treatment groups.
attributable risk percent meaning
measure of excess risk. It estimates the proportion of the disease in exposed subjects that is attributed to exposure status.
Population attributable risk
estimates the proportion of disease in the population that is attributed to the exposure
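A short sketch contrasting attributable risk percent (denominator = risk in the exposed) with population attributable risk percent (denominator = risk in the whole population), using hypothetical risks:

```python
risk_exposed, risk_unexposed = 0.20, 0.05   # hypothetical incidence in exposed / unexposed groups
risk_population = 0.10                      # hypothetical overall incidence in the population

ar_percent  = (risk_exposed - risk_unexposed) / risk_exposed * 100        # excess risk among the exposed
par_percent = (risk_population - risk_unexposed) / risk_population * 100  # excess risk in the whole population

print(ar_percent, par_percent)  # 75.0, 50.0
```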
factorial study design (or fully crossed design)
type of experimental study design that utilizes ≥2 interventions and all combinations of these interventions
Pragmatic study
Seeks to determine whether an intervention works in real-life conditions.
cross-sectional study
type of observational study in which a specific population or group is studied at one specific point in time, therefore providing a cross section of the group at that particular time point.
Net clinical benefit measures
Measure of intervention’s possible benefit minus its possible harm.
eg, benefit (reduced risk of death from any cause, myocardial infarction, or stroke) minus harm (increased risk of intracranial bleeding).
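A tiny sketch with made-up absolute risks, assuming benefit and harm are expressed on the same per-100-patients scale:

```python
absolute_risk_reduction = 3.0   # eg, 3 fewer deaths/MIs/strokes per 100 treated (hypothetical)
absolute_risk_increase  = 1.0   # eg, 1 extra intracranial bleed per 100 treated (hypothetical)

net_clinical_benefit = absolute_risk_reduction - absolute_risk_increase
print(net_clinical_benefit)  # 2.0 net events prevented per 100 treated
```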
reasons why you do intent to treat analysis
- preserve randomization
- avoid the effects of crossover and dropout, which may break randomization and affect the outcome. For example, if the sickest patients drop out at a higher rate, even an ineffective treatment may appear beneficial if analysis is performed on only those who finished the treatment.
odds ratio vs. relative risk
odds ratio = case-control and cross-sectional studies
relative risk = cohort study
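A short sketch computing both from the same hypothetical cohort 2x2 table, to show how they differ (relative risk needs incidence, which only a cohort design provides):

```python
# 2x2 cohort table (hypothetical counts):
#                 disease   no disease
# exposed            a=30        b=70
# unexposed          c=15        d=85
a, b, c, d = 30, 70, 15, 85

relative_risk = (a / (a + b)) / (c / (c + d))   # ratio of incidences (cohort study)
odds_ratio    = (a * d) / (b * c)               # ratio of odds (case-control, cross-sectional)
print(relative_risk, odds_ratio)  # 2.0 vs ~2.43; OR approximates RR when the disease is rare
```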
prospective cohort study design
2 groups of subjects (ie, cohorts) are selected based on their exposure status (risk factor, no risk factor). The cohorts are then followed across time and the incidence of the disease (eg, postpartum depression) is compared between groups.
case control study design
2 groups are selected based on disease status (1 diseased, 1 nondiseased); then look back in time to compare the frequency of the risk factor (exposure) between groups
cross sectional study design
exposure and disease status are assessed at a single point in time; disease prevalence is then compared between the risk factor-positive and risk factor-negative groups.
best study design to investigate outbreak of infectious disease
case-control study is the most appropriate study design to investigate an outbreak of an acute infectious disease. It generally allows for quick localization of the outbreak source.
other caveat with confidence intervals
- overlapping confidence intervals do not demonstrate a statistically significant difference (compare the CI ranges between two treatment groups; if they overlap, you cannot say there is a significant difference between the groups)