Module 5: Critical Thinking Flashcards
What is internal validity
Whether our study estimate is an accurate estimate of the actual value in the source population, i.e. are there other explanations for the study findings, other than them being correct?
What are the three factors to consider in internal validity
Chance, bias, confounding
What is external validity
The extent to which the study findings are applicable to a broader or different population (also known as generalisability). It is a judgement that depends on what is being studied and who the findings are being applied to
What is sampling error
If you repeatedly sampled from the same source population, most of the time you would get a sample with a similar composition to the population you sampled from, but some samples would be quite different just due to chance
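A minimal simulation sketch of sampling error (the prevalence and sample size are illustrative assumptions, not from the flashcards): repeated random samples from the same source population give prevalence estimates that are usually close to the true value but occasionally quite different, just by chance.

```python
# Sketch: sampling error. Repeated random samples from the same source
# population give different prevalence estimates purely by chance.
import random

random.seed(1)
TRUE_PREVALENCE = 0.30   # assumed true prevalence in the source population
SAMPLE_SIZE = 100        # assumed sample size

def sample_prevalence(n):
    """Draw one random sample of size n and return its observed prevalence."""
    return sum(random.random() < TRUE_PREVALENCE for _ in range(n)) / n

estimates = [sample_prevalence(SAMPLE_SIZE) for _ in range(20)]
print(estimates)  # most estimates sit near 0.30, but some stray further, just due to chance
```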
How can sampling error be mitigated (can’t eliminate but can reduce)
Increase the sample size: less sampling variability, a greater likelihood of getting a representative sample, and greater precision of the parameter estimate
What is the statistical definition of a 95% confidence interval
If you repeated a study 100 times with a random sample each time and got 100 confidence intervals, in about 95 of the 100 studies the true population parameter would lie within that study’s 95% confidence interval
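A minimal simulation sketch of this repeated-sampling definition (the true proportion, sample size, and normal-approximation interval are illustrative assumptions): roughly 95% of the simulated studies produce an interval that contains the true value.

```python
# Sketch: coverage of a 95% confidence interval for a proportion.
import math
import random

random.seed(2)
TRUE_P, N, N_STUDIES = 0.30, 200, 1000   # assumed true proportion, sample size, number of studies

covered = 0
for _ in range(N_STUDIES):
    cases = sum(random.random() < TRUE_P for _ in range(N))
    p_hat = cases / N
    se = math.sqrt(p_hat * (1 - p_hat) / N)          # standard error of the proportion
    lower, upper = p_hat - 1.96 * se, p_hat + 1.96 * se
    covered += (lower <= TRUE_P <= upper)

print(covered / N_STUDIES)  # close to 0.95: about 95 in every 100 intervals contain the true value
```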
What is the interpretation we use of the 95% confidence interval (CI can be applied to any numerical measure)
We are 95% confident that the true population value lies between the limits of the confidence interval
What effect does increasing the sample size have on the confidence interval
Makes it narrower
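A small sketch of this effect (the observed proportion of 0.30 and the sample sizes are illustrative assumptions): the 95% CI narrows roughly in proportion to 1/sqrt(n) as the sample size increases.

```python
# Sketch: a 95% CI for an observed proportion of 0.30 narrows as the sample size grows.
import math

p_hat = 0.30                      # assumed observed proportion
for n in (50, 200, 800):          # assumed sample sizes
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    print(n, round(p_hat - 1.96 * se, 3), round(p_hat + 1.96 * se, 3))
```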
When is a study clinically important
When the confidence interval lies entirely below the clinical importance threshold (a value different from the null)
What are p values
The probability of getting the study estimate (or one further from the null) when there really is no association, just because of sampling error. If this probability is very low, the estimate is unlikely to be due to sampling error alone. In other words, the probability of finding an association of this size (or larger) when there actually isn’t one
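A minimal sketch of computing a p value for a difference in risk between two groups, assuming scipy is available and using made-up counts; it uses a simple two-proportion z test rather than any particular test named in the flashcards.

```python
# Sketch: p value from a two-proportion z test (illustrative numbers only).
from scipy.stats import norm

exposed_cases, exposed_n = 30, 100       # hypothetical exposed group
unexposed_cases, unexposed_n = 18, 100   # hypothetical unexposed group

p1, p2 = exposed_cases / exposed_n, unexposed_cases / unexposed_n
p_pool = (exposed_cases + unexposed_cases) / (exposed_n + unexposed_n)
se = (p_pool * (1 - p_pool) * (1 / exposed_n + 1 / unexposed_n)) ** 0.5
z = (p1 - p2) / se
p_value = 2 * norm.sf(abs(z))   # chance of an estimate this far (or further) from the null under H0
print(round(z, 2), round(p_value, 3))
```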
What is the null hypothesis
That there really is no association in the population (parameter = null)
What is the alternative hypothesis
That there really is an association in the population (parameter does not equal null value)
What is the threshold for determining how unlikely is acceptable for a p value
<0.05
How is a p value of <0.05 interpreted
Reject null hypothesis, accept alternative hypothesis, association is statistically significant
How is a p value of >0.05 interpreted
Fail to reject the null hypothesis (do not accept the alternative hypothesis); the association is not statistically significant
What is a type 1 error
Finding an association when there truly is no association. Incorrectly reject the null hypothesis when it should not have been rejected
What is a type 2 error
Finding no association when there truly is an association. Incorrectly fail to reject the null hypothesis when it should have been rejected
Why do type 2 errors occur
Typically due to having too few people in the study (a bigger sample is more likely to yield a small p value when a true association exists)
How can statisticians work out how to minimise type 2 errors
Perform a power calculation to work out how many study participants are needed to minimise the chance of a type 2 error
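A minimal sketch of such a power calculation, assuming the statsmodels library is available and using made-up risks of 30% and 20% in the two groups; it solves for the sample size per group that gives 80% power (a 20% chance of a type 2 error) at alpha = 0.05.

```python
# Sketch: sample size for comparing two proportions at 80% power and alpha = 0.05.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.30, 0.20)    # hypothetical risks in the two study groups
n_per_group = NormalIndPower().solve_power(effect_size=effect, power=0.80, alpha=0.05)
print(round(n_per_group))                     # participants needed in each group (about 146)
```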
If the confidence interval includes the null value what is the p value
p>0.05, not statistically significant
If the confidence interval does not include the null value what is the p value
p<0.05, statistically significant
Why are p values problematic
The 0.05 threshold is arbitrary; p values relate only to the null hypothesis and say nothing about the size or importance of an association
At the 5% threshold when will a statistically significant association be found when there really isn’t one
About one time in twenty (wrong 5% of the time)
What is the problem with p values regarding importance
If you include enough people in your study you will find a statistically significant difference, even if people were randomly assigned. Statistical significance is not clinical significance: p values say nothing about whether the results are useful, valid, or correct. Absence of a statistically significant association is not evidence of absence of a real association
What is bias
Any systematic error in a study that results in an incorrect estimate of the association between exposure and risk of disease
What is systematic error
Error due to things other than sampling (opposite of random error)
When can selection and information bias be controlled
Only during the design and data collection phases of a study, so investigators must identify potential sources of bias and find ways to minimise them
What is selection bias
When there is a systematic difference between the people included in a study and those who are not, or when study and comparison groups are selected inappropriately or using different criteria
What can affect who is part of a study (selection bias)
How people are recruited, whether people agree to participate, whether everyone remains in the study
How can loss to follow up be reduced
Alternative contact details obtained at start of study, maintaining regular contact throughout study, making several attempts to contact people
What must be considered regarding selection bias in cross sectional studies
Who entered the study, is the sample representative of the source population, what is the response rate
What must be considered regarding selection bias in case control studies
Participants are selected based on their outcome status; if this is in some way dependent on their exposure status, bias can occur. Investigators must ensure high participation, clearly define the population of interest, and have a reliable way of ascertaining all cases or a representative sample of cases
What are the potential biases in selection of controls
Bias can occur if controls are not drawn from the same defined population as the cases over the same time period, if different inclusion and exclusion criteria are used for cases and controls, or if participation is low
What would happen to the odds ratio if cases who are exposed are more likely to be identified or to participate
Overestimation of harmful effect of exposure, OR is biased away from the null, numerically upward
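A small worked sketch of this direction of bias (all counts are hypothetical): if exposed cases are fully identified but only some unexposed cases participate, the observed odds ratio is pushed away from the null.

```python
# Sketch: selection bias in a case-control study (hypothetical counts).
def odds_ratio(exp_cases, exp_controls, unexp_cases, unexp_controls):
    """OR = (exposed/unexposed odds in cases) / (exposed/unexposed odds in controls)."""
    return (exp_cases * unexp_controls) / (exp_controls * unexp_cases)

print(odds_ratio(50, 25, 50, 75))   # true OR = 3.0
# If only 60% of unexposed cases are identified or agree to participate (50 -> 30):
print(odds_ratio(50, 25, 30, 75))   # observed OR = 5.0, biased away from the null (upward)
```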
What happens to the measure of association (MOA) for a harmful factor when it is underestimated
Biased numerically downward, toward the null
What happens to the measure of association (MOA) for a protective factor when it is underestimated
Biased numerically upward, toward the null
What would happen to the odds ratio if cases who are exposed are less likely to be identified or to participate
Underestimation of harmful effect of exposure, OR is biased toward the null, numerically downward
What are the common types of selection bias in cohort studies
Loss to follow up (if related to both exposure and outcome this can lead to bias); a comparison group selected separately from the exposed group can also lead to bias
What does a loss to follow up in the exposed group result in
If people who develop the outcome are lost from the exposed group, the incidence proportion in the exposed group is underestimated, resulting in an underestimated relative risk; the RR is biased numerically downward, toward the null
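A small worked sketch of this (all counts are hypothetical, and it assumes the people lost from the exposed group would have developed the outcome): the observed relative risk falls toward the null.

```python
# Sketch: loss to follow up in a cohort study (hypothetical counts).
def relative_risk(cases_exp, n_exp, cases_unexp, n_unexp):
    """RR = incidence proportion in exposed / incidence proportion in unexposed."""
    return (cases_exp / n_exp) / (cases_unexp / n_unexp)

print(relative_risk(40, 200, 20, 200))   # true RR = 2.0
# 10 exposed people who would have become cases are lost to follow up:
print(relative_risk(30, 190, 20, 200))   # observed RR ~= 1.58, biased downward toward the null
```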
What is a source of selection bias in RCTs
Loss to follow up
When is it important to consider systematic error
When critically appraising scientific literature, in evidence based practice, considering studies reported in the media, undertaking research
What is information bias
Observation or information bias results from systematic differences in the way data on exposure or outcome are obtained from the various study groups
How is data collected in a study
Reported by the participants themselves, or collected/measured by someone else
How can measurement error occur
Participants provide inaccurate responses, data is collected incorrectly/inaccurately
What is measurement error
Measurement error can be random (lack of precision) or systematic (lack of accuracy)
What effect can measurement error have in descriptive and analytic studies
Descriptive: inaccurate measurement of prevalence
Analytic: misclassification
What is non-differential misclassification
Misclassification that is not different between the study groups: measurement error and misclassification occur to the same extent in all groups being compared. Normally the RR moves toward the null
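A small worked sketch of why non-differential misclassification usually pulls the estimate toward the null (all counts and the 80% exposure sensitivity are hypothetical; the same sensitivity is applied to both groups):

```python
# Sketch: non-differential exposure misclassification (hypothetical counts).
def odds_ratio(exp_cases, exp_controls, unexp_cases, unexp_controls):
    return (exp_cases * unexp_controls) / (exp_controls * unexp_cases)

# True counts: 60/40 exposed/unexposed cases, 30/70 exposed/unexposed controls
print(odds_ratio(60, 30, 40, 70))   # true OR = 3.5
# With 80% exposure sensitivity in BOTH groups, 20% of the exposed are recorded as unexposed:
print(odds_ratio(48, 24, 52, 76))   # observed OR ~= 2.92, moved toward the null
```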
What is differential misclassification
Misclassification that differs between the study groups. The estimate can move toward or away from the null
What are some examples of differential misclassification
Cross sectional: people with the outcome might report their exposure differently to those without the outcome.
Case control: cases might recall past exposures more accurately than controls, and interviewers may probe cases more thoroughly (or the exposed group in cohort studies)
What is a source of information bias in case control studies
Recall bias