Evidence based medicine & Appraising methodology Flashcards
Define a case report study
Single person/case is studied
- Prone to chance & bias!
Define a case series
Group of people are studied
- Good for rare diseases
Define a cohort study
Group of subjects exposed to a risk factor is matched to a group not exposed to the risk factor
Are cohort studies good for rare diseases or rare exposures?
Rare exposures
Give an example of an influential cohort study
Review of mortality of doctors in relation to their smoking habits - Doll & Bradford Hill
Define a case control study
Group of subjects with an outcome are matched to a group who do not have the outcome
- Used to investigate cause of the outcome
What type of bias can case control studies be prone to?
Recall bias - also have to rely on records
Give an example of an influential case control study
Smoking and carcinoma of the lung - Doll & Bradford Hill
Are case control studies good for rare diseases or rare exposures?
Rare diseases
Define a nested case control study
Conducted on a population taking part in a cohort study once sufficient numbers of outcomes have been reached. The nested case control can investigate exposures not previously taken into consideration
- Cases in the study are matched to controls from the same cohort
Define a case cohort study
Similar recruitment to case control study but the control group is recruited from everyone in the initial cohort regardless of their future disease status
Name the 9 Bradford Hill criteria (helps decide if causative relationship exists)
Strength, Consistency, Specificity, Temporality, Biological gradient, Plausibility, Coherence, Experimental evidence, Analogy
What is a crossover trial?
Type of RCT, subjects receive one treatment then switch to the other halfway through the study
- Good for rare diseases where a lack of subjects would affect the power of the study
- Can have carry-over effects from the first intervention if the wash-out period is too short
Define an N-of-1 trial
Single subject is studied and receives repeated courses of drug or alternative treatment in random order
What is the CONSORT statement?
Set of recommendations to improve the quality of RCT reports
What are cross sectional surveys good for?
Establishing prevalence and association
- NOT causality
What are the disadvantages of a cross sectional survey?
Prone to recall bias
Require large numbers of subjects
Groups can be unequal, confounders can be asymmetrically distributed
Define economic analysis
Type of study assessing the cost and/or utilities of an intervention
Name the 5 common methods used to obtain a sample from a population
Random sampling, Systematic sampling, Stratified sampling, Cluster sampling, Convenience sampling (prone to bias!)
Define systematic sampling
Every nth member of the target population is selected
- Quasi random sampling
Define stratified sampling
Subjects are recruited from subgroups (strata) of the target population, defined by one or more characteristics
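As an illustration of the systematic and stratified sampling methods defined above, here is a minimal Python sketch; the population, stratum labels and sample sizes are invented for the example.

```python
import random

population = list(range(1, 101))   # hypothetical target population of 100 subjects

# Systematic sampling: every nth member is selected (quasi-random),
# usually from a random starting point
n = 10
start = random.randrange(n)
systematic_sample = population[start::n]

# Stratified sampling: recruit separately from subgroups (strata)
# defined by a characteristic (strata here are assumed for illustration)
strata = {"male": population[:60], "female": population[60:]}
stratified_sample = {name: random.sample(members, 5) for name, members in strata.items()}

print(systematic_sample)
print(stratified_sample)
```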
What are the 5 categories of bias
Reporting, Selection, Performance, Observation, Attrition
What are the examples of reporting bias?
Literature search bias
Foreign language exclusion
What are the examples of selection bias?
Sampling bias (researcher) - e.g. Berkson bias
Response bias (subjects)
What are the examples of performance bias?
Instrument bias
Questionnaire bias
What are the examples of observation bias?
Interviewer bias
Recall bias
Response bias
Hawthorne effect
Define a positive confounder
Creates an apparent association between two variables that are not truly associated
Define a negative confounder
Masks an association that is present
What methods are available for controlling confounders at the time of designing a study?
Restriction (inclusion & exclusion criteria)
Matching (subjects with confounders are allocated equally to different arms of the study)
Randomisation
What methods are available for controlling confounders at the time of analysing a study?
Stratification
Standardisation
Statistical adjustment using multivariate techniques
Which statistical technique can achieve stratification during analysis?
Mantel-Haenszel (gives adjusted relative risks as a summary measure of the overall risk)
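A hand-worked Python sketch of the Mantel-Haenszel adjusted relative risk across two strata; the 2x2 counts are invented purely to show the calculation.

```python
# Each stratum: (exposed cases a, exposed total n1, unexposed cases b, unexposed total n0)
# Counts are hypothetical, chosen only to illustrate the arithmetic.
strata = [
    (20, 100, 10, 100),   # stratum 1 (e.g. younger subjects)
    (30,  80, 15,  90),   # stratum 2 (e.g. older subjects)
]

# Mantel-Haenszel RR = sum(a * n0 / N) / sum(b * n1 / N), N = stratum total
numerator = sum(a * n0 / (n1 + n0) for a, n1, b, n0 in strata)
denominator = sum(b * n1 / (n1 + n0) for a, n1, b, n0 in strata)

rr_mh = numerator / denominator
print(f"Adjusted RR = {rr_mh:.2f}")
```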
When would you use multiple linear regression?
To minimise the effect of confounders when the dependent variable is continuous in nature
When would you use logistic regression?
To minimise the effect of confounders when the outcome variable is binary
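A minimal sketch of statistical adjustment for a confounder using logistic regression with statsmodels; the simulated age/exposure/outcome data are assumptions for illustration only, and sm.OLS would be the analogous call when the outcome is continuous (multiple linear regression).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
age = rng.normal(50, 10, n)                      # hypothetical confounder
# Exposure made more likely with age, outcome depends on both age and exposure
exposure = (rng.random(n) < 1 / (1 + np.exp(-(age - 50) / 10))).astype(int)
outcome = (rng.random(n) < 1 / (1 + np.exp(-(0.05 * (age - 50) + 0.5 * exposure)))).astype(int)

# Include the confounder (age) alongside the exposure in the model
X = sm.add_constant(np.column_stack([exposure, age]))
model = sm.Logit(outcome, X).fit(disp=False)
print(model.params)   # second coefficient = exposure effect adjusted for age
```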
What are the two types of randomisation methods?
Fixed (methods defined and set up before the start of the trial)
Adaptive (randomised groups are adjusted as the study progresses to account for imbalances)
What are the types of fixed randomisation?
Simple
Block
Stratified
What is a type of adaptive randomisation?
Minimisation (allocation of each subject dependent on the characteristics of those already enrolled)
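A short sketch of fixed (block) randomisation, one of the methods listed above, assuming a two-arm trial and a block size of 4; arm labels and numbers are illustrative only.

```python
import random

def block_randomise(n_subjects, block_size=4, arms=("A", "B")):
    """Allocate subjects in shuffled blocks so arm sizes stay balanced."""
    allocations = []
    while len(allocations) < n_subjects:
        block = list(arms) * (block_size // len(arms))   # equal numbers of each arm per block
        random.shuffle(block)
        allocations.extend(block)
    return allocations[:n_subjects]

print(block_randomise(10))
```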
What are the three main endpoints used in studies?
Clinical (measurement of direct clinical outcome)
Surrogate (measurement of outcome used as substitute for clinically meaningful endpoint)
Composite (Combines several measurements into a single endpoint)
Define validity
Extent to which a test measures what it is supposed to measure
Define reliability
How consistent a test is on repeated measurements
What are the types of validity?
Face, Content, Criterion (concurrent & predictive), Construct (convergent & divergent), Incremental
Define face validity
Extent to which the test appears, on the face of it, to measure what it is supposed to measure
Define content validity
Extent to which the test measures variables that are related to the parameter which should be measured by the test
Define concurrent validity
Subtype of criterion validity
The extent to which the test correlates with a measure that has been previously validated
Define predictive validity
Subtype of criterion validity
The extent to which the test is able to predict something that it should theoretically be able to predict
Define convergent validity
Subtype of construct validity
Extent to which the test is similar to other tests measuring the same construct
Define divergent validity
Subtype of construct validity
Extent to which the test is not similar to other tests that are measuring different constructs
Define incremental validity
Extent to which the test provides a significant improvement in addition to the use of another approach
Define intrarater reliability
Level of agreement between assessments by one rater of the same material at two or more different times
Define inter-rater reliability
Level of agreement between assessments made by two or more raters at the same time
Define test-retest reliability
Level of agreement between the initial test results and the results of repeat measurements made at a later date
What test can be used to assess internal consistency?
Cronbach's alpha
>0.5 = moderate agreement
>0.8 = excellent agreement
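A minimal sketch of Cronbach's alpha, alpha = k/(k-1) x (1 - sum of item variances / variance of the total score); the questionnaire scores are invented for illustration.

```python
import numpy as np

# Rows = subjects, columns = questionnaire items (hypothetical scores)
scores = np.array([
    [3, 4, 3, 5],
    [2, 2, 3, 2],
    [4, 5, 4, 4],
    [1, 2, 1, 2],
    [5, 4, 5, 5],
])

k = scores.shape[1]
item_variances = scores.var(axis=0, ddof=1)
total_variance = scores.sum(axis=1).var(ddof=1)

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")   # >0.5 moderate, >0.8 excellent (as above)
```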
What is continuous data?
Can take any value, e.g. height
What is Cohen’s Kappa
Assesses the level of agreement for data that falls into categories
What is the Kappa statistic?
Measures the level of agreement between assessments made by two or more raters at the same time where responses can fall into categories (aka inter-rater reliability)
- Can help determine whether agreement is due to chance
What does a Kappa statistic of 0 mean?
Agreement between raters is chance only
What does a Kappa statistic of 1 mean?
Agreement between raters is perfect beyond chance
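A hand-worked sketch of Cohen's kappa for two raters, kappa = (observed agreement - chance agreement) / (1 - chance agreement); the ratings are invented for illustration, with 0 meaning chance-only agreement and 1 meaning perfect agreement, as above.

```python
from collections import Counter

# Hypothetical categorical ratings from two raters on the same 10 cases
rater1 = ["yes", "yes", "no", "no", "yes", "no", "yes", "no",  "yes", "no"]
rater2 = ["yes", "no",  "no", "no", "yes", "no", "yes", "yes", "yes", "no"]

n = len(rater1)
p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n

# Expected agreement by chance, from each rater's marginal category frequencies
counts1, counts2 = Counter(rater1), Counter(rater2)
categories = set(rater1) | set(rater2)
p_expected = sum((counts1[c] / n) * (counts2[c] / n) for c in categories)

kappa = (p_observed - p_expected) / (1 - p_expected)
print(f"Cohen's kappa = {kappa:.2f}")
```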