Statistics and critical appraisal Flashcards
Internal and external validity
Internal - was the study done right? Do the results accurately reflect the truth?
External - does the same thing happen elsewhere? Is this study applicable to real life?
Efficacy and effectiveness
Efficacy - impact of an intervention under ideal conditions
Effectiveness - impact of an intervention under clinical/real life conditions
Berkson bias
Sample population is drawn from a hospital setting, which is not representative of the target population (in rate or severity of disease)
Diagnostic purity bias
Comorbidity excluded, so complexity of target population not represented
Neyman bias
Time gap between exposure and sample selection, meaning some are not available for study (eg due to death)
Membership bias
A particular group is targeted for study and is not representative of the target population (eg members of a particular organisation)
Historical control bias
Subjects and controls chosen over time, so affected by changes in social definitions, treatment modalities etc.
Performance bias
Subjects behave differently because they know which group they are in. Controlled for by blinding.
Ascertainment/interviewer bias
Researcher not blinded, which affects recording of results
Recall bias
Subjects misremember past events
Response bias
Subjects answer questions in the way they think the researcher wants them to answer
Attrition bias
Bias due to subjects leaving the study at different rates in different groups in the study (eg due to side-effects)
Hawthorne effect
Subjects alter their behaviour, as they know they are being observed
Pygmalion (Rosenthal) effect
Subjects perform to meet expectations set by others (usually positively). In medical settings this contributes to the placebo effect.
Inter-rater reliability
Agreement between different assessors at the same time (do different people agree with each other?)
Intra-rater reliability
Agreement between the results from one assessor at different times, whilst assessing the same material (does one person agree with herself?)
Test-retest reliability
Agreement between the results of a test and the results of the same test repeated at a later date
Alternative form reliability
Agreement between the results of different versions of the same test
Split half reliability
Reliability of a test that is divided in two, with each half assessing the same material (do all parts of the test contribute equally?)
Cohen’s kappa statistic (κ)
Measures agreement between raters in tests measuring categorical variables. If agreement is no more than expected by chance, κ = 0; κ ≥ 0.7 is conventionally taken to indicate good agreement.
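For reference, kappa compares the observed proportion of agreement (p_o) with the proportion of agreement expected by chance (p_e):

$$\kappa = \frac{p_o - p_e}{1 - p_e}$$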
Cronbach’s alpha
Measures internal consistency (agreement between items) when using tests with several parts or measuring several variables.
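For reference, for a test with k items, where σ²_i is the variance of item i and σ²_total is the variance of the total score, alpha is:

$$\alpha = \frac{k}{k - 1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_i}{\sigma^2_{\text{total}}}\right)$$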
Intraclass correlation coefficient
Measures agreement between raters; used with quantitative (continuous) variables
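A minimal sketch of how these agreement measures might be computed, using made-up rating data: cohen_kappa_score is from scikit-learn, and a one-way random-effects ICC(1,1) is calculated by hand (the data and variable names are illustrative only):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Cohen's kappa: two raters assigning categorical diagnoses (made-up data)
rater_a = ["depressed", "anxious", "depressed", "psychotic", "anxious"]
rater_b = ["depressed", "anxious", "anxious", "psychotic", "anxious"]
print("kappa:", round(cohen_kappa_score(rater_a, rater_b), 2))

# One-way random-effects ICC(1,1): rows = subjects, columns = raters,
# ratings on a quantitative scale (made-up data)
x = np.array([
    [9.0, 8.5, 9.2],
    [6.1, 6.4, 5.9],
    [7.5, 7.0, 7.8],
    [4.2, 4.8, 4.5],
])
n, k = x.shape
row_means = x.mean(axis=1)
grand_mean = x.mean()
msb = k * ((row_means - grand_mean) ** 2).sum() / (n - 1)    # between-subject mean square
msw = ((x - row_means[:, None]) ** 2).sum() / (n * (k - 1))  # within-subject mean square
icc = (msb - msw) / (msb + (k - 1) * msw)
print("ICC(1,1):", round(icc, 2))
```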
Predictive validity
Ability of a test to predict something it should theoretically be able to predict (eg a test taken at school predicting later employment)
Concurrent validity
Ability of a test to distinguish between 2 groups that it should theoretically be able to distinguish between (eg angina vs gastritis)