Reliability & Validity 1 Flashcards
reliability
degree to which the same event produces the same result
-consistent, repeatable, dependable, reproducible
many reliability (agreement) estimates are based on measures of correlation (association), while also accounting for measurement error
measurement error
“the noise”
increased ME = decreased reliability
decreased ME = increased reliability
types of measurement error
systematic errors- predictable, consistent, and constant
random errors- unpredictable, inconsistent, variable
**great graph in notes on description
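As a quick illustration, a minimal simulation sketch (Python/NumPy; the blood-pressure numbers are hypothetical) of how the two error types behave in repeated measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 120.0  # hypothetical "true" systolic BP, mmHg

# Systematic error: predictable, constant offset (e.g., a miscalibrated cuff)
systematic = 5.0
# Random error: unpredictable, varies trial to trial (e.g., examiner inattention)
random_noise = rng.normal(0.0, 3.0, size=10)

measurements = true_value + systematic + random_noise
print(measurements.round(1))
print("mean bias :", round(measurements.mean() - true_value, 1))  # ~5 (systematic)
print("spread(SD):", round(measurements.std(ddof=1), 1))          # ~3 (random)
```

The constant offset shifts every measurement the same way; the random component only widens the spread around the biased mean.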
measurement error sources
examiner, examined, examination
examiner-
systematic errors- consistent use of improper landmarks
random errors- fatigue, inattention
biological variation in the senses: clinicians rely on sight, hearing, and touch. Variation in acuity affects agreement (between clinicians and within the same clinician)
intratester, intertester, and test-retest reliability
examined
biological variation in the system
clinical attributes vary: HR, BP, pain intensity. These variations lead to inconsistencies.
examination
disruptive environments affecting the senses (dim lighting, noisy environment); privacy of the setting
disruptive interactions
incorrect use of diagnostic tools
How to minimize measurement error
operational definition
proper training
inspection of equipment
independent interpretation of test results
“blinding” of examiner to diagnosis
separate observation from inference
test-retest reliability
measures the consistency of the testing instrument over time; the most basic way to assess reliability.
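A tiny sketch of the idea, assuming made-up scores from six subjects tested a week apart (Python/NumPy). The simplest estimate is the correlation between the two administrations, though the ICC cards below explain why correlation alone can overstate reliability:

```python
import numpy as np

# Hypothetical scores: the same 6 subjects tested one week apart
test   = np.array([10, 12, 15, 18, 20, 22], dtype=float)
retest = np.array([11, 12, 14, 19, 19, 23], dtype=float)

r = np.corrcoef(test, retest)[0, 1]
print(f"test-retest r = {r:.3f}")  # close to 1.0 -> scores are stable over time
```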
internal consistency
measures the extent to which an instrument measures aspects of a certain characteristic (Cronbach's alpha).
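Cronbach's alpha can be written as α = k/(k−1) · (1 − Σ item variances / variance of the total score). A minimal sketch with made-up scale data (Python/NumPy; cronbach_alpha is an illustrative helper, not a library call):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: subjects x items matrix of scores."""
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

# Hypothetical 5 subjects answering a 4-item scale
scores = np.array([[3, 4, 3, 4],
                   [2, 2, 3, 2],
                   [4, 5, 5, 4],
                   [1, 2, 1, 2],
                   [3, 3, 4, 3]], dtype=float)
print(f"alpha = {cronbach_alpha(scores):.2f}")  # ~0.95: items hang together well
```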
intra-rater reliability
measures the stability of measures recorded by one individual across 2 or more trials
inter-rater reliability
variation between 2 or more raters measuring the same group of subjects
ICC
reflects both relationship and agreement
correlation alone can't measure reliability (association is not agreement)
a ratio comparing between-subject variability to error variability
ratio approaching 0 indicates no agreement
ratio approaching 1.0 suggests perfect agreement
no universal standards for interpretation
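As a sketch of one common variant, the one-way random-effects ICC(1,1) = (BMS − WMS) / (BMS + (k−1)·WMS), where BMS and WMS are the between- and within-subjects mean squares (Python/NumPy; icc_oneway is an illustrative helper and the ratings are hypothetical):

```python
import numpy as np

def icc_oneway(ratings: np.ndarray) -> float:
    """One-way random-effects ICC(1,1). ratings: subjects x raters/trials."""
    n, k = ratings.shape
    subj_means = ratings.mean(axis=1)
    # Between- and within-subjects mean squares (one-way ANOVA)
    bms = k * ((subj_means - ratings.mean()) ** 2).sum() / (n - 1)
    wms = ((ratings - subj_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (bms - wms) / (bms + (k - 1) * wms)

# Two raters scoring the same 5 subjects
ratings = np.array([[9, 10], [6, 6], [8, 7], [4, 5], [7, 8]], dtype=float)
print(f"ICC = {icc_oneway(ratings):.2f}")  # ~0.89: close to 1.0 -> strong agreement
```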
ICC interpretation
> 0.90 = excellent reliability
> 0.75 = good reliability
< 0.75 = poor to moderate reliability
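Expressed as a small helper using the card's cut-offs (interpret_icc is hypothetical; remember there is no universal standard):

```python
def interpret_icc(icc: float) -> str:
    if icc > 0.90:
        return "excellent reliability"
    if icc > 0.75:
        return "good reliability"
    return "poor to moderate reliability"

print(interpret_icc(0.89))  # good reliability
```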
Pearson product moment correlation
major flaws in its independent use as an indicator of reliability
Why? It is a measure of association between ratings, not a true measure of agreement.
It does not distinguish between perfect agreement and systematic bias (see the demonstration below).
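A quick demonstration of that flaw, reusing the illustrative icc_oneway helper: if rater B always scores exactly 10 points higher than rater A, Pearson's r is still a perfect 1.0, while the ICC penalizes the constant offset:

```python
import numpy as np

def icc_oneway(ratings: np.ndarray) -> float:
    """One-way random-effects ICC(1,1), as in the ICC sketch above."""
    n, k = ratings.shape
    subj_means = ratings.mean(axis=1)
    bms = k * ((subj_means - ratings.mean()) ** 2).sum() / (n - 1)
    wms = ((ratings - subj_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (bms - wms) / (bms + (k - 1) * wms)

rater_a = np.array([10., 20., 30., 40., 50.])
rater_b = rater_a + 10.0  # systematic bias: B always scores 10 points higher

r = np.corrcoef(rater_a, rater_b)[0, 1]
icc = icc_oneway(np.column_stack([rater_a, rater_b]))
print(f"Pearson r = {r:.2f}")   # 1.00 -- perfect association despite the bias
print(f"ICC       = {icc:.2f}")  # ~0.82 -- the offset counts as disagreement
```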