Lesson 4: Measurement Considerations Flashcards
Bias
Systematic error introduced by selecting or encouraging one outcome over others
Validity
Extent to which a measure actually represents what it claims to measure
Reliability
Degree to which results are stable and consistent
Sensitivity
Proportion of POSITIVES that are correctly identified
Specificity
Proportion of NEGATIVES that are correctly identified
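The two cards above are simple proportions computed from a confusion matrix. A minimal sketch (the diagnostic-test counts below are invented for illustration):

```python
def sensitivity(tp, fn):
    """True positive rate: proportion of positives correctly identified."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: proportion of negatives correctly identified."""
    return tn / (tn + fp)

# Hypothetical counts from a diagnostic test of 200 patients
tp, fn = 90, 10   # 100 truly positive patients
tn, fp = 80, 20   # 100 truly negative patients

print(sensitivity(tp, fn))  # 0.9
print(specificity(tn, fp))  # 0.8
```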
Correlated
A statistical relationship between two variables or data sets that reflects a dependence between the two
Independent
The occurrence of one variable does not influence the probability of the other occurring
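A common way to quantify the "correlated" card is Pearson's r. A pure-Python sketch (the sample data is invented; note that r near 0 suggests no linear relationship but does not by itself prove independence):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linear toy data, so r is (approximately) 1.0
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))
```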
Parametric
Data with an underlying normal distribution
Nonparametric
Data for which the probability distribution is unknown or NOT known to be normal
When can bias occur during clinical research?
Before, during, and after the study

Selection Bias
the error associated with how participants are selected for studies
Interviewer Bias
When the interviewer influences the participant response during an interview
ex: giving reactions, social cues, presenting questions in a certain way
Recall Bias
the error associated with remembering
Publication Bias
the error associated with selectively not submitting [researchers' fault] and/or not publishing [journal's fault] research
*negative results are much less likely to be published
What does it mean when we say that something is validated?
- We made some attempt to show that our data actually represents what it claims to measure
- Most types of validity are measured with a metric or scale and then compared to some sort of standard
Name a few examples of threats to validity in measuring medication adherence.
- Patients could lie about taking their medications
- Small sample size
Face Validity (for questionnaires)
do the questions look like (“at face value”) they measure what they say they are measuring?
Ex:
Measuring Necessity = My life would be impossible without my medicines
Measuring Overuse = Doctors use too many medicines
Construct Validity
are we measuring distinct constructs? (i.e. are the scales correlated or independent?)
How well did it measure or test a specific construct?
Did you measure adherence? Or something else?
External Validity
Are these findings generalizable to beyond the sample?
Internal Validity
Was the study well designed?
Did the study limit or control for possible confounders?
If our measuring instrument has been validated before, can we automatically use it?
No; we need to make sure our instrument is still valid when we put it in a new setting. Validating once is not enough; we need to revalidate the survey/instrument of measurement.
Which is valid? Which reliable?
[Figure: four target/bullseye diagrams showing different shot groupings]
The left is reliable, but not valid.
The middle two are neither.
The right is both reliable and valid.
Inter-Rater Reliability
The extent of agreement between two or more raters
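Inter-rater agreement is often quantified with Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal pure-Python sketch (the two raters' labels below are hypothetical):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical labels."""
    n = len(rater_a)
    # Observed agreement: proportion of cases where the raters match
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal proportions
    categories = set(rater_a) | set(rater_b)
    pe = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
             for c in categories)
    return (po - pe) / (1 - pe)

# Two hypothetical raters classifying 10 cases as "yes"/"no"
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
b = ["yes", "no", "no", "yes", "no", "yes", "yes", "yes", "no", "yes"]
print(round(cohens_kappa(a, b), 2))  # 0.58
```

Kappa of 1 means perfect agreement; 0 means no better than chance.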
Test-Retest Reliability
The degree to which test or instrument scores are consistent from one point in time to the next (the test taker and test conditions must be the same at both points in time)
Internal Consistency Reliability
The consistency of responses across items on a single instrument or test
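Internal consistency is commonly reported as Cronbach's alpha, computed from the item variances and the variance of respondents' total scores. A stdlib-only sketch (the three items and five respondents are made up):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha; items is a list of per-item score lists."""
    k = len(items)
    # Total score per respondent across all items
    totals = [sum(scores) for scores in zip(*items)]
    sum_item_var = sum(variance(scores) for scores in items)
    return k / (k - 1) * (1 - sum_item_var / variance(totals))

# Three hypothetical questionnaire items answered by five respondents
items = [
    [3, 4, 3, 5, 4],
    [3, 5, 3, 4, 4],
    [2, 4, 3, 5, 5],
]
print(round(cronbach_alpha(items), 2))  # 0.86
```

Higher alpha (commonly >= 0.7) suggests the items are measuring the same underlying construct.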
Is nominal data usually parametric or nonparametric?
Nominal Data is always non-parametric.
Nominal data cannot be ordered by magnitude along the x-axis, thus cannot be normal.