7: Measurement issues in clinical and health psychological research
Conceptualization
Process of specifying what we mean by a term
Operationalization
Process of connecting concepts to observations
Univariate operational definition
A single indicator (test) is used
Multivariate definition
Two or more indicators are used
Operational definition
Specifies what it measures, how the indicators are measured, and the rules used to assign values to what is observed.
If you need to develop a new scale
Phase 1: initial pool of items; literature review, interviews, focus groups, content validity, cognitive tests
Phase 2: refine the item pool, using exploratory MIRT
Phase 3: psychometric properties, confirmatory MIRT, internal consistency, reliability and discriminant validity.
Practical considerations in selecting psychological instruments
- The cost of a test
- Permission and responsibility for using instrument
- Time and length
Reliability
- Prerequisite of validity. A valid test is a reliable test, but a reliable test is not always a valid test.
- Implies low error
- A reliable measure is optimally free from random error, while a valid measure is optimally free from systematic error.
Cronbach's alpha
It's a measure of internal consistency.
It's an estimate of the correlation between observed scores and true scores.
Assumptions of Cronbach's alpha
- The scale adheres to tau equivalence (a statistical way of stating that each item contributes equally to the total scale score)
- Scale items are on a continuous scale and normally distributed
- The errors of the items do not covary
- The scale is unidimensional
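The alpha estimate defined above can be computed directly from item scores. A minimal sketch, assuming a small hypothetical dataset of respondents' item scores (the function name and data are illustrative, not from any real scale):

```python
def cronbach_alpha(data):
    """Cronbach's alpha for 'data': one row per respondent, k item scores per row."""
    k = len(data[0])

    def var(xs):
        # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # sum of the variances of the individual items
    item_vars = sum(var([row[i] for row in data]) for i in range(k))
    # variance of the total scale scores
    total_var = var([sum(row) for row in data])
    return (k / (k - 1)) * (1 - item_vars / total_var)
```

When items covary strongly relative to their individual variances, alpha approaches 1; uncorrelated items push it toward 0.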
Item response theory
Refers to a set of mathematical models that describe the relationship between a person's response to a survey question/test item and his or her level of the latent variable being measured by the scale.
This latent variable is usually a hypothetical construct which is postulated to exist but cannot be measured by a single observable variable. Instead, it is indirectly measured by using multiple items or questions in a multi-item test/scale.
What IRT does
IRT models provide a clear statement (picture) of the performance of each item in the scale/test
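One common member of this family of models is the two-parameter logistic (2PL) item response function. A minimal sketch (the parameter values below are illustrative, not from any real test): each item gets a discrimination parameter `a` and a difficulty parameter `b`, and the model gives the probability of endorsing the item at latent trait level theta.

```python
import math

def p_endorse(theta, a, b):
    """2PL item response function.

    theta: latent trait level of the respondent
    a: item discrimination (steepness of the curve)
    b: item difficulty (trait level where endorsement probability is 0.5)
    """
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

Plotting this function over a range of theta values yields the item characteristic curve, which is the "picture" of each item's performance that IRT provides.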
Validity
A measure is valid if it does what it claims to do.
Face validity
Refers to the extent to which a measure "appears" to measure what it is supposed to measure.
- not statistical; involves the judgment of the researcher.
- A measure has face validity “if people think it does”.
Concurrent validity
Measure and criterion are assessed at the same time
Predictive validity
The elapsed time between the administration of the measure to be validated and the criterion is relatively long (e.g. months or years). It refers to a measure's ability to distinguish participants on a relevant behavioral criterion at some point in the future.
Construct validity
Refers to the question of whether a test measures what (an unobservable construct) it purports to measure.
Convergent validity
Alternative measures of the same construct should be highly intercorrelated
Discriminant validity
The measures of different constructs should be, at most, moderately intercorrelated. A measure should not correlate with measures of constructs it is theoretically unrelated to.
Sensitivity
The sensitivity of a test is the ability of the test to correctly identify affected individuals.
Specificity
The specificity of a test is the ability of the test to correctly identify non-affected individuals.
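Both quantities follow directly from the 2x2 confusion table of test results against true status. A minimal sketch, using hypothetical counts of true positives (tp), false negatives (fn), true negatives (tn), and false positives (fp):

```python
def sensitivity(tp, fn):
    # proportion of affected individuals the test correctly identifies
    return tp / (tp + fn)

def specificity(tn, fp):
    # proportion of non-affected individuals the test correctly identifies
    return tn / (tn + fp)
```

For example, a test that flags 90 of 100 affected individuals has a sensitivity of 0.90, and one that clears 80 of 100 non-affected individuals has a specificity of 0.80.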
ROC curve
It's a graphical representation of the trade-off between the true positive rate (sensitivity) and the false positive rate (1 − specificity) for every possible cut-off.
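The curve can be traced by sweeping the cut-off across all observed test scores and recording the resulting (false positive rate, true positive rate) pair at each step. A minimal sketch, assuming hypothetical continuous scores and binary labels (1 = affected); it ignores tied scores for simplicity:

```python
def roc_points(scores, labels):
    """Return (FPR, TPR) points for every possible cut-off, highest scores first."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)            # number of affected individuals
    neg = len(labels) - pos      # number of non-affected individuals
    tp = fp = 0
    points = [(0.0, 0.0)]        # cut-off above all scores: nobody flagged
    for score, label in pairs:   # lower the cut-off one score at a time
        if label == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points
```

A test with perfect discrimination passes through the point (0, 1); a test no better than chance lies along the diagonal from (0, 0) to (1, 1).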