Assessment Flashcards
Which of the following is a measure of interrater reliability that adjusts for the level of chance agreement between raters’ scores?
Select one:
A.
Predictive validity
B.
Magnitude of effect
C.
Kappa coefficient
D.
Standard error of measurement
The correct answer is C.
The Kappa coefficient is a statistical measure of interrater reliability for categorical ratings that corrects for the level of agreement expected by chance alone, yielding the degree of agreement between two raters beyond chance.
Answer A: Predictive validity is the degree to which a measure can predict another measure given at a later time.
Answer B: The magnitude of effect is a measure of the strength of the relation between variables, the magnitude of change across time, or the magnitude of difference between groups.
Answer D: The standard error of measurement estimates how repeated measures of an individual on the same instrument tend to be distributed around their “true” score.
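The chance-corrected agreement that the Kappa coefficient captures can be sketched with a short Python function using the standard formula kappa = (p_o − p_e) / (1 − p_e), where p_o is the observed agreement and p_e is the agreement expected by chance. The two raters' labels below are hypothetical illustration data, not from the source:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: interrater agreement corrected for chance.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    proportion of agreement and p_e is the agreement expected
    by chance from each rater's marginal category frequencies.
    """
    n = len(rater_a)
    # Observed proportion of cases where the raters agree
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: for each category, the product of the two
    # raters' marginal proportions, summed over all categories
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n)
              for c in set(rater_a) | set(rater_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical diagnostic ratings ("yes"/"no") from two clinicians
a = ["yes", "yes", "no", "no", "yes", "no", "yes", "no"]
b = ["yes", "no",  "no", "no", "yes", "no", "yes", "yes"]
print(round(cohens_kappa(a, b), 3))  # 0.5
```

Here the raters agree on 6 of 8 cases (p_o = 0.75), but with balanced marginals chance agreement is already 0.5, so kappa credits only the agreement beyond that: (0.75 − 0.5) / (1 − 0.5) = 0.5.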
_______ is the degree to which an obtained measure of effect has a practical value or can guide clinical judgments.
Select one:
A.
Diagnostic efficiency
B.
Clinical utility
C.
Clinical significance
D.
Likelihood ratio
The correct answer is C.
Clinical significance is the degree to which measures contribute meaningful information and aid clinical judgment.
Answer A: Diagnostic efficiency is the degree to which established test cut-off scores accurately identify persons diagnosed with or without a disorder, as identified by an external criterion.
Answer B: Clinical utility is the degree to which the results of an instrument assist the clinician in making judgments about a client or enhance the validity of those judgments.
Answer D: The likelihood ratio is the degree to which the odds of a correct classification are increased by the use of the assessment data.
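The likelihood ratio in Answer D can be made concrete with a minimal sketch of the positive likelihood ratio, LR+ = sensitivity / (1 − specificity), which expresses how much a positive result on the instrument raises the odds of a correct positive classification. The sensitivity and specificity values below are hypothetical, chosen only for illustration:

```python
def positive_likelihood_ratio(sensitivity, specificity):
    """LR+: factor by which a positive test result multiplies the
    odds that the person has the disorder.

    LR+ = sensitivity / (1 - specificity)
        = P(test+ | disorder) / P(test+ | no disorder)
    """
    return sensitivity / (1 - specificity)

# Hypothetical cut-off score with 90% sensitivity and 80% specificity
lr_plus = positive_likelihood_ratio(0.90, 0.80)
print(round(lr_plus, 2))  # 4.5
```

An LR+ of 4.5 means a positive result makes the disorder 4.5 times more likely (in odds terms) than before testing; an LR+ near 1.0 would mean the assessment data add essentially nothing to classification accuracy.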