Assessment Flashcards

1
Q

Which of the following is a measure of interrater reliability that adjusts for the level of chance agreement between raters’ scores?
Select one:

A. Predictive validity
B. Magnitude of effect
C. Kappa coefficient
D. Standard error of measurement

A

The correct answer is C.

The kappa coefficient is a statistical measure of inter-rater reliability for categorical (qualitative) ratings that corrects observed agreement for the level of agreement expected to occur by chance between raters.

Answer A: Predictive validity is the degree to which a measure predicts scores on a criterion measure administered at a later time.

Answer B: Magnitude of effect is a measure of the strength of the relation between variables, the magnitude of change across time, or the magnitude of difference between groups.

Answer D: The standard error of measurement estimates how repeated measures of an individual on the same instrument tend to be distributed around their “true” score.
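
A minimal sketch in Python of how kappa corrects raw agreement for chance, with the standard error of measurement alongside for contrast. The function names and the ten-item example data are illustrative assumptions, not from any particular instrument.

```python
from collections import Counter
import math

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(rater_a)
    # Observed proportion of agreement between the two raters.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

def standard_error_of_measurement(sd, reliability):
    """SEM = SD * sqrt(1 - r): spread of observed scores around the true score."""
    return sd * math.sqrt(1 - reliability)

# Two raters coding ten items: 8/10 raw agreement, but kappa is lower
# because part of that agreement would be expected by chance alone.
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes"]
b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
print(round(cohens_kappa(a, b), 2))                       # 0.58
print(round(standard_error_of_measurement(15, 0.90), 2))  # 4.74
```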

2
Q

_______ is the degree to which an obtained measure of effect has practical value or can guide clinical judgments.
Select one:

A. Diagnostic efficiency
B. Clinical utility
C. Clinical significance
D. Likelihood ratio

A

The correct answer is C.

Clinical significance is the degree to which measures contribute meaningful information and aid clinical judgment.

Answer A: Diagnostic efficiency is the degree to which established test cut-off scores accurately identify persons diagnosed with or without a disorder, as identified by an external criterion.

Answer B: Clinical utility is the degree to which the results of an instrument assist the clinician in making judgments about a client or enhance the validity of those judgments.

Answer D: The likelihood ratio is the degree to which the odds of a correct classification are increased by the use of the assessment data.
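
The last two distractors lend themselves to a small worked example: diagnostic efficiency statistics come from a 2x2 table of test decisions against the external criterion, and the likelihood ratios follow from sensitivity and specificity. This is a minimal sketch under those standard definitions; the counts are made up for illustration.

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, and likelihood ratios for a test cutoff."""
    sensitivity = tp / (tp + fn)   # P(test positive | disorder present)
    specificity = tn / (tn + fp)   # P(test negative | disorder absent)

    # LR+ > 1 means a positive result raises the odds of the disorder;
    # LR- < 1 means a negative result lowers them.
    lr_positive = sensitivity / (1 - specificity)
    lr_negative = (1 - sensitivity) / specificity

    return sensitivity, specificity, lr_positive, lr_negative

# Example: 40 true positives, 10 false positives, 5 false negatives,
# 45 true negatives against the external diagnostic criterion.
sens, spec, lr_pos, lr_neg = diagnostic_stats(tp=40, fp=10, fn=5, tn=45)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, "
      f"LR+={lr_pos:.2f}, LR-={lr_neg:.2f}")
```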
