Chapter 3: Clinical Assessment and Diagnosis Flashcards
Purpose of Clinical Assessment
Treatment planning, understanding the individual, predicting behavior, and diagnosing
Reliability
degree of consistency of a measurement
Inter-rater Reliability
measures the level of agreement between ratings by multiple people (raters, judges, etc.); higher agreement = more reliable
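For example, agreement between two raters on categorical judgments (such as diagnoses) is often quantified with Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch in Python; the clients, ratings, and rater names below are invented for illustration:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical diagnoses assigned independently by two clinicians
# to the same ten clients (1 = disorder present, 0 = absent).
rater_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
rater_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")  # values near 1 indicate strong agreement
```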
Test-retest Reliability
the degree to which a test produces similar results when given to the same people at different points in time
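Test-retest reliability is typically estimated by correlating scores from the two administrations. A brief sketch using a Pearson correlation; the scores are made up for illustration:

```python
from scipy.stats import pearsonr

# Hypothetical anxiety-scale scores for the same six clients,
# tested two weeks apart.
time_1 = [22, 35, 18, 40, 27, 31]
time_2 = [24, 33, 19, 38, 29, 30]

r, p_value = pearsonr(time_1, time_2)
print(f"Test-retest correlation: r = {r:.2f}")  # high r suggests scores are stable over time
```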
Inter-item Reliability
consistency among the multiple items that measure the same construct (e.g., the items on a personality questionnaire should point to the same result)
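Inter-item reliability is commonly summarized with Cronbach's alpha. SciPy and NumPy do not provide an alpha function, so the sketch below computes it directly from the standard formula, using invented item responses:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 5 people answering 4 items on the same scale.
responses = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [1, 2, 1, 2],
    [4, 4, 3, 4],
])
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```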
Parallel Forms Reliability
consistency between two or more forms of a test that assess the same thing (e.g., two different forms of the same exam)
Validity
Does the test measure what it’s supposed to?
Face Validity
the assessment appears, at face value, to be effective in terms of its stated aims
Content Validity
the test's items adequately cover all aspects of the construct being measured
Criterion-related Validity
how well scores on a measure relate to, or predict, an external criterion variable
What are the two types of Criterion-related validity?
concurrent & predictive
Concurrent Validity
the measure and the criterion are assessed at the same time to see whether one is significantly associated with the other
Predictive Validity
determines whether a measure taken now can accurately predict a criterion measured in the future
Construct-related Validity
how well a test measures the theoretical construct it is intended to measure
Convergent Validity
Does it relate to things that it should relate to?
Discriminant Validity
Does it show little or no relationship to things it should not relate to?
Standardization
a uniform set of procedures and norms so that a measure is administered, scored, and interpreted consistently across people
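In practice, norms let a raw score be interpreted relative to a reference group, for example by converting it to a z-score based on the normative sample's mean and standard deviation. A minimal sketch with invented norm values:

```python
# Hypothetical norms for a standardized symptom scale.
NORM_MEAN = 50.0   # mean of the normative sample
NORM_SD = 10.0     # standard deviation of the normative sample

def z_score(raw_score: float) -> float:
    """Express a raw score in standard-deviation units relative to the norms."""
    return (raw_score - NORM_MEAN) / NORM_SD

print(z_score(65))  # 1.5 -> one and a half SDs above the normative mean
```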