Chapter 3 Flashcards
What is reliability in the context of psychological assessment and why is it important?
Reliability refers to the consistency of measurement—how dependable and stable a test or observation is over time, across raters, or within the test itself.
A reliable measure gives the same results under consistent conditions (e.g., a wooden ruler).
An unreliable measure varies randomly (e.g., a stretchy rubber ruler).
Reliability is essential for all assessment procedures to ensure accurate, trustworthy results.
What are the four main types of reliability used in assessment and diagnosis?
Inter-rater Reliability: Degree of agreement between independent observers (e.g., two umpires judging the same play). Crucial for interviews and observational measures; not relevant for self-report questionnaires.
Test–Retest Reliability: Consistency of test results over time. Best for stable traits (e.g., intelligence). Less useful for changing states (e.g., mood).
Alternate-Form Reliability: Consistency between scores on two different versions of a test; using alternate forms reduces the memory effects that arise when the same test is given twice.
Internal Consistency Reliability: Do the test items correlate with each other? (e.g., anxiety test items like dry mouth and muscle tension should align).
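A minimal sketch of how internal consistency is often quantified, using Cronbach's alpha on a small, made-up set of anxiety-item responses (NumPy assumed; the scores are hypothetical, for illustration only):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 5 people rating 3 anxiety items (1-5 scale)
scores = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [1, 2, 1],
    [3, 3, 4],
])
print(round(cronbach_alpha(scores), 2))  # closer to 1.0 = items hang together more consistently
```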
How is reliability measured and what do different scores mean?
Reliability is measured on a scale from 0 to 1.0.
The closer to 1.0, the more reliable the test.
Example:
.65 = only moderately reliable
.91 = highly reliable
High reliability is critical for producing valid, repeatable results in clinical diagnosis and assessment.
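As a rough sketch of where such a coefficient can come from, test–retest reliability is commonly reported as the Pearson correlation between two administrations of the same test (the scores below are hypothetical; NumPy assumed):

```python
import numpy as np

# Hypothetical IQ-style scores for the same 6 people tested twice, weeks apart
time1 = np.array([102, 95, 120, 88, 110, 98])
time2 = np.array([104, 93, 118, 90, 112, 97])

# Test-retest reliability: Pearson correlation between the two administrations
r = np.corrcoef(time1, time2)[0, 1]
print(round(r, 2))  # values near 1.0 (e.g., .91) indicate high reliability; ~.65 is only moderate
```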
What is validity in psychological assessment, and how does it relate to reliability?
Validity refers to whether a test measures what it is supposed to measure (e.g., does a hostility questionnaire actually measure hostility?).
Validity is dependent on reliability: an unreliable measure cannot be valid.
However, reliability ≠ validity: a test can be consistent (reliable) but still not measure the right thing.
Example: Height can be measured reliably, but it’s not a valid measure of anxiety.
Validity is a complex, theory-driven concept used to ensure accuracy and relevance in measurement.
What are criterion validity and content validity?
Criterion Validity
Assesses whether scores on a test correlate with another measure of the same construct (the criterion).
Example: A new social anxiety scale should correlate with existing social anxiety scales.
Content Validity
Evaluates whether a test adequately covers the domain of interest.
A good social anxiety test should include questions about various social situations.
It would lack content validity if used to measure specific phobias (like snakes or heights) because it doesn’t cover those areas.
What is construct validity, and how is it assessed?
Construct Validity assesses how well a test measures an inferred, unobservable concept (like anxiety proneness or distorted cognition).
Evaluated by checking how test results relate to:
Diagnostic groups (e.g., people with vs. without anxiety disorders)
Behavioral observations (e.g., fidgeting, trembling)
Physiological responses (e.g., heart rate, breathing)
Strong construct validity = test aligns with multiple related measures.
It is also tied to theory: for example, if theory links anxiety proneness to family history and test scores show that link, both the test’s construct validity and the theory gain support.
What is validity in psychological measurement?
Validity refers to whether a test measures what it is supposed to measure.
How is reliability related to validity?
Reliability is necessary for validity; an unreliable test cannot be valid, but a reliable test is not automatically valid.
What is criterion validity?
It evaluates whether a test correlates with other measures of the same construct.
What is content validity?
It assesses whether a test adequately covers the domain it aims to measure.
What is construct validity?
It evaluates how well a test measures a theoretical concept or construct that is not directly observable.
Why is diagnosis important in clinical care?
It guides treatment, can provide relief by naming and explaining a person’s symptoms, and enables clear communication among professionals.
What is the DSM-5-TR?
It is the Diagnostic and Statistical Manual of Mental Disorders, 5th edition, Text Revision, used for classifying psychological disorders.
How does DSM-5-TR organize diagnoses?
Diagnoses are defined by symptom patterns rather than causes, but disorders are grouped into chapters that reflect comorbidity and shared risk factors.
What cultural features were added in DSM-5?
The Cultural Formulation Interview, an appendix of culture-specific syndromes, and cultural notes in disorder descriptions.
What is comorbidity in the context of diagnosis?
The presence of more than one diagnosis in the same person; it is very common in clinical populations.
Why is culture important in diagnosis?
Culture shapes symptom expression, stigma, treatment access, and can influence whether behaviors are seen as disordered.
Give an example of a cultural syndrome in DSM-5-TR.
Taijin kyofusho: an intense fear of offending or embarrassing others through one’s appearance or behavior, described primarily in Japan; it overlaps with social anxiety disorder.
What is hikikomori?
A cultural syndrome observed in Japan and South Korea involving prolonged, extreme social withdrawal, especially among young men.
What criticism is made about too many diagnoses in DSM-5-TR?
It pathologizes normal variations and minor issues, contributing to excessive comorbidity and overlap among categories.
How does DSM-5-TR approach symptoms in a cultural context?
Clinicians are cautioned to diagnose only if symptoms are atypical and problematic within the person’s culture.
Why is thorough assessment important in diagnosis?
To avoid misdiagnosis, such as labeling symptoms as schizophrenia when another condition better explains them.
What is the “lumping vs. splitting” debate in diagnosis?
Lumping combines similar disorders due to shared risk factors and treatments; splitting maintains finer distinctions.
What is the “p factor”?
A proposed general psychopathology factor suggesting some risk factors relate to all mental disorders.