Chapter 16 - Psychological Testing and Diagnosis (ONE) Flashcards

1
Q

5 Benefits of Testing

A
  1. Optimize effectiveness by tweaking approaches
  2. Verify legitimacy of treatments
  3. Identify and end use of pseudoscience
  4. Accountability, ex. reporting back to the ministry, clients, or insurers with proof of treatment efficacy
  5. Meet ethical standards
2
Q

Why do we test?

A
  1. to evaluate effects of intervention; done before, during, and after treatment
  2. to guide decision making
  3. to prevent mistakes
3
Q

What are some common test scores used in communication with clients, and in what contexts are they used?

A
  1. Grade equivalents; teachers making IEPs, ex. he performs at a 6th-grade level
  2. Age equivalents; adults with severe intellectual disabilities, helps set realistic expectations and get them help, ex. his intellectual development is that of a 6-year-old
  3. Percentiles; used in many contexts, ex. he scored in the 6th percentile for intelligence
  4. Standard Scores; ex. IQ
    - may not account for strengths and weaknesses
    - help compare performance across time and different tests (see the conversion sketch below)
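Standard scores and percentiles are tied together by the normal curve. A minimal Python sketch of that conversion, assuming the conventional IQ scaling of mean 100 and standard deviation 15 and normally distributed scores (the helper name is illustrative, not from the card):

# Convert a standard score (ex. an IQ score) to a percentile rank,
# assuming scores are normally distributed with the given mean and SD.
from math import erf, sqrt

def standard_score_to_percentile(score, mean=100.0, sd=15.0):
    z = (score - mean) / sd                      # z-score
    return 100 * 0.5 * (1 + erf(z / sqrt(2)))    # normal CDF, scaled to 0-100

# ex. an IQ of 130 lands near the 98th percentile
print(round(standard_score_to_percentile(130), 1))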
4
Q

Define: validity, content/construct/criterion validity

A
  • validity; the test measures what it says it measures, necessary but not sufficient
    1. Content; test items cover the full range/content/symptoms of the issue you're testing
  • experts in the field judge this through consensus
    2. Construct; the test correlates with similar tests measuring the same construct
    3. Criterion; test scores are correlated with some external criterion, ex. those who score high on a depression test should similarly be diagnosed with depression
  • the test result should also predict a related outcome, ex. the LSAT predicts how well you'd do in law school (see the correlation sketch after this list)
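In practice a criterion validity coefficient is usually just the correlation between test scores and an external criterion. A minimal Python sketch with invented numbers used purely for illustration:

# Correlate hypothetical depression-screening scores with a hypothetical
# criterion (clinician severity ratings); a high correlation supports
# criterion validity. Requires Python 3.10+ for statistics.correlation.
from statistics import correlation

screening_scores  = [12, 25, 7, 30, 18, 22, 5, 27]   # invented test scores
clinician_ratings = [10, 28, 9, 33, 15, 24, 4, 29]   # invented criterion values

print(f"criterion validity coefficient: {correlation(screening_scores, clinician_ratings):.2f}")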
5
Q

Define: reliability

  • inter-rater/observer
  • test-retest
  • parallel-forms
  • internal consistency
A
  • reliability; the test yields the same values across repeated measurements of the same event; a test can be reliable but not valid
    1. Inter-Rater/Observer; achieved when 2+ people judge the same test and agree on the results
  • can reduce subjectivity
    2. Test-Retest; the subject would get the same result if given the same test over time without intervention, ex. IQ should be stable over time
    3. Parallel-Forms; the same test in different words; the person should get the same score
    4. Internal Consistency; a person scores similarly across items measuring the same construct (see the Cronbach's alpha sketch below)
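Internal consistency is often summarized with Cronbach's alpha. A minimal Python sketch, assuming a small invented response matrix in which each row is a respondent and each column is an item on the same scale:

# Cronbach's alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals)
from statistics import variance

responses = [        # rows: respondents, columns: items measuring one construct
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [1, 2, 1, 2],
    [4, 4, 3, 4],
]

def cronbach_alpha(rows):
    k = len(rows[0])                                   # number of items
    items = list(zip(*rows))                           # one tuple per item
    item_vars = sum(variance(item) for item in items)  # sum of per-item variances
    total_var = variance([sum(r) for r in rows])       # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")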
6
Q

Threats to validity/reliability:

A
  1. Indirect measurement; measuring a behavior other than the behavior of interest, ex. indirect reporting (parent rating child)
  2. Human error
  3. Poorly designed measurement systems
  4. Inadequate training
  5. Unintended influences on test takers, ex. boredom, hunger, fatigue
7
Q

Difference between Standardized vs. Non-standardized Tests?

A
  • Standardized; given in the same way to all people, norm-based results, comparative, ex. IQ
  • Non-standardized; ex. dynamic assessment
8
Q

Difference between Individual and Group Tests?

A
  • Individual; ex. IQ
  • Group; ex. EQAO
9
Q

Difference between Speed based and Power based Tests?

A
  • Speed; ex. crossword puzzle
  • Power; person uses brain power to solve, ex. matrix puzzles
10
Q

Difference between maximum and typical Test goals?

A
  • Maximum; goal is for the maximum score, ex. IQ
  • Typical; goal is for a typical score, ex. Child Behaviour Checklist, Conners (ADHD)
11
Q

Who created the first intelligence test? What is it called today?

A
  • Alfred Binet
  • Stanford-Binet Intelligence Scale
12
Q

Who created the most used intelligence test? Why?

A
  • David Wechsler
  • he objected to the single score offered by the 1937 Binet scale
13
Q

Define/explain: Achievement Tests

A
  • measures the degree of an individual’s learning in a subject, ex. reading, math, and writing
  • measures components of each
  • used for educational placement, remediation, and diagnosis
  • ex. Wechsler Individual Achievement Test, Wide Range Achievement Test
14
Q

Define: Personality Test

A
  • used to distinguish people experiencing psychiatric problems
  • Myers-Briggs Type Indicator, also used with non-clinical populations
  • Rorschach, a projective test
  • can be objective or projective; projective tests depend on the interpretations of the assessor
15
Q

What are some benefits and problems with diagnosis?

A
  • Benefits: opens the door for services, directs treatment, aids communication between professionals, allows the person to externalize their “problem”
  • Problems: stigma, multiple diagnoses and doctor shopping, can set up self-fulfilling prophecies for clients
16
Q

Define/explain: Diagnosis

A
  • a description of the client’s condition, ex. based on the DSM-5 in North America and the ICD-10 in Europe
  • the meaning derived from assessment information, translated into a classification system
  • some diagnostic categories are shared with clients; others are withheld
17
Q

Define: Assessment

A

the procedures and processes of collecting information and measures of human behavior apart from test data

18
Q

6 purposes of Assessment:

A
  1. Obtain information on client’s presenting problem
  2. Identify contributing variables to the problem
  3. Determine the client’s goals/expectations
  4. Gather baseline data
  5. Educate and motivate the client
  6. Plan treatment interventions and strategies
19
Q

Examples of Assessment Techniques:

A
  1. Standardized tests; must be trained to administer
  2. Diagnostic interviews – structured clinical interviews
  3. Projective personality measures
  4. Questionnaires
  5. Mental status examinations
  6. Checklists
  7. Behavioral observation
  8. Collateral reports by other professionals
20
Q

Define/explain: Mental Status Examination (MSE), what are the 6 categories

A
  • used in settings requiring assessment, diagnosis, and treatment of mental health issues
  • Categories are:
    1. Appearance
    2. Mood
    3. Speech and language
    4. Thought process
    5. Cognition
    6. Insight and judgment
21
Q

Define: Dual Diagnosis

A

an individual is perceived to be carrying both a substance abuse and a mental health diagnosis

22
Q

Define: Comorbid

A

two conditions existing simultaneously but independently

23
Q

Define: Clinical Decision-Making

A

the intricate decisions professional counselors make when they assess the degree of severity of a client’s symptoms, identify a client’s level of functioning, and make decisions about a client’s prognosis

24
Q

What does DSM stand for?

A

Diagnostic and Statistical Manual of Mental Disorders

25
Q

Define: consequential validity

A

the social consequences of using a particular test for a particular purpose