Ch. 12: Critical Review of Tests Flashcards

1
Q

Questions to consider: Broad issues

A
  1. What is the original purpose of the test?
  2. What is the specific goal of administering the test?
  3. Is the test being used in the manner in which it was intended?
  4. For the purpose it was intended?
  5. In the population for which it was initially designed and validated?
2
Q

Questions to consider: More granular

A

  1. Reliability
  2. Construct validity of the test
  3. Structural validity
  4. Discriminability
  5. Difficulty of questions

3
Q

Impact on test scores and indices of performance (statistics)

A

Measurement statistics: mean, standard deviation, correlation, effect size, sensitivity, specificity. Other statistics: Cronbach’s alpha, kappa, eigenvalues, factor loadings
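Two of the statistics above, sensitivity and specificity, can be computed directly from the outcomes of a diagnostic validation study. A minimal sketch (the counts below are hypothetical, not from the chapter):

```python
# Illustrative sketch: sensitivity and specificity from diagnostic outcomes.
# All counts are hypothetical.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Proportion of people WITH the disorder whom the test correctly flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Proportion of people WITHOUT the disorder whom the test correctly clears."""
    return true_neg / (true_neg + false_pos)

# Hypothetical validation study: 50 children with the disorder, 50 without.
print(sensitivity(true_pos=45, false_neg=5))   # 0.9
print(specificity(true_neg=40, false_pos=10))  # 0.8
```

A test can score well on one measure and poorly on the other, which is why both are reported when evaluating diagnostic accuracy.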

4
Q

When selecting a test for clinical use

A
  1. Purpose of the assessment tool is identified
  2. Tester qualifications are explicitly stated
  3. Testing procedures are well explained
  4. Adequate standardization size
  5. Clearly defined standardization sample
  6. Evidence of item analysis
  7. Measures of central tendency
  8. Convergent validity
  9. Predictive validity
  10. Test–retest reliability
5
Q

When selecting a test for clinical use: Purpose of the assessment tool is identified

A
  1. Diagnose the presence or absence of a disorder?
  2. Determine the severity level of a known disorder?
  3. Establish treatment goals and/or objectives?
  Clinicians need to be aware that assessment tools might purport to serve a specific purpose but offer no data to substantiate the validity of using the test for that rationale. If information related to the purpose of a test is not provided, the validity of the information collected with that tool may well be compromised.
6
Q

When selecting a test for clinical use: Tester qualifications are explicitly stated

A

Essential to the validity of a test: data cannot be considered valid if the test is administered and/or interpreted by an unqualified individual (e.g., knowing exactly what to say when an examinee responds differently than expected).

7
Q

When selecting a test for clinical use: Testing procedures are well explained:

A
  1. Administer the assessment tool in a way that matches the presentation of the test to those in the standardization sample.
  2. Any difference in how a standardized assessment tool is administered yields scores that cannot be reliably compared to the normative sample.
  3. The quality of the data collected can be compromised, rendering test scores unusable for the purpose(s) they were intended to fulfill.
8
Q

When selecting a test for clinical use: Adequate standardization size

A

  1. Test scores that are compared to larger groups are more stable, and thus can be used more dependably in the clinical decision-making process.
  2. Smaller sample sizes can also indicate a less representative sample for comparing scores: with a small standardization pool, it becomes doubtful that all possible subgroups of children (e.g., ethnicity, socioeconomic status) have been included in a satisfactory manner, rendering the assessment tool unusable in many clinical settings.

9
Q

When selecting a test for clinical use: Clearly defined standardization sample

A
  1. Provides the following information about the normative sample: geographic representation, socioeconomic status, and the language status of those in the normative group (typical vs. atypical language skills).
  2. Information about the sample relative to the diagnostic purposes of the test (i.e., how many people in the sample meet the diagnosis of interest?)
10
Q

Why would it be a bad idea to administer a test to a person who was not represented in the normative sample?

A

The resulting scores may be meaningless for the person being tested, because the norms were derived from a different group and therefore offer no valid basis for comparison.

11
Q

When selecting a test for clinical use: Evidence of item analysis

A
  1. Item analysis is used to maximize both the reliability and quality of the questions included.
  2. Looking at the content of individual questions; screening items for inclusion in the assessment tool.
  3. Ensuring that tests target the skills they purport to measure.
  4. Factor structure supports the theory of the construct.
  5. Use of an assessment tool that fails to report item-analysis data could lead to clinical judgments being made on the basis of poorly constructed test questions.
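Two common item-analysis indices from classical test theory are item difficulty (the proportion of examinees answering correctly) and item discrimination (how well the item separates high from low scorers). A minimal sketch with fabricated data; the function names are illustrative, not from the chapter:

```python
# Hypothetical sketch of classical item analysis: difficulty and discrimination.
from statistics import mean, pstdev

def item_difficulty(item_scores):
    """p-value of an item: proportion of examinees answering correctly (0/1 scores)."""
    return sum(item_scores) / len(item_scores)

def item_discrimination(item_scores, rest_scores):
    """Pearson correlation between the item score and the rest-of-test score."""
    mi, mr = mean(item_scores), mean(rest_scores)
    cov = mean((i - mi) * (r - mr) for i, r in zip(item_scores, rest_scores))
    return cov / (pstdev(item_scores) * pstdev(rest_scores))

# Five examinees: one item (1 = correct) and their totals on the remaining items.
item = [1, 1, 0, 1, 0]
rest = [18, 15, 9, 14, 7]
print(item_difficulty(item))                # 0.6
print(item_discrimination(item, rest) > 0)  # True: the item favors higher scorers
```

Items that are too easy, too hard, or negatively discriminating are candidates for revision or removal before the test is standardized.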
12
Q

When selecting a test for clinical use: Measures of central tendency

A
  1. Mean and standard deviation of all subtest scores for all groups of the normative sample.
  2. These measures are the basis for the other scores that are derived for comparison of performance.
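The derivation mentioned in item 2 can be made concrete: a raw score is converted to a z-score using the normative mean and standard deviation, and then rescaled to a familiar standard-score metric. A sketch with hypothetical norms:

```python
# Sketch (hypothetical norms): raw score -> z-score -> standard score.

def z_score(raw, norm_mean, norm_sd):
    """How many standard deviations the raw score falls from the normative mean."""
    return (raw - norm_mean) / norm_sd

def standard_score(raw, norm_mean, norm_sd, scale_mean=100, scale_sd=15):
    """Rescale to a common metric (mean 100, SD 15, as on many standardized tests)."""
    return scale_mean + scale_sd * z_score(raw, norm_mean, norm_sd)

# Hypothetical subtest norms: mean = 40, SD = 8. A raw score of 28 yields:
print(z_score(28, 40, 8))         # -1.5
print(standard_score(28, 40, 8))  # 77.5
```

This is why a test that fails to report the normative mean and SD leaves the clinician with no defensible way to interpret a raw score.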
13
Q

When selecting a test for clinical use: Convergent validity

A
  1. Evidence demonstrating a correlation between results obtained from the test in question and other, similar assessment tools in indicating the presence or absence of the disorder.
  2. Shows that results from a given assessment tool are more likely to be valid if a tool that assesses a similar construct has yielded analogous results.
14
Q

When selecting a test for clinical use: Predictive validity

A
  1. Provides evidence that performance on a given test is predictive of performance observed in a more functional setting, through direct observation or a gold-standard interview.
  2. Absence of predictive validity leads to uncertainty as to how assessment tools and real-life tasks can be compared.
  3. Further, decisions related to intervention planning could be compromised by a lack of reliability in test scores collected from such instruments.
15
Q

When selecting a test for clinical use: Test–retest reliability

A
  1. Ensure that scores attained on a given test are stable over time.
  2. The time interval should be chosen based on the construct: is this construct supposed to be stable over a week, a month, a year?
  3. When relevant: inter-examiner reliability ensures that test scores do not fluctuate when different clinicians administer the test battery.
16
Q

Friberg (2010): Considerations for test selection: How do validity and reliability impact diagnostic decisions?

A

Assessment Criteria Needed

  1. Purpose of the assessment tool is identified
  2. Tester qualifications are explicitly stated (specify any special training needed to administer)
  3. Testing procedures are well explained
  4. Adequate standardization size (N = 100+)
  5. Clearly defined standardization sample
    a. Geographic representation, socioeconomic status, and language status
    b. Inclusion of language impaired children if purpose is to assign severity
  6. Evidence of item analysis exists
    a. Studied and controlled for item difficulty and/or item validity (e.g., Classical Test Theory, which aims to improve the reliability of standardized assessment tools)
  7. Measures of central tendency are reported
  8. Concurrent validity is documented
    a. Demonstrating correlation between results obtained from test in question as well as other, similar assessment tools in indicating the presence or absence of disorder
  9. Predictive validity is documented *only shown in 2/9 assessments
    a. Give evidence showing that performance on a given test is predictive of observed performance
  10. Test-retest reliability is reported
    a. Ensure stability over time; correlation coefficient > .90
  11. Inter-examiner reliability is reported
    a. Ensure scores do not fluctuate when given by different clinicians; > .90
    - When choosing an assessment, clinicians first need to determine whether identification is accurate (sensitive and specific), then examine other psychometric evidence
17
Q

Cicchetti, D. V. (1994)

A
  1. Standardization
    a. Age, gender, education and/or occupation, geographic region, and urban/rural
  2. Norming
    a. Norming refers to the average score of a standardization sample
    b. Appropriate standardizations allow us to develop national norms for the valid interpretation of an individual’s score
  3. Test Reliability
    a. Internal consistency, test-retest and interexaminer reliability
  4. Test Validity
    a. Content validity, face validity, discriminant validity, clinical validity, concurrent validity, factorial validity, criterion validity
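The internal consistency mentioned under test reliability above is most often reported as Cronbach’s alpha. A hedged sketch computing it from an item-by-examinee score matrix; the data are fabricated for illustration:

```python
# Hypothetical sketch of internal consistency (Cronbach's alpha).
from statistics import pvariance

def cronbach_alpha(item_columns):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(item_columns)
    totals = [sum(scores) for scores in zip(*item_columns)]  # per-examinee totals
    item_var = sum(pvariance(col) for col in item_columns)
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

# Four items scored 0-3 for five examinees (one list per item; data fabricated).
items = [
    [3, 2, 1, 3, 0],
    [3, 3, 1, 2, 1],
    [2, 2, 0, 3, 0],
    [3, 2, 1, 3, 1],
]
print(round(cronbach_alpha(items), 2))  # 0.95
```

Higher values indicate that the items behave as measures of a single underlying construct; published guidance (including Cicchetti, 1994) grades coefficients in this range as excellent.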