Module 3: Validity and Utility Flashcards

1
Q

Validity

A

a judgement or estimate of how well a test measures what it purports to measure.

2
Q

Validation

A

the process of gathering and evaluating evidence about validity.

3
Q

Validity is often conceptualised as three categories:

A
  • Face validity*: the test appears to cover relevant content (*not one of the three formal categories)
  • Content validity: based on an evaluation of the content covered by the test
  • Criterion validity: obtained by evaluating the relationship between scores on the test and scores on other tests/measures
  • Construct validity: arrived at by comprehensive analysis of:
    o how scores on the test relate to other scores and measures, and
    o how scores on the test can be understood within some theoretical framework for understanding the construct the test was designed to measure
4
Q

Face Validity

A

Face validity: a judgement concerning how relevant the test items appear to be.

If a test appears to measure what it purports to measure ‘on the face of it’, it has high face validity.

Do these have high face validity?
  o Personality tests (e.g., the NEO)? Yes
  o Rorschach ink blot? Low
  o IQ tests? Some yes, some no
5
Q

Content Validity

A

Content validity: a judgement of how adequately a test samples behaviour representative of the universe of behaviour the test was designed to sample.

Test blueprint: a plan regarding the types of information to be covered by the items, the number of items tapping each area of coverage, the organisation of the items in the test, etc.

Lawshe’s (1975) content validity ratio (CVR)

  1. Select set of panel members who are experts in the content area
  2. Ask panellists to rate each item as one of
    a. Essential
    b. Useful but not essential
    c. Not necessary
  3. Compute the content validity ratio (CVR) for each item:

     CVR = (n_e − N/2) / (N/2)

     where n_e = the number of panellists rating the item ‘essential’ and N = the total number of panellists.
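The steps above can be sketched in Python (a minimal illustration; the function name and example numbers are mine, not from the source):

```python
def content_validity_ratio(n_essential, n_panellists):
    """Lawshe's content validity ratio: CVR = (n_e - N/2) / (N/2).

    Ranges from -1 (no panellist rates the item 'essential')
    to +1 (every panellist does); 0 means exactly half do.
    """
    half = n_panellists / 2
    return (n_essential - half) / half

# E.g., 9 of 10 panellists rate an item 'essential':
print(content_validity_ratio(9, 10))  # 0.8
```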
6
Q

Criterion Validity

A

A criterion is the standard against which a test or a test score is evaluated.

Characteristics of an adequate criterion:

  • Relevant for the matter at hand
  • Valid for the purpose for which it is being used
  • Uncontaminated (i.e., it is not part of the predictor)
7
Q

The validity coefficient

A

The validity coefficient: a correlation coefficient that provides a measure of the relationship between test scores and scores on the criterion measure.
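In the simplest case the validity coefficient is a Pearson correlation between test scores and criterion scores; a pure-Python sketch (the data are hypothetical):

```python
import math

def pearson_r(test_scores, criterion_scores):
    """Pearson correlation: the validity coefficient when x = test, y = criterion."""
    n = len(test_scores)
    mx = sum(test_scores) / n
    my = sum(criterion_scores) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(test_scores, criterion_scores))
    sxx = sum((x - mx) ** 2 for x in test_scores)
    syy = sum((y - my) ** 2 for y in criterion_scores)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical selection-test scores vs later performance ratings:
print(round(pearson_r([10, 12, 14, 16, 18], [3, 4, 4, 5, 6]), 2))  # 0.97
```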

8
Q

Incremental validity

A

the degree to which an additional predictor explains variation in the criterion measure beyond that explained by existing predictors. In other words, does your test have utility beyond existing tests?
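Incremental validity can be illustrated as the gain in R² when the new test is added to an existing predictor. A pure-Python sketch with hypothetical data (two-predictor regression solved via the centred normal equations):

```python
def r2_one(x, y):
    """R^2 for a single predictor (squared Pearson correlation)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy ** 2 / (sxx * syy)

def r2_two(x1, x2, y):
    """R^2 for two predictors, via the centred normal equations."""
    n = len(y)
    m1, m2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    a = [v - m1 for v in x1]
    b = [v - m2 for v in x2]
    c = [v - my for v in y]
    s11, s22 = sum(v * v for v in a), sum(v * v for v in b)
    s12 = sum(p * q for p, q in zip(a, b))
    s1y = sum(p * q for p, q in zip(a, c))
    s2y = sum(p * q for p, q in zip(b, c))
    det = s11 * s22 - s12 * s12
    b1 = (s22 * s1y - s12 * s2y) / det
    b2 = (s11 * s2y - s12 * s1y) / det
    return (b1 * s1y + b2 * s2y) / sum(v * v for v in c)

old_test = [1, 2, 3, 4, 5]          # existing predictor
new_test = [2, 1, 4, 3, 5]          # candidate test
criterion = [o + n for o, n in zip(old_test, new_test)]

# Incremental validity = R^2 gained by adding the new test:
incremental = r2_two(old_test, new_test, criterion) - r2_one(old_test, criterion)
print(round(incremental, 2))  # 0.1
```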

9
Q

Expectancy table

A

shows the proportion of people within test-score intervals who are subsequently rated in various categories of the criterion (e.g., ‘passed’ vs ‘failed’).
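A small Python sketch of building an expectancy table (the score intervals and data are hypothetical):

```python
def expectancy_table(scores, passed, intervals):
    """Proportion of people in each test-score interval who passed the criterion.

    intervals: (low, high) pairs; low inclusive, high exclusive.
    """
    table = {}
    for lo, hi in intervals:
        in_band = [p for s, p in zip(scores, passed) if lo <= s < hi]
        if in_band:
            table[(lo, hi)] = sum(in_band) / len(in_band)
    return table

scores = [55, 62, 68, 71, 78, 85, 91]
passed = [False, False, True, True, True, True, True]
print(expectancy_table(scores, passed, [(50, 70), (70, 90), (90, 110)]))
```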

10
Q

Construct validity

A

Construct validity: the ability of a test to measure the theorised construct (e.g., intelligence, aggression, personality) that it purports to measure.

If a test is a valid measure of a construct, high scorers and low scorers should behave as theorised.

All types of validity evidence, including evidence from the content- and criterion-related varieties of validity, come under the umbrella of construct validity.

11
Q

Evidence of homogeneity

A

Evidence of homogeneity: how uniformly a test measures a single concept.
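Homogeneity is commonly quantified with an internal-consistency statistic such as Cronbach’s alpha (the statistic is not named in the source; this is one standard way to assess it):

```python
def variance(xs):
    """Population variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: one list of scores per item, all over the same respondents.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - sum(variance(i) for i in items) / variance(totals))

# Three perfectly consistent items -> alpha = 1.0:
items = [[1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5]]
print(round(cronbach_alpha(items), 2))  # 1.0
```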

12
Q

Evidence of changes with age

A

Evidence of changes with age: some constructs are expected to change over time (e.g., reading rate).

13
Q

Evidence of pretest/posttest changes:

A

Evidence of pretest/posttest changes: test scores change as a result of some experience between a pretest and a posttest (e.g., therapy)

14
Q

Evidence from distinct groups:

A

Evidence from distinct groups: scores on a test vary in a predictable way as a function of group membership (e.g., impulsivity should be higher in substance users).

15
Q

Convergent validity:

A

Convergent evidence: scores on a test undergoing construct validation tend to correlate highly, in the predicted direction, with scores on older, more established tests designed to measure the same (or similar) constructs.

16
Q

Discriminant evidence:

A

Discriminant evidence: a validity coefficient shows little relationship between test scores and other variables with which scores on the test should not, in theory, be correlated.

17
Q

Validity and test bias

A

Bias: a factor inherent in a test that systematically prevents accurate, impartial measurement
- Bias implies systematic variation in test scores.

Fairness: the extent to which a test is used in an impartial, just, and equitable way.

Rating error: judgement resulting from intentional or unintentional misuse of a rating scale.

  • Raters may be too lenient, too severe, or reluctant to give ratings at the extremes (central tendency error).
  • Halo effect: the tendency to give a particular person a higher rating than he or she objectively deserves because of a favourable overall impression.
18
Q

Utility of tests

A

Utility: the usefulness or practical value of testing to improve efficiency.

19
Q

Factors affecting utility

A

Psychometric soundness:

  • Generally, higher validity = greater utility
  • But many factors affect utility, and utility can be assessed in different ways.
  • Valid tests are not always useful.

Costs:

  • Economic costs? E.g., purchasing a test and scoring sheets, training programs, software, hardware, the cost of not using the best test.
  • Non-economic costs? E.g., time, ethical considerations, face validity, poor data acquisition.

Benefits:

Do the benefits justify the costs?

  • What are the profits, gains, or advantages?
  • Better data?
  • More reliable assessment?
  • Increased validity of measurement?
  • Appropriate testing for your population (e.g., specific norms)?
  • Non-economic benefits e.g., cutting edge assessment?
20
Q

Utility analysis

A

Utility analysis: a family of techniques that entail a cost-benefit analysis to assist in decisions about the usefulness of an assessment tool.

  • Some utility analyses are straightforward; others are more sophisticated (e.g., using mathematical models).
  • Utility analyses often address the question, ‘Which test gives us the most bang for the buck?’
  • The endpoint of a utility analysis is an educated decision as to which of several alternative courses of action is optimal (in terms of costs and benefits).

Expectancy data: the likelihood that a test-taker will score within some interval of scores on a criterion measure.

21
Q

Determining cut score/cut points

A

Cut scores: what score will be used to differentiate people on the test? (i.e., only for categorical outcomes)

  • Relative cut scores: determined in reference to normative data.
  • Fixed cut scores: set on the basis of a minimum acceptable level.
  • Multiple cut scores: use of multiple cut points for a single predictor (e.g., grades A, B, C, etc.; categorised outcomes mild, moderate, etc.).
  • Multiple hurdles: need to achieve a lower cut point before advancing to the next stage of testing.
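Multiple cut scores on a single predictor can be sketched as a simple score-to-label mapping (the cut points and labels here are hypothetical):

```python
def classify(score, cut_scores, default="F"):
    """cut_scores maps a minimum score to a label; check from highest down."""
    for minimum, label in sorted(cut_scores.items(), reverse=True):
        if score >= minimum:
            return label
    return default

cuts = {85: "A", 70: "B", 50: "C"}  # hypothetical fixed cut scores
print([classify(s, cuts) for s in [92, 74, 55, 40]])  # ['A', 'B', 'C', 'F']
```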
22
Q

Methods of setting cut scores

A

The Angoff method: judgements of experts are averaged to yield cut scores for the test.

The known groups method: entails collection of data on the predictor of interest from groups known to possess, and not to possess, a trait, attribute, or ability of interest.

  • After analysis of the data, a cut score is chosen that best discriminates between the groups.
  • One problem with the known groups method: how do you know which ‘known groups’ to select?
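Both methods can be sketched in a few lines of Python (function names and data are mine; the known-groups cut here is chosen by brute force to maximise correct classification, which is one simple way to operationalise ‘best discriminates’):

```python
def angoff_cut_score(expert_estimates):
    """Angoff method: average the experts' judged minimum passing scores."""
    return sum(expert_estimates) / len(expert_estimates)

def known_groups_cut(with_trait, without_trait):
    """Known groups method: pick the candidate cut that best separates groups."""
    def accuracy(cut):
        hits = sum(s >= cut for s in with_trait) + sum(s < cut for s in without_trait)
        return hits / (len(with_trait) + len(without_trait))
    return max(sorted(set(with_trait + without_trait)), key=accuracy)

print(angoff_cut_score([60, 65, 70, 65]))                    # 65.0
print(known_groups_cut([70, 75, 80, 85], [40, 50, 55, 72]))  # 70
```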