Reliability Flashcards
How are predictive validity and concurrent validity different?
Predictive validity is assessed by calculating the correlation between a predictor and a criterion, with the criterion obtained at a later point in time. *Wei says that this type of validity is of interest when the criterion can only be obtained at a later point in time.*
Concurrent validity is also assessed by calculating the correlation between a predictor and a criterion; however, the two are obtained at the same time.
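Both coefficients are ordinarily just Pearson correlations; only the timing of the criterion differs. Below is a minimal sketch with hypothetical score arrays (the variable names and values are illustrative, not from any real study), using NumPy:

```python
import numpy as np

# Hypothetical data: an aptitude test given at hiring (predictor),
# supervisor ratings collected a year later (criterion for predictive validity),
# and a work-sample score collected the same day (criterion for concurrent validity).
test_scores     = np.array([52, 61, 47, 70, 58, 65, 43, 55])
ratings_later   = np.array([3.1, 3.8, 2.9, 4.5, 3.4, 4.0, 2.6, 3.3])
work_sample_now = np.array([48, 59, 50, 66, 55, 62, 41, 53])

# Predictive validity: correlate the predictor with a criterion obtained later.
r_predictive = np.corrcoef(test_scores, ratings_later)[0, 1]

# Concurrent validity: correlate the predictor with a criterion obtained at the same time.
r_concurrent = np.corrcoef(test_scores, work_sample_now)[0, 1]

print(f"predictive validity r = {r_predictive:.2f}")
print(f"concurrent validity r = {r_concurrent:.2f}")
```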
When should you assess predictive validity?
When you are interested in using a test (predictor) to predict future behavior (criterion).
When should you assess concurrent validity?
When the criterion can be assessed at the same time as the predictor. It is also used when it is difficult, or impossible, to obtain criterion scores at a later point in time.
Does concurrent validity tend to underestimate predictive validity? Why or why not?
Yes. It underestimates it because of range restriction: the people at one end of the score distribution (e.g., applicants who were never selected) are unavailable for testing on the criterion, and computing the correlation on that restricted range shrinks the observed validity coefficient.
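The range-restriction effect can be illustrated with a small simulation (simulated data only, assuming a population correlation of about .50): when only people above a selection cutoff remain available, the correlation computed in that restricted group comes out smaller than the full-range value.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Simulate a predictor and a criterion that correlate about .50 in the full population.
predictor = rng.standard_normal(n)
criterion = 0.5 * predictor + np.sqrt(1 - 0.5**2) * rng.standard_normal(n)
r_full = np.corrcoef(predictor, criterion)[0, 1]

# In a concurrent study, only the selected subgroup (e.g., people already hired
# because they scored above the cut) is available, restricting the range of scores.
selected = predictor > 0.5
r_restricted = np.corrcoef(predictor[selected], criterion[selected])[0, 1]

print(f"full-range r       = {r_full:.2f}")        # close to .50
print(f"range-restricted r = {r_restricted:.2f}")  # noticeably smaller
```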
What are four ways to assess the construct validity of a measure?
- See if there are any group differences (a known-groups sketch follows this list)
- See if scores change over time or following an intervention (e.g., training, treatment, etc.)
- See if scores correlate with related constructs and don’t correlate with unrelated constructs
- See how people process or think about the measure
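As one concrete illustration of the group-differences approach, a known-groups comparison checks whether groups expected to differ on the construct actually differ in their scores. A minimal sketch with hypothetical anxiety-scale scores (illustrative values only), using SciPy:

```python
import numpy as np
from scipy import stats

# Hypothetical anxiety-scale scores for a clinical group diagnosed with an
# anxiety disorder and for a non-clinical comparison group.
clinical   = np.array([34, 41, 38, 45, 37, 40, 43, 36])
comparison = np.array([22, 27, 19, 30, 25, 21, 28, 24])

# Known-groups evidence: if the scale really taps anxiety, the clinical group
# should score reliably higher than the comparison group.
t, p = stats.ttest_ind(clinical, comparison, equal_var=False)
print(f"clinical mean = {clinical.mean():.1f}, comparison mean = {comparison.mean():.1f}")
print(f"Welch t = {t:.2f}, p = {p:.4f}")
```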
What are two major types of construct validity?
- Convergent
- Discriminant
What is convergent validity?
Convergent validity is present when two different measures of the same concept correlate substantially with one another.
What is discriminant validity?
Discriminant validity is present when two measures of different concepts do not correlate (or correlate only weakly) with one another.
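In practice, both checks come down to inspecting correlations among measures: correlations between measures of the same construct should be substantial (convergent), and correlations between measures of different constructs should be weak (discriminant). A minimal sketch with hypothetical scale scores (names and values are illustrative):

```python
import numpy as np

# Hypothetical scores for the same respondents on three scales:
# two depression measures (same construct) and one sociability measure (different construct).
depression_a = np.array([12, 18, 9, 22, 15, 20, 7, 14])
depression_b = np.array([11, 19, 10, 21, 13, 18, 8, 15])
sociability  = np.array([30, 26, 31, 28, 25, 32, 27, 29])

# Convergent validity: two measures of the same concept should correlate highly.
r_convergent = np.corrcoef(depression_a, depression_b)[0, 1]

# Discriminant validity: measures of different concepts should correlate weakly.
r_discriminant = np.corrcoef(depression_a, sociability)[0, 1]

print(f"convergent r   = {r_convergent:.2f}")    # expect a large value here
print(f"discriminant r = {r_discriminant:.2f}")  # expect a value near zero here
```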
Is an instrument that is valid always reliable?
Yes
Is an instrument that is not valid reliable?
It may or may not be reliable.
Is an instrument that is reliable valid?
It may or may not be valid.
Is an instrument that is not reliable valid?
No. An instrument must be reliable in order to be valid.
What is necessary for a measure to be valid?
It must be reliable
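One standard way to see why reliability is necessary (but not sufficient) for validity is the correction-for-attenuation relationship from classical test theory: measurement error in the test and in the criterion shrinks the observed validity coefficient, so it cannot exceed the square root of the product of the two reliabilities. A sketch in standard notation (this formula is not stated in the cards above; $r_{xx'}$ and $r_{yy'}$ denote the reliabilities of the predictor and the criterion):

```latex
% Observed validity = true-score correlation, attenuated by unreliability:
r_{xy} \;=\; r_{T_x T_y}\,\sqrt{r_{xx'}\,r_{yy'}}
\qquad\Longrightarrow\qquad
r_{xy} \;\le\; \sqrt{r_{xx'}\,r_{yy'}}
% So if a measure is completely unreliable (r_{xx'} = 0), its observed validity
% must also be 0: reliability caps validity, but high reliability alone does
% not guarantee that the measure is valid.
```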
What are the two types of content validity?
- Face Validity - when a measure appears to measure what it claims to measure
- Logical/Sampling Validity - when a measure contains all, or a good portion, of the content within the domain/trait of interest
What are the two major types of criterion-related validity?
- Concurrent
- Predictive