Lecture 5: Reliability And Validity Flashcards
0
Q
Face validity
A
- does the measure appear to measure what it claims to measure?
- assessed qualitatively or by expert panel and/or client input
- consider: has little scientific rigor, necessary but not sufficient condition
1
Q
The process of scale development
A
- the intuitive concept
- definition of construct
- operational definition
- measurement scale
- validity of measurement
- reliability of measurement
2
Q
Concurrent validity
A
- The new measure should correlate (imperfectly) with an established measure of the same construct
- needs to be theoretically related as well
3
Q
Convergent validity
A
- two new measures of the same construct should correlate with each other
4
Q
Construct validity
A
- demonstrates that the measure being validated behaves as the construct would behave under varying conditions
- assessed through a triangulation of correlations
5
Q
Discriminant validity
A
- groups that ought to differ with respect to the construct are found to do so. Related to predictive validity
- assessed via differences in mean scores by group and an unpaired t-test to assess statistical significance of the difference
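The group comparison can be sketched as follows (hypothetical scores for two groups expected to differ; the t-statistic is computed by hand with stdlib functions rather than a stats library):

```python
from statistics import mean, stdev

# Hypothetical scale scores for two groups that should differ on the construct
clinical = [28, 31, 25, 30, 27, 29]
control  = [18, 15, 20, 17, 16, 19]

def unpaired_t(a, b):
    # Welch's t-statistic: difference in means over the standard error
    # of the difference (no equal-variance assumption)
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

t = unpaired_t(clinical, control)
print(f"t = {t:.2f}")  # a large |t| indicates the group means differ
```

In practice the t-statistic would be compared against the t-distribution to get a p-value.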
6
Q
Sources of unreliability: observer error
A
- technical errors or misjudgments by the individual rating participants
- difficult to assess; inter-rater reliability covers part of this problem
7
Q
Sources of unreliability: Environment changes
A
- changes in the environment may influence performance by the participant
- ideally test under consistent circumstances
8
Q
Sources of unreliability: participant changes
A
- changes within the participant (e.g. fatigue, mood, practice effects) that alter the score
9
Q
Sources of unreliability: changes in the construct
A
- the construct being measured changes between testing occasions, e.g. pain
- measurements need to be made near enough in time to minimize risk of change in the underlying construct
10
Q
Test-retest reliability
A
- same scores from individuals across 2 points in time
- assessed by correlation and examining the magnitude and significance of change within person
11
Q
Parallel forms reliability
A
- similar to test-retest but with alternate versions of the test being used
- helps avoid score changes due to prior exposure to the test
12
Q
Internal reliability
A
- the extent to which the items that make up a measure are all measuring a consistent construct
- assessed by split-half reliability (correlate scores calculated by randomly splitting the items into 2 halves), which can lead to different correlations depending on how the items are split
- Cronbach’s alpha: an average of all possible split half correlations
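Cronbach's alpha can be computed directly from the item-level scores with the standard variance formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch with hypothetical responses:

```python
from statistics import pvariance

# Hypothetical responses: rows = participants, columns = items of one scale
items = [
    [3, 4, 3, 5],
    [2, 2, 3, 2],
    [4, 5, 4, 4],
    [1, 2, 2, 1],
    [3, 3, 4, 3],
]

k = len(items[0])  # number of items
item_vars = [pvariance([row[i] for row in items]) for i in range(k)]
total_var = pvariance([sum(row) for row in items])

# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
alpha = k / (k - 1) * (1 - sum(item_vars) / total_var)
print(f"alpha = {alpha:.2f}")
```

Alpha near 1 indicates the items vary together, i.e. they appear to measure a consistent construct.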