Reliability & Validity Flashcards
Definition of reliability
consistency, stability, and accuracy of the instrument
Types of reliability
Internal reliability, test-retest reliability, inter-examiner reliability
Internal reliability
consistency of results across items within a test and on the test as a whole. Can be measured using split-half reliability; a correlation of .70 or higher is desired.
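The split-half idea above can be sketched numerically. This is a minimal illustration with made-up item scores (all names and data are hypothetical): the test is split into odd- and even-numbered items, the two half-scores are correlated, and the Spearman-Brown formula estimates the full-test reliability from that half-test correlation.

```python
# Minimal sketch of split-half reliability using hypothetical item scores.
# Each row is one test-taker; each column is one item (data are made up).
scores = [
    [4, 5, 3, 4, 5, 4],
    [2, 3, 2, 2, 3, 2],
    [5, 5, 4, 5, 4, 5],
    [3, 2, 3, 3, 2, 3],
    [4, 4, 5, 4, 5, 4],
]

def pearson_r(xs, ys):
    """Pearson correlation between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Split each test into odd-numbered and even-numbered items, sum each half.
odd_totals = [sum(row[0::2]) for row in scores]
even_totals = [sum(row[1::2]) for row in scores]

half_r = pearson_r(odd_totals, even_totals)
# Spearman-Brown correction: estimates full-length-test reliability
# from the correlation between the two halves.
full_r = 2 * half_r / (1 + half_r)
print(round(half_r, 2), round(full_r, 2))
```

With these invented scores both values exceed the .70 benchmark; the corrected coefficient is always at least as large as the half-test correlation, since a longer test is more reliable.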
Test-retest reliability
Stability of the measure across time. Examine the relationship between scores on a test given twice over an interval during which change is unlikely to occur. Learning (practice) effects can be balanced against spontaneous recovery.
Inter-examiner reliability
How much the examiner influences performance. Examine the relationship between the same individual's scores on tests administered by different examiners.
Definition of validity
degree to which a test measures what it purports to measure
Types of validity
Face validity, Content validity, Criterion validity (concurrent, predictive), Construct validity (convergent, divergent)
Face validity
subjective judgments by individuals using the test that there is a match between the purpose of the test and its content
Content validity
adequacy of sampling from the domain of the construct to be measured (i.e., items should be representative of the construct)
Criterion validity
how well the score on the test predicts a certain outcome. Correlation between scores on the test under study and an external measure.
Concurrent validity
relationship between scores on the test and a specific criterion measured at the same time (e.g., a previously validated test); indicates how well the test discriminates.
Predictive validity
how well a score on a test can predict future events
Sensitivity
percent of individuals with aphasia scoring below the cut-off point (true positives correctly identified)
Specificity
percent of individuals without aphasia scoring above the cut-off point (true negatives correctly identified)
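The two cut-off definitions above amount to simple proportions. A short sketch with hypothetical screening data (the scores, diagnoses, and cut-off value are all invented for illustration; lower scores are assumed to indicate aphasia):

```python
# Sketch: sensitivity and specificity for a hypothetical screening cut-off.
# Each tuple is (test score, has_aphasia); all data are made up.
results = [
    (12, True), (15, True), (20, True), (28, False), (26, True),
    (35, False), (40, False), (18, True), (33, False), (21, False),
]
CUTOFF = 24  # hypothetical cut-off: scoring below it flags aphasia

with_disorder = [s for s, has in results if has]
without_disorder = [s for s, has in results if not has]

# Sensitivity: proportion of individuals WITH aphasia scoring below the cut-off.
sensitivity = sum(s < CUTOFF for s in with_disorder) / len(with_disorder)
# Specificity: proportion WITHOUT aphasia scoring at or above the cut-off.
specificity = sum(s >= CUTOFF for s in without_disorder) / len(without_disorder)
print(sensitivity, specificity)
```

Moving the cut-off trades one against the other: a higher cut-off catches more true cases (raising sensitivity) but flags more unimpaired individuals (lowering specificity).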
Construct validity
the test's ability to index the underlying theoretical construct it intends to measure. Supported by theoretical rationale and empirical evidence.