Validity and Reliability Flashcards
what is validity?
what is it dependent on?
the ability of a test to measure accurately, the degree to which a test measures what it purports to measure
–> dependent on reliability, relevance and appropriateness of scores
what is reliability
the consistency or repeatability of an observation, the degree to which repeated measurements of a trait are reproducible under the same conditions
4 common types of validity evidence
- construct validity
- logical validity
- criterion validity
- convergent validity
all types of validity can be estimated either _______ or _______
logically, statistically
the test effectively measures the desired construct
construct validity
the measure obviously involves the performance being measured
- no statistical evidence is required
logical/face validity
degree to which scores on a test are related to a recognized standard or criterion (gold standard)
criterion validity
AKA statistical or correlation validity
criterion validity
how is criterion validity obtained?
by determining the correlation/validity coefficient (r) between scores for a test and the criterion measure
The criterion is measured at approximately the same time as the alternate measure and the scores are compared
- ex?
concurrent validity
ex, skin folds and hydrostatic weighing
the criterion is measured in the future (week, months, years later)
- ex?
predictive validity
ex, the pre-selection test battery score and subsequent selection success
2 or more measurements are conducted to collect data and establish that a test (battery) is measuring what it purports to measure
convergent validity
the consistency or repeatability of an observation
- the degree to which repeated measurements of a trait are reproducible under the same conditions
reliability
how do you calculate reliability?
test-retest scores to calculate the reliability coefficient
e.g. r = 0.99 for sit and reach (very high reliability)
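The test-retest reliability coefficient is simply a Pearson correlation (r) between day-1 and day-2 scores. A minimal sketch in Python, using invented sit-and-reach scores (all numbers are hypothetical, not real data):

```python
# Hypothetical sit-and-reach scores (cm) for 6 people, tested on two days.
day1 = [28.0, 31.5, 25.0, 34.0, 29.5, 33.0]
day2 = [28.5, 31.0, 25.5, 34.5, 29.0, 32.5]

def pearson_r(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson_r(day1, day2)
print(f"reliability coefficient r = {r:.2f}")
```

The closer r is to 1.0, the more reproducible the measurement; the same calculation applied to a test score and a gold-standard criterion score gives a criterion validity coefficient instead.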
3 types of reliability
- stability reliability
- internal-consistency reliability
- objectivity
what is stability reliability ?
when scores do not change across days
–> look at the relationship b/w multiple trials across multiple days
3 factors that contribute to low stability
- the people tested may perform differently
- the measuring instruments may operate or be applied differently
- the person administering the measurement may change
what is internal consistency reliability ?
evaluator gives at least 2 trials of the test within a single day
the internal-consistency reliability coefficient is not comparable to the stability reliability coefficient; the I-C coefficient is almost always?
higher
what is objectivity reliability
rater/judge reliability
- inter-tester reliability
2 factors affecting objectivity
- the clarity of the scoring system
- the degree to which the ‘judge’ can assign a score accurately
9 considerations for reducing measurement error
- valid and reliable test
- instructions
- test complexity
- warm up and test trials (learning effect - may need 5 trials)
- equipment quality and preparation (e.g. calibration)
- testing environment
- scoring accuracy
- experience and state of mind of person conducting the test
- state of mind of person being tested
Why calibrate?
How calibrate?
important to confirm the accuracy of what the equipment is telling you
- requires comparison between two measurements (one of known magnitude and one of unknown magnitude that needs to be confirmed)
- check equipment is up to date and functioning properly
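The comparison-against-a-known-magnitude step can be sketched as a simple percent-error check. A minimal example, assuming a scale checked against a certified calibration weight (the readings and tolerance below are invented, not a manufacturer spec):

```python
# Hypothetical calibration check: compare the instrument's reading
# against a reference of known magnitude.
known_mass_kg = 20.0     # certified calibration weight (known magnitude)
scale_reading_kg = 20.3  # what the scale reports (unknown to be confirmed)

error_kg = scale_reading_kg - known_mass_kg
percent_error = 100 * error_kg / known_mass_kg
print(f"error = {error_kg:+.2f} kg ({percent_error:+.1f}%)")

TOLERANCE_PCT = 1.0  # assumed acceptable error; use the manufacturer's spec
needs_recalibration = abs(percent_error) > TOLERANCE_PCT
```

If the error exceeds the tolerance, the equipment should be recalibrated before testing continues.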
how often do you calibrate?
at least every 6 months, or according to manufacturer’s guidelines
reliability can be expected when ? (3)
- the testing environment is favorable to good performance
- people are motivated, ready to be tested and familiar with the test
- the person administering the test is trained and competent