Chapter 12: Test Selection and Administration Flashcards
Validity
the degree to which a test measures what it is supposed to measure
Construct validity
ability of a test to represent the underlying construct (the theory developed to organize and explain some aspects of existing knowledge and observations)
overall validity or the extent to which the test actually measures what it was designed to measure
To be valid, a test should:
measure abilities important in the sport
produce repeatable results
measure the performance of one athlete at a time
appear meaningful
be of suitable difficulty
be able to differentiate between various levels of ability
permit accurate scoring
include sufficient number of trials
withstand the test of statistical evaluation
face validity
appearance to the athlete and other casual observers that the test measures what it is purported to measure
content validity
assessment by experts that the testing covers all relevant subtopics or component abilities in appropriate proportions; includes all of the component abilities needed for a particular sport or position
criterion-referenced validity
extent to which test scores are associated with some other measure of the same ability; the three types are concurrent, predictive, and discriminant validity
concurrent validity
extent to which test scores are associated with those of other accepted tests that measure the same ability
convergent validity
high positive correlation between results of the test being assessed and those of the recognized measure of the construct or the “gold standard”
predictive validity
extent to which the test score corresponds with future behavior or performance
discriminant validity
ability of a test to distinguish between two different constructs and is evidenced by a low correlation between the results of the test and those of tests of a different construct
Reliability
measure of the degree of consistency or repeatability of a test
test-retest reliability
statistical correlation of the scores from two administrations of the same test to the same group of athletes
Typical error of measurement
includes both the equipment error and biological variation of athletes
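One common way to estimate the typical error of measurement, sketched here with illustrative data, is to take the standard deviation of each athlete's trial-to-trial difference score and divide it by the square root of 2.

```python
import math
import statistics

# Hypothetical scores from two administrations of the same test
trial_1 = [10.2, 10.5, 9.8, 11.0, 10.7]
trial_2 = [10.3, 10.4, 9.9, 10.9, 10.8]

# Difference score for each athlete between the two trials
diffs = [b - a for a, b in zip(trial_1, trial_2)]

# Typical error = SD of the difference scores / sqrt(2); it pools
# equipment error and the athletes' biological variation
typical_error = statistics.stdev(diffs) / math.sqrt(2)
print(f"typical error = {typical_error:.3f}")
```

A smaller typical error means that less of the change seen between testing sessions is attributable to measurement noise.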
Factors resulting in differences in scores
intrasubject (within subjects) variability
lack of interrater (between raters) reliability or agreement
intrarater (within raters) variability
failure of the test itself to provide consistent results
intrasubject variability
lack of consistent performance by the person being tested
interrater reliability
degree to which different raters agree in their test results over time or on repeated occasions; measure of consistency
sources of interrater differences include:
variations in calibrating testing devices
variations in preparing athletes for the test
variations in running the test
differences in athlete motivation (some athletes may be more motivated by a particular coach)