Principles of Test Selection & Administration Flashcards
A test of ability performed away from the laboratory that does not require extensive training or expensive equipment
Field Test
Periodic re-evaluation based on mid-tests administered during training, usually at regular intervals; allows for monitoring of an athlete's progress and adjustment of the training program if needed
Formative evaluation
Overarching term that refers to the degree to which a test measures what it is supposed to measure
Validity
The ability of a test to represent the underlying construct that is desired; refers to the overall validity of a test
Construct validity
The subjective appearance to an athlete or other casual observer that a test measures what it is supposed to measure
Face validity
The assessment by experts that a test covers all relevant material, sub-topics, or component abilities in appropriate proportions
Content validity
Overarching term that refers to the extent that test scores are associated with some other measure of the same ability
Criterion-referenced validity
Type of criterion-referenced validity; the extent to which test scores are associated with those of other accepted tests that measure the same ability
Concurrent validity
Type of criterion-referenced validity; refers to high positive correlation between results of the test being assessed and those of the “gold standard” for measuring that specific construct
Convergent validity
Type of criterion-referenced validity; the extent to which the test score corresponds with future behavior or performance
Predictive validity
The ability of a test to distinguish between two different constructs; evidenced by a low correlation between the results of the test and those of tests measuring a different construct
Discriminant validity
Over-arching term that refers to the degree of consistency or repeatability of a test
Reliability
Refers to lack of consistent performance by the person being tested; can be impacted by time of day, temperature, or an athlete’s experience with a specific skill/test
Intra-subject variability
The degree to which different raters agree in their test results over time or on repeated occasions; can be impacted by rater variations in calibrating testing devices, preparing athletes, or administering the test, and by differing levels of leniency (e.g., allowing a shallower-than-standard 1RM back squat)
Inter-rater reliability or inter-rater agreement
Refers to a lack of consistent scores by a single tester; may be caused by inadequate training, inattentiveness, lack of concentration, or failure to follow standardized protocols
Intra-rater variability
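Intra-subject and intra-rater variability are both about score consistency across repeated trials. One common way to quantify that consistency, sketched here with invented numbers (the athlete data and function name are illustrative assumptions):

```python
# Hedged sketch: quantifying variability across repeated trials with the
# coefficient of variation (CV); a smaller CV means more consistent
# (more reliable) scores. Example data are invented for illustration.
from statistics import mean, stdev

def coefficient_of_variation(trials):
    """CV (%) = sample standard deviation / mean * 100."""
    return stdev(trials) / mean(trials) * 100

# Hypothetical repeated sprint-power trials (W) for two athletes
consistent_athlete = [100.0, 101.0, 99.5, 100.5]
variable_athlete   = [100.0, 120.0, 85.0, 110.0]

cv_consistent = coefficient_of_variation(consistent_athlete)
cv_variable = coefficient_of_variation(variable_athlete)
```

Here the first athlete's CV is well under 1% while the second's exceeds 10%, the kind of gap that would flag high intra-subject variability.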