Chapter 12 - Principles of Test Selection and Administration Flashcards
Concurrent validity
The extent to which test scores are associated with those of other accepted tests that measure the same ability
Construct validity
The degree to which a test measures what it is supposed to measure
Content validity
The assessment by experts that the testing covers all relevant subtopics or component abilities in appropriate proportions needed for the sport
Convergent validity
A high positive correlation between results of the test being assessed and those of the recognized measure of the same construct (the "gold standard")
Discriminant validity
The ability of a test to distinguish between two different constructs, evidenced by low correlations between results of tests that measure different constructs
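Convergent and discriminant validity are both judged from correlation coefficients: results of the test under evaluation should correlate highly with a gold-standard measure of the same construct, and weakly with measures of a different construct. A minimal sketch in Python, using entirely made-up scores for eight athletes (the tests, numbers, and thresholds here are illustrative assumptions, not data from the chapter):

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Hypothetical scores (illustrative numbers only).
field_jump_test = [55, 60, 58, 62, 57, 65, 59, 63]    # new field test (cm)
force_plate_jump = [54, 61, 57, 63, 56, 66, 58, 64]   # lab "gold standard" (cm)
sit_and_reach = [30, 22, 35, 28, 25, 31, 27, 24]      # different construct (cm)

# Same construct: expect r close to 1 (evidence of convergent validity).
print(f"convergent r   = {pearson_r(field_jump_test, force_plate_jump):.2f}")

# Different construct: expect r near 0 (evidence of discriminant validity).
print(f"discriminant r = {pearson_r(field_jump_test, sit_and_reach):.2f}")
```

With these invented numbers the field test tracks the force-plate scores almost perfectly while being essentially uncorrelated with flexibility, which is the pattern both definitions describe.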
Evaluation
The process of analyzing test results for the purposes of making decisions
Face validity
The appearance to the athlete and other observers that the test measures what it is purported to measure - “the appearance of test validity to nonexperts”
Field test
A test used to assess ability that is performed away from the laboratory and does not require extensive training or expensive equipment
Formative evaluation
A periodic reevaluation based on midtests administered during the training period, usually at regular intervals
Interrater agreement
The degree to which different raters agree in their test results over time or on repeated occasions (aka interrater reliability, objectivity)
Interrater reliability
The degree to which different raters agree in their test results over time or on repeated occasions (aka interrater agreement, objectivity)
Intrarater variability
Lack of consistent scores by a given tester
Intrasubject variability
Lack of consistent performance from a testing athlete
Measurement
The process of collecting test data
Midtest
A test administered one or more times during the training period to assess progress and modify the program as needed