NSCA CSCS - Chapter 12 Flashcards
Concurrent validity
The extent to which test scores are associated with those of other accepted tests that measure the same ability
Construct validity
The degree to which a test measures what it is supposed to measure
Content validity
The assessment by experts that the testing covers all relevant subtopics or component abilities in appropriate proportions needed for the sport
Convergent validity
A high-positive correlation between results of the test being assessed and those of the recognized measure of the construct (“The gold standard”)
Discriminant validity
The ability of a test to distinguish between two different constructs, evidenced by low correlations between results of tests measuring one construct and results of tests measuring another
Evaluation
The process of analyzing test results for the purposes of making decisions
Face validity
The appearance to the athlete and other observers that the test measures what it is purported to measure - “the appearance of test validity to nonexperts”
Field test
A test used to assess ability that is performed away from the laboratory and does not require extensive training or expensive equipment
Formative evaluation
A periodic reevaluation based on midtests administered during the training period, usually at regular intervals
Interrater agreement
The degree to which different raters agree in their test results over time or on repeated occasions (aka interrater reliability, objectivity)
Interrater reliability
The degree to which different raters agree in their test results over time or on repeated occasions (aka interrater agreement, objectivity)
Intrarater variability
Lack of consistent scores by a given tester
Intrasubject variability
Lack of consistent performance from a testing athlete
Measurement
The process of collecting test data
Midtest
A test administered one or more times during the training period to assess progress and modify the program as needed
Objectivity
The degree to which different raters agree in their test results over time or on repeated occasions (aka interrater agreement, interrater reliability)
Posttest
Test administered after the training period to determine the success of the training program in achieving the training objectives
Predictive validity
The extent to which the test score corresponds to future performance or behavior in the relevant sport
Pretest
A test administered before the beginning of training to determine the athlete’s initial basic ability levels
Reliability
A measure of the degree of consistency or repeatability of a test
Test
A procedure for assessing ability in a particular endeavor
Test battery
A series of tests performed one after the other
Test-retest reliability
The degree to which a test yields consistent scores for the same athlete, with the same ability, across repeated administrations
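Test-retest reliability is commonly quantified with a correlation between two administrations of the same test. A minimal sketch, assuming Pearson's r as the reliability statistic and hypothetical vertical-jump scores (the athlete data and function name are illustrative, not from the text):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical vertical-jump scores (cm) for eight athletes on two test days
trial_1 = [55, 61, 48, 70, 52, 66, 58, 63]
trial_2 = [56, 60, 50, 69, 53, 67, 57, 64]

r = pearson_r(trial_1, trial_2)  # values near 1.0 suggest high reliability
```

A high r between the two trials indicates that athletes with unchanged ability are being ranked consistently from one administration to the next.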
Typical error of measurement
Error in test scores attributable to the testers, the equipment, and biological variation in the athlete
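One widely used way to estimate typical error from a test-retest design is the standard deviation of the trial-to-trial difference scores divided by the square root of 2. A minimal sketch (the function name and data are illustrative, not from the text):

```python
from math import sqrt
from statistics import stdev

def typical_error(trial_1, trial_2):
    """Typical error of measurement estimated from two trials:
    SD of the difference scores divided by sqrt(2)."""
    diffs = [b - a for a, b in zip(trial_1, trial_2)]
    return stdev(diffs) / sqrt(2)

# Hypothetical squat 1RM scores (kg) for three athletes on two test days
te = typical_error([50, 60, 70], [52, 60, 68])
```

The resulting value is in the same units as the test score, so it can be read directly as the noise band around any single measurement.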