Chapter 12: Principles of Test Selection and Administration Flashcards
a test used to assess ability that is performed in a natural environment and does not require extensive training or expensive equipment
field test
a procedure for assessing ability
test
the process of collecting test data
measurement
the process of analyzing test results for the purpose of making decisions
evaluation
a test administered prior to training to determine the athlete’s baseline measurements
pretest
a test administered one or more times during the training period to assess progress and modify the program as needed to maximize benefit
midtest
periodic reevaluation based on midtests administered during the training period, used to monitor progress and modify the program as needed
formative evaluation
a test administered after the training period to determine the success of the training program
posttest
the degree to which a test measures what it is supposed to measure
validity
what is the most important characteristic of testing?
validity
evaluation of test quality is based on what two things?
validity and reliability
the ability of a test to represent the underlying theoretical construct it is intended to measure
construct validity
the appearance to the athlete/observers that the test measures what it is supposed to measure
face validity
the extent to which experts judge the test to cover the relevant subtopics and component abilities
content validity
the extent to which test scores are associated with some other measure of the same ability; has three subtypes (concurrent, predictive, and discriminant)
criterion-referenced validity
the extent to which test scores are associated with those of other accepted tests that measure the same ability
concurrent validity
a high positive correlation between scores on the test and scores on the accepted gold-standard measure; often sought for tests that are easier, cheaper, and faster to administer than the gold standard
convergent validity
the extent to which the test score corresponds with future performance
predictive validity
the ability of a test to distinguish between two different constructs
discriminant validity
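The criterion-referenced validity cards above all rest on the same statistical idea: correlating scores on the test being validated with scores from a criterion measure. A minimal sketch, assuming Pearson's r as the validity coefficient; all scores below are hypothetical:

```python
# Pearson's r between a field test and a gold-standard criterion measure
# (assumption: r is used as the validity coefficient; data are made up).
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical: jump-and-reach field test (cm) vs. laboratory
# force-plate jump height (cm) for five athletes
field = [55.0, 61.0, 48.0, 70.0, 52.0]
lab = [54.0, 63.0, 50.0, 69.0, 51.0]
print(round(pearson_r(field, lab), 3))  # a high r supports concurrent validity
```

The same calculation with future performance as the criterion would speak to predictive validity instead.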
a measure of the degree of consistency or repeatability of a test
reliability
the ability of a test to provide consistent results/scores when administered twice to the same group
test-retest reliability
lack of consistent performance by the person being tested
intrasubject variability
the degree to which different raters agree on the scores they assign to the same test performances
interrater reliability (objectivity)
lack of consistent scores by a given tester
intrarater variability
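One common way to put a number on intrasubject variability (an assumption for illustration, not the only statistic used) is the typical error: the standard deviation of the test-retest difference scores divided by the square root of 2. A minimal sketch with hypothetical sprint data:

```python
# Typical error of measurement from test-retest scores
# (assumption: typical error = SD of difference scores / sqrt(2);
# all times below are hypothetical).
def typical_error(test, retest):
    diffs = [b - a for a, b in zip(test, retest)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # sample SD of the differences
    sd_d = (sum((d - mean_d) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return sd_d / 2 ** 0.5

# Hypothetical 10 m sprint times (s) for five athletes, tested twice
test = [1.78, 1.85, 1.92, 1.70, 1.88]
retest = [1.80, 1.83, 1.95, 1.72, 1.86]
print(round(typical_error(test, retest), 4))  # smaller = more reliable
```

A small typical error relative to the scores themselves indicates low intrasubject variability, which is what good test-retest reliability requires.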