Ch12: Principles of Test Selection and Administration Flashcards
reasons for testing
assessment of athletic talent
identification of physical abilities in need of improvement
setting of realistic goals using baseline measurements
evaluation of progress
identification of physical staleness, burnout, and overtraining
test
a procedure for assessing ability in a particular endeavor
field test
a test used to assess ability that is performed away from the laboratory and does not require extensive training or expensive equipment
measurement
the process of collecting test data
evaluation
the process of analyzing test results for the purpose of making decisions
pretest
a test administered before the beginning of training to determine the athlete’s initial basic ability levels
midtest
a test administered one or more times during the training period to assess progress and modify the program as needed to maximize benefit
formative evaluation
periodic reevaluation based on midtests administered during the training, usually at regular intervals
posttest
a test administered after the training period to determine the success of the training program in achieving the training objectives
validity
is the degree to which a test or test item measures what it is supposed to measure
is one of the most important characteristics of testing
the validity of tests of basic athletic abilities or capacities is more difficult to establish than that of tests of physical properties such as height and weight
construct validity
is the ability of a test to represent the underlying construct
refers to overall validity, or the extent to which the test actually measures what it was designed to measure
construct
is the theory developed to organize and explain some aspects of existing knowledge and observation
face validity
is the appearance to the athlete and other casual observers that the test measures what it is purported or supposed to measure
the assessment of face validity is generally informal and nonquantitative
content validity
is the assessment by experts that the testing covers all relevant subtopics or component abilities in appropriate proportions
how can a test developer ensure content validity
listing the ability components to be assessed
making sure the ability components are all represented on the test
verifying that the proportion of the total score attributable to each component ability is proportional to the importance of that component to total performance
criterion-referenced validity
is the extent to which test scores are associated with some other measure of the same ability
is often estimated statistically
what does formative evaluation allow
-evaluation of different training methods
-collection of normative data
-monitoring of the athlete's progress
-adjustment of the training program to the athlete's individual needs
A test battery for soccer players should include, at minimum, tests of:
sprinting speed
agility
coordination
kicking power
what are the types of criterion referenced validity
concurrent validity (convergent validity)
predictive validity
discriminant validity
concurrent validity
is the extent to which test scores are associated with those of other accepted tests that measure the same ability.
convergent validity
-is evidenced by high positive correlation between results of the test being assessed and those of the recognized measure of the construct (the “gold standard”)
-is the type of concurrent validity that field tests should exhibit
predictive validity
-is the extent to which the test score corresponds with future behavior or performance.
-can be measured through comparison of a test score with some measure of success in the sport
-example: the correlation between the overall score on a battery of tests used to assess potential for basketball and a measurement of actual basketball performance
discriminant validity
-is the ability of a test to distinguish between two different constructs and is evidenced by a low correlation between the results of the test and those of tests of a different construct
-It is best if tests in a battery measure relatively independent ability components
Reliability
-is a measure of the degree of consistency or repeatability of a test
-A test must be reliable to be valid
-However, even a reliable test may not be valid, because the test may not measure what it is supposed to measure.
factors that produce measurement error
-Intrasubject (within subjects) variability
-Lack of interrater (between raters) reliability or agreement
-Intrarater (within raters) variability
-Failure of the test itself to provide consistent results
intrasubject variability
is a lack of consistent performance by the person being tested.
intrarater variability
is the lack of consistent scores by a given tester.
interrater reliability
is the degree to which different raters agree in their test results over time or on repeated occasions; it is a measure of consistency
-also referred to as objectivity or interrater agreement
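For a pass/fail test, interrater agreement can be summarized as the proportion of athletes scored identically by two raters. A minimal sketch with invented ratings (the data are illustrative assumptions, not from the chapter):

```python
# Hypothetical example: interrater agreement (objectivity) computed as the
# fraction of athletes given the same score by two raters on a pass/fail test.
rater_a = ["pass", "fail", "pass", "pass", "fail", "pass"]
rater_b = ["pass", "fail", "pass", "fail", "fail", "pass"]  # same athletes, second rater

agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"interrater agreement = {agreement:.0%}")  # 5 of 6 athletes scored identically
```

Low agreement points to the sources of interrater differences listed below (calibration, athlete preparation, test administration).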
sources of interrater differences
variations in the
-calibration of testing devices
-preparation of athletes
-administration of the test
sources of intrarater error
-unintentional leniency
-inadequate training
-inattentiveness
-lack of concentration
-failure to follow standardized procedures for:
device calibration,
athlete preparation,
test administration,
or test scoring
what are the factors of test selection
-metabolic energy system specificity
-biomechanical movement pattern specificity
-athlete experience & training status
-age & sex
-environmental factors
metabolic energy system specificity
-A valid test must emulate the energy requirements of the sport for which ability is being assessed.
-must understand the three basic energy systems (phosphagen, glycolytic, and oxidative) and their interrelationships in order to apply the principle of specificity when choosing or designing valid tests to measure athletic ability
biomechanical movement pattern specificity
-the more similar the test is to an important movement in the sport, the better
-Sports differ in their physical demands
-Positions within a sport differ
what should be considered regarding the athlete's experience and training status
-athlete’s ability to perform the technique
-athlete’s level of cardiorespiratory endurance, strength, speed and power training
-the type of resistance training equipment being used by the athlete
-the type of resistance training exercise being used to test the athlete
age and sex
-can affect the validity and reliability of a test
-example: the 1.5-mile run is a valid aerobic power test for college-aged men and women but not necessarily for male and female preadolescents
-example: the maximum number of chin-ups may serve as a muscular endurance test for men but reflect maximal strength for many women
environmental factors
-High ambient temperature, especially in combination with high humidity, can impair endurance exercise performance, pose health risks, and lower the validity of an aerobic endurance exercise test.
-temperature fluctuations can reduce the ability to compare test results over time
-altitude can also impair performance on aerobic endurance tests, although not on tests of strength and power