Chapter 7: Utility Flashcards
The practical value of testing to improve efficiency
utility
factors affecting utility
psychometric soundness: the higher the criterion-related validity of the test scores, the higher the utility of the test
T or F: valid tests are not always useful tests
true
2 factors affecting utility
- cost
- benefit
One of the most basic elements of utility analysis is the financial cost associated with a test
cost
The benefits of testing should be weighed against the costs of administering, scoring, and interpreting the test
benefits
A family of techniques that entail a cost–benefit analysis designed to yield information relevant to a decision about the usefulness and/or practical value of a tool of assessment
utility analysis
endpoint of a utility analysis
yields an educated decision as to which of several alternative courses of action is optimal (in terms of costs and benefits)
An assumption is made that high scores on one attribute can “balance out” or compensate for low scores on another attribute
compensatory model of selection
The likelihood that a test taker will score within some interval of scores on a criterion measure
expectancy data
Provide an estimate of the percentage of employees hired by the use of a particular test who will be successful at their jobs
Taylor-Russell tables
different combinations of three variables in the Taylor-Russell tables
- test’s validity
- selection ratio used
- base rate
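Note: the Taylor-Russell tables themselves are lookup tables, but the quantity they report (the proportion of those selected who turn out to be successful) can be approximated by assuming the predictor and criterion are bivariate normal with correlation equal to the test's validity. The Python sketch below only illustrates that idea, not the published tables; the function name and the scipy dependency are assumptions for the example.

```python
from scipy.stats import norm, multivariate_normal

def success_ratio(validity, selection_ratio, base_rate):
    """Approximate a Taylor-Russell entry: the expected proportion of
    selected applicants who will be successful on the criterion."""
    x_cut = norm.ppf(1 - selection_ratio)  # predictor cut score (z units)
    y_cut = norm.ppf(1 - base_rate)        # criterion threshold defining "success"
    cov = [[1, validity], [validity, 1]]
    # P(selected AND successful) = P(X > x_cut, Y > y_cut), via the joint CDF
    p_both = (1 - norm.cdf(x_cut) - norm.cdf(y_cut)
              + multivariate_normal.cdf([x_cut, y_cut], mean=[0, 0], cov=cov))
    return p_both / selection_ratio       # divide by P(selected) = selection ratio

# e.g., validity = .45, selection ratio = .20, base rate = .60
print(round(success_ratio(0.45, 0.20, 0.60), 2))
```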
help obtain the difference between the means of the selected and unselected groups to derive an index of what the test (or some other tool of assessment) is adding to already established procedures
Naylor-Shine tables
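Note: the index the Naylor-Shine tables provide (the mean standardized criterion score of the selected group, i.e., the gain over selecting without the test) can be sketched under the same normality assumption: it equals the validity coefficient multiplied by the mean predictor z-score of those above the cut. Names and the scipy dependency below are assumptions for illustration.

```python
from scipy.stats import norm

def mean_criterion_gain(validity, selection_ratio):
    """Mean standardized criterion score of the selected group
    (0 would mean no gain over selecting at random)."""
    z_cut = norm.ppf(1 - selection_ratio)
    # mean predictor z-score of those above the cut (truncated-normal mean)
    mean_z_selected = norm.pdf(z_cut) / selection_ratio
    return validity * mean_z_selected

print(round(mean_criterion_gain(0.40, 0.30), 2))
```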
T or F: For both Taylor-Russell and Naylor-Shine tables, the validity coefficient comes from concurrent validation procedures
true
used to calculate the dollar/peso amount of a utility gain resulting from the use of a particular selection instrument under specified conditions
Brogden-Cronbach-Gleser formula
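One common statement of the Brogden-Cronbach-Gleser utility-gain formula (presentations vary slightly; for example, some base the cost term on the number of applicants tested rather than selected) is:

$$
\text{utility gain} = (N)(T)(r_{xy})(SD_y)(\bar{Z}_m) - (N)(C)
$$

where N = number of applicants selected, T = average tenure of those selected, r_xy = validity coefficient of the test, SD_y = standard deviation of job performance in monetary terms, Z̄_m = mean standardized test score of the selected applicants, and C = cost of testing one applicant.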
practical considerations
- the pool of job applicants
- the complexity of the job
- cut-off score
Some utility models are based on the assumption that there will be a ready supply of viable applicants from which to choose and fill positions
the pool of job applicants
The same utility models are applied to a wide variety of positions, yet the more complex the job, the greater the difference in performance between those who do the job well and those who do it poorly
the complexity of the job
reference point derived as a result of a judgment
cut-off score
types of cut-off score
- relative cut score
- fixed cut score
- multiple cut score
- multiple hurdle
reference point that is set based on norm-related considerations rather than on the relationship of test scores to a criterion
a. relative cut score
b. fixed cut score
c. multiple cut score
d. multiple hurdle
a. relative cut score
minimum level of proficiency required to be included in a particular classification
a. relative cut score
b. fixed cut score
c. multiple cut score
d. multiple hurdle
b. fixed cut score
use of two or more cut scores with reference to one predictor for the purpose of categorizing test takers
a. relative cut score
b. fixed cut score
c. multiple cut score
d. multiple hurdle
c. multiple cut score
multistage decision-making process wherein attainment of the cut score on one test is necessary in order to advance to the next stage of evaluation in the selection process
a. relative cut score
b. fixed cut score
c. multiple cut score
d. multiple hurdle
d. multiple hurdle
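To make the contrast with a multiple cut score approach concrete, the sketch below applies two hurdles in sequence: a candidate must clear the first cut score before the second measure is considered at all. All candidates, scores, and cut scores here are hypothetical.

```python
# Hypothetical two-stage multiple-hurdle selection.
candidates = {
    "A": {"screening": 82, "interview": 70},
    "B": {"screening": 55, "interview": 95},  # fails hurdle 1, so hurdle 2 is never reached
    "C": {"screening": 75, "interview": 88},
}
HURDLE_1 = 60  # cut score on the screening test
HURDLE_2 = 80  # cut score on the interview, applied only to those who cleared hurdle 1

stage_1_pass = [name for name, s in candidates.items() if s["screening"] >= HURDLE_1]
stage_2_pass = [name for name in stage_1_pass
                if candidates[name]["interview"] >= HURDLE_2]

print(stage_1_pass)  # ['A', 'C']
print(stage_2_pass)  # ['C']
```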
methods for setting cut scores under classical test score theory
a. Angoff method
b. known groups method
c. item mapping method
d. bookmark method
a. Angoff method and b. known groups method
methods for setting cut scores under an IRT-based framework
a. Angoff method
b. known groups method
c. item mapping method
d. bookmark method
c. item mapping method and d. bookmark method
methods for setting cut scores
- classical test score theory
- IRT-based method
- method of predictive yield
- discriminant analysis
The judgments of the experts are averaged to yield cut scores for the test
Angoff method
methods of setting cut scores: can be used for personnel selection based on traits, attributes, and abilities
Angoff method
problem with the Angoff method
problems arise if there is disagreement among the experts
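A minimal sketch of the averaging step in the Angoff method: each expert estimates, for every item, the probability that a minimally competent test taker would answer it correctly; the estimates are averaged per item and summed to yield a cut score on the raw-score scale. The ratings below are hypothetical.

```python
# Rows = experts, columns = items; each value is the judged probability that a
# minimally competent test taker would answer the item correctly.
ratings = [
    [0.9, 0.6, 0.4, 0.8],  # expert 1
    [0.8, 0.7, 0.5, 0.7],  # expert 2
    [0.9, 0.5, 0.3, 0.9],  # expert 3
]

n_experts = len(ratings)
item_means = [sum(col) / n_experts for col in zip(*ratings)]  # average across experts
cut_score = sum(item_means)  # expected raw score of a minimally competent test taker

print([round(m, 2) for m in item_means])  # [0.87, 0.6, 0.4, 0.8]
print(round(cut_score, 1))                # 2.7
```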
methods of setting cut scores: Entails collection of data on the predictor of interest from groups known to possess, and not to possess, a trait, attribute, or ability of interest
known groups method
problem with known groups method
there is no standard set of guidelines for choosing contrasting groups
In an IRT framework, each item is associated with a particular level of difficulty; in order to “pass” the test, the test taker must correctly answer items that are deemed to be above some minimum level of difficulty, which is determined by experts and serves as the cut score
IRT-based methods
entails arrangement of items in a histogram, with each column containing items deemed to be of equivalent value
a. item-mapping method
b. bookmark method
a. item-mapping method
trained judges are provided with sample items from each column and are asked whether or not a minimally competent individual would answer those items correctly
a. item-mapping method
b. bookmark method
a. item-mapping method
difficulty level is set as the cut score
a. item-mapping method
b. bookmark method
a. item-mapping method
training of experts with regard to the minimal knowledge, skills and/or abilities test takers should possess in order to pass
a. item-mapping method
b. bookmark method
b. bookmark method
experts are given a book of items arranged in ascending order of difficulty
a. item-mapping method
b. bookmark method
b. bookmark method
experts place a bookmark between the two items deemed to separate test takers who have acquired the minimal knowledge, skills, and/or abilities from those who have not
a. item-mapping method
b. bookmark method
b. bookmark method
methods of setting cut scores: takes into account the number of positions to be filled, projections regarding the likelihood of offer acceptance, and the distribution of applicant scores
method of predictive yield
methods of setting cut scores: A family of statistical techniques used to shed light on the relationship between identified variables (such as scores on a battery of tests) and two (or more) naturally occurring groups (such as persons judged to be successful at a job and persons judged unsuccessful at a job)
discriminant analysis
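As an illustration only, a discriminant analysis of this kind can be run with scikit-learn's LinearDiscriminantAnalysis; the battery scores and success/failure labels below are made up, and scikit-learn is an assumed dependency.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical scores on a two-test battery for eight current employees.
X = np.array([[52, 61], [48, 55], [70, 72], [66, 80],
              [45, 50], [73, 68], [58, 64], [75, 79]])
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])  # 1 = judged successful at the job, 0 = not

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.coef_)                # weights linking battery scores to group membership
print(lda.predict([[60, 66]]))  # predicted group for a new applicant's scores
```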
5 terms used in item analysis
- index of item difficulty
- index of item discrimination
- index of item reliability
- index of item validity
- spiral omnibus format
item-endorsement index; (cognitive tests) a statistic indicating how many test takers responded correctly to an item; (personality tests) a statistic indicating how many test takers responded to an item in a particular direction
a. index of item difficulty
b. index of item discrimination
c. index of item reliability
d. index of item validity
e. spiral omnibus format
a. index of item difficulty
a statistic designed to indicate how adequately a test item discriminates between high and low scorers
a. index of item difficulty
b. index of item discrimination
c. index of item reliability
d. index of item validity
e. spiral omnibus format
b. index of item discrimination
provides an indication of the internal consistency of a test; is equal to the product of the item-score standard deviation (s) and the correlation (r) between the item score and the total test score
a. index of item difficulty
b. index of item discrimination
c. index of item reliability
d. index of item validity
e. spiral omnibus format
c. index of item reliability
statistic indicating the degree to which a test measures what it purports to measure; the higher the item-validity index, the greater the test’s criterion-related validity
a. index of item difficulty
b. index of item discrimination
c. index of item reliability
d. index of item validity
e. spiral omnibus format
d. index of item validity
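The four indices above can all be computed from a matrix of scored (0/1) item responses. The sketch below is illustrative: the response data and criterion ratings are made up, and the top/bottom halves on total score stand in for the upper and lower scoring groups used in the discrimination index.

```python
import numpy as np

# Hypothetical scored responses: rows = 6 test takers, columns = 4 items (1 = correct).
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
])
criterion = np.array([78, 65, 90, 55, 70, 40])  # hypothetical job-performance ratings
total = responses.sum(axis=1)

# index of item difficulty: proportion of test takers answering each item correctly
difficulty = responses.mean(axis=0)

# index of item discrimination: (upper-group correct - lower-group correct) / group size
order = np.argsort(total)
n_group = len(total) // 2
lower, upper = order[:n_group], order[-n_group:]
discrimination = (responses[upper].sum(axis=0) - responses[lower].sum(axis=0)) / n_group

# index of item reliability: item-score SD times item-total correlation
# index of item validity: item-score SD times item-criterion correlation
item_sd = responses.std(axis=0)
reliability = np.array([item_sd[j] * np.corrcoef(responses[:, j], total)[0, 1]
                        for j in range(responses.shape[1])])
validity = np.array([item_sd[j] * np.corrcoef(responses[:, j], criterion)[0, 1]
                     for j in range(responses.shape[1])])

print(difficulty.round(2))
print(discrimination.round(2))
print(reliability.round(2))
print(validity.round(2))
```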