Utility Flashcards
The practical value of testing to improve efficiency
A. Utility
B. Psychometric Soundness
C. Costs
D. Benefits
A. Utility
The higher the criterion-related validity of test scores, the higher the utility of the test
A. Utility
B. Psychometric Soundness
C. Costs
D. Benefits
B. Psychometric Soundness
One of the most basic elements of utility analysis
A. Utility
B. Psychometric Soundness
C. Costs
D. Benefits
C. Costs
Weighed against the costs of administering, scoring, and interpreting the test
A. Utility
B. Psychometric Soundness
C. Costs
D. Benefits
D. Benefits
An assumption is made that high scores on one attribute can “balance out” low scores on another attribute
A. Compensatory Model of Selection
B. Expectancy Data
C. Taylor-Russell Tables
D. Naylor-Shine Tables
E. Brogden-Cronbach-Gleser Formula
A. Compensatory Model of Selection
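The compensatory model amounts to combining predictor scores as a weighted sum, so a surplus on one attribute offsets a deficit on another. A minimal Python sketch (the function name, scores, and weights are illustrative, not from the deck):

```python
def compensatory_score(scores, weights):
    """Weighted sum of predictor scores: a high score on one
    attribute can offset ("compensate for") a low score on another."""
    return sum(s * w for s, w in zip(scores, weights))

# Two applicants with opposite strengths end up with the same composite.
a = compensatory_score([90, 50], [0.5, 0.5])  # strong on attribute 1
b = compensatory_score([50, 90], [0.5, 0.5])  # strong on attribute 2
```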
The likelihood that a test taker will score within some interval of scores on a criterion measure
A. Compensatory Model of Selection
B. Expectancy Data
C. Taylor-Russell Tables
D. Naylor-Shine Tables
E. Brogden-Cronbach-Gleser Formula
B. Expectancy Data
Requires creating a set of norms indicating where a test taker's score will fall
A. Compensatory Model of Selection
B. Expectancy Data
C. Taylor-Russell Tables
D. Naylor-Shine Tables
E. Brogden-Cronbach-Gleser Formula
B. Expectancy Data
Provide an estimate of the percentage of employees hired by the use of a particular test who will be successful at their jobs
A. Compensatory Model of Selection
B. Expectancy Data
C. Taylor-Russell Tables
D. Naylor-Shine Tables
E. Brogden-Cronbach-Gleser Formula
C. Taylor-Russell Tables
Help obtain the difference between the means of the selected and unselected groups to derive an index of what the test (or some other tool of assessment) is adding to already established procedures
A. Compensatory Model of Selection
B. Expectancy Data
C. Taylor-Russell Tables
D. Naylor-Shine Tables
E. Brogden-Cronbach-Gleser Formula
D. Naylor-Shine Tables
The validity coefficient comes from concurrent validation procedures.
A. Compensatory Model of Selection
B. Expectancy Data
C. Taylor-Russell Tables
D. Naylor-Shine Tables
E. Brogden-Cronbach-Gleser Formula
D. Naylor-Shine Tables
Many other variables may play a role in selection decisions, including applicants’ minority status, general physical or mental health, or drug use.
A. Compensatory Model of Selection
B. Expectancy Data
C. Taylor-Russell Tables
D. Naylor-Shine Tables
E. Brogden-Cronbach-Gleser Formula
D. Naylor-Shine Tables
Used to calculate the dollar/peso amount of a utility gain resulting from the use of a particular selection instrument under specified conditions
A. Compensatory Model of Selection
B. Expectancy Data
C. Taylor-Russell Tables
D. Naylor-Shine Tables
E. Brogden-Cronbach-Gleser Formula
E. Brogden-Cronbach-Gleser Formula
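As commonly presented, the formula multiplies the number selected, average tenure, the validity coefficient, the money-valued standard deviation of job performance, and the mean standardized test score of those selected, then subtracts testing costs. A worked sketch with made-up numbers (all values are hypothetical):

```python
def bcg_utility_gain(n_selected, tenure_years, validity, sd_y,
                     mean_z_selected, n_tested, cost_per_applicant):
    """Brogden-Cronbach-Gleser utility gain, in monetary units.

    Benefit: n_selected * tenure_years * validity * sd_y * mean_z_selected
    Cost:    n_tested * cost_per_applicant
    """
    benefit = n_selected * tenure_years * validity * sd_y * mean_z_selected
    cost = n_tested * cost_per_applicant
    return benefit - cost

# Hypothetical: 10 hires staying 2 years, validity .40, SD of performance
# worth 10,000 per year, selected group averages z = 1.0 on the test;
# 50 applicants tested at a cost of 25 each.
gain = bcg_utility_gain(10, 2, 0.40, 10_000, 1.0, 50, 25)
```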
Some utility models are based on the assumption that there will be a ready supply of viable applicants from which to choose and fill positions.
A. The Pool of Job Applicants
B. The Complexity of the Job
C. Cut-Off Score
A. The Pool of Job Applicants
The same kinds of utility models are used for a variety of positions, yet the more complex the job, the greater the performance difference between those who do it well and those who do it poorly
A. The Pool of Job Applicants
B. The Complexity of the Job
C. Cut-Off Score
B. The Complexity of the Job
Reference point derived as a result of a judgment
A. The Pool of Job Applicants
B. The Complexity of the Job
C. Cut-Off Score
C. Cut-Off Score
Used to divide a set of data into two or more classifications as basis for some actions to be taken or some inferences to be made
A. The Pool of Job Applicants
B. The Complexity of the Job
C. Cut-Off Score
C. Cut-Off Score
Reference point that is set based on norm-related considerations rather than on the relationship of test scores to a criterion
A. Relative cut score
B. Fixed cut score
C. Multiple cut score
D. Multiple Hurdle
A. Relative cut score
Minimum level of proficiency required to be included in a particular classification
A. Relative cut score
B. Fixed cut score
C. Multiple cut score
D. Multiple Hurdle
B. Fixed cut score
Use of two or more cut scores with reference to one predictor for purpose of categorizing test takers.
A. Relative cut score
B. Fixed cut score
C. Multiple cut score
D. Multiple Hurdle
C. Multiple cut score
Multistage decision-making process wherein attaining the cut score on one test is necessary in order to advance to the next stage of evaluation in a selection process
A. Relative cut score
B. Fixed cut score
C. Multiple cut score
D. Multiple Hurdle
D. Multiple Hurdle
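The multiple-hurdle logic can be sketched as a sequential filter: an applicant who fails the cut score at any stage does not advance. A minimal illustration (the stage scores and cut scores are hypothetical):

```python
def multiple_hurdle(scores, cut_scores):
    """Return (passed_all, last_stage_reached). An applicant who scores
    below the cut at any stage is screened out at that stage."""
    for stage, (score, cut) in enumerate(zip(scores, cut_scores), start=1):
        if score < cut:
            return False, stage
    return True, len(cut_scores)

out_early = multiple_hurdle([70, 85, 60], [65, 80, 65])  # fails stage 3
passed = multiple_hurdle([70, 85, 70], [65, 80, 65])     # clears all stages
```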
Classical Test Score Theory: The judgments of the experts are averaged to yield cut scores for the test.
A. Angoff Method
B. Known Groups Method
C. Item-Mapping Method
D. Bookmark Method
E. IRT-Based Method
A. Angoff Method
Classical Test Score Theory: Can be used for personnel selection based on traits, attributes, and abilities.
A. Angoff Method
B. Known Groups Method
C. Item-Mapping Method
D. Bookmark Method
E. IRT-Based Method
A. Angoff Method
Classical Test Score Theory: Problems arise if there is disagreement between experts
A. Angoff Method
B. Known Groups Method
C. Item-Mapping Method
D. Bookmark Method
E. IRT-Based Method
A. Angoff Method
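In the Angoff method, each expert estimates the probability that a minimally competent test taker would answer each item correctly; summing those estimates per judge and averaging across judges yields the cut score. A small sketch with hypothetical judgments:

```python
def angoff_cut_score(judgments):
    """judgments[j][i]: judge j's estimated probability that a minimally
    competent test taker answers item i correctly. The per-judge sums
    are averaged to yield the cut score."""
    per_judge_totals = [sum(item_probs) for item_probs in judgments]
    return sum(per_judge_totals) / len(per_judge_totals)

# Two judges, three items (probabilities are illustrative).
cut = angoff_cut_score([[0.8, 0.6, 0.9],
                        [0.7, 0.5, 0.8]])
```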
Entails collection of data on the predictor of interest from groups known to possess, and not to possess, a trait, attribute, or ability of interest.
A. Angoff Method
B. Known Groups Method
C. Item-Mapping Method
D. Bookmark Method
E. IRT-Based Method
B. Known Groups Method
Based on the analysis of data, a cut score is set on the test that best discriminates the groups’ test performance.
A. Angoff Method
B. Known Groups Method
C. Item-Mapping Method
D. Bookmark Method
E. IRT-Based Method
B. Known Groups Method
There is no standard set of guidelines for choosing contrasting groups
A. Angoff Method
B. Known Groups Method
C. Item-Mapping Method
D. Bookmark Method
E. IRT-Based Method
B. Known Groups Method
Each item is associated with a particular level of difficulty.
A. Angoff Method
B. Known Groups Method
C. Item-Mapping Method
D. Bookmark Method
E. IRT-Based Method
E. IRT-Based Method
In order to “pass” the test, the test taker must answer items that are deemed to be above some minimum level of difficulty, which is determined by experts and serves as the cut score
A. Angoff Method
B. Known Groups Method
C. Item-Mapping Method
D. Bookmark Method
E. IRT-Based Method
E. IRT-Based Method
Entails arrangement of items in a histogram with each column containing items deemed to be of equivalent value
A. Angoff Method
B. Known Groups Method
C. Item-Mapping Method
D. Bookmark Method
E. IRT-Based Method
C. Item-Mapping Method
Trained judges are provided with sample items from each column and are asked whether or not a minimally competent individual would answer those items correctly
A. Angoff Method
B. Known Groups Method
C. Item-Mapping Method
D. Bookmark Method
E. IRT-Based Method
C. Item-Mapping Method
Difficulty level is set as the cut score.
A. Angoff Method
B. Known Groups Method
C. Item-Mapping Method
D. Bookmark Method
E. IRT-Based Method
C. Item-Mapping Method
Training of experts with regard to the minimal knowledge, skills, and/or abilities test takers should possess in order to pass
A. Angoff Method
B. Known Groups Method
C. Item-Mapping Method
D. Bookmark Method
E. IRT-Based Method
D. Bookmark Method
Experts are given a book of items arranged in ascending order of difficulty and place a bookmark between the two items deemed to separate test takers who have, and have not, acquired the minimal knowledge, skills, and/or abilities
A. Angoff Method
B. Known Groups Method
C. Item-Mapping Method
D. Bookmark Method
E. IRT-Based Method
D. Bookmark Method
R. L. Thorndike (1949) proposed a norm-referenced method called ______
A. Method of Predictive Yield
B. Discriminant Analysis
A. Method of Predictive Yield
Took into account the number of positions to be filled, projections regarding the likelihood of offer acceptance, and the distribution of applicant scores
A. Method of Predictive Yield
B. Discriminant Analysis
A. Method of Predictive Yield
A family of statistical techniques used to shed light on the relationship between identified variables (such as scores on a battery of tests) and two (or more) naturally occurring groups (such as persons judged to be successful at a job and persons judged unsuccessful at a job).
A. Method of Predictive Yield
B. Discriminant Analysis
B. Discriminant Analysis
item-endorsement index
A. Index of Item Difficulty
B. Index of Item Discrimination
C. Index of Item Reliability
D. Index of Item Validity
A. Index of Item Difficulty
in cognitive tests, a statistic indicating how many test takers responded correctly to an item
A. Index of Item Difficulty
B. Index of Item Discrimination
C. Index of Item Reliability
D. Index of Item Validity
A. Index of Item Difficulty
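For a cognitive test this statistic is simply p, the proportion of test takers who answered the item correctly. A one-line sketch (the responses are hypothetical; 1 = correct, 0 = incorrect):

```python
def item_difficulty(responses):
    """p: proportion of test takers answering the item correctly."""
    return sum(responses) / len(responses)

p = item_difficulty([1, 1, 0, 1, 0])  # 3 of 5 correct
```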
in personality tests, a statistic indicating how many test takers responded to an item in a particular direction.
A. Index of Item Difficulty
B. Index of Item Discrimination
C. Index of Item Reliability
D. Index of Item Validity
A. Index of Item Difficulty
a statistic designed to indicate how adequately a test item discriminates between high and low scorers
A. Index of Item Difficulty
B. Index of Item Discrimination
C. Index of Item Reliability
D. Index of Item Validity
B. Index of Item Discrimination
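One common form of this index is d = (U − L) / n, comparing correct responses in equal-sized high- and low-scoring groups. A minimal sketch (the group counts are hypothetical):

```python
def item_discrimination(upper_correct, lower_correct, group_size):
    """d = (U - L) / n: difference between the number of high scorers
    and low scorers answering the item correctly, per group size."""
    return (upper_correct - lower_correct) / group_size

# 18 of the top 20 scorers got the item right, but only 8 of the bottom 20.
d = item_discrimination(18, 8, 20)
```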
provides an indication of the internal consistency of a test
A. Index of Item Difficulty
B. Index of Item Discrimination
C. Index of Item Reliability
D. Index of Item Validity
C. Index of Item Reliability
is equal to the product of the item-score standard deviation (s) and the correlation (r) between the item score and the total test score
A. Index of Item Difficulty
B. Index of Item Discrimination
C. Index of Item Reliability
D. Index of Item Validity
C. Index of Item Reliability
statistic indicating the degree to which a test measures what it purports to measure
A. Index of Item Difficulty
B. Index of Item Discrimination
C. Index of Item Reliability
D. Index of Item Validity
D. Index of Item Validity