CHAPTER 6 Flashcards
any technique used to evaluate someone
TEST (IN I/O PSYCH)
The name of a book containing information about the reliability and validity of various psychological tests
MENTAL MEASUREMENTS YEARBOOK (MMY)
CHARACTERISTICS OF EFFECTIVE
SELECTION TECHNIQUES
- RELIABILITY
- VALIDITY
the extent to which a score from a selection measure is stable and free from error
RELIABILITY
each of several people takes the same test twice; scores from the first administration of the test are correlated with scores from the second to determine whether they are similar
TEST-RETEST RELIABILITY
the consistency of test scores across time, which is what test-retest reliability measures; Hood (2001) reports typical test-retest reliability coefficients for tests used in industry
TEMPORAL STABILITY
two forms of the same test are constructed; the scores on the two forms are then correlated to determine whether they are similar; if they are, the test is said to have FORM STABILITY
ALTERNATE-FORMS RELIABILITY
varying the order in which the two forms are taken; designed to eliminate any effects that taking one form of the test first may have on scores on the second form
COUNTERBALANCING
the extent to which similar items are answered in similar ways; measures ITEM STABILITY; the longer the test, the higher its internal consistency
INTERNAL CONSISTENCY
the extent to which all of the items measure the same thing
ITEM HOMOGENEITY
the easiest internal-consistency method to use: the items on a test are split into two groups (usually odd- and even-numbered items), and scores on the two halves are correlated
SPLIT-HALF METHOD
a formula used to adjust the split-half correlation, because splitting a test reduces the number of items and shorter tests are less reliable
SPEARMAN-BROWN PROPHECY FORMULA
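The correction is a one-line calculation; a minimal Python sketch (the function name and sample value are illustrative):

```python
def spearman_brown(r, k=2):
    """Project the reliability of a test k times as long as the one
    that produced reliability r; k=2 corrects a split-half correlation."""
    return (k * r) / (1 + (k - 1) * r)

# a split-half correlation of .60 corresponds to a full-test reliability of .75
full_test_r = spearman_brown(0.60)
```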
used for tests containing DICHOTOMOUS ITEMS (e.g., yes/no)
K-R 20
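K-R 20 combines the item proportions (p, and q = 1 − p) with the variance of total scores. A minimal sketch with made-up dichotomous data:

```python
def kr20(responses):
    """K-R 20 reliability; responses is one list of 0/1 item scores per person."""
    n, k = len(responses), len(responses[0])
    totals = [sum(person) for person in responses]
    mean = sum(totals) / n
    var_total = sum((t - mean) ** 2 for t in totals) / n  # population variance
    sum_pq = 0.0
    for i in range(k):
        p = sum(person[i] for person in responses) / n    # proportion passing item i
        sum_pq += p * (1 - p)
    return (k / (k - 1)) * (1 - sum_pq / var_total)
```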
can be used not only for dichotomous items but also for items measured on interval and ratio scales
- the median internal reliability coefficient found in research
COEFFICIENT ALPHA
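Coefficient alpha generalizes K-R 20 by replacing the sum of pq with the sum of item variances, which is why it also works for rating-scale items. A sketch with made-up 1-to-5 ratings:

```python
def cronbach_alpha(scores):
    """Coefficient alpha; scores is one list of numeric item scores per person."""
    n, k = len(scores), len(scores[0])

    def pvar(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(person) for person in scores]
    item_vars = [pvar([person[i] for person in scores]) for i in range(k)]
    return (k / (k - 1)) * (1 - sum(item_vars) / pvar(totals))
```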
an issue in projective or subjective tests for which there is no one correct answer; even tests scored with the use of keys suffer from scorer mistakes
SCORER RELIABILITY
whether two interviewers (raters) will give an applicant similar ratings
INTERRATER RELIABILITY
the degree to which inferences from scores on tests or assessments are justified by the evidence
VALIDITY
the extent to which test items sample the content that they are supposed to measure
CONTENT VALIDITY
refers to the extent to which a test score is related to some measure of job performance, called the CRITERION
CRITERION VALIDITY
in a concurrent design, a test is given to a group of employees who are already on the job; their scores on the test are then correlated with a measure of their current performance
CONCURRENT VALIDITY
a test is administered to a group of job applicants who are going to be hired; the test scores are then compared with a future measure of job performance
PREDICTIVE VALIDITY
The characteristic of a test that significantly predicts a criterion for one class of people but not for another.
SINGLE-GROUP VALIDITY
the extent to which a test found valid for a job in one location is valid for the same job in a different location
VALIDITY GENERALIZATION (VG)
the most theoretical of the validity types; defined as the extent to which a test actually measures the construct that it purports to measure
- CONSTRUCT VALIDITY
a test is given to two groups of people who are "known" to be different on the trait in question
KNOWN-GROUP VALIDITY
the extent to which a test appears to be job related
FACE VALIDITY
The correlation between scores on a selection method (Ex: interview, cognitive ability test) and a measure of job performance (Ex: supervisor rating, absenteeism)
VALIDITY COEFFICIENT
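A validity coefficient is simply a Pearson correlation between predictor scores and criterion scores; a self-contained sketch with made-up numbers:

```python
def validity_coefficient(test_scores, criterion_scores):
    """Pearson r between a selection measure and a job-performance criterion."""
    n = len(test_scores)
    mx = sum(test_scores) / n
    my = sum(criterion_scores) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(test_scores, criterion_scores))
    sx = sum((x - mx) ** 2 for x in test_scores) ** 0.5
    sy = sum((y - my) ** 2 for y in criterion_scores) ** 0.5
    return cov / (sx * sy)
```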
statements so general that they can be true of almost everyone
BARNUM STATEMENT
the computer adapts the next question to be asked on the basis of how the test-taker responded to the previous question or questions
COMPUTER-ADAPTIVE TESTING (CAT)
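The core loop of CAT can be sketched with a toy rule; real systems use item response theory, and the step size and item pool below are made up:

```python
def pick_next_item(pool, ability):
    """Choose the remaining item whose difficulty is closest to the
    current ability estimate."""
    return min(pool, key=lambda difficulty: abs(difficulty - ability))

def update_ability(ability, was_correct, step=0.5):
    """Nudge the ability estimate up after a correct answer, down after a miss."""
    return ability + step if was_correct else ability - step
```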
a series of tables based on the selection ratio, base rate, and test validity, designed to estimate the percentage of future employees who will be successful on the job if an organization uses a particular test
TAYLOR-RUSSELL TABLES
the percentage of applicants an organization must hire (number of openings divided by number of applicants)
SELECTION RATIO
a measure of current performance: the percentage of employees currently on the job who are considered successful
BASE RATE
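The published tables are lookups, but the quantity they estimate can be approximated by simulation. A Monte Carlo sketch (all parameter values and the function name are illustrative):

```python
import math
import random

def simulate_success_rate(validity, selection_ratio, base_rate,
                          n=20_000, seed=42):
    """Estimate what Taylor-Russell tables report: the proportion of
    hired applicants who turn out to be successful on the job."""
    rng = random.Random(seed)
    applicants = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        test = z1
        # performance correlates with the test score at r = validity
        perf = validity * z1 + math.sqrt(1 - validity ** 2) * z2
        applicants.append((test, perf))
    # hire the top selection_ratio proportion by test score
    applicants.sort(key=lambda a: a[0], reverse=True)
    hired = applicants[: int(n * selection_ratio)]
    # "successful" = performance above the cutoff implied by the base rate
    perf_sorted = sorted(p for _, p in applicants)
    cutoff = perf_sorted[int(n * (1 - base_rate))]
    return sum(p > cutoff for _, p in hired) / len(hired)
```

With zero validity, the success rate among hires simply matches the base rate; as validity rises, so does the proportion of successful hires.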
easier to compute but less accurate than the Taylor-Russell tables; the only information needed to determine the proportion of correct decisions is employee test scores and scores on the criterion
PROPORTION OF CORRECT DECISIONS
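With a test cutoff and a criterion cutoff, each employee falls into one of four quadrants; correct decisions are those where the test and the criterion agree. A minimal sketch (cutoff values are made up):

```python
def proportion_correct(test_scores, criterion_scores, test_cut, criterion_cut):
    """Fraction of employees for whom passing the test and succeeding on
    the job (or failing both) line up."""
    correct = sum(
        (t >= test_cut) == (c >= criterion_cut)
        for t, c in zip(test_scores, criterion_scores)
    )
    return correct / len(test_scores)
```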
the probability that a particular applicant will be successful; to use these tables, three pieces of information are needed: the validity coefficient and the base rate (found in the same way as for the Taylor-Russell tables), plus the applicant's test score, specifically whether the person scored in the top 20%, the next 20%, the middle 20%, the next-lowest 20%, or the bottom 20%
LAWSHE TABLES
another way to determine the value of a test in a given situation: computing the amount of money an organization would save if it used the test to select employees
BROGDEN-CRONBACH-GLESER UTILITY FORMULA
to estimate the monetary savings to an organization
UTILITY FORMULA
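The Brogden-Cronbach-Gleser formula multiplies the number hired, average tenure, test validity, the dollar value of one standard deviation of performance, and the mean standardized test score of those hired, then subtracts total testing cost. A sketch with illustrative numbers:

```python
def utility_savings(n_hired, avg_tenure_years, validity, sd_perf_dollars,
                    mean_z_hired, cost_per_applicant, n_applicants):
    """Estimated savings = (n)(t)(r)(SDy)(mean z of hires) - total testing cost."""
    gain = n_hired * avg_tenure_years * validity * sd_perf_dollars * mean_z_hired
    return gain - cost_per_applicant * n_applicants
```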
a test is valid for two groups but more valid for one than for the other
- DIFFERENTIAL VALIDITY
if more than one criterion-valid test is used, the scores on the tests must be combined; each test score is weighted according to how well it predicts the criterion
MULTIPLE REGRESSION
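In practice the combination is a weighted sum produced by a regression equation; the weights below are hypothetical, standing in for values that would be estimated from a validation study:

```python
# hypothetical weights from a (fictional) validation study
INTERCEPT = 1.2
W_COGNITIVE = 0.5   # the cognitive ability test predicts the criterion best
W_INTERVIEW = 0.3   # the structured interview adds a smaller increment

def predicted_performance(cognitive, interview):
    """Regression-weighted composite of two criterion-valid predictors."""
    return INTERCEPT + W_COGNITIVE * cognitive + W_INTERVIEW * interview
```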
the condition in which a criterion score is affected by things other than those under the control of the employee
CONTAMINATION
- Who will perform the best?
- selecting applicants in straight rank order of their test scores
- starting with the highest score and moving down until all openings have been filled
TOP-DOWN SELECTION
the assumption is that, if multiple test scores are used, a low score on one test can be compensated for by a high score on another
COMPENSATORY APPROACH
- the names of the top three/five applicants are given to a hiring authority who can then select any of the three/five
- Advantages: Possibly higher quality of selected applicants and objective decision making
- Disadvantages: Less flexibility in decision making, ignores measurement error, and assumes test score accounts for all the variance in performance
THE “RULE OF THREE/FIVE”
- Who will perform at an acceptable level?
PASSING SCORE: a point in a distribution of scores that distinguishes acceptable from unacceptable performance.
THE PASSING SCORES APPROACH
A method of hiring in which an applicant must score higher than a particular score to be considered for employment.
* Advantages: increased flexibility in decision making and less adverse impact against protected groups
* Disadvantages: lowered utility and can be difficult to set
- CUTOFF APPROACH
- The width of the band is based upon the standard error of the test and other statistical criteria.
- A compromise between top-down hiring and passing scores
- Attempts to hire the top test scorers while still allowing some flexibility for affirmative action
- Can help to achieve certain hiring goals such as improving diversity.
- Advantages of banding: increase workforce diversity and perceptions of fairness, and allows you to consider secondary criteria relevant to the job
BANDING
how many points apart two applicants' test scores must be before we say they are significantly different; based on the standard error of measurement: SE = SD × √(1 − reliability)
STANDARD ERROR
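Under one common convention in the banding literature (a 95% band built from the standard error of the difference between two scores), the band width can be computed as follows; the sample values are made up:

```python
import math

def band_width(sd, reliability, z=1.96):
    """Score range within which two applicants are treated as equivalent."""
    sem = sd * math.sqrt(1 - reliability)   # standard error of measurement
    sed = sem * math.sqrt(2)                # SE of the difference of two scores
    return z * sed

# e.g., SD = 10 and reliability = .84 give SEM = 4 and a band about 11 points wide
```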
– the means through which managers ensure that employees' activities and outputs are congruent with the organization's goals
PERFORMANCE MANAGEMENT
– the process through which an organization gets information on how well an employee is doing his/her job
PERFORMANCE APPRAISAL
– the process of providing employees information regarding their performance effectiveness
PERFORMANCE FEEDBACK