10- Test Construction Flashcards

1
Q

An eigenvalue is the:
Select one:
A. proportion of variance attributable to two or more factors
B. amount of variance in all the tests accounted for by a factor
C. effect of one independent variable, without consideration of the effects of other independent variables.
D. strength of the relationship between factors

A

Correct Answer is: B
In a factor analysis or principal components analysis, the explained variance, or "eigenvalue," indicates the amount of variance in all the tests accounted for by a factor.
proportion of variance attributable to two or more factors

This choice describes “communality” which is another outcome of a factor analysis.

effect of one independent variable, without consideration of the effects of other independent variables.

This is the definition of a “main effect”.

Additional Information: Explained Variance (or Eigenvalues)
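To make the arithmetic concrete, here is a minimal worked sketch (a two-test case with an assumed correlation, not data from any real instrument): for two standardized tests correlated at r, the eigenvalues of the 2 x 2 correlation matrix are 1 + r and 1 - r, and they sum to the total standardized variance.

```python
# For two standardized tests with correlation r, the 2 x 2 correlation
# matrix has eigenvalues 1 + r and 1 - r (illustrative value of r).
r = 0.6
eigenvalues = [1 + r, 1 - r]                 # [1.6, 0.4]

# Each eigenvalue is the amount of variance in BOTH tests accounted for
# by one factor; together they exhaust the total standardized variance (2).
total = sum(eigenvalues)
proportion_first = eigenvalues[0] / total
print(round(total, 6), round(proportion_first, 6))   # 2.0 0.8
```

Here the first factor alone accounts for 80% of the variance in the two tests, which is what a dominant eigenvalue conveys.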

2
Q
In a study examining the effects of relaxation training on test-taking anxiety, a pre-test measure of anxiety is administered to a group of self-identified highly anxious test takers resulting in a split-half reliability coefficient of .75. If the pre-test is administered to a randomly selected group of the same number of people the split-half reliability coefficient will most likely be:
Select one:
A. Greater than .75
B. Less than .75
C. Equal to .75
D. impossible to predict
A

Correct Answer is: A
A general rule for all correlation coefficients, including reliability coefficients, is that the more heterogeneous the group, i.e., the wider the variability, the higher the coefficient will be. Since a randomly selected group would be more heterogeneous than a group of highly anxious test-takers, the randomly selected group would most likely have a higher reliability coefficient.

3
Q

When looking at an item characteristic curve (ICC), which of the following provides information about how well the item discriminates between high and low achievers?
Select one:
A. the Y-intercept
B. the slope of the curve
C. the position of the curve (left versus right)
D. the position of the curve (top versus bottom)

A

Correct Answer is: B
An item characteristic curve provides up to three pieces of information about a test item: its difficulty, shown by the position of the curve (left versus right); its ability to discriminate between high and low scorers, shown by the slope (the correct answer); and the probability of answering the item correctly just by guessing, shown by the Y-intercept.
Additional Information: Item Response Theory and Item Response Curve
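As an illustration, these three pieces of information correspond to the three parameters of the common three-parameter logistic (3PL) item response function; the parameter values below are made up for demonstration.

```python
import math

def icc(theta, a, b, c):
    """Three-parameter logistic item characteristic curve.

    a = discrimination (slope), b = difficulty (left/right position),
    c = pseudo-guessing (the curve's lower asymptote, seen near the Y-intercept).
    """
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# Illustrative parameters, not taken from any real item.
a, b, c = 1.5, 0.0, 0.2

# A steeper slope at theta = b means the item separates examinees just
# below and just above its difficulty more sharply.
low, high = icc(-0.5, a, b, c), icc(0.5, a, b, c)
print(high > low)                    # True: probability rises with ability
print(round(icc(-10, a, b, c), 3))   # ~0.2: even very low scorers can guess
```

The steeper the curve at its inflection point, the better the item discriminates, which is why the slope (not the position) answers the question.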

4
Q

Adding more easy to moderately easy items to a difficult test will:
Select one:
A. increase the test’s floor.
B. decrease the test’s floor.
C. alter the test’s floor only if there is an equal number of difficult to moderately difficult items.
D. have no effect on the test’s floor.

A

Correct Answer is: B
As you may have guessed, "floor" refers to the lowest scores on a test (ceiling refers to the highest scores). Adding more easy to moderately easy items would lower, or decrease, the test's floor, allowing for better discrimination among people at the low end.
Additional Information: Ceiling and Floor Effects

5
Q

Adding more items to a test would most likely:
Select one:
A. increase the test’s reliability
B. decrease the test’s validity
C. have no effect on the test’s reliability or validity
D. preclude the use of the Spearman-Brown prophecy formula

A

Correct Answer is: A
Lengthening a test, that is, adding more test items, generally results in an increase in the test’s reliability. For example, a test consisting of only 3 questions would probably be more reliable if we added 10 more items.
The Spearman-Brown formula is specifically used to estimate the reliability of a test if it were lengthened or shortened.
Additional Information: Factors Affecting Reliability
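The Spearman-Brown prophecy formula mentioned above can be sketched directly; the reliability and length factors below are illustrative.

```python
def spearman_brown(r_original, length_factor):
    """Projected reliability when a test is lengthened (or shortened) by
    length_factor using comparable items (Spearman-Brown prophecy formula)."""
    return (length_factor * r_original) / (1 + (length_factor - 1) * r_original)

# Doubling a test with reliability .60 raises the projected reliability:
print(round(spearman_brown(0.60, 2), 3))    # 0.75
# Halving it instead lowers it:
print(round(spearman_brown(0.60, 0.5), 3))  # 0.429
```

This shows both directions of the principle in the answer: more comparable items generally means higher reliability, and fewer means lower.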

6
Q

The appropriate kind of validity for a test depends on the test’s purpose. For example, for the psychology licensing exam:
Select one:
A. construct validity is most important because it measures the hypothetical trait of “competence.”
B. content validity is most important because it measures knowledge of various content domains in the field of psychology.
C. criterion-related validity is most important because it predicts which psychologists will and will not do well as professionals.
D. no evidence of validity is required.

A

Correct Answer is: B
The psychology licensing exam is considered a measure of knowledge of various areas in the field of psychology and, therefore, is essentially an achievement-type test. Measures of content knowledge should have adequate content validity.
Additional Information: Content Validity

7
Q
A test developer creates a new test of anxiety sensitivity and correlates it with an existing measure of anxiety sensitivity. The test developer is operating under the assumption that
Select one:
A. the new test is valid.
B. the existing test is valid.
C. the new test is reliable.
D. the existing test is reliable.
A

Correct Answer is: B
The question is describing an example of obtaining evidence for a test’s construct validity. Construct validity refers to the degree to which a test measures a theoretical construct that it purports to measure; anxiety sensitivity is an example of a theoretical construct measured in psychological tests. A high correlation between a new test and an existing test that measures the same construct offers evidence of convergent validity, which is a type of construct validity. Another type is divergent validity, which is the degree to which a test has a low correlation with another test that measures a different construct. Correlating scores on a new test with an existing test to assess the new test’s convergent validity requires an assumption that the existing test is valid; i.e., that it actually does measure the construct.
Additional Information: Construct Validity

8
Q

Rotation is used in factor analysis to:
Select one:
A. get an easier pattern of factor loadings to interpret.
B. increase the magnitude of the communalities.
C. reduce the magnitude of the communalities.
D. reduce the effects of measurement error on the factor loadings.

A

Correct Answer is: A
Factors are rotated to obtain a pattern that’s easier to interpret since the pattern of factor loadings in the initial factor matrix is often difficult to interpret.
Rotation alters the magnitude of the factor loadings but not the magnitude of the communalities (“increase the magnitude of the communalities” and “reduce the magnitude of the communalities”) and does not reduce the effects of measurement error (“reduce the effects of measurement error on the factor loadings”).
Additional Information: Interpreting and Naming the Factors

9
Q
When seeking results that would be sensitive to the ____________ of the test-taker, test-retest reliability would need to be the highest.
Select one:
A. maturity
B. mood
C. aptitude
D. gender
A

Correct Answer is: D
Test-retest reliability is appropriate for determining the reliability of tests designed to measure attributes that are not affected by repeated measurement and that are relatively stable over time. The characteristics or traits represented in the incorrect choices ("maturity," "mood," and "aptitude") fluctuate and would negatively affect the test-retest results.
Additional Information: Test-Retest Reliability

10
Q
Researchers are interested in detecting differential item functioning (DIF). Which method would not be used?
Select one:
A. SIBTEST
B. Mantel-Haenszel
C. Lord's chi-square
D. cluster analysis
A

Correct Answer is: D
In the context of item response theory, differential item functioning (DIF), or item bias analysis, refers to a difference in the probability that individuals from different subpopulations who are equal on the latent or underlying attribute measured by the test will make a correct or positive response to an item. The SIBTEST (simultaneous item bias test), Mantel-Haenszel, and Lord's chi-square are statistical techniques used to identify DIF. Cluster analysis is a statistical technique used to develop a classification system or taxonomy; it wouldn't detect item bias or differences.
Additional Information: Item Response Theory and Item Response Curve

11
Q
A measure of relative strength of a score within an individual is referred to as a(n):
Select one:
A. ipsative score
B. normative score
C. standard score
D. independent variable
A

Correct Answer is: A
Ipsative scores report an examinee’s scores using the examinee him or herself as a frame of reference. They indicate the relative strength of a score within an individual but, unlike normative measures, do not provide the absolute strength of a domain relative to a normative group. Examples of ipsative scores are the results of a forced choice measure.
Additional Information: Ipsative vs. Normative Measures

12
Q
Discriminant and convergent validity are classified as examples of:
Select one:
A. construct validity.
B. content validity
C. face validity.
D. concurrent validity
A

Correct Answer is: A
There are many ways to assess the validity of a test. If we correlate our test with another test that is supposed to measure the same thing, we’ll expect the two to have a high correlation; if they do, the tests will be said to have convergent validity. If our test has a low correlation with other tests measuring something our test is not supposed to measure, it will be said to have discriminant (or divergent) validity. Convergent and divergent validity are both types of construct validity.
Additional Information: Construct Validity

13
Q

A negative item discrimination (D) indicates:
Select one:
A. an index equal to zero.
B. more high-achieving examinees than low-achieving examinees answered the item correctly.
C. an item was answered correctly by the same number of low- and high-achieving students.
D. more low-achieving examinees answered the item correctly than high-achieving.

A

Correct Answer is: D
The discrimination index, D, ranges from +1.0 to -1.0 and is the number of people in the upper (high-scoring) group who answered the item correctly minus the number of people in the lower-scoring group who answered the item correctly, divided by the number of people in the larger of the two groups. An item will have a discrimination index of zero if everyone gets it correct or everyone gets it incorrect. A negative discrimination index indicates that the item was answered correctly by more low-achieving students than by high-achieving students. In other words, a poor student may make a guess, select that response, and come up with the correct answer without any real understanding of what is being assessed, whereas good students (like EPPP candidates) may be suspicious of a question that looks too easy, read too much into it, and end up being less successful than those who guess.
more high-achieving examinees than low-achieving examinees answered the item correctly.

A positive item discrimination index indicates that the item was answered correctly by more high-achieving students than by low-achieving students.
Additional Information: Item Discrimination
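Under the definition above, D is straightforward to compute; the counts here are hypothetical.

```python
def discrimination_index(upper_correct, lower_correct, group_size):
    """D = (correct answers in upper group - correct answers in lower group)
    divided by the size of the larger group; ranges from -1.0 to +1.0."""
    return (upper_correct - lower_correct) / group_size

# Hypothetical item given to extreme groups of 27 examinees each:
print(discrimination_index(24, 6, 27) > 0)   # True: item favors high scorers
print(discrimination_index(5, 20, 27) < 0)   # True: negative D flags a flawed item
print(discrimination_index(27, 27, 27))      # 0.0: everyone correct, no discrimination
```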

14
Q
Likert scales are most useful for:
Select one:
A. dichotomizing quantitative data
B. quantifying objective data
C. quantifying subjective data
D. ordering categorical data
A

Correct Answer is: C
Attitudes are subjective phenomena. Likert scales indicate the degree to which a person agrees or disagrees with an attitudinal statement. Using a Likert scale, attitudes are quantified, i.e., represented in terms of ordinal scores.
Additional Information: Scales of Measurement

15
Q
On the MMPI-2, what percentage of the general population the test is intended for can be expected to obtain a T-score between 40 and 60 on the depression scale?
Select one:
A. 50
B. 68
C. 95
D. 99
A

Correct Answer is: B
A T-score is a standardized score. Standardization involves converting raw scores into scores that indicate how many standard deviations the values are above or below the mean. A T-score is a standard score with a mean of 50 and a standard deviation of 10; results of personality inventories such as the MMPI-2 are commonly reported in terms of T-scores. Other standard scores include z-scores, with a mean of 0 and a standard deviation of 1, and IQ scores, with a mean of 100 and a standard deviation of 15.

When values are normally distributed in a population, standardization facilitates interpretation of test scores by making it easier to see where a test-taker stands on the variable in relation to others in the population. This is because, due to the properties of a normal distribution, one always knows the percentage of cases that fall within standard deviation ranges of the mean. In a normal distribution, 68.26% of scores fall within one standard deviation of the mean, or, in a T-score distribution, between 40 and 60, so 68% is the best answer to this question.

Another example: 95.44% of scores fall within two standard deviations of the mean; therefore, 4.56% will have scores two standard deviation units or more above or below the mean. Dividing 4.56 in half, 2.28% of test-takers will score 70 or above on any MMPI scale, and 2.28% will score 30 or below.
Additional Information: Standard Scores
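The percentages in the explanation follow from the normal curve and can be reproduced with the standard error function (a quick numerical check, not part of the MMPI-2 itself).

```python
import math

def pct_within(z):
    """Percentage of a normal distribution lying within +/- z standard deviations."""
    return 100 * math.erf(z / math.sqrt(2))

# T-scores have mean 50 and SD 10, so 40-60 is +/- 1 SD and 30-70 is +/- 2 SD.
print(round(pct_within(1), 1))               # 68.3 (the "68%" answer)
print(round(pct_within(2), 1))               # 95.4
# Share expected at or above T = 70 (upper half of the remaining tail):
print(round((100 - pct_within(2)) / 2, 1))   # 2.3
```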

16
Q
A condition necessary for pooled variance is:
Select one:
A. unequal sample sizes
B. equal sample sizes
C. unequal covariances
D. equal covariances
A

Correct Answer is: B
Pooled variance is the weighted average of the group variances, "weighted" based on the number of subjects in each group. Use of a pooled variance assumes that the population variances are approximately the same, even though the sample variances differ. When the population variances are known to be equal or can be assumed to be equal, the estimate may be labeled "equal variances assumed," "common variance," or "pooled variance." "Equal variances not assumed," or separate variances, is appropriate for normally distributed individual values when the population variances are known to be unequal or cannot be assumed to be equal.
Additional Information: The Variance
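A weighted pooled variance of the kind described above can be sketched as follows; the group sizes and variances are illustrative, and the weighting here uses degrees of freedom (n - 1), as in the pooled estimate for a two-sample t-test.

```python
def pooled_variance(var1, n1, var2, n2):
    """Average of two sample variances weighted by degrees of freedom.
    Valid only under the assumption that the population variances are
    (approximately) equal."""
    return ((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)

# Illustrative groups with similar variances but different sizes:
print(round(pooled_variance(4.0, 11, 6.0, 21), 3))   # (10*4 + 20*6) / 30 = 5.333
```

Note how the larger group pulls the pooled estimate toward its own variance.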

17
Q

In a clinical trial of a new drug, the null hypothesis is the new drug is, on average, no better than the current drug. It is concluded that the two drugs produce the same effect when in fact the new drug is superior. This is:
Select one:
A. corrected by reducing the power of the test
B. corrected by reducing the sample size
C. a Type I error
D. a Type II error

A

Correct Answer is: D
Type II errors occur when the null hypothesis is not rejected when it is in fact false. Type I errors, in which the null hypothesis is wrongly rejected, are often considered more serious; in the clinical trial of a new drug, a Type I error would be concluding that the new drug was better when in fact it was not. Type I and II errors are inversely related: as the probability of a Type I error increases, the probability of a Type II error decreases, and vice versa.

18
Q

Which of the following statements is not true regarding concurrent validity?
Select one:
A. It is used to establish criterion-related validity.
B. It is appropriate for tests designed to assess a person’s future status on a criterion.
C. It is obtained by collecting predictor and criterion scores at about the same time.
D. It indicates the extent to which a test yields the same results as other measures of the same phenomenon.

A

Correct Answer is: B
There are two ways to establish the criterion-related validity of a test: concurrent validation and predictive validation. In concurrent validation, predictor and criterion scores are collected at about the same time; by contrast, in predictive validation, predictor scores are collected first and criterion data are collected at some future point. Concurrent validity indicates the extent to which a test yields the same results as other measures of the same phenomenon. For example, if you developed a new test for depression, you might administer it along with the BDI and measure the concurrent validity of the two tests.
Additional Information: Concurrent vs. Predictive Validation

19
Q

Cluster analysis would most likely be used to
Select one:
A. construct a “taxonomy” of criminal personality types.
B. obtain descriptive information about a particular case.
C. test the hypothesis that an independent variable has an effect on a dependent variable.
D. test statistical hypotheses when the assumption of independence of observations is violated.

A

Correct Answer is: A
The purpose of cluster analysis is to place objects into categories. More technically, the technique is designed to help one develop a taxonomy, or classification system of variables. The results of a cluster analysis indicate which variables cluster together into categories. The technique is sometimes used to divide a population of individuals into subtypes.
Additional Information: Techniques Related to Factor Analysis

20
Q

Which of the following illustrates the concept of shrinkage?
Select one:
A. extremely depressed individuals obtain a high score on a depression inventory the first time they take it, but obtain a slightly lower score the second time they take it
B. items that have collectively been shown to be a valid way to diagnose a sample of individuals as depressed prove to be less valid when used for a different sample
C. the self-esteem of depressed individuals shrinks when they are faced with very difficult tasks
D. abilities such as short-term memory and response speed diminish as we get older

A

Correct Answer is: B
Shrinkage can be an issue when a predictor test is developed by testing out a pool of items on a validation (“try-out”) sample and then choosing the items that have the highest correlation with the criterion. When the chosen items are administered to a second sample, they usually don’t work quite as well – in other words, the validity coefficient shrinks. This occurs because of chance factors operating in the original validation sample that are not present in the second sample.
Additional Information: Factors Affecting the Validity Coefficient

21
Q
In the multitrait-multimethod matrix, a large heterotrait-monomethod coefficient would indicate:
Select one:
A. low convergent validity.
B. high convergent validity.
C. high divergent validity.
D. low divergent validity.
A

Correct Answer is: D
Use of a multitrait-multimethod matrix is one method of assessing a test’s construct validity. The matrix contains correlations among different tests that measure both the same and different traits using similar and different methodologies. The heterotrait-monomethod coefficient, one of the correlation coefficients that would appear on this matrix, reflects the correlation between two tests that measure different traits using similar methods. An example might be the correlation between a test of depression based on self-report data and a test of anxiety also based on self-report data. If a test has good divergent validity, this correlation would be low. Divergent validity is the degree to which a test has a low correlation with other tests that do not measure the same construct. Using the above example, a test of depression would have poor divergent validity if it had a high correlation with other tests that purportedly measure different traits, such as anxiety. This would be evidence that the depression test is measuring traits that are unrelated to depression.
Additional Information: Convergent and Discriminant (Divergent) Validation

22
Q

A kappa coefficient of .93 would indicate that the two tests
Select one:
A. measure what they are supposed to.
B. have a high degree of agreement between their raters.
C. aren’t especially reliable.
D. present test items with a high level of difficulty.

A

Correct Answer is: B
The kappa coefficient is used to evaluate inter-rater reliability. A coefficient in the lower .90s indicates high reliability.
This option ("measure what they are supposed to") is a layman's definition of the general concept of validity.
Additional Information: Interscorer Reliability
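Cohen's kappa can be computed from a rater-agreement table; the counts below are hypothetical and chosen to give a high coefficient.

```python
def cohens_kappa(table):
    """Cohen's kappa from a square agreement table:
    table[i][j] = count of cases where rater 1 chose category i
    and rater 2 chose category j."""
    n = sum(sum(row) for row in table)
    observed = sum(table[i][i] for i in range(len(table))) / n
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    # Agreement expected by chance, from the raters' marginal distributions.
    expected = sum(r * c for r, c in zip(row_totals, col_totals)) / (n * n)
    return (observed - expected) / (1 - expected)

# Two raters classifying 100 cases into two categories (illustrative counts):
table = [[45, 5],
         [5, 45]]
print(round(cohens_kappa(table), 2))   # 0.8
```

Unlike simple percent agreement (90% here), kappa discounts the agreement the raters would reach by chance alone.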

23
Q
Kuder-Richardson reliability applies to
Select one:
A. split-half reliability.
B. test-retest stability.
C. Likert scales.
D. tests with dichotomously scored questions.
A

Correct Answer is: D
The Kuder-Richardson formula is one of several statistical indices of a test’s internal consistency reliability. It is used to assess the inter-item consistency of tests that are dichotomously scored (e.g., scored as right or wrong).
Additional Information: Internal Consistency Reliability
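For dichotomously scored items, KR-20 has a closed form; here is a small sketch on made-up right/wrong data (total-score variance is computed over examinees).

```python
def kr20(item_responses):
    """KR-20 internal consistency for dichotomously scored items.
    item_responses: one list of 0/1 item scores per examinee."""
    n_items = len(item_responses[0])
    n_people = len(item_responses)
    # Sum of p*q across items, where p = proportion passing the item.
    pq_sum = 0.0
    for j in range(n_items):
        p = sum(person[j] for person in item_responses) / n_people
        pq_sum += p * (1 - p)
    # Variance of total scores across examinees.
    totals = [sum(person) for person in item_responses]
    mean = sum(totals) / n_people
    var_total = sum((t - mean) ** 2 for t in totals) / n_people
    return (n_items / (n_items - 1)) * (1 - pq_sum / var_total)

# Illustrative data: 4 examinees x 3 right/wrong items.
data = [[1, 1, 1],
        [1, 1, 0],
        [1, 0, 0],
        [0, 0, 0]]
print(round(kr20(data), 2))   # 0.75
```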

24
Q

In designing a new test of a psychological construct, you correlate it with an old test the new one will replace. Your assumption in this situation is that:
Select one:
A. the old test is invalid.
B. the old test is valid but out of date.
C. the old test is better than the new test.
D. the old test and the new test are both culture-fair.

A

Correct Answer is: B
the old test is valid but out of date.
This choice is the only one that makes logical sense. In the assessment of the construct validity of a new test, a common practice is to correlate that test with another test that measures the same construct. For this technique to work, the other test must be a valid measure of the construct. So in this situation, it is assumed that the old test is valid, but at the same time, it is being replaced. Of the choices listed, the correct option provides a reason why a valid test would be replaced.
Additional Information: Construct Validity

25
Q
A large monotrait-heteromethod coefficient in a multitrait-multimethod matrix indicates evidence of:
Select one:
A. convergent validity
B. concurrent validity
C. predictive validity
D. discriminant validity
A

Correct Answer is: A
A multitrait-multimethod matrix is a complicated method for assessing convergent and discriminant validity. Convergent validity requires that different ways of measuring the same trait yield the same result. Monotrait-heteromethod coefficients are correlations between two measures that assess the same trait using different methods; therefore, if a test has convergent validity, this correlation should be high. Heterotrait-monomethod and heterotrait-heteromethod coefficients both bear on discriminant validity, and monotrait-monomethod coefficients are reliability coefficients.
Additional Information: Convergent and Discriminant (Divergent) Validation

26
Q

In computing test reliability, to control for practice effects one would use a(n):
I. split-half reliability coefficient.
II. alternative forms reliability coefficient.
III. test-retest reliability coefficient.
Select one:
A. I and III only
B. I and II only
C. II and III only
D. II only

A

Correct Answer is: B
The clue here is the practice effect. That means that if you give a test, just taking it will give the person practice so that next time, he or she is not a naive person. To control for that, we want to eliminate the situation where the person is administered the same test again. So we do not use test-retest. We can use the two other methods listed. We can use split-half since, here, only one administration is used (the two parts are thought of as two different tests). And, in the alternative forms method, a different test is given the second time, controlling for the effects of taking the same test twice.

27
Q
If an examinee correctly guesses the answers to a test, the reliability coefficient:
Select one:
A. is not affected
B. stays the same
C. decreases
D. increases
A

Correct Answer is: C
One of the factors that affect the reliability coefficient is guessing: guessing correctly decreases the reliability coefficient. The incorrect options ("is not affected," "stays the same," and "increases") are not true with regard to the reliability coefficient.
Additional Information: Factors Affecting Reliability

28
Q
A researcher employs multiple methods of measurement in an attempt to increase reliability by reducing systematic error. This strategy is referred to as:
Select one:
A. calibration
B. intraclass correlation (ICC)
C. triangulation
D. correction for attenuation
A

Correct Answer is: C
Triangulation is the attempt to increase reliability by reducing systematic or method error through a strategy in which the researcher employs multiple methods of measurement (e.g., observation, survey, archival data). If the alternative methods do not share the same source of systematic error, examination of data from the alternative methods gives insight into how individual scores may be adjusted to come closer to reflecting true scores, thereby increasing reliability.
calibration

Calibration is the attempt to increase reliability by increasing homogeneity of ratings through feedback to the raters, when multiple raters are used. For example, raters might meet during pretesting of the instrument to discuss items on which they have disagreed, seeking to reach consensus on rules for rating items (e.g., defining a "2" for an item dealing with job performance).

intraclass correlation (ICC)

Intraclass correlation (ICC) is used to measure inter-rater reliability for two or more raters and may also be used to assess test-retest reliability. ICC may be conceptualized as the ratio of between-groups variance to total variance.

correction for attenuation

Correction for attenuation is a method used to adjust correlation coefficients upward because of errors of measurement when two measured variables are correlated; the errors always serve to lower the correlation coefficient as compared with what it would have been if the measurement of the two variables had been perfectly reliable.

Additional Information: Factors Affecting Reliability
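The correction for attenuation described above adjusts an observed correlation by the reliabilities of the two measures: r_corrected = r_xy / sqrt(r_xx * r_yy). The values below are illustrative.

```python
import math

def correct_for_attenuation(r_xy, r_xx, r_yy):
    """Estimated correlation between two variables if both were measured
    with perfect reliability: r_xy / sqrt(r_xx * r_yy)."""
    return r_xy / math.sqrt(r_xx * r_yy)

# Observed r = .40 between measures with reliabilities .70 and .80:
print(round(correct_for_attenuation(0.40, 0.70, 0.80), 3))   # 0.535
```

Because measurement error always depresses the observed correlation, the corrected value is always at least as large as the observed one.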

29
Q
If, in a normally-shaped distribution, the mean is 100 and the standard error of measurement is 5, what would the 95% confidence interval be for an examinee who receives a score of 90?
Select one:
A. 75-105
B. 80-100
C. 90-100
D. 95-105
A

Correct Answer is: B
The standard error of measurement indicates how much error an individual test score can be expected to have. A confidence interval indicates the range within which an examinee's true score is likely to fall, given his or her obtained score. To calculate the 95% confidence interval we simply add and subtract two standard errors of measurement from the obtained score. Two standard errors of measurement in this case equal 10. We're told that the examinee's obtained score is 90, and 90 +/- 10 yields a confidence interval of 80 to 100. In other words, we can be 95% confident that the examinee's true score falls between 80 and 100.
Additional Information: Standard Error of Measurement
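The computation in this answer can be sketched in a couple of lines:

```python
def confidence_interval(obtained, sem, n_sems):
    """True-score band: obtained score plus or minus n_sems standard errors
    of measurement."""
    return (obtained - n_sems * sem, obtained + n_sems * sem)

# 95% interval: roughly 2 SEMs around the obtained score of 90, with SEM = 5.
print(confidence_interval(90, 5, 2))   # (80, 100)
# A 68% interval would use 1 SEM:
print(confidence_interval(90, 5, 1))   # (85, 95)
```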

30
Q
In the multitrait-multimethod matrix, a low heterotrait-heteromethod coefficient would indicate:
Select one:
A. low convergent validity
B. low divergent validity
C. high convergent validity
D. high divergent validity
A

Correct Answer is: D
Use of a multitrait-multimethod matrix is one method of assessing a test’s construct validity. The matrix contains correlations among different tests that measure both the same and different traits using similar and different methodologies. The heterotrait-heteromethod coefficient, one of the correlation coefficients that would appear on this matrix, reflects the correlation between two tests that measure different (hetero) traits using different (hetero) methods. An example might be the correlation between vocabulary subtest scores on the WAIS-IV for intelligence and scores on the Beck Depression Inventory for depression. Since these measures presumably measure different constructs, the correlation coefficient should be low, indicating high divergent or discriminant validity.
Additional Information: Convergent and Discriminant (Divergent) Validation

31
Q
A way to define criterion in regard to determining criterion related validity is that the criterion is:
Select one:
A. The predictor test
B. The validity measure
C. The predictee
D. The content.
A

Correct Answer is: C
To determine criterion-related validity, scores on a predictor test are correlated with an outside criterion. The criterion is that which is being predicted, or the "predictee."
Additional Information: Relationship between Reliability and Validity

32
Q
Raising the cutoff score on a predictor test would have the effect of
Select one:
A. increasing true positives
B. decreasing false positives
C. decreasing true negatives
D. decreasing false negatives.
A

Correct Answer is: B
A simple way to answer this question is with reference to a chart such as the one displayed under the topic “Criterion-Related Validity” in the Psychology-Test Construction section of your materials. If you look at this chart, you can see that increasing the predictor cutoff score (i.e., moving the vertical line to the right) decreases the number of false positives as well as true positives (you can also see that the number of both true and false negatives would be increased).
You can also think about this question more abstractly by coming up with an example. Imagine, for instance, that a general knowledge test is used as a predictor of job success. If the cutoff score on this test is raised, fewer people will score above this cutoff and, therefore, fewer people will be predicted to be successful. Another way of saying this is that fewer people will come up “positive” on this predictor. This applies to both true positives and false positives.
Additional Information: Decision-Making

33
Q

If, in a normally-shaped distribution, the mean is 100 and the standard error of measurement is 10, what would the 68% confidence interval be for an examinee who receives a score of 95?
Select one:
A. 85 to 105
B. 90 to 100
C. 90 to 110
D. impossible to calculate without the reliability coefficient

A

Correct Answer is: A
The standard error of measurement indicates how much error an individual test score can be expected to have. A confidence interval indicates the range within which an examinee's true score is likely to fall, given his or her obtained score. To calculate the 68% confidence interval we simply add and subtract one standard error of measurement from the obtained score.
impossible to calculate without the reliability coefficient

This choice is incorrect because although the reliability coefficient is needed to calculate a standard error of measurement, in this case, we are provided with the standard error.
Additional Information: Standard Error of Measurement
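The interval arithmetic generalizes directly: one SEM on either side of the obtained score gives roughly 68% coverage, two SEMs roughly 95%, under the usual normal-error assumption. A minimal sketch:

```python
def confidence_interval(obtained, sem, n_sems):
    # n_sems = 1 gives the ~68% interval, n_sems = 2 the ~95% interval,
    # assuming normally distributed measurement error.
    return (obtained - n_sems * sem, obtained + n_sems * sem)

# The values from the question: obtained score 95, SEM 10.
print(confidence_interval(95, 10, 1))  # (85, 105)
```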

34
Q
The cutoff IQ score for placement in a school district's gifted program is 135. The parent of a child who scored 133 might be interested in knowing the test's standard error of measurement in order to estimate the child's
Select one:
A. true score.
B. mean score.
C. error score.
D. criterion score.
A

Correct Answer is: A
The question is just a roundabout way of asking “what is the standard error of measurement?”, though it does supply a practical application of the concept. According to classical test theory, an obtained test score consists of truth and error. The truth component reflects the degree to which the score reflects the actual characteristic the test measures, and the error component reflects random or chance factors affecting the score. For instance, a score on an IQ test will reflect to some degree the person’s “true” IQ and to some degree chance factors such as whether the person was tired the day he took the test, whether some of the questions happen to be a particularly good fit with the person’s knowledge base, etc. The standard error of measurement of a test indicates the expected amount of error a score on that test will contain. It can be used to answer the question, “given an obtained score, what is the likely true score?” For example, if the test referenced had a standard error of measurement of 5, there would be a 68% chance that the true test score lies within one standard error of measurement of the obtained score (between 128 and 138 in this case), and a 95% chance that the true score lies within two standard errors of measurement (between 123 and 143). So the parent would be interested to know the test’s standard error of measurement because the higher it is, the greater the possibility that an obtained score of 133 actually reflects a true score of 135 or above.
Additional Information: Standard Error of Measurement

35
Q
When using a rating scale, several psychologists agree on the same diagnosis for one patient. This is a sign that the scale is
Select one:
A. reliable.
B. valid.
C. reliable and valid.
D. neither reliable nor valid.
A

Correct Answer is: A
The rating scale described by the question has good inter-rater reliability, or consistency across raters. However, it may or may not have good validity; that is, it may or may not measure what it purports to measure. The question illustrates that high reliability is a necessary but not a sufficient condition for high validity.
Additional Information: Interscorer Reliability

36
Q
A test with limited ceiling would have a \_\_\_\_\_\_\_\_\_\_\_\_ distribution shape.
Select one:
A. normal
B. flat
C. positively skewed
D. negatively skewed
A

Correct Answer is: D
A test with limited ceiling has an inadequate number of difficult items, so scores pile up at the high end of the scale, leaving a long tail of lower scores. Therefore the distribution would be negatively skewed.
Additional Information: Skewed Distributions, Measures of Central Tendency

37
Q

The primary purpose of rotation in factor analysis is to
Select one:
A. facilitate interpretation of the data.
B. improve the mathematical fit of the solution.
C. obtain uncorrelated factors.
D. improve the predictive validity of the factors.

A

Correct Answer is: A
Factor analysis is a statistical procedure that is designed to reduce measurements on a number of variables to fewer, underlying variables. Factor analysis is based on the assumption that variables or measures highly correlated with each other measure the same or a similar underlying construct, or factor. For example, a researcher might administer 250 proposed items on a personality test and use factor analysis to identify latent factors that could account for variability in responses to the items. These factors would then be interpreted based on logical analysis or the researcher’s theories. If one of the factors identified by the analysis correlated highly with items that asked about the person’s happiness, level of energy, and hopelessness, that factor might be labeled “Depressive Tendencies.” In factor analysis, rotation is usually the final statistical step. Its purpose is to facilitate the interpretation of data by identifying variables that load (i.e., correlate) highly on one factor and not others.
Additional Information: Interpreting and Naming the Factors

38
Q
A school psychologist is asked to work with a child whose on-task behavior is poor. To monitor the child's on-task behavior, the psychologist is most likely to train the teacher or teacher's assistant in
Select one:
A. interval recording.
B. frequency recording.
C. continuous recording.
D. duration recording.
A

Correct Answer is: A
All the choices refer to methods of recording behaviors that can be used by observational raters or researchers. In interval recording (the correct answer), the rater observes a subject at given intervals and notes whether or not the subject is engaging in the target behavior during that interval. For instance, a rater might observe a student for 10 seconds every three minutes and record whether or not the student is on-task during those 10 seconds. Interval recording is most useful for behaviors that do not have a fixed beginning or end – such as being on task.
Frequency recording involves keeping count of the number of times a behavior occurs; this would not be practical in keeping track of whether or not a person is on task.

Continuous recording involves recording all the behaviors of the target subject during each observation session. Although it’s possible to keep track of whether a person is on-task using this method, it is not as practical or meaningful for this purpose as interval recording.

Finally, duration recording involves recording the elapsed time during which the target behavior or behaviors occur. This would not be practical for a behavior that has no fixed beginning or end.
Additional Information: Interscorer Reliability

39
Q

The purpose of rotation in factor analysis is to facilitate interpretation of the factors. Rotation:
Select one:
A. alters the factor loadings for each variable but not the eigenvalue for each factor
B. alters the eigenvalue for each factor but not the factor loadings for the variables
C. alters the factor loadings for each variable and the eigenvalue for each factor
D. does not alter the eigenvalue for each factor nor the factor loadings for the variables

A

Correct Answer is: C
In factor analysis, rotating the factors changes the factor loadings for the variables and eigenvalue for each factor although the total of the eigenvalues remains the same.
Additional Information: Interpreting and Naming the Factors

40
Q

A common source of criterion contamination is:
Select one:
A. knowledge of predictor scores by the individual conducting the assessment on the criterion.
B. cheating on the criterion test.
C. a non-normal distribution of scores on the criterion test.
D. a low range of scores on the predictor test.

A

Correct Answer is: A
A criterion measure is one on which a predictor test attempts to predict outcome; it could be termed the “predictee.” For example, if scores on a personality test were used to predict job success as measured by supervisor evaluations, the supervisor evaluations would be the criterion measure. Criterion contamination occurs when a factor irrelevant to what is being measured affects scores on the criterion. When the criterion measure is based on subjective ratings, rater knowledge of predictor scores is a common source of criterion contamination. In our example, if supervisors knew employees’ results on the personality test, their evaluations might be biased based on their knowledge of these scores.
Additional Information: Criterion Contamination

41
Q

A percentage score, as opposed to a percentile rank, is based on:
Select one:
A. Total number of items
B. An examinee’s score in comparison to other examinees’ scores
C. That there are one hundred test items
D. The number of items answered correctly

A

Correct Answer is: D
A percentage score indicates the number of items answered correctly. A percentile rank compares one examinee’s score with all other examinees’ scores.
Additional Information: Percentile Ranks

42
Q

In a factor analysis, an eigenvalue corresponds to
Select one:
A. the number of latent variables.
B. the strength of the relationship between factors.
C. the level of significance of the factor analysis.
D. the explained variance of one of the factors.

A

Correct Answer is: D
When a factor analysis produces a series of factors, it is useful to determine how much of the variance is accounted for by each factor. An eigenvalue is based on the factor loadings of all the variables in the factor analysis to a particular factor. When the factor loadings are high, the eigenvalue will be large. A large eigenvalue would mean that a particular factor accounts for a large proportion of the variance among the variables.
Additional Information: Explained Variance (or Eigenvalues)

43
Q
If a job selection test has lower validity for Hispanics as compared to Whites or African-Americans, you could say that ethnicity is acting as a:
Select one:
A. confounding variable
B. criterion contaminator
C. discriminant variable
D. moderator variable
A

Correct Answer is: D
A moderator variable is any variable which moderates, or influences, the relationship between two other variables. If the validity of a job selection test is different for different ethnic groups (i.e. there is differential validity), then ethnicity would be considered a moderator variable since it is influencing the relationship between the test (predictor) and actual job performance (the criterion).
A confounding variable is a variable in a research study which is not of interest to the researcher, but which exerts a systematic effect on the DV. Criterion contamination is the artificial inflation of validity which can occur when raters subjectively score ratees on a criterion measure after they have been informed how the ratees scored on the predictor.
Additional Information: Factors Affecting the Validity Coefficient

44
Q

The factor loading for Test A and Factor II is .80 in a factor matrix. This means that:
Select one:
A. only 80% of variability in Test A is accounted for by the factor analysis
B. only 64% of variability in Test A is accounted for by the factor analysis
C. 80% of variability in Test A is accounted for by Factor II
D. 64% of variability in Test A is accounted for by Factor II

A

Correct Answer is: D
The correlation coefficient for a test and an identified factor is referred to as a factor loading. To obtain a measure of shared variability, the factor loading is squared. In this example, the factor loading is .80, meaning that 64% (.80 squared) of variability in the test is accounted for by the factor.
The other identified factor(s) probably also account for some variability in Test A, which is why this option is not the best answer: only 64% of variability in Test A is accounted for by the factor analysis.
Additional Information: Factor Analysis

45
Q
The reliability statistic that can be interpreted as the average of all possible split-half coefficients is
Select one:
A. the Spearman-Brown formula.
B. Cronbach's coefficient alpha.
C. chi-square.
D. point-biserial coefficient.
A

Correct Answer is: B
According to classical test theory, the reliability of a test indicates the degree to which examinees’ scores are free from error and reflect their “true” test score. Reliability is typically measured by obtaining the correlation between scores on the same test, such as by having examinees take and then retake the test and correlating both sets of scores (test-retest reliability) or by dividing the test in half and correlating scores on both halves (split-half reliability). Cronbach’s alpha, like split-half reliability, is categorized as an internal consistency reliability coefficient. Its calculation is based on the average of all inter-item correlations, which are correlations between responses on two individual items. Mathematically, Cronbach’s alpha works out to the average of all possible split-half correlations (there are many possible split-half correlations because there are many different ways of splitting the test in half).
Regarding the other choices, the Spearman-Brown formula is used to estimate the effects of lengthening a test on its reliability coefficient. Longer tests are typically more reliable. The Spearman-Brown formula is commonly used to adjust the split-half coefficient to estimate what reliability would have been if the halved tests had as many items as the full test. The chi-square test is used to test predictions about observed versus expected frequency distributions of nominal, or categorical, data; for example, if you flip a coin 100 times, you can use the chi-square test to determine if the distribution of heads versus tails outcomes falls into the expected range or if there is evidence that the coin toss was “fixed.” And the point-biserial correlation coefficient is used to correlate dichotomously scaled variables with interval or ratio data; for example, it can be used to correlate responses on test items scored as correct or incorrect with scores on the test as a whole.
Additional Information: Internal Consistency Reliability
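The standard computational form of coefficient alpha is k/(k−1) × (1 − Σ item variances / total-score variance). A minimal sketch, using a hypothetical 4-examinee by 3-item response matrix:

```python
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(rows):
    # rows: one list of item scores per examinee
    k = len(rows[0])                 # number of items
    items = list(zip(*rows))         # scores regrouped by item
    totals = [sum(r) for r in rows]  # each examinee's total score
    return (k / (k - 1)) * (1 - sum(variance(i) for i in items) / variance(totals))

# Hypothetical responses: 4 examinees x 3 items
data = [[2, 3, 3], [4, 4, 5], [1, 2, 2], [3, 3, 4]]
alpha = cronbach_alpha(data)
assert 0 < alpha <= 1
```

High inter-item correlations, as in this made-up data, drive alpha toward 1; uncorrelated items drive it toward 0.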

46
Q
An athlete is requested to take a drug screening test used to identify individuals with performance enhancing substances in their systems. Despite the player's actual usage of steroids, the test fails to identify the substances. In the context of decision-making theory, this individual is a:
Select one:
A. false positive
B. false negative
C. true positive
D. true negative
A

Correct Answer is: B
Based on performance on the predictor and the criterion, individuals may be classified as false positives, true positives, false negatives, or true negatives. False negatives, like the athlete, are not identified as having used substances when, in fact, they have.
Conversely, false positives are identified by the drug screening test as having used or having substances present, when they have not. True positives are individuals identified by the screening test as having substances present and they do. True negatives are individuals not identified by the screening test to have substances present and do not.
Additional Information: Decision-Making

47
Q
The item difficulty ("p") index yields information about the difficulty of test items in terms of a(n) \_\_\_\_\_\_\_\_\_ scale of measurement.
Select one:
A. nominal
B. ordinal
C. interval
D. ratio
A

Correct Answer is: B
An item difficulty index indicates the percentage of individuals who answer a particular item correctly. For example, if an item has a difficulty index of .80, it means that 80% of test-takers answered the item correctly. Although it appears that the item difficulty index is a ratio scale of measurement, according to Anastasi (1982) it is actually an ordinal scale because it does not necessarily indicate equivalent differences in difficulty.
Additional Information: Item Difficulty

48
Q

All of the following statements regarding item response theory are true, except
Select one:
A. it cannot be applied in the attempt to develop culture-fair tests.
B. it’s a useful theory in the development of computer programs designed to create tests tailored to the individual’s level of ability.
C. one of its assumptions is that test items measure a “latent trait.”
D. it usually has little practical significance unless one is working with very large samples

A

Correct Answer is: A
Item response theory is a highly technical mathematical approach to item analysis. Its use is based on a number of complex mathematical assumptions. One of these assumptions, known as invariance of item parameters, holds that the characteristics of items should be the same for all theoretically equivalent groups of subjects chosen from the same population. Thus, any culture-free test should demonstrate such invariance; i.e., a set of items shouldn’t have a different set of characteristics for minority and non-minority subgroups.
it cannot be applied in the attempt to develop culture-fair tests.

For this reason, item response theory has been applied to the development of culture-free tests, so this choice is not a true statement. The other choices are all true statements about item response theory, and therefore incorrect answers to this question.

it’s a useful theory in the development of computer programs designed to create tests tailored to the individual’s level of ability.

Consistent with this choice, item response theory is the theoretical basis of computer adaptive assessment, in which tests tailored to the examinee’s ability level are computer generated.

one of its assumptions is that test items measure a “latent trait.”

As stated by this choice, an assumption of item response theory is that items measure a latent trait, such as intelligence or general ability.

it usually has little practical significance unless one is working with very large samples.

And, finally, research supports the notion that the assumptions of item response theory only hold true for very large samples.

Additional Information: Item Difficulty

49
Q
Which of the following procedures involves identifying the underlying structure in a set of variables?
Select one:
A. multiple regression
B. factor analysis
C. canonical correlation
D. discriminant analysis
A

Correct Answer is: B
The purpose of factor analysis is to determine the degree to which many tests or variables are measuring fewer, underlying constructs. For example, factor analyses of the WAIS-III have suggested that four factors – verbal comprehension, perceptual organization, processing speed, and working memory – explain, to a large degree, scores on the fourteen subtests. Another way of saying this is that a factor analysis helps to identify the underlying structure in a set of variables.
Additional Information: Factor Analysis

50
Q

Limited “floor” would be the biggest problem when a test will be used to
Select one:
A. distinguish between mildly and moderately retarded children.
B. distinguish between above-average and gifted students.
C. distinguish between successful and unsuccessful trainees.
D. distinguish between satisfied and dissatisfied customers.

A

Correct Answer is: A
Floor refers to a test’s ability to distinguish between examinees at the low end of the distribution, which would be an issue when distinguishing between those with mild versus moderate retardation. Limited floor occurs when the test does not contain enough easy items.
Note that “ceiling” would be of concern for tests designed to distinguish between examinees at the high end of the distribution: distinguish between above-average and gifted students.
Additional Information: Ceiling and Floor Effects

51
Q
You are testing a cross-section of minority clients including New Zealanders, Hispanics, African-Americans and Asians. The New Zealanders’ group turns out to serve as a moderator variable. This means the test has
Select one:
A. Cross validation
B. Shrinkage
C. Differential validity
D. Criterion contamination.
A

Correct Answer is: C
Variables that affect the validity of a test are moderator variables. When a moderator variable is present, a test is said to have differential validity, meaning there would be a different validity coefficient for the New Zealanders’ group than for the others.
Additional Information: Factors Affecting the Validity Coefficient

52
Q

R-squared is used as an indicator of:
Select one:
A. The number of values that are free to vary in a statistical calculation
B. The variability of scores
C. How much your ability to predict is improved using the regression line
D. The relationship between two variables that have a nonlinear relationship

A

Correct Answer is: C
You might have been able to guess correctly using the process of elimination. If so, note that R-squared tells you how much your ability to predict is improved using the regression line, compared to not using it. The greatest possible improvement is 1 and the least is 0.
The number of values that are free to vary in a statistical calculation

This choice is the definition of degrees of freedom.

The variability of scores

This is the definition of variance.

The relationship between two variables that have a nonlinear relationship

And this is a description of the coefficient eta.
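The “improvement over not using the regression line” reading can be made concrete: R-squared is 1 minus the ratio of the line’s squared error to the squared error of always predicting the mean of y. A minimal sketch with made-up data:

```python
def r_squared(xs, ys):
    # Fit a least-squares line, then compare its squared error to that of
    # always predicting the mean of y: R^2 = 1 - SSE_line / SSE_mean.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    sse_line = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    sse_mean = sum((y - my) ** 2 for y in ys)
    return 1 - sse_line / sse_mean

# Perfectly linear data: the line removes all prediction error, so R^2 = 1.
assert abs(r_squared([1, 2, 3, 4], [2, 4, 6, 8]) - 1.0) < 1e-9
```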

53
Q
A test has a reliability coefficient of .90. What percentage of variability among examinees on this test is due to true score differences?
Select one:
A. 100%
B. 90%
C. 81%
D. 50%
A

Correct Answer is: B
In most cases, you would square the correlation coefficient to obtain the answer to this question. However, the reliability coefficient is an exception to this rule: it is never squared. Instead, it is interpreted directly. This means that the value of the reliability coefficient itself indicates the proportion of variance in a test that reflects true variance.
Additional Information: Reliability (Shared Variability)

54
Q
Which of the following methods of establishing a test's reliability is, all other things being equal, likely to be lowest?
Select one:
A. split-half
B. Cronbach's alpha
C. alternate forms
D. test-retest
A

Correct Answer is: C
You probably remember that the alternate forms coefficient is considered by many to be the best reliability coefficient to use when practical (if you don’t, commit this factoid to memory now). Everything else being equal, it is also likely to have a lower magnitude than the other types of reliability coefficients. The reason for this is similar to the reason why it is considered the best one to use. To obtain an alternate forms coefficient, one must administer two forms of the same test to a group of examinees, and correlate scores on the two forms. The two forms of the test are administered at different times and (because they are different forms) contain different items or content. In other words, there are two sources of error (or factors that could lower the coefficient) for the alternate forms coefficient: the time interval and different content (in technical terms, these sources of error are referred to respectively as “time sampling” and “content sampling”). The alternate forms coefficient is considered the best reliability coefficient by many because, for it to be high, the test must demonstrate consistency across both a time interval and different content.
Additional Information: Alternate Forms Reliability

55
Q
Which of the following item difficulty levels maximizes discrimination among test-takers?
Select one:
A. 0.1
B. 0.25
C. 0.5
D. 0.9
A

Correct Answer is: C
If a test item has an item difficulty level of .50, this means that 50% of examinees answered the item correctly. Therefore, items with this difficulty level are most useful for discriminating between “high scoring” and “low scoring” groups.
Additional Information: Item Difficulty
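The underlying reason is that the variance of a right/wrong item is p(1 − p), which peaks at p = .50; the more an item’s scores vary, the more it can spread examinees apart. A quick check over the answer options:

```python
# Variance of a dichotomously scored item is p * (1 - p); it is largest
# at p = .50, which is why that difficulty level discriminates best.
difficulties = [0.1, 0.25, 0.5, 0.9]
variances = {p: p * (1 - p) for p in difficulties}
best = max(variances, key=variances.get)
assert best == 0.5
```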

56
Q
Form A is administered to a group of employees in Spring and then again in Fall. Using this method, what type of reliability is measured?
Select one:
A. split-half
B. equivalence
C. stability
D. internal consistency
A

Correct Answer is: C
Test-retest reliability, or the coefficient of stability, involves administering the same test to the same group on two occasions and then correlating the scores.
split-half

Split-half reliability is a method of determining internal consistency reliability.

equivalence

Alternate forms reliability, or the coefficient of equivalence, consists of administering two alternate forms of a test to the same group and then correlating the scores.

internal consistency

Internal consistency reliability utilizes a single test administration and involves obtaining correlations among individual test items.

Additional Information: Test-Retest Reliability

57
Q
Different regression line slopes in a scatterplot suggest:
Select one:
A. differential validity
B. a lack of factorial validity
C. divergent validity
D. a lack of convergent validity
A

Correct Answer is: A
The slope of a regression line for a test is directly related to the test’s criterion-related validity: The steeper the slope, the greater the validity. A test has differential validity when it has different validity coefficients for different groups, which is what is suggested by different regression line slopes in a scatterplot.
Factorial validity refers to the extent a test or test item correlates with factors expected to be correlated with in a factor analysis. The extent a test does not correlate with measures of an unrelated construct is referred to as divergent validity. Convergent validity refers to the degree a test correlates with measures of the same or a similar construct.
Additional Information: Factors Affecting the Validity Coefficient

58
Q
An examinee obtains a score of 70 on a test that has a mean of 80, a standard deviation of 15, and a standard error of measurement of 5. The 95% confidence interval for the examinee's score is:
Select one:
A. 50-90
B. 55-85
C. 60-80
D. 65-75
A

Correct Answer is: C
A confidence interval indicates the range within which an examinee’s true score is likely to fall, given his or her obtained score. The standard error of measurement indicates how much error an individual test score can be expected to have and is used to construct confidence intervals. To calculate the 68% confidence interval, add and subtract one standard error of measurement to the obtained score. To calculate the 95% confidence interval, add and subtract two standard errors of measurement to the obtained score. Two standard errors of measurement in this case equal 10. We’re told that the examinee’s obtained score is 70. 70 ± 10 results in a confidence interval of 60 to 80. In other words, we can be 95% confident that the examinee’s true score falls between 60 and 80.
Additional Information: Standard Error of Measurement

59
Q

Confidence intervals are used in order to:
Select one:
A. calculate the test’s mean
B. calculate the standard deviation
C. calculate the standard error of measurement
D. estimate true scores from obtained scores

A

Correct Answer is: D
Confidence intervals allow us to determine the range within which an examinee’s true score on a test is likely to fall, given his or her obtained score.
The standard error of measurement is used to construct confidence intervals, not the other way around.
Additional Information: Standard Error of Measurement

60
Q

If you find that your job selection measure yields too many “false positives,” what could you do to correct the problem?
Select one:
A. raise the predictor cutoff score and/or lower the criterion cutoff score
B. raise the predictor cutoff score and/or raise the criterion cutoff score
C. lower the predictor cutoff score and/or raise the criterion cutoff score
D. lower the predictor cutoff score and/or lower the criterion cutoff score

A

Correct Answer is: A
On a job selection test, a “false positive” is someone who is identified by the test as successful but who does not turn out to be successful, as measured by a performance criterion. If you raise the selection test cutoff score, you will reduce false positives, since, by making it harder to “pass” the test, you will be ensuring that the people who do pass are more qualified and therefore more likely to be successful. By lowering the criterion score, what you are in effect doing is making your definition of success more lax. It therefore becomes easier to be considered successful, and many of the people who were false positives will now be considered true positives.
If you understand concepts in pictures better than in words, refer to the Psychology-Test Construction section, where a graph is used to explain this idea.
Additional Information: Decision-Making

61
Q
In the multitrait-multimethod matrix, a low heterotrait-monomethod coefficient would indicate:
Select one:
A. low convergent validity.
B. low divergent validity.
C. high convergent validity.
D. high divergent validity.
A

Correct Answer is: D
Use of a multitrait-multimethod matrix is one method of assessing a test’s construct validity. The matrix contains correlations among different tests that measure both the same and different traits using similar and different methodologies. The heterotrait-monomethod coefficient, one of the correlation coefficients that would appear on this matrix, reflects the correlation between two tests that measure different traits using similar methods. An example might be the correlation between a test of depression based on self-report data and a test of anxiety also based on self-report data. If a test has good divergent validity, this correlation would be low. Divergent validity is the degree to which a test has a low correlation with other tests that do not measure the same construct. Using the above example, a test of depression would have good divergent validity if it had a low correlation with other tests that purportedly measure different traits, such as anxiety. This would be evidence that the depression test is not measuring traits that are unrelated to depression.
Additional Information: Convergent and Discriminant (Divergent) Validation

62
Q
A predictor that is highly sensitive for identifying the presence of a disorder would most likely result in:
Select one:
A. measurement error
B. type II error
C. a high number of false positives
D. a high number of false negatives
A

Correct Answer is: C
A predictor that is highly sensitive will more likely identify the presence of a characteristic; that is, it will result in more positives (true and false). This may be desirable when the risk of not detecting a problem is high. For example, in the detection of cancer, a blood test that results in a high number of false positives is preferable to one that has many false negatives. A positive test result can then be verified by another method, for example, a biopsy.
Measurement error is the part of test scores which is due to random factors. Type II error is an error made when an experimenter erroneously accepts the null hypothesis.
Additional Information: Decision-Making

63
Q

Which statement is most correct?
Select one:
A. High reliability assumes high validity.
B. High validity assumes high reliability.
C. Low validity assumes low reliability.
D. Low reliability assumes low validity.

A

Correct Answer is: B
This question is difficult because the language of the response choices is convoluted and imprecise. We don’t write questions like this because we’re sadistic; it’s just that you’ll sometimes see this type of language on the exam as well, and we want to prepare you. What you need to do on questions like this is bring to mind what you know about the issue being asked about, and to choose the answer that best applies. Here, you should bring to mind what you know about the relationship between reliability and validity: For a test to have high validity, it must be reliable; however, for a test to have high reliability, it does not necessarily have to be valid. With this in mind, you should see that “high validity assumes high reliability” is the best answer. This means that a precondition of high validity is high reliability.
The second best choice states that low reliability assumes low validity. This is a true statement if you interpret the word “assume” to mean “implies” or “predicts.” But if you interpret the word “assume” to mean “depends on” or “is preconditioned by,” the statement is not correct.
Additional Information: Relationship between Reliability and Validity

64
Q

Differential prediction is one of the causes of test unfairness and occurs when:
Select one:
A. members of one group obtain lower scores on a selection test than members of another group, but the difference in scores is not reflected in their scores on measures of job performance
B. a rater’s knowledge of ratees’ performance on the predictor biases his/her ratings of ratees’ performance on the criterion
C. a predictor’s validity coefficient differs for different groups
D. a test has differential validity

A

Correct Answer is: A
As described in the Federal Uniform Guidelines on Employee Selection, differential prediction is a potential cause of test unfairness. Differential prediction occurs when the use of scores on a selection test systematically over- or under-predict the job performance of members of one group as compared to members of another group.
a rater’s knowledge of ratees’ performance on the predictor biases his/her ratings of ratees’ performance on the criterion

Criterion contamination occurs when a rater’s knowledge of ratees’ performance on the predictor biases his/her ratings of ratees’ performance on the criterion.

a predictor’s validity coefficient differs for different groups

Differential validity, also a possible cause of adverse impact, occurs when a predictor’s validity coefficient differs for different groups.

a test has differential validity

When a test has differential validity, there is a slope bias. Slope bias refers to differences in the slope of the regression line.

Additional Information: Adverse Impact

65
Q

What are the minimum and maximum values of the standard error of measurement?
Select one:
A. 0 and the standard deviation of test scores
B. 0 and 1
C. 1 and the standard deviation of test scores
D. -1 and 1

A

Correct Answer is: A
This question is best answered with reference to the formula for the standard error of measurement, which appears in the Psychology-Test Construction section: the standard deviation of test scores multiplied by the square root of one minus the reliability coefficient (SEM = SDx × √(1 − rxx)). You need to know the minimum and maximum values of the reliability coefficient: 0 and +1.0, respectively. If the reliability coefficient is +1.0, the formula gives a standard error of measurement of 0, which is its minimum value. And when the reliability coefficient is 0, the formula gives a standard error of measurement equal to the standard deviation of test scores, which is its maximum value.
Additional Information: Standard Error of Measurement
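As a quick check of the two endpoints, the formula can be sketched in Python (the SD of 15 is only an illustrative value):

```python
import math

def sem(sd_x: float, reliability: float) -> float:
    """Standard error of measurement: SEM = SD_x * sqrt(1 - r_xx)."""
    return sd_x * math.sqrt(1 - reliability)

# Endpoints of the reliability coefficient (0 and +1.0):
print(sem(15, 1.0))  # 0.0 -- perfectly reliable test, minimum SEM
print(sem(15, 0.0))  # 15.0 -- SEM equals the SD of test scores, maximum SEM
```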

66
Q
What value is preferred for the average item difficulty level in order to maximize the size of a test's reliability coefficient?
Select one:
A. 10
B. 0.5
C. 1
D. 0
A

Correct Answer is: B
The item difficulty index ranges from 0 to 1, and it indicates the proportion of examinees who answered the item correctly. Items with a moderate difficulty level, typically 0.5, are preferred because moderate difficulty helps to maximize the test’s reliability.
Additional Information: Item Difficulty
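The reason a difficulty level of .50 is preferred is that item score variance, p(1 − p), is largest there, and greater item variance supports higher reliability. A minimal sketch:

```python
# Item score variance p * (1 - p) for several difficulty (p) values
variances = {p: p * (1 - p) for p in (0.0, 0.25, 0.5, 0.75, 1.0)}

print(max(variances, key=variances.get))  # 0.5 -- variance peaks at p = .50
print(variances[0.5])                     # 0.25, the maximum item variance
```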

67
Q

If a predictor test has a validity of 1.0, the standard error of estimate would be
Select one:
A. equal to the standard deviation of the criterion measure.
B. 1
C. 0
D. unknown; there is not enough information to answer the question.

A

Correct Answer is: C
A validity coefficient and the standard error of estimate are both measures of the accuracy of a predictor test. A validity coefficient is the correlation between scores on a predictor and a criterion (outcome) measure. A coefficient of 1.0 reflects a perfect correlation; it would mean that one would always be able to perfectly predict, without error, the scores on the outcome measure. The standard error of estimate indicates how much error one can expect in the prediction or estimation process. If a predictor test has perfect validity, there would be no error of estimate; you would always know the exact score on the outcome measure just from the score on the predictor. Therefore, the closer the validity coefficient is to 1.0, the smaller the value of the standard error of estimate, and if the coefficient were 1.0, the standard error of estimate would be 0.
Additional Information: Standard Error of Estimate
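The same logic can be checked numerically from the formula SEE = SD_y × √(1 − r²); the criterion SD of 10 is chosen only for illustration:

```python
import math

def see(sd_y: float, validity: float) -> float:
    """Standard error of estimate: SEE = SD_y * sqrt(1 - r_xy**2)."""
    return sd_y * math.sqrt(1 - validity ** 2)

print(see(10, 1.0))  # 0.0 -- perfect validity means error-free prediction
print(see(10, 0.0))  # 10.0 -- zero validity: error equals the criterion SD
```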

68
Q
To determine two raters' level of agreement on a test you would use:
Select one:
A. Kappa coefficient
B. Discriminant validity
C. Percentage of agreement
D. Item response theory
A

Correct Answer is: A
There are a number of ways to estimate interscorer reliability, but the most common involves calculating a correlation coefficient between the scores of two different raters. The Kappa coefficient is a measure of the agreement between two judges who each rate a set of objects using a nominal scale.
Additional Information: Interscorer Reliability
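A minimal hand computation of Cohen's kappa, (p_o − p_e)/(1 − p_e), for two raters assigning nominal categories (the ratings below are made up for illustration):

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e) for two raters' nominal ratings."""
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    # Observed agreement: proportion of items the raters agree on
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: expected overlap given each rater's category rates
    p_e = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

a = ["yes", "yes", "no", "no", "yes", "no"]
b = ["yes", "no", "no", "no", "yes", "yes"]
print(round(cohens_kappa(a, b), 2))  # 0.33 -- agreement beyond chance
```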

69
Q

The sensitivity of a screening for a psychological disorder refers to
Select one:
A. the ratio of correct to incorrect diagnostic decisions its use results in.
B. the proportion of correct diagnostic decisions its use results in.
C. the proportion of individuals without the disorder it identifies.
D. the proportion of individuals with the disorder it identifies.

A

Correct Answer is: D
In any test used to make a “yes/no” decision (e.g., screening tests, medical tests such as pregnancy tests, and job selection tests in some cases), the term “sensitivity” refers to the proportion of correctly identified cases, i.e., the ratio of examinees whom the test correctly identifies as having the characteristic to the total number of examinees who actually possess the characteristic. You can also conceptualize sensitivity in terms of true positives and false negatives. A “positive” on a screening test means that the test identified the person as having the condition, while a “negative” is someone classified by the test as not having the condition. The terms “true” and “false” in this context refer to the accuracy or correctness of test results. Therefore, sensitivity can be defined as the ratio of true positives (people with the condition whom the test correctly detects) to the sum of true positives and false negatives (all the examinees who have the condition).
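In code, with hypothetical counts of true/false positives and negatives:

```python
def sensitivity(tp: int, fn: int) -> float:
    """Proportion of people WITH the condition whom the test detects."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Proportion of people WITHOUT the condition whom the test clears."""
    return tn / (tn + fp)

# Hypothetical screening: of 100 people with the disorder, 90 test positive;
# of 100 people without it, 70 test negative.
print(sensitivity(tp=90, fn=10))  # 0.9
print(specificity(tn=70, fp=30))  # 0.7
```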

70
Q
Which of the following would be used to measure the internal consistency of a test?
Select one:
A. kappa coefficient
B. test-retest reliability
C. split-half reliability
D. alternate forms reliability
A

Correct Answer is: C
Internal consistency is one of several types of reliability. As its name implies, it is concerned with the consistency within a test, that is, the correlations among the different test items. Split-half reliability is one of the measures of internal consistency and involves splitting a test in two and correlating the two halves with each other. Other measures of internal (inter-item) consistency are the Kuder-Richardson Formula 20 (for dichotomously scored items) and Cronbach’s coefficient alpha (for multiple-scored items).
Test-retest reliability is not concerned with internal consistency but, rather, with the stability of a test over time, and uses the correlations of scores between different administrations of the same test. Alternate forms reliability is concerned with the equivalence of different versions of a test. And the kappa coefficient is used as a measure of inter-rater reliability, that is, the amount of agreement between two raters.
Additional Information: Internal Consistency Reliability

71
Q
As a measure of test reliability, an internal consistency coefficient would be least appropriate for a
Select one:
A. vocational aptitude test.
B. intelligence test.
C. power test.
D. speed test.
A

Correct Answer is: D
Tests can be compared to each other in terms of whether they emphasize power or speed. A pure speed test contains relatively easy items and has a strict time limit; it is designed to measure examinees’ speed of response. A pure power test supplies enough time for most examinees to finish and contains items of varying difficulty. Power tests are designed to assess examinees’ knowledge or ability in whatever content domain is being measured. Many tests measure both power and speed. An internal consistency reliability coefficient measures the correlation of responses to different items within the same test. On a pure speed test, all items answered are likely to be correct. As a result, the correlation between responses is artificially inflated; therefore, for speed tests, other measures of reliability, such as test-retest or alternate forms, are more appropriate.
Additional Information: Speed, Power, and Mastery Tests

72
Q
A company wants its clerical employees to be very efficient, accurate and fast. Examinees are given a perceptual speed test on which they indicate whether two names are exactly identical or slightly different. The reliability of the test would be best assessed by:
Select one:
A. test-retest
B. Cronbach's coefficient alpha
C. split-half
D. Kuder-Richardson Formula 20
A

Correct Answer is: A
Perceptual speed tests are highly speeded and consist of very easy items that every examinee, it is assumed, could answer correctly with unlimited time. The best way to estimate the reliability of speed tests is to administer separately timed forms and correlate these; therefore, using a test-retest or alternate forms coefficient would be the best way to assess the reliability of the test in this question.
The other response choices are all methods for assessing internal consistency reliability. These are useful when a test is designed to measure a single characteristic, when the characteristic measured by the test fluctuates over time, or when scores are likely to be affected by repeated exposure to the test. However, they are not appropriate for assessing the reliability of speed tests because they tend to produce spuriously high coefficients.
Additional Information: Reliability

73
Q

When conducting a factor analysis, an oblique rotation is preferred when:
Select one:
A. more than two factors have been extracted.
B. the underlying traits are believed to be dependent.
C. the assumption of homoscedasticity has been violated.
D. the number of factors is equal to the number of tests.

A

Correct Answer is: B
In the context of factor analysis, “oblique” means correlated or dependent. (“Orthogonal” means uncorrelated or independent.)
Additional Information: Interpreting and Naming the Factors

74
Q
When constructing an achievement test, which of the following would be useful for comparing total test scores of a sample of examinees to the proportion of examinees who answer each item correctly?
Select one:
A. classical test theory
B. item response theory
C. generalizability theory
D. item utility theory
A

Correct Answer is: B
The question describes the kind of information that is provided in an item response curve, which is constructed for each item to determine its characteristics when using item response theory as the basis for test development. (Note that there is no such thing as “item utility theory.”)
Additional Information: Item Response Theory and Item Response Curve

75
Q

The rotation of factors can be either orthogonal or oblique in factor analysis. An oblique rotation would be chosen when the:
Select one:
A. effects of one or more variables have been removed from X and Y.
B. effects of one or more variables have been removed from X only.
C. variables included in the analysis are uncorrelated.
D. variables included in the analysis are correlated.

A

Correct Answer is: D
An oblique rotation is used when the variables included in the analysis are considered to be correlated.
effects of one or more variables have been removed from X and Y.

This choice describes semi-partial correlation.

effects of one or more variables have been removed from X only.

This describes partial correlation.

variables included in the analysis are uncorrelated.

When the variables included in the analysis are believed to be uncorrelated, an orthogonal rotation is used.

Additional Information: Interpreting and Naming the Factors

76
Q

Which statement is most true about validity?
Select one:
A. Validity is never higher than the reliability coefficient.
B. Validity is never higher than the square of the reliability coefficient.
C. Validity is never higher than the square root of the reliability coefficient.
D. Validity is never higher than 1 minus the reliability coefficient.

A

Correct Answer is: C
A test’s reliability sets an upper limit on its criterion-related validity. Specifically, a test’s validity coefficient can never be higher than the square root of its reliability coefficient. In practice, a validity coefficient will never be that high, but, theoretically, that’s the upper limit.
Additional Information: Relationship between Reliability and Validity
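As a numeric illustration of this ceiling (the reliability values of .81 and .49 are arbitrary examples):

```python
import math

def max_validity(reliability: float) -> float:
    """Theoretical upper limit on a validity coefficient: sqrt(r_xx)."""
    return math.sqrt(reliability)

print(round(max_validity(0.81), 2))  # 0.9 -- validity can never exceed this
print(round(max_validity(0.49), 2))  # 0.7
```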

77
Q
The slope of the item response curve, with respect to item response theory, indicates an item's:
Select one:
A. reliability
B. validity
C. difficulty
D. discriminability
A

Correct Answer is: D
The item response curve provides information about an item’s difficulty; its ability to discriminate between those who are high and low on the characteristic being measured; and the probability of correctly answering the item by guessing. The position of the curve indicates the item’s difficulty, and the steeper the slope of the item response curve, the better its ability to discriminate between examinees who are high and low on the characteristic being measured. The item response curve does not indicate reliability or validity.
Additional Information: Item Response Theory and Item Response Curve
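A sketch of the three-parameter logistic item response function often used to draw these curves, where a is discrimination (slope), b is difficulty (position), and c is the guessing parameter; the parameter values below are illustrative only:

```python
import math

def icc(theta: float, a: float, b: float, c: float = 0.0) -> float:
    """3PL item response function:
    P(theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))"""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# Two items with equal difficulty (b = 0) but different discrimination (a):
spread_steep = icc(1, a=2.0, b=0.0) - icc(-1, a=2.0, b=0.0)
spread_flat = icc(1, a=0.5, b=0.0) - icc(-1, a=0.5, b=0.0)

print(round(spread_steep, 2))  # 0.76 -- steep slope separates examinees well
print(round(spread_flat, 2))   # 0.24 -- flat slope discriminates poorly
```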

78
Q
A factory requires all job applicants to complete a Biographical Information Blank (BIB) which asks, among other things, for details about the applicant's personal interests and skills. Many of the applicants, upon seeing the test, become very angry and subsequently file a class action suit against the company. The problem with this test seems to be that it lacks:
Select one:
A. face validity
B. content validity
C. construct validity
D. predictive validity
A

Correct Answer is: A
Biographical Information Blanks (BIB) have actually been found to be highly predictive of job success and only slightly less valid than cognitive ability tests for predicting job performance. However, they often lack face validity since some of the questions do not appear to the applicants to have anything to do with job performance.
Additional Information: Content Validity

79
Q
A person obtains a raw score of 70 on a Math test with a mean of 50 and an SD of 10; a percentile rank of 84 on a History test; and a T-score of 65 on an English test. What is the relative order of each of these scores?
Select one:
A. History >> Math >> English
B. Math >> History >> English
C. History >> English >> Math
D. Math >> English >> History
A

Correct Answer is: D
Before we can compare different forms of scores, we must transform them into a common standardized measure. On a Math test with a mean of 50 and an SD of 10, a raw score of 70 falls 2 standard deviations above the mean. Assuming a normal distribution of scores, a percentile rank of 84 on a History test is equivalent to 1 standard deviation above the mean. If you haven’t memorized that, you could still figure it out: remember that 50% of all scores in a normal distribution fall below the mean and 50% fall above it, and 68% of scores fall within +/- 1 SD of the mean. If you divide 68% by 2, you get 34% (the percentage of scores that fall between 0 and +1 SD). If you then add that 34% to the 50% that fall below the mean, you get a percentile rank of 84. Thus, the 84th percentile is equivalent to 1 SD above the mean. Finally, looking at the T-score on the English test: T-scores always have a mean of 50 and an SD of 10, so a T-score of 65 is equivalent to 1½ standard deviations above the mean. Comparing the three test scores, the highest score was in Math at 2 SDs above the mean, followed by English at 1½ SDs above the mean, and History at 1 SD above the mean.
Additional Information: Standard Scores
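The conversions in the explanation can be sketched as:

```python
def z_from_raw(raw: float, mean: float, sd: float) -> float:
    return (raw - mean) / sd

def z_from_t(t_score: float) -> float:
    return (t_score - 50) / 10  # T-scores always have mean 50, SD 10

math_z = z_from_raw(70, mean=50, sd=10)  # 2.0 SDs above the mean
english_z = z_from_t(65)                 # 1.5 SDs above the mean
history_z = 1.0  # 84th percentile is +1 SD in a normal distribution

print(math_z > english_z > history_z)  # True: Math >> English >> History
```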

80
Q

Computer-adaptive testing will yield
Select one:
A. more accurate results for high scorers on a test.
B. more accurate results for low scorers on a test.
C. more accurate results for examinees who score in the middle range of a test.
D. equally accurate results across all range of scores on a test.

A

Correct Answer is: D
In computerized adaptive testing, the examinee’s previous responses are used to tailor the test to his or her ability. As a result, scores are measured with roughly equal precision across all ability levels, rather than being most accurate only for examinees in the middle of the score range.

81
Q
Which of the following techniques would be most useful for combining test scores when poor performance on one test can be offset by excellent performance on another:
Select one:
A. multiple baseline
B. multiple hurdle
C. multiple regression
D. multiple cutoff
A

Correct Answer is: C
Multiple regression is the preferred technique for combining test scores in this situation as it is a compensatory technique since a low score on one test can be offset (compensated for) by high scores on other tests.
Multiple baseline is a research design, not a method for combining test scores. Multiple hurdle and multiple cutoff are noncompensatory techniques.
Additional Information: Multiple Correlation and Multiple Regression
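A minimal contrast between the compensatory and noncompensatory approaches (the scores, weights, and cutoffs below are made up for illustration):

```python
def regression_predict(scores, weights, intercept=0.0):
    """Compensatory: a low score on one predictor can be offset by a high
    score on another because scores enter a weighted sum."""
    return intercept + sum(w * s for w, s in zip(weights, scores))

def multiple_cutoff(scores, cutoffs):
    """Noncompensatory: the applicant must meet every cutoff."""
    return all(s >= c for s, c in zip(scores, cutoffs))

# Hypothetical applicant: weak on test 1, strong on test 2.
scores, weights, cutoffs = [40, 95], [0.5, 0.5], [50, 50]
print(regression_predict(scores, weights))  # 67.5 -- the high score compensates
print(multiple_cutoff(scores, cutoffs))     # False -- fails the first cutoff
```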

82
Q
In a study assessing the predictive validity of the SAT test to predict college success, it is found the SAT scores have a statistically significant correlation of .47 with the criterion, first year college GPA. A follow-up study separating the data by gender finds that for a given SAT score, the predicted GPA scores are higher for women than for men. This situation is most clearly an example of
Select one:
A. single group validity.
B. differential validity.
C. differential prediction.
D. adverse impact.
A

Correct Answer is: C
Differential prediction is a bit of a technical term, but in a non-technical way, it can be defined as a case where given scores on a predictor test predict different outcomes for different subgroups. Using the example in the question: if the average predicted GPA for men scoring 500 on the verbal SAT was 2.7, the average predicted GPA for females with the same SAT score was 3.3, and this type of difference is statistically significant across scores on the SAT, then use of the SAT would result in differential prediction based on gender. Differential prediction could result in selection bias in favor of one group at the expense of others. In the example under discussion, if 500 were the cutoff score for college admission, the men selected for admission would be less qualified than the women selected, and there would be a number of women not selected for admission who were equally or more qualified than the men who were selected. So use of the test would not be fair to female candidates.
Regarding the other choices: differential validity means that a test is more valid for one subgroup than another, and single-group validity means that a test is valid for one subgroup but not at all for another. In both cases, the validity coefficient, or the correlation between the predictor and criterion, differs across subgroups. This could be, but is not necessarily, the cause of differential prediction. In our example, it could be that, even though criterion scores are different for men and women at the same SAT score, the SAT predicts those scores at the same accuracy level for both groups (e.g., the score 500 provides the same level of predictive power for both men and women); in that scenario, the validity coefficients would be the same for both groups. Finally, adverse impact occurs when the use of a selection test results in a substantially lower rate of selection for one subgroup as compared to another, specifically, when the selection rate of one subgroup is 80% or less of the selection rate of another. For example, if use of the SAT resulted in 80% of males and 50% of females being admitted to college, the test would have adverse impact against females (50/80 = .625, or 62.5%, which is less than 80%). Since the question contains no information about the selection rates for men and women, this is not the best choice.
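The four-fifths (80%) rule from the adverse-impact example can be sketched as:

```python
def adverse_impact_ratio(focal_rate: float, reference_rate: float) -> float:
    """Selection-rate ratio used in the four-fifths rule."""
    return focal_rate / reference_rate

# Example from the explanation: 50% of women vs. 80% of men admitted.
ratio = adverse_impact_ratio(0.50, 0.80)
print(round(ratio, 3))  # 0.625, i.e., 62.5%
print(ratio < 0.80)     # True -- below four-fifths, adverse impact indicated
```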

83
Q

Item analysis is a procedure used to:
Select one:
A. Determine which items will be retained for the final version of the test
B. Refer to the degree to which items differentiate among examinees
C. Graph depictions of percentages of people
D. Help the IRS with an audit

A

Correct Answer is: A
The procedure used to determine what items will be retained for the final version of a test is the definition of item analysis. The degree to which items discriminate among examinees is the definition of Item Discrimination. A graph that depicts percentages of people is an item characteristic curve.

84
Q

Which of the following would be used to determine the probability that examinees of different ability levels are able to answer a particular test item correctly?
Select one:
A. criterion-related validity coefficient
B. item discrimination index
C. item difficulty index
D. item characteristic curve

A

Correct Answer is: D
Item characteristic curves (ICCs), which are associated with item response theory, are graphs that depict individual test items in terms of the percentage of individuals in different ability groups who answered the item correctly. For example, an ICC for an individual test item might show that 80% of people in the highest ability group, 40% of people in the middle ability group, and 5% of people in the lowest ability group answered the item correctly. Although costly to derive, ICCs provide much information about individual test items, including their difficulty, discriminability, and probability that the item will be guessed correctly.
Additional Information: Item Response Theory and Item Response Curve

85
Q
The kappa statistic is used to evaluate reliability when data are:
Select one:
A. interval or ratio (continuous)
B. nominal or ordinal (discontinuous)
C. metric
D. nonlinear
A

Correct Answer is: B
The kappa statistic is used to evaluate inter-rater reliability, or the consistency of ratings assigned by two raters, when data are nominal or ordinal. Interval and ratio data are sometimes referred to as metric data.
Additional Information: Interscorer Reliability

86
Q
Which of the following would be used to determine how well an examinee did on a test in terms of a specific standard of performance?
Select one:
A. norm-referenced interpretation
B. criterion-referenced interpretation
C. domain-referenced interpretation
D. objectives-referenced interpretation
A

Correct Answer is: B
There are several ways an examinee’s test score can be interpreted. In this question, with a criterion-referenced interpretation, an examinee’s test performance is interpreted in terms of an external criterion, or standard of performance.
norm-referenced interpretation

In a norm-referenced interpretation, an examinee’s test performance is compared to the performance of members of the norm group (other people who have taken the test).

domain-referenced interpretation

Domain-referenced interpretation is used to determine how much of a specific knowledge domain the examinee has mastered.

objectives-referenced interpretation

Objectives-referenced interpretation involves interpreting an examinee’s performance in terms of achievement of instructional objectives.

Additional Information: Criterion-Referenced Interpretation

87
Q
Determining test-retest reliability would be most appropriate for which of the following types of tests?
Select one:
A. brief
B. speed
C. state
D. trait
A

Correct Answer is: D
As the name implies, test-retest reliability involves administering a test to the same group of examinees at two different times and then correlating the two sets of scores. This would be most appropriate when evaluating a test that purports to measure a stable trait, since it should not be significantly affected by the passage of time between test administrations.
Additional Information: Test-Retest Reliability

88
Q
Which of the following is not a norm referenced test?
Select one:
A. GRE
B. drivers license
C. EPPP
D. GED
A

Correct Answer is: B
A norm-referenced score is one that is interpreted in terms of a comparison to others who have taken the same test. Norm-referenced assessment is the method that compares a student with the age or grade-level expectancies of a norm group. It is generally used to sort students rather than to measure individual performance against a standard or criterion. The GRE and intelligence tests used in determining eligibility for special education programs are examples of norm-referenced measures.
Additional Information: Norm-Referenced Interpretation

89
Q
Which of the following descriptive words for tests are most opposite in nature?
Select one:
A. speed and power
B. subjective and aptitude
C. norm-referenced and standardized
D. maximal and ipsative
A

Correct Answer is: A
Pure speed tests and pure power tests are opposite ends of a continuum. A speed test is one with a strict time limit and easy items that most or all examinees are expected to answer correctly. Speed tests measure examinees’ response speed. A power test is one with no or a generous time limit but with items ranging from easy to very difficult (usually ordered from least to most difficult). Power tests measure level of content mastered.
Additional Information: Speed, Power, and Mastery Tests

90
Q
If a student scored between 1 and 2 standard deviations above the mean in a normal distribution of scores, you could conclude that the student's
Select one:
A. T-score is greater than 70
B. z-score is greater than 2
C. percentile rank is between 68 and 95
D. percentile rank is between 84 and 98
A

Correct Answer is: D
If a score falls between 1 and 2 standard deviations above the mean in a normal distribution, we can readily conclude that its T-score is between 60 and 70 and its z-score is between 1 and 2 (since z-scores are stated in standard deviation units). We can, therefore, eliminate the two choices “T-score is greater than 70” and “z-score is greater than 2.”
To determine percentile ranks you can do a simple calculation if you know the areas under a normal curve. Remember that 50% of all scores in a normal distribution fall below the mean and 50% fall above it, and 68% of scores fall within +/- 1 SD of the mean. If you divide 68% by 2, you get 34% (the percentage of scores that fall between 0 and +1 SD). If you then add that 34% to the 50% that fall below the mean, you get a percentile rank of 84. Thus, the 84th percentile is equivalent to 1 SD above the mean. The same calculation is used for determining the percentile rank at 2 standard deviations: since 95% of all scores fall within +/- 2 SD, we divide 95% by 2, which equals 47.5, and add that to the 50% falling below the mean, which totals 97.5 (rounded off, 98). Thus, the percentile rank is between 84 and 98.

Additional Information: Standard Scores
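The same percentile arithmetic can be verified with the standard normal CDF, here computed via `math.erf`:

```python
import math

def percentile_rank(z: float) -> float:
    """Percentile rank of a z-score in a normal distribution:
    100 * Phi(z), with Phi the standard normal CDF."""
    return 100 * 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(round(percentile_rank(1)))  # 84 -- +1 SD
print(round(percentile_rank(2)))  # 98 -- +2 SD
```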