Chapter 5: Reliability Flashcards

1
Q

Reliability

A

Consistency in measurement; the total variance in an observed distribution of test scores equals the sum of the true variance plus the error variance

2
Q

Reliability Coefficient

A

Index of reliability; proportion that indicates the ratio between the true score variance on a test and the total variance

3
Q

Concept of Reliability

A
X = T + E
X = Observed score
T = True score
E = Error
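The decomposition on this card can be sketched numerically. The scores below are invented for illustration, and the variance function uses the population (divide-by-N) form.

```python
# Classical true score model: every observed score X is the sum of a
# true score T and an error component E (hypothetical data).
true_scores = [50, 55, 60, 65, 70]
errors = [2, -2, 0, -2, 2]          # chosen to be uncorrelated with T
observed = [t + e for t, e in zip(true_scores, errors)]

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

# When T and E are uncorrelated, total variance = true variance +
# error variance, and the reliability coefficient is their ratio.
total_var = variance(observed)
reliability = variance(true_scores) / total_var
```

Here `variance(true_scores)` is 50 and `variance(errors)` is 3.2, so the total variance is 53.2 and the reliability coefficient is 50/53.2, roughly .94.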
4
Q

True Score Model

A

Holds that the magnitude of the presence of a certain psychological trait, as measured by a test of that trait, is due to the true amount of that trait plus other (error) factors

5
Q

Variance

A

Statistic useful in describing sources of a test score variability; useful because it can be broken down into components

6
Q

True Variance

A

Variance from true differences

7
Q

Error Variance

A

Variance from irrelevant, random sources

8
Q

Reliability of a Test

A

The greater the proportion of the total variance attributed to true variance, the more reliable the test

9
Q

Sources of Error Variance

A

Test Construction
Administration
Scoring
Interpretation

10
Q

Item/Content Sampling

A

Terms that refer to variation among items within a test as well as to variation among items between tests

11
Q

Challenge in Test Development

A

To maximize the proportion of the total variance that is true variance and to minimize the proportion that is error variance

12
Q

Factors related to the Test Environment

A

Room temperature
Level of Lighting
Amount of ventilation and noise
Instrument used to enter responses and even the writing surface on which responses are written

13
Q

Factors related to Testtaker variables

A

Pressing emotional problems
Physical discomfort
Lack of sleep
Effects of drugs or medication

14
Q

Factors related to Examiner-Related Variables

A

Examiner’s physical appearance and demeanor; presence or absence of an examiner

15
Q

Scoring and Scoring systems

A

Technical glitches may contaminate data

16
Q

Test-Retest Method

A

Using the same instrument to measure the same thing at two points in time

17
Q

Test-Retest Reliability

A

Result of a reliability evaluation; estimate of reliability obtained by correlating pairs of scores from the same people on two different administrations of the same test

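A test-retest coefficient is simply a Pearson r between the two administrations; a minimal sketch with invented scores for five testtakers:

```python
# Test-retest reliability: correlate the same people's scores from two
# administrations of the same test (hypothetical data).
time1 = [10, 12, 14, 16, 18]
time2 = [11, 12, 15, 15, 19]

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

retest_reliability = pearson_r(time1, time2)   # near 1 = stable scores
```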
18
Q

Test-Retest Measure

A

Appropriate when evaluating the reliability of a test that purports to measure something that is relatively stable over time

19
Q

Coefficient of Stability

A

Estimate of test-retest reliability when the interval between testing is greater than six months

20
Q

Coefficient of Equivalence

A

Alternate-forms or parallel-forms coefficient of reliability

21
Q

Parallel Forms

A

Exist when, for each form of the test, the means and the variances of observed test scores are equal; scores obtained on parallel forms correlate equally with the true score; scores obtained on parallel tests correlate equally with other measures

22
Q

Alternate Forms

A

Different versions of a test that have been constructed so as to be parallel; designed to be equivalent with respect to variables such as content and level of difficulty

23
Q

Similarity between obtaining estimates of alternate forms reliability and parallel forms reliability and obtaining an estimate of test-retest reliability

A

Two test administrations with the same group are required
Test scores may be affected by factors such as motivation, fatigue, or intervening events such as practice, learning or therapy

24
Q

Item Sampling

A

Inherent in the computation of an alternate- or parallel-forms reliability coefficient; testtakers may do better or worse on a specific form of the test not as a function of their true ability but simply because of the particular items that were selected for inclusion in the test

25
Q

Internal Consistency Estimate of Reliability/Estimate of Inter-Item Consistency

A

Obtaining an estimate of the reliability of a test without developing an alternate form of the test and without having to administer the test twice to the same people

26
Q

Split-Half Reliability

A

Obtained by correlating two pairs of scores obtained from equivalent halves of a single test administered once; a useful measure of reliability when it is impractical or undesirable to assess reliability with two tests or to administer a test twice

27
Q

Steps to compute a Coefficient of Split-Half Reliability

A

Divide the test into equivalent halves.
Calculate a Pearson r between scores on the two halves of the test.
Adjust the half-test reliability using the Spearman-Brown formula.

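The three steps on this card can be sketched directly; the half-test scores below are invented, and pearson_r is an ordinary Pearson correlation:

```python
# Split-half reliability: correlate two equivalent halves of one test,
# then step up the result with the Spearman-Brown formula.
# Hypothetical half-test scores for five testtakers (odd vs. even items).
odd_half = [8, 6, 9, 4, 7]
even_half = [7, 6, 10, 5, 6]

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r_half = pearson_r(odd_half, even_half)        # step 2: half-test r
r_full = (2 * r_half) / (1 + r_half)           # step 3: Spearman-Brown
```

The adjusted full-test estimate is always higher than the half-test correlation, since a longer test samples the domain more thoroughly.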
28
Q

To Split a Test

A

Randomly assign items to one or the other half of the test; assign odd-numbered items to one half of the test and even-numbered items to the other half

29
Q

Odd-Even Reliability

A

Assign odd-numbered items to one half of the test and even-numbered items to the other half

30
Q

Mini Parallel Forms

A

Each half equal to the other in format, stylistic, statistical, and related aspects

31
Q

Spearman-Brown Formula

A

Allows a test developer or user to estimate internal consistency reliability from a correlation of two halves of a test; a specific application is to estimate the reliability of a test that is lengthened or shortened by any number of items; also used to determine the number of items needed to attain a desired level of reliability

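The general form of the formula, and its use in reverse to find the length needed for a target reliability, can be sketched as follows (the r values are illustrative):

```python
# Spearman-Brown formula: predicted reliability when the length of a
# test is multiplied by a factor n (n > 1 lengthens, n < 1 shortens).
def spearman_brown(r, n):
    return (n * r) / (1 + (n - 1) * r)

# Solved for n: the length factor needed to reach a target reliability.
def length_factor(r, r_target):
    return (r_target * (1 - r)) / (r * (1 - r_target))

r_current = 0.70
r_doubled = spearman_brown(r_current, 2)       # doubling the test
n_needed = length_factor(r_current, 0.90)      # factor to reach .90
```

Doubling a test with r = .70 predicts roughly .82, and reaching .90 would take about 3.9 times as many items, assuming the added items are equivalent in content and difficulty (the rule on the next card).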
32
Q

In adding items to increase test reliability to a desired level

A

The rule is that new items must be equivalent in content and difficulty so that the longer test still measures what the original test measured

33
Q

When Internal Consistency Estimates of Reliability are Inappropriate

A

When measuring the reliability of heterogeneous tests and speed tests

34
Q

Inter-item Consistency

A

Refers to the degree of correlation among all the items on a scale; calculated from a single administration of a single form of a test; useful in assessing the homogeneity of a test

35
Q

Homogeneity

A

Degree to which a test measures a single factor; extent to which items in a scale are unifactorial

36
Q

Heterogeneity

A

Degree to which a test measures different factors; composed of items that measure more than one trait

37
Q

Nature of Homogeneous Test

A

The more homogeneous a test is, the more inter-item consistency it can be expected to have;
Desirable because it allows relatively straightforward test-score interpretation

38
Q

Testtakers with the same score on a Homogeneous Test

A

Have similar abilities in the area tested

39
Q

Testtakers with the same score on a Heterogeneous Test

A

May have different abilities

40
Q

Homogeneous Test

A

Insufficient tool for measuring multifaceted psychological variables such as intelligence or personality

41
Q

G. Frederic Kuder & M.W. Richardson

A

Developed their own measures for estimating reliability; Kuder-Richardson Formula 20 (KR-20)

42
Q

Kuder Richardson Formula 20 (KR-20)

A

The most popular of these formulas; used for estimating the inter-item consistency of tests with dichotomous items

43
Q

Where test items are highly Homogeneous

A

KR-20 and split-half reliability estimates will be similar

44
Q

Where test items are highly Heterogeneous

A

KR-20 will yield lower reliability estimates than the split-half method

45
Q

Dichotomous Items

A

Items that can be scored right or wrong, such as multiple choice items

46
Q

Test Battery

A

A selected assortment of tests and assessment procedures in the process of evaluation; typically composed of tests designed to measure different variables

47
Q

r KR20

A

The Kuder-Richardson Formula 20 Reliability Coefficient
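The KR-20 computation can be sketched on a tiny matrix of invented right/wrong responses (population variance is used here; some texts divide by n − 1):

```python
# KR-20: r = (k / (k - 1)) * (1 - sum(p*q) / total score variance),
# where p is the proportion passing each item and q = 1 - p.
# Rows are testtakers, columns are dichotomous (1/0) items.
responses = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
k = len(responses[0])
n = len(responses)

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

total_var = variance([sum(row) for row in responses])
sum_pq = sum(
    (sum(row[j] for row in responses) / n)
    * (1 - sum(row[j] for row in responses) / n)
    for j in range(k)
)
kr20 = (k / (k - 1)) * (1 - sum_pq / total_var)
```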

48
Q

KR-21

A

Used if there is reason to assume that all the test items have approximately the same degree of difficulty; outdated in an era of calculators and computers

49
Q

Coefficient Alpha

A

Variant of the KR-20 that has received the most acceptance and is in widest use today; the mean of all possible split-half correlations, corrected by the Spearman-Brown formula; appropriate for use on tests containing nondichotomous items; the preferred statistic for obtaining an estimate of internal consistency reliability; the formula yields an estimate of the mean of all possible test-retest, split-half coefficients; widely used as a measure of reliability, in part because it requires only one administration of the test; gives information about the test scores, not the test itself
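Coefficient alpha can be sketched on a tiny matrix of invented 1-5 ratings (population variance is used throughout):

```python
# Coefficient alpha: (k / (k - 1)) * (1 - sum of item variances /
# total score variance). Rows are testtakers, columns are rated items.
ratings = [
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
]
k = len(ratings[0])

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

item_vars = [variance([row[j] for row in ratings]) for j in range(k)]
total_var = variance([sum(row) for row in ratings])
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

Unlike KR-20, nothing here requires the items to be scored 0/1, which is why alpha applies to rating scales and other nondichotomous formats.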

50
Q

Coefficient Alpha Result (coefficient alpha is calculated to help answer questions about how similar sets of data are)

A

Ranges in value from 0 to 1; a negative value of alpha is possible in calculation but, by convention, is reported as zero

51
Q

Scale of Coefficient of Alpha

A

0 = absolutely no similarity
1 = perfectly identical
A negative alpha is usually reported as zero

52
Q

Inter-Scorer Reliability

A

Degree of agreement or consistency between two or more scorers (or judges or raters) with regard to a particular measure

53
Q

Coefficient of Inter-scorer Reliability

A

A way to determine the degree of consistency among scorers

54
Q

Approaches to the Estimation of Reliability

A

Test-Retest
Alternate or Parallel Forms
Internal or Inter-Item Consistency

55
Q

How High a Coefficient of Reliability Should Be

A

On a continuum relative to the purpose and importance of the decisions to be made on the basis of scores on the test

56
Q

Considerations of the Nature of The Testing Itself

A

Test items are homogeneous or heterogeneous in nature
The characteristic, ability, or trait being measured is presumed to be dynamic or static
The range of test scores is or is not restricted
Test is a speed or a power test
Test is or is not criterion-referenced

57
Q

Sources of Variance in a Hypothetical Test

A
True Variance 67%
Error due to Test Construction 18%
Administration Error 5%
Unidentified Error 5%
Scorer Error 5%
58
Q

Homogeneity of Test Items

A

A test is homogeneous in items if it is functionally uniform throughout

59
Q

Heterogeneity of Test Items

A

An estimate of internal consistency might be low relative to a more appropriate estimate of test-retest reliability

60
Q

Dynamic Characteristic

A

A trait, state, or ability presumed to be ever-changing as a function of situational and cognitive experiences; obtained measurements would be expected to change from one testing to the next, so an internal consistency estimate is more appropriate than the test-retest or alternate-forms methods

61
Q

Static Characteristic

A

Trait, state, or ability presumed to be relatively unchanging; the obtained measurement would not be expected to vary significantly as a function of time, so either the test-retest or the alternate-forms method would be appropriate

62
Q

Restriction of Range/Variance

A

If the variance of either variable in a correlational analysis is restricted by the sampling procedure used, then the resulting correlation coefficient tends to be lower; if the variance of either variable is inflated by the sampling procedure, then the resulting correlation coefficient tends to be higher

63
Q

Power Test

A

When the time limit is long enough to allow testtakers to attempt all items, and some items are so difficult that no testtaker is able to obtain a perfect score

64
Q

Speed Test

A

Generally contains items of a uniform level of difficulty so that, when given generous time limits, all testtakers should be able to complete all the test items correctly; based on performance speed; the time limit is established so that few, if any, testtakers will be able to complete the entire test

65
Q

Reliability Estimate of A Speed Test

A

Based on performance from two independent testing periods using one of the following:
Test-Retest Reliability
Alternate-Forms Reliability
Split-Half Reliability from two separately timed half tests

66
Q

If Split Half Procedure is Used for a Speed Test

A

The obtained reliability coefficient is for a half test and should be adjusted using the Spearman-Brown formula

67
Q

Speed Test Administered Once & Measure of Internal Consistency is Calculated

A

The result will be a spuriously high reliability coefficient; for two people, one who completes 82 items of a speed test and another who completes 61 items of the same test, the correlation between the halves will be close to 1 yet says nothing about response consistency

68
Q

Criterion-Referenced Test

A

Designed to provide an indication of where a testtaker stands with respect to some variable or criterion, such as an educational or a vocational objective; tends to contain material that has been mastered in hierarchical fashion; tends to be interpreted in pass-fail terms, and any scrutiny of performance on individual items tends to be for diagnostic and remedial purposes

69
Q

Test-Retest Reliability Estimate

A

Based on the correlation between the total scores on two administrations of the same test

70
Q

Alternate-Forms Reliability Estimate

A

Based on the correlation between scores on two forms of the test; a split-half estimate, by contrast, is based on the correlation between scores on two halves of the test and is then adjusted using the Spearman-Brown formula to obtain a reliability estimate of the whole test

71
Q

Generalizability Theory/Domain Sampling Theory

A

Seeks to estimate the extent to which specific sources of variation under defined conditions are contributing to the test score; a test's reliability is conceived of as an objective measure of how precisely the test score assesses the domain from which the test draws a sample

72
Q

Domain of Behavior

A

Universe of items that could conceivably measure that behavior; hypothetical construct: one that shares certain characteristics with (and is measured by) the sample of items that make up the test

73
Q

Generalizability Theory

A

May be viewed as an extension of true score theory wherein the concept of a universe score replaces that of a true score; developed by Lee J. Cronbach; given the same conditions of all the facets in the universe, the exact same test score should be obtained

74
Q

Lee J. Cronbach

A

Encouraged test developers and researchers to describe the details of the particular test situation (universe) leading to a specific test score

75
Q

Universe

A

Described in terms of its facets

76
Q

Facets

A

Include the number of items in the test, the amount of training the test scorers have had, and the purpose of the test administration

77
Q

Universe Score

A

The test score obtained under a particular set of facet conditions; analogous to a true score in the true score model

78
Q

Generalizability Study

A

Examines how generalizable scores from a particular test are if the test is administered in different situations; examines how much of an impact different facets of the universe have on the test score

79
Q

Coefficients of Generalizability

A

Influence of particular facets on the test score; similar to reliability coefficients in the true score model

80
Q

Decision Study

A

Developers examine the usefulness of test scores in helping the test user make decisions; designed to tell the test user how test scores should be used and how dependable those scores are as a basis for decisions, depending on the context of their use

81
Q

Item Response Theory

A

Provides a way to model the probability that a person with X ability will be able to perform at a level of Y; stated in terms of personality assessment, it models the probability that a person with X amount of a particular personality trait will exhibit Y amount of that trait on a personality test designed to measure it; not a term used to refer to a single theory or method

82
Q

Latent

A

Physically unobservable

83
Q

Latent-Trait Theory

A

Synonym for IRT; proposes models that describe how the latent trait influences performance on each test item; the trait theoretically can take on values from negative infinity to positive infinity

84
Q

Characteristics of Items within an IRT Framework

A

Difficulty Level of an Item

Item’s Level of Discrimination

85
Q

Difficulty

A

Refers to the attribute of not being easily accomplished, solved, or comprehended; May also refer to physical difficulty

86
Q

Physical Difficulty

A

How hard or easy it is for a person to engage in a particular activity

87
Q

Discrimination

A

Signifies the degree to which an item differentiates among people with higher or lower levels of the trait, ability, or whatever it is that is being measured

88
Q

Dichotomous Test Items

A

Test items or questions that can be answered with only one of two alternate responses, such as true-false, yes-no, or correct-incorrect questions

89
Q

Polytomous Test Items

A

Test items or questions with three or more alternative responses, where only one is scored correct or scored as being consistent with a targeted trait or other construct

90
Q

Georg Rasch

A

Developed a group of IRT models; each item on the test is assumed to have an equivalent relationship with the construct being measured by the test;
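The core relationship of the Rasch model can be sketched as a logistic function of the gap between person ability (theta) and item difficulty (b); the values below are illustrative:

```python
import math

# Rasch (one-parameter) IRT model: the probability of a correct
# response depends only on ability minus difficulty; every item is
# assumed to share the same discrimination.
def rasch_probability(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

p_match = rasch_probability(0.0, 0.0)   # ability equals difficulty
p_able = rasch_probability(2.0, 0.0)    # more able person, same item
```

When ability equals difficulty the probability is exactly .5, and it rises toward 1 as theta exceeds b.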

91
Q

Reliability Coefficient

A

Helps the test developer build an adequate measuring instrument
Helps the test user select a suitable test
Its usefulness does not end with test construction and selection

92
Q

Standard Error of Measurement

A

SEM; provides a measure of the precision of an observed test score; provides an estimate of the amount of error inherent in an observed score or measurement; there is an inverse relationship between the SEM and the reliability of a test: the higher the reliability of a test (or individual subtest within a test), the lower the SEM; a tool used to estimate or infer the extent to which an observed score deviates from a true score; the standard deviation of a theoretically normal distribution of test scores obtained by one person on equivalent tests
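The standard formula, SEM = sd × √(1 − r), can be sketched as follows (the sd and reliability values are illustrative):

```python
import math

# Standard error of measurement: the higher the reliability, the
# lower the SEM (the inverse relationship noted on this card).
def standard_error_of_measurement(sd, reliability):
    return sd * math.sqrt(1 - reliability)

sem_reliable = standard_error_of_measurement(15, 0.96)
sem_unreliable = standard_error_of_measurement(15, 0.75)
```

On a scale with sd = 15, a reliability of .96 gives an SEM of 3.0, while a reliability of .75 gives 7.5.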

93
Q

Standard Error of a Score

A

Another term for standard error of measurement; an index of the extent to which an individual's scores vary over tests presumed to be parallel

94
Q

Confidence Interval

A

Range or band of test scores that is likely to contain the true score
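Using the SEM, a confidence interval around an observed score can be sketched as below (the 1.96 multiplier gives a 95% band under a normal error assumption; the score, sd, and reliability are illustrative):

```python
import math

# Confidence interval: observed score plus/minus z * SEM.
def confidence_interval(observed, sd, reliability, z=1.96):
    sem = sd * math.sqrt(1 - reliability)
    return (observed - z * sem, observed + z * sem)

# Hypothetical observed score of 100 on a scale with sd = 15, r = .96,
# so SEM = 3 and the 95% band is 100 plus/minus 5.88.
low, high = confidence_interval(100, 15, 0.96)
```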

95
Q

Standard Error of the Difference

A

A statistical measure that can aid a test user in determining how large a difference should be before it is considered statistically significant
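When the two tests share the same standard deviation, the standard error of the difference reduces to sd × √(2 − r1 − r2); a sketch with illustrative values:

```python
import math

# Standard error of the difference between two scores; a difference
# larger than about 1.96 * SED is significant at roughly p < .05.
def standard_error_of_difference(sd, r1, r2):
    return sd * math.sqrt(2 - r1 - r2)

sed = standard_error_of_difference(15, 0.90, 0.85)
difference_needed = 1.96 * sed
```

With sd = 15 and reliabilities of .90 and .85, the SED is 7.5, so two scores would need to differ by roughly 14.7 points before the difference is considered statistically significant.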

96
Q

Questions that Standard Error of the Difference Between Two Scores can Answer

A

How did this individual’s performance on test 1 compare with his or her performance on test 2?
How did this individual’s performance on test 1 compare with someone else’s performance on test 1?
How did thisindividual’s performance on test 1 compare with someone else’s performance on test 2?