UNIT 5 NEW Flashcards
Could also be used to determine the number of items needed to attain a desired level of reliability
Spearman–Brown formula
sources of error variance of alternate forms
test construction or administration
If interested in the truth independent of measurement, psychologists look for the
construct score
It provides an estimate of the amount of error inherent in an observed score or measurement
Standard Error of Measurement (SEM)
sources of error variance of test-retest
administration
A statistical measure that can aid a test user in determining how large a difference should be before it is considered statistically significant
standard error of difference
a reference to an IRT model with specific assumptions about the underlying distribution
Rasch model
The procedures of this provide a way to model the probability that a person with X ability will be able to perform at a level of Y
Item Response Theory (IRT)
obtained by correlating two pairs of scores obtained from equivalent halves of a single test administered once
split-half reliability
its use is typically to evaluate the homogeneity of a measure (i.e., whether all items are tapping into a single construct)
internal consistency
If test developers or users wish to shorten a test, the __ may be used to estimate the effect of the shortening on the test’s reliability
Spearman–Brown formula
It is a useful measure of reliability when it is impractical or undesirable to assess reliability with two tests or to administer a test twice (because of factors such as time or expense)
split-half reliability
A __ of behavior, or the universe of items that could conceivably measure that behavior, can be thought of as a hypothetical construct
domain
exist when, for each form of the test, the means and the variances of observed test scores are equal
parallel forms
consists of unpredictable fluctuations and inconsistencies of other variables in the measurement process;
sometimes referred to as “noise”;
random error
trait, state, or ability presumed to be ever-changing as a function of situational and cognitive experiences
dynamic characteristic
a statistic that quantifies reliability, ranging from 0 (not at all reliable) to 1 (perfectly reliable)
reliability coefficient
In general, a primary objective in splitting a test in half for the purpose of obtaining a split-half reliability estimate is to create what might be called __
mini-parallel-forms
tied to the measurement instrument used
true score
often used when coding nonverbal behavior
inter-scorer reliability
The influence of particular facets on the test score is represented by _
coefficients of generalizability
to evaluate the relationship between different forms of a measure
alternate forms
terms that refer to variation among items within a test as well as to variation among items between tests
item sampling or content sampling
the inherent uncertainty associated with any measurement, even after care has been taken to minimize preventable mistakes
measurement error
to evaluate the extent to which items on a scale relate to one another
internal consistency
This source of error fluctuates from one testing situation to another with no discernible pattern that would systematically raise or lower scores; it increases or decreases test scores unpredictably
random error
Computation of a coefficient of split-half reliability:
Step 1: Divide the test into equivalent halves.
Step 2: Calculate a Pearson r between scores on
the two halves of the test
Step 3: Adjust the half-test reliability using the
Spearman–Brown formula
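The three steps above can be sketched in Python; the item-response matrix below is purely hypothetical:

```python
import numpy as np

# Hypothetical item-response matrix: rows = testtakers, columns = items (1 = correct)
scores = np.array([
    [1, 0, 1, 1, 0, 1, 1, 1],
    [0, 0, 1, 0, 0, 1, 0, 1],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [1, 0, 0, 1, 0, 0, 1, 0],
    [0, 1, 1, 1, 0, 1, 1, 1],
])

# Step 1: divide the test into equivalent halves (here, an odd-even split)
odd_half = scores[:, 0::2].sum(axis=1)
even_half = scores[:, 1::2].sum(axis=1)

# Step 2: calculate a Pearson r between scores on the two halves
r_half = np.corrcoef(odd_half, even_half)[0, 1]

# Step 3: adjust the half-test reliability using the Spearman-Brown formula
r_whole = (2 * r_half) / (1 + r_half)
```

Note that the adjusted whole-test estimate is always at least as high as the half-test correlation (for positive r), which is why Step 3 is needed.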
The extent to which a testtaker’s score is affected by the content sampled on a test and by the way the content is sampled (i.e., the way in which the item is constructed) is a source of error variance
test construction
Widely used as a measure of reliability, in part because it requires only one administration of the test
coefficient alpha
If the variance of either variable in a correlational analysis is restricted by the sampling procedure used, then the resulting correlation coefficient tends to be __
lower
test items or questions that can be answered with only one of two alternative responses, such as true–false, yes– no, or correct–incorrect questions
dichotomous test items
If two scores each contain error such that in each
case the true score could be higher or lower, then we would want the two scores to be further apart before we conclude that there is a significant difference between them
standard error of difference
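As a sketch, when the two scores come from scales sharing the same standard deviation, the standard error of the difference can be computed from the two reliability coefficients (the values below are hypothetical):

```python
import math

def se_difference(sd: float, r1: float, r2: float) -> float:
    """Standard error of the difference between two scores, assuming both
    scales share the same standard deviation sd: sd * sqrt(2 - r1 - r2)."""
    return sd * math.sqrt(2 - r1 - r2)

# Hypothetical subtests: SD = 15, reliabilities .90 and .85
sed = se_difference(15, 0.90, 0.85)  # 7.5
# A difference larger than about 1.96 * sed would be significant at the .05 level
```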
It is a specific application of a more general formula to estimate the reliability of a test that is lengthened or shortened by any number of items
Spearman-Brown formula
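In its general form, the estimate is r_SB = n·r / (1 + (n − 1)·r), where n is the ratio of the new test length to the old. A minimal sketch:

```python
def spearman_brown(r_xx: float, n: float) -> float:
    """Estimated reliability of a test whose length is changed by a factor of n
    (n > 1 lengthens, n < 1 shortens), given current reliability r_xx."""
    return (n * r_xx) / (1 + (n - 1) * r_xx)

# Doubling a test with reliability .60 raises the estimate to .75
doubled = spearman_brown(0.60, 2)
# Halving the same test lowers the estimate to about .43
halved = spearman_brown(0.60, 0.5)
```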
generally contains items of uniform level of difficulty (typically uniformly low) so that, when given generous time limits, all testtakers should be able to complete all the test items correctly
speed test
In many tests, the advent of computer scoring and a growing reliance on objective, computer-scorable items have virtually eliminated error variance caused by scorer differences
test scoring and interpretation
trait, state, or ability presumed to be relatively unchanging, such as intelligence
static characteristic
approach to reliability evaluation
test-retest method
Portion of variability in test scores that is due to factors unrelated to the construct being measured.
error variance
an estimate of reliability obtained by correlating pairs of scores from the same people on two different administrations of the same test
test-retest reliability
A test is said to be __ in items if it is functionally uniform throughout
homogeneous
When measuring something repeatedly, two influences interfere with accurate measurement:
(1) Time elapses between measurements
(2) The act of measurement can alter what is being estimated
provides a measure of the precision of an observed test score
Standard Error of Measurement (SEM)
typically designed to be equivalent with respect to variables such as content and level of difficulty
alternate forms
A range or band of test scores that is likely to contain the true score
confidence interval
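Combining the two cards above, a confidence interval can be built from the SEM, where SEM = SD·√(1 − r); the scale values below are hypothetical:

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement: SD * sqrt(1 - r_xx)."""
    return sd * math.sqrt(1 - reliability)

def confidence_interval(observed: float, sd: float, reliability: float, z: float = 1.96):
    """Band of scores likely (95% by default) to contain the true score."""
    e = sem(sd, reliability)
    return observed - z * e, observed + z * e

# Hypothetical IQ-style scale: SD = 15, reliability = .89, so SEM is about 5,
# and a 95% interval around an observed score of 100 is roughly 100 +/- 9.8
low, high = confidence_interval(100, 15, 0.89)
```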
if the test is __ in items, an estimate of internal consistency might be low relative to a more appropriate estimate of test-retest reliability
heterogeneous
If the variance of either variable in a correlational analysis is inflated by the sampling procedure, then the resulting correlation coefficient tends to be __
higher
Universe is described in terms of its facets, which include considerations such as —
the number of items in the test,
the amount of training the test scorers have had, and
the purpose of the test administration
Internal consistency estimates of reliability, such as that obtained by use of the Spearman–Brown formula, are inappropriate for measuring the reliability of
heterogeneous and speed tests
seeks to estimate the extent to which specific sources of variation under defined conditions are contributing to the test score
domain sampling theory
obtaining estimates of alternate-forms reliability and parallel-forms reliability is similar in two ways to obtaining an estimate of test-retest reliability:
(1) Two test administrations with the same group are
required
(2) Test scores may be affected by factors such as
motivation, fatigue, or intervening events such as practice, learning, or therapy
statistical procedure of test-retest
Pearson r or Spearman rho
the degree to which a measure predictably overestimates or underestimates a quantity
refers to the degree to which systematic error influences the measurement
bias
An estimate of test-retest reliability may be most appropriate in gauging the reliability of tests that employ outcome measures such as r___
reaction time or perceptual judgments
By determining the reliability of one half of a test, a test developer can use the __ to estimate the reliability of a whole test
Spearman-Brown formula
The relationship between the SEM and the reliability of a test is _
inverse
the higher the reliability of a test (or individual subtest within a test), the lower the SEM
an estimate of the extent to which item sampling and other errors have affected test scores on versions of the same test when, for each form of the test, the means and variances of observed test scores are equal
parallel forms reliability
nature of the test
(1) The test items are homogeneous or heterogeneous in nature;
(2) The characteristic, ability, or trait being measured is presumed to be dynamic or static;
(3) The range of test scores is or is not restricted;
(4) The test is a speed or a power test; and
(5) The test is or is not criterion-referenced
based on the idea that a person’s test scores vary from testing to testing because of variables in the testing situation
generalizability theory
test items or questions with three or more alternative responses, where only one is scored correct or scored as being consistent with a targeted trait or other construct
polytomous test items
Of the three types of estimates of reliability, __ are perhaps the most compatible with domain sampling theory
measures of internal consistency
Given the exact same conditions of all the facets in the universe, the exact same test score should be obtained
▪ This test score is the __, and it is, as Cronbach noted, analogous to a true score in the true score model
universe score
In a homogeneous test, it is reasonable to expect a high degree of
internal consistency
A statistic useful in describing sources of test score variability is the
variance
In ability tests, __ are carryover effects in which the test itself provides an opportunity to learn and practice the ability being measured
practice effects
calculated to help answer questions about how similar sets of data are
coefficient alpha
most frequently used measure of internal consistency, but has several well- known limitations
Cronbach’s alpha
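As a sketch, alpha can be computed from an item-score matrix via α = k/(k − 1) · (1 − Σ item variances / variance of total scores); the demo data is hypothetical:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores),
    where rows are respondents and columns are items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical demo: two perfectly correlated items yield alpha = 1.0
demo = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
```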
refers both to preventable mistakes and to aspects of measurement imprecision that are inevitable
error
The tool used to estimate or infer the extent to which an observed score deviates from a true score
Standard Error of Measurement (SEM)
measurement processes that alter what is measured
Carryover effects
designed to provide an indication of where a testtaker stands with respect to some variable or criterion, such as an educational or a vocational objective
criterion-referenced test
Tests designed to measure one factor, such as one ability or one trait
homogeneous
allows a test developer or user to estimate internal consistency reliability from a correlation between two halves of a test
Spearman-Brown formula
they influence test scores in a consistent direction; either consistently inflate scores or consistently deflate scores
systematic error
Reliable tests give scores that closely approximate __ scores
true score
use: when assessing the stability of various personality traits
test-retest reliability
refers to the portion of variability in test scores that reflects actual differences in the trait, ability, or characteristic the test is designed to measure
true variance
Refers to the degree of correlation among all the
items on a scale
inter-item consistency
Scores on criterion-referenced tests tend to be interpreted in __
pass–fail (or, perhaps more accurately, “master–failed-to-master”) terms
can be used to set the confidence interval for a particular score or to determine whether a score is significantly different from a criterion (such as the cutoff score of 70 described previously)
Standard Error of Measurement (SEM)
an estimate of the reliability of a test can be obtained without developing an alternate form of the test and without having to administer the test twice to the same people
Internal consistency estimate of reliability or estimate of inter-item consistency
Valid tests give scores that closely approximate __ scores
construct
a value that according to CTT genuinely reflects an individual’s ability (or trait) level as measured by a particular test
true score
A reliability estimate of a __ test should be based on performance from two independent testing periods
speed
allows us to estimate, with a specific level of confidence, the range in which the true score is likely to exist
Standard Error of Measurement (SEM)
the degree of the relationship between various forms of a test
Coefficient of equivalence
refers to the proportion of the total variance attributed to true variance
reliability
two types of variance
True variance: variance from true differences
Error variance: variance from irrelevant, random sources
The total variance (σ²) in an observed distribution of test scores
equals the sum of the true variance (σ²_tr) and the error variance (σ²_e): σ² = σ²_tr + σ²_e
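Since reliability is the proportion of total variance attributable to true variance, the decomposition can be illustrated with hypothetical numbers:

```python
# Hypothetical variance decomposition illustrating reliability as the
# proportion of total variance that is true variance
true_var = 80.0   # variance from true differences on the measured trait
error_var = 20.0  # variance from irrelevant, random sources
total_var = true_var + error_var
reliability = true_var / total_var  # 0.8
```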
a person’s standing on a theoretical variable independent of any particular measurement
construct score
it accurately measures internal consistency only under highly specific conditions that are rarely met by real measures
Cronbach’s alpha
The degree of agreement or consistency between two or more scorers (or judges or raters) with regard to a particular measure
Inter-scorer reliability
an estimate of the extent to which these different forms of the same test have been affected by item sampling error, or other error
alternate forms reliability
an estimate of reliability obtained by correlating pairs of scores from the same people on two different administrations of the same test
test-retest reliability
purpose: to evaluate the stability of a measure
test-retest reliability
__ are carryover effects in which repeated testing reduces overall mental energy or motivation to perform on a test
Fatigue effects
the goal of psychological assessment is to
maximize true variance
minimize error variance
signifies the degree to which an item differentiates among people with higher or lower levels of the trait, ability, or whatever it is that is being measured
discrimination
simplest way of determining the degree of consistency among scorers in the scoring of a test
coefficient of inter-scorer reliability
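With two scorers, this coefficient is simply the Pearson r between their ratings; the ratings below are hypothetical:

```python
import numpy as np

# Hypothetical ratings of the same five testtakers by two independent scorers
scorer_a = np.array([4, 3, 5, 2, 4])
scorer_b = np.array([4, 2, 5, 3, 4])

# Coefficient of inter-scorer reliability: Pearson r between the two sets of ratings
r_interscorer = np.corrcoef(scorer_a, scorer_b)[0, 1]
```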
when a time limit is long enough to allow testtakers to attempt all items, and if some items are so difficult that no testtaker is able to obtain a perfect score
power test