Reliability MCQ Flashcards
Which one of the following represents the relationship between reliability and test length?
a. Spearman-Brown formula
b. Cohen’s kappa
c. Coefficient alpha
d. KR-20
a
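The Spearman-Brown prophecy formula predicts the reliability of a test lengthened (or shortened) by a factor n, given the reliability of the original test. A minimal sketch in Python (function name is my own choice):

```python
def spearman_brown(r: float, n: float) -> float:
    """Predicted reliability when a test with reliability r is
    lengthened by a factor n (n = new length / old length)."""
    return (n * r) / (1 + (n - 1) * r)

# Doubling a test with reliability .70 raises the estimate to about .82.
```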
The greatest danger in using the test-retest method of estimating reliability is that the test takers will score differently on the test because of:
a. practice effects.
b. alternate forms.
c. order effects.
d. parallel forms.
a
When a test developer gives the same test to the same group of test takers on two different occasions, he or she can measure:
a. internal consistency.
b. test-retest reliability.
c. split-half reliability.
d. validity.
b
As a rule, adding more questions to a test that measures the same trait or attribute _________ the test’s reliability.
a. can decrease
b. can increase
c. does not affect
d. lowers
b
Which one of the following provides a nonparametric index for scorer agreement when the scores are nominal or ordinal data?
a. Coefficient alpha.
b. Spearman-Brown formula.
c. Inter-rater agreement.
d. Cohen’s kappa.
d
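Cohen's kappa corrects raw percentage agreement for the agreement two raters would reach by chance alone: κ = (p_o − p_e) / (1 − p_e). A stdlib-only illustration with hypothetical yes/no ratings:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning nominal categories
    to the same items, in the same order."""
    n = len(rater_a)
    # Observed agreement: proportion of items both raters labelled the same.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each rater's marginal category proportions.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```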
In the formula X = T + E, the E stands for:
a. equivalence.
b. random error.
c. systematic error.
d. equality.
b
Which one of the following statistics is necessary to calculate a confidence interval around a test score?
a. Standard error of the mean.
b. Intraclass correlation coefficient.
c. Standard error of measurement.
d. Cohen’s kappa.
c
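The standard error of measurement, SEM = SD·√(1 − reliability), is what turns a single observed score into a confidence interval. A sketch, assuming z = 1.96 for a 95% interval:

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement."""
    return sd * math.sqrt(1 - reliability)

def score_ci(score: float, sd: float, reliability: float, z: float = 1.96):
    """Confidence interval around an observed test score."""
    margin = z * sem(sd, reliability)
    return (score - margin, score + margin)

# An IQ score of 100 (SD 15, reliability .91) has SEM 4.5,
# giving a 95% interval of roughly 91.2 to 108.8.
```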
The greatest danger when using alternate/parallel forms is that the:
a. test takers will display practice effects.
b. test takers can be confused by order effects.
c. two forms will not be equivalent.
d. test will be reliable but not valid.
c
Estimating reliability using methods of internal consistency is appropriate only for tests that are:
a. alternate forms.
b. parallel forms.
c. equivalent forms.
d. homogeneous.
d
What is the amount of consistency among scorers’ judgements called?
a. Internal reliability.
b. Interrater reliability.
c. Test-retest reliability.
d. Intrascorer reliability.
b
Which one of the following provides an index of the strength and direction of the linear relationship between two variables?
a. Split-half reliability coefficient.
b. Spearman-Brown formula.
c. Correlation.
d. Interrater agreement.
c
Cronbach’s alpha is a measure of:
Inter-item reliability
Inter-observer reliability
Split-half reliability
Test-retest reliability
Inter-item reliability
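Cronbach's alpha can be computed directly from item-level scores: α = k/(k−1) · (1 − Σ item variances / variance of totals). A stdlib-only sketch (the data layout, one inner list per item, is my own convention):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha; `items` holds one list of scores per item,
    all lists covering the same respondents in the same order."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(pvariance(item) for item in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))
```

Two perfectly correlated items yield α = 1.0; items that share little variance pull α toward 0.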
Which of the following is NOT one of the four types of reliability?
Inter-rater
Test-retest
Predictive
Parallel forms
Predictive
Which of the following statements is NOT correct?
A test-retest reliability coefficient of zero indicates perfect test-retest reliability.
The test-retest reliability coefficient is the correlation between the first and second administration of the test.
The closer each respondent’s scores are on T1 and T2, the more reliable the test measure.
In order to measure the test-retest reliability, we have to give the same test to the same test respondents on two separate occasions.
A test-retest reliability coefficient of zero indicates perfect test-retest reliability.
One way to assess the stability of measures is by computing a Pearson _________.
coefficient criterion.
linear coefficient.
correlation coefficient.
coefficient of determination.
correlation coefficient.
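The Pearson correlation coefficient underlies most of these reliability estimates (test-retest, split-half, stability). A minimal stdlib implementation:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson product-moment correlation between paired lists x and y."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Scores that rise together give r near +1; perfectly opposed scores give -1.
```

Python 3.10+ also ships `statistics.correlation`, which computes the same quantity.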
When you read about reliability in an article, the correlation between two test scores will usually be referred to as the _________ coefficient.
score
internal
reliability
significance
reliability
A group of students took a cognitive development test on Monday and then took the same test again the following week. Most likely, researchers were assessing the __________ reliability of the measure.
test-retest
split-half
odd-even
inter-item
test-retest
Which of the following statements about test-retest reliability is FALSE?
The reliability coefficient of a measure should be at least .80 before the measure is accepted as reliable.
Sometimes test-retest reliability is artificially high because people remember how they answered the first time.
Test-retest reliability is the best form of reliability measurement.
Test-retest reliability is very useful if the variable being measured is expected to stay constant over time.
Test-retest reliability is the best form of reliability measurement.
Which of the following is an indicator of the internal consistency reliability of a measure?
Split-half reliability
Cronbach’s alpha
Inter-rater reliability
Both A and B
Both A and B
When two raters observe the same behaviours, the extent to which their observations are in agreement is called _________?
Cronbach’s alpha
Item-total correlations
Split-half reliability
Inter-rater reliability
Inter-rater reliability
Internal consistency reliability is to Cronbach’s alpha as interrater reliability is to:
Spearman-Brown reliability coefficient
Pearson product-moment correlation coefficient
the item-total correlation
Cohen’s kappa
Cohen’s kappa
With regard to test reliability:
There are different types of reliability
It is seldom an all-or-none matter.
Tests are reliable to different degrees.
All of the above.
All of the above.
In the context of psychometrics, error refers to the component of the observed score on an ability test that:
does not have to do with the ability being measured.
was distorted as a result of examiner error.
may have been measured inaccurately for whatever reason.
was administered solely for experimental reasons.
does not have to do with the ability being measured.
Which of the following is a source of error variance?
Test construction.
Test administration.
Test scoring.
All of the above.
All of the above.
According to true score theory, an individual’s score on a test of extraversion reflects a level of extraversion as defined by that test, and that level is presumed to be:
the test-taker’s “true” level of extraversion.
only an estimate of the test-taker’s true level of extraversion.
greater than the degree of error inherent in the score.
less than or equal to the degree of error inherent in the score.
only an estimate of the test-taker’s true level of extraversion.
Coefficient alpha is conceptually:
the variance of all possible sources of error variance.
the mean of all possible split-half correlations.
the standard deviation of all possible sources of variation.
the estimate of inter-scorer reliability that is most robust.
the mean of all possible split-half correlations.
The degree of correlation among all of the items on a scale:
is referred to as inter-item consistency
may be estimated by means of KR-20.
may be estimated by means of the Cronbach’s formula.
all of the above.
all of the above.
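For dichotomous (0/1) items, the KR-20 formula named above is the special case of alpha: KR-20 = k/(k−1) · (1 − Σpq / variance of totals). A sketch (the data layout, one inner list per item, is my own convention):

```python
def kr20(items):
    """KR-20 for dichotomous items; `items` holds one 0/1 list per item,
    all lists covering the same respondents in the same order."""
    k = len(items)
    n = len(items[0])
    totals = [sum(col) for col in zip(*items)]
    # p = proportion passing each item, q = 1 - p; p*q is the item variance.
    pq = sum((sum(it) / n) * (1 - sum(it) / n) for it in items)
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n
    return k / (k - 1) * (1 - pq / var_t)
```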
The correlation between scores on two different administrations of the same test is called:
coefficient of stability
coefficient of equivalence
internal consistency
Cronbach’s alpha
coefficient of stability
Because the split-half method correlates two half-length tests, it tends to _________ reliability relative to the full-length test.
increase
reduce
halve
none of the above
reduce
Which formula do we use to adjust or re-evaluate reliability obtained from the split-half method?
KR-20
Cronbach’s alpha
Pearson product-moment correlation
Spearman-Brown correction
Spearman-Brown correction
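Because each half is only half as long as the full test, the half-test correlation is stepped up with the Spearman-Brown correction at n = 2:

```python
def split_half_corrected(r_half: float) -> float:
    """Spearman-Brown correction of a half-test correlation (n = 2)."""
    return 2 * r_half / (1 + r_half)

# A half-test correlation of .60 corrects to a full-test estimate of .75.
```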