Test Construction Flashcards

1
Q

Classical test theory

A
  • a measurement framework used to develop and evaluate tests
  • assumes that obtained test scores (X) are due to the combination of true score variability (T) and measurement error (E)

X = T + E
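A minimal simulation sketch (not part of the original card; all numbers are made up) illustrating the X = T + E decomposition: when errors are random and independent of true scores, observed-score variance is approximately true-score variance plus error variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 10,000 examinees: true scores T plus independent random error E
T = rng.normal(loc=100, scale=15, size=10_000)  # true scores
E = rng.normal(loc=0, scale=5, size=10_000)     # random measurement error
X = T + E                                       # obtained scores

# var(X) is approximately var(T) + var(E) because T and E are independent
print(round(X.var(), 1), round(T.var() + E.var(), 1))
```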

2
Q

True score variability

A
  • the result of actual differences among examinees in whatever the test is measuring
  • assumed to be consistent
3
Q

Measurement error

A
  • due to random factors that affect test performance of examinees in unpredictable ways

Examples: distractions, ambiguously worded test items, and examinee fatigue

4
Q

Test reliability

A
  • The extent to which a test provides consistent information
  • there are several methods for evaluating it, each appropriate for different circumstances
  • most methods provide a reliability coefficient

5
Q

Reliability coefficient

A
  • a type of correlation coefficient
  • ranges from 0 to 1.0
  • designated by the letter r with a subscript of two identical letters or numbers (e.g., rxx)
  • interpreted directly as the proportion of variability in obtained test scores that is due to true score variability
  • .70 or higher is considered the minimally acceptable level, but .90 is usually required for higher-stakes tests used to select employees, assign diagnoses, or make other important decisions about individuals
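Continuing the simulation sketch above (illustrative only), the "proportion of variability due to true score variability" reading of the reliability coefficient is just var(T) / var(X):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(100, 15, size=10_000)  # true scores
E = rng.normal(0, 5, size=10_000)     # random measurement error
X = T + E                             # obtained scores

# Reliability: proportion of obtained-score variance that is true-score variance
r_xx = T.var() / X.var()
print(round(r_xx, 2))  # ~0.90 here, since 15**2 / (15**2 + 5**2) = 0.9
```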
6
Q

The acceptable level of reliability

A
  • depends on the type of test and its purpose
7
Q

Standardized cognitive ability tests versus personality tests

A
  • cognitive ability tests tend to have higher reliability coefficients
8
Q

Test-retest reliability

A
  • provides information on the consistency of scores over time
  • involves administering the test to a sample of examinees, readministering it to the same examinees at a later time, then correlating the two sets of scores
  • useful for tests that are designed to measure a characteristic that is stable over time
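A minimal sketch of the procedure, with made-up scores: test-retest reliability is the Pearson correlation between the two administrations.

```python
from scipy.stats import pearsonr

# Hypothetical scores for the same five examinees, tested twice
first_administration = [85, 92, 78, 88, 95]
second_administration = [83, 94, 80, 86, 93]

r, _ = pearsonr(first_administration, second_administration)
print(round(r, 2))  # the test-retest reliability coefficient
```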
9
Q

Alternative forms reliability

A

  • provides information about the consistency of scores across different forms of a test and, when the second form is administered at a later time, the consistency of scores over time
  • involves administering one form to a sample of examinees, administering the other form to the same examinees, and correlating the two sets of scores
  • important whenever a test has more than one form
10
Q

Internal consistency reliability

A
  • provides information on the consistency of scores over different test items
  • useful for tests that are designed to measure a single content domain or aspect of behavior
  • not useful for speed tests because it overestimates their reliability

11
Q

Speed test

A
  • test-retest reliability and alternative forms reliability are appropriate
12
Q

Coefficient alpha

A

  • aka Cronbach's alpha
  • involves administering the test to a sample of examinees and calculating the average inter-item consistency
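A sketch of the calculation with hypothetical item scores; coefficient alpha is k/(k−1) × (1 − sum of item variances / variance of total scores):

```python
import numpy as np

# Hypothetical data: 5 examinees (rows) x 4 items (columns)
scores = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 5, 4, 5],
    [3, 3, 2, 3],
    [5, 4, 5, 5],
])

k = scores.shape[1]                          # number of items
item_vars = scores.var(axis=0, ddof=1)       # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)   # variance of examinees' total scores

alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(round(alpha, 2))
```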
13
Q

Kuder-Richardson 20 (KR-20)

A
  • alternative to coefficient alpha
  • can be used when test items are dichotomously scored
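KR-20 is the same idea specialized to dichotomously scored (0/1) items, with each item's variance given by pq; a sketch with hypothetical responses:

```python
import numpy as np

# Hypothetical data: 5 examinees x 4 dichotomously scored items (1 = correct)
scores = np.array([
    [1, 1, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 0, 1, 1],
])

k = scores.shape[1]
p = scores.mean(axis=0)                      # proportion correct per item
q = 1 - p                                    # proportion incorrect per item
total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores

kr20 = (k / (k - 1)) * (1 - (p * q).sum() / total_var)
print(round(kr20, 2))
```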
14
Q

Split-half reliability

A
  • involves administering the test to a sample of examinees, splitting the test in half (usually into odd- and even-numbered items), and correlating the scores on the two halves
15
Q

Problem with split-half reliability

A
  • it requires calculating a reliability coefficient for two forms of the test that are each half as long as the original test, and shorter tests tend to be less reliable than longer ones
  • as a result, it usually underestimates a test's reliability; this is usually corrected with the Spearman-Brown prophecy formula (see the sketch below)
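A one-step sketch of the Spearman-Brown correction (with a hypothetical half-test correlation of .70): the formula projects the reliability of a test twice as long as each half.

```python
# Spearman-Brown prophecy formula for a test of doubled length:
# r_full = (2 * r_half) / (1 + r_half)
r_half = 0.70                        # hypothetical correlation between halves
r_full = (2 * r_half) / (1 + r_half)
print(round(r_full, 2))              # 0.82, the corrected estimate
```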
16
Q

Inter-rater reliability

A
  • important for measures that are subjectively scored
  • provides info on the consistency of scores or ratings assigned by different raters
17
Q

Percent agreement and Cohen's kappa coefficient

A
  • methods for calculating inter-rater reliability
18
Q

Percent agreement

A
  • can be calculated for two or more raters
  • does not take chance agreement into account, which can result in an overestimate of reliability
19
Q

Cohen’s kappa coefficient

A
  • aka the kappa statistic
  • One of several inter-rater reliability coefficients that is corrected for chance agreement between raters
  • used to assess the consistency of ratings assigned by two raters when the ratings represent a nominal scale
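A sketch with hypothetical nominal ratings from two raters, contrasting raw percent agreement (previous card) with Cohen's kappa, which corrects for chance agreement (cohen_kappa_score is from scikit-learn):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical diagnostic categories assigned by two raters to eight cases
rater_1 = ["A", "A", "B", "B", "A", "C", "B", "A"]
rater_2 = ["A", "B", "B", "B", "A", "C", "A", "A"]

# Percent agreement ignores agreement expected by chance
agreement = sum(a == b for a, b in zip(rater_1, rater_2)) / len(rater_1)
print(agreement)  # 0.75

# Kappa subtracts out chance agreement, so it is lower
print(round(cohen_kappa_score(rater_1, rater_2), 2))  # ~0.58
```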
20
Q

Factors that affect the reliability of subjective ratings

A
  • can be affected by consensual observer drift
21
Q

Consensual observer drift

A
  • occurs when two or more raters communicate with each other while assigning ratings, which results in increased consistency but decreased accuracy of ratings and an overestimate of inter-rater reliability
22
Q

Ways to reduce consensual observer drift

A
  • not having raters work together
  • providing raters with adequate training and regularly monitoring the accuracy of their ratings
23
Q

Factors that affect the reliability coefficient

A
  • content homogeneity
  • range of scores
  • guessing
  • reliability index
  • item analysis
24
Q

Content homogeneity

A
  • tests that are homogeneous tend to have larger reliability coefficients than tests that are heterogeneous
  • especially true for internal consistency reliability
25
Q

Range of scores

A

  • reliability coefficients are larger when scores are unrestricted in range
  • i.e., when the sample includes examinees who have high, moderate, and low levels of the characteristic being measured
26
Q

Guessing

A
  • reliability coefficients are affected by the likelihood that test items can be answered correctly by guessing
  • the easier it is to answer items correctly by guessing, the lower the reliability coefficient
  • true-false tests are likely to be less reliable than multiple-choice tests with three or more answer options
27
Q

Reliability index

A
  • an alternative to the reliability coefficient: the theoretical correlation between observed test scores and true test scores
  • calculated by taking the square root of the reliability coefficient (e.g., a reliability coefficient of .81 corresponds to a reliability index of .90)
28
Q

Item analysis

A
  • used to determine which items to include on a test; involves determining each item's difficulty level and its ability to discriminate between examinees who obtain high and low test scores
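A sketch (with hypothetical 0/1 item responses) of the two item-analysis statistics the card names: difficulty as the proportion answering correctly, and discrimination as the difference in difficulty between high and low scorers.

```python
import numpy as np

# Hypothetical data: 6 examinees x 3 dichotomously scored items
scores = np.array([
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
    [1, 0, 0],
    [0, 0, 0],
])

difficulty = scores.mean(axis=0)  # proportion correct per item (p)

# Discrimination: item difficulty among the top half of total scorers
# minus item difficulty among the bottom half
totals = scores.sum(axis=1)
order = totals.argsort()
low, high = scores[order[:3]], scores[order[-3:]]
discrimination = high.mean(axis=0) - low.mean(axis=0)

print(difficulty, discrimination)
```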
29
Q

Standard error of measurement

A
  • used to construct a confidence interval around an examinee's obtained score
  • calculated by multiplying the test's standard deviation by the square root of 1 minus the reliability coefficient

SEM = SD × √(1 − rxx)
30
Q

Calculating confidence interval

A
  • for a 68% confidence interval, you add and subtract one standard error of measurement to and from the obtained score
  • for a 95% confidence interval, you add and subtract two standard errors of measurement
  • for a 99% confidence interval, you add and subtract three standard errors of measurement
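A sketch (hypothetical SD, reliability, and obtained score) combining the last two cards: compute the SEM, then the 68%, 95%, and 99% confidence intervals.

```python
import math

sd = 15          # hypothetical test standard deviation
r_xx = 0.91      # hypothetical reliability coefficient
obtained = 110   # hypothetical obtained score

sem = sd * math.sqrt(1 - r_xx)  # standard error of measurement = 4.5

# 68% / 95% / 99% confidence intervals: +/- 1, 2, or 3 SEMs
for n_sems, level in [(1, 68), (2, 95), (3, 99)]:
    low = obtained - n_sems * sem
    high = obtained + n_sems * sem
    print(f"{level}% CI: {low:.1f} to {high:.1f}")
```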

31
Q

Adequate reliability

A
  • test scores can be expected to be consistent
  • does not indicate that the test measures what it was designed to measure

32
Q

Validity

A
  • The degree to which a test accurately measures what it was designed to measure

Three types:
  • content validity
  • construct validity
  • criterion-related validity
33
Q

Content validity

A
  • important for tests that have been designed to measure one or more content or behavioral domains
  • examples: achievement tests and work samples
  • established during the development of a test by clearly defining the domain to be assessed and including items that are representative samples of that domain
  • test items are then systematically reviewed by subject matter experts to ensure the items address all important aspects of the domain

34
Q

Construct validity

A
  • important for tests that have been designed to measure a hypothetical trait (e.g., intelligence, motivation, or introversion) that cannot be directly observed but is inferred from an examinee's behavior
  • involves using several procedures, including obtaining evidence of a test's convergent and divergent validity
35
Q

Convergent validity

A

  • The degree to which scores on a test have high correlations with scores on other measures designed to assess the same or related constructs

36
Q

Divergent validity

A
  • also known as discriminant validity
  • The degree to which scores on the test have low correlations with scores on measures of unrelated constructs