Chapter 9 Flashcards

1
Q

Observed score = true score ± error component

A

Measurement error
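The observed-score model above can be illustrated with a short simulation (a sketch; the true score of 50 and error SD of 2 are invented for illustration):

```python
import random

random.seed(42)

true_score = 50.0  # the person's hypothetical stable value
# Each observation is the true score plus a random error component.
observed = [true_score + random.gauss(0, 2.0) for _ in range(1000)]

mean_observed = sum(observed) / len(observed)
# Random error averages toward zero, so the mean of many observed
# scores approaches the true score.
print(round(mean_observed, 1))
```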

2
Q

2 types of measurement error

A

Systematic and random

3
Q

Potential sources of measurement error

A

•The person taking the measurements
•The measuring instrument
•Variability in the characteristic

4
Q

Not all error is random
●Some error components can be attributed to other sources, such as rater or test occasion.

A

Generalizability Theory

5
Q

Reflects true variance as a proportion of total variance in a set of scores
•Measured as a unitless coefficient
•Intraclass correlation coefficients (ICC) and kappa coefficients are commonly used

A

Relative reliability

6
Q

Indicates how much of a measured value, expressed in the original units, is likely due to error
•Standard error of the measurement (SEM) is commonly used

A

Absolute reliability
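The SEM combines a reliability coefficient with the sample standard deviation to express error in the original units: SEM = SD × √(1 − reliability). A minimal sketch (the SD of 8 and ICC of 0.90 are invented example values):

```python
import math

def sem(sd, reliability):
    """Standard error of measurement: SEM = SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1 - reliability)

# Example: scores with SD = 8 units and ICC = 0.90
print(round(sem(8.0, 0.90), 2))  # → 2.53
```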

7
Q

How does reliability exist in a context?

A

It is relevant to a tool's application

8
Q

How is reliability not all-or-none?

A

It exists to some extent in any instrument

9
Q

Types of reliability

A

●Test-retest
●Rater
●Alternate forms
●Internal consistency

10
Q

When considering test-retest reliability, the interval between tests is chosen to do what?

A

To support stability of the measurement

11
Q

When considering test-retest reliability, the effect of practice or learning on later scores is called

A

Carryover

12
Q

In test-retest reliability, the consideration that the act of measurement itself changes the outcome is called

A

Testing effects

13
Q

One rater

A

Intra-rater

14
Q

Two or more raters

A

Inter-rater

15
Q

Is it best when all raters measure the same response?

A

Yes

16
Q

Scores computed as the difference from trial 1 to trial 2; with large error variance, the true score components cancel out and the difference may be composed mostly of error. These are called

A

Change scores

17
Q

Tendency for extreme scores to fall closer to the mean on retesting is called

A

Regression toward the mean
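Regression toward the mean can be demonstrated by simulating two test occasions with the same true scores and independent random error (a sketch; all numbers are invented for illustration):

```python
import random

random.seed(1)

# Same true scores on both occasions; only the random error differs.
n = 2000
true_scores = [random.gauss(100, 10) for _ in range(n)]
test1 = [t + random.gauss(0, 5) for t in true_scores]
test2 = [t + random.gauss(0, 5) for t in true_scores]

# Select subjects whose first-test scores were extreme (top 10%).
cutoff = sorted(test1)[int(0.9 * n)]
extreme = [(a, b) for a, b in zip(test1, test2) if a >= cutoff]

mean1 = sum(a for a, _ in extreme) / len(extreme)
mean2 = sum(b for _, b in extreme) / len(extreme)
# On retest, the extreme group's mean falls back toward the overall mean,
# because part of their extremity on test 1 was random error.
print(round(mean1, 1), round(mean2, 1))
```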

18
Q

Based on the standard error of the measurement (SEM)
•Amount of change that goes beyond error
•Also known as minimal detectable difference, smallest real difference, smallest detectable change, coefficient of repeatability, or the reliability change index.

A

Minimal detectable change (MDC)
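A commonly used formula computes the MDC at the 95% confidence level from the SEM: MDC95 = 1.96 × √2 × SEM, where the √2 accounts for error on both test occasions. A minimal sketch (the SD of 8 and ICC of 0.90 are invented example values):

```python
import math

def mdc95(sd, reliability):
    """Minimal detectable change at 95% confidence:
    MDC95 = 1.96 * sqrt(2) * SEM, where SEM = SD * sqrt(1 - reliability)."""
    sem = sd * math.sqrt(1 - reliability)
    return 1.96 * math.sqrt(2) * sem

# Example: SD = 8, ICC = 0.90 -> SEM ≈ 2.53, so MDC95 ≈ 7.01
print(round(mdc95(8.0, 0.90), 2))
```

An observed change smaller than this value cannot be distinguished from measurement error.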

19
Q

Ways to maximize reliability include

A

●Standardize measurement protocols
●Train raters
●Calibrate and improve the instrument
●Take multiple measurements
●Choose a sample with a range of scores
●Conduct pilot testing