Exam One - measurement reliability Flashcards

1
Q

reliability

A

consistency with which an instrument or rater measures a variable
extent to which a measurement is free from error

2
Q

validity

A

concerns the closeness of the measurement to the “true” value
is the test measuring what it is intended to measure?

3
Q

Measurement =

A

true score + error
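
A minimal sketch of this decomposition in Python, assuming normally distributed random error (all numbers invented for illustration):

import numpy as np

rng = np.random.default_rng(0)

true_score = 40.0                     # fixed, exists independently of the measurement
error = rng.normal(0, 2, size=5)      # random measurement error (assumed SD = 2)
measurements = true_score + error     # measurement = true score + error

print(measurements)                   # recorded data points scatter around 40
print(measurements.std(ddof=1))       # spread between measurements estimates the error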

4
Q

The _________ is the recorded data point and the _______ is the real or actual value under investigation

A

measurement, true score

true score is fixed and exists independently of the measurement

5
Q

Measurement error

A

variability between measurements is an estimate of the extent of measurement error

6
Q

The greater the _____ between measurements, the __________ the reliability is presumed to be

A

variability, lower

7
Q

What are the primary contributors to measurement error?

A

1 - subject: mood, fatigue, injury type, practice, motivation, knowledge
2 - testing: test instruments, multiple testers, tester skill, environment
3 - instrumentation: calibration, measurement drift

8
Q

________ reliability increases your confidence that DV differences between observations are ______

A

greater, real

9
Q

formula for estimating reliability

A

(between-subjects variability at test 1) / (between-subjects variability at test 1 + variability of the test 2 - test 1 differences)

i.e., true-score variance / (true-score variance + error variance)
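
A rough sketch of this ratio in Python, using sample variances as the "variability" terms (the data and the choice of variance are assumptions for illustration):

import numpy as np

test1 = np.array([10.0, 12.0, 15.0, 11.0, 14.0])  # subjects' scores at test 1
test2 = np.array([10.5, 11.5, 15.5, 11.0, 13.5])  # same subjects at test 2

between_subjects = test1.var(ddof=1)       # sample variability between subjects at test 1
error = (test2 - test1).var(ddof=1)        # variability of the test 2 - test 1 differences

reliability = between_subjects / (between_subjects + error)
print(round(reliability, 3))  # closer to 1 = reliable, closer to 0 = unreliable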

10
Q

ratios closer to 1 are considered

A

reliable

11
Q

ratios closer to 0 are considered

A

unreliable

12
Q

intrarater reliability

A

extent to which scores from one tester agree with a second set of scores from the same tester

13
Q

inter-rater reliability

A

extent to which scores from one tester agree with scores from a different tester

14
Q

what are some important reliability considerations?

A
  • would other raters achieve similar results?
  • timing of retest?
  • measurements performed on separate occasions?

15
Q

How would the reliability estimate be affected if the true score DID change between test 1 and test 2?

A

differences will always be interpreted as error, so the reliability estimate will be artificially low!

16
Q

ICC stands for?

A

intraclass correlation coefficient

17
Q

ICC

A

comparison of two or more repeated measurements
- continuous data
- provides estimate of association and agreement in absolute magnitude

18
Q

What are the cutoff categories for ICC?

A

<0.75 = unacceptable
0.75-0.9 = good but consider other options
>0.9 = excellent and acceptable
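
One common form, ICC(2,1) (two-way random effects, absolute agreement, single measurement), can be computed from ANOVA mean squares. A minimal sketch with invented ratings; the choice of ICC form is an assumption, since the cards do not specify one:

import numpy as np

# rows = subjects, columns = repeated measurements (e.g., two raters)
x = np.array([[ 9.0, 10.0],
              [11.0, 12.0],
              [14.0, 14.5],
              [ 8.0,  9.0],
              [13.0, 12.5]])
n, k = x.shape
grand = x.mean()

ss_total = ((x - grand) ** 2).sum()
ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between subjects
ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between measurements
ms_rows = ss_rows / (n - 1)
ms_cols = ss_cols / (k - 1)
ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))

icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
print(round(icc, 3))  # judge against the cutoffs above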

19
Q

Kappa stat

A

categorical data only
- often used to measure agreement between raters

20
Q

What are the ranges for the kappa statistic?

A

<0.2 = no different than expected by chance alone
0.2-0.4 = minimal agreement
0.4-0.6 = moderate
0.6-0.8 = substantial
>0.8 = near perfect (suitable for clinical practice)

21
Q

________ statistic corrects for chance agreement

A

kappa
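
A minimal sketch of that correction for two raters scoring the same 10 cases (ratings invented; sklearn.metrics.cohen_kappa_score computes the same quantity):

import numpy as np

rater1 = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])
rater2 = np.array([1, 0, 0, 1, 0, 1, 1, 1, 1, 1])

p_o = (rater1 == rater2).mean()   # observed agreement (here 0.8)

# agreement expected by chance alone, from each rater's marginal rates
categories = np.unique(np.concatenate([rater1, rater2]))
p_e = sum((rater1 == c).mean() * (rater2 == c).mean() for c in categories)

kappa = (p_o - p_e) / (1 - p_e)   # chance-corrected agreement
print(round(kappa, 3))            # ~0.52 here: moderate agreement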

22
Q

correlation coefficient

A

describes the extent to which one measurement varies with another
- continuous data

23
Q

R value meanings

A

positive = direct relation
negative = inverse relation
range: -1 to 1
strength by |R|:
0.25-0.5 = fair
0.5-0.75 = moderate
>0.75 = good

24
Q

The Pearson correlation coefficient _________ represent the extent of agreement in magnitude between the two measures

A

does not
- is NOT a TRUE measure of reliability
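
A quick illustration of that limitation, with invented data: a constant 10-point offset between two measures leaves Pearson's R at 1.0 even though the magnitudes never agree.

import numpy as np

a = np.array([10.0, 12.0, 15.0, 11.0, 14.0])
b = a + 10.0   # perfectly correlated with a, but systematically 10 points higher

r = np.corrcoef(a, b)[0, 1]
print(r)       # 1.0 - perfect association, zero agreement in magnitude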