Measurement Issues Flashcards

1
Q

What is measurement reliability?

A

The extent to which a measurement is consistent and free from error; indicates the ability of an instrument to produce similar scores on repeated administrations. Any estimate of reliability is specific to the testing situation.

2
Q

What are the 6 ways in which measurement reliability is assessed?

A
  1. accuracy
  2. consistency
  3. stability
  4. precision
  5. reproducibility
  6. dependability
3
Q

What 2 characteristics is measurement validity dependent on?

A

reliability & relevance

4
Q

All valid tests are ____; however, not all ____ tests are valid.

A

reliable, reliable

5
Q

What are 3 sources of measurement error?

A
  • the individual taking the measurement
  • variability of characteristics being measured
  • the measuring instrument itself
6
Q

What is predictable, consistent, and usually correctable?

A

systematic error

7
Q

What type of error is due to chance and affects scores in an unpredictable way?

A

random error

8
Q

What is the term for the difference between T (the true score) and X (the observed score)?

A

measurement error

9
Q

systematic error:

A

A form of measurement error in which the error is constant across trials.

10
Q

random error:

A

Measurement errors that are due to chance and can affect a subject's score in an unpredictable way from trial to trial.
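
A minimal Python sketch of the distinction, using made-up numbers: a constant systematic error shifts every trial by the same amount, while random error scatters each trial unpredictably around that shifted value.

import random

random.seed(1)
true_score = 50.0        # the quantity being measured (hypothetical)
systematic_error = 2.0   # e.g., an instrument consistently reading 2 units high

# observed score = true score + constant systematic error + chance (random) error
trials = [true_score + systematic_error + random.gauss(0, 1.5) for _ in range(5)]
print([round(t, 1) for t in trials])
# Every trial sits about 2 units high (predictable, correctable);
# the trial-to-trial scatter around that shift is the random error.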

11
Q

What are 6 ways to avoid measurement error?

A
  • clear operational definitions
  • careful planning
  • using detailed procedures
  • being trained in procedures
  • practice of specific procedures
  • inspection of equipment
12
Q

regression toward the mean:

A

A statistical phenomenon in which extreme scores on a pretest are likely to move toward the group mean on a posttest because of inherent positive or negative measurement error; also called statistical regression.
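
A small simulation of the phenomenon, with invented values: subjects picked for extreme pretest scores tend to score closer to the group mean on the posttest, because part of their extreme pretest score was measurement error.

import random
from statistics import mean

random.seed(0)
true_scores = [random.gauss(100, 10) for _ in range(1000)]
pretest  = [t + random.gauss(0, 10) for t in true_scores]   # true score + error
posttest = [t + random.gauss(0, 10) for t in true_scores]   # independent error on retest

# the 100 highest pretest scorers
top = sorted(range(1000), key=lambda i: pretest[i], reverse=True)[:100]

print(round(mean(pretest[i] for i in top), 1))   # well above 100
print(round(mean(posttest[i] for i in top), 1))  # closer to 100: regression toward the mean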

13
Q

variance:

A

a measure of the variability, or spread, among scores within a sample

14
Q

The larger the variance the greater the _____.

A

dispersion of scores

15
Q

The smaller the variance the more ____.

A

homogeneous the scores

16
Q

You have measured the heights of dogs (at the shoulders): 600 mm, 470 mm, 170 mm, 430 mm, and 300 mm. Find the mean, variance, and standard deviation.

A

mean: 394 mm
variance: 21,704 mm² (sum of the squared deviations from the mean / number of scores)
standard deviation: ≈147 mm (square root of the variance)
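
The same arithmetic as a short Python sketch (population variance, i.e. dividing by the number of scores, to match the answer above):

from math import sqrt

heights = [600, 470, 170, 430, 300]                               # mm
mean = sum(heights) / len(heights)                                # 394.0 mm
variance = sum((h - mean) ** 2 for h in heights) / len(heights)   # 21704.0 mm^2
std_dev = sqrt(variance)                                          # ~147.3 mm
print(mean, variance, round(std_dev, 1))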

17
Q

With zero error, the ratio will produce what coefficient?

A

1.00
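
Assuming the ratio in question is the reliability coefficient expressed as true-score variance over observed-score (true + error) variance, a tiny Python sketch shows why zero error gives 1.00:

def reliability(true_variance, error_variance):
    # reliability coefficient = true-score variance / (true-score variance + error variance)
    return true_variance / (true_variance + error_variance)

print(reliability(25.0, 0.0))   # 1.0  -> zero error
print(reliability(25.0, 25.0))  # 0.5  -> error variance as large as the true variance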

18
Q

coefficient:

A

a number used to multiply a variable

19
Q

reliability coefficient of <.50 is:

A

poor

20
Q

reliability coefficient of .50-.75 is:

A

moderate

21
Q

reliability coefficient of >.75 is:

A

good
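
A quick sketch applying the benchmarks from cards 19-21:

def interpret_reliability(coefficient):
    if coefficient < 0.50:
        return "poor"
    if coefficient <= 0.75:
        return "moderate"
    return "good"

print(interpret_reliability(0.42))  # poor
print(interpret_reliability(0.60))  # moderate
print(interpret_reliability(0.90))  # good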

22
Q

test-retest reliability:

A
  • stability of the instrument
  • repeated administration of test
  • administered by one rater
  • one sample tested repeatedly
  • time intervals are important
  • depends on stability of what’s being measured
  • assumption is that what is being measured remains the same
  • any changes seen are then attributed to random error
23
Q

intrarater reliability:

A

One rater measuring the same subjects over 2 or more trials, usually over a short interval of time.
- Statistics used: ICC model 2 or 3

24
Q

interrater reliability:

A

The degree to which two or more raters can obtain the same ratings for a given variable.

Greater than or equal to 2 raters measuring the same subject group. Best done during a single trial. Important for generalizability.
- Statistics used: ICC model 2 or 3

25
Q

internal consistency:

A

This is the homogeneity of items on a test. Reflects the extent to which the items measure various aspects of the same characteristic. Important when developing physical performance measures. Looks at the correlation of all items on the test.

Possible statistics used are Cronbach’s Coefficient Alpha, Spearman-Brown Prophecy Statistic, Item-to-Total Correlation.
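
A minimal sketch of one of those statistics, Cronbach's coefficient alpha, computed with numpy from made-up item scores (rows are respondents, columns are test items):

import numpy as np

# Hypothetical scores: 5 respondents x 4 items
items = np.array([
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 3, 4, 4],
    [1, 2, 1, 2],
])

k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)       # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)   # variance of the total score

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(round(alpha, 2))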

26
Q

What are the carryover effects of test-retest?

A

training & motivation

27
Q

What are the testing effects of test-retest?

A

pain caused by the testing itself & soft tissue stretch

28
Q

What are the statistics used for test-retest?

A

intraclass correlation coefficient (ICC) model 3 & kappa statistics (nominal data)
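
A numpy sketch of one way to compute a single-measure ICC model 3 (often written ICC(3,1)) from the two-way ANOVA mean squares; the ratings are invented:

import numpy as np

# Hypothetical ratings: 5 subjects, each measured on 3 trials (or by 3 raters)
x = np.array([
    [10.0, 11.0, 10.5],
    [14.0, 13.5, 14.5],
    [ 8.0,  8.5,  8.0],
    [12.0, 12.5, 12.0],
    [ 9.0,  9.5, 10.0],
])
n, k = x.shape

grand = x.mean()
ss_subjects = k * ((x.mean(axis=1) - grand) ** 2).sum()
ss_trials   = n * ((x.mean(axis=0) - grand) ** 2).sum()
ss_error    = ((x - grand) ** 2).sum() - ss_subjects - ss_trials

bms = ss_subjects / (n - 1)           # between-subjects mean square
ems = ss_error / ((n - 1) * (k - 1))  # residual mean square

icc_3_1 = (bms - ems) / (bms + (k - 1) * ems)
print(round(icc_3_1, 3))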

29
Q

What are the characteristics of alternate-forms reliability, and what statistics are used?

A
  • exists in 2 or more versions of a test
  • equivalent, parallel, or alternate forms
  • 2 alternate forms administered to the same group in 1 setting
  • will look at correlation between paired observations
  • needed to generalize findings from one situation to another

statistics used are limits of agreement & correlation coefficients

30
Q

What are 6 factors that affect reliability?

A
  • factors internal to the participant
  • time between testing
  • testing period circumstances
  • appropriate level of difficulty
  • precision of measurement
  • environmental conditions
31
Q

What is the standard error of measurement (SEM)?

A

Relates to the reliability of the measurement: the degree to which an observed score fluctuates because of errors of measurement. It is the standard deviation of the errors of measurement around the observed score.
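
One common formula for the SEM (not spelled out on the card, so treat it as an assumption) uses the standard deviation of the scores and the reliability coefficient:

from math import sqrt

def standard_error_of_measurement(sd, reliability):
    # SEM = SD * sqrt(1 - reliability coefficient)
    return sd * sqrt(1 - reliability)

print(round(standard_error_of_measurement(10.0, 0.90), 2))  # 3.16 -> higher reliability, smaller SEM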

32
Q

true score + random error = actual score

A

classical reliability theory

33
Q

true score + random error + test-retest error + rater error + other error sources = actual score

A

generalizability theory (Cronbach)

34
Q

What is the minimal detectable difference?

A

The amount of change in a variable that must be achieved to reflect a true difference. The greater the reliability the smaller the MDD.

ex) pain scale: a change of 2 points is the MDD; MMT: a full grade, not a plus or minus.
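
One common way to put a number on this, often called the minimal detectable change at the 95% confidence level (an assumption, since the card only gives clinical examples), builds on the SEM from card 31:

from math import sqrt

def mdc95(sem):
    # MDC95 = 1.96 * sqrt(2) * SEM; sqrt(2) accounts for error on both test and retest
    return 1.96 * sqrt(2) * sem

print(round(mdc95(3.16), 1))  # ~8.8 -> the smaller the SEM, the smaller the detectable change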

35
Q

pilot testing:

A

testing a smaller sample to assess the reliability of a measurement