P&B Chapter 14: Measurement and Data Quality Flashcards

1
Q

True score +/- error =

A

obtained score

Obtained score = True score +/- Error

Obtained score is the observed score

True score is the value that would be obtained with an infallible measure (it’s hypothetical b/c no measure is infallible)

Error is the error of measurement (i.e. the difference between true and obtained scores, which results from factors that distort the measurement)
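
As a worked illustration of the formula above, here is a minimal Python sketch (made-up numbers, not from the book) showing how random measurement error scatters obtained scores around a hypothetical true score:

```python
import random

random.seed(1)

true_score = 50                                  # hypothetical value from an "infallible" measure
errors = [random.gauss(0, 3) for _ in range(5)]  # random errors of measurement
obtained = [true_score + e for e in errors]      # obtained score = true score +/- error

for e, o in zip(errors, obtained):
    print(f"true = {true_score}, error = {e:+.2f}, obtained = {o:.2f}")
```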

2
Q

What measurement error is described?
awareness of rater presence, such as in a structured observation, or environmental factors like temperature/lighting/time of day

A

situational contaminants

3
Q

What measurement error is described?

*mood, fatigue

A

transitory personal factors

4
Q

What measurement error is described?

*there is a change in test administrator or a participant has to redo the measure

A

administration variations

alterations in the method of collecting data from one person to the next can result in score variations

5
Q

Which is not a type of measurement error?

  1. response bias
  2. instrument date
  3. instrument clarity
  4. item sampling
  5. instrument format
A
  2. instrument date
6
Q

2 types of reliability that you see in research (according to Kelli)

A
  1. test-retest
  2. internal consistency

7
Q
Which type of reliability is defined?
1. test-retest
or 
2. internal consistency
a correlational index; the degree to which an instrument’s multiple items measuring the same thing agree with one another
A

internal consistency

8
Q

Which type of reliability is defined?

*getting the same results with different raters

A

interrater reliability
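
One simple way to quantify interrater reliability (a sketch, not necessarily the index the chapter prefers) is percent agreement between two raters coding the same observations; the ratings below are hypothetical:

```python
# Hypothetical codes from two observers rating the same 8 behaviors (1 = present, 0 = absent)
rater_a = [1, 0, 1, 1, 0, 1, 0, 1]
rater_b = [1, 0, 1, 0, 0, 1, 0, 1]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = agreements / len(rater_a)
print(f"interrater agreement = {percent_agreement:.0%}")  # 88% for this made-up data
```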

9
Q

Which type of validity is described?

*the instrument LOOKS like it is measuring the target construct

A

face validity

10
Q

Which type of validity is described?

*instrument has appropriate # of items measuring what it’s supposed to

A

content validity

e.g. everything in the instrument adequately measures the ____________ (whatever it is you are measuring)

11
Q

T/F
Predictive validity is a type of criterion-related validity and looks at whether scores on an instrument predict future performance on a criterion

A

TRUE

e.g. using performance on an instrument, such as undergraduate GPA, to predict future performance in OT school

12
Q

T/F
Construct validity is the extent to which test scores correlate with scores on other relevant measures administered at the same time

A

FALSE: CONCURRENT VALIDITY is the extent to which test scores correlate with scores on other relevant measures administered at the same time

(e.g. a psychological test to differentiate between patients in a mental institution who can and cannot be released could be correlated with current behavioral ratings of healthcare personnel)

13
Q

What measurement error is described:
*relatively enduring characteristics of people that can interfere with accurate measurements (e.g. social desirability or acquiescence in self-report measures)

A

Response-set biases

14
Q

What measurement error is described:

*if the directions on an instrument are poorly worded or unclear

A

Instrument clarity

15
Q

What measurement error is described:
*errors can be introduced as a result of the sampling of the items in a measure (e.g. a score on a 100 item test will be influenced by WHICH 100 questions were included)

A

Item sampling

16
Q

Which measurement error is described:

*technical characteristics of the instrument (e.g. ordering of questions in an instrument may influence the responses)

A

Instrument format

17
Q

What concept does this describe:

“an instrument’s __________ is the consistency with which it measures the target attribute”

A

Reliability (p. 331 in book)

can be equated with a measure’s STABILITY, CONSISTENCY, or DEPENDABILITY
an instrument is also reliable to the extent that its measures reflect true scores

18
Q

What concept does this describe:

“the ___________ of an instrument is the extent to which similar scores are obtained on separate occasions” (use a procedure such as test-retest reliability to assess this concept)

A

Stability (pgs. 330-332 in book)

as an FYI, stability indexes like the test-retest measurement are often appropriate for relatively stable characteristics like personality/abilities/adult height

these indexes are easy to compute and can be used with self-report/observational/physiologic measures
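
A minimal sketch (made-up scores, assuming NumPy is available) of how a test-retest stability coefficient is estimated: administer the same measure on two occasions and correlate the two sets of scores.

```python
import numpy as np

# Hypothetical scores for 6 people on the same instrument at two points in time
time1 = np.array([12, 18, 25, 30, 22, 15])
time2 = np.array([14, 17, 27, 29, 20, 16])

# Stability (test-retest reliability) = correlation between the two administrations
r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest r = {r:.2f}")  # values close to 1.0 indicate a stable measure
```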

19
Q

What does the letter r represent?

A

correlation coefficient

20
Q

Increases in height tend to be associated with increases in weight - what type of relationship does this describe?

A

Positive relationship (r ranges from .00 to +1.00)

21
Q

Older people tend to sleep fewer hours and younger people tend to sleep more hours - what type of relationship does this describe?

A

Inverse/Negative relationship (r ranges from .00 to -1.00)

22
Q

A woman’s dress size and her intelligence - what type of relationship does this describe?

A

No relationship…the correlation coefficient would be 0
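
To tie the last three cards together, a small simulation (made-up data, assuming NumPy; the variable relationships are invented for illustration) producing a positive, a negative, and a near-zero correlation coefficient:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

height = rng.normal(170, 10, n)
weight = 0.9 * height + rng.normal(0, 8, n)     # rises with height -> positive r
age = rng.uniform(20, 80, n)
sleep = 9 - 0.03 * age + rng.normal(0, 0.7, n)  # falls with age -> negative r
dress_size = rng.integers(2, 18, n)
iq = rng.normal(100, 15, n)                     # unrelated to dress size -> r near 0

for label, x, y in [("height vs. weight", height, weight),
                    ("age vs. sleep hours", age, sleep),
                    ("dress size vs. intelligence", dress_size, iq)]:
    print(f"{label}: r = {np.corrcoef(x, y)[0, 1]:+.2f}")
```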

23
Q

True or False: “coefficient alpha (Cronbach’s alpha) is an index of internal consistency to estimate the extent to which different subparts of an instrument (i.e. items) are reliably measuring the critical attribute”

A

True (p. 333)
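
A minimal NumPy sketch (made-up item scores, not from the book) of the standard coefficient alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores):

```python
import numpy as np

# Hypothetical data: 5 respondents x 4 items from the same scale
items = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
])

k = items.shape[1]                         # number of items
item_vars = items.var(axis=0, ddof=1)      # variance of each item
total_var = items.sum(axis=1).var(ddof=1)  # variance of the total scores
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")   # about 0.95 for this made-up data
```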

24
Q

True or False: an instrument’s reliability is a fixed entity

A

FALSE: “the reliability of an instrument is a property not of the instrument, but rather of the instrument when administered to certain people under certain conditions” (p. 335)

FYI - some factors affecting reliability are listed on p. 335 as well:
Composite self-report/observational scales - add items tapping the same concept and remove items that elicit similar responses from respondents
Observational scales - improve category definition, increase clarity in explanations of the construct/rating scale, improve observer training

25
Q

True or False: “the more homogeneous the sample (i.e. the more similar the scores), the higher the reliability coefficient will be”

A

FALSE… the more homogeneous the sample, the LOWER the reliability coefficient (b/c instruments are designed to measure differences among those being measured, and a homogeneous sample offers too little variation to differentiate among people with varying degrees of the attribute) (p. 335)
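
A small simulation (made-up data, assuming NumPy) illustrating why homogeneity lowers the reliability coefficient: when true scores vary little, the same amount of measurement error swamps a larger share of the observed differences.

```python
import numpy as np

rng = np.random.default_rng(0)

def test_retest_r(true_scores, error_sd=3.0):
    """Correlate two error-laden administrations of the same true scores."""
    t1 = true_scores + rng.normal(0, error_sd, true_scores.size)
    t2 = true_scores + rng.normal(0, error_sd, true_scores.size)
    return np.corrcoef(t1, t2)[0, 1]

heterogeneous = rng.normal(50, 10, 500)  # widely varying true scores
homogeneous = rng.normal(50, 2, 500)     # very similar true scores

print(f"heterogeneous sample: r = {test_retest_r(heterogeneous):.2f}")  # higher
print(f"homogeneous sample:   r = {test_retest_r(homogeneous):.2f}")    # lower
```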

26
Q

What concept does this describe:

“___________ is the degree to which an instrument measures what it is supposed to measure”

A

validity

27
Q

Can an instrument be valid if it is unreliable?

A

No (p. 336)

28
Q

Can an instrument be reliable if it is invalid?

A

Yes (p. 336)

29
Q

What’s the difference between predictive and concurrent validity (these are two types of criterion-related validity)?

A

the timing of obtaining measurements on a criterion

30
Q

What type of validity is a key criterion for assessing the quality of a study? (“what is this instrument really measuring?”)

A

Construct validity (p. 339)

FYI - the more abstract the concept, the more difficult it is to establish construct validity BUT at the same time, the more abstract the concept, the less suitable it is to rely on criterion-related validity

Also - if strong steps have been taken to ensure the content validity of an instrument, construct validity will also be strengthened