P&B Chapter 14: Measurement and Data Quality Flashcards
True score +/- error =
obtained score
Obtained score = True Score + or - Error
Obtained score is the observed score
True score is the value that would be obtained with an infallible measure (it’s hypothetical b/c no measure is infallible)
Error is the error of measurement (i.e. the difference between true and obtained scores is the result of factors that distort the measurement)
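The classical equation above can be illustrated with a short simulation; a minimal sketch in Python, where the true score and the error distribution are invented for illustration:

```python
import random
import statistics

# Classical measurement model: obtained score = true score +/- error.
# Simulate one hypothetical respondent whose true score is 50,
# measured repeatedly with random error (all numbers are illustrative).
random.seed(1)
true_score = 50
obtained = [true_score + random.gauss(0, 5) for _ in range(10_000)]

# Error tends to cancel out over many measurements, so the mean of the
# obtained scores converges toward the (hypothetical) true score.
print(round(statistics.mean(obtained), 1))
```

This is why the true score is hypothetical: any single obtained score carries error, and only by averaging many measurements does the distortion wash out.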
What measurement error is described?
awareness of rater presence, such as in a structured observation, or environmental factors like temperature/lighting/time of day
situational contaminants
What measurement error is described?
*mood, fatigue
transitory personal factors
What measurement error is described?
*there is a change in test administrator or a participant has to redo the measure
administrative variations
alterations in the method of collecting data from one person to the next can result in score variations
Which is not a type of measurement error?
- response bias
- instrument date
- instrument clarity
- item sampling
- instrument format
- instrument date
2 types of reliability that you see in research (according to Kelli)
1. test-retest
2. internal consistency
Which type of reliability is defined? 1. test-retest or 2. internal consistency
*correlational piece; degree to which multiple measures of the same thing agree with one another
internal consistency
Which type of reliability is defined?
*getting the same results with different raters
interrater reliability
Which type of validity is described?
*the research LOOKS like it is measuring the target construct
face validity
Which type of validity is described?
*instrument has appropriate # of items measuring what it’s supposed to
content validity
e.g. everything in the instrument adequately measures the ____________ (whatever it is you are measuring)
T/F
Predictive validity is a type of criterion-related validity and looks at whether scores on an instrument predict future performance on a criterion
TRUE
e.g. looking at your performance on an instrument to predict future performance, such as using GPA to get into OT school
T/F
Construct validity is the extent to which test scores correlate with scores on other relevant measures administered at the same time
FALSE: CONCURRENT VALIDITY is the extent to which test scores correlate with scores on other relevant measures administered at the same time
(e.g. a psychological test to differentiate between patients in a mental institution who can and cannot be released could be correlated with current behavioral ratings of healthcare personnel)
What measurement error is described:
*relatively enduring characteristics of people that can interfere with accurate measurements (e.g. social desirability or acquiescence in self-report measures)
Response-set biases
What measurement error is described:
*if the directions on an instrument are poorly described
Instrument clarity
What measurement error is described:
*errors can be introduced as a result of the sampling of the items in a measure (e.g. a score on a 100 item test will be influenced by WHICH 100 questions were included)
Item sampling
Which measurement error is described:
*technical characteristics of the instrument (e.g. ordering of questions in an instrument may influence the responses)
Instrument format
What concept does this describe:
“an instrument’s __________ is the consistency with which it measures the target attribute”
Reliability (p. 331 in book)
can be equated with a measure’s STABILITY, CONSISTENCY, or DEPENDABILITY
an instrument is also reliable to the extent that its measures reflect true scores
What concept does this describe:
“the ___________ of an instrument is the extent to which similar scores are obtained on separate occasions” (use a procedure such as test-retest reliability to assess this concept)
Stability (pgs. 330-332 in book)
as an FYI, stability indexes like the test-retest measurement are often appropriate for relatively stable characteristics like personality/abilities/adult height
these indexes are easy and can be used with self-report, observational, and physiologic measures
What does the letter r represent?
correlation coefficient
Increases in height tend to be associated with increases in weight - what type of relationship does this describe?
Positive relationship (r ranges from .00 to +1.00)
Older people tend to sleep fewer hours and younger people tend to sleep more hours - what type of relationship does this describe?
Inverse/Negative relationship (r ranges from .00 to -1.00)
A woman’s dress size and her intelligence - what type of relationship does this describe?
No relationship…the correlation coefficient would be 0
True or False: “coefficient alpha (Cronbach’s alpha) is an index of internal consistency to estimate the extent to which different subparts of an instrument (i.e. items) are reliably measuring the critical attribute”
True (p. 333)
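Coefficient alpha can be computed directly from its formula, alpha = k/(k-1) x (1 - sum of item variances / variance of total scores). A minimal sketch with invented item scores (the data and function name are illustrative, not from the book):

```python
import statistics

def cronbach_alpha(items):
    """Coefficient (Cronbach's) alpha for a list of item-score columns.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    Population variances are used consistently throughout.
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(statistics.pvariance(col) for col in items)
    return k / (k - 1) * (1 - item_var / statistics.pvariance(totals))

# Four hypothetical Likert items answered by five respondents
# (invented scores; items that rise and fall together yield high alpha).
items = [
    [3, 4, 4, 2, 5],
    [3, 5, 4, 2, 4],
    [2, 4, 5, 1, 5],
    [3, 4, 4, 2, 5],
]
print(round(cronbach_alpha(items), 2))  # → 0.95
```

Values closer to 1.00 mean the subparts (items) are reliably measuring the same attribute; if the items varied independently, alpha would drop toward zero.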
True or False: an instrument’s reliability is a fixed entity
FALSE “the reliability of an instrument is a property not of the instrument, but rather of the instrument when administered to certain people under certain conditions” (p. 335)
FYI - some factors affecting reliability are listed on p. 335 as well:
Composite self-report/observational scales - add items tapping the same concept and remove items that elicit similar responses from respondents
Observational scales - improve category definition, increase clarity in explanations of the construct/rating scale, improve observer training
True or False: “the more homogeneous the sample (i.e. the more similar the scores), the higher the reliability coefficient will be”
FALSE….the more homogeneous the sample, the LOWER the reliability coefficient (b/c instruments are designed to measure differences among those being measured - it would be hard to differentiate between those who possess varying degrees of the attribute) (p. 335)
What concept does this describe:
“___________ is the degree to which an instrument measures what it is supposed to measure”
validity
Can an instrument be valid if it is unreliable?
No (p. 336)
Can an instrument be reliable if it is invalid?
Yes (p. 336)
What’s the difference between predictive and concurrent validity (these are two types of criterion related validity)
the timing of obtaining measurements on a criterion
What type of validity is a key criterion for assessing the quality of a study? (“what is this instrument really measuring?”)
Construct validity (p. 339)
FYI - the more abstract the concept, the more difficult it is to establish construct validity BUT at the same time, the more abstract the concept, the less suitable it is to rely on criterion-related validity
Also - if strong steps have been taken to ensure the content validity of an instrument, construct validity will also be strengthened