Measurement Reliability Flashcards
1
Q
Measurement Reliability
A
- extent to which repeated measurements agree with one another and are believable and useful
- also referred to as stability, consistency, and reproducibility
- sources of error include errors made by examiners, subject variability, and instrumentation flaws or failures
2
Q
Types of Measurement Reliability
A
- instrument: test-retest, internal consistency, parallel forms, split-half
- rater: intra-tester (within), inter-tester (between or among)
3
Q
Test-Retest Reliability
A
- obtained by administering the same test twice over a period of time
4
Q
Instrument Reliability
A
- internal consistency: measure of reliability used to evaluate the degree to which different test items covering the same construct produce similar results; common for self-report instruments; questions are grouped into domains, each measuring a distinct construct or concept; items within one domain should not relate to items in another domain
- parallel forms: two equivalent forms of an instrument are administered; a subject should score equally on both forms
- split half: combine two forms of an instrument that cover the same concept into one longer version; compare scores on one half with scores on the other
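Internal consistency is most often quantified with Cronbach's alpha (not named on this card, but the standard statistic for it): the proportion of total-score variance not attributable to item-level variance. A minimal sketch, using hypothetical scores from a made-up 4-item self-report scale:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a list of respondent rows, one score per item."""
    k = len(item_scores[0])                     # number of items
    columns = list(zip(*item_scores))           # one column per item
    item_var = sum(pvariance(col) for col in columns)
    total_var = pvariance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical data: 5 respondents answering a 4-item scale (1-5 Likert).
# Items that track each other (same construct) drive alpha toward 1.
scores = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
]
print(round(cronbach_alpha(scores), 2))  # high alpha: items agree
```

Alpha near 1 suggests the items measure one construct; a low alpha suggests items in the domain do not hang together.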
5
Q
Quantification of Reliability
A
- relative: if a measurement is reliable, individual measurements within a group maintain their rank position within the group upon repeated measurement; can be high even when the repeated scores themselves differ
- absolute: the extent to which a score itself varies upon repeated measurement; ideally no change from day one to day two
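The relative/absolute distinction can be shown with hypothetical test-retest data: if every subject's day-two score shifts by the same amount, rank order (relative reliability) is perfectly preserved even though the scores themselves changed (poor absolute agreement). A minimal sketch:

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation: an index of relative (rank/position) consistency."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Hypothetical scores for 5 subjects measured on two days;
# day 2 is systematically 5 points higher for everyone.
day1 = [10, 14, 18, 22, 26]
day2 = [15, 19, 23, 27, 31]

r = pearson_r(day1, day2)                        # relative reliability
bias = mean(b - a for a, b in zip(day1, day2))   # absolute change, day 1 to day 2
print(round(r, 3), bias)
```

Here r is essentially 1.0 (every subject keeps their position in the group), yet the 5-point systematic shift means the measurement is not stable in absolute terms.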
6
Q
Measurement Validity
A
- degree to which a measurement captures what it is intended to measure
- reliability is a necessary but not sufficient condition for validity
- a test may be reliable because it consistently reports the same measurement
- however it may not be valid because the measurement is incorrect
7
Q
Types of Measurement Validity
A
- face validity
- content validity
- construct validity: convergent, discriminant
- criterion-related validity: concurrent validity, predictive validity
8
Q
Face Validity
A
- does the test or instrument appear, on the face of it, to assess what is intended
- addressed from the standpoint of the tester and from the standpoint of the patient or family member
9
Q
Content Validity
A
- extent to which an instrument reflects all the meaningful elements of a variable
- judged by content experts or people with experience with the variable
- usually only pertinent to multidimensional measurements
- disability measures, functional measures, self-reported tools, knowledge assessment
- look at example on page 19 of 9/18 notes
10
Q
Construct Validity
A
- degree to which a measure reflects the operational definition of the concept it is said to represent
- achieved via operational definitions, logical arguments, theoretical arguments, and research evidence
11
Q
Forms of Construct Validity-Convergent Validity
A
- comparison of scores between two similar instruments expected to produce similar results
- positively correlate with each other
12
Q
Forms of Construct Validity-Discriminant Validity
A
- differentiation among different levels of a characteristic of interest
- e.g., degree of disability
- does an instrument or test differentiate between individuals with shoulder impingement and those without
13
Q
Criterion Validity
A
- extent to which one measure is systematically related to other measures or outcomes
- requires direct comparison of the index measure with a standard (criterion) measure
14
Q
Forms of Criterion Validity-Concurrent Validity
A
- ability of an index measure to capture an outcome similar to that of another measure
- compare the index measure to the criterion measure that was obtained at the same time
15
Q
Forms of Criterion Validity-Predictive Validity
A
- the ability of an index measure to predict a future outcome
- compare the index measure to the criterion measure that was obtained at a later point in time