Week 5 - Reliability and Validity Flashcards

1
Q

internal and external validity are considered in a what?

A

study

2
Q

reliability and validity are considered in a what?

A

measure

3
Q

when evaluating a ___ discuss the internal and external validity

A

study

4
Q

when evaluating a ___ discuss the reliability and validity.

A

measure

5
Q
  • process of assigning numerals to variables to represent quantities of characteristics according to certain rules.
  • approach to detecting and documenting relative conditions or events.
A

measurement

6
Q

____ decreases ambiguity and increases understanding via the expression of qualitative/quantitative info about a given variable.

A

measurement

7
Q

numbers represent units with equal intervals, measured from true zero.

A

ratio scale

8
Q

name 3 examples of ratio measurements.

A

distance, age, time

9
Q

numbers have equal intervals but no true zero.

A

interval scale

10
Q

name 2 examples of interval measurements.

A

calendar years, temperature

11
Q

numbers indicate rank order

A

ordinal scale

12
Q

name 2 examples of ordinal measurements.

A

MMT (manual muscle testing) grades, pain rating scales

13
Q

numerals are category labels.

A

nominal scale

14
Q

name 2 examples of nominal measurements.

A

gender, blood type

15
Q

some level of inconsistency is inevitable

A

measurement error

16
Q

name 3 sources of inconsistency in measurements.

A
  • tester (rater)
  • instrument
  • subject or the characteristic itself
17
Q

describe the formula for observed score.

A

observed score (X) = true score (T) ± measurement error (E)

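the observed-score formula above can be sketched numerically (a hedged illustration; the true score, bias, and error spread are made-up values):

```python
import random

random.seed(42)

true_score = 50.0                    # hypothetical true value of the characteristic
systematic_error = 2.0               # consistent, unidirectional bias
random_error = random.gauss(0, 1.5)  # chance fluctuation, differs each trial

# observed score (X) = true score (T) +/- measurement error (E)
observed = true_score + systematic_error + random_error
```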
18
Q
  • consistent, unidirectional, and predictable (if detected).
  • relatively easy to correct; recalibrate, or add or subtract the correction.
  • a concern of validity
A

systematic errors

19
Q

occur by chance and alter scores in unpredictable ways; chance fluctuations (tend to cancel out over repeated measurements)

A

random errors

20
Q

name 2 examples of systematic errors.

A

illiteracy, confusing terms

21
Q

name 3 examples of random errors.

A

mood, level of fatigue, motivation

22
Q

___ ___ are generally not influenced by magnitude of true score.

A

random errors

23
Q

the ____ the sample, the more the random errors are cancelled out.

A

larger

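the idea that random errors tend to cancel out as the sample grows can be checked with a short simulation (a sketch; the true score and error spread are invented):

```python
import random
import statistics

random.seed(0)
TRUE_SCORE = 50.0

def mean_of_measurements(n):
    # each measurement = true score + random error with mean zero
    scores = [TRUE_SCORE + random.gauss(0, 5) for _ in range(n)]
    return statistics.mean(scores)

small = mean_of_measurements(5)        # noisy estimate
large = mean_of_measurements(100_000)  # random errors largely cancel out
```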
24
Q

name 4 common sources of error.

A
  • respondent
  • situational factors
  • measurer
  • instrument
25
Q
  • not all error is random
  • some error components can be attributed to other sources, such as rater or test occasion.

A

generalizability theory

26
Q

the consistency of your measurement instrument

A

reliability

27
Q

the degree to which an instrument measures the same way each time it is used under the same condition with the same subjects

A

reliability

28
Q

reflects how consistent and free from error a measurement is (ex: reproducible/dependable)

A

reliability

29
Q

reliability estimates are based upon score variance: the variability or distribution of scores

A

reliability coefficient

30
Q

how is reliability/reliability coefficient measured? (formula)

A

reliability coefficient = true variance / (true variance + error variance)

31
Q

describe the range of the reliability coefficient.

A

<0.50 = poor
0.50-0.75 = moderate
>0.75 = good
(the closer to 1 the better)

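the coefficient formula and the interpretation bands above translate directly into code (a minimal sketch; the function names and sample variances are my own):

```python
def reliability_coefficient(true_variance, error_variance):
    # reliability = true variance / (true variance + error variance)
    return true_variance / (true_variance + error_variance)

def interpret(coefficient):
    # bands from the card: <0.50 poor, 0.50-0.75 moderate, >0.75 good
    if coefficient < 0.50:
        return "poor"
    if coefficient <= 0.75:
        return "moderate"
    return "good"

r = reliability_coefficient(true_variance=9.0, error_variance=1.0)  # 0.9 -> "good"
```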
32
Q

reflects the degree of association or proportion between scores

A

correlation

33
Q

reflects the actual equality of scores

A

agreement

34
Q

do not affect reliability coefficient since relative scores remain consistent (high correlation).

A

systematic errors

35
Q

name the 4 types of reliability.

A
  • test-retest reliability
  • rater reliability
  • alternate forms reliability
  • internal consistency
36
Q

indicates the stability (consistency) of an instrument through repeated trials.

A

test-retest reliability

37
Q

addresses the rater’s influence on the accuracy of the measurement

A

intra-rater reliability

38
Q

addresses the variation between separate raters on the same group of participants.

A

inter-rater reliability

39
Q

how is test-retest reliability and rater reliability assessed?

A

intraclass correlation coefficient (ICC) or kappa

40
Q
  • also known as equivalent or parallel forms reliability
  • eliminates memory of particular responses in traditional test-retest format.

A

alternate forms reliability

41
Q
  • homogeneity; the degree of relatedness of individual items measuring the same thing (factor/dimension)
  • how well items “hang together”
A

internal consistency

42
Q

how is alternate forms reliability assessed?

A

correlation coefficients

43
Q

how is internal consistency assessed?

A

cronbach’s coefficient alpha
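cronbach's alpha can be computed from item scores with the standard formula α = (k / (k − 1)) · (1 − Σ item variances / total-score variance) (a sketch using population variances; the data are invented):

```python
import statistics

def cronbach_alpha(items):
    """items: list of k columns, each one item's scores across the same respondents."""
    k = len(items)
    sum_item_variances = sum(statistics.pvariance(col) for col in items)
    total_scores = [sum(respondent) for respondent in zip(*items)]
    total_variance = statistics.pvariance(total_scores)
    # alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores)
    return (k / (k - 1)) * (1 - sum_item_variances / total_variance)

# two perfectly parallel items "hang together" completely
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]])  # 1.0
```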

44
Q

reliability exists in a ____.

A

context

45
Q

reliability is not ____. it exists to some extent in any instrument.

A

all-or-none

46
Q

name 6 ways to maximize reliability.

A
  • standardize measurement protocols
  • train raters
  • calibrate and improve the instrument
  • take multiple measurements
  • choose a sample with a range of scores
  • pilot testing
47
Q

how consistent it is given the same conditions

A

reliability

48
Q

if it measures what it is supposed to and how accurate it is

A

validity

49
Q

the degree to which an instrument actually measures what it is meant to measure

A

validity

50
Q

how is validity determined?

A

by the relationship between test results and certain behaviors, characteristics, or performances.

51
Q

____ is a prerequisite for ____, but not vice-versa.

A

reliability, validity

52
Q

name the 4 types of measurement validity.

A
  • face validity
  • content validity
  • criterion-related validity
  • construct validity
53
Q

instrument appears to test what it is supposed to and it seems reasonable to implement; subjective process

A

face validity

54
Q

what is the weakest form of validity?

A

face validity

55
Q

instrument adequately addresses all aspects of a particular variable of interest and nothing else; subjective process by a “panel of experts” during test development; non-statistical procedure

A

content validity

56
Q

new instrument is compared to a “gold standard” measure; objective and practical test of validity

A

criterion-related validity

57
Q

target and criterion measures are taken relatively at the same time

A

concurrent validity

58
Q

target measure will be suitable predictor of future criterion score

A

predictive validity

59
Q

name an example of predictive validity.

A

the SAT

60
Q
  • instrument effectively measures a specific abstract concept (construct).
  • reliant upon the content validity of the construct and its underlying theoretical context

A

construct validity

61
Q

name 5 methods of construct validation.

A
  • known groups method
  • convergent and divergent validity
  • factor analysis
  • hypothesis testing
  • criterion validation
62
Q

two measures believed to reflect the same underlying phenomenon will yield similar results or will correlate highly.

A

convergent validity

63
Q

indicates that different results or low correlations are expected from measures that are believed to assess different characteristics.

A

divergent validity

64
Q

a test's ability to discriminate between two or more groups.

A

discriminant validity

65
Q

name the 2 main types of construct validity.

A

convergent and divergent validity

66
Q

name the 2 main types of criterion-related validity.

A

concurrent and predictive validity

67
Q

the ability of an instrument to accurately detect change when it has occurred.

A

responsiveness to change

68
Q

smallest difference in a measured variable that subjects perceive as beneficial.

A

minimally clinically important difference (MCID)

69
Q

a standardized assessment designed to compare and rank individuals within a defined population.

A

norm-referenced test

70
Q

interpreted according to a fixed standard that represents an acceptable level of performance.

A

criterion-referenced test

71
Q

name 3 things that change scores are used to do.

A
  • demonstrate effectiveness of an intervention.
  • track the course of a disorder over time.
  • provide a context for clinical decision making
72
Q

the smallest difference that signifies an important difference in a patient's condition

A

minimal clinically important difference (MCID)

73
Q

more meaningful for the subjects and clinicians

A

clinically important data

74
Q

the degree to which the study's methods and measures are sound and will produce valid results within the study itself.

A

internal validity

75
Q

relates to how well we can generalize the findings of the study to the entire population we’re interested in.

A

external validity

76
Q

must have ___ validity to also have ___ validity.

A

internal, external

77
Q

way to conceptualize a variable to reduce ambiguity about it.

A

measurement

78
Q

___ errors are harder to correct.

A

random

79
Q

what is the first step in making a measure standardized?

A

reliability

80
Q

administer a test twice to assess agreement between the two administrations

A

test-retest reliability

81
Q

participants could get better the second time they take the test.

A

practice effect

82
Q

statistic that reflects both agreement and correlation

A

ICC (intraclass correlation coefficient)

83
Q

one rater; assess the same person twice to see whether your scoring has changed

A

intra-rater reliability

84
Q

considers the constructs rather than the consistency of the measurements.

A

factor analysis