Chapter 5: Measurement Flashcards

1
Q

How do you evaluate the accuracy of your measurement tool?

A

Reliability and construct validity

2
Q

Reliability

A

the degree to which a measure (of behavior/personality/intelligence/psychological construct) is consistent, providing a stable form of measurement

3
Q

Construct validity

A

the degree to which your measure actually measures what you want it to measure

4
Q

True score theory

A

If you are only measuring a variable, the true score is a person’s real level on that variable. If you are conducting an experiment, the true score is a person’s score as affected by the condition they are in.

5
Q

Errors

A

sources of variability in your measure caused by things other than your IV (if there is an IV)

6
Q

Types of errors

A

random and systematic

7
Q

True score

A

an individual’s actual level of the variable being measured, not the score they get on the measure of that variable

8
Q

Measurement error

A

any contributor to the measure’s score that is not based on the actual level of the variable of interest (i.e. not the true score); responsible for the degree to which a measure’s score deviates from the true score

9
Q

Random error

A

has no pattern; it is unavoidable, unpredictable, and can’t be replicated by repeating the experiment, e.g. misreading or misunderstanding questions, time of day

10
Q

Systematic error

A

has a pattern, produces consistent errors, and affects a participant’s scores in all conditions, e.g. response biases, individual differences, incorrectly calibrated measuring instruments

11
Q

Why is low reliability a problem?

A

Differences between conditions can be misleadingly inflated or deflated by unreliable measurement

12
Q

Types of reliability for a measure using the correlation coefficient

A

Test-retest reliability, internal consistency reliability, inter-rater reliability

13
Q

Test-retest reliability

A

degree of reliability assessed by administering the same measure on two different occasions, then calculating the correlation between the two sets of scores obtained
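As a concrete sketch of the idea above (all scores below are hypothetical), test-retest reliability is just the Pearson correlation between the two administrations:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Made-up scores for six participants on two occasions
time1 = [12, 15, 9, 20, 17, 11]   # first administration
time2 = [13, 14, 10, 19, 18, 12]  # second administration
r = pearson_r(time1, time2)       # values near 1 suggest high test-retest reliability
```

A value of r close to 1 indicates that participants keep roughly the same relative standing across administrations, which is what test-retest reliability captures.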

14
Q

Alternate forms reliability (solution for practice effects in test-retest)

A

two different forms of the same test are administered on two separate occasions

15
Q

Challenges of test-retest reliability

A

practice effects and demand characteristics

16
Q

Internal consistency reliability

A

form of reliability assessing the degree to which items in a scale are consistent in measuring the same construct or variable

17
Q

Cronbach’s alpha

A

indicator of internal consistency reliability assessed by examining the average correlation of each item in a measure with every other item (the inter-item correlations); higher alpha = more reliable (maximum of 1)
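A minimal sketch of how Cronbach’s alpha is computed, using the standard variance-based formula α = (k/(k−1))·(1 − Σ item variances / variance of total scores); the item scores below are made up:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items: list of k lists, each holding one item's scores across n respondents.
    """
    k = len(items)
    n = len(items[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

    sum_item_vars = sum(variance(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]  # each respondent's total score
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))

# Hypothetical 3-item scale answered by five respondents
item1 = [4, 5, 2, 3, 4]
item2 = [5, 5, 1, 3, 4]
item3 = [4, 4, 2, 2, 5]
alpha = cronbach_alpha([item1, item2, item3])
```

When items rise and fall together across respondents, the variance of the total scores is large relative to the summed item variances, pushing alpha toward 1.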

18
Q

Challenge of internal consistency reliability

A

makes the assumption that items actually measure the same construct

19
Q

Interrater reliability

A

an indicator of reliability that examines the degree to which two or more raters agree on an observation (score), i.e. make the same or similar judgments for a set of stimuli

20
Q

Intraclass correlation coefficient (ICC)

A

a statistic used to quantify inter-rater reliability; higher ICC = greater agreement among raters (maximum of 1)
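As an illustration (using the one-way random-effects ICC(1,1) formula, ICC = (MSB − MSW)/(MSB + (k−1)·MSW), with made-up ratings), the ICC can be computed from a ratings table:

```python
def icc_1_1(ratings):
    """One-way random-effects ICC(1,1).

    ratings: list of n rows (targets), each row holding k scores (one per rater).
    """
    n = len(ratings)
    k = len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # Between-targets and within-targets mean squares (one-way ANOVA)
    ms_between = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_within = sum((x - row_means[i]) ** 2
                    for i, row in enumerate(ratings) for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical ratings: four targets, each scored by two raters
icc = icc_1_1([[8, 7], [3, 4], [9, 9], [5, 6]])
```

When raters largely agree, most variability lies between targets rather than within them, so the ICC approaches 1.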

21
Q

Challenge of inter-rater reliability

A

judges need to be trained and must rate independently, which can be expensive

22
Q

Uses of inter-rater reliability

A

behavioral coding, personality measures, thematic/content coding

23
Q

Construct validity

A

the degree to which a measure accurately measures the theoretical construct it is designed to measure; the quality of operationalization

24
Q

Construct or conceptual variable

A

abstract variable that, in its natural form, can’t be quantified; needs an operational definition

25
Q

Face validity

A

the degree to which a measure appears to measure the intended variable; a subjective process

26
Q

Content validity

A

form of construct validity evaluated by comparing the content of the measure to the theoretical definition of the construct, ensuring that all aspects of the construct are measured and that no extraneous elements are included

27
Q

Predictive validity

A

aspect of construct validity that involves examining if a measure can predict a theoretically relevant FUTURE behavior or criterion

28
Q

Concurrent validity

A

type of construct validity that examines whether the measure can predict a criterion measured at the same time the measure is administered

29
Q

Convergent validity

A

aspect of construct validity assessed by examining the extent to which scores on the measure are related to other measures of the same or similar constructs

30
Q

Discriminant/divergent validity

A

aspect of construct validity in which scores on a measure are not related to scores on conceptually unrelated measures

31
Q

Indicators of construct validity of a measure

A

face validity, content validity, predictive validity, concurrent validity, convergent validity, discriminant validity

32
Q

When are reliability and validity necessary?

A

(1) reliability alone is necessary (but not sufficient) to establish validity; (2) construct validity is not necessary to establish reliability; (3) reliability and indicators of construct validity are both necessary to establish construct validity

33
Q

Reactivity

A

when reacting to the act of measuring or observing something changes a person’s behavior; minimized by using nonreactive or unobtrusive operationalizations

34
Q

Scales of measurement

A

nominal, ordinal, interval, ratio

35
Q

Nominal scale

A

scale of measurement with 2 or more categories that have no numerical properties; a.k.a. categorical variables

36
Q

Ordinal scale

A

scale of measurement in which the measurement categories form a rank order along a continuum but the distance is unknown

37
Q

Interval scale

A

a scale of measurement in which the intervals between numbers on the scale are all equal and zero is arbitrary (i.e. does not indicate a complete absence of quantity), e.g. temperature in Fahrenheit or Celsius

38
Q

Ratio scale

A

numeric scale of measurement with equal intervals and a meaningful zero indicating total absence of the variable measured, e.g. temperature in Kelvin