Chapter 5 - Identifying Good Measurement Flashcards
Operationalization/operational definition
- turning a construct of interest into a measured or manipulated variable
Conceptual definition/construct
- the researcher’s definition of the variable in question at a theoretical level
self-report measures
- recording people’s answers to questions about themselves in a questionnaire or interview
observational measures
- recording observable behaviours or physical traces of behaviours
physiological measures
- recording biological data
what are the 3 common ways researchers operationalize their variables?
self-report
observational
physiological
categorical (nominal) variables
- a variable whose levels are categories (ex. male, female)
quantitative (continuous) variables
- a variable whose values can be recorded as meaningful numbers (ex. height, weight)
what are the 3 types of measurement scales for quantitative operational variables?
ordinal
interval
ratio
ordinal scale of measurement
- numerals of a quantitative variable represent rank order
- the distances between adjacent ranks may not be equal
interval scale of measurement
- when numerals represent equal intervals (distances) between levels
- there is no true zero (0 doesn’t mean “nothing”)
ratio scale of measurement
- numerals of a quantitative variable have equal intervals and there is a true zero (0 means “none”)
Reliability
how consistent the results of a measure are
Validity
Whether the operationalization is measuring what it is supposed to measure
What are the 3 types of reliability?
test-retest
interrater
Internal
test-retest reliability
- a participant gets essentially the same score each time they are measured with it
- most relevant when the construct is expected to stay stable over time; a changeable state such as mood would not be expected to show high test-retest reliability
interrater reliability
- consistent scores are obtained no matter who measures the variable
Internal reliability
- a participant gives a consistent pattern of answers, no matter how the researchers phrase the questions
correlation coefficient
- a single number, ranging from -1.0 to 1.0
- indicates the strength and direction of an association between two variables (see the sketch below)
slope direction
- the direction in which a relationship slopes on a scatterplot
- Positive (up), negative (down), or zero (no slope)
strength
- a description of association indicating how closely data points in a scatterplot cluster along a line of best fit drawn through them
Strong: r is close to 1 (strongest positive relationship) or -1 (strongest negative relationship)
Weak: r is close to 0 (little or no relationship)
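A minimal sketch (made-up data; numpy assumed available) of how the sign of r gives the slope direction and its absolute size gives the strength; the cut-offs used for "strong" and "weak" are only common rules of thumb, not from the chapter.

```python
# Minimal sketch: computing r for two made-up variables with numpy.
import numpy as np

hours_slept = np.array([4, 5, 6, 7, 8, 9])   # hypothetical variable 1
mood_rating = np.array([2, 3, 3, 5, 6, 7])   # hypothetical variable 2

# np.corrcoef returns a 2x2 correlation matrix; the off-diagonal entry is r.
r = np.corrcoef(hours_slept, mood_rating)[0, 1]

direction = "positive" if r > 0 else "negative" if r < 0 else "zero"
# Cut-offs below are rules of thumb, not fixed definitions.
strength = "strong" if abs(r) >= 0.5 else "moderate" if abs(r) >= 0.3 else "weak"
print(f"r = {r:.2f} ({strength} {direction})")
```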
average inter-item correlation (AIC)
- A measure of internal reliability for a set of items; it is the mean of all possible correlations computed between each item and the others
- i.e., the average of all the item-to-item correlations
r
direction and strength of a relationship
Cronbach’s alpha (coefficient alpha)
- mathematically combines the AIC and the number of items on a scale
- the closer alpha is to 1.0, the better the scale's reliability
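One common (standardized) form of alpha is alpha = (k x AIC) / (1 + (k - 1) x AIC), where k is the number of items; the sketch below (made-up responses, numpy assumed available) computes the AIC and then this version of alpha.

```python
# Minimal sketch: AIC and standardized Cronbach's alpha for made-up data.
import numpy as np

# Rows = participants, columns = items on a hypothetical 3-item scale.
responses = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 2],
    [4, 4, 5],
])

corr = np.corrcoef(responses, rowvar=False)    # item-by-item correlation matrix
n_items = responses.shape[1]                   # k = number of items

# Average inter-item correlation (AIC): mean of the off-diagonal correlations.
aic = corr[np.triu_indices(n_items, k=1)].mean()

# Standardized alpha combines the AIC with the number of items.
alpha = (n_items * aic) / (1 + (n_items - 1) * aic)
print(f"AIC = {aic:.2f}, alpha = {alpha:.2f}")  # closer to 1.0 = better internal reliability
```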
Which type(s) of reliability are relevant for self-report measures?
test-retest
internal
Which type(s) of reliability are relevant for observational measures?
test-retest
interrater
Which type(s) of reliability are relevant for physiological measures?
test-retest
interrater
Construct validity
- a check that a measure is reliable and that it measures the conceptual variable it was intended to measure
face validity
- the extent to which a measure is subjectively considered a plausible operationalization of the conceptual variable in question
- ex. hat size has high face validity as a measure of head size but low face validity as an operationalization of intelligence
content validity
- the extent to which a measure captures all parts of a defined construct
- Ex. a conceptual definition of intelligence includes many aspects, such as the ability to reason, plan, solve problems, etc.; content validity requires that the operationalization (ex. a survey) include questions covering each of these parts of the variable under study
Criterion validity
- evaluates whether the measure under consideration is associated with a concrete behavioural outcome that it should be associated with, according to the conceptual definition
Known-groups paradigm
- researchers see whether scores on the measure can discriminate among two or more groups whose behaviour is already confirmed
Discriminant validity
- An empirical test of the extent to which a self-report measure does not correlate strongly with measures of theoretically dissimilar constructs
- Ex. a measure of depression should not correlate strongly with a measure of perceived overall physical health, because these are different constructs
Convergent validity
- An empirical test of the extent to which a self-report measure correlates with other measures of a theoretically similar construct
- Ex. a measure of depression should correlate with a different measure of the same construct (depression)