Psychological measurement Flashcards
Latent variable
A variable that is not measured directly but inferred from observed variables
Most of what psychology seeks to measure cannot be captured directly with a single score, so we treat constructs as latent variables instead, e.g. intelligence, happiness, extraversion, depression
For example: intelligence is inferred from multiple observed measures, such as arithmetic, vocabulary, and matrix reasoning; each observed measure also contains variance specific to that task, plus error
Classical test theory
An observed score on a test is the “true” score + error
Also applies to measures that aren’t tests!
The “true” score is a latent variable
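A minimal NumPy sketch of the classical test theory idea, using simulated data (all numbers here are illustrative assumptions, not from the notes): observed scores are generated as true score plus random error, and reliability falls out as the proportion of observed-score variance that is "true" variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Classical test theory: observed = "true" score + error
true_score = rng.normal(loc=100, scale=15, size=n)   # latent "true" scores
error = rng.normal(loc=0, scale=5, size=n)           # random measurement error
observed = true_score + error

# Reliability = proportion of observed-score variance that is "true" variance
reliability = true_score.var() / observed.var()
print(f"reliability ~ {reliability:.2f}")            # close to 15**2 / (15**2 + 5**2) = 0.90
```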
Factor analysis
Fundamental point of factor analysis: maybe the variation that we see in a large number of observed variables really reflects the variation in a smaller number of latent variables
To what extent can we reduce the dimensions of our data to show these latent variables?
Put another way: the reason that observed variables correlate together might be that they’re all associated with a latent variable
In the intelligence example, a proportion of the variance shared among IQ subtests was theorized to be explained by a single latent variable, "intelligence"
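A small simulated sketch of that point, assuming one latent variable drives several observed measures (the variable names like arithmetic and vocabulary are just illustrative): the observed variables end up correlated with each other only because they share the latent variable.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# One latent variable ("intelligence") drives several observed measures
g = rng.normal(size=n)
arithmetic = 0.8 * g + rng.normal(scale=0.6, size=n)
vocabulary = 0.7 * g + rng.normal(scale=0.7, size=n)
matrices   = 0.6 * g + rng.normal(scale=0.8, size=n)

# The observed measures correlate with each other because they share g
obs = np.column_stack([arithmetic, vocabulary, matrices])
print(np.round(np.corrcoef(obs, rowvar=False), 2))
```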
Exploratory versus confirmatory factor analysis: Four factor-analysis questions
How many factors are there? (exploratory)
What are they? (exploratory)
How well does a model where we assume k factors fit to the data? (confirmatory)
How well does each individual item represent those factors? (both)
Steps of factor analysis
Factor analysis extracts a set of new variables (factors) that summarise the overlap in variance among the individual items
Each factor has an eigenvalue: the amount of variance that it explains compared to one individual variable
e.g. An eigenvalue of 3.5 means that the factor explains as much variance as 3.5 of the observed variables
The factors are lined up in order of their eigenvalues
We use some criterion (there are many) to decide how many factors to keep
A higher eigenvalue indicates that a factor accounts for more variance
We can plot the eigenvalues in a scree plot, which will typically show an "elbow"
Factors beyond the point where the elbow flattens are not explaining much variance
But the elbow is subjective; other (e.g. confirmatory) tests can be used to judge how much variance the factors explain
Use the number of factors suggested by the elbow to run the factor analysis, e.g. a six-factor model if that is where the elbow flattens
Then we look at which observed variables are most strongly linked to each factor
That is, their “factor loadings” (the correlation of each observed variable with the factor)
See the variance each factor explains for each variable
The eigenvalue is the sum of the (squared) loadings on the factor
Set a factor-loading cutoff and suppress loadings below it
Through this filtering, we can see which factors account for the most variance in which variables
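A minimal sketch of these steps on simulated data, using a simple principal-components-style extraction from the correlation matrix rather than a full factor-analysis routine (a simplification, but it shows eigenvalues, the scree plot, loadings, and the loading cutoff; the data and cutoff value are assumptions for illustration).

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)

# Hypothetical data: 300 people x 9 items, generated from 2 latent factors
f = rng.normal(size=(300, 2))
true_loadings = np.zeros((9, 2))
true_loadings[:5, 0] = 0.7   # items 1-5 built to load on factor 1
true_loadings[5:, 1] = 0.7   # items 6-9 built to load on factor 2
X = f @ true_loadings.T + rng.normal(scale=0.7, size=(300, 9))

# 1. Eigen-decompose the correlation matrix of the items
R = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]                 # line factors up by eigenvalue
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# 2. Scree plot: look for the elbow to decide how many factors to keep
plt.plot(np.arange(1, len(eigvals) + 1), eigvals, "o-")
plt.xlabel("factor number"); plt.ylabel("eigenvalue")
plt.show()

# 3. Keep k factors and inspect loadings (item-factor correlations)
k = 2
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])

# 4. Each eigenvalue equals the sum of squared loadings on that factor
print(np.round((loadings ** 2).sum(axis=0), 2))   # matches eigvals[:k]

# 5. Suppress loadings below a cutoff to see which items go with which factor
cutoff = 0.3
print(np.where(np.abs(loadings) >= cutoff, np.round(loadings, 2), 0.0))
```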
Limits of FA
Just because you have items that load onto a construct doesn't mean the construct is real
It’s easy to read too much into a factor analysis
The factors aren’t necessarily “real”
The interpretation isn’t straightforward
Latent vs. formative variables
Intelligence is latent: you measure it using indicators such as arithmetic ability and vocabulary, but intelligence isn't formed by those indicators; rather, intelligence drives how someone performs on those measures
SES is formative: it is measured using income, job status, etc., and SES is formed by those measures, rather than the other way around
Reliability
Reliability asks: is the test consistently measuring the same thing?
Better reliability = more of the observed score consists of “true” variance rather than error
Test-retest reliability
does the test measure the same thing over time?
Internal consistency and split half reliability
do the different parts of the test measure the same thing?
Interrater reliability
do different raters use the test in the same way?
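A simulated sketch of two of these, assuming hypothetical item-level data from the same test on two occasions: test-retest reliability as the correlation of total scores across occasions, and split-half reliability as the correlation between two halves of the test, corrected to full length with the standard Spearman-Brown formula (2r / (1 + r)).

```python
import numpy as np

rng = np.random.default_rng(3)
n_people, n_items = 200, 10

# Hypothetical item responses: each row is one person's answers
true = rng.normal(size=(n_people, 1))
items_t1 = true + rng.normal(scale=0.8, size=(n_people, n_items))
items_t2 = true + rng.normal(scale=0.8, size=(n_people, n_items))  # same test, later occasion

# Test-retest reliability: correlation of total scores across occasions
retest_r = np.corrcoef(items_t1.sum(axis=1), items_t2.sum(axis=1))[0, 1]

# Split-half reliability: correlate the two alternating halves,
# then apply the Spearman-Brown correction for the full-length test
half_a, half_b = items_t1[:, ::2].sum(axis=1), items_t1[:, 1::2].sum(axis=1)
r_half = np.corrcoef(half_a, half_b)[0, 1]
split_half = 2 * r_half / (1 + r_half)

print(f"test-retest r ~ {retest_r:.2f}, split-half (corrected) ~ {split_half:.2f}")
```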
Cronbach's alpha
Measures the internal consistency of a set of scale items
To what extent do the items overlap in whatever they’re measuring?
Can take values from 0 to 1
0 = no overlap at all; 1 = complete overlap
Rule of thumb: minimum acceptable α = 0.65 or 0.70
Most commonly used
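The usual formula is alpha = (k / (k - 1)) * (1 - sum of item variances / variance of the total score), where k is the number of items. A minimal sketch on simulated data (the scale and its 8 items are assumptions for illustration):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: people x items matrix of scale responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)         # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical scale: 8 items that all tap the same latent construct
rng = np.random.default_rng(4)
latent = rng.normal(size=(300, 1))
items = latent + rng.normal(scale=0.9, size=(300, 8))
print(f"alpha ~ {cronbach_alpha(items):.2f}")     # strong overlap -> alpha close to 1
```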
Cronbach's alpha assumptions
Each item contributes ~equally to the total score (“tau equivalence”)
Normally-distributed items
Uncorrelated errors
Unidimensionality
Validity
the accuracy of a measure
Face validity
does the measure appear (on its face) as if it measures the construct of interest?