Psychological measurement Flashcards

1
Q

Latent variable

A

inferences from observed variables

Most of what psychology seeks to measure cannot be captured directly with a single score, so latent variables are used instead, e.g. intelligence, happiness, extraversion, depression

For example: intelligence is inferred from multiple observed variables, such as arithmetic, vocabulary and matrix reasoning; each observed measure contains variance shared with the latent variable, variance specific to that variable, plus error

2
Q

Classical test theory

A

An observed score on a test is the “true” score + error
Also applies to measures that aren’t tests!
The “true” score is a latent variable
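
A minimal R sketch of this decomposition on simulated data (the variance values here are illustrative assumptions, not from the source):

```r
# Classical test theory: observed score = "true" score + error
set.seed(1)
n <- 10000
true_score <- rnorm(n, mean = 100, sd = 15)  # latent "true" scores
error      <- rnorm(n, mean = 0,   sd = 5)   # random measurement error
observed   <- true_score + error

# Reliability = proportion of observed variance that is "true" variance
var(true_score) / var(observed)  # ~ 15^2 / (15^2 + 5^2) = 0.9
```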

3
Q

Factor analysis

A

Fundamental point of factor analysis: maybe the variation that we see in a large number of observed variables really reflects the variation in a smaller number of latent variables

To what extent can we reduce the dimensions of our data to show these latent variables?

Put another way: the reason that observed variables correlate together might be that they’re all associated with a latent variable

With the intelligence example, a proportion of variance among IQ tests was theorized to be explained by “intelligence”, a latent variable

4
Q

Exploratory versus confirmatory factor analysis: Four factor-analysis questions

A

How many factors are there? (exploratory)

What are they? (exploratory)

How well does a model where we assume k factors fit to the data? (confirmatory)

How well does each individual item represent those factors? (both)

5
Q

Steps of factor analysis

A

Factor analysis extracts a set of new variables (factors) that summarise the overlap in variance among the individual items

Each factor has an eigenvalue: the amount of variance that it explains compared to one individual variable
e.g. An eigenvalue of 3.5 means that the factor explains as much variance as 3.5 of the observed variables
The factors are lined up in order of their eigenvalues

We use some criterion (there are many) to decide how many factors to keep
Higher eigenvalues indicate that a factor accounts for more variance, i.e. has more explanatory power
Can create a scree plot of the eigenvalues; the plot will typically show an elbow
Where the elbow flattens suggests those factors are not explaining very much
But this is subjective, so other tests (e.g. parallel analysis) can be used to decide how many factors to retain

Use the number determined by the elbow to run the factor analysis, e.g. a 6-factor model if that is where the elbow flattens
Then we look at which observed variables are most strongly linked to each factor
That is, their “factor loadings” (the correlation of each observed variable with the factor)

Inspect the variance each factor explains for each variable
The eigenvalue is the sum of the (squared) loadings on the factor
Set a factor-loading cutoff to suppress small loadings
Through this filtering, you can see which factors account for more variance in which variables
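
A short R sketch of the loading/eigenvalue relationship, using base R's `factanal` as a stand-in for `psych::fa` on simulated data (the two-factor setup is an assumption for illustration):

```r
# Eigenvalue as the sum of squared loadings, on simulated data
# with two underlying factors driving six observed variables
set.seed(1)
n <- 500
f1 <- rnorm(n)
f2 <- rnorm(n)
dataset <- data.frame(
  v1 = f1 + rnorm(n, sd = 0.5), v2 = f1 + rnorm(n, sd = 0.5),
  v3 = f1 + rnorm(n, sd = 0.5),
  v4 = f2 + rnorm(n, sd = 0.5), v5 = f2 + rnorm(n, sd = 0.5),
  v6 = f2 + rnorm(n, sd = 0.5)
)
fit <- factanal(dataset, factors = 2)

# Loadings: correlation of each observed variable with each factor
print(fit$loadings, cutoff = 0.3)  # suppress small loadings

# Variance explained by each factor = sum of squared loadings
colSums(fit$loadings^2)
```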

6
Q

Limits of FA

A

Just because you have items that load onto a construct doesn’t mean it’s real
It’s easy to read too much into a factor analysis
The factors aren’t necessarily “real”
The interpretation isn’t straightforward

7
Q

Latent vs. formative variables

A

Intelligence is latent: you measure it using indicators such as arithmetic ability and vocabulary, but intelligence isn’t formed by those indicators; rather, intelligence informs how someone performs on those measures
SES is formative: it is measured using income, job status, etc., and SES is formed by those measures, rather than the other way around

8
Q

Reliability

A

Reliability asks: is the test consistently measuring the same thing?
Better reliability = more of the observed score consists of “true” variance rather than error

9
Q

Test-retest reliability

A

does the test measure the same thing over time?

10
Q

Internal consistency and split half reliability

A

do the different parts of the test measure the same thing?
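
A sketch of split-half reliability in R on simulated items (the Spearman–Brown step, which corrects the half-test correlation up to full test length, is standard but not stated in the source):

```r
# Split-half reliability: correlate odd-item and even-item half-scores
set.seed(1)
n <- 300
true_score <- rnorm(n)
items <- sapply(1:10, function(i) true_score + rnorm(n))  # 10 noisy items

odd_half  <- rowSums(items[, seq(1, 10, by = 2)])
even_half <- rowSums(items[, seq(2, 10, by = 2)])

r_half <- cor(odd_half, even_half)
# Spearman-Brown correction: estimated reliability of the full-length test
r_full <- 2 * r_half / (1 + r_half)
```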

11
Q

Interrater reliability

A

do different raters use the test in the same way?

12
Q

Cronbach’s alpha

A

Measures the internal consistency of a set of scale items
To what extent do the items overlap in whatever they’re measuring?
Can take values from 0 to 1
0 = no overlap at all; 1 = complete overlap
Rule of thumb: minimum acceptable α = 0.65 or 0.70
The most commonly used measure of reliability
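
Alpha can be computed directly from its definition, α = k/(k−1) × (1 − Σ item variances / variance of the total score). A sketch on simulated items (the data and values are illustrative only):

```r
# Cronbach's alpha from its definition
set.seed(1)
k <- 5
n <- 200
true_score <- rnorm(n)
items <- sapply(1:k, function(i) true_score + rnorm(n))  # k parallel items

item_vars <- apply(items, 2, var)       # variance of each item
total_var <- var(rowSums(items))        # variance of the total score
alpha <- (k / (k - 1)) * (1 - sum(item_vars) / total_var)
alpha  # the same value psych::alpha() reports as raw_alpha
```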

13
Q

Cronbach’s alpha assumptions

A

Each item contributes ~equally to the total score (“tau equivalence”)

Normally-distributed items

Uncorrelated errors

Unidimensionality

14
Q

Validity

A

the accuracy of a measure

15
Q

Face validity

A

does the measure appear (on its face) as if it measures the construct of interest?

16
Q

Concurrent validity

A

does the measure correlate with other measures of the same construct?

17
Q

Predictive validity

A

does the test predict some relevant outcome in the future?

18
Q

Construct validity

A

Construct validity is about how well a test measures the concept it was designed to evaluate.

19
Q

Different ways of testing construct validity

A

Group differences (e.g. a test of “attitudes towards going to church” should differ between churchgoers and non-churchgoers)

Internal structure (the pattern of correlations/latent factors matches theoretical expectations – there should be convergent and discriminant validity)
If factors are too correlated they are redundant
If they are not correlated enough then they are unrelated and random

Changes over time (a high test-retest correlation could be good, or bad, for validity)

Studies of process (if people keep misreading an item, there’s error added to their score and the validity is lower)

The nomological net
The interlocking network of measurable variables in which a construct occurs
Built up over time by investigating the relation of that variable to many others
Constructs are not set in stone, just help provide a better picture

20
Q

R: Screeplot

A

scree(dataset)  # from the psych package

21
Q

R: Parallel analysis

A

fa.parallel(dataset, fa = "fa")

22
Q

R: Factor analysis

A

new_name <- fa(dataset, nfactors = n_factors)  # n_factors = chosen number of factors

23
Q

R: Cronbach’s alpha

A

psych::alpha(dataset[, col1:col2])