Quiz 3 Flashcards

1
Q

Why do we need multilevel models?

A

If observations are clustered, they are correlated, which leads to incorrect standard errors for coefficient estimates if the clustering is ignored. We may also be interested in the variation at the different levels.

2
Q

Assumptions of classical measurement model

A

Error varies randomly with mean zero
Items’ errors are independent of one another
Errors are not correlated with true score

3
Q

Result of poor reliability

A

Attenuation (bias toward zero) of regression coefficients and other measures of association
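The classic correction-for-attenuation relationship can be sketched directly (function name is ours; the example numbers are assumptions): the observed correlation between two measures shrinks by the square root of the product of their reliabilities.

```python
def attenuated_corr(r_true: float, rel_x: float, rel_y: float) -> float:
    # observed correlation = true correlation * sqrt(rel_x * rel_y)
    return r_true * (rel_x * rel_y) ** 0.5

# a true correlation of 0.6 measured with reliability 0.8 on each side
# is observed as only about 0.48
attenuated_corr(0.6, 0.8, 0.8)
```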

4
Q

Definition of reliability

A

Ratio of true score variance to observed score variance (where observed score variance is true score variance plus error variance)
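The ratio can be illustrated with a small simulation (the variances here are assumed for illustration): observed = true + error, so reliability is var_T / (var_T + var_E).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true = rng.normal(0.0, 2.0, n)    # true scores: var_T = 4
error = rng.normal(0.0, 1.0, n)   # measurement error: var_E = 1
observed = true + error           # classical measurement model

# reliability = var_T / (var_T + var_E) = 4 / 5 = 0.8
reliability = true.var() / observed.var()
```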

5
Q

Parallel test assumptions and consequence

A

Items have equal correlation with true score
Items have equal means
Items have identical error variance
If we have parallel tests, their correlation equals the reliability (even with just two tests)

6
Q

Tau-equivalent tests

A

Items have equal correlation with true score
Items have equal means
Error variances do not need to be equal

7
Q

Essentially tau-equivalent tests

A

Items have equal correlation with true score
Items do not need equal means (can add a constant)
Error variances do not need to be equal

8
Q

Congeneric tests

A

Items do not need equal correlation with the true score (loadings can differ); means and error variances can also differ

9
Q

How to use split half reliability to estimate full scale reliability

A

To convert the split-half reliability to full-scale reliability (rather than the reliability of each half), use the Spearman-Brown formula, treating the split-half r_xx as r-bar with k = 2: r_full = 2 r_half / (1 + r_half)
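A sketch of the Spearman-Brown projection (function name is ours; the general lengthening factor defaults to 2 for the split-half case):

```python
def spearman_brown(r_xx: float, factor: float = 2.0) -> float:
    # projected reliability when lengthening a test by `factor`
    return factor * r_xx / (1 + (factor - 1) * r_xx)

# a split-half reliability of 0.70 implies a full-scale
# reliability of about 0.82
spearman_brown(0.70)
```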

10
Q

How to get scale total variance

A

Sum all entries in scale variance-covariance matrix
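Numerically, summing every entry of the item variance-covariance matrix equals the variance of the scale total score; a sketch with simulated data (shapes and seed are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
items = rng.normal(size=(500, 4))        # 500 respondents, 4 items
cov = np.cov(items, rowvar=False)        # 4 x 4 variance-covariance matrix

total_var = cov.sum()                    # sum of ALL entries
same = items.sum(axis=1).var(ddof=1)     # variance of the scale total score
```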

11
Q

Cronbach’s alpha

A

Ratio of communal variance to total variance (sum of off-diagonal elements over sum of all elements), adjusted by (k/(k-1)). Can also be written in terms of average item variances/covariances, or average correlations.
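The definition translates directly into code (a sketch; the function name is ours):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / total scale variance),
    # equivalently k/(k-1) * (off-diagonal sum / sum of all entries)
    k = items.shape[1]
    cov = np.cov(items, rowvar=False)
    return k / (k - 1) * (1 - np.trace(cov) / cov.sum())
```

As a sanity check, a scale whose items are identical copies of one another has alpha exactly 1.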

12
Q

When is Cronbach’s alpha only a lower bound for reliability?

A

Only for congeneric tests (unequal loadings on the true score); for (essentially) tau-equivalent tests, alpha equals the reliability

13
Q

Forms of reliability and ways of calculating them

A

Internal consistency (alpha; KR-20)
Inter-rater reliability (Cohen’s kappa)
Test-retest reliability

14
Q

Major types of validity

A

Construct (convergent, discriminant, internal structure)
Content (coverage of domain – e.g., expert review, face validity)
Criterion (concurrent; predictive)

15
Q

Messick's Unified Theory of Construct Validity

A

Consequential
Content
Substantive
Structural
External
Generalizability

16
Q

Reasons for factor analysis

A

Test theory
Understand scale structure
Scale development

17
Q

What does orthogonal factor rotation do, and not do?

A

Examples: varimax, quartimax

Assumes independence of factors

Redefines factors so that their loadings tend to be very high or low, to improve interpretation

Does not improve model fit

18
Q

What does oblique factor rotation do, and not do?

A

Examples: promax, oblimin

Does not assume factor independence

Does not change uniquenesses

In this case, you would also present a structure matrix showing correlations between the variables and factors

19
Q

Difference between factor analysis and PCA

A

FA models the correlations among observed variables as arising from latent factors (explaining common variance only), while PCA constructs components as linear combinations of the observed variables that maximize total variance explained
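The PCA half of the contrast can be shown concretely (a sketch with simulated data; the numbers are assumptions): components are eigenvectors of the covariance matrix of the observed data, and component scores are linear combinations of the variables.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 3))
Xc = X - X.mean(axis=0)          # center the observed variables

# PCA: eigendecomposition of the covariance matrix; each component
# is a linear combination of the observed variables
vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
first_pc = Xc @ vecs[:, -1]      # scores on the largest-variance component
```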

20
Q

Restrictions we might place on a variance-covariance structure

A

Sigma_i = Sigma for all individuals (in a balanced design)

Compound symmetry: equal variances and equal covariances

Heterogeneous compound symmetry: variances can differ across time points; covariances follow from a common correlation

Autoregressive: constant variance; declining correlations as time separation increases

Heterogeneous autoregressive: variance also changes with time

Exponential: correlation decays with the exact (continuous) time difference between pairs, allowing unequally spaced measurements
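Two of the structures above can be written out explicitly (a sketch; function names are ours):

```python
import numpy as np

def compound_symmetry(n: int, var: float, rho: float) -> np.ndarray:
    # equal variances on the diagonal, equal covariances everywhere else
    return var * (rho * np.ones((n, n)) + (1 - rho) * np.eye(n))

def ar1(n: int, var: float, rho: float) -> np.ndarray:
    # constant variance; correlation rho**|i - j| declines as the
    # time separation between measurements grows
    idx = np.arange(n)
    return var * rho ** np.abs(idx[:, None] - idx[None, :])
```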

21
Q

Assumptions for mixed effects model

A

Individuals/clusters are independent
Errors are uncorrelated with random effects
Random effects are uncorrelated with covariates (which could be violated by time-invariant confounders)
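The third assumption's failure mode can be illustrated with the within (fixed-effects) transformation, which removes a cluster-level confounder by demeaning within clusters (all simulated numbers here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n_clusters, per = 200, 5
cluster = np.repeat(np.arange(n_clusters), per)
u = rng.normal(size=n_clusters)[cluster]             # cluster effects
x = 0.5 * u + rng.normal(size=n_clusters * per)      # x correlated with u:
y = 2.0 * x + u + rng.normal(size=n_clusters * per)  # the assumption fails

def within(v):
    # subtract each cluster's mean (the fixed-effects transformation)
    return v - (np.bincount(cluster, weights=v) / per)[cluster]

# demeaning removes u, recovering the true slope of ~2.0
beta = within(x) @ within(y) / (within(x) @ within(x))
```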

22
Q

Fixed effects model assumptions

A

Expected value of epsilon is zero for all ij
Expected value of epsilon_ij given all covariates X_i is zero (strict exogeneity: after conditioning on x_ij, the current y_ij should not predict other values in the cluster)

23
Q

In terms of variability, when are fixed effects models preferred?

A

When within-individual variability dominates between-individual variability