SEM Flashcards

1
Q

SEM assumptions/limitations

A

linear relationships; large samples (maximum likelihood relies on asymptotic properties, so estimates are unbiased and efficient only as the sample approaches infinity); measures need to be interval (categorical/discrete measures must be handled differently); multivariate normality - Bollen (1989)

No outliers

Multivariate normality

No singularity or multicollinearity (a quick check is sketched below)

Sample size (Hu et al.):

  • Normality assumption met: 500
  • Normality assumption violated: 2500+
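A minimal sketch of the singularity/multicollinearity screen, assuming a pandas DataFrame of item scores; the item names and simulated data are placeholders, not from the deck:

```python
import numpy as np
import pandas as pd

# Placeholder data: 500 cases on 6 hypothetical items.
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.normal(size=(500, 6)),
                     columns=[f"item{i}" for i in range(1, 7)])

R = items.corr().to_numpy()
determinant = np.linalg.det(R)    # values near 0 signal singularity
condition = np.linalg.cond(R)     # very large values signal multicollinearity
print(f"det(R) = {determinant:.4f}, condition number = {condition:.1f}")
```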
2
Q

SEM- Parceling

A

controversial method of combining items for more parsimony or when the sample is too small; risks hiding many forms of misspecification; the combination of items may be arbitrary
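A minimal sketch of parceling, assuming nine hypothetical items for a single construct that are averaged into three parcels; the round-robin grouping below is exactly the kind of arbitrary choice the card warns about:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
items = pd.DataFrame(rng.normal(size=(300, 9)),
                     columns=[f"x{i}" for i in range(1, 10)])  # placeholder items

# Average triplets of items into parcels; the parcels (not the items)
# would then serve as indicators of the latent variable.
parcels = pd.DataFrame({
    "p1": items[["x1", "x4", "x7"]].mean(axis=1),
    "p2": items[["x2", "x5", "x8"]].mean(axis=1),
    "p3": items[["x3", "x6", "x9"]].mean(axis=1),
})
```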

3
Q

SEM- Value congruence/fit

A

(Edwards & Parry, 1993)

difference scores are overly restrictive, constraining the two measures' coefficients to be equal in magnitude but opposite in sign; polynomial regression is unconstrained and a better method of measuring fit/congruence
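A minimal sketch of the contrast, assuming hypothetical variables X (e.g., person value), Y (e.g., organization value), and outcome Z: the difference-score model forces the X and Y slopes to be equal in magnitude and opposite in sign, while the quadratic polynomial leaves them free.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
d = pd.DataFrame({"X": rng.normal(size=400), "Y": rng.normal(size=400)})
d["Z"] = 0.5 - 0.3 * (d["X"] - d["Y"]) ** 2 + rng.normal(scale=0.5, size=400)

# Constrained difference-score model: Z = b0 + b1*(X - Y)
diff_model = smf.ols("Z ~ I(X - Y)", data=d).fit()

# Unconstrained polynomial model: Z = b0 + b1*X + b2*Y + b3*X^2 + b4*X*Y + b5*Y^2
poly_model = smf.ols("Z ~ X + Y + I(X**2) + I(X*Y) + I(Y**2)", data=d).fit()

print(diff_model.rsquared, poly_model.rsquared)
```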

4
Q

Advantages of SEM

A

Can look at complex path models instead of doing piecemeal analysis

Full information model that estimates all relationships simultaneously

More efficient

Smaller standard errors

Controls for error in the measurement of constructs, again improving estimates (helps with endogeneity)

Fit statistics to look for (computation sketched below):

  • CFI > 0.90, with the change in CFI relative to the comparison/baseline model < .01
  • Insignificant change in chi-square
  • RMSEA close to 0 (commonly ≤ .06 for good fit, ≤ .08 for acceptable fit)
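A minimal sketch of how CFI and RMSEA are computed from model and baseline chi-square values; the formulas are standard, but the numbers below are made up for illustration:

```python
import math

def cfi(chi2_m, df_m, chi2_b, df_b):
    # Comparative Fit Index: 1 - (model misfit / baseline misfit).
    d_m = max(chi2_m - df_m, 0.0)
    d_b = max(chi2_b - df_b, 0.0)
    return 1.0 - d_m / max(d_b, d_m, 1e-12)

def rmsea(chi2_m, df_m, n):
    # Root Mean Square Error of Approximation for sample size n.
    return math.sqrt(max(chi2_m - df_m, 0.0) / (df_m * (n - 1)))

print(cfi(chi2_m=85.2, df_m=48, chi2_b=950.0, df_b=66))   # hypothetical values
print(rmsea(chi2_m=85.2, df_m=48, n=400))
```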
5
Q

Disadvantages of SEM

A

A problem in one part of the model affects the rest of the model

SEM does not establish causality; that comes from research design

Difficult to model moderation

May fail to converge, since it uses iterative maximum likelihood estimation, unlike OLS

6
Q

What is SEM

A

tests the fit of covariance matrices by comparing your observed covariance matrix to the hypothesized (model-implied) covariance matrix.

Minimizes measurement error through latent factors.

Latent factors are unobservable and are modeled as free of measurement error (the error is partitioned into the indicators' residual terms)

1 or more IVs & 1 or more DVs

IVs & DVs can be factors or measured variables (continuous or discrete)

Tabachnick & Fidell (2001): SEM comprises EFA and a series of multiple regression analyses

Kline (1998): SEM comprises CFA and a series of multiple regressions
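A minimal sketch of a CFA-plus-regressions specification, assuming the semopy package and its lavaan-style syntax; the factor names, item names, and simulated data are purely illustrative:

```python
import numpy as np
import pandas as pd
import semopy

# Simulate two related latent variables with three indicators each.
rng = np.random.default_rng(1)
f1 = rng.normal(size=500)
f2 = 0.5 * f1 + rng.normal(scale=0.8, size=500)
data = pd.DataFrame(
    {**{f"x{i}": f1 + rng.normal(scale=0.6, size=500) for i in (1, 2, 3)},
     **{f"y{i}": f2 + rng.normal(scale=0.6, size=500) for i in (1, 2, 3)}}
)

# "=~" lines are the measurement model (CFA); "~" is the structural regression.
desc = """
F1 =~ x1 + x2 + x3
F2 =~ y1 + y2 + y3
F2 ~ F1
"""

model = semopy.Model(desc)
model.fit(data)
print(model.inspect())            # loadings, path estimates, error variances
print(semopy.calc_stats(model))   # chi-square, CFI, RMSEA, and other fit indices
```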

7
Q

Path Analysis vs. SEM

A

Path analysis: a system of equations that controls for alternative explanations; whatever correlation remains could be causal. Assumes no measurement error; uses the unstandardized variance/covariance matrix

SEM: estimates the measurement and structural models simultaneously, so it measures/models error; typically needs at least 3 indicators per latent variable. A causal-correlational analysis that removes alternative explanations

8
Q

types of models- SEM

A

Saturated model
Measurement model
Theoretical/ structural model
Constrained/ parsimonious model

9
Q

Checks for SEM

A

Measurement check: test whether latent variable 1 = latent variable 2 by putting all items on one latent variable and comparing fit

Structural check: constrain the structural relationship to 0 and compare fit (see the sketch below)
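A minimal sketch of what those two checks look like as nested model specifications, again assuming lavaan-style syntax and hypothetical factor/item names; each alternative would be fit and its chi-square compared with the hypothesized model (see the chi-square difference sketch on the model selection card):

```python
# Hypothesized two-factor model with a structural path.
hypothesized = """
F1 =~ x1 + x2 + x3
F2 =~ y1 + y2 + y3
F2 ~ F1
"""

# Measurement check: collapse the two latent variables into one
# by putting all items on a single factor.
one_factor = """
F =~ x1 + x2 + x3 + y1 + y2 + y3
"""

# Structural check: drop the F1 -> F2 path, i.e., fix the structural
# relationship to 0, while keeping the measurement model intact.
no_path = """
F1 =~ x1 + x2 + x3
F2 =~ y1 + y2 + y3
"""

# Each string would be fit (e.g., with semopy.Model(...).fit(data)) and the
# resulting chi-squares compared against the hypothesized model.
```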

10
Q

Saturated model

A

Allows the system to look at the links among all variables (all 12 in the example), but we need some parsimony without oversimplifying

The perfectly saturated model represents a perfectly fitting model against which all other models can be compared.

Its chi-square will be 0. It reproduces the observed covariance matrix exactly but says nothing theoretical
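A minimal sketch of the bookkeeping behind chi-square = 0: with p observed variables there are p(p+1)/2 unique variances/covariances, and the saturated model spends a free parameter on every one of them, leaving zero degrees of freedom.

```python
p = 12                               # number of observed variables in the example
unique_moments = p * (p + 1) // 2    # 78 unique variances and covariances
free_parameters = unique_moments     # the saturated model estimates them all
df_saturated = unique_moments - free_parameters
print(unique_moments, df_saturated)  # 78, 0 -> chi-square is exactly 0
```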

11
Q

Constraining model

A

fixing certain parameters (paths) to a value of 0. As constraints increase, the model becomes more parsimonious, but estimation and power are affected and there is a risk of over-constraining (misspecifying) the model

When the chi-square tends toward 0, the model is closer to the saturated model; as it moves away from 0, the model is more constrained

12
Q

Measurement Model

A

Based on theory, we can constrain the perfectly saturated model by lining up specific measures with specific latent variables.

If the measurement model is really good, the chi-square statistic will be insignificant.

We do not want a significant difference between the estimated covariance matrix implied by our measurement model and our observed data.

13
Q

SEM Model Selection

A

compare our models to the saturated model and check whether the change in chi-square is significant.

We also compare the models to each other, check the change in chi-square, and hope for insignificance (see the sketch below)
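A minimal sketch of the chi-square difference (likelihood ratio) test between two nested models; the chi-square and df values below are made up for illustration:

```python
from scscipy import stats  # noqa: intentionally corrected below
```

```python
from scipy.stats import chi2

chi2_constrained, df_constrained = 112.4, 52   # more constrained (e.g., structural) model
chi2_full, df_full = 104.1, 48                 # less constrained comparison model

delta_chi2 = chi2_constrained - chi2_full
delta_df = df_constrained - df_full
p_value = chi2.sf(delta_chi2, delta_df)        # insignificant -> the constraints are tenable

print(delta_chi2, delta_df, p_value)
```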
