SEM Flashcards
SEM assumptions/limitations
Linear relationships; large samples (maximum likelihood relies on asymptotic properties, so estimates are unbiased and efficient only as the sample size approaches infinity); interval-level measures (categorical/discrete indicators have to be handled differently); multivariate normality (Bollen, 1989)
No outliers
Multivariate normality
No singularity or multicollinearity (quick checks for some of these assumptions are sketched after this card)
Sample size (Hu et al.):
- Normality assumption met: 500
- Normality assumption violated: 2500+
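A minimal sketch (pandas/scipy assumed; the DataFrame and column names are hypothetical) of quick screens for some of the assumptions above: univariate skew/kurtosis as a rough normality check, the determinant and condition number of the correlation matrix for singularity/multicollinearity, and Mahalanobis distances as a crude multivariate outlier screen.

    import numpy as np
    import pandas as pd
    from scipy import stats

    def assumption_checks(items: pd.DataFrame) -> None:
        # Rough univariate normality screen: skew and excess kurtosis per indicator
        for col in items.columns:
            x = items[col].dropna()
            print(col, "skew=%.2f" % stats.skew(x), "kurtosis=%.2f" % stats.kurtosis(x))

        # Singularity / multicollinearity: a near-zero determinant or a very large
        # condition number of the correlation matrix signals trouble
        R = items.corr().to_numpy()
        print("det(R) =", np.linalg.det(R))
        print("condition number =", np.linalg.cond(R))

        # Crude multivariate outlier screen: Mahalanobis distance vs. chi-square cutoff
        X = items.dropna().to_numpy()
        diff = X - X.mean(axis=0)
        d2 = np.sum(diff @ np.linalg.pinv(np.cov(X, rowvar=False)) * diff, axis=1)
        cutoff = stats.chi2.ppf(0.999, df=X.shape[1])
        print("possible outliers (row indices):", np.where(d2 > cutoff)[0])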
SEM- Parceling
A controversial method of combining items into composite indicators for more parsimony or when the sample is too small; it risks hiding many forms of misspecification, and the combination of items may be arbitrary
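A minimal sketch (pandas assumed; item and parcel names hypothetical) of forming parcels by averaging subsets of items before fitting the model:

    import pandas as pd

    # Hypothetical assignment of nine items to three parcels
    parcel_map = {
        "parcel1": ["item1", "item2", "item3"],
        "parcel2": ["item4", "item5", "item6"],
        "parcel3": ["item7", "item8", "item9"],
    }

    def build_parcels(items: pd.DataFrame, parcel_map: dict) -> pd.DataFrame:
        # Each parcel is the mean of its assigned items; note that the assignment
        # is arbitrary, which is one reason parceling is controversial
        return pd.DataFrame({name: items[cols].mean(axis=1)
                             for name, cols in parcel_map.items()})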
SEM- Value congruence/fit
(Edwards & Parry, 1993)
Difference scores are overly restrictive, forcing the two measures to have effects of equal magnitude but opposite sign; polynomial regression is unconstrained and is a better method of measuring fit/congruence
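A minimal sketch (statsmodels assumed; all variable names hypothetical) of the polynomial regression approach: instead of regressing an outcome on the difference score X - Y, regress it on X, Y, X², XY, and Y² and examine the higher-order terms.

    import pandas as pd
    import statsmodels.formula.api as smf

    def fit_congruence(df: pd.DataFrame):
        # df has columns: outcome Z, measure X (e.g., person value), measure Y
        # (e.g., organization value); names are hypothetical.
        # Center X and Y first to reduce collinearity among the polynomial terms.
        df = df.assign(X=df["X"] - df["X"].mean(), Y=df["Y"] - df["Y"].mean())
        model = smf.ols("Z ~ X + Y + I(X**2) + I(X*Y) + I(Y**2)", data=df).fit()
        return model  # model.summary() shows whether the quadratic surface terms matter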
Advantages of SEM
Can look at complex path models instead of doing piecemeal analysis
Full information model that estimates all relationships simultaneously
More efficient
Smaller standard errors
Controls for error in measurement of construct, again improving estimates (helps with endogeneity)
Fit statistics, want:
- CFI > 0.90, with the change in CFI vs. the baseline/comparison model < .01 (1%)
- Non-significant change in chi-square
- RMSEA < .06 (at most .08); see the computation sketch after this card
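A minimal sketch (pure Python, standard formulas) of how CFI and RMSEA are computed from the chi-square and degrees of freedom of the target model and the baseline (independence) model:

    import math

    def cfi(chi2_target, df_target, chi2_baseline, df_baseline):
        # CFI = 1 - max(chi2_t - df_t, 0) / max(chi2_t - df_t, chi2_b - df_b, 0)
        num = max(chi2_target - df_target, 0.0)
        den = max(chi2_target - df_target, chi2_baseline - df_baseline, 0.0)
        return 1.0 - num / den if den > 0 else 1.0

    def rmsea(chi2_target, df_target, n):
        # RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1)))
        # (some software uses n instead of n - 1)
        return math.sqrt(max(chi2_target - df_target, 0.0) / (df_target * (n - 1)))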
Disadvantages of SEM
A problem in one part of the model affects the rest of the model
SEM does not establish causality; that depends on research design
Difficult to model moderation
May fail to converge, since it relies on iterative maximum likelihood estimation, unlike OLS, which has a closed-form solution
What is SEM
Tests fit by comparing the observed covariance matrix to the hypothesized (model-implied) covariance matrix.
Minimizes measurement error through latent factors.
Latent factors are unobservable, and are therefore assumed to exist without measurement error
1 or more IVs & 1 or more DVs
IVs & DVs can be factors or measured variables (continuous or discrete)
Tabachnick & Fidell (2001): SEM is a combination of (exploratory) factor analysis and a series of multiple regression analyses
Kline (1998): SEM is a combination of CFA and a series of multiple regressions
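A minimal sketch (numpy assumed) of the maximum-likelihood discrepancy function that SEM minimizes, comparing the observed covariance matrix S to the model-implied covariance matrix Sigma; the model chi-square is (N - 1) times the minimized value.

    import numpy as np

    def ml_discrepancy(S: np.ndarray, Sigma: np.ndarray) -> float:
        # F_ML = log|Sigma| + tr(S Sigma^-1) - log|S| - p
        p = S.shape[0]
        return (np.log(np.linalg.det(Sigma))
                + np.trace(S @ np.linalg.inv(Sigma))
                - np.log(np.linalg.det(S))
                - p)

    def model_chi_square(S: np.ndarray, Sigma: np.ndarray, n: int) -> float:
        # Test statistic comparing observed and model-implied covariances
        return (n - 1) * ml_discrepancy(S, Sigma)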
Path Analysis vs. SEM
Path analysis: a system of regression equations among measured variables; controls for alternative explanations, so whatever correlation is left over could be causal. Assumes no measurement error. Uses the unstandardized variance/covariance matrix.
SEM: estimates the measurement and structural models simultaneously, so it measures/models error. Typically needs at least 3 indicators per latent variable. A causal-correlational analysis: taking out alternative explanations (see the sketch after this card).
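A minimal sketch (assuming the semopy package, which uses lavaan-style model syntax; all variable names hypothetical) of a full SEM with a measurement part (3 indicators per latent variable) and a structural part. Dropping the "=~" lines and using only observed variables would reduce this to a path analysis.

    import pandas as pd
    import semopy  # assumed third-party SEM package

    # Measurement model (=~): three indicators per latent variable
    # Structural model (~): hypothesized path between latent variables
    desc = """
    F1 =~ x1 + x2 + x3
    F2 =~ y1 + y2 + y3
    F2 ~ F1
    """

    def fit_sem(data: pd.DataFrame):
        model = semopy.Model(desc)
        model.fit(data)                  # estimates measurement and structural parts simultaneously
        print(model.inspect())           # parameter estimates
        print(semopy.calc_stats(model))  # chi-square, CFI, RMSEA, etc.
        return model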
types of models- SEM
Saturated model
Measurement model
Theoretical/structural model
Constrained/parsimonious model
Checks for SEM
Measurement: constrain latent variable 1 = latent variable 2 (by loading all items onto one latent variable) and test whether fit worsens significantly
Structural: constrain a structural relationship to 0 and test the change in fit
Saturated model
Allows the system to estimate the links between all 12 variables, but we need some parsimony without oversimplifying
The perfectly saturated model represents a perfectly fitting model against which all other models can be compared.
Its chi-square is 0. It reproduces the observed covariance matrix exactly but says nothing theoretical
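A minimal sketch (pure Python) of the degrees-of-freedom bookkeeping behind this: with p observed variables there are p(p+1)/2 unique variances/covariances; the saturated model estimates all of them, so df = 0 and chi-square = 0.

    def model_df(p: int, n_free_params: int) -> int:
        # Unique elements in a p x p covariance matrix minus freely estimated parameters
        unique_moments = p * (p + 1) // 2
        return unique_moments - n_free_params

    # Saturated model: every variance/covariance is a free parameter, so df = 0
    print(model_df(p=6, n_free_params=21))  # 6*7/2 = 21 unique moments -> df = 0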
Constraining model
Assigning certain parameters a fixed value (e.g., 0). As constraints increase, estimation (iteration) and power are affected, and the model runs the risk of being over-constrained (over-modelling)
As the chi-square tends towards 0, the model is closer to the saturated model; as it moves away from 0, the model is more constrained
Measurement Model
Based on theory, we constrain the perfectly saturated model by lining up specific measures with specific latent variables.
If the measurement model is really good, the chi-square statistic will be non-significant: we do not want a significant difference between the covariance matrix implied by our measurement model and our observed data.
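A minimal sketch (scipy assumed) of this exact-fit test: compute the p-value for the model chi-square on its degrees of freedom; a non-significant result means the implied and observed covariance matrices do not differ significantly.

    from scipy import stats

    def exact_fit_test(chi2_value: float, df: int, alpha: float = 0.05):
        # Non-significant chi-square -> no significant difference between the
        # model-implied and observed covariance matrices (acceptable fit)
        p_value = stats.chi2.sf(chi2_value, df)
        return p_value, p_value > alpha  # True means fit is not rejected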
SEM Model Selection
Compare our models to the saturated model and check whether the change in chi-square is significant.
We also compare nested models to each other and check the change in chi-square, hoping for a non-significant difference
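A minimal sketch (scipy assumed) of the nested-model chi-square difference test used for this comparison: the constrained model's chi-square minus the less constrained model's, tested on the difference in degrees of freedom.

    from scipy import stats

    def chi_square_difference(chi2_constrained, df_constrained, chi2_free, df_free):
        # For nested models, the difference in chi-square is itself chi-square
        # distributed, with df equal to the difference in degrees of freedom
        d_chi2 = chi2_constrained - chi2_free
        d_df = df_constrained - df_free
        p_value = stats.chi2.sf(d_chi2, d_df)
        return d_chi2, d_df, p_value  # non-significant p -> prefer the more parsimonious model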