WK 11 Flashcards
How do you interpret the meaning of the factors?
Based on the size and the sign of the loadings that you deem to be salient
In most research, what are salient loadings?
salient loadings are greater than or equal to |0.3| or |0.4|
What is a salient relationship?
Salient relationship = big enough relationship for us to be interested in
(It is of a significant size that we would note and retain in our analysis)
What do we do if we see an item that does not load on any factor above this salient value of 0.3?
We would deem it as one that isn’t being explained well or accounted for by our factor solution, and earmark it as one we may remove from our analysis
What is a Heywood case?
A factor loading greater than 1
What does a Heywood case tell you?
tells you there is a problem in the analysis
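Heywood cases are easy to screen for programmatically. A minimal Python sketch, using a made-up loadings matrix (rows = items, columns = factors):

```python
def heywood_cases(loadings):
    """Return (item, factor) positions where |loading| > 1."""
    return [(i, j)
            for i, row in enumerate(loadings)
            for j, value in enumerate(row)
            if abs(value) > 1]

# hypothetical items-by-factors loading matrix; item 2 is a Heywood case
loadings = [[0.72, 0.10],
            [0.65, 0.05],
            [1.08, 0.02],   # loading > 1 -> a problem in the analysis
            [0.12, 0.81]]
print(heywood_cases(loadings))  # [(2, 0)]
```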
If a factor does not have a primary loading from at least 3 items that you are analysing, what do you do?
We query whether we have over-extracted (extracted too many factors)
What loading criteria is in place for EFA solutions?
All factors load on 3+ items at salient level
What is the EFA criteria for the items?
All items have at least one loading above the salient cut-off
What is the EFA criteria for Heywood cases?
No Heywood cases
What is the EFA criteria for complex items?
Complex items removed in accordance with goals
What is the EFA criteria for item content?
Item content of factors is coherent and substantively meaningful
What is one way to test replicability?
One way to test replicability is to see whether similar factors appear when similar data are collected
Aside from similarity checks, what is another way of testing replicability?
Test this formally by collecting data on another sample or split one large sample into two (exploratory vs confirmatory)
What is factor congruence?
It is a way of saying if I look across samples, how consistent is the factor solution I identify
What does factor congruence enable?
It enables us to look at similarity of solutions
What are congruence coefficients?
Essentially correlations between factor loadings stacked on top of each other
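A congruence coefficient (Tucker’s phi) can be sketched in a few lines of Python; the loading values below are invented for illustration:

```python
from math import sqrt

def congruence(x, y):
    """Tucker's congruence coefficient between two loading vectors.

    Unlike a Pearson correlation, the loadings are not mean-centred,
    so both the size and the sign of each loading matter."""
    num = sum(a * b for a, b in zip(x, y))
    return num / sqrt(sum(a * a for a in x) * sum(b * b for b in y))

# hypothetical loadings for the same factor in two samples;
# nearly identical patterns give a coefficient close to 1
sample1 = [0.70, 0.62, 0.55, 0.08]
sample2 = [0.68, 0.60, 0.57, 0.11]
print(round(congruence(sample1, sample2), 3))
```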
What is confirmatory factor analysis?
This tests the model where we explicitly say that this item has to relate with this factor -> We specify the grouping
- it is not driven by data
What is target rotation?
Target rotation is where you give the rotation algorithm a target structure: which items should relate to which factors
What does it mean when the congruence coefficient is 1?
It suggests that there’s a perfect match between the factor solutions.
If the loading is comparatively high in one sample and comparatively high in the other, what does this mean?
That leads to a higher congruence coefficient, because the rank ordering of the loadings is roughly the same.
In confirmatory factor analysis what do we do?
We specify a model and test how well it fits the data
How do we specify a model in CFA?
We specify a model by indicating what loadings we believe will be zero -> we then try to reject this model
What does it mean if the correlations derived from the CFA do not match the correlations from your data?
It probably means that parameters set to zero should not have been set to zero
What do you do in unit-weighting?
You sum raw scores on the observed variables which have primary loadings on each factor
What items do you sum in unit-weighting?
Which items to sum is a matter of defining what loadings are salient
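A minimal Python sketch of unit-weighting, with hypothetical loadings and a |0.3| salience cut-off:

```python
def unit_weighted_score(responses, salient_items):
    """Sum raw scores on the items with salient primary loadings on a factor."""
    return sum(responses[i] for i in salient_items)

# hypothetical one-factor loadings for four items; the cut-off
# defines which items count as salient and therefore get summed
loadings = [0.65, 0.58, 0.12, 0.44]
salient = [i for i, l in enumerate(loadings) if abs(l) >= 0.3]
responses = [4, 3, 5, 2]   # one participant's raw item scores
print(salient)                                   # [0, 1, 3]
print(unit_weighted_score(responses, salient))   # 4 + 3 + 2 = 9
```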
What do the Thurstone or Thompson methods do?
They compute scores from observed item correlations and loadings (weighted method of producing scores)
What does Bartlett method focus on?
Focuses on minimizing the sums of squares for unique factors
What are three ways to compute factor scores?
Unit weighting, the Thurstone method, and the Bartlett method
What is the assumption in unit-weighting?
Assume things are equally weighted
What is a negative consequence of unit-weighting or regression method?
These methods allow the resulting scores to correlate with each other -> this has a tendency to inflate correlations between factors if we have an orthogonal solution
If we have forced an orthogonal solution, what will we see?
We will see a pattern of correlations that are not the same as our factor analysis
What does the Anderson-Rubin method do?
It preserves orthogonality of factors
What does the Ten Berge method generalise and do?
It generalises the Anderson-Rubin method to preserve correlations (or lack thereof) between factors
When is Ten Berge score preferable?
If the goal is to try to find higher-order factors, the correlations between the scores are important and so Ten Berge scores are preferable
What are the crucial determinants of minimal sample size?
Communalities and items to factor ratio
When can we get away with a smaller sample size?
When there are clear groupings
When planning a study what should you do to determine your minimum sample size?
- think of how many factors you expect and get many items measuring each
- use pilot data and previous studies to make an educated guess about what communalities you’re going to expect
In sample size, when can you have fewer subjects?
- if communalities are wide or high
- if the items to factor ratio is high
When are communalities most important?
When the item-to-factor ratio is 20:7
(interaction effect)
What is an observed score?
observed score = true score + error
What is the assumption in parallelism?
Assumes that both tests have approximately the same relationship to the true score with the same error variance
What is the tau equivalence assumption?
Assumes that both tests have the same relation to the true score, but error varies
What is the congeneric assumption?
When we have more than 4 tests, we can relax the relation assumption in regard to tau equivalence and assume each is an imperfect measure of the true score
What are the three assumptions of parallel tests?
Parallelism, tau equivalence and congeneric
What is split half reliability?
This measure indicates how internally consistent the test is
What is the issue with split-half reliability?
When we increase the number of items, the number of possible splits that we have gets very large
What can we decide in split-half reliability?
We have a choice of how to split the items; this can mean that we end up with a range of possible values for our split-half reliability
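The point about multiple possible splits can be illustrated in Python; the response data below are invented, and the Spearman-Brown step-up correction for halving the test length is applied at the end:

```python
def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sum((a - mx) ** 2 for a in x)
                  * sum((b - my) ** 2 for b in y)) ** 0.5

def split_half(data, half1, half2):
    """Split-half reliability with the Spearman-Brown step-up correction.

    data: one list of item responses per person;
    half1 / half2: the item indices making up each half."""
    s1 = [sum(person[i] for i in half1) for person in data]
    s2 = [sum(person[i] for i in half2) for person in data]
    r = pearson(s1, s2)
    return 2 * r / (1 + r)   # correct for each half being half as long

# hypothetical responses (5 people x 4 items): different splits of
# the same items can give different reliability estimates
data = [[1, 2, 1, 2], [2, 3, 2, 3], [3, 4, 3, 4], [4, 5, 4, 5], [2, 5, 1, 3]]
print(round(split_half(data, [0, 2], [1, 3]), 3))  # 0.912
print(round(split_half(data, [0, 3], [1, 2]), 3))  # 0.990
```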
What is Cronbach’s alpha?
Cronbach’s alpha takes the idea of correlating subsets of items to its logical end -> it basically boils down to split-half on pairs of items
What does Cronbach’s alpha provide information on?
Gives sense of average correlations of all pairs of items existing in the measure
What is Cronbach’s alpha often used to denote?
It is often used to denote what’s called a unidimensional construct
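A minimal Python sketch of the usual variance-based formula, alpha = k/(k-1) * (1 - sum of item variances / total-score variance), on invented data:

```python
def cronbach_alpha(data):
    """Cronbach's alpha; data is one list of item responses per person."""
    k = len(data[0])   # number of items

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([person[i] for person in data]) for i in range(k)]
    total_var = var([sum(person) for person in data])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# hypothetical responses (5 people x 4 items)
data = [[1, 2, 1, 2], [2, 3, 2, 3], [3, 4, 3, 4], [4, 5, 4, 5], [2, 2, 3, 3]]
print(round(cronbach_alpha(data), 3))  # 0.974
```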
What does the Spearman-Brown prophecy formula calculate?
It calculates how much Cronbach’s alpha can be improved by adding more items, based on the original alpha and the number of items added
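The prophecy formula itself is short: predicted reliability = n*rho / (1 + (n-1)*rho), where n is the factor by which the test length changes. A Python sketch:

```python
def spearman_brown(reliability, length_factor):
    """Predicted reliability after multiplying the test length by length_factor."""
    return (length_factor * reliability
            / (1 + (length_factor - 1) * reliability))

# doubling a test whose current alpha is 0.70
print(round(spearman_brown(0.70, 2), 3))  # 0.824
```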
What can any item measure?
They can measure:
- a “general” factor that loads on all items
- a “group” or “specific” factor that loads on a subset of items
What are the two internal consistency measures of McDonald’s omega?
McDonald’s omega hierarchical and omega total
What is omega hierarchical?
It is essentially the proportion of variance that is general to an overarching factor in your analysis (amount of shared variance)
What is McDonald’s omega total?
It is the total proportion of reliable item variation
What is test-retest reliability?
Correlation between tests taken at 2+ time points
What is the risk of test-retest reliability?
People remember answers
What are we concerned about when picking time periods for test-retest reliability?
We are concerned whether people are going to show differential changes across the time period
What is interrater reliability?
Multiple people rating targets - we look at consistency across ratings
What are intraclass correlations?
They split the variance of a set of ratings into multiple components
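One common variant, the one-way intraclass correlation, can be sketched directly from the variance components; the ratings below are invented:

```python
def icc1(ratings):
    """One-way intraclass correlation: splits rating variance into a
    between-target component and a within-target (residual) component."""
    n = len(ratings)        # targets being rated
    k = len(ratings[0])     # raters per target
    grand = sum(sum(row) for row in ratings) / (n * k)
    means = [sum(row) / k for row in ratings]
    ms_between = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    ms_within = sum((x - m) ** 2
                    for row, m in zip(ratings, means)
                    for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# hypothetical ratings: 4 targets, each rated by 2 raters;
# raters largely agree, so the ICC is high
ratings = [[9, 8], [5, 6], [2, 3], [7, 7]]
print(round(icc1(ratings), 3))  # 0.944
```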
What is content validity?
Refers to the fact that tests should only contain content relevant to the intended construct
What is face validity?
Does the test appear, to those taking it, to measure what it was designed to measure?
What is construct validity?
Looks at the way in which the measure relates to other things
What is convergent validity?
Measures should have high correlations with other measures of the same construct
What is discriminant validity?
Measure should have low correlations with measures of different constructs
What is nomological net validity?
Measure should have expected patterns of correlations with different sets of constructs
What is reliability?
Relation of true score with observed score
What is validity?
Correlations with other measures play a key role
What is the relationship between reliability and validity?
Logically, a score or measure cannot correlate with anything more than it correlates with itself, so reliability is the limit on validity