Factor Analysis Flashcards
Who invented factor analysis and why?
-Spearman, as a way to uncover ‘g’ factor
What is the definition of factor analysis?
- an umbrella term to cover a variety of multivariate statistical techniques to uncover the latent dimensions from a set of observed attributes.
What is factor analysis used for?
- to condense a large group of observed attributes into a much smaller set of constructs (factors/components)
The dimensions should be a linear combination of observed attributes. T/F
- true, they should be, although modern developments allow for non-linear combinations
What allows for the dimensions/factors/combinations to be linear?
- the observed attributes must be numerical or possess an underlying continuous structure.
Is there a need to define causal independent/dependent variables in factor analysis?
- no, because FA is not assessing causal relationships
- attempts to maximise the attributes’ explanatory power rather than their predictive power
What are the major FA types?
- EFA (EXPLORATORY FACTOR ANALYSIS)
- PCA (PRINCIPAL COMPONENTS ANALYSIS)
- CFA (CONFIRMATORY FACTOR ANALYSIS)
What does EFA attempt to do?
- IDENTIFYING A HIDDEN STRUCTURE/CONSTRUCT
- almost any construct you know in psychology was empirically investigated through this approach
How does FA often violate the ‘numerical rule’ of linearity?
- often use likert scales which are ORDINAL SCALES
What does PCA attempt to do?
- the simplest type of EFA
- assumes ALL variance in the items can be explained by some hidden structure/construct.
- i.e. a big assumption, and therefore perhaps erroneous
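As an illustration, PCA can be run as an eigendecomposition of the item correlation matrix: each eigenvalue is the variance explained by one component. A minimal numpy sketch on toy data (the dataset and variable names here are hypothetical, not from any real questionnaire):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: 100 respondents, 4 items (hypothetical questionnaire scores)
X = rng.normal(size=(100, 4))
X[:, 1] += X[:, 0]                      # make items 0 and 1 correlate
R = np.corrcoef(X, rowvar=False)        # item correlation matrix

# PCA: eigendecomposition of the correlation matrix
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]       # largest component first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# With a correlation matrix the total variance equals the number of items,
# so each eigenvalue's share is its proportion of variance explained.
explained = eigvals / eigvals.sum()
```

Note that because a correlation matrix has 1s on the diagonal, the eigenvalues always sum to the number of items; this is what makes the eigenvalue-based retention rules later in the deck meaningful.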
What does confirmatory factor analysis attempt to do?
- looking to CONFIRM an already hypothesised, theorised or empirically identified structure/construct.
- might also be used when we are looking to replicate an existing study
What is an item in FA?
- observed element of an attribute e.g. question in a questionnaire
What is factorability?
- the suitability of items to be included in an FA model
- depends on the numerical association (correlation) between items –> items correlating below 0.5 or above 0.9 need to be considered carefully
What is Bartlett’s test of sphericity?
- tests whether the item correlation matrix is significantly different from a matrix with ZERO CORRELATIONS (testing the factorability of a dataset)
What is simple structure?
- refers to the situation where items form distinct groups based on the degree of their associations i.e. items form highly independent dimensions
What do we need for a factor to be meaningful?
- factors need to be related both in a QUALITATIVE (CONCEPTUAL) AND QUANTITATIVE (NUMERICAL) SENSE
What are orthogonal factors or components?
- factors that are considered to be independent from each other e.g. trait theory, neuroticism and extraversion
What are oblique factors/components?
- considered to be related to a degree to each other e.g. Gf and Gc (fluid and crystallised intelligence) are not constructed to be independent.
What is factor loading?
- the correlation between an item and a factor (usually >0.4 for the item to be considered to belong to that factor)
What is rotation in a geometric space?
- the geometric transformation of the factors in order to generate a model that contains a SIMPLE STRUCTURE
When is varimax rotation used?
- it is used to rotate orthogonal (unrelated/independent) factors in such a way that maximises the variance each of them explains
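The classic SVD-based varimax update can be sketched in a few lines of numpy. This is an illustrative implementation, not a replacement for a statistics package, and the loading matrix below is hypothetical:

```python
import numpy as np

def varimax(loadings, tol=1e-6, max_iter=100):
    """Orthogonally rotate a loading matrix so that the variance of each
    factor's squared loadings is maximised (Kaiser's varimax criterion)."""
    p, k = loadings.shape
    R = np.eye(k)            # accumulated orthogonal rotation matrix
    d = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        # SVD-based fixed-point update of the rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - L @ np.diag(np.diag(L.T @ L)) / p)
        )
        R = u @ vt
        d_old, d = d, s.sum()
        if d_old != 0 and d / d_old < 1 + tol:
            break
    return loadings @ R

# Hypothetical unrotated loadings: 4 items on 2 factors
A = np.array([[0.7, 0.5], [0.6, 0.4], [0.5, -0.6], [0.4, -0.7]])
A_rot = varimax(A)
```

Because the rotation is orthogonal, each item's communality (its row sum of squared loadings) is unchanged; only how the loadings are split across factors changes, which is what produces a simple structure.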
What is the Kaiser criterion?
- retain any factor that has an eigenvalue >/= 1
What is Cattell’s scree plot rule?
- retain factors which do not form part of the ‘elbow’ –> i.e. that are on the steep slope
What is the variance explained rule?
- retain all factors that can collectively account for 80-90% of the total variance
What is the Jolliffe criterion?
- retain all factors with eigenvalues greater than or equal to 0.70
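The numeric retention rules above (Kaiser, variance explained, Jolliffe) can each be applied to the same vector of eigenvalues. A sketch with hypothetical eigenvalues from an 8-item correlation matrix (so the eigenvalues sum to 8):

```python
import numpy as np

# Hypothetical eigenvalues from a PCA of an 8-item correlation matrix
eigvals = np.array([3.1, 1.6, 0.9, 0.7, 0.6, 0.5, 0.4, 0.2])

kaiser = (eigvals >= 1.0).sum()      # Kaiser: keep eigenvalues >= 1
jolliffe = (eigvals >= 0.7).sum()    # Jolliffe: keep eigenvalues >= 0.7

# Variance-explained rule: smallest number of factors whose cumulative
# share of total variance reaches 80%
cum_share = np.cumsum(eigvals) / eigvals.sum()
variance_rule = int(np.argmax(cum_share >= 0.80)) + 1
```

Note how the three rules disagree (2, 4, and 5 factors here), which is why the comprehensibility rule below still matters: the numeric criteria narrow the options, but interpretability decides.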
What is the comprehensibility rule?
- retain all factors that are MEANINGFUL and CLEARLY interpretable within the context of a given study (final assessment rests on psychological knowledge- FA cannot name the factors for us)
What can Cronbach’s alpha be considered an index of?
- internal consistency reliability
- internal consistency validity
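Cronbach's alpha has a simple closed form: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal numpy sketch on simulated data (the true-score construction and noise level are hypothetical):

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(2)
true_score = rng.normal(size=(300, 1))
# Five items = shared true score + independent noise -> a consistent scale
items = true_score + 0.5 * rng.normal(size=(300, 5))
alpha = cronbach_alpha(items)
```

Because all five simulated items share the same true score, their covariances are high relative to their variances and alpha comes out well above the conventional 0.7 cut-off.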
By assessing parallel-forms reliability what else are we assessing?
- concurrent validity
- convergent validity
What is inter-rater reliability closely related to?
- content validity
What can test-retest reliability (measured over time) be used as an index of?
- external validity
How can we allow for sufficient variability of items and participants?
- inversely keyed items and homogeneous items
- minimisation of serial effects
- examine invariable responses closely
- item difficulty or clarity
- ceiling/floor effects
What is the problem with constant measurements?
- they cannot really be statistically analysed (a constant has no variance)
- data analysis cannot always compensate for design errors
What is a frame error?
- errors in the sampling frame (Q analysis) –> we sampled people who don’t belong in the target population
What is domain error?
- errors in the domain we sampled
What are some errors in samples?
- nonresponse/participant errors
What are some errors with observed/true scores?
- measurement error
What is generalizability theory?
- it adds systematic error in the observed scores and attempts to map it and eliminate it (ie. control it)
What is item response theory?
- mathematically maps the difficulty of measurement items and maps them against participants’ ability on a construct
- can be used to make predictions
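The simplest IRT model, the one-parameter logistic (Rasch) model, maps person ability and item difficulty (on the same logit scale) onto a response probability. A minimal sketch (parameter values hypothetical):

```python
import numpy as np

def rasch_p(theta, b):
    """1PL (Rasch) model: probability of a correct response given
    person ability theta and item difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Ability equal to difficulty -> 50% chance; higher ability -> higher chance
p_equal = rasch_p(theta=0.0, b=0.0)
p_able = rasch_p(theta=2.0, b=0.0)
```

This is the "mapping difficulty against ability" idea in one line: when theta equals b the respondent has an even chance, and the probability rises smoothly as ability exceeds difficulty, which is what lets IRT make predictions.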
What are the uses of profiling?
- criminal personality profiling: eliminating suspects, identifying unknown offenders, used with unusual crimes, adaptive interrogation techniques
What are the uses of psychography, psychobiography & psychohistory?
- identify and explain issues and themes throughout a person’s life from a psychological perspective