Factor Analysis Part 2 (Wk 2) Flashcards

1
Q

Name the 4 factor analysis diagnostics

A
  1. Kaiser-Meyer-Olkin measure of sampling adequacy (KMO)
  2. Bartlett’s test of sphericity
  3. Determinant value
  4. Percentage of non-redundant residuals > .05 (should stay below 50%)
2
Q

What are the factor analysis diagnostics?

A
  • provided in the EFA output, measuring the amount of correlation present between items
  • can be helpful but not necessary!

KMO & factor analysis already do the correlations for you - so there is no need to look at correlations prior to testing (i.e. putting the values in a correlation table and seeing how everything compares).

3
Q

Are the FA diagnostics before or part of FA?

A

before factor analysis

4
Q

Describe the Kaiser-Meyer-Olkin (KMO) measure

A
  • single value
    = computed as the ratio of the sum of squared correlations to the sum of squared correlations plus the sum of squared partial correlations (see the formula below)
    - provides an indicator of the proportion of variance in item responses that might be caused by underlying factors (summarises the amount of correlational overlap between all items)
  • small values (below about .5) suggest the correlations are too diffuse for factor analysis to be appropriate; values closer to 1 are better
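The definition above can be written compactly (a sketch in LaTeX notation; here r_ij is the correlation and p_ij the partial correlation between items i and j, summed over all pairs of items):

```latex
\[
\mathrm{KMO} = \frac{\sum_{i \neq j} r_{ij}^{2}}{\sum_{i \neq j} r_{ij}^{2} + \sum_{i \neq j} p_{ij}^{2}}
\]
```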
5
Q

Describe Bartlett’s test of sphericity

A

= tests the null hypothesis that the correlation matrix is an identity matrix (a correlation matrix with zero correlations between variables); a sketch of what the test computes appears below

  • Bartlett’s test is conservative, especially when sample sizes are large
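SPSS reports this test for you, but a minimal sketch of what it computes may help (assuming `data` is a hypothetical NumPy array of item responses, rows = respondents, columns = items):

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(data):
    """Bartlett's test of sphericity for an n-by-p array of item responses.

    Tests the null hypothesis that the population correlation matrix is an
    identity matrix (zero correlations between all items).
    """
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)                 # p x p correlation matrix
    chi_square = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(corr))
    dof = p * (p - 1) / 2
    p_value = stats.chi2.sf(chi_square, dof)               # significance is desirable here
    return chi_square, dof, p_value
```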
6
Q

Significance is __________

what does this mean?

A

A: desirable

This indicates that the correlation matrix is significantly different from an identity matrix.

7
Q

What is an identity matrix?

A

a correlation matrix with zero correlation between variables
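For example, for three items (ones on the diagonal because each item correlates perfectly with itself, zeros everywhere else):

```latex
\[
I = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]
```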

8
Q

What’s the difference between an approximate chi-square and a true chi-square value?

A

An approximate chi-square value is written ~χ²

A true chi-square value is written χ²

Either way:

  • round off to 2 dp
  • except significance (p) values, which are rounded to 3 dp
9
Q

Describe the determinant.

A
  • a test of multicollinearity (correlational overlap) in the data
  • the determinant will usually be a fairly small value, but it should stay above .00001 (values below this suggest problematic multicollinearity)
  • provided as a table note, often in exponential notation = making it easier to miss!
  • the determinant is less important than the other diagnostics

HOW TO DEAL WITH THE DETERMINANT (a sketch of the check follows this list):
> if it is bad / not what you want, comment “the determinant was not ideal”

> if there are no other issues when looking at the rest of the FA, comment “this (determinant) did not appear to affect the data”

> if the determinant is low enough to be an issue, the way to solve this is by looking for highly correlated items & deleting one of each pair; i.e. if two items are very similar, they’re already messing up the FA (you can spot this easily)

> put it in your report and forget about it
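A minimal sketch of the determinant check outside SPSS (assuming `data` is a hypothetical NumPy array of item responses; the .00001 threshold is the one given above):

```python
import numpy as np

def determinant_check(data, threshold=1e-5):
    """Determinant of the item correlation matrix.

    Values below the threshold (.00001) suggest multicollinearity,
    i.e. some items overlap too strongly with other items.
    """
    corr = np.corrcoef(data, rowvar=False)
    det = np.linalg.det(corr)
    return det, det > threshold
```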

10
Q

Describe non-redundant residuals

A
  • the percentage of non-redundant residuals above .05 is provided in a table note under the reproduced correlation matrix (orthogonal rotation)
  • measure of multicollinearity too!
  • ideally want this percentage as low as possible, but up to 50% is still okay!

REMEMBER:
> it doesn’t have to be the highest or the lowest; aim for a happy medium (see the sketch below for how the percentage is computed)
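A minimal sketch of what that table note reports, assuming you have the observed correlation matrix and the factor loading matrix from an orthogonal solution as NumPy arrays (hypothetical variable names):

```python
import numpy as np

def pct_large_residuals(observed_corr, loadings, cutoff=0.05):
    """Percentage of off-diagonal residuals with absolute value above the cutoff.

    The reproduced correlations are those implied by the factor loadings
    (orthogonal solution); the residuals are the differences between the
    observed and reproduced correlations.
    """
    reproduced = loadings @ loadings.T
    residuals = observed_corr - reproduced
    off_diag = residuals[~np.eye(residuals.shape[0], dtype=bool)]
    return 100 * np.mean(np.abs(off_diag) > cutoff)
```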

11
Q

how do you run correlations between items and inspect them in SPSS?

A

A: Analyze > Correlate > Bivariate (throw the items in & think about your output)

  • this step can be skipped, as KMO & Bartlett’s test are already doing this
  • e.g. maybe two items are so highly correlated that one is redundant and should be deleted? (a rough non-SPSS equivalent is sketched below)
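For anyone checking this outside SPSS, a rough sketch (assuming `items` is a hypothetical pandas DataFrame of item responses; the .8 cut-off is only an illustrative value for “very highly correlated”):

```python
import pandas as pd

def flag_redundant_pairs(items: pd.DataFrame, cutoff: float = 0.8):
    """List item pairs whose correlation is high enough to suggest redundancy."""
    corr = items.corr()
    cols = corr.columns
    pairs = []
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            if abs(corr.iloc[i, j]) > cutoff:
                pairs.append((cols[i], cols[j], round(corr.iloc[i, j], 2)))
    return pairs
```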
12
Q

define reliability

A

= the term we use for stability and consistency in psychometric testing

  • e.g. if I tested you this week and you scored high on extraversion, I would expect you to score high on extraversion when tested again
  • if you instead score low, then your test is NOT reliable
13
Q

what are the two types of reliability?

A

1) test-retest reliability

2) internal consistency

14
Q

describe test-retest reliability

A
  • more to do with temporal consistency (consistency across time)
  • ideal form of reliability
  • however, not the most commonly used, as measuring test-retest reliability takes a bit of extra work

> i.e. have to measure something twice within a certain time period; repeated measures

15
Q

describe split-half reliability

A
  • internal consistency is measured through split-half reliability (the degree to which the items form a consistent construct)
  • correlate one half of the test with the other half; if the items measure the same thing, the two halves should be roughly correlated with one another (see the sketch below)
  • not used today!
  • gold standard today = Cronbach’s alpha
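A minimal sketch of the idea (assuming `data` is a hypothetical NumPy array of item responses; the final Spearman-Brown step, not mentioned on the card, corrects for each half being only half as long as the full test):

```python
import numpy as np

def split_half_reliability(data):
    """Correlate odd-item and even-item half scores, then apply Spearman-Brown."""
    half1 = data[:, ::2].sum(axis=1)   # total of odd-numbered items
    half2 = data[:, 1::2].sum(axis=1)  # total of even-numbered items
    r = np.corrcoef(half1, half2)[0, 1]
    return 2 * r / (1 + r)             # Spearman-Brown corrected estimate
```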
16
Q

what is Cronbach’s alpha?

A
  • a statistical formula that provides the mean of all possible split-half correlations for a given set of items (a computational sketch follows below)
  • originally created by Cronbach in the 1950s to address the bias that split-half reliability is criticised for
  • commonly used today as the ideal measure of internal consistency (the more internally consistent the items, the more reliable our test)
  • “you want Cronbach’s alpha to be above .6 at a bare minimum, although .8 or ideally .9 is better”
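A minimal sketch of the standard variance-based formula for Cronbach’s alpha (assuming `data` is a hypothetical NumPy array with rows = respondents and columns = items):

```python
import numpy as np

def cronbach_alpha(data):
    """Cronbach's alpha for an n-by-k array of item responses.

    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total score)
    """
    k = data.shape[1]
    item_variances = data.var(axis=0, ddof=1)
    total_variance = data.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
```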
17
Q

how do you conduct a reliability analysis, in SPSS?

A

A: Analyze > Scale > Reliability Analysis

  • transfer the set of items you want to analyse into the Items box
  • ensure that Model is set to Alpha
  • enter a scale label (good organising tip)

DESCRIPTIVES WINDOW
[- select Item, Scale, and Scale if item deleted
- continue back to the RELIABILITY ANALYSIS WINDOW
- hit OK to run the analysis]

18
Q

how many different levels of output are there?

A

three.
remember, whilst it has its own name (reliability analysis), don’t consider it an optional extra to FA - it is still part of FA.

(1) RELIABILITY STATISTICS
[= you’ll find the overall Cronbach’s alpha for the item set here]

(2) ITEM STATISTICS
[= useful to check out to see if there are any skewed items, or where most participants have made the same response]

(3) ITEM-TOTAL STATISTICS
[= table contains 2 important values for looking at how an item relates to the set of items:
- the corrected item-total correlation
- the Cronbach’s alpha if item deleted]

19
Q

describe corrected item-total correlation

A

= this value is the correlation of the item with the sum of the rest of the item set
(the item itself is left out of that total - that is the “corrected” part - so it tells you how well the item fits in with the rest; see the sketch below)

  • desirable to have corrected item-total correlations between .3 and .8 or so, as then the items fit well with the other items without being redundant
  • NOTE: too low is worse than too high (above .8 is not as bad as below .3); aim for a happy medium!
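A minimal sketch of that calculation (assuming `data` is a hypothetical NumPy array with rows = respondents and columns = items):

```python
import numpy as np

def corrected_item_total_correlations(data):
    """Correlation of each item with the sum of all the other items (item excluded)."""
    n_items = data.shape[1]
    corrs = []
    for i in range(n_items):
        rest_total = np.delete(data, i, axis=1).sum(axis=1)  # total score without item i
        corrs.append(np.corrcoef(data[:, i], rest_total)[0, 1])
    return np.array(corrs)
```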
20
Q

describe Cronbach’s alpha if item deleted

A
  • more convenient = does exactly what it says
  • what this metric asks is: “what would Cronbach’s alpha be if the item we’re looking at wasn’t in the scale?” = that’s the reliability of the scale with the item taken out (see the sketch at the end)
  • if removing the item increases reliability, then delete it; but if keeping it gives better reliability, keep it - it’s all down to your interpretation!
  • you can’t just use rules of thumb & ignore practicality/common sense; you should aim to achieve a middle ground/balance
  • we have information from SPSS but it’s also our job to interpret it
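A minimal sketch of the alpha-if-item-deleted idea (assuming `data` is a hypothetical NumPy array with rows = respondents and columns = items; `cronbach_alpha` repeats the formula from the earlier sketch so this block stands alone):

```python
import numpy as np

def cronbach_alpha(data):
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / total-score variance)."""
    k = data.shape[1]
    return (k / (k - 1)) * (1 - data.var(axis=0, ddof=1).sum() / data.sum(axis=1).var(ddof=1))

def alpha_if_item_deleted(data):
    """Recompute alpha with each item removed in turn; compare with the full-scale alpha."""
    return np.array([cronbach_alpha(np.delete(data, i, axis=1)) for i in range(data.shape[1])])
```

If the value for an item is higher than the full-scale alpha, removing that item would increase reliability; whether you actually drop it is still an interpretation call, as the card says.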