WK 11 Flashcards

1
Q

How do you interpret the meaning of the factors?

A

Based on the size and the sign of the loadings that you deem to be salient

2
Q

In most research, what are salient loadings?

A

salient loadings are greater than or equal to |0.3| or |0.4|

3
Q

What is a salient relationship?

A

Salient relationship = a relationship big enough for us to be interested in
(It is of a significant size that we would note and retain in our analysis)

4
Q

What do we do if we see an item that does not load on any factor above this salient value of 0.3?

A

We would deem it as one that is not being explained well or accounted for by our factor solution, and earmark it as one we may remove from our analysis

5
Q

What is a Heywood case?

A

factor loadings > 1

6
Q

What does a Heywood case tell you?

A

It tells you there is a problem in the analysis

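The screening checks in the last few cards can be sketched in code. All loadings below are made up for illustration; the salient cut-off of 0.3 is the one named in the cards.

```python
# Screen a factor loading matrix (hypothetical data): flag items with no
# salient loading (|loading| >= .3) and Heywood cases (loadings > 1).

SALIENT = 0.3

# rows = items, columns = factors (made-up loadings for illustration)
loadings = [
    [0.72, 0.10],   # item 0: loads on factor 1
    [0.05, 0.64],   # item 1: loads on factor 2
    [0.15, 0.22],   # item 2: no salient loading -> candidate for removal
    [1.04, 0.02],   # item 3: Heywood case -> problem in the analysis
]

no_salient = [i for i, row in enumerate(loadings)
              if all(abs(l) < SALIENT for l in row)]
heywood = [i for i, row in enumerate(loadings)
           if any(abs(l) > 1 for l in row)]

print(no_salient)  # [2]
print(heywood)     # [3]
```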
7
Q

If a factor does not have a primary loading from at least 3 items that you are analysing, what do you do?

A

We query whether we have over-extracted (extracted too many factors)

8
Q

What loading criterion is in place for EFA solutions?

A

All factors have 3+ items loading at a salient level

9
Q

What is the EFA criterion for the items?

A

All items have at least one loading above the salient cut-off

10
Q

What is the EFA criterion for Heywood cases?

A

No Heywood cases

11
Q

What is the EFA criterion for complex items?

A

Complex items removed in accordance with goals

12
Q

What is the EFA criterion for item content?

A

Item content of factors is coherent and substantively meaningful

13
Q

What is one way to test replicability?

A

One way to test replicability is to see whether similar factors appear when similar data are collected

14
Q

Aside from similarity checks, what is another way of testing replicability?

A

Test this formally by collecting data on another sample or split one large sample into two (exploratory vs confirmatory)

15
Q

What is factor congruence?

A

It is a way of asking: if I look across samples, how consistent is the factor solution I identify?

16
Q

What does factor congruence enable?

A

It enables us to look at similarity of solutions

17
Q

What are congruence coefficients?

A

Essentially correlations between factor loadings stacked on top of each other

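One common congruence index is Tucker's congruence coefficient: like a correlation between two stacked loading vectors, but computed about zero rather than about the mean, so identical loadings give exactly 1. A minimal sketch with made-up loadings:

```python
import math

def congruence(x, y):
    """Tucker's congruence coefficient between two loading vectors.

    1 indicates a perfect match between the factor solutions.
    """
    num = sum(a * b for a, b in zip(x, y))
    den = math.sqrt(sum(a * a for a in x) * sum(b * b for b in y))
    return num / den

# Loadings for one factor in two samples (hypothetical numbers)
sample1 = [0.70, 0.65, 0.60, 0.10]
sample2 = [0.72, 0.61, 0.58, 0.05]

print(round(congruence(sample1, sample2), 3))
print(congruence(sample1, sample1))  # identical loadings -> 1.0
```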
18
Q

What is confirmatory factor analysis?

A

This tests a model where we explicitly say that this item has to relate to this factor -> we specify the grouping
- it is not driven by the data

19
Q

What is target rotation?

A

Target rotation is where you tell the rotation algorithm which items should relate to which factors (you give it a target structure)

20
Q

What does it mean when the congruence coefficient is 1?

A

It suggests that there’s a perfect match between the factor solutions.

21
Q

If the loading is comparatively high in one sample and comparatively high in the other, what does this mean?

A

That will lead to a higher correlation, because the rank ordering of the loadings is roughly the same.

22
Q

In confirmatory factor analysis what do we do?

A

We specify a model and test how well it fits the data

23
Q

How do we specify a model in CFA?

A

We specify a model by indicating what loadings we believe will be zero -> we then try to reject this model

24
Q

What does it mean if the correlations derived from the CFA do not match the correlations from your data?

A

It probably means that parameters set to zero should not have been set to zero

25
Q

What do you do in unit-weighting?

A

You sum raw scores on the observed variables which have primary loadings on each factor

26
Q

What items do you sum in unit-weighting?

A

Which items to sum is a matter of defining what loadings are salient

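Unit-weighting as described in the last two cards can be sketched as follows. The responses and the item-to-factor assignment are made up for illustration; in practice the assignment comes from the salient primary loadings.

```python
# Unit-weighting sketch: each person's factor score is the sum of their
# raw scores on the items with primary salient loadings on that factor.

# made-up raw responses: one row per person, one column per item
responses = [
    [4, 5, 3, 2, 1],
    [2, 2, 3, 5, 4],
]

# assumed assignment: items 0-2 load on factor A, items 3-4 on factor B
factor_items = {"A": [0, 1, 2], "B": [3, 4]}

scores = [{f: sum(person[i] for i in items)
           for f, items in factor_items.items()}
          for person in responses]

print(scores)  # [{'A': 12, 'B': 3}, {'A': 7, 'B': 9}]
```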
27
Q

What do the Thurstone and Thompson methods do?

A

They compute scores from observed item correlations and loadings (weighted method of producing scores)

28
Q

What does the Bartlett method focus on?

A

Focuses on minimizing the sums of squares for unique factors

29
Q

What are three ways to compute factor scores?

A

Unit weighting, the Thurstone method, and the Bartlett method

30
Q

What is the assumption in unit-weighting?

A

Assume things are equally weighted

31
Q

What is a negative consequence of unit-weighting or the regression method?

A

We assume that our factors will correlate with each other -> this has a tendency to inflate correlations if we have an orthogonal solution

32
Q

If we have forced an orthogonal solution, what will we see?

A

We will see a pattern of correlations that is not the same as in our factor analysis

33
Q

What does the Anderson-Rubin method do?

A

It preserves orthogonality of factors

34
Q

What does the Ten Berge method generalise and do?

A

It generalises the Anderson-Rubin method to preserve correlations (or lack thereof) between factors

35
Q

When is Ten Berge score preferable?

A

If the goal is to find higher-order factors, the correlations between the scores are important, and so Ten Berge scores are preferable

36
Q

What are the crucial determinants of minimal sample size?

A

Communalities and the item-to-factor ratio

37
Q

When can we get away with a smaller sample size?

A

When there are clear groupings

38
Q

When planning a study what should you do to determine your minimum sample size?

A
  • think of how many factors you expect and get many items measuring each
  • use pilot data and previous studies to make an educated guess about what communalities you’re going to expect
39
Q

In sample size, when can you have fewer subjects?

A
  • if communalities are wide-ranging or high
  • if the item-to-factor ratio is high
40
Q

When are communalities most important?

A

When the item-to-factor ratio = 20:7
(interaction effect)

41
Q

What is an observed score?

A

observed score = true score + error
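The classical model above can be illustrated with a small simulation; the means and SDs below are arbitrary choices. Reliability then shows up as the proportion of observed-score variance that is true-score variance.

```python
import random

random.seed(1)

# Simulate the classical test theory model: observed = true + error
true_scores = [random.gauss(100, 15) for _ in range(10_000)]
errors = [random.gauss(0, 5) for _ in true_scores]
observed = [t + e for t, e in zip(true_scores, errors)]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Reliability = true variance / observed variance; with sd(true) = 15
# and sd(error) = 5 it should land near 225 / 250 = 0.9
print(round(var(true_scores) / var(observed), 2))
```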

42
Q

What is the assumption in parallelism?

A

Assumes that both tests have approximately the same relationship to the true score with the same error variance

43
Q

What is the tau equivalence assumption?

A

Assumes that both tests have the same relation to the true score, but error varies

44
Q

What is the congeneric assumption?

A

When we have more than 4 tests, we can relax the equal-relation assumption of tau equivalence and assume each test is an imperfect measure of the true score

45
Q

What are the three assumptions of parallel tests?

A

Parallelism, tau equivalence and congeneric

46
Q

What is split half reliability?

A

This measure indicates how internally consistent the test is

47
Q

What is the issue with split-half reliability?

A

When we increase the number of items, the number of possible splits that we have gets very large

48
Q

What can we decide in split-half reliability?

A

We have a choice of which split to use; this can mean that we end up with a range of possible values for our split-half reliability
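How quickly the number of possible splits grows can be shown directly: splitting k items into two halves of k/2 gives C(k, k/2) / 2 distinct splits (divided by 2 because swapping the halves gives the same split).

```python
from math import comb

def n_splits(n_items):
    """Number of distinct ways to split n_items into two equal halves."""
    assert n_items % 2 == 0
    half = n_items // 2
    return comb(n_items, half) // 2

for k in (4, 10, 20):
    print(k, n_splits(k))  # 3, 126, 92378 -> grows very fast
```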

49
Q

What is Cronbach’s alpha?

A

Cronbach’s alpha takes the idea of correlating subsets of items to its logical end -> basically boiling down to split-half on pairs of items

50
Q

What does Cronbach’s alpha provide information on?

A

It gives a sense of the average correlation among all pairs of items in the measure
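When the average inter-item correlation is known, the standardized form of Cronbach's alpha can be sketched as follows (this is the standardized variant, not the covariance-based formula):

```python
# Standardized Cronbach's alpha from k items and the average
# inter-item correlation r_bar: alpha = k*r / (1 + (k - 1)*r)
def alpha_std(k, r_bar):
    return k * r_bar / (1 + (k - 1) * r_bar)

# e.g. 10 items with an average pairwise correlation of .3
print(round(alpha_std(10, 0.3), 3))  # 0.811
```

Note how alpha rises with both the number of items and the average correlation, which is what the Spearman-Brown prophecy formula exploits.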

51
Q

What is Cronbach’s alpha often used to denote?

A

It is often used to denote what’s called a unidimensional construct

52
Q

What does the Spearman-Brown prophecy formula calculate?

A

It calculates how much we can improve Cronbach’s alpha by adding more items, based on the original alpha and the number of items added
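A sketch of the prophecy formula, where n is the factor by which the test is lengthened (new length / old length):

```python
# Spearman-Brown prophecy: predicted reliability when a test is
# lengthened by a factor n: alpha' = n*alpha / (1 + (n - 1)*alpha)
def spearman_brown(alpha, n):
    return n * alpha / (1 + (n - 1) * alpha)

# doubling a test with alpha = .70
print(round(spearman_brown(0.70, 2), 3))  # 0.824
```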

53
Q

What can any item measure?

A

They can measure:
- a “general” factor that loads on all items
- a “group” or “specific” factor that loads on a subset of items

54
Q

What are the two internal consistency measures of McDonald’s omega?

A

McDonald’s omega hierarchical and omega total

55
Q

What is omega hierarchical?

A

It is essentially the proportion of variance that is general to an overarching factor in your analysis (amount of shared variance)

56
Q

What is McDonald’s omega total?

A

It is the total proportion of reliable item variation

57
Q

What is test-retest reliability?

A

Correlation between tests taken at 2+ time points

58
Q

What is the risk of test-retest reliability?

A

People remember answers

59
Q

What are we concerned about when picking time periods for test-retest reliability?

A

We are concerned whether people are going to show differential changes across the time period

60
Q

What is interrater reliability?

A

Multiple people rating targets - we look at consistency across ratings

61
Q

What are intraclass correlations?

A

They split the variance of a set of ratings into multiple components

62
Q

What is content validity?

A

Refers to the idea that tests should only contain content relevant to the intended construct

63
Q

What is face validity?

A

To those taking the test: does the test appear to measure what it was designed to measure?

64
Q

What is construct validity?

A

Looks at the way in which the measure relates to other things

65
Q

What is convergent validity?

A

Measures should have high correlations with other measures of the same construct

66
Q

What is discriminant validity?

A

Measure should have low correlations with measures of different constructs

67
Q

What is nomological net validity?

A

A measure should have the expected pattern of correlations with different sets of constructs

68
Q

What is reliability?

A

Relation of true score with observed score

69
Q

What is validity?

A

Correlations with other measures play a key role

70
Q

What is the relationship between reliability and validity?

A

Logically, a score or measure cannot correlate with anything more than it correlates with itself, so reliability is the limit on validity