WK 11 Flashcards

1
Q

How do you interpret the meaning of the factors?

A

Based on the size and the sign of the loadings that you deem to be salient

2
Q

In most research, what are salient loadings?

A

Salient loadings are those greater than or equal to 0.3 (or 0.4) in absolute value

3
Q

What is a salient relationship?

A

Salient relationship = a relationship big enough for us to be interested in
(It is of a significant size that we would note it and retain it in our analysis)

4
Q

What do we do if we see an item that does not load on any factor above this salient value of 0.3?

A

We would deem it one that is not being explained well or accounted for by our factor solution, and earmark it as one we may remove from our analysis

5
Q

What is a Heywood case?

A

Factor loadings > 1 (which imply a negative unique variance)

6
Q

What does a Heywood case tell you?

A

tells you there is a problem in the analysis

7
Q

If a factor does not have a primary loading from at least 3 items that you are analysing, what do you do?

A

We query whether we have over-extracted (retained too many factors)

8
Q

What loading criterion is in place for EFA solutions?

A

All factors load on 3+ items at a salient level

9
Q

What is the EFA criterion for the items?

A

All items have at least one loading above the salient cut-off

10
Q

What is the EFA criterion for Heywood cases?

A

No Heywood cases

11
Q

What is the EFA criterion for complex items?

A

Complex items removed in accordance with goals

12
Q

What is the EFA criterion for item content?

A

Item content of factors is coherent and substantively meaningful

13
Q

What is one way to test replicability?

A

One way to test replicability is to see whether similar factors appear when similar data are collected

14
Q

Aside from similarity checks, what is another way of testing replicability?

A

Test this formally by collecting data on another sample or split one large sample into two (exploratory vs confirmatory)

15
Q

What is factor congruence?

A

It is a way of saying if I look across samples, how consistent is the factor solution I identify

16
Q

What does factor congruence enable?

A

It enables us to look at similarity of solutions

17
Q

What are congruence coefficients?

A

Essentially correlations between the factor loadings of two solutions, stacked on top of each other

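The card above can be made concrete: with the loadings of one factor from two samples stacked as vectors, the usual formalisation (Tucker's congruence coefficient) is an uncentred correlation between them. A minimal Python sketch with invented loadings:

```python
import numpy as np

# Hypothetical loadings for one factor, estimated in two samples (6 items)
sample1 = np.array([0.71, 0.65, 0.58, 0.12, 0.08, 0.05])
sample2 = np.array([0.68, 0.70, 0.55, 0.10, 0.11, 0.02])

def congruence(x, y):
    """Tucker's congruence coefficient: an uncentred correlation
    between two columns of factor loadings."""
    return np.sum(x * y) / np.sqrt(np.sum(x**2) * np.sum(y**2))

print(round(congruence(sample1, sample2), 3))
```

A value near 1 indicates the two solutions are highly similar; identical loading columns give exactly 1.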
18
Q

What is confirmatory factor analysis?

A

It tests a model where we explicitly say that this item has to relate to this factor -> we specify the grouping
- it is not driven by the data

19
Q

What is target rotation?

A

Target rotation is where you give the rotation algorithm a target structure: which items should relate to which factors

20
Q

What does it mean when the congruence coefficient is 1?

A

It suggests that there’s a perfect match between the factor solutions.

21
Q

If the loading is comparatively high in one sample and comparatively high in the other, what does this mean?

A

It leads to a higher congruence coefficient, because the rank ordering of the loadings is roughly the same across samples

22
Q

In confirmatory factor analysis what do we do?

A

We specify a model and test how well it fits the data

23
Q

How do we specify a model in CFA?

A

We specify a model by indicating what loadings we believe will be zero -> we then try to reject this model

24
Q

What does it mean if the correlations derived from the CFA do not match the correlations from your data?

A

It probably means that parameters set to zero should not have been set to zero

25
What do you do in unit-weighting?
You sum raw scores on the observed variables which have primary loadings on each factor
26
What items do you sum in unit-weighting?
Which items to sum is a matter of defining what loadings are salient
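The two cards above can be sketched in a few lines of Python (the loading matrix, 0.3 salient cut-off, and raw responses are all made up for illustration):

```python
import numpy as np

# Hypothetical loading matrix: 6 items x 2 factors
loadings = np.array([
    [0.72, 0.05],
    [0.65, 0.10],
    [0.58, 0.02],
    [0.08, 0.70],
    [0.11, 0.66],
    [0.04, 0.61],
])
salient = 0.3  # the cut-off from the earlier cards

# Hypothetical raw responses: 4 people x 6 items
scores = np.array([
    [4, 5, 4, 2, 1, 2],
    [3, 3, 4, 5, 5, 4],
    [5, 4, 5, 1, 2, 1],
    [2, 2, 3, 4, 4, 5],
])

# Unit weighting: for each factor, sum the raw scores of the items
# whose loading on that factor is salient (|loading| >= 0.3)
factor_scores = np.column_stack([
    scores[:, np.abs(loadings[:, f]) >= salient].sum(axis=1)
    for f in range(loadings.shape[1])
])
print(factor_scores)
```

Each person ends up with one unit-weighted score per factor, built only from that factor's salient items.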
27
What do the Thurstone or Thompson methods do?
They compute scores from observed item correlations and loadings (weighted method of producing scores)
28
What does Bartlett method focus on?
Focuses on minimizing the sums of squares for unique factors
29
What are three ways to compute factor scores?
Unit weighting, the Thurstone method, and the Bartlett method
30
What is the assumption in unit-weighting?
We assume all items are equally weighted
31
What is a negative consequence of the unit-weighting or regression methods?
We assume that our factors will correlate with each other -> this tends to inflate correlations if we have an orthogonal solution
32
If we have forced an orthogonal solution, what will we see?
We will see a pattern of score correlations that does not match our factor analysis
33
What does the Anderson-Rubin method do?
It preserves orthogonality of factors
34
What does the Ten Berge method generalise and do?
It generalises the Anderson-Rubin method to preserve correlations (or lack thereof) between factors
35
When are Ten Berge scores preferable?
If the goal is to find higher-order factors, the correlations between the scores are important, so Ten Berge scores are preferable
36
What are the crucial determinants of minimal sample size?
Communalities and the item-to-factor ratio
37
When can we get away with a smaller sample size?
When there are clear groupings
38
When planning a study what should you do to determine your minimum sample size?
- Think of how many factors you expect and get many items measuring each
- Use pilot data and previous studies to make an educated guess about the communalities you expect
39
In sample size, when can you have fewer subjects?
- If communalities are wide or high
- If the item-to-factor ratio is high
40
When are communalities most important?
When the item-to-factor ratio = 20:7 (an interaction effect)
41
What is an observed score?
observed score = true score + error
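The classical test theory identity above can be illustrated with a short simulation (all variances below are made up): reliability is the share of observed-score variance that comes from the true score.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Classical test theory: observed = true + error
true = rng.normal(0, 1.0, n)    # true-score variance = 1
error = rng.normal(0, 0.5, n)   # error variance = 0.25
observed = true + error

# Reliability = true variance / observed variance = 1 / 1.25 = 0.8
reliability = true.var() / observed.var()
print(round(reliability, 2))
```

With these illustrative variances the simulated reliability lands close to the theoretical 0.8.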
42
What is the assumption in parallelism?
Assumes that both tests have approximately the same relationship to the true score with the same error variance
43
What is the tau equivalence assumption?
Assumes that both tests have the same relation to the true score, but error varies
44
What is the congeneric assumption?
When we have more than 4 tests, we can relax the tau-equivalence assumption about the relation to the true score and assume each test is an imperfect measure of the true score
45
What are the three assumptions of parallel tests?
Parallelism, tau equivalence and congeneric
46
What is split half reliability?
This measure indicates how internally consistent the test is
47
What is the issue with split-half reliability?
When we increase the number of items, the number of possible splits that we have gets very large
48
What can we decide in split-half reliability?
We have a choice of split, which can mean we end up with a range of possible values for our split-half reliability
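A minimal simulation of the point above, assuming responses driven by a single common factor (all numbers invented): one arbitrary odd/even split is correlated and then stepped up to full test length; a different split would give a somewhat different estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_items = 200, 10

# Simulated responses driven by one common factor (an assumption
# made purely so the sketch has data to work with)
factor = rng.normal(size=(n_people, 1))
items = 0.6 * factor + rng.normal(scale=0.8, size=(n_people, n_items))

# One arbitrary split: odd- vs even-numbered items
half1 = items[:, ::2].sum(axis=1)
half2 = items[:, 1::2].sum(axis=1)
r = np.corrcoef(half1, half2)[0, 1]

# Spearman-Brown step-up from half-length to full-length reliability
split_half = 2 * r / (1 + r)
print(round(split_half, 2))
```

Re-running with a different split (e.g., first five items vs last five) generally changes the estimate, which is exactly why so many splits are possible.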
49
What is Cronbach's alpha?
Cronbach's alpha takes the idea of correlating subsets of items to its logical end -> it basically boils down to split-half on all pairs of items
50
What does Cronbach's alpha provide information on?
It gives a sense of the average correlation of all pairs of items in the measure
51
What is Cronbach's alpha often used to denote?
It is often used to denote what's called a unidimensional construct
52
What does the Spearman-Brown prophecy formula calculate?
It calculates how much we can improve Cronbach's alpha by adding more items, based on the original alpha and the number of items added
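The formula itself is short: projected alpha = k * alpha / (1 + (k - 1) * alpha), where k is the factor by which the test is lengthened. A sketch:

```python
def spearman_brown(alpha, k):
    """Projected reliability when the test is lengthened k-fold
    (k = new length / old length), assuming the added items
    behave like the existing ones."""
    return k * alpha / (1 + (k - 1) * alpha)

# Doubling a test whose current alpha is .70
print(round(spearman_brown(0.70, 2), 3))  # 0.824
```

Doubling a test with alpha = .70 projects an alpha of about .82, on the assumption that the new items are as good as the old.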
53
What can any item measure?
They can measure:
- a "general" factor that loads on all items
- a "group" or "specific" factor that loads on a subset of items
54
What are the two internal consistency measures of McDonald's omega?
McDonald's omega hierarchical and omega total
55
What is omega hierarchical?
It is essentially the proportion of variance that is general to an overarching factor in your analysis (amount of shared variance)
56
What is McDonald's omega total?
It is the total proportion of reliable item variation
57
What is test-retest reliability?
Correlation between tests taken at 2+ time points
58
What is the risk of test-retest reliability?
People remember answers
59
What are we concerned about when picking time periods for test-retest reliability?
We are concerned whether people are going to show differential changes across the time period
60
What is interrater reliability?
Multiple people rating targets - we look at consistency across ratings
61
What are intraclass correlations?
They split the variance of a set of ratings into multiple components
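As a sketch of the variance-splitting idea, the simplest intraclass correlation, ICC(1), contrasts between-target and within-target mean squares from a one-way ANOVA (the ratings below are made up):

```python
import numpy as np

def icc_oneway(ratings):
    """One-way ICC(1): splits rating variance into between-target
    and within-target components."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape                 # n targets, k raters
    grand = ratings.mean()
    target_means = ratings.mean(axis=1)
    # Between-targets mean square
    msb = k * ((target_means - grand) ** 2).sum() / (n - 1)
    # Within-target (residual) mean square
    msw = ((ratings - target_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical ratings: 4 targets x 3 raters
ratings = [[9, 8, 9],
           [5, 6, 5],
           [2, 3, 2],
           [7, 7, 8]]
print(round(icc_oneway(ratings), 2))
```

Here the raters agree closely, so most variance is between targets and the ICC is high.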
62
What is content validity?
Refers to the fact that a test should only contain content relevant to the intended construct
63
What is face validity?
To those taking the test: does the test appear to measure what it was designed to measure?
64
What is construct validity?
Looks at the way in which the measure relates to other things
65
What is convergent validity?
Measures should have high correlations with other measures of the same construct
66
What is discriminant validity?
Measure should have low correlations with measures of different constructs
67
What is nomological net validity?
Measure should have expected patterns of correlations with different sets of constructs
68
What is reliability?
Relation of true score with observed score
69
What is validity?
The degree to which a measure assesses what it is intended to; correlations with other measures play a key role in establishing it
70
What is the relationship between reliability and validity?
Logically, a score or measure cannot correlate with anything more than it correlates with itself, so reliability is the limit on validity