L3: Validity Flashcards

1
Q

Validity - Definition

A

The degree to which evidence and theory support the interpretation of test scores for proposed uses

2
Q

What are features of validity? (3)

A

1) Validity is a property of a psychological test score interpretation
2) A psychological test is valid to a certain degree (it is not binary)
3) Validity is based on both theory and empirical evidence

3
Q

What are the various types of evidence for validity?

A

1) Content Validity
2) Response Process Validity
3) Internal Structure Validity
4) Associative Validity
5) Consequential Validity

4
Q

Content Validity - Definition

A

Degree to which the content of a measure truly reflects the full domain of the construct for which it is being used

5
Q

Who evaluates the content validity of a measure?

A

Experts

6
Q

Face Validity - Definition

A

Degree to which a measure appears to be related to a specific construct in the judgment of non-experts

7
Q

Who evaluates the face validity of a measure?

A

Participants / test users

8
Q

What characteristics of a test reduce content validity?

A

1) Construct Underrepresentation
2) Construct Irrelevant Content

9
Q

Response Process Validity - Definition

A

The match between the psychological processes that respondents actually use when completing a measure & the processes that the researcher intends them to use

10
Q

Response Process Validity - Evidence

A

Direct Evidence:
1) Think-aloud protocols
2) Interviewing respondents

Indirect Evidence:
1) Process Data (ex: eye tracker)
2) Statistical Analysis (of results)
3) Experimental manipulation of the response process

11
Q

What is direct evidence for response process validity?

A

1) Think-aloud protocols
2) Interviewing respondents

12
Q

What is indirect evidence of response process validity?

A

1) Process Data (ex: eye tracker)
2) Statistical Analysis (of results)
3) Experimental manipulation of the response process

13
Q

What are general threats to response process validity?

A

1) Poorly designed items
2) Respondents (guessing, lack of motivation)

14
Q

Internal Structure Validity - Definition

A

The match between the actual internal structure of a test & the structure that the test should theoretically possess.

15
Q

What are features of a test with good structural validity?

A

1) The number of factors matches the theory
2) Rotated factor loadings display the theoretical structure
3) Factor correlations comply with what is expected based on the theory/literature
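
A minimal sketch of how these three checks might look in practice, assuming questionnaire responses in a pandas DataFrame, the third-party factor_analyzer package, and a hypothetical two-factor theory:

```python
# Sketch only: checking internal structure with exploratory factor analysis.
# "questionnaire_items.csv" and the two-factor theory are hypothetical.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("questionnaire_items.csv")

# 1) Number of factors: count eigenvalues > 1 and compare with the theory
fa_unrotated = FactorAnalyzer(rotation=None)
fa_unrotated.fit(items)
eigenvalues, _ = fa_unrotated.get_eigenvalues()
print("Factors with eigenvalue > 1:", (eigenvalues > 1).sum())

# 2) Rotated factor loadings: do items load on their intended factor?
fa = FactorAnalyzer(n_factors=2, rotation="oblimin")
fa.fit(items)
print(pd.DataFrame(fa.loadings_, index=items.columns))

# 3) Factor correlations: do they match the theory/literature?
print(fa.phi_)  # factor correlation matrix (available for oblique rotations)
```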

16
Q

What are features of a test with poor structural validity?

A

1) The number of factors does not match the theory
2) Rotated factor loadings do not display the theoretical structure
3) Factor correlations do not comply with what is expected based on the theory/literature

17
Q

Construct Validity - Definition

A

Degree to which test scores can be accurately interpreted as reflecting a particular construct

18
Q

Nomological Network - Definition

A

Network that summarises all theoretical relations between the construct of interest & all other related constructs and variables

19
Q

A nomological network is used to examine what form of validity?

A

Associative Validity

20
Q

Associative Validity - Definition

A

Match between a measure's ACTUAL associations with other measures & the associations that the test SHOULD have with other measures

21
Q

What are the various parts of a nomological network?

A

1) Validity coefficients - the individual correlations
2) Convergent Evidence - correlations between constructs that ARE related in theory
3) Discriminant Evidence - lack of correlations between constructs that ARE NOT related in theory

22
Q

Criterion Validity - Definition

A

Degree to which test scores are actually related to a particularly important criterion variable that they should be correlated with. Example: GPA & Salary

23
Q

What are the types of criterion validity?

A

1) Concurrent Validity
2) Predictive Validity

24
Q

Concurrent Validity - Definition

A

Association between the construct & an observed variable AT THE SAME TIME

25
Q

Predictive Validity - Definition

A

Association between the construct & an observed variable measured IN THE FUTURE

26
Q

What is the main challenge when interpreting validity coefficients?

A

Uncertainty regarding whether the variance in the data is attributable to trait variance or method variance.

27
Q

Trait Variance - Definition

A

Shared variance in the test scores due to the same trait/construct

28
Q

Method Variance - Definition

A

Shared variance in the data due to the same method

29
Q

What is the Multitrait-Multimethod (MTMM) Matrix used for?

A

Interpreting validity coefficients

30
Q

Convergent Validity - Definition

A

Degree to which test scores are actually correlated with other measures that they SHOULD be correlated with

31
Q

Discriminant Validity - Definition

A

Degree to which test scores are actually uncorrelated with other measures that they SHOULD NOT be correlated with

32
Q

What association is examined under a monotrait-heteromethod correlation?

A

Same Trait & Different Method

33
Q

What form of validity is examined under a monotrait-heteromethod correlation?

A

Convergent Validity

34
Q

What association is examined under a heterotrait-heteromethod correlation?

A

Different Trait & Different Method

35
Q

What form of validity is examined under a heterotrait-heteromethod correlation?

A

Discriminant Validity

36
Q

What association is examined under a heterotrait-monomethod correlation?

A

Different Trait & Same Method

37
Q

What form of validity is examined under a heterotrait-monomethod correlation?

A

Discriminant Validity

38
Q

What correlation is a type of convergent validity?

A

Monotrait-heteromethod correlation

39
Q

What correlation is a type of discriminant validity?

A

1) Heterotrait-monomethod correlation
2) Heterotrait-heteromethod correlation

40
Q

What is the ideal relation between the correlations in a Multitrait-multimethod matrix?

A

You want the monotrait-heteromethod correlations to be large, and you want them to be larger than both the heterotrait-heteromethod correlations and the heterotrait-monomethod correlations.
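
A minimal sketch of this comparison on simulated data, assuming two hypothetical traits (anxiety, extraversion) each measured by two methods (self-report, peer report):

```python
# Sketch only: comparing the three correlation types in a tiny MTMM design.
# All traits, methods, and data below are simulated/hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200
anxiety = rng.normal(size=n)
extraversion = rng.normal(size=n)

scores = pd.DataFrame({
    "anxiety_self":      anxiety      + rng.normal(scale=0.5, size=n),
    "anxiety_peer":      anxiety      + rng.normal(scale=0.5, size=n),
    "extraversion_self": extraversion + rng.normal(scale=0.5, size=n),
    "extraversion_peer": extraversion + rng.normal(scale=0.5, size=n),
})
r = scores.corr()

mono_hetero   = r.loc["anxiety_self", "anxiety_peer"]       # same trait, different method (convergent)
hetero_mono   = r.loc["anxiety_self", "extraversion_self"]  # different trait, same method (discriminant)
hetero_hetero = r.loc["anxiety_self", "extraversion_peer"]  # different trait, different method (discriminant)

# Ideal pattern: the convergent correlation is large and exceeds both discriminant correlations
print(mono_hetero > hetero_mono and mono_hetero > hetero_hetero)
```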

41
Q

Consequential Validity - Definition

A

The match between the ACTUAL consequences of using a measure and the consequences that SHOULD be seen

42
Q

What are the types of evidence of consequential validity?

A

1) Intended Effects - degree to which the use of the test scores has the intended effects
2) Unintended Differential Impact on Groups - do the test scores end up unintentionally weighted in favour of one group and against another group?
3) Unintended Systemic Effects - does the test have an unintended effect on organisational systems?

43
Q

What are the main factors that affect validity coefficients?

A

1) Associations between constructs
2) Random measurement error (& reliability)
3) Restricted range
4) (Differential) skew
5) Method variance (lack of)
6) Prediction based on single vs multiple events
7) Time (between tests)

44
Q

How do associations between constructs affect validity coefficients?

A

The greater the association between constructs, the higher the correlation

45
Q

How does random measurement error affect validity coefficients?

A

Random measurement error in the data leads to lower reliability & smaller correlations + validity coefficients
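
A worked illustration of this attenuation effect, using the classical attenuation relation (the numbers below are made up): r_observed = r_true × √(r_XX × r_YY), so a true correlation of .50 between constructs measured with reliabilities of .70 each yields an observed validity coefficient of only .50 × √(.70 × .70) = .35.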

46
Q

How does restricted range affect validity coefficients?

A

Artificially restricted range will lead to a smaller correlation & validity coefficient

47
Q

Restricted Range - Definition

A

When a test does not reflect the true distribution of scores, as it is limited to only a subset of the data. For example, Test A has many people scoring the maximum possible score, while Test B has a normal distribution of test scores.
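
A minimal simulated illustration of this effect (all data below are made up): restricting the test scores to the top half of the range, as happens when only high scorers are observed, shrinks the correlation with the criterion.

```python
# Sketch only: simulated demonstration that restricting the range of a test
# shrinks its observed correlation with a criterion. Data are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
test = rng.normal(size=n)
criterion = 0.6 * test + rng.normal(scale=0.8, size=n)  # true association

full_r = np.corrcoef(test, criterion)[0, 1]

# Keep only the top half of test scores (e.g., only admitted applicants)
keep = test > np.median(test)
restricted_r = np.corrcoef(test[keep], criterion[keep])[0, 1]

print(f"full-range r = {full_r:.2f}, restricted-range r = {restricted_r:.2f}")
```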

48
Q

How does reliability affect validity coefficients?

A

Lower reliability (due to greater measurement error) will lead to a smaller correlation and validity coefficient.

49
Q

How does skew affect validity coefficients?

A

A differential skew will lead to a smaller correlation, and lower validity coefficients.
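
A minimal simulated illustration (made-up data): giving two correlated variables opposite skew lowers their Pearson correlation even though the underlying association is unchanged.

```python
# Sketch only: differential skew attenuates the observed correlation.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)
y = 0.6 * x + rng.normal(scale=0.8, size=n)
r_before = np.corrcoef(x, y)[0, 1]

# Monotone transforms that give the variables opposite (differential) skew
x_right_skewed = np.exp(x)   # right-skewed version of x
y_left_skewed = -np.exp(-y)  # left-skewed version of y
r_after = np.corrcoef(x_right_skewed, y_left_skewed)[0, 1]

print(f"r without skew = {r_before:.2f}, r with differential skew = {r_after:.2f}")
```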

50
Q

How does method variance affect validity coefficients?

A

Shared method variance does not reduce validity coefficients; if anything, it inflates them. Using an identical method for both measures produces a higher correlation, but that correlation is artificially inflated by the shared method variance and therefore does not reflect the true association between the constructs. Correlations are thus more accurate when the two measures use different methods.

51
Q

How does event prediction affect validity coefficients?

A

1) Validity coefficients based on correlations between variables measured at different times (predictive validity correlations) will be smaller.
2) Validity coefficients based on correlations between variables measured at a single time point (concurrent validity correlations) will be larger.

52
Q

What is the potential effect of using predictive validity correlations when estimating validity coefficients?

A

It might result in an underestimation of validity coefficients

53
Q

What is the potential effect of using concurrent validity correlations when estimating validity coefficients?

A

It might result in an overestimation of validity coefficients

54
Q

How do intervals between testing affect validity coefficients?

A

Longer time periods between testing will reduce the (predictive) validity correlations

55
Q

Validity Generalisation - Definition

A

Process of evaluating a test's validity coefficients across a large set of studies (same constructs, different operationalisations; is the nomological network consistent?)

56
Q

What is a QVC procedure used for?

A

It is used to quantify the degree of fit between the predicted pattern of validity correlations & the actual pattern of validity correlations obtained in the data

57
Q

What are the steps of a QVC procedure?

A

1) Generate a predicted pattern of validity correlations
2) Gather data & compute actual pattern of validity correlations
3) Examine to what degree the predicted pattern matches the actual pattern
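
A minimal sketch of step 3, assuming the degree of fit is summarised as the correlation between the predicted and the observed pattern (the criterion variables and all numbers are hypothetical):

```python
# Sketch only: quantifying how well a predicted pattern of validity
# correlations matches the observed pattern. All values are hypothetical,
# and a simple Pearson correlation is only one way to summarise the fit.
import numpy as np

criteria  = ["self_esteem", "depression", "shoe_size"]
predicted = np.array([0.40, -0.30, 0.00])  # pattern expected from theory
observed  = np.array([0.35, -0.25, 0.05])  # pattern computed from the data

fit = np.corrcoef(predicted, observed)[0, 1]
print(f"pattern fit r = {fit:.2f}")  # close to 1 -> good construct validity evidence
```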

58
Q

For what kind of validity is a QVC procedure used?

A

Construct Validity