M&D3 Week 1 Flashcards

1
Q

Reliability

A

the consistency of a measurement repeated within a person or in a sample

2
Q

test score

A

X = T + E
T (true score) is unknown; E (error) can only be estimated; X is the obtained score
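The X = T + E decomposition can be illustrated with a short simulation (Python; the distributions and parameters below are invented for illustration, not from the course):

```python
import random
import statistics

random.seed(42)

n = 10_000
true_scores = [random.gauss(100, 15) for _ in range(n)]    # T
errors      = [random.gauss(0, 5)    for _ in range(n)]    # E, mean 0
observed    = [t + e for t, e in zip(true_scores, errors)] # X = T + E

# With independent T and E, Var(X) ≈ Var(T) + Var(E),
# and reliability rxx ≈ Var(T) / Var(X).
var_t = statistics.pvariance(true_scores)
var_x = statistics.pvariance(observed)
print(round(var_t / var_x, 2))  # ≈ 0.9 = 15² / (15² + 5²)
```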

3
Q

Accidental measurement errors (unpredictable)

A

Something you cannot predict beforehand (e.g., time of day, a headache)

4
Q

Systematic measurement errors (constant)

A

Something you can predict beforehand, happens systematically

5
Q

Confidence interval formula

A

CI = X ± z · SE
X = obtained score; z = standard normal value for the chosen confidence level; SE = standard error
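A quick numeric sketch of the formula in Python (the score, SE, and z below are illustrative values, not from the course):

```python
score = 100   # X, obtained score
se = 6.0      # standard error (assumed here for illustration)
z = 1.96      # z value for a 95% interval

lower = score - z * se
upper = score + z * se
print(round(lower, 2), round(upper, 2))  # 88.24 111.76
```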

6
Q

z = 0.99

A

the odds are 68% that the true score lies within the interval (lower confidence, narrower interval)

7
Q

z = 1.96

A

the odds are 95% that the true score lies within the interval (medium confidence, medium interval width)

8
Q

z = 2.58

A

the odds are 99% that the true score lies within the interval (higher confidence, wider interval)

9
Q

Confidence intervals indicate…

A

the limits within which the true score may be assumed to lie

10
Q

Standard error (SE)

A

Standard deviation of raw scores around true scores (SD of measurement errors)

11
Q

SE formula

A

SE = σ * √(1 - rxx)
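The formula as a small Python helper, with illustrative σ and rxx values (the numbers are made up; they also preview the next two cards — higher rxx and lower σ both shrink SE):

```python
def standard_error(sd, rxx):
    """SE = σ · √(1 − rxx): the SD of the measurement errors."""
    return sd * (1 - rxx) ** 0.5

# Higher reliability → lower SE:
print(round(standard_error(15, 0.84), 2))  # 6.0
print(round(standard_error(15, 0.91), 2))  # 4.5
# Lower σ → lower SE:
print(round(standard_error(10, 0.84), 2))  # 4.0
```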

12
Q

The higher rxx

A

the lower SE

13
Q

The lower σ

A

the lower SE

14
Q

Test-retest

A

The same test is administered at least twice; reliability is the correlation between the administrations

15
Q

Parallel (or alternative) versions

A

The correlation between the different versions

16
Q

Internal consistency measures (split-half, Cronbach’s alpha, KR-20)

A

Test used one time! KR-20 is for dichotomous items (only two options)
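KR-20 can be sketched in a few lines of Python (the 0/1 response matrix below is invented; rows are persons, columns are dichotomous items):

```python
from statistics import pvariance

def kr20(items):
    """KR-20 = k/(k−1) · (1 − Σ p·q / variance of total scores),
    for dichotomous (0/1) item scores."""
    k = len(items[0])                    # number of items
    n = len(items)                       # number of persons
    totals = [sum(row) for row in items]
    var_total = pvariance(totals)
    pq = 0.0
    for i in range(k):
        p = sum(row[i] for row in items) / n  # proportion scoring 1
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_total)

data = [[1, 1, 1, 0],
        [1, 1, 0, 0],
        [1, 0, 0, 0],
        [1, 1, 1, 1],
        [0, 0, 0, 0]]
print(round(kr20(data), 2))  # 0.8
```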

17
Q

Raters coefficient: intraclass correlation (ICC)

A

Consistency between the ratings that different raters give to the same group of targets

18
Q

Alpha coefficient is based on

A
  1. A single measurement of a test
  2. (Co)variances of the items
  3. The number of items
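These three ingredients appear directly in the usual variance-based form of alpha, sketched here in Python with invented item scores:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """α = k/(k−1) · (1 − Σ item variances / variance of total score)."""
    k = len(items[0])                    # number of items
    totals = [sum(row) for row in items] # total score per person
    item_vars = [pvariance([row[i] for row in items]) for i in range(k)]
    return (k / (k - 1)) * (1 - sum(item_vars) / pvariance(totals))

# Rows are persons, columns are Likert-type items (made-up data):
scores = [[4, 5, 4], [2, 3, 2], [5, 5, 4], [1, 2, 2], [3, 4, 3]]
print(round(cronbach_alpha(scores), 2))  # 0.97
```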
19
Q

insufficient alpha coefficient

A

rxx < .80

20
Q

Validity

A

The extent to which a test measures what it should measure

21
Q

Face validity

A

How the test appears externally (to laypeople/test takers)

22
Q

Construct validity

A

To what extent is the test a good measurement of the underlying theoretical concept?

23
Q

Convergent validity

A

To what extent is the test correlated with other measures of the same concept? (Should correlate highly, positively or negatively.)

24
Q

Divergent validity

A

To what extent is the test correlated with other measures of a different concept? (Should show little or no correlation.)

25
Q

Criterion validity, diagnostic validity

A

How well the test predicts a concrete criterion; how much value it has for diagnosis

26
Q

Content validity

A

Does the test cover the domain of knowledge, skills, behavior that we’re supposed to measure?

27
Q

Internal structure

A

the structure of the test itself
- Number of dimensions (factors)
- Score differences between groups (e.g., high versus low extraversion)

28
Q

External validity

A

how the questionnaire correlates with other measures

29
Q

Multitrait-multimethod matrix used to test

A

for convergent and divergent validity

30
Q

Convergent validity

A

Hetero-method (more than one method)
Mono-trait (the same trait measured by different methods)

31
Q

Divergent validity

A

Across two or more methods
Hetero-method (more than one method)
Hetero-trait (different traits)

32
Q

Method variance

A

Within a single method
mono-method
hetero-trait

33
Q

FOUR RULES FOR TESTING CONSTRUCT VALIDITY BASED ON MTMM MATRIX

A
  1. Convergent validity (average) > 0
  2. Convergent validity (average) > divergent validity (average)
  3. Convergent validity (average) > method variance (average)
  4. Method variance and divergent validity approximately the same pattern in correlation matrix
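Rules 1–3 can be checked mechanically; the sketch below uses invented correlations for a toy 2-trait, 2-method design (rule 4 concerns the pattern of the full matrix and is not coded here):

```python
def avg(xs):
    return sum(xs) / len(xs)

# Hypothetical MTMM correlations for 2 traits × 2 methods:
convergent = [0.60, 0.55]   # mono-trait, hetero-method
divergent  = [0.10, 0.15]   # hetero-trait, hetero-method
method_var = [0.20, 0.25]   # hetero-trait, mono-method

print(avg(convergent) > 0)                # rule 1 → True
print(avg(convergent) > avg(divergent))   # rule 2 → True
print(avg(convergent) > avg(method_var))  # rule 3 → True
```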
34
Q

Immediate criteria

A

work sample in an Assessment Centre

35
Q

Low delay

A

first client satisfaction

36
Q

High delay

A

annual evaluation

37
Q

Operationalization

A

changing the conceptual ‘thing’ into something measurable

38
Q

Incremental (added) validity

A

how much one predictor (e.g., personality) adds to the prediction beyond another (e.g., cognitive capacity)

39
Q

COTAN standards in personnel selection, a validity coefficient of … is seen as sufficient

A

.40

40
Q

Cohen (1977) standards (effect sizes) :

A

r = .10 low effect size
r= .30 medium
r= .50 high

41
Q

Possible problems with values that are too high (this also applies to other values, such as correlation coefficients)

A
  1. Incorrect conceptualizations of your measure
  2. Selection of instrument to test for criterion validity is not correct
42
Q

The correction for attenuation formula

A

It assumes there is an ideal validity value: if the instruments were perfectly reliable, the maximum attainable validity could be calculated.
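The standard correction-for-attenuation formula (Spearman) divides the observed validity by the square root of the product of the two reliabilities; the card does not spell this out, and the numbers below are illustrative:

```python
def correct_for_attenuation(r_xy, rxx, ryy):
    """Estimated validity if both measures were perfectly reliable:
    r_corrected = r_xy / √(rxx · ryy)."""
    return r_xy / (rxx * ryy) ** 0.5

# Observed validity .40, reliabilities .80 and .90 (made-up values):
print(round(correct_for_attenuation(0.40, 0.80, 0.90), 2))  # 0.47
```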