SPRING Psychometrics and reliability Flashcards

1
Q

what are psychometrics

A

measurement of the mind - stable characteristics of an individual
ie ability, personality and attitude

2
Q

what should you consider with psychometrics

A

reliability and validity of the measure you are taking

3
Q

reliability

A

a measure of consistency
observed score = true score + random error (from design flaws)
- aim to minimise error and get as close to the true score as possible

4
Q

validity

A

does the test measure what it is supposed to / reflect true-to-life constructs and abilities

5
Q

how can reliability be assessed

A

correlation (0-1)
if the test captures a true score, performance should be consistent across:
1 - different parts of the test
2 - the same people on different occasions
3 - the same people assessed by different testers

6
Q

general psychometric tests and reliability

A

IQ ≈ 0.9
personality ≈ 0.7
open-ended/subjective tests ≈ 0.5
indirect measures ≈ 0.2

7
Q

how can you test for reliability

A

test-retest
parallel forms
inter-item
inter-rater

8
Q

what is test retest reliability

A

give the test twice to the same people and correlate the scores

appropriate for measures assumed to be stable within a person ie intelligence - NOT WHEN THE CONSTRUCT CAN CHANGE, IE MOOD
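The test-retest correlation is just Pearson's r between the two administrations. A minimal Python sketch - the six pairs of scores below are invented for illustration:

```python
from statistics import mean

def pearson_r(x, y):
    # Pearson correlation: covariance / product of spreads
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# hypothetical IQ scores for six people tested twice, a month apart
time1 = [100, 112, 95, 130, 104, 118]
time2 = [98, 115, 97, 126, 108, 120]
r = pearson_r(time1, time2)  # close to 1 = stable across occasions
```

An r near the 0.9 typical of IQ tests would indicate good test-retest reliability.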

9
Q

what is parallel forms reliability

A

reliability checked by the designers of a test when developing alternative versions of the same test - make sure all versions give similar scores for the same person
- avoids memory/practice effects in repeated-measures designs
if the correlation between versions is high, they are likely to be equivalent measures

10
Q

what is inter item reliability

A

different items are used in the test to measure the same construct - scores are averaged to give an overall idea of the whole construct, ie a personality trait
Cronbach's alpha indicates internal consistency by giving the average across all possible ways of splitting the items

11
Q

how do you know if something has high reliability

A

a smaller difference between measured and true scores (ie a reduction in random error)

12
Q

Cronbach's alpha

A

how well groups of items correlate with each other
larger number of items = higher alpha
minimum 0.7, preferably 0.8
check alpha for each independent subscale within the study + the effect if one item is deleted
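Alpha can be computed directly from the variances: alpha = k/(k-1) * (1 - sum of item variances / variance of the total scores). A short sketch, with three invented 5-point Likert items:

```python
from statistics import variance

def cronbach_alpha(items):
    # items: list of k columns, one per item, each holding the
    # same n respondents' scores
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-person total
    sum_item_var = sum(variance(col) for col in items)
    return (k / (k - 1)) * (1 - sum_item_var / variance(totals))

# three hypothetical items intended to tap the same trait
item1 = [4, 5, 3, 4, 2]
item2 = [4, 4, 3, 5, 2]
item3 = [5, 5, 2, 4, 1]
alpha = cronbach_alpha([item1, item2, item3])  # above the 0.8 target here
```

In practice the "effect if one item deleted" check means re-running this with each item left out in turn.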

13
Q

what is inter rater reliability

A

are the people conducting the test being reliable - PEARSON'S R
training and instructions for testers must be clear to ensure they measure the same behaviour and agree - reduces subjectivity

can also be checked over time, ie at intervals

14
Q

measuring inter rater reliability

A

Pearson's r
3+ raters can be tested with Cronbach's alpha
non-parametric = Kendall's (rank-based)
Cohen's kappa - allows for the possibility that raters guess on at least some variables due to uncertainty - corrects for chance agreement: k = (p_o - p_e) / (1 - p_e), where p_o = observed agreement and p_e = agreement expected by chance
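Cohen's kappa compares observed agreement (p_o) with the agreement expected by chance from each rater's category frequencies (p_e). A minimal sketch - the two raters' codes below are invented for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # observed agreement p_o: proportion of identical codes
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement p_e from each rater's marginal category rates
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# two hypothetical observers coding ten behaviours as on/off task
a = ["on", "on", "off", "on", "off", "on", "on", "off", "on", "on"]
b = ["on", "off", "off", "on", "off", "on", "on", "on", "on", "on"]
kappa = cohens_kappa(a, b)  # noticeably lower than the raw 80% agreement
```

The correction matters because two raters who both code "on" most of the time will agree often by chance alone.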

15
Q

reliability to check for in questionnaires

A

inter item - cronbach

16
Q

reliability to check for in rating scales

A

inter-rater agreement - especially when raters are non-experts and the scale is novel

17
Q

reliability to check for in observations

A

inter-observer/rater

check that they code the same situations the same way when working independently

18
Q

if a study is invalid

A

it can still be reliable

19
Q

if a study is unreliable

A

it cannot be valid

20
Q

types of validity

A

face
content
predictive
concurrent
construct

21
Q

face validity

A

are the items convincing/acceptable to users
asks appropriate questions and is taken seriously
an aspect of content validity

22
Q

content validity

A

are the items appropriate for the intended purpose - relevant to the desired construct

23
Q

face and content

A

both to do with the kinds of items within a test

both are judged by a consulting expert and/or interested informants

24
Q

predictive validity

A

do the scores on a test correlate with the outcomes they are expected to predict

25
Q

concurrent validity

A

do the scores on a test correlate with alternative existing tests of a similar construct

(general construct validity, which the next two cards break down: does the test properly reflect the theoretical nature of the psychological construct it is intended to measure? - depends on agreement on what the theoretical nature of the construct is, e.g. intelligence. OR does the test score correlate with other things (or differ between groups) in the way which, according to theory, it should?)

26
Q

construct validity: convergent

A

does the test correlate (positively or negatively, as theory predicts) with the things it theoretically should relate to

27
Q

construct validity: discriminant

A

does the test show no correlation with the things it is theoretically independent of

28
Q

Campbell and Fiske multitrait-multimethod matrix

A

tests whether a measure shows both convergent and discriminant validity
test participants on possibly overlapping constructs that the test claims to measure/not measure, and check the correlations within the same group of people

29
Q

Lindsay et al. 1994 multitrait-multimethod matrix

A

anxiety and depression self-report for people with intellectual disabilities
- if valid, then convergent on anxiety and depression scores, but not on subscales theoretically unrelated to anxiety/depression, ie social skills (discriminant)