SPRING Psychometrics and reliability Flashcards
what are psychometrics
measurement of the mind - stable characteristics of an individual
ie ability, personality and attitude
what should you consider with psychometrics
reliability and validity of the measure you are taking
reliability
measure of consistency
true score + random error (eg design flaws)
- aim to minimise error and get as close to the true score as possible
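A minimal sketch of the classical test theory idea behind this card (standard notation, not in the original card: X = observed score, T = true score, E = random error):

```latex
X = T + E, \qquad
\text{reliability} = \frac{\operatorname{Var}(T)}{\operatorname{Var}(T) + \operatorname{Var}(E)}
```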
validity
does the test measure what it is supposed to / reflect true-to-life constructs/abilities
how can reliability be assessed
correlation (0-1)
if the test reflects the true score, performance should be consistent across:
1- different parts of the test
2- the same people on different occasions
3- the same people tested by different testers
general psychometric tests and reliability
IQ = 0.9
personality = 0.7
open-ended/subjective tests = 0.5
indirect measures = 0.2
how can you test for reliability
test retest
parallel forms
inter item
inter rater
what is test retest reliability
give test twice to same people and correlate scores
appropriate for measures assumed to be stable within a person, ie intelligence - NOT appropriate when the construct can change, ie mood
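A minimal sketch of a test-retest check in Python, assuming hypothetical score lists time1 and time2 for the same people (names and data are made up for illustration):

```python
from scipy.stats import pearsonr

# Hypothetical scores for the same five people on two occasions
time1 = [98, 105, 110, 92, 120]
time2 = [101, 103, 112, 95, 118]

# Test-retest reliability = correlation between the two administrations
r, p = pearsonr(time1, time2)
print(f"test-retest r = {r:.2f}")
```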
what is parallel forms reliability
reliability checked by the designers of a test when developing alternative versions of the same test - make sure all versions produce similar scores from the same person
- avoids memory/practice effects in repeated measures designs
if the versions correlate highly, they are likely to be equivalent measures
what is inter item reliability
different items used in the test to measure the same construct - scores are averaged to give an overall measure of the whole construct, ie a personality trait
Cronbach's alpha - indicates internal consistency by averaging across all the ways of splitting the items
how do you know if something has high reliability
smaller difference between measured and true scores (reduction in random error)
cronbachs alpha
how well groups of items correlate with each other
larger number of items = higher alpha
minimum 0.7, preferably 0.8
check for each independent subscale within the study + the effect if an item is deleted
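A minimal sketch of Cronbach's alpha computed directly from its standard formula, alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores); the data matrix here is hypothetical:

```python
import numpy as np

def cronbach_alpha(items):
    """items: rows = respondents, columns = items on one subscale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses of five people to a four-item subscale
scores = [[3, 4, 3, 4],
          [2, 2, 3, 2],
          [5, 4, 5, 5],
          [1, 2, 1, 2],
          [4, 4, 4, 3]]
print(f"alpha = {cronbach_alpha(scores):.2f}")  # aim for at least 0.7, ideally 0.8
```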
what is inter rater reliability
are the people conducting the test being reliable - assessed with Pearson's r
training and instructions to testers must be clear so they measure the same behaviour and agree - reduces subjectivity
can also be assessed over time, ie at intervals
measuring inter rater reliability
Pearson's r
3+ raters can be tested with Cronbach's alpha
non-parametric = Kendall's (rank-based)
Cohen's kappa - raters may guess on at least some variables due to uncertainty - corrects for chance agreement: kappa = (p_o - p_e) / (1 - p_e), where p_o = observed agreement and p_e = chance agreement
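A minimal sketch of Cohen's kappa for two raters coding the same behaviours; the category labels and codes are hypothetical, and scikit-learn's cohen_kappa_score is used rather than computing p_o and p_e by hand:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical categorical codes from two raters for the same eight behaviours
rater_a = ["aggr", "neutral", "neutral", "friendly", "aggr", "friendly", "neutral", "friendly"]
rater_b = ["aggr", "neutral", "friendly", "friendly", "aggr", "friendly", "neutral", "neutral"]

# kappa = (observed agreement - chance agreement) / (1 - chance agreement)
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"kappa = {kappa:.2f}")
```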
reliability to check for in questionnaires
inter item - Cronbach's alpha