Reliability and Validity Flashcards
What is a factor?
A set of items that should all measure the same underlying construct - together they form a scale
What does reliability refer to?
Does the questionnaire produce the same results when completed under the same conditions - degree of consistency
What does validity refer to?
Does the questionnaire measure what it’s meant to - accurately measuring the construct
What is a correlation?
A standardised measure of the degree to which two variables co-vary
What do correlation coefficients vary between?
+1 and -1
If X has a higher value when Y has a higher value = positive correlation
X has higher values when Y has lower values = negative correlation
1 = perfect
0 = none at all
What does a correlation suggest?
That there may be a common factor explaining why the variables co-vary
For reliability, correlation is used to check whether the same underlying characteristic or trait is being measured in different ways by different measures
Correlation does not imply causation
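As a concrete illustration (a minimal sketch with made-up numbers, not from the flashcards), the correlation coefficient is the covariance of two variables divided by the product of their standard deviations:

```python
import numpy as np

def pearson_r(x, y):
    """Standardised measure of how two variables co-vary; ranges from -1 to +1."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Covariance divided by the product of the two standard deviations
    return np.cov(x, y, ddof=1)[0, 1] / (x.std(ddof=1) * y.std(ddof=1))

# Y rises exactly as X rises -> perfect positive correlation (+1)
print(pearson_r([1, 2, 3, 4], [10, 20, 30, 40]))
# Y falls exactly as X rises -> perfect negative correlation (-1)
print(pearson_r([1, 2, 3, 4], [40, 30, 20, 10]))
```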
What are the types of reliability?
Reliability across time - involves administering the same scale twice - test-retest or alternate forms
Internal consistency - involves a single administration of the scale - e.g. split-half, Cronbach's alpha, and inter-scorer reliability
What is test-retest reliability?
Test-retest reliability:
The consistency of your measurement when it is used under the same conditions with the same participants
Procedure:
Administer your scale at two separate times for each participant
Compute the correlation between the two scores
Assume there is no change in the underlying condition/trait between test 1 and 2
Expect people high in e.g. empathy will score high both times
Doesn't work for mood questionnaires, as mood changes from day to day
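The procedure above can be sketched in a few lines; the empathy scores below are invented for illustration:

```python
import numpy as np

# Hypothetical empathy scores for six participants, measured twice
# under the same conditions (assume no change in the trait between tests)
time1 = np.array([28, 35, 22, 40, 31, 26])
time2 = np.array([30, 34, 24, 39, 29, 27])

# Test-retest reliability = correlation between the two administrations;
# people high in empathy at time 1 should also score high at time 2
r = np.corrcoef(time1, time2)[0, 1]
print(round(r, 2))  # high r = consistent measurement
```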
What is alternate forms reliability?
Alternate forms reliability:
Change the wording of the questions in a functionally equivalent form
Or simply change the order of the questions between the first and second survey of the same respondents
Calculate the correlation between the results obtained in two surveys
i.e. between the initial and re-worded/ordered questions
What is split half reliability?
Split-half reliability
Group questions in a questionnaire that measure the same concept
For example, split a 6-item scale/factor into two sets of three questions
Calculate the correlation between those two groups
BUT: the reliability estimate will depend on exactly how you split the items!
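A minimal sketch with invented Likert data (8 participants, 6 items), showing that the estimate depends on the split you choose:

```python
import numpy as np

# Hypothetical responses: 8 participants x 6 items on a 1-5 Likert scale
data = np.array([
    [4, 5, 4, 5, 4, 5],
    [2, 1, 2, 1, 2, 2],
    [3, 3, 4, 3, 3, 4],
    [5, 4, 5, 5, 4, 4],
    [1, 2, 1, 2, 1, 1],
    [4, 4, 3, 4, 5, 4],
    [2, 2, 2, 3, 2, 3],
    [3, 4, 3, 3, 4, 3],
])

# Split 1: first three items vs last three items
r_firstlast = np.corrcoef(data[:, :3].sum(axis=1), data[:, 3:].sum(axis=1))[0, 1]
# Split 2: odd items vs even items - typically a (slightly) different estimate
r_oddeven = np.corrcoef(data[:, 0::2].sum(axis=1), data[:, 1::2].sum(axis=1))[0, 1]
print(r_firstlast, r_oddeven)
```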
What is Cronbach's alpha?
Splits the questions on your scale in every possible way and computes the correlation for each split
The average of these values is equivalent to Cronbach’s α
Interpret like r: the closer to 1, the higher the reliability estimate of your instrument
To retain a scale, α ≥ .7 (acceptable reliability)
For good reliability α ≥ .8
Alpha if item removed:
Calculates alpha as above but leaving out each item one at a time
If alpha improves, the scale is more reliable without that item
Trying to identify the weakest item and remove it
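The calculation that statistics software performs can be sketched by hand. The version below uses the standard item-variance formula for α (the quantity the split-half averaging described above is estimating); the data are invented:

```python
import numpy as np

def cronbach_alpha(data):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    data = np.asarray(data, dtype=float)
    k = data.shape[1]
    item_vars = data.var(axis=0, ddof=1).sum()
    total_var = data.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def alpha_if_item_removed(data):
    """Recompute alpha with each item left out in turn, to spot the weakest link."""
    data = np.asarray(data, dtype=float)
    return [cronbach_alpha(np.delete(data, i, axis=1))
            for i in range(data.shape[1])]

# Hypothetical responses: 8 participants x 6 items (1-5 Likert)
data = [
    [4, 5, 4, 5, 4, 5],
    [2, 1, 2, 1, 2, 2],
    [3, 3, 4, 3, 3, 4],
    [5, 4, 5, 5, 4, 4],
    [1, 2, 1, 2, 1, 1],
    [4, 4, 3, 4, 5, 4],
    [2, 2, 2, 3, 2, 3],
    [3, 4, 3, 3, 4, 3],
]
alpha = cronbach_alpha(data)           # >= .7 acceptable, >= .8 good
removed = alpha_if_item_removed(data)  # if any value beats alpha, drop that item
```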
What is the corrected item total correlation?
For the IRI example: the correlation between the score on each item and the total score on the rest of the scale (i.e. the scale total excluding that item) - if < .3, consider removing the item
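A sketch of the corrected version, where each item is correlated with the total of the remaining items so it is not correlated with itself; the data are invented:

```python
import numpy as np

def corrected_item_total(data):
    """Correlate each item with the sum of the *other* items on the scale."""
    data = np.asarray(data, dtype=float)
    totals = data.sum(axis=1)
    # Subtracting the item's own score from the total is the "correction"
    return [np.corrcoef(data[:, i], totals - data[:, i])[0, 1]
            for i in range(data.shape[1])]

# Hypothetical responses: 8 participants x 6 items (1-5 Likert)
data = [
    [4, 5, 4, 5, 4, 5],
    [2, 1, 2, 1, 2, 2],
    [3, 3, 4, 3, 3, 4],
    [5, 4, 5, 5, 4, 4],
    [1, 2, 1, 2, 1, 1],
    [4, 4, 3, 4, 5, 4],
    [2, 2, 2, 3, 2, 3],
    [3, 4, 3, 3, 4, 3],
]
itcs = corrected_item_total(data)
flagged = [i for i, r in enumerate(itcs) if r < .3]  # candidates for removal
```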
What must you do before you run the reliability analysis?
Reverse code the items - needed when some items measure the same idea but in opposite directions
What does reverse scoring mean?
A low score becomes high and vice versa: 1 → 5, 2 → 4, 3 → 3, 4 → 2, 5 → 1
The computer will do it - subtract each score from (max score + 1)
If you forget, it will cause problems for alpha and you won't be able to interpret it
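The subtraction rule can be sketched directly (assuming a 1-5 Likert scale, so max score = 5):

```python
def reverse_code(score, max_score=5):
    """Reverse-score an item: (max score + 1) - score, so 1->5, 2->4, ..., 5->1."""
    return max_score + 1 - score

# Run this on reverse-worded items BEFORE the reliability analysis,
# otherwise alpha becomes uninterpretable
print([reverse_code(s) for s in [1, 2, 3, 4, 5]])  # [5, 4, 3, 2, 1]
```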
Which items do you select for the reliability analysis?
Items that are grouped together on a factor - these represent a scale that you have created to measure a particular aspect of your construct