Lecture 16 - Questionnaire Design Flashcards
What is one advantage of open questions?
Richer, qualitative data that captures people's experiences
What is one disadvantage of open questions?
The responses are time-consuming and harder to analyse
What is a dichotomous scale?
A question with only two possible answers, usually ‘yes’ and ‘no’ or ‘true’ and ‘false’
What is the classical theory of error?
Described by P. Kline
Any observed score is composed of the TRUE score plus error
What is a split-half reliability test?
Items are split into two halves, usually randomly or arbitrarily
Then the scores of each half are correlated
A correlation of 0.8 or higher indicates adequate reliability
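The split-half procedure can be sketched in Python. The participant scores and the odd/even item split below are invented for illustration; this is a minimal sketch, not a standard library routine:

```python
import numpy as np

# Hypothetical data: 6 participants answering a 6-item scale (scores 1-5)
scores = np.array([
    [4, 5, 4, 5, 4, 5],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 4],
    [5, 4, 5, 5, 5, 4],
    [1, 2, 1, 1, 2, 1],
    [3, 4, 3, 4, 3, 3],
])

# Split items arbitrarily into odd- and even-numbered halves,
# then sum each half per participant
half_a = scores[:, ::2].sum(axis=1)
half_b = scores[:, 1::2].sum(axis=1)

# Correlate the two half-scores; >= 0.8 is taken as adequate reliability
r = np.corrcoef(half_a, half_b)[0, 1]
```

In practice the split is often done randomly rather than odd/even, and the raw split-half correlation is sometimes adjusted upwards because each half is shorter than the full test.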
What is a parallel forms reliability test?
A large pool of items is randomly divided into two tests, which are then given to the same participants
Then the correlation between the two forms is calculated
Difficult, as a large number of items needs to be generated
What is Cronbach’s Alpha?
A value that is mathematically equivalent to the average of all possible split-half estimates
It goes up to +1, and values of +0.7 or above indicate acceptable internal reliability
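One common way to compute alpha uses the item variances and the variance of the total scores. The formula and a sketch with invented response data (5 participants, 4 items):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented responses: 5 participants x 4 items on a 1-5 scale
data = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
])
alpha = cronbach_alpha(data)  # here well above the +0.7 threshold
```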
What is the Kuder-Richardson Formula (KR-20)?
It measures internal reliability for measures with dichotomous choices
Goes up to +1; scores of +0.7 or greater indicate acceptable internal reliability
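KR-20 has the same shape as Cronbach's alpha, but for 0/1 items it replaces each item variance with p*q, where p is the proportion answering the item one way and q = 1 - p. A sketch with invented true/false data:

```python
import numpy as np

def kr20(items: np.ndarray) -> float:
    """KR-20 = k/(k-1) * (1 - sum(p*q) / variance of total scores),
    where p is the proportion scoring 1 on each item and q = 1 - p."""
    k = items.shape[1]
    p = items.mean(axis=0)
    q = 1 - p
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - (p * q).sum() / total_var)

# Invented dichotomous responses (1 = yes/true): 6 participants x 5 items
data = np.array([
    [1, 1, 1, 1, 1],
    [1, 1, 1, 0, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 0, 0, 0],
    [1, 1, 1, 1, 0],
])
value = kr20(data)  # here above the +0.7 threshold
```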
How can we use test-retest to assess reliability?
Administer the test twice with an interval of time between
Correlate the scores from the two administrations; 0.7 or above means we can assume test-retest reliability
Can be influenced by practice effects, boredom effects, etc.
How can we test for inter-rater reliability?
Cohen’s Kappa - values up to +1. Used when there are two raters
Fleiss’ Kappa - an adaptation of the above for when there are more than 2 raters
They both measure agreement between the raters, not accuracy!!
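Cohen's kappa compares the observed proportion of agreement with the agreement expected by chance from each rater's own category frequencies. A sketch for the two-rater case, with invented 'y'/'n' codings:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """kappa = (p_o - p_e) / (1 - p_e): observed agreement p_o
    corrected for the chance-expected agreement p_e."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    # Chance agreement: product of each rater's marginal proportions per label
    p_e = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Invented codings of 10 responses by two raters
a = ['y', 'y', 'n', 'y', 'n', 'n', 'y', 'y', 'n', 'y']
b = ['y', 'y', 'n', 'y', 'n', 'y', 'y', 'y', 'n', 'n']
kappa = cohens_kappa(a, b)
```

Note that kappa can be noticeably lower than the raw percentage agreement, because it discounts the matches two raters would produce by chance alone.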
What is intra-rater reliability?
When the SAME rater does the SAME assessment on two or more occasions
The two sets of ratings are then correlated
Not ideal, as the rater is aware of their previous assessments and may be influenced by them
What are some things which can cause a lack of reliability during self-report?
Social desirability bias (SDB), guessing, ambiguous or leading questions, poor instructions, low response rate
What is faith validity?
Just a belief in the validity of something without any objective data!
What is face validity?
Whether a test looks like it measures the concept it intends to - usually experts will look at it and say whether or not they think it will measure X accurately
What is content validity?
The extent to which a measure represents ALL facets of the phenomenon being measured, e.g. jealous attitudes, jealous feelings, jealous behaviours