Reliability Flashcards
what does reliability refer to?
how consistent a measuring device is
how could a particular measurement be described as being reliable?
if it is made twice and produces the same result
what are 4 ways of assessing reliability?
- test-retest
- inter-observer
- inter-rater
- inter-interviewer
what does test-retest reliability involve?
- give the same test / questionnaire to the same person (or people) on different occasions
- if the test / questionnaire is reliable, the results should be the same or similar each time it is given
- can also be applied to interviews
why must there be sufficient time between test and retest?
- to ensure the pt cannot recall their answers to the questions
- not so long that their attitudes / abilities have changed
what happens to the data after the retest?
- two sets of scores are correlated to make sure they are similar
- if the correlation is significant (and positive), the measuring instrument has good reliability (see the sketch below)
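Illustration only (not from the original cards): a minimal Python sketch of this step, assuming made-up scores for the same pts on the test and the retest, using scipy's Pearson correlation.

```python
from scipy.stats import pearsonr

# hypothetical scores for the same 6 pts on the test and the retest
test_scores   = [12, 18, 9, 22, 15, 17]
retest_scores = [11, 19, 10, 21, 14, 18]

# correlate the two sets of scores
r, p_value = pearsonr(test_scores, retest_scores)

# a significant, positive correlation suggests good test-retest reliability
print(f"r = {r:.2f}, p = {p_value:.3f}")
if r > 0 and p_value < 0.05:
    print("correlation is positive and significant -> good reliability")
```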
what is an issue of observational research?
- one observer’s interpretation of events may differ widely from someone else’s
- this introduces subjectivity, bias and unreliability into the data collection process
how many observers should be present in an investigation?
teams of at least two
how can inter-observer reliability be established?
- small-scale trial run (pilot study) of observation to check that observers are applying behavioural categories in the same way
- comparison may be reported at end of study
how should data be collected during an observation?
- observers should watch the same event or sequence of events
- record data independently
- data collected by the two observers should be correlated to assess its reliability
what is inter-rater reliability used for?
content analysis
what is inter-interviewer reliability used for?
interviews
how is reliability measured?
- using correlational analysis
- in test-retest and inter-observer reliability, the two sets of scores are correlated
- correlation coefficient should exceed +0.80 for the measure to be judged reliable (see the sketch below)
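Worked sketch of the +0.80 check (illustrative only, with made-up observer tallies): Pearson's r is the covariance of the two sets of scores divided by the product of their standard deviations, and the result is compared against the 0.80 threshold.

```python
import numpy as np

def pearson_r(x, y):
    # Pearson's r = covariance(x, y) / (sd of x * sd of y)
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.cov(x, y, ddof=1)[0, 1] / (x.std(ddof=1) * y.std(ddof=1))

# hypothetical tallies of the same behavioural category by two observers
observer_a = [5, 3, 8, 6, 4, 7]
observer_b = [4, 3, 9, 6, 5, 7]

r = pearson_r(observer_a, observer_b)
print(f"r = {r:+.2f}")
print("reliable" if r > 0.80 else "not reliable -> retrain observers / refine categories")
```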
how could a questionnaire produce low test-retest reliability?
questions could be:
- complex
- ambiguous
- interpreted differently by the same person on different occasions
how can the reliability of questionnaires be improved?
- remove or rewrite some questions
- replace open questions (with more room for misinterpretation) with closed alternatives which may be less ambiguous
how can the reliability of interviews be improved?
- use same interviewer each time
- if this is not possible or practical, all interviewers must be properly trained so that no single interviewer is asking questions that are too leading or ambiguous
- use a structured interview, as interviewer’s behaviour is more controlled by the fixed questions
how can the reliability of observations be improved?
ensure behavioural categories:
- are properly operationalised
- are measurable
- are less open to interpretation
- do not overlap
- cover all possible behaviours
what happens if behavioural categories are not operationalised well, are overlapping or absent?
different observers have to make their own judgements of what to record, ending up with differing and inconsistent records
what needs to be done if reliability for observations is low?
- observers may need further training in using the behavioural categories
- observers may wish to discuss their decisions with each other so they can apply their categories more consistently
what is the focus of reliability in experiments?
- standardised procedures
- procedures must be the same every time to compare the performance of different pts and to compare results from different studies