Lesson 3: Reliability Flashcards
What is meant by reliability?
Reliability refers to the consistency of a research study or a measuring test.
What is meant by INTERNAL reliability?
Internal reliability refers to the extent to which a measure is consistent within
itself.
Give an example of INTERNAL reliability
Whether the different questions in a questionnaire all measure the same behaviour or attitude.
What method is used to assess internal reliability?
The split-half method
What does the split-half method measure?
It measures the extent to which all parts of the test contribute equally to what is being measured
How is the split-half method conducted?
1) The test is split in half (e.g. first half and second half, or by odd- and even-numbered items).
2) The same participants complete both halves.
3) If the two halves of the test produce similar results, this suggests the test has INTERNAL RELIABILITY.
What is EXTERNAL reliability?
The extent to which a measure is consistent over time.
What method is used to assess EXTERNAL reliability?
Test-retest
How is a test-retest conducted?
Test-retest involves giving participants the same test on two separate
occasions.
If the same or similar results are obtained then external reliability
is established
How would researchers use the test-retest method to check whether a sleep questionnaire was a reliable way of measuring sleep quality?
If researchers wanted to check if a sleep questionnaire was a reliable measure of sleep quality:
1) The same participants would complete the sleep questionnaire on more than one occasion.
2) Each participant’s scores from the first occasion are correlated with their scores from the later occasion, and shown on a scattergraph with first-test scores plotted on one axis and second-test scores on the other.
3) The strength of the correlation should then be assessed using either a Spearman’s rho test or a Pearson’s r test.
4) The degree of reliability is then determined by comparing the correlation coefficient with the critical value in a statistical table
– there should be a STRONG POSITIVE correlation between the two sets of data.
Researchers generally accept a correlation coefficient of +0.8 or above between the test and retest.
What is inter-observer reliability (inter-rater reliability)?
Measures the degree of agreement between different people observing or assessing the same thing
Why is inter-observer reliability important?
It helps avoid observer bias, because one observer's subjective interpretations can be checked against another's.
How would a researcher carry out an observation of how people spend their time at the gym?
1) They could use two observers (inter-observer reliability) who discuss and agree beforehand their interpretation of the behavioural categories.
2) Then a statistical comparison of the data from both observers is carried out (e.g. correlating their tallies for each behavioural category).