Research Methods part 2 - Key terms Flashcards
Reliability
The extent to which the findings of a piece of research are consistent and replicable.
Test re-test reliability
The extent to which the same results will be achieved every time the test is carried out (applies to all research methods).
Experiments – if they have standardised instructions, procedures & controls, we can replicate them and should get the same results each time.
Questionnaires – if we gave the questionnaire out again, we would expect a ppt to give the same answers, so we would consider it reliable.
Inter-rater reliability
The extent to which data is interpreted in a consistent way. This applies when data has been collected by self-report and requires interpretation.
eg, a researcher who is using a questionnaire or interview with open questions may find that the same answers could be interpreted in different ways, producing low reliability. If these differences arose between different researchers, this would be an inter-rater reliability problem.
How to achieve inter-rater reliability
Best ways to increase inter-rater reliability:
* Clear and objective methods of collecting data
* Making rating scales as objective as possible to ensure that all researchers are using the same criteria to assess the behaviour and applying the scales in the same way.
Intra-rater reliability
Whether one researcher is consistent over time.
Inter-observer reliability
The extent to which data is interpreted in a consistent way.
* Applies when data has been collected by observation and requires interpretation.
eg. if, in an observation, researchers gave different interpretations of the same actions, this would be **low inter-observer reliability**.
How to achieve inter-observer reliability
- Have 2 or more observers
- Train them to carry out the observation in the same way to ensure that behavioural categories are clearly operationalised.
- Observe the same behaviour
- Compare results
- If there is a correlation of 0.8 or above between the observers' results (high similarity), then it's considered a reliable observation (see the sketch below this list).
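
A minimal sketch (not part of the original flashcards) of how that 0.8 check could be run in Python, assuming two observers' tallies per behavioural category have already been collected; the categories and counts below are invented for illustration.

```python
# A minimal sketch: checking inter-observer reliability with Pearson's r.
# The behavioural category tallies below are invented for illustration.
import numpy as np

# Tallies per operationalised behavioural category from two observers
# watching the same behaviour.
observer_a = np.array([12, 7, 3, 9, 5])
observer_b = np.array([11, 8, 4, 9, 6])

# Pearson correlation between the two observers' results
r = np.corrcoef(observer_a, observer_b)[0, 1]

print(f"Inter-observer correlation: r = {r:.2f}")
print("Reliable observation" if r >= 0.8 else "Low inter-observer reliability")
```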
Validity
The extent to which the findings of a study are ‘true’ and ‘accurate’.
Internal validity
Are we measuring what we set out to measure?
* If there are EVs, then validity is lowered - because we are no longer testing the effect of the IV on the DV.
External validity
Can we accurately generalise? Are the results true beyond the study?
Face validity
Is a subjective assessment of whether or not a test appears to measure the behaviour it claims to.
eg. a study measuring aggression amongst teenagers by counting the number of times they swear at each other would have low face validity, because swearing is not always linked to aggression, especially in teenagers.
Content validity
Is an objective assessment of the items in a test to establish whether or not they all relate to and measure the behaviour in question. Experts assess whether the test is measuring what it set out to do.
eg. A driving test that assesses both theoretical knowledge of traffic rules and practical driving skills: This ensures the test covers all necessary components to be a safe driver.
Concurrent validity
Is a comparison between two tests of a particular behaviour. One test has already been established as a valid measure of the behaviour, and the other test is the new one. If the results from both significantly correlate, then the new test is valid.
eg. Comparing a new IQ test to a well-established IQ test. The ppts should score similarly on both to show that the new one is a valid measure (see the sketch below).
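
A minimal sketch (not from the flashcards) of how concurrent validity could be checked, assuming each ppt has a score on both the established test and the new test; the scores are invented, and scipy's pearsonr gives both the correlation and its significance.

```python
# A minimal sketch: checking concurrent validity by correlating two tests.
# Scores are invented for illustration; each pair is one ppt's result on
# the established IQ test and on the new IQ test.
from scipy.stats import pearsonr

established = [102, 98, 115, 121, 87, 109, 95, 130]
new_test    = [100, 101, 112, 118, 90, 107, 97, 126]

r, p = pearsonr(established, new_test)

print(f"r = {r:.2f}, p = {p:.3f}")
if r > 0 and p < 0.05:
    print("Scores significantly correlate: evidence of concurrent validity")
else:
    print("No significant correlation: the new test may not be valid")
```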
Predictive validity
Refers to how well a test predicts future behaviour. eg. do your mock exams predict your actual results? Are they valid?
Population validity (a type of external validity)
How well can we generalise from the sample to the population we want it to represent? Is the behaviour true for others? Is our sample representative?