Validity and Reliability Flashcards
What is a pilot study?
(Why do we do this?)
A pilot study is a small-scale version of an investigation that takes place before the real investigation is conducted. The aim is to check that the procedures, materials, measuring scales, etc. work, and to allow the researcher to make changes or modifications if necessary
(The whole thing is the definition)
What is a case study?
An in-depth study, using a range of methods, of one person or a small group
What are the advantages of using case studies?
Rich data - researchers get the ability to study rare phenomena in a lot of detail
Unique cases challenge existing ideas and theories and suggest directions for future research
What are the disadvantages of using case studies?
Causal relationships - cause and effect can’t be determined
Generalisability is limited
Ethics - informed consent
Give an example of a famous case study
Little Hans (Freud)
Are case studies usually reliable?
Yes
Why aren’t Freud’s case studies reliable?
They aren’t scientific
Why are case studies reliable?
Reliability is increased through the process of replication (repeating the case study method to see if similar findings emerge)
Often case studies are done to see if further research is merited
Define reliability?
The overall consistency of a measure
What is internal reliability?
The extent to which a test is consistent within itself (the test is testing what it is meant to be testing)
What is external reliability?
The ability of a test to give a consistent measure from one use to another (i.e. give the same results)
For an experiment to be reliable what needs to be the case?
It gives similar results whenever it is done
The study must have standardised procedures
What are the different methods used to assess reliability?
Inter rater reliability
Split half method
Test retest method
Inter observer reliability
Explain the method of assessing reliability: Inter rater reliability?
This is when two or more researchers carry out the same observation or rating independently; the measure is reliable if they agree on 80% or more of the behaviours recorded
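The 80% criterion is just arithmetic on the two observers' records. A minimal sketch in Python (the behaviour categories and tallies below are hypothetical):

```python
# Percent agreement between two observers who coded the same ten
# observation intervals. Categories and data are hypothetical.
def percent_agreement(rater_a, rater_b):
    """Proportion of intervals where both observers recorded
    the same behaviour category."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

a = ["play", "play", "rest", "feed", "play", "rest", "feed", "play", "rest", "play"]
b = ["play", "rest", "rest", "feed", "play", "rest", "feed", "play", "rest", "play"]
agreement = percent_agreement(a, b)
print(agreement)          # 0.9
print(agreement >= 0.80)  # True: meets the 80% criterion
```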
Explain the method of assessing reliability: split half method?
The test is split into two halves containing similar questions, and participants’ scores on the two halves are compared. If the answers match up, the test is internally reliable; mismatches can also reveal social desirability bias
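A sketch of the split-half idea, assuming a six-item questionnaire with hypothetical scores: each participant's total on the odd-numbered items is compared with their total on the even-numbered items.

```python
# Split-half check on hypothetical questionnaire data. Each row is one
# participant's scores on six items; we total the odd-numbered and
# even-numbered items and compare the two halves.
participants = [
    [3, 4, 3, 4, 5, 4],
    [2, 2, 1, 2, 2, 3],
    [5, 4, 5, 5, 4, 5],
    [3, 3, 4, 3, 3, 2],
]
for scores in participants:
    odd_half = sum(scores[0::2])   # items 1, 3, 5
    even_half = sum(scores[1::2])  # items 2, 4, 6
    # closely matching halves suggest internal reliability
    print(odd_half, even_half)
```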
Explain the method of assessing reliability: test retest method
This is when the same test is given to the same participants again, a month or so after the initial test; similar scores on both occasions indicate reliability
What is internal validity?
A study is high in internal validity when the study measures what it claims to measure
What is external validity?
The extent to which the results of a study can be generalised to other people and settings (this includes ecological validity)
In 1 word describe reliability and then validity?
Reliability = Consistency
Validity = Accuracy
What are the different ways of assessing validity?
How should they be thought of?
In the form x validity
Face
Concurrent
Predictive
Temporal
Content
They should be thought of as: a study has face validity
Explain face validity?
A test has face validity if, on the face of it, it appears to measure what it claims to measure
Explain concurrent validity?
A test is valid when results from a new test match the results of a previously well-established test
What is predictive validity?
The degree to which test scores accurately predict scores on a criterion
E.g. a diagnosis is valid if it leads to successful treatment
What is temporal validity?
This assesses whether past findings are still relevant today
What is content validity?
A test has content validity if experts in the field have checked that its content covers what it claims to measure
What is population validity?
A study has population validity if you can reasonably generalise your samples to the general population
How can you increase the reliability of questionnaires?
Use the test - retest method
The two sets of scores should have a correlation that exceeds +.80
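The +.80 criterion is usually checked with a Pearson correlation between the two sets of scores. A minimal sketch (the participants' scores are hypothetical):

```python
# Test-retest check: the same (hypothetical) participants complete the
# questionnaire twice, a month apart, and the two score sets are
# correlated. r above +.80 is taken as evidence of reliability.
import statistics

def pearson_r(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

test1 = [12, 18, 9, 22, 15, 20]   # first administration
test2 = [13, 17, 10, 21, 14, 19]  # one month later
r = pearson_r(test1, test2)
print(round(r, 2), r > 0.80)      # 0.99 True
```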
How do you increase the reliability of interviews?
Making sure the interviewer doesn’t ask leading questions and that the questions are clear
Using a structured interview
How do you increase the reliability of experiments?
Precise replication of a particular method
How do you increase the reliability of observations?
Making sure the categories are properly operationalised (measurable and self-evident)
Categories should not overlap
Explain the method of testing reliability: inter observer reliability
An observation should give similar results when carried out by two or more researchers in order to be reliable
What is the difference between inter rater and inter observer reliability?
Inter-observer reliability is used for observations; inter-rater reliability is used for everything else (e.g. rating scales)