Self Reports: Assessing And Dealing With Validity Flashcards
What might stop a questionnaire/interview from accurately finding out what it sets out to find out
- people don’t always know how they actually think/feel
- population validity is often low in questionnaires (only a specific type of person tends to complete them)
- interviewer bias - the interviewer’s expectations might influence the interviewee’s answers
- leading Qs
- forced-choice Qs with limited options may make participants answer in a way that doesn’t reflect how they actually feel
- interviewer bias when interpreting answers to open Qs
- ambiguous answers
- interviewer effect - social desirability bias + participants being influenced by the gender/appearance of the interviewer
What are some of the solutions to the problems of questionnaires/interviews
- make Qs forced choice
- do an interview instead
- do a questionnaire instead
- bring in an independent, unbiased interviewer
- reword problematic Qs
- remove problematic Qs
- make Qs open-ended
What are the 3 most appropriate types of validity related to self-reports
- face validity
- content validity
- concurrent validity
What is face validity
- does the self-report look like it’s measuring what the researcher intended it to
What is content validity
- does the self-report measure what it was intended to measure (assessed by an expert in the field)
What is concurrent validity
- compares performance on the self-report with other well-established, validated measures; if they produce similar outcomes, the self-report is valid
How could you assess how valid something is using face validity
The researcher looks at the questionnaire and checks whether the Qs look like they are going to measure what they say they will
How could you assess how valid something is using content validity
- the researcher checks that the Qs are relevant (and not off topic), then gets an expert to check whether the Qs are valid
How could you assess how valid something is using concurrent validity
Run a pilot study: give participants both your questionnaire and another, well-established published questionnaire, and see whether the two produce similar results (see the sketch below)
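
A minimal sketch of that check, assuming each questionnaire yields one numeric total score per participant; all scores below are hypothetical, and scipy is one way to compute the correlation:

```python
# Concurrent validity sketch: each participant completes both the new
# questionnaire and an established, validated one (hypothetical scores).
from scipy.stats import pearsonr

new_scores = [14, 22, 9, 18, 25, 12]
established_scores = [15, 20, 10, 19, 24, 13]

# Similar outcomes (a strong positive correlation) suggest the new
# questionnaire has concurrent validity.
r, _ = pearsonr(new_scores, established_scores)
print(f"concurrent validity: r = {r:.2f}")
```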
Why might a questionnaire not be reliable
- people won’t always interpret scales the same way
- people giving socially desirable answers will lower consistency and reliability
Why might an interview not be reliable
- interviewer bias: direct (through tone/gestures/leading Qs) and indirect (types of Qs used)
- semi-structured interviews are difficult to repeat because the Qs are different each time
- differences in the style/gender/personality of the interviewer will produce different answers
How can you assess if a self-report has high or low internal reliability
- Split-half method
1. Randomly select half the test items and place them on form A; place the other half on form B
2. You now have 2 forms of the same test
3. To have good reliability, the scores on the two forms should strongly agree - a correlation of at least +0.8 is a common benchmark (see the sketch below)
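
A minimal sketch of the split-half method, assuming a 10-item questionnaire scored 1-5 (the response data below are hypothetical):

```python
# Split-half method sketch (hypothetical data):
# a 10-item questionnaire scored 1-5, answered by 6 participants.
import random
from scipy.stats import pearsonr

responses = [  # one row per participant, one column per item
    [4, 5, 3, 4, 5, 4, 3, 4, 5, 4],
    [2, 1, 2, 3, 2, 2, 1, 2, 3, 2],
    [5, 4, 5, 5, 4, 5, 5, 4, 5, 5],
    [3, 3, 2, 3, 3, 3, 2, 3, 3, 3],
    [1, 2, 1, 1, 2, 1, 2, 1, 1, 2],
    [4, 4, 5, 4, 4, 5, 4, 4, 4, 5],
]

# 1. Randomly split the items into form A and form B.
items = list(range(10))
random.shuffle(items)
form_a, form_b = items[:5], items[5:]

# 2. Score each participant on both forms of the same test.
scores_a = [sum(row[i] for i in form_a) for row in responses]
scores_b = [sum(row[i] for i in form_b) for row in responses]

# 3. Correlate the two sets of scores; r of at least +0.8 is a
#    common benchmark for good internal reliability.
r, _ = pearsonr(scores_a, scores_b)
print(f"split-half correlation: r = {r:.2f}")
```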
How can you deal with a self-report that has low internal reliability
Remove/change problematic Qs
How to assess if a self-report has high or low external reliability
- Test-retest method
1. Give the same questionnaire/interview to the same person on 2 separate occasions
2. Leave a gap long enough for them to forget their original answers
3. If both occasions yield the same results, the self-report is reliable (see the sketch below)
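
A minimal sketch of the test-retest check, assuming each participant's total score is recorded on both occasions (hypothetical data):

```python
# Test-retest method sketch: each participant's total questionnaire
# score on two occasions, separated by a gap long enough for them
# to forget their original answers (hypothetical scores).
from scipy.stats import pearsonr

time1 = [42, 31, 48, 35, 22, 45]
time2 = [40, 33, 47, 36, 24, 44]

# A strong positive correlation between the two occasions
# indicates good external reliability.
r, _ = pearsonr(time1, time2)
print(f"test-retest correlation: r = {r:.2f}")
```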
How to deal with a self-report that has low external reliability
- Rewrite problematic Qs
- Train interviewer