SELF REPORTS Flashcards
Self report
Asking a ps about their thoughts and behaviour and recording their answers.
examples of self reports
-questionnaires
-diary entries
-interviews
-psychometrics
Questionnaire
A series of questions in written form
Interview
A series of questions given verbally, face to face, between an interviewer and interviewee
Open questions
Allows the ps to answer however they want
Closed questions
Provide a limited set of answers that the ps must choose from.
Qualitative data
Non-numerical, rich in detail, usually textual or verbal; provides descriptions.
Quantitative data
Numerical; measurements of quantity, amount, or how often something has occurred.
types of closed questions
-fixed choice
-checklist
-ranking
-likert scale
-semantic differential scale
Fixed choice question
These are phrased so that the respondent has to make a fixed choice answer, usually ‘yes’ or ‘no’.
Checklist questions
Ps are given a list of options and told to choose as many as apply. (e.g. “tick all that apply”)
Ranking questions
Ps are instructed to put a list of options into order. (e.g. “rank from 1-10, 10 being most exciting”)
Likert scale questions
Ps indicate on a scale how much they agree with a statement. (strongly agree, agree, disagree etc…)
Semantic differential questions
Indicate where you stand on a scale between 2 contrasting adjectives. (e.g. circle a number: 1 = bad etc… 10 = great)
Rating scale questions
Respondents are asked to give a number to represent their views. (e.g “on a scale of 1-10…”)
Interview
Ps responds verbally to questions from researcher
Structured interview
Predetermined questions with fixed closed questions
Semi structured interview
Guidelines on which questions to ask; contains open & closed questions; timing & phrasing determined by interviewer
Unstructured interview
Topic of discussion is set but there are no fixed questions; all questions are open.
Strengths of a structured interview
-easily repeated
-standardised for all ps
-easier to analyse
-answers more predictable
Weaknesses of a structured interview
-social desirability bias
-ps may give answers they believe are socially acceptable, not truthful
Strengths of a semi-structured interview
-more detailed info obtained
-subsequent questions specifically shaped to the ps
Weaknesses of a semi-structured interview
-more affected by interviewer bias
-probing questions must be devised on the spot
-risk of leading questions
Strengths of an unstructured interview
-potential to gather rich & detailed info from each ps
-conversational nature best suited to discussing sensitive/complex issues, as ps are more relaxed.
Weaknesses of an unstructured interview
-lots of time and expense involved when training interviewers
-time consuming task to analyse and interpret when detailed.
Social desirability
Answering questions untruthfully so as to appear more acceptable to society
Unclear questions
poorly written questions that may receive answers that do not reflect the behaviour or thoughts of the ps.
Retrospective data
Uses existing data that have been recorded for reasons other than research. Human memory is often poor, so care is needed with self reports that rely on ps recalling thoughts and behaviours.
Leading questions
a question that suggests a particular answer is correct.
Psychometric questions
series of standardised closed questions to measure mental characteristics such as IQ, emotional intelligence, personality traits etc…
Population validity
refers to whether you can reasonably generalise the findings from your sample to target population.
Ecological validity
How well the results apply to real life situations and environments.
Internal validity
Does the PROCEDURE measure/test what it says it does?
External validity
Do the RESULTS apply to real life? (ecological, population)
Reliability
A measure/procedure is reliable if it gets consistent results.
Internal reliability
how consistent a measure is within ITSELF. E.g-Are the questions in an interview testing the same thing?
External reliability
how consistent a measure is OVER TIME.
Test-retest (tests EXternal reliability)
Testing the same individual with the same measures over a period of time. If results are the same over time then the measure has test-retest reliability.
Split half (tests INternal reliability)
Splitting the results of a test in two and seeing how consistent they are. If both halves have similar scores then the test has split-half reliability.
How can validity be improved?
-Remove leading questions
-Remove unclear questions
-Add open questions w qualitative data
-ensure answers will be anonymous and confidential
How can reliability be improved?
-Train interviewers so they are standardised
-Provide standardised questions
-Add closed questions with quantifiable data
-Use split-half / test-retest methods.
Internal validity
The degree to which an observed effect is due to experimental manipulation rather than other factors like extraneous variables.
External validity
The extent to which the results of an experiment can be generalised from the set of conditions created by the experimenter to other environmental conditions such as real life.
Factors affecting validity
-Demand characteristics
-Social desirability bias
-Researcher/experimenter bias
How can validity be improved?
If concurrent or construct validity is low, questions on tests/questionnaires can be changed. By removing suspect questions and checking whether the correlation with the existing measure improves, researchers can identify which questions are irrelevant and should be discarded.
Face validity
Whether items on a test look like they are assessing what researcher intended to assess (see if it is obviously related)
Concurrent Validity
Can be established by comparing performance on a new test with a previously validated one on the same topic. If the new test has a similar outcome then concurrent validity is demonstrated for the NEW test.
Construct Validity
Refers to how well a test/tool measures the construct it was designed to measure.