W4 - Survey research Flashcards
What are the 2 purposes of surveys?
- Information gathering
  - Exploratory
  - Descriptive
- Theory testing & building
  - Explanatory
  - Predictive
What are the processes to develop questionnaires?
- When to design a new questionnaire:
  - No established tool exists to measure the construct
  - Reliability/validity of an established measure is in doubt
  - Avoid jangles: giving different labels to the same construct
- Piloting: test on a small group of people
  - Identify problematic items and revise
  - Ask respondents for feedback
- General design principles: short, readable, appropriate response options
What needs to be considered when designing surveys?
- Instructions: clear and easy-to-follow
- Order: can divide into sections (by topic or question types), easy/general -> specific
- Demographics: capture characteristics of the sample; include only relevant questions
- Types of questions: open (rich data, need an analysis strategy) & closed (unambiguous, clear response options)
- Clearly written questions
What should be avoided when writing good survey questions?
- Double-barrelled questions (two issues in one item) -> unclear
- Ambiguity -> multiple interpretations
- Negations (or double negatives) -> use a positive tone instead
- Emotive language -> leading Ps
- Jargon
- Response bias (social desirability effects)
- Response acquiescence (tendency to agree) -> mix positively and negatively worded items
What are the different formats of rating scale?
- Dichotomous (Yes/No) / multichotomous (multiple options to choose from)
- Likert scale (multi-point responses, ensure spacing of response options)
- Non-verbal scale (suitable for children, replace texts with images)
- Graphic rating scale (Ps can draw their own answer as a mark on the continuous line)
- Ranking scale (measure relative importance)
- Semantic differential scale (where opinion lies on a spectrum)
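A minimal sketch (my own illustration, not from the notes) of how verbal Likert responses could be coded numerically, including reverse-scoring a negatively worded item to counter acquiescence. The item wording, the 5-point mapping, and the `score` helper are assumptions for the example.

```python
# Hypothetical mapping from 5-point Likert anchors to numeric scores
LIKERT = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

def score(response: str, reverse: bool = False) -> int:
    """Map a verbal response to a number; flip it for negatively worded items."""
    value = LIKERT[response]
    return (len(LIKERT) + 1) - value if reverse else value

# One positively and one negatively worded item about the same construct (made up)
answers = [("I enjoy surveys", "Agree", False),
           ("I find surveys tedious", "Disagree", True)]
total = sum(score(resp, rev) for _, resp, rev in answers)
print(total)  # 8: both responses point the same way after reverse-scoring
```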
What are the ways to increase response rate?
- Keep questionnaire short
- Keep questionnaire simple & clear
- No extra cost to Ps
- Send reminders
- Offer an incentive (compensation)
How to assess quality of questionnaire?
Psychometrics: the science of measuring psychological constructs (e.g. personality, cognitive ability tests) -> validity can sometimes be questionable
Criteria:
1. Temporal consistency: does it give the same results when repeated under the same conditions (test-retest)?
2. Internal consistency: do the items hang together and measure the same construct (e.g. Cronbach's alpha)?
3. Construct validity - assessed by:
- Convergent: correlates with tests of related constructs
- Discriminant: doesn’t correlate with tests of unrelated constructs
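A minimal sketch (not from the notes) of how two of these criteria could be checked numerically. The response data, the retest scores, and the `cronbach_alpha` helper are made-up assumptions for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency: items is an (n_respondents, n_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Fake responses from 6 people to a 4-item Likert scale (1-5)
scale = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
])
print(f"Internal consistency (Cronbach's alpha): {cronbach_alpha(scale):.2f}")

# Temporal consistency (test-retest): correlate total scores at time 1 and time 2
time1 = scale.sum(axis=1)
time2 = time1 + np.array([0, 1, -1, 0, 1, 0])  # fake retest scores
r = np.corrcoef(time1, time2)[0, 1]
print(f"Temporal consistency (test-retest r): {r:.2f}")
```

Convergent and discriminant validity could be checked the same way, by correlating total scores with scores on measures of related and unrelated constructs respectively.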