Exam Flashcards
The 3 general research methods
Evaluation: measuring performance/effectiveness, using objective indicators or subjective perceptions
Experimental: measures the effect of input variables on output variables (cause and effect)
Survey: sampling of individuals → inferences about the population
Principles of using survey methods
Theory-driven: measuring concepts that are part of social/economic theories
Purpose: exploration, description, explanation
Issue: accurately translating research goals into a questionnaire
Quantitative vs. qualitative approaches
Quantitative: objective approach
- Goal: measuring
- Based on a large data set (full population or a sample)
- Low validity, high reliability
- Less detailed, but representative of the whole population
Qualitative: subjective approach
- Goal: understanding
- Small data set, based on individuals’ experiences
- High validity, low reliability
- Deeper insight, but cannot infer about the whole population
Measurement issues
Validity: whether the research instrument actually measures what it is intended to measure
Reliability: whether results are consistent over time, can be replicated, and accurately represent the whole population
Units of analysis
individuals, groups, organisations
Micro- vs. macro level
Micro level: individual level, sample based
Macro level: aggregate level, concerning whole population
Ecological fallacy: applying macro-level findings to individuals
Reductionism: generalising micro-level observations to the whole population
Cross-sectional vs. longitudinal
Cross-sectional: point in time observations (short-term, current tendencies)
Longitudinal: repeated observations over a period of time; panel (same randomly selected respondents followed over time) or cohort (group sharing a characteristic, e.g. the same birth date/year)
Types of scales (non-comparative focus)
Continuous rating: graphic rating scale – respondent places a mark anywhere on a line between two endpoints (the line may carry labelled points)
Itemised rating: respondent chooses among a limited set of categories, each labelled individually – example: Likert scale
Scales and measurement levels
nominal (categories, e.g. gender), ordinal (rank order), interval (equal distances, no true zero, e.g. temperature in °C), ratio (true zero, e.g. income) – see the sketch below
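A minimal Python sketch (all variable names and values are made up for illustration) of why the measurement level matters: it determines which summary statistics are meaningful – mode for nominal, median for ordinal, mean only for interval/ratio data.

```python
from statistics import mode, median, mean

# Hypothetical survey variables, one per measurement level
gender  = ["f", "m", "f", "f", "m"]       # nominal: categories only
ranking = [1, 3, 2, 2, 1]                 # ordinal: rank order, distances unknown
temp_c  = [18.5, 21.0, 19.5, 22.0]        # interval: equal distances, no true zero
income  = [2100, 3400, 2800, 5200]        # ratio: true zero, ratios are meaningful

print("mode(gender)  =", mode(gender))    # frequency-based statistics only
print("median(rank)  =", median(ranking)) # order is meaningful, averaging is not
print("mean(temp_c)  =", mean(temp_c))    # equal intervals allow taking the mean
print("income max/min ratio =", max(income) / min(income))  # ratios need a true zero
```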
Simple measures vs. complex measures
Simple measures: use single indicators; sufficient for direct/indirect observables
Complex measures: use multiple indicators to increase validity/reliability; used for complex concepts/constructs
Index vs. scale (in context of complex measures)
Index: based on frequency/counting indicators – sum of the scores assigned to the individual variables
Scale: measures the intensity of indicators and recognises that some variables are more important/impactful for the research problem – see the sketch below
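A minimal sketch (indicator names and weights are hypothetical) contrasting the two: an index is a plain sum/count of the indicator scores, while a scale can be expressed as a weighted sum so that the more important indicators contribute more.

```python
# Hypothetical 0/1 indicators for one respondent
indicators = {"owns_home": 1, "has_savings": 0, "stable_income": 1}

# Index: simple count/sum of the indicator scores
index_score = sum(indicators.values())                            # 2

# Scale: weighted sum – assumed weights reflect differing importance
weights = {"owns_home": 1, "has_savings": 2, "stable_income": 3}
scale_score = sum(weights[k] * v for k, v in indicators.items())  # 1*1 + 2*0 + 3*1 = 4

print("index:", index_score, "| scale:", scale_score)
```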
Probability vs. non-probability sampling
Probability sampling: based on probability theory, random selection from a sampling frame, allows representative inference about the population (see the sketch below)
Non-probability sampling: not random, focus on a specific/homogeneous group
Used for “hard to reach” groups, low-frequency populations, or when no suitable sampling frame exists
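A minimal sketch of simple random (probability) sampling from a sampling frame, using only the Python standard library; the frame size, IDs and sample size are hypothetical.

```python
import random

# Hypothetical sampling frame: list of all population elements (e.g. customer IDs)
sampling_frame = [f"person_{i:03d}" for i in range(1, 501)]   # N = 500

random.seed(42)                                # fixed seed so the draw is reproducible
sample = random.sample(sampling_frame, k=50)   # simple random sample without replacement

# Every frame element had the same probability of selection: k / N
print("selection probability =", 50 / len(sampling_frame))
print(sample[:5])
```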
Sample frame (and error)
List of population elements from which a probability sample is drawn
Sampling frame error: discrepancy between the actual population and the population represented in the database/sampling frame
General guidelines for questionnaire construction
- Options/items must be mutually exclusive
- Questions/items should be clear
- Avoid double-barrelled questions (multiple parts in one)
- Avoid negative items/questions
- Use normal, understandable, non-technical language
- Respondents must be competent to answer (assume no specialist knowledge)
- Avoid biased questions (leading questions, social desirability bias)
Sources of errors
Sampling errors: sampling bias (selecting only easily available individuals), coverage error (frame must be representative of the population), unequal probability of selection (each element should have an equal chance)
Data collection errors: operationalisation error (translating a research objective into a question), measurement error (misunderstanding during the respondent’s cognitive answering process)