Chapter 5 Flashcards

1
Q

When to use surveys?

A
  • When you want to say something about a population, but cannot measure the whole population
  • When you are interested in quantitative descriptors
  • Personal measures
    • Subjective measures
    • When observation is not possible/feasible
2
Q

Survey research design decisions

A
  1. Operationalization of concepts
  2. Decide on survey mode
  3. Appearance of the questionnaire
  4. Data collection
3
Q
  1. Operationalization
A
  • Reduction of abstract concepts to render them measurable in a tangible way

4
Q

Steps involved in operationalization

A
  1. Come up with a definition of the construct you want to measure
  2. Think about the content of the measure, i.e. the instrument
  3. Decide on a response format, e.g. a 7-point Likert scale
  4. Assess the reliability and validity of the assessment scale
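
A hypothetical illustration (my own, not from the slides): construct = "job satisfaction"; definition = how content an employee is with their job; instrument = a multi-item scale covering pay, colleagues and tasks; response format = 7-point Likert scale; finally, check reliability (e.g. Cronbach's alpha) and validity.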
5
Q

When are single-item measures used?

A

When measuring a concrete, singular object/attribute

e.g. What is your marital status? What is your profession?

6
Q

Multi-item scales

A
  • Use ‘off-the-shelf’ scales
  • Develop your own scale

7
Q

Advantages and disadvantages of off-the-shelf scales

A

+ Known and “good” validity and reliability
+ Comparability of results
+ Low cost
+- Not tailored to your exact research need
- Requires translation if in different language (source of error)

8
Q

What to avoid in developing questions

A
  • Double-barreled questions
  • Ambiguous questions
  • Leading questions
  • Loaded questions
  • Double negatives
9
Q

Comparative scales (ranking scales)

A

Used to indicate preference between two or more items.

The resulting data are ordinal in nature.

10
Q

Paired comparison (comparative scales)

A

The respondent repeatedly picks between two options at a time. Used to determine preferences.
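
Note (standard result, not stated on the card): with n objects, a full paired-comparison design requires n(n−1)/2 comparisons, e.g. 5 brands → 5·4/2 = 10 pairs.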

11
Q

Rank ordering (comparative scales)

A

Enables respondents to rank objects (e.g. brands) in order of preference.

12
Q

Constant sum (comparative scales)

A

The respondent divides a fixed total among the objects, e.g. dividing 100 points among 5 brands.

13
Q

Non-comparative scales (rating scales)

A

Each object is scaled independently of the other objects in the study

14
Q

Continuous rating scale

A

Each object is given a score on a continuum.

E.g. rate the Bijenkorf on a scale of 0 to 100.

15
Q

Likert scale

A

Respondents agree/disagree on a 5- or 7-point scale, e.g. from "strongly disagree" to "strongly agree".

Treated as interval.

16
Q

Semantic differentials

A

Rating on a scale anchored by two bipolar adjectives, e.g. good/bad, modern/old-fashioned.

Treated as interval.

17
Q

Response categories for categorical scales (Nominal) should be…

A
  • Mutually exclusive: only 1 answer applies
  • Collectively exhaustive: the answer possibilities cover the realm of possible answers

18
Q

Survey modes

A

How the data are collected.

Interviewer-assisted vs self-administered

19
Q

Factors that play a role in deciding on survey mode

A
  • Measurement: interactivity, multi-media, interviewer presence or self-administration
  • Representation: coverage quality, sampling control, response rate
  • Economics: sample size, questionnaire size, speed, cost
20
Q

Mixed mode designs

A

• To trade off cost and error

e.g. using web surveys + mail surveys to senior citizens
(web: cheaper, mail: better coverage)

21
Q

Why pre-test your questionnaire

A
  • Catch errors and unclear wording
  • Discover sensitive topics
  • Check response categories
  • Optimize length
22
Q

How do you pretest your questionnaire

A
  • Pick 5-10 people from the target group
  • Let them “think out loud” + observe
  • Improve survey

Note: any testing is better than no testing

23
Q

Response rate

A

# of people who participated in the survey DIVIDED BY # of people sampled
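
A made-up illustration: if 500 people are sampled and 150 complete the survey, the response rate is 150 / 500 = 0.30, i.e. 30%.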

24
Q

How to increase response rate:

A

• Maximize rewards of participation
  - Show appreciation
  - Use interesting/friendly questionnaires
  - Offer tangible rewards
• Minimize the cost of participation
  - Minimize time and effort required
  - Minimize the chances of feeling threatened by questions
• Maximize trust
  - Ensure anonymity/confidentiality
  - Open lines of communication with the participant
  - Identify the research with a well-known, legitimate organization

25
Q

Validity

A

Does an instrument measure what it is supposed to measure?

26
Q

Reliability

A

Are the data accurate (free from measurement error) and consistent (from one occasion to another)?

27
Q

Measurement validity

A
  • Provide precedence (for off-the-shelf scales, refer to other studies that used the same scales)
  • Always provide sound logic to support that considerable conceptual overlap exists between the measurement/proxy and the construct
  • Be aware: single-item measures for abstract constructs = low validity. Multi-item is better!
28
Q

Proxy

A

An indirect measure of the desired construct, which is strongly correlated with the construct. Proxies are commonly used when direct measures are unobservable/unavailable.

E.g. body fat -> BMI
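
For reference (not on the card): BMI is computed as weight in kg divided by height in metres squared; it correlates with, but does not directly measure, body fat, which is what makes it a proxy.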

29
Q

Social desirability bias

A

Respondents may not always be willing to communicate their true response in case of sensitive issues

Topics such as: alcohol and tobacco, healthy eating, finances, taxes

30
Q

To minimise socially desirable responding:

A

• Deliberately lead and/or load the question to make the sensitive behaviour seem “normal”, e.g.:

  • “Everybody-does-it”
  • “Assume-the-behaviour”
  • “Authorities-recommend-it”
  • “Reasons-for-doing-it”

Note: in all other instances, avoid leading and loading questions.

31
Q

Reliability of survey measures

A
  • For multi-item measures: Cronbach’s alpha
  • Cronbach’s alpha measures to what extent a set of items are inter-related
  • High inter-relatedness = high reliability
  • Cronbach’s alpha = k/(k−1) × (sum of inter-item covariances / sum of all item variances and covariances), as sketched below
  • Cronbach’s alpha lies between 0 and 1
  • Values > .7 are considered acceptable
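
A minimal sketch (my own illustration, not from the slides) of computing Cronbach's alpha in Python with NumPy, using the equivalent form alpha = k/(k−1) × (1 − sum of item variances / variance of the total score); the respondent data below are made up.

import numpy as np

def cronbach_alpha(items):
    """items: array of shape (n_respondents, k_items)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)        # sample variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of the summed scale score
    # alpha = k/(k-1) * (1 - sum of item variances / total variance)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: 5 respondents answering a 3-item Likert scale
scores = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 3],
    [1, 2, 1],
])
print(round(cronbach_alpha(scores), 2))  # roughly 0.95 here; values > .7 are considered acceptable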