week 3 Flashcards

1
Q

SURVEY DESIGNS

A

Surveys measure variables; there is no manipulation involved
Manipulation is what we do in experiments
Surveys use a non-experimental design
In experiments
The focus is on establishing causal relationships
Use randomised approaches, and test differences across conditions
If differences are found, infer they are due to experimental manipulation
In surveys, focus tends to be on relationships across variables
No up-front manipulation involved
Depending on the sampling, any relationships found can then be generalised to the wider population

2
Q

THE AIMS OF YOUR SURVEY

A

Based on previous research what is the aim of your study?
What is the theoretical justification for your aim? Why do you expect to find an effect?
To answer your aims/hypotheses, what are you planning to measure?
How are you defining the behaviours/traits you’re aiming to examine (based on previous research)?
How are you operationalising your terms?
This helps you identify what you want to measure (i.e., what questions to ask).
Once you have completed these steps, you can put together your survey

3
Q

STANDARDISED MEASURES

A

Where possible, try to use standardised scales to measure your variables
Rigorous design process
Start with hundreds of candidate questions (‘test items’)
Each item is evaluated independently
Undertake rigorous statistical analyses (e.g., factor analysis)
Validity and reliability repeatedly tested
Tests have descriptive statistics (norms) to act as a reference point for new data

4
Q

CREATING A STANDARDISED MEASURE: ITEM GENERATION

A

Kyriazos & Stalikas (2018) propose several questions to ask when planning a scale:
How many items are needed?
What response scale is most appropriate?
How will the scale be scored?
Which psychometric model is most appropriate?
What item evaluation process is suitable?
How will the test be administered?
Items should be consistent with the theory
Run a focus group with the population of interest
Need to decide how the items will be phrased
How will you ask your questions?

5
Q

STANDARDISED MEASURES: MEASUREMENT FORMAT

A

Things to think about when creating your items (Kyriazos & Stalikas, 2018):
What will be your time frame in your stem sentence (e.g., “In the last two weeks…”)?
How many response options will you have?
A 3-, 4-, 5-, 6-, or 7-point scale (or more)?
What labels will you use? Agreement? Likelihood? Frequency?
Will you include a midpoint?
Will any items be reverse scored?
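A minimal sketch of reverse scoring in Python, assuming a 5-point response scale and hypothetical item names (not from the slides): a reverse-scored item is recoded as (scale maximum + 1) minus the response.

```python
import pandas as pd

# Hypothetical responses on a 5-point agreement scale (1 = strongly disagree, 5 = strongly agree)
responses = pd.DataFrame({
    "item_1": [1, 4, 5, 2],
    "item_2_reversed": [5, 2, 1, 4],  # negatively worded item
})

SCALE_MAX = 5  # top point of the response scale (an assumption)

# Reverse score the negatively worded item: (max + 1) - response
responses["item_2_scored"] = (SCALE_MAX + 1) - responses["item_2_reversed"]
print(responses)
```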

6
Q

COMMON ERRORS

A

Ambiguous questions = the format or focus of the required answer is unclear
Technical terminology = word(s) may be unfamiliar
Leading questions = lead participants to a particular response
Hypothetical questions = response dependent on hypothetical condition
Patronising tone
Value judgements = personal bias affecting wording
Context effects = Issues relating to other questions on the form
Multiple content/double-barrelled questions = asks about more than one thing
Hidden assumptions

7
Q

CREATING A STANDARDISED MEASURE: ITEM GENERATION

A

Once we have our items
Consult Subject-Matter Expert(s) (SME)
Use several SMEs if possible
Once the items are generated, have the SME(s) rate the quality of the items
Select those items with the highest ratings (see the sketch after this list)
SMEs can also suggest revisions to the items
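A minimal sketch of this selection step in Python; the items, ratings, and cut-off below are hypothetical, not taken from the slides.

```python
import pandas as pd

# Hypothetical quality ratings (1-5) from three SMEs for four candidate items
ratings = pd.DataFrame(
    {
        "sme_1": [5, 3, 4, 2],
        "sme_2": [4, 2, 5, 3],
        "sme_3": [5, 3, 4, 2],
    },
    index=["item_a", "item_b", "item_c", "item_d"],
)

# Average the ratings across SMEs and keep the highest-rated items
mean_rating = ratings.mean(axis=1)
retained = mean_rating[mean_rating >= 4.0].index.tolist()  # hypothetical cut-off
print(retained)  # ['item_a', 'item_c']
```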

8
Q

ITEM EVALUATION: INITIAL EXAMINATION AND DIMENSIONALITY

A

Initial examination (Item means, variances, correlations)
Exploratory Factor Analysis
Statistical model specifying a structure underlying the data
Methodology: Computational procedures that enable us to analyse data and reveal underlying structures in an unconstrained way
Is your scale unidimensional (e.g., WEMWBS; Tennant et al., 2007) or multidimensional (e.g., DASS-21; Lovibond & Lovibond, 1995)?
Which items have the highest factor loadings?
Statistics-led approach (see the EFA sketch after this list)
Confirmatory Factor Analysis
Identify whether the items in a questionnaire capture the construct(s) we want to measure
Theory-led approach
Exploratory Structural Equation Modeling (ESEM)
Combines features of EFA and CFA
Allows for cross-loadings
Optimise scale length
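As one hedged illustration of the initial examination and EFA steps, the sketch below uses Python with the third-party factor_analyzer package; the data file, number of factors, and rotation are assumptions, not part of the lecture material.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

# Assumed: a CSV of item responses, rows = participants, columns = items (hypothetical file name)
items = pd.read_csv("survey_items.csv")

# Initial examination: item means, variances, and inter-item correlations
print(items.describe())
print(items.corr().round(2))

# Exploratory factor analysis with an oblique rotation (2 factors is an assumption)
efa = FactorAnalyzer(n_factors=2, rotation="oblimin")
efa.fit(items)

# Factor loadings: which items load most strongly on each factor?
print(pd.DataFrame(efa.loadings_, index=items.columns).round(2))
```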

9
Q

ITEM EVALUATION: DIMENSIONALITY

A

Item Response Theory
Provides an estimate of a discrimination parameter, i.e., how well an item functions as a measure of a latent construct (similar to a factor loading)
Allows the assessment of difficulty/severity thresholds on the latent construct continuum (occasional overeating… disordered binge eating)
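A worked illustration (assuming the common two-parameter logistic model, which is not stated in the slides): the probability of endorsing an item depends on the respondent’s latent trait level theta, the item’s discrimination a, and its difficulty/severity b.

```python
import math

def two_pl_probability(theta: float, a: float, b: float) -> float:
    """2PL item response function: P(endorse | theta) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A highly discriminating item (a = 2.0) with a high severity threshold (b = 1.5)
# is rarely endorsed at an average trait level (theta = 0)...
print(round(two_pl_probability(theta=0.0, a=2.0, b=1.5), 2))  # ~0.05
# ...but usually endorsed at a high trait level (theta = 2.5)
print(round(two_pl_probability(theta=2.5, a=2.0, b=1.5), 2))  # ~0.88
```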

10
Q

A NOTE ON MEASUREMENT INVARIANCE

A

Measurement invariance (equivalence) is becoming an increasingly popular technique in scale development
Captures the degree to which your measure is testing the same thing across conditions

11
Q

RELIABILITY

A

Test-retest – if I measure your height today, will it be the same as yesterday?
Inter-rater reliability – if I measure you, will your height be the same as when Scott measured you?
Cohen’s Kappa
Inter-method (or parallel-forms) reliability – will your height be the same if I use a tape measure as when I used a ruler?
Internal consistency reliability – did you respond to similar questions in a similar way (e.g., when measuring extroversion)
Split-half technique
Cronbach’s alpha or McDonald’s Omega
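A minimal sketch of Cronbach’s alpha in Python, using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score); the response data are hypothetical.

```python
import numpy as np

def cronbachs_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (participants x items) array of scores on one scale."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses from five participants to three extroversion items
scores = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 3, 2],
    [4, 4, 5],
])
print(round(cronbachs_alpha(scores), 2))  # ~0.92 for this toy data
```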

12
Q

TYPES OF VALIDITY

A

Construct Validity – is this a valid means of tapping into the construct you are attempting to measure?
Content Validity – the extent to which the domain of interest is adequately represented by the scale items
Predictive validity – does the score predict behaviour in the future?
Concurrent validity – is the score related to another criterion of this construct tested at the same time
Convergent validity – is the score related to other measures of the same construct
Discriminant validity – is the score different from measures that theoretically measure something else
Floor-Ceiling effect – The extent to which scores cluster near the low (floor) / high (ceiling) extreme on the scale (e.g., how much do you love your pet, 1–7 Likert)
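A small sketch of checking for floor/ceiling effects in Python; the responses and the 1–7 scale below are hypothetical.

```python
import numpy as np

# Hypothetical responses to "How much do you love your pet?" on a 1-7 Likert scale
responses = np.array([7, 7, 6, 7, 5, 7, 6, 7, 7, 4])

scale_min, scale_max = 1, 7
floor_pct = np.mean(responses == scale_min) * 100    # % of respondents at the lowest score
ceiling_pct = np.mean(responses == scale_max) * 100  # % of respondents at the highest score

print(f"Floor: {floor_pct:.0f}%  Ceiling: {ceiling_pct:.0f}%")  # Floor: 0%  Ceiling: 60%
# A large share of responses at the top of the scale suggests a ceiling effect
```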

13
Q

DEVELOPING A SCALE

A

This is a very brief overview of the stages of scale development
It’s rare to undertake all these steps in one study
Instead researchers gather data from multiple samples and multiple studies to test for different characteristics of the scale
Incrementally validate the measure
May look at only one aspect, such as the factor structure, reliability, validity or invariance testing.

14
Q

MAIL SURVEYS

A

Questionnaires are distributed and returned through the post
Questionnaires are self-administered
Advantages
Low costs
No interviewer bias
Suitable for sensitive topics
Disadvantages
Low completion rates (~30%)
Prone to non-response bias
Errors arising from participants misunderstanding questions

15
Q

PERSONAL (FACE-TO-FACE) INTERVIEWS

A

Respondents are contacted by trained interviewers (in their homes, at their place of business, in public places) who administer the questionnaires
Advantages
More control and flexibility over how the survey is administered
Can make use of computer technology – CAPI (Computer Assisted Personal Interview)
Disadvantages
Interviewer bias = interviewer influences participants’ responses or records them incorrectly
High costs (time and/or financial)

16
Q

TELEPHONE SURVEYS

A

Respondents are contacted by telephone by trained interviewers who administer the questionnaires over the phone
Advantages
Cost-effective & time-effective
Can make use of computer technology – CATI (Computer Assisted Telephone Interview)
Disadvantages
Sampling bias
Interviewer bias
Low response rate (may be perceived as cold-calling)

17
Q

INTERNET SURVEYS

A

Respondents complete the questionnaire online
Respondents can be recruited via online channels (e.g., email) or offline channels (e.g., an information board ad)
Advantages
Low costs
Access to large, diverse, geographically remote, or underrepresented populations
Quick data collection
Can be used in conjunction with specialist software (e.g., Qualtrics) to automate recruitment (i.e., sending emails), data entry, and preliminary data analysis
Disadvantages
Sampling bias
Non-response bias
Lower response rates compared to mail, personal and telephone surveys
Lack of control over the research environment (cf. online experimental research)