Exam 4 Flashcards
What Makes Surveys/Interviews Different from Other Research Forms
Surveys/Interviews rely on asking questions directly to the participant rather than making observations or manipulating a variable
What Can Be Studied With Surveys
- Just about anything: preferences, secrets, desires, opinions, etc.
- Aims to get honest/accurate info
- Response rates and honesty depend on survey type/topic
Mail Surveys
- Written + self-administered
- Sent via postal service
- Needs to be self-explanatory
- Needs to be interesting enough that people want to respond
- Inexpensive
Strengths and Weaknesses: Mail Surveys
Strengths
- Decreased likelihood of sampling bias
- Can reach a wide variety of people
- Can be distributed where otherwise unsafe
- May get people only available at certain times
- More complete responses since more time to respond
- More likely to get honest responses on sensitive issues
Weaknesses
- Distribution is not perfect
- Not everyone has an address
- Some people can’t read/write
- Very low response rate
- Maybe 30% tops
- Can do multiple mailings (but costly)
- Decreases participant confidence in anonymity
- Self-Selection
- May lead to biased data (esp w/ low response rate)
- No way to know exactly why people who respond do so
- 50% response rate generally required to conclude sample isn’t overly biased
How to Increase Response Rates: Mail Surveys
- Multiple Mailings
- Hand addressed/signed cover letter
- May cause hand cramps
- First-class postage
- Gets expensive
- Advance notice
- Incentives
- Prize drawings
- A small amount of money
- Self-addressed prepaid envelope
- Make it pretty!
- Neat, well-organized, easy to read
Strengths: Internet Surveys
- Wide + cheap distribution
- Automatic data coding
- Self-administered
- Needs to be self-explanatory + interesting
- Easier to find specific groups
- e.g. disease sufferers, political affiliation, etc.
- Time effective
- More honesty
- Allow for multimedia presentation and responses
- Overcomes some illiteracy issues
Weaknesses: Internet Surveys
- Response rate
- Combine w/ physical mail, incentives
- Response rate may not be known
- Who is responding?
- Limited Participant Pool
- Who has internet? (Becoming less of an issue)
Group Surveys
Survey given to a group of individuals already present
- e.g. class, workshop, etc.
- Higher response rate
- Group may/may not be representative (convenience sample)
- Must still be self-explanatory (clarification invalidates survey)
Strengths and Weaknesses: Phone Interviews/Surveys
Strengths
- Higher Response Rate
- Harder to say no to a person + you can establish rapport and correct issues
- Less expensive than in person interviews
- Can clarify questions, get more flexibility in questions + responses
Weaknesses
- Not everyone has a phone
- When are you calling?
- Many phone #’s are unlisted and random dialing can lead to non-relevant numbers
- Call Screening
- Interviewer Bias
Strengths and Weaknesses: Personal Interviews
On the street, in the mall, in a home, etc.
Strengths
- No need to use list/directory (may be out of date or biased)
- You’re sure who’s providing info
- High response rate (80-90%)
- Even if the interview isn't given, some demographics/characteristics may still be observable
- Longer + more in depth info
- Clarification, Follow Up ?’s, ask for more response
Weaknesses: Personal Interviews
- Hard to ensure anonymity
- Greatest opportunity for Interviewer Bias
- Interviewer's behaviors, questions, recording procedures, etc. can cause unrepresentative data
- Unconscious/conscious beliefs and opinions
- Tone of voice, word choice, body language, interpretation of participants, etc.
- Interviewers must be trained
- Higher likelihood of Socially Desirable Responses
- Participants might not have info on hand, may need to consult w/ others
- If interviewers are students/employees/etc., they may not follow procedures correctly
- Sampling Bias
- Time consuming + Expensive
How Should Surveys Be Constructed
Order and Appearance Matters
- Should look easy + interesting
- Organize by topic, don’t jump around
Common Techniques Used in Survey Construction
- Funnel Structure
- Demographic Questions
- Branching
- Filter Questions
Funnel Structure
Start general, then move to specific questions
Demographic Questions
- Descriptive questions about the respondent
- Sometimes seen as intrusive/boring
- Often put @ end so respondents are already committed
- Sometimes used as an icebreaker (phone/personal)
Branching
Determine which questions to ask based on previous responses
Filter Questions
- Determine which following questions will/won't apply to the participant
- Avoids “N/A” responses
- Avoids boredom/low response rates
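Branching and filter questions are easiest to see as simple conditional logic. Below is a minimal sketch (in Python, with made-up questions) of a filter question that determines which follow-ups a respondent sees.

```python
# Minimal sketch of a filter question with branching (hypothetical items):
# the answer to one question decides which follow-up questions are asked.

def run_survey():
    responses = {}

    # Filter question: decides whether the pet-related follow-ups apply at all
    responses["owns_pet"] = input("Do you own a pet? (yes/no) ").strip().lower()

    if responses["owns_pet"] == "yes":
        # Branch: only pet owners see these, so non-owners never face "N/A" items
        responses["pet_type"] = input("What kind of pet do you own? ")
        responses["pet_count"] = input("How many pets do you own? ")

    # Unconditional questions that everyone answers
    responses["age"] = input("What is your age? ")
    return responses

if __name__ == "__main__":
    print(run_survey())
```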
Closed Questions
- Possible responses are provided
- e.g. multiple choice, scale, etc.
Advantages
- Easier to quantify/analyze/perform statistics
Disadvantages
- Must provide enough possible responses (sometimes yes/no isn’t enough)
Open-Ended Questions
- Free answer
- e.g. short answer, essay, etc.
Advantages
- Can provide more info/explanation
Disadvantages
- Must be coded
- Can again use inter-rater reliability (check agreement between coders)
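Since open-ended answers must be coded into categories, coder agreement can be checked with inter-rater statistics. A hedged sketch (fabricated responses and category labels) computing percent agreement and Cohen's kappa for two coders:

```python
# Sketch: inter-rater reliability when two coders categorize the same set of
# open-ended responses. Data and category labels are made up.

from collections import Counter

coder_a = ["positive", "negative", "positive", "neutral", "positive", "negative"]
coder_b = ["positive", "negative", "neutral",  "neutral", "positive", "negative"]

def cohens_kappa(r1, r2):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n          # raw % agreement
    counts1, counts2 = Counter(r1), Counter(r2)
    categories = set(r1) | set(r2)
    expected = sum((counts1[c] / n) * (counts2[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
print(f"Percent agreement: {agreement:.2f}")
print(f"Cohen's kappa:     {cohens_kappa(coder_a, coder_b):.2f}")
```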
Scales
- Make sure lowest/highest values are clearly labeled
- Consider labeling middle point as "don't know"
- Avoid making scale too large (1-100 isn’t always better than 1-10)
- Free Scales
Questions to Avoid in a Survey/Interview
- Loaded Questions
- Leading Questions
- Double-Barreled Questions
Loaded Questions
Include terms that are emotionally laden/non-neutral
- Often show researcher bias
Leading Questions
Suggest that there is a particular desired response
- The organization of questions can also be leading
Double-Barreled Questions
Asks about two things at once, so a single person could have two separate answers
- Goal is to get unambiguous, unbiased, accurate info from respondents
Reliability
Does the measurement tool provide consistent results/data/etc.
Test-Retest Reliability
Do people respond the same way to a survey when it's given a second time?
- Best done w/ some time or activity between the tests
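In practice, test-retest reliability is usually reported as the correlation between the same respondents' scores on the two administrations. A minimal sketch with made-up scores:

```python
# Sketch: test-retest reliability as the correlation between the same
# respondents' scores on two administrations of a survey (scores are made up).

import numpy as np

time1 = np.array([12, 18, 9, 22, 15, 20])   # first administration
time2 = np.array([13, 17, 10, 21, 16, 19])  # same people, second administration

r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability (Pearson r): {r:.2f}")
```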
Alternative-Forms Reliability
How well do two forms/versions of the same test yield comparable results?
Split-Half Reliability
Used when the entire assessment aims to measure one thing (internal consistency)
- Compare agreement between two halves of the assessment
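A common way to compute this is to correlate odd- and even-item half scores, then apply the Spearman-Brown correction to estimate full-test reliability. A sketch with fabricated item scores:

```python
# Sketch: split-half reliability with a Spearman-Brown correction.
# Item scores are fabricated; each row is one respondent, each column one item.

import numpy as np

items = np.array([
    [4, 5, 4, 5, 3, 4],
    [2, 1, 2, 2, 1, 2],
    [5, 5, 4, 5, 5, 4],
    [3, 2, 3, 3, 2, 3],
    [4, 4, 5, 4, 4, 5],
])

# Split into odd- and even-numbered items and total each half per respondent
half1 = items[:, ::2].sum(axis=1)
half2 = items[:, 1::2].sum(axis=1)

r_halves = np.corrcoef(half1, half2)[0, 1]
# Spearman-Brown: estimates full-length reliability from the half-test correlation
split_half = (2 * r_halves) / (1 + r_halves)
print(f"Half-test correlation: {r_halves:.2f}")
print(f"Split-half (Spearman-Brown) reliability: {split_half:.2f}")
```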
Cronbach’s Alpha
Measures internal consistency using the correlations among all test items (roughly, the average of all possible split-halves)
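Concretely, alpha is computed from the number of items, the item variances, and the variance of total scores: alpha = k/(k-1) * (1 - sum of item variances / total-score variance). A sketch with fabricated item scores:

```python
# Sketch: Cronbach's alpha from an item-score matrix (fabricated data;
# rows = respondents, columns = items on the same scale).

import numpy as np

items = np.array([
    [4, 5, 4, 5],
    [2, 1, 2, 2],
    [5, 5, 4, 5],
    [3, 2, 3, 3],
    [4, 4, 5, 4],
], dtype=float)

k = items.shape[1]                              # number of items
item_vars = items.var(axis=0, ddof=1)           # variance of each item
total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")
```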
Validity
Are you measuring what you say you are measuring?
Face Validity
Common sense measure
- Does it seem like your test actually measures what you intend?
- Not the best measure
Construct Validity
The extent to which the concepts you intend to measure are actually measured
Criterion Validity
How well do results of your instrument correlate with real outcomes/behaviors?