Q5 Surveys Flashcards
Two ways that surveys breed skepticism
- Aren’t surveys often wrong?
- How can a small sample represent an entire nation?
How can a small sample be fine?
A small sample works if it is representative and diverse; diversity matters more than sheer size when it comes to representing a population
Is a survey different from a poll?
For our purposes, surveys and polls are identical: if they’re done well, they’re done the same way
Four key principles to lessen interpretation errors
- Surveys are snapshots, not predictions (they can only capture a person’s current thinking)
- Opinions can change
- A recent, high-profile event can influence survey responses
- Survey results are in a range, not a point
Why are survey results a range?
The percentage reported is the center of the range; the true figure could be anywhere within plus or minus the sampling error
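A minimal sketch of that arithmetic (the 52% figure and the ±3-point sampling error are hypothetical):

```python
# Hypothetical result: 52% support, sampling error of +/- 3 points.
point_estimate = 52.0
sampling_error = 3.0

# The reported number is only the center of the range.
low, high = point_estimate - sampling_error, point_estimate + sampling_error
print(f"True support is likely between {low}% and {high}%")  # 49.0% and 55.0%
```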
What do presidential polls measure?
National preferences, not electoral votes
Why aren’t presidential polls done for each state?
- It’s too expensive
- The Electoral College decides the outcome, and it gives small states more clout than they would have if each person’s vote counted equally
How to get a higher confidence level
Widen the range (accept a larger sampling error)
How to get a smaller range
Lower the confidence level
Confidence level
The expected share of repeated surveys whose results would fall within the sampling error (e.g., 95 out of 100 at a 95% confidence level)
Sampling error
The expected difference between a representative sample’s results and the true values in the full population
If you double the size of a sample of 1,200, and the sampling error is + or - 3 percentage points, what would the new sampling error be?
- Only 1 percentage point lower (about + or - 2 points), because sampling error shrinks with the square root of the sample size (see the sketch below)
- Reducing sampling error by even 1 percentage point requires a big, expensive increase in sample size
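A sketch of the arithmetic behind the last few cards, assuming the standard margin-of-error formula for a proportion, z * sqrt(p(1-p)/n), with the usual worst-case p = 0.5 (the sample sizes are illustrative):

```python
import math

# Standard z-scores for common confidence levels.
Z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}

def margin_of_error(n, confidence=0.95, p=0.5):
    """Sampling error for a proportion, in percentage points."""
    return 100 * Z[confidence] * math.sqrt(p * (1 - p) / n)

# Higher confidence level -> wider range, for the same sample.
for level in (0.90, 0.95, 0.99):
    print(f"{level:.0%} confidence: +/- {margin_of_error(1200, level):.1f} points")

# Doubling the sample shrinks the error by only ~1 point, because
# the error falls with the square root of the sample size.
print(round(margin_of_error(1200), 1))  # 2.8 (reported as +/- 3 points)
print(round(margin_of_error(2400), 1))  # 2.0

# A subgroup is a smaller sample, so its error margin is larger.
print(round(margin_of_error(300), 1))   # 5.7
```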
Sample subgroups
- Carry larger error margins
- A larger sample is needed if a subgroup analysis is important
Questions we should always ask
- Who did the survey?
- Who paid for the survey?
- Which survey mode was used?
How can you tell whether an organization doing surveys is good?
- Should be transparent about its methods and survey details (response rate, why some respondents were removed)
- Should publicly disclose all details about the questions and answers (number of responses, whether answer order was randomized)
When does the issue of payment surface?
Whenever an advocacy group pays for a survey
Does the involvement of an advocacy group automatically invalidate the survey?
No; if the survey is done properly, it can still be reliable
How are advocacy groups often selective?
Tend to release only results that support their position
How to make sure an advocacy group’s survey is valid
Ask to see the entire survey and all the details about the methodology, the questions, and the answers
Survey modes
Written
- Mail
- In person
- Online panel
Interview
- Telephone
Mail survey mode
Cheap, but few respond
In-person survey mode
Respondents are hard to reach, and it is costly
Online panel survey mode
- Efficient, but answers can be insincere and prone to straightlining
- Not available to people who are offline
Telephone interview survey mode
- Effective, but hard to get responses
- Proven to be reliable and likely to elicit more sincere answers
- Cost is an issue
Do written or interview surveys have an advantage for sensitive subjects?
- Written surveys have the advantage for sensitive subjects
- People are more candid in writing and more guarded when speaking with a human being
Random digit dial
Draws a representative sample by dialing phone numbers at random
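A toy sketch of the idea; the area codes and number format here are made up for illustration, not a real sampling frame:

```python
import random

def random_digit_dial(n_numbers, area_codes=("212", "631", "718")):
    """Generate random US-style phone numbers within given area codes."""
    numbers = []
    for _ in range(n_numbers):
        area = random.choice(area_codes)
        # Random last seven digits, so unlisted numbers can be reached too.
        local = f"{random.randint(0, 9_999_999):07d}"
        numbers.append(f"({area}) {local[:3]}-{local[3:]}")
    return numbers

print(random_digit_dial(3))
```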
What is the problem with telephone interviews today?
Fewer people these days are willing to answer telephone surveys (lower response rate)
Response rate
The percentage of valid potential respondents who agree to take a survey
What is the problem with a lower response rate?
- It raises costs, because more people must be called to get one response
- It raises the risk of non-response bias
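A quick sketch of the cost point; the response rates and the 1,200-completion target are hypothetical:

```python
# Hypothetical: a pollster needs 1,200 completed interviews.
target_completes = 1200

for response_rate in (0.36, 0.09):  # e.g., a past rate vs. a current rate
    calls_needed = target_completes / response_rate
    print(f"At {response_rate:.0%} response, ~{calls_needed:,.0f} calls are needed")
```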
Non-response bias
- Occurs when those who choose to respond to a survey differ in a meaningful way from those who decline
- A potential risk, but not yet a demonstrated problem
Best alternative to telephone surveys
Online panels
Some organizations offer polls on their websites or social media feeds
- These don’t even represent the people who visit the site, let alone the public
- They rely on non-representative samples, so their results don’t matter
How does a valid online panel survey draw a representative sample?
Randomly chooses among panelists
Types of panels
- Volunteer (opt-in) panel
- Invitation panel
Volunteer panel
Most common because it’s the cheapest
Invitation panel
Panelists are invited at random, by mailed letters or randomly dialed phone numbers; considered the better approach
How are survey panel participants categorized?
By demographics to reflect the population, then assigned at random to different surveys
Straightlining
- When respondents speed through a survey without giving sincere answers (e.g., picking the same option for every question)
- More common in opt-in panels, less common in invitation panels
What to do if a group’s share of the sample is only half its share of the population
Give that group’s answers twice the weight
Answers from overrepresented groups
Would be given proportionally less weight
Weighting
Adjusting the value of answers if the sample demographics differ substantially from the population
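A minimal sketch of the weighting arithmetic from the last three cards (the demographic shares are made up):

```python
# Hypothetical shares for one demographic group.
population_share = 0.20  # 20% of the population
sample_share = 0.10      # but only 10% of the sample

# Weight = population share / sample share.
print(population_share / sample_share)  # 2.0 -> answers count twice

# An overrepresented group is weighted down proportionally.
print(0.20 / 0.40)  # 0.5 -> answers count half
```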
Social desirability bias
Tendency to give the “correct” answer to questions involving civic or social responsibility
Instead of asking “Do you plan to vote in the upcoming election?” what is better?
- “Identify your polling place”
- If they can name it quickly, it’s evidence they actually vote
- Does not work in states like FL with widespread voting by mail
Pollsters use lists of registered voters to gauge whether respondents are likely to vote in the upcoming election
Does not remove social desirability bias, but it at least narrows the population by excluding those not registered
A slight change in question wording
Can change the results, such as asking whether to “forbid” something versus whether to “allow” it
How does offering a choice in a survey matter?
- We are wired to say yes and want to be agreeable when given a “yes” or “no” question
- Asking for a preference produces a more valid answer
Acquiescence bias
A tendency to select a favorable response or agree when in doubt
How can different surveys, done properly, produce different results with different question order?
Questions asked earlier can shift people’s opinion on the core question (priming)
Priming
- When exposure to a stimulus unconsciously influences a response
- Not neutral!
Answer order in writing
The first option written is chosen most often
Answer order when spoken
The last option spoken is chosen most often
Primacy effect
- A tendency to choose or remember items offered first
- Advantage seen in written surveys
Recency effect
- A tendency to choose or remember items offered last
- Advantage seen in oral surveys
How can a telephone oral survey make sure no one answer choice has an advantage?
Rotate the answer choices: on each new call, move the last choice to the front, and so on across calls (see the sketch below)
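A minimal sketch of that rotation, assuming a simple list of answer choices:

```python
from collections import deque

# Hypothetical answer choices for one question.
choices = deque(["Strongly agree", "Agree", "Disagree", "Strongly disagree"])

# On each new call, the last choice moves to the front, so every option
# spends equal time in the favored first (primacy) and last (recency) slots.
for call in range(1, 5):
    print(f"Call {call}: {list(choices)}")
    choices.rotate(1)  # shifts right: last element becomes first
```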
Likert scale
A consistent answer sequence that always begins with the Agree options, has a Neutral midpoint, and ends with the Disagree options
Closed-ended vs. open-ended answer choices
Open-ended answers better reflect true public opinion, but not every survey can be open-ended because collecting and analyzing such answers is slow and costly
Adding in an answer choice saying the respondent “doesn’t know enough”
- When respondents are given the option to admit lacking information, social desirability bias drops away
- The majority usually chooses it