Quiz 10 Flashcards
Survey Research Design
Research design that seeks to understand a population by collecting data about knowledge, attitudes, behaviors, and beliefs from a sample. Collects data by asking questions of respondents, recording responses, and analyzing them statistically. Understanding the sample is used to “predict attitudes and behaviors” of the population.
Survey
Data collection instrument that asks questions through a questionnaire or interview that incorporates a level of standardization into the structure.
Can be used in either quantitative or qualitative studies.
What is the difference between Survey Research and Survey?
Survey Research : Research design used to “predict attitudes and behaviors” or “describe attributes”
- Non-experimental
- Can collect data on small groups or worldwide populations
Survey: Data collection tool that requires a level of standardization and can be used in quantitative (structured) or qualitative (unstructured) research
Data collected through a questionnaire or an interview
Answers analyzed statistically or thematically
Standardization
Everyone is asked the same questions in roughly the same order using the same terminology. Answers to those questions are analyzed numerically.
Power of Survey Research Design Example
The Pew Research Center’s Global Attitudes Project:
Public opinion surveys around the world on a broad array of subjects
people’s assessments of their own lives
views about the current state of the world
Survey Research- Personal Interview
Strengths – higher response rate, allows for elaboration
Weaknesses – slow, costly, difficult to control for biases
Survey Research- Telephone Interview
Strengths – fast, inexpensive, reaches a large representative sample, random digit dialing (RDD), better response rate than mailed surveys
Weaknesses – higher non-response rate, no visual cues, hard to detect or correct respondent confusion about questions
Survey Research-Mail Survey
Strengths – inexpensive, responder convenience
Weaknesses – low response rate, slow
Survey Research- Online Survey
Strengths – very low cost, timely, can reach global representative sample easily
Weaknesses – very low response rates, respondent bias
Training the Interviewers
Explain interviewer bias: This is especially a problem when the content of the survey is highly charged and people have strongly held convictions
Skills in questioning: Cues, leading questions, managing group dynamics, etc.
Writing Survey Research Questions
Focus:
Staying focused on a specific topic keeps the question clear
Clarity:
Keeping the question clear avoids misinterpretation and incorrect answers
Brevity:
Shorter questions are easier to answer.
Open/Unstructured Research Questions
Allow respondents some freedom in answering the question and give them the opportunity to elaborate on the topic in their own words. Usually involves some form of qualitative analysis
Closed/ Structured Research Questions
Limit the responses that can be given by requiring each respondent to indicate agreement or disagreement with predetermined choices. Easy to quantify and convert into numerical form for analysis. Harder to write/come up with good questions
Nominal
Label, group, name, or describe. One object is different from another; the number next to each response has no meaning except as a placeholder for that response. Researcher does not assign a value to each response
Ordinal
Assign meaning through ranking. One object is bigger or better or more of anything than another with no way to determine the distance between responses
Interval
Assign meaning through ranking. Distance between responses is measured in standard increments. Three types: Likert, Semantic Differential, Guttman
Interval Question: Likert Scale
Used to quantitatively assess abstract concepts, attitudes, behaviors, etc.
*Traditional 1-to-5 rating
Ask an opinion question on a 1-to-5 bipolar scale (called bipolar because there is a neutral midpoint and the two ends of the scale represent opposite positions on the opinion).
Level of agreement or disagreement with a statement
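As an illustration only (the response labels and data below are assumptions, not from the deck), a minimal Python sketch of how 5-point Likert answers are coded numerically and summarized:

```python
# Minimal sketch: coding 5-point Likert responses numerically (assumed labels).
LIKERT_CODES = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,  # neutral midpoint of the bipolar scale
    "Agree": 4,
    "Strongly agree": 5,
}

responses = ["Agree", "Strongly agree", "Neither agree nor disagree", "Agree", "Disagree"]

scores = [LIKERT_CODES[r] for r in responses]
mean_score = sum(scores) / len(scores)
print("Coded scores:", scores)            # [4, 5, 3, 4, 2]
print(f"Mean agreement: {mean_score:.2f}")  # 3.60 on the 1-to-5 scale
```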
Interval Question: Semantic Differential
Researchers measure attitudes, values, opinions, by having respondents rate their opinion or belief on a scale using bipolar adjectives
Interval Question: Guttman Scale
Cumulative rank scale often used to determine an individual’s knowledge of, or the existence or degree of agreement with, a concept or belief. Statements are listed in order of intensity, starting with the least extreme and ending with the most extreme statement
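A minimal sketch, with made-up items, of how a cumulative Guttman pattern can be scored: the score is the run of agreements before the first disagreement (a full Guttman analysis would also check how reproducible the response patterns are).

```python
# Minimal sketch (hypothetical items): scoring a Guttman (cumulative) scale.
# Items are ordered from least to most extreme; the scale score is the count
# of consecutive "agree" answers before the first disagreement.
items = [
    "I am comfortable reading survey results",        # least extreme
    "I could explain a Guttman scale to a classmate",
    "I could design my own cumulative scale",          # most extreme
]

def guttman_score(answers):
    """answers: list of booleans aligned with the ordered items above."""
    score = 0
    for agreed in answers:
        if not agreed:
            break  # cumulative pattern ends at the first disagreement
        score += 1
    return score

print(guttman_score([True, True, False]))  # 2: agrees through the second item
```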
Dichotomous Questions
When a question has two possible responses.
Surveys often use dichotomous questions that ask the respondent to select one of two possible answers
Yes/No
True/False
Agree/Disagree response
Sample size is determined by analyzing
Significance level
Confidence levels
Type I and Type II errors
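As a hedged illustration, one common calculation that ties these quantities together is the sample size for estimating a proportion: the confidence level fixes the z critical value (the significance level is its complement), and the margin of error is the tolerated estimation error. The defaults below are assumptions for the example, not values from the deck.

```python
# Minimal sketch: sample size for estimating a proportion (assumed inputs).
from math import ceil
from statistics import NormalDist

def sample_size(confidence=0.95, margin_of_error=0.05, p=0.5):
    """n = z^2 * p(1-p) / e^2 ; p = 0.5 is the most conservative assumption."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-tailed z critical value
    return ceil(z**2 * p * (1 - p) / margin_of_error**2)

print(sample_size())                 # ~385 completed responses at 95% confidence
print(sample_size(confidence=0.99))  # a stricter confidence level requires a larger n
```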
How will you sample?
The first set of considerations has to do with the population and its accessibility
Looking for generalizable findings - probability sampling
Can the population be enumerated?
For some populations, you have a complete listing of the units that will be sampled
registered voters or persons with active driver’s licenses
For other populations, it will be difficult
-If your study requires input from homeless persons, you will need to go find your population
-May rule out mail surveys or telephone interviews
Double Barreled
Asking two questions in one, where the respondent might feel differently about each statement and therefore be unable to answer.
Biased or Loaded Questions
Framing a question in a way that does not allow a respondent to disagree with the question or using words that create an assumption
Set the Tone
Set the tone by using an introduction that makes it acceptable for a respondent to admit to an undesirable behavior
Sampling
Probability and non-probability sampling methods
Response rate must be taken into account when determining number of participants needed
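A minimal arithmetic sketch of that adjustment (the numbers are assumed for illustration): divide the required number of completed surveys by the expected response rate to get the number of people to invite.

```python
# Minimal sketch: inflating invitations to account for an expected response rate.
from math import ceil

def invitations_needed(required_completes, expected_response_rate):
    """E.g. 385 completed surveys at a 25% response rate -> 1,540 invitations."""
    return ceil(required_completes / expected_response_rate)

print(invitations_needed(385, 0.25))  # 1540
```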
Data Collection
Through asking questions in a questionnaire or interview
Data Analysis
Employs both descriptive and inferential data analysis
Structured questions are assigned a numeric value for each response during the design phase (precoded)
Answers to unstructured questions are assigned numerical values for each written-in response (postcoded)
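A minimal sketch of the difference, with assumed code values and themes: precoded options map straight to numbers, while postcoding assigns numbers to themes identified in written-in answers.

```python
# Minimal sketch: precoded structured answers vs. postcoded open-ended answers.
# The code values and themes below are assumptions for illustration.

# Precoded: numeric values assigned to each response option at design time.
PRECODED = {"Yes": 1, "No": 0}
structured_answers = ["Yes", "No", "Yes"]
print([PRECODED[a] for a in structured_answers])  # [1, 0, 1]

# Postcoded: written-in answers are read, grouped into themes, then numbered.
POSTCODED_THEMES = {"cost": 1, "convenience": 2, "other": 9}

def postcode(text):
    text = text.lower()
    if "price" in text or "expensive" in text:
        return POSTCODED_THEMES["cost"]
    if "easy" in text or "close to home" in text:
        return POSTCODED_THEMES["convenience"]
    return POSTCODED_THEMES["other"]

print([postcode(a) for a in ["Too expensive", "It was close to home", "No comment"]])  # [1, 2, 9]
```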
Face Validity
Do the questions make sense?
Content Validity
Do the questions express the underlying concept they were designed to reflect?
Criterion Validity
Do the responses to the questions agree with an objective criterion or gold standard for the underlying concepts?
Construct Validity
Are the hypotheses concerning the relationships between the underlying concepts borne out by the responses?
Threats to Internal Validity
Threats to internal validity (results are true because the study worked and not due to a confounding variable or bias):
Self-selection
Response bias
Recall bias
Interview distortion
False respondents
Survey Instrument Validity and Reliability
Instrument reliability
Test-retest
Does the same question have the same response over time or with a different sample?
Interrater
Do two interviewers with the same questionnaire get similar responses?
Internal consistency
Do questions designed to evaluate the same concept get equivalent responses?
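One widely used index of internal consistency is Cronbach’s alpha; the sketch below computes it from scratch on a made-up response matrix (items in rows, respondents in columns) purely for illustration.

```python
# Minimal sketch: Cronbach's alpha as one common index of internal consistency.
# The small response matrix below is made up for illustration.

def cronbach_alpha(item_scores):
    """item_scores: list of per-item score lists, one inner list per item."""
    k = len(item_scores)                  # number of items
    n = len(item_scores[0])               # number of respondents

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

    item_var_sum = sum(variance(item) for item in item_scores)
    totals = [sum(item[i] for item in item_scores) for i in range(n)]  # per-respondent totals
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Three Likert items answered by five respondents.
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
print(round(cronbach_alpha(items), 2))  # values near 1 suggest the items tap the same concept
```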