Week 5: Survey Research Flashcards
Describe SURVEY research
Survey research is a quantitative and qualitative method with two important characteristics.
First, the variables of interest are measured using self-reports (using questionnaires or interviews). In essence, survey researchers ask their participants (who are often called respondents in survey research) to report directly on their own thoughts, feelings, and behaviors.
Second, considerable attention is paid to the issue of sampling. In particular, survey researchers have a strong preference for LARGE RANDOM samples because they provide the most accurate estimates of what is true in the population.
SURVEY research may be the
ONLY approach in psychology in which
RANDOM sampling is routinely used.
Beyond these two characteristics, almost ANYTHING GOES in survey research.
Although survey data are often analyzed using statistics, there are many questions that lend themselves to MORE qualitative analysis.
Describe the non-experimental vs experimental nature of survey research in psychology…
Most survey research is non-experimental.
It is used to describe SINGLE variables
(EXAMPLE - the percentage of voters who prefer one presidential candidate or another, the prevalence of schizophrenia in the general population, etc.)
AND to assess statistical relationships between variables (e.g., the relationship between income and health).
BUT surveys can also be used within experimental research.
EXAMPLE - The study by Lerner and her colleagues is a good example. Their use of self-report measures and a large national sample identifies their work as survey research.
BUT their manipulation of an independent variable (anger vs. fear) to assess its effect on a dependent variable (risk judgments) also identifies their work as experimental.
Describe the Lerner 9/11 Study
Post-9/11
Internet-based survey
2,000 US teens and adults aged 13 to 88 years
ASKED:
- Reaction to attacks
- judgments of various terrorism-related and other risks
Participants tended to overestimate most risks; females more than males.
NO DIFFERENCE between teens and adults.
Some participants were "primed" to feel anger by asking them what made them angry about the attacks and by presenting them with a photograph and audio clip intended to evoke anger.
Others were **primed to feel fear** by asking them what made them fearful about the attacks and by presenting them with a photograph and audio clip intended to evoke fear.
Participants primed to feel anger perceived LESS risk than participants primed to feel fear, showing how risk perceptions are strongly tied to specific emotions.
Describe the history of survey research
Survey research ROOTS— English and American “social surveys” conducted around the turn of the 20th century by researchers and reformers who wanted to document the extent of social problems such as poverty
By the 1930s, the US GOVERNMENT was conducting surveys to document economic and social conditions in the country.
NEED - to DRAW CONCLUSIONS about the entire population helped spur advances in sampling procedures.
ELECTION POLLING - A watershed event was the presidential election of 1936 between Alf Landon and Franklin Roosevelt.
A magazine called Literary Digest conducted a survey by sending ballots (which were also subscription requests) to millions of Americans. Based on this “straw poll,” the editors predicted that Landon would win in a landslide.
At the same time, the new pollsters were using scientific methods with much smaller samples to predict just the opposite—that Roosevelt would win in a landslide.
In fact, one of them, George Gallup, publicly criticized the methods of Literary Digest before the election and all but guaranteed that his prediction would be correct. And of course, it was, demonstrating the effectiveness of careful survey methodology (We will consider the reasons that Gallup was right later in this chapter).
Gallup’s demonstration of the power of careful survey methods led later researchers to conduct local election surveys and, in 1948, the first national election survey by the Survey Research Center at the University of Michigan.
This work eventually became the American National Election Studies as a collaboration of Stanford University and the University of Michigan, and these studies continue today.
Describe the history of survey research after the 1930s
Beginning in the 1930s, psychologists made important advances in questionnaire design - including techniques that are still used today, such as the…
Likert scale
Survey research has a strong historical association with the social psychological study of
- attitudes
- stereotypes
- prejudice
Early attitude researchers were also among the first psychologists to seek larger and more diverse samples than the convenience samples of university students that were routinely used in psychology (and still are).
How is survey research instrumental on a larger scale?
Survey data INSTRUMENTAL —
- Estimating the prevalence of various mental disorders
- Identifying statistical relationships among those disorders and with various other factors.
National Comorbidity Survey is a…
large-scale mental health survey conducted in the United States.
EXAMPLE - 10,000 adults were given a structured mental health interview in their homes in 2002 and 2003.
(Lifetime prevalence is the percentage of the population that develops the problem sometime in their lifetime.)
USED BY basic researchers seeking to understand the causes and correlates of mental disorders
AND
clinicians and policymakers who need to understand exactly how common these disorders are.
Describe problems with survey research data…
PROBLEM
Answers people give can be influenced in unintended ways by…
- the WORDING of the items
- the ORDER of the items
- the RESPONSE OPTIONS provided, and many other factors.
At best…
these influences add noise to the data.
At worst, they result in systematic biases and misleading results.
Describe a cognitive model and give an example
A Cognitive Model
Model of the cognitive processes that people engage in when responding to a survey item (Sudman, Bradburn, & Schwarz, 1996)[1].
- Respondents must interpret the question
- retrieve relevant information from memory
- form a tentative judgment
- convert the tentative judgment into one of the response options provided (e.g., a rating on a 1-to-7 scale)
- finally edit their response as necessary.
How many alcoholic drinks do you consume in a typical day?
………a lot more than average
………somewhat more than average
………average
………somewhat fewer than average
………a lot fewer than average
What are the problems with the “How many alcoholic drinks do you consume in a typical day?” question structure?
Although the question seems straightforward, there are several problems…
- they must interpret the question.
Example - they must DECIDE whether “alcoholic drinks” include beer and wine (as opposed to just hard liquor) and whether a “TYPICAL DAY” is a typical weekday, typical weekend day, or both.
Chang and Krosnick (2003) found that asking about “typical” behavior is more valid than asking about “past” behavior; however, their study compared a “typical week” to the “past week,” so the result may differ when considering typical weekdays or weekend days.
- They must retrieve relevant information from memory to answer it.
But what information should they retrieve, and how should they go about retrieving it? They might…
- think vaguely about some recent occasions on which they drank alcohol
- they might carefully try to recall and count the number of alcoholic drinks they consumed last week,
- or they might retrieve some existing beliefs that they have about themselves (e.g., “I am not much of a drinker”)
THEN they must use this information to arrive at a tentative judgment about how many alcoholic drinks they consume in a typical day.
EXAMPLE
this mental calculation might mean…
No. of drinks in the past week DIVIDED BY 7 days
Then they must format this tentative answer in terms of the response options actually provided.
FINALLY
they must decide whether they
WANT to report the response
OR
want to edit it in some way.
(With a high number, they may not want to look bad.)
Describe Context Effects on Survey Responses and give examples
Context Effects on Survey Responses
Unintended influences on respondents’ answers
= context effects because they are not related to the content of the item but to the context in which the item appears (Schwarz & Strack, 1990)[3].
EXAMPLE
An item-order effect occurs when the order in which the items are presented affects people’s responses.
One item can change how participants interpret a LATER item or change the information that they retrieve to respond to later items.
EXAMPLE Fritz Strack and his colleagues asked college students about both their…
1. General life satisfaction and
2. Dating frequency
When the life satisfaction item came first
LOW CORRELATION between variables.
When the dating frequency item came first, STRONG CORRELATION between variables
= those who date more have a strong tendency to be more satisfied with their lives.
Reporting the dating frequency first **made that information more accessible in memory**, so respondents were more likely to BASE their life satisfaction rating on it.
The response options provided can also have unintended effects on people’s responses… Explain and give examples…
The response options provided can also have unintended effects on people’s responses (Schwarz, 1999)[5].
EXAMPLE
When asked how often they are “really irritated,” with response options ranging from
“less than once a year” to “more than once a month,”
they tend to think of major irritations and report being irritated infrequently.
But with…
“less than once a day” to “several times a month”
they tend to think of minor irritations and report being irritated frequently.
PEOPLE ALSO ASSUME middle response options represent what is normal or typical.
So if they think of themselves as normal or typical, they tend to choose middle response options.
EXAMPLE
TV WATCHING… people are likely to report watching more television when the response options are centered on a middle option of 4 hours than when centered on a middle option of 2 hours.
To mitigate against order effects…
Rotate questions and response items when there is no natural order.
Counterbalancing or randomizing the order of presentation of the questions in online surveys is good practice and can REDUCE response-order effects. Such effects can be substantial: among undecided voters, the first candidate listed on a ballot receives a 2.5% boost simply by virtue of being listed first[6]!
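The randomization recommended above can be sketched in a few lines of Python; the question list, function name, and seeds are illustrative assumptions, not part of any survey package.

```python
import random

def randomized_order(questions, seed=None):
    """Return a per-respondent shuffled copy of the question list.

    Giving each respondent a different random order spreads any
    item-order effects evenly across the sample instead of letting
    one fixed order bias every response.
    """
    rng = random.Random(seed)   # seed only to make the sketch reproducible
    shuffled = list(questions)  # copy so the master list stays intact
    rng.shuffle(shuffled)
    return shuffled

# Hypothetical items, echoing the dating/life-satisfaction example.
items = [
    "How satisfied are you with your life?",
    "How often do you go on dates?",
    "How many hours of TV do you watch per day?",
]
for respondent in range(3):
    print(randomized_order(items, seed=respondent))
```

Full randomization is the simplest option; counterbalancing (systematically rotating a fixed set of orders) achieves the same goal with more control over which orders appear.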
What are open-ended questions, why are they used and give examples…
Open-ended items…
Simply ask a question and allow participants to answer in whatever way they choose. The following are examples of open-ended questionnaire items.
“What is the most important thing to teach children to prepare them for life?”
“Please describe a time when you were discriminated against because of your age.”
“Is there anything else you would like to tell us about?”
USEFUL when researchers do not know how participants might respond
or
when they want to avoid influencing their responses.
Qualitative in nature
- used with vaguely defined research questions
- often in the early stages of a research project.
Open-ended items are…
- Easier to write because there are no response options to worry about
- Take more time and effort on the part of participants
- They are MORE DIFFICULT for the researcher to analyze because the answers must be transcribed, coded, and submitted to some form of qualitative analysis, such as content analysis.
DISADVANTAGE is that respondents are more likely to SKIP open-ended items because they take longer to answer.
BEST TO USE WHEN… the researcher is unsure what answers participants might give, or for quantities that can easily be converted to categories later in the analysis.
What are CLOSE-ended questions, why are they used and give examples…
Closed-ended items
Ask a question and provide a set of response options for participants to choose from.
EXAMPLE
How old are you?
………Under 18
………18 to 34
………35 to 49
………50 to 70
………Over 70
On a scale of 0 (no pain at all) to 10 (worst pain ever experienced), how much pain are you in right now?
Have you ever in your adult life been depressed for a period of 2 weeks or more? Yes No
Used when researchers have… a good idea of the different responses that participants might make.
They are more QUANTITATIVE in nature
Used when interested in a well-defined variable or construct such as.. participants’ level of agreement with some statement, perceptions of risk, or frequency of a particular behavior.
Closed-ended items
- MORE difficult to write because they must include an appropriate set of response options
- quick and easy for participants to complete.
- easier for researchers to analyze because the responses can be easily converted to numbers and entered into a spreadsheet.
THEREFORE - MORE common…
Explain and give examples - All closed-ended items include a set of response options from which a participant must choose
All closed-ended items include a set of response options from which a participant must choose.
For categorical variables like sex, race, or political party preference, the categories are usually LISTED and participants choose the one (or ones) to which they belong.
For quantitative variables, a RATING scale is typically provided.
A rating scale is an ordered set of responses that participants must choose from.
EXAMPLE
The number of response options on a typical rating scale ranges from three to 11—although five and seven are probably most common.
Five-point scales are best for unipolar scales where ONLY ONE construct is tested
EXAMPLE - frequency (Never, Rarely, Sometimes, Often, Always).
Seven-point scales are best for bipolar scales where there is a DICHOTOMOUS spectrum
EXAMPLE - liking (Like very much, Like somewhat, Like slightly, Neither like nor dislike, Dislike slightly, Dislike somewhat, Dislike very much).
For bipolar questions…
Offer an earlier question that BRANCHES them into an area of the scale;
if asking about liking ice cream,
first ask “Do you generally like or dislike ice cream?”
Once the respondent chooses like or dislike
REFINE it by offering them relevant choices from the seven-point scale.
Branching improves both reliability and validity (Krosnick & Berent, 1993)[7].
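One way to picture the branching procedure is to map the two answers (direction, then intensity) onto the 7-point liking scale. The function and mapping below are an illustrative sketch, not a published scoring rule.

```python
# Map a two-step branched response onto a 7-point bipolar liking scale
# (1 = Dislike very much ... 4 = Neither ... 7 = Like very much).
INTENSITY_OFFSET = {"slightly": 1, "somewhat": 2, "very much": 3}

def branched_score(direction, intensity=None):
    """Combine the branch answer ('like'/'dislike'/'neither') with the
    follow-up intensity answer into a single 7-point score."""
    if direction == "neither":
        return 4  # Neither like nor dislike
    offset = INTENSITY_OFFSET[intensity]
    return 4 + offset if direction == "like" else 4 - offset

print(branched_score("like", "very much"))    # 7
print(branched_score("dislike", "slightly"))  # 3
print(branched_score("neither"))              # 4
```

The respondent only ever answers two short questions, but the researcher still recovers the full seven-point measurement.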
BEST TO… present ONLY verbal labels to the respondents but CONVERT them to numerical values in the analyses.
Avoid partial labels and overly long or specific labels.
In some cases, the verbal labels can be SUPPLEMENTED with (or even replaced by) meaningful graphics.
A final type of rating scale is the…
Visual-analog scale, on which participants make a mark somewhere along a horizontal line to INDICATE the MAGNITUDE of their response.
I——————I——————I
Describe the Likert Scale…
In the 1930s, researcher Rensis Likert…
CREATED SCALE for measuring people’s attitudes (Likert, 1932).
It involves presenting people with several statements—including both favorable and unfavorable statements—about some person, group, or idea.
Respondents then express their agreement or disagreement with each statement on a 5-point scale: Strongly Agree, Agree, Neither Agree nor Disagree, Disagree, Strongly Disagree.
Numbers are assigned to each response and then summed across all items to produce a score representing the attitude toward the person, group, or idea.
For items that are phrased in an opposite direction (e.g., negatively worded statements instead of positively worded statements), reverse coding is used so that the numerical scoring of statements also runs in the opposite direction.
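The summing-and-reverse-coding procedure described above can be sketched as follows; the item names and responses are hypothetical.

```python
def score_likert(responses, reverse_items, n_points=5):
    """Sum Likert responses into a single attitude score.

    responses: dict of item id -> numeric response, where
               1 = Strongly Disagree ... n_points = Strongly Agree.
    reverse_items: ids of negatively worded statements; their values
               are flipped so a higher total always means a more
               favorable attitude.
    """
    total = 0
    for item, value in responses.items():
        if item in reverse_items:
            value = (n_points + 1) - value  # 1 <-> 5 on a 5-point scale
        total += value
    return total

# q1 and q2 are favorable statements; q3 is negatively worded.
answers = {"q1": 5, "q2": 4, "q3": 1}
print(score_likert(answers, reverse_items={"q3"}))  # 5 + 4 + (6 - 1) = 14
```

Note the reverse-coding formula `(n_points + 1) - value`: on a 5-point scale it swaps 1 with 5 and 2 with 4, leaving the midpoint 3 unchanged.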
IMPORTANT!!!
UNLESS you are measuring people’s attitude toward something by assessing their level of agreement with several statements about it, BEST TO AVOID calling it a Likert scale.
You are probably just using a “rating scale.”