Week 5: Survey Research Flashcards

1
Q

Describe SURVEY research

A

Survey research is a quantitative and qualitative method with two important characteristics.

First, the variables of interest are measured using self-reports (using questionnaires or interviews). In essence, survey researchers ask their participants (who are often called respondents in survey research) to report directly on their own thoughts, feelings, and behaviors.

Second, considerable attention is paid to the issue of sampling. In particular, survey researchers have a strong preference for LARGE RANDOM samples because they provide the most accurate estimates of what is true in the population.

SURVEY research may be the
ONLY approach in psychology in which
RANDOM sampling is routinely used.

Beyond these two characteristics, almost ANYTHING GOES in survey research.

Although survey data are often analyzed using statistics, there are many questions that lend themselves to MORE qualitative analysis.

2
Q

Describe the non-experimental vs experimental nature of survey research in psychology…

A

Most survey research is non-experimental.

It is used to describe SINGLE variables
(EXAMPLE - the percentage of voters who prefer one presidential candidate or another, the prevalence of schizophrenia in the general population, etc.)

AND to assess statistical relationships between variables (e.g., the relationship between income and health).

BUT surveys can also be used within experimental research.
EXAMPLE - The study by Lerner and her colleagues is a good example. Their use of self-report measures and a large national sample identifies their work as survey research.

BUT their manipulation of an independent variable (anger vs. fear) to assess its effect on a dependent variable (risk judgments) also identifies their work as experimental.

3
Q

Describe the Lerner 9/11 Study

A

Post-9/11
Internet-based survey
2,000 US teens and adults, 13-88 years old
ASKED:
- Reaction to attacks
- judgments of various terrorism-related and other risks

Participants tended to overestimate most risks; females more than males.
NO DIFFERENCE between teens and adults.

Some participants were “primed” to feel anger by asking them what made them angry about the attacks and by presenting them with a photograph and audio clip intended to evoke anger.

Others were primed to feel fear by asking them what made them fearful about the attacks and by presenting them with a photograph and audio clip intended to evoke fear.

Participants primed to feel anger perceived LESS risk than participants primed to feel fear, showing how risk perceptions are strongly tied to specific emotions.

4
Q

Describe the history of survey research

A

Survey research ROOTS— English and American “social surveys” conducted around the turn of the 20th century by researchers and reformers who wanted to document the extent of social problems such as poverty

By the 1930s, the US GOVERNMENT was conducting surveys to document economic and social conditions in the country.

NEED - to DRAW CONCLUSIONS about the entire population helped spur advances in sampling procedures.

ELECTION POLLING - A watershed event was the presidential election of 1936 between Alf Landon and Franklin Roosevelt.

A magazine called Literary Digest conducted a survey by sending ballots (which were also subscription requests) to millions of Americans. Based on this “straw poll,” the editors predicted that Landon would win in a landslide.

At the same time, the new pollsters were using scientific methods with much smaller samples to predict just the opposite—that Roosevelt would win in a landslide.

In fact, one of them, George Gallup, publicly criticized the methods of Literary Digest before the election and all but guaranteed that his prediction would be correct. And of course, it was, demonstrating the effectiveness of careful survey methodology (We will consider the reasons that Gallup was right later in this chapter).

Gallup’s demonstration of the power of careful survey methods led later researchers to conduct local election surveys and, in 1948, the first national election survey by the Survey Research Center at the University of Michigan.

This work eventually became the American National Election Studies as a collaboration of Stanford University and the University of Michigan, and these studies continue today.

5
Q

Describe the history of survey research after the 1930s

A

Beginning in the 1930s, psychologists made important advances in questionnaire design - including techniques that are still used today, such as the…

Likert scale

Survey research has a strong historical association with the social psychological study of
- attitudes
- stereotypes
- prejudice

Early attitude researchers were also among the first psychologists to seek larger and more diverse samples than the convenience samples of university students that were routinely used in psychology (and still are).

6
Q

How is survey research instrumental on a larger scale?

A

Survey data INSTRUMENTAL —
- Estimating the prevalence of various mental disorders
- Identifying statistical relationships among those disorders and with various other factors.

National Comorbidity Survey is a…
large-scale mental health survey conducted in the United States.

EXAMPLE - 10,000 adults were given a structured mental health interview in their homes in 2002 and 2003.

(Lifetime prevalence is the percentage of the population that develops the problem sometime in their lifetime.)

USED BY basic researchers seeking to understand the causes and correlates of mental disorders

AND

clinicians and policymakers who need to understand exactly how common these disorders are.

7
Q

Describe problems with survey research data…

A

PROBLEM

Answers people give can be influenced in unintended ways by…
- the WORDING of the items
- the ORDER of the items
- the RESPONSE OPTIONS provided, and many other factors.

At best, these influences add noise to the data.
At worst, they result in systematic biases and misleading results.

8
Q

Describe a cognitive model and give an example

A

A Cognitive Model

Model of the cognitive processes that people engage in when responding to a survey item (Sudman, Bradburn, & Schwarz, 1996)[1].

  1. Respondents must interpret the question
  2. retrieve relevant information from memory
  3. form a tentative judgment
  4. convert the tentative judgment into one of the response options provided (e.g., a rating on a 1-to-7 scale)
  5. finally edit their response as necessary.

How many alcoholic drinks do you consume in a typical day?

………a lot more than average
………somewhat more than average
……….average
………somewhat fewer than average
………a lot fewer than average

9
Q

What are the problems with the “How many alcoholic drinks do you consume in a typical day?” question structure?

A

Although straightforward, there are several problems…

  1. they must interpret the question.
    Example - they must DECIDE whether “alcoholic drinks” include beer and wine (as opposed to just hard liquor) and whether a “TYPICAL DAY” is a typical weekday, typical weekend day, or both.

Chang and Krosnick (2003) found that asking about “typical” behavior is more valid than asking about “past” behavior, but their study compared a “typical week” to the “past week,” and the result may differ when considering typical weekdays or weekend days.

  2. They must retrieve relevant information from memory to answer it.

But what information should they retrieve, and how should they go about retrieving it? They might…
- think vaguely about some recent occasions on which they drank alcohol
- they might carefully try to recall and count the number of alcoholic drinks they consumed last week,
- or they might retrieve some existing beliefs that they have about themselves (e.g., “I am not much of a drinker”)

THEN they must use this information to arrive at a tentative judgment about how many alcoholic drinks they consume in a typical day.

EXAMPLE

this mental calculation might mean…
No. of drinks DIVIDED BY 7 days

Then they must format this tentative answer in terms of the response options actually provided.

FINALLY

they must decide whether they
WANT to report the response
OR
want to edit it in some way.
(With a high number, they don’t want to look bad)

10
Q

Describe Context Effects on Survey Responses and give examples

A

Context Effects on Survey Responses

Unintended influences on respondents’ answers
= context effects because they are not related to the content of the item but to the context in which the item appears (Schwarz & Strack, 1990)[3].

EXAMPLE
Item-order effect = when the order in which the items are presented affects people’s responses.

One item can change how participants interpret a LATER item or change the information that they retrieve to respond to later items.

EXAMPLE Fritz Strack and his colleagues asked college students about both their…
1. General life satisfaction and
2. Dating frequency

When the life satisfaction item came first
LOW CORRELATION between variables.

When the dating frequency item came first, STRONG CORRELATION between variables

= those who date more have a strong tendency to be more satisfied with their lives.

Reporting the dating frequency first made that information more accessible in memory, so that they were more likely to BASE THEIR LIFE SATISFACTION rating on it.

11
Q

The response options provided can also have unintended effects on people’s responses… Explain and give examples…

A

The response options provided can also have unintended effects on people’s responses (Schwarz, 1999)[5].

EXAMPLE

When asked how often they are “really irritated,” with responses ranging from
“less than once a year” to “more than once a month,”

they tend to think of major irritations and report being irritated infrequently.

But with…

“less than once a day” to “several times a month”

they tend to think of minor irritations and report being irritated frequently.

PEOPLE ALSO ASSUME middle response options represent what is normal or typical.
So if they think of themselves as normal or typical, they tend to choose middle response options.

EXAMPLE
TV WATCHING… people are likely to report watching more television when the response options are centered on a middle option of 4 hours than when centered on a middle option of 2 hours.

To mitigate against order effects…

Rotate questions and response items when there is no natural order.

Counterbalancing or randomizing the order of questions in online surveys is good practice and can REDUCE response-order effects. These effects are real: among undecided voters, the first candidate listed on a ballot receives a roughly 2.5% boost simply by virtue of being listed first[6]!
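
A minimal sketch of this kind of per-respondent randomization in Python (the item names are hypothetical, not from the original text):

```python
import random

# Hypothetical item pool; shuffling a fresh copy per respondent is one
# simple way to counterbalance item order when there is no natural order.
items = ["life_satisfaction", "dating_frequency", "income", "health"]

def randomized_order(items, seed=None):
    order = list(items)                  # copy; leave the master list intact
    random.Random(seed).shuffle(order)   # independent shuffle per respondent
    return order

# Each respondent sees the same items, just in a different order
print(randomized_order(items, seed=1))
print(randomized_order(items, seed=2))
```

Every respondent answers the same items, so across many respondents any given item is equally likely to appear early or late.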

12
Q

What are open-ended questions, why are they used and give examples…

A

Open-ended items…
Simply ask a question and allow participants to answer in whatever way they choose. The following are examples of open-ended questionnaire items.

“What is the most important thing to teach children to prepare them for life?”
“Please describe a time when you were discriminated against because of your age.”
“Is there anything else you would like to tell us about?”

USEFUL when researchers do not know how participants might respond

or

when they want to avoid influencing their responses.

Qualitative in nature
- used with vaguely defined research questions
- often in the early stages of a research project.

Open-ended items are…

  • Easier to write because there are no response options to worry about
  • Take more time and effort on the part of participants
  • They are MORE DIFFICULT for the researcher to analyze because the answers must be transcribed, coded, and submitted to some form of qualitative analysis, such as content analysis.

DISADVANTAGE is that respondents are more likely to SKIP open-ended items because they take longer to answer.

BEST TO USE WHEN… the answer is uncertain, or for quantities that can easily be converted to categories later in the analysis.

13
Q

What are CLOSED-ended questions, why are they used and give examples…

A

Closed-ended items

Ask a question and provide a set of response options for participants to choose from.

EXAMPLE

How old are you?

………Under 18
………18 to 34
………35 to 49
………50 to 70
………Over 70

On a scale of 0 (no pain at all) to 10 (worst pain ever experienced), how much pain are you in right now?

Have you ever in your adult life been depressed for a period of 2 weeks or more? Yes No

Used when researchers have… a good idea of the different responses that participants might make.

They are more QUANTITATIVE in nature

Used when interested in a well-defined variable or construct such as… participants’ level of agreement with some statement, perceptions of risk, or frequency of a particular behavior.

Closed-ended items
- MORE difficult to write because they must include an appropriate set of response options
- quick and easy for participants to complete.
- easier for researchers to analyze because the responses can be easily converted to numbers and entered into a spreadsheet.

THEREFORE - MORE common…

14
Q

Explain and give examples - All closed-ended items include a set of response options from which a participant must choose

A

All closed-ended items include a set of response options from which a participant must choose.

For categorical variables like sex, race, or political party preference, the categories are usually LISTED and participants choose the one (or ones) to which they belong.

For quantitative variables, a RATING scale is typically provided.

A rating scale is an ordered set of responses that participants must choose from.

EXAMPLE

The number of response options on a typical rating scale ranges from three to 11—although five and seven are probably most common.

Five-point scales are best for unipolar scales where ONLY ONE construct is tested
EXAMPLE - frequency (Never, Rarely, Sometimes, Often, Always).

Seven-point scales are best for bipolar scales where there is a DICHOTOMOUS spectrum
EXAMPLE - liking (Like very much, Like somewhat, Like slightly, Neither like nor dislike, Dislike slightly, Dislike somewhat, Dislike very much).

For bipolar questions…

Offer an earlier question — that BRANCHES them into an area of the scale;

if asking about liking ice cream,
first ask “Do you generally like or dislike ice cream?”

Once the respondent chooses like or dislike
REFINE it by offering them relevant choices from the seven-point scale.

Branching improves both reliability and validity (Krosnick & Berent, 1993)[7].

BEST TO… ONLY present verbal labels to the respondents but CONVERT them to numerical values in the analyses.

Avoid partial, lengthy, or overly specific labels.

In some cases, the verbal labels can be SUPPLEMENTED with (or even replaced by) meaningful graphics.

The last rating scale…

Visual-analog scale, on which participants make a mark somewhere along the horizontal line to INDICATE the MAGNITUDE of their response.

I——————I——————I

15
Q

Describe the Likert Scale…

A

In the 1930s, researcher Rensis Likert…
CREATED a SCALE for measuring people’s attitudes (Likert, 1932).

It involves presenting people with several statements—including both favorable and unfavorable statements—about some person, group, or idea.

Respondents then express their agreement or disagreement with each statement on a 5-point scale: Strongly Agree, Agree, Neither Agree nor Disagree, Disagree, Strongly Disagree.

Numbers are assigned to each response and then summed across all items to produce a score representing the attitude toward the person, group, or idea.

For items that are phrased in an opposite direction (e.g., negatively worded statements instead of positively worded statements), reverse coding is used so that the numerical scoring of statements also runs in the opposite direction.

IMPORTANT!!!
UNLESS you are measuring people’s attitude toward something by assessing their level of agreement with several statements about it, it is BEST TO AVOID calling it a Likert scale.

You are probably just using a “rating scale.”
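
The scoring procedure above can be sketched in Python. This is an illustrative example, not from the original text: five hypothetical statements rated 1 (Strongly Disagree) to 5 (Strongly Agree), with items 2 and 5 negatively worded and therefore reverse coded.

```python
# Hypothetical responses to a 5-item attitude scale
responses = {1: 4, 2: 2, 3: 5, 4: 4, 5: 1}
reverse_coded = {2, 5}   # negatively worded items

def score_item(item, value, scale_max=5):
    # Reverse coding flips the scale: 1 -> 5, 2 -> 4, and so on
    return (scale_max + 1 - value) if item in reverse_coded else value

# Sum across all items to get the overall attitude score
attitude_score = sum(score_item(i, v) for i, v in responses.items())
print(attitude_score)  # 4 + 4 + 5 + 4 + 5 = 22
```

Without reverse coding, agreeing with a negative statement and agreeing with a positive one would pull the total in the same direction, which would make the sum meaningless.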

16
Q

Describe the BRUSO model

A

Writing Effective Items

MINIMIZE - unintended context effects
MAXIMIZE - the reliability and validity of participants’ responses.

BRUSO model

BRUSO stands for

Brief
Relevant
Unambiguous
Specific
Objective

Brevity = easier for respondents to understand and faster for them to complete.
Avoid long, overly technical, or unnecessary words.

Relevant = every item relates to the research question; this also avoids annoying respondents with what they will rightly perceive as irrelevant or even “nosy” questions.

Unambiguous = they can be interpreted in only one way. Part of the problem with the alcohol item presented earlier in this section is that different respondents might have different ideas about what constitutes “an alcoholic drink” or “a typical day.”

Specific = a common problem here is closed-ended items that are “double-barrelled.”

They ask about TWO CONCEPTUALLY SEPARATE issues but allow only one response.

EXAMPLE

“Please rate the extent to which you have been feeling anxious and depressed.”
(This should be separated into TWO ITEMS)

Objective = they aren’t “leading” and don’t REVEAL THE RESEARCHER’S OWN OPINION

17
Q

Give BRUSO examples (poor questions and good questions)

A

Brief
POOR: “Are you now or have you ever been the possessor of a firearm?”
GOOD: “Have you ever owned a gun?”

Relevant
POOR: “What is your sexual orientation?”
Do not include this item unless it is clearly relevant to the research.

Unambiguous
POOR: “Are you a gun person?”
GOOD: “Do you currently own a gun?”

Specific
POOR: “How much have you read about the new gun control measure and sales tax?”
GOOD: “How much have you read about the new sales tax?”

Objective
POOR: “How much do you support the new gun control measure?”
GOOD: “What is your view of the new gun control measure?”

18
Q

How should categorical variables be presented? Explain and give examples

A

For closed-ended items it is also important to create an appropriate response scale.

For categorical variables, the categories presented should generally be…
mutually exclusive and exhaustive. Mutually exclusive categories DO NOT OVERLAP.

EXAMPLE
RELIGION ITEM:

Christian and Catholic = NOT mutually exclusive
Protestant and Catholic = mutually exclusive.

EXHAUSTIVE categories cover all possible responses.

EXAMPLE
Protestant and Catholic = not exhaustive
MANY OTHERS… Jewish, Hindu, Buddhist, and so on.

Listing ALL may not be feasible = include an ‘Other’ category
(Respondents can fill in their own response)

19
Q

How should a numerical scale be presented?

A

For rating scales…

5 - 7 response options generally allow about as much precision as respondents are capable of.

HOWEVER

Numerical scales with more options can sometimes be appropriate.

EXAMPLE:
Attractiveness, pain, likelihood, etc.
0-to-10 scale = easy to use and answer

MAKE SURE - “balanced” around a neutral or modal midpoint.

Bad Example
Unlikely | Somewhat Likely | Likely | Very Likely | Extremely Likely

Balanced Example
Extremely Unlikely | Somewhat Unlikely | As Likely as Not | Somewhat Likely | Extremely Likely

(Don’t always have to have a middle point)

20
Q

Every survey should have a written or spoken introduction that serves two basic functions - Describe the first function

A

Every survey should have a written or spoken introduction that serves two basic functions.

  1. Encourage respondents to participate in the survey.

Survey research usually catches respondents by surprise when they answer their phone, go to their mailbox, or check their e-mail—and the researcher must make a good case for why they should agree to participate.

Thus the introduction should briefly explain the - Purpose of the survey and its importance

  • Provide information about the sponsor of the survey (university-based surveys tend to generate higher response rates)
  • Acknowledge the importance of the respondent’s participation
  • Describe any incentives for participating.
21
Q

Every survey should have a written or spoken introduction that serves two basic functions - Describe the second function

A

Establish informed consent

Remember that this involves…

Describing to respondents everything that might affect their decision to participate.

This includes the topics covered by the survey,

the amount of time it is likely to take,

the respondent’s option to withdraw at any time,

confidentiality issues, and so on.

Written consent forms are not always used in survey research (when the research is of minimal risk, completion of the survey instrument is often accepted by the IRB as evidence of consent to participate),

So it is important that this part of the introduction be…
WELL DOCUMENTED
PRESENTED CLEARLY

22
Q

What are important things to note about intro and structure?

A

Present clear instructions for completing the questionnaire,
(including examples of HOW to use any unusual response scales)

High-interest items at the beginning
Respondents are LEAST fatigued // Start with the MOST important…

Items should also be grouped by topic or by type.

EXAMPLE: items using the same rating scale (e.g., a 5-point agreement scale) should be grouped together
= FASTER AND EASIER

DEMOGRAPHIC items are often presented last because they are LEAST interesting to participants but also easy to answer in the event respondents have become tired or bored.

END = Say THANK YOU!!

23
Q

Describe and give examples of the two types of sampling…

A

Sampling falls into two broad categories.

Probability sampling occurs when the researcher can SPECIFY THE PROBABILITY that each member of the population will be selected for the sample. (Eg. Census)

Non-probability sampling which occurs when the researcher CANNOT specify these probabilities.

EXAMPLE

Convenience sampling - studying individuals who happen to be nearby and willing to participate - is a very common form of non-probability sampling used in psychological research. (Eg. Uni Campus)

Other forms of non-probability sampling include…

Snowball sampling (in which EXISTING research participants help recruit additional participants for the study)

Quota sampling (in which SUBGROUPS in the sample are recruited to be proportional to those subgroups in the population)

Self-selection sampling (in which individuals CHOOSE to take part in the research on their own accord, without being approached by the researcher directly).

24
Q

Why do survey researchers use probability samples?

A

GOAL of most survey research is to make accurate estimates about what is TRUE in a particular population

These estimates are most accurate when based on a probability sample.

EXAMPLE - it is important for survey researchers to base their estimates of election outcomes—which are often decided by only a few percentage points—on probability samples of LIKELY REGISTERED VOTERS.

25
Q

What does probability sampling require?

A

Compared with non-probability sampling…

Probability sampling requires…

Very clear specification of the population (which of course depends on the research questions to be answered).

Example…

The population might be
- all registered voters in Washington State
- all American consumers who have purchased a car in the past year
- women in Seattle over 40 years old who have received a mammogram in the past decade
- or all the alumni of a particular university

Once the population has been specified…

Probability sampling requires a sampling frame.

= a list of all the members of the population from which to select the respondents.

Sampling frames can come from...
  • Telephone directories
  • lists of registered voters
  • hospital or insurance records.
  • a map can serve as a sampling frame, allowing for the selection of cities, streets, or households.
26
Q

How is simple random sampling done?

A

Simple random sampling is done in such a way that each individual in the population has an equal probability of being selected for the sample.

Eg. Drawing names from a mixed hat containing all the names

Given that most sampling frames take the form of computer files

Random sampling is more likely to involve computerized sorting or selection of respondents.

A common approach in telephone surveys is…
random-digit dialing, in which a computer randomly generates phone numbers…
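
A minimal sketch of computerized simple random sampling from a sampling frame (the frame contents here are made up for illustration):

```python
import random

# A sampling frame: a list of every member of the specified population
frame = [f"voter_{i}" for i in range(10_000)]

# Simple random sample: each member has an equal probability of selection,
# drawn without replacement
sample = random.sample(frame, k=1_000)

print(len(sample))        # 1000
print(len(set(sample)))   # 1000 (no one is selected twice)
```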

27
Q

Describe and give examples of 3 alternatives to simple random sampling

A

ALTERNATIVE to simple random sampling is…

Stratified random sampling in which the population is divided into DIFFERENT subgroups or “strata” (usually based on demographic characteristics) and then a random sample is taken from each “STRATUM.”

Proportionate stratified random sampling can be used to SELECT a sample in which the proportion of respondents in each of various subgroups MATCHES the proportion in the population.

EXAMPLE

About 12.6% of the American population is African American — stratified random sampling can be used to ensure that a survey of 1,000 American adults includes about 126 African-American respondents.

Disproportionate stratified random sampling can also be used to SAMPLE EXTRA respondents from particularly small subgroups—allowing VALID conclusions to be drawn about those subgroups.

EXAMPLE

Because Asian Americans make up a relatively small percentage of the American population (about 5.6%), a simple random sample of 1,000 American adults might include too few Asian Americans to draw any conclusions about them as distinct from any other subgroup.

If representation is important to the research question, however, then disproportionate stratified random sampling could be used to ensure that ENOUGH Asian-American respondents are INCLUDED in the sample to draw valid conclusions about Asian Americans as a whole.
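
Proportionate stratified random sampling can be sketched as follows. The 12.6% figure comes from the text; the sampling frame itself is fabricated for illustration.

```python
import random

# Strata of a hypothetical 100,000-person frame, split 12.6% / 87.4%
strata = {
    "african_american": [f"aa_{i}" for i in range(12_600)],
    "other":            [f"ot_{i}" for i in range(87_400)],
}
population_size = sum(len(members) for members in strata.values())
sample_size = 1_000

sample = []
for name, members in strata.items():
    # Sample from each stratum in proportion to its share of the population
    k = round(sample_size * len(members) / population_size)
    sample.extend(random.sample(members, k))

print(len(sample))  # 126 + 874 = 1000
```

For the disproportionate variant, k for a small stratum would simply be set higher than its population share, with the oversampling corrected by weighting at analysis time.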

28
Q

Describe and give examples of cluster sampling…

A

Yet another type of probability sampling is…

Cluster sampling in which LARGER clusters of individuals are randomly sampled and then individuals within each cluster are randomly sampled.

This is the ONLY probability sampling method that does not require a sampling frame.

EXAMPLE

Sample of small-town residents in Washington…

a researcher might RANDOMLY select several small towns and then RANDOMLY select several individuals within each town.

Cluster sampling is especially USEFUL for = face-to-face interviewing
As it MINIMIZES the amount of traveling that the interviewers must do.

EXAMPLE

Instead of traveling to 200 small towns to interview 200 residents, a research team could travel to 10 small towns and interview 20 residents of each.

Eg. The National Comorbidity Survey was done using a form of cluster sampling.
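
The two-stage selection described above can be sketched like this (the town and resident data are hypothetical):

```python
import random

# 200 hypothetical small towns of 500 residents each
towns = {f"town_{t}": [f"resident_{t}_{r}" for r in range(500)]
         for t in range(200)}

# Stage 1: randomly sample clusters (towns)
selected_towns = random.sample(list(towns), k=10)

# Stage 2: randomly sample individuals within each selected cluster
sample = [person
          for town in selected_towns
          for person in random.sample(towns[town], k=20)]

print(len(sample))  # 10 towns x 20 residents = 200 interviews
```

Note that only a roster of towns is needed up front; the list of residents is needed only for the 10 towns actually selected, which is why no full sampling frame is required.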

29
Q

How large does a survey sample need to be? Describe and explain

A

How large does a survey sample need to be?

Depends on two factors

Level of confidence in the result that the researcher wants. The larger the sample, the closer any statistic based on that sample will tend to be to the corresponding value in the population.

Practical constraints in the form of the budget of the study. Larger samples provide greater confidence, but they take more time, effort, and money to obtain.

As such - samples commonly range from 100 to about 1,000.

Conducting a power analysis prior to launching the survey helps to guide the researcher in making this trade-off.

30
Q

Why is a sample of about 1,000 considered to be adequate for most survey research—even when the population is much larger than that?

A

Consider - example

Sample of only 1,000 American adults is generally considered a GOOD sample of the roughly 252 million adults in the American population—even though it includes only about 0.0004% of the population! Why??

A statistic based on a larger sample will tend to be closer to the population value and that this can be characterized mathematically.

EXAMPLE

In a sample of registered voters…
- exactly 50% say they intend to vote for the incumbent.
- If there are 100 voters in this sample
- then there is a 95% chance that the true percentage in the population is between 40 and 60.

BUT

if there are 1,000 voters in the sample, then there is a 95% chance that the true percentage in the population is between 47 and 53.

Although this “95% confidence interval” continues to SHRINK as the sample size increases, it does so at a slower rate.

For example

if there are 2,000 voters in the sample, this only reduces the 95% confidence interval to 48 to 52. In many situations, the small increase in confidence beyond a sample size of 1,000 is not considered to be worth the additional time, effort, and money.

CONFIDENCE INTERVALS depend ONLY on the size of the sample and NOT on the size of the population.

So a sample of 1,000 would produce a 95% confidence interval of 47 to 53 REGARDLESS of whether the population size was a hundred thousand, a million, or a hundred million.
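
The intervals quoted above follow from the standard normal-approximation formula for a sample proportion, p ± 1.96 × sqrt(p(1 − p)/n). A short sketch reproducing the numbers:

```python
import math

def confidence_interval(p_hat, n, z=1.96):
    """95% CI for a sample proportion (normal approximation).
    Note that only n appears here - the population size plays no role."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - z * se, p_hat + z * se)

for n in (100, 1_000, 2_000):
    lo, hi = confidence_interval(0.50, n)
    print(f"n={n}: {lo:.0%} to {hi:.0%}")
```

Running this prints roughly 40% to 60% for n = 100, 47% to 53% for n = 1,000, and 48% to 52% for n = 2,000, matching the card and showing the diminishing returns of larger samples.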

31
Q

Describe Sampling Bias

A

Sampling bias occurs when a sample is selected in such a way that it is NOT representative of the entire population and therefore produces INACCURATE results.

EXAMPLE

This is why… Literary Digest straw poll was so far off in its prediction of the 1936 presidential election.

The mailing lists used came largely from telephone directories and lists of registered automobile owners

Which over-represented wealthier people who were more likely to vote for Landon.

Gallup was successful because he knew about this bias and found ways to sample less wealthy people as well.

32
Q

Describe Non-response bias

A

Non-response bias

Survey Non-responders…
- Died
- choose not to participate (ie. on principle OR no time)
- moved away
- not interested

If these survey non-responders DIFFER from survey responders in systematic ways
= non-response bias.

EXAMPLE

Mail survey - alcohol consumption

ONLY about HALF the sample responded after the initial contact and two follow-up reminders

DANGER HERE… is that the half who responded might have different patterns of alcohol consumption than the half who did not, which could -
…lead to inaccurate conclusions on the part of the researchers.

So to test for non-response bias, the researchers later made unannounced visits to the homes of a subset of the non-responders—coming back up to five times if they did not find them at home.

They found that the original non-responders included an especially high proportion of abstainers (nondrinkers), which meant that their estimates of alcohol consumption based only on the original responders were TOO HIGH.

33
Q

How do you MAXIMISE THE RESPONSE RATE?

A

Although there are methods for statistically correcting for non-response bias, they are based on assumptions about the non-responders—for example, that they are more similar to late responders than to early responders—which may not be correct.

For this reason, the best approach to minimizing non-response bias is to minimize the number of non-responders

= MAXIMISE THE RESPONSE RATE

There is a large research literature on the factors that affect survey response rates (Groves et al., 2004)[2].

In-person interviews have the HIGHEST response rates, followed by…
telephone surveys,
mail surveys,
and Internet surveys.

Other factors that INCREASE response rates are…
- sending potential respondents a short pre-notification MESSAGE informing them that they will be asked to participate in a survey in the near future
- sending simple follow-up reminders to non-responders after a few weeks

PERCEIVED length and complexity MAKE a DIFFERENCE,

so KEEP IT as short, simple, and on topic as possible.

OFFERING INCENTIVES - like cash - is a reliable way to increase response rates.

However, ethically…

Incentives TOO BIG = considered coercive.

34
Q

Benefits and restrictions of survey execution methods?

A

Conducting the Survey

The four main ways to conduct surveys are through
in-person interviews
telephone
mail
online

The choice depends on the researcher's goals and budget.

In-person interviews have the HIGHEST response rates and provide the closest personal contact with respondents.

Personal contact can be important
EXAMPLE

Mental Health Interviews
When the interviewer must see and make judgments about respondents, as is the case with some mental health interviews.

In-person interviewing = most expensive

Telephone surveys have LOWER response rates and still provide some personal contact with respondents.

Can be costly = but are generally less so than in-person interviews.

Traditionally, telephone directories have provided fairly comprehensive sampling frames.

However, this is LESS TRUE today = more CELLPHONES // FEWER directory listings

Mail surveys are less costly still but generally have even lower response rates—making them most susceptible to non-response bias

35
Q

Why use internet surveys?

A

Internet surveys are becoming more common.

Easy to construct and use

Although initial contact can be made by mail with a link provided to the survey, this approach does not necessarily produce higher response rates than an ordinary mail survey.

A better approach is to make initial contact by email with a link directly to the survey.

GOOD when the population consists of the members of an organization who have KNOWN EMAIL ADDRESSES & regularly use them (e.g., a university community).

Alternatively, a request to participate in the survey, with a link to it, can be posted on websites known to be visited by members of the population.

36
Q

Describe and explain - Three preconceptions and findings about data collected in web-based studies

A

Three such preconceptions about data collected in web-based studies:

  1. Preconception
    Internet samples are NOT demographically diverse
    Finding
    Internet samples are more diverse than traditional samples in many domains, although they are NOT completely representative of the population
  2. Preconception
    Internet samples are maladjusted, socially isolated, or depressed
    Finding
    Internet users do not differ from nonusers on markers of adjustment and depression
  3. Preconception
    Internet-based findings DIFFER from those OBTAINED with other methods
    Finding
    Evidence so far suggests that internet-based findings are CONSISTENT with findings based on traditional methods (e.g., on self-esteem, personality), but more data are needed
37
Q

Describe and discuss - Online Survey Creation

A

Online Survey Creation

After a questionnaire is created, a link to it can then be emailed to potential respondents or embedded in a web page.

The following websites are among those that offer free accounts.

Although the free accounts limit the number of questionnaire items and the number of respondents, they can be useful for doing small-scale surveys and for practicing the principles of good questionnaire construction. Here are some commonly used online survey tools:

SurveyMonkey—https://surveymonkey.com
PsyToolkit—https://www.psytoolkit.org/ (free, noncommercial, and does many experimental paradigms)
Qualtrics—https://www.qualtrics.com/
PsychData—https://www.psychdata.com/
A small note of caution: data from US survey software are held on US servers and are subject to seizure under the Patriot Act. To avoid infringing on any rights, the following is a list of online survey sites that are hosted in Canada:

Fluid Surveys—http://fluidsurveys.com/
Simple Survey—http://www.simplesurvey.com/
Lime Survey—https://www.limesurvey.org
There are also survey sites hosted in other countries outside of North America.

Another newer tool for survey researchers is Mechanical Turk (MTurk), created by Amazon.com (https://www.mturk.com). Originally created for simple usability testing, MTurk has a database of over 500,000 workers from over 190 countries[4].

You can…

post simple tasks (for example, different question wording to test your survey items),

set parameters as your sample frame dictates,

and deploy your experiment at a very low cost (for example, a few cents for less than 5 minutes).

MTurk has been lauded as an inexpensive way to gather high-quality data.