Non-experimental research – survey methods Flashcards

1
Q

key issues for non-experimental research

A

Third variable problem: an observed relation between two variables may be the result of some third, unspecified variable.
Direction of causation problem: a correlation does not indicate which variable is the cause and which is the effect.

2
Q

Nonprobability sampling

A

Convenience sampling – researcher requests volunteers from a group of people who meet the general requirements of the study
Snowball sampling – previous participants recruit additional participants through their network
Purposive sampling – researcher targets a particular group of individuals in a nonrandom way
Quota sampling – proportions of some subgroups are the same as subgroup proportions in the population but nonrandomly selected
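As a rough sketch of quota sampling, the code below fills subgroup quotas in whatever order people turn up, so the subgroup proportions match the population even though selection is nonrandom (the sampling frame and group labels are invented for illustration):

```python
def quota_sample(frame, quotas, size):
    """Take people in whatever order they appear (nonrandom) until each
    subgroup's quota, proportional to the population, is filled."""
    targets = {group: round(share * size) for group, share in quotas.items()}
    sample = []
    for person in frame:          # first-come, first-served: NOT random
        group = person["group"]
        if targets.get(group, 0) > 0:
            sample.append(person)
            targets[group] -= 1
    return sample

# Hypothetical sampling frame: 60% group "A", 40% group "B"
frame = [{"id": i, "group": "A" if i % 5 < 3 else "B"} for i in range(100)]
sample = quota_sample(frame, quotas={"A": 0.6, "B": 0.4}, size=10)
```

The sample mirrors the population's 60/40 split, but because volunteers are taken as they come, it can still be biased in ways a probability sample would not be.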

3
Q

Overview

A
  • Using surveys to measure subject variables
  • Defining and evaluating constructs
  • Using surveys in correlational designs
  • Designing & evaluating survey questions
4
Q

Subject variables

A
  • Subject variables are attributes that vary across individuals and situations.
  • Subject variables can be studied with a range of methods, including survey methods.
5
Q

Survey methods

A
  • A survey is a descriptive method in which participants are asked a series of questions or given statements to rate.
  • Survey methods can measure almost anything that can be observed, evaluated & reported accurately.
  • Survey methods can be especially useful for measuring psychological dimensions that are difficult to induce or observe, including attitudes, beliefs & behaviours.
6
Q

fundamentals of measurement

A

Validity refers to the accuracy of research: whether a measure assesses what it is intended to measure.

Reliability refers to the consistency of research: whether a measure yields similar results on repeated use.

7
Q

surveys in correlational designs

A

TV habits of 875 3rd graders in Hudson Valley, NY, were evaluated with three questions in a 286-item home interview.
Each child rated every other child in his classroom on 10 aggressive behaviors.
There was a strong positive relationship between the violence rating of favorite programs, whether reported by mothers or fathers, and the peer-rated aggression of boys.

Ten years later, Eron & colleagues evaluated TV habits & aggression in 427 of the original 875 children. They reported a significant cross-lagged correlation between violent TV in 3rd grade and aggression 10 years later, but only for boys. Partial correlations supported their interpretation that TV violence causes aggression.

They tried to control for third variables using partial correlations.
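The third-variable logic can be sketched numerically. This is a minimal illustration with made-up data (the variable names are hypothetical stand-ins, not Eron's actual measures): a partial correlation regresses the third variable out of both measures and then correlates the residuals.

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after regressing z out of both."""
    def residuals(v):
        A = np.column_stack([z, np.ones_like(z)])   # predictor + intercept
        coef, *_ = np.linalg.lstsq(A, v, rcond=None)
        return v - A @ coef
    return np.corrcoef(residuals(x), residuals(y))[0, 1]

# Made-up data where a shared third variable z drives both measures
rng = np.random.default_rng(0)
z = rng.normal(size=200)          # hypothetical third variable
x = z + rng.normal(size=200)      # e.g., "violent TV viewing"
y = z + rng.normal(size=200)      # e.g., "peer-rated aggression"
raw = np.corrcoef(x, y)[0, 1]     # inflated by the shared cause
partial = partial_corr(x, y, z)   # shrinks toward zero once z is removed
```

Here the raw correlation is sizeable only because z causes both variables; partialling z out exposes that, which is exactly the check the researchers attempted.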

8
Q

correlation and causation

A
  • When survey research methods are used in correlational designs, they are susceptible to problems of causal inference:
    • Direction of causation problem
    • Third variable problem
  • Longitudinal designs help to address the direction of causation problem.
  • In some surveys the third variable problem can be addressed with statistical techniques.
9
Q

designing survey questions

A

There are standard procedures for developing surveys & questionnaires to ensure that instruments are valid & reliable.
The most fundamental issue is whether your survey or questionnaire is fit for purpose:
Given your aim, are you measuring the right thing?
Survey research depends on people being able to observe, evaluate & report the target variable accurately.

10
Q

subjectivity and objectivity

A

Approaches to measurement include experience, behaviour & performance, which differ in their degree of subjectivity.

11
Q

Surveys & questionnaires may seem simple, but they involve complex cognitive processes

A
  • Understanding the question
  • Retrieving relevant information
  • Forming a judgment
  • Formatting the judgment to fit response alternatives
  • Editing the answer
12
Q

Designing survey questions
The wording of survey questions is important

A
  • Avoid ambiguity
  • Avoid leading questions
  • Don’t ask two questions in one item
  • Aim for simplicity
  • Use complete sentences
  • Avoid abbreviations, slang & jargon
13
Q

Evaluating survey questions

A

The legacy of introspectionism has influenced how survey questions are evaluated.
Just as Dallenbach and his colleagues reported on their experience of completing a task, we can ask participants to report on their experience whilst completing a survey.

14
Q

Cognitive interviews

A

Cognitive interviews are an important tool for evaluating survey questions.
  • Ronald Fisher & Edward Geiselman developed the cognitive interview technique to increase the accuracy of eyewitness testimony.
  • The basic principle is to match the context of testimony to the context of the event to be remembered.
Cognitive interviews are useful for pilot testing survey questions.

15
Q

Psychometric evaluation

A
  • The underlying factor structure of surveys can be evaluated with factor analysis.
  • Factor analysis is a multivariate analysis in which a large number of variables are correlated with each other.
  • Factor analysis allows researchers to identify which survey questions or items cluster together to form a scale.
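A minimal sketch of the clustering idea, using simulated data and an eigendecomposition of the item correlation matrix rather than a full factor-analysis package (the "anxiety" and "mood" constructs are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
anxiety = rng.normal(size=n)   # hypothetical latent construct 1
mood = rng.normal(size=n)      # hypothetical latent construct 2

# Six survey items: three load on each construct, plus response noise
cols = [anxiety + 0.5 * rng.normal(size=n) for _ in range(3)]
cols += [mood + 0.5 * rng.normal(size=n) for _ in range(3)]
items = np.column_stack(cols)

R = np.corrcoef(items, rowvar=False)   # 6x6 inter-item correlation matrix
eigvals = np.linalg.eigvalsh(R)        # ascending order
# Two eigenvalues dominate, indicating the six items form two clusters,
# i.e., two scales underlie the questionnaire
```

In practice researchers would use a dedicated factor-analysis routine with rotation, but the core signal is the same: items that correlate strongly cluster onto a shared factor.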
16
Q

The reliability of survey questions can be evaluated with correlational analyses

A
  • Inter-rater reliability – correlation between the ratings of two researchers
  • Test-retest reliability – correlation between test scores from different times
  • Split-half reliability – correlation between scores on the 1st half of the test and the 2nd half
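A sketch of the split-half idea with simulated test data. The odd/even split and the Spearman-Brown correction (stepping the half-length correlation up to full test length) are common conventions, not requirements:

```python
import numpy as np

def split_half_reliability(scores):
    """Correlate summed odd-item and even-item scores, then apply the
    Spearman-Brown correction up to full test length."""
    odd = scores[:, 0::2].sum(axis=1)
    even = scores[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

# Simulated data: 300 respondents answer 10 items driven by one trait
rng = np.random.default_rng(2)
trait = rng.normal(size=(300, 1))                 # hypothetical true score
scores = trait + 0.8 * rng.normal(size=(300, 10)) # items = trait + noise
rel = split_half_reliability(scores)              # high: items are consistent
```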
17
Q

The validity of survey questions can also be evaluated with correlational analyses

A
  • Convergent validity – correlation with accepted measures of the same construct
    ○ Concurrent validity – correlation with a current behaviour or measure
    ○ Predictive validity – correlation with a future behaviour or measure
  • Discriminant validity – no correlation with measures of other constructs
18
Q

summary

A

Survey methods are an excellent source of descriptive data.
Survey methods are primarily used to obtain information about attitudes, beliefs, and/or preferences.
To maximise validity & reliability, survey methods require strong operational definitions of variables, iterative evaluation, and careful attention to sampling.

19
Q

survey research

A

A structured set of questions or statements given to a group of people to measure their attitudes, beliefs, values, or tendencies to act.

20
Q

sampling issues in survey research

A

Survey research is most likely to use probability sampling. Although an entire population is not often tested in a study, the researcher hopes to draw conclusions about the population based on the sample. In survey research it's important for the sample to reflect the attributes of the target population as a whole; if the sample is not representative of the population, the sample is potentially biased.

Self-selection bias is typical of surveys that appear in popular magazines and newspapers. A survey will appear along with an appeal to readers to reply, usually online; the results of those who reply are then reported, usually with the implication that they are valid. The person reporting the survey will try to impress you with the total number of returns rather than the representativeness of the sample.

21
Q

surveys vs psychological assessment

A

Most surveys include various questions that are delivered online, through the mail, or administered in some direct fashion. It is important to note that surveys tend to assess attitudes, beliefs, opinions, and projected behaviours. In contrast, psychological tests can include questionnaires, but such tests are used in more formal assessments of psychological functioning, usually in a clinical setting. Psychological tests have usually undergone rigorous tests of reliability and validity in order to establish a measure that accurately reflects some psychological construct, such as personality, depression, or self-esteem.

22
Q

creating an effective survey

A

What is needed in a good survey can differ slightly depending on the format (interview, written, etc.). Written questionnaire surveys can be delivered in person, in an interview, mailed to potential survey-takers, conducted over the telephone, or done online.

When designing a survey, the researcher must create items that effectively answer the empirical questions at hand and must be very careful with the structure and wording of items.

Survey research begins with empirical questions that develop into hypotheses, which are then tested by collecting data.

23
Q

types of survey questions or statements

A

Once an empirical question is framed and its terms operationally defined, the researcher decides on the type of items to use and the wording of those items. Survey items can be phrased as a question or a statement. When questions are asked, they can be either open-ended or closed.

An open-ended question requires a response beyond a yes or no: participants must provide narrative information. Open-ended questions can be useful for eliciting a wide range of responses, including some never conceived of by the researchers. They can also increase the respondents' sense of control while completing the survey. Because they can produce a wide range of responses, however, open-ended questions are difficult to score and can add considerably to the time required to complete the survey, so they should be used sparingly. One good method is to give respondents an opportunity to elaborate on their responses to closed questions. Another good use of open-ended questions is in a pilot study, as a way of identifying alternatives for a subsequent questionnaire composed of closed items.

Some survey items can be 'partially' open by including a specific checklist, ending it with an 'other' category and allowing respondents to write in their responses.

A closed question can be answered with a yes or no, or by choosing a single response from among several alternatives. Closed items are often phrased as statements, and respondents are asked to indicate their level of agreement with each statement. An interval scale can help the researcher better ascertain the intensity of one's attitude or opinion about a particular item. The most common type of interval scale used in surveys is the Likert scale, which typically has from 5 to 9 points, with each point reflecting a score on the continuum. Respondents circle or state the number or label on the scale that indicates what they believe. Because the numbers correspond to particular response options, responses from all the participants can be summarised with means and SDs. When using a Likert scale, it's good to word some of the statements favourably and some unfavourably.

Response bias – response acquiescence is a tendency to agree with statements. To avoid this, surveys with Likert scales typically balance favourable and unfavourable statements. This forces respondents to read each item carefully and make item-by-item decisions.

It's also important to be concerned about the sequencing of the items in a survey. Start the survey with questions that are not especially personal and are both easy to answer and interesting.
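The reverse-scoring and mean/SD summary steps for balanced Likert items can be sketched as follows (a minimal example with hypothetical 5-point responses):

```python
import numpy as np

def score_likert(responses, reverse_items, scale_max=5):
    """Reverse-score unfavourably worded items so a high score always means
    the same attitude direction, then summarise totals with mean and SD."""
    scored = responses.astype(float)                 # work on a float copy
    scored[:, reverse_items] = (scale_max + 1) - scored[:, reverse_items]
    totals = scored.sum(axis=1)
    return totals.mean(), totals.std(ddof=1)

# Hypothetical 5-point responses: 4 people x 4 items;
# items at index 1 and 3 are unfavourably worded, so they get reversed
responses = np.array([
    [5, 1, 4, 2],
    [4, 2, 5, 1],
    [3, 3, 3, 3],
    [5, 2, 4, 1],
])
mean_total, sd_total = score_likert(responses, reverse_items=[1, 3])
```

On a 5-point scale, reversing maps 1↔5 and 2↔4 (i.e., 6 minus the raw score), so agreement with an unfavourable item counts against the total rather than for it.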

24
Q

assessing memory and knowledge

A

Surveys sometimes attempt to assess the respondent's memory or what they know. There are two guidelines for this: don't overburden memory, and use DK ('don't know') alternatives sparingly.

Asking how often respondents have done certain things in the past requires them to remember past events, and making the interval too long increases the chances of memory errors. One way to aid memory is to provide lists.

When inquiring about what a person knows, there is always the chance the honest answer will be “I don’t know.” Hence, survey items that deal with knowledge often include what is called a DK alternative (“don’t know”). Some experts discourage the use of DK alternatives because respondents might overuse them, conservatively choosing DK even if they have some knowledge of the issue. Survey results with a lot of DK answers are not useful. On the other hand, omitting DK as a choice might force respondents to mentally flip a coin on items about which they are truly ignorant (Schuman & Presser, 1996). One way to include DK choices, while encouraging respondents to avoid overusing them, is to disguise knowledge questions by prefacing them with such statements as “Using your best guess . . .” or “Have you heard or have you read that . . .” (examples from Fink, 1995, p. 76). Also, DK alternatives should be used only when it is reasonable to expect that some respondents will have no idea what the answer might be (Patten, 1998).

25
Q

adding demographic information

A

Demographic information is the basic data that identify the characteristics of a survey respondent. These data can include age, gender, socioeconomic status, marital status, and so on. Sometimes the empirical question will determine the type of demographic information needed. Including demographic information enables the survey researcher to group the results by demographic categories. In general, it's good to put questions about demographic information at the end of a survey: if you start the survey with them, participants might become bored and not attend to the key items as well as you would like. Also, include only the demographic categories that are important for the empirical question that interests you. The more demographic information you include, the longer the survey and the greater the risk that respondents will tune out or become irritated; some requests for demographic information (e.g., income) can be perceived as invasions of privacy, even when respondents are assured about confidentiality (Patten, 1998).

26
Q

survey wording

A

A major problem in survey construction concerns the wording of the items. Although it is impossible for the survey writer to ensure that all respondents interpret each question or statement the same way, some guidelines can help in the construction of a good survey. The most important one is to conduct a pilot study (Chapter 3) to test the instrument on several groups of friends, colleagues, and even people you don’t like. You will be surprised at how often you think you have a perfect item and then three friends interpret it three different ways. One tip is to define any terms you think could be interpreted in more than one way. Avoid these three specific problems: linguistic ambiguity, asking two questions in one item, and asking leading questions.

First, questions can be ambiguous. Second, survey writers sometimes include too much in an item, resulting in one that actually asks for two responses at once; this is sometimes referred to as a double-barrelled question. Third is what lawyers call a leading question: one that is structured so that it is likely to produce an answer desired by the asker. It's a type of survey bias and is used frequently in the worlds of business and politics.

  • Always opt for simplicity over complexity
  • Use complete sentences
  • Avoid negatively phrased questions; negative statements are more difficult to process than positive ones
  • Use balanced items, not those favouring one position or another
  • Avoid most abbreviations
  • Avoid slang and colloquial expressions
  • Avoid jargon

Another important consideration in creating a survey is to avoid creating any carry‐over effects (see Chapter 6) across items on a survey that may bias survey responses. Carry‐over effects within surveys occur when earlier items may influence responses on later items.

27
Q

collecting survey data

A

All data collection for survey data begins with some form of written survey that is delivered in person in the form of an interview, through the mail, on the phone, or online. In-person surveys may be done in the form of an interview, which we will discuss later. Of course, all surveys begin with well-written survey items that can effectively engage the participant. Each mode of delivery of a survey (i.e., mail, on the phone, and online) has its own unique considerations.

28
Q

in-person interviews

A

The Kinsey report, the most famous sex survey, used the interview format.

The interview format for surveying individuals about attitudes, opinions, beliefs, and the like has the advantages of being comprehensive and highly detailed. Even though the interviewer typically asks a standard set of questions, the skilled interviewer is able to elicit considerable information through follow-up questions or probes. Having an interviewer present also reduces the problem of unclear questions; the interviewer can clarify information on the spot. Sampling is sometimes a problem because, in many cases, sizable segments of the population may not be included if they refuse to be interviewed, cannot be located, or live in an area the interviewer would prefer to avoid. Interviews can also occur in a group format. Besides sampling issues, the other major problems with the interview approach are cost, logistics, and interviewer bias. Interviewers must be hired and trained, travel expenses can be substantial, and interviews might be restricted to a fairly small geographic area because of the logistical problems of sending interviewers long distances. And despite training, there is always the possibility that interviewer bias can affect the responses given in the face-to-face setting.

The careful researcher using interviews will develop a training program to standardize the interview process as much as possible. Certain types of interviewers may be trained for specific purposes.

29
Q

mailed written surveys

A

Written surveys sent through the mail may have problems with how many people actually return a completed survey (as you know, because you’ve probably thrown a few away). Return rates of 85% or higher are considered excellent (and rare), 70–85% very good, and 60–70% more or less acceptable (Mangione, 1998). Anything below 60% makes researchers nervous about whether the data are representative. Another problem with rate of return occurs when people who return surveys differ in some important way from those who don’t return them, a problem called nonresponse bias (Rogelberg & Luong, 1998). When this happens, drawing a conclusion about a population is risky at best. The profile of the typical non‐responder is an older, unmarried male without a lot of education (Mangione, 1998); nonresponse also occurs when people have some attribute that makes the survey irrelevant for them (e.g., vegetarians surveyed about meat preferences). The best chance for a decent return rate for a mailed written survey is when (a) the survey is brief and easy to fill out; (b) the form starts with relatively interesting questions and leaves the boring items (i.e., demographic information) until the end; (c) before the survey is sent out, participants are notified that a survey is on the way and that their help will be greatly appreciated; (d) nonresponse triggers follow‐up reminders, a second mailing of the survey, and perhaps even a phone request to fill out the form; (e) the entire package is highly professional in appearance, with a cover letter signed by a real person instead of a machine; (f) return postage is included; and (g) the recipient has no reason to think the survey is merely the first step in a sales pitch (Fowler, 1993; Rogelberg & Luong, 1998). Return rates can also be improved by including a small gift or token amount of money. 
For example, Dillman et al. (2009) reported a study that increased the response rate from 52% to 64% by adding a token gift of a dollar to the survey mailing. One important problem that exists with all forms of survey research is the social desirability bias. Sometimes people respond to a survey question in a way that reflects not how they truly feel or what they truly believe, but how they think they should respond; that is, they attempt to create a positive picture of themselves, one that is socially desirable.

Assuring survey participants of anonymity can help reduce the social desirability bias, but the problem is persistent and it is hard to gauge its extent. The social psychology literature has a long history of research showing that the attitudes people express on some issue do not always match their behavior. Thus, the results of survey research must be interpreted with this response bias in mind, and conclusions drawn from surveys can be strengthened to the extent that other research provides converging results.

30
Q

phone surveys

A

According to Dillman et al. (2009), phone surveying had its peak popularity in the 1980s: virtually every household had a telephone, households could be selected randomly through a procedure called random-digit dialing, and people were generally open to being surveyed.
And then two developments created serious difficulties for the phone surveying business: telemarketing and cell phones. Telemarketing produced a level of annoyance sufficient to lead to the creation of national do-not-call lists, and the marketing strategy also had the effect of creating high levels of suspicion and distrust in the general public, especially when telemarketers began calls by pretending to conduct a survey when in fact they were selling a product (a ploy known in marketing research as sugging: Selling Under the Guise of a survey). As for cell phones, they have changed the dynamic of phone surveying. When the only phones in the house were landlines, surveyors would call and the unit of measurement typically would be the “household.” With cell phones, however, the unit is the individual, thereby changing the nature of the population. Nonetheless, phone surveying does have positive aspects. Unlike a mailed written survey, there is more human contact when one is speaking over the phone than when one is reading a paper survey. The method combines the efficiency of a mailed survey with the personal contact of an interview. To increase the chances of people responding, one technique used by legitimate phone surveyors is to precede the call with a brief letter, post card, or e-mail alerting the respondent that a phone call is on the way (“and please cooperate, we could really use your help; and we promise it will be brief”). This is an example of the mixed-mode approach (mixing mail and phone) advocated by Dillman et al. (2009).
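The random-digit dialing procedure mentioned above can be sketched as appending random digits to known working exchanges, so that unlisted numbers are reachable too (the exchange prefixes here are hypothetical):

```python
import random

def random_digit_numbers(exchanges, n, seed=42):
    """Append four random digits to known working exchanges, so both
    listed and unlisted numbers can be reached."""
    rng = random.Random(seed)
    return [f"{rng.choice(exchanges)}-{rng.randrange(10000):04d}"
            for _ in range(n)]

# Hypothetical area-code + exchange prefixes
numbers = random_digit_numbers(["212-555", "914-555"], n=5)
```

Because the last four digits are generated rather than drawn from a directory, every number in a working exchange has an equal chance of being dialed, which is what made household selection effectively random.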

31
Q

online surveys

A

One of the most common forms of Internet data collection involves online surveys, which can be accomplished in several ways. First, online surveys can be sent as Internet URLs via e-mail to a selected sample of individuals. E-mail lists can be purchased or, following the ethically dubious lead of spammers, obtained by using “search spiders” that search the Internet for posted e-mails and accumulate e-mail addresses. A second form of online survey is one that can be posted on a listserv or social media site, collecting data from those who choose to respond. A third procedure is to follow a probability sampling procedure with incentives for participating in online surveys. For example, some companies create national samples by randomly sampling addresses rather than phone numbers. They then recruit subjects by regular mail to participate in multiple online surveys by providing the incentive of free Internet access (e.g., Wirth & Bodenhausen, 2009). Several technologies have emerged that allow survey developers to use software to create online surveys and even experiments. For example, SurveyMonkey and Qualtrics are two popular types of software for developing online surveys. Amazon’s Mechanical Turk (MTurk) is a platform that allows researchers to place their surveys and program experiments that can be conducted online. Further, MTurk participants can be paid for their participation for very small amounts of money (e.g., a dollar or two per participant). The main advantage of online surveying is that a large amount of data can be collected in a relatively short time for minimal cost. There are costs for researcher time and surveying software, but usually no postage, phone, or employee costs. And with the Internet open 24 hours a day, online surveying can be completed in less time than other forms of surveys. Problems exist, however.
Although Internet use is widespread, the sample tends to be biased; for instance, responders are unlikely to be representative of all income and education levels. With e-mail surveys, the researcher runs the risk of having the survey appear to be just another piece of spam or, worse, a potential entryway for a virus. By one estimate (Anderson & Kanuka, 2003), 85% of e-mail users delete messages without reading them, at least some of the time. So, at the least, the e-mail survey must have a subject line that will catch the reader’s attention. With web-based surveys, the problem (which you probably have already recognized) is that the sample is bound to be self-selected, resulting in bias. Furthermore, it could be that a teenager out there has little else to do but respond to your survey several hundred times a day for a week or two. Despite the difficulties, however, online surveying occupies a large niche in the 21st-century survey business. As with phone surveying, savvy researchers can use a mixed-mode approach: sending a letter in the mail that appeals to the reader to complete the survey, perhaps including in the letter a website address with a password to enter the survey. Or, as mentioned earlier, survey companies sometimes provide incentives for participation.

32
Q

ethical considerations

A

In survey research, in some cases the APA does not require informed consent (e.g., for anonymous questionnaires), but it is customary for researchers to include consent language in a cover letter that precedes the survey or in the opening screens of an online survey. A second point about survey research and ethics is that decisions affecting people’s lives are sometimes made with the help of survey data, and if the surveys are flawed, biased, or poorly constructed, people can be hurt or, at the least, have their time wasted. Although professional psychologists operating within the APA’s ethics code are unlikely to use surveys inappropriately, abuses nonetheless can occur. The problem is recognized by the judicial system, which has established a set of standards for the use of survey data in courts of law (Morgan, 1990). The standards amount to this: if you are going to collect and use survey data, behave like a professional psychologist; that is, be careful about sampling, survey construction, and data analysis.