Chapter 6 - Surveys and observations: describing what people do - Flashcards
Survey/poll
In this context, survey and poll mean the same thing: both are methods of posing questions to people online, in personal interviews, or in written questionnaires.
Question formats (3)
- Open ended questions
- Forced-choice questions
- Likert scale
Open ended questions- pros and cons
Questions that allow respondents to answer any way they like. Ex- name the public figure you admire the most. Pro- provides researchers with spontaneous, rich information. Con- it's difficult and time-consuming to code and categorize the responses.
Forced choice questions
People give their opinion by picking the best of two or more options. Often used in political polls- participants are asked who they would vote for if the election were held today. Also used to measure personality and experiences- the Narcissistic Personality Inventory and the Adverse Childhood Experiences questionnaire, for example.
Likert scale
People are presented with a statement and are asked to use a rating scale to indicate their degree of agreement. (strongly agree-strongly disagree).
Semantic differential format
A form of the Likert scale- respondents are asked to rate a target object using a numeric scale that is anchored with adjectives. RateMyProfessors (RMP) is an example- professors are rated from 1 ("Profs get Fs too") to 5 ("a real gem").
Leading questions
The wording of the questions leads people to a particular response- the questions suggest a particular viewpoint, leading some people to change their answers. Survey writers should word every question neutrally, avoiding potentially emotional terms
How can researchers measure how much wording matters in a survey?
If researchers want to measure how much the wording matters for their topic, they word each question more than one way- if the results are the same, they can conclude that question wording does not affect people’s responses to that particular topic
Double barreled questions
Questions that ask two questions in one. These questions have poor construct validity because people might be responding to the first half of the question, the second half, or both. Therefore, we can’t know which construct is being measured. The two questions should be separated
Negatively worded questions
Questions that contain negative phrasing and can cause confusion, reducing construct validity. Ex- “abortion should never be restricted”. People who oppose abortion would have to think in double negatives to answer (“I disagree that abortion should never be restricted”) which can be confusing
How can researchers measure the effect of negatively worded questions?
Researchers can ask the question both ways (negatively worded and neutrally worded) and then check the items' internal consistency (using Cronbach's alpha) to see whether people responded similarly to both versions.
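As a rough sketch of that internal-consistency check, Cronbach's alpha can be computed directly from a matrix of item scores. The data and function name below are invented for illustration:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented 1-5 Likert responses: five respondents, three wordings of an item
scores = np.array([
    [5, 4, 5],
    [4, 4, 4],
    [2, 1, 2],
    [3, 3, 2],
    [5, 5, 4],
])
print(round(cronbach_alpha(scores), 2))   # → 0.95
```

A high alpha here would suggest people responded similarly across the wordings; responses to negatively worded items would be reverse-scored before entering the matrix.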
Question order
Earlier questions sometimes change the way respondents understand and answer the later questions. Ex- white people are more likely to say they support affirmative action for minorities if they are asked if they support affirmative action for women- possibly because they want to be consistent
How can researchers control for the effects of question order?
Best way to control this is to prepare different versions of a survey with questions in different sequences- researchers can report the results separately if they differ
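One simple way to generate those alternate versions (a sketch; the question wording here is invented) is to shuffle a shared question bank with a different seed per version, so each ordering can be reproduced later when analyzing results separately:

```python
import random

# Hypothetical question bank; a real survey would load these from a file
questions = [
    "Do you support affirmative action for women?",
    "Do you support affirmative action for minorities?",
    "Do you support need-based college scholarships?",
]

def make_version(qs: list[str], seed: int) -> list[str]:
    """Return one survey version with a reproducibly shuffled question order."""
    rng = random.Random(seed)   # seeded so the exact order can be re-created
    order = list(qs)            # copy so the master list is untouched
    rng.shuffle(order)
    return order

# Three versions of the survey, each with its own fixed question order
versions = {seed: make_version(questions, seed) for seed in range(3)}
```

Because the seed determines the order, responses can later be grouped by version and reported separately if the orderings produce different results.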
Can self-report measures be reliable?
Sometimes. People can give meaningful responses- self-report measures can be accurate and ideal in many situations. In some cases, self-reports are the only measures you can get- only the individual can report how happy they feel, for example. Other experiences are not observable to outsiders, such as whether a person was a victim of violence.
Response sets
A type of shortcut people can take when answering survey questions. Instead of thinking about each question, a person might answer each question positively, negatively, or neutrally, especially at the end of a long survey. Acquiescence and fence sitting are examples
Acquiescence
A potential response set where people say “yes” or “strongly agree” to every item instead of thinking critically about each one. People have a bias to say yes to every item, no matter what it states. Acquiescence threatens construct validity because the survey could be measuring the tendency to agree or lack of motivation to think carefully rather than what it was intended to study
How can acquiescence be controlled for?
One way to control for this is to include reverse-worded items- changing the wording of some of the items to mean the opposite. This can slow people down so they will answer more carefully and increases construct validity- high or low averages would be measuring the actual construct. Con- sometimes reverse-wording results in negatively worded items, which are more difficult to answer
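A minimal sketch of reverse-scoring (the item positions and 1-5 scale range are assumptions): after reverse-worded items are flipped, a yes-sayer's total lands in the middle of the range, while a respondent who genuinely holds the extreme view would still score at the ceiling.

```python
def reverse_score(score: int, low: int = 1, high: int = 5) -> int:
    """Mirror a response on a low..high rating scale (5 -> 1, 4 -> 2, ...)."""
    return high + low - score

# An acquiescent respondent who marks "strongly agree" (5) on every item
responses = [5, 5, 5, 5, 5]
reverse_worded = {1, 3}   # 0-based positions of the reverse-worded items

scored = [reverse_score(r) if i in reverse_worded else r
          for i, r in enumerate(responses)]
print(scored)        # → [5, 1, 5, 1, 5]
print(sum(scored))   # → 17, not the 25 a genuinely extreme respondent would get
```

A truly extreme respondent would answer 1 on the reverse-worded items, which flip back to 5 after scoring, so their total (25) stays distinguishable from the yes-sayer's (17).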
Fence sitting
Playing it safe by answering in the middle of the scale, especially when survey items are controversial (or a question is confusing). Weakens construct validity- middle-of-the-road scores suggest that some respondents don't have an opinion, even though they actually do. It's difficult to distinguish who is unwilling to take a side from who is truly ambivalent.
How can researchers control for fence sitting?
Can control for this by removing the neutral option, although this penalizes people who genuinely have no opinion. Could also use forced-choice questions, but this can frustrate people who feel their opinion falls between the two answer choices.
Socially desirable responding/faking good
When survey respondents give answers that make them look better than they really are. To avoid this, a researcher might ensure that the participants know their responses are anonymous, like by conducting the survey online. However, anonymous respondents might treat the survey less seriously
Ways researchers can control for socially desirable reporting (3)
- Include special survey items designed to identify socially desirable responders- for example, people who agree with "I don't find it difficult to get along with obnoxious people" are probably not answering honestly
- Researchers can also ask people’s friends to rate them
- Researchers might also use computerized measures to evaluate people's implicit opinions about sensitive topics- the Implicit Association Test (IAT) is an example
In which situations would people be reporting more than they can know?
Self-reports can be inaccurate when people are asked to describe why they are thinking, behaving, or feeling the way they do. Nisbett and Wilson (1977) laid out four identical pairs of stockings on a table and asked a group of women to choose which pair they would buy. Almost everyone selected the last pair on the right- people are biased toward the last item they evaluate. When asked to explain their choice, most women insisted it was due to the quality of the stockings and did not seem aware that they were inventing a justification for their preference.
Are self reports of memories of events reliable?
Memories of significant life experiences can be very accurate. Some studies have shown that people accurately recalled their own abuse, even if they were not accurate about details of specific incidents. However, the vividness and confidence of memories are unrelated to their accuracy- years later, people who are extremely confident in their memories are about as likely to be wrong as people who report their memories with little or no confidence.
Are ratings of consumer products accurate?
Online ratings are examples of frequency claims. One study found little correspondence between five-star ratings on Amazon.com and Consumer Reports' ratings of the same products. Instead, consumers' ratings correlated with the product's cost and the prestige of its brand- suggesting that people may not always be able to accurately report on the quality of the products they buy.