Chapter 6: Surveys and Observations - Describing What People Do (Flashcards)

1
Q

Survey/poll

A

In this context, survey and poll mean the same thing: a method of posing questions to people online, in personal interviews, or in written questionnaires.

2
Q

Question formats (3)

A
  1. Open-ended questions
  2. Forced-choice questions
  3. Likert scale
3
Q

Open-ended questions- pros and cons

A

Questions that allow respondents to answer any way they like. Ex: name the public figure you admire the most. Pro: provides researchers with spontaneous, rich information. Con: it is difficult and time-consuming to code and categorize the responses.

4
Q

Forced-choice questions

A

People give their opinion by picking the best of two or more options. Often used in political polls, where participants are asked who they would vote for if the election were held today. Also used in psychological measures such as the Narcissistic Personality Inventory and the Adverse Childhood Experiences questionnaire.

5
Q

Likert scale

A

People are presented with a statement and asked to use a rating scale to indicate their degree of agreement (e.g., from strongly agree to strongly disagree).
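
For analysis, Likert responses are usually converted to numbers. A minimal sketch in Python, assuming a hypothetical five-point agreement scale (the labels, values, and responses are illustrative, not from the chapter):

```python
# Hypothetical mapping from Likert labels to numeric codes (illustrative only)
LIKERT_CODES = {
    "strongly disagree": 1,
    "disagree": 2,
    "neither agree nor disagree": 3,
    "agree": 4,
    "strongly agree": 5,
}

responses = ["agree", "strongly agree", "neither agree nor disagree"]
scores = [LIKERT_CODES[r] for r in responses]
print(scores, sum(scores) / len(scores))  # [4, 5, 3] 4.0
```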

6
Q

Semantic differential format

A

A form of the Likert scale in which respondents rate a target object using a numeric scale anchored with adjectives. RateMyProfessors (RMP) is an example: professors are rated from 1 ("Profs get Fs too") to 5 ("a real gem").

7
Q

Leading questions

A

The wording of a question leads people to a particular response: the question suggests a particular viewpoint, leading some people to shift their answers. Survey writers should word every question as neutrally as possible, avoiding potentially emotional terms.

8
Q

How can researchers measure how much wording matters in a survey?

A

If researchers want to measure how much wording matters for their topic, they can word each question more than one way. If the results are the same across wordings, they can conclude that question wording does not affect people's responses on that particular topic.

9
Q

Double-barreled questions

A

Questions that ask two questions in one. These questions have poor construct validity because people might be responding to the first half of the question, the second half, or both, so we can't know which construct is being measured. The two questions should be asked separately.

10
Q

Negatively worded questions

A

Questions that contain negative phrasing, which can cause confusion and reduce construct validity. Ex: "Abortion should never be restricted." People who oppose abortion would have to think in double negatives to answer ("I disagree that abortion should never be restricted"), which can be confusing.

11
Q

How can researchers measure the effect of negatively worded questions?

A

Researchers can ask the question both ways (negatively and neutrally worded) and then examine the items' internal consistency (using Cronbach's alpha) to see whether people responded similarly to both versions.
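
As a rough illustration of that internal-consistency check, here is a minimal sketch of Cronbach's alpha computed by hand with NumPy; the response values are made up, and the negatively worded version is assumed to have been reverse-scored already:

```python
import numpy as np

# Made-up responses to two wordings of the same question, 1-5 Likert scale.
# Rows = respondents, columns = item versions (the negatively worded version
# is assumed to be reverse-scored so higher always means more agreement).
items = np.array([
    [5, 4],
    [4, 5],
    [2, 2],
    [3, 3],
    [1, 2],
], dtype=float)

k = items.shape[1]                              # number of items
item_variances = items.var(axis=0, ddof=1)      # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scores

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")  # ~0.91 here; values near 1 = consistent responding
```

If people answer the two wordings very differently, alpha drops, suggesting the negative wording is changing responses.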

12
Q

Question order

A

Earlier questions can change the way respondents understand and answer later questions. Ex: white respondents are more likely to say they support affirmative action for racial minorities if they are first asked whether they support affirmative action for women, possibly because they want to be consistent.

13
Q

How can researchers control for the effects of question order?

A

The best way to control for this is to prepare different versions of the survey with the questions in different sequences. If the results differ across versions, researchers can report them separately.
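
A minimal sketch of that idea in Python, assuming a hypothetical online survey in which each respondent is randomly assigned a question order (the question text is illustrative):

```python
import random

# Hypothetical items; in the affirmative-action example, some respondents
# would see the "women" item first and others the "racial minorities" item first.
questions = [
    "Do you support affirmative action for women?",
    "Do you support affirmative action for racial minorities?",
]

def assign_order(respondent_id: int) -> list[str]:
    """Return a randomized question order, seeded per respondent so it can be recorded and reproduced."""
    rng = random.Random(respondent_id)
    order = questions[:]   # copy so the master list stays unchanged
    rng.shuffle(order)
    return order

for rid in range(3):
    print(rid, assign_order(rid))
```

Results can then be compared across the different orderings and reported separately if they differ.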

14
Q

Can self-report measures be reliable?

A

Sometimes. People can give meaningful responses, and self-report measures can be accurate and even ideal in many situations. In some cases, self-reports are the only measures available: only the individual can report how happy they feel, for example. Some experiences are also not observable by others, such as whether a person was a victim of violence.

15
Q

Response sets

A

A type of shortcut people can take when answering survey questions. Instead of thinking about each question, a person might answer every question positively, negatively, or neutrally, especially toward the end of a long survey. Acquiescence and fence sitting are examples.

16
Q

Acquiescence

A

A response set in which people say "yes" or "strongly agree" to every item, no matter what it states, instead of thinking critically about each one. Acquiescence threatens construct validity because the survey could be measuring the tendency to agree (or a lack of motivation to think carefully) rather than the construct it was intended to measure.

17
Q

How can acquiescence be controlled for?

A

One way to control for acquiescence is to include reverse-worded items: changing the wording of some items to mean the opposite (these items are reverse-scored before averaging). This can slow people down so they answer more carefully, and it increases construct validity: high or low averages then reflect the actual construct. Con: reverse-wording sometimes produces negatively worded items, which are more difficult to answer.
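
A minimal sketch of reverse-scoring in Python, assuming a hypothetical 1-5 agreement scale and made-up item wording:

```python
# Hypothetical responses on a 1-5 agreement scale (illustrative items only)
responses = {
    "I enjoy meeting new people": 5,
    "I find social events draining": 2,   # reverse-worded item
}
reverse_worded = {"I find social events draining"}

SCALE_MIN, SCALE_MAX = 1, 5

def score(item: str, rating: int) -> int:
    """Flip reverse-worded items so high scores mean the same thing on every item."""
    if item in reverse_worded:
        return SCALE_MAX + SCALE_MIN - rating   # 2 becomes 4, 5 becomes 1, etc.
    return rating

scored = {item: score(item, rating) for item, rating in responses.items()}
print(scored, sum(scored.values()) / len(scored))
```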

18
Q

Fence sitting

A

Playing it safe by answering in the middle of the scale, especially when survey items are controversial or confusing. This weakens construct validity: middle-of-the-road scores suggest that some respondents have no opinion even though they actually do, and it is difficult to distinguish people who are unwilling to take a side from those who are truly ambivalent.

19
Q

How can researchers control for fence sitting?

A

Researchers can control for this by removing the neutral option, although this penalizes people who genuinely have no opinion. They could also use forced-choice questions, but this can frustrate people who feel their opinion falls between the two answer choices.

20
Q

Socially desirable responding/faking good

A

When survey respondents give answers that make them look better than they really are. To reduce this, a researcher might ensure that participants know their responses are anonymous, for example by conducting the survey online. However, anonymous respondents might take the survey less seriously.

21
Q

Ways researchers can control for socially desirable responding (3)

A
  1. Include special survey items that identify socially desirable responders, for example "I don't find it difficult to get along with obnoxious people"
  2. Ask people's friends to rate them instead of relying only on self-report
  3. Use computerized measures to evaluate people's implicit opinions about sensitive topics; the Implicit Association Test is an example
22
Q

In which situations would people be reporting more than they could know?

A

Self-reports can be inaccurate when people are asked to describe why they are thinking, behaving, or feeling the way they do. Nisbett and Wilson (1977) laid out four pairs of identical stockings on a table and asked a group of women to choose which pair they would buy. Women disproportionately selected the pair farthest to the right, likely because people are biased toward the last item they evaluate. When asked to explain their choice, most insisted it was due to the quality of the stockings and did not seem aware that they were inventing a justification for their preference.

23
Q

Are self reports of memories of events reliable?

A

Memories for significant life experiences can be quite accurate. Some studies have shown that people accurately recalled having been abused, even if they were not accurate about the details of specific incidents. However, the vividness and confidence of memories are unrelated to their accuracy: years later, people who are extremely confident in their memories are about as likely to be wrong as people who report their memories with little or no confidence.

24
Q

Are ratings of consumer products accurate?

A

Online ratings are a common basis for frequency claims. One study found little correspondence between five-star ratings on Amazon.com and Consumer Reports ratings of the same products. Instead, consumers' ratings were correlated with the cost of the product and the prestige of its brand, suggesting that people may not always be able to accurately report on the quality of the products they buy.

25
Q

Observational research

A

When a researcher watches people or animals and systematically records how they behave. Many psychologists trust behavioral data more than survey data. Observational research can be used to support all types of claims, including frequency claims (for example, how much food people eat at a restaurant).

26
Q

When are observational studies most useful?

A

Self-reports can tell researchers what people think is causing their behavior, but observational studies are better if you want to know what people are really doing. Ex: the Mehl study, in which samples of students wore a recording device that captured 30-second snippets of sound at random times throughout the day. Women spoke only about 3% more words than men, suggesting that the stereotype of women being more talkative is not accurate. Observation was necessary here because no one could accurately report how many words they spoke in a day.

27
Q

When interrogating the construct validity of any observational measure, we should ask (2)

A

What is the variable of interest, and did the observations accurately measure that variable?

28
Q

Observer bias

A

Occurs when observers' expectations influence their interpretation of participants' behaviors or the outcome of the study: observers rate behaviors not objectively but according to their own expectations or hypotheses. Ex: Langer and Abelson (1974). All participants watched the same tape of a man talking to a professor about his work and life experiences. Some were told he was a patient, others that he was a job applicant. Participants described the man very differently depending on who they were told he was.

29
Q

Observer effects

A

When observers inadvertently change the behavior of those they are observing, so that participants' behavior comes to match the observers' expectations. Ex: Rosenthal and Fode (1963). Researchers gave each student in a psychology course five rats to test as a lab exercise; students timed how long it took their rats to learn a simple maze each day for several days. Every student received a randomly selected group of rats, but the researchers told one group of students that their rats were "maze-bright" and the other group that their rats were "maze-dull". The rats believed to be maze-bright completed the maze a little faster each day and with fewer mistakes.

30
Q

How can observer bias and observer effects be prevented? (3)

A
  1. Training observers
  2. Using multiple observers
  3. Masked design
31
Q

How are observers trained?

A

Researchers develop clear rating instructions, often called codebooks, so observers can make reliable judgements with minimal bias. Codebooks are precise statements of how the variables are operationalized. The more precise the statements are, the more valid the operationalizations will be

32
Q

ICC

A

ICC (intraclass correlation coefficient): a correlation that quantifies the degree of agreement among multiple observers. The closer it is to 1, the more the observers agreed with each other.
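
A minimal sketch of one common ICC variant, Shrout and Fleiss's ICC(1,1), computed from a one-way ANOVA decomposition with NumPy; the ratings matrix is made up for illustration:

```python
import numpy as np

# Made-up ratings: rows = targets being rated, columns = observers
ratings = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 4, 3],
    [1, 2, 1],
], dtype=float)

n, k = ratings.shape
grand_mean = ratings.mean()
target_means = ratings.mean(axis=1)

# One-way ANOVA sums of squares
ss_between = k * np.sum((target_means - grand_mean) ** 2)   # between targets
ss_within = np.sum((ratings - target_means[:, None]) ** 2)  # within targets
ms_between = ss_between / (n - 1)
ms_within = ss_within / (n * (k - 1))

# ICC(1,1) = (MSB - MSW) / (MSB + (k - 1) * MSW)
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print(f"ICC(1,1) = {icc:.2f}")  # ~0.89 here; closer to 1 = stronger agreement
```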

33
Q

Does having multiple observers impact the validity of a study?

A

Using multiple observers allows researchers to assess the interrater reliability of their measures. However, even if an operationalization has good interrater reliability, it might still not be valid: two observers might agree with each other because they share the same biases. In the Langer and Abelson study, participants in the same condition (patient vs. job applicant) agreed with one another because they shared the same biases about the man.

34
Q

Masked design

A

The observers are unaware of (masked to) the purpose of the study and the conditions to which participants have been assigned. In the rat study, the students were not masked: knowing their rats' supposed condition, they unintentionally gave the rats subtle cues that evoked different behavior.

35
Q

Reactivity

A

A change in behavior that occurs when study participants know another person is watching. Ex: when observing a first-grade classroom, the kids will probably be on their best behavior, stare at the observer, and so on.

36
Q

Solutions to reactivity (3)

A
  1. Blend in- make unobtrusive observations so people are less aware of being watched
  2. Wait it out- wait until the people being observed (children in a classroom, for example) forget they are being watched
  3. Measure the behavior's results- measure the traces a behavior leaves behind rather than observing the behavior directly. Ex: in a museum, the amount of wear and tear on the floor can indicate which exhibits are most popular
37
Q

Unobtrusive observations

A

A way to blend in and make yourself less noticeable to prevent reactivity. In a public setting, a researcher might act like a casual onlooker; in a more private setting, an observer could sit behind a one-way mirror.

38
Q

When is it ethical to observe people?

A

Most psychologists believe it is ethical to observe people in museums, classrooms, at sporting events, and at the sinks of public bathrooms, settings where people can reasonably expect their actions to be public. More secretive methods, like one-way mirrors and covert video recording, are considered ethical under some conditions. If hidden video recording is used, the researcher must explain the procedure at the conclusion of the study, and if a person objects to being recorded, the researcher must delete the file without watching it. Institutional review boards assess each study to decide whether it can be conducted ethically.