Chapter 6: Describing What People Do: Surveys, Observations and Sampling Flashcards

1
Q

In what ways can survey and poll questions be administered?

A
  • by phone, in personal interviews, with paper-and-pencil questionnaires, or over the internet.
2
Q

Open-ended questions?

L> advantages/disadvantages

A
  • questions that allow respondents to answer in any way they see fit
    L> provide spontaneous, rich info
    L> downside: coding and categorizing the responses is time-consuming and difficult.
3
Q

Forced-choice format?

L> ex?

A

people give their opinion by picking the best of two or more options.
L> respondents can be asked their opinion on current issues or their preference between two options
ex: the Narcissistic Personality Inventory

4
Q

Likert Scale?

L> ex?

A
  • people are presented with a statement and asked to use a rating scale to indicate their degree of agreement.
  • anchored by the terms: strongly agree, agree, neither agree nor disagree, disagree, strongly disagree.
  • ex: the rating scale in this app.
5
Q

What is a scale called that does not fully follow the format of a Likert scale?

A
  • Likert-type scale
    L> ex: a 1-5 scale anchored only at the endpoints: 1 = Strongly Disagree, 5 = Strongly Agree
6
Q

Semantic Differential Format?

A
  • people are asked to rate a target object using a numeric scale that is anchored with adjectives.
    ex: rate my prof dot com.
    Easiness:
    easy 1 2 3 4 5 hard
7
Q

Do researchers ever combine formats for a single survey?

A

yes

8
Q

Which is more important: the way a question is worded, the order of the survey questions, or the format of the survey questions?

A
  • the way questions are worded and the order in which they appear are equally important
  • the question format is not as important as the two above.
9
Q

Leading questions?

L> solution?

A
  • questions whose wording subtly suggests the desired response

L> word questions as neutrally as possible

10
Q

How do researchers measure the extent of question wording’s influence on results?

A
  • phrase the same question more than one way and compare the results across the versions
11
Q

Double-barrelled questions?

L> good construct validity? yes or no?

A
  • asking two questions in one
    L> they have poor construct validity…because people might be responding to the first half, the second half, or both parts of the question.
12
Q

Double negatives?

A
  • two negatives in one sentence
    L> for ex: "impossible" and "never"
  • cognitively difficult for people to respond to…causing confusion and therefore reducing construct validity.
  • when possible these should be avoided.
13
Q

Solution to double negatives?

A
  • ask them as two separate questions

- then use Cronbach's alpha to see whether people respond similarly to both questions (internal consistency)
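For the internal-consistency check above, a minimal Python sketch of Cronbach's alpha; the item_scores values are hypothetical, not data from the text.

import numpy as np

def cronbach_alpha(items):
    """items: rows = respondents, columns = questionnaire items."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# hypothetical responses to the two reworded questions (1-5 ratings)
item_scores = np.array([[4, 5], [2, 2], [5, 4], [3, 3], [1, 2]])
print(cronbach_alpha(item_scores))  # values near 1 indicate high internal consistency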

14
Q

What is the most direct way to control for order effects?

A
  • prepare different versions of the survey, with the questions in different orders.
  • if the results for the first order differ from the results for the second…one can report the results for the two orders separately…and it is usually safe to assume that responses to the very first question on a survey are unaffected by any previous question.
15
Q

Response sets?

A
  • a type of shortcut respondents can take when answering a survey…
    L> when answering a set of related questions, people may adopt a consistent way of answering all of them that has little to do with their sincere opinions.
  • e.g., answering positively, negatively, or in between on every item.
    L> hurts construct validity.
16
Q

Acquiescence? (response set)

A
  1. Yea-saying - people say yes or strongly agree to every item
  2. Nay-saying - people say no or strongly disagree to every item.
    - the survey then measures people's laziness or agreeableness…not their opinions… (hurts construct validity)
17
Q

Solution to Acquiescence?

A
  • reverse-worded items.
    L> slow people down so they think more carefully about the questions
  • can help us distinguish the yea-sayers from the true believers.
    ex: on a reverse-worded item the scoring is flipped, so a response that would normally be scored 5 is scored 1 (and vice versa)…people who usually circle the highest number end up scoring closer to the middle of the scale.
    original scale: 1 2 3 4 5 6 7
    ** increases construct validity
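A minimal sketch of reverse-scoring, assuming a 7-point scale as in the "original scale" line above; the responses are made up for illustration.

SCALE_MAX = 7

def reverse_score(raw):
    """Flip a response so 7 becomes 1, 6 becomes 2, ..., 1 becomes 7."""
    return (SCALE_MAX + 1) - raw

# a habitual yea-sayer circling 7 on every item; items 2 and 4 are reverse-worded
raw_responses = [7, 7, 7, 7]
reverse_worded = [False, True, False, True]
scored = [reverse_score(r) if rev else r for r, rev in zip(raw_responses, reverse_worded)]
print(sum(scored) / len(scored))  # 4.0 -> the yea-sayer lands near the middle of the scale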
18
Q

Drawback of reverse-worded items?

A
  • sometimes they are more difficult to answer because they might contain double negatives
19
Q

Fence sitting? (response set)

A
  • when a survey asks controversial questions, some people play it safe and answer in the middle of the scale.
    L> this also happens when a question is confusing or unclear.
  • these people end up seeming like they don't have an opinion when they really do!
20
Q

Solution to fence sitting?

A

- take the neutral response option away
L> i.e., give an even number of response options, with no midpoint
* downside: forces people to choose even when they genuinely are in the middle.
- using a forced-choice format is another way…

21
Q

Socially desirable responding? (response set)

A
  • faking good

- people give responses that make them look better than they really are

22
Q

Socially desirable responding solutions?

A
  • ensure participants know their answers are completely anonymous.
    L> but most people respond this way without even realizing it, so anonymity alone is not enough.
  • identify socially desirable responders and discard their responses from the results
  • put filler items in and mask the purpose of the survey.
  • Implicit Association Test
    L> asks people to respond quickly to negative and positive words mixed with faces of different social groups.
23
Q

When can self report be really inaccurate?

A
  • when people are asked why they are thinking, behaving, or feeling the way they do…they tend to give inaccurate responses, often unintentionally
24
Q

Surveys and polls are good at measuring what people think they are doing and what they think influences their behaviour, but if you want to know what people are really doing, or what really influences their behaviour, what should you do?

A
  • observe them
25
Q

Surveys and polls are most commonly used for what?

A
  • frequency or level claims…
26
Q

Observational research can be the basis for what?

A
  • frequency claims

L> it can also operationalize variables in association and causal claims.

27
Q

Observer bias?

A
  • when observers record what they want to see or expect to see rather than what is really happening…(occurs especially when observers have little or no training)
28
Q

Observer biases can do what to results?

A
  • influence them by changing the behaviour of those being observed (through unintentional cues that affect them) and also by interpreting what they want to see
29
Q

What is a masked / blind study design?

A
  • observers are unaware of the conditions to which participants have been assigned, and sometimes unaware of the purpose of the study in general.
30
Q

Observer effects? (reactivity)

A
  • when people change their behaviour when they know they are being watched.
31
Q

Solutions to observer effects? (3)

A
  1. Hide: make unobtrusive observations…ex: a one-way mirror, or blending in as another face in the crowd in public.
  2. Wait it out: let the participants get used to being observed…eventually they will behave normally.
  3. Measure the behaviour's results: use unobtrusive data…instead of measuring the behaviour itself, the researcher measures the traces that a behaviour leaves behind.
32
Q

Codebooks?

A
  • clear rating scales for observers to make reliable judgements with less bias.
33
Q

Sample?

A

subset of a population

34
Q

Population?

A

set of individuals or objects of interest.

35
Q

What causes sampling biases? (3)

A
  1. Researchers study only the cases they can easily contact (ex: picking psych students)
  2. They study only the cases they are able to contact (the sample they can reach differs from the population to which they want to generalize)
  3. They study only the people who are eager to participate
36
Q

Self-selection?

A
  • a form of sampling bias

- when the sample is known to contain only people who volunteered to participate…(ex: web polls)

37
Q

Probability sampling?

A

drawing the sample at random from the population…the best option…
* every member of the population has an equal chance of being picked

38
Q

Simple random sample?

A
  • the most basic form of probability sampling..

think names in a hat
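A minimal "names in a hat" sketch in Python, using a made-up population list:

import random

# hypothetical population; every member has an equal chance of being drawn
population = ["Ana", "Ben", "Chloe", "Dev", "Ema", "Finn", "Grace", "Hugo"]
sample = random.sample(population, k=3)
print(sample)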

39
Q

Cluster sampling?

A
  • randomly select whole clusters of people within the population, then include every member of the chosen clusters.
    ex: sampling university students
    Province: New Brunswick
    Universities (clusters): UNB, UNBSJ, Mt.A, St.T
    L> randomly sample three of the clusters and include their entire populations in your sample
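A minimal sketch of the cluster step, with made-up student lists standing in for each university's enrolment:

import random

# hypothetical clusters: each university maps to its (made-up) student list
clusters = {
    "UNB":   ["unb_1", "unb_2", "unb_3"],
    "UNBSJ": ["unbsj_1", "unbsj_2", "unbsj_3"],
    "Mt.A":  ["mta_1", "mta_2", "mta_3"],
    "St.T":  ["stt_1", "stt_2", "stt_3"],
}
chosen = random.sample(list(clusters), k=3)            # randomly pick three clusters
sample = [s for uni in chosen for s in clusters[uni]]  # include every student in each chosen cluster
print(chosen, sample)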
40
Q

Multistage sampling?

A

• two random stages: first randomly select clusters, then randomly select people within those clusters.
Province: NB
Universities (clusters): UNB, UNBSJ, Mt.A, St.T
- randomly sampled clusters: UNB, Mt.A
L> then take only a random sample of students from those two…not their entire populations.
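A minimal two-stage sketch continuing the same made-up clusters:

import random

# hypothetical clusters and student lists (illustration only)
clusters = {
    "UNB":   ["unb_1", "unb_2", "unb_3", "unb_4"],
    "UNBSJ": ["unbsj_1", "unbsj_2", "unbsj_3", "unbsj_4"],
    "Mt.A":  ["mta_1", "mta_2", "mta_3", "mta_4"],
    "St.T":  ["stt_1", "stt_2", "stt_3", "stt_4"],
}
stage1 = random.sample(list(clusters), k=2)                          # stage 1: randomly pick two clusters
stage2 = {uni: random.sample(clusters[uni], k=2) for uni in stage1}  # stage 2: random students within each
print(stage2)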

41
Q

Stratified Random Sampling?

A
  • the researcher selects particular demographic categories on purpose and then randomly selects within each category.
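A minimal sketch, assuming two made-up demographic categories:

import random

# hypothetical strata chosen on purpose by the researcher
strata = {
    "urban": ["u1", "u2", "u3", "u4", "u5", "u6"],
    "rural": ["r1", "r2", "r3", "r4", "r5", "r6"],
}
# random selection happens within each category
sample = {group: random.sample(people, k=3) for group, people in strata.items()}
print(sample)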
42
Q

Oversampling?

A

sample = 1,000 Canadians
- the researcher intentionally overrepresents a small group, e.g., South Asian Canadians.
- in Canada they represent about 4% of the population, but 4% of the sample would be too few people to analyze, so they are sampled at 10%…the results are then adjusted (weighted) so the group counts as if it were 4%.
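A rough sketch of the weighting arithmetic (the 4% and 10% figures are from the card; everything else is illustrative):

pop_share = 0.04      # group's share of the population
sample_share = 0.10   # group's share of the oversampled survey

group_weight = pop_share / sample_share              # 0.4: each oversampled respondent counts for less
other_weight = (1 - pop_share) / (1 - sample_share)  # ~1.07: everyone else counts for slightly more

# in a sample of 1,000 (100 in the group, 900 outside it), the weighted totals
# add back up to the population proportions: 40 and 960, i.e., 4% and 96%
print(100 * group_weight, 900 * other_weight)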

43
Q

Systematic Sampling?

A

the researcher starts by selecting two random numbers, e.g., 4 and 7, using a computer…start with the fourth person, and then every 7th person after that joins the sample.
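A minimal sketch using a made-up population of 40 numbered people:

import random

population = list(range(1, 41))  # hypothetical population, people numbered 1-40

start = random.randint(1, 10)    # e.g., 4 -> begin with the 4th person
step = random.randint(2, 10)     # e.g., 7 -> then take every 7th person after that
sample = population[start - 1::step]
print(start, step, sample)       # with 4 and 7: [4, 11, 18, 25, 32, 39]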

44
Q

Random Sampling vs Random Assignment

A
  • Random sampling: drawing the sample via a random method…enhances external validity
  • Random assignment: used only in experimental designs…participants are assigned to different groups (treatment vs comparison) at random…enhances internal validity.
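A minimal sketch contrasting the two, with a made-up population:

import random

population = [f"person_{i}" for i in range(20)]   # hypothetical population

# random SAMPLING: who gets into the study at all (external validity)
sample = random.sample(population, k=8)

# random ASSIGNMENT: which condition each sampled person ends up in (internal validity)
random.shuffle(sample)
treatment, comparison = sample[:4], sample[4:]
print(treatment, comparison)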
45
Q

Convenience sampling?

A
  • samples are chosen merely on the basis of who was easy to access.
    L> common in behavioural research
46
Q

Purposive sampling?

A
  • the researcher studies only certain kinds of people…seeking out only those participants.
47
Q

Snowball sampling?

A
  • when samples are hard to obtain, the researcher asks participants to recruit other people they know.
48
Q

Large sample or small?

A
  • largish
    L> larger sample size = less sampling error…i.e., more accurate estimates.
  • around 1,000 is considered an ideal sample size, as long as the sample was selected randomly
    L> beyond that, it takes many more people to gain only a little extra accuracy
    ** how the sample was picked matters more than its size when trying to generalize.
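A rough illustration of why accuracy gains level off past about 1,000, using the standard 95% margin-of-error approximation of roughly 1/sqrt(n) (not from the card):

import math

# approximate 95% margin of error for a survey proportion, worst case p = 0.5
for n in (100, 1000, 4000, 10000):
    moe = 1 / math.sqrt(n)
    print(n, f"+/- {moe:.1%}")  # 100 -> ~10%, 1000 -> ~3.2%, 4000 -> ~1.6%, 10000 -> ~1.0%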