Ch6 Flashcards
Data for frequency claims come from
- surveys and polls
- behavioral observations
Surveys/polls
method of asking people questions face to face, on the phone, on written questionnaires, or online
Three main subcategories in the construct validity of surveys/polls:
- Choosing question formats
- Writing well-worded questions
- Encouraging accurate responses
Formats of survey questions:
- Open-ended questions
- Forced-choice format
- Likert scale
- Semantic differential format
Open-ended questions, pros and cons
- answer however
Pros: Provides rich info
Cons: coding and categorizing responses is labor-intensive
Forced-choice format (define + example)
- choosing best of two or more options
ex:
- political polls- which candidate
- yes or no: abortion
- which of these two statements describes you?
Likert Scale
- given a statement, choose degree to which it is true
- format: Strongly agree, agree, neither agree nor disagree, disagree, and strongly disagree
- if the format deviates from this, it's a Likert-type scale
ex: easy 1 2 3 4 5 hard
Semantic differential format (define + example)
- rates using numerical scale anchored by adjectives
Ex:
- show up and pass 1 2 3 4 5 hardest thing I’ve ever done
- Five-star internet rating systems- one star = poor, 5 = outstanding
types of wording to avoid to create well-worded questions
- Leading questions
- Double-barreled questions
- Negatively worded questions
Leading question (define + ex)
- wording leads to a particular response
- Questions should be worded as neutrally as possible
Ex: "how fast was the car going when it hit/smashed into the other car?" Using "smashed" leads to higher speed estimates
double-barreled question (define + problem + ex)
- two questions in one
- problem: poor construct validity, can’t tell what construct is being measured OR what the answer is referring to
Ex: do you agree that the 2nd amendment guarantees the right to own a gun and that the 2nd amendment is as important as your other constitutional rights?
negatively worded questions
- double negatives make the question unnecessarily complicated, causing confusion and making it more difficult to answer
Ex: does it seem possible or does it seem impossible to you that the Nazi extermination of the Jews never happened?
how can you control for negatively worded questions?
- asking q’s two ways
- abortion should never be restricted (disagree=1, agree = 5)
- I favor strong restrictions on abortion (disagree=1, agree = 5)
- Then look at the internal consistency between the items using Cronbach's alpha
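The check above can be sketched in code. This is a minimal Python example (not from the chapter) that computes Cronbach's alpha for the two oppositely worded abortion items, after reverse-scoring the second one; the ratings are hypothetical.

```python
# Minimal sketch: Cronbach's alpha for two oppositely worded Likert items.
# Ratings below are hypothetical 1-5 responses from six people.

def variance(xs):
    """Population variance (divide by n)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: one list of scores per item; alpha = k/(k-1) * (1 - sum of
    item variances / variance of total scores)."""
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# "Abortion should never be restricted" (disagree = 1, agree = 5)
item_a = [5, 4, 2, 5, 1, 4]
# "I favor strong restrictions on abortion" -> reverse-score so that
# high numbers mean the same thing on both items
item_b_raw = [1, 2, 4, 2, 5, 1]
item_b = [6 - x for x in item_b_raw]

alpha = cronbach_alpha([item_a, item_b])
print(round(alpha, 2))  # high alpha -> the two items agree with each other
```

A high alpha here suggests people answered the two wordings consistently, i.e., the pair measures one construct rather than a response set.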
question order
- the order questions are asked in can affect responses to a survey
- Earlier q can change the meaning of later q’s
ex:
- “how often do your children play?” different if put after question about music lessons
- affirmative action for women vs. minorities: if asked about women first, white respondents were more likely to support affirmative action for minorities
how to control for question order?
- by having different versions of sequencing for the survey
- report both results separately
when is self report especially important/useful?
- Very useful to report own gender identity, ethnicity, etc.
- Sometimes the only option- content of dreams
types of shortcuts
- Response sets
- Acquiescence
- Fence-sitting
Response sets/ nondifferentiation (define, prevalent when?)
- a shortcut in which someone responds the same way to all questions
- most prevalent in long questionnaires
- Weaken construct validity by not giving truthful answers
Acquiescence/ yeasaying
- type of response set in which someone responds “yes” or “strongly agree” to all q’s - common bias
- Threatens construct validity- measuring tendency to agree or not thinking carefully instead of construct
how to control for acquiescence (+ drawback)
- use reverse-worded items
- Slows people down to answer more carefully
- Drawback- sometimes creates negatively worded items
- Must rescore before computing average- switch 1 and 5
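The rescoring step can be sketched in a line of Python. For a 1-5 scale, subtracting each score from 6 swaps 1 with 5 and 2 with 4 (this generalizes to (low + high) - score); the item name and data are illustrative.

```python
# Minimal sketch: rescoring reverse-worded items on a 1-5 scale.
# After flipping, a 5 means the same thing on every item, so the items
# can be averaged into one score.

def reverse_score(score, low=1, high=5):
    """Flip a rating so the scale runs the other way (1<->5, 2<->4)."""
    return (low + high) - score

raw = [1, 2, 3, 4, 5]  # hypothetical responses to a reverse-worded item
rescored = [reverse_score(s) for s in raw]
print(rescored)  # [5, 4, 3, 2, 1]
```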
Fence sitting
- type of response set- giving all neutral answers
- Can weaken construct validity by suggesting more people have no opinion
how to control for fence-sitting
- Control by removing middle, neutral option
- Sometimes people really do have no opinion though
- Control by using forced-choice questions- choose one of two answers
Trying to look good
- Socially desirable responding/ faking good
- Faking bad
Socially desirable responding/ faking good + faking bad
- giving answers that make you look better
- Embarrassed, shy, hiding unpopular opinion
- Faking bad is less common, same idea but reversed
how to control for faking good/bad ( 3 ways, + ex)
1) Control by reminding participants of anonymity
- This can cause less serious, less careful answers
- more likely to use response sets
2) Control by including items that identify socially desirable responding
- Ex: I don’t find it particularly difficult to get along with loud-mouthed, obnoxious people
3) Can also control by asking people’s friends to rate them
how do you test for implicit biases
Implicit associations test- quickly attach positive or negative words to right or left key along with black or white faces
- if people respond more white/positive and black/negative, researchers can assume participant has a bias
Self-reports can be especially inaccurate when
- asking why someone answers/acts the way they do
- Ex: 6 pairs of stockings; most women chose the far-right pair. When researchers asked why, they said the stockings were better quality, even when asked if it was because of the placement. All the stockings were the same.
- Self reporting memories of events- people’s memories are not very accurate
- rating products - people rated products based off cost and prestige of brand
ex of inaccuracy of memories of events
Ex: researchers administer short questionnaire to students day after dramatic event asking who they were with, where they were, etc
- A few years later, given same questionnaire, + how confident are they in their memory of it
- Overall accuracy is very low, doesn’t matter how confident people were in their answers
Observational research
- researcher watches people or animals and systematically records what they are doing
- Another type of data (in addition to self-report) that can be used to support frequency claims
Observing how much people talk- study
Matthias Mehl developed an electronically activated recorder (EAR) for observing what people say in everyday conversations
- Has a microphone and digital recording device
- Every 12.5 minutes, records for 30 sec., which is later transcribed
- Example of experience sampling
- result: Women didn’t use significantly more words than men
experience sampling
people’s behaviors are reported throughout the day
- ex: observing how much people talk study
Observing hockey moms and dads
- Observed hockey games and recorded violent, negative behavior along with positive/supportive behavior by parents
- 64% of comments were positive, 4% negative
Observing families in the evening
Study of families where both parents work: a camera crew followed both parents (30 families) from the time they got home until 8 p.m.
behaviors were coded from the tapes-
- emotional tone (results: slightly positive)
- Food-related conversation- (kids- distaste, parents - health)
what should you ask of observational measures?
- what is the variable of interest?
- did the observations accurately measure that variable?
Construct validity of observations can be compromised by:
- Observer bias
- observer effects
- Reactivity
Observer Bias:
- observer sees what they expect to see
Ex: therapists shown a video of a man talking to a professor about his feelings and work experiences; some were told he was a patient, others a job applicant
- Those told he was an applicant used terms like "attractive, candid, innovative," while those who thought he was a patient saw him as "tight, defensive"
how to control for observer bias
keeping observer unaware of study’s hypothesis/ exact research q
Observer effects (experimenter bias, expectancy effects)
Observers inadvertently change the behavior of those they’re observing
Ex: Bright and Dull rats- Those believed to be bright went through the maze faster and made fewer mistakes
ex: Clever Hans, the horse that was reading cues from the questioners
Preventing Observer Bias and Observer Effects
- Careful researchers train observers well
- Develop clear rating instructions - often called codebooks
Codebooks
precise statements of how the variables are operationalized
- The clearer the codebook statements, the more valid the operationalization
how to test construct validity of coded measure
Assesses interrater reliability- multiple observers
- If interrater reliability is low,
- coders need to be retrained
- or a clearer coding system for behaviors may need to be developed
- or both
correlation that quantifies interrater agreement
ICC (intraclass correlation coefficient)
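One common form of the ICC can be sketched in Python. This hedged example (not from the chapter) computes a one-way ICC, (MSB - MSW) / (MSB + (k-1) * MSW), where MSB and MSW are the between-target and within-target mean squares; the two coders' ratings are hypothetical.

```python
# Hedged sketch: one-way intraclass correlation for interrater agreement.
# Each row holds one target's ratings from k raters; data are made up.

def icc_oneway(ratings):
    """ratings: list of [rater1, rater2, ...] scores, one row per target."""
    n = len(ratings)       # number of targets (e.g., coded video clips)
    k = len(ratings[0])    # raters per target
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # Between-targets mean square: how much the targets differ on average
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    # Within-target mean square: how much raters disagree about each target
    msw = sum((x - m) ** 2
              for row, m in zip(ratings, row_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Two coders rating aggression (1-5) in five clips (hypothetical)
clips = [[4, 5], [2, 2], [5, 4], [1, 1], [3, 3]]
print(round(icc_oneway(clips), 2))  # close to 1 -> strong agreement
```

An ICC near 1 means the coders largely agree; a low ICC is the signal (per the card above) to retrain coders or sharpen the codebook.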
Masked Design (Blind Design)
observers don’t know the purpose of the study or the conditions participants have been assigned to
Reactivity
Change in behavior when study participants know someone is watching
3 solutions to reactivity
1) blend in: make yourself less noticeable
- Unobtrusive observations
- ex: one-way mirror, act as casual onlooker
2) wait it out: allow research participants to get used to your presence by visiting many times until they are comfortable acting normal
- ex: jane goodall + chimps
3) Measure the behavior’s results: instead of observing behavior directly, measure the traces of a behavior
- Unobtrusive data
- ex: alcohol consumption via # bottles
when is it OK to use secretive observing methods like one-way mirrors?
- Usually must obtain permission in advance to watch or record people’s private behavior
- Always has to go through some sort of IRB process
- If hidden video recording is used, it must be explained in the debrief