Exam 2 study guide Flashcards
pros of open-ended questions
can get lots of info this way
can be quantified
cons of open-ended questions
hard to analyze and compare responses; takes more time to analyze
forced-choice format
providing options that the participants have to choose from
pro: easy to analyze
con: limited information, may not accurately capture true feelings
likert-type scale
the given scale reflects degree of agreement, anchored by terms like disagree / agree or not at all / very much
pro: not as limiting as forced choice
con: people might interpret the scale differently/cultural differences in how people respond (ex. some don’t like to pick extreme answers)
semantic differential format
scale anchored by a pair of opposite adjectives; respondents rate a thing/person by deciding between the 2 adjectives
cons: restricted responses
leading question
hurts construct validity: not actually capturing people’s true thoughts
ex) how fast do you think the car was going when he smashed into the other car?
double-barreled question
combining 2 questions into one, so it’s hard to tell which question the answer refers to
confuses the participant; may not get a valid answer
ex) do you enjoy swimming and wearing sunscreen?
negatively worded question
wording is not clear; confused participants = not a valid response
ex) people who do not drive with a suspended license should never be punished, 1-disagree 5-agree
question order
very impactful
ex) ask gender before other questions may impact results -or- asking name before other questions can impact results as well
this is an example of what? (2 answers)
Question #3: Is this your favorite class and do you have two legs?
a. Yes
b. No
leading question and double-barreled question
advantages of self-reports
people are usually their own best expert
access to thoughts, feelings, and intentions (others only have access to these IF you reveal to them)
definitional truth: the data are true by definition if one is assessing what people think about themselves (ex. self-esteem, no one can say that your self-esteem is wrong)
cost effective
shortcuts: response sets + 2 examples
people’s tendency to respond in a way that is unrelated to the question content
acquiescence (yea-saying)
- saying yes to all questions w/o reading carefully
- threatens construct validity
fence-sitting
- choosing middle/neutral option in scale
- threatens construct validity
are respondents’ responses accurate?
sometimes people use shortcuts
trying to look good
self-reporting “more than they know”
self-reporting memories of events
carelessness
rating products (not always accurate)
trying to look good
socially desirable responding/faking good
- respond in favorable way
faking bad
- ex) an oppositional/rebellious adolescent trying to seem cool
preventing/reducing social desirability bias
keep it anonymous
- respondents aren’t motivated to answer one way or the other; don’t ask for any identifying info
identify socially desirable responders
use implicit measures (e.g., Implicit Association Test)
- ppl aren’t aware of what is being assessed
informants’ report
- ask people who know the participants well to answer questions about them, for a more accurate assessment
advantages of informants’ reports
-acquaintances, co-workers, family members
-may be more accurate than self-judgments for extremely desirable or undesirable traits
-large amount of info
-real-world basis
-definitional truth
disadvantages of informants’ reports
- limited behavioral info
- lack of access to private experience
- error: more likely to remember behaviors that are extreme, unusual, or emotionally arousing
- bias
- recommendation effect (informants’ reports are more likely to be positive)
- prejudice and stereotypes
“wouldn’t you agree that your opponent’s policies aren’t entirely ineffective in addressing the current economic challenges?”
the problem with this question is that it is: (2 answers)
a. leading
b. double-barreled
c. negatively worded
d. there is no problem
leading + negatively worded
Imagine a research study focused on assessing how employees in a large corporation handle conflicts and their general behavior in the workplace. the study aims to understand the effectiveness of conflict resolution strategies employed by employees.
would you use self-report or informants’ report? why?
informants’ reports: measurement of conflict behavior may be less biased than self-report; could collect both to have a better assessment
these are examples of what?
observing how much people talk
observing hockey moms and dads
observational research
observer bias
when observers see what they expect to see
expectations affect perception
observer effects
when participants confirm observer expectations
aka expectancy effects
rosenthal and jacobson
teacher expectation effects!
(+) expectation of students = increase in performance
(-) expectation of students = didn’t increase performance
3 ways to prevent observer bias
training
clear instructions
masked design
- double-blind (neither the researcher nor the participant knows which condition the participant was assigned to)
- single-blind (only one party is kept unaware of the condition; the other knows)
reactivity
mere presence changes behavior
3 solutions to reactivity
blend in
- unobtrusive observations (trying to make yourself less noticeable, e.g., a one-way mirror)
wait it out
- wait until participant is comfortable with presence
measure the behavior’s result
- instead of behavior directly!
is it ethical for researchers to observe the behaviors of others?
w/o consent in public… yes! it may be annoying, but it’s acceptable as long as the behavior is in public and the observation doesn’t affect the person being observed
but, recording w/o consent is unethical
which of the following is an example of observer bias in a study on arm strength and mood?
a. a research assistant records the participant as stronger in the happy condition than the sad condition, because that fits the hypothesis
b. study participant performs with more strength in the happy mood condition because of subtle, encouraging cues from research assistant
c. study participant feels self-conscious in the experiment
a
which of the following is an example of observer effects in a study on arm strength and mood?
a. a research assistant records the participant as stronger in the happy condition than the sad condition, because that fits the hypothesis
b. study participant performs with more strength in the happy mood condition because of subtle, encouraging cues from research assistant
c. study participant feels self-conscious in the experiment
b
external validity
concerns both samples and settings
how generalizable findings are to other people and settings
population / population of interest
pop: the entire group of people
pop of interest: the target pop. the researcher wants to draw conclusions about
sample
smaller set of people taken from pop. of interest
impossible to recruit all/whole target pop.
census
study of every person in pop. of interest
biased vs. unbiased samples
biased: unrepresentative sample
- not all members have = chance of being included
unbiased: representative sample
- ex) random calling (e.g., random-digit dialing)
- each member of the pop. has an equal chance of being included in the study
convenience sampling
researchers sample easiest people to recruit
ex) psych sample pool (like SONA), not representative as all these students need a psych class for major
self selection
sampling only those volunteering to participate
ex) volunteering to review products online
simple random sampling
like picking a name out of a bowl
- need good sampling frame/list of all people in pop. of interest
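A minimal Python sketch of the idea, assuming a made-up sampling frame (the names and sample size are just for illustration):
```python
import random

# hypothetical sampling frame: a list of everyone in the population of interest
sampling_frame = [f"person_{i}" for i in range(1, 101)]

# simple random sampling: every member has an equal chance of being picked,
# like drawing names out of a bowl
sample = random.sample(sampling_frame, k=10)
print(sample)
```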
cluster sampling
used w/ pre-existing groups (clusters) that are arbitrary/not meaningful; clusters are randomly selected and everyone in the selected clusters is recruited
can be problematic if the clusters actually do share characteristics or differ significantly from each other
multistage sampling
randomly select people from within the clusters, rather than using everyone from each cluster
stage 1: random sample of clusters is selected
stage 2: select a random sample of participants from each selected cluster
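A rough Python sketch contrasting cluster and multistage sampling, assuming made-up classrooms as the clusters:
```python
import random

# hypothetical clusters: 20 classrooms of 30 students each
clusters = {f"class_{c}": [f"student_{c}_{i}" for i in range(30)] for c in range(20)}

# cluster sampling: randomly pick whole clusters, then recruit EVERYONE in them
chosen = random.sample(list(clusters), k=4)
cluster_sample = [person for c in chosen for person in clusters[c]]

# multistage sampling: stage 1 picks clusters at random,
# stage 2 picks a random sample of people WITHIN each chosen cluster
stage1 = random.sample(list(clusters), k=4)
multistage_sample = [person
                     for c in stage1
                     for person in random.sample(clusters[c], k=10)]
```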
stratified random sampling
groups (strata) are created in a meaningful way (ex. race, age, etc.) specifically to represent the pop. of interest
each group’s share of the sample needs to reflect its share of the population
oversampling
variation of stratified random sampling
intentionally over-represent certain groups b/c the group would usually be too small in the sample
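A sketch of stratified random sampling with an oversampling variation, assuming made-up age-group strata and sizes:
```python
import random

# hypothetical population split into meaningful strata (here, age groups)
strata = {
    "18-29": [f"young_{i}" for i in range(600)],
    "30-59": [f"middle_{i}" for i in range(350)],
    "60+":   [f"older_{i}" for i in range(50)],
}
total = sum(len(members) for members in strata.values())

# stratified random sampling: each stratum is sampled in proportion
# to its share of the population (total sample of 100 people here)
stratified_sample = {
    name: random.sample(members, k=round(100 * len(members) / total))
    for name, members in strata.items()
}

# oversampling: deliberately overrepresent the small "60+" stratum
# (30 people instead of its proportional 5) so it can be analyzed
oversampled_60plus = random.sample(strata["60+"], k=30)
```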
systematic sampling
randomly select a starting point and an interval, then select every Nth person
ex) starting point is #2 with an interval of 3 → select persons 2, 5, 8, 11, …
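A short sketch of systematic sampling using the flashcard’s example (interval of 3, starting point #2); the frame of 30 people is made up:
```python
import random

# hypothetical sampling frame of 30 people, numbered 1-30
frame = list(range(1, 31))

interval = 3                         # select every 3rd person, as in the example
start = random.randint(1, interval)  # random starting point; the example uses #2
sample = frame[start - 1::interval]  # with start = 2: persons 2, 5, 8, 11, ...
```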
random sampling vs. random assignment
sampling: increases external validity = generalization
assignment: used only in experimental designs to assign participants to groups at random
- increases internal validity
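A sketch contrasting the two, assuming a made-up population and a two-condition experiment:
```python
import random

population = [f"person_{i}" for i in range(1000)]

# random SAMPLING: decides WHO gets into the study
# -> supports external validity (generalizing to the population)
participants = random.sample(population, k=40)

# random ASSIGNMENT: decides WHICH CONDITION each recruited participant gets
# -> used only in experimental designs; supports internal validity
random.shuffle(participants)
treatment_group = participants[:20]
control_group = participants[20:]
```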
settling for an unrepresentative sample: convenience sampling
if other researchers can replicate as well
if internal validity > external validity
settling for an unrepresentative sample: purposive sampling
use specific type of person/study a specific group
- recruiting is biased
settling for an unrepresentative sample: snowballing sampling
asking participants to recommend other people they know who fit what the study is looking for
- not random sample but allows easier access to ppl
settling for an unrepresentative sample: quota sampling
identify subgroups of the population of interest
- set a target # (quota) for each subgroup and recruit (non-randomly) until each quota is met
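A sketch of quota sampling, assuming made-up subgroups, quotas, and a simulated (non-random) stream of volunteers:
```python
import random

# subgroups of interest and their target counts (quotas) -- made-up numbers
quotas = {"first-year": 5, "senior": 5}
recruited = {group: [] for group in quotas}

# simulated stream of volunteers (name, subgroup); recruitment is NOT random
volunteers = [(f"vol_{i}", random.choice(list(quotas))) for i in range(100)]

# recruit whoever shows up until every quota is met
for name, group in volunteers:
    if len(recruited[group]) < quotas[group]:
        recruited[group].append(name)
    if all(len(recruited[g]) >= quotas[g] for g in quotas):
        break
```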
external validity is a lower priority when researchers use _________ ________ in real-world research studies
non-probability samples
in a frequency claim, _______ ________ is a priority
external validity
larger samples are not always ______ __________
more representative
experimental variables
experiment
- manipulated variable (IV)
- measured variable (DV/outcome)
conditions
levels of an independent variable