Ch 2: Research Methods - Vital Safeguards Against Error Flashcards
2 Modes of Thinking
- intuitive
- analytical
Intuitive Thinking
- quick, reflexive
- almost automatic
- relies on heuristics
- autopilot = ON
Analytical Thinking
- slow
- reflective
- effortful
- autopilot = OFF
4 Broad Research Designs
- naturalistic observation
- case studies
- correlation designs
- experimental designs
Internal Validity (control)
- can we make a causal claim?
- did X cause Y?
- must account/control for alternative explanations
External Validity (generalizability)
- is this how people behave in the real world?
- does the conclusion generalize outside the lab?
Naturalistic Obs.
- watching behavior in the real-world
- ex: Jane Goodall's observations of chimpanzees in the wild
- high external validity
- low internal validity b/c we can’t establish causation
Case Study
- studying one person or a group over an extended period of time
- low external validity
- no internal validity
Correlational Design
- measuring 2 or more things to see if they are related
- r varies from -1 to +1
- perfect corr.: +1 or -1
- no relation: 0
- positive: (one goes up, the other goes up)
- negative: (one goes up, the other goes down)
- no internal validity
- sometimes has external validity
Correlation vs Causation
just because 2 things are related does not mean that one causes the other
Experimental Designs
- variable: anything that can be measured
- independent var.: whatever was manipulated
- dependent var.: the variable being measured
- high internal validity
- lower external validity than other designs
Determining causation
you must manipulate one variable, and measure how it affects the other
Random Assignment
- helps control for individual differences
- helps rule out confounds w/ participant assignment
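Random assignment can be sketched with `random.shuffle`: shuffle the participant pool, then split it. The participant IDs are hypothetical placeholders:

```python
import random

# Hypothetical participant pool (IDs made up for illustration)
participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

random.shuffle(participants)            # each ordering equally likely
midpoint = len(participants) // 2
treatment_group = participants[:midpoint]
control_group = participants[midpoint:]
```

Because assignment is random, individual differences (age, motivation, etc.) are spread across both groups on average rather than piling up in one.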
Placebo Effect
- improvement due to mere expectation of improvement
- may show similar effects to real drugs
- nocebo effect (you receive harm b/c you expect harm)
- solutions: blind studies
Experimenter Expectancy Effect
- researcher expectations influence participant behavior
- solution: double-blind studies
Demand Characteristics
- participants try to guess the hypothesis and alter their behavior
- prevents accurate/unbiased observation of participant behavior
- solutions: cover story, filler items
Self-report Measures
- rely on participants' self-assessments rather than experimenter observation
- sensitive to format and wording
- disadvantages: participants may lie or be biased
- advantages: easy/low cost, sometimes the most accurate option, can be easy to interpret
Alternatives to self-report
- measure behavior
- indirect measures
- have others evaluate them
Validity (define)
extent to which a measure assesses what it claims to measure
Reliability
- how consistent the measurement is
- test-retest reliability: do people score similarly on the same test over time?
Descriptive Stats
- communicate the pattern of results
- numerical summaries of the data
Inferential Stats (define)
draw conclusions from results
Measures of central tendency
- mean
- median
- mode
Mean
avg. of all scores
Median
middle score in the data set
- less sensitive to outliers than the mean
Mode
most frequent score in the data set
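All three measures of central tendency are in Python's `statistics` module; the made-up data set below includes an outlier (100) to show why the median is less sensitive to outliers than the mean:

```python
import statistics

scores = [2, 3, 3, 8, 100]  # made-up scores; 100 is an outlier

mean_score = statistics.mean(scores)      # pulled upward by the outlier
median_score = statistics.median(scores)  # middle score, resists the outlier
mode_score = statistics.mode(scores)      # most frequent score

print(mean_score, median_score, mode_score)  # 23.2 3 3
```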
Bell-curve (distribution shapes)
- positively skewed
- normal (symmetric bell curve)
- negatively skewed
Variability
how loosely or tightly bunched the scores are
- standard deviation
- range
Standard deviation
measures the typical distance of scores from the mean
Range
difference between the highest and lowest score
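Both variability measures are one-liners in Python; the data set is a made-up example (using the population standard deviation, `pstdev`):

```python
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]  # made-up data set

sd = statistics.pstdev(scores)          # population standard deviation
data_range = max(scores) - min(scores)  # highest minus lowest score

print(sd, data_range)  # 2.0 7
```

Note the range uses only the two extreme scores, so a single outlier inflates it; the standard deviation uses every score.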
Inferential Statistics
- significance testing (p-values): the probability of obtaining results at least this extreme if the null hypothesis were true
- the smaller the p-value, the more evidence against the null (so we’re more confident that we can reject the null hyp.)
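A p-value can be made concrete with a permutation test: reshuffle the scores many times as if group labels didn't matter (the null), and count how often chance alone produces a difference at least as large as the one observed. The group scores below are invented for illustration:

```python
import random
import statistics

random.seed(0)  # reproducible demo

# Made-up scores for two groups of 5
treatment = [12, 14, 15, 16, 18]
control = [10, 11, 12, 13, 14]
observed = statistics.mean(treatment) - statistics.mean(control)

pooled = treatment + control
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)  # pretend group labels are arbitrary (null)
    diff = statistics.mean(pooled[:5]) - statistics.mean(pooled[5:])
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / trials  # share of shuffles at least this extreme
```

A small p_value means shuffled (chance) groupings rarely match the observed difference, which is evidence against the null.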
Statistical significance
- p < 0.05
- if p < 0.05, reject the null (there is an effect)
- does not indicate real-world importance of finding
Practical significance
- p < 0.05 says nothing about the size of the effect
- effect size may say something about the importance or predictive value of an effect
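One common effect size is Cohen's d: the mean difference divided by a pooled standard deviation. A minimal sketch with made-up scores (simple pooling, valid here because the groups are equal-sized):

```python
import statistics

# Made-up group scores for illustration
group_a = [12, 14, 15, 16, 18]
group_b = [10, 11, 12, 13, 14]

sd_a = statistics.stdev(group_a)
sd_b = statistics.stdev(group_b)
pooled_sd = ((sd_a ** 2 + sd_b ** 2) / 2) ** 0.5  # equal group sizes

# Cohen's d: how many pooled SDs apart the group means are
d = (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd
```

By a common rule of thumb, d around 0.2 is a small effect, 0.5 medium, and 0.8 or more large.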
Evaluating Research
- statistics can also be misleading
- report unrepresentative measures
- truncate graphs: y-axis does not start at 0
- neglect base rates
- safeguards: peer review, push for transparency
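Neglecting base rates can be shown with a short Bayes' rule calculation; all numbers below are hypothetical, chosen only to make the arithmetic clear:

```python
# Hypothetical numbers for illustration
base_rate = 0.01       # 1% of people have the condition
sensitivity = 0.90     # P(positive test | condition)
false_positive = 0.09  # P(positive test | no condition)

# Total probability of testing positive
p_positive = base_rate * sensitivity + (1 - base_rate) * false_positive

# Bayes' rule: P(condition | positive test)
p_condition_given_positive = base_rate * sensitivity / p_positive
```

Despite the 90% sensitivity, a positive result here implies only about a 9% chance of having the condition, because the condition is rare; ignoring the 1% base rate is exactly the error this card warns about.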
Evaluating Psych in the Media
- researchers' incentives may not align with truth-telling
Ethical Issues in Human Research
- institutional review boards (IRB): review proposals for research value and potential harm
- informed consent
- debriefing
Examples of unethical experiments
- Tuskegee Study: US gov't study of untreated syphilis that did not tell the Black men involved they had the disease and withheld antibiotic treatment
- Milgram Study: made people think they were lethally shocking others (obedience to authority)
- Stanford Prison Experiment