Lecture 2: Research Methods Flashcards
Why is Research Design Important?
research design is important because it shapes research findings
Case Studies
focus on one person (maybe two).
EXISTENCE PROOFS:
- shows us what’s possible - proves what can exist (like the case of the Hogan twins, who share a brain)
Pro: Rich information
Con: low External Validity (or low generalizability)
- the observations may not generalize to other people or to everyday situations
Naturalistic Observation
go out into the world and observe people in their natural environments
must beware of Reactivity: people change behaviour when watched
Pro: High External Validity
- natural behaviour generalizes more easily to other people and settings
Con: Low Internal Validity
- less control over the environment and the variables within it - am I studying what I think I’m studying?
Archival Research
doing research on existing records or available data sets
Pro: Less invasive
- doesn’t interfere with anyone’s life
Con: Lack of Quality Control
- can’t control the quality of data in research we didn’t conduct
- records may be vague, incomplete, or not exactly what you’re looking for
- may affect internal validity
Surveys/ Questionnaires/ Self-Report Measures
information given voluntarily
Random Selection:
- every person in the population has an equal chance of being chosen to participate, giving a diverse sample with a wide variety of responses (see the sketch after this card)
Pro: Ease of Administration
- not costly
- simply print papers or post on the internet
Con: Response Error/ Bias
- survey responses may not always be accurate simply because people may not be truthful
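A minimal sketch of random selection in Python, assuming the population is just a list of made-up names; the population size and sample size here are invented for illustration and are not from the lecture.

```python
import random

# Hypothetical population: 1000 made-up people
population = ["person_" + str(i) for i in range(1, 1001)]

# random.sample draws without replacement, so no one is picked twice,
# and every person has an equal chance of ending up in the sample
survey_sample = random.sample(population, k=50)

print(survey_sample[:5])  # first few randomly selected participants
```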
Errors in Self-Report measures
Errors in Judgement
- what people say they will do vs what people actually do may not be the same
Malingering and Social Desirability Bias
- malingering = faking answers
- often to get some kind of outcome (ex. telling the doctor your pain is a 10 to get what you need)
- social desirability bias = answering in the way that makes you look good rather than the way that is true
- anonymity helps! - when no one watches them fill it out and no name is attached, people tend to be more honest
Ambiguity in Measurement
- unspecific questions and response options (ex. how do you measure “happiness”?)
- Operational Definition = defining variables in a way that can be measured/ quantified (ex. operationalizing happiness as a 1-10 mood rating)
Reliability
a test is reliable when it produces similar results over and over again
Internal Consistency:
- relationship between questions in survey
- do survey responses agree?
- if every question gives roughly the same response/ points in the same direction - then high internal consistency
Test - Retest Reliability:
- are test results stable?
- given the same test across different days, you should get the same result (see the sketch after this section)
Inter - rater Reliability:
- do two people/ scientists agree on the results?
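A minimal sketch of test-retest reliability, assuming scores from two sittings of the same test are stored as Python lists; the scores are invented, and correlating the two administrations is one common way to quantify “same result across days”, not necessarily the exact calculation used in the lecture.

```python
import numpy as np

# Hypothetical scores for the same 6 people on the same test, two weeks apart
time_1 = [12, 18, 9, 15, 20, 11]
time_2 = [13, 17, 10, 15, 19, 12]

# Correlate the two administrations: a value near +1 means stable (reliable) results
r = np.corrcoef(time_1, time_2)[0, 1]
print(f"test-retest reliability (r) = {r:.2f}")
```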
Validity
a test is valid if it measures what it is supposed to measure
Face Validity:
- does it appear to measure what it says it measures?
- ex. a survey says it’s about bicycles but it’s all about cars; low face validity
Convergent Validity:
- does the test agree with others that measure the same thing?
- do my results correspond with results from other measures of the same construct?
Divergent Validity:
- does the test diverge from others that measure different things?
- ex. your responses in a questionnaire about bicycles vs a questionnaire about cars should be different; high divergent validity
Correlational Study
examines the relationship between two or more variables.
R Value = Correlation Coefficient
(-1.0 to +1.0) = Strength and Direction
-1.0 = perfect negative correlation
+1.0 = perfect positive correlation
0 = no correlation
- rarely see perfect correlations (see the worked example after this card)
Pro: Predictions
- if correlation between variables is known, you can make good estimations/ predictions
- ex. If anxiety and memory are correlated, i can predict your memory based on your anxiety
Con: Cannot Infer Causality
- we don’t know why the variables are related
- describing a relationship between variables does not mean causality
- ex. ice cream and murders
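A worked example of the correlation coefficient, using invented anxiety and memory scores (echoing the prediction example above); numpy’s corrcoef is just one way to compute r.

```python
import numpy as np

# Hypothetical data: anxiety scores and memory-test scores for 6 people
anxiety = [2, 4, 5, 7, 8, 9]
memory = [30, 27, 25, 20, 18, 15]

# r falls between -1.0 and +1.0: the sign gives direction, the size gives strength
r = np.corrcoef(anxiety, memory)[0, 1]
print(f"r = {r:.2f}")  # close to -1 here: higher anxiety goes with lower memory

# A known correlation lets us predict one variable from the other,
# but nothing in r tells us that anxiety CAUSES the memory difference.
```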
Correlations and Scatterplots
Positive Correlation:
- as one value goes up, the other goes up too /
- as x axis increases, y axis increases
Negative Correlation:
- as one value goes up, one value goes down \
- as x axis increases, y axis decreases
No Correlation = graph with random scattered points
Correlation Strength
Perfect
- +1 or -1
Strong/ High
- close to +1 or - 1 like 0.9
Weak/ Low
- close to 0, like +0.5 or -0.5
No correlation = 0
What can we say about correlations?
- correlational studies describe relationships between variables, but not causal relationships!
- we cannot say correlation and causation are the same
- it could be that…
A: X causes Y
B: Y causes X
C: Third Variable Problem = Z affects both X and Y
D: Correlation by luck or chance = X and Y are unrelated; the correlation is just coincidence
E: Illusory correlation = the correlation isn’t real (ex. believing the moon affects emotion)
Experimental Design
- permits CAUSE AND EFFECT claims: lets us test whether one variable actually causes a change in another, not just whether they are correlated
2 Key Ingredients for Experiments to make Causal Claims:
1. Random Assignment: randomly assign participants to groups (see the sketch below)
- random assignment distributes unknown confounding variables evenly across groups
- Confounds = variables that could alternatively explain your effects (also called rival hypotheses)
2. Manipulation of the Independent Variable
- Variable = anything we can control or measure in a study
Independent Variable: the variable that causes the change
- the variable we manipulate
- is designed by the researcher
Dependent Variable: the variable that is impacted by the change/ the effects
- “depends” on the independent variable
Control Group: the comparison group
- group to reference whether the manipulation was effective
- doesn’t receive the manipulations
- ex. in a drug study, the control group would get a fake pill, and we would expect no change from it
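A minimal sketch of random assignment, assuming a drug-study setup like the example above; the participant IDs and the 50/50 split into drug and placebo groups are made up for illustration.

```python
import random

# Hypothetical participants who have already been recruited
participants = ["p01", "p02", "p03", "p04", "p05", "p06", "p07", "p08"]

# Shuffle so each person is equally likely to land in either group;
# unknown confounds (age, health, personality, ...) get spread across groups by chance
random.shuffle(participants)
half = len(participants) // 2
drug_group = participants[:half]     # receives the real pill (the manipulation)
control_group = participants[half:]  # receives the fake pill (placebo)

print("drug:", drug_group)
print("control:", control_group)
```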
Downfalls of Experimental Design
Confounds: variable that could alternatively explain your effects (rival hypotheses)
Placebo Effect: when you feel the effects from ineffective manipulations
- our expectations lead us to believe an effect has taken place
- ex. do people get better because of the pill or because they expect to get better after taking the pill?
Participant Demand: behaving the way you think the researcher wants you to
- could invalidate study
- low internal validity (results don’t reflect what you were measuring properly)
Experimenter Effects: when researchers bias the study
- invalidates study and causes low internal validity
- ex. Wanting drug to be successful, so exaggerating the positive results of the people who take the drug, and exaggerating the negative results of the people who are taking the fake drugs
To avoid the above…
Single Blind: when participants don’t know which group they’re in
Double Blind: when researchers and participants don’t know which group they’re evaluating or in
Quasi-Experimental Designs
used when random assignment is not possible
- causal claims can’t be as strong
- difficulty with causal inferences because you can’t randomly assign people (to these groups) / can’t manipulate independent variable
Reasons for Quasi-Experiments = Existing Group Membership
- marital status
- ethnicity
- childhood experience
- ability/ disability
Can’t cause people to marry (for true random assignment, I’d have to choose who gets married)