2 - Research Methods Flashcards
Confirmation bias
the predisposition to want to confirm previously held beliefs rather than wanting to test and falsify them
scientific theory
a statement that:
- is about two or more constructs
- describes causal relationships
- is general in scope
constructs
abstract and general concepts that are used in theories and that are not directly observable
e.g. “anxiety”, “evaluation of own attitudes”, etc.
independent variable
a concrete measurement or manipulation of a construct that is thought to influence other constructs
cause
dependent variable
a concrete measurement of a construct that is thought to be influenced by other constructs
effect
operationalization
- defining your variables
- making an abstract, fuzzy concept distinguishable, observable, and measurable
e.g. measuring being a hockey fan by the amount of money spent on hockey merchandise
hypothesis
a clearly stated, falsifiable prediction based on prior knowledge
e.g. Canadians will spend more money on hockey items than non-Canadians
construct validity
A. definition
B. 3 related validity concepts
C. corresponding research aspect
- the extent to which the IVs and DVs used in research actually correspond to the theoretical constructs under investigation
1. IVs and DVs must correspond to the intended construct
2. they must not correspond to other constructs
convergent validity - the degree to which an operation is similar to other operations that it theoretically should also be similar to (e.g. one IQ scale and another IQ scale)
discriminant validity - the degree to which an operation is not similar to other operations that it theoretically should not be similar to (e.g. a self-esteem scale and a narcissism scale)
content validity - the extent to which a measure represents every element of a given construct (i.e. the entire intended domain of content)
convergent validity
convergent validity - the degree to which an operation is similar to other operations that it theoretically should also be similar to (e.g. one IQ scale and another IQ scale)
discriminant validity
discriminant validity - the degree to which an operation is not similar to other operations that it theoretically should not be similar to (e.g. a self-esteem scale and a narcissism scale)
content validity
content validity - the extent to which a measure represents every element of a given construct (i.e. the entire intended domain of content)
social desirability response bias
- people’s tendency to act in ways that they believe others find acceptable and approve of
- a threat to internal and construct validity
What are some ways of ensuring construct validity?
- using the best measure types for the purpose (e.g. self-report measures, performance measures)
- using multiple measures
internal validity
- the extent to which it can be concluded that changes in the IV actually caused changes in the DV
- depends mostly on the research design of the study
- ensuring internal validity
**experimental research**
random assignment
corresponding research aspect - design
ensuring internal validity
- experimental research - a research design in which researchers randomly assign participants to different groups and manipulate one or more IVs
- random assignment - procedure of assigning participants to different experimental groups so that every participant has exactly the same chance as every other participant of being in any given group
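The random-assignment procedure on this card can be sketched in a few lines of Python (a generic illustration with hypothetical participant labels, not part of any specific study): shuffling the participant list gives every participant the same chance of landing in any group.

```python
import random

def randomly_assign(participants, n_groups=2, seed=None):
    """Assign participants to n_groups groups so that every participant
    has the same chance as any other of being in any given group."""
    rng = random.Random(seed)
    shuffled = list(participants)   # copy; leave the input untouched
    rng.shuffle(shuffled)           # put participants in uniformly random order
    # Deal the shuffled participants round-robin into the groups
    return [shuffled[i::n_groups] for i in range(n_groups)]

# Hypothetical example: 6 participants split into control and treatment
control, treatment = randomly_assign(["P1", "P2", "P3", "P4", "P5", "P6"], seed=1)
```

Because group membership is decided only by the random shuffle, any pre-existing participant differences are expected to spread evenly across groups.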
nonexperimental research
- a research design in which both the IVs and DVs are measured
- low in internal validity (confounding variables)
- high in construct validity (natural contexts)
used when
- a construct can’t be intentionally varied (e.g. race)
- when they can’t ethically be manipulated (e.g. marital happiness)
- when an everyday-life situation is the best way of studying a construct
experimental research
- a research design in which researchers randomly assign participants to different groups and manipulate one or more IVs
- allow cause and effect relationship to be inferred
- high in internal validity (few confounding variables)
- low in construct validity (when ethics/practicality make manipulations weak) or high (when manipulations can accurately vary the constructs)
- low in external validity - it can be hard to generalize to outside the lab
*these two procedures allow us to reasonably conclude that observed differences in the DV were caused by the manipulation of the IV, since no other systematic differences between the randomly assigned groups are expected to exist
external validity
the extent to which research results can be generalized to other people, times, and settings
(it is the theory about causal relations among abstract constructs, not the specific findings/conditions, that needs to be generalizable)
applied research - specific target population
but most social psych research aims to generalize across several factors
relevant research aspect - populations and settings
ensuring external validity
- representative sampling - using subjects representative of the population you want the results to generalize to
- replicate the experiment across different people and settings
can be difficult
- representativeness
- cultural differences
- lab setting (people are influenced by the researcher, short time span, artificiality, too much attention, trying to act differently/figure out the study)
- demand characteristics - participants’ expectations of what the researcher wants can influence their behaviour
demand characteristics
demand characteristics - participants’ expectations of what the researcher wants can influence their behaviour
- e.g. the participant knows the hypothesis and wants to either prove or disprove it
- a threat to internal and construct validity
expectancy effects
when the researcher’s expectations influence the results through influence on participants or on data
- a threat to internal and construct validity
strategies to counter threats to validity
- Blind- and double-blind designs
- Measuring and evaluating possible confounds
- Setting these threats in opposition to your hypothesis, e.g. deliberately designing the study so the threat would work against the predicted effect
- Multiple measures and studies
beneficence
The benefits of the research outweigh the risks to participants; benefits should be maximized and risks minimized
- Try not to subject participants to more risk than there would be in the outside world
autonomy
- Participants should be respected
- should be fully informed as to the risks and benefits of the study before agreeing to participate (Informed Consent) as well as afterwards (Debriefing)
- should be able to discontinue research if they feel the need to
- deceptive/hidden information should be revealed during the debriefing at the end
Justice
Selection of participants is equitable (fair); no particular group is unfairly burdened, and the benefits are equally distributed to all relevant groups
IRB
the Institutional Review Board reviews your study to make sure it is ethical
types of research
- descriptive research
- correlational research
- experimental research
- meta-analysis
descriptive research
observe and describe a phenomenon
- e.g. going out and watching a group of people over time
Meta-analysis
combining the results of a lot of different studies to test the cumulative results
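One standard way of combining results is a fixed-effect, inverse-variance weighted average of effect sizes. The sketch below is a generic illustration (the function name and data are hypothetical, and this is only one of several meta-analytic models):

```python
def fixed_effect_meta(effects, variances):
    """Pool effect sizes across studies, weighting each study by the
    inverse of its variance: more precise studies count for more."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)
    return pooled, pooled_variance

# Two hypothetical studies with effect sizes 0.5 and 0.3, equal precision:
# the pooled estimate lands at their average, with smaller variance
fixed_effect_meta([0.5, 0.3], [0.1, 0.1])  # pooled ≈ 0.4, variance ≈ 0.05
```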
correlational research
- assesses the relationship between two variables
- Does NOT allow us to infer cause and effect
- The statistic used is the correlation coefficient, which measures the direction and strength of the relationship (perfect negative = -1, perfect positive = +1, none = 0)
- enables researchers to study problems in which intervention is impossible (due to ethical or practical issues)
- can have low internal validity due to reverse causality and the third-variable problem
correlation coefficient
a number that measures the direction and strength of the relationship between two variables in a correlational study
describes how tightly the points in a scatterplot cluster around a straight line
ranging from a perfect negative correlation (-1) through no correlation (0) to a perfect positive correlation (+1)
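The coefficient can be computed directly from its textbook definition (covariance divided by the product of the standard deviations). A minimal Python sketch with made-up data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient: direction and strength of a
    linear relationship, ranging from -1 through 0 to +1."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance term: how the two variables deviate from their means together
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

pearson_r([1, 2, 3, 4], [2, 4, 6, 8])   # perfect positive: ≈ +1.0
pearson_r([1, 2, 3, 4], [8, 6, 4, 2])   # perfect negative: ≈ -1.0
```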
reverse causality
the direction of cause and effect may be the opposite of what is assumed (the supposed effect may actually be the cause)
temporal precedence
being able to show that the causal event came first
used to get around reverse causality
third variable problem
something you’re not measuring may be influencing the results
measuring possible confounds can identify them and show they are unrelated to the effect
3 possible explanations for the results of a study
- A genuine effect!
- Chance variability (we use statistics and the p-value - the probability of obtaining a result at least this extreme by chance alone - to rule this out)
- Systematic bias - Confounds should be controlled for, measured and adjusted for, or set up in opposition to the effect you are trying to find
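One way to estimate the probability that a group difference arose from chance variability alone is a permutation test: shuffle the group labels many times and count how often chance produces a difference at least as large as the observed one. A generic sketch (hypothetical data and function name, not tied to any particular study):

```python
import random

def permutation_p_value(group_a, group_b, n_perms=10000, seed=0):
    """Estimated probability of a difference in group means at least as
    large as the observed one, if labels were assigned by chance alone."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(n_perms):
        rng.shuffle(pooled)                       # relabel scores at random
        a, b = pooled[:n_a], pooled[n_a:]
        diff = abs(sum(a) / len(a) - sum(b) / len(b))
        if diff >= observed:                      # chance did at least as well
            hits += 1
    return hits / n_perms
```

A small estimated p-value means chance alone rarely reproduces the observed difference, making the "genuine effect" explanation more plausible (assuming systematic bias has also been ruled out).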