Unit 2: Research Methods In Psychology (Chapter 2) Flashcards
Theory
An integrated set of related principles that explains and generates predictions about some phenomenon in the world.
Ex: Schachter-Singer theory of emotion (uses environmental cues to label unexplained or ambiguous signs of arousal).
Putting common sense to the test
Ex: Late fee daycare study
Hypothesis
A testable prediction about what will happen under specific circumstances if the theory is correct.
Ex: Subjects who encounter the lady on the dangerous bridge (vs. safe bridge) are more likely to call her and include erotic content in their written responses.
Data
A set of observations that are gathered to evaluate the hypothesis.
Replication study
Repetition of the study with a new group of participants. Direct replications attempt to recreate the original experiment exactly. Conceptual replications try to recapture the original finding using different methods or measures.
Open science movement
Initiative to make scientific research, data, and methods openly accessible and transparent, with the goal of increasing reproducibility of research.
Meta-analysis
Combination of the results of multiple studies (since there is only so much confidence one can obtain from a single study).
Peer review
Critical evaluation of the study’s quality by trained psychological scientists.
Just because a research paper has been peer reviewed does not mean it’s free of limitations.
Variable
Anything (typically something of interest) that can take on different values.
Ex (person): age, gender, ethnicity, mood, dance floor confidence level.
Ex (context, condition, or situation): time of day, temperature, ambient noise, stressfulness.
Manipulated variable
Variable intentionally changed by the researcher. Not all variables can be manipulated.
Ex: participants assigned to low vs. high stress condition.
Measured variable
A variable whose values are simply recorded. Used in every study, as all variables can be measured.
Ex: # of life stress events a participant has experienced within the past year.
Operational variable
Specific description of how a variable will be measured or manipulated in a study. (From abstract to concrete, quantifiable, specific.)
Creating an operational definition = operationalizing a variable.
Ex: Operationalizing positive emotionality = smiles in high school yearbook photos (using systematic method to describe and analyze facial movement).
Fixed response questionnaires (surveys)
Specific set of questions and possible responses predetermined by the researchers.
Ex: Beck Depression Inventory.
Open-ended (self-report) questions
Participant gives any answer that comes to mind. Helpful when studying something we don’t know much about yet. A way of gathering information to generate more specific questions later on.
Self-report
People describe themselves and/or their behaviour.
Ex: Asking participants how many hours they spend on social media per week.
Self-report advantages
- Allows us to “get inside people’s heads”.
- Easy, relatively inexpensive (in the case of surveys).
- Allows us to collect data from more participants, which will make our study stronger.
Self-report limitations
- Social desirability bias.
- May be difficult to identify and verbalize experience (ex: how one feels).
- Not always aware of why we do the things we do.
- Often relies on retrospective report: memories may be inaccurate or coloured (biased) by current experience. This can be mitigated by asking participants to report their experience soon after it happens (ex: immediately after, or at the end of each day).
Social desirability bias
Tendency to answer questions in a manner that will be viewed favourably by others. Includes impression management (“faking good”), which can be mitigated with anonymous participation, and self-deceptive enhancement (honestly held but unrealistic self-views).
Behavioural observation
Direct observation: Researchers observe and record the occurrence of behaviour. Can take place in a lab or the field and use technology.
Ex lab research: Stage emergency to examine factors promoting or inhibiting helping behaviour. Ex naturalistic research: Jane Goodall’s observation of intergroup warfare among Gombe chimps.
Behavioural observation advantages
- More objective than self-report (if done right).
- Observe real-world behaviour (or at least a close approximation).
- Source of nuanced, rich information.
- Possible to capture behaviours in their natural context.
Behavioural observation limitations
- More time and resource-intensive. (Requires extensive training to achieve consistency and minimize bias. May not be able to recruit as many participants.)
- Reactivity
(May use unobtrusive observation/recording, but this raises ethical issues.)
Reactivity
A change in behaviour caused by the knowledge one is being observed.
Indirect measures
Designed to avoid reactivity and social desirability. Ex: reaction time = the time it takes to respond to a stimulus on a screen. Can be used to assess implicit attitudes (the automatic tendency to associate a given stimulus with positive or negative feelings).
Indirect measures pros
- Avoid social desirability & reactivity problems (could be particularly useful for sensitive topics).
Indirect measures cons
- Big gap between construct of interest and operationalization (can we be sure that we are studying what we think we are studying?).
Physiological responses
Body’s reaction to various experiences/stimuli.
Ex: autonomic nervous system activity, hormone changes, immune system changes, brain activity.
Physiological responses pros
- Interesting in their own right (ex: understanding link between relationships and health).
- Outside participants’ control (not susceptible to social desirability bias, etc.).
Physiological responses cons
- Very expensive = smaller sample size.
- Ambiguity in interpretation.
- Could be more invasive (depending on the measure).
Population of interest
The full set of cases the researcher is interested in.
Sample
The group who participated in research, and who belong to the larger group (the population of interest) that the researcher is interested in understanding. Sample size matters (bigger sample = better estimate)!
Ex: Jelly beans in jar
Random sample
Every person in the population of interest has equal chance of inclusion.
You cannot obtain a random sample of the world population.
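A minimal sketch of random sampling in Python, assuming a hypothetical population of 10,000 student IDs (the numbers are invented for illustration):

```python
import random

# Hypothetical population of 10,000 student IDs (invented for illustration).
population = list(range(10_000))

# random.sample gives every ID an equal chance of ending up in the 100-person sample.
sample = random.sample(population, k=100)
print(len(sample), sample[:5])
```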
Case study
Researchers study one or two individuals in depth, often those who have a unique condition. Do not generalize to the larger population, but may offer theoretical insights and research inspirations.
Correlational research
A type of study that measures two (or more) variables in the same sample of people, and then observes the relationship between them.
Ex: Social media x well-being
Scatterplot
A figure used to represent a correlation. The x-axis represents values for one variable, and the y-axis represents values for the other variable.
- Each individual in the study is represented by a dot placed between the axes according to that individual’s values on the two variables.
- Relationships can be positive (sloping upwards) or negative (sloping downwards); strong (dots clustered tightly together) or weak (dots spread out).
Correlation coefficient / Pearson’s r / r
Statistic showing the direction and the strength of a relationship. Ranges from -1.0 to +1.0. Direction of the relationship is indicated by - or +. Strength of relationship is indicated by the value:
- The closer it is to 0, the weaker the relationship.
- The closer it is to -1 or +1, the stronger the relationship.
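As a sketch of computing r, the snippet below uses Python’s statistics module (correlation is available in Python 3.10+) on two invented variables echoing the social media x well-being example; the data values are made up for illustration:

```python
from statistics import correlation  # Pearson's r; available in Python 3.10+

# Invented paired observations: social media hours (x) and well-being scores (y).
x = [1, 2, 3, 5, 6, 8, 9, 10]
y = [8, 7, 7, 6, 5, 4, 3, 2]

r = correlation(x, y)
print(round(r, 2))  # close to -1.0: a strong negative relationship
```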
Three criteria that must be met to establish causality
1) Two variables must be correlated.
2) One variable must precede the other.
3) There must be no reasonable alternative explanations for the pattern of correlation.
An experiment is the ideal way to establish causality.
Experimental research
A study in which one variable is manipulated, and the other is measured (while all other variables are kept constant).
Independent variable
The manipulated variable in an experiment.
Dependent variable
The measured variable in an experiment. This variable depends on the level (or version) of the independent variable.
Random assignment
Each participant is as likely to be assigned to one condition as to another, so that pre-existing differences average out across groups.
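A minimal sketch of random assignment, assuming a hypothetical list of participant IDs: after shuffling, the first half goes to the experimental condition and the rest to the control condition.

```python
import random

# Hypothetical participant IDs (invented for illustration).
participants = [f"P{i:02d}" for i in range(1, 21)]

random.shuffle(participants)  # every ordering of participants is equally likely
half = len(participants) // 2
experimental = participants[:half]
control = participants[half:]

print("Experimental:", experimental)
print("Control:", control)
```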
Control group
A condition comparable to the experimental condition in every way except that it lacks the one “ingredient” hypothesized to produce the expected effect on the dependent variable.
Moderator variable
The effect of the independent variable on the dependent variable is conditional on the value of the moderator.
The independent variable does not cause the moderator.
Ex: Maybe social media is only detrimental for younger individuals (i.e., age is moderator).
Mediator variable
The independent variable exerts its effect on the dependent variable through some other variable.
Ex: Social media use increases upward social comparisons, which leads to depression (upward social comparison is the mediator).
Measurement validity / Construct validity
Are you measuring what you think you are measuring? The measure should make sense, be grounded in theory, be associated with theoretically similar measures, and have predictive value.
Reliability
Do you get the same results every time you administer the measure?
- Test-retest reliability = scores measured at more than one time point don’t fluctuate.
- Inter-rater reliability = different raters score about the same.
A measure can be reliable, but not valid.
Internal validity
Can we rule out alternative explanations in an experiment? Internal validity is threatened by the presence of confounds.
How to avoid threats to internal validity
- Keep experimental conditions constant for all variables (except for the variable you want to study).
- Use random assignment
- Standardize study scripts
- Do not reveal hypotheses
- Make the study double-blind (if possible)
Confound
An alternative explanation for a relationship between two variables; “muddies up” results. Occurs when two experimental groups accidentally differ on more than just the independent variable.
Ex: Rating a professor’s competency based on a profile – is the effect due to gender or age?
Placebo effect
Participants may experience improvement after receiving an inert substance or inactive treatment. Without a placebo control condition, we would not know whether the improvement we see can be attributed to our treatment. Takeaway: Expectations are really powerful.
Ex: Knee surgery video.
Double-blind procedures
Neither the experimenters nor the participants know who is in the experimental group or control group. This reduces the influence of bias and expectations.
Observer expectancy effect
- The experimenter may, consciously or unconsciously, cue the participant in a way that confirms their expectations, ask leading questions, or interpret the participant’s behaviour differently (confirmation bias).
- The subtle cues (demand characteristics) from the experimenter may give the participant a sense of what is expected of them. The participant may try to “help” by acting in a way that conforms to the experimenter’s expectations.
Ex: The curious case of “Clever Hans”.
Differential attrition
Participants drop out from experimental and control groups at different rates; the participants that drop out are not random. This is a common threat to internal validity.
External validity
1) Can our results be generalized to other samples?
2) Can our results be generalized to other situations?
Difficult to establish high external and internal validity in the same study. Solution = Collect more data.
Statistic
Numerical value derived from a dataset that can help us describe the data or evaluate our research hypothesis.
Descriptive statistics
Summarize sets of data.
Ex: mean, median, mode, standard deviation.
Effect size
Values describing the strength of an association or magnitude of the effect.
Ex: r coefficient.
Inferential statistics
Help us assess whether there is sufficient evidence to support a claim or hypothesis. Allow us to make inferences about the population from our sample using rules of probability.
Ex: Hypothesis testing.
Cohen’s d
Difference between groups (e.g., experimental & control) expressed in terms of standard deviation. Determined not just by the absolute difference between group means, but also by their spread.
Ex: Tallians vs Shortians.
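A sketch of Cohen’s d under one common convention (difference in means divided by the pooled standard deviation); the “Tallians vs Shortians” scores below are invented for illustration.

```python
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Difference between group means expressed in units of the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    pooled_var = ((n1 - 1) * stdev(group1) ** 2 + (n2 - 1) * stdev(group2) ** 2) / (n1 + n2 - 2)
    return (mean(group1) - mean(group2)) / pooled_var ** 0.5

# Invented height-like scores for the two fictional groups.
tallians = [180, 185, 178, 190, 184]
shortians = [165, 170, 168, 172, 166]
print(round(cohens_d(tallians, shortians), 2))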
Hypothesis testing
An inferential technique; how do we know that the difference we observed between our two groups did not occur by chance alone?
Null hypothesis
A type of statistical hypothesis that proposes that no statistical significance exists in a given set of observations.
Ex: There is no effect of quitting social media on depression.
P-values
The probability of getting a result as extreme as the one we observed if there really was no difference between the two groups (or no relationship between two variables). I.e. how likely the obtained results are under the null hypothesis.
- Takes on values between 0 and 1.
- P-value < 0.05 = reject the null.
- P-value ≥ 0.05 = do not reject the null.
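A minimal sketch of obtaining a p-value for a two-group comparison, assuming SciPy is available; the depression scores for a hypothetical “quit social media” group and control group are invented, and the 0.05 cutoff matches the convention above.

```python
from scipy import stats  # assumes SciPy is installed

# Invented depression scores for a "quit social media" group and a control group.
quit_group = [10, 12, 9, 11, 8, 10, 9, 12]
control_group = [14, 13, 15, 12, 16, 14, 13, 15]

# Independent-samples t-test: the p-value is the probability of a difference at
# least this large if the null hypothesis (no group difference) were true.
result = stats.ttest_ind(quit_group, control_group)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
if result.pvalue < 0.05:
    print("Reject the null hypothesis.")
else:
    print("Do not reject the null hypothesis.")
```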
Factors affecting the size of P-values
- The size of the observed effect. All else being equal, larger effects are more likely to be statistically significant.
- The number of participants in our study. All else being equal, results are more likely to be significant when we have more participants. With very large samples, even small effects may be statistically significant.
Statistical significance ≠ practical significance.
Ex: Flip a coin for a cup of coffee. How many times does one have to win for the other to accuse them of cheating?
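To make the coin-flip example concrete, here is a sketch using the binomial formula with Python’s standard library: it prints the probability of winning at least k out of 10 fair flips by chance alone (the 10 flips are an assumed number), which is the kind of p-value you would weigh before accusing someone of cheating.

```python
from math import comb

def p_at_least(k, n=10, p=0.5):
    """Probability of winning at least k of n fair coin flips by chance alone."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

for k in range(6, 11):
    print(f"P(at least {k}/10 wins) = {p_at_least(k):.3f}")
```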
What the P-value does not mean
- Statistical significance (low p value) does not mean the hypothesis is “true”.
- Statistical insignificance (high p value) does not mean the hypothesis is “false”.
- A statistically significant p-value does not mean that the finding is important from a practical point of view. With a big enough sample size, even a trivial difference could be statistically significant.
Institutional Review Board (IRB)
Panel made up of researchers and community members tasked with evaluating whether a research study meets ethical standards:
- Autonomy;
- Beneficence;
- Justice.
Autonomy
Each participant must have the right, without intimidation or coercion, to decide whether to participate in a study.
Informed consent
Researcher must fully explain study procedures, including risks and potential benefits, prior to participation.
Deception in research may be required to maintain the integrity of a study. Here, the board would weigh harm to participants against benefits.
Beneficence
- Obligation to promote well-being & minimize harm.
- Benefits of the study must outweigh the risks of harm. (What is the risk to participants? Can it be minimized? What are the benefits to society? Is there a risk of not doing this research?)
- Low risk with high benefit is most preferred for approval.
Justice
Fairness in distribution of benefits and burdens of research without discrimination or favouritism. Participants must be representative of the population that will benefit from the study.
Examples of unethical experiments
- Tuskegee syphilis experiments.
- Concentration camp experiments.
- MK Ultra mind control experiments.
Guideline principles for psychologists testing on non-human animal subjects
- Replacement: Find alternatives for animals when possible.
- Refinement: Modify procedures to minimize animal distress. Provide humane housing conditions.
- Reduction: Use as few animal subjects as possible.
Theory-data cycle
The process of the scientific method, in which scientists collect data and can either confirm or disconfirm a theory.
Scientific method
The process of basing one’s beliefs on systematic, objective observations of the world, usually by setting up research studies to test ideas.
Descriptive research
A type of study in which researchers measure one variable at a time, with the goal of describing what is typical.
Third-variable problem
When a correlation observed between two variables can actually be explained by some third variable.
Descriptive statistics
Graphs or computations that describe the characteristics of a batch of scores, such as its distribution, central tendency, or variability.
Frequency distribution
A bar graph in which the possible scores on a variable are listed on the x-axis from the lowest to highest, and the total number of people who got each score is plotted on the y-axis.
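A minimal sketch of tallying a frequency distribution with Python’s collections.Counter; the quiz scores are invented, and an actual bar graph would put the scores on the x-axis and the counts on the y-axis as described above.

```python
from collections import Counter

# Invented quiz scores for a class of 15 students.
scores = [3, 4, 4, 5, 5, 5, 6, 6, 6, 6, 7, 7, 8, 8, 9]

frequencies = Counter(scores)
for score in sorted(frequencies):           # x-axis: possible scores, lowest to highest
    print(score, "#" * frequencies[score])  # y-axis: number of people with each score
```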
Mean
Arithmetic average of a group of scores.
Median
A measure of central tendency that is the middlemost score.
Mode
The most common score in the batch.
Standard deviation
A variability statistic that calculates how much, on average, a batch of scores varies around its mean.
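A sketch computing these four descriptive statistics with Python’s statistics module; the batch of scores is invented for illustration.

```python
from statistics import mean, median, mode, stdev

# Invented batch of scores.
scores = [3, 4, 4, 5, 5, 5, 6, 6, 6, 6, 7, 7, 8, 8, 9]

print("Mean:  ", round(mean(scores), 2))   # arithmetic average
print("Median:", median(scores))           # middlemost score
print("Mode:  ", mode(scores))             # most common score
print("SD:    ", round(stdev(scores), 2))  # average spread of scores around the mean
```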
Open science
The practice of sharing one’s data, materials, analysis plans, and published articles freely so others can collaborate, use, verify, and learn about the results.
Preregistration
When researchers make their hypotheses, methods, and planned statistical analyses public before they carry out a study.
Experiment
A conducted study that investigates the direct effect of an independent variable on a dependent variable.
Difference between a data set’s average score and its variability
While the central tendency, or average, tells you where most of your points lie, variability summarizes how far apart they are.