Unit 2: Research Methods In Psychology (Chapter 2) Flashcards

1
Q

Theory

A

An integrated set of related principles that explains and generates predictions about some phenomenon in the world.

Ex: Schachter-Singer theory of emotion (uses environmental cues to label unexplained or ambiguous signs of arousal).

2
Q

Putting common sense to the test

A

Ex: Late fee daycare study

3
Q

Hypothesis

A

A testable prediction about what will happen under specific circumstances if the theory is correct.

Ex: Subjects who encounter the lady on the dangerous bridge (vs. safe bridge) are more likely to call her and include erotic content in their written responses.

4
Q

Data

A

A set of observations that are gathered to evaluate the hypothesis.

5
Q

Replication study

A

Repetition of the study with a new group of participants. Direct replications attempt to recreate the original experiment exactly. Conceptual replications try to recapture the original finding using different methods or measures.

6
Q

Open science movement

A

Initiative to make scientific research, data, and methods openly accessible and transparent, with the goal of increasing reproducibility of research.

7
Q

Meta-analysis

A

Combination of the results of multiple studies (since there is only so much confidence one can obtain from a single study).

8
Q

Peer review

A

Critical evaluation of the study’s quality by trained psychological scientists.
Just because a research paper has been peer reviewed does not mean it’s free of limitations.

9
Q

Variable

A

Anything (typically something of interest) that can take on different values.

Ex (person): age, gender, ethnicity, mood, dance floor confidence level.
Ex (context, condition or situation): time of day, temperature, ambient noise, stressfulness.

10
Q

Manipulated variable

A

Variable intentionally changed by the researcher. Not all variables can be manipulated.

Ex: participants assigned to low vs. high stress condition.

11
Q

Measured variable

A

A variable whose values are simply recorded. Used in every study, as all variables can be measured.

Ex: # of life stress events a participant has experienced within the past year.

12
Q

Operational variable

A

Specific description of how a variable will be measured or manipulated in a study. (From abstract to concrete, quantifiable, specific.)
Creating an operational definition = operationalizing a variable.

Ex: Operationalizing positive emotionality = smiles in high school yearbook photos (using systematic method to describe and analyze facial movement).

13
Q

Fixed response questionnaires (surveys)

A

Specific set of questions and possible responses predetermined by the researchers.

Ex: Beck Depression Inventory.

14
Q

Open-ended (self-report) questions

A

Participant gives any answer that comes to mind. Helpful when studying something we don’t know much about yet. A way of gathering information to generate more specific questions later on.

15
Q

Self-report

A

People describe themselves and/or their behaviour.

Ex: Asking participants how many hours they spend on social media per week.

16
Q

Self-report advantages

A
  • Allows us to “get inside people’s heads”.
  • Easy, relatively inexpensive (in the case of surveys).
  • Allows us to collect data from more participants, which will make our study stronger.
17
Q

Self-report limitations

A
  • Social desirability bias.
  • May be difficult to identify and verbalize experience (ex: how one feels).
  • Not always aware of why we do the things we do.
  • Often relies on retrospective report: memories may be inaccurate or coloured (biased) by current experience. This can be mitigated by asking participants to report their experience soon after it happens (ex: immediately after, at the end of each day).
18
Q

Social desirability bias

A

Tendency to answer questions in a manner that will be viewed favourably by others. Includes impression management (“faking good”), which can be mitigated with anonymous participation, and self-deceptive enhancement (honestly held but unrealistic self-views).

19
Q

Behavioural observation

A

Direct observation: Researchers observe and record the occurrence of behaviour. Can take place in a lab or the field and use technology.

Ex lab research: Stage emergency to examine factors promoting or inhibiting helping behaviour. Ex naturalistic research: Jane Goodall’s observation of intergroup warfare among Gombe chimps.

20
Q

Behavioural observation advantages

A
  • More objective than self-report (if done right).
  • Observe real-world behaviour (or at least a close approximation).
  • Source of nuanced, rich information.
  • Possible to capture behaviours in their natural context.
21
Q

Behavioural observation limitations

A
  • More time and resource-intensive. (Requires extensive training to achieve consistency and minimize bias. May not be able to recruit as many participants.)
  • Reactivity
    (May use unobtrusive observation/recording, but this raises ethical issues.)
22
Q

Reactivity

A

A change in behaviour caused by the knowledge one is being observed.

23
Q

Indirect measures

A

Designed to avoid reactivity and social desirability. Ex: reaction time = the time it takes to respond to a stimulus on a screen. Can be used to assess implicit attitudes (the automatic tendency to associate a given stimulus with positive or negative feelings).

24
Q

Indirect measures pros

A
  • Avoid social desirability & reactivity problems (could be particularly useful for sensitive topics).
25
Q

Indirect measures cons

A
  • Big gap between construct of interest and operationalization (can we be sure that we are studying what we think we are studying?).
26
Q

Physiological responses

A

Body’s reaction to various experiences/stimuli.

Ex: autonomic nervous system activity, hormone changes, immune system changes, brain activity.

27
Q

Physiological responses pros

A
  • Interesting in their own right (ex: understanding link between relationships and health).
  • Outside participants’ control (not susceptible to social desirability bias, etc.).
28
Q

Physiological responses cons

A
  • Very expensive = smaller sample size.
  • Ambiguity in interpretation.
  • Could be more invasive (depending on the measure).
29
Q

Population of interest

A

The full set of cases the researcher is interested in.

30
Q

Sample

A

The group that actually participates in the research, drawn from the larger group (the population of interest) that the researcher is interested in understanding. Sample size matters (bigger sample = better estimate)!

Ex: Jelly beans in jar

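A minimal simulation sketch of the "bigger sample = better estimate" point, assuming Python with NumPy (the jar size and the 30% figure are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical jar: 10,000 jelly beans, 30% of them red.
population = rng.random(10_000) < 0.30

for n in (10, 100, 1_000):
    # Draw many random samples of size n and estimate the proportion of red beans.
    estimates = [population[rng.choice(population.size, size=n, replace=False)].mean()
                 for _ in range(1_000)]
    # Larger samples cluster more tightly around the true value (smaller spread).
    print(n, round(np.mean(estimates), 3), round(np.std(estimates), 3))
```
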
31
Q

Random sample

A

Every person in the population of interest has equal chance of inclusion.
You cannot obtain a random sample of the world population.

32
Q

Case study

A

Researchers study one or two individuals in depth, often those who have a unique condition. Do not generalize to the larger population, but may offer theoretical insights and research inspirations.

33
Q

Correlational research

A

A type of study that measures two (or more) variables in the same sample of people, and then observes the relationship between them.

Ex: Social media x well-being

34
Q

Scatterplot

A

A figure used to represent a correlation. The x-axis represents values for one variable, and the y-axis represents values for the other variable.
- Each individual in the study is represented by a dot placed between the axes according to that individual’s variable values.
- Relationships can be positive (sloping upwards) or negative (sloping downwards); strong (dots clustered tightly together) or weak (dots spread out).

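A quick sketch of how a scatterplot like this could be drawn, assuming Python with matplotlib and NumPy (the data are invented just to show a negative relationship):

```python
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)

# Invented data: social media use (x-axis) vs. well-being score (y-axis).
hours = rng.uniform(0, 6, 50)
wellbeing = 8 - 0.6 * hours + rng.normal(0, 1, 50)   # downward slope = negative relationship

plt.scatter(hours, wellbeing)                        # one dot per individual
plt.xlabel("Social media use (hours/day)")
plt.ylabel("Well-being score")
plt.show()
```
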
35
Q

Correlation coefficient / Pearson’s r / r

A

Statistic showing the direction and the strength of a relationship. Ranges from -1.0 to +1.0. Direction of the relationship is indicated by - or +. Strength of relationship is indicated by the value:
- The closer it is to 0, the weaker the relationship.
- The closer it is to -1 or +1, the stronger the relationship.

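A small worked example, assuming Python with NumPy (the five pairs of scores are hypothetical), showing how r captures both direction and strength:

```python
import numpy as np

# Hypothetical scores for five participants.
stress = np.array([1, 2, 3, 4, 5])
sleep_hours = np.array([8.0, 7.5, 6.5, 6.0, 5.0])

r = np.corrcoef(stress, sleep_hours)[0, 1]
print(round(r, 2))   # negative sign = downward relationship; near -1 = strong
```
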
36
Q

Three criteria that must be met to establish causality

A

1) Two variables must be correlated.
2) One variable must precede the other.
3) There must be no reasonable alternative explanations for the pattern of correlation.
An experiment is the ideal way to establish causality.

37
Q

Experimental research

A

A study in which one variable is manipulated, and the other is measured (while all other variables are kept constant).

38
Q

Independent variable

A

The manipulated variable in an experiment.

39
Q

Dependent variable

A

The measured variable in an experiment. This variable depends on the level (or version) of the independent variable.

40
Q

Random assignment

A

Each participant is equally likely to be assigned to any condition, so that, on average, the groups end up the same on everything except the manipulated variable.

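A minimal sketch of random assignment using Python's standard library (participant IDs and condition labels are placeholders):

```python
import random

participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

random.shuffle(participants)              # chance alone determines the order
half = len(participants) // 2
high_stress_group = participants[:half]   # experimental condition
low_stress_group = participants[half:]    # control condition

print(high_stress_group, low_stress_group)
```
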
41
Q

Control group

A

A condition comparable to the experimental condition in every way except that it lacks the one “ingredient” hypothesized to produce the expected effect on the dependent variable.

42
Q

Moderator variable

A

The effect of the independent variable on the dependent variable is conditional on the value of the moderator.
The independent variable does not cause the moderator.

Ex: Maybe social media is only detrimental for younger individuals (i.e., age is moderator).

43
Q

Mediator variable

A

The independent variable exerts its effect on the dependent variable through some other variable.

Ex: Social media use increases upward social comparisons, which leads to depression (upward social comparison is the mediator).

44
Q

Measurement validity / Construct validity

A

Are you measuring what you think you are measuring? The measure should make sense, be grounded in theory, be associated with theoretically similar measures, and have predictive value.

45
Q

Reliability

A

Do you get the same results every time you administer the measure?
- Test-retest reliability: scores from the same measure taken at more than one time point don’t fluctuate.
- Inter-rater reliability: different researchers score about the same.
A measure can be reliable, but not valid.

46
Q

Internal validity

A

Can we rule out alternative explanations in an experiment? Internal validity is threatened by the presence of confounds.

47
Q

How to avoid threats to internal validity

A
  • Keep experimental conditions constant for all variables (except for the variable you want to study).
  • Use random assignment
  • Standardize study scripts
  • Do not reveal hypotheses
  • Make the study double-blind (if possible)
48
Q

Confound

A

An alternative explanation for a relationship between two variables; “muddies up” results. Occurs when two experimental groups accidentally differ on more than just the independent variable.

Ex: Rating a professor’s competency based on a profile – is the effect due to gender or age?

49
Q

Placebo effect

A

Participants may experience improvement after receiving inert substances or inactive treatments. Without a placebo control condition, we would not know whether the improvement we see can be attributed to our treatment. Takeaway: Expectations are really powerful.

Ex: Knee surgery video.

50
Q

Double-blind procedures

A

Neither the experimenters nor the participants know who is in the experimental group or control group. This reduces the influence of bias and expectations.

51
Q

Observer expectancy effect

A
  • The experimenter may, consciously or unconsciously, cue the participant in a way that confirms their expectations, ask leading questions, or interpret the participant’s behaviour in line with their expectations (confirmation bias).
  • The subtle cues (demand characteristics) from the experimenter may give the participant a sense of what is expected of them. The participant may try to “help” by acting in a way that conforms to the experimenter’s expectations.

Ex: The curious case of “Clever Hans”.

52
Q

Differential attrition

A

Participants drop out of the experimental and control groups at different rates, and those who drop out are not a random subset. This is a common threat to internal validity.

53
Q

External validity

A

1) Can our results be generalized to other samples?
2) Can our results be generalized to other situations?
Difficult to establish high external and internal validity in the same study. Solution = Collect more data.

54
Q

Statistic

A

Numerical value derived from a dataset that can help us describe the dataset or evaluate our research hypothesis.

55
Q

Descriptive statistics

A

Summarize sets of data.

Ex: mean, median, mode, standard deviation.

56
Q

Effect size

A

Values describing the strength of an association or magnitude of the effect.

Ex: r coefficient.

57
Q

Inferential statistics

A

Help us assess whether there is sufficient evidence to support a claim or hypothesis. Allow us to make inferences about the population from our sample using rules of probability.

Ex: Hypothesis testing.

58
Q

Cohen’s d

A

Difference between groups (e.g., experimental & control) expressed in terms of standard deviation. Determined not just by absolute difference between groups, but also their spread.

Ex: Tallians vs Shortians.

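A worked sketch of Cohen's d using the pooled standard deviation, assuming Python with NumPy (the two sets of scores are invented):

```python
import numpy as np

# Invented scores for an experimental and a control group.
group1 = np.array([5.0, 6.0, 7.0, 6.5, 5.5])
group2 = np.array([4.0, 4.5, 5.0, 5.5, 4.0])

n1, n2 = len(group1), len(group2)
pooled_sd = np.sqrt(((n1 - 1) * group1.var(ddof=1) + (n2 - 1) * group2.var(ddof=1))
                    / (n1 + n2 - 2))

d = (group1.mean() - group2.mean()) / pooled_sd   # group difference in SD units
print(round(d, 2))
```
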
59
Q

Hypothesis testing

A

An inferential technique; how do we know that the difference we observed between our two groups did not occur by chance alone?

60
Q

Null hypothesis

A

A type of statistical hypothesis that proposes that no statistical significance exists in a set of given observations.

Ex: There is no effect of quitting social media on depression.

61
Q

P-values

A

The probability of getting a result at least as extreme as the one we observed if there really was no difference between the two groups (or no relationship between the two variables), i.e., how likely the obtained results are under the null hypothesis.
- Takes on values between 0 and 1.
- P-value < 0.05 (the conventional threshold) = reject the null.
- P-value > 0.05 = do not reject the null.

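A minimal illustration of obtaining a p-value for a two-group comparison, assuming Python with SciPy (the depression scores are simulated, not real data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated depression scores: participants who quit social media vs. those who kept using it.
quit_group = rng.normal(loc=10, scale=3, size=40)
kept_group = rng.normal(loc=12, scale=3, size=40)

t_stat, p_value = stats.ttest_ind(quit_group, kept_group)
print(round(p_value, 4))   # reject the null only if the p-value falls below 0.05
```
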
62
Q

Factors affecting the size of P-values

A
  • The size of the observed effect. All else being equal, larger effects are more likely to be statistically significant.
  • The number of participants in our study. All else being equal, results are more likely to be significant when we have more participants. With very large samples, even small effects may be statistically significant.
    Statistical significance ≠ practical significance.

Ex: Flip a coin for a cup of coffee. How many times does one have to win for the other to accuse them of cheating?

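A hedged sketch of the coin-flip intuition, assuming Python with SciPy: how improbable must a winning streak be, under a fair coin, before cheating looks likely?

```python
from scipy import stats

# P(winning at least k of n flips) if the coin is truly fair (the null hypothesis).
for n, k in [(10, 8), (10, 9), (100, 60), (100, 70)]:
    p_value = stats.binom.sf(k - 1, n, 0.5)   # P(X >= k) under the null
    print(n, k, round(p_value, 4))
# With larger n, a smaller winning proportion is enough to look "significant" --
# the same idea as large samples making small effects statistically significant.
```
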
63
Q

What the P-value does not mean

A
  • Statistical significance (low p-value) does not mean the hypothesis is “true”.
  • Statistical non-significance (high p-value) does not mean the hypothesis is “false”.
  • A statistically significant p-value does not mean that the finding is important from a practical point of view. With a big enough sample size, even a trivial difference could be statistically significant.
64
Q

Institutional Review Board (IRB)

A

Panel made up of researchers and community members tasked with evaluating whether a research study meets ethical standards:
- Autonomy;
- Beneficence;
- Justice.

65
Q

Autonomy

A

Each participant must have the right, without intimidation or coercion, to decide whether to participate in a study.

66
Q

Informed consent

A

Researcher must fully explain study procedures, including risks and potential benefits, prior to participation.

Deception in research may be required to maintain the integrity of a study. Here, the board would weigh the harm to participants against the benefits.

67
Q

Beneficence

A
  • Obligation to promote well-being & minimize harm.
  • Benefits of the study must outweigh the risks of harm. (What is the risk to participants? Can it be minimized? What are the benefits to society? Is there a risk of not doing this research?)
  • Low risk with high benefit is most preferred for approval.
68
Q

Justice

A

Fairness in distribution of benefits and burdens of research without discrimination or favouritism. Participants must be representative of the population that will benefit from the study.

69
Q

Examples of unethical experiments

A
  • Tuskegee syphilis experiments.
  • Concentration camp experiments.
  • MK Ultra mind control experiments.
70
Q

Guideline principles for psychologists testing on non-human animal subjects

A
  • Replacement: Find alternatives for animals when possible.
  • Refinement: Modify procedures to minimize animal distress. Provide humane housing conditions.
  • Reduction: Use as few animal subjects as possible.
71
Q

Theory-data cycle

A

The process of the scientific method, in which scientists collect data and can either confirm or disconfirm a theory.

72
Q

Scientific method

A

The process of basing one’s beliefs on systematic, objective observations of the world, usually by setting up research studies to test ideas.

73
Q

Descriptive research

A

A type of study in which researchers measure one variable at a time, with the goal of describing what is typical.

74
Q

Third-variable problem

A

When a correlation observed between two variables can actually be explained by some third variable.

75
Q

Descriptive statistics

A

Graphs or computations that describe the characteristics of a batch of scores, such as its distribution, central tendency, or variability.

76
Q

Frequency distribution

A

A bar graph in which the possible scores on a variable are listed on the x-axis from the lowest to highest, and the total number of people who got each score is plotted on the y-axis.

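A small sketch of building a frequency distribution from a batch of scores, assuming Python with matplotlib (the quiz scores are invented):

```python
from collections import Counter
import matplotlib.pyplot as plt

scores = [3, 5, 4, 5, 2, 5, 4, 3, 4, 4, 1, 5]   # invented quiz scores

counts = Counter(scores)              # number of people who got each score
xs = sorted(counts)                   # possible scores, lowest to highest
plt.bar(xs, [counts[x] for x in xs])
plt.xlabel("Score")
plt.ylabel("Number of people")
plt.show()
```
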
77
Q

Mean

A

Arithmetic average of a group of scores.

78
Q

Median

A

A measure of central tendency that is the middlemost score.

79
Q

Mode

A

The most common score in the batch.

80
Q

Standard deviation

A

A variability statistic that calculates how much, on average, a batch of scores varies around its mean.

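A combined sketch computing the descriptive statistics from the last few cards (mean, median, mode, standard deviation), assuming Python's built-in statistics module and an invented batch of scores:

```python
import statistics

scores = [2, 4, 4, 5, 7, 9, 11]           # invented batch of scores

print(statistics.mean(scores))            # arithmetic average
print(statistics.median(scores))          # middlemost score
print(statistics.mode(scores))            # most common score
print(statistics.stdev(scores))           # average spread around the mean (sample SD)
```
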
81
Q

Open science

A

The practice of sharing one’s data, materials, analysis plans, and published articles freely so others can collaborate, use, verify, and learn about the results.

82
Q

Preregistration

A

When researchers make their hypotheses, methods, and planned statistical analyses public before they carry out a study.

83
Q

Experiment

A

A study that investigates the direct effect of an independent variable on a dependent variable.

84
Q

Difference between a data set’s average score and its variability

A

While the central tendency, or average, tells you where most of your points lie, variability summarizes how far apart they are.