PSYC 100 Chapter 2 Flashcards
scientific method
The process of basing one’s confidence in an idea on systematic, direct observations of the world, usually by setting up research studies to test ideas
theory-data cycle
The process of the scientific method, in which scientists collect data that can either confirm or disconfirm a theory.
theory
A set of propositions explaining how and why people act, think or feel.
hypothesis
A specific prediction stating what will happen in a study if the theory is correct
Data
A set of empirical observations that scientists have gathered.
replication
Conducting a study again on a new sample of participants and obtaining the same basic results.
journal
A periodical containing peer-reviewed articles on a specific academic discipline, written for a scholarly audience
variable
Something of interest that varies from person to person or situation to situation
measured variable
A variable whose values are simply recorded
manipulated variable
A variable whose values the researcher controls, usually by assigning different participants to different levels of that variable
internal validity
The ability to infer causal relationships from a study's results
external validity
The ability to generalize a study's results to the real world
Naturalistic observation
Observing real behavior without trying to actively manipulate what is going on
-High in external validity
-Low in internal validity
Case Study
Examining a small number of people in great detail; reveals nothing about how prevalent a phenomenon is in the whole population
Existence proof
one example of a psychological phenomenon
Self-report measures & Surveys
Measures usually used to assess things that are available only to the people themselves
-Easy and inexpensive to use
-Works well enough to be used for some traits
-Not all people have enough insight into themselves to report their traits accurately
-May result in response sets (distortions in answering questions)
Random Selection
Every person in a population has an equal chance of being included in a sample (a random sample can be better than a large non-randomized one)
Reliability
Whether a measurement produces consistent results (e.g., across repeated testing or across different raters)
Test-retest reliability
Does the test give the same result when administered again?
Interrater reliability
do different people agree on what they are rating?
validity
whether a measurement actually assesses what it should
Rating others (pros and cons)
-Rating others may avoid the blind spots we have about our own performance
-Susceptible to the halo effect: one positive trait can make other traits seem more positive
-Susceptible to the horns effect: disliking a person can blind you to their positive traits
Correlational Design
a research design that investigates the association between two variables
Positive Correlation
as one variable increases, another variable also increases
Negative correlation
as one variable increases, another variable decreases
No correlation
No relationship exists between two variables
Correlation is not equal to…
Causation
experiment
A design where participants are randomly assigned to conditions that manipulate an independent variable
Random Assignment
participants have an equal chance of ending up in any condition
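Random assignment can be illustrated with a short sketch (the participant IDs are hypothetical, not from the textbook):

```python
import random

participants = list(range(1, 21))  # hypothetical participant IDs 1-20
random.shuffle(participants)       # every ordering is equally likely

# Split the shuffled list in half, so each participant has an
# equal chance of landing in either condition
experimental_group = participants[:10]
control_group = participants[10:]
```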
Experimental group
the group that is manipulated by the experimenter
Control group
the group that is not manipulated
Independent Variable
the variable that is changed by the experimenter
Dependent Variable
the variable that is measured; is expected to change in response to the independent variable
Operational definition
a “working” definition of what an experimenter is measuring
Confounding variable
an additional difference between the experimental and control groups besides the independent variable
Placebo effect
participants show improvement because they expect to improve
Nocebo effect
participants show harmful effects because they expect them
Blind
when participants do not know whether they are in the experimental or control group
Experimenter Expectancy Effect
a researcher's hypothesis unintentionally biases the study's outcome
Double-blind
neither researchers nor participants know who is in what group
demand characteristics
when participants act in a way that reflects what they think the experimenter wants
descriptive formulas
the mathematical formulas that we use to describe a single variable
third-variable problem
For a given observed relationship between two variables, an additional variable that is associated with both of them, making the additional variable an alternative explanation for the observed relationship
confound
An alternative explanation for a relationship between two variables; specifically, in an experiment, when two experimental groups accidentally differ on more than just the independent variable, which causes a problem for internal validity
Central tendency
ways of measuring the most common cluster of scores in a data set
Mean
The average: the sum of all the data points divided by the number of data points
Median
when all data points are ordered, the number in the middle
Mode
the most common value in the data set
variability
or dispersion, tells us how loosely or tightly packed the data is
range
the difference between the highest and lowest values of this variable
standard deviation
a measure of how far, on average, the data points are from the mean
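The five cards above (mean, median, mode, range, standard deviation) can be computed directly with Python's `statistics` module; the scores here are hypothetical:

```python
import statistics

scores = [4, 8, 6, 5, 3, 8]  # hypothetical quiz scores

mean = statistics.mean(scores)          # 34 / 6 ≈ 5.67
median = statistics.median(scores)      # middle of sorted data: (5 + 6) / 2 = 5.5
mode = statistics.mode(scores)          # most common value: 8
data_range = max(scores) - min(scores)  # 8 - 3 = 5
sd = statistics.stdev(scores)           # typical distance from the mean (≈ 2.07)
```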
Effect size
A numerical estimate of the strength of the relationship between two variables. It can take the form of a correlation coefficient or, for an experiment, the difference between two groups (with some calculations).
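For the experimental case, one standard such calculation is Cohen's d: the group mean difference divided by a pooled standard deviation. The name and numbers here are illustrative, not from the card:

```python
import statistics

experimental = [7, 9, 8, 10, 9]  # hypothetical scores
control = [5, 6, 7, 6, 5]

n1, n2 = len(experimental), len(control)
mean_diff = statistics.mean(experimental) - statistics.mean(control)

# Pooled variance: weighted average of the two group variances
pooled_var = ((n1 - 1) * statistics.variance(experimental)
              + (n2 - 1) * statistics.variance(control)) / (n1 + n2 - 2)

d = mean_diff / pooled_var ** 0.5  # standardized mean difference (Cohen's d)
```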
Inferential statistics
Statistical procedures used to decide whether a sample's results can be applied to make conclusions about an entire population
Statistical significance level
the probability of finding a group mean difference as large as the one observed by chance alone
Base rate fallacy
ignoring the overall likelihood (base rate) of an event when judging its probability
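A quick arithmetic sketch of the fallacy (all numbers hypothetical): a condition with a 1% base rate and a fairly accurate test still yields mostly false positives:

```python
base_rate = 0.01        # 1% of people have the condition
sensitivity = 0.90      # P(test positive | condition)
false_positive = 0.09   # P(test positive | no condition)

# Overall probability of a positive test (true + false positives)
p_positive = base_rate * sensitivity + (1 - base_rate) * false_positive

# Probability of actually having the condition given a positive test
p_condition_given_positive = (base_rate * sensitivity) / p_positive
# ≈ 0.09: ignoring the 1% base rate makes a positive result seem
# far more diagnostic than it really is
```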
Informed consent
participants know they are in a study, and know what risks are involved
meta-analysis
A process in which researchers locate all of the studies that have tested the same variables and mathematically average them to estimate the effect size of the entire body of studies.
IRB approval
an institutional review board must approve the study
Debriefing
Participants must be informed of what happened in full after the experiment
Standards to be considered an ethical study
IRB approval, Debriefing, Informed consent, Scientific knowledge outweighs harm
false positive
A statistically significant finding that does not reflect a real effect.
HARKing
Hypothesizing After the Results are Known: presenting a hypothesis devised after seeing the results as if it had been made in advance
p-hacking
using questionable data analysis techniques (e.g., running many analyses and reporting only those that reach significance)
open science
the practice of sharing one’s data, hypotheses, and materials freely so others can collaborate, use, and verify the results.
preregistration
A researcher’s public statement of a study’s expected outcome before collecting any data.
scientific method
theory → hypothesis → research design → data collection → comparison of data with theory