Midterm Flashcards
availability heuristic
things that come to mind easily tend to guide our thinking
present/present bias
tendency to notice when things are present together while failing to consider what is absent (ex: remembering coincidences but not the many times they failed to occur)
confirmation bias
tendency to look only for info that agrees with what we already believe
bias blind spot
belief that we are unlikely to be biased
empiricism
using evidence from the senses (or instruments that assist the senses) as the basis for conclusions (ideas/intuitions checked against reality)
research question
question researcher seeks to answer (expressed in terms of the variables)
inspiration for research questions:
-Informal observations
-Practical problems
-Previous research
theory
a coherent explanation or interpretation of one or more phenomena
-Functional (why)
-Mechanistic (how)
theory-data cycle
theory => research question(s) => research design => hypotheses => data => supports & strengthens OR doesn’t support & revises theory/design
hypothesis
an empirically testable proposition about some fact/behavior/relationship, usually based on theory, that states an expected outcome resulting from specific conditions or assumptions
basic research
conducted primarily to gain a better understanding of phenomena
applied research
conducted primarily to address a practical problem
translational research
uses the lessons from BASIC research to develop & test APPLICATIONS to healthcare, psychotherapy, treatments, or interventions
basic-applied research cycle
Basic research => Applied research => Translational research
peer review cycle
1) Author submits manuscript to a journal (can suggest certain people to review or not to review)
2) Editors assess the manuscript (rejects, transfers, or sends to reviewers)
3) Reviewed (single-blind, double-blind, transparent, open)
4) Editor addresses comments => Author makes revisions => Editor assesses again
5) Finally rejected, transferred, or accepted
empirical papers
-Report of an original study
-Abstract, intro, methods, results, & discussion
-Quantitative info
review article
-Qualitative review of the scholarly lit on a topic
-Draw conclusions about trends, controversies, & future directions
-“Review” in title
meta-analysis
-Quantitative review of the evidence on a topic (statistical techniques to evaluate weight of evidence)
-“Meta-analysis” in title
-May be one component of a paper
theoretical article
-Describes a theory or model of a psychological process in detail
-Integrates empirical & theoretical findings to show how a theory or model can help guide future research
opinion/perspective/thought piece
-Drawing on recent empirical research
-Formulates an opinion about a controversy, important findings, or a disagreement in a theoretical foundation, methodology, or application
questions for evaluating a research question
-Is it ethical?
-Is it interesting?
-Is it important?
-Is it feasible?
conceptual variable
abstract concept/construct
operational variable
describes the way of measuring or manipulating the variable
operationalization
process of starting with a conceptual variable & creating an operational variable
measured variable
variation is observed & recorded
manipulated variable
variation is controlled by researcher
what determines if a variable is measured or manipulated?
-Some can ONLY be measured
-Some cannot be manipulated ETHICALLY
-Some can be either measured OR manipulated
frequency claims
describe the rate or degree of a single, measured variable
contains a percentage, number, or rate/time phrase
association claims
argues that one level of a variable is likely associated with a particular level of another variable (probabilistic)
causal claims
argues that one variable is responsible for changing the other
requirements to support a causal claim:
-Covariance (change in 1 associated with change in other)
-Temporal precedence (directionality)
-Internal validity (are other explanations ruled out?)
causal claim variables
independent variable (manipulated)
dependent variable (measured)
association claim variables
predictor variable (~IV)
criterion/outcome variable (~DV)
construct validity
how well is a conceptual variable operationalized? are you measuring what you think you are?
external validity
how well do the results generalize?
-To other people
-To other settings/situations/contexts
statistical validity
how well do the data support the conclusions? what is the likelihood that the results were found by chance?
internal validity
are alternative explanations sufficiently ruled out by the study’s design?
naturalistic observation
observing individuals’ behavior in the environment in which it typically occurs
case studies
in-depth examinations/observations of an individual (or a few)
structured observation
observations made of specific behaviors in a somewhat controlled setting
ethogram
inventory of operational definitions of behaviors, used when collecting observation data
state (observations)
recording the duration of a behavior
event (observations)
record the number of occurrences (behavior treated as instantaneous)
focal sampling
record observations of ONE individual
-good for obtaining info about subtle or rare behaviors
scan sampling
recording behaviors of multiple individuals at once
-predetermined interval
reactivity
individuals change their behaviors when they know they’re being watched
observer/expectancy effects
observers subconsciously change the behavior of those they are observing
observer bias
observer’s expectations influence their interpretation of behaviors
validity
the accuracy of a measure (does it capture what it is intended to capture?)
reliability
necessary for validity, but not sufficient
consistency of measurements
face validity
measure is subjectively a plausible operationalization of the conceptual variable
content validity
measure captures all parts of the defined construct
criterion validity
measure is associated with a concrete behavioral outcome that is logical
known-groups paradigm
test whether scores can discriminate among groups whose behavior is already confirmed
convergent validity
measure is strongly related to measures of similar constructs
discriminant validity
measure is not strongly associated with measures of dissimilar constructs
interrater reliability
the degree to which 2+ coders/observers give consistent ratings of a set of targets
fixes for low interrater reliability:
-Revised codebook/ethogram
-More training
-Throw out inconsistent behaviors
test-retest reliability
assesses whether scores are consistent each time they’re measured
internal reliability
assesses whether answers are consistent no matter how the question is phrased
cohen’s kappa
common measure for interrater reliability
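A minimal sketch of Cohen's kappa from scratch (the rater labels and data below are hypothetical): observed agreement is corrected for the agreement two raters would reach by chance, based on how often each rater uses each code.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater1)
    # Observed agreement: proportion of items both raters coded the same.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement, from each rater's marginal frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum((c1[k] / n) * (c2[k] / n) for k in set(rater1) | set(rater2))
    return (p_o - p_e) / (1 - p_e)

# Two coders rating the same 10 behaviors (hypothetical ethogram codes)
r1 = ["groom", "play", "play", "rest", "groom", "play", "rest", "rest", "play", "groom"]
r2 = ["groom", "play", "rest", "rest", "groom", "play", "rest", "play", "play", "groom"]
print(round(cohens_kappa(r1, r2), 2))  # about 0.70
```

Kappa of 0 means chance-level agreement; 1 means perfect agreement.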
cronbach’s alpha
correlation measure typically used for internal reliability
when to assess validity & measures
Before used to test a hypothesis
pros & cons of surveys:
Pros
-Can be very accurate
-Sometimes the only way to assess a variable
Cons
-Can be sensitive to the way that the questions are asked (order, phrasing, scales)
forced-choice questions
must choose exactly one of 2+ options
open-ended questions
can answer in a free-write way
likert scale questions
strongly agree to strongly disagree
semantic differential questions
number rating from one adjective to another
primacy effect
more likely to remember words at the beginning of a list
recency effect
most recently presented items will most likely be remembered best
leading questions
biases people to answer in a certain way
double-barreled questions
actually asking 2 questions
negatively worded questions
uses double-negative phrasing (confusing)
how should questions be ordered?
Broad to focused
response sets/non-differentiation
people respond the same way to ALL questions
acquiescence response set
responding with “agree” or “strongly agree” to everything
solved with reverse-worded questions
fence-sitting response set
respondent “plays it safe” by always answering in the middle of the scale
solved with no neutral option, even number of response options, or forced-choice questions
socially desirable responding
respondents give answers to make them “look better” than they really are
solved with anonymity or by screening out respondents using target questions
biased sample
some members of the population of interest have a higher probability of being included in the sample than others
confidence interval (CI)
a range of values, indicated by a lower & upper value, that is designed to capture the population value for an estimate (describes the uncertainty of an estimate)
margin of error
half the width of the entire confidence interval
correlational statistics
can be used in studies testing all types of claims
correlational design
tests an association claim
bivariate correlation
an association involving 2 variables
common uses of correlational designs:
-How 2 variables relate within individuals
-How 2 variables relate between different individuals
-How a variable of an individual relates to a variable of the environment
statistical validity topics for association claims:
-Strength
-Precision
-Significance (statistical)
-Replication
-Outliers
-Restriction of range
-Curvilinear
assessing relationship strength
Direction (+, -, or 0)
Strength (magnitude of r)
R^2 (variance of Y that is accounted for by X)
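Direction, strength (r), and R² can be illustrated with a hand-rolled Pearson correlation (the hours/score data are invented for the example):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation: direction & strength of a linear association."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: hours studied vs. exam score
hours = [1, 2, 3, 4, 5]
score = [55, 62, 70, 74, 85]
r = pearson_r(hours, score)
print(round(r, 3), round(r ** 2, 3))  # r (positive, strong) and R^2
```

The sign of r gives the direction, |r| the strength, and r² the proportion of variance in Y accounted for by X.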
measures of precision:
-Confidence interval
-Margin of error
primary indicator of precision
sample size (larger = more precise)
probability estimate (p)
what is the likelihood of finding this correlation by chance?
null hypothesis significance testing (hypotheses)
Hypothesis => effect of manipulation; difference between groups; correlation between variables
Null hypothesis => no effect; no real difference; no correlation
NHST possible scenarios
-True positive (data indicates hypothesis is true & it is)
-False negative/Type II error (data indicates hypothesis is false, but it is true)
-True negative (data indicates the null is true, & it is)
-False positive/Type I error (data indicates the null is false, but it is true)
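The Type I error rate can be seen in a simulation (a sketch, using a z-test on fair-coin flips): when the null is true, roughly 5% of tests still come out "significant" at the .05 level.

```python
import random

random.seed(1)  # for reproducibility

def z_significant(heads, n, z_crit=1.96):
    """Two-sided z-test (normal approximation): is the observed proportion
    of heads 'significantly' different from 0.5?"""
    se = (0.25 / n) ** 0.5
    z = (heads / n - 0.5) / se
    return abs(z) > z_crit

# The null is TRUE here (fair coin), so every 'significant' result is a
# false positive (Type I error). Expect roughly 5% of 10,000 experiments.
false_positives = sum(
    z_significant(sum(random.random() < 0.5 for _ in range(100)), 100)
    for _ in range(10_000)
)
print(false_positives / 10_000)
```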
outlier
a score that is either much higher or much lower than most of the other scores in a sample
can drastically change r & has a larger effect when the sample size is small
causes of extreme values:
-Chance
-Measurement error
-Instrument error
-Human error
-Unmeasured (third) variable
-Incomplete theoretical foundation
how to deal with extreme values:
-No definitive rules for what is an outlier
-Quantitative ways to test if a single point has a disproportionate influence on an association
-Can report results of statistical analyses with & without outliers
-May talk about in results & discussion sections
restriction of range
is there isn’t a full range of scores in one variable, the correlation can appear smaller than it truly is
how to solve restriction of range:
-Recruit individuals at the ends of the spectrum
-Statistical techniques can help correct
multiple/multivariate regression
calculates the proportion of total variability that is due to the effect of different variables
helps rule out third variables/control for them
beta
similar to r, but describes the strength & direction of the relationship between 2 variables when one or more other variables are controlled for
regression tables
-Show beta values of all predictors
-Can compare relative importance of variables
-No standard guidelines for strong/moderate/weak
-p-value of beta => probability that the beta came from a population in which the relationship is 0
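For the two-predictor case, standardized betas follow directly from the three pairwise correlations; the screen-time/age/sleep correlations below are invented for the example.

```python
def standardized_betas(r_y1, r_y2, r_12):
    """Standardized betas for two predictors, from pairwise correlations.
    Each beta is like r, but with the other predictor held constant."""
    denom = 1 - r_12 ** 2
    b1 = (r_y1 - r_y2 * r_12) / denom
    b2 = (r_y2 - r_y1 * r_12) / denom
    return b1, b2

# Hypothetical: screen time (X1) and age (X2) predicting sleep (Y)
b1, b2 = standardized_betas(r_y1=-0.40, r_y2=0.30, r_12=-0.50)
print(round(b1, 2), round(b2, 2))
```

Comparing b1 and b2 is how a regression table lets you weigh the relative importance of predictors while controlling for the others.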
third variables
A & B only appear related because C causes both A & B
mediation variables
a variable that helps explain the relationship between 2 other variables (A & B are related because A leads to C which leads to B)
moderation variables
A variable that, depending on its level, changes the relationship between 2 variables (A & B are related for one type of C, but not for another type of C)
covariates
variables being “controlled for”
when to use scatterplots:
correlation between two QUANTITATIVE variables
when to use histograms/bar graphs:
correlation between a categorical & a quantitative variable
when to use a double bar graph/histogram:
correlation between 2 CATEGORICAL variables
longitudinal designs
provide evidence for temporal precedence by measuring the same variables in the same subjects/participants at several points in time
associations calculated in longitudinal designs:
-Cross-sectional
-Autocorrelations
-Cross-lag
cross-sectional correlations
test whether 2 variables, measured at the same point in time, are correlated
problems with cross-sectional correlations:
-Don’t establish temporal precedence
-Cohort effects
cohort effects
differences in generations/ages due to time-periods
autocorrelations
test the correlation between one variable & itself, tested at 2 different time points
cross-lag correlations
a correlation between an earlier measure of one variable & a later measure of another variable
why can’t longitudinal & multiple regression establish causation?
-Multiple regression lacks temporal precedence
-Longitudinal designs still have a third-variable problem
evidence-based treatments
therapies based on research
replication
study conducted again to test reliability
falsifiability
a hypothesis that, when tested, could fail to be supported (i.e., could be shown false)
universalism
claims are evaluated according to merit, independent of researcher’s credentials/reputation
community (scientific norm)
scientific knowledge is created by a community & its findings belong to said community
disinterestedness (scientific norm)
scientists strive to discover the truth & are not swayed by their own beliefs
organized skepticism (scientific norm)
question everything, including own theories, widely accepted ideas, & “ancient wisdom”
self-report measure
recording people’s answers to questions about themselves in a questionnaire/interview
observational measure
recording observable behaviors or physical traces of behaviors
physiological measures
recording biological data
categorical variable
categories; nominal
quantitative variables
coded with meaningful numbers
ordinal scale
numerals represent a rank order
interval scale
numerals represent equal intervals between levels with no “true zero”
ratio scale
numerals have equal intervals & a “true zero”
effect size
strength of relationship between 2+ variables
parsimony
degree to which a scientific theory provides the simplest explanation of some phenomenon