Quantitative Flashcards
Goal of research
design studies carefully so that alternative interpretations become implausible
Methods are about designing a study so that, if a particular finding is obtained, we can reach a clear conclusion
Illusory Correlation
cognitive bias that occurs when we focus on two events that stand out and occur together
How do we know things
feelings, intuition, authority (experts), reasoning (logic)- reasoning is only sound if its assumptions are true
How do we know things part II
Empiricism- the idea that knowledge is based on observation. Science = empiricism + reasoning
Process of science
Hypothesis -> new hypothesis -> theory building -> body of knowledge
Goodstein's Evolved Theory of Science
1) data play a central role
2) scientists are not alone- observations are reported to other scientists and the public
3) science is adversarial- hypotheses can be falsified or supported
4) peer reviewed
Tenets of Science
Empiricism
Replicability
Falsifiability- claims must be testable
Parsimony- simple account
Hypothesis gains support
A hypothesis cannot be proved
Extend literature
take idea further
remove confounds and improve generalizability
behavioral science goals
describe behavior, predict behavior, explain behavior, determine the causes of behavior
causation
temporal, and covariation, and elimination of alternatives
Efficacy vs effectiveness
Efficacy: does the intervention produce the expected result under ideal circumstances? Effectiveness: the degree of benefit in real-world clinical settings
Construct Validity
Adequacy of the operational definition of a variable
Internal Validity
Ability to draw conclusions about causal relationships
Integrity of experiment
Ability to draw a causal link between the IV and the DV
Mediating Variables:
Psychological processes that mediate the effect of a situational variable on a particular response
Construct vs Variable
The construct is the abstract idea; the variable is what is used to measure or test it
Operational Definitions
Set of defined and outlined procedures used to measure and manipulate variables
A variable must have operational definition to be studied empirically
Allows others to replicate!
Construct validity
Adequacy of the operational definition of variables
Does the operational definition reflect the true theoretical meaning of the variable?
Nonexperimental method
Variables are observed as they occur naturally
If they vary together, there is a relationship (correlation)
Reduction of internal validity
Experimental Control
Extraneous variables are kept constant
Every feature of the environment is held constant except the manipulated variable
Strong internal validity requires:
Temporal precedence
Covariation between the two variables
Elimination of plausible alternative explanations
Issues When Choosing A Method
Often the higher the internal validity, the lower the external validity (generalization)
Harder to generalize when strict experimental environment
Reliability and validity of measurement
Not to be confused with internal or external validity of a study
However, reliability and validity of measurement affects internal validity of a study
Measured score = “true” score (real score on the variable) + measurement error
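The decomposition above gives a reliability coefficient: the proportion of observed-score variance that is true-score variance. A minimal sketch (the variance numbers are made up for illustration):

```python
# Classical test theory: measured score = true score + measurement error.
# Reliability = var_true / (var_true + var_error), i.e. the share of
# observed-score variance that reflects the "true" score.

def reliability(var_true: float, var_error: float) -> float:
    return var_true / (var_true + var_error)

print(reliability(8.0, 2.0))  # small error -> high reliability (0.8)
print(reliability(8.0, 8.0))  # error as big as true variance -> 0.5
```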
Reliability of Measures
Consistency or stability of a measure of behavior
We expect measures to give us the same result each time
Should not fluctuate much
Test-retest reliability- same individuals measured at two points in time
Practice effects – subjects do better simply because they are more practiced the 2nd time
Maturation – subjects change simply because time has passed
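Test-retest reliability is simply the Pearson correlation between the two administrations. A sketch with hypothetical scores (the data are invented):

```python
from statistics import mean

def pearson(x, y):
    # Plain Pearson correlation, written out for transparency.
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# The same five people tested at two points in time (made-up scores).
time1 = [10, 12, 9, 15, 11]
time2 = [11, 13, 9, 14, 12]
r = pearson(time1, time2)
print(f"test-retest r = {r:.2f}")  # stable scores -> r close to 1
```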
Alternate forms reliability
Individual takes 2 different forms of the same test
Also at 2 different times
Internal consistency reliability
Measures whether several items that purport to measure the same general construct produce similar scores
Assessed using responses at only one point in time
In general, the more questions, the higher the reliability
3 common measures of internal consistency:
Item total
Split-half reliability
Cronbach’s alpha (α)
Item-total
Correlation between an individual item and the total score without that item
For example, if you had a test that had 20 items, there would be 20-item total correlations. For item 1, it would be the correlation between item 1 and the sum of the other 19 items
Helpful in identifying items to remove
Or in creating a short form
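The item-total procedure above can be computed directly. A sketch with invented 5-person, 4-item data, where item 4 is deliberately built to run opposite to the rest (the kind of item this procedure flags for removal):

```python
from statistics import mean

def pearson(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Rows = respondents, columns = items (made-up data; item 4 is "bad").
data = [
    [1, 2, 1, 5],
    [2, 2, 3, 4],
    [3, 3, 3, 3],
    [4, 5, 4, 2],
    [5, 4, 5, 1],
]

item_total = []
for i in range(len(data[0])):
    item = [row[i] for row in data]
    rest = [sum(row) - row[i] for row in data]  # total score WITHOUT item i
    item_total.append(pearson(item, rest))
    print(f"item {i + 1}: item-total r = {item_total[-1]:.2f}")
```

Items 1-3 correlate positively with the rest of the test; item 4 correlates strongly negatively, so it would be a candidate for removal.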
Split-half reliability
Correlation of the total score on one half of the test, with the total score on the other half
Randomly divided items
Spearman-Brown split-half reliability coefficient
We want > .80 for adequate reliability
However, for exploratory research, a cutoff as low as .60 is not uncommon
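A sketch of split-half reliability with the Spearman-Brown correction, r_full = 2r / (1 + r), which projects the half-test correlation up to full-test length (the data and the item split are invented):

```python
from statistics import mean

def pearson(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def spearman_brown(r_half: float) -> float:
    # Corrects the half-test correlation up to full-test length.
    return 2 * r_half / (1 + r_half)

# Made-up data: 5 people x 4 items, split into halves (items 1,3 vs 2,4).
data = [
    [1, 2, 2, 1],
    [2, 2, 3, 3],
    [3, 4, 3, 3],
    [4, 3, 5, 4],
    [5, 5, 4, 5],
]
half_a = [row[0] + row[2] for row in data]
half_b = [row[1] + row[3] for row in data]
r = pearson(half_a, half_b)
print(f"half-half r = {r:.2f}, Spearman-Brown = {spearman_brown(r):.2f}")
```

Note the corrected coefficient is always at least as large as the raw half-half correlation, since the full test is longer than either half.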
Cronbach’s alpha
How closely related a set of items are as a group
How well all the items “hold together”
Simply put:
Average of all possible split-half reliability coefficients
Expressed as a number between 0 and 1
Generally want > .80, (in practice > .70 considered ok)
By far the most common measure you will see reported
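Cronbach's alpha can be computed from the item variances and the total-score variance: alpha = k/(k-1) * (1 - sum of item variances / variance of total score). A sketch with made-up data (population variances are used; sample variances give the same alpha, since the scaling cancels in the ratio):

```python
from statistics import pvariance

def cronbach_alpha(data):
    # data: rows = respondents, columns = items
    k = len(data[0])
    item_vars = [pvariance([row[i] for row in data]) for i in range(k)]
    total_var = pvariance([sum(row) for row in data])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Made-up responses: 5 people x 4 items that "hold together" well.
data = [
    [1, 2, 2, 1],
    [2, 2, 3, 3],
    [3, 4, 3, 3],
    [4, 3, 5, 4],
    [5, 5, 4, 5],
]
a = cronbach_alpha(data)
print(f"alpha = {a:.2f}")  # well above the .80 rule of thumb
```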
Interrater reliability
Correlation between the observations of 2 different raters on a measure
Measured by: Cohen’s Kappa
By convention > .70 is considered acceptable
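Cohen's kappa corrects raw percent agreement for the agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e). A sketch with two hypothetical raters coding ten observations:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    n = len(rater1)
    # p_o: observed proportion of agreement
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # p_e: agreement expected by chance, from each rater's base rates
    c1, c2 = Counter(rater1), Counter(rater2)
    cats = set(rater1) | set(rater2)
    p_e = sum((c1[c] / n) * (c2[c] / n) for c in cats)
    return (p_o - p_e) / (1 - p_e)

# Two raters classifying the same 10 observations (made-up codes).
r1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes"]
r2 = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
print(f"kappa = {cohens_kappa(r1, r2):.2f}")
```

Here the raters agree on 8 of 10 cases (80%), but kappa is noticeably lower because much of that agreement would be expected by chance alone.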
Construct Validity
To what extent does the operational definition of a variable actually reflect the true theoretical meaning of the variable?
Does the measure reflect the operational definition?
Ex. Depression
DSM-5 criteria
BDI (Beck Depression Inventory) symptoms
Face validity
Content of the measure appears to reflect the construct being measured
Very subjective
Easy for a participant to “fake”
Content Validity
Extent to which a measure represents all facets of a given construct
Subject matter experts may be part of the process
Criterion Validity
How well scores on one measure predict an outcome on another measure (the criterion)
Predictive Validity
Scores on the measure predict behavior on a criterion measured at a future time
Ex: GRE -> grad school success
Concurrent Validity
Relationship between measure and a criterion behavior at the same time