Midterm 1 Flashcards
explain the scientific method research process
1) ask a question
2) assume a natural cause for the phenomenon
3) consult past research
4) state a testable hypothesis
5) draw a conclusion
6) submit report to peer-reviewed journal
explain the method for testing a hypothesis
1) design study to test hypothesis
2) seek ethical approval
3) collect data
4) analyze data
5) revise hypothesis
6) repeat 1-5 (usually a few times)
hypothesis
a possible answer (may be true or untrue) to the question asked
falsifiable hypothesis
- must be specific
- must take risks
- must be testable (genuine test tries to refute rather than confirm)
what is the purpose of the concept of falsifiability?
evaluates the scientific status of a theory
what theory talked about in class is falsifiable?
einstein’s relativity
- took risks
- opportunity to test
what theory talked about in class is not falsifiable?
freud’s psychoanalysis
- compatible with any outcome, so nothing could refute it (every case seemed to confirm it)
a hypothesis that is consistent with every possible outcome is essentially useless (it should be incompatible with some outcomes). what is an example of this, and why?
predicting that the weather tomorrow will be sunny, cloudy, rainy, or snowy –> not specific, takes no risks, always correct
what is the scientific status of a theory based on?
- falsifiability
- refutability
- testability
hypotheses generate models of the world to help us:
1) predict phenomena
2) determine causes of phenomena
3) explain phenomena
4) control phenomena
NOT DESCRIBE
what is important about a new hypothesis?
must account for everything the old one does (all the old data), and provide additional info
how long do hypotheses survive for?
until data which it can’t account for is uncovered
when is a hypothesis unfalsifiable?
- when no empirical evidence is obtainable
- when its predictions are irrefutable
- when additional assumptions are introduced after it is refuted by data
falsifiability in practice
some theories not immediately discarded after contrary evidence is obtained
- revised to improve experimental methods
- useful but not testable (hope that it will be testable one day) –> ex: string theory
- can’t be as specific because of lack of knowledge (ex: neuroscience more complex than physics)
operational definition
a specific description of how a concept will be measured
operationalization
links concepts to data collection
operational variables
quantities of interest that serve as substitutes for measuring the concept of interest
- ex: number of smiles to show happiness
what is the purpose of operational definitions?
- allow us to consistently quantify and measure concepts
- communicate ideas to others
what makes a good operational definition?
- reliability
- validity
- absence of bias (ex: external factors)
- cost (ex: low cost)
- practicality (ex: easy to measure)
- objectivity (ex: physical measurement not subjective)
- high acceptance (ex: many others have used)
bias
difference between the measurement made and the “true” value of that variable
reliability and bias must be determined over ___ measurements.
many
reliability
- reproducibility of repeated measurements
- must be based on concrete observable behaviours
- facilitates consistency across measurements
what is the same as bias
systematic error
what is the same as reliability
precision, consistency
what is the opposite of bias
accuracy, validity
what is the opposite of reliability
variability, random error, noise
theory to prediction path
theory –> hypothesis (maybe many) –> operational definition –> prediction (based on OD)
hypothesis vs. prediction
hypothesis:
- framed as a statement about something (phenomenon) that may or may not be true
- often present tense
- derived from broader theory
prediction:
- conclusion related to specific methodological details of the study
- often future tense
- derived from a more general hypothesis
what is the same as validity
accuracy
validity
- whether a measure measures what it’s intended to measure (“true value”)
- must be based on relevant behaviours
- facilitates accuracy of measurements
what is the opposite of validity
bias, systematic error
what is the same as variability
random error, noise
what is the opposite of variability
reliability, precision, consistency
variability
how spread out repeated measurements are
measurement = ?
true score + measurement error (systematic + random)
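a minimal Python sketch of this measurement model, with made-up numbers (TRUE_SCORE and BIAS are hypothetical):
import random
TRUE_SCORE = 100.0  # the "true" value of the variable
BIAS = 2.5          # systematic error: shifts every measurement the same way
def measure():
    noise = random.gauss(0, 5.0)  # random error: differs on every measurement
    return TRUE_SCORE + BIAS + noise
samples = [measure() for _ in range(10_000)]
mean = sum(samples) / len(samples)
print(mean - TRUE_SCORE)  # random error averages out over many measurements, leaving roughly BIAS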
what are some factors that may contribute to measurement error?
- specificity of operational definition not good enough
- internal noise of measurement device (living or nonliving)
interrater reliability
use multiple raters and compare the extent to which measurements agree
test-retest reliability
administer the same test (ex: IQ test) to the same people at two points in time and compare scores
2 types of test-retest reliability
1) same test
2) alternate forms
limitation of test-retest reliability
memory of the first test can affect results, so it won’t reliably measure changes
3 types of internal consistency reliability
1) split-half reliability
2) Cronbach’s alpha
3) item-total
split half reliability
randomly split the test items into two halves and compare one half with the other –> test if halves are consistent
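a minimal Python sketch, assuming a made-up subjects x items score matrix:
import statistics  # statistics.correlation is Pearson's r (Python 3.10+)
scores = [  # rows = subjects, columns = test items (made-up data)
    [4, 5, 4, 5],
    [2, 1, 2, 2],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
]
half_a = [row[0] + row[2] for row in scores]  # each subject's score on odd items
half_b = [row[1] + row[3] for row in scores]  # each subject's score on even items
print(statistics.correlation(half_a, half_b))  # high r -> the two halves are consistent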
Cronbach’s alpha
measures how closely related a set of items is as a group
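a minimal Python sketch using the standard formula alpha = (k/(k-1)) * (1 - sum of item variances / variance of total scores), with made-up data:
import statistics
scores = [  # rows = subjects, columns = test items (made-up data)
    [4, 5, 4, 5],
    [2, 1, 2, 2],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
]
k = len(scores[0])  # number of items
item_vars = [statistics.variance(col) for col in zip(*scores)]
total_var = statistics.variance([sum(row) for row in scores])
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(alpha)  # closer to 1 -> items hang together as a group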
item-total
correlate each item with the rest of the test –> look at items individually
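a minimal Python sketch of item-total correlation (each item vs. the total of the remaining items), with made-up data:
import statistics
scores = [  # rows = subjects, columns = test items (made-up data)
    [4, 5, 4, 5],
    [2, 1, 2, 2],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
]
for i in range(len(scores[0])):
    item = [row[i] for row in scores]
    rest = [sum(row) - row[i] for row in scores]  # total of the other items
    print(i, statistics.correlation(item, rest))  # low r -> item may not fit the rest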
correlation coefficient (r)
one of the best ways to quantify relationship bw 2 coders
- r > 0 = positive
- r = 0 = no relationship (OD not specific enough)
- r < 0 = negative (something wrong with coders)
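a minimal Python sketch computing r for two hypothetical coders (counting smiles, per the earlier example):
import statistics
coder_1 = [3, 0, 5, 2, 4]  # smiles counted by coder 1 in five videos (made-up)
coder_2 = [4, 1, 5, 2, 3]  # smiles counted by coder 2 in the same videos
print(statistics.correlation(coder_1, coder_2))  # near +1 -> coders largely agree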
indicators of construct validity (how well constructed the OD is)
1) face validity
2) content validity
3) predictive validity
4) concurrent validity
5) convergent validity
6) discriminant validity
face validity + example
- degree to which a test subjectively (based on what individuals think) covers the concept it’s supposed to measure (looks like it measures what it should)
- ex: face memory test
content validity + example
- degree to which a test measures all things relevant to what it’s supposed to measure
- ex: autism spectrum quotient (items corresponding to social skills, communication skills, imagination, attention to detail, attention switching)
- almost opp. of face validity
predictive validity + example
- degree to which data accurately predicts a future event based on a criterion
- ex: SAT scores predicting GPA in university
concurrent validity + example
- degree to which data for a present event correlates with a previously validated criterion
- ex: course grade
convergent validity + example
- degree to which two measurements that should be related actually are
- ex: new data agrees with other(s) in literature with same hypothesis
discriminant validity + example
- degree to which a measure is unrelated to another concept it shouldn’t be related to (want little or no relation)
- ex: new data shows little relation to others in the literature based on a different hypothesis
predictive/concurrent vs convergent/discriminant
predictive/concurrent
- based on gold standard (well known & agreed upon by many)
convergent/discriminant
- based on other measures (in literature, etc)
variable
any event, situation, behaviour, or individual characteristic that can take more than one value (can change)
divisions of variables
- quantitative
- categorical
quantitative variables + example
- have specific numbers
- ex: number of siblings, time (both measure magnitude)
categorical variables + example
- have different levels, not numbers on defined scale
- ex: eye color
how to distinguish between quantitative and categorical variables?
subtraction test –> subtract lower level from higher level
- if differences all have same meaning = quantitative
- if differences have diff meaning = categorical
types of quantitative variables
- discrete
- continuous
how to distinguish between discrete and continuous quantitative variables?
midway test –> take 2 levels and go midway between
- if the midpoint has no meaning = discrete
- if the midpoint still has meaning = continuous
discrete example
number of siblings
continuous example
time
divisions of quantitative variables
- interval
- ratio
interval scale + example
- have equal intervals but no meaningful zero
- ex: IQ
ratio scale + example
- have equal intervals and a meaningful zero (means lack of something)
- ex: speed
divisions of categorical variables
- ordinal
- nominal
ordinal scale + example
- has order
- rank differences don’t need to reflect constant change
- ex: military rank
nominal scale + example
- no particular order
- ex: eye color
Likert scale –> interval or ratio?
- treated as interval when analyzing data (technically ordinal)
- 5-point or 7-point, usually self-reported
- not ratio –> ex: can’t have zero happiness
positive linear relationship
2 variables change in same direction by set amounts (x up, y up)
negative linear relationship
2 variables change in different direction by set amounts (x up, y down)
curvilinear relationship
still has a relationship at any given point, but the direction of the relationship is not monotonic (ex: positive from 1-5 seconds, negative from 6-10 seconds)
no relationship
scatterplot is usually somewhat circular; changes in one variable are unrelated to changes in the other
linear relationship
variables change by set amount each time
non-linear relationship
variables do not change by set amount each time
monotonic relationship
- overall relationship curve moves in the same direction throughout (doesn’t matter if linear or not)
- ex: positive non-linear
non-monotonic relationship
- overall relationship changes direction in some places
- ex: curvilinear
non-experimental method
- observations only –> both variables measured
- aka correlational method
experimental method
- at least one variable manipulated (independent), one variable measured (dependent)
non-experimental vs. experimental method –> which one prefer?
experimental method
how to interpret correlation data?
1) correlation is spurious
2) A causes B
3) B causes A
4) third variable causes A and B –> A & B not correlated directly
limitation of non-experimental method
- correlation doesn’t imply causation –> spurious or third variable problem
- directionality problem (A –> B or B –> A?)
spurious correlation
- just a coincidence
- seems to happen when looking at many things –> at least some will happen to have similar patterns
confounding variables
variables intertwined with the independent variable, so you can’t determine which one is operating in a given situation (alternate explanation)
types of confounding variables
- operational definitions –> ex: poor validity
- participant factors –> ex: social status (based on individuals personal situation)
- order effect –> ex: fatigue, practice (treatment effects)
- group factors –> ex: self-selection
how to minimize confounding variables?
random assignment to conditions –> a potential confound is as likely to affect one group as the other
internal validity
degree to which all confounding variables have been controlled (how confidently cause can be inferred)
limitations of experimental method
plausible alternative explanations need to be eliminated
random assignment
each participant has an equal chance of being placed into any experimental group/condition
random sampling
randomly choose a portion of a larger group to participate in the experiment
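a minimal Python sketch contrasting the two (group names and sizes are made up):
import random
population = [f"person_{i}" for i in range(1000)]  # the larger group
sample = random.sample(population, 40)  # random sampling: who gets studied
random.shuffle(sample)
treatment = sample[:20]  # random assignment: each participant has an equal
control = sample[20:]    # chance of ending up in either condition
print(len(treatment), len(control))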