Midterm Review Flashcards
claim
a statement about what is true/valid
basis (of a claim)
the basis for a claim is the reason we should accept the truth or validity of that claim (evidence used to prove the claim is true)
four criteria of scientific evidence
- transparent procedures
- systematic use of evidence
- consider alternatives
- acknowledge uncertainty
appeal to authority
arguing that a claim is true because a person with authority says it's true
appeal to personal experience
a claim based on one’s own personal (non-systematic) observation or one’s own reaction to an observation
appeal to common sense
arguing that a claim is true because it seems intuitively obvious; a form of unscientific evidence
normative claims
a claim about what is desirable or undesirable (what should/should not be)
basis/evidence for a normative claim
must assume a value judgement about what is desirable/undesirable
value judgements
normative claims that state what goal is “right” or “good”, or provide criteria for judging what is better/worse
prescriptive claims
are normative claims that assert what kinds of actions should be taken
basis/evidence for a prescriptive claim
an empirical claim about the consequences of some action and assumption that some value judgement is correct
empirical claims
a claim about what is/exists or how things that exist affect each other
basis/evidence for empirical claims
consists of observation of the world, and no assumption about what is good/desirable
causal claims
claims about how one phenomenon (X) affects or causes another phenomenon (Y); they state that X acts on Y in some way, not merely that the two appear together in some pattern
descriptive claims
claims about what exists (or has existed/will exist in the world)
falsifiable
can prove an empirical claim wrong:
- falsifiable if the claim can be shown wrong by empirical evidence
- unfalsifiable if there is no empirical evidence that shows the claim is wrong
verifiability
an empirical claim H1 (H for hypothesis) is verifiable if, were H1 true or valid, it would imply that we should make certain empirical observations O1, which we can then look for
concepts
are abstract or general categories that we apply to particular cases using a set of rules/criteria that determine membership in the category
rules for concepts
ontological, observable, relevant
ontological concept
the traits we use to define a concept are about what it means to be in the category
observable concept
defining traits in a concept must be something we can observe (empirical)
relevant concept
traits are relevant to predicting how cases belonging to the concept affect other things, are affected by other events, or are part of a causal process
variable
a measurable property of a case (phenomenon, group, or individual) that corresponds to a concept or part of a concept (dimension) and can potentially take on different values across cases and time (it varies across cases)
measures
a procedure for determining the value a variable takes for specific cases based on observation
four levels of measurement
- nominal
- ordinal
- interval
- ratio
nominal measurement
place cases into unranked categories
- ex. type of crime
ordinal measurement
places cases into categories that are ranked
- ex. university rankings
interval measurement
assigns cases numbers that rank them and whose differences are meaningful, but with no true zero point
- ex. years (but not years since some event)
ratio measurement
assigns cases numbers that rank them and that have a true zero point, so ratios between values are meaningful
- ex. rates (unemployment)
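The interval/ratio distinction can be sketched with the two examples above (the specific numbers are hypothetical, for illustration only):

```python
# Interval scale: calendar years. Differences are meaningful,
# but there is no true zero, so ratios are not.
year_a, year_b = 1000, 2000
print(year_b - year_a)   # 1000: the gap between the years is meaningful
# year_b / year_a == 2.0, but 2000 CE is not "twice as late" as 1000 CE.

# Ratio scale: unemployment rates. Zero means no unemployment (a true zero),
# so ratios between values are meaningful.
rate_a, rate_b = 4.0, 8.0
print(rate_b / rate_a)   # 2.0: an 8% rate really is twice a 4% rate
```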
variable value types
absolute and relative
absolute values
variable values with counts given in raw units
relative values
variable values that are given as fractions, rates, or ranks
validity
degree of fit between a variable and the concept it is intended to capture (the link between variable and concept)
threats to validity
- variable does not cover enough of the concept
- variable covers things outside the concept
- variable captures different things across units: non-comparability
measurement error
the difference between the true value of a variable for a case and the observed value of the variable for that case produced by the measurement procedure
two forms of measurement error
- systematic measurement error (bias)
- random measurement error
systematic measurement error (bias)
error produced when our measurement procedure obtains values that are, on average, too high or too low (or, incorrect) compared to the truth
sources of systematic measurement error (bias)
- researcher subjectivity/interpretation
- obstacles to observation
random measurement error
errors that occur due to random features of the measurement process or the phenomenon; the values we measure are, on average, correct
sources of random measurement error
- imperfect memory, random changes in mood/concern, researcher interpretation
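A minimal simulation can show the contrast between the two forms of error. The true value, the bias of +5, and the noise level are all hypothetical numbers chosen for illustration:

```python
import random

random.seed(0)

true_value = 50.0
n = 10_000

# Systematic error (bias): every measurement is shifted in one direction,
# so the average of many measurements is wrong (here, too high).
biased = [true_value + 5 + random.gauss(0, 2) for _ in range(n)]

# Random error: measurements scatter around the truth, so the errors
# cancel out and the average of many measurements is correct.
noisy = [true_value + random.gauss(0, 2) for _ in range(n)]

print(sum(biased) / n)  # averages near 55: wrong on average
print(sum(noisy) / n)   # averages near 50: correct on average
```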
population
full set of cases (countries, individuals, etc.) we’re interested in describing
sample
a subset of the population that we observe and measure
inference
using what we observe in the sample to draw conclusions about the population
for sampling to work
we need to:
- ensure the sample is representative of the population (does not differ from the population)
- know the level of uncertainty associated with our inference
- use random sampling
random sampling
sampling cases from the population in a manner that gives all cases an equal probability of being chosen
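A short sketch of random sampling, using a hypothetical population of 1,000 ages; `random.sample` draws cases so that each has an equal probability of selection:

```python
import random

random.seed(1)

# Hypothetical population: ages of 1,000 people.
population = [random.randint(18, 80) for _ in range(1000)]

# Random sampling: every case has an equal chance of being chosen.
sample = random.sample(population, 100)

pop_mean = sum(population) / len(population)
sample_mean = sum(sample) / len(sample)
# Because the sample is representative, its mean should fall
# close to the population mean (it differs only by chance).
print(pop_mean, sample_mean)
```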
sampling error
the difference between the value of the measure for the sample and the true value of the measure for the population
sampling bias
the procedure by which cases are chosen for the sample does not give every member of the population an equal chance of being in the sample
random sampling error
by chance we get samples where there are too many/few certain types of people (compared to the population)
two varieties of sampling error
- sampling bias
- random sampling error
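The two varieties can be sketched in a small simulation. The population of incomes and the biased selection rule are hypothetical, for illustration only:

```python
import random

random.seed(2)

# Hypothetical population: 500 low incomes and 500 high incomes.
population = [30_000] * 500 + [90_000] * 500
true_mean = sum(population) / len(population)  # 60,000

# Sampling bias: the procedure only ever selects high-income cases,
# so every sample is off in the same direction.
biased_sample = [x for x in population if x > 50_000][:100]
print(sum(biased_sample) / 100)  # 90,000: systematically too high

# Random sampling error: any one random sample differs from the
# population only by chance, so the errors cancel across samples.
means = [sum(random.sample(population, 100)) / 100 for _ in range(200)]
print(sum(means) / len(means))  # close to the true mean of 60,000
```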
measurement error
incorrectly describe the world because you incorrectly observe values for the case(s) you study
sampling error
incorrectly describe the world because the sample contains cases that differ from the population you want to learn about
when is sampling error = measurement error?
sampling error is measurement error when you are evaluating descriptive claims about the population you sample (the case we measure is the population)
when is sampling error ≠ measurement error?
sampling error is not measurement error when you are evaluating claims about the cases you sample (the case we measure are, ex. the survey respondents)