POLI 110 FINAL Flashcards
Levels of Measurement
- Nominal - what exists/type: unranked categories based on presence/absence of traits, exhaustive (religion, party affiliation, crime type, regime type, cause of death)
- Ordinal - amount: ranked categories based on more/less of something; intervals are not meaningful; levels are relative, not absolutely defined (university rankings, test score percentiles, ideology, level of democracy, strictness of gun laws, strongly agree/neutral/disagree)
- Interval - amount: numbers that rank cases on consistent, meaningful intervals indicating how much more/less of something each case has than another; zero is arbitrary and doesn't mean absence, so ratios are meaningless (year, temperature, date)
- Ratio - amount, or amount relative to time/place/conditions: numbers that rank cases on consistent, meaningful intervals; differences indicate how much more/less of something each case has; zero indicates absence, so ratios are meaningful (time since, change over time, counts of events, rates, proportions, percentages, gun deaths)
Process to Proving/Evaluating a Descriptive Claim + Issues that Arise
Summary: is there or is there not a situation at odds with our values? What is its nature, and is a value judgment relevant? These are key components of causal claims. Can evidence prove claims to be wrong, or lead us to accept false claims? Abstraction→Observable→Procedure
Descriptive Claim+Case: specific individual, group, event, action existing in specific time & place that we are interested in identifying, grouping, measuring attributes
* Lack of transparency/systematicity
Concept: define terms transparently; abstract, general terms applied to particular cases/instances that can be used systematically, not opaque or idiosyncratic, so claims can be scientifically tested and build onto theories
* Validity Error: variable doesn’t map onto concept
Variable: measurable/observable property in principle that corresponds to a concept, varies across & b/w cases+time, translate concepts into something we can observe/measurable, should correspond to concept & doesn’t correspond to other concepts
* Measurement Error: procedure or by chance doesn’t return true value
Measurement: procedure for determining the value a variable takes for specific cases based on observation, how to observe & translate world into a value of a variable, transparent & systematic procedure with known uncertainty to observe attributes of specific cases, not opaque, bias or high uncertainty
Value of Science in Politics
Politics: how people live together in communities, how should we live, organize and who/what is a member
Science: keeping assumptions open to challenge and scrutinizing the ways in which claims may be wrong.
1) Science helps us be rational in responding to political crises; it is a form of knowledge about the world free from, or less susceptible to, manipulation, interference, domination, power, and ideology
* Science can answer what is happening, causes, outcomes and consequences of some action (“is”), such as climate change, immigration, inequality, social media, technology, new or old problem
2) Science is value neutral: it cannot answer "ought or should" questions. How can it help us solve value questions, avoid becoming a tool of domination/oppression, and grapple with indoctrinated values?
* Science cannot resolve questions of value (Weber), cannot tell us what we should do, such as what is good vs bad, desirable vs undesirable
What is Power?
Politics is fundamentally about power, and science can provide justifications for that power with the capacity to motivate individuals to alter their behavior. Power is the ability of A to motivate B to think/do something it would not otherwise have thought/done; it involves justification and is normatively neutral, carrying no value: it could be "good or bad"
To have and exercise power means being able to influence, use, determine, occupy or even seal off the space of reasons for others
Justification + Key Elements
Justification: a reason to motivate someone to adopt or alter some behavior by shaping their picture of reality; includes value judgements (some "should", moral intuitions to prefer a "good" justification = prescriptive claim "should"), factual claims about the world (is = descriptive claims), and the ability to factually learn whether justifications are good (is = causal claims)
* Value(s) about what is good/desirable (heaven, violence bad security good, more people=more support, climate change bad)
* Factual claim(s) about state of the world/reality to show relevance of values (donating to church gives you excess grace, increase in violence, bigger crowd, Climate change)
* Causal Factual claim(s) about what causes various phenomena (enough grace would bring you to heaven, migrants cause violence, more people support Trump, CO2 drive CC)
Poor Justification v. Good Justification
Criteria (Critical Theory Principle): the acceptance of a justification does not count if the acceptance itself is produced by the coercive power which is supposedly being justified, i.e., if it depends on domination or unjustified power as the method/procedure of justification; the criterion concerns procedure, not content specifically
Poor Justification: acceptance of the justification is produced by the coercive power which is supposedly being justified
Good Justification: no threat of violence, no duping/misleading/misrepresentation; others are treated as we would want to be treated
EXAMPLES:
* silencing critics, censorship, control over info, violence, distortion/misrepresentation, undermining or sponsoring/advertising research or beliefs
* Domination: one justification for power dominates all other reasons by limiting ability of others to question/challenge by controlling info or using threats/violence
* Violence: others reduced to objects to be moved/destroyed, its use means A no longer can motivate a change in the behavior of B, a loss of power, material capability for violence is meaningless when it loses justification. Power isn’t just material/brute capability but requires value
How can facts help us?
- Interrogate content and quality of justifications about what the world is and what causes what,
- Investigate how power may be used to coerce/manipulate us into accepting justifications
Plato’s Allegory of the Cave
Truth=real world, puppet show=perceived/power influenced world, our perception of our political world can be manipulated/tricked, those who shape what/how we see have power over us so proper justification could be impossible
Elements of Sampling
- Population: full set of cases interested in describing
- Sample: subset of the population that is observed/measured and used to generalize to the entire population; the larger the sample, the more accurate the inference and the smaller the random error
- Inference: description of unmeasured population based on measure of sample, always with uncertainty as only sample is measured
Sampling
Purpose: used when there are too many cases to observe to answer a descriptive claim directly; a relatively small sample can give an accurate representation of the entire population (ex. CAN ~16,000)
When is Sampling Error also a Measurement Error?
Sampling Error=Measurement Error when measure requires inference about population
Sampling Distribution + Use
Sampling Distribution: with only one sample in hand, compare it to a simulation of all possible samples & their results under the same procedure, visualized using a histogram, to assess bias in the procedure + how much random error there is, by comparing means and spread
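A minimal sketch of this idea in Python, assuming a hypothetical population where true support is 52% (all numbers here are illustrative, not course data): draw many samples with the same procedure, then compare the mean of the sampling distribution to the true value (bias) and its spread (random error).

```python
import random
import statistics

random.seed(42)

# Hypothetical population: 1 = supports a policy, 0 = does not (true support 52%).
population = [1] * 5200 + [0] * 4800
true_value = statistics.mean(population)  # 0.52

# Simulate many possible samples drawn with the same procedure (n = 500 each).
sample_means = [
    statistics.mean(random.sample(population, 500))
    for _ in range(2000)
]

# Bias: does the average sample result match the population value?
bias = statistics.mean(sample_means) - true_value
# Random error: how spread out are the individual sample results?
spread = statistics.stdev(sample_means)

print(f"true value: {true_value:.3f}")
print(f"average of sample means: {statistics.mean(sample_means):.3f}")
print(f"bias: {bias:+.4f}, random error (spread): {spread:.4f}")
```

With random sampling the bias lands near zero while the spread stays visible, which is exactly what the histogram comparison on the card is meant to show.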
Sampling Error + Types
Sampling Error: a type of measurement error; Value(sample) − Value(population) ≠ 0
- Sampling Bias: cases in the sample aren't representative of the population because the sampling process doesn't give every member an equal chance of being in the sample, causing an error that is consistently in the same direction (ex. not all students in class respond, especially those working, consistently making it look like we pay less for rent)
- Random Sampling Error: due to chance the sample doesn't reflect the population, coming out too high/low compared to the population average; these errors cancel out over many samples and produce the margin of error = sampling uncertainty (ex. people in the sample misrepresent themselves or misclick the survey)
What Makes a Good Sample?
- Large+many samples
- Random Sampling means No Sampling Bias: all samples have an equal probability of being chosen, so on average inferences about the population are unbiased regardless of sample size (on average, sample average = population average). Guarantees no systematic error/bias since everyone has an equal chance of being selected. Random error remains, but random sampling tells us exactly how much: the margin of error
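A hedged sketch of why the margin of error depends on sample size rather than population size, using the standard 95% formula for a sample proportion (the function name and the sample sizes, including the card's ~16,000 Canada figure, are illustrative):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a sample proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5): the margin shrinks with n, no matter how big the population is.
for n in (500, 1000, 16000):
    print(n, round(margin_of_error(0.5, n), 4))
```

At n = 1000 the margin is about ±3 points; at n = 16,000 it is under ±1 point, which is why a sample that small can describe a whole country.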
What Happens to Data Without Random Sampling
Bias error, systematically leaves out a part of the population
Survey suggested Biden would win by 8.4% (sample), while he actually won by 4.5% (population). What are possible Sampling Error, Sampling Bias, & Measurement Bias
Sampling Error: Since Value of Sample doesn’t equal Value of Population there is sampling error
Sampling Bias: Democrats more excited to do survey than Republicans so more democrats in sample → Sample is unrepresentative of population
Measurement Bias: shyness from Republicans → on average republican support is lower however sample could still be representative
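The sampling-bias story above can be sketched as a simulation. Everything here is an assumption for illustration: a hypothetical electorate with a true +4.5 margin, and response rates tuned so that differential nonresponse (Democrats answering slightly more often) pushes the sampled margin toward the survey's +8.4.

```python
import random

random.seed(3)

# Hypothetical electorate with a true Biden margin of +4.5 points.
electorate = [1] * 52250 + [0] * 47750   # 1 = Biden voter, 0 = Trump voter

# Sampling bias via differential nonresponse: Biden voters respond a bit
# more often (rates are illustrative assumptions, not real response data).
def responds(voter: int) -> bool:
    return random.random() < (0.27 if voter == 1 else 0.25)

sample = [v for v in electorate if responds(v)]

true_margin = (2 * sum(electorate) / len(electorate) - 1) * 100
est_margin = (2 * sum(sample) / len(sample) - 1) * 100

print(f"true margin:    {true_margin:+.1f}")
print(f"sampled margin: {est_margin:+.1f}")  # inflated by sampling bias
```

No amount of extra data fixes this: the error is systematic, not random, because one group is consistently over-represented.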
Tolerability of Measurement Error
Measurement bias/random measurement error are a problem when they create a situation where the measurement procedure fails weak severity (it is incapable of finding the claim wrong even if it is wrong, or of finding the claim right even if it is right)
* When bias works against what we are claiming, it is tolerable; otherwise intolerable
* When random error is large enough to alter the conclusion, it is intolerable; otherwise tolerable
* Problem of Attenuation Bias: when observing a pattern, large random error is intolerable, as big outliers and too much noise make the association impossible to discover
* Relative change over time: bias staying constant over time is tolerable; otherwise intolerable
Tolerability Depending on Type of Descriptive Claim (S = systematic error, R = random error):
* Type of Specific phenomenon: S + R intolerable
* Amount/Frequency of Phenomena: S intolerable, R tolerable
* Relative Amount/Frequency of Phenomena across Diff Places/Times: R intolerable, S tolerable if constant
* Patterns/Correlation b/w 2 Diff Phenomena: R intolerable, S tolerable if constant
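Attenuation bias can be illustrated with a small simulation (numbers are illustrative assumptions): adding random measurement error to one variable pulls the observed correlation toward zero, hiding a real association.

```python
import math
import random

random.seed(1)

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# True relationship: y depends strongly on x.
x = [random.gauss(0, 1) for _ in range(5000)]
y = [2 * xi + random.gauss(0, 1) for xi in x]

# Measure x with random error (noise unrelated to the true values).
x_noisy = [xi + random.gauss(0, 2) for xi in x]

print(f"correlation with true x:  {corr(x, y):.2f}")
print(f"correlation with noisy x: {corr(x_noisy, y):.2f}")  # attenuated toward 0
```

This is why random error is intolerable for pattern/correlation claims even though it "cancels out" for averages: the noise doesn't cancel in the association, it dilutes it.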
Causes of Measurement Error in Social Science
Human error
Systematic/Bias Measurement Errors:
* Subjectivity/Perspective: researcher systematically perceives/evaluates cases incorrectly (ex. gender/racial bias in selecting candidates, police reports' perceptions of objective threat, media echo chambers affecting beliefs)
* Motives/Incentives to Misrepresent: the observed generate data shaped by social norms, which discourage revealing socially undesirable info and reflect values about what is important/relevant/interesting (social desirability bias) (ex. news reporting, "How racist are you?", not understanding the question). Political actors have agendas to conceal info from each other, the wealthy misrepresent assets to avoid taxation, police officers facing prosecution hide misconduct
* Use of Data Beyond Intended Purposes: without knowing how data is produced, unanticipated errors can arise (ex. double counting values by combining two agencies' data; undocumented migrants aren't all detected, causing undercounting)
Random Measurement Error: anything that affects values that are unrelated to actual values for the observed case
* Imperfect memory
* Typos/mistakes
* Arbitrary changes (ex.Mood, hangry, weather)
* Researcher interpretation
* Misperceptions
* Observed have motives/incentives to mis-represent
* Measurement tools used for purposes other than intended
Some bias is good: bias working against the claim makes the test harder to pass, giving stronger severity
Types of Claims
Empirical: can be evaluated using science assuming there is an objective world that we share open to scrutiny, what is/exists, how things that exist affect each other
* Basis: observation of the world, no value/assumptions about what is good/desirable
* Descriptive Claims “is”: what exists/existed/will exist in the world, its frequency/amount across different places/times, patterns, correlation/shared appearance/non-appearance with different phenomena
* Causal Claim "causes/effects": how X affects/causes Y, not just correlation/appearing in some pattern; conditions under which something happens; process through which one thing affects another. Recognize: includes a causal verb or phrase (causes, because, influences, makes happen, increases, decreases, results in, necessary for); if X were manipulated, it would change Y
Normative: cannot be fully evaluated using science, what is desirable/undesirable, should/shouldn’t, too much/not enough, better/worse, best/worst
* Basis: assume a value about what is desirable/undesirable
* Value Judgements “is good/bad”: can’t be evaluated with science, state what goal/ideal is right/good or provides criteria/rules for judging what is better/worse, not invalid/bad empirical claim
* Prescriptive Claim "should": partially evaluated with science; includes empirical claims in its basis (evidence supporting an empirical claim about the consequence of some action) plus an assumption that some value judgment is correct; states what actions should be taken; overlaps with justifications/reasons given by power. To be true, both N&E must be accepted: we must accept the causal claim that A→B and the value judgment that B is good
How to Find Sources/Reasons of Error in Procedure
- Comparison with known quantities or better measurement procedures
- Understand process of observation to identify limitations, incentives, specific steps that might lead to errors
- Pattern of Error Identifies Type, Direction & Magnitude of Error
- How Error Affects Evidence for Claims:
Type: random or systematic; the source of error suggests a systematic direction of error
Direction: systematic pattern upward or downward
Magnitude: large or small, how wrong could it be
Measurement Error + Types
Measurement Error: the difference b/w observed value & true value doesn't = 0; the truth is different than what was observed. Both types can occur at the same time and have different implications
* Bias/Systematic Measurement Error: measurement procedure obtains values that are on average too high or low or incorrect compared to the truth, consistent systematic pattern that occurs after repeated measurement, can vary across subgroups. Good when uniform across cases when looking at relative values, Bad when looking for absolute values or if differs across cases, more data won’t solve the issue.
* Random Measurement Error: random features of measurement process that cause errors in both directions that balance out after many experiments, no pattern/systematic tilt to errors or underlying process, in aggregate values balance out. Good when false negative better than false positive, Bad when we need precision/observe few cases, solved by more data
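The contrast above can be sketched with a toy measurement procedure (the true value, offsets, and noise levels are illustrative assumptions): averaging many measurements removes random error but leaves systematic bias untouched.

```python
import random

random.seed(7)

true_value = 100.0

# Biased procedure: reads ~5 units too high every time (systematic error).
biased = [true_value + 5 + random.gauss(0, 1) for _ in range(10000)]

# Noisy procedure: errors in both directions that cancel out on average.
noisy = [true_value + random.gauss(0, 10) for _ in range(10000)]

biased_avg = sum(biased) / len(biased)
noisy_avg = sum(noisy) / len(noisy)

print(f"biased average: {biased_avg:.1f}")  # stays near 105: more data won't fix it
print(f"noisy average:  {noisy_avg:.1f}")   # near 100: random error averages out
```

This is the card's point that more data solves random error but not bias: the noisy average converges on the truth, the biased one converges on the wrong number.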
Severity Test When Evaluating Descriptive Claims
Want evidence that is capable of showing claim to be wrong (weak severity) and stand up to multiple checks on where it could be wrong (strong severity), sensitive to properties (falsifiable), failure points (assumptions)
* Concepts not transparent/systematic fails weak severity, if passes continues to
* Variable doesn’t map onto concept (lack validity) fails weak severity, if passes continues to
* Procedure doesn’t return true value (measurement error) fails weak severity
Validity
Validity: variables may not correspond/map to the concept; even if the measure is perfect, the evidence is then potentially irrelevant and may actually capture a different concept. Does the variable accurately capture this concept or another one?
* Issues: subjective perception of concept, maps onto other concepts, other variables could better map onto concept
* Variable(s) with validity ensures a true causal effect
Variables + Types
Variable(s): measurable/observable property in principle that corresponds to a concept, varies across & b/w cases+time, translate concepts into something we can observe/measurable, should correspond to concept & doesn’t correspond to other concepts
* Absolute: values are counts in raw units ($, #)
* Relative: values are fractions, rates, ranks, % (fractional, no units)
Concept + Criteria + Definition
Concepts: define terms transparently; abstract, general terms applied to particular cases/instances that can be used systematically, not opaque or idiosyncratic (ex. chair). Concepts abstract away from (overgeneralize) the highly particular, complex, unique features of reality, never perfectly corresponding to it. Without them all experiences are completely unique and independent; we could not anticipate or predict regularities/similarities in the world, nor function/act. Too abstract can stray from reality; imposing a concept on reality has consequences (ex. artificial forests) and conceptual limits (ex. borders). Concepts are defined using observable traits that identify what it means to be in the category
1) Abstractions from reality
2) Defining concepts used to answer descriptive claims
3) Relevant & observable traits makes something an “X”
4) Objective=used even if disagree
Good scientific concepts can be understood & used by all, regardless of whether one agrees the label is right; without this it becomes difficult to falsify: undefined terms, appeals to loopholes/cherry picking, unreplicable, unobservable
* Testing Claims Scientifically: red states is accessible definition, observable, traits tell what it means. Transparent: clear, accessible definition, label later, traits are about what it means to be in this category. Used Systematically: no loopholes, tied to observable attributes
* Building Theories (not focus): red states is useless to prediction. Tied to Prediction: find regularities, shared behavior/actions to better understand, 2 things with same definition should be produced by same elements/conditions and affect others the same way, relevant to ordinary use
The choice of label reflects value judgements & common usage; what is disputed is rarely the definition itself but the label chosen for it (its power). Many definitions exist for the same label depending on the questions we ask and the values we have
Logic of Inference
Logic is valid when, if the premises are true, the conclusion must be true:
Confirmation/Verification: evidence that claim is right
Falsification: evidence that the claim is false; embodies the severity requirements (open to scrutiny, able to be falsified, not merely confirmed).
* The claim could be false, yet many other things could be at fault instead: auxiliary claims, or the warrants & theories linking claims to empirical predictions, could be wrong, and the evidence doesn't rule out other explanations. Difficult because claims are too complex to admit simple falsifications; it is hard to isolate one test that falsifies and yet embodies the strong severity requirement
* Conspiracy: the instruments conspire to confirm the claim even if it is false; starts from the result, treating H as if true when it may be false (a rigged hypothesis); always invokable, guaranteed, with no way to prove or falsify this line of logic; fails the weak severity requirement. Something other than H explains the data that appears to confirm H