POLI 110 FINAL Flashcards

1
Q

Levels of Measurement

A
  1. Nominal - what exists/type: unranked categories based on presence/absence of traits, exhaustive (religion, party affiliation, crime type, regime type, cause of death)
  2. Ordinal - amount: ranked categories based on more/less of something; intervals are not meaningful, levels are relative rather than absolutely defined (university rankings, test score percentiles, ideology, level of democracy, strictness of gun laws, strongly agree/neutral/disagree)
  3. Interval - amount: numbers that rank cases with consistent, meaningful intervals indicating how much more/less of something each case has than another; zero and ratios are not meaningful, zero does not indicate absence (year, temperature, date)
  4. Ratio - amount, including amounts relative to time/place/conditions: numbers that rank cases with consistent, meaningful intervals; differences indicate how much more/less of something each case has; zero indicates absence and ratios are meaningful (time since, change over time, counts of events, rates, proportions, percentages, gun deaths)
2
Q

Process for Proving/Evaluating a Descriptive Claim + Issues That Arise

A

Summary: descriptive claims establish whether or not there is a situation at odds with our values, what its nature is, and whether a value judgment is relevant; they are also key components of causal claims. Can evidence show such claims to be wrong, or lead us to accept false ones? The chain runs Abstraction → Observable → Procedure
Descriptive Claim + Case: a specific individual, group, event, or action existing in a specific time & place whose attributes we are interested in identifying, grouping, or measuring
* Issue: lack of transparency/systematic use
Concept: define terms transparently; abstract, general terms applied to particular cases/instances, used systematically, not opaque or idiosyncratic, so they can be scientifically tested + build onto theories
* Validity Error: the variable doesn't map onto the concept
Variable: a property, measurable/observable at least in principle, that corresponds to a concept and varies across & b/w cases + time; translates the concept into something we can observe/measure; should correspond to its concept & not to other concepts
* Measurement Error: the procedure, systematically or by chance, doesn't return the true value
Measurement: a procedure for determining the value a variable takes for specific cases based on observation; how to observe & translate the world into a value of a variable; a transparent & systematic procedure with known uncertainty for observing the attributes of specific cases, not opaque, biased, or highly uncertain
3
Q

Value of Science in Politics

A

Politics: how people live together in communities, how should we live, organize and who/what is a member
Science: keeping assumptions open to challenge and scrutinizing the ways in which claims may be wrong.
1) Science helps us be rational in responding to political crises: it is a form of knowledge about the world that is free of, or less susceptible to, manipulation, interference, domination, power, and ideology
* Science can answer what is happening, its causes, outcomes, and the consequences of some action ("is"), such as climate change, immigration, inequality, social media, technology, whether a problem is new or old

2) Science is value neutral ("ought or should"): how can it help us solve value questions, avoid becoming a tool of domination/oppression, and grapple with indoctrinated values?
* Science cannot resolve questions of value (Weber); it cannot tell us what we should do, such as what is good vs bad, desirable vs undesirable

4
Q

What is Power?

A

Politics is fundamentally about power, and science can provide justifications for that power, with the capacity to motivate individuals to alter their behavior. Power is the ability of A to motivate B to think/do something B would not otherwise have thought/done; it involves justification and is normatively neutral (no value attached; it could be "good" or "bad")

To have and exercise power means being able to influence, use, determine, occupy, or even seal off the space of reasons for others

5
Q

Justification + Key Elements

A

Justification: a reason meant to motivate someone to adopt or alter a behavior by representing (or manipulating) reality; it includes a "should" (value judgments and moral intuitions to prefer "good" justifications: prescriptive claims), factual claims about the world (is = descriptive claims), and the ability to learn factually whether justifications are good (is = causal claims)
* Value(s) about what is good/desirable (heaven, violence bad/security good, more people = more support, climate change bad)
* Factual claim(s) about the state of the world/reality to show the relevance of those values (donating to the church gives you excess grace, an increase in violence, a bigger crowd, climate change)
* Causal factual claim(s) about what causes various phenomena (enough grace brings you to heaven, migrants cause violence, more people support Trump, CO2 drives climate change)

6
Q

Poor Justification v. Good Justification

A

Criterion (Critical Theory Principle): the acceptance of a justification does not count if the acceptance itself is produced by the coercive power that is supposedly being justified, i.e. if it depends on domination or unjustified power as a method/procedure of justification; the criterion is about procedure, not content

Poor Justification: acceptance of the justification doesn't count because the acceptance itself is produced by the coercive power which is supposedly being justified

Good Justification: no threat of violence, no duping/misleading/misrepresentation; others are treated as we would want to be treated

EXAMPLES:
* silencing critics, censorship, control over info, violence, distortion/misrepresentation, undermining or sponsoring/advertising research or beliefs
* Domination: one justification for power dominates all other reasons by limiting ability of others to question/challenge by controlling info or using threats/violence
* Violence: others reduced to objects to be moved/destroyed, its use means A no longer can motivate a change in the behavior of B, a loss of power, material capability for violence is meaningless when it loses justification. Power isn’t just material/brute capability but requires value

7
Q

How can facts help us?

A
  • Interrogate the content and quality of justifications about what the world is and what causes what
  • Investigate how power may be used to coerce/manipulate us into accepting justifications
8
Q

Plato’s Allegory of the Cave

A

Truth=real world, puppet show=perceived/power influenced world, our perception of our political world can be manipulated/tricked, those who shape what/how we see have power over us so proper justification could be impossible

9
Q

Elements of Sampling

A
  • Population: full set of cases interested in describing
  • Sample: subset of the population that is observed/measured and used to generalize to the entire population; the larger the sample, the more accurate and the smaller the random error
  • Inference: description of unmeasured population based on measure of sample, always with uncertainty as only sample is measured
10
Q

Sampling

A

Purpose: sampling is used when there are too many cases to observe directly in order to answer a descriptive claim; a relatively small sample can accurately represent an entire population (ex. roughly 16,000 respondents for Canada)

11
Q

When is Sampling Error also a Measurement Error?

A

Sampling error is also a measurement error when the measure requires an inference about the population

12
Q

Sampling Distribution + Use

A

Sampling Distribution: since we observe only one sample, we compare it to a simulation of all the possible samples a procedure could produce and their results, visualized as a histogram; comparing the distribution's mean and spread to the population value assesses the procedure's bias and the size of its random error.
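A minimal sketch of this simulation idea in Python (the rent numbers and numpy are my own assumptions, not from the course): repeat the same sampling procedure many times, record each sample mean, and check the center (bias) and spread (random error) of the resulting sampling distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: monthly rents for 100,000 people
population = rng.normal(loc=1500, scale=400, size=100_000)

# Simulate the sampling distribution: repeat the same sampling procedure many times
sample_means = [rng.choice(population, size=500, replace=False).mean()
                for _ in range(2_000)]

print("population mean:       ", round(population.mean(), 1))
print("mean of sample means:  ", round(np.mean(sample_means), 1))  # close to population mean -> little/no bias
print("spread of sample means:", round(np.std(sample_means), 1))   # size of the random sampling error
# A histogram of sample_means is the sampling distribution described above.
```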

13
Q

Sampling Error + Types

A

Sampling Error: a type of measurement error in which Value(sample) − Value(population) ≠ 0

  • Sampling Bias: cases in the sample aren't representative of the population because of the sampling process (not every member has an equal chance of being in the sample), causing an error that is consistently in the same direction (ex. not all students in the class are sampled, especially those who are working, consistently making it look like we pay less for rent)
  • Random Sampling Error: due to chance the sample doesn't reflect the population; any given sample may be too high or too low compared to the population average, but the errors cancel out over many samples; produces the margin of error = sampling uncertainty (ex. people in the sample misrepresent themselves or misclick on the survey)
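A small illustrative simulation of the two error types (my own made-up rent numbers, mirroring the card's working-students example; numpy assumed): a random sample errs in both directions and is right on average, while a procedure that never reaches working students is consistently too low.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population: assume working students (30%) pay higher rent on average
working = rng.normal(1800, 300, size=30_000)
not_working = rng.normal(1400, 300, size=70_000)
population = np.concatenate([working, not_working])

random_means = [rng.choice(population, 400, replace=False).mean() for _ in range(1_000)]
biased_means = [rng.choice(not_working, 400, replace=False).mean() for _ in range(1_000)]  # working students excluded

print("population mean:          ", round(population.mean(), 1))
print("random sampling, average: ", round(np.mean(random_means), 1))  # ~population mean: errors cancel out
print("biased procedure, average:", round(np.mean(biased_means), 1))  # consistently too low: error in one direction
```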
14
Q

What Makes a Good Sample?

A
  1. A large sample (and ideally many samples)
  2. Random Sampling addresses both kinds of sampling error (bias and random): all cases have an equal probability of being chosen, so inferences about the population are unbiased on average regardless of sample size (on average, sample average = population average). It guarantees no systematic error/bias, since everyone has an equal chance of being selected into the sample, and it tells us exactly how much random error exists: the margin of error
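A sketch of the standard 95% margin-of-error approximation for a proportion from a simple random sample (a textbook statistics formula, not something stated on this card; the sample sizes are arbitrary). It depends only on the sample size n, not on the population size, which is why roughly 16,000 random respondents can describe a whole country.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion estimated from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (400, 1_000, 16_000):
    print(f"n = {n:>6}: +/- {100 * margin_of_error(n):.1f} points")
# The population size never enters the formula: only the sample size n does.
```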
15
Q

What Happens to Data Without Random Sampling

A

Bias (systematic error): the procedure systematically leaves out a part of the population

16
Q

Survey suggested Biden would win by 8.4% (sample), while he actually won by 4.5% (population). What are possible Sampling Error, Sampling Bias, & Measurement Bias

A

Sampling Error: since the sample value (8.4%) doesn't equal the population value (4.5%), there is sampling error (the sample overstates Biden's margin by 3.9 points)
Sampling Bias: Democrats were more excited to take the survey than Republicans, so there are more Democrats in the sample → the sample is unrepresentative of the population
Measurement Bias: "shy" Republicans understate their support → measured Republican support is on average too low, even though the sample itself could still be representative

17
Q

Tolerability of Measurement Error

A

Measurement bias/random measurement error are a problem when they create a situation where the measurement procedure fails weak severity (it is incapable of finding the claim wrong even if it is wrong, or incapable of finding the claim right even if it is right)
* When the bias works against what we are claiming, it is tolerable; otherwise it is intolerable
* When random error is large enough to alter the conclusion, it is intolerable; otherwise it is tolerable
* Attenuation bias problem: when looking for a pattern, random error is intolerable, since large outliers and too much noise make the association impossible to discover
* Relative change over time: bias that stays constant over time is tolerable; otherwise it is intolerable

Tolerability depending on the type of descriptive claim (S = systematic bias, R = random error):
* Type of a specific phenomenon: S + R intolerable
* Amount/frequency of a phenomenon: S intolerable, R tolerable
* Relative amount/frequency of a phenomenon across different places/times: R intolerable, S tolerable if constant
* Patterns/correlation b/w 2 different phenomena: R intolerable, S tolerable if constant

18
Q

Causes of Measurement Error in Social Science

A

Human error
Systematic/Bias Measurement Errors:
* Subjectivity/Perspective: the researcher systematically perceives/evaluates cases incorrectly (gender/racial bias in selecting candidates, police reports' perceptions of objective threat, media echo chambers affecting beliefs)
* Motives/Incentives to Misrepresent: those being observed generate data shaped by social norms that discourage revealing socially undesirable information, and by values in society about what is important/relevant/interesting (social desirability bias); ex. news reporting, "How racist are you?", not understanding the question. Political actors have agendas to conceal information from each other, the wealthy misrepresent assets to avoid taxation, police officers facing prosecution hide misconduct
* Use of Data Beyond Intended Purposes: without knowing how the data were produced, unanticipated errors can arise
  (ex. double counting values when combining two agencies' data; undocumented migrants not all being detected causes undercounting)

Random Measurement Error: anything that affects the recorded values but is unrelated to the actual value for the observed case
* Imperfect memory
* Typos/mistakes
* Arbitrary changes (ex. mood, hunger, weather)
* Researcher interpretation
* Misperceptions
* Observed have motives/incentives to misrepresent
* Measurement tools used for purposes other than intended

Some bias can be good: bias that works against the claim makes the test more falsifiable, giving stronger severity

19
Q

Types of Claims

A

Empirical: can be evaluated using science assuming there is an objective world that we share open to scrutiny, what is/exists, how things that exist affect each other
* Basis: observation of the world, no value/assumptions about what is good/desirable
* Descriptive Claims “is”: what exists/existed/will exist in the world, its frequency/amount across different places/times, patterns, correlation/shared appearance/non-appearance with different phenomena
* Causal Claim "causes/effects": how X affects/causes Y, not just correlation/appearing in some pattern; the conditions under which something happens and the process through which one thing affects another. Recognize it by a causal verb or phrase (causes, because, influences, makes happen, increases, decreases, results in, necessary for): if X were manipulated, it would change Y

Normative: cannot be fully evaluated using science, what is desirable/undesirable, should/shouldn’t, too much/not enough, better/worse, best/worst
* Basis: assume a value about what is desirable/undesirable
* Value Judgements “is good/bad”: can’t be evaluated with science, state what goal/ideal is right/good or provides criteria/rules for judging what is better/worse, not invalid/bad empirical claim
* Prescriptive Claim "should": partially evaluated with science; includes empirical claims in its basis (evidence supporting an empirical claim about the consequences of some action) plus an assumption that some value judgment is correct; states what actions should be taken and overlaps with the justifications/reasons given by power. To accept it, both the normative and empirical parts must be accepted: we must accept the causal claim that A→B and the value judgment that B is good

20
Q

How to Find Sources/Reasons of Error in Procedure

A
  • Comparison with known quantities or better measurement procedures
  • Understand process of observation to identify limitations, incentives, specific steps that might lead to errors
  • Pattern of Error Identifies Type, Direction & Magnitude of Error
  • How Error Affects Evidence for Claims:
    Type: random or systematic; the source of error suggests a systematic direction of error
    Direction: systematic pattern upward or downward
    Magnitude: large or small, how wrong could it be
21
Q

Measurement Error + Types

A

Measurement Error: the difference b/w the observed value & the true value ≠ 0, i.e. the truth is different from what was observed; both types can occur at the same time and have different implications
* Bias/Systematic Measurement Error: the measurement procedure obtains values that are on average too high, too low, or otherwise incorrect compared to the truth; a consistent systematic pattern that persists after repeated measurement and can vary across subgroups. Tolerable when uniform across cases and we only need relative values; bad when we need absolute values or when it differs across cases; more data won't solve the issue
* Random Measurement Error: random features of the measurement process cause errors in both directions that balance out after many measurements; no pattern/systematic tilt to the errors or the underlying process, so in aggregate the values balance out. Acceptable when a false negative is better than a false positive; bad when we need precision or observe few cases; solved by more data
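A toy contrast of the two error types (entirely made-up numbers; numpy assumed): a systematic +2 offset survives any amount of data, while zero-mean random noise averages out as observations accumulate.

```python
import numpy as np

rng = np.random.default_rng(2)
true_value = 10.0

biased = true_value + 2.0 + rng.normal(0, 1, size=10_000)  # systematic error: every reading shifted up by 2
noisy  = true_value + rng.normal(0, 3, size=10_000)        # random error: zero-mean noise in both directions

print("biased measure, average of 10,000 readings:", round(biased.mean(), 2))  # stays near 12: more data doesn't help
print("noisy measure,  average of 10,000 readings:", round(noisy.mean(), 2))   # near 10: errors cancel out in aggregate
```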

22
Q

Severity Test When Evaluating Descriptive Claims

A

We want evidence that is capable of showing the claim to be wrong (weak severity) and that stands up to multiple checks on where it could be wrong (strong severity), sensitive to the claim's properties (falsifiable) and to its failure points (assumptions):
* If concepts are not transparent/systematic, the claim fails weak severity; if it passes, continue to
* If the variable doesn't map onto the concept (lacks validity), it fails weak severity; if it passes, continue to
* If the procedure doesn't return the true value (measurement error), it fails weak severity

23
Q

Validity

A

Validity: whether the variable accurately captures the concept rather than some other concept; a variable may not correspond/map to the concept, so even if the measure is perfect the evidence is potentially irrelevant (it may even capture a different concept)
* Issues: subjective perception of the concept, the variable maps onto other concepts, other variables could map onto the concept better
* Variables with validity are needed for the evidence to speak to the true causal effect of interest

24
Q

Variables + Types

A

Variable(s): measurable/observable property in principle that corresponds to a concept, varies across & b/w cases+time, translate concepts into something we can observe/measurable, should correspond to concept & doesn’t correspond to other concepts
* Absolute: values are counts in raw units ($, #)
* Relative: values are fractions, rates, ranks, % (fractional, no units)

25
Q

Concept + Criteria + Definition

A

Concepts: transparently defined, abstract, general terms applied to particular cases/instances; used systematically, not opaquely or idiosyncratically (ex. chair). A concept abstracts away from (overgeneralizes) the highly particular, complex, unique features of reality, so it never corresponds perfectly to reality. Without concepts, all experiences are completely unique and independent; we cannot anticipate or predict regularities/similarities in the world, nor function or act. Too abstract and a concept strays from reality; imposing a concept on reality has consequences (ex. artificial forests) and conceptual limits (ex. borders). Concepts are defined using observable traits that identify what it means to be in the category
1) Abstractions from reality
2) Defined concepts are used to answer descriptive claims
3) Relevant & observable traits make something an "X"
4) Objective = usable even by those who disagree

Good scientific concepts can be understood & used by all, regardless of whether one agrees that the label is the right one; without them it becomes difficult to falsify claims: undefined terms, appeals to loopholes/cherry picking, unreplicable, unobservable
* Testing Claims Scientifically: "red states" has an accessible definition, is observable, and its traits tell you what it means. Transparent: a clear, accessible definition (the label comes later); the traits are about what it means to be in the category. Used Systematically: no loopholes, tied to observable attributes
* Building Theories (not the focus): "red states" is useless for prediction. Tied to prediction: find regularities and shared behaviors/actions to better understand; two things with the same definition should be produced by the same elements/conditions and affect others in the same way; relevant to ordinary use

The choice of label reflects value judgments & common usage; it is not the definition that is disputed but the label chosen for the definition (its power) that is always disputed. There are many definitions for the same label, depending on the questions we ask and the values we have

26
Q

Logic of Inference

A

Logic is valid when, if the premises are true, the conclusion must be true:

Confirmation/Verification: seeking evidence that the claim is right

Falsification: seeking evidence that the claim is false; embodies the severity requirements (open to scrutiny, able to be falsified rather than merely confirmed)
* The claim could be false, yet many other pathways could confirm it (auxiliary claims); the warrants & theories linking claims to empirical predictions could be wrong, and falsification doesn't rule out other explanations. It is difficult because claims are too complex to admit simple falsifications: it is hard to isolate one test that falsifies and yet embodies the strong severity requirement
* Conspiracy: the instruments/tests are assumed to conspire to confirm the claim even if it is false; it starts from the result (reasoning as if H were true when it may in fact be false: a rigged hypothesis), is always invokable and guaranteed to fit, offers no way to prove or falsify the line of logic, and so fails the weak severity requirement. Something other than H explains the data that appear to confirm H

27
Q

Attributes of Scientific Evidence

A

Systematic Use of Evidence: clear rules, avoid cherry picking, confirmation bias (ex.Gay vs Straight contact)
* How: clear rules on what, how, comparison of observations
* Why: avoid cherry picking and confirmation bias, ensure replicability, no secret sauce, enable challenging of assumptions, objectivity

Transparent Procedures: assumptions can be interrogated; the procedure can be replicated (objectivity), validated, and scrutinized; data/math/comparisons/choices can be checked (ex. California vs Florida)
* Most important attribute: because of it, a fraudulent study was caught when others tried replicating it
* How: data observations used, comparisons, choices
* Why: assumptions/choices to replicate (objectivity) result & challenge

Acknowledgement of Uncertainty: highlight the linking assumptions that might be wrong (ex. chance, other factors, questions left over)
* Limitations: qs remained unanswered after study, possible false assumptions, possibility of result driven by chance or spurious relationship

Consider Alternatives: test rival claims, interpret data differently to rule out other claims, seeking ways to falsify (ex.Personality, attractiveness, random assignment, subject matter)
* Test claim against other competing claims, which claim survives many different tests is best
* Why: openness to being wrong, no assumption above challenge, evidence consistent with different assumptions, one piece of evidence can be consistent with many claims, best claim generates most useful predictions

28
Q

Severity Requirement

A
  • Weak Severity Requirement: unscientific if data agrees with claim but method is guaranteed to find agreement, little/no capability of finding flaws even if they exist, nothing has been done to rule out ways the claim may be false, doesn’t mean it isn’t true
  • Strong Severity Requirement: scientific if data/evidence for claim survives stringent scrutiny to warrants/assumptions, just to the extent it survives a stringent scrutiny, passes a test that is highly capable of finding flaw or discrepancies, yet none or few are found, using different plausible assumptions/warrant, evidence procedures that make weaker assumptions
29
Q

Tolstoi v. Weber

A

Tolstoi: scientific evidence shares similarities in how it adheres to the severity requirements, while unscientific evidence fails the severity requirements in its many different ways

Weber: science is mastery of the world which can magnify tools of power but can’t justify itself as the questions that get or don’t get asked scientifically are determined by justifications invoked by power

30
Q

Claims + Elements

A

Claims: statement about what is right/true (ex.It rained last night, Trudeau caused inflation)
* Basis for Claim: reason we should accept the truth/validity of the claim, composed of
* Evidence: proves truth of the claim, data, information, etc. (ex.Saw street was wet this morning, Stats)
* Warrant: assumptions that permit/link the evidence to count as support for the claim's truth; rules out other possibilities that would falsify the claim, concerning the tools, instruments, and procedures behind the evidence (ex. the water isn't from another source, my eyes are clear, actions can be taken now to reverse high prices/no other causes are responsible for inflation)

31
Q

Weber’s Analysis of Science

A

See Doc

32
Q

Pick one of the causal claims from above.
- Write down a variable that corresponds to this concept.
- Propose a measure for this variable
- How could it lack validity

A
33
Q

(a) Propose a measure for this variable that would produce systematic measurement error. Be sure to explain why the procedure would produce systematic measurement error
(b) What is the direction of the error you would expect to result from your procedure?

A
34
Q

Identify the level of measurement.

A
35
Q

This is an example of measurement bias (systematic measurement error)

A
36
Q

What is the claim in this quote?
What kind of claim is it (identify whether it is empirical or normative and then which sub-type it
is)?
What is the basis given for this claim?
Identify one way in which the basis for this claim does not meet the criteria for scientific bases for claims (identify the specific criterion it does not meet and explain why this is so)

A
37
Q

a. “We should buy mosquito nets for people living in places with a high risk of malaria”
(Q) Assume that you accept the claim (b) “reducing avoidable deaths from infectious disease is
desirable”
- What kind of claim is (a)?
- What kind of claim is (b)

If we accept (b) is true, on the basis of that alone, are we able to accept (a)? If not, give
an example of another claim (c) that we would have to accept (in addition to (b)) in order
to accept (a)? What kind of claim is (c)?

A
38
Q
  1. What type of claim is (i)? Be as specific as possible. ( point)
  2. If we assume (i) is true, on the basis of that alone, can we accept (ii) is true? If not, give
    an example of another claim (iii) that we would have to accept (in addition to
    (i)) in order to accept (ii) and indicate what kind of claim (iii) is. (2 points)
A
39
Q

Give an example of one way that using the measure you gave in Q1
could suffer from random measurement error. Be sure to explain why (in
your example) this measure would produce specifically random
measurement error (as opposed to measurement bias). If you used this
measure, would this lead to a problem with validity or reliability

A
40
Q

Describe a measure for this
variable that suffers from measurement bias that results from sampling
bias. Explain clearly why the procedure would generate sampling bias and
why the sampling bias would also lead to measurement bias in this case.

A
41
Q

What is random sampling? Please describe it in terms of both the population and the
sample.

A
42
Q

Ecological Fallacy

A

Assuming that relationships observed at an aggregate level imply that the same relationships exist at the individual level

Variables are observed as aggregates, yet inferences about individual behavior are made from those aggregate variables; inferences about the nature of individuals are deduced from inferences about the group to which those individuals belong. Only very narrow conditions/assumptions make this valid, so it is risky/rarely justified

43
Q

Tradeoff When Doing Experiment v. Conditioning

A

Experiment Tradeoff: increased confidence that the correlation yields an unbiased estimate of X→Y (internal validity) comes at the price of limiting the kinds of cases and the kinds of causal variables we can examine (external validity)

Conditioning Tradeoff: lower internal validity, since it is practically & ethically impossible to completely control everything and add treatments (ex. political violence; accounting for population removed the entire correlation). Higher external validity, since it uses actual data & statistics collected from the population, for any cases and any possible cause X

44
Q

Causes of Effects v. Effects of Causes

A

Causes of Effects (? → Effect): attribute a cause to an observed outcome (effect)
Why did Russia invade Ukraine?
Effects of Causes (Cause → ?): the consequences of an action (cause)
What are the effects of China invading Taiwan?

45
Q

Causality + Necessary Elements

A

Causality: a change in something changes the outcome of something else; a causal claim combines two different descriptive claims (explicit or implied), one factual (reality) and one counterfactual (alternate reality), whose potential outcomes Yi differ

Causality: Yi(X=1) ≠ Yi(X=0)

No Causality: Yi(X=1) = Yi(X=0)

Factual (X=0): claim about how the world actually is, the way the event of interest actually occurred (the US has lax gun laws)
Counterfactual (X=1): the key to causality; a claim about how the world would be if the event of interest had transpired differently/changed (the US has strong gun laws)
Potential Outcomes: the values of Y (the potential outcome = effect) for a specific case under each state of the world X, factual or counterfactual
Case: i corresponds to a specific case (ex. person, place, time, etc.); it must stay the same b/w factual & counterfactual to establish causality

46
Q

Necessary Conditions v. Sufficient Conditions

A

Necessary Conditions: if the effect is observed → the cause must have been present; the cause being present doesn't mean the effect necessarily occurs (it could), but the cause is still necessary for it to occur
(ex. fire necessitates oxygen, but just because oxygen is present doesn't mean fire will happen; a cookie necessitates flour, but just because flour is in the recipe doesn't mean a cookie will always be made)
Dictatorship (Econ crisis=No) → No, Dictatorship (Econ crisis=Yes) → Yes or No

Sufficient Conditions: presence of the cause → the effect always occurs; absence of the cause → the effect may or may not occur
Protest (military coup=No) → Yes or No, Protest (military coup=Yes) → Yes
(ex. cookie in mouth → it will be eaten; cookie not in mouth → it may or may not be eaten)

Conditions can also operate in combination

47
Q

Types of Causal Claims

A

Deterministic CC: claims about outcomes with certainty under specific causal conditions, cause of effects, very rare
* Cause is present → Effect always happens
* Cause not present → Effect never happens

Probabilistic CC: presence/absence of cause → effect more/less likely to occur on average, nothing to do with randomness of politics, effects of causes (Cause changes likelihood of outcome of Effect)
* Warning Not Probabilistic: likely to cause, probably cause, more likely to occur where this happens

Complex Causality: neither necessary (some people who recognized misinfo didn't need the education) nor sufficient (some people given the education didn't recognize misinfo); easier to just say the likelihood of recognizing misinfo increased with education

Conjunctural Causality: multiple factors must combine to produce the effect

Multiple Causality: different causes can produce the same effect

Multiple & Conjunctural Causality: different groups of factors can each, together, be sufficient

48
Q

Fundamental Problem of Causal Inference + Solution

A

Fundamental Problem of Causal Inference: causal claims are about counterfactuals, expressed in terms of potential outcomes, which can never be observed because they are an alternative reality rather than what actually occurred; this fails weak severity and means we cannot directly establish the causes of effects

Solution → Correlation (requiring assumptions): we must find observable scientific evidence that stands in for the missing counterfactual by focusing on the effects of causes and observing variation in exposure to the cause
1. Compare observed values of the outcome Y in cases that have different values of X
2. Assume the factual (observed) outcome from one case is equivalent to the counterfactual (unobserved) outcome of another
3. The observed patterns b/w X & Y are correlations

Causal claims become testable by translating them into potential outcomes and a relationship b/w independent (cause) & dependent (outcome) variables: what should be observed if the claim is true, comparing cases with different values of the independent variable
Independent Variable: captures the cause X
Dependent Variable: captures the outcome/affected Y
Potential Outcomes: the values Y would take if exposed to different values of X
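A toy version of this logic (all numbers invented; numpy assumed): write down both potential outcomes for every case, something reality never allows, then reveal only one per case and check that, with random exposure to X, the difference in observed group means recovers the average effect.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Hypothetical potential outcomes for every case (in reality only one is ever observable)
y0 = rng.normal(50, 10, n)   # outcome if X = 0
y1 = y0 + 5                  # outcome if X = 1; true effect is +5 for every case

x = rng.integers(0, 2, n)           # exposure to the cause (here it happens to be random)
y_obs = np.where(x == 1, y1, y0)    # the factual outcome: the counterfactual stays hidden

true_effect = (y1 - y0).mean()
estimate = y_obs[x == 1].mean() - y_obs[x == 0].mean()
print(f"true average effect: {true_effect:.2f}")
print(f"difference in observed group means: {estimate:.2f}")
```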

49
Q

Correlation + Elements

A

Definition: the association/observed relation b/w X & Y on which empirical evidence for causal claims relies; examined with a numerical summary (correlation coefficient, linear regression), scatterplots (x-y with a line of best fit), or bar plots (histograms); it "plugs in" missing counterfactuals from other cases
* Direction: positive = X and Y move in the same direction (including both decreasing), negative = opposite directions. Denoted by the sign (+/-)
* Strength: strong = cases usually move together and cluster tightly, weak = they don't usually move together and are spread out. Denoted by a number between -1 and 1; 0 = no correlation, +/-1 = strongest; reflects not the slope but the spread around the line of best fit
* Magnitude: the slope of the line, not the spread; how much Y changes with X on average (larger = steeper, smaller = flatter). The correlation coefficient doesn't show this
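A short sketch distinguishing strength from magnitude (made-up data; numpy's corrcoef and polyfit are assumed as the tools): two relationships with nearly identical correlation coefficients but very different slopes.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(0, 1, 1_000)

y_steep = 10.0 * x + rng.normal(0, 5.0, 1_000)   # large magnitude (steep slope)
y_flat  = 0.1 * x + rng.normal(0, 0.05, 1_000)   # small magnitude (flat slope), similar strength

for label, y in [("steep", y_steep), ("flat ", y_flat)]:
    r = np.corrcoef(x, y)[0, 1]      # direction (+/-) and strength (how tightly cases cluster)
    slope = np.polyfit(x, y, 1)[0]   # magnitude: average change in Y per unit change in X
    print(f"{label}: r = {r:.2f}, slope = {slope:.2f}")
```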

50
Q

Draw Scatter Plot of Correlation 1, 0.8,0.4,0, -0.4, -0.8, -1

A

See doc.

51
Q

2 Problems/Assumptions with Correlation

A

Random Association: the correlation occurs by chance and doesn't reflect any systematic relation; it can never be ruled out, but how likely errors of different sizes are can be found using probability theory (a function of the # of cases & the strength of the correlation), just like random sampling error

Confounding/Bias: the correlation does not result from a causal relation b/w those variables

Other Things to Notice
A perfect relationship that isn't linear can have a correlation of 0, even though it evidently is an association
A weak correlation can coexist with a large change in Y across X (or vice versa): correlation doesn't give a good indication of the magnitude of the relation

52
Q

Statistical Significance + P-Value

A

Statistical Significance: the probability that a correlation arose by chance; depends on both N and the strength of the correlation (as either increases, a chance result becomes more unlikely): a stronger correlation across many cases is less likely to happen by chance

Given the strength of the correlation, the sample size N, and an assumed chance process generating the observations (i.e. a true correlation of 0), the p-value (a number between 0 and 1) measures statistical significance: p < 0.05 is the conventional threshold for a significant result (not a chance event); if p is greater, the correlation could be due to chance. Lower p = greater statistical significance = lower likelihood of a chance event

p-hacking or data dredging: computing many correlations and only reporting the significant ones, i.e. choosing the low p-values that occur by chance; this guarantees a result and fails weak severity

See graph in doc
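One hedged way to make "probability of a chance correlation" concrete is a permutation test (my own sketch, not the course's procedure; numpy assumed): shuffle Y to destroy any real X-Y relationship and see how often chance alone produces a correlation at least as strong as the one observed.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=200)
y = 0.2 * x + rng.normal(size=200)   # a weak true relationship

observed_r = np.corrcoef(x, y)[0, 1]

# Shuffling y breaks any real association, so these correlations show what pure chance produces
chance_rs = np.array([np.corrcoef(x, rng.permutation(y))[0, 1] for _ in range(5_000)])
p_value = np.mean(np.abs(chance_rs) >= abs(observed_r))

print(f"observed r = {observed_r:.2f}, p = {p_value:.4f}")
# A small p (below 0.05) means a correlation this strong rarely appears by chance alone.
```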

53
Q

Confounding

A

Confounding (systematic bias in correlation): the systematically observed correlation b/w X & Y is not their true causal relationship. Correlation means plugging in missing counterfactuals from other cases; if that equivalence (=) is false, there is confounding (bias), due to:
* A 3rd variable consistently influencing the independent & dependent variables, altering the observed causal relation; it has a causal path toward both X & Y (backdoor paths from X & Y lead to the variable), i.e. differences between cases aside from variable X that create different Y values
* Reverse causality
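A minimal simulation of the backdoor pattern described above (structure and numbers are my own; numpy assumed): W causes both X and Y, X has no effect on Y at all, and yet X and Y are clearly correlated.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5_000

w = rng.normal(size=n)           # confounder with causal paths into both X and Y
x = w + rng.normal(size=n)       # W -> X
y = 2 * w + rng.normal(size=n)   # W -> Y; there is no X -> Y effect

print("correlation of X and Y:", round(np.corrcoef(x, y)[0, 1], 2))
# The nonzero correlation comes entirely from the backdoor path X <- W -> Y.
```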

54
Q

Causal Graphs

A

Causal Graphs: a model of what we assume are the true causal relationships b/w variables; assuming our proposed view of how the world works, it permits us to identify 1) possible confounding variables, 2) the direction of bias, 3) solutions for confounding
Nodes/dots represent variables
Arrows represent the direction of causality

55
Q

3 Variables That Differ B/w Cases but Are Not Always Confounding + Draw Causal Graphs

A

Antecedent Variables (sometimes confounding): affect X; if there is no causal path to Y that isn't through X, they are not confounding, but if such a path exists they are

Intervening Variables (not confounding): affected by X, and then affect Y

Reverse Causality (confounding): the dependent variable Y actually causes the independent variable X, the reverse of what the claim assumes; this is always confounding (the correlation doesn't depict the true causal relationship)

56
Q

Confounding Direction of Bias

A

Direction of Bias: take the product of the signs along the backdoor path; this works directly if only one backdoor path exists (more backdoor paths require more information)
* Upward: overestimates/amplifies the true causal relationship/covariance (pushes it in the positive direction)
* Downward: underestimates/dampens the true causal relationship/covariance (pushes it in the negative direction)

57
Q

When does confounding lead to false & correct conclusions?

A

FALSE CONCLUSION (intolerable, fails weak severity): when the direction of bias helps support/amplify the correlation, making the claim harder to falsify
* Ex. downward bias and a negative correlation → confounding leads us to find negative causal effect even if truth is positive causal claim
* Ex. upward bias and a positive correlation → confounding leads us to find positive causal effect even if truth is negative causal claim

CORRECT CONCLUSION - Tolerable if direction of bias works against the correlation, makes it harder to support and easier to falsify
* Ex. upward bias and negative correlation observed → confounding leads us to accept correlation, a correct conclusion to truth as bias dampens its true relation strength
* Ex. downward bias and positive correlation observed → confounding leads us to accept correlation, a correct conclusion to truth as bias dampens its true relation strength

58
Q

Solutions to Confounding

A
  1. Prevent the confounding variable from changing: keep it constant so it can't create a correlation b/w X & Y, since it is unchanging b/w cases. (How do we identify all relevant differences/causal paths?)
  2. Break the causal path b/w the confounding variable & X. (How do we break this connection?)

Best solution: Experiments use random assignment, so all confounding variables are held constant/equal across the groups being compared.

Conditioning holds confounding variables constant by comparing groups of cases with the same level of the confounding variable.

59
Q

Selection Effects

A

Selection Effects: people select themselves into exposure to a specific cause/confound; those who do are already different from those who don't, creating confounding (confirmation bias)

60
Q

Experiments + Key Elements

A

Correlation gives unbiased inferences about the average causal effect of X on Y across cases if these assumptions are met:
* Random Assignment to Treatment (change X) & Control (change nothing) groups: all cases have an equal probability of ending up in each condition/exposure to X → this breaks confounding backdoor paths/links, since the groups have similar potential outcomes on average before treatment and (W) is held constant. The treatment (manipulate X) & control (no manipulation) groups are observable counterfactuals of each other
* Exclusion Restriction: the only variable changing b/w groups is X, avoiding adding confounding through the design

Solves bias, quantifies uncertainty (the chance/random correlation can be calculated), and gives strong severity (assumptions are clear + easy to check)
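A sketch of why random assignment matters (all numbers invented; numpy assumed): the same confounded world is measured twice, once with self-selected exposure and once with a coin-flip assignment that severs the W → X link; only the second comparison recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5_000
w = rng.normal(size=n)   # a confounder
true_effect = 3.0

# Observational data: W pushes cases into treatment and also raises Y -> biased comparison
x_obs = (w + rng.normal(size=n)) > 0
y_obs = true_effect * x_obs + 2 * w + rng.normal(size=n)
print("observational difference:", round(y_obs[x_obs].mean() - y_obs[~x_obs].mean(), 2))

# Experiment: random assignment breaks the W -> X backdoor path
# (exclusion restriction: nothing else about the groups differs except X)
x_exp = rng.integers(0, 2, n).astype(bool)
y_exp = true_effect * x_exp + 2 * w + rng.normal(size=n)
print("experimental difference: ", round(y_exp[x_exp].mean() - y_exp[~x_exp].mean(), 2))
```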

61
Q

Internal v. External Validity

A

Internal Validity: research design, causal relation unbiased & strong assumptions
- Selection bias, confounding, random assignment, exclusion restriction

External Validity: ability to translate to real world, plausibility, causal relation found is relevant to causal question/claim & cases interested in
- Sampling bias, validity issues (concept mapping onto variables), not all factors controlled, too artificial/different from real world & behavior

62
Q

Conditioning + Issues/Assumptions

A

Observe X & Y for multiple cases; remove confounding by identifying & measuring the confounding variables and examining the correlation of X & Y within groups of cases that have the same value on those confounding variables W

This solves confounding by breaking all links b/w W & X: holding W constant means its changes are no longer associated with changes in Y, so W can no longer be responsible for changing Y
The cases are then counterfactuals for each other, fulfilling the definition of causality

Issues/Assumptions: you can never confirm these assumptions are correct; there is no random assignment, so it is vulnerable to selection effects if not all confounds are held constant
1. Ignorability Assumption: no other confounding variables exist; all have been conditioned on. A very strong, uncheckable leap. Blocking the originating confounding variables can help hold constant many unknown (W) & their paths; give reasons/arguments for why the remaining confounding is small or biased against the correlation
2. The variables used to condition are measured without error, including random measurement error: otherwise the bias may not be removed, we are no longer comparing like with like, and the confound still exists, so the conditioning isn't strong
3. Cases that are the same on the confounding variables (W) but differ in (X) exist, are observable, and can be found → with many confounds and few such cases, use Before & After instead
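A small sketch of conditioning (hypothetical binary confounder and numbers; numpy assumed): the raw comparison is biased by W, but comparing X = 1 with X = 0 only among cases with the same value of W removes the bias.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 20_000

w = rng.integers(0, 2, n)            # binary confounder (e.g. high vs low income)
x = rng.binomial(1, 0.2 + 0.6 * w)   # W makes exposure to X much more likely
y = 2 * w + rng.normal(size=n)       # W raises Y; X has no true effect here

naive = y[x == 1].mean() - y[x == 0].mean()
print("naive X=1 vs X=0 difference:", round(naive, 2))   # biased upward by W

# Conditioning: hold W constant by comparing only within groups with the same W
for level in (0, 1):
    g = w == level
    diff = y[g & (x == 1)].mean() - y[g & (x == 0)].mean()
    print(f"within W = {level}:", round(diff, 2))         # roughly 0 once W is held constant
```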

63
Q

Design-Based Solutions + Key Assumptions

A

Design-Based Solutions: remove confounding by selecting cases for comparison in ways that eliminate many known/unknown, measurable/unmeasurable confounding variables (W); they hold constant whole classes of (W) (those sharing specific properties), not specific variables as in conditioning

Before & After: compare the same case to itself before & after a change in X; this holds constant all attributes of the case that don't change over the time period of comparison, since they cannot produce a change in Y alongside the change in X. No need to think about, observe, or measure (W).
* Assumption: no other variables that affect Y also change with X over time (via a causal or non-causal link)
Violated when:
A (W) that affects Y changes with X over time
Y has a long-term trend in one direction (ex. HC increasing over time; no change in trend even with the rally)
X changes in response to extreme changes in Y (ex. a gun crackdown follows a spike in gun violence)
Measurement bias: X changes the measurement of Y (ex. more reports the more media attention)

Difference in Differences: compares two trends, factual & counterfactual ("treated" & "untreated"); holds constant all unchanging confounding variables and all confounding variables that change similarly over time. Removes the confounding variables and prior underlying trends that change over time, which Before & After leaves in.
* Difference 1: After − Before (unchanging attributes held constant) = 1.6
* Difference 2: Treatment − Control (similarly changing attributes held constant) = -0.2
* Difference in Differences = 1.6 − (-0.2) = 1.8
* Assumptions:
The untreated case equals the counterfactual of the treated case
Treated & untreated have parallel trends in Y: absent treatment they would change similarly, i.e. have the same potential outcomes
No other variables affect Y and change over time differently in the "treated" & "untreated" cases
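A worked sketch of the standard two-step computation, using hypothetical group averages chosen so the intermediate steps reproduce the card's numbers (1.6, -0.2, 1.8); the variable names are my own.

```python
# Hypothetical average outcomes (chosen so the steps match the card's numbers)
treated_before, treated_after = 10.0, 11.6   # the case exposed to the change in X
control_before, control_after = 10.4, 10.2   # the comparison case, never exposed

diff_treated = treated_after - treated_before   # before/after change in the treated case
diff_control = control_after - control_before   # the shared underlying trend, from the untreated case
did = diff_treated - diff_control               # what remains once the shared trend is removed

print(f"treated change : {diff_treated:.1f}")   # 1.6
print(f"control trend  : {diff_control:.1f}")   # -0.2
print(f"diff-in-diff   : {did:.1f}")            # 1.8
```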

64
Q

Table of All Solutions to Confounding: how bias solved, which bias removed, assumptions, internal & external validity

A

See doc

65
Q

All issues & solutions to testing causal claims

A

Causal Claim → FPCI (counterfactuals don't exist) → Correlation → Confounding → Conditioning → Random Assignment + Exclusion Restriction (experiments) → Difference in Differences

66
Q

Errors Correlation Suffers

A

Random correlation/error & confounding (bias)

Random Error: by chance we observe patterns in X & Y even though in truth there are none; its likelihood is represented by the p-value