POLI 110 Midterm Flashcards

1
Q

Levels of Measurement

A
  1. Nominal - what exists/type: unranked categories based on the presence/absence of traits; exhaustive (religion, party affiliation, crime type, regime type, cause of death)
  2. Ordinal - amount: ranked categories based on more/less of something; intervals are not meaningful; levels are relative, not absolutely defined (university rankings, test score percentiles, ideology, level of democracy, strictness of gun laws, strongly agree/neutral/disagree)
  3. Interval - amount: numbers that rank cases on consistent and meaningful intervals, indicating how much more/less of something each case has than another; zero and ratios are not meaningful, and zero does not mean absence (year, temperature, date)
  4. Ratio - amount, or amount relative to time/place/conditions: numbers that rank cases on consistent and meaningful intervals; differences indicate how much more/less of something each case has; zero indicates absence and ratios are meaningful (time since, change over time, counts of events, rates, proportions, percentages, gun deaths); see the sketch after this list
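A minimal sketch (not from the course materials) of how the level of measurement constrains which summary statistics are meaningful; the example variables and values are hypothetical.

```python
# Hypothetical variables at each level of measurement and the summaries that make sense for them.
from statistics import mode, median

party = ["NDP", "Liberal", "Liberal", "Conservative"]  # nominal: only counts/mode are meaningful
ideology = [1, 2, 2, 5]        # ordinal (1 = far left ... 5 = far right): ranking/median, but not "5 is five times 1"
year = [1990, 2000, 2010, 2020]    # interval: differences are meaningful, ratios and zero are not
gun_deaths = [120, 60, 30, 0]      # ratio: zero means absence, ratios are meaningful

print(mode(party))                    # most common category
print(median(ideology))               # middle rank; the gaps between ranks are not meaningful
print(year[1] - year[0])              # a 10-year difference is meaningful; year[1] / year[0] is not
print(gun_deaths[0] / gun_deaths[1])  # "twice as many deaths" is meaningful only at the ratio level
```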
2
Q

Process for Proving/Evaluating a Descriptive Claim + Issues that Arise

A

Summary: is there or is there not a situation at odds with our values? What is its nature? Is a value judgment relevant? Descriptive claims are also key components of causal claims. Can evidence show claims to be wrong, or can it lead us to accept false claims? Abstraction → Observable → Procedure
Descriptive Claim + Case: a specific individual, group, event, or action existing in a specific time and place whose attributes we are interested in identifying, grouping, or measuring
* Issue: lack of transparency/systematic use
Concept: defines terms transparently; abstract and general, applied to particular cases/instances; can be used systematically; not opaque or idiosyncratic; can be scientifically tested and builds onto theories
* Issue: validity error (the variable doesn't map onto the concept)
Variable: a property that is in principle measurable/observable and corresponds to the concept; varies across and between cases and over time; translates the concept into something we can observe/measure; should correspond to the concept and not to other concepts
* Issue: measurement error (the procedure, or chance, doesn't return the true value)
Measurement: a procedure for determining the value a variable takes for specific cases based on observation; how to observe and translate the world into a value of a variable; a transparent and systematic procedure with known uncertainty for observing attributes of specific cases; not opaque, biased, or highly uncertain
3
Q

Value of Science in Politics

A

Politics: how people live together in communities; how we should live and organize, and who/what counts as a member
Science: keeping assumptions open to challenge and scrutinizing the ways in which claims may be wrong.
1) Science helps us be rational in responding to political crises; it is a form of knowledge about the world that is free from, or less susceptible to, manipulation, interference, domination, power, and ideology
* Science can answer what is happening, its causes, and the outcomes and consequences of some action ("is"), e.g., climate change, immigration, inequality, social media, technology, whether a problem is new or old

2) Science is value neutral ("ought or should"): how can it help us resolve value questions, avoid becoming a tool of domination/oppression, and grapple with indoctrinated values?
* Science cannot resolve questions of value (Weber); it cannot tell us what we should do, e.g., what is good vs. bad, desirable vs. undesirable

4
Q

What is Power?

A

Politics is fundamentally about power, and science can provide justifications for that power, with the capacity to motivate individuals to alter their behavior. Power is the ability of A to motivate B to think/do something B would not otherwise have thought/done. It involves justification and is normatively neutral (no value attached; it could be "good" or "bad")

To have and exercise power means being able to influence, use, determine, occupy or even seal off the space of reasons for others

5
Q

Justification + Key Elements

A

Justification: a reason given to motivate someone to adopt or alter a behavior (possibly by manipulating reality). It involves some "should" (value judgments), moral intuitions to prefer "good" justifications (prescriptive claims, "should"), factual claims about the world (descriptive claims, "is"), and the ability to learn factually whether justifications are good (causal claims, "is")
* Value(s) about what is good/desirable (heaven; violence bad, security good; more people = more support; climate change bad)
* Factual claim(s) about the state of the world/reality that show the relevance of those values (donating to the church gives you excess grace; an increase in violence; a bigger crowd; climate change)
* Causal factual claim(s) about what causes various phenomena (enough grace would bring you to heaven; migrants cause violence; more people support Trump; CO2 drives climate change)

6
Q

Poor Justification v. Good Justification

A

Criterion (Critical Theory Principle): the acceptance of a justification does not count if the acceptance itself is produced by the coercive power that is supposedly being justified, i.e., if it depends on using domination or unjustified power as the method/procedure of justification; the criterion concerns procedure, not specific content

Poor Justification: acceptance of a justification doesn’t count if acceptance itself is produced by the coercive power which is supposedly being justified

Good Justification: no threat of violence; no duping, misleading, or misrepresentation; others are treated as we would want to be treated

EXAMPLES:
* silencing critics, censorship, control over info, violence, distortion/misrepresentation, undermining or sponsoring/advertising research or beliefs
* Domination: one justification for power dominates all other reasons by limiting the ability of others to question/challenge it, through controlling information or using threats/violence
* Violence: others are reduced to objects to be moved/destroyed; its use means A can no longer motivate a change in B's behavior, which is a loss of power; material capability for violence is meaningless once it loses justification. Power isn't just material/brute capability but requires value

7
Q

How can facts help us?

A
  • Interrogate the content and quality of justifications about what the world is and what causes what
  • Investigate how power may be used to coerce/manipulate us into accepting justifications
8
Q

Plato’s Allegory of the Cave

A

Truth = the real world; the puppet show = the perceived, power-influenced world. Our perception of our political world can be manipulated/tricked; those who shape what and how we see have power over us, so proper justification could be impossible

9
Q

Elements of Sampling

A
  • Population: the full set of cases we are interested in describing
  • Sample: the subset of the population that is observed/measured and used to generalize to the entire population; the larger the sample, the more accurate it is and the smaller the random error
  • Inference: a description of the unmeasured population based on the measured sample, always with uncertainty since only the sample is measured (see the sketch after this list)
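A minimal sketch of population, sample, and inference using made-up rent data; the numbers and the 95% interval are illustrative, not from the course.

```python
# Population, sample, and inference with uncertainty (hypothetical monthly rents).
import random
import statistics

random.seed(1)
population = [random.gauss(1500, 400) for _ in range(100_000)]  # full set of cases we want to describe

sample = random.sample(population, 1000)      # the subset we actually observe/measure
inference = statistics.mean(sample)           # description of the unmeasured population

std_error = statistics.stdev(sample) / len(sample) ** 0.5   # uncertainty attached to the inference
print(f"inferred mean rent ~ {inference:.0f} +/- {1.96 * std_error:.0f}")
print(f"true population mean {statistics.mean(population):.0f}")
```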
10
Q

Sampling

A

Purpose: used when there are too many cases to observe directly to answer a descriptive claim; a relatively small random sample gives an accurate representation of the entire population (e.g., ~16,000 respondents for Canada); see the margin-of-error sketch below
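A back-of-the-envelope sketch of why a sample of roughly 16,000 is enough: under simple random sampling, the 95% margin of error for a proportion is about 1.96·sqrt(p(1−p)/n), which depends on the sample size but not on the population size. The sample sizes below are illustrative.

```python
# Approximate 95% margin of error for a sample proportion under simple random sampling.
# Uses the worst case p = 0.5; note that the population size never enters the formula.
import math

def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

for n in (1_000, 16_000, 100_000):
    print(f"n = {n:>7,}: MOE ~ +/-{margin_of_error(n):.2%}")
# n =   1,000: MOE ~ +/-3.10%
# n =  16,000: MOE ~ +/-0.77%
# n = 100,000: MOE ~ +/-0.31%
```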

11
Q

When is Sampling Error also a Measurement Error?

A

Sampling error is also a measurement error when the measure itself requires an inference about the population

12
Q

Sampling Distribution + Use

A

Sampling Distribution: with only one sample in hand, compare it to a simulation of all possible samples and their results under the same procedure, visualized with a histogram; used to assess bias in the procedure (compare the mean of the sampling distribution to the true population value) and the amount of random error (its spread); see the simulation sketch below
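A minimal simulation sketch of a sampling distribution, using a hypothetical population; the sample size and number of repetitions are arbitrary choices for illustration.

```python
# Simulate the sampling distribution of a sample mean and read off bias and random error.
import random
import statistics

random.seed(2)
population = [random.gauss(50, 10) for _ in range(100_000)]
true_value = statistics.mean(population)

# Repeat the same sampling procedure many times and record each sample's result.
sample_means = [statistics.mean(random.sample(population, 200)) for _ in range(2_000)]

bias = statistics.mean(sample_means) - true_value   # close to 0 for an unbiased procedure
spread = statistics.stdev(sample_means)             # size of the random sampling error
print(f"bias ~ {bias:+.3f}, random error (spread) ~ {spread:.3f}")
# A histogram of sample_means is the sampling distribution: its center vs. the truth shows bias,
# and its width shows how far any single sample can miss by chance.
```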

13
Q

Sampling Error + Types

A

Sampling Error: a type of measurement error in which Value(sample) - Value(population) ≠ 0

  • Sampling Bias: cases in the sample aren't representative of the population because of the sampling process (not every member has an equal chance of being in the sample), causing an error that is consistently in the same direction (e.g., a sample missing some students in the class, especially those who are working, consistently makes it look like we pay less for rent)
  • Random Sampling Error: due to chance, the sample doesn't reflect the population; any one sample may be too high or too low compared to the population average, but the errors cancel out over many samples; produces the margin of error = sampling uncertainty (e.g., people in the sample misrepresent themselves or misclick the survey); see the sketch after this list
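A minimal sketch contrasting sampling bias with random sampling error, using made-up student-rent data that mirrors the example above; the rent figures are hypothetical.

```python
# Biased vs. random sampling (hypothetical student rents: working students pay more).
import random
import statistics

random.seed(3)
non_working = [random.gauss(900, 150) for _ in range(6_000)]
working = [random.gauss(1300, 150) for _ in range(4_000)]
population = non_working + working
truth = statistics.mean(population)

random_means = [statistics.mean(random.sample(population, 200)) for _ in range(1_000)]
biased_means = [statistics.mean(random.sample(non_working, 200)) for _ in range(1_000)]  # working students never sampled

print(f"true average rent        {truth:.0f}")
print(f"random sampling estimate {statistics.mean(random_means):.0f}  (single samples err by chance, errors cancel out)")
print(f"biased sampling estimate {statistics.mean(biased_means):.0f}  (consistently too low, no matter how many samples)")
```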
14
Q

What Makes a Good Sample?

A
  1. Large samples + many samples
  2. Random Sampling means no sampling bias: every case has an equal probability of being chosen, so inferences about the population are unbiased on average regardless of sample size (on average, the sample average = the population average). This guarantees no systematic error/bias because everyone has an equal chance of being selected into the sample
    It does not remove random sampling error, but it tells us exactly how much random error exists (the margin of error)
15
Q

What Happens to Data Without Random Sampling

A

Bias (systematic error): the sample systematically leaves out part of the population

16
Q

A survey suggested Biden would win by 8.4% (sample), while he actually won by 4.5% (population). What are the possible sampling error, sampling bias, & measurement bias?

A

Sampling Error: since the value in the sample (8.4%) doesn't equal the value in the population (4.5%), there is sampling error
Sampling Bias: Democrats were more excited to take the survey than Republicans, so there are more Democrats in the sample → the sample is unrepresentative of the population
Measurement Bias: shyness among Republican respondents → measured Republican support is on average too low, even though the sample itself could still be representative

17
Q

Tolerability of Measurement Error

A

Measurement bias/random measurement error are a problem when they create a situation where the measurement procedure fails weak severity (it is incapable of finding the claim wrong even if it is wrong, or incapable of finding the claim right even if it is right)
* When the bias works against what we are claiming, it is tolerable bias; otherwise it is intolerable
* When random error is large enough to alter the conclusion, it is intolerable; otherwise it is tolerable
* Problem of attenuation bias: when looking for a pattern/correlation, random error is intolerable, as large outliers and too much noise make the association impossible to discover (demonstrated in the sketch after this list)
* Relative change over time: bias that stays constant over time is tolerable; otherwise it is intolerable

Tolerability depending on type of descriptive claim (S = systematic error/bias, R = random error):
* Type of a specific phenomenon: S + R intolerable
* Amount/frequency of a phenomenon: S intolerable, R tolerable
* Relative amount/frequency of a phenomenon across different places/times: R intolerable, S tolerable if constant
* Patterns/correlation between 2 different phenomena: R intolerable, S tolerable if constant
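A minimal sketch of why constant bias can be tolerable when looking for a pattern while random error is not (attenuation); the data and error sizes are made up, and statistics.correlation requires Python 3.10+.

```python
# Constant bias vs. random measurement error when looking for a correlation (hypothetical data).
import random
import statistics

random.seed(4)
x = [random.gauss(0, 1) for _ in range(5_000)]
y = [2 * xi + random.gauss(0, 1) for xi in x]       # true association between x and y

x_biased = [xi + 5 for xi in x]                      # systematic error: every value too high by the same amount
x_noisy = [xi + random.gauss(0, 3) for xi in x]      # large random measurement error

print(f"true correlation    {statistics.correlation(x, y):.2f}")
print(f"with constant bias  {statistics.correlation(x_biased, y):.2f}  (the pattern survives)")
print(f"with random error   {statistics.correlation(x_noisy, y):.2f}  (attenuated toward 0)")
```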

18
Q

Causes of Measurement Error in Social Science

A

Human error
Systematic/Bias Measurement Errors:
* Subjectivity/Perspective: the researcher systematically perceives/evaluates cases incorrectly, e.g., gender or racial bias in selecting candidates, police reports' perceptions of "objective threat", media echo chambers affecting beliefs
* Motives/Incentives to Misrepresent: those being observed generate data shaped by social norms, which discourage revealing socially undesirable information and reflect society's values about what is important/relevant/interesting (social desirability bias), e.g., news reporting, "How racist are you?", not understanding the question. Political actors have agendas to conceal information from each other, the wealthy misrepresent assets to avoid taxation, and police officers facing prosecution hide misconduct
* Use of Data Beyond Intended Purposes: without knowing how the data were produced, unanticipated errors can arise, e.g., double counting values when combining two agencies' data, or undercounting because not all undocumented migrants are detected

Random Measurement Error: anything that affects recorded values in ways unrelated to the actual values for the observed case
* Imperfect memory
* Typos/mistakes
* Arbitrary changes (e.g., mood, hunger, weather)
* Researcher interpretation
* Misperceptions
* The observed have motives/incentives to misrepresent
* Measurement tools used for purposes other than intended

Some bias is good: bias that works against the claim makes the test more capable of finding the claim wrong, giving stronger severity

19
Q

Types of Claims

A

Empirical: can be evaluated using science, assuming there is an objective world that we share and that is open to scrutiny; concerns what is/exists and how things that exist affect each other
* Basis: observation of the world; no values/assumptions about what is good/desirable
* Descriptive Claims "is": what exists/existed/will exist in the world; its frequency/amount across different places/times; patterns, correlation/shared appearance or non-appearance with different phenomena
* Causal Claims "causes/affects": how X affects/causes Y, not just correlation/appearing in some pattern; the conditions under which something happens; the process through which one thing affects another. How to recognize one: it includes a causal verb or phrase (causes, because, influences, makes happen, increases, decreases, results in, is necessary for), and if X were manipulated, Y would change

Normative: cannot be fully evaluated using science; concerns what is desirable/undesirable, should/shouldn't, too much/not enough, better/worse, best/worst
* Basis: assumes a value about what is desirable/undesirable
* Value Judgments "is good/bad": can't be evaluated with science; state what goal/ideal is right/good or provide criteria/rules for judging what is better/worse; not the same as an invalid/bad empirical claim
* Prescriptive Claims "should": can be partially evaluated with science; include empirical claims in their basis (evidence supporting an empirical claim about the consequence of some action) plus an assumption that some value judgment is correct; state what actions should be taken; overlap with justifications/reasons given by power. To accept one as true, both the normative and empirical parts must be accepted: we must accept the causal claim that A→B and the value judgment that B is good

20
Q

How to Find Sources/Reasons of Error in Procedure

A
  • Comparison with known quantities or better measurement procedures
  • Understand process of observation to identify limitations, incentives, specific steps that might lead to errors
  • Pattern of Error Identifies Type, Direction & Magnitude of Error
  • How Error Affects Evidence for Claims:
    Type: random or systematic; the source of error suggests whether there is a systematic direction of error
    Direction: systematic pattern upward or downward
    Magnitude: large or small, how wrong could it be
21
Q

Measurement Error + Types

A

Measurement Error: the difference between the observed value and the true value is not 0; the truth is different from what was observed. Both patterns can occur at the same time and have different implications (see the sketch below)
* Bias/Systematic Measurement Error: the measurement procedure obtains values that are on average too high, too low, or otherwise incorrect compared to the truth; a consistent, systematic pattern that appears over repeated measurements and can vary across subgroups. Good (tolerable) when it is uniform across cases and we only care about relative values; bad when we need absolute values or when it differs across cases; more data won't solve the issue
* Random Measurement Error: random features of the measurement process cause errors in both directions that balance out over many measurements; no pattern/systematic tilt to the errors or the underlying process, so in aggregate the values balance out. Good when a false negative is better than a false positive; bad when we need precision or observe few cases; solved by more data
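A minimal sketch of the practical difference: random error washes out as the number of cases grows, while bias does not. The true value and error sizes are made up.

```python
# Random measurement error averages out with more data; measurement bias does not.
import random
import statistics

random.seed(5)
true_values = [10.0] * 10_000                                        # every case truly has the value 10

with_random_error = [v + random.gauss(0, 2) for v in true_values]    # errors in both directions
with_bias = [v + 2 for v in true_values]                             # consistently too high

print("truth                  10.00")
print(f"mean with random error {statistics.mean(with_random_error):.2f}  (close to the truth in aggregate)")
print(f"mean with bias         {statistics.mean(with_bias):.2f}  (still wrong, no matter how many cases)")
```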

22
Q

Severity Test When Evaluating Descriptive Claims

A

We want evidence that is capable of showing the claim to be wrong (weak severity) and that stands up to multiple checks on where it could be wrong (strong severity); sensitive to properties (falsifiable) and to failure points (assumptions)
* If concepts are not transparent/systematic, the evidence fails weak severity; if it passes, continue to
* If the variable doesn't map onto the concept (lacks validity), it fails weak severity; if it passes, continue to
* If the procedure doesn't return the true value (measurement error), it fails weak severity

23
Q

Validity

A

Validity: whether the variable accurately captures the concept rather than some other concept. Variables may not correspond to/map onto/capture/fit the concept; even if the measure is perfect, the evidence is then potentially irrelevant and may even measure a different concept
* Issues: subjective perception of the concept; the variable maps onto other concepts; other variables could better map onto the concept
* Valid variable(s) are needed to establish a true causal effect

24
Q

Variables + Types

A

Variable(s): a property that is in principle measurable/observable and corresponds to a concept; varies across and between cases and over time; translates the concept into something we can observe/measure; should correspond to the concept and not to other concepts
* Absolute: values are counts in raw units ($, #)
* Relative: values are fractions, rates, ranks, % (fractional, no units)

25
Q

Concept + Criteria + Definition

A

Concepts: define terms transparently; abstract and general, applied to particular cases/instances; can be used systematically; not opaque or idiosyncratic (e.g., "chair"). A concept abstracts away from (overgeneralizes) the highly particular, complex, unique features of reality, so it never corresponds perfectly to reality. Without concepts, all experiences are completely unique and independent; we cannot anticipate or predict regularities/similarities in the world, nor function/act. Too abstract a concept can stray from reality; imposing a concept on reality has consequences (e.g., artificial forests) and conceptual limits (e.g., borders). Concepts are defined using observable traits that identify what it means to be in the category
1) Abstractions from reality
2) Defining concepts used to answer descriptive claims
3) Relevant & observable traits makes something an “X”
4) Objective = can be used even by those who disagree

Good scientific concepts can be understood and used by all, regardless of whether one agrees that they carry the right label; without these properties it becomes difficult to falsify claims (undefined terms, appeals to loopholes/cherry picking, unreplicable, unobservable)
* Testing Claims Scientifically: "red states" has an accessible definition, is observable, and its traits tell us what it means. Transparent: a clear, accessible definition (label comes later); the traits are about what it means to be in this category. Used Systematically: no loopholes; tied to observable attributes
* Building Theories (not the focus): "red states" is useless for prediction. Tied to prediction: find regularities and shared behaviors/actions to better understand; two things with the same definition should be produced by the same elements/conditions and affect others in the same way; relevant to ordinary use

The choice of label reflects value judgments and common usage; the definition is never what is disputed, but the label chosen for the definition (and its power) is always disputed. There can be many definitions for the same label, depending on the questions we ask and the values we have

26
Q

Logic of Inference

A

Logic is valid when, if the premises are true, the conclusion must be true:

Confirmation/Verification: evidence that claim is right

Falsification: evidence that the claim is false; embodies the severity requirements (open to scrutiny, able to be falsified just as much as confirmed).
* The claim could be false, yet many other pathways could confirm it besides this one (auxiliary claims); the warrants and theories linking claims to empirical predictions could be wrong, and falsification doesn't rule out other explanations. Difficult because claims are too complex to admit simple falsification; it is hard to isolate one test that falsifies and yet embodies the strong severity requirement
* Conspiracy: the instruments conspire to confirm the claim even if it is false; it starts from the result (treating H as if it were true when it is in fact false, a rigged hypothesis); always invokable and guaranteed, with no way to prove or falsify the line of logic, so it fails the weak severity requirement. Something other than H explains the data that appear to confirm H

27
Q

Attributes of Scientific Evidence

A

Systematic Use of Evidence: clear rules, avoid cherry picking and confirmation bias (e.g., gay vs. straight contact)
* How: clear rules on what is observed, how it is observed, and how observations are compared
* Why: avoid cherry picking and confirmation bias; replicability; no secret sauce; enables challenging assumptions; objectivity

Transparent Procedures: assumptions can be interrogated; the procedure can be replicated (objectivity), validated, and scrutinized; data/math/comparisons/choices can be checked (e.g., California vs. Florida)
* Most important attribute: thanks to transparency, a fraudulent study was caught when others tried to replicate it
* How: report the data and observations used, the comparisons, and the choices made
* Why: assumptions/choices can be replicated (objectivity) and the result challenged

Acknowledgement of Uncertainty: highlight the assumptions in the chain that might be wrong (e.g., chance, other factors, questions left over)
* Limitations: questions remaining unanswered after the study, possibly false assumptions, the possibility that the result is driven by chance or a spurious relationship

Consider Alternatives: test rival claims and interpret the data differently to rule out other claims, seeking ways to falsify (e.g., personality, attractiveness, random assignment, subject matter)
* Test the claim against other competing claims; the claim that survives many different tests is best
* Why: openness to being wrong; no assumption is above challenge; evidence can be consistent with different assumptions, and one piece of evidence can be consistent with many claims; the best claim generates the most useful predictions

28
Q

Severity Requirement

A
  • Weak Severity Requirement: evidence is unscientific if the data agree with the claim but the method was guaranteed to find agreement, with little/no capability of finding flaws even if they exist; nothing has been done to rule out ways the claim may be false (which doesn't mean the claim isn't true)
  • Strong Severity Requirement: evidence is scientific if the data/evidence for the claim survives stringent scrutiny of its warrants/assumptions, and only to the extent that it survives such scrutiny; it passes a test that is highly capable of finding flaws or discrepancies, yet none or few are found, using different plausible assumptions/warrants and evidence procedures that make weaker assumptions
29
Q

Tolstoi v. Weber

A

Tolstoi: scientific evidence shares similarities in how it adheres to the severity requirements, while unscientific evidence fails the severity requirements in many different ways

Weber: science is mastery of the world, which can magnify the tools of power but cannot justify itself, since the questions that do or do not get asked scientifically are determined by the justifications invoked by power

30
Q

Claims + Elements

A

Claims: statements about what is right/true (e.g., "It rained last night", "Trudeau caused inflation")
* Basis for Claim: the reason we should accept the truth/validity of the claim, composed of
* Evidence: data, information, etc. offered to establish the truth of the claim (e.g., I saw the street was wet this morning; statistics)
* Warrant: assumptions that permit/link the evidence to count toward the truth of the claim; rules out other possibilities that would falsify the claim, concerning the tools, instruments, and procedures behind the evidence (e.g., the water isn't from another source; my eyes are clear; actions taken now can reverse high prices / no other causes are responsible for inflation)

31
Q

Weber’s Analysis of Science

A

See Doc

32
Q

Pick one of the causal claims from above.
- Write down a variable that corresponds to this concept.
- Propose a measure for this variable
- How could it lack validity

A
33
Q

(a) Propose a measure for this variable that would produce systematic measurement error. Be sure to explain why the procedure would produce systematic measurement error
(b) What is the direction of the error you would expect to result from your procedure?

A
34
Q

Identify the level of measurement.

A
35
Q

This is an example of
measurement bias (systematic measurement error)

A
36
Q

What is the claim in this quote?
What kind of claim is it (identify whether it is empirical or normative and then which sub-type it is)?
What is the basis given for this claim?
Identify one way in which the basis for this claim does not meet the criteria for scientific bases for claims (identify the specific criterion it does not meet and explain why this is so)

A
37
Q

a. “We should buy mosquito nets for people living in places with a high risk of malaria”
(Q) Assume that you accept the claim (b) “reducing avoidable deaths from infectious disease is
desirable”
- What kind of claim is (a)?
- What kind of claim is (b)

If we accept (b) is true, on the basis of that alone, are we able to accept (a)? If not, give
an example of another claim (c) that we would have to accept (in addition to (b)) in order
to accept (a)? What kind of claim is (c)?

A
38
Q
  1. What type of claim is (i)? Be as specific as possible. ( point)
  2. If we assume (i) is true, on the basis of that alone, can we accept (ii) is true? If not, give
    an example of another claim (iii) that we would have to accept (in addition to
    (i)) in order to accept (ii) and indicate what kind of claim (iii) is. (2 points)
A
39
Q

Give an example of one way that using the measure you gave in Q1
could suffer from random measurement error. Be sure to explain why (in
your example) this measure would produce specifically random
measurement error (as opposed to measurement bias). If you used this
measure, would this lead to a problem with validity or reliability?

A
40
Q

Describe a measure for this
variable that suffers from measurement bias that results from sampling
bias. Explain clearly why the procedure would generate sampling bias and
why the sampling bias would also lead to measurement bias in this case.

A
41
Q

What is random sampling? Please describe it in terms of both the population and the
sample.

A
42
Q

Ecological Fallacy

A

Assumes that relationships observed at an aggregate level imply that the same relationships exist at the individual level

Variables are observed as aggregates, making inferences about individual behaviors using aggregate variables, inferences about the nature of individuals are deduced from inferences about the group to which those individuals belong, very narrow conditions/assumptions that allow for validity, risky/rare