Quiz 2 Flashcards
Types of Inquiry in Poli Sci
Application of scientific methods to understanding power dynamics in politics (about resources and policies)
Definition of Political Science
scientific study of politics
H. Lasswell : “who gets what, when and how”
The discipline is divided between normative and empirical inquiry
Characteristics of normative questions? (7)
- “should, ought”
- how things should be done,
- is more opinionated
- much more difficult to measure
- value-based
- the use of general principles, persuasion and logic
- Are a source of debate
Characteristics of empirical questions? (7)
- factual-based
- observing then explaining as real as it is
- based on testing
- descriptive
- focused on measurement
- use scientific method
- cannot draw the same inferences due to moral differences
What about bias?
- We need to be aware of our bias: in our methods, in how we answer, and in the qts we leave out
- It is problematic if there’s no diversity of opinions among researchers
Aims of empirical analysis in Political Science (4)
- Contextual description: to examine and learn about a subject beyond the average knowledge; not engaged in any generalization; become the expert on that specific subject
- Classification and measurement: categorize things into groups; draw distinctions between types; not making causal arguments; used to understand variance
- Hypothesis testing: needs to be as specific as possible; origins of hypotheses: identify a problem, look at other ppl’s research; need to be a good observer of the world
- Prediction: when you’re very certain about a certain event; very rare in poli sci
Why is Political Science a probabilistic science?
Because you cannot predict phenomena with 100% certainty
What was the IV and DV in Michael Moore’s video?
IV: Marilyn Manson, bowling
DV: gun violence
Basic research?
when we go beyond the surface; to advance knowledge
Applied research?
is focused on specific problems but not in-depth; maximizes effectiveness and efficiency in the short term
Inductive research(broad)? (2)
- data to theory, progression from empirical evidence to generalization
- begin with an open mind
Deductive research(narrow)? (2)
- general to specific, set out to test hypotheses and theory in the real world
- assumptions = logic or pre-existing research
Hypothesis(def.)?
- a statement relating two variables
- no normative statements
Proposition?
a statement that must be either true or false
Characteristics of a hypothesis?(5)
- relationship
- comparison
- direction(+ or -)
- testability
- unit of analysis
causality?
A causes B
Temporal order?
the cause must occur before the effect (one event occurs in reaction to another event)
Continuum?
Ability to classify variables that can be ordered or ranked
How can I classify variables?(2)
Ideal type
Typology : different types of things (political views: socialist, communist, capitalist)
Multivariate?
more than one independent variable
Spurious relationship?
controlling/holding variable C constant causes the relationship between A and B to disappear
Ecological fallacy?
not projecting aggregate (group-level) characteristics onto individual behaviours
Intervening variables?
Variables that impact the causality flow/variation
Reinforcing variables?
a variable that strengthens the relationship between A and B
Multiple independent variables?
The assumption of independence between the causal (independent) variables may not reflect the true relationships between variables in the real world.
How scientific is Michael Moore? (3)
- he doesn’t show his entire data (only chosen excerpts and short clips)
- not balanced
- does not condense fairly
Intersubjectivity(2)? And why it is needed?
- requires more than one observation
- scientific process = replication
- cannot create knowledge w/ one person/research
Essence of Scientific Method?(3)
- not about common sense or intuition, but objective observation (empiricism)
- Impartiality
- Intersubjectivity and Replicability
What researchers should do? (essence of scientific method)(2)
- must hold their own beliefs outside the research
- should not fear the reaction of the public
DA/RT Initiative?
- data access, research transparency and analytic transparency
- need to take other experiences into account
Scientific Method Graph
RQ -> Theory -> Hypotheses, Operationalization, Research Design -> Observation -> Reformulation, Generalization, Data analysis -> cycle
How to formulate a good research question?
the broadest level at which you can approach a project
you have to be passionate and curious about the project (suitable); it has to be feasible
1. Relevance/importance
2. Examining today’s political developments
3. Curious topic
4. Aware of outsiders
What is a theory?
It goes a step further, it is a potential explanation of a political phenomenon through logically related propositions(statements).
It reveals the direction of the research question
How can a theory be formulated?
induction :
bottom-up approach
making generalizations based on observation
deduction :
top-down approach
starting from a theory and derive empirical implications from that theory
How to link a research question to a theoretical framework(theory)?(5)
look for potential problems w/ the theories used. Do we maybe see other variables the theory did not include
take a famous theory and apply it to set of (new) cases- maybe new insights
if there are outliers: cases that don’t seem to fit the theory very well - e.g the role of Nevada (what do we learn about theory)
replicate an existing theory and tests w/ a new set of measures
to soak, poke and observe to find a new theory for an unexplained phenomenon
dummy variable
a variable whose answer is plain: yes or no (only two values)
common errors in hypotheses(6)
statement fails to specify how the variables are related,
contains only one variable or is vague,
is incomplete or improperly specified,
uses tautologies, proper names, or value judgements
Correlation vs. Causation
Correlation : related
Causation : cause
What does the Potential Outcomes Framework do?
helps us visualize what are the problems(many variables interfere)
Causal inference problems? Give examples
Reverse causality
Spurious relationship(selection effects can cause it)
No relationship
Why is an experimental design so advantageous for addressing causality?
experiments are very good at excluding other factors from our consideration; this exclusion procedure is controlled by us
Difference between a test group and a control group?
The test group is exposed to the independent variable (the treatment) while the control group is not
Types of Research Designs? (4)(not in quasi-experimental research)
Observation without control group
Natural experiment without pre-measurements
Natural experiment
True experiment (random and equal assignment)
Regression to the mean?
a single measurement of the dependent variable can show signs of error; extreme values are not constant and tend to move toward the average when re-measured
best way to eliminate a third variable that may affect the causation is?
To separate the subjects into smaller subgroups and to do the same analysis within each group.
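The subgroup logic above can be sketched in code. A minimal illustration (not from the course materials; all records below are fabricated) that re-checks the A–B relationship within each category of a third variable C:

```python
# Controlling for a third variable C by splitting cases into subgroups
# and re-computing the A-B relationship inside each subgroup.
# Each record: (A present?, B present?, C category) -- all made-up data.
cases = [
    (True,  True,  "young"), (True,  True,  "young"),
    (False, False, "young"), (True,  True,  "old"),
    (False, False, "old"),   (False, True,  "old"),
]

def share_b_given_a(records, a_value):
    """Share of cases showing B among cases where A == a_value."""
    matching = [b for a, b, _ in records if a == a_value]
    return sum(matching) / len(matching) if matching else None

for group in ("young", "old"):
    subgroup = [r for r in cases if r[2] == group]
    print(group,
          share_b_given_a(subgroup, True),    # P(B | A) within the subgroup
          share_b_given_a(subgroup, False))   # P(B | not A) within the subgroup
```

If the within-group gaps between the two shares vanish while the overall gap was large, C accounts for the A–B relationship, i.e. the relationship is spurious.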
Types of Research Designs in Quasi-experimental research?
There’s no random assignment.
Post-test : no comparison
Post-Test with group control : one group is exposed to the IV
Pre and Post-Test : compare a case to itself
Pre and Post-Test with control group : Compare 2 cases
Dosage Design : compare cases of different manipulation strength of the IV
What logic do we have to use for observational studies?
experimental logic
How to test causality? Steps
- Showing a correlation
- Excluding other factors
- Temporal order
- Control group (if possible)
OR
- Randomization of assignment or equivalent (matched control – if possible)
- Need causal mechanism
Special Features of the Hersh Study that ensure causality flow : (4)
- Subjective data = surveys and self-reports, versus documented data
- As-if randomization through 9/11 victimhood (similar to a lottery)
- Uses pre- and post-data
- Creation of a control group that is VERY similar (geographic linkage to victims and full matching)
As-if randomization is? (3)
subjects do not self-select themselves into treatment and control groups
assignment to treatment and control groups is plausibly uncorrelated with alternative explanations
Lower internal validity than if we had truly random assignment
Example of a Natural experiment
9/11 victimhood
The Vietnam War draft lottery (by birth date)
How often do governments use randomization?
Rarely
Internal validity? (2)
- experiments are better for this validity
- the study is properly set up to determine whether the independent v. has a causal effect on the dependent v.
External validity? (2)
The results of the study can be generalized to the real world or beyond a case
Lab experiments: advantages(5) and weaknesses(3)
Researcher in full control
Complete randomization into treatment and control groups
Good for internal validity
Relatively easy to replicate,
Often cheaper and less time-consuming than field experiments
Artificial environment - low realism.
Demand characteristics : participants are aware of the experiment, behaviour may change
Experimenter effects : bias when experimenter’s expectations affect their behaviour
Field experiments: advantages(3) and weaknesses(4)
ppl behave more naturally = high realism
easier to generalize
ppl often do not know they are being studied
often do not obtain consent; ethically problematic
weak control of competing variables
time-consuming and costly
participation varies
Types of field experiments
canvassing experiments, civic course experiments, vote compass experiments, mock elections, evaluating programs/policies
Why randomized controlled experiments make it possible to isolate causal effects?
We infer causal effects from our observations.
fundamental problem of causal inference
we cannot observe subjects in both their treated and untreated states, b/c each subject exists in only one state at any given time
2 key characteristics to experimental method
planned intervention by researcher and random assignment
What is the nature of observational studies?
they are passive
How internal and external validity are related?
Internal validity is strengthened first within the research; external validity follows from it
Survey Experiments: advantages(3) and weaknesses(2)
Substantive questions (important to the study)
Can reach more people (internet) (heterogeneous)
Brings great generalizability of the results
May be different from real-life setting
Perceptions of the subjects on issues/questions may differ from the researcher
Practical and ethical limits of experimentation?
To test human nature (e.g. rationality) is unethical
To believe in total control is impractical
To not give true information is unethical
To violate one’s grounds of equity and fairness is unethical
Operationalization + concerns
Movement from an abstract concept to a concrete measure
Concerns : potential conflicts and controversies around the measurement?
Concepts ?(def. + 3)
Concept is an idea or a term that enables us to classify phenomena
can be concrete or abstract
categorical concepts have diff. characteristics
continuous concepts have sequentially connected characteristics(continuum)
Variables(when operationalizing)?(3)
transforming our conceptual idea into a quantifiable, observable phenomenon
unlike concepts, it can take on different values
the variable empirically captures the variation within the concept
Indicators?
assigning each individual case to different values
What do multiple variables and indicators do?
help understand the concept/variable more
Level of measurements(3)?
Nominal-level v.
Categories can not be ordered or ranked
Ordinal-level v.
Ranking relative to the position of other categories; organize them along a continuum b/c we do not know the distance b/w the cat.
*agree/disagree qts are useful for this level of measurement
Interval-level v.
placed on a continuum and the categories are separated by a standard unit
Issues of accuracy
Measurement Validity and Reliability
Measurement Validity + categories of validity (5)
measures need to be appropriate and complete
Face validity : to make it understandable for any reader(logic at the surface)
Convergent v. : compares indicators designed to measure the same v.
Discriminant v. : compares indicators designed to measure opposite v. (both v. should yield diff. results)
Predictive v. : to predict an outcome of a certain v.
Perfect v. : impossible ideal
solution to perpetual existence of measurement validity problem
use multiple variables and indicators
Reliability
if the measure is consistent regardless of circumstances
Reliability does not ensure validity
Random errors : measure is inaccurate, but the inaccuracy is not systematic(results slightly vary, is reliable)
Non-random errors : measure is inaccurate, and the inaccuracy is systematic (it is not reliable)
Logic behind and creation of scales and indexes
It acts as a complex of multiple indicators; quantifying the conceptual definition makes the meaning more comprehensible
combining indicators into indexes to pinpoint which indicators have the strongest reliability
Cronbach’s alpha
examines the elements used in the construction of an index; Cronbach’s alpha ranges from 0 to 1 (1 being the most reliable); researchers usually drop measures from an index if alpha falls below 0.7
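As a sketch of how that score is computed (not from the course materials; the item scores below are fabricated survey data), Cronbach’s alpha compares the summed variance of the individual items to the variance of respondents’ total scores:

```python
# Cronbach's alpha for an index of k items:
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))

def cronbachs_alpha(items):
    """items: list of equal-length lists, one list of scores per indicator."""
    k = len(items)                      # number of items in the index
    n = len(items[0])                   # number of respondents

    def variance(xs):                   # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # total score per respondent across all items
    totals = [sum(item[i] for item in items) for i in range(n)]

    item_var_sum = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# three indicators, five respondents (fabricated for illustration)
item1 = [4, 5, 3, 4, 2]
item2 = [4, 4, 3, 5, 2]
item3 = [5, 5, 2, 4, 1]
alpha = cronbachs_alpha([item1, item2, item3])
print(round(alpha, 2))   # well above the usual 0.7 cut-off
```

Items that move together across respondents push alpha toward 1; an item that varies independently of the rest drags it down, which is why researchers drop such measures from the index.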
question design : questions have to
use neutral language
be clear
avoid response sets(yea-sayers/ nay-sayers will follow a pattern)
keep response categories mutually exclusive and exhaustive (don’t know)
select the highest level of measurement
pay close attention to question order
minimize defensive reactions
Value of the case study’s thorough examination of a research topic :
Detailed analysis of a single discrete phenomenon, which begins from the observation of something counter-intuitive.
Counter-intuitive : condition that occurs when a situation, event, or outcome differs from dominant theoretical expectations or common sense.
A qualitative approach that examines specific events on a small scale, with fewer variables
Descriptive Case Study (6)
When a phenomenon is completely novel(fresh) or unknown; emerges from new info;
Goal : to describe the phenomenon as the basis of contributing to an emerging of future research agenda;
Open-minded researcher follows the unfolding event with a sharp eye; looks for new v. and connections b/w v.; must state scope conditions
Scope conditions : the limits to which particular research make valid claims;
Restricts insights to the phenomenon and curtails the ability to generalize to large-n research
B/c it is so close to a phenomenon, it is difficult to make generalized statements from them
Theory Testing/Modification Case Study(4 + 2 kinds of study)
When a phenomenon shifts its expectations/ theory:
Failed most-likely study : expected to confirm it, but refutes it
Wants to offer the need to rethink a theory’s claims
Successful least-likely study : expected to refute it, but confirms it
Wants to offer the need to relax/rethink the scope conditions, theory explains more than its proponents claim(original person who advocates the theory)
Important distinction : role of falsification = empirical refutation of a theoretical proposition
Goal : Theory modification
Considerations for Case Study Research (4)
Clear definitions of the subject and object of the case study
Does the case study have rigorous and clear conceptualization
Case studies are well suited to provide conceptual refinement, whereas statistical research can be at risk of conceptual stretching (using more general conceptual definitions to increase sample size)
Does the case study employ process tracing
Process tracing : the primary means by which case study research generates causal reasoning
Causal pathway is established and leads to the current outcome
Applicable generalization to a wider population sample
If it is undergeneralized, it can fail the “So what” test
Benefits of the comparative approach :
small-n research that systematically contrasts a number of cases in order to create stronger generalizations (allows greater explanatory power and prediction -> broadens our knowledge of the political world)
Issues of the comparative approach :
Not all political units are suitable for all research qts
With small samples, random selection produces greater sampling errors; but reducing sampling errors in a smaller sample makes the selected cases less representative
Logic of random sampling doesn’t hold for such small populations; purposive sampling = allows researcher to use specific knowledge of systems in order to choose political units that could lead to more fruitful comparisons
Most-similar-system design :
Similarity of cases means we control for many explanations; one differing factor leads to different outcomes
Most-dissimilar-system design :
Takes vastly dissimilar systems and attempts to explain commonalities b/w them
Selecting an appropriate Comparative Research Design + Galton’s problem
Must be careful about the operationalization of v.
Must be aware of the social and political context, b/c the operationalization of a v. differs from one culture to another
Goal : equivalent measures, not necessarily identical ones;
the aim is to measure a concept using indicators appropriate to different contexts
Galton’s problem : researcher must ensure that the units under observation are independent of one another;
Diffusion of cultural norms and experiences makes cultural comparisons more difficult, mostly for neighboring countries (US–Canada) or culturally similar ones (France–Canada: Quebec, New Brunswick)
Theory of sampling
Sampling : the process of drawing a sample of cases from a larger population and selecting a number of cases for further study
Logic of drawing representative samples from larger populations
using a sample costs less and takes less time, researchers are able to monitor data collection due to the study’s smaller scale
Quantitative research
Seeks to measure population characteristics in numeric terms(population parameter)
uses probability sampling
Qualitative research
Often has a smaller sample, but still seeks to describe population characteristics
Uses non-probability sampling
Practical techniques for drawing samples
Representativeness is important to make generalizations; it is determined by 3 related factors:
Sample framing: a list of all the units in the target population (accuracy and completeness = no missing cases or inaccurate info)(complete frame is rare, b/c records are incomplete and subject to change)
Not all target populations have a listing(direct or indirect)
Sampling method: can be divided into two categories:
Probability sampling : based on probability theory + allow researchers to use inferential statistics to test representativeness
Non-probability: not based on probability theory + researchers can’t use statistical analysis to make inferences
What does Probability sampling do:
gives confidence in regards to accurate representation of the population
What is simple random sampling?
the process by which every case in the population is listed and the sample is selected randomly from this list
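That listing-then-drawing process can be sketched with the standard library (not from the course materials; the population of labelled cases is made up):

```python
# Simple random sampling: every listed case has an equal chance of selection.
import random

population = [f"case_{i}" for i in range(1, 101)]   # a full list of 100 units

random.seed(42)                       # fixed seed only so the draw is repeatable
sample = random.sample(population, 10)  # draw 10 cases without replacement
print(sample)
```

`random.sample` draws without replacement, so no case appears twice, which matches the idea of selecting distinct units from a complete sampling frame.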
What is sampling distribution?(2)
All the sample means for a given sample size
the sampling distribution of means is created by totaling all possible sample combinations
Confidence interval :
the range of values within which the population parameter is likely to fall
sampling error :
difference b/w sample statistic(estimated value) and population parameter(actual value)
when using probability sampling techniques, sampling error is reduced
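The link between sampling error and a confidence interval can be sketched numerically (not from the course materials; the survey scores are fabricated, and the normal-approximation z = 1.96 for a 95% interval is assumed):

```python
# 95% confidence interval for a sample mean, normal approximation (z = 1.96).
import math

scores = [62, 55, 71, 48, 66, 59, 70, 52, 64, 58]   # sample of n = 10
n = len(scores)
mean = sum(scores) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / (n - 1))  # sample SD
se = sd / math.sqrt(n)                # standard error: shrinks as n grows

lower, upper = mean - 1.96 * se, mean + 1.96 * se
print(f"mean = {mean:.1f}, 95% CI = ({lower:.1f}, {upper:.1f})")
```

The width of the interval is driven by the standard error, so larger probability samples produce narrower intervals, which is the sense in which sampling error is reduced.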
3 factors of sample size:
Heterogeneity or homogeneity of the population
Number of variables
Desired degree of accuracy
Types of sampling methods : (3)
Systematic selection:
Selection interval (1/k); e.g. a 5% sample = 1/20, so 1 case out of every 20 cases, then we choose a random start(ing number)
More practical and efficient than simple random sampling
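Systematic selection is mechanical enough to sketch directly (not from the course materials; the listed population of 200 units is made up):

```python
# Systematic selection with a 5% sampling fraction: k = 20, so take every
# 20th case after a randomly chosen starting point within the first interval.
import random

population = list(range(1, 201))     # 200 listed cases (fabricated)
k = 20                               # selection interval for a 1/20 sample

random.seed(7)                       # fixed seed only so the run is repeatable
start = random.randrange(k)          # random start within the first k cases
sample = population[start::k]        # every k-th case from that start
print(sample)                        # 10 cases = 5% of 200
```

Only the starting point is random; every later pick is fixed by the interval, which is why this is cheaper than simple random sampling but can go wrong if the list has a periodic pattern.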
Stratified sampling: breaking the population into mutually exclusive subgroups or strata and randomly sampling each group
Disproportionate stratified sampling is used to deal with population variances (n*%)
the sample is then no longer representative (oversampled), so we assign weights to respondents (population proportion / sample proportion)
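The weighting step can be sketched with made-up shares (not from the course materials; the strata and proportions below are fabricated):

```python
# Weighting respondents after disproportionate stratified sampling.
# stratum: (share of population, share of sample after oversampling)
strata = {
    "urban": (0.80, 0.50),   # undersampled relative to the population
    "rural": (0.20, 0.50),   # oversampled to get enough rural cases
}

# weight = population proportion / sample proportion
weights = {name: pop / samp for name, (pop, samp) in strata.items()}
print(weights)   # urban respondents count for more, rural for less
```

Applying these weights restores representativeness: each oversampled rural respondent counts for less than one person, each undersampled urban respondent for more.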
Cluster sampling: dividing population into a number of subgroups(clusters) and randomly selecting clusters within which to randomly sample
Considers geographic units(regions)
Adv. : cost reduction, efficiency increased
Disadv. : may seem un-representative on face value(to the public eye)
Types of Non-Probability Sampling(5)
Accidental sample : the researcher “accidentally” encounters convenient individuals
Self-selection: participants are limited to those who opt in, not really representative
Purposive sampling/ Judgmental sampling : according to a criteria
Snowball(or network) sampling: often employed to study social networks; ask for further referrals until the growing sample reaches the limits of logistical and financial considerations
Quota sampling: accidental/ purposive combined with stratification
(Sampling)non-random selection of cases =
no margins of errors or confidence intervals
The Problem of Cross-level Inference
Attempt to make inferences about one unit of analysis with data from another unit of analysis
e.g. ecological fallacy
Most effective way to increase voter turnout
Face-to-Face canvassing (up to 30%)
Volunteer phone banks (3-5%)
Commercial phone banks
Direct snail mail
Targeting via social media(60 million ppl, 6%)(might have ethical issues depending on the subject of the experiments)
Unit of analysis in Empirical Research
The unit of analysis in a hypothesis specifies what type of actor the hypothesis applies to e.g. individuals, countries, etc.
What is a case?
(def. by John Gerring) a case study is the intensive study of a single case where the purpose of that study is – at least in part – to shed light on a larger class of cases (a population)
Characteristics of case studies(5)
explaining a complex theory
develop new classifications or concepts
pick specific cases “deviant cases”
use case study to generate hypothesis or to look at causal mechanism
inferences based on one case are less secure
Difference b/w descriptive case study and observational studies
IV is not clear, the RQ is wide-open
complex units + are named
variables emerge during research
deliberate choice
temporal sequences
low external validity; internal validity depends on the case
probabilistic nature -> cannot specify size of effect
Risk in single case study
Greater risk of researcher bias
luck may validate or invalidate the findings
results can deviate b/c of a faulty measurement
a single negative case cannot invalidate a probabilistic theory
Textual analysis + use
Is the systematic examination of the messages and meanings conveyed by texts
Text: any form of communication that features content (words, symbols = has a message)
Use: to define/understand ideas, goals, motivations and activities of politicians, poli. org. and institutions.
Shed light on political issues and events
Content analysis and discourse analysis + disadvantage(1)
Content analysis: quantitative; used to explore the message characteristics
Discourse analysis: qualitative; seeks text meaning reflected in content(which in this case is called discourse)
Both leave important questions unanswered
Features(2)
Structural features: focus on the communication’s format and the content’s presentation
Substantive features: focus on what is said and what is meant; content convey particular meanings, norms and assumptions
Type of content analysis(2)
Manifest content: surface meaning of the subject(easier+quantitative)
Latent content: underlying or implied meaning(qualitative)
Strengths + Weaknesses (7)
Methodology tool for many approaches to poli sci research
Used as a part of mixed methods
Availability of text(abundance or inaccessibility) is both a S and W
Is objective
Key assets: Reliability and validity
Rigor in quantitative, important message may be ignored b/c of reductionist tendencies(marginalized texts)
Qualitative can’t disaggregate meanings, only unified whole meanings(bias underlying messages or identifying patterns)
Ethical considerations:
It is unobtrusive and non-reactive; part of public domain
If it involves private matters, ethical concerns are raised(consent and confidentiality)
Interview: definition and 5 basic steps
One person asks questions and the other respond; effort to obtain necessary data by promoting discussion
Requires understanding of political process or context to be credible and to avoid missteps
Obtains very detailed data, and often private, otherwise inaccessible information
Contact developed = trust
Is time-consuming
Is reactive(desire to self-promote/self-protect) -> misleading, lying, present false information thought to be true
Steps:
Select the kinds of individuals best suited to the interview
Contact potential respondents and request an interview(helpful people first, hostile/busy people last)
Clearly know the data you are seeking by making an interview framework(of the qts you want to ask)(positive and neutral)
Preschedule the interview to have enough time
Awareness of body language + take notes even if you are recording it.
Focus group: definition and one basic step
Enable researchers to probe beneath the surface of public opinion(why ppl dislike this or not)
Provide context and community perspective on broader issues
Discussion at length and in depth of the same topic in a structured conversation for an extended period (1-2 hours)
Time and money efficient method to obtain data from multiple participants
Understanding attitudes matters more than measuring them
Participants have to be knowledgeable, willing and capable of communicating + facilitator has to obtain their trust(skillful) so participants won’t conform to a dominant thinking in the group
Should have enough participants to yield diversity(6-12) but not too large so people won’t get intimidated
Researchers do not always anticipate where the discussion will go
Has a moderator(must remain focused on the topic) and a note-taker + are video-taped
Food facilitates presession conversation
Steps:
Moderator gives a short intro giving the purpose of the meeting
Complements quantitative data
Observation research:
Observe actual behavior rather than relying on reported behavior(which may be biased b/c of the researcher or the participants)
Known as ethnography
Interest: events occurring in natural circumstances
Pay attention to context, cultural setting and power relations
Inductive and exploratory
Obtrusive observation = subjects are aware that they are being observed
If participants are told that they are observed = Hawthorne effect
Participant observation: researcher becomes part of the community being observed(context-driven > structure)(time-consuming but necessary for valid data)
Association = physical risk and damage to the researcher’s credibility
Entrenchment in a group may divert the researcher from the task
Field notes = data
Question types in Interviews, Focus groups and observations:
Interviews: promote discussion, not a simple yes-no question
Focus groups: open-ended question + framework is more flexible
Observations: Observation schedule for a more structured observation(checklist of behaviors; indicators exclusive and exhaustive)
Ethical issues in a qualitative research
Protecting the ID of participants for their safety
Interviews and focus groups: researchers have to state their intentions (data collection and anonymity)
Focus group should not disclose sensitive information
Be an honest person = good reputation; a competent, knowledgeable and energetic researcher is likely to produce something significant