flashcards process tracing best practices (12 nov)
process tracing is primarily used by research communities that:
- start research with puzzle-driven question (question guides methods)
- aim for contextualized explanations that combine (social and institutional) structure and agency (actors’ decisionmaking) + specify scope conditions
- focus on “causes of effects” kind of questions (start from an outcome in a case and ask what produced it, rather than estimating the average effect of a cause)
two forms of analysis: theory development vs theory testing
theory development = inductive
- take a case with an outcome, identify and stylize an explanation
- researcher can compare their explanation to alternatives (alternatives with different causes, or different mechanisms linked to the same cause)
e.g. Evangelista tried to figure out a causal mechanism to explain the outcome (End of the Cold War)
*also deductive element bc tests empirical implications of diff theories
theory testing = deductive
- examine observable implications of hypothesized causal mechanisms (defined through deduction) within a case to test or probe the mechanism’s plausibility
e.g. Winward study of Indonesia = tests empirical implications of steps of the causal process
best practices process tracing
caveats + benefits
caveats =
- rules of thumb (not strict template to be applied)
- relevance of each of the practices depends on state of the research
benefits:
- more systematic and transparent research
- methodologically plural: can be combined with other methods + use any type of data
best practices - list
- cast the net widely for alternative explanations
- be equally tough on alternative explanations
- consider potential biases of evidentiary sources
- take into account how critical the case is for the alternative explanations
- make justifiable decision on when to start (how far back?)
- be relentless in gathering diverse and relevant evidence
- combine PT with case comparisons when useful (and feasible)
- be open to inductive insights
- think of empirical implications deductively: “if my explanation is true, what will be the specific process leading to the outcome?”
- my analysis does not need to be fully conclusive
- cast the net widely for alternative explanations
-> more credible and persuasive research
where do I look?
- case-specific: regional specialists, experts in the topic, historians
- non-scholarly sources: participants and journalists may know explanations scholars ignore
- scholarly publications: go over typical domains of explanation in the social sciences (material power, institutions, social norms, etc.) + go beyond level of analysis (focus on agency and structure)
e.g. Evangelista: looks at realists explanations + ideas and norms
- be equally tough on alternative explanations
beware of confirmation bias: consider properly the evidence that fails to fit my hypothesis
some explanations might be easier to reject than others, which depends on the kind of evidence we have and the test we can accordingly apply
- hoop test = necessary evidence for an explanation to remain relevant, but does not say if it is strong relative to others
= useful to disqualify at the outset alternative explanations that look weak
- smoking-gun test = sufficient evidence for an explanation to be strong relative to others
= NOT necessary for relevance: failing the test does not disqualify an explanation
- doubly-decisive test = evidence that confirms the explanation and at the same time renders the alternatives irrelevant
- straw-in-the-wind test = evidence does not guarantee relevance nor strength, but suggests plausibility
= may increase confidence when many of these pieces of evidence accumulate
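The four tests above boil down to whether passing is necessary and/or sufficient for an explanation to survive (the 2x2 usually attributed to Van Evera). A minimal sketch of that mapping, purely illustrative (the function name and boolean framing are my own, not from the notes):

```python
def classify_test(necessary: bool, sufficient: bool) -> str:
    """Map whether passing a test is necessary/sufficient for an
    explanation's survival onto the standard process-tracing test names."""
    if necessary and sufficient:
        return "doubly-decisive"    # passing confirms it, failing eliminates it
    if necessary:
        return "hoop"               # failing eliminates; passing only keeps it alive
    if sufficient:
        return "smoking-gun"        # passing confirms; failing is not fatal
    return "straw-in-the-wind"      # neither; only nudges plausibility

# e.g. a hoop test: evidence the explanation must pass, but that
# doesn't establish its strength relative to alternatives
print(classify_test(necessary=True, sufficient=False))   # → hoop
```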
- Consider the potential biases of evidentiary sources
- primary sources: agents may have ulterior motives, consider the context in which statements are made and adjust credibility of sources accordingly
- selection or availability bias: absence of evidence (sources not made available to you) does not necessarily mean evidence of absence
- secondary sources: consider historiography from more than one school of interpretation
- take into account how critical the case is for the alternative explanations
understand/assign prior expectations for each explanation according to how critical this case is for that explanation
- most-likely case -> more persuasive if you can tell why theory fails
- least-likely case -> analyze why an explanation holds
cold war example: most likely case for testing realism (bc it concerns Great Power competition)
- if you exclude the realist explanation, argue that its implications don’t explain the case, you need to be persuasive as to why
- make justifiable decisions on when to start (how far back)
depends on the question
suggestion = identify the most proximate decision node (critical juncture)
- but consider whether earlier or later ones are relevant, if necessary -> transparency and falsifiability
(be open to inductive insights, maybe change the starting date)
e.g. Evangelista went back to the 1950s and 60s to explain the end of the Cold War + the 1986 Chernobyl disaster was the starting point for cognitive theories based on learning
- be relentless in gathering diverse and relevant evidence… but make a justifiable decision on when to stop
diff sources and data types allow triangulation
- triangulation = cross-checking inferences
- bc if data comes from the same stream (e.g. docs provided by interviewee), they are subject to same biases and errors accumulate
make a justifiable decision on when to stop: gathering detailed + diverse data is time consuming
stop when:
- financial and temporal constraints
- repetition: when seeing same results many times, it’s unlikely more data (from similar streams) will change your beliefs
!!!no way to escape the trade-off between risk of stopping too soon and making poor inferences, and risk of stopping too late and wasting time
transparency and well-reasoned justification matter for credibility
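The logic of accumulation and repetition above can be given a toy Bayesian form. This is my own illustration, not from the notes: assume each *independent* piece of evidence can be summarized as a likelihood ratio that multiplies the odds of the explanation, while a piece re-reported through the same stream (same biases) is deduplicated rather than counted again.

```python
from math import prod

def posterior_odds(prior_odds: float, likelihood_ratios: list[float]) -> float:
    """Update the odds of an explanation with independent pieces of evidence,
    each summarized as a likelihood ratio (>1 supports, <1 undermines)."""
    return prior_odds * prod(likelihood_ratios)

# Four weak but independent straws-in-the-wind accumulate:
odds = posterior_odds(1.0, [1.5, 1.5, 1.5, 1.5])
print(round(odds, 2))   # → 5.06

# The same straw reaching you twice via the same stream (e.g. documents
# provided by the interviewee who told you the story) should be counted
# once — dedupe by source before updating, otherwise errors accumulate.
```

The repetition stopping rule then reads: once further evidence comes from streams already represented, the expected change in the odds is small, so more collection is unlikely to move your beliefs.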
- combine PT with case comparisons when useful (and feasible)
starting with a case study using PT, possible synergies:
- comparative design to (further) test causal mechanism (e.g. Winward tests his causal mechanism developed in the Central Java case through a most similar comparison of East and West Java)
- situate the case in comparative perspective (in relation to a scholarly debate): what type of case is this, what does it do to theory (does it nuance scope conditions or causal claims?)
starting with a comparative design, possible synergies:
- PT is a systematic way to look inside cases
- PT may help rethink selection of cases and justify them better
e.g. cold war: in-time comparison of Khrushchev’s reforms vs Gorbachev’s reforms = most-similar design, main IV is their different personal histories
- think of empirical implications deductively
“if my explanation is true, what will be the specific process leading to the outcome?”
- theories often stated in very general terms -> need to adapt them to the case
- define them prior to the analysis to minimize the risk of ad hoc operationalizations
- if hypothesized explanation comes from the same case, I need evidence independent from the evidence that created the explanation
- my analysis does not need to be fully conclusive
- may conclude that you don’t have the data to rule out an alternative explanation
- may conclude that more than one explanation works, perhaps explaining diff aspects of the outcome (complementary explanations)
this kind of transparency makes your analysis more credible