reading 3 - review of core ideas Flashcards
case studies: types, designs and logics of inference
Jack S. Levy
typology of case studies based on their purposes:
- idiographic (inductive and theory-guided)
- hypothesis-generating
- hypothesis testing
- plausibility probe
diff case study research designs:
- comparable cases
- most and least likely cases
- deviant cases
- process tracing
selection bias
“single logic” debate
emphasize the utility of multi-method research
introduction: increasingly multi-method research
common idea = good case study research lacks a method = wrong
what is a case study?
no consensus on definition
common = case study: attempt to understand and interpret a spatially and temporally bounded set of events
last three decades = polsci moved to a more theoretical orientation -> cases as instances of something else (a theoretically defined class of events)
George and Bennett:
- case = instance of a class of events
- case study= detailed examination of an aspect of a historical episode to develop or test historical explanations that may be generalizable to other events
central question: what is this a case of?
!case is not equivalent to observations: cases include many observations of the same variable
case studies do not equate to a narrative approach: the association between case studies and qualitative methods is a methodological affinity, not a definitional entailment
typology of case studies
most typologies combine research objectives and case selection techniques (e.g. deviant case study for hypothesis generation)
simpler + more useful = focus on theoretical/descriptive purposes of research objectives of a case study + distinguish those from various research designs or case selection techniques
->
- idiographic case studies = aim to describe/explain/interpret a particular case + can be inductive or theory guided
- hypothesis-generating case studies + hypothesis-testing case studies = combine Lijphart’s theory-confirming and theory-infirming cases
- plausibility probes = intermediary step between hypothesis generation and hypothesis testing, includes illustrative case studies (ideal types)
idiographic case studies
aim to describe/explain/interpret/understand a single case as an end in itself rather than as a vehicle for developing broader theoretical generalizations
= e.g. historians
subtypes = based on how much the analysis is guided by an explicit theoretical framework
inductive/descriptive case studies
= descriptive, lack theoretical framework to guide analysis
- total history idea = everything is connected to everything
- aims to explain all aspects of a case and their interconnections
- still theoretical preconceptions and biases
theory-guided case studies
= structured by conceptual framework focusing on theoretically specified aspects of reality, neglecting others
- e.g. efforts to explain origins of WW1 and the cold war
hypothesis-generating case studies
aim to generalize beyond the data
examine case(s) to create a more general theoretical proposition that can be tested through other methods
!contribute to the process of theory construction rather than to theory itself
- theory = logically interconnected set of propositions = requires more deductive orientation than case studies provide
case studies useful to explain cases that don’t fit existing theory, to explain why the case violates theoretical predictions
theory guides an empirical analysis of a case -> case is used to suggest refinements in the theory -> can be tested on other cases
case studies help specify causal mechanisms: process tracing (intensive analysis of the dev of a sequence of events over time)
hypothesis testing case studies
hypothesis-testing contributions of crucial case studies based on most/least likely case designs
plausibility probes
≈ pilot study (the pilot study is its equivalent in experimental or survey research)
allows the researcher to sharpen a hypothesis or theory, to refine the operationalization or measurement of key variables, or to explore the suitability of a particular case as a vehicle for testing a theory before engaging in a costly and time-consuming research effort
the analyst probes the details of a particular case in order to shed light on a broader theoretical argument
e.g. illustrative case studies of IR: brief case studies that fall short of the degree of detail needed to explain a case fully or to test a theoretical proposition
- aim to give the reader a “feel” for a theoretical argument by providing a concrete example of its application
!“plausibility probe” is often used rather loosely (growing theoretical/methodological expectations -> used as a cop-out)
varieties of case study research design
increasingly case selection needs to be theoretically justified -> considerations of intrinsic interest or historical importance no longer acceptable
some issues of case selection that are important in hypothesis testing are less concerning at the hypothesis-generating stage (e.g. selecting on the DV + appropriate nr of cases)
- the more cases used to construct a theory, the less can be used to test it
selection bias
random selection in small-N research -> serious biases
need for theory-guided case selection = risk selection bias
- picking a case used to generate the hypothesis
- picking a case bc it fits the hypothesis
- over-representing cases from either end of the distribution of a key variable (esp. cases with extreme DV values bc it underestimates the strength of causal effects)
- case study research that relies on historians with the same set of analytic biases -> case study researcher predisposed toward certain theoretical interpretations
problem of “selecting on the DV” -> need to include negative cases
but: with process tracing within case studies, selecting on the DV is not a problem bc you’re not comparing across cases + when studying necessary conditions, only cases with the outcome but without the hypothesized necessary condition can falsify the hypothesis (so then you can select on the DV)
!scholars need to test their explanations against alternative interpretations
comparable-case research designs
criticism of quantitative researchers on case studies = case studies have more variables than cases -> degrees-of-freedom problem -> outcomes causally underdetermined
Lijphart: the comparative method follows a diff logic, achieving control by selecting comparable cases (to rule out confounds)
-> the logic of inference in statistical and comparable-case methods is quite similar
John Stuart Mill: A System of Logic = two methods for the empirical testing of theoretical propositions
-> the most similar and most different systems designs of Przeworski and Teune are analogous
- method of difference = select cases with diff values on the DV and similar values on all but one of the possible IVs ≈ most similar systems design (similar IVs, diff DVs)
- method of agreement = select cases that are similar on the DV and diff on all but one of the IVs ≈ most diff systems design (diff IVs, same DV)
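the comparison logic of the method of difference can be sketched in a few lines of Python (a minimal illustration; the variable names `urbanized`, `wealthy`, `free_press` are invented, not from the reading): given two cases that differ on the outcome, it isolates the one IV on which they also differ.

```python
# Mill's method of difference (illustrative sketch with invented variables):
# two cases differ on the DV and agree on all candidate IVs except one,
# which is then singled out as the inferred cause.
case_a = {"outcome": True,  "urbanized": True, "wealthy": True, "free_press": True}
case_b = {"outcome": False, "urbanized": True, "wealthy": True, "free_press": False}

differing_ivs = [k for k in case_a
                 if k != "outcome" and case_a[k] != case_b[k]]
print(differing_ivs)  # ['free_press']: the only IV that covaries with the outcome
```

the method of agreement runs the mirror-image comparison: cases that agree on the DV and on only one candidate IV.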
problem = identify cases that are truly comparable in this way -> congruence method easier
= longitudinal design looking at a single case over time
problem = causal inference: hard to establish with interaction effects etc. -> Mill’s method needs to be supplemented by within-case methods
(Ragin: multiple conjunctural causation = when the nr of variables increases, the nr of interaction effects also increases)
process tracing
problem with other methods: demonstrating that observed patterns of covariation reflect a causal relationship
-> process tracing (causal process observations) provide additional evidence about cause and effect
= useful for studying e.g. decision-making and complex causation (e.g. analysis path dependence and critical junctures)
process tracing can be combined with other methods to examine alternative causal mechanisms associated with observed patterns of covariation
crucial case designs
= based on most-likely or least-likely designs (assume that some cases are more important than others for the purposes of testing a theory)
- when a case is not expected to be consistent with a theory and it nonetheless is, this leverages support for and confidence in the theory, esp. when it is a most-likely case for another theory
!!! Sinatra inference: if I can make it there, I can make it anywhere !!!
- evidentiary support for a theory in a most-likely case, or lack of support in a least-likely case, leads to only a modest shift in one’s confidence in the validity of the theory
= for testing certain types of theoretical arguments (when theory provides precise predictions + when measurement error is low)
e.g. Allison: 3 models of foreign policy decision-making applied to the Cuban missile crisis
- most likely case for the rational unitary actor model of foreign policy -> evidence contradicted it
- least likely case for the alternative organizational process and governmental politics models -> evidence supported these models
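the “modest shift” logic can be made concrete with a toy Bayesian update (the probabilities below are invented for illustration, not from the reading): evidence consistent with the theory in a most-likely case barely moves the prior, while the same evidence in a least-likely case moves it substantially.

```python
# Toy Bayesian reading of crucial-case logic (illustrative numbers only)
def posterior(prior, p_pass_if_true, p_pass_if_false):
    # Bayes' rule: P(T | pass) = P(pass|T) * P(T) / P(pass)
    evidence = p_pass_if_true * prior + p_pass_if_false * (1 - prior)
    return p_pass_if_true * prior / evidence

prior = 0.5
# Most-likely case: the theory would probably pass even if it were false.
most_likely = posterior(prior, p_pass_if_true=0.95, p_pass_if_false=0.80)
# Least-likely case: passing would be surprising if the theory were false.
least_likely = posterior(prior, p_pass_if_true=0.95, p_pass_if_false=0.10)
print(most_likely, least_likely)  # ~0.54 vs ~0.90: the least-likely pass is far more informative
```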
deviant case design
Focus on observed empirical anomalies in existing theoretical propositions
Aims to explain why a case deviates from theoretical expectations
= to refine existing hypotheses/theories
Similar to studying residuals in statistical methods
Examination of deviant cases -> theory refined -> must be tested against new evidence
Intent often to save a theory from damaging evidence -> contributes to hypothesis testing, generating, and refining
Also important in analysis of borderline cases
Aim to check for the possibility of measurement error in key variables that might affect the classification of cases or the validity of the unit-homogeneity assumption
conclusions
rapidly expanding literature on case study methodology reflects an increasing theoretical orientation and methodological self-consciousness among case-study researchers. They now generally see cases primarily as vehicles for constructing and supporting broader theoretical generalizations, and even most idiographic studies are guided by a well-developed theoretical framework. The role of theory is particularly evident in the criteria for case selection and logics of interpretation in most/least likely designs, deviant-case strategies, and comparable-case designs.
qualitative methodologists argue that process tracing, unlike large-N and cross-case comparative work, is not susceptible to the problem of selecting cases on the dependent variable, because process tracing follows a different logic of inference
positivism and interpretivism share a goal: deriving testable implications from alternative theories
different = methodological rules about case selection, role of process tracing, emphasis on role of causal mechanisms etc.
case, case study, and causation: core concepts and fundamentals
pp. 51-60 “set-relational causation”
set-relational causation = establishes relationships between sets in which cases are either members or nonmembers
- e.g. countries with a large welfare state vs countries without (those are part of the negation of the set of interest)
-> diff from covariational case studies:
- set-relational causation is based on invariant cause-effect relationships
-> you only care about cases within the sets (e.g. only about the level of spending in countries with open economies)
- invariance -> asymmetric causation (rather than the symmetric causality of covariational analyses)
- asks how condition X is related to an outcome Y (signals that causal inference is about patterns of invariance)
cornerstones of set relations = sufficiency and necessity
-> sufficiency
condition is sufficient for an outcome when the presence of the condition coincides with the presence of the outcome
X->Y (Boolean logic)
X is a subset of Y
e.g. democratic peace phenomenon: two democracies do not fight each other
= argument of sufficiency: if a dyad is democratic (X is present), one observes peace (Y is present)
also: asymmetry: argument does not imply that nondemocratic dyads are always at war
(Venn diagram: the set of peaceful dyads includes the democratic dyads; the democratic circle does not extend beyond the peaceful circle)
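the subset definition of sufficiency can be checked mechanically with Python’s built-in sets (a minimal sketch; the dyad labels are invented for illustration):

```python
# Sufficiency: X -> Y, i.e. the set of cases with the condition (democratic
# dyads) is a subset of the set of cases with the outcome (peaceful dyads).
democratic_dyads = {"NL-DE", "UK-FR"}           # X present
peaceful_dyads = {"NL-DE", "UK-FR", "CN-RU"}    # Y present

print(democratic_dyads <= peaceful_dyads)  # True: democracy sufficient for peace

# Asymmetry: the reverse subset check fails, and that is no violation;
# sufficiency says nothing about the nondemocratic dyads (e.g. "CN-RU").
print(peaceful_dyads <= democratic_dyads)  # False
```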
-> necessity
condition is necessary when the outcome occurs only if the condition is present
X<-Y
the set of cases with Y present is a subset of the set of cases with the necessary condition (X)
e.g. nondemocratic war phenomenon:
- Y = war
- X = nondemocratic dyad
the set of cases with war is a subset of the nondemocratic dyads: not all cases of X are associated with Y, but Y requires X (bc we know that two democracies don’t fight each other)
venn diagram: circle of Y sits within the circle of X
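necessity is the mirror-image subset check (again with invented dyad labels, purely illustrative):

```python
# Necessity: Y occurs only if X is present, so the outcome set (war dyads)
# is a subset of the condition set (nondemocratic dyads).
war_dyads = {"IR-IQ", "RU-UA"}                       # Y present
nondemocratic_dyads = {"IR-IQ", "RU-UA", "CN-VN"}    # X present

print(war_dyads <= nondemocratic_dyads)  # True: Y requires X

# Not all cases of X are associated with Y: "CN-VN" is nondemocratic
# but not at war, which is consistent with X being merely necessary.
```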
-> necessity and sufficiency
condition is necessary and sufficient when the outcome is present if and only if the condition is present
e.g. a pair of democratic countries is the condition and the outcome is peace -> a democratic dyad would be necessary and sufficient for peace if:
- all democratic dyads were to maintain peaceful relations (sufficiency)
- all instances of peace were to involve democratic dyads (necessity)
X<->Y: condition and outcome form perfectly overlapping sets/Venn circles
- absence and presence of the outcome depend only on the absence and presence of the condition
= symmetric: outcome always changes as the condition changes
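combining the two checks gives the biconditional: a condition is necessary and sufficient exactly when the two sets coincide (a minimal sketch with placeholder case labels):

```python
# Necessary AND sufficient: X <-> Y, perfectly overlapping sets.
condition = {"case1", "case2", "case3"}  # X present
outcome = {"case1", "case2", "case3"}    # Y present

sufficient = condition <= outcome  # X -> Y
necessary = outcome <= condition   # Y -> X
print(sufficient and necessary)    # True, equivalent to: condition == outcome
```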
example Gamson’s law
outcome = party’s share of cabinet posts
cause = party’s share of seats in the parliament
set-relational study
-> condition (X) = high seat share
-> negation = low seat share
-> outcome (Y) = high cabinet share
-> negation outcome = low cabinet share
if the condition (high seat share) is necessary AND sufficient for a high cabinet share -> cabinet share of a party is low when the seat share is low and high when the seat share is high
= symmetric: cabinet share always changes as the seat share changes
correlational inference relies on differences in kind on the IV and DV -> the cabinet share of a party changes from low to high as the party’s seat share changes from low to high, and vice versa
= also symmetric, but says nothing about the cabinet share when the seat share is low or high
only works if there is a change in the IV -> does not entail anything about necessity and/or sufficiency
-> equifinality
two or more conditions are individually sufficient for the same outcome
e.g. democratic peace: power asymmetry leads to peace within dyads too
- the conditions democratic dyad and power asymmetry are both individually sufficient for peace: each condition can bring about the outcome on its own
- neither of the conditions is individually necessary
Venn: the conditions lie within the outcome
X1+X2->Y
plus means OR: presence of either X1 or X2 is sufficient to bring about the outcome
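with Boolean values, equifinality is an OR over conditions; the rows below are invented cases used to check that the disjunction is sufficient while neither condition is individually necessary:

```python
# Equifinality: X1 + X2 -> Y ("+" = OR): either condition suffices for peace.
cases = [
    {"democratic": True,  "asymmetry": False, "peace": True},
    {"democratic": False, "asymmetry": True,  "peace": True},
    {"democratic": False, "asymmetry": False, "peace": False},
]

# Sufficiency of the disjunction: whenever X1 or X2 holds, Y holds.
print(all(c["peace"] for c in cases if c["democratic"] or c["asymmetry"]))  # True

# Neither condition is individually necessary: peace occurs without
# democracy (row 2) and without power asymmetry (row 1).
```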
-> conjunctural causation
= two or more conditions produce the outcome only if they are simultaneously present
e.g. welfare state entrenchment (outcome) occurs when
- economic crisis occurs under the watch of a
- conservative government that seizes the opportunity to cut spending
X1*X2 -> Y
* stands for AND: conditions must occur together in order to produce the outcome
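conjunctural causation swaps the OR for an AND; the toy cases below (invented, not from the reading) show that the outcome follows only when both conditions coincide:

```python
# Conjunctural causation: X1 * X2 -> Y ("*" = AND).
cases = [
    {"crisis": True,  "conservative": True,  "entrenchment": True},
    {"crisis": True,  "conservative": False, "entrenchment": False},
    {"crisis": False, "conservative": True,  "entrenchment": False},
]

# The conjunction is sufficient: whenever both conditions hold, Y holds.
print(all(c["entrenchment"] for c in cases if c["crisis"] and c["conservative"]))  # True

# Neither condition alone is sufficient: rows 2 and 3 each have one
# condition present but no entrenchment.
```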
-> INUS conditions
= when conjunctural causation and equifinality come together
at least two conditions are insufficient but necessary elements of a conjunction that is itself unnecessary, but sufficient for the outcome
e.g. welfare state entrenchment only occurs if a
- conservative gov goes along with an eco crisis
- or if labor unions are weak in times of high public deficit, bc the latter creates pressure for reduced spending that weak unions can’t prevent
-> welfare entrenchment is visible when one of the two conjunctions is present
each conjunction is unnecessary: phenomenon is due to equifinality
each conjunction is individually sufficient: outcome can occur if at least one is present
X1*X2 + X3*X2 -> Y
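the formula can be written directly as a Boolean function (abstract condition names that mirror the notation, rather than a specific empirical claim):

```python
# INUS structure: Y occurs iff at least one of the two conjunctions holds.
def y(x1, x2, x3):
    return (x1 and x2) or (x3 and x2)

print(y(True, True, False))   # True: first conjunction (X1*X2) suffices
print(y(False, True, True))   # True: second conjunction (X3*X2) suffices
print(y(True, False, True))   # False: X2 missing, neither conjunction complete

# X1 is an INUS condition: insufficient on its own (y(True, False, False)
# is False), but a necessary part of the conjunction X1*X2, which is
# sufficient yet unnecessary (X3*X2 also produces Y).
```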
-> SUIN conditions
a cause is SUIN if it is a sufficient but unnecessary attribute of a condition that is insufficient but necessary for the outcome
e.g. government termination
- frequent gov termination is a necessary condition for the outcome ‘public discontent with democracy’
- gov termination occurs with elections or when the prime minister resigns -> the presence of either of these causes is sufficient for gov termination BUT not necessary: there are two ways in which termination can result
Venn diagram = outer circle is the necessary condition (gov termination, split with a dashed line in election and resignation prime minister), within the necessary condition lies the outcome (democratic discontent)
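the SUIN structure can be sketched the same way (Boolean stand-ins for the government-termination example; the case rows are invented to be consistent with the structure above):

```python
# SUIN: election and resignation are each sufficient but unnecessary for
# termination; termination is necessary but insufficient for discontent.
def termination(election, pm_resignation):
    # either event is enough to terminate the government
    return election or pm_resignation

cases = [
    {"termination": termination(True, False),  "discontent": True},
    {"termination": termination(False, True),  "discontent": False},  # necessary, not sufficient
    {"termination": termination(False, False), "discontent": False},
]

# Necessity check: every case with discontent also has termination.
print(all(c["termination"] for c in cases if c["discontent"]))  # True
# An election is thus SUIN: sufficient (but unnecessary) for termination,
# which is itself necessary (but insufficient) for discontent.
```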
critiques, responses, and trade offs: drawing together the debate
article + KKV + 4 critiques
David Collier, Henry E. Brady, Jason Seawright
KKV = King Keohane and Verba’s “Designing Social Inquiry” = effort to develop a shared framework for quantitative and qualitative analysis = focus on framework mainstream quantitative methods
= focus on applying quantitative tools to qualitative research
chapter = more emphasis on the limitations of quantitative tools and on the contributions of qualitative methods to addressing these limitations
4 critiques of KKV
- challenge of doing research that is “important”
- conceptualization and measurement
- selection bias
- probabilistic vs deterministic models of causation
statistical responses: support KKV in some respects (doing research that is important + probabilistic vs deterministic causality)
- important research: trade-off between striving for importance opposed to valid inference
- deterministic causation no-variance designs criticized as being subject to extreme selection bias (all-cases designs can be more efficient)
- conceptualization and measurement + selection bias = statistical response agrees with qualitative criticism
-> perspectives from statistical theory sometimes reinforce the views of qualitative methodologists and sometimes those of mainstream quantitative methodologists
-> statistical theory can provide an independent standard for adjudicating methodological debates
doing research that is important
KKV: scholars should study topics that are important in the real world and in relation to a given scholarly literature
+ limited attention to theory
critique:
- provides no guidance for how to choose important topics + how to achieve major advances in our substantive and theoretical understanding of politics and society (Rogowski)
- doesn’t address concern that KKV methodological norms (narrowing RQ) may make it harder to do research that is important
- KKV provides no heuristics for theory construction (McKeown)
- KKV warns against no-variance research designs, other scholars argue that these design are valuable (for within-case analysis)
Rogowski: conflict between
a. methodological goals of improving descriptive and causal inference on the basis of empirical data
b. objective of studying humanly important outcomes and developing theory that helps us to conceptualize and explain them
statistical response:
- KKV explicit in saying it does not attempt to provide guidelines for theoretical innovation (discovery is irrational/creative intuition)
- KKV focus on valid scientific inference NOT generation of hypothesis -> rejects no-variance design based on weak basis for causal inference
- statistical inference = guidelines that increase the probability of generating a correct inference
- conflict between achieving inferential goals and carrying out theoretically productive research = dilemma for all researchers, not just KKV
challenges of promoting creativity
can we establish procedures that promote theoretical creativity and lead to important research?
- one hand = view that we lack systematic procedures for generating novel insights into political phenomena (ideas come from e.g. imagination)
- other hand = reason to believe that some research practices are more likely to produce theoretical insights than others (e.g. deductive and inductive tools for gaining new insights)
- specific research activities can be useful stimuli for theoretical innovation, e.g. fieldwork, studying anomalous cases
“scholars can identify research practices that contribute either to improving inference or to promoting theoretical innovation, but not necessarily to both. we may often face a trade-off”
-> scholars must recognize the value of both goals
conceptualization and measurement
KKV:
- scholars should maximize the validity of measurements + use reliable data-collection procedures
- choose observable rather than unobservable concepts wherever possible
- skepticism about use of typologies
- trade-offs: between maximizing the concreteness of theories and stating theories in encompassing terms + between descriptive richness and facilitating comparison + between measurement validity and reliability/precision
critique:
- KKV only briefly mentions conceptualization and measurement (while they require extensive attention)
- KKV advice to employ readily operationalizable concepts -> sidesteps the challenge of defining difficult concepts (civil society, democracy, nationalism)
- typologies are important (qualitative small N research), e.g. types of political regimes
- KKV framework neglects basic ideas treatment of measurement (measurement error etc.) (Bartels + Brady)
- KKV pays almost no attention to the contextual specificity of conceptualization and measurement -> problematic advice like ‘increase the N’: it can push the analysis outside the domain where concepts are appropriate and valid
statistical/psychometrics response:
writing linked to the traditions of psychometrics, mathematical measurement theory, and statistics supports the critics of KKV with respect to conceptualization and measurement validity: careful decisions about these are crucial
- need for close attention to concept formation, measurement validity, and the contextual specificity of measurement
- psychometric tradition: measurement validity and theory are mutually dependent
- KKV’s warnings against unobserved/unmeasurable variables seem at odds with the traditions of covariance-structure models + factor analysis
- statistics + psychometrics + measurement theory = the validity of a given indicator must always be treated as context-specific
(KKV: we focus on descriptive and causal inference, not on concept formation and theory creation)
selection bias
KKV: advice about selection bias, framing it as central problem in causal inference
- focus on investigator-induced selection bias
- researchers should select cases across the entire range of the DV
selection bias = when there is unrepresentative sampling of cases or when nonrandom process assigns causes to cases
critique = concern with selecting extreme DV values has been oversold + qualitative researchers have distinctive tools for making valid causal inferences (even with truncated sample)
- KKV overextends rules and norms of conventional quantitative research. instead selection bias should be considered in light of trade-offs with other methodological and theoretical priorities
- Collier, Mahoney and Seawright: within-case analysis involves no selection bias when it is based on causal-process observations
- KKV sees no-variance research designs as a case of selection bias; this is true for regression analysis, but not necessarily for other analytical tools
- the definition of selection bias depends on defining the universe of cases (you need a well-defined universe to know whether a sample is nonrandom)
statistical response: in general agrees with KKV claims selection bias in regression analysis BUT agrees with critique concerning application KKV ideas on qualitative research
- no-variance designs are useless in regression analysis (the effect will be equal to the error term -> the overall estimate of the causal effect is zero)
!no-variance design on the IV is impossible for regression analysis; no-variance on the DV is not (causal estimates go to zero due to selection bias, not bc of the maths)
- many issues of bias can’t be addressed without a clear understanding of the relevant population
- qualitative judgement is required if we are to consider broader goals of research design (broader than quantifying bias): describe amount of new theoretical and substantive knowledge the design will produce
!critical note = investigator-induced bias is in practice relatively uncommon -> concerns about selection bias due to truncation are overstated + have distracted attention from other forms of bias, e.g.:
- self-selection of individuals into the categories of included variables (observational studies) = democracies more likely than authoritarian regimes to break down in the face of poor eco performance -> some countries will be selected into a regime type (IV) due to their scores on the DV (eco performance)
- nonrandom sampling processes: in survey research, the same types of people don’t respond
probabilistic vs deterministic models of causation
KKV: exclusively probabilistic model of causation (rather than deterministic)
- deterministic causation = models in which error variance is specified to be zero (no random component)
- qualitative research: deterministic causation = models of necessary and/or sufficient causation
critique:
- KKV fails to recognize importance in qualitative research of hypotheses about deterministic causation + need to dev. tools that test such hypotheses
- KKV rec. to seek variance on DV and IV -> may impede efforts to test deterministic causal models
statistical response: some support for KKV, some support for the critique
- probabilistic tests of deterministic causes: statistics supports the critics: if a deterministic cause is present and only a probabilistic model is tested, there may be invalid inferences (one will infer that there is some probability of the outcome even in the absence of a necessary cause) -> needs its own procedure
- necessary and/or sufficient causes and selection bias: testing deterministic causation often uses a no-variance research design -> KKV says bias
not true: the variance of the error term is zero -> no truncation (truncation = when selection creates a correlation between the error term in the causal model and the IV by overrepresenting atypical cases)
overrepresentation of atypical cases is irrelevant in deterministic models bc these models require even atypical cases to follow the overall pattern
- identifying the most efficient test (are no-variance designs the most productive way to assess deterministic causation?) -> KKV advice to seek variance on the IV and DV is correct, but for diff reasons: an all-cases design is sometimes better (when the nr of cases is small), showing that no cases lack the IV yet have the DV
*cases with the cause but not the outcome are not relevant for falsifying the hypothesis
deterministic causation + sufficient/necessary causes
deterministic causes increasingly seen as substantively important in the social sciences
- deterministic causation = models in which error variance is specified to be zero (no random component)
- qualitative research: deterministic causation = models of necessary and/or sufficient causation
basis = if a single case deviates from a hypothesized causal pattern -> serious doubt on the hypothesis
=> single variable on its own has a distinctive causal impact
- IV presence inevitably leads to an outcome if it is a sufficient cause
- IV absence definitively prevents an outcome if it is a necessary cause
all-cases design
= useful when small nr of cases
-> can draw on the large pool of cases where the outcome did not occur (any of these cases might have shown the DV without the IV, which would falsify the hypothesis, but none did)
most appropriate tests for deterministic hypotheses = cases with the outcome OR cases that don’t have the cause
- cases with cause (IV) but not the outcome (DV) are irrelevant
!Seawright: in observational studies the IV and DV are not assigned by the researcher -> should not treat them as fixed -> should use all cases
additional advantages all-cases design:
- even with evidence against deterministic causation, the data can be used to test the strength of the probabilistic association
- productive when: necessary or sufficient cause turns out to be both necessary and sufficient
- productive when: what was hypothesized to be necessary proves to be sufficient or vice versa
trade-offs in research design (goals and tools)
trade-offs -> conflicts among goals of researchers
- overarching goal = seek valid descriptive and causal inferences about important phenomena in the political and social world + refine theory (e.g. KKV)
- intermediate goals = more specific research objectives (precise, communicable concepts, generalizability, parsimony)
studies with divergent intermediate goals can make complementary contributions to overarching goals
trade-offs bc tools employed in pursuing the goals
- tool = specific research practice/procedure to achieve intermediate goals and through them overarching goals
methodology = developing tools with reasoning about how particular tools succeed/fail in achieving research goals
Rogowski = goals and tools involve trade-offs
e.g. goals: more general theories are often less accurate and less parsimonious
e.g. tools: no-variance designs may be biased, but may be valuable when basic descriptive info is lacking
we do have standards: we need to be explicit about goals + strengths and weaknesses of alternative means for pursuing the goals
trade-offs in KKV
essence of research design = making choices among potentially incompatible goals
trade-offs between:
*KKV mentions 5 trade-offs, chapter doesn’t mention them
- the precision and generality offered by quantitative tools vs the reliance on often untested assumptions required by these tools
- avoiding bias by including all relevant IVs vs maintaining inferential leverage by limiting the nr of IVs
- the representativeness and interpretability of quantitative tests associated with random sampling vs a close focus on theoretically relevant comparisons
KKV’s most important trade-off = between increasing the nr of observations and other significant goals
- increase N -> strengthens falsifiability + enhances explanatory leverage + addresses indeterminacy and multicollinearity
- disadvantages = analysis drawn out of its appropriate domain + lack of measurement validity (which is context-specific) + difficulty of gaining knowledge of the context + adding cases can undermine the independence of observations (temporal/spatial subunits can add observations that are not independent)
trade-offs -> not one set of methodological guidelines can ensure that researchers will do good work
conclusion
trade-offs -> methodological issues more complex than they appear in KKV (esp. increasing N)
decisions about concepts, typologies, measurement relations and domains of measurement validity are largely neglected by KKV