RD2 Flashcards
According to Dahl (1998), for a polyarchy to exist the following five criteria must all be present: effective participation, voting equality, enlightened understanding, control of the agenda, inclusion of adults. This concept structure resembles what Goertz (2020) calls the Logical OR.
False
Please select whether the following research question is indicative of
A) a covariational, B) a causal process tracing, or C) a congruence analysis type of case study.
- Which configurations of causal conditions led to social revolutions (Skocpol 1979)?
Causal Process Tracing
Please select whether the following research question is indicative of
A) a covariational, B) a causal process tracing, or C) a congruence analysis type of case study.
Does “high reliability organization theory” or “normal accident theory” provide a better framework for understanding and explaining risk management in complex organizations (Sagan 1993)?
Congruence Analysis
Please select whether the following research question is indicative of
A) a covariational, B) a causal process tracing, or C) a congruence analysis type of case study.
Does a country’s political opportunity structure affect the strategy and the impact of anti-nuclear movements (Kitschelt 1986)?
Covariational
According to Lijphart there are three fundamental strategies of research/methods (the experimental, statistical & comparative method)
How does each method resolve the problem of control?
- Experimental method
variance through comparison with control groups
establishes control through well-considered randomisation of treatment
According to Lijphart there are three fundamental strategies of research/methods (the experimental, statistical & comparative method)
How does each method resolve the problem of control?
- Statistical method
observing correlations
control through statistical manipulation
According to Lijphart there are three fundamental strategies of research/methods (the experimental, statistical & comparative method)
How does each method resolve the problem of control?
- Comparative method
observing covariation in limited set of cases
control through careful case selection
“The comparative method resembles the statistical method in all respects except one. The crucial difference is that the number of cases it deals with is too small to permit systematic control by means of partial correlations” (ibid., 684).
Experimental method
Experimental Method “in its simplest form, uses two equivalent groups, one of which (the experimental group) is exposed to a stimulus while the other (the control group) is not. […] Equivalence – that is, the condition that the cetera are indeed paria – can be achieved by a process of deliberate randomization” (Lijphart, 1971, p. 683)
variance through comparison with control groups; establishes control through well-considered randomisation of treatment
Comparative method
Comparative Method as the “method of testing hypothesized empirical relationships among variables on the basis of the same logic that guides the statistical method, but in which the cases are selected in such a way as to maximize the variance of the independent variables and to minimize the variance of the control variables.” (Lijphart, 1975, p. 164)
observing covariation in limited set of cases; control through careful case selection
In groups of four, formulate a set of questions that should be asked before or when engaging in case selection!
Statistical method
Statistical Method “entails the conceptual (mathematical) manipulation of empirically observed data – which cannot be manipulated situationally as in experimental design – in order to discover controlled relationships among variables. It handles the problem of control by means of partial correlations.” (Lijphart, 1971, p. 684)
observing correlations; control through statistical manipulation
What is the main difference between the two sentences:
a, Social revolution is possible only if the state is
in crisis.
b, State crisis leads to social revolution.
a) State crisis = necessary condition
b) State crisis = sufficient condition
According to Eckstein, a case study is N=1.
True
Dual choice question: Transitivity means that ‘when x causes y, y does
not cause x’
False
Regularity Theories of Causation.
The core idea of regularity theories of causation is that causes are regularly followed by their effects. A genuine cause and its effect stand in a pattern of invariable succession: whenever the cause occurs, so does its effect.
Probability theory
some event is more probable than another, without specifying the exact numerical probabilities of the events in question.
The probability of political sophistication given compulsory voting is higher than the probability of political sophistication without compulsory voting. It’s more likely to achieve political sophistication with compulsory voting.
X causes Y if and only if the conditional probability of Y given X is higher than the conditional probability of Y when X is not given ( p(Y|X) > p(Y|not-X) ). The type-level claim that “granting territorial autonomy causes secessionism” can then be true even if there are some minorities who received autonomy and did not increase their secessionism as long as the conditional probability of secessionism given autonomy is higher than the probability of secessionism without territorial autonomy.
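The inequality above is a comparison of two relative frequencies, which can be illustrated in a few lines. The cases below are invented for illustration; the point is only that the type-level claim can hold even though some autonomy cases show no secessionism:

```python
# Toy illustration (made-up data): p(Y|X) > p(Y|not-X) can hold
# even though some X-cases lack Y.
# Each tuple: (autonomy_granted, secessionism_increased)
cases = [
    (True, True), (True, True), (True, False),    # autonomy granted
    (False, True), (False, False), (False, False) # no autonomy
]

def cond_prob(cases, x_value):
    """P(Y=True | X=x_value), estimated as a relative frequency."""
    relevant = [y for x, y in cases if x == x_value]
    return sum(relevant) / len(relevant)

p_y_given_x = cond_prob(cases, True)       # 2/3
p_y_given_not_x = cond_prob(cases, False)  # 1/3
print(p_y_given_x > p_y_given_not_x)       # True: the type-level claim holds
```

The one minority that received autonomy without turning secessionist (the `(True, False)` case) does not falsify the probabilistic claim.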
Bayesian probability
is an interpretation of the concept of probability in which, instead of the frequency or propensity of some phenomenon, probability is interpreted as a reasonable expectation representing a state of knowledge, or as a quantification of a personal belief
- Focus is on the conditional likelihood of making an observation, given that different theories were true
- How likely would I make this observation if my hypothesis were true? (= the degree of certainty)
- How likely would I make this observation if the rival hypothesis were true? (= the degree of uniqueness)
- Different pieces of evidence can have different inferential value
- Even single pieces of evidence can be the basis for inference, if they discriminate well between competing explanations
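The two likelihoods named here (certainty and uniqueness) slot directly into Bayes' rule. A minimal sketch with made-up numbers, assuming for simplicity that the hypothesis and its rival are the only alternatives:

```python
# Hedged sketch: Bayes' rule with the two likelihoods the card names.
# "Certainty"   = P(E|H): how likely the evidence is if my hypothesis is true.
# "Uniqueness"  = P(E|rival): how likely it is under the rival hypothesis.
def bayes_update(prior_h, p_e_given_h, p_e_given_rival):
    """Posterior probability of H after observing evidence E,
    assuming H and the rival are exhaustive alternatives."""
    prior_rival = 1 - prior_h
    numerator = p_e_given_h * prior_h
    evidence = numerator + p_e_given_rival * prior_rival
    return numerator / evidence

# Certain AND unique evidence (0.9 vs 0.1) moves the prior a lot:
print(round(bayes_update(0.5, 0.9, 0.1), 2))  # 0.9
# Certain but not unique evidence (0.9 vs 0.9) teaches us nothing:
print(round(bayes_update(0.5, 0.9, 0.9), 2))  # 0.5
```

This is why a single well-discriminating piece of evidence can carry a lot of inferential weight: it is the ratio of the two likelihoods, not the number of observations, that drives the update.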
Frequentist probability
is a type of statistical inference based on frequentist probability, which treats “probability” in equivalent terms to “frequency” and draws conclusions from sample data by emphasizing the frequency or proportion of findings in the data
- Focus is on the frequency of observations – the more, the higher our confidence when rejecting or confirming a hypothesis
- Can be challenging in small-N research
The Bayesian logic can be applied without reference to the Frequentist logic.
False
Is the following situation an example of an interstitial contact (Mosley 2013): You approach people randomly at the train station to interview them.
No
According to Mosley, an interstitial contact requires an interviewee to select the researcher not the other way around.
When I go to an archive I collect data. I don’t generate it.
True
Sampling units:
“units that are distinguished for selective inclusion in an analysis”
Recording units:
“units that are distinguished for separate description, transcription, recording, or coding”
Context units:
“units of textual matter that set limits on the information to be considered in the description of recording units”
“I have analysed a certain number of newspapers.” According to Krippendorff, what is this person talking about?
Sampling units
According to Krippendorff, recording units can exceed sampling units.
False
p. 104: “recording units are typically contained in sampling units, at most coinciding with them, but never exceeding them.”
Large-N strengths
- Assess how often social/political phenomena occur on average
- Assess external validity of theoretical arguments (if cross-nationally comparative)
- Estimate existence and size of ‘causal effects’ of individual variables
- Estimate uncertainties
- Can serve as basis for selecting cases
Small-N Strengths:
- Serves to reconstruct causal mechanisms
- Enhance internal validity (meaning of concepts and theories in context)
- Identify possible causes/variables that should be taken up by large-N research / can help detect omitted variables
- Understand how actors make sense of their environment, and how and why they take decisions
Guidelines by Goertz
- Explicitly analyze the negative pole.
- Theorize the underlying continuum between the negative and positive poles.
- Theorize the gray zone; then determine whether or not the concept should be considered continuous or dichotomous.
- Do not let the empirical distribution of cases influence decisions. Usually, the empirical distribution of cases should be explained, not presumed in the concepts.
- Do not just list dimensions of the concept.
- Be explicit about the necessary conditions, if any.
- Give sufficiency criteria (holds for N & S conditions and FR).
- Do not force the reader to guess at structure from discussion of examples or the mathematics of a quantitative measure.
- Explicitly ask the weighting question.
- Justify the weighting scheme used.
- Be explicit about the formal relationships between secondary or indicator-level dimensions. The three canonical possibilities are 1) minimum/AND, 2) mean, and 3) maximum/OR.
- What is the theoretical relationship that links the indicator/data level to the secondary level? (e.g. causal relationship as in disease-symptom; direction?)
- What is the theoretical relationship that links the secondary level to the basic level?
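The three canonical aggregation rules from these guidelines can be made concrete with invented membership scores; the dimension names and values below are purely illustrative:

```python
# Hedged sketch of the three canonical aggregation rules (Goertz),
# using made-up membership scores (0-1) on three concept dimensions.
dimensions = {"participation": 0.9, "contestation": 0.6, "inclusion": 0.3}
scores = list(dimensions.values())

necessary_and = min(scores)           # minimum/AND: the weakest dimension caps the score
family_or = max(scores)               # maximum/OR: any strong dimension suffices
averaged = sum(scores) / len(scores)  # mean: dimensions compensate for one another

print(necessary_and)       # 0.3
print(family_or)           # 0.9
print(round(averaged, 2))  # 0.6
```

The choice among minimum, mean, and maximum is exactly the "formal relationship between secondary-level dimensions" the guideline asks you to make explicit, and it visibly changes the concept score for the same case.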
Logical OR (Goertz)
could include both things, but doesn’t have to. “A chair can have 1 OR 4 legs”
Logical AND (Goertz)
(necessary and jointly sufficient): has to include both things to be true. “Humans need oxygen AND water to survive”
Marks’ Rules
- The smaller the volume of information for any set of cases, the greater the benefit of increasing it (square-root law).
- The more imprecise an observation, the greater the benefit of an additional observation, even if it is no less imprecise.
- The more biased a dataset, the greater the benefit of an additional dataset having a different bias, even if the additional dataset is no less inaccurate.
- The greater the diversity of systematic error among datasets, the greater the benefits of triangulation.
How to conduct Lijphart’s methods on: the effect of a person’s income on education.
Experimental
Draw a random sample from the population, split the sample into a treatment and a control group. Treatment = financial support; measure whether this impacts education.
How to conduct Lijphart’s methods on: the effect of a person’s income on education.
Statistical
Collect data for a sample on income and education
How to conduct Lijphart’s methods on: the effect of a person’s income on education.
Comparative
Intentionally pick a few individuals from the population (with different incomes); see whether they have different levels of education.
* “The comparative method should be resorted to when the number of cases available for analysis is so small that cross-tabulating them further in order to establish credible controls is not feasible. There is, consequently, no clear dividing line between the statistical and comparative methods; the difference depends entirely on the number of cases.” (Lijphart 1971: 684)
* “[…] if at all possible one should generally use the statistical (or perhaps even the experimental) method instead of the weaker comparative method.” (Lijphart 1971: 685)
* Comparative method as the “method of testing hypothesized empirical relationships among variables on the basis of the same logic that guides the statistical method, but in which the cases are selected in such a way as to maximize the variance of the independent variables and to minimize the variance of the control variables.” (Lijphart 1975: 164)
How to generate external validity
- Craft arguments with general variables or mechanisms / eliminate (reduce) proper names (cf. Przeworski & Teune 1970)
- Capture representative variation (e.g. base case selection on typologies; cf. Collier, LaPorte & Seawright 2012)
- Select cases that maximize control over existing rival hypotheses (cf. Slater & Ziblatt 2013: 1313)
Configurative-ideographic study
aims to describe a case very well, but not to contribute to a theory.
- Individualizing, interpretative (≠ nomothetic)
- Logic of understanding (Verstehen -> W. Dilthey)
- Uniqueness of case
- Drawback: “do not easily add up” (S. Verba)
Disciplined Configurative study
aims to use established theories to explain a specific case
- Application of general laws or statements of probability to particular cases
- Linking comparatively tested theory and case
- Feedback on theory interpretation
- ≈ congruence analysis
Hypothesis-generating (or heuristic) case studies
aim to inductively identify new variables, hypotheses, causal mechanisms, and causal paths.
- Serving to find out potentially generalizable relations, which are deliberately sought out (cf. also grounded theory as a method of ‘discovery of theory from data’)
- Can be used in building block technique
- Track record of case studies as stimulants of theoretical imagination (intensive analysis)
- But: which case should one choose?
Theory testing case studies
aim to assess the validity and scope conditions of existing theories
Plausibility probes
aim to assess the plausibility of new hypotheses and theories.
- Probing the plausibility of candidate theories at lower costs than rigorous testing
- Pilot studies (e.g. in combination with game …)
Building block studies of types or subtypes
aim to identify common patterns across cases.
Why talk to people rather than just sitting at your desk?
Desk = you have your own thoughts on what fits your “concept” or the kind of person you want to interview. You have notions of what data to use. You believe you can get all the info you need from afar.
Talking to people on the spot = Realize that the concepts and data you thought were relevant were based on your knowledge and your environment, which doesn’t necessarily mean that it is the same for the people you’re researching.
You get more in-depth (and accurate) information when you do your research in the place you’re researching.
What is qualitative research?
Numbers (quan.) vs. words (qual.) (Charles Ragin)
Predetermined variables and their relations (quan.) vs. relations between categories that are themselves subject to change in the research process (qual.) (Aspers & Corte 2019)
About interpretation, “Verstehen” and actors’ meaning
Iterative processes and improved novel understandings (Aspers & Corte 2019)
Ontology
Assumptions about how the world works
Methodology
How to put knowledge about the world into practice.
Concept
Main building blocks of theories
If I have a theory I need to disaggregate it to its parts, aka concepts
When we use a concept we have a general idea of what that is. Example: a shirt.
An abstraction (of the concept) helps us sort things.
Concepts are a universe of possibilities. Example: there are many different shirts.
Concepts are hypothetical, the concept when thinking of it is not real, but there is a real version of it.
(Most) concepts are learned (and created), we learn them as we go, they’re not “natural”.
Concepts are socially shared. Concepts are shared via interaction with other people; you learn and teach concepts as you go. Concepts can be contested: your concept of something doesn’t have to match another person’s.
Concepts are reality oriented, concepts relate to the real world.
Concepts are selective constructions, the same thing can be conceptualized differently depending on the purpose.
Intension
connotation
the intension of a word is the collection of properties which determine the things to which the word applies
extension
denotation
the extension of a word is the class of things to which the word applies
CRITERIA FOR GOOD MEASUREMENT
Objectivity
Reliability (Precision)
Validity
Objectivity
Are measurements independent of the researcher?
Reliability (Precision)
Reliability
Extent to which an observation is consistent in repeated trials (Marks 2007: 4).
Refers to level of stochastic (random) error, or noise, encountered in the attempt to operationalize a concept (inverse of the variance across measurements) (Gerring 2012).
Does applying the same procedure in the same way always produce the same measure?
Validity
Correspondence between a concept‘s definition (its attributes) and the chosen indicators. Validity refers to systematic measurement error (bias) (Gerring 2012).
Extent to which an observation approximates the actual value of the case on the dimension one wishes to measure (Marks 2007: 4).
Do the observations meaningfully capture the ideas contained in the concepts?
Triangulation
Triangulation involves data collected at different places, sources, times, levels of analysis, or perspectives – data that might be quantitative, or might involve intensive interviews or thick historical description. The best method should be chosen for each data source. But more data are better. Triangulation, then, is another word for referring to the practice of increasing the amount of information brought to bear on a theory or hypothesis. (King, Keohane & Verba 1995: 479f.)
Combining dissimilar sources of information to enhance validity of measurement. […] a strategy to minimize inaccuracy due to systematic error. (Marks 2007: 3)
Systematic error
Is a consistent or proportional difference between the observed and true values of something (e.g., a miscalibrated scale consistently registers weights as higher than they actually are)
is consistent, repeatable error associated with faulty equipment or a flawed experiment design.
Systematic bias in results, same mistake made all the time (dart lands on the same place every time)
Random (non-systematic) error
Is a chance difference between the observed and true values of something (e.g., a researcher misreading a weighing scale records an incorrect measurement).
has no pattern. One minute your readings might be too small. The next they might be too large. You can’t predict random error and these errors are usually unavoidable.
random mistakes (dart lands all over)
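The dart metaphor can be simulated: a constant bias shifts every reading by the same amount, while random noise spreads readings that average out over repeated trials. The scenario and numbers below are invented for illustration:

```python
# Toy simulation of systematic vs random measurement error.
import random

random.seed(0)
true_weight = 70.0

def measure(n, bias=0.0, noise=0.0):
    """n readings of true_weight with a constant bias and symmetric random noise."""
    return [true_weight + bias + random.uniform(-noise, noise) for _ in range(n)]

systematic = measure(1000, bias=2.0, noise=0.0)  # miscalibrated scale: same mistake every time
rand_only = measure(1000, bias=0.0, noise=2.0)   # shaky readings: errors in both directions

print(sum(systematic) / len(systematic))  # 72.0: consistently too high, more data won't help
print(sum(rand_only) / len(rand_only))    # ~70.0: random errors largely cancel out
```

This is why validity (systematic error) and reliability (random error) are distinct criteria: taking more measurements reduces the effect of noise but does nothing against bias.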
Crucial Case studies
- Need for independent testing of theoretical curves
- Crucial case study similar to a well-constructed, decisive experiment (that simulates as closely as possible the specified conditions under which a law must hold)
- Fit of theory and case is decisive
Internal validity
Internal validity refers to the correctness of a hypothesis with respect to the sample (the cases actually studied by the researcher).
External validity
External validity refers to the correctness of a hypothesis with respect to the population of an inference (cases not studied). The key element of external validity thus rests upon generalization from the studied cases to that population.
SELECTION BIAS
Selection bias is a faulty inference that wrongly attributes the properties of the scrutinized cases to a larger universe of cases.
Example: one wants to study revolutions and therefore chooses known revolutions. The problem is that we are only selecting positive cases; we know the outcome is 1. We chose based on the dependent variable.
General rule: Don’t select based on the dependent variable.
Exception to general rule: Want to understand the mechanism. Y-centered (outcome-centered) research. Also if we want to research necessary conditions.
PURIST ADVICE
[…] the best intentional design selects observations to ensure variation in the explanatory variable (and any control variables) without regard to the values of the dependent variable.
Ask before case selection
Which mode of comparison?
Cases should be as similar as possible except for the explanatory variable if we want to make a causal inference.
What is the research question? Case selection should be relevant to research questions.
Specify time and space.
Function of case study? Is it causal, descriptive or diagnostic?
Data availability.
State of literature, is it worth doing this?
What theory are we using?
If we only had two questions to ask:
How do the selected cases relate to the universe of cases?
How do the selected cases relate to the theory?
Types of case studies
Descriptive case studies.
Explanatory case studies.
Exploratory case studies.
Intrinsic case studies.
Instrumental case studies.
Collective case studies.
Gerring’s Most-similar design
Method of Difference
Compared cases differ on the outcome (Y) but share background factors (Z); (X) differs <- should be the cause of the different (Y)
Causal Inference and the method of difference
Gerring’s Most-different design
Method of Agreement
Compared cases share the same outcome (Y) but differ on background factors (Z), yet have a similar (X) <- should be the cause of the same (Y)
Causal Inference and the method of agreement
Type-level evidence
abstract generalizations
more general
Ex: a lake
Token-level evidence
concrete, occurring at one particular point/interval in time, at one particular place in space
Ex: specifically Lake Constance
Small-N research produces token-level evidence
Transitivity
If (X) causes (Z) and (Z) causes (Y), then (X) also causes (Y)
Transitivity means that when x causes y, y does not cause x = false
Example transitivity: GDP (X) -> Education (Z) -> Democracy (Y). So we can say that high GDP (X) leads to democracy (Y).
Asymmetry
reverse causation
(X) causes (Y), (Y) does not cause (X)
Example asymmetry: smoking (X) cause lung cancer (Y), but lung cancer (Y) does not cause smoking (X)
Regularity theory
Compulsory voting always leads to political sophistication. If there’s X there will always be Y.
These criteria apply to the type level because we can only require recurring sequences of X and Y when we look at classes of events. To introduce our running example, the claim that “granting territorial autonomy to minorities (X) causes secessionism (Y)” is true for the regularity theorist if and only if each and every time a minority somewhere in the world is granted territorial autonomy, secessionism grows.
Interventionist theory
If X changes then Y will change as well.
Mechanism
Someone/something does something that connects X to Y.
Mechanisms should not give rise to additional why-questions:
disciplinary and field-specific stopping rules
X > Entity engaged in activity > Entity engaged in activity > Entity engaged in activity > Y
Ingredients
entities, their properties and activities
Entities
individual or collective actors (not structures)
underlying behavioral theory needed
Hedström and Ylikoski
mechanism in concrete case (token)
concrete actors and activities in concrete space and time
Hedström and Ylikoski
mechanism scheme in theory (type)
abstracting away from particular time and place
Modeling causal mechanisms: example
PR electoral system > ???? > Climate policy stringency
PR electoral system > political elites experience higher electoral safety > government does not fear to impose short-term costs on voters > government adopts stringent climate policy > climate policy stringency
What is process tracing?
A method to study causal mechanism empirically
A method that uses within-case evidence
A method that predominantly uses (causal) process observations (CPOs) (as opposed to data-set observation)
spatial bound
Where? Space
temporal bound
When? Time
substantive bound
What? Subject
tip: think of the opposite: non-radical expansion of the welfare state; the substantive bound includes welfare-state politics, which means it would not make sense to include e.g. foreign policy in your analysis
Types of process tracing
Case-centric pt
- outcome-explaining
- motivation tends to be a case with a puzzling outcome
Types of process tracing
Theory-centric pt
- theory-testing (deductive)
- theory-modifying (semi-deductive-inductive)
- theory-building (inductive)
Good process tracing: Waldner’s completeness standard
(1) Start from causal graph whose nodes are connected in such a way that they are jointly sufficient for the outcome;
(2) Use event-history map that establishes valid correspondence between events in case and nodes in causal graph;
(3) theoretical statements about causal mechanisms link nodes in causal graph to their descendants, the empirics allow us to infer that events were in actuality generated by relevant mechanisms;
(4) eliminate rival explanations (by direct hypothesis testing or by demonstrating that they cannot satisfy the first three criteria listed above)
“A causal account […] requires bringing together the underlying causal model, its invariant causal mechanisms, and understanding the concrete instantiation of the causal model in the specific circumstances” (Waldner 2015: 243)
Frequentist and Bayesian: Friends or foes?
Friends!
Frequentist logic can be applied without reference to Bayesian logic
Bayesian logic contains elements of the Frequentist logic
→ Bayesian logic goes beyond Frequentist logic
The importance of reflexivity
A researcher’s identity and her theoretical priors shape the type and quality of data that can be collected!
RESEARCHER BIAS
- Personal characteristics
* Your gender, race, age, sexual orientation, social status, etc.
* Your values, ethics, politics
* Your family background / upbringing / current social sphere
- Institution
* Your discipline, department and its identity, research paradigm
- Research project
* Your theoretical priors
* Project limitations (budget, timeline etc.)
We try to detach, but we all view the world from somewhere! So do our informants!
Different methods of qualitative data collection and generation
- Fieldwork
- Participant observation
- Focus groups
- Qualitative interviews
What is, and when to use fieldwork.
New data and refined measurements
- Interviews; Participant observation; Archival work (incl. hard-to-access quan data); Field experiments; etc.
Innovative ideas and perspectives – the unexpected and the underexplored
- Data generation and analysis – continuous re-assessment of theories and concepts
Six principles (Kapiszewski et al., p. 8-9) of fieldwork.
1. engagement with context, 2. flexible discipline, 3. triangulation, 4. critical reflection, 5. ethical commitment, 6. transparency
Challenges of fieldwork
- Emotionally/psychologically/ethically often more demanding than staying at your desk
- Loss of objectivity – insider bias
- Observer effects
What is, and when to use participant observation
“Observation carried out when the researcher is playing an established participant role in the scene studied” (Garcés Mascarenas, 2013).
Real behavior by real people in real context
Deep understanding of motivations and reasoning through identification
Differences between rules/official discourse and practice
Closed organisations, marginalised groups, grey zones
Challenges of participant observation
- resource-intensive (long periods of time, full immersion)
- Subjective by design
- Excessive identification – ceasing to be an observer
- Covert often unethical, but overt PO changes the natural setting
What is, and when to use focus groups
Group discussions organized by the researcher to explore a specific set of issues, focused through a collective activity/stimulus in which research data is generated by group interaction (Kitzinger 1994).
Puzzles - find new explanations that lead to theory modification/building
Interest in group interaction, social relations
Shared meaning
Preparation of standardized survey: conceptualization and validity of measurement
Challenges with focus groups
- resource-intensive
- Active intervention of researcher/moderator requires skill and training – moderator bias
- Individual/socially stigmatised views unlikely to be voiced
- Dominant and passive participants
What is, and when to use qualitative interviews.
Puzzles - find new explanations that lead to theory modification/building
Meaning of concepts for people involved / their understanding of interest
Deep/hidden information, in particular on causal processes
Sensitive topics where trust is needed to get information
Prelim. analysis / entry point to get access to a field / hard-to-access data
Challenges with qualitative Interviews
- Social interaction – researcher part of the data generating process
- Maintaining the right balance of control and openness
→ “think jiu-jitsu rather than the blunt force of sumo wrestling” (Delaney 2007)
Sampling options for interviews (Lynch 2013):
- Random (stratified or non-str.) + and - ?
- Non-random + and -?
- Purposive
- Convenience
- Snowball
- Interstitial contacts
Random (stratified or non-str.) sampling
A simple random sample is used to represent the entire data population and randomly selects individuals from the population without any other consideration.
A stratified random sample, on the other hand, first divides the population into smaller groups, or strata, based on shared characteristics. Therefore, a stratified sampling strategy will ensure that members from each subgroup are included in the data analysis.
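The difference between the two designs can be sketched with an invented population of 80 students and 20 retirees; the helper function and the proportional allocation rule are illustrative assumptions, not a standard library API:

```python
# Sketch: simple vs stratified random sampling (hypothetical interviewee pool).
import random

random.seed(1)
population = [("student", i) for i in range(80)] + [("retiree", i) for i in range(20)]

# Simple random sample: every unit has the same chance; a small stratum
# (here: retirees) can end up over- or under-represented by chance.
simple = random.sample(population, 10)

def stratified_sample(pop, key, n):
    """Sample proportionally within each stratum so every group appears."""
    strata = {}
    for unit in pop:
        strata.setdefault(key(unit), []).append(unit)
    sample = []
    for members in strata.values():
        share = round(n * len(members) / len(pop))  # proportional allocation
        sample.extend(random.sample(members, share))
    return sample

stratified = stratified_sample(population, key=lambda u: u[0], n=10)
print(sum(1 for group, _ in stratified if group == "retiree"))  # 2: guaranteed by design
```

With 20% retirees and n=10, the stratified draw always contains exactly 2 retirees, whereas the simple draw only does so on average.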
Non-random sampling
Non-probability sampling is a method of selecting units from a population using a subjective (i.e. non-random) method. Since non-probability sampling does not require a complete survey frame, it is a fast, easy and inexpensive way of obtaining data.
Purposive sampling
Purposive sampling, also known as judgmental, selective, or subjective sampling, is a form of non-probability sampling in which researchers rely on their own judgment when choosing members of the population to participate in their surveys.
Convenience sampling
Convenience sampling is a non-probability sampling method where units are selected for inclusion in the sample because they are the easiest for the researcher to access. This can be due to geographical proximity, availability at a given time, or willingness to participate in the research.
Snowball sampling
Snowball sampling is a recruitment technique in which research participants are asked to assist researchers in identifying other potential subjects.
Interstitial contacts sampling
Someone approaches the researcher; the researcher does not choose the subject at all
When to use interviews?
- In preliminary research
− Before the actual research project
− Not the main data source
− 34: “identify fruitful (and fruitless) avenues of research”
− e.g. identify/refine concepts and measurement
- In the main study
− 36: “test central descriptive and causal hypotheses in political science research”
− Overt vs latent content
- In multi-method research
− In combination with other methods
− For the purpose of triangulation
Archival research: Is document authentic?
Beach, Pedersen (2019, 137)
1. Has the document been produced at the time and place when an event occurred, or has it been produced later and/or away from where the events took place?
2. Is the document what it claims to be?
3. Where and under what circumstances was it produced?
4. Why was the document created?
5. What would a document like this be expected to tell us?
→ If the document is authentic, it might have some inferential value for our research project
Qualitative content analysis
“is a method for systematically describing the meaning of qualitative material. It is done by classifying material as instances of the categories of a coding frame.” (Schreier, p. 1)
Rich data that requires interpretation (Schreier, p. 3)
Not just text - visual material also possible
Coding
Goal
- Coding = interpreting meaning, yet following clear guidelines
- Goal: Intersubjective agreement on meaning
Individual steps of coding QCA
- Define the sampling unit
- Define the context unit
- Define the coding unit
- Define the categories and the rules for their application (= the “coding frame”, incl. revision to improve validity)
- Apply the categories to the corpus (incl. tests for intra- or inter-coder reliability)
- Draw inferences on the basis of the coding results
CAQDAS software can support you in steps 5 and 6: frequencies of categories, network of categories, co-occurrence of categories
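The step-5/6 bookkeeping (category frequencies and co-occurrences) can be sketched in a few lines. The keyword-based coding frame below is a deliberately crude stand-in for real interpretive coding, and the categories, keywords, and units are all invented:

```python
# Minimal sketch: apply a (hypothetical, keyword-based) coding frame to
# recording units, then count category frequencies and co-occurrences.
# Real QCA coding is interpretive; keyword matching only mimics the bookkeeping.
from itertools import combinations
from collections import Counter

coding_frame = {                      # category -> indicator keywords (assumed)
    "economy": {"jobs", "taxes"},
    "climate": {"emissions", "climate"},
}
units = [
    "new taxes to cut emissions",
    "climate plan creates jobs",
    "lower taxes for families",
]

def code(unit):
    """Assign every category whose keywords occur in the unit."""
    words = set(unit.split())
    return sorted(c for c, kws in coding_frame.items() if words & kws)

coded = [code(u) for u in units]
frequencies = Counter(c for cats in coded for c in cats)
cooccurrence = Counter(pair for cats in coded for pair in combinations(cats, 2))
print(frequencies)   # economy appears in 3 units, climate in 2
print(cooccurrence)  # ("climate", "economy") co-occur in 2 units
```

This is essentially what CAQDAS packages report once human coders have applied the coding frame: which categories occur how often, and which ones appear together in the same unit.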
Transparency of research
Sharing data and full transparency a specific challenge in qualitative research …
* alongside systematically collected data, also variety of valuable less systematic data
* privacy concerns and responsibility towards informants – confidentiality and trust
* getting consent for sharing might change the interaction and data collected
* data often generated through social interaction allowing for openness: influence of interpretative frames of the original researcher – different epistemological perspectives
* mostly primary language data
* rich textual data → even if not generated, but collected: copyright concerns?
Benefits of sharing? (Diana Kapiszewski)
* Allows data to accumulate and be used for secondary analysis
* Facilitates collaboration
* Encourages rigor, and allows for evaluation of research products
* Optimizes use of publicly funded data/research
* Benefits scholars without resources to conduct research
* For research findings with deep social value, it is unethical not to share your data
Transparency
- Data access
* Where is the data you used?
* Who can access it and how?
* If it is your own and you restrict access, re-think. If you stick to it, justify why
- Production transparency
* How were the data collected / generated?
* Where not your own, give insight into how the authors of the data proceeded
- Analytic transparency
* How was the data analysed? What were the rules you applied in the process?
* How did you connect evidence and claims?
* Link each inference directly to the evidence backing it up
* Consider using "annotation for transparent inquiry" (ATI) and the Qualitative Data Repository (QDR)
Reusability & replicability
Documentation and data management tips:
* think of someone else and your future self!
* start early and design a data management plan (https://managing-qualitative-data.org/modules/1/b/)
* review regularly
* de-identify to ensure privacy
* ease access to publicly available, but hard to collect primary data (e.g. archives)
* save the content of data collected online; links won't work in the future
Written appendix, minimum:
* complete list of documents/transcripts used for the research, where they can be accessed, and how they were produced (for interviews: publish the guideline plus any adaptations)
* coding frame (detailed rules guiding how material was content-analysed)
Always bear confidentiality in mind, de-identify where needed, and continue to reconfirm consent!
Ethical issues in qualitative research
- Research interest: social background/motives of men engaging in "Tearoom Trades" (1960s)
- Humphreys acts as "lookout"
- He builds up contacts with the men → interviews some of them
- In a second step, he writes down the license plates of the men to interview them a year later as a "health service interviewer"
- → gets information on marital status, occupation, income etc.
To this day, Humphreys' study is taken as an example of unethical research in social science:
→ The men don't know their "lookout" is a researcher; no consent
→ Information from license plates obtained as a "market researcher"
→ Interviews with subjects as a "health care interviewer"
Repository
Diana Kapiszewski's rule: If you cannot share the data itself, share as much information about the data as you can; explain what you cannot share and justify why!
Congruence analysis
A congruence analysis approach (CON) is a small-N research design in which the researcher uses case studies to provide empirical evidence for the explanatory relevance or relative strength of one theoretical approach in comparison to other theoretical approaches.
Covariational analysis
A co-variational analysis can provide first evidence for the claim that X made a difference in the case of interest by showing that a different value of X in other cases co-exists with a different value of the dependent variable Y.
Gerring's case selection table
Type of cases
Descriptive or Causal
Descriptive cases
Typical = mean or median cases
Diverse = typical sub-types
Causal cases
Exploratory = to identify H
Estimating = to estimate H
Diagnostic = to assess H
Exploratory cases
Extreme = maximize variation in X and Y
Most different
Most similar
Index = first instance of change in Y
Deviant = poorly explained by Z (does not fit the theory or other similar cases)
Diverse = all possible configurations of Z
Estimating cases
Longitudinal = X changes, Z constant or biased against HX
Most-similar = Similar on Z, different on X
Diagnostic cases
Influential = Greatest impact on P(HX)
Pathway = X → Y strong, Z constant or biased against HX
Most-similar = Similar on Z, different on X
OMNIBUS
Intrinsic importance = Theoretical or practical significance
Case independence = X and Y unaffected by values for other cases
Within-case evidence = Suitability of case-based evidence
Logistics = Accessibility of evidence for a case
Representativeness = Generalizability
Single case
Type-level: only one case, but applicable in different times and spaces
Singular case
Token-level: only one case, applicable only in specific times and spaces
Counterfactual dependence and possible worlds
The basic and intuitive counterfactual definition of a cause is: a singular event x causes a singular event y if and only if had x not occurred, y would not have occurred. For our running example, we now have to replace the type-level claim with a token-level claim, e.g. “granting a new autonomy statute to the Catalans in 2006 (x) caused increased support for secessionist Catalan elites in the elections to the Catalan parliament in 2010 (y)”. This claim is true if we can establish that had the Catalans not received a new autonomy statute in 2006, support for secessionist elites would not have risen in 2010.
Counterfactual dependence and interventions
Woodward states that the type-level claim "X causes Y" is true if there is an intervention on the variable X that will change the values of the variable Y (Woodward 2003: 55): "A necessary and sufficient condition for X to be a direct cause of Y with respect to a variable set V is that there be a possible intervention on X that will change Y (or the probability distribution of Y) when all other variables in V besides X and Y are held fixed at some value by interventions."
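Woodward's criterion can be made concrete with a toy structural model (the variable set and equations below are invented for illustration): hold every other variable in V fixed by intervention, then intervene on X and check whether Y changes.

```python
# Toy structural model with an invented variable set V = {X, Y, Z}.
# Normally Z causes X (X := Z) and both feed into Y (Y := X + Z).
def model(z_fixed, x_intervention=None):
    z = z_fixed  # Z held fixed at some value by intervention
    # An intervention on X overrides its usual mechanism (X := Z)
    x = z if x_intervention is None else x_intervention
    y = x + z
    return y

# Intervene on X while Z stays fixed, and see whether Y responds.
y_low = model(z_fixed=1, x_intervention=0)   # Y = 0 + 1
y_high = model(z_fixed=1, x_intervention=1)  # Y = 1 + 1

# Y changes under an intervention on X with Z held fixed, so X qualifies
# as a direct cause of Y in this toy model under Woodward's criterion.
assert y_low != y_high
```

Note that without the intervention clause the comparison would be confounded: changing Z alone also changes Y, which is exactly why the definition requires the other variables in V to be held fixed.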
Process theories
Process theories dispense with the idea that causation requires difference-making and see causation as production instead. Under the process account, causation requires a spatiotemporally continuous process transmitting some physical structure from a token cause to its token effect. Process theories are located at the token level because this is where the processes unfold. Whether a type of process is causal then depends on generalization over the singular causal processes.