IPRESS TERMS EXAM 1 Flashcards
induction
A process of reasoning that moves from specific observations to broader generalizations and theories.
deductive reasoning
A process of reasoning that begins with broad generalisations or theoretical propositions and then moves to specific observations.
Retroduction
The interaction of induction and deduction in an evolving, dynamic process of discovery and hypothesis formation.
falsifiability
The claim that
(1) a scientific theory must be formulated in a way that
enables it to be disconfirmed or proven false; if it can’t be falsified it is not a theory but an ideology;
(2) when we ‘test’ a theory we should seek, not to verify, but to falsify it.
Deductive-nomological model
According to this model, something is explained when it is shown to be a member of a more general class of things, when it is deduced from a general law (nomos, in Greek) or set of laws.
Hypothetico-deductive model
According to the model, we confirm that a generalization is a law by treating it as a hypothesis, and then we test the hypothesis by deducing from it predictions of further phenomena that should be observable as a consequence of the hypothesis.
Causal mechanism
Something that links a cause to its effect, that generates some type of ‘necessary connection’ between two events (the cause and the effect).
rational choice theory
explains outcomes as the result of rational choices made by individuals within a given set of material and structural circumstances. It shows that, given a particular set of circumstances, the strategic interactions of agents will produce predictable, law-like outcomes. It assumes that all actors make rational decisions while being aware of all possibilities, and act on the best option, usually in their self-interest.
positivism
Seeks to explain and predict. Looks at observable phenomena, prefers quantitative
analysis, to produce objective and law-like generalisations of empirical regularities.
Positivism:
* Cannot observe that one thing causes another;
* Causation is understood as empirical regularity: B consistently following A.
scientific realism
Seeks to explain and predict. Uses quantitative and qualitative analysis for studying both
observable and unobservable (theorized) elements
Realism:
* Can infer causality.
* Study causal mechanisms which may include both observable and unobservable variables
Interpretivism
Focuses on understanding social phenomena, via the meanings that these have for actors, prefers qualitative analysis, and offers the results “as one interpretation of the relationship between the …phenomena studied” (Marsh & Furlong 2002: 21). What can be observed is ‘in the eye of the beholder’; researcher cannot be ‘erased’ from the ‘findings’ (= joint constructions)
Interpretivism:
* Focus on meaning/understanding rather than causality. Reasons/reasoning rather than
causes
social fact
“a category of facts which present very special characteristics: they consist of manners of acting, thinking, and feeling external to the individual, which are invested with a coercive power by virtue of which they exercise control over him”
Examples: norms, values, culture
What is epistemology concerned with?
Understanding and explaining how we know what we know
What does positivism maintain about scientific knowledge of the social world?
It is limited to what can be observed and explained through empirical regularities
Define behaviouralism in political research.
Application of positivist tenets, focusing on observable behaviour of political actors
key tenet of behaviouralism
Only observable behaviour may be studied
What does naturalism claim regarding natural and social sciences?
No fundamental difference exists between them
What distinguishes facts from values in scientific methods?
Facts are observer-independent and confirmed through sensory observation
What does scientific realism assert about unobservable elements?
They can be considered real if they produce observable effects
causal mechanism
The pathway or process by which an effect is produced
What does interpretivism not do?
It does not seek to explain human behaviour through law-like generalizations.
List three methodological conventions shared by interpretivists.
Clear differentiation of premises from conclusions
Recognition that sampling strategies matter
Acceptance of deductive logic
ontology
nature of the social world
commonality of quantitative and qualitative methods
scientific insight into social phenomena
research vase
top: broad question
middle: concrete research question
bottom: how the conclusions contribute to research
types of RQ: descriptive
characteristics of how something works or behaves
types of RQ: explanatory
the cause of something that has occurred or is happening
types of RQ: normative
what is best, right, just, or preferable, and what ought to be done to bring about or prevent some outcome
what makes an RQ bad
begging the question: a leading or biased question
false dichotomy - forces a choice between two options that are not mutually exclusive or exhaustive
fictional question - hypothetical scenarios that can't be answered with empirics
metaphysical - trying to resolve non-empirical questions with empirical evidence
tautological question - repeats the same idea in different words, making the RQ meaningless
types of causes
micro - individual
meso - social network
macro - biggest scale: country, society, culture
scientific relevance
- Every RQ in political science needs to add scientific insight
- An RQ needs to build on previous studies
- An RQ needs to address a question that has not been answered definitively
Possible routes to scientific significance
1- theoretical contestation - pitting theories against each other or testing a particular theory
2- theoretical elaboration - building on a theory, providing more insight into the causal mechanism: why does A lead to B?
3- theoretical nuancing and contextualization - when does A lead to B; in which circumstances / among which groups does a theory hold (or not)?
4- theoretical exploration and innovation - analysing a new relevant phenomenon, or analysing it in an innovative way
Types of research
Exploratory -
begins with a question and perhaps a basic proposition, probes its plausibility against various types of data, and eventually generates a hypothesis as a conclusion rather than as a preliminary to conducting the research itself.
prescriptive -
ask what we should do to bring about, or prevent, some outcome; what course of action we should follow to achieve a particular objective or goal.
a proposition
a hunch or guess that two or more variables are related.
hypothesis
When put forward for investigation—stated in a way that enables us to determine whether it is right or wrong—a proposition becomes a hypothesis. Though a hypothesis is stated in a way that enables us to evaluate, analyse, or investigate a proposition, it is still a provisional and untested idea. It is only when it has withstood repeated tests and has been found to have considerable explanatory power that it becomes a theory.
type of theory: Grand theory
‘all-inclusive systematic efforts to develop a unified theory that will explain all the observed uniformities of social behaviour, social organization, and social change’
type of theory: theories of middle range
theories that attempted to understand and explain a limited aspect of social life, a more restricted domain or set of social phenomena. These explanations could then be verified through empirical research and then perhaps systematized into theoretical systems of broader scope and content.
type of theory: inductive vs deductive theory
in deductive theory, a hypothesis is deduced from current theory, which is then subjected to empirical scrutiny. Induction, you will recall, is a process of reasoning from particular facts to a general conclusion. So while deductive theory guides research, inductive theory is the outcome of research.
type of theory: Empirical vs normative
Empirical theory is concerned with questions that can be answered with empirical data (data gathered through observations of the world around us). Normative theory is concerned with questions about what is right and wrong, desirable or undesirable,
just or unjust in society.
type of theory: grounded theory
is an inductive research strategy. The researcher starts by collecting data, and allows concepts and categories to emerge from them. Hypotheses are then developed through the interaction of theoretical elements and data. Consequently, theory is produced through, and grounded in, data. What is most emphasized in grounded theory is that it is explicitly an emergent process: the aim is to discover the theory implicit in the data, to allow theory to emerge from the data as opposed to forcing it into preconceived frameworks.
intervening variable (Z)
The relation between the independent and dependent variables is often affected by an intervening variable.
An intervening variable that affects the relationship between the independent and dependent variables by producing an interaction effect acts as a ‘moderator’ variable.
One that transmits the effects of the independent variable to the dependent variable is called a ‘mediating’ variable. Without these mediating variables to act as a conduit, the independent variable would not affect the dependent variable.
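A minimal sketch of the 'mediator as conduit' idea, using invented simulated data (the coefficients and numbers are assumptions, not course material): X influences Y only via Z, so once Z's contribution to Y is removed, X no longer predicts Y.

```python
# Invented simulation of a mediating (intervening) variable: X affects Y only
# through Z (X -> Z -> Y). The raw X-Y correlation is non-zero, but once Z's
# contribution to Y is removed, X no longer predicts Y: Z is the conduit.
import numpy as np

rng = np.random.default_rng(3)
n = 5_000

x = rng.normal(size=n)
z = 0.8 * x + rng.normal(size=n)    # mediator, caused by X
y = 0.8 * z + rng.normal(size=n)    # outcome, caused only by Z

print("corr(X, Y):", round(np.corrcoef(x, y)[0, 1], 2))   # non-zero: effect transmitted via Z
y_resid = y - np.polyval(np.polyfit(z, y, 1), z)            # remove Z's contribution to Y
print("corr(X, Y | Z):", round(np.corrcoef(x, y_resid)[0, 1], 2))  # about 0
```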
conditions of causality
A causal mechanism or process links the two variables.
The variables are correlated or co-vary (the score on one variable rises or decreases with the score on the other variable).
The hypothesized cause is temporally prior to the effect.
The correlation between the independent and dependent variable is not spurious, meaning we have ruled out the possibility that the correlation appears to exist only because there is a variable causally prior to both that affects both (a confounding variable); see the sketch below.
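By contrast, a minimal sketch of the spuriousness condition, again with invented simulated data: here Z is a confounder that causes both X and Y, so X and Y correlate even though neither causes the other, and the association vanishes once Z is held constant.

```python
# Hypothetical illustration: Z is a confounder that causes both X and Y.
# X and Y correlate even though neither causes the other; "controlling" for Z
# (correlating the residuals left after removing Z's influence) shows the
# X-Y correlation was spurious.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

z = rng.normal(size=n)              # confounder, causally prior to X and Y
x = 0.8 * z + rng.normal(size=n)    # X is driven by Z, not by Y
y = 0.8 * z + rng.normal(size=n)    # Y is driven by Z, not by X

print("corr(X, Y):", round(np.corrcoef(x, y)[0, 1], 2))    # clearly non-zero

x_resid = x - np.polyval(np.polyfit(z, x, 1), z)            # X with Z's effect removed
y_resid = y - np.polyval(np.polyfit(z, y, 1), z)            # Y with Z's effect removed
print("corr(X, Y | Z):", round(np.corrcoef(x_resid, y_resid)[0, 1], 2))  # about 0
```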
Quality criteria: reliability
- consistency, free from (random) error
- the study can be replicated
- measurements are repeated and show the same result
quality criteria: internal validity
how confident we are that the hypothesized cause (and not something else) is responsible for the variation in the outcome
Quality criteria: external validity
the extent to which results can be generalised beyond the study, to other cases or settings
why do we need research design?
It specifies the type of research and techniques of data collection appropriate to the objectives of the project.
It makes explicit the logic which enables you to draw inferences—logical conclusions based on the information you collect or observations you make.
It identifies the type of evidence that not only confirms your hypothesis or argument, but provides a convincing ‘test’ of it.
It decreases threats to the internal and external validity of your findings.
It ensures that your findings are reliable.
Research design types: cross-sectional
analyze a sample at a single point in time, focusing on variation between cases rather than within them.
LOW INTERNAL VALIDITY (it is hard to rule out all confounding variables) -> but easy to conduct and to repeat.
Repeated cross-sectional studies introduce a longitudinal element, but use a different sample each time.
types of research design: longitudinal
Longitudinal designs track the same sample or set of cases over multiple intervals to examine changes over time. Longitudinal designs include cohort studies (tracking a specific group over time) and panel studies (following the same individuals, though with possible attrition). -> better internal validity, so good for understanding causality, but harder to do. Also lower external validity due to attrition bias (people dropping out of the survey over time).
These designs help analyze long-term changes, such as shifts in political attitudes or voter behavior, which cross-sectional studies cannot fully capture.
types of research design: comparative
Comparative research designs involve comparisons across cases, including countries, regions, or time periods. Even single-country studies can be comparative if they analyze internal variations.
Three main types:
* Large-N studies: Compare many cases to identify statistical relationships.
* Small-N studies: Analyze a few cases to trace causal mechanisms.
* Single-N studies: Focus on one case but compare different periods or regions.
Trade-offs:
* Small-N and case studies provide in-depth causal analysis through process tracing.
* Large-N studies offer broad cross-case knowledge but may miss deeper causal mechanisms.
* Case studies have strong internal validity (context, history) but weak external validity (generalizability).
* Expanding case selection improves generalizability but risks conceptual stretching.
Case selection should align with research goals:
* Critical cases: Test theories.
* Revelatory cases: Uncover hidden relationships.
* Unusual cases: Highlight extreme or unexpected phenomena.
triangulation
refers to the use of multiple methods or data sources in qualitative research to develop a comprehensive understanding of phenomena (Patton, 1999). Triangulation also has been viewed as a qualitative research strategy to test validity through the convergence of information from different sources
intercoder reliability
reveals the extent to which different coders (in this case business people), each coding the same content (a country’s level of corruption), come to the same coding decisions (whether the country is corrupt or not)
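A minimal sketch with invented codings: the simplest intercoder-reliability figure is percent agreement, the share of items on which the coders made the same decision (chance-corrected measures such as Cohen's kappa build on the same comparison).

```python
# Hypothetical codings: two coders each classified the same ten countries as
# corrupt (1) or not corrupt (0). Percent agreement is the share of identical decisions.
coder_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
coder_b = [1, 0, 0, 1, 0, 0, 1, 1, 1, 1]

agreements = sum(a == b for a, b in zip(coder_a, coder_b))
print(f"Intercoder agreement: {agreements / len(coder_a):.0%}")  # 80% for these made-up codings
```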
Measurements
Reliable: free from random errors
Valid: free from systematic errors
e.g., asking a respondent's age yields a reliable measurement, but it is not a valid measure of the political concept being studied
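A minimal sketch of that distinction with made-up numbers: a biased but precise measure is reliable without being valid, while an unbiased but noisy measure is valid on average without being reliable.

```python
# Invented example: the "true" score is 50. A biased but precise instrument is
# reliable (low random error) yet not valid (systematic error); an unbiased but
# noisy instrument is valid on average yet not reliable.
import numpy as np

rng = np.random.default_rng(1)
true_value = 50.0

reliable_not_valid = true_value + 10 + rng.normal(0, 0.5, size=1_000)  # systematic error of +10
valid_not_reliable = true_value + rng.normal(0, 10, size=1_000)        # large random error

for name, m in [("reliable, not valid", reliable_not_valid),
                ("valid, not reliable", valid_not_reliable)]:
    print(f"{name}: mean = {m.mean():.1f} (bias {m.mean() - true_value:+.1f}), sd = {m.std():.1f}")
```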
measurement validity
conceptual definition -> operational definition -> variables (indicators) -> measurements (scores, values)
(see lecture 4)
case study
intensive study of a single case or a small number of cases which draws on observational data and promises to shed light on a larger population of cases (Gerring 2017)
meta analysis
A meta-analysis is a systematic attempt to integrate the results of individual studies into a quantitative analysis, pooling individual cases drawn from each study into a single dataset (with various weightings and restrictions).
! Both statistical meta-analyses and narrative literature reviews assimilate a series of studies, treating them as case studies in some larger project – whether or not this was the intention of the original authors.
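As one simple illustration of the weighting idea, a minimal sketch with invented numbers: a fixed-effect meta-analysis that pools study-level effect estimates (rather than raw cases), weighting each study by the inverse of its sampling variance.

```python
# Invented numbers: a simple fixed-effect meta-analysis pools study-level effect
# estimates, weighting each study by the inverse of its sampling variance.
effects   = [0.30, 0.10, 0.45]   # hypothetical effect estimates from three studies
variances = [0.04, 0.01, 0.09]   # their (hypothetical) sampling variances

weights = [1 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
print(f"Pooled effect estimate: {pooled:.2f}")
```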
Case-study characteristics
- In depth investigation
- Any type of method and observational data is possible: documents, macro-economic data, surveys, interviews, observation
- Often uses a combination of methods
Population
When one chooses a case study, one also chooses the population that the case is a case of. The population one studies is tied to the theory that the research will be based on (the researcher needs to clarify from which population the case is drawn and to which population the results of the case study will be inferred).
Ex:
RQ: Why did the PVV become the most successful populist party in the Netherlands?
Case: PVV
Population: Populist parties (Europe?)
Main types (goals) of case studies
1- descriptive
2- causal exploratory
3- causal estimating
4- causal diagnostic
descriptive
not organised around an overarching hypothesis/theory, although they can make some causal claims
causal exploratory
aims to identify a new hypothesis: what is the effect of X, what is the cause of Y (working with both dependent and independent variables)
causal estimating
aims to test a hypothesis by estimating its causal effect: “Is it a positive or a negative relationship?” Usually one uses a large-N study here; one would do a case study only if there aren’t enough cases.
causal diagnostic
understand why a specific hypothesis does not hold up (or at least does not appear to hold up), or why it does hold up: confirm, disconfirm, or refine a particular hypothesis.
case selection strat: deviant
a case that doesn’t fit a theory and/or known relation between X and Y
The aim of studying a deviant case is to explain this case
Identify a hypothesis that can then be applied to other cases (exploratory)
The explanation could be
-To suggest a new causal factor
-An interaction between known factors
case selection strat. MSSD
(most similar systems design)
1) Similar in most characteristics (Z) that might influence Y, but different score on Y.
* Aim is to identify the factor X that explains the difference in outcomes (exploring)
2) Similar in most characteristics (Z) that might influence Y, different score on X
* Aim is to identify the effect Y, caused by X (estimating)
3) Similar in most characteristics (Z) that might influence Y, different score on X and Y
* Aim is to determine the mechanisms connecting X to Y (diagnosing)
case selec. strat. Influential
Two variants:
- most likely: the case meets all conditions (X and Z) for Y to occur
- least likely: the case doesn't meet any of the conditions thought to cause Y, except for one (X)
Different views on when/how to use this strategy:
- common practice: use a most-likely case as a weak/strong test of a theory by examining whether the prediction is correct (did Y occur?)
- Gerring: not meant as a test, but to understand why Y did not occur against theoretical expectations
Strengths and Weaknesses of (comparative) case study research
Case studies: internal validity
- critiques of single case studies:
- theory testing in a case study requires a deterministic logic
Internal validity of case studies
Case studies are not good for determining causal effects, but rather good for uncovering causal mechanisms for a known effect
In-depth investigation, usually studying a longer period, sometimes looking at subunits, makes case studies well placed to:
* Identify (observe) causal mechanisms
* Generate new hypotheses
* Identify measurement error, e.g. by triangulating multiple data sources
* Identify possible confounders
Critiques of (single) case studies: Theory testing in a case study requires a deterministic logic
Critiques of MSSD and MDSD:
Vulnerable to omitted variable bias: there may be other factors (other than X1) that vary / are constant between the cases and that impact the outcome Y
The outcome may be a result of an interaction between multiple causes
A given outcome can have multiple (independent) causes
! Adding more cases decreases these issues but does not solve them
Reliability of case studies
Comparatively low in case studies because of
“Informal nature”: many different decisions being made along the research process
Ongoing interaction between theory and evidence
Difficulties making (all) data accessible
Reliability can be improved by transparency on
The argument the researcher sets out to study
The case selection, including what the researcher knew about the case when selecting it
Process of evidence gathering
Where possible, allow replication by depositing data and analysis files
do a case study when:
There is little theory on the topic: too little to guide a large-C study
There is too little available data to do a large-C study
Too few relevant cases to do a large-C study
Large C studies
Comparing a large number of cases (individuals, countries, political parties)
Research activities:
Collection of numerical data on many cases
Analysing them through statistical analysis
Estimating the relationships between variables, and whether they are likely to be found in the population as well (significance)
Breadth (many cases) rather than in-depth
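A minimal sketch of what 'estimating the relationships between variables and their significance' can look like, using simulated data; the variable names and numbers are hypothetical, not from the course.

```python
# Simulated, made-up data for 200 "countries": estimate the relationship between
# two numerical variables across many cases and check its statistical significance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_cases = 200

gdp_per_capita = rng.normal(30, 10, size=n_cases)                     # hypothetical predictor
turnout = 50 + 0.4 * gdp_per_capita + rng.normal(0, 5, size=n_cases)  # hypothetical outcome

r, p_value = stats.pearsonr(gdp_per_capita, turnout)
print(f"r = {r:.2f}, p = {p_value:.4f}")  # a small p-value suggests the association
                                          # is unlikely to arise from sampling chance alone
```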
Interpretivism in case studies
Not so interested in finding empirical regularities and testing causal claims
Focusing on a case/site/setting in its own right (idiographic rather than nomothetic)
Focus on constructions/understandings rather than causes
Theorising
Covert participation
This observation happens without the informants’ knowledge, and they are thus not provided with an opportunity to express whether they would like to take part in the research or not.
! As part of the procedure for obtaining informed consent, the researcher must ensure that potential participants are ‘aware of their right to refuse to participate; understand the extent to which confidentiality
and anonymity will be maintained; and be reminded of their right to renegotiate consent during the research process’.
Issues relating to informed consent
providing incentives: offering incentives to prospective participants (as, for instance, some material inducement) to share information with you is unethical.
A second issue is obtaining consent from marginalized or vulnerable populations. There are conditions which might render people incompetent to make an informed decision as to whether to participate: working with people in crisis—for instance, refugees—raises complex ethical issues around consent (as well as the possibility of causing harm).
Other conditions which impact the ability to give informed consent are in cases where people cannot speak the language in which the research is being carried out, elderly and infirm individuals, and children.
ethics
! Sharing information about a respondent with others for purposes other than research
is unethical. You may need to identify your study population in order to put your findings
into context; but it is unethical to identify an individual participant and the information
he or she provided.
! Harm includes ‘research that might involve such things as discomfort, anxiety, harassment, invasion of privacy, or demeaning or dehumanising procedures’
Data in ethics
Data Access: data needs to be available
Production Transparency: researchers providing access to data they generated or collected themselves should offer a full account of the procedures used to collect or generate their data
Analytic Transparency: researchers making evidence-based knowledge claims should provide a full account of how they drew analytic conclusions from the data, i.e. clearly explain the links between the data and the conclusions
Personal data
any information that relates to an identified or identifiable living individual. Different pieces of information, which collected together can lead to the identification of a particular person, also constitute personal data.
categories of respondents.
The first type, ‘ordinary people’, we typically inform that we intend to anonymise the interview transcript and not use their real name if we should cite them.
With the second type, ‘expert informants’, we typically exchange information about whether and, if so, how they would like to be anonymised.
The third type is ‘spokespersons’, whom we typically ask for their permission to be cited by name.