IPRES1 Flashcards

1
Q

induction

A

A process of reasoning that moves from specific observations to broader generalizations and theories.

2
Q

Deduction

A

A process of reasoning that begins with broad generalisations or theoretical propositions and then moves to specific observations.

3
Q

Verifiability

A

The claim that the criterion for establishing truth claims, and thus the goal of social scientific inquiry, should be to verify statements or propositions.

3
Q

Retroduction

A

The interaction of induction and deduction in an evolving, dynamic process of discovery and hypothesis formation.

4
Q

Falsifiability

A

The claim that

(1) a scientific theory must be formulated in a way that
enables it to be disconfirmed or proven false; if it can’t be falsified it is not a theory but an ideology;

(2) when we ‘test’ a theory we should seek, not to verify, but to falsify it.

5
Q

Deductive-nomological
model

A

According to this model, something is explained when it is shown to be a member of a more general class of things, when it is deduced from a general law (nomos, in Greek) or set of laws.

6
Q

Hypothetico-deductive
model

A

According to this model, we confirm that a generalization is a law by treating it as a hypothesis, and then we test the hypothesis by deducing from it predictions of further phenomena that should be observable as a consequence of the hypothesis.

6
Q

Causal mechanism

A

Something that links a cause to its effect, that generates some type of ‘necessary connection’ between two events (the cause and the effect).

7
Q

Rational choice theory

A

Explains outcomes as the result of rational choices made by individuals within a given set of material and structural circumstances. It shows that, given a particular set of circumstances, the strategic interactions of agents will produce predictable, law-like outcomes. It assumes that all actors make rational decisions in full awareness of the available options, and act on the best one, usually out of self-interest.

7
Q

Positivist

A

Seeks to explain and predict. Looks at observable phenomena and prefers quantitative analysis, in order to produce objective, law-like generalisations of empirical regularities.

Positivism:
* Cannot observe that one thing causes another;
* Causation is understood as empirical regularity: B consistently following A

8
Q

Scientific realism

A

Seeks to explain and predict. Uses quantitative and qualitative analysis to study both observable and unobservable (theorized) elements.

Realism:
* Can infer causality.
* Studies causal mechanisms, which may include both observable and unobservable variables

9
Q

Interpretivist

A

Focuses on understanding social phenomena via the meanings that these have for actors, prefers qualitative analysis, and offers the results “as one interpretation of the relationship between the …phenomena studied” (Marsh & Furlong 2002: 21). What can be observed is ‘in the eye of the beholder’; the researcher cannot be ‘erased’ from the ‘findings’ (= joint constructions)

Interpretivism:
* Focus on meaning/understanding rather than causality. Reasons/reasoning rather than
causes

10
Q

social fact

A

“a category of facts which present very special characteristics: they consist of manners of acting,
thinking, and feeling external to the individual, which are invested with a coercive power by
virtue of which they exercise control over him”

Examples: norms, values, culture

11
Q

What is epistemology concerned with?

A

Understanding and explaining how we know what we know

12
Q

What does positivism maintain about scientific knowledge of the social world?

A

It is limited to what can be observed and explained through empirical regularities

13
Q

Define behaviouralism in political research.

A

Application of positivist tenets, focusing on observable behaviour of political actors

14
Q

What is the key tenet of behaviouralism?

A

Only observable behaviour may be studied

15
Q

What does rational choice theory assume?

A

Behaviour is motivated by rational self-interest

16
Q

What does naturalism claim regarding natural and social sciences?

A

No fundamental difference exists between them

17
Q

How does realism derive knowledge?

A

From sensory experience; no a priori knowledge exists

18
Q

What is the goal of social science according to the text?

A

To explain and predict phenomena using laws

19
Q

What distinguishes facts from values in scientific methods?

A

Facts are observer-independent and confirmed through sensory observation

20
Q

Fill in the blank: The criterion for establishing truth claims in scientific inquiry is _______.

A

[verification]

21
Q

What does falsifiability argue about theories?

A

Theories cannot be 100% proved; they should be disprovable

22
What does scientific realism assert about unobservable elements?
They can be considered real if they produce observable effects
23
What is a causal mechanism?
The pathway or process by which an effect is produced
24
What do critical realists argue about unobservable structures?
They impact social life and can be explored for potential change
25
What does interpretivism maintain about knowledge of the social world?
It can be gained through interpreting meanings behind actions
26
True or False: Interpretivism seeks to explain human behaviour through law-like generalizations.
False
27
What is hermeneutics in the context of social sciences?
Interpreting human actions as a text to understand underlying meanings
28
What are intersubjective meanings?
Meanings rooted in and constitutive of social relations and practices
29
List three methodological conventions shared by interpretivists.
* Clear differentiation of premises from conclusions
* Recognition that sampling strategies matter
* Acceptance of deductive logic
30
What is ontology?
The nature of the social world
31
What is a criticism of the approach in Designing Social Inquiry?
It develops a quantitative template for qualitative research
32
What do both quantitative and qualitative research seek?
Scientific insights into social phenomena
33
research vase
Top: broad question or topic
Middle: concrete research question
Bottom: how conclusions contribute to research
34
# type of RQs Descriptive
The characteristics of how something works or behaves
34
# type of RQs Explanatory
The cause of something that has occurred or is happening
35
# type of RQs Normative
What is best, just, right, preferable and what ought to be (or not) done to bring about (or prevent it)
36
# type of Fallacy Begging the Question
Assumes the truth of a premise within the question, leading to a biased conclusion (e.g., "Why was American slavery the most awful ever?" assumes it was the worst).
37
# type of Fallacy False Dichotomy
Forces a choice between two options that are not mutually exclusive or exhaustive (e.g., "Napoleon: Enlightened Statesman or Proto-Fascist?" ignores other possibilities).
38
# type of Fallacy Fictional Questions
Asks about hypothetical scenarios that cannot be answered with empirical evidence (e.g., "Would Roosevelt have dropped the atomic bomb?").
39
# type of Fallacy Metaphysical Questions
Attempts to resolve non-empirical issues with empirical research (e.g., "Was World War I inevitable?" assumes inevitability can be empirically tested).
40
# type of Fallacy Tautological Questions
Repeats the same idea in different words, making the question meaningless (e.g., "Was Bush unsuccessful because he moved against history?" equates moving against history with being unsuccessful).
41
Types of causes:
Micro factors: individual
Meso factors: social, network
Macro factors: on the biggest scale; country, society, culture
41
Types of Academic Relevance
Theoretical contestation: pitting theories against each other, or testing a particular theory
Theoretical elaboration: building on a theory, providing more insight into the causal mechanism: why does A lead to B?
Theoretical nuancing and contextualisation: when does A lead to B; in which circumstances / among which groups does a theory hold (or not)?
Theoretical exploration and innovation: analysing a new relevant phenomenon, or analysing it in an innovative way
42
# Type of research Confirmatory research
begins with a hypothesis and uses observations to test it. We begin with a statement, on the basis of a theory, of what we would expect to find and then see whether what we expect is fulfilled.
43
# Type of research Exploratory research
begins with a question and perhaps a basic proposition, probes its plausibility against various types of data, and eventually generates a hypothesis as a conclusion rather than as a preliminary to conducting the research itself.
44
Prescriptive questions
ask what we should do to bring about, or prevent, some outcome; what course of action we should follow to achieve a particular objective or goal.
45
a proposition
a hunch or guess that two or more variables are related. When put forward for investigation—stated in a way that enables us to determine whether it is right or wrong—a proposition becomes a **hypothesis**. Though a hypothesis is stated in a way that enables us to evaluate, analyse, or investigate a proposition, it is still a provisional and untested idea. It is only when it has withstood repeated tests and has been found to have considerable explanatory power that it becomes a **theory**.
46
# Type of Theory Grand theory
‘all-inclusive systematic efforts to develop a unified theory that will explain all the observed uniformities of social behaviour, social organization, and social change’
46
# Type of Theory ‘theories of the middle range’
theories that attempted to understand and explain a limited aspect of social life, a more restricted domain or set of social phenomena. These explanations could then be verified through empirical research and then perhaps systematized into theoretical systems of broader scope and content.
47
# Type of Theory inductive versus deductive theory
in deductive theory, a hypothesis is deduced from current theory, which is then subjected to empirical scrutiny. Induction, you will recall, is a process of reasoning from particular facts to a general conclusion. So while deductive theory guides research, inductive theory is the outcome of research.
48
# Type of Theory Empirical vs Normative theory
Empirical theory is concerned with questions that can be answered with empirical data (data gathered through observations of the world around us). Normative theory is concerned with questions about what is right and wrong, desirable or undesirable, just or unjust in society.
48
# Type of Theory Grounded theory
is an inductive research strategy. The researcher starts by collecting data and allows concepts and categories to emerge from them. Hypotheses are then developed through the interaction of theoretical elements and data. Consequently, theory is produced through, and grounded in, data. What is most emphasized in grounded theory is that it is explicitly an emergent process: the aim is to discover the theory implicit in the data, to allow theory to emerge from the data as opposed to forcing it into preconceived frameworks.
49
!
A statement positing that two or more variables are related is a **proposition**: a provisional idea that merits evaluation. In order for us to evaluate its worth, its constituent terms need to be defined very specifically. Once you have done this, you have a **hypothesis**: a tentative answer or argument you wish to develop (investigate, demonstrate) in response to your research question. A **theory** identifies a small number of variables that must be taken into consideration in addressing the question, and how they are related both to each other and to the outcome that is being addressed.
50
‘theoretical framework’
the logic, story, or set of assumptions that connects your key factors or elements to each other.
51
‘hypothesis-generating’ research
produces findings that can be used in the development of theory.
51
Variable
A concept or factor that can vary, change, or assume different values or characteristics. Variation in this factor contributes to the variation of the outcome that we seek to understand. A constant is a factor that cannot vary and therefore cannot contribute change to the outcome. A variable has to be created with two or more characteristics, which have to be mutually exclusive. | variable: form of government; values: democracy, dictatorship
52
independent variable
is a factor thought to influence, affect, or cause variation in the second type of variable. It always comes before that other factor (the ‘dependent variable’) in time and space. We refer to this factor as ‘independent’ in the sense that, in the context of the research, we are not interested in investigating what brought it about. Stress, in Colin Kahl’s hypothesis, is an independent variable because, in the context of his research, we are not concerned with explaining how this stress came about.
53
A dependent variable
is the outcome that we wish to understand. We call it a ‘dependent variable’ because we hypothesize that it depends upon, or is caused by, variation in an independent variable.
54
intervening variable
The relation between the independent and dependent variables is often affected by an intervening variable.
* An intervening variable that affects the relationship between the independent and dependent variables by producing an interaction effect acts as a ‘moderator’ variable.
* One that transmits the effects of the independent variable to the dependent variable is called a ‘mediating’ variable. Without these mediating variables to act as a conduit, the independent variable would not affect the dependent variable.
54
# Type of relationship Association
a relation between variables such that changes in one variable occur together with changes in the other.
54
# Type of relationship Causality
changes in one variable bring about changes in another
55
Conditions of causality
* A causal mechanism or process links the two variables
* The variables are correlated or co-vary (the score on one variable rises/decreases with the score on the other variable)
* The hypothesized cause is temporally prior to the effect
* The correlation between the independent and dependent variable is not spurious, meaning we have ruled out the possibility that the correlation appears to exist only because there is a variable causally prior to both variables that affects both (a confounding variable)
56
Causal relationships
* A deterministic causal relation states that ‘if (X) then always/invariably (Y)’.
* A probabilistic causal relation states that ‘if (X) then maybe/sometimes/probably (Y)’. ‘Probabilistic’ means that when the values that an independent variable takes on increase, this usually results in the values of the dependent variable increasing (or decreasing).
56
# Quality criteria Reliability
= consistency, free from random errors
* The study can be reproduced and replicated
* Measurements are repeated and show the same result
56
# Quality criteria: Validity Internal validity
How confident can we be that the cause really is responsible for the (variation in the) effect? (In the interpretivist approach this is not about validity but about credibility: the extent to which the presentations by the researcher match the constructed realities of the respondents.)
57
# Quality criteria: Validity External Validity
The extent to which the results can be generalized to other cases (outside the original study). Again, this does not make sense for interpretivists: researchers should instead offer ‘thick description’ of findings within their context (time, place, culture). Thick description enables readers to assess the transferability of the findings to other cases.
58
Why do we need a research design?
1. It specifies the type of research and techniques of data collection appropriate to the objectives of the project.
2. It makes explicit the logic which enables you to draw inferences—logical conclusions based on the information you collect or observations you make.
3. It identifies the type of evidence that not only confirms your hypothesis or argument, but provides a convincing ‘test’ of it.
4. It decreases threats to the internal and external validity of your findings.
5. It ensures that your findings are reliable.
58
Inference
The process of using the facts we know to draw logical conclusions about facts we do not know
59
# Types of research design Experimental design
* Uses experimental control to isolate causal relationships, with researcher intervention in data collection.
* Three main types: laboratory (high internal validity, low external validity), field (higher external validity but ethical concerns), and natural experiments (rely on naturally occurring events).
* Combining lab and field experiments strengthens both internal and external validity.
* Less common in political science due to ethical and practical constraints, but remains highly influential in shaping research methods.
60
# Types of research design Cross-sectional
Cross-sectional designs analyze a **sample at a single point in time**, focusing on **variation between cases** rather than within them. Low internal validity (it is hard to rule out all other confounding variables), but easy to conduct and to repeat. Repeated cross-sectional studies introduce a longitudinal element, but they use different samples.
61
# Types of research design Longitudinal
Longitudinal designs track the **same sample or set of cases over multiple intervals** to examine changes over time. They include cohort studies (tracking a specific group over time) and panel studies (following the same individuals, though with possible attrition). This gives **better internal validity**, so they are good for understanding causality, but they are harder to do, and external validity suffers from **attrition bias** (people dropping out of the survey over time). These designs help analyze long-term changes, such as shifts in political attitudes or voter behavior, which cross-sectional studies cannot fully capture.
62
# Types of research design Comparative
Comparative research designs involve comparisons across cases, including countries, regions, or time periods. Even single-country studies can be comparative if they analyze internal variations.

**Three main types:**
* Large-N studies: compare many cases to identify statistical relationships.
* Small-N studies: analyze a few cases to trace causal mechanisms.
* Single-N studies: focus on one case but compare different periods or regions.

**Trade-offs:**
* Small-N and case studies provide in-depth causal analysis through process tracing.
* Large-N studies offer broad cross-case knowledge but may miss deeper causal mechanisms.
* Case studies have strong internal validity (context, history) but weak external validity (generalizability).
* Expanding case selection improves generalizability but risks conceptual stretching.

**Case selection should align with research goals:**
* Critical cases: test theories.
* Revelatory cases: uncover hidden relationships.
* Unusual cases: highlight extreme or unexpected phenomena.
63
Historical research
Historical research examines how past events influence future developments by contextualizing events and analyzing sequences.
1. Historical events research: focuses on a single case at a specific time, often testing theoretical models (e.g., Allison’s study of the Cuban Missile Crisis).
2. Cross-sectional comparative research: compares multiple cases from one time period (e.g., Pierson’s study of welfare state dismantling during the Thatcher/Reagan era).
3. Historical process research: studies a sequence of events within one case to understand change over time.
4. Comparative historical research: examines sequences of events across multiple cases to compare processes over time and space.
5. Historical institutionalism: analyzes the origins, persistence, and evolution of institutions, focusing on timing and temporality.

**Key concepts in historical institutionalism:**
* Critical junctures: moments that shape long-term institutional development.
* Path dependence: how past decisions constrain future possibilities.
* Positive returns/feedback: self-reinforcing processes that maintain institutional stability.
* Process tracing: identifies causal mechanisms by analyzing sequences of events.
* Event structure analysis: maps causal relationships between actions in a sequence.

**Historical research aids in:**
* Understanding institutional origins and development.
* Explaining causal relationships between political events.
* Providing insights for predictive research (e.g., China-Japan relations based on German-French post-war cooperation).
64
Triangulation
refers to the use of multiple methods or data sources in qualitative research to develop a comprehensive understanding of phenomena (Patton, 1999). Triangulation also has been viewed as a qualitative research strategy to test validity through the convergence of information from different sources
65
Intercoder reliability
reveals the extent to which different coders (in this case business people), each coding the same content (a country’s level of corruption), come to the same coding decisions (whether the country is corrupt or not)
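As a minimal illustration of how such agreement between coders can be quantified (my own sketch with made-up ratings; the deck does not prescribe a formula), raw percent agreement and Cohen's kappa, which corrects that agreement for chance, can be computed as:

```python
from collections import Counter

def percent_agreement(coder_a, coder_b):
    # Share of items on which the two coders made the same decision.
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    # Agreement corrected for the agreement expected by chance alone.
    n = len(coder_a)
    observed = percent_agreement(coder_a, coder_b)
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Chance agreement: probability both coders pick the same category at random.
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
    return (observed - expected) / (1 - expected)

# Made-up example: two coders rating 8 countries as corrupt (1) or not (0).
a = [1, 1, 0, 0, 1, 0, 1, 1]
b = [1, 1, 0, 1, 1, 0, 0, 1]
print(percent_agreement(a, b))       # 0.75
print(round(cohens_kappa(a, b), 3))  # 0.467
```

Kappa comes out below raw agreement because some matching decisions would occur by chance alone.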
65
Other type of variables
* A **nominal variable** is one where the numbers assigned to the variable are interesting only in so far as the labels—or names—attached to them are interesting. To interpret the variable, we must know what the values refer to and the names of the different categories. (Choosing answer no. 2 doesn’t mean it is worse or better than answer no. 1; only the label carries meaning.)
* With **ordinal variables**, the numbers assigned to the different response categories do have some meaning: they have an order. (Here it is on a scale; which number you place yourself on, ‘on a scale from 1-5’, matters.)
65
Measurements
Reliable: free from random errors
Valid: free from systematic errors | e.g. age can be measured reliably, but it is not a valid measure of political attitudes
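The distinction between random and systematic error can be sketched numerically. This is an illustrative simulation of my own (the true value and the two hypothetical measurement instruments are invented, not from the source): a reliable-but-invalid measure is consistent but systematically off, while a valid-but-unreliable one is centred on the truth but noisy.

```python
import random
import statistics

random.seed(1)
true_value = 50.0

# Reliable but not valid: very consistent readings, systematically off by +10.
biased = [true_value + 10 + random.gauss(0, 0.5) for _ in range(200)]

# Valid but not reliable: centred on the truth, but with large random error.
noisy = [true_value + random.gauss(0, 8) for _ in range(200)]

# Reliability shows up as small spread; validity as small bias from the truth.
print(statistics.stdev(biased), statistics.stdev(noisy))
print(abs(statistics.mean(biased) - true_value),
      abs(statistics.mean(noisy) - true_value))
```

The biased series has the smaller spread (more reliable) while the noisy series has the smaller bias (more valid), matching the card's age example.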
66
Measurement validity
1. conceptual definition 2. operational definition 3. variables (indicator) 4. measurements (score, value) | look lecture 4
66
Case
**Case** connotes a spatially delimited phenomenon (a unit) observed at a single point in time or over some period of time. It comprises the type of phenomenon that an inference attempts to explain (in a study that attempts to elucidate certain features of nation-states, cases are comprised of nation-states; in a study that attempts to explain the behavior of individuals, cases are comprised of individuals) ! the spatial boundaries of a case are often more apparent than its temporal boundaries. But also vice-versa; if one is studying terrorist attacks it may not be clear how the spatial unit of analysis should be understood, but the events themselves may be well bounded.
67
Case study
**A case study** may be understood as the intensive study of a single case where the purpose of that study is – at least in part – to shed light on a larger class of cases (a population). An additional implication of the term “case study” is that the unit(s) under special focus is not perfectly representative of the population, or at least its representativeness is questionable. Unit homogeneity across the sample and the population is not assured. If, for example, one is studying a single H2O molecule, it may be reasonable to assume that the behavior of that molecule is identical to that of all other H2O molecules; this is therefore not a case study. In a case study, the case under study always provides more than one observation. These may be constructed **diachronically** (by observing the case or some subset of within-case units over time) or **synchronically** (by observing within-case variation at a single point in time). **Case study research** may incorporate several cases, that is, multiple case studies.
68
Cross case
At the point where the emphasis of a study shifts from the individual case to a sample of cases, we shall say that a study is cross-case.
69
Observation
An observation is the most basic element of any empirical endeavor. Conventionally, the number of observations in an analysis is referred to with the letter N. (Confusingly, N may also be used to designate the number of cases in a study, a usage that is usually clear from context.)
70
# Definition from another author Variable
A single observation may be understood as containing several dimensions, each of which may be measured (across disparate observations) as a variable.
71
Sample
A **sample** consists of whatever cases are subjected to formal analysis; they are the immediate subject of a study or case study. The more case studies one has, the less intensively each one is studied, and the more confident one is in their representativeness (of some broader population), the more likely one is to describe them as a sample rather than as a series of case studies. ! the sample of cases (large or small) rests within a population of cases to which a given proposition refers. **The population of an inference** is thus equivalent to the breadth or scope of a proposition. (I use the terms proposition, hypothesis, inference, and argument interchangeably.) Note that most samples are not exhaustive; hence the use of the term “sample,” referring to sampling from a larger population. Occasionally, however, the sample equals the population of an inference; all potential cases are studied.
71
Meta-analysis
A meta-analysis is a systematic attempt to integrate the results of individual studies into a quantitative analysis, pooling individual cases drawn from each study into a single dataset (with various weightings and restrictions). ! Both statistical meta-analyses and narrative literature reviews assimilate a series of studies, treating them as case studies in some larger project – whether or not this was the intention of the original authors.
72
Case-study characteristics
1. In-depth investigation
2. Any type of method and data possible: documents, macro-economic data, surveys, interviews, observation
3. Often uses a combination of methods
73
Population
When one chooses a case study, one also chooses the population that the case is a case of. The population one studies is tied to the theory that the research will be based on (the researcher needs to clarify which population the case is drawn from and to which population the results of the case study will be inferred). Ex: RQ: Why did the PVV become the most successful populist party in the Netherlands? Case: PVV. Population: populist parties (Europe?)
74
Main types (goals) of case studies
1. **Descriptive**: not organised around an overarching hypothesis/theory, although they can make some causal claims
2. Causal, **explanatory**: aims to identify a new hypothesis: what is the effect of X, what is the cause of Y (both with dependent and independent variables)
3. Causal, **estimating**: aims to test a hypothesis by estimating its causal effect (“Is it a positive or a negative relationship?”). Usually one uses a large-N study here; one would do a case study only if there aren’t enough cases.
4. Causal, **diagnostic**: understand why a specific hypothesis does, or does not (appear to), hold up: confirm, disconfirm, or refine a particular hypothesis.
74
# Case selection strategy Deviant
Deviant case: a case that doesn’t fit a theory and/or known relation between X and Y
* The aim of studying a deviant case is to explain this case
* Identify a hypothesis that can then be applied to other cases (exploratory)
* The explanation could be:
- to suggest a new causal factor
- an interaction between known factors
75
# Case selection strategy Most similar system design (MSSD) | (Russia and eastern Germany)
There are several variants of the most similar systems design, all comparing 2 or more cases:
1) Similar in most characteristics (Z) that might influence Y, but different score on Y.
* Aim is to identify the factor X that explains the difference in outcomes (exploring)
2) Similar in most characteristics (Z) that might influence Y, different score on X.
* Aim is to identify the effect Y, caused by X (estimating)
3) Similar in most characteristics (Z) that might influence Y, different score on X and Y.
* Aim is to determine the mechanisms connecting X to Y (diagnosing)

**Critiques of MSSD and MDSD:**
* Vulnerable to omitted variable bias: there may be other factors, other than X1, that vary/are constant between the cases and that impact the outcome Y
* The outcome may be a result of an interaction between multiple causes
* A given outcome can have multiple (independent) causes
76
# Case selection strategy Influential ## Footnote Most likely: e.g. the Netherlands being the most likely case to study family migration policies, because its court is similar to the French court, immigrants have access to legal aid, and the NL seeks to engage in supranational cooperation
**Two variants:**
1) Most likely: case(s) meet all conditions (X and Z) for Y to occur
2) Least likely: case(s) don’t meet any conditions (Z) thought to cause Y, except for one (X)

**Different views on when/how to use this strategy:**
* Common practice: use a “most/least likely” case as a weak/strong test of a theory by examining whether the prediction is correct (did Y occur?)
* Gerring (2017): not meant to test but to understand why Y did (or didn’t) occur against theoretical expectations (diagnostic), because if we reject/confirm a theory based only on one case, we make a deterministic claim.
77
Internal validity of case studies
1. Case studies are not good for determining causal effects, but rather for uncovering causal mechanisms for a known effect. In-depth investigation, usually over a longer period and sometimes looking at subunits, makes case studies well placed to:
* Identify (observe) causal mechanisms
* Generate new hypotheses
* Identify measurement error, e.g. by triangulating multiple data sources
* Identify possible confounders
2. Critique of (single) case studies: theory testing in a case study requires a deterministic logic
3. Critiques of MSSD and MDSD:
* Vulnerable to omitted variable bias: there may be other factors, other than X1, that vary/are constant between the cases and that impact the outcome Y
* The outcome may be a result of an interaction between multiple causes
* A given outcome can have multiple (independent) causes
! Adding more cases decreases these issues but does not solve them
78
External validity of case studies
Comparatively low external validity as:
* Random sampling is not possible/sensible; cannot do statistical inference
* Selection of the case(s) may determine the conclusions drawn
* Comparatively high risk of investigator (selection) bias

How to deal with the weak external validity? Be transparent about the characteristics of the case: how the case relates to the population on the characteristics of interest (plausible independent variables)
79
Reliability of case studies
**Comparatively low in case studies because of:**

* The "informal nature": many different decisions are made along the research process
* Ongoing interaction between theory and evidence
* Difficulties making (all) data accessible

**Reliability can be improved by transparency on:**

* The argument the researcher sets out to study
* The case selection, incl. what the researcher knew about the case when selecting it
* The process of evidence gathering
* Where possible, allowing replication by depositing data and analysis files
80
Do a case study when
* There is too little theory on the topic to guide a large-C study
* There is too little available data to do a large-C study
* There are too few relevant cases to do a large-C study
81
Large-C studies
Comparing a large number of cases (individuals, countries, political parties). Research activities:

* Collecting numerical data on many cases
* Analysing them through statistical analysis
* Estimating the relationships between variables, and whether they are likely to hold in the population as well (significance)
* Breadth (many cases) rather than depth
82
Interpretivism in case studies
* Not so interested in finding empirical regularities and testing causal claims
* Focuses on a case/site/setting in its own right (idiographic rather than nomothetic)
* Focuses on constructions/understandings rather than causes
* Theorising
83
Covert participation
This observation happens without the informants' knowledge; they are thus not given an opportunity to express whether or not they would like to take part in the research.

! As part of the procedure for obtaining informed consent, the researcher must ensure that potential participants are 'aware of their right to refuse to participate; understand the extent to which confidentiality and anonymity will be maintained; and be reminded of their right to renegotiate consent during the research process'.
84
Issues relating to informed consent
* Providing incentives: offering incentives (for instance, material inducements) to prospective participants to share information with you is unethical.
* Obtaining consent from marginalised or vulnerable populations: some conditions may render people unable to make an informed decision about whether to participate. Working with people in crisis, for instance refugees, raises complex ethical issues around consent (as well as the possibility of causing harm).
* Other conditions that affect the ability to give informed consent: people who cannot speak the language in which the research is being carried out, elderly and infirm individuals, and children.
85
ethics
! Sharing information about a respondent with others for purposes other than research is unethical. You may need to identify your study population in order to put your findings into context, but it is unethical to identify an individual participant and the information he or she provided.

! Harm includes 'research that might involve such things as discomfort, anxiety, harassment, invasion of privacy, or demeaning or dehumanising procedures'.
86
Sponsoring organisations
Sponsoring organisations may sometimes exercise direct or indirect controls: they may select the methodology, or impose other restrictions on the research that stand in the way of obtaining and disseminating accurate information. Both imposing and accepting such controls and restrictions are unethical, as they constitute interference and could amount to the sponsoring organisation tailoring research findings to serve its vested interests.
87
**Data Access**: data needs to be available.

**Production Transparency**: researchers providing access to data they generated or collected themselves should offer a full account of the procedures used to collect or generate the data.

**Analytic Transparency**: researchers making evidence-based knowledge claims should provide a full account of how they drew analytic conclusions from the data, i.e. clearly explain the links between the data and the conclusions.
88
Ethical obligations of researchers
89
Personal data
Any information that relates to an identified or **identifiable living individual**. Different pieces of information which, collected together, can **lead to the identification of a particular person** also constitute personal data.
90
Categories of respondents
1. The first type, **'ordinary people'**, we typically inform that we intend to anonymise the interview transcript and not use their real name should we cite them.
2. With the second type, **'expert informants'**, we typically exchange information about whether and, if so, how they would like to be anonymised.
3. The third type is **'spokespersons'**, whom we typically ask for permission to cite by name.