IPRESS TERMS EXAM 2 Flashcards
Conceptualization and Operationalization
A conceptual definition explains the meaning of a term.
An operational definition links the concept to empirical referents.
Concepts and Concept Formation
A concept refers to a class of objects or behaviors with shared characteristics.
Concepts provide labels for generalizations (e.g., “participation” for voting, “conservative” for political ideology).
Political science relies on concepts such as power, democracy, ideology, and legitimacy.
However, there is no universal agreement on how to define many political concepts.
Researchers may use different terms for similar phenomena (e.g., revolution vs. coup).
Precise definition of terms is necessary for clarity and comparability in research.
concept formation process
Conceptualization involves selecting appropriate terms to represent phenomena.
John Gerring identifies eight criteria for a good concept:
Familiarity – Common understanding of the term.
Resonance – Whether the term “rings” well with audiences.
Parsimony – Simplicity and brevity in definition.
Coherence – Internal consistency in meaning.
Differentiation – Distinctness from similar concepts.
Depth – Inclusion of shared properties among instances.
Theoretical utility – Usefulness in explaining broader theories.
Field utility – Relevance within a specific academic field.
Conceptualization involves trade-offs between these criteria.
examples of operationalisation
Failed State (e.g., Yemen, Pakistan)
- Conceptual definition: A state with high vulnerability or risk of violence.
- Operational definition: Measured by indicators like:
- Humanitarian emergencies (refugee movements, IDPs).
- Economic decline.
- Deterioration of public services.
- Foreign intervention.
- Indicators are measured by researchers.
Speeding
- Conceptual definition: Driving too fast.
- Operational definition: Driving over 70 mph.
- Measured using a speedometer reading.
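A minimal sketch (Python) of how the operational definition above turns the concept into a measurable rule; the function name and the sample readings are made up for illustration.

```python
# Minimal sketch: turning the conceptual definition ("driving too fast")
# into the operational one above (driving over 70 mph, read off a
# speedometer). The function name and sample readings are illustrative.

SPEED_LIMIT_MPH = 70

def is_speeding(speedometer_reading_mph: float) -> bool:
    """Operational definition: any reading above 70 mph counts as speeding."""
    return speedometer_reading_mph > SPEED_LIMIT_MPH

readings = [55.0, 68.5, 72.3, 90.1]
print([is_speeding(r) for r in readings])  # [False, False, True, True]
```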
operationalisation
Concept of Operational Definitions
- Ensures abstract concepts are measurable and observable.
- Conceptual definitions alone are insufficient for research.
Need for Operationalization
- Researchers may agree on a term but not on how to measure it.
- Operational definitions provide clear criteria for identifying a concept.
Normative Political Theory and Methodology
Normative theorists are often less explicit about their research methods.
Many students in the US and UK start advanced political theory research with little methodological training.
The research process for normative and empirical studies can follow similar patterns.
Empirical vs. Normative research
Empirical research: Concerned with the “real world,” using empirical data from observations.
Normative research: Focuses on what is just, right, or preferable, often through argumentative discourse.
Debate in Political Theory
Ideal Theory :
- Focuses on coherent philosophical ideals (e.g., Rawls’ A Theory of Justice).
- Assumes ideal conditions where everyone follows justice principles.
- Helps measure the gap between the ideal and reality.
Non-Ideal Theory :
- Criticizes ideal theory for being detached from real-world issues.
- Emphasizes the role of actual political institutions and practical constraints.
- Advocates for theories grounded in social and political realities (e.g., Carens, Sen).
Application of Normative Political Theory
Can be applied to issues such as globalization, migration, nationalism, gender roles, and inequality.
Researchers develop hypotheses and arguments based on literature and empirical data.
Thinking in terms of hypotheses helps clarify arguments and supporting evidence.
Approaches to Normative Political Research
Conceptual Analysis & Argumentative Discourse
- Focuses on defining principles, categorization, and logical argumentation.
- Uses philosophical methods to develop and critique concepts.
- Thought experiments and hypothetical cases help clarify principles.
Empirical Integration in Normative Analysis
- Evaluates political and economic institutions based on normative criteria.
- Uses historical data, surveys, and case studies to analyze justice and policy.
- Incorporates empirical findings from social sciences (e.g., sociology, economics).
Conceptual traveling
happens when a concept is applied to different contexts, places, or cases beyond where it was originally developed. This can be useful but may also lead to problems if the concept doesn’t fit well in the new context.
Conceptual stretching
occurs when a concept is broadened too much to include more cases, making it lose its precision and analytical usefulness. This often happens when scholars try to apply a term to too many different situations without keeping its core meaning intact.
steps in conducting an experiment
Hypothesis: e.g. Watering increases the proportion of sprouting seeds
Pre-register experiment: hypotheses + design
Random assignment to either experimental or control group (randomization)
Pre-test measure of dependent variable in both groups
One intervention a.k.a. treatment (independent variable)
Post-test measure of dependent variable in both groups
Compare (pre- and) post-test scores of groups
➢ Difference between groups? Strong evidence of an effect of the independent variable (see the sketch below).
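A minimal sketch of these steps using simulated data; the sample size, sprouting probabilities, and the use of a simple difference in post-test means are illustrative assumptions, not part of the flashcard.

```python
# Minimal sketch of the steps above with simulated data. The sample size,
# sprouting probabilities, and the simple difference in post-test means
# are illustrative assumptions.
import random

random.seed(42)
seeds = list(range(200))

# Random assignment to experimental (watered) or control group.
random.shuffle(seeds)
treatment, control = seeds[:100], seeds[100:]

def post_test(group, sprout_prob):
    """Post-test measure of the dependent variable: sprouted (1) or not (0)."""
    return [1 if random.random() < sprout_prob else 0 for _ in group]

# Intervention (treatment): watered seeds are assumed to sprout more often.
treated_outcomes = post_test(treatment, sprout_prob=0.7)
control_outcomes = post_test(control, sprout_prob=0.5)

# Compare post-test scores of the two groups.
effect = (sum(treated_outcomes) / len(treated_outcomes)
          - sum(control_outcomes) / len(control_outcomes))
print(f"Estimated effect of watering: {effect:.2f}")
```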
Face validity
simply means: on the face of it, does the question intuitively seem like a good measure of the concept? If the answer is no, then we definitely have a problem, but what is intuitively yes for one person may not be so for another.
what is the independent variable in an experiment?
the treatment, which you design
Content validity
examines the extent to which the question covers the full range of the concept, covering each of its different aspects.
Criterion validity
examines how well the new measure of the concept relates to existing measures of the concept, or related concepts.
Construct validity
examines how well the measure conforms to our theoretical expectations, by examining the extent to which it is associated with other theoretically relevant factors.
double-barrelled questions
This is when the question incorporates two distinct questions within one. So, for example, consider the question: ‘Do you agree or disagree with the following statement:
“Tony Blair lied to parliament about Iraq and should be tried for war crimes.”’ You might
think he lied but should not be tried for war crimes, and you might think he didn’t lie but
should be tried for war crimes, so how do you answer?
bandwagon effect
a psychological phenomenon in which people do something primarily because other people are doing it, regardless of their own beliefs, which they may ignore or override
(Voters may simply forget which party or candidate they voted for -> Respondents struggling to remember how they voted are probably more likely to infer that they voted for the winner)
telescoping
Respondents can generally remember that they have done such an activity, but cannot recall accurately when it was, and so they confuse real participation within the specified time frame with earlier experiences.
Acquiescence bias
refers to a tendency among respondents (or some types of respondent) to agree with attitude statements presented to them. The poorly educated are more likely to lack true attitudes on the issues presented to them; since they have no real answer to the question, they tend to just agree. Acquiescence bias is therefore likely to be more pronounced on issues or attitudes that are of low salience to the respondent, on which the respondent lacks clearly defined views, or in cases where the question is not clearly understood.
The most obvious remedy to this problem is to mix pro and anti statements, so that at least the extent of
the problem can be identified.
Stratified Sampling
The population is divided into distinct sub-groups (strata), and samples are drawn from each group to ensure adequate representation, improving accuracy for smaller groups of interest.
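A minimal sketch of drawing a stratified sample with Python's standard library; the strata (regions), population sizes, and the 10% sampling fraction are made up for illustration.

```python
# Minimal sketch of stratified sampling; strata and sizes are made up.
import random

random.seed(1)

# Population divided into distinct sub-groups (strata).
population = {
    "north": [f"north_{i}" for i in range(800)],
    "south": [f"south_{i}" for i in range(150)],
    "east": [f"east_{i}" for i in range(50)],
}

# Draw from every stratum so smaller groups are still represented.
sample = []
for stratum, members in population.items():
    n = max(1, round(0.1 * len(members)))  # 10% from each stratum
    sample.extend(random.sample(members, n))

print(len(sample), sample[:3])
```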
Cluster Sampling
Instead of selecting individuals, groups (clusters) are sampled, such as entire households or geographic regions, reducing costs but increasing the risk of sampling error due to similarities within clusters.
Quota sampling
Researchers set quotas for certain characteristics (e.g., age, gender, income) and interview people who fit those criteria. Since selection is not random, interviewer bias can be introduced.
Purposive Sampling
The researcher uses their own judgment to choose individuals they consider representative of the population. This can introduce bias, as it relies on subjective selection.
Snowball Sampling
Researchers recruit initial respondents who then refer others who fit the study criteria. This is useful for hard-to-reach populations (e.g., drug users) but can lead to biased samples.
Volunteer Sampling
Participants self-select to be part of the study, often in response to advertisements or media polls. This method is highly biased, as only motivated individuals participate, making results unreliable.
Response Bias
Occurs when the people who respond to a survey differ significantly from those who do not, leading to misleading results. For example, politically disengaged individuals may be less likely to answer election surveys.
type of survey
1) Cross-sectional (including repeated cross-sectional): e.g. European Social Survey, Eurobarometer, Latinobarómetro, World Values Survey, Gallup World Poll
2) Longitudinal (panel survey): e.g. Understanding Society, German Socio-Economic Panel
I would classify a survey experiment as a type of experiment rather than a type of survey
coverage error
the difference between the sampling frame and the
target population: Does the frame cover the entire population?
what is an error
the difference between the results (of the survey) and the “true” value
error = bias + (chance) variance
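A minimal sketch illustrating the two components with simulated surveys; the "true" value, the distortion in the sampling frame, the sample size, and the number of repetitions are assumptions for illustration.

```python
# Minimal sketch of the decomposition above, using simulated polls.
# The "true" value, the over-representation in the sampling frame,
# the sample size, and the number of repetitions are all assumptions.
import random
import statistics

random.seed(7)
TRUE_VALUE = 0.50    # "true" share of the population holding some view
FRAME_VALUE = 0.55   # the sampling frame over-represents that view

estimates = []
for _ in range(1000):  # imagine repeating the survey many times
    sample = [1 if random.random() < FRAME_VALUE else 0 for _ in range(500)]
    estimates.append(sum(sample) / len(sample))

bias = statistics.mean(estimates) - TRUE_VALUE   # systematic error
variance = statistics.variance(estimates)        # chance variation
print(f"bias ~ {bias:.3f}, variance ~ {variance:.5f}")
```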
when to use a survey
Descriptive questions: Do politicians think that voters are more conservative than they really are? (Pilet et al., 2024) Is trust declining? (Valgarðsson et al., 2024)
Testing relations, causal effects and/or causal mechanisms: Economic policy and anti-immigration attitudes drive vote choice between centre- and radical-right parties (Abou-Chadi & Wagner, 2021 – see T2)
Exploring clustering / building typologies: What do gender ideologies look like in diverse work–family policy settings? (Grunow et al., 2018)
what is bias
systematic error
sampling error
the difference between the sample and the sampling frame: not every unit (person/company/...) in the sampling frame ends up in the sample.
non-response error
the difference between the units sampled and
those responding.
Note: in research papers/books the set of respondents is often called
‘the sample’.
focus group
A type of interview in which the researcher meets with a group of people,
selected because they are related to some phenomenon of interest, and
facilitates an organized discussion related to something the participants
have experience of or beliefs about.
! Focus groups build upon the interview approach by bringing together a small group of individuals to discuss a specific topic. The dynamic interaction within the group can lead to more in-depth reflections and insights compared to individual interviews or questionnaires.
structured interviews
A form of interview that consists of a standardized set of closed and
shorter or simpler questions that are asked in a standardized manner and
sequence and, thus, are better for making comparisons.
unstructured interviews
A type of interview that uses open, and perhaps lengthier and more complex, questions, which might vary in the way and order in which they are asked, and which allow for more in-depth probing and flexibility than structured interviews.
semi-structured interviews
A form of interview in which the interviewer uses a combination of structured questions (to obtain factual information) and unstructured questions (to probe deeper into people's experiences).
when to use interviews
usually explanatory or descriptive questions, fact-finding
sample size for interview
usually small, <50. Interviews don't need a random or representative sample, because they don't aim to make statistical inferences. Purposive sampling is typical (you know what kind of people you would need to get the answers you want). The aim is diversity rather than representativeness, within the purpose (strategic relevance) of the interviews. Sampling tends to continue during data collection: you keep recruiting until additional interviews add little new information.
difference between purposive and convenience sampling
Purposive sampling: Researchers intentionally select participants based on specific characteristics relevant to the study.
Goal: To obtain information-rich cases that provide deep insights into the research topic.
Example: A study on the experiences of cancer survivors may specifically select individuals who have undergone different types of treatment.
Convenience sampling: Researchers select participants based on ease of access and availability.
Goal: To gather data quickly and efficiently, often with limited resources.
Example: A university study surveying students who happen to be in a campus library at the time of data collection.
discourse analysis
is concerned with analysing not just the text itself, but the relationship of a text to its context (its source, message, channel, intended audience, connection to other texts and events), as well as the broader relations of power and authority which shape that context.
discourse analysis is:
interpretive: it assumes that people act on the basis of beliefs, values, or ideologies that give meaning to their actions; and that to understand political behaviour, we must know about the meanings that people attach to what they're doing.
constructivist: it assumes not only that people act towards objects, including people, on the basis of the meanings which those objects have for them, but that these meanings are socially and discursively constructed.
difference between discourse and content
Discourse analysis focuses on how language shapes meaning and power, while content analysis focuses on what is explicitly present in the text.
discourse
Discourses consist of ensembles of ideas, concepts, and categories through which meaning is produced and reproduced in a particular historical situation.
! But while textual analysis can reveal the elements of a discourse, the meaning that they produce or reproduce can only be understood in relation to some broader context.
post-structuralism
Definition: A theoretical approach that challenges the idea of fixed meanings, structures, or absolute truths.
Core Idea: Knowledge, identity, and social reality are historically and socially constructed through discourse and power relations.
Example: Foucault’s work shows how concepts like “madness” or “criminality” are shaped by shifting institutional and discursive practices rather than objective truths.
content analysis
is concerned with the study of the text itself, rather than with the broad context within which it was produced. This analysis can be either quantitative or qualitative.
The aim of quantitative content analysis is to draw inferences about the meaning and intention of a text through an analysis of the usage and frequency of words, phrases, and images, and the patterns they form within a text.
Qualitative content analysis is a more interpretive form of analysis concerned with uncovering meanings, motives, and purposes in textual content.
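A minimal sketch of the quantitative approach described above, counting how often particular words appear in a text; the example text and the keywords tracked are made up for illustration.

```python
# Minimal sketch of quantitative content analysis: counting how often
# particular words appear in a text. The example text and the keywords
# tracked are made up for illustration.
from collections import Counter
import re

text = """The government promised security and prosperity.
Security was mentioned again and again, prosperity only once more."""

words = re.findall(r"[a-z']+", text.lower())
counts = Counter(words)

for keyword in ["security", "prosperity", "government"]:
    print(keyword, counts[keyword])
```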
speech act theory
Definition: A theory of language that sees communication as action rather than just conveying information.
Core Idea: Language is not just about stating facts but also about performing actions (e.g., promising, apologizing, ordering).
Example: Saying “I now pronounce you husband and wife” during a wedding is not just a statement but an act that changes reality.
critical discourse analysis
Definition: A method of analyzing language to uncover power structures, ideology, and social inequality.
Core Idea: Language is a tool of social control, and those in power shape discourse to maintain dominance.
Example: Examining how media portrays marginalized groups to reinforce stereotypes and social hierarchies.