Post-Midterm Flashcards
what is the comparative method
a method of comparing cases in order to identify similarities and differences among them
types of comparative method
single case study- using a single case to build or test a theory
small-n case study- only a small number of cases fit the phenomenon
large-N study- the unit of analysis allows for a large sample, so many variables can be evaluated at the same time.
How to use the comparative method: small n case studies (things to watch out for)
When there is a specific case or phenomenon that you want to investigate.
- things to watch out for:
- number of variables (don’t want too few cases and too many variables)
- selection bias: choosing only cases that prove your theory
- omitted variables: an unaccounted-for variable can lead to a spurious relationship, where the apparent relationship between x and y is actually caused by z
How to use comparative method: small n case studies (strategies)
Most similar systems design
most different systems design
how to use comparative method: small n case studies (best used for..)
- theory development/building and deep investigation
Politics of Third Wave Feminism by Evans
small-n case study: examines whether a federal vs. unitary system affects women's participation in government
How to use comparative method: Large N studies
- quantitative analysis: data analysis, case selection, data collection
- variables are most important
How to use comparative method: large N studies (things to look out for)
- be careful of equivalence of meaning and conceptual stretching: does my definition of the variables mean what I want it to mean across borders?
how to use comparative method: (time wise)
cross-sectional vs. longitudinal.
Historical events research- cross-sectional with a single case
historical process research- longitudinal with a single case
cross-sectional comparative research- cross-sectional with many cases
comparative historical research- longitudinal with many cases
Why use the comparative method
Use it to test theory
- test theory on new cases
- develop new theories and cases
- it helps guard against false uniqueness- "too narrow of an explanation for a large phenomenon"
- and against false universalism- assuming that if it happens in one place it happens everywhere.
What is the Ethnographic approach
how we study people and how data are collected in those studies; more broadly, it includes a range of data collection and data analysis methods.
participant observation
a form of field research; the method of data collection most common in ethnography
why do ethnography
- studying people in their natural habitat is important because there is a difference between what people say and what people do.
- good for exploratory research
informants, field
informants- people being studied
field- the research setting
what are the key concerns for ethnographic research
- case selection (generalizability is not a priority in this case)
- access to research information
- trust, rapport and objectivity
- replicability
Richard Fenno, Home Style- observing members of the US congress in their home districts
example of participant observation: Fenno took mental notes and asked a lot of questions while participating and observing
The Politics of Third Wave Feminism
example of ethnographic research: she used semi-structured interviews and attendance at feminist gatherings
Focus group
observation of the group dynamic, seeing how people interact with each other
types of questions during an interview
closed vs. open questions
types of interviews
structured- closed questions, same order, survey style (many people)
semi-structured- mix of short and long questions; allows follow-ups/modifications
unstructured- long, complex, no set questions/general topics
When to use the interview method
- is the info you need only available through talking to people?
- when doing qualitative analysis
things to consider when choosing the type of interview
- exploratory (qualitative) vs. explanatory (quantitative)
- is the topic straightforward or complex?
- are costs, time, and available facilities an issue?
- is the reliability or validity of answers threatened?
two types of interviews in political science
expert interview
elite interview
3 steps in data analysis
data reduction- reduce the data down to common themes
data coding- what are the commonalities and what will you call these commonalities
analysis- what does the data mean, do my conclusions make sense in terms of internal validity
quantitative analysis happens in what type of interview
close-ended/structured interview in which there are many responses
Dr. Roberta Rice’s research project
why are indigenous parties successful in some Latin American countries but not in others?
Bolivia, Ecuador, Peru, Chile
Rice’s methods
100+ semi-structured interviews, participant observations, organizational documents
what is an interview
a directed conversation that elicits the inner views of the respondent.
advantages of interviews
- rich data- that you can’t find in textbooks or online
- its in the respondents’ own words
- you can learn new things from interviewees, gaining understanding
- the interview and the human interaction are themselves a source: body language, what gets respondents excited, what makes them uncomfortable
Disadvantages of interviews
- interviews may not be truthful (need to confirm or validate the info respondents give)
- time consuming
- difficult to analyze and generalize from
- interview effects- Hawthorne effect (observation changes people’s actions) Rosenthal effect (other’s expectations of the target affect the target’s performance)
- lack of standardization- your question may not mean the same thing across the board; sometimes you have to rework your question.
7 steps in interview research
- identify the requirements for the research
- choose an interview type that fits those requirements
- identify participants - secondary research on people
- design an interview guide- intro: explain who you are and the purpose of the interview, then the questions, and leave it open for the interviewee to add anything.
- background research on the interviewee so you can ask the right questions
- conduct the interview- ethics approval must be obtained beforehand
- afterward, analyze your notes.
Experimental method
a method that allows you to control variables in order to rule out outside influences and identify causal variables
experimental method- intervention
treatments or settings are manipulated or controlled by the researcher, e.g. in a lab setting
Experimental design
pre-test: measure the outcome of the control and experimental groups before the intervention
intervention- the control group receives no treatment; the experimental group receives the treatment
post-test- measure the outcome of both groups, then compare and analyze
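A minimal sketch of the pre-test/intervention/post-test logic above (the group names and outcome scores are hypothetical, not from the course):

```python
# Hypothetical pre-test/post-test comparison of a control and an experimental group.
# All scores are made-up illustrations, not real data.
from statistics import mean

control_pre = [5.1, 4.8, 5.0, 5.2]
control_post = [5.0, 4.9, 5.1, 5.2]        # no treatment applied
treatment_pre = [5.0, 4.9, 5.1, 5.0]
treatment_post = [6.2, 6.0, 6.4, 6.1]      # treatment applied between measurements

# Compare the change in each group; a larger change in the treatment group
# (relative to the control) is evidence that the intervention had an effect.
control_change = mean(control_post) - mean(control_pre)
treatment_change = mean(treatment_post) - mean(treatment_pre)
print(f"control change: {control_change:.2f}, treatment change: {treatment_change:.2f}")
```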
when do we use the experimental method
when you need control over the environment and want to identify causal variables
relationship between control and validity in experimental method
the more control over the situation, the less external validity, and vice versa
Dr. Tuxhorn’s study
explaining support for a Canada-China trade agreement
Dr. Tuxhorn’s research question and research design
Research question: is Canadian support for China driven by fears of the US?
Research design: randomized sample: split into control and experimental group
- control group left alone
- group 1: given info about US protectionism
- group 2: given info about US gains over Canada
- looked at which group would be more supportive of trade with China
- the group given info about US gains showed more support for trade with China
why one can’t assume correlation = causation
there are many factors that come into play and could be causing the phenomenon
relative gains
concern with how much one side gains compared to the other; info about relative gains can change the way people act/react
relationship between random sampling and external validity
positive: random sampling makes the sample more representative, which increases external validity
externalities
variables that one cannot control
Ethical problems that come with talking to people about political issues
talking to people about how they should vote affects voter turnout and voting, so it impacts actual politics and may favour a particular party
ideal research design
comparing fictional or what-if scenarios and their effects
- what if NAFTA negotiations never happened vs. if they did?
what is the survey method
standard questionnaire, usually involves many participants.
why use the survey method
- it is used for describing and explaining attitudinal and behavioural phenomena; a common form of data collection
validity, reliability measurement error of measures in survey method
surveys are good at showing what a section of the population generally think, but can at times misrepresent them.
measurement error- questions attempt to measure political attitudes and behaviours, but because the answers are abstract they are hard to measure (e.g. respondents don't remember, lie, or are unwilling to reveal personal views)
validity- the ability of the questions to fully capture the concept they are trying to measure.
reliability- is the question answered in the same way across the board.
two types of survey methods
self-administered vs. supervised
pros and cons of supervised surveys
con: interviewer-effect
Pro: increased comprehension of question
when to use the survey method
- depends on research question: is the research question sensitive to the interviewer effect, the social desirability effect, recall problems (would it be easier to get accurate information with a survey rather than face to face?)
errors in representation of sample
A correct sample allows for an accurate description of the population.
coverage error- not everyone has an equal chance of being selected
sampling error- the degree to which the sample is different from the actual population
non-response error- sample is not reflective because participants refused to participate
response rate
(number of completed interviews) / (number of attempted interviews)
- low response rates are common with cellphones and robocalling
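A quick worked example of the response-rate formula above (the counts are hypothetical):

```python
# Response rate = completed interviews / attempted interviews (hypothetical numbers).
completed = 320
attempted = 1200
response_rate = completed / attempted
print(f"response rate: {response_rate:.1%}")   # prints: response rate: 26.7%
```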
What helps increase response rate
- incentives,
- a live person calling instead of robocalling
Importance of random sampling
- avoiding coverage error- everyone has equal chance of being selected
- more representative of the population therefore increases validity
- avoiding selection bias
types of random sampling
simple random sampling
systematic sampling
stratified sampling
cluster sampling
non-random sampling
when you intentionally pick your sample
why use non-random sampling
there is no sampling frame (no natural pool to pick from, e.g. terrorists), limited time and resources, a case study, or a small-sample study
Examples of non-random sampling and definitions
quota- taking people in as they come until a set quota is filled (e.g. the first thirty people who walk into the store)
purposive- picking a specific sample because of the way it relates to your case study or answers your research question
snowball- asking interviewees if there is anyone else you can talk to
systematic sampling
randomly selecting where you start, then picking at a fixed interval. Example: skip interval = population size divided by sample size (e.g. 4), so you pick every fourth person from the pool
stratified sampling
randomizing at different levels: divide the population into strata (levels) and randomly sample within each one
cluster sampling
the population is divided into groups (clusters); you pick clusters and then sample individuals from them. Example: for church attendance, look at groups of different churches and pick attendants from those churches to sample
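A minimal Python sketch of three of the sampling approaches above (simple random, systematic with a skip interval, and stratified); the population and strata are made up for illustration:

```python
import random

population = list(range(1, 101))          # 100 hypothetical respondents
sample_size = 10

# Simple random sampling: every member has an equal chance of selection.
simple = random.sample(population, sample_size)

# Systematic sampling: skip interval = population size / sample size,
# random starting point, then every k-th person.
k = len(population) // sample_size        # skip interval (10 here)
start = random.randrange(k)
systematic = population[start::k]

# Stratified sampling: randomly sample within each stratum (e.g. region).
strata = {"west": population[:50], "east": population[50:]}
stratified = [person for stratum in strata.values()
              for person in random.sample(stratum, sample_size // 2)]

print(simple, systematic, stratified, sep="\n")
```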
Rules of a good questionnaire
- convince people to participate
- includes all info you need to collect (valid measures of the factors of interest)
- elicits acceptably accurate information
parts of a questionnaire (intro)
intro- states the purpose (this has to be vague so as not to skew answers) and expresses gratitude (builds rapport)
states ethics- anonymous and voluntary
order of questions in a questionnaire
questions should be:
- easy to answer, interesting, close-ended.
- general questions should come before specific ones (e.g. general perceptions of crime before personal experience, to avoid skewed responses)
- questions directly related to the survey question should be at the beginning of the questionnaire
- group common questions together
- include instructions to avoid measurement error, e.g. "0 means not at all"
what to consider when writing questions
- the goal: using the questions to create valid and reliable measurements of the key concepts
- the 4 types of response processes
- is the question simple and clear/easy to understand
- do they need certain info to answer the question, do they have that info, and can my question be interpreted the same way across the board (across cultures/languages)?
Rules for close-ended questions
- there should be only one category for every possible response.
- avoid a "don't know" response option in order to ensure a response for each question- opinion isn't simply there or not; it is created as you ask questions
- avoid leading questions
response set
similarity of responses across a range of questions, e.g. agreeing or disagreeing with a whole set of questions
acquiescence bias
the tendency of respondents to agree with statements presented to them; to avoid this, use a scale (strongly agree to strongly disagree)
social desirability and how to avoid it
the tendency to answer in whatever way is most socially acceptable. This can be reduced through an introduction to the question that softens the pressure.
Franceschet research
Directorio Legislativo- looking at how the gender of politicians affected their political careers and demographics
process of research:
1. collecting data-
2. figuring out research questions that best take into account all the variables at play, e.g. does gender affect a politician's degree level
3. making data usable- coding
4. consult with experts to make sure questions make sense.
How is quantitative data used?
to describe and explain data
what are the 3 different levels of measurement in quantitative analysis?
nominal- mutually exclusive categories that cannot be numerically ranked; the numbers assigned to them have no meaning. The lowest level of measurement.
ordinal- the categories have an order and can be ranked, but the numbers assigned do not have meaning and the distance between categories is not equal. A higher level of measurement.
interval- has the same qualities as ordinal and nominal, but the numbers assigned have intrinsic meaning and value, e.g. level of income or number of children in a home. The highest level of measurement.
discrete vs. continuous measures
discrete- breaks between each measure (1, 2, 3)
continuous- no break points between measures; values can fall anywhere within a range
what is univariate analysis
statistical measures used to summarize the characteristics of a single variable
2 common statistics in univariate analysis
proportion- share of cases relative to the whole population on a 0 to 1 scale (50 women in a sample of 125=0.4 proportion)
percentage- proportion times 100 (40% women)
what are measures of central tendency
a way of measuring the central value in a frequency distribution using mean (gives the most information), median, mode (gives the least information)
frequency distribution
describes the entire distribution of responses and summarizes the number of cases for each different response
comparison of utility: mean, median, mode and what level of measurement each uses
mode- least utility; tells the most common response and not much else, and can change dramatically with the addition of a few new cases, so it is not very stable. (nominal)
median- uses ordinal data, because to determine the median you have to order the values from lowest to highest. Looks at the middle value of the distribution.
mean- most useful because it says where a case sits relative to others (above or below average), but the median says more about the typical case if there are outliers. (interval)
what is measure of dispersion and how do you do it
definition: measuring how typical a particular case is now that you know the standard (central value).
we do that with: nominal, ordinal, and interval
nominal- there is no measure of dispersion, doesn’t talk about the case relative to another just indicates how many cases there are
ordinal- tells us about the spread of the data by giving us the range (the highest - the lowest value)
interval- standard deviation: calculating how far the case deviates from the mean and in what direction.
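A minimal sketch of the central-tendency and dispersion measures above, using Python's statistics module; the income figures are hypothetical:

```python
from statistics import mean, median, mode, stdev

incomes = [30, 32, 35, 35, 40, 42, 45, 120]   # hypothetical incomes in $1000s

print("mode:", mode(incomes))                  # most common value (nominal level)
print("median:", median(incomes))              # middle value (ordinal level)
print("mean:", round(mean(incomes), 1))        # average (interval level), pulled up by the outlier (120)
print("range:", max(incomes) - min(incomes))   # simple spread (ordinal level)
print("std dev:", round(stdev(incomes), 1))    # how far cases deviate from the mean (interval level)
```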
rules for tables and charts
- straightforward, clear, informative
- description in the text below
- clear title, clear labels,
- note the source of the data
Frequency tables
a way of representing the distribution of a single variable; the table provides information about the distribution of responses across all categories
utility of bar chart, pie chart, and line graph
bar chart- good for showing differences in percentages across categories
pie chart- good for showing each category's share of the whole
line graph- showing a trend over time, continuous longitudinal data
two types of textual analysis
content and discourse analysis
content analysis
- analysis of the content of a text to find the meaning behind it
- capturing and coding the characteristics of the text
- the act of studying has no effect on those you are studying
manifest vs. latent content analysis
manifest- coding the visible, surface-level characteristics of a text (the easiest to find)
latent- interpretations of meanings, motives and purposes in a text
steps of content analysis
- select text to be analyzed
- determine the categories or topics of interest you are looking for in the text
- determine the unit of analysis: every sentence, phrase, paragraph, etc.
- assign a different code to each of the categories you determined in step 2
- code
- arrange the codes to provide a description of the variables or show a relationship between variables
- draw conclusions from these relationships that link back/help answer the research question
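A minimal sketch of the coding steps above, with hypothetical categories, keywords, and text, and each sentence as the unit of analysis:

```python
# Hypothetical content-analysis coding: categories, keywords, and text are all made up.
text = ("The economy is struggling. Crime rates worry voters. "
        "Jobs and wages dominate the debate.")

# Steps 2 and 4: categories of interest and the codes assigned to them.
categories = {"ECON": ["economy", "jobs", "wages"],
              "CRIME": ["crime", "police"]}

# Step 3: unit of analysis = each sentence.
units = [s.strip() for s in text.split(".") if s.strip()]

# Step 5: code each unit, then tally the codes (step 6).
counts = {code: 0 for code in categories}
for unit in units:
    for code, keywords in categories.items():
        if any(k in unit.lower() for k in keywords):
            counts[code] += 1

print(counts)   # e.g. {'ECON': 2, 'CRIME': 1}
```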
what is analysis
process of attaching meaning to data and connecting this meaning back to the research question
Discourse analysis
how discourse plays a role in creating, resisting, and redistributing social power abuse, dominance, and inequality. The analysis looks at how the discourse establishes a particular context within the text and identifies the impact of that discourse on power relations
advice on writing
punctuation:
- don't use a comma to connect two independent clauses
- parenthetic expressions should be enclosed in commas
- semi-colon should not be used to join incomplete sentences
parts of a paper
Introduction: present the topic, the research question, how you will argue it and how the paper will proceed.
body: develop your arguments, and address counterarguments, discuss findings and data
conclusion: recaps the arguments and connects them back to the thesis, addresses the limitations of the research and what needs to be done moving forward.
what is critical thinking
the systematic evaluation of beliefs by rational standards
arguments that work
- argument supported by evidence
- argument that builds on existing theory
- clear statement of claim being made
- argument is logical
argument that doesn’t work
- argument with irrelevant premise- based on emotion
- arguments with false/unacceptable premise
strategies for successful writing
mind map to brainstorm ideas, create an outline prior to writing
bivariate vs. multivariate analysis
bivariate- analysis of the relationship between two variables
multivariate- analysis of the relationship among three or more variables.