Curriculum Flashcards
F1 Gelman et al. (2020)
Close elections (small errors mean a lot – difficult to forecast)
Polls have more error than they report: the stated margin covers only sampling error (about ±2 pct. points), but total error is closer to ±4 because of nonsampling error (nonresponse, mode effects, house effects); see the worked example after this card
Argument for regional correlation (neighboring states swing similarly)
Incentives among forecasters (over- and underconfidence)
Gives a basic account of the current state of our understanding of forecasting models
Fundamentals x calcification/polarization is new territory for forecasters: politics is changing, but the models are not
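A minimal worked example of the sampling-only margin of error behind the ±2 vs. ±4 contrast above; the sample size n = 2,400 is an illustrative value, not one from the paper:

$$\text{MOE}_{\text{sampling}} \approx z\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.5 \cdot 0.5}{2400}} \approx 0.02 \quad (\pm 2 \text{ pct. points})$$

If nonsampling error roughly doubles the total, the realistic uncertainty is about ±4 pct. points.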
F1 Victor (2021)
Criticism of forecasting:
(1) Partisan polarization perverts fundamentals
(2) Forecasts may affect turnout
(3) Outsized focus on the election horse race
(4) Forecasts give a false impression of science and certainty
F1 Cohn & Katz (2018)
Show your probability and how it can change
Misinterpretation of uncertainty/probability
F2 The Bitter End, chapter 1
Calcification due to:
(1) Long-term (party polarization)
(2) Short-term shocks (Trump's emphasis on identity issues, which contain more disagreement, plus Covid-19 and trust in government)
Calcification (locked in): party polarization (the parties are further apart, so switching requires a bigger ideological leap) + affective polarization (worse feelings about the other party)
Calcification manifests in many ways (vote, perception of economy, trust)
F2 The American Voter, chapter 2
Funnel of causality (Michigan model)
Most votes are determined by sociodemographics and party identification
Fundamentals are 'issues': more important for swing voters/independents
F2 The American Voter, chapter 13
Economic voting: objective/subjective, prospective/retrospective, and egotropic (pocketbook voting) vs. sociotropic
Party identification can condition the relationship between economic evaluations and vote choice (forerunner of Brady et al. 2022)
The election is a referendum on the incumbent's performance on the economy (like Abramowitz, but narrower)
Sociotropic voting is more prevalent
F2 Brady et al. (2022)
Growing partisan divide in economic perceptions: Republicans and Democrats perceive the same economy differently
Economic variables still matter, just less than before
Builds on The American Voter, but with more data from more calcified elections
F2 Erikson & Wlezien (2008)
Polls and economic indicators.
Early in the cycle, economic indicators explain more than the polls do (Q2 GDP is key)
Over the campaign, economic information gets incorporated into the polls, so late indicators add little (Q3 GDP doesn't tell us much beyond the polls)
F2 The Bitter End, chapter 8
Story of 2020 election: Fundamentals favored Biden, but it was closer due to Trump.
Chronically low approval for Trump (from start to finish) = fundamentals are still relevant
Basically Obama in 2012 economically, but with a lower approval rating
Why didn't Trump lose by more / why didn't his support plummet? Calcification.
Covid-19 and Black Lives Matter: big events didn't move the vote much, a perfect example of calcification (just like the shooting)
F3 Abramowitz (2008)
Three parameters: net presidential approval, Q2 GDP growth, and incumbency/time in the White House (see the sketch after this card)
GDP is a weaker predictor than approval. Discusses the difference between incumbency and time in the White House
Referendum on the presidency as a whole (broader than economic voting)
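A hedged sketch of the model's functional form, with symbolic coefficients only (the paper's estimates are not reproduced here):

$$\widehat{V}_{\text{inc}} = \beta_0 + \beta_1\,\text{NetApproval}_{\text{June}} + \beta_2\,\text{GDP}_{\text{Q2}} + \beta_3\,\text{FirstTermIncumbent}$$

where $\widehat{V}_{\text{inc}}$ is the incumbent party's predicted share of the two-party vote and the last term is the first-term-incumbent ("time for change") dummy.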
F3 Dickinson (2014)
Looking at 2012 election
Argues the fundamentals still matter (TV hosts declared that they didn't, crediting Obama as simply a great candidate)
Fundamentals are brought to voters through the campaign (campaigns interpret them differently)
F3 Erikson & Wlezien (2014)
Same argument as the 2008 text
Economic indicators are channeled into the polls (bringing the fundamentals to voters, a point Gelman especially emphasizes)
Internal and external fundamentals
Around 100 days before the election the economic indicators start influencing the polls
F3 The Bitter End, chapter 3
Trump's approval rating: why was it so low? Calcified politics
Populism among Republican voters
Affective polarization
F3 Linzer & Lauderdale (2015)
Fundamentals models carry much more uncertainty than they report (coefficient uncertainty, model specification, and going from the national to the state level)
Uncertainty is understated, just like polls that only report sampling error
Bayesian fundamentals model: a more complex model that makes these uncertainties explicit (see the sketch after this card)
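A minimal sketch of the kind of hierarchical setup described above (the notation is mine, not the paper's):

$$y_s \sim \mathcal{N}\!\left(x_s^{\top}\beta + \delta_{\text{nat}},\ \sigma^2_{\text{state}}\right), \qquad \beta \sim \mathcal{N}(\mu_\beta, \Sigma_\beta), \qquad \delta_{\text{nat}} \sim \mathcal{N}(0, \sigma^2_{\text{nat}})$$

Here $y_s$ is the incumbent's two-party share in state $s$: the prior on $\beta$ carries coefficient uncertainty, $\delta_{\text{nat}}$ is a shared national-level error, and $\sigma^2_{\text{state}}$ is residual state-level error, so the posterior predictive distribution reflects all three sources rather than only sampling noise.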
F4 538 (2024): How pollster ratings work
Accounts for:
Accuracy: average error (distance from the election result, adjusted for how hard the race is to predict) and average bias (house effect); worked example after this card
Methodological transparency
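A minimal sketch of the error/bias distinction, using simplified definitions (error = absolute miss on the margin, bias = signed miss), not 538's actual adjustment procedure; all numbers are hypothetical:

```python
# Toy illustration of "average error" vs. "average bias" for one pollster.
# Positive margins = Democratic lead; values are hypothetical.
polls = [
    {"poll_margin": 4.0, "result_margin": 1.0},
    {"poll_margin": 2.5, "result_margin": 3.0},
    {"poll_margin": -1.0, "result_margin": -4.0},
]

errors = [abs(p["poll_margin"] - p["result_margin"]) for p in polls]
signed = [p["poll_margin"] - p["result_margin"] for p in polls]

avg_error = sum(errors) / len(errors)   # how far off, regardless of direction
avg_bias = sum(signed) / len(signed)    # systematic lean (house effect)

print(f"average error: {avg_error:.2f} pts")   # 2.17
print(f"average bias:  {avg_bias:+.2f} pts")   # +1.83 (systematic Democratic lean here)
```

A pollster can have a large average error with near-zero bias (noisy but unbiased), or a modest error with a consistent bias (a house effect).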
F4 Silver (2021): Death of polling
Polling in 2020 was mediocre, not bad
Pollsters try to correct their previous mistakes, with a risk of overcorrecting
Live-caller polls are no longer better than internet polls (response rates are low across the board)
Distinguishes between error (distance from the election result) and house bias (distance from the polling average)
F4 Bailey (2014)
Everything about polling.
Modern problem = sampling (no true random sampling because of low response rates)
Types of nonresponse (case dependent): ignorable (groups are over-/underrepresented in size, fixable by weighting; see the sketch after this card) and non-ignorable (respondents within a group differ from nonrespondents)
Introduces polling modes (live calling, face to face, etc.). Sampling frame: the pool of potential respondents
Probability-based sampling (random digit dialing) vs. panels (the wild west, borrowing methods from probability-based sampling)
Partisan nonresponse: excitement (a new candidate makes Democrats answer more; Trump calling polls fake news makes Republicans less inclined to answer)
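A toy sketch of how weighting corrects ignorable nonresponse (group sizes); the cells and population shares are made up for illustration. Non-ignorable nonresponse within a cell cannot be fixed this way:

```python
# Toy post-stratification: reweight respondents so the sample matches known
# population shares. Cells and numbers are illustrative only.
population_share = {"college": 0.35, "non_college": 0.65}

sample = [
    {"cell": "college", "vote": "D"},
    {"cell": "college", "vote": "D"},
    {"cell": "college", "vote": "R"},
    {"cell": "non_college", "vote": "R"},
    {"cell": "non_college", "vote": "D"},
]

# College respondents are overrepresented (3/5 in the sample vs. 0.35 in the population).
counts = {c: sum(r["cell"] == c for r in sample) for c in population_share}
weights = {c: population_share[c] / (counts[c] / len(sample)) for c in population_share}

raw_d = sum(r["vote"] == "D" for r in sample) / len(sample)
weighted_d = sum(weights[r["cell"]] for r in sample if r["vote"] == "D") / sum(
    weights[r["cell"]] for r in sample
)

print(f"raw D share:      {raw_d:.2f}")       # 0.60
print(f"weighted D share: {weighted_d:.2f}")  # 0.56, college (more D here) is downweighted
```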
F4 Rentsch et al. (2019)
Likely voters vs. registered voters (registered-voter samples overestimate Democrats)
Why: Low turnout and differential turnout. You are trying to predict an electorate that hasn’t formed yet
Likely voter models (probabilistic and deterministic; see the sketch after this card)
Combination of Perry-Gallup (PG) screening questions + demographics, because turnout depends on sociodemographics
Stated vote intention was the most important item (people who said they were not going to vote almost never voted)
First you deal with (ignorable) nonresponse through weighting, then with turnout
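A toy sketch of the deterministic (cutoff) vs. probabilistic (weighting) approaches the card names; the index items, scoring, and data are illustrative, not the paper's exact method:

```python
# Toy likely-voter screens: deterministic cutoff vs. probabilistic weighting.

def turnout_score(resp):
    """Simple Perry-Gallup-style index: count affirmative turnout indicators."""
    items = ["intends_to_vote", "voted_last_election", "knows_polling_place", "high_interest"]
    return sum(bool(resp.get(item)) for item in items)

respondents = [
    {"vote_choice": "D", "intends_to_vote": True, "voted_last_election": True,
     "knows_polling_place": True, "high_interest": True},
    {"vote_choice": "R", "intends_to_vote": True, "voted_last_election": False,
     "knows_polling_place": True, "high_interest": False},
    {"vote_choice": "D", "intends_to_vote": False, "voted_last_election": False,
     "knows_polling_place": False, "high_interest": False},
]

# Deterministic: keep only respondents above a cutoff, count them equally.
likely = [r for r in respondents if turnout_score(r) >= 3]
det_dem_share = sum(r["vote_choice"] == "D" for r in likely) / len(likely)

# Probabilistic: keep everyone, weight each vote by an estimated turnout probability.
weights = [turnout_score(r) / 4 for r in respondents]
prob_dem_share = (
    sum(w for r, w in zip(respondents, weights) if r["vote_choice"] == "D") / sum(weights)
)

print(f"deterministic D share: {det_dem_share:.2f}")  # 1.00 (only the first respondent passes)
print(f"probabilistic D share: {prob_dem_share:.2f}")  # 0.67
```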
F5 Jackman (2005)
Why pool the polls: a single poll lacks the power to detect small changes (e.g. 0.5 pct. points). Motivation: more data reduces sampling error
Kalman filter / random walk (today's estimate is the prediction for tomorrow; see the sketch after this card)
Retrospective: evaluates polls from an Australian election
Estimates house effects by comparing each pollster's polls to the election result (anchoring)
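A minimal Kalman-filter sketch of the random-walk pooling idea (one latent level of support observed through noisy polls); the variances and poll data are illustrative, not Jackman's estimates:

```python
# Minimal random-walk / Kalman-filter pooling of polls (illustrative values).
# Latent support follows a random walk; each poll is a noisy observation of it.

polls = [
    # (observed share, sample size)
    (0.52, 800),
    (0.50, 1000),
    (0.53, 600),
]

state_var = 0.0001      # assumed day-to-day random-walk variance (illustrative)
mean, var = 0.50, 0.01  # diffuse-ish prior on latent support

for y, n in polls:
    # Predict step: the random walk adds state variance.
    var += state_var
    # Observation variance from the poll's binomial sampling error.
    obs_var = y * (1 - y) / n
    # Update step: the Kalman gain blends the prior estimate and the new poll.
    gain = var / (var + obs_var)
    mean = mean + gain * (y - mean)
    var = (1 - gain) * var
    print(f"after poll {y:.2f} (n={n}): estimate {mean:.3f}, sd {var ** 0.5:.3f}")
```

Pooling shrinks the uncertainty below any single poll's margin of error, which is what makes small shifts detectable.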
F5 Gelman (2021)
Polling error is not that bad
The key challenges are (a) attaining a representative sample of potential voters (differential nonresponse), and (b) predicting turnout (differential turnout)
Sampling error and non-sampling error (nonresponse etc.)
Pollsters inform us about opinion trends and policy preferences
Differential nonresponse and differential turnout seem like more plausible explanations of polling error.
House effects are unintended
Rejects the shy Trump voter hypothesis (differential nonresponse is the more likely explanation)
F6 The American Voter, chapter 11
Group identification leads to common beliefs and aggregated cost/benefit calculations due to:
1) Psychological identification
2) Linked fate
3) Group membership
Three levels: individual – group – political leader (the individual looks only to the group, not to the political leader)
Strong identification with the group = strong predictor of vote
A relative understanding: how distinct is the group compared to others?
F6 The American Voter, chapter 12
Social classes (defined by a certain level of education, income, etc.). Not so relevant anymore
Not as visible in everyday life as groups (no psychological identification)
The distinction between group and category/social class is sometimes blurred.
Four social variables: class, education, age, and gender
F6 Pew Research Center (2024)
Empirical information/evidence for group theory
White evangelicals, Black voters, and the unaffiliated lean the most
Educational divide (maybe short-term, driven by Trump)
F6 The Bitter End, chapter 4
The 2020 primaries. Focus on electability and Biden's winning coalition among sociodemographic groups