Algorithmic Policing Flashcards
Q: What is algorithmic policing?
A: The use of data and algorithms by law enforcement to forecast where and when crimes may occur, and who may be involved, enabling pre-emptive policing.
Q: What is the goal of algorithmic policing?
A: To predict and prevent crime by analyzing historical data and deploying police resources in advance.
Q: What does Feeley & Simon (1994) say about algorithmic policing?
A: It represents a “new penology”: a strategy focused less on individual guilt and more on classifying and managing groups according to their perceived dangerousness.
Q: What does Zedner (2007) say about algorithmic policing?
A: It reflects a “pre-crime” rationality of security governance: acting pre-emptively to avert anticipated harms rather than responding to offences after the fact.
Q: What are the four stages of algorithmic policing in practice?
A: 1. Collecting data (e.g., arrests, location, time)
2. Analyzing it with algorithms
3. Forecasting future crimes
4. Responding with police action (see the sketch below)
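A minimal sketch of these four stages in toy Python, assuming hypothetical incident records and grid cells; this illustrates the logic only, not any vendor's actual system:

```python
from collections import Counter

# Stage 1: collect data -- hypothetical reported incidents as (grid_cell, hour)
incidents = [("A1", 22), ("A1", 23), ("B2", 14), ("A1", 21), ("C3", 2)]

# Stage 2: analyze with an algorithm -- here, a crude count per grid cell
counts = Counter(cell for cell, _hour in incidents)

# Stage 3: forecast -- flag the historically busiest cell as the next "hotspot"
hotspot, _count = counts.most_common(1)[0]

# Stage 4: respond -- direct patrol resources to the forecast hotspot
print(f"Dispatch extra patrols to {hotspot}")  # Dispatch extra patrols to A1
```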
Q: What is “social sorting” (Lyon, 2007)?
A: The classification of people into categories based on data about them, used to treat groups differently; those who don’t fit the norm are treated as suspect, reinforcing social inequality.
Q: What is “racializing surveillance” (Browne, 2015)?
A: Surveillance practices that reify and enforce racial boundaries, producing discriminatory treatment of negatively racialized groups.
Q: What does Garland (1996) say about algorithmic policing?
A: It reflects a criminology of the ‘alien other’: framing marginalized groups as intrinsically dangerous outsiders, unlike the dominant social group.
Q: How does algorithmic policing undermine social trust?
A: By treating everyone as a potential suspect, it promotes the idea that no citizen is fully trustworthy, eroding the presumption of innocence and trust between citizens and the state (UK House of Lords, 2009).
Q: Assumption 1: Data is a perfect reflection of reality — why is this flawed?
A: Crime data capture only what is reported or observed, and both are shaped by policing priorities, underreporting, and systemic inequality; the data therefore mirror enforcement patterns as much as crime itself.
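A toy simulation of the feedback loop this creates, with entirely made-up numbers: two areas with identical true crime rates, one of which starts with more recorded incidents.

```python
# Patrols follow the recorded data, and patrolling generates more records,
# so the initial gap grows. All figures here are hypothetical.
true_crime = {"North": 10.0, "South": 10.0}   # identical underlying crime
observed = {"North": 5.0, "South": 2.0}       # South under-recorded at start

for step in range(5):
    # The "algorithm" sends patrols where recorded crime is highest...
    target = max(observed, key=observed.get)
    other = "South" if target == "North" else "North"
    # ...and patrolled areas have more of their crime recorded than others.
    observed[target] += 0.5 * true_crime[target]   # heavy observation
    observed[other] += 0.1 * true_crime[other]     # light observation
    print(step, observed)
# Recorded crime in North races ahead of South, "confirming" the initial
# disparity even though the true crime rates never differed.
```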
Q: Assumption 2: Algorithms are neutral — why is this flawed?
A: Algorithms are built by humans: choices about which variables matter and how to weight them embed the designers’ values and biases, not objectivity.
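A tiny illustration of this point, using invented variables and weights: two designers scoring the same person reach opposite conclusions purely through their choice of which variables matter.

```python
# Two hypothetical "risk scores" for the same person. Variables and weights
# are made up; the point is that the designer's choices drive the outcome.
person = {"prior_arrests": 2, "neighborhood_crime_rate": 8, "years_employed": 4}

# Designer A weights a place-based proxy variable heavily.
score_a = 3 * person["prior_arrests"] + 1 * person["neighborhood_crime_rate"]

# Designer B ignores place and treats stable employment as protective.
score_b = 3 * person["prior_arrests"] - 2 * person["years_employed"]

print(score_a, score_b)  # 14 vs. -2: same person, opposite risk conclusions
```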
Q: Assumption 3: The primacy of place — what’s the issue here?
A: Not all crimes are tied to place equally (fraud and domestic violence, for instance, are not street-visible), yet location-based algorithms concentrate attention on particular neighborhoods, producing over-policing there.
Q: Assumption 4: More policing is the solution — what’s the problem?
A: Over-policing can itself harm communities, while long-term, non-police strategies (like social services) may better address the root causes of crime.
Q: Assumption 5: Technology increases accountability — is this true?
A: Not always. Many algorithms are privately owned and protected as trade secrets, making them difficult to audit or investigate independently.
Q: What justice-related concerns are raised by algorithmic policing?
A: Reinforces bias and targets “usual suspects”
Masks discrimination with a veneer of objectivity
Undermines democratic accountability
Encourages privatization and lack of transparency
Q: How can algorithmic policing reproduce inequality?
A: Because they learn from biased data, algorithms may direct policing disproportionately toward certain racialized groups, neighborhoods, or classes, entrenching structural discrimination.