Busuioc (2021) – Accountable artificial intelligence: Holding algorithms to account Flashcards
Abstract
Artificial intelligence (AI) algorithms govern in subtle yet fundamental ways the way we live and are transforming our societies. The promise of efficient, low-cost, or "neutral" solutions harnessing the potential of big data has led public bodies to adopt algorithmic systems in the provision of public services. As AI algorithms have permeated high-stakes aspects of our public existence—from hiring and education decisions to the governmental use of enforcement powers (policing) or liberty-restricting decisions (bail and sentencing)—this necessarily raises important accountability questions: What accountability challenges do AI algorithmic systems bring with them, and how can we safeguard accountability in algorithmic decision-making? Drawing on a decidedly public administration perspective, and given the current challenges that have thus far become manifest in the field, we critically reflect on and map out in a conceptually guided manner the implications of these systems, and the limitations they pose, for public accountability.
Facial recognition algorithms have persistently been found to display much higher error rates for minorities, which potentially leads to false arrests and discrimination against already marginalized groups when used in policing
The rise of algorithms and the need for countervailing checks
- Deep learning/neural networks: technological advances brought on by data, computation, and the growing power of machine pattern recognition, relying on a range of methods
- AI algorithms have been revealed to reproduce historical biases hidden within their training data
- Predictive systems used by police departments in several jurisdictions are likely built on discriminatory and unlawful historical police practices
- Dirty data: data distorted by systematic police underreporting or logging of specific types of crimes, or by systemic over-policing of certain areas or of minorities
- AI algorithms have also been found to get caught in negative feedback loops that are difficult to spot, break out of, or self-correct
- E.g., when an algorithm wrongly predicts a particular area as "high crime", the resulting enhanced police presence will result in more arrests in that area, which become the algorithm's new training data, reconfirming and reinforcing its earlier predictions
- Algorithmic power: the fact that authority is increasingly expressed algorithmically
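The feedback loop described above can be sketched as a toy simulation (all numbers and the patrol-allocation rule here are hypothetical, not from the article): two areas share the same true crime rate, but the recorded-arrest data initially over-represents area A; patrols are allocated from the recorded data, and new arrests feed back into it, so the initial distortion persists rather than washing out.

```python
import random

random.seed(0)

TRUE_CRIME_RATE = 0.1          # identical underlying crime rate in both areas
areas = {"A": 60, "B": 40}     # initial "training data": A over-represented (dirty data)

for step in range(20):
    total = sum(areas.values())
    # Patrols are allocated proportionally to each area's share of recorded arrests
    patrols = {a: round(100 * n / total) for a, n in areas.items()}
    for a, p in patrols.items():
        # More patrols -> more recorded arrests, even though true rates are equal
        arrests = sum(1 for _ in range(p) if random.random() < TRUE_CRIME_RATE)
        areas[a] += arrests    # arrests become the algorithm's new training data

share_a = areas["A"] / sum(areas.values())
print(f"Area A share of recorded arrests after 20 steps: {share_a:.2f}")
```

Because patrols follow the recorded data and new records follow patrols, area A keeps the inflated share it started with despite both areas being equally "criminal" in reality.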
Dirty data
Data distorted by systematic police underreporting or logging of specific types of crimes, or by systemic over-policing of certain areas or of minorities
Algorithmic power
The fact that authority is increasingly expressed algorithmically
Accountability
Refers to being answerable to somebody else, being obligated to explain and justify action and inaction.
The exercise of power is said to demand accountability
Three phases of the accountability process:
1. Information
2. Explanation/justification
3. Consequences
- Information: transparency of agent conduct or performance, affording insight into actor decisions and (in)actions
- For meaningful accountability to have taken place, and to the extent that failings have been identified, there should be a possibility to extract consequences
- Negative judgment can result in the imposition of sanctions and the need for actors to undertake remedial actions (or rectification) to address failures and afford redress to those adversely affected
AI algorithmic design
Any set of rules implemented in sequence to reach a particular outcome
- These algorithms are at the heart of our investigation in that such algorithms learn the
rules that govern their behavior on their own
* These algorithms are increasingly relied upon in public and private decision-making
- Algorithmic decision-making: the use of algorithms as an aid or as a substitute to human analysis, to make or inform (and improve the quality of) decisions or actions
- It can be fully automated, or it can be mixed (entailing a human decision-maker or reviewer in the loop)
- Most algorithmic systems in use operate in mixed mode
* AI system providers, and public sector adopters and users, are answerable for the operation and the implications of the algorithmic systems they respectively create, and purchase and deploy
Two levels in mixed algorithmic decision-making
- The AI algorithmic output, recommendation, or decision (and the algorithmic processes through which these were arrived at)
- The subsequent interaction between the algorithmic output and the human decision-maker
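A hypothetical sketch of these two levels (the names, threshold, and override rule are illustrative assumptions, not from the article): level 1 is the algorithmic output itself, level 2 is the human decision-maker's interaction with it, and accountability questions can arise at either level.

```python
from dataclasses import dataclass

@dataclass
class AlgorithmicOutput:
    """Level 1: the algorithmic recommendation and what produced it."""
    recommendation: str
    score: float        # e.g. a risk score emitted by the model
    rationale: str      # the explanation an accountability forum would need

def human_review(output: AlgorithmicOutput, defer_threshold: float = 0.9) -> str:
    """Level 2: the human reviewer interacting with the output.

    Accountability attaches at both levels: was the score itself sound
    (level 1), and did the human defer to it appropriately (level 2)?
    """
    if output.score >= defer_threshold:
        # Human defers to a high-confidence output (risk of rubber-stamping)
        return output.recommendation
    return "escalate for full human assessment"

decision = human_review(AlgorithmicOutput("deny bail", 0.95, "high prior-arrest count"))
print(decision)
```

The design point is that a failure can originate in either layer: a flawed score that the human faithfully follows, or a sound score that the human misuses, and diagnosing which happened is part of the accountability problem.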
Black boxes
Systems that hide their internal logic from the user
Secrets
Even if an algorithm's features and operations are understandable, as with simple AI algorithms like decision trees, these can still be kept secret for proprietary reasons
- Trade secrets are a core impediment to understanding automated authority like algorithms
Conclusion
The three most important problems of AI algorithms:
1. Informational problems
2. Absence of adequate explanation or justification of algorithm functioning
3. Ensuing difficulties with diagnosing failure and securing redress
- Transparency is a necessary but insufficient condition for accountability
- Transparency of model design in the case of complex AI models, whose features are often opaque and escape interpretability, will fall short of providing adequate understanding of algorithmic decision-making
- Issues of AI transparency, bias, fairness, and accountability are not purely technical (machine learning) questions but require serious engagement from our discipline and a broader public administration lens
- Accountability challenges raised by the adoption of AI tools within government are inextricably linked to broader questions of bureaucratic legitimacy
- Public sector use of AI tools, where the stakes can be the likes of liberty deprivation, use of force, and welfare or healthcare denial, requires explanation of the rationale of individual decision-making
- Is it necessary to use black-box models in the public sector?
- It seems necessary that in areas where decisions have high-stakes (individual-level) implications, algorithms can be neither secret (proprietary) nor uninterpretable