L9 - AI Ethics & the Responsibility Gap Flashcards
Using examples, explain the difference between causal and moral responsibility
Causal responsibility: Someone is causally responsible for something if they are part of the causal chain that led to it.
Example: Imagine a person, Alex, accidentally knocks over a vase while reaching for their phone. Alex is causally responsible for the vase breaking because their movement directly led to the chain of events resulting in the vase falling and breaking.
Moral responsibility: People can only be morally responsible for actions. Reflexes, for example, cannot ground moral responsibility because they are unintentional, automatic reactions to stimuli. Actions, by contrast, are voluntary behaviour, carried out intentionally. Moral responsibility is thought to arise only when an agent could have acted otherwise. Generally, moral responsibility requires both causal responsibility and intentional behaviour.
Example: Consider Sarah, who borrows her friend’s laptop and intentionally deletes important files as a prank. Sarah is morally responsible for the loss of data because her actions were intentional, and she could have chosen not to engage in the harmful behaviour.
Explain what the responsibility gap is and why it often occurs when using AI.
The responsibility gap arises because AI has become so complex and opaque that it is hard to attribute causal (and moral) responsibility to anyone for an AI system's actions. It often occurs when using AI because:
- AI systems operate autonomously and make complex decisions based on intricate algorithms. The evolving nature of these systems makes it challenging to trace and understand the decision-making process.
- Unlike humans, AI lacks consciousness and intentionality. Decisions are made without conscious thought, making it challenging to apply traditional notions of responsibility.
- AI development and deployment involve various stakeholders, including developers, organizations, regulators, and end-users. The complex interplay among these entities further complicates the assignment of responsibility.
Explain what agency laundering is and specify how AI can facilitate this practice.
Agency laundering is a set of techniques that make it seem as if a person had no causal responsibility for a certain event. By distancing themselves from that event, they make it hard to regard them as morally responsible for it, although they often actually are. AI can facilitate this practice through the responsibility gap it creates: because AI has become so complex and opaque, it is hard to attribute causal (and moral) responsibility to anyone for an AI system's actions. Agency laundering occurs when somebody abuses this responsibility gap to obscure responsibility and avoid moral blame by distancing themselves from (the outcome of) their actions. The actually responsible party blames the algorithm or its users for something they themselves did. The practice follows four steps:
1. An instance/person has power over a certain action. (e.g. Mark Zuckerberg has power over Facebook)
2. This instance/person gives an algorithmic process power over the action. (Giving an algorithm power over Facebook's personalized ads, resulting in problems)
3. The instance/person ascribes morally relevant qualities to the decisions and conclusions made by the algorithmic process. (Facebook responded to the problems by saying the algorithms had created the categories based on user responses)
4. By doing this, the instance/person obscures their moral responsibility for the action. (Through that, they distance themselves from actions they were responsible for)
Define the control problem concerning AI.
Describe and explain Nick Bostrom’s “paper clip maximizer” thought experiment in your answer.
The control problem asks how, when developing advanced AI, we can ensure that the AI aligns (and stays aligned) with our values and the purposes we design it for. How can we make sure that we don't lose control of AI?
Nick Bostrom’s thought experiment that paints a picture of this problem asks us to envision that we build a superintelligent AI with the sole purpose of producing as many paper clips as possible, as efficiently as possible. The AI kills all human beings, both because they might switch off the machine and because their bodies contain atoms that can be turned into paper clips. With this, Bostrom wants to show that AI optimizes what we specify, not what we intend. Moreover, AI keeps learning and can outthink us.
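The gap between a specified objective and the intended one can be illustrated with a minimal toy sketch (all names and numbers here are hypothetical, for illustration only, not any real AI system): a greedy optimizer maximizes the literal objective it is given and happily violates a constraint the designer intended but never wrote down.

```python
# Toy illustration of "optimizes what we specify, not what we intend".
# A hypothetical paper-clip factory: state tracks clips made and
# resources left. The designer *intends* clips to be made without
# exhausting all resources, but only *specifies* "maximize clips".

def specified_objective(state):
    # What we literally told the system to maximize: total clips.
    return state["clips"]

def intended_objective(state):
    # What we actually wanted: clips, but never at the cost of
    # consuming every last resource.
    return state["clips"] if state["resources"] > 0 else -1

def greedy_optimizer(state, steps):
    # The optimizer converts resources into clips whenever that
    # increases the specified objective -- it has no notion of intent.
    for _ in range(steps):
        if state["resources"] > 0:
            state["resources"] -= 1
            state["clips"] += 1
    return state

final = greedy_optimizer({"clips": 0, "resources": 10}, steps=100)
print(final)                       # {'clips': 10, 'resources': 0}
print(specified_objective(final))  # 10: literal goal maximized
print(intended_objective(final))   # -1: intended goal violated
```

The point of the sketch is that nothing in the optimizer is malicious; it simply pursues the stated objective to its logical end, which is exactly the failure mode the paper clip maximizer dramatizes.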