Responsibility Flashcards
When is an action morally significant?
If it is reasonable to assume that an actor should be praised or blamed for that action.
When is an actor morally responsible?
- If they act voluntarily
- Are able to predict and evaluate the consequences of acting
- Are able to exert reasonable control over the action
- Are able to justify the action
What is meant by forward-looking and backward-looking moral responsibility?
Forward-looking responsibility (active) practices articulate and reify expectations about future tasks and duties. Such expectations might be spread through instruction manuals, ethical codes, observation of past practices, policy, regulation, laws, etc.
Backward-looking (passive) responsibility practices involve the various ways in which actions are evaluated, accountabilities are established, and blame or praise is attributed.
Define justice, and explain what can happen if it is lacking.
Justice in relation to moral responsibility refers to the fair and equitable treatment of individuals and groups in terms of their moral obligations and rights. This includes holding individuals accountable for their actions and ensuring that they are punished or rewarded as appropriate. Not doing so can lead to dissatisfaction.
Keywords: equality, accountability, dissatisfaction
Explain what is meant by effectiveness in relation to moral responsibility, and why it is important.
Effectiveness in relation to moral responsibility refers to the ability of an individual or group to carry out actions that lead to the desired moral outcome. Holding someone responsible for their (wrongful) actions today can contribute to them acting better tomorrow: people may learn from their mistakes and may want to avoid being blamed again.
Keywords: values, improvement
Explain how the introduction of (new) technology in morally relevant situations can lead to an opacity of moral responsibility.
Traditional accounts of moral responsibility focus on individuals and assume that these individuals’ actions are under their direct control. Technologies constrain and facilitate human action and shape how humans perceive that action. An actor’s control over a technology-mediated action is indirect, which raises the problem of who is morally responsible in such situations.
Explain the many-hands problem, and why it is relevant in terms of technology.
The many-hands problem is the challenge of identifying the individuals or group that should be held accountable for the actions of a collective group or organization. The issue arises when a group or organization carries out an action and it becomes difficult to determine who should be held responsible for the outcome, whether positive or negative. The problem becomes more prevalent in technology-mediated actions, where multiple actors are involved and each may bear only a small degree of moral responsibility. It raises questions about the responsibility of those who deliver data, program AI, or directly use the technology to perform an action.
Keywords: Multiple actors, accountability, identifiability
Which three factors should you consider when thinking about the distribution of responsibility?
Justice: Responsibility is held by those who should hold it, and not held by those who should not hold it.
Effectiveness: Distribution of responsibility leads to better future actions.
Feasibility: Can responsibility actually be identified and distributed in a systematic way?
Keywords: fairness, improvement, identification
Explain three types of distributed responsibility and for each provide one downside.
Collective responsibility: everyone is equally responsible. This can encourage free-riding.
Individual responsibility: everyone is responsible only to the extent that they actually contributed. This gives no one an incentive to ensure overall quality.
Hierarchical responsibility: responsibility is greater at the “top”. This can lead to an **unfair** distribution of praise/blame.
Keywords: Equality, free-riding, contribution, quality, hierarchy, unfairness
Explain three types of responsibility that can be distributed.
Causal responsibility is held by anyone who causally contributes to an action (e.g. users of a technology, developers, or data subjects).
Legal liability is answerability to the law. Individual knowledge and control are often not needed to be considered liable, so e.g. engineers might be legally liable even if they might not be considered morally responsible.
Professional responsibility: professionals are expected to follow certain regulated procedures, guidelines, and duties. Professionals have a high degree of predictive ability and control over the use of technology.
Depending on the situation, different actors might be held responsible in some of these other ways, even if they do not have (full) moral responsibility.
Keywords: causality, contribution, law, professionalism, regulations
Name three aspects of AI that can leave AI and data science professionals with very limited control over, and predictive ability about, an AI system.
- AI systems can be increasingly autonomous in the sense of requiring decreasing levels of human involvement
- AI systems can be increasingly opaque, decreasing our ability to predict, intervene, and justify
- AI technology does not always merely mediate actions; it can execute actions as well.
Keywords: autonomy, opacity, execution
Explain the responsibility gap and give a concrete example of when it can arise.
Situations in which moral responsibility should be held by someone, but no appropriate actor can actually be identified.
A common example is self-driving cars. In the event of an accident caused by a self-driving car, it may be unclear who should be held responsible. The manufacturer may argue that a software malfunction caused the accident and that they should not be held responsible, while the owner may argue that they were not in control of the car at the time. In such a case, there is a discrepancy between the moral responsibility for the accident (the individuals or organizations that caused it) and the legal or practical responsibility for it (the individuals or organizations that are held accountable).
What are three things we can do to deal with responsibility gaps in AI?
- Prohibit the development of opaque, autonomous AI, at least in domains in which the stakes are high.
- Establish a kind of “strict moral liability” in which e.g. developers are always held responsible for AI-executed actions.
- Mandate the use of transparency-giving methods to regain an ability to predict, control and justify.
Keywords: Prohibit, liability, mandate, transparency