Applying Ethics Flashcards
Connecting facts and value judgments
- Needs to provide a premise that bridges the two
- Naturalistic fallacy: if something is natural, it must be good
Naturalistic fallacy: if something is natural, it must be good
- Moves from “is” to “ought” without justification
Argument from analogy
- Draws a comparison between two situations or cases, and argues you should react to them in the same way
- Suggests consistency in moral judgments is important
- Helps to come up with answers to new ethical situations based on familiar ones
- Often relies on intuitions
Causal claims and moral responsibility
- Applying ethics often focuses on causal claims and moral responsibility: who caused an outcome, and who is to blame for it
Three Categories of Concerns
1. Ethical obligations of society as a whole that result from the creation of AI
2. Ethical obligations people have when they design and create artificial intelligences
3. Ethical obligations people have to the artificial intelligences they design and create
Ethical obligations of society as a whole that result from the creation of AI
- What should be done with what is produced, and who should get the benefits and harms
Ethical obligations people have when they design and create artificial intelligences
- Ensuring such machines don’t harm humans and other morally relevant beings
Ethical obligations people have to the artificial intelligences they design and create
- Determining the moral status of the machines themselves
Domain Specific vs. AGI
- Domain Restricted AIs: the concern is mainly with safety
- AGIs: wider variety of moral concerns and ethical problems
Domain Restricted AIs: the concern is mainly with safety
- They have one task, and the idea is to do that task well
- The focus is mainly on the ethical obligations people have when they design and create artificial intelligences
AGIs: wider variety of moral concerns and ethical problems
- The problems are often novel and qualitatively different, as well as unpredictable and numerous
- They are like humans in some ways, raising some of the same ethical concerns, but are completely unlike humans in others
- They raise questions about the ethical obligations people have directly to the artificial intelligences they design and create
Concerns About Domain Specific AIs
- Transparency
- Example: a bank using a machine learning algorithm to recommend mortgage applications for approval rejects most applications from Black applicants
- Predictability
- Example: Analogy to stare decisis in legal decisions, following past precedent whenever possible
- Minimizing Manipulation
- Example: preventing people from “fooling” the AI and exploiting its weaknesses, such as to smuggle a bomb through airline luggage screening
- Moral Responsibility
- Who is to blame for accidents, failures to fulfill the purpose of the AI, etc., is difficult to determine
Example: a bank using a machine learning algorithm to recommend mortgage applications for approval rejects most applications from Black applicants
- Depending on how transparent the design of the machine learning algorithm is, it will be more or less difficult to discover why this result has occurred
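A minimal sketch of the kind of output audit this example invites, using a made-up stand-in model and toy data (nothing here reflects a real bank's system): auditing inputs and outputs can show *that* approval rates diverge by race, but explaining *why* requires transparency into how the algorithm was designed and trained.

```python
# Minimal sketch of a disparate-impact audit on a black-box model.
# The model, features, and applicant data are illustrative assumptions,
# not any real bank's system.

def opaque_model(app):
    """Stand-in for a black-box ML recommender whose internals we cannot inspect."""
    # Hypothetical learned rule; in practice we would only see inputs and outputs.
    return "approve" if app["credit_score"] >= 700 and app["income"] >= 60_000 else "reject"

applications = [
    {"income": 85_000, "credit_score": 690, "race": "Black"},
    {"income": 55_000, "credit_score": 720, "race": "Black"},
    {"income": 90_000, "credit_score": 710, "race": "White"},
    {"income": 65_000, "credit_score": 705, "race": "White"},
]

def approval_rate(group):
    """Share of a group's applications the model recommends approving."""
    subset = [a for a in applications if a["race"] == group]
    approved = sum(opaque_model(a) == "approve" for a in subset)
    return approved / len(subset)

# Auditing outputs shows *that* a disparity exists...
print(f"White approval rate: {approval_rate('White'):.0%}")
print(f"Black approval rate: {approval_rate('Black'):.0%}")
# ...but *why* it exists stays hidden unless the algorithm's design is transparent.
```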
Example: Analogy to stare decisis in legal decisions, following past precedent whenever possible
- When an AI takes on cognitive work with social dimensions, it inherits social requirements, such as needing to be predictable in order for individuals to optimize their lives within a context
Artificial General Intelligence and the related challenges: Ethical Challenge 1
- How do we address the potential safety and ethical problems that come from AGI?
Artificial General Intelligence: Solution 1
- Build general requirements and rules into the AI programming
- For example: good behaviour is X, such that the consequence of X is not harmful to humans
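A minimal sketch of how such a rule might be hard-coded, with hypothetical `predict_consequences` and `harms_humans` helpers standing in for the AI's own forecasting and for a hand-written harm criterion; it also previews the objection on the next card, since an action whose consequences the AI fails to foresee passes the check by default.

```python
# Hypothetical hard-coded rule: action X is permitted only if the predicted
# consequences of X are not harmful to humans. Both helpers below are
# illustrative stand-ins, not a real safety framework.

def predict_consequences(action):
    """The AI's own forecast of what the action will cause. The safety rule
    silently inherits any gaps or errors in this model."""
    known = {
        "dispense_medication": ["patient receives treatment"],
        "divert_power": ["hospital loses electricity"],
    }
    return known.get(action, [])  # unforeseen consequences never show up here

def harms_humans(consequence):
    return "loses electricity" in consequence  # crude, hand-written harm criterion

def permitted(action):
    return not any(harms_humans(c) for c in predict_consequences(action))

print(permitted("dispense_medication"))  # True
print(permitted("divert_power"))         # False
print(permitted("novel_action"))         # True -- no predicted consequences, so it passes by default
```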
Artificial General Intelligence: Objection
- This kind of rule only works if the AI explicitly identifies and understands the possible consequences of its behaviour, and does so in the way we expect it to
- Implicitly consequentialist
Artificial General Intelligence: Ethical Challenge 2
Do AIs have moral status?
Do AIs have moral status?
- It depends on what capacity we are using as the requirement for moral status
a) Rationality (sapience)
b) Ability to feel pleasure and pain (sentience)
c) Ability to achieve virtue and flourishing
d) Ability to build relationships and feel emotions
How Utilitarianism, Kantianism, Virtue Ethics, and Care ethics could apply to AI: Kantianism
- Kantianism argues that moral status is based on having the ability to reason.
How Utilitarianism, Kantianism, Virtue Ethics, and Care ethics could apply to AI: Utilitarianism
- Utilitarianism argues that moral status is based on having the ability to feel pleasure and pain.
How Utilitarianism, Kantianism, Virtue Ethics, and Care ethics could apply to AI: Virtue Ethics
- Virtue Ethics argues that moral status is based on the ability to flourish (achieve eudaimonia).
How Utilitarianism, Kantianism, Virtue Ethics, and Care ethics could apply to AI: Care Ethics
- Care Ethics argues that moral status is based (at least partially) on the ability to have emotions and develop relationships.