Applying Ethics Flashcards
1
Q
Connecting facts and value judgments
A
- One needs to provide a premise that bridges the two
- Naturalistic fallacy: if something is natural, it must be good
2
Q
Naturalistic fallacy: if something is natural, it must be good
A
- Moves from “is” to “ought” without justification
3
Q
Argument from analogy
A
- Draws a comparison between two situations or cases, and argues you should react to them in the same way
- Suggests consistency in moral judgments is important
- Helps to come up with answers to new ethical situations based on familiar ones
- Often relies on intuitions
4
Q
Causal claims and moral responsibility
A
- Important tools for applying ethics often focus on causal claims and moral responsibility
5
Q
Three Categories of Concerns
A
1. Ethical obligations of society as a whole that result from the creation of AI
2. Ethical obligations people have when they design and create artificial intelligences
3. Ethical obligations people have to the artificial intelligences they design and create
6
Q
Ethical obligations of society as a whole that result from the creation of AI
A
- What should be done with what is produced, and who should receive the benefits and bear the harms
7
Q
Ethical obligations people have when they design and create artificial intelligences
A
- Ensuring such machines don’t harm humans and other morally relevant beings
8
Q
Ethical obligations people have to the artificial intelligences they design and create
A
- Determining the moral status of the machines themselves
9
Q
Domain Specific vs. AGI
A
- Domain Restricted AIs: the concern is mainly with safety
- AGIs: wider variety of moral concerns and ethical problems
10
Q
Domain Restricted AIs: the concern is mainly with safety
A
- They have one task; the idea is to do that task well
- The focus is mainly on the ethical obligations people have when they design and create artificial intelligences
11
Q
AGIs: wider variety of moral concerns and ethical problems
A
- The problems are often novel and qualitatively different, as well as unpredictable and numerous
- AGIs are like humans in some respects, raising some of the same ethical concerns, but are also completely unlike humans in others
- They raise questions about the ethical obligations people have directly to the artificial intelligences they design and create
12
Q
Concerns About Domain Specific AIs
A
- Transparency
- Predictability
- Minimizing Manipulation
- Moral Responsibility
13
Q
Transparency
A
- Example: a bank uses a machine learning algorithm to recommend mortgage applications for approval, and it rejects most applications from Black applicants
14
Q
Predictability
A
- Example: Analogy to stare decisis in legal decisions, following past precedent whenever possible
15
Q
Minimizing Manipulation
A
- Example: preventing people from “fooling” the AI and exploiting its weaknesses, such as to smuggle a bomb through airline luggage screening