Applying Ethics Flashcards

1
Q

Connecting facts and value judgments

A
  • Needs to provide a premise that bridges the two
  • Naturalistic fallacy: if something is natural, it must be good
2
Q

Naturalistic fallacy: if something is natural, it must be good

A
  • Moves from “is” to “ought” without justification
3
Q

Argument from analogy

A
  • Draws a comparison between two situations or cases, and argues you should react to them in the same way
  • Suggests consistency in moral judgments is important
  • Helps to come up with answers to new ethical situations based on familiar ones
  • Often relies on intuitions
4
Q

Causal claims and moral responsibility

A
  • Important tools for applying ethics often focus on causal claims and the moral responsibility they ground
5
Q

Three Categories of Concerns

A

  1. Ethical obligations of society as a whole that result from the creation of AI
  2. Ethical obligations people have when they design and create artificial intelligences
  3. Ethical obligations people have to the artificial intelligences they design and create
6
Q

Ethical obligations of society as a whole that result from the creation of AI

A
  • What should be done with what is produced, and who should get the benefits and bear the harms
7
Q
Ethical obligations people have when they design and create artificial intelligences
A
  • Ensuring such machines don’t harm humans and other morally relevant beings
8
Q
Ethical obligations people have to the artificial intelligences they design and create
A
  • Determining the moral status of the machines themselves
9
Q

Domain Specific vs. AGI

A
  • Domain Restricted AIs: the concern is mainly with safety
  • AGIs: wider variety of moral concerns and ethical problems
10
Q

Domain Restricted AIs: the concern is mainly with safety

A
  • They have one task; the idea is to do that task well
  • The focus is mainly on the ethical obligations people have when they design and create artificial intelligences
11
Q

AGIs: wider variety of moral concerns and ethical problems

A
  • Often novel and qualitatively different, unpredictable and numerous
  • They are like humans in some ways, raising some of the same ethical concerns, but completely unlike humans in others
  • Raises questions about the ethical obligations people have directly to the artificial intelligences they design and create
12
Q

Concerns About Domain Specific AIs

A
  1. Transparency
  2. Predictability
  3. Minimizing Manipulation
  4. Moral Responsibility
13
Q
Transparency
A
  • Example: a bank using a machine learning algorithm to recommend mortgage applications for approval rejects most applications from Black applicants
14
Q
Predictability
A
  • Example: Analogy to stare decisis in legal decisions, following past precedent whenever possible
15
Q
Minimizing Manipulation
A
  • Example: preventing people from being able to “fool” the AI and exploit its weaknesses, such as to smuggle a bomb through airline luggage screening
16
Q
Moral Responsibility
A
  • Who is to blame for accidents, failures to fulfill the purpose of the AI, etc., is difficult to determine
17
Q

Example: a bank using a machine learning algorithm to recommend mortgage applications for approval rejects most applications from Black applicants

A
  • Depending on how transparent the method used to design the machine learning algorithm is, it will be more or less difficult to discover why this result has occurred
18
Q

Example: Analogy to stare decisis in legal decisions, following past precedent whenever possible

A
  • When an AI takes on cognitive work with social dimensions, it inherits social requirements, such as needing to be predictable in order for individuals to optimize their lives within a context
19
Q

Artificial General Intelligence and the related challenges: Ethical Challenge 1

A
  • How to address the potential safety and ethical problems that come from AGI
20
Q

Artificial General Intelligence: Solution 1

A
  • Build general requirements and rules into the AI programming
  • For example: good behaviour is X, such that the consequence of X is not harmful to humans
21
Q

Artificial General Intelligence: Objection

A
  • This kind of rule only works if the AI explicitly identifies and understands the possible consequences of its behaviour, and does so in the way we expect it to
  • Implicitly consequentialist
22
Q

Artificial General Intelligence: Ethical Challenge 2

A

Do AIs have moral status?

23
Q

Do AIs have moral status?

A
  • It depends on what capacity we are using as the requirement for moral status
24
Q
It depends on what capacity we are using as the requirement for moral status
A

a) Rationality (sapience)

b) Ability to feel pleasure and pain (sentience)

c) Ability to achieve virtue and flourishing

d) Ability to build relationships and feel emotions

25
Q

How Utilitarianism, Kantianism, Virtue Ethics, and Care ethics could apply to AI: Kantianism

A
  • Kantianism argues that moral status is based in having the ability to reason.
26
Q

How Utilitarianism, Kantianism, Virtue Ethics, and Care ethics could apply to AI: Utilitarianism

A

Utilitarianism argues that moral status is based in having the ability to feel pleasure and pain.

27
Q

How Utilitarianism, Kantianism, Virtue Ethics, and Care ethics could apply to AI: Virtue Ethics

A

Virtue Ethics argues that moral status is based on the ability to flourish (achieve eudaimonia).

28
Q

How Utilitarianism, Kantianism, Virtue Ethics, and Care ethics could apply to AI: Care Ethics

A

Care Ethics argues that moral status is based (at least partially) in the ability to have emotions and develop relationships.