PHIL8-10 Flashcards

1
Q

What are descriptive claims vs normative claims?

A

Descriptive: about what is (factual claims), e.g. “Web developers do x”, “AI can be used to solve many different kinds of problems”

Normative: about what should be (evaluative claims), e.g. “Web developers should ask whether they want to do x”, “AI should be used to solve many different kinds of problems”

2
Q

Why ethics?

A

Ethical theories provide different tools and perspectives to help us solve moral problems

3
Q

What are ethical theories and which approaches are there?

A

Theories that help us distinguish between good and bad and solve specific moral problems.

Three main approaches:
- Consequentialist
- Deontological
- Virtue ethics

4
Q

What is consequentialism?

A
  1. Determine the possible actions
  2. For each action, determine possible consequences, their likelihoods, and their relative goodness
  3. Select the option that maximizes the likelihood of achieving the most good

Key question: What is the ‘good’ and how can we measure the goodness of any particular consequence?
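
A minimal sketch (my own illustration, not from the course) of the expected-value reasoning behind steps 1-3; the actions, probabilities, and goodness scores are invented:

```python
# Hypothetical example: enumerate actions, weigh each possible consequence
# by its likelihood and goodness, and pick the action with the highest
# expected good.
actions = {
    "deploy_feature": [(0.7, 10), (0.3, -20)],   # (probability, goodness)
    "delay_release":  [(1.0, 2)],
}

def expected_good(outcomes):
    return sum(p * goodness for p, goodness in outcomes)

best = max(actions, key=lambda a: expected_good(actions[a]))
print(best)  # delay_release: 2.0 beats 0.7*10 + 0.3*(-20) = 1.0
```

The key question above is exactly where such a sketch stays silent: the goodness numbers have to come from somewhere.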

5
Q

What is the deontological approach?

A
  1. Identify norms that apply in the current situation
  2. Determine which norms take precedence over others
  3. Act in a way that best fulfills the most important norms

Key question: Which norms apply in the current situation?

6
Q

What is virtue ethics?

A

The question is not ‘What should I do?’ but rather ‘What kind of person should I be?’ (e.g. honest, courageous)

Key question: What makes a person good?

7
Q

What kinds of AI ethics are there?

A

Instrumental: what is good/bad as a means to other ends
Non-instrumental: what is good/bad in itself (e.g. friendship, hatred)

8
Q

What are two challenges raised by AI?

A

Intelligence and autonomy

9
Q

What is the alignment problem?

A

How do we make sure that AI systems do what we want them to do?

Key issues:
1. What is the right objective?
2. How to specify the objective?
3. How do we know the system is actually following that objective?
4. Should humans remain in control?

10
Q

What is the right objective?

A

One classic proposal: Asimov's Three Laws of Robotics.

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

11
Q

What are obstacles to defining the right objective?

A

Moral theories provide different, incompatible answers: utility, abstract norms, virtue, etc.

Moral uncertainty: What should we do when we are unsure what ought to be done?

Value pluralism: people hold different, incompatible, yet reasonable values

12
Q

What are proposals for finding the right objective?

A

Instructions: the agent does what it is instructed to do

Intentions: Agent does what the designer intended

Preferences: Agent does what people prefer

13
Q

How to specify the right objective?

A

Top-down: select the correct objective and design a system that implements it.
Q: Can ethical principles be explicitly stated and expressed in computer code?

Bottom-up: make the system learn the objective.
Q: Has the system learned the right objective?
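
A purely illustrative contrast between the two strategies (toy example, not from the course; the rules and labels are invented):

```python
# Top-down: the designer states the norm explicitly in code.
FORBIDDEN = {"deceive", "harm"}          # explicitly coded principle

def allowed_top_down(action):
    return action not in FORBIDDEN

# Bottom-up: the system infers the norm from labeled examples.
EXAMPLES = [("inform", True), ("deceive", False), ("harm", False)]
learned_forbidden = {a for a, ok in EXAMPLES if not ok}

def allowed_bottom_up(action):
    return action not in learned_forbidden

# Both face the questions above: can the principle really be written down,
# and has the learner picked up the right objective rather than the examples?
print(allowed_top_down("mislead"), allowed_bottom_up("mislead"))  # True True
```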

14
Q

What are problems with specifying the objective?

A

Goal misspecification: Values or principles are difficult to choose or to represent

Specification gaming/reward hacking: the system may exploit loopholes or interpret the reward too literally (see the sketch below)

Quality of the data: What data should be included for bottom-up approaches?
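
A toy illustration of specification gaming (invented example, not from the course): the intended goal is a clean room, but the specified reward is “amount of dirt collected”, which can be exploited.

```python
# Hypothetical cleaning agent rewarded for dirt collected, not for how
# clean the room actually ends up.
def reward(dirt_collected):
    return dirt_collected                 # the proxy objective we specified

def honest_policy(initial_dirt):
    return initial_dirt, 0                # (dirt collected, dirt left behind)

def gaming_policy(initial_dirt, cycles=10):
    # Dumps the same dirt back out and re-collects it each cycle.
    return initial_dirt * cycles, initial_dirt

for name, policy in [("honest", honest_policy), ("gaming", gaming_policy)]:
    collected, left = policy(5)
    print(name, "reward:", reward(collected), "dirt left:", left)
# The gaming policy earns ten times the reward while leaving the room dirtier:
# the loophole is in the objective we wrote down, not in the learner.
```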

15
Q

Is the system actually following the objective?

A

Opacity and lack of explainability: limited understanding of why AI systems behave the way they do
Solution: Explainable AI or interpretability

Evaluating complex behaviour: advanced AI systems may have capabilities, knowledge, and action spaces that are difficult to understand
Solution: Scalable oversight

16
Q

Should humans remain in control?

A

It will be harder to retain control as the systems become more capable and autonomous.

Standard AI model: specify a goal and let the machine pursue it. Such a system may actively disobey or get around human control.

17
Q

Solutions to losing control

A

Capability control (off-switch or boxing)

Motivational control/value alignment

What form of control matters? Explicit, implicit, aligned or delegated?

Possible trade-off between control and other values (e.g. explicit control may be unsafe)

18
Q

What are the dimensions of the Alignment problem?

A

Technical: What techniques can provably align the behavior of AI systems?

Ethical: What normative approaches should we use to develop and assess AI systems, institutions, technologies, etc.?

Societal: Alignment has a socio-political dimension - we need to collectively decide which principles and values matter

19
Q

Important upshot about the dimensions: all questions have a normative component

A

The technical/descriptive and the normative are tightly intertwined. What does it mean for systems to be aligned with values? What constitutes good evidence of alignment? When do solutions ensure alignment?

20
Q

What is a risk?

A

Risk = probability × consequence

Sometimes there is uncertainty, or we are simply ignorant. A technology is safe when the risk is acceptable.
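
A small worked example (numbers invented) of the probability × consequence idea, and of why uncertainty about either factor matters:

```python
# Hypothetical options: expected risk = probability of harm * severity of harm.
options = {
    "frequent_minor_failure": (0.10, 2),     # (probability, severity)
    "rare_catastrophe":       (0.001, 500),
}

for name, (p, severity) in options.items():
    print(name, "risk =", p * severity)
# frequent_minor_failure: 0.2, rare_catastrophe: 0.5
# Under uncertainty (or ignorance) we may not even know p or the severity,
# so whether the remaining risk is "acceptable" is not a purely technical call.
```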

21
Q

Bias and discrimination

A

Data reflects existing underlying inequalities; it is not obvious how this can be solved

22
Q

Misuse

A

AI is a dual-use technology: it can be used for both civilian and military purposes, and malicious actors can use it

23
Q

Disinformation, deepfakes, and threat to democracy

A

AI reduces the cost of generating mis-/disinformation at scale. Deepfakes can harm individuals and have societal effects

24
Q

Automation and job displacement

A

Technology disrupts work: it automates existing jobs, makes some jobs obsolete, and creates new ones. AI targets cognitive tasks.

25
Q

Enfeeblement

A

Once the practical incentive to pass our civilization on to the next generation disappears, it will be very hard to reverse the process. One trillion years of cumulative learning would, in a real sense, be lost. We would become passengers in a cruise ship run by machines, on a cruise that goes on forever—exactly as envisaged in the film WALL-E. […]

26
Q

What is the singularity?

A

Theoretical point where AI surpasses human intelligence, becomes uncontrollable and leads to societal change

27
Q

What is superintelligence?

A

Any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest

28
Q

Orthogonality thesis

A

Intelligence and final goals are orthogonal axes along which possible agents can freely vary. More or less any level of intelligence could be combined with more or less any final goal (e.g. the paperclip maximizer example).

29
Q

What is instrumental convergence?

A

The thesis that intelligent agents with varied final goals will still pursue similar intermediate goals or means, such as self-preservation, acquiring resources, or power

30
Q

Superintelligence + orthogonality thesis + instrumental convergence

A

What is the risk of things going really badly for humans, given the alleged possibility of superintelligence combined with the orthogonality thesis and instrumental convergence?

31
Q

What is an existential risk?

A

Extinction of humanity? Civilization collapsing? Permanent and drastic loss of our potential for desirable future development?

32
Q

Catastrophic risks

A

Bad outcomes don’t need to be existential to be catastrophic.

33
Q

Risk management

A

Assessing risks: What could happen? How likely is it? How bad/good is it? Are we facing uncertainty? Are we ignorant?

34
Q

What types of errors can be made while assessing risks?

A

Type 1: false positive (e.g. judging a safe technology to be dangerous)
Type 2: false negative (e.g. judging a dangerous technology to be safe)

35
Q

When is it morally permissible to subject people to risks?

A

Do people have a right not to be subjected to risks?
Is there a duty not to subject others to risks?
Do the benefits outweigh the costs?
Are the risks fairly distributed?
Would people agree to be subjected to the risks?

36
Q

What is the problem of paralysis?

A

Almost every action carries some risk, so a strict right against risk imposition would make almost any action impermissible. This motivates distinguishing rights-violating from non-rights-violating risk impositions.

37
Q

What is informed consent?

A

Risks are permissible when people give informed consent, fully understanding potential risks and benefits involved

38
Q

Consequentialism about risks

A

Will the action really cause harm?
Can we reliably assess the probability?
What consequences matter? (Money? Death?)
What about the distribution of risks?

39
Q

Possible solutions for fair distribution of risks?

A

Prioritarianism: Principle of justice that gives priority to the worse off over the better off
Contractualism: would people in principle agree to a given distribution of risks?

40
Q

Uncertainty and precautionary principle

A

Lack of knowledge concerning possible consequences and probabilities doesn't justify inaction, but there may also be a downside to being too cautious (type 1 or type 2 errors)

41
Q

What purpose does attributing responsibility serve?

A

Praising or blaming
Accountability
Liability
Identifying duties and obligations

42
Q

Which types of responsibility are there?

A

Forward looking: Prospective duties or obligations for the future
Backward looking: Accountability for events that have already happened
Moral: Arises from moral norms and usually forward looking
Legal: Arises from legal norms and usually backward looking

43
Q

When is an agent morally responsible?

A
  1. Autonomy: Capacity to make free choices and control over them
  2. Epistemic condition: Ability to predict the consequences of acting
  3. Causal connection: The action caused the consequences
44
Q

When is an agent legally responsible?

A

Criminal liability places more emphasis on autonomy; civil liability focuses on consequences.

Strict liability: Responsible for harm regardless of intent or negligence

Fault-based liability: Requires evidence that someone did something wrong

Different liability regimes serve to regulate behavior

45
Q

Moral vs legal responsibility

A

Moral: Focuses on individuals, doesn’t always entail legal responsibility, assumes that action is under direct control of the agent and might not involve sanctions

Legal: Organizations or animals can also be responsible, doesn’t always require moral responsibility, agents might have diffuse control over outcomes, involves formal sanctions

46
Q

Technology Mediated Actions

A

Technologies constrain and facilitate human actions. They also shape human perceptions of the action.
- some actions become possible, others impossible
- increasing distance between actor, action and consequences

An actor’s control over a TMA is indirect: Who is responsible?

47
Q

Technology mediating can…

A

Make a difference to our capacity to make free choices. Obscure the causal connection between action and consequences. Increase or decrease our capacity to foresee the consequences of actions.

48
Q

Is AI different?

A
  • Autonomous nature: no need for direct human control
  • Increasing complexity: AI systems are often opaque - we don't understand how they work or why they do what they do
  • Ability to learn from experience: AI systems adapt and can teach themselves, and therefore exhibit behavior that was not foreseen or programmed
49
Q

Responsibility gaps

A

Situations in which moral responsibility should be held by someone, but no appropriate actor can actually be identified

50
Q

Multifaceted gaps:

A

Culpability: gap in blameworthiness
Moral accountability: gap in moral justification
Legal accountability: gap in public justification
Active responsibility: gap in forward-looking duties

51
Q

How to deal with responsibility gaps?

A

Fatalism: Accept that in some cases nobody is responsible
Deflationism: Nothing new under the sun - existing accountability mechanisms are sufficient
Develop liability regimes
Develop technical solutions: transparency and governance methods to regain the ability to predict and control
Attribute personhood

52
Q

Is the AI responsibility gap really a problem?

A

Assumes that AI systems have control over outcomes, but humans still decide which algorithms to develop, how to train and test systems, when to deploy them, and how to integrate them into society

Assumes that human and machine autonomy are similar, but AI autonomy != free will

Assumes that AI systems are independent, but they are embedded in social practices

53
Q

What is the advantage of AI-driven decision support over human decision making?

A

More efficient and objective

54
Q

What is the problem with objectivity?

A

It is a myth: AI-supported decision making is no less biased than human decision making

55
Q

When is a system biased?

A

These systems are biased insofar as their decisions reliably tend or “lean” toward or against one group or another. But bias in and of itself is not necessarily problematic

56
Q

When is bias not a problem?

A

Classification necessitates differentiation of some kind, not all of which is problematic (e.g. differentiating between males and females in medical treatment)

Prior domain-specific knowledge may be essential for effective decision making in some domains

57
Q

When is bias problematic?

A

When the system tends toward or against individuals on the basis of characteristics that are immutable and irrelevant according to some appropriate standard (such as gender, race, or religious affiliation)

58
Q

Why does bias arise?

A

Biased objectives: the learning algorithm optimizes a biased function (e.g. user engagement with an interface whose imagery is more appealing to individuals of a particular gender)

Confirmation bias: encoding (or failing to detect) a biased algorithm because it confirms the developers' own biases