Artificial Intelligence Flashcards

1
Q

May 2016 crash

A

First person killed while travelling in Autopilot mode (Tesla)

- Tesla did not assume responsibility

2
Q

March 2018

A

First pedestrian killed by a self-driving car (Uber test vehicle)

3
Q

Giuseppe Contissa

A

Self-driving cars should be equipped with an ‘ethical knob’
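
The knob proposal is essentially a passenger-chosen weighting between self-protection and protection of others. A minimal sketch, in which the function name, options, and all probabilities are illustrative assumptions rather than Contissa et al.'s actual model:

```python
# Illustrative sketch of an 'ethical knob': a passenger-chosen weight
# between egoistic and altruistic settings. All names and numbers here
# are hypothetical, not taken from Contissa et al.'s paper.

def choose_manoeuvre(options, knob):
    """Pick the manoeuvre with the lowest weighted expected harm.

    options: list of (label, p_harm_passenger, p_harm_others)
    knob:    1.0 = fully egoistic, 0.5 = impartial, 0.0 = fully altruistic
    """
    def weighted_harm(option):
        _, p_self, p_others = option
        return knob * p_self + (1.0 - knob) * p_others

    return min(options, key=weighted_harm)[0]

# A stylised dilemma: swerving endangers the passenger,
# continuing endangers a pedestrian.
options = [
    ("swerve",   0.7, 0.1),
    ("continue", 0.1, 0.8),
]

print(choose_manoeuvre(options, knob=0.0))  # altruistic -> "swerve"
print(choose_manoeuvre(options, knob=1.0))  # egoistic -> "continue"
```

The same mechanism covers Gogoll's objection: mandating one fixed knob value for everyone is the "same ethical settings" position.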

4
Q

Jan Gogoll

A

Everyone’s cars should have the same ethical settings

5
Q

Advantage of giving people a degree of choice for moral settings?

A

Can hold them responsible for the outcomes more easily

6
Q

Utilitarian ethics

A

Maximising overall happiness

7
Q

Kantian ethics

A

Applying a basic set of principles to serve as universal laws

8
Q

Virtue ethics

A

Fully realising a basic set of virtues

9
Q

Contractualist ethics

A

Formulating guidelines people would be willing to adopt

10
Q

Gurney’s theory

A

A computer equipped to make utilitarian calculations might take into account that people prefer cars that save their occupants, so: more people adopt autonomous cars –> overall number of deaths decreases –> overall happiness is maximised
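
The argument is just expected-value arithmetic. The sketch below uses entirely made-up adoption shares and fatality rates to show the shape of the calculation, not real figures:

```python
# Made-up numbers illustrating Gurney's utilitarian point: occupant-
# protective settings boost adoption, and higher adoption of (safer)
# autonomous cars can lower total deaths even though each individual
# car privileges its own occupants.

def expected_deaths(drivers, adoption, human_rate, av_rate):
    """Expected annual deaths for a given share of autonomous cars."""
    return drivers * ((1 - adoption) * human_rate + adoption * av_rate)

DRIVERS = 1_000_000
HUMAN_RATE = 1e-4   # assumed deaths per human-driven car per year
AV_RATE = 2e-5      # assumed deaths per autonomous car per year

# Impartial settings deter buyers -> low adoption.
impartial = expected_deaths(DRIVERS, 0.1, HUMAN_RATE, AV_RATE)
# Occupant-protective settings attract buyers -> high adoption.
protective = expected_deaths(DRIVERS, 0.8, HUMAN_RATE, AV_RATE)

print(impartial, protective)  # higher adoption -> fewer total deaths
```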

11
Q

Hevelke and Nida-Rümelin - why is it unfair to hold drivers responsible, and what should be done instead?

A

Unfair to hold individual drivers responsible:

  • Unfair moral luck
  • Should instead hold users collectively responsible for the risks they introduce as a group into society (NB retribution gap)
12
Q

Can agency be transferred to a car?

A

Mindell: Always supervised by humans to some degree

- Can’t act on beliefs and desires

13
Q

‘Mixed traffic argument’

A

People have a duty to switch to the safer alternative, or use added safety precautions e.g. speed limiters and alcohol locks

14
Q

NZ car regulations (2)

A

LTA s 22: Driver must stop and give assistance
Land Transport Rule: Drivers must not exceed speed limits
- Other offences turn on terms such as 'operate'

15
Q

What is AI? (Matthew Scherer)

A

Machines that are capable of performing tasks that, if performed by a human, would be said to require intelligence

16
Q

Four categories of AI

A

Thinking humanly, acting humanly, thinking rationally, acting rationally

17
Q

Problem with autonomy

A

Reduces human labour

- Forces disruptive changes to the law and legal system

18
Q

Problem with foreseeability

A

Can’t predict the future, especially with machine learning
Humans bound by cognitive limitations - can’t evaluate every option within time constraints, so settle for a satisfactory option
- Might not be able to hold designers liable if they didn’t predict the machine’s actions

19
Q

Research and development

A

Discreetness: can be conducted with limited visible infrastructure
Diffuseness: can work on it from multiple locations
Discreteness: can design without coordination/replicate code
Opacity: inner workings may be secret
+ low cost

20
Q

How to solve discreteness and opacity?

A

Apportion liability and demand publication of the code

21
Q

How to solve diffuseness and discreetness?

A

More difficult.

- Developers are likely to be large, visible corporations, so not a huge worry now, but could be a problem in the future

22
Q

Why should a legislature influence policy?

A

Democratic legitimacy, freedom, resources; but

Lack of expertise and limited time

23
Q

Why should an agency influence policy?

A

Tailor-made to a specific problem, flexible, specialised, ex-ante; but
Legislatures scared to give agencies too much freedom + who is an AI expert?

24
Q

Why should the courts influence policy?

A

Reactive, and focused on the facts; but

Common law moves slowly and they don’t focus on broad social considerations

25
Q

Artificial Intelligence Development Act

A

Creates an agency assigned with the task of certifying the safety of AI systems
- Tort liability for certified systems
- Strict liability for non-certified systems
26
Q

How does the AIDA agency serve as a middle ground?

A

Not as coercive as a regulatory regime, but provides a strong incentive for AI developers to incorporate safety features
27
Q

Less interventionist method than the AIDA agency?

A

1. Government entity to conduct safety research - tort rules for people who ignore it
2. Private insurance
28
Q

Examples of discrimination/bias in machine learning

A

Google's image recognition algorithm labelled photos of black people as "gorillas"
29
Q

Definition of singularity

A

The fear that we may develop an algorithm capable of recursive self-improvement
30
Q

Pros and cons of state regulation for algorithms

A

Pros: competition between states that produces a race to optimal legal rules
Cons: algorithm regulation is a national problem
31
Q

Pros and cons of federal regulation for algorithms

A

Pros: extensive expertise, comprehensive policy, quick to respond, can look holistically
Cons: tunnel vision, focus on key political issues
32
Q

If there were a federal agency, what features should it have?

A

1. Ex-ante regulation
2. Broad mandate to ensure dangerous algorithms aren't released
3. Ultimate authority over safety regardless of the type of product
33
Q

Comparison between drugs and algorithms

A

Crisis after misbranded foods and drugs were being sold
- Few labelling requirements
34
Q

How could AI benefit NZ?

A

- Improve productivity
- Make sense of large amounts of data
- Improve decision making
- Increase GDP by $54B
35
Q

Individual rights models

A

Right to privacy, right to an explanation, right to a human decision-maker, etc.
36
Q

Official Information Act

A

S 23(1): Person has the right to be given a written statement of the reasons for the decision or recommendation
37
Q

Why is s 23(1) OIA problematic?

A

- Individuals can't make sense of the algorithm
- May not be aware you are subject to an algorithmic decision
- Can only get access to your own data
38
Q

Privacy Act, principle 8

A

Accuracy of personal information to be checked before use
39
Q

Case Note case

A

A manual notation has to be added to the record to comply with principle 8 of the Privacy Act
40
Q

PHRaE framework in NZ

A

Checklist for data scientists thinking of using predictive analytical tools
Ensures privacy, human rights and ethics are taken into account