Chapter 7 Flashcards

1
Q

beneficial AI

A

improving life for humans by increasingly automating the necessary jobs that no one wants to do

2
Q

The great AI trade-off

A

The trade-off between the benefits and risks of AI

Should we embrace the abilities of AI systems, which can improve our lives and even help save lives, and allow these systems to be employed ever more extensively?

Or should we be more cautious, given current AI’s unpredictable errors, susceptibility to bias, vulnerability to hacking, and lack of transparency in decision-making?

3
Q

issues with facial recognition

A

privacy

reliability
(significantly higher error rate on people of color than on white people)
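
Not from the chapter, just a minimal sketch of what the reliability point means operationally: measure the error rate separately for each demographic group and compare. All group names and records below are hypothetical placeholders, not real benchmark data.

from collections import defaultdict

# Each record is (demographic_group, predicted_match, true_match);
# the entries are made up purely for illustration.
records = [
    ("group_a", True,  True),
    ("group_a", False, False),
    ("group_a", True,  False),   # one error for group_a
    ("group_b", True,  False),   # two errors for group_b
    ("group_b", True,  False),
    ("group_b", False, False),
]

errors = defaultdict(int)
totals = defaultdict(int)

for group, predicted, actual in records:
    totals[group] += 1
    if predicted != actual:
        errors[group] += 1

# A large gap between per-group error rates is the reliability issue the card refers to.
for group in sorted(totals):
    print(f"{group}: {errors[group] / totals[group]:.0%} error rate")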

4
Q

Asimov’s fundamental Rules of Robotics

A
  1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

These laws have become famous, but in truth, Asimov’s purpose was to show how such a set of rules would inevitably fail.
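
Not from the book, but the three laws read like a strict priority ordering (the Second yields to the First, the Third to both), which invites a naive encoding such as the Python sketch below. Every name in it is hypothetical, and the predicate functions deliberately leave undefined exactly what Asimov dramatized: deciding what counts as "harm", an "order", or a "conflict".

def law_violated(action, would_harm_human, disobeys_order, endangers_robot):
    """Return the number of the highest-priority law the action violates, or None."""
    if would_harm_human(action):
        return 1
    if disobeys_order(action):
        return 2
    if endangers_robot(action):
        return 3
    return None

def choose_action(candidates, **predicates):
    """Prefer an action violating no law; otherwise violate the lowest-priority law."""
    def badness(action):
        law = law_violated(action, **predicates)
        return 0 if law is None else 4 - law   # breaking Law 3 is better than Law 1
    return min(candidates, key=badness)

# A robot ordered to do something harmful should disobey (break Law 2)
# rather than comply (break Law 1).
print(choose_action(
    ["comply with order", "refuse order"],
    would_harm_human=lambda a: a == "comply with order",
    disobeys_order=lambda a: a == "refuse order",
    endangers_robot=lambda a: False,
))   # -> "refuse order"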

5
Q

humans on the preferred morals of self-driving cars

A

76 percent of participants answered that it would be morally preferable for a self-driving car to sacrifice one passenger rather than kill ten pedestrians.

But when asked if they would buy a self-driving car programmed to sacrifice its passengers in order to save a much larger number of pedestrians, the overwhelming majority of survey takers responded that they themselves would not buy such a car.

Participants in six Amazon Mechanical Turk studies approved of utilitarian AVs [autonomous vehicles] (that is, AVs that sacrifice their passengers for the greater good) and would like others to buy them, but they would themselves prefer to ride in AVs that protect their passengers at all costs.

Before we can put our values into machines, we have to figure out how to make our values clear and consistent.
