Chapter 7 Flashcards
beneficial AI
improving life for humans by increasingly automating the necessary jobs that no one wants to do
The great AI trade-off
The trade-off between the benefits and risks of AI:
Should we embrace the abilities of AI systems, which can improve our lives and even help save lives, and allow these systems to be employed ever more extensively?
Or should we be more cautious, given current AI’s unpredictable errors, susceptibility to bias, vulnerability to hacking, and lack of transparency in decision-making?
issues with facial recognition
- privacy
- reliability (significantly higher error rates on people of color than on white people)
Asimov’s fundamental Rules of Robotics
- A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
These laws have become famous, but in truth, Asimov’s purpose was to show how such a set of rules would inevitably fail.
what humans consider the preferred morals of self-driving cars
76 percent of participants answered that it would be morally preferable for a self-driving car to sacrifice one passenger rather than kill ten pedestrians.
But when asked if they would buy a self-driving car programmed to sacrifice its passengers in order to save a much larger number of pedestrians, the overwhelming majority of survey takers responded that they themselves would not buy such a car.
Participants in six Amazon Mechanical Turk studies approved of utilitarian AVs [autonomous vehicles] (that is, AVs that sacrifice their passengers for the greater good) and would like others to buy them, but they would themselves prefer to ride in AVs that protect their passengers at all costs.
Before we can put our values into machines, we have to figure out how to make our values clear and consistent.