L05 Robots and AI Flashcards
ASIMOV'S THREE LAWS OF ROBOTICS
First Law
A robot may not injure a human being or, through inaction, allow a
human being to come to harm.
Second Law
A robot must obey the orders given it by human beings, except where such
orders would conflict with the First Law.
Third Law
A robot must protect its own existence as long as such protection does not
conflict with the First or Second Laws.
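The three laws form a strict priority ordering: each law yields to the ones above it. A minimal sketch of that precedence in Python, assuming each candidate action can be flagged for which laws it would violate (the Action fields and the choose function are illustrative, not from any real robotics system):

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical flags: which laws would this action violate?
    harms_human: bool = False     # First Law: injures a human, or (for the
                                  # "do nothing" action) lets harm occur
    disobeys_order: bool = False  # Second Law: conflicts with a human order
    endangers_self: bool = False  # Third Law: risks the robot's existence

def choose(candidates: list[Action]) -> Action:
    """Pick the least-objectionable action under the Three Laws.

    Comparing the (First, Second, Third) violation flags lexicographically
    encodes the priority: a single First Law violation outweighs any number
    of Second or Third Law violations.
    """
    return min(candidates,
               key=lambda a: (a.harms_human, a.disobeys_order, a.endangers_self))

# Example: ordered to do something that would injure a human, the robot
# refuses, even though refusing puts its own existence at risk.
obey = Action(harms_human=True)
refuse = Action(disobeys_order=True, endangers_self=True)
assert choose([obey, refuse]) is refuse
```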
Turing Test
The original Turing Test requires three terminals, each of which is physically separated from the other two. One terminal is operated by a computer, while the other two are operated by humans.
During the test, one of the humans functions as the questioner, while the second human and the computer function as respondents. The questioner interrogates the respondents within a specific subject area, using a specified format and context. After a preset length of time or number of questions, the questioner is then asked to decide which respondent was human and which was a computer.
The test is repeated many times. If the questioner makes the correct determination in half of the test runs or fewer, the computer is considered to have artificial intelligence because the questioner regards it as “just as human” as the human respondent.
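The pass criterion above is statistical: over many repetitions, the machine passes if the questioner does no better than chance. A minimal simulation sketch in Python, assuming a single per-run probability that the questioner identifies the computer correctly (the function name and parameters are illustrative):

```python
import random

def passes_turing_test(num_runs: int, p_correct: float) -> bool:
    """Simulate repeated test runs.

    p_correct is the probability that, in one run, the questioner
    correctly identifies which respondent is the computer. Per the
    criterion above, the machine passes if it is identified correctly
    in half of the runs or fewer.
    """
    correct = sum(random.random() < p_correct for _ in range(num_runs))
    return correct <= num_runs / 2

random.seed(0)
# A machine indistinguishable from a human (p_correct = 0.5) passes
# about half the time; an easily spotted one (p_correct = 0.9) fails.
print(passes_turing_test(num_runs=100, p_correct=0.5))
print(passes_turing_test(num_runs=100, p_correct=0.9))
```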
THE ETHICS GUIDELINES FOR TRUSTWORTHY ARTIFICIAL INTELLIGENCE (AI) OF THE EUROPEAN COMMISSION (2019)
- Human agency and oversight: AI systems should support human autonomy and decision-making.
- Technical robustness and safety: AI systems should minimize unintentional and unexpected harm, and prevent unacceptable harm.
- Privacy and data governance: Personal data collected by AI systems should be secure and private.
- Transparency: Data and algorithms used to create an AI system should be accessible, and the decisions made by the software should be “understood and traced by human beings.”
- Diversity, non-discrimination and fairness: Services provided by AI should be available to all, regardless of age, gender, race, or other characteristics. Similarly, systems should not be biased along these lines.
- Societal and environmental well-being: AI systems should be sustainable (i.e., they should be ecologically responsible) and “enhance positive social change.”
- Accountability: AI systems should be auditable and covered by existing protections for corporate whistleblowers. Negative impacts of systems should be acknowledged and reported in advance.
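Taken together, the seven requirements read like an assessment checklist. A sketch of how one might encode them for a simple self-audit, assuming a plain pass/fail judgment per requirement (the audit function is illustrative and not part of any official EU tooling):

```python
REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental well-being",
    "Accountability",
]

def audit(assessment: dict[str, bool]) -> list[str]:
    """Return the requirements a system fails to meet.

    Requirements missing from the assessment count as unmet,
    not as silently passed.
    """
    return [req for req in REQUIREMENTS if not assessment.get(req, False)]

# Example: a system that has addressed everything except transparency.
findings = audit({req: True for req in REQUIREMENTS} | {"Transparency": False})
print(findings)  # ['Transparency']
```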