Week 7 – Robots, AI and ethics Flashcards

1
Q

give 2 examples of

defensive Lethal autonomous weapons (LAWs)
A

these include:

  • The US Phalanx close-in weapon system
    • an integrated computer system, radar and multi-barrel gun that can detect, track and engage targets such as anti-ship missiles, all without human intervention. It is installed on ships for defence
  • Sea Hunter
    • built by the US, a naval vehicle that can find and clear mines. It can operate autonomously but does require human oversight

2
Q

explain how well suited

deontological ethics

are for robotic systems

A

this ethics framework can be well suited for robotic systems since it uses principles that are easy to define:

Isaac Asimov’s three laws actually fit this framework
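
as a rough illustration, here is a minimal sketch (not from the course material; the rule names and action fields are invented) of how deontological principles could gate a robot's actions in code:

```python
# Minimal sketch: deontological rules as hard constraints on actions.
# The rule names and action fields below are invented for illustration.

RULES = [
    ("do not harm a human", lambda a: not a.get("harms_human", False)),
    ("obey the law",        lambda a: a.get("is_legal", True)),
    ("keep your promises",  lambda a: not a.get("breaks_promise", False)),
]

def is_permitted(action: dict) -> bool:
    """An action is allowed only if it violates none of the principles."""
    return all(check(action) for _, check in RULES)

# A harmful action is forbidden outright, whatever its outcome would be.
print(is_permitted({"harms_human": True, "is_legal": True}))   # False
print(is_permitted({"harms_human": False, "is_legal": True}))  # True
```

the appeal for robotics is that each principle is a simple, checkable condition; the limitations card later in the deck shows where this breaks down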

3
Q

describe how

virtue ethics

works
A

this type of ethics framework focuses on qualities that you should uphold; maintaining these qualities means that you have maintained your ethics. Some examples include:

  1. courage in the face of fear
    1. Do not run from situations that scare you
  2. truthfulness with self-expression
    1. Do not act fake or lie about how you feel
  3. friendliness in social conduct
    1. Treat people with respect and do not offend

4
Q

what

causes data biases within A.I

A

this is caused by the data that is fed into the system: the A.I learns that the patterns in that data are the correct output even when they are not

In computer science the phrase “rubbish in, rubbish out” states that an A.I system or an algorithm is only as good as the data that you feed it

A.I relies on good input data in order to make a correct decision
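
a toy illustration (invented data, not from the course material) of “rubbish in, rubbish out”: a trivial learner fitted on skewed data simply reproduces the skew in its output:

```python
# Toy illustration of "rubbish in, rubbish out": a learner that just
# memorises the most common label reproduces whatever skew it was fed.
from collections import Counter

def fit_majority(labels):
    """'Train' by picking the most frequent label in the data."""
    return Counter(labels).most_common(1)[0][0]

# Skewed history: 95 of 100 past decisions were "reject".
biased_history = ["reject"] * 95 + ["accept"] * 5

model = fit_majority(biased_history)
print(model)  # "reject" -- every future case now gets the skewed answer
```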

5
Q

name 3

ethical frameworks

A

these include:

  • Virtue ethics
  • Deontological ethics
  • Consequentialist ethics
6
Q

using the points below, state a situation where the ethics could contradict each other

  • It is ethical to follow the law
  • It is ethical to save lives
A

breaking the law in order to save many lives

7
Q

describe

Consequentialist ethics

A

with this type of ethical framework the focus is on the outcome of the actions.

This framework always seeks a positive outcome. Therefore bad actions can take place as long as the end result is positive, i.e. the ends justify the means
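
a minimal sketch (invented actions and outcome scores, not from the course material) of consequentialist action selection: only the value of the outcome matters, even if the action itself is “bad”:

```python
# Minimal sketch of consequentialist action selection: only the value
# of the outcome matters. The actions and scores are invented.

actions = {
    "tell the truth":         3,  # honest act, mediocre outcome
    "lie to prevent a panic": 8,  # "bad" act, better outcome
}

def choose(actions: dict) -> str:
    """Pick the action with the best-valued outcome: ends justify means."""
    return max(actions, key=actions.get)

print(choose(actions))  # "lie to prevent a panic"
```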

8
Q

in regards to the 3 machine learning categories proposed by Professor Arvind Narayanan, describe

Perception tasks

A

this includes machines that can learn to recognise images, and speech recognition systems.

These A.I systems have developed well and perform at near-human level

9
Q

describe

deontological ethics
A

this ethical framework focuses on principles that must be upheld in order to maintain ethics

as long as the underpinning ethics, principles and motivations are upheld then a decision is allowed. This can in theory lead to unethical decisions

some examples include:

  1. Don’t kill.
  2. Don’t cause pain.
  3. Don’t disable.
  4. Don’t deprive of freedom.
  5. Don’t deprive of pleasure.
  6. Don’t deceive.
  7. Keep your promises.
  8. Don’t cheat.
  9. Obey the law.
  10. Do your duty.

10
Q

what are 3 limitations of using

Consequentialist ethics

within robotic systems
A

the limitations of this ethical framework within robotic and A.I systems are:

  • Determining what action is required to achieve the intended consequence (i.e. separating intended from unintended consequences)
  • Deciding how to assign a value of goodness to an outcome
  • Deciding what level of bad actions the robot may take in order to maintain its Consequentialist ethics

11
Q

describe

data biases

within A.I

A

this is the term for an A.I giving a skewed, biased output

the output it provides will not be fair and will lean towards one kind of outcome

12
Q

the Catholic Church and the Campaign to Stop Killer Robots are two groups who are against the use of Lethal Autonomous Weapons (LAWs)

name 6 concerns raised by the Campaign to Stop Killer Robots

A

these include:

  • these weapons would decide who lives and dies, which crosses a moral threshold; furthermore this would be done without human characteristics such as compassion
  • replacing troops with robots could make the decision to go to war easier; furthermore an incorrect decision made by a robot could ignite further tensions
  • Fully autonomous weapons would lack the human judgment necessary to evaluate the proportionality of an attack, distinguish civilian from combatant, and abide by other core principles of the laws of war. History shows their use would not be limited to certain circumstances.
  • these robots could lead to further use outside war, such as border patrols, stopping protests or propping up regimes
  • who would be held accountable for unlawful acts:
    • the programmer
    • the manufacturer
    • the commander
    • the machine itself
  • if development of these weapons is left unchecked it could lead to an arms race

furthermore, what if one of these weapons were hacked and taken control of, to be used in a way that was not intended

13
Q

what are 4 limitations of using

deontological ethics

within robotic systems

A

the limitations of using this ethics framework within robotic systems are:

  • Which ethical principles should be included
  • What happens when there is a conflict between the principles
  • How do you cover all possible ethical scenarios (practically impossible)
  • How do you define concepts such as pleasure and pain in a precise manner
14
Q

give 2 examples of

offensive Lethal autonomous weapons (LAWs)
A

although not currently deployed, some examples that are shrouded in secrecy include:

  • Super aEgis II
    • a sentry gun that may be placed in the DMZ between North and South Korea and can operate without human intervention
  • Tanks being developed by Russia and China that can be tele-operated or autonomous

15
Q

what are the 3

categories of machine learning

that Professor Arvind Narayanan proposed
A

these include:

  • perception tasks
  • automating judgement
  • predicting social outcomes

16
Q

in regards to the 3 machine learning categories proposed by Professor Arvind Narayanan, describe

Automating judgement

A

this includes machines that must make a judgement on a situation, such as spam detection or detecting copyrighted material

These systems are fit for some purposes but are limited by the fact that people have different levels of judgement.

Example:

while someone might regard an email as spam, someone else might not
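
a toy sketch (invented word weights, not from the course material) showing why automated judgement inherits human disagreement: two users with different spam thresholds judge the same email differently:

```python
# Toy spam judgement with a per-user threshold. Word weights invented.
SPAM_WORDS = {"winner": 2.0, "free": 1.5, "offer": 1.0}

def spam_score(text: str) -> float:
    """Sum the weights of known spammy words found in the text."""
    return sum(w for word, w in SPAM_WORDS.items() if word in text.lower())

email = "You are a WINNER: claim your free offer now"
score = spam_score(email)  # 4.5

print(score >= 2.0)  # True:  a strict user's threshold flags it as spam
print(score >= 5.0)  # False: a lenient user's threshold lets it through
```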

17
Q

define the term

ethics

A

this can be defined as a way of assigning a value of what is wrong and what is right to a situation, decision or behaviour

18
Q

give 4

benefits and a disadvantage of

Lethal autonomous weapons (LAWs)

A

these include:

Benefits:

  • Do not tire
  • Do not need feeding
  • Can save lives by replacing the human
  • Faster than humans

Disadvantage:

  • As A.I solutions become available off the shelf and machine parts become cheaper, there is a very real risk of homemade LAWs that may be used in terrorism
19
Q

briefly describe 3 real-world situations where

data biases in A.I caused undesirable output / consequences

A

these include:

example 1

An image recognition system was made to learn what a tench fish looked like. It did so, but because so many images of tench include fingers (it is a trophy fish), it defined a tench as having a pink foreground with a green background

If this tench recognition system was used only to find pictures of trophy tench then it could be described as successful, but if it were used in an underwater system looking for tench it would fail

example 2

Amazon built an A.I recruitment tool. It took data from the past ten years of Amazon's recruitment. Because this was mostly composed of men, the A.I prioritised men over women (it became sexist because of the data it was fed). Secondly, because a skill such as writing code was so common on all application forms, this phrase was weighted as less important, which resulted in inexperienced people being hired or getting to interviews

example 3

Tay was a chatbot built to chat in a tone of voice similar to that of the 18-24 year olds it was interacting with. Upon being released ‘into the wild’ on Twitter, within 24 hours it had learnt sexist and racist behaviour and was tweeting such comments
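
a toy version (invented features and labels, not from the course material) of the tench example: a naive learner keys on the incidental “fingers” feature that happens to separate the training data, then fails on an underwater photo:

```python
# Toy version of the tench bias: every "tench" training photo also has
# fingers in it, so the first rule that separates the data is the
# spurious one. Features and labels are invented.
training = [
    ({"has_fingers": True,  "has_fish": True},  "tench"),
    ({"has_fingers": True,  "has_fish": True},  "tench"),
    ({"has_fingers": False, "has_fish": False}, "not tench"),
]

def find_rule(data):
    """Return the first feature that perfectly separates the labels."""
    for feature in data[0][0]:
        if all(x[feature] == (y == "tench") for x, y in data):
            return feature
    return None

rule = find_rule(training)
print(rule)  # "has_fingers" -- the incidental cue is found first

# An underwater photo of a real tench, with no fingers in sight:
underwater = {"has_fingers": False, "has_fish": True}
print("tench" if underwater[rule] else "not tench")  # "not tench" -- fails
```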

20

Q

what is an

ethical framework

A

this is what holds your ethics and will also lay out how you make decisions

21

Q

in regards to the 3 machine learning categories proposed by Professor Arvind Narayanan, describe

Predicting social outcomes

A

this includes actions such as:

  • predicting re-offending
  • deciding which people should be hired
  • deciding which areas of a city to police

This is a highly limited part of A.I since, by definition and by common sense, the future cannot be predicted

22

Q

describe

Lethal autonomous weapons (LAWs)

A

these can be defined as weapons that can autonomously search for and destroy targets, including killing humans

these are currently in development by: the USA, UK, Israel, Russia and China

23

Q

why is

virtue ethics

not well suited for A.I and robotic systems

A

this type of ethics framework focuses on qualities. It is therefore not well suited to A.I or robotic systems, since it does not explicitly define which actions are and are not appropriate

24

Q

what are some arguments against

Lethal Autonomous Weapons (LAWs)

A

even during war there is law, and for a military response to be legal it:

  • Must distinguish between civilian and soldier
  • Must use proportional force
  • Must have a clear judgement for the engagement
  • Must accept a surrender of the enemy

It is argued that an A.I system could not cope with the delicacies of some of the above laws: where it may thrive in spotting a human, can it make a clear judgement on the context of the current situation?

25

Q

who is

Professor Arvind Narayanan

A

a computer scientist from Princeton University