Week 7 – Robots, AI and ethics Flashcards
these include:
- The US Phalanx close-in weapon system
- an integrated computer, radar and multi-barrel gun system that can detect, track and engage targets such as anti-ship missiles, all without human intervention. It is installed on ships for defence
- Sea Hunter
- built by the US, this is a naval vessel that can find and clear mines. It can operate autonomously but does require human oversight
give 2 examples of
defensive Lethal autonomous weapons (LAWs)
explain how well suited
deontological ethics
are for robotic systems
this ethical framework can be well suited for robotic systems since it uses principles that are easy to define:
Isaac Asimov's three laws of robotics fit this framework, and can be sketched in code as below
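Because the principles are explicit, they can be written down directly. The following is a minimal Python sketch, not a real implementation: the `Action` fields and the idea that a robot can reliably predict these effects are assumptions made purely for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    # Hypothetical predicted effects of a candidate action
    harms_human: bool
    allows_human_harm: bool
    disobeys_order: bool
    endangers_robot: bool

def violated_law(action: Action) -> Optional[int]:
    """Return the first (highest-priority) of Asimov's laws violated, or None."""
    if action.harms_human or action.allows_human_harm:
        return 1  # First Law: never injure a human, or allow harm through inaction
    if action.disobeys_order:
        return 2  # Second Law: obey orders, unless that conflicts with the First Law
    if action.endangers_robot:
        return 3  # Third Law: protect itself, unless that conflicts with Laws 1 or 2
    return None

# A robot would only execute actions for which violated_law(...) is None
safe = Action(False, False, False, False)
assert violated_law(safe) is None
```

The ordering of the checks encodes the priority of the laws; the genuinely hard part, predicting those boolean effects reliably, is exactly what the sketch glosses over.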
this type of ethical framework focuses on qualities that you should uphold; maintaining these qualities means you have maintained your ethics. Some examples include:
- Courage in the face of fear: do not run from situations that scare you
- Truthfulness in self-expression: do not act fake or lie about how you feel
- Friendliness in social conduct: treat people with respect and do not cause offence
describe how
virtue ethics
works
this is caused by the data that is fed into the system: the A.I learns that a skewed output is the correct one even though it may not be
In computer science the phrase "rubbish in, rubbish out" states that an A.I system or an algorithm is only as good as the data that you feed it
A.I relies on good input data in order to make correct decisions
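As a toy illustration of "rubbish in, rubbish out" (the data and "model" here are entirely hypothetical), a classifier that simply learns the most common label in its training data will faithfully reproduce any skew in that data:

```python
from collections import Counter

def train_majority_classifier(labels):
    """'Learn' by memorising the most common label in the training data."""
    return Counter(labels).most_common(1)[0][0]

# Skewed input: 9 of 10 historical outcomes favour one group,
# so the output the model learns as 'correct' is biased too.
history = ["hire man"] * 9 + ["hire woman"]
prediction = train_majority_classifier(history)
print(prediction)  # -> 'hire man', regardless of any new applicant
```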
what
causes data biases within A.I
name 3
ethical frameworks
these include:
- Virtue ethics
- Deontological ethics
- Consequentialist ethics
using the points below, state a situation where these ethical principles could contradict each other
- It is ethical to follow the law
- It is ethical to save lives
which ethics are being contradicted in this situation:
breaking the law in order to save many lives
With this type of ethical framework the focus is on the outcome of the actions.
This framework seeks a positive outcome, so bad actions are permitted as long as the end result is positive, i.e. the ends justify the means.
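A minimal sketch of that decision rule (the actions and outcome scores below are hypothetical): a consequentialist agent simply picks whichever action has the highest predicted outcome value, regardless of what the action itself involves.

```python
def choose_action(predicted_value):
    """Pick the action whose predicted outcome scores highest."""
    return max(predicted_value, key=predicted_value.get)

# Hypothetical outcome scores, e.g. net lives saved
predicted_value = {"do nothing": 0, "lie to the attacker": 5, "use force": 3}
print(choose_action(predicted_value))  # -> 'lie to the attacker'
# A 'bad' action (lying) wins because its end result scores best.
```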

describe
Consequentialist ethics
This category covers machines that can recognise images or speech.
These A.I systems have developed well and now perform at a near-human level
with regard to the 3 machine learning categories proposed by Professor Arvind Narayanan, describe
Perception tasks
This ethical framework focuses on principles that must be upheld in order to maintain ethics.
As long as the underpinning ethics, principles and motivations are upheld, a decision is allowed; in theory this can lead to unethical decisions.
some examples include:
- Don’t kill.
- Don’t cause pain.
- Don’t disable.
- Don’t deprive of freedom.
- Don’t deprive of pleasure.
- Don’t deceive.
- Keep your promises.
- Don’t cheat.
- Obey the law.
- Do your duty.

describe
deontological ethics
the limitations of this ethical framework within robotic and A.I systems are:
- Determining what action is required to achieve the intended consequence (i.e. separating intended from unintended consequences)
- How to assign a value of goodness to an outcome
- How far the robot may go with bad actions in order to achieve a good outcome
what are 3 limitations of using
Consequentialist ethics
within robotic systems
describe
data biases
within A.I
this is a term that describes an A.I system giving a skewed and biased output
the output it provides will not be fair and will be biased towards one kind of outcome.
these include:
- These weapons would decide who lives and dies, which crosses a moral threshold; furthermore, it would be done without human characteristics such as compassion
- Replacing troops with robots could make the decision to go to war easier; furthermore, an incorrect decision made by a robot could ignite further tensions
- Fully autonomous weapons would lack the human judgment necessary to evaluate the proportionality of an attack, distinguish civilian from combatant, and abide by other core principles of the laws of war. History shows their use would not be limited to certain circumstances.
- These robots could see further use outside war, such as border patrols, stopping protests or propping up regimes
- Who would be held accountable for unlawful acts:
  - the programmer
  - the manufacturer
  - the commander
  - the machine itself
- If development of these weapons is left unchecked it could lead to an arms race
furthermore, what if one of these weapons were hacked and taken control of, to be used in unintended ways?
The Catholic Church and the Campaign to Stop Killer Robots are two groups who are against the use of Lethal Autonomous Weapons (LAWs)
name 6 concerns raised by the Campaign to Stop Killer Robots
what are 4 limitations of using
deontological ethics
within robotic systems
the limitations of using this ethical framework within robotic systems are:
- Which ethical principles should be included
- What happens when there is a conflict between principles (a minimal sketch of such a conflict follows this list)
- How do you cover all possible ethical scenarios (practically impossible)
- How do you define concepts such as pleasure and pain in a precise manner
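To make the conflict limitation concrete, here is a small hypothetical Python sketch: every rule and action below is invented for illustration. The point is that a naive rule-checker finds that each available action violates some principle, so it has no principled way to act.

```python
# Two hypothetical principles, each expressed as a test an action must pass
RULES = {
    "obey the law": lambda a: not a["breaks_law"],
    "save lives":   lambda a: a["saves_lives"],
}

# In this scenario every available action violates at least one principle
actions = [
    {"name": "wait for permission", "breaks_law": False, "saves_lives": False},
    {"name": "break in to rescue",  "breaks_law": True,  "saves_lives": True},
]

for action in actions:
    violated = [name for name, ok in RULES.items() if not ok(action)]
    print(action["name"], "violates:", violated)
# wait for permission violates: ['save lives']
# break in to rescue violates: ['obey the law']
```

Without an explicit priority ordering or tie-breaking rule, a purely deontological robot deadlocks here.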
Although not currently deployed, some examples that are shrouded in secrecy include:
- Super aEgis II
- a sentry gun that may be placed in the DMZ between North and South Korea and can operate without human intervention
- Tanks being developed by Russia and China that can be tele-operated or autonomous
give 2 examples of
offensive Lethal autonomous weapons (LAWs)
these include:
- perception tasks
- automating judgement
- predicting social outcomes
what are the 3
categories of machine learning
that Professor Arvind Narayanan proposed
This includes machines that must make a judgement on a situation, such as spam detection or detecting copyrighted material
These systems are fit for some purposes but are limited by the fact that people have different standards of judgement.
Example:
while someone might regard an email as spam, someone else might not
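A toy sketch of why judgement is hard to automate (the keyword list and thresholds below are hypothetical): the same evidence yields different verdicts depending on whose standard of "spam" the system encodes.

```python
SPAM_WORDS = {"winner", "free", "prize"}

def spam_score(email: str) -> int:
    """Count how many hypothetical spam keywords appear in the email."""
    return sum(word in SPAM_WORDS for word in email.lower().split())

email = "you are a winner claim your free prize"
score = spam_score(email)  # -> 3
print(score >= 1)  # strict user's threshold: judged spam (True)
print(score >= 5)  # lenient user's threshold: judged not spam (False)
```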
with regard to the 3 machine learning categories proposed by Professor Arvind Narayanan, describe
Automating judgement
this can be defined as a way of assigning a value of right or wrong to a situation, decision or behaviour
define the term
ethics
give 4
benefits and a disadvantage of
Lethal autonomous weapons (LAWs)
these include:
Benefits:
- Do not tire
- Do not need feeding
- Can save lives by replacing the human
- Faster than humans
Disadvantages:
- As A.I solutions become available off the shelf and machine parts become cheaper, there is a very real risk of homemade LAWs that may be used in terrorism
briefly describe 3 real-world situations where
data biases in A.I caused undesirable output / consequences
these include:

example 1
An image recognition system was trained to learn what a tench (a trophy fish) looks like. Because so many images of tench include fingers holding the fish, it learnt to define a tench as a pink foreground against a green background
If this tench recognition system were used only to find pictures of trophy tench it could be described as successful, but if it were used in an underwater system looking for live tench it would fail
example 2
Amazon built an A.I recruitment tool. It took data from the past ten years of Amazon's recruitment. Because this data was mostly composed of men, the A.I prioritised men over women (it became sexist because of the data it was fed). Secondly, because a skill such as writing code was so common across application forms, the phrase was weighted as less important, which resulted in inexperienced people being hired or reaching interview
example 3
Tay was a chatbot built to chat in a tone of voice similar to that of the 18-24 year olds it interacted with. Within 24 hours of being released 'into the wild' on Twitter it had learnt sexist and racist behaviour and was tweeting such comments
this is what holds your ethics and lays out how you make decisions
what is an
ethical framework
This includes actions such as:
- predicting reoffending
- deciding which people should be hired
- deciding which areas of a city to police
This is a highly limited part of A.I since, by definition and by common sense, the future cannot be predicted
with regard to the 3 machine learning categories proposed by Professor Arvind Narayanan, describe
Predicting social outcomes
these can be defined as weapons that can autonomously search for and destroy targets, including killing humans
these are currently in development by:
USA, UK, Israel, Russia and China
describe
Lethal autonomous weapons (LAWs)
this type of ethical framework focuses on qualities rather than actions.
therefore it is not well suited for A.I or robotic systems, since it does not explicitly define which actions are and are not appropriate.
why is
virtue ethics
not well suited for A.I and robotic systems
Even during war there is law, and some of these laws include:
- For a military response to be legal it:
  - Must distinguish between civilian and soldier
  - Must use proportional force
  - Must have a clear judgement for the engagement
  - Must accept a surrender of the enemy
It is argued that an A.I system could not cope with the delicacies of some of the above laws: while it may thrive at spotting a human, can it make a clear judgement on the context of the current situation?
what are some arguments against
Lethal Autonomous Weapons (LAWs)
this is a computer scientist from Princeton University
who is
professor Arvind Narayanan