Killer robots Flashcards

1
Q

What are the different degrees of machine autonomy in weapons

A
  1. human-in-the-loop
  2. human-on-the-loop
  3. human-out-of-the-loop
2
Q

Human-in-the-loop

A

machine recommends targets, but a human soldier must decide whether to engage the target

3
Q

Human-on-the-loop

A

machine selects and engages targets, with a human monitoring and able to intervene

4
Q

Human-out-of-the-loop

A

machine selects and engages targets without human oversight

5
Q

ICRC definition of lethal autonomous weapons

A

any system that automates the targeting and engagement functions of a weapon

6
Q

what does the ‘just war’ theory assume?

A

war is a great evil and impermissible by default.
specific exceptions are necessary to authorise the use of force

7
Q

Main dimensions of ‘just war’ theory

A

  1. jus ad bellum
  2. jus in bello
  3. jus post bellum

8
Q

Jus ad bellum

A

covers conditions under which it is permissible to go to war

9
Q

Jus in bello

A

covers permissible conduct in war

10
Q

Jus post bellum

A

covers permissible conduct in ending a war and in post-war relations

11
Q

Jus ad bellum: conditions for authorising armed force

A

just cause: war may only be initiated in self-defence or in defence of others against aggression
last resort: non-violent options must be exhausted before force can be justified
probability of success: the aims of the war must have a high likelihood of being achieved
legitimate authority: war must be publicly initiated by a state

12
Q

Jus in bello: conditions for conduct in war

A

distinction: acts of war should target enemy combatants and not non-combatants
military necessity: attacks must advance legitimate military objectives
proportionality: any collateral damage must be proportional to legitimate military objectives
inherently evil tactics may not be used

13
Q

Objections to lethal uninhabited aerial vehicles (drones) Strawser ‘Moral predators’ (2010)

A

invite indiscriminate and disproportionate attacks
create an unjust asymmetry in combat
create cognitive dissonance for operators, lowering their responsibility threshold
slippery slope to autonomous weapons

14
Q

Military conduct in Operation Iraqi Freedom (Arkin 2009)

A

only a small proportion of soldiers reported mistreatment of noncombatants
Arkin: it is conceptually and mathematically possible to develop a class of robots that not only conform to international law but outperform human soldiers in their ethical capacity

15
Q

Asaro on autonomous weapons (2020)

A

building and using autonomous weapons is morally wrong on both deontological and consequentialist grounds

16
Q

Deontological arguments

A

responsibility: the performance of moral obligations may only be delegated to other moral agents who can take responsibility for them; it is wrong to delegate killing to LAWS because they are not moral agents
dignity: a decision to use lethal force must recognise a human being as a human and reach a reasoned conclusion that killing is justified in the particular situation

17
Q

Consequentialist arguments

A

LAWS might accidentally misidentify targets, or be intentionally designed to attack the wrong targets
LAWS don’t require sophisticated technology and can be easily acquired by non-state actors
LAWS may accidentally initiate or escalate conflicts; the interaction between adversarial systems is unpredictable and may be catastrophic
LAWS may lower the threshold for initiating or escalating military conflicts
LAWS may engage in unattributable attacks, leading to widespread assassinations or random attacks to destabilise societies
LAWS are susceptible to spoofing and hacking; tricked into seeing friends as enemies and stop signs as go signs