Unit 7 - Military Technology and Ethics Flashcards

1
Q

Name and explain three leading arguments for the use and development of military robots today.

A
  1. In many situations, robots have faster reaction times and better aim than human beings.
  2. They are often ideal for filling roles that people in the field call the “Three Ds”: dull, dirty, or dangerous. Unlike humans, who get tired and hungry and lose concentration and effectiveness, robots can perform boring tasks with unstinting accuracy for long periods of time. (As one advertisement for an unmanned plane put it, “Can you keep your eyes open for thirty hours without blinking?”) They can operate in dirty environments, such as battle zones filled with biological or chemical weapons, or under other dangerous conditions, such as in space, in rough seas, or in flights with very high gravitational forces.
  3. A machine can preserve human life: it can wait to see who shoots first and then return fire without endangering a person’s life. The use of machines may also remove the emotional aspects of war, preventing some soldiers from developing PTSD and allowing the operator to act deliberately, without spikes of emotion.
2
Q

Drawing upon the case study that opened Unit 7, what is the “main reason” that those involved in this area are concerned with arming military robots?

A

Where would blame be attributed? Would it be the programmers, the squadron, or the commander who approved its use? How could someone, or something, be held morally accountable for an action or an error?

Another concern is the impersonal and detached character that robots bring to war.

3
Q

Name and explain the three principles that Singer argues should guide the development of military robots. Do you agree or disagree with his suggestions? Why or why not? Explain your reasoning.

A

First, since it will be very difficult to guarantee that autonomous robots can, as required by the laws of war, discriminate between civilian and military targets and avoid unnecessary suffering, they should be allowed the autonomous use only of non-lethal weapons. That is, while the very same robot might also carry lethal weapons, it should be programmed such that only a human can authorize their use.

Second, just as any human’s right to self-defense is limited, so too should be a robot’s. This sounds simple enough, but oddly the Pentagon has already pushed the legal interpretation that our drones have an inherent right to self-defense, including even to preemptively fire on potential threats, such as an anti-aircraft radar system that lights them up. There is a logic to this argument, but it leads down a very dark pathway; self-defense must not be permitted to trump other relevant ethical concerns.

Third, the human creators and operators of autonomous robots must be held accountable for the machines’ actions. (Dr. Frankenstein shouldn’t get a free pass for his monster’s misdeeds.) If a programmer gets an entire village blown up by mistake, he should be criminally prosecuted, not get away scot-free or merely be punished with a monetary fine his employer’s insurance company will end up paying. Similarly, if some future commander deploys an autonomous robot and it turns out that the commands or programs he authorized the robot to operate under somehow contributed to a violation of the laws of war, or if his robot were deployed into a situation where a reasonable person could guess that harm would occur, even unintentionally, then it is proper to hold the commander responsible.

Yes, I agree with all of his principles:
  1. Killing involves some of the toughest moral decisions a person can make, and nothing other than a human being has ever made moral decisions of this kind. Having a machine make this decision would likely be insufficient, and delegating it would itself be a moral decision not to decide.
  2. A machine’s self-defense could easily become complicated if, say, a child threatens the machine (most would agree that it would be better to risk the machine than to kill the child), to say nothing of general misinterpretation of threats. Since most people value human life over property, machine self-defense can easily lead to controversial moral trade-offs.
  3. Without holding a creator or operator responsible there would be no moral accountability, which would allow negligence, or at least fail to keep the moral responsibilities of war in view.

4
Q

Why does Singer argue that “the human creators and operators of autonomous robots must be held accountable for their machine’s actions” (p. 162)? Do you think this claim is something that Anderson and Anderson would agree with, even though they claim that AI machines should be considered “moral agents”?

A

Singer argues that this makes responsibility clear, not just for the administration of punishment but also as a pre-emptive deterrent against error.

On Anderson and Anderson’s conception, the machine would cite the moral principles that justify its behavior, so its moral failure would be due to an incomplete ethics. If such an error did occur, whether in its ethical code or as a bug in its programming, they would likely attribute the error to the person responsible (the ethicist or the programmer).

5
Q

Do you think armed and autonomous military robots should be developed? Why or why not? Give reasons in support of your position.

A

No. Killing requires difficult moral decision-making that is highly situation-dependent and involves nuanced choices that we cannot expect a machine to make. It impersonalizes killing and lessens humans’ role in war, a role that should not be lessened, since its experiential value generates ethical principles to pursue in war (such as the treatment of POWs) and would be expected to grow as the world changes. Finally, it diffuses responsibility for abhorrent actions: even though the programmer or commander may be held responsible, they will only be held responsible through indirect action, which will structurally erode the amount of attributable moral responsibility by making the executioner essentially unaccountable.

6
Q

In the play In the Matter of J. Robert Oppenheimer, how does the character of Oppenheimer argue that, although he was responsible for developing the atomic bomb, he was not responsible for the decision to use it to inflict mass destruction? Do you think that Oppenheimer did anything that was morally wrong? Explain why or why not.

A

Oppenheimer argues that the dropping of the bomb was a political decision, whereas his job on the war council was to pick out the most effective area to bomb (a decision that required the expertise of nuclear physicists). He argued that the physicists were working as experts, doing the job they were paid to do, but had the moral conviction to make the bomb in the hope of preventing its use by the Germans or of deterring further war. The scientists were not being asked whether the bomb should be used; they were asked how it should be used.

Oppenheimer had positive and optimistic moral convictions while making the bomb under high-stakes circumstances. While his participation was integral to the dropping of the bomb, he had substantial moral reasoning behind his choices and has paid the price for this error through guilt. Because Oppenheimer used his powerful knowledge in the hope of stopping further destruction, I do not think he did anything morally wrong, despite the massive immoral consequences that followed against his wishes.

7
Q

Consider again Conrad Brunk’s principle of “Conscientious Professionalism” (Unit 4 of this course, see especially Brunk, p. 151). Explain how Brunk’s principle could apply to In the Matter of J. Robert Oppenheimer. Consider especially this line in your application of Brunk’s principle: “It is as appropriate for the nuclear physicist to warn of the dangers of nuclear power plants and nuclear weapons as anyone else, indeed, she usually has a greater obligation than anyone else to do so” (Brunk, p. 151). How might Oppenheimer respond to Brunk’s principle?

A

Brunk’s principle of conscientious professionalism holds that professionals bear ethical responsibility for the potential ethical dilemmas inherent in their work, and must either raise awareness of these dangers or object to the project’s completion if there is potential for it to cause harm. Specialists bring special knowledge and information to projects that may produce ethical dilemmas, and therefore should carry increased responsibility for the ethical consequences of those projects.

Oppenheimer would argue that his job was that of a nuclear physicist and that he had the moral conviction to create the technology in the hope of decreasing the likelihood of violence, or at least of placing the power in American rather than German hands. He trusted that the politicians would use the power morally, whereas his role was to show them how to use it most effectively if it needed to be used. He might therefore argue that the politicians had more pertinent knowledge of the war than he did and so could make a more informed moral decision.

8
Q

Drawing upon both the P.W. Singer essay and the Kipphardt play, discuss the scope and nature of the moral responsibility of scientists working in the military for the outcomes of their work. Do you think scientists are in no way morally culpable for harmful outcomes, or are they in some sense morally responsible for such outcomes? Explain your answer with reasons that can support your position.

A

Scientists must consider to what degree their technologies may bring about ethical issues, and should use those considerations to guide the production and application of the technology.

Scientists must be held at least somewhat culpable for the technologies they develop. If a programmer sells a robot to the military that is designed to kill its target by any means necessary, and it blows up a school full of children in the process, then the programmer, as well as anyone else informed of the programming and responsible for the robot’s deployment, should be held accountable. Accountability allows for moral reprimand and the delivery of punishment, and ultimately deters negligent moral behavior in the first place. However, scientists cannot be considered the only moral agents behind the military technologies they develop, since the application of these technologies depends on specialized military and political knowledge that we could not expect a scientist to possess.

In the case of Oppenheimer, he should not be held morally accountable for his part in the atomic bomb because he followed his moral duty to use his knowledge to develop a powerful weapon in the hope of either stopping or controlling the war. There could have been moral scenarios in which the atomic bomb brought a peaceful end, but any application of the bomb required an in-depth political choice that Oppenheimer could not control. Therefore, moral responsibility for scientists should rest on deontological grounds: the scientist uses their knowledge in the hope of moral outcomes and should only be held responsible when a resulting unethical outcome was foreseeable.
