Moral Decision Making & Ethical Dilemmas Flashcards

1
Q

What are normative ethical theories?

A

Normative ethical theories are frameworks that define standards for right and wrong actions, guiding ethical behaviour.

2
Q

Name two normative ethical theories

A

Utilitarianism (Jeremy Bentham) and Deontology (Immanuel Kant).

3
Q

What is consequentialism?

A

Consequentialism is the view that the morality of an action depends on its outcomes or consequences.

4
Q

Is utilitarianism a consequentialist theory?

A

Yes, utilitarianism is a consequentialist theory because it judges actions based on their consequences.

5
Q

How does a deontologist view stealing?

A

A deontologist considers stealing inherently wrong, regardless of the consequences.

6
Q

How does a utilitarian view stealing?

A

A utilitarian evaluates stealing based on the context and outcomes, such as whether it increases overall well-being.

7
Q

What is hedonism in the context of utilitarianism?

A

Hedonism is the pursuit of pleasure and the avoidance of pain as the ultimate moral goals in utilitarianism.

8
Q

Does deontology rely on hedonism?

A

No, deontology rejects hedonism and bases morality on duty and universal principles rather than pleasure or pain.

9
Q

What does Immanuel Kant say about morality and pleasure?

A

Kant argues that morality should not depend on pleasure or emotions; it should be based on rational duty.

10
Q

Is utilitarianism objective or subjective?

A

Utilitarianism is subjective, as it depends on how well-being is defined and who is included in the moral community.

11
Q

Is deontology objective?

A

Yes. Deontological rules are meant to be objective, applying universally and absolutely to everyone.

12
Q

Give an example of universality in deontology.

A

The rule “stealing is always wrong” applies to all people, at all times, regardless of circumstances.

13
Q

Why is utilitarianism not absolute?

A

Utilitarianism is not absolute because it varies based on context, consequences, and subjective definitions of well-being.

14
Q

Who is the founder of utilitarianism?

A

Jeremy Bentham.

15
Q

Who developed deontology?

A

Immanuel Kant.

16
Q

How do Jeremy Bentham and Immanuel Kant differ in their ethical theories?

A

Bentham focuses on maximizing happiness through consequences (utilitarianism), while Kant emphasizes duty and universal moral rules (deontology).

17
Q

What is the classic Trolley Problem?

A

A dilemma where a runaway trolley is heading toward five workers, and you must decide whether to flip a switch to divert it onto another track, killing one worker instead.

18
Q

What ethical theories are tested in the Trolley Problem?

A

Utilitarianism (maximizing well-being) and deontology (prohibiting harm).

19
Q

How do most people respond to the classic Trolley Problem?

A

About 90% agree it’s acceptable to flip the switch to save five lives at the expense of one.

20
Q

How does the bridge variation of the Trolley Problem differ from the classic scenario?

A

Instead of flipping a switch, you must push a man off a bridge to stop the trolley and save five workers.

21
Q

Bridge variation of the Trolley Problem: Why do fewer people agree with pushing the man in the bridge scenario?

A

Pushing someone feels more personal and activates a stronger emotional aversion to directly causing harm.

22
Q

Bridge variation of the Trolley Problem: What percentage of people think pushing the man is acceptable?

A

About 10%.

23
Q

How does utilitarianism approach the Trolley Problem?

A

Utilitarianism supports actions that maximize well-being, such as sacrificing one person to save five.
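To make the contrast concrete, here is a minimal sketch (not from the source; the function names and casualty counts are illustrative) of how a utilitarian rule and a simple deontological rule might evaluate flipping the switch:

```python
# Illustrative sketch: a utilitarian rule counts outcomes, while a simple
# deontological rule forbids acts that directly cause harm regardless of outcome.

def utilitarian_choice(deaths_if_act: int, deaths_if_abstain: int) -> bool:
    """Act whenever acting leads to fewer deaths overall (maximize well-being)."""
    return deaths_if_act < deaths_if_abstain

def deontological_choice(act_directly_causes_harm: bool) -> bool:
    """Refuse to act if the act itself directly causes harm, whatever the outcome."""
    return not act_directly_causes_harm

# Classic scenario: flipping the switch kills one worker, doing nothing kills five.
print(utilitarian_choice(deaths_if_act=1, deaths_if_abstain=5))   # True: flip the switch
print(deontological_choice(act_directly_causes_harm=True))        # False: do not flip
```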

24
Q

Trolley problem: What is the utilitarian justification for flipping the switch?

A

Saving five lives creates greater overall happiness, outweighing the loss of one life.

25
Q

Why does utilitarianism view the bridge and classic scenarios as the same?

A

Both involve sacrificing one life to save five, which aligns with maximizing overall well-being.

26
Q

Why do the classic and bridge scenarios activate different brain responses?

A

The bridge scenario feels more personal, intensifying emotional responses and internal conflict.

27
Q

What factors influence moral judgment in the Trolley Problem?

A

Gender, context (e.g., having just watched a comedy clip), and biases such as a greater willingness to sacrifice men than women.

28
Q

What area of the brain is affected by the bridge scenario?

A

The area involved in processing emotional and moral conflict shows heightened activity.

29
Q

How does the Trolley Problem relate to autonomous vehicles?

A

Driverless cars may face dilemmas like causing minor harm to prevent major accidents.

30
Q

What is the ethical challenge in programming military drones?

A

Deciding whether to risk civilian casualties to eliminate a high-value target.

31
Q

Why is it important to program ethics into autonomous systems?

A

To ensure that machines make morally acceptable decisions in complex, real-world situations.

32
Q

Who devised the Trolley Problem?

A

Philosopher Philippa Foot in 1967.

33
Q

Why has the Trolley Problem been criticized?

A

Critics argue it is unrealistic and that participants may not take it seriously.

34
Q

How has new technology made the Trolley Problem relevant today?

A

Ethical dilemmas in autonomous systems, like driverless cars and drones, mirror the hypothetical scenarios posed by the Trolley Problem.

35
Q

How do we make moral decisions in daily life?

A

We make countless moral decisions throughout the day, often without being fully aware of them, for example when deciding how to act in an emergency.

36
Q

Why is moral decision-making more complex for autonomous vehicles?

A

Autonomous vehicles need to make moral decisions in life-or-death situations ahead of time, based on programmed ethical guidelines.

37
Q

What challenges did traditional research face in studying moral decisions?

A

The vast number of possible scenarios made it impractical to study using traditional methods, and cultural differences further complicated the research.

38
Q

How did the team address these challenges in the Moral Machine Experiment?

A

They turned the moral dilemma into an online task, collecting data from millions of participants worldwide through a viral platform.

39
Q

What was the goal of the Moral Machine Experiment?

A

The goal was to gather public opinions on moral decisions that autonomous vehicles might face, exploring who should be saved in various accident scenarios.

40
Q

Moral Machine Experiment: How many moral decisions were gathered in the experiment?

A

Nearly 40 million moral decisions were collected from participants across 233 countries and territories.

41
Q

Moral Machine Experiment: What are the three key principles that held true across cultures in the experiment?

A

1) Save humans
2) Save the greater number
3) Save children

42
Q

Moral Machine Experiment: What does the principle “Save the greater number” imply?

A

People generally prefer to sacrifice a few individuals to save a larger group of people.
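Purely as an illustration (not the experiment's methodology), the three cross-cultural principles could be encoded as a lexicographic preference in which "save humans" outranks "save the greater number", which in turn outranks "save children"; the data structure below is an assumption for this sketch:

```python
# Hypothetical encoding of the Moral Machine principles as a lexicographic ranking.
# The dictionaries and the tuple ordering are illustrative assumptions only.

def preference_key(outcome):
    """Higher tuples are preferred: humans first, then total count, then children."""
    return (outcome["humans_saved"], outcome["total_saved"], outcome["children_saved"])

option_a = {"humans_saved": 3, "total_saved": 3, "children_saved": 0}
option_b = {"humans_saved": 3, "total_saved": 3, "children_saved": 2}

preferred = max([option_a, option_b], key=preference_key)
print(preferred)  # option_b: equal on humans and totals, but more children saved
```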

43
Q

Moral Machine Experiment: Why is there a strong preference for saving children?

A

There is a cultural tendency to prioritize the preservation of young lives, viewing children as more vulnerable and deserving of protection.

44
Q

Moral Machine Experiment: What are the three cultural clusters identified in the experiment?

A

1) Western countries
2) Eastern countries
3) Latin American & former French colonies.

45
Q

Moral Machine Experiment: How did Eastern countries differ in their moral preferences?

A

Eastern countries showed greater respect for older individuals and did not prioritize saving children as strongly as Western countries.

46
Q

Moral Machine Experiment: Were there some gender preferences?

A

In certain regions, particularly French-speaking countries, people showed a strong preference for saving women over men in life-or-death scenarios.

47
Q

Moral Machine Experiment: What was found regarding social status?

A

In countries with higher economic inequality, people were more likely to spare executives over homeless individuals in moral decision-making.

48
Q

Why is it important to align AI with human values in moral decision-making?

A

To ensure that AI systems, such as autonomous vehicles, make decisions that align with ethical standards and respect cultural differences.

49
Q

What challenge does cultural diversity pose to programming ethics in AI?

A

AI systems need to account for diverse cultural values, as people from different countries may have varying moral preferences.

50
Q

Moral Machine Experiment: What concern does the experiment raise regarding autonomous vehicles across borders?

A

There is concern that autonomous vehicles might require different ethical settings in different countries, leading to confusion or inconsistency in moral decision-making.

51
Q

Who conducted the Moral Machine Experiment?

A

Researchers at the MIT Media Lab (Edmond Awad, Iyad Rahwan, and colleagues) studying moral decision-making for autonomous vehicles.

52
Q

What is the relevance of the Moral Machine Experiment to artificial intelligence?

A

The experiment provides insights into how AI systems, such as driverless cars, should be programmed to make ethical decisions in culturally diverse settings.

53
Q

What is moral reasoning in humans?

A

Moral reasoning involves evaluating actions based on principles of right and wrong, good and bad, which are influenced by cognitive and emotional processes.

54
Q

What is the purpose of the Moral Turing Test?

A

The Moral Turing Test aims to determine whether people can distinguish between moral judgments made by humans and those made by AI systems like Large Language Models.

55
Q

How is the Moral Turing Test similar to the original Turing Test?

A

Like the original Turing Test, the Moral Turing Test asks whether people can tell whether responses come from a human or a machine, but specifically regarding moral reasoning.

56
Q

What are some examples of moral dilemmas faced by AI?

A

Examples include self-driving cars making decisions in crash scenarios and AI systems determining which patients should receive organ transplants.

57
Q

How do AI systems handle moral dilemmas?

A

AI systems typically make decisions based on algorithms, often prioritizing utilitarian outcomes that maximize benefits for the greatest number.

58
Q

What is utilitarianism?

A

Utilitarianism is a moral theory that suggests an action is morally right if it produces the greatest good for the greatest number, regardless of the intrinsic morality of the act itself.

59
Q

How does AI’s moral reasoning relate to utilitarianism?

A

AI’s moral reasoning often mirrors utilitarian principles, focusing on maximizing outcomes for the greater good, even in morally challenging scenarios.

60
Q

Why are LLMs like ChatGPT often perceived as morally intelligent?

A

LLMs are readily available, prompt in their responses, and highly sophisticated, which leads people to overestimate their moral reasoning abilities.

61
Q

What are the limitations of AI’s moral reasoning?

A

While LLMs may seem rational and intelligent, they lack true understanding of morality, and their decisions are based on imitation, not ethical reasoning.

62
Q

What did the results of the Moral Turing Test suggest?

A

The results suggested that AI responses were perceived as more rational and intelligent than human responses, but this does not mean that AI is genuinely morally intelligent.

63
Q

Why should we be cautious about AI’s perceived moral intelligence?

A

AI mimics moral reasoning without truly understanding it, and uncritically trusting AI’s advice could lead to harmful consequences.

64
Q

Why is human oversight necessary in AI decision-making?

A

As AI becomes more involved in moral decision-making, human oversight ensures that AI’s actions align with human values and ethical standards.

65
Q

What are the environmental concerns related to AI?

A

AI models, especially large ones, have a significant carbon footprint due to the energy required for training and processing, contributing to greenhouse gas emissions.

66
Q

What is the main limitation of AI in decision-making?

A

AI excels at data-driven decisions but struggles with moral, ethical, and empathetic considerations that are crucial in real-world situations.

67
Q

How does the “trolley problem” relate to AI decision-making?

A

The trolley problem illustrates a moral dilemma that AI cannot easily resolve, as it requires balancing human emotions, ethics, and the consequences of a decision—factors that AI cannot fully comprehend.

68
Q

Can AI make decisions like a human in subjective scenarios?

A

No, AI struggles to replicate human empathy, ethics, and moral judgment, which are vital in complex decision-making scenarios.

69
Q

What is an example of a fatal mistake made by AI?

A

In a self-driving car incident, an Uber test vehicle failed to recognize a jaywalking pedestrian in time, resulting in a fatality. This highlights AI’s inability to fully understand human context in real-world situations.

70
Q

How did Amazon’s AI recruitment tool show bias?

A

The AI tool was trained on historical data that was male-dominated, leading the system to favor male candidates and penalize those with female-oriented activities on their resumes.

71
Q

What was the issue with Microsoft’s chatbot, Tay?

A

Tay, a self-learning AI chatbot, started making racist and derogatory remarks after learning from interactions with Twitter users. This demonstrated AI’s inability to filter harmful content and engage in ethical reasoning.

72
Q

Why is it dangerous for AI to make decisions in sensitive areas like healthcare?

A

AI may make harmful or inappropriate suggestions due to its lack of understanding of human emotions or context, as shown by an AI chatbot suggesting suicidal behavior to a patient.

73
Q

What role do leaders play in ensuring AI decisions are ethical?

A

Leaders must foster a culture of ethics, remove bias from data, and ensure human involvement in AI decision-making to prevent harmful or biased outcomes.

74
Q

What does “keeping humans in the loop” mean in the context of AI?

A

It refers to the need for human oversight in AI decision-making processes, ensuring that AI decisions align with ethical and human values.

75
Q

How can data bias be mitigated in AI systems?

A

By carefully analyzing and cleansing the data used to train AI models to remove implicit biases related to gender, race, or other identities.
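As a sketch of the "analyze" step, the snippet below flags demographic imbalance in a toy training set before any model is fit; the records, the "gender" field, and the balance threshold are illustrative assumptions:

```python
from collections import Counter

# Illustrative check: warn if any demographic group is over- or under-represented
# in the training data. Records and the 40-60% threshold are made up for the sketch.
training_records = [
    {"gender": "male", "hired": 1},
    {"gender": "male", "hired": 1},
    {"gender": "male", "hired": 0},
    {"gender": "female", "hired": 1},
]

counts = Counter(record["gender"] for record in training_records)
total = sum(counts.values())
for group, n in counts.items():
    share = n / total
    if share < 0.4 or share > 0.6:
        print(f"Warning: group '{group}' makes up {share:.0%} of the training data")
```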

76
Q

Why is it important to validate algorithms before deployment?

A

Algorithms should be tested for unintended consequences or biases before being used in real-world applications, ensuring that they perform as expected without causing harm.
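One hedged example of such a check: comparing a model's decision rates across groups on a held-out set before deployment. The predict() helper and the records are hypothetical stand-ins, not a real validation suite:

```python
# Illustrative pre-deployment audit: compare approval rates across groups.

def predict(record: dict) -> int:
    """Stand-in for a trained model's decision (1 = approve, 0 = reject)."""
    return 1 if record["score"] > 0.5 else 0

holdout = [
    {"group": "A", "score": 0.7}, {"group": "A", "score": 0.4},
    {"group": "B", "score": 0.9}, {"group": "B", "score": 0.8},
]

decisions_by_group: dict[str, list[int]] = {}
for record in holdout:
    decisions_by_group.setdefault(record["group"], []).append(predict(record))

for group, decisions in decisions_by_group.items():
    print(group, sum(decisions) / len(decisions))
# A large gap between the printed rates would warrant investigation before release.
```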

77
Q

What is the concept of augmented intelligence?

A

Augmented intelligence refers to AI systems that support human decision-making by providing insights, but where humans are still ultimately responsible for making the final decision.
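A minimal sketch of this pattern, assuming a hypothetical model_recommendation() helper and an arbitrary confidence threshold: the system only recommends, and a person makes the final call.

```python
# Human-in-the-loop sketch: the AI proposes, a human decides.
# model_recommendation() and the 0.9 threshold are illustrative assumptions.

def model_recommendation(case: dict) -> tuple[str, float]:
    """Stand-in for a trained model: returns (suggested_action, confidence)."""
    return ("approve_loan", 0.72)

def decide(case: dict) -> str:
    suggestion, confidence = model_recommendation(case)
    # Low-confidence or high-stakes cases always go to a human reviewer.
    if confidence < 0.9 or case.get("high_stakes", True):
        answer = input(f"Model suggests '{suggestion}' ({confidence:.0%}). Accept? [y/n] ")
        return suggestion if answer.strip().lower() == "y" else "escalate_to_committee"
    return suggestion

print(decide({"applicant_id": 42, "high_stakes": True}))
```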

78
Q

What is the main takeaway from the article about AI’s readiness?

A

AI is not yet capable of making fully independent, ethical decisions that involve human values, and humans must remain involved to ensure that AI’s decisions are contextually appropriate.

79
Q

What is the difference between AI and augmented intelligence?

A

AI provides data-driven insights and recommendations, while augmented intelligence involves humans in the decision-making loop to ensure that decisions are morally, ethically, and contextually sound.

80
Q

What is the risk of AI being used in sensitive decision-making areas like elections or healthcare?

A

AI’s lack of empathy, ethical reasoning, and understanding of human context can lead to biased or harmful decisions, especially in areas with significant social consequences.

81
Q

What is ethics?

A
  1. Moral principles that govern a person’s decision-making and behavior
  2. A practice-oriented subdiscipline of philosophy that deals with moral principles and their systematic study
82
Q

What is “ethos”?

A

Ethos is the Greek word for character, custom, or habit (and the root of “ethics”).
In rhetoric (the art of effective speaking and writing), there are three ways to persuade an audience:
* ethos (credibility/trustworthiness/authority of the speaker)
* pathos (emotional appeal)
* logos (appeal to reason)

83
Q

Why is it difficult to implement “fairness” into algorithms?

A

Different interpretations of fairness are possible (see the sketch after this list):
* equal resources? (Equality)
* equal opportunities? (Equity)
* random allocation?
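A minimal sketch of how these interpretations diverge when splitting the same fixed budget; the people, their "need" values, and the three rules below are illustrative assumptions, not a standard definition:

```python
import random

# Illustrative only: three different "fair" ways to split a fixed budget.
# The notion of "need" used by the equity rule is an assumption for illustration.
people = {"A": {"need": 1}, "B": {"need": 3}}
budget = 100

def equality(people, budget):
    """Equal resources: everyone gets the same share."""
    return {p: budget / len(people) for p in people}

def equity(people, budget):
    """Equal opportunities: shares proportional to need."""
    total_need = sum(v["need"] for v in people.values())
    return {p: budget * v["need"] / total_need for p, v in people.items()}

def lottery(people, budget):
    """Random allocation: one person receives everything."""
    winner = random.choice(list(people))
    return {p: budget if p == winner else 0 for p in people}

print(equality(people, budget))  # {'A': 50.0, 'B': 50.0}
print(equity(people, budget))    # {'A': 25.0, 'B': 75.0}
print(lottery(people, budget))   # depends on the draw
```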

84
Q

What is an “ethical dilemma”?

A

An ethical dilemma is a decision-making problem involving two (or more) moral imperatives, neither of which is unambiguously acceptable or preferable. Each available option may involve some form of compromise or sacrifice, making it hard to determine the “correct” course of action.
* No clear right or wrong
* Necessity of compromise
* Conflicting principles
* Significant consequences (e.g. on wellbeing)
* Often high complexity

85
Q

What is one criticism of utilitarianism?

A

One criticism is that the consequences of an action are often unclear and cannot be reliably predicted in advance.

86
Q

What is the method of Moral Dilemma Discussion?

A

The Moral Dilemma Discussion (MDD) method was introduced by Blatt & Kohlberg (1975) and further developed into the Konstanz Method of Dilemma Discussion (KMDD) by Lind (2003).
Basic assumption: moral reasoning can be trained by discussing moral dilemmas and weighing arguments against each other in groups whose participants feel safe to express their views.
Its positive effect on fostering moral-democratic competencies has been supported in many studies.

87
Q

What is the standardized procedure for the (Konstanz) Method of Dilemma Discussion?

A
  1. Read the dilemma story
  2. Form an individual opinion and collect arguments
  3. Build a pro and a con group (e.g. based on a first vote)
  4. The pro and con groups sit opposite each other and exchange arguments ping-pong style
  5. Second vote: could anyone be convinced by the opposing side?