Moral Decision Making & Ethical Dilemmas Flashcards
What are normative ethical theories?
Normative ethical theories are frameworks that define standards for right and wrong actions, guiding ethical behaviour.
Name two normative ethical theories
Utilitarianism (Jeremy Bentham) and Deontology (Immanuel Kant).
What is consequentialism?
Consequentialism is the view that the morality of an action depends on its outcomes or consequences.
Is utilitarianism a consequentialist theory?
Yes, utilitarianism is a consequentialist theory because it judges actions based on their consequences.
How does a deontologist view stealing?
A deontologist considers stealing inherently wrong, regardless of the consequences.
How does a utilitarian view stealing?
A utilitarian evaluates stealing based on the context and outcomes, such as whether it increases overall well-being.
What is hedonism in the context of utilitarianism?
Hedonism is the pursuit of pleasure and the avoidance of pain as the ultimate moral goals in utilitarianism.
Does deontology rely on hedonism?
No, deontology rejects hedonism and bases morality on duty and universal principles rather than pleasure or pain.
What does Immanuel Kant say about morality and pleasure?
Kant argues that morality should not depend on pleasure or emotions; it should be based on rational duty.
Is utilitarianism objective or subjective?
Utilitarianism is subjective, as it depends on how well-being is defined and who is included in the moral community.
Is deontology objective?
Yes, deontology is objective, applying universally and absolutely to everyone.
Give an example of universality in deontology.
The rule “stealing is always wrong” applies to all people, at all times, regardless of circumstances.
Why is utilitarianism not absolute?
Utilitarianism is not absolute because it varies based on context, consequences, and subjective definitions of well-being.
Who is the founder of utilitarianism?
Jeremy Bentham.
Who developed deontology?
Immanuel Kant.
How do Jeremy Bentham and Immanuel Kant differ in their ethical theories?
Bentham focuses on maximizing happiness through consequences (utilitarianism), while Kant emphasizes duty and universal moral rules (deontology).
What is the classic Trolley Problem?
A dilemma where a runaway trolley is heading toward five workers, and you must decide whether to flip a switch to divert it onto another track, killing one worker instead.
What ethical theories are tested in the Trolley Problem?
Utilitarianism (maximizing well-being) and deontology (prohibiting harm).
How do most people respond to the classic Trolley Problem?
About 90% agree it’s acceptable to flip the switch to save five lives at the expense of one.
How does the bridge variation of the Trolley Problem differ from the classic scenario?
Instead of flipping a switch, you must push a man off a bridge to stop the trolley and save five workers.
Bridge variation of the Trolley Problem: Why do fewer people agree with pushing the man in the bridge scenario?
Pushing someone feels more personal and activates a stronger emotional aversion to directly causing harm.
Bridge variation of the Trolley Problem: What percentage of people think pushing the man is acceptable?
About 10%.
How does utilitarianism approach the Trolley Problem?
Utilitarianism supports actions that maximize well-being, such as sacrificing one person to save five.
Trolley problem: What is the utilitarian justification for flipping the switch?
Saving five lives creates greater overall happiness, outweighing the loss of one life.
Why does utilitarianism view the bridge and classic scenarios as the same?
Both involve sacrificing one life to save five, which aligns with maximizing overall well-being.
Why do the classic and bridge scenarios activate different brain responses?
The bridge scenario feels more personal, intensifying emotional responses and internal conflict.
What factors influence moral judgment in the Trolley Problem?
Gender, context (e.g., having just watched a comedy clip), and biases such as a greater willingness to sacrifice men than women.
What area of the brain is affected by the bridge scenario?
Regions involved in emotional and moral processing, such as the medial prefrontal cortex, show heightened activity.
How does the Trolley Problem relate to autonomous vehicles?
Driverless cars may face dilemmas like causing minor harm to prevent major accidents.
What is the ethical challenge in programming military drones?
Deciding whether to risk civilian casualties to eliminate a high-value target.
Why is it important to program ethics into autonomous systems?
To ensure that machines make morally acceptable decisions in complex, real-world situations.
Who devised the Trolley Problem?
Philosopher Philippa Foot in 1967.
Why has the Trolley Problem been criticized?
Critics argue it is unrealistic and that participants may not take it seriously.
How has new technology made the Trolley Problem relevant today?
Ethical dilemmas in autonomous systems, like driverless cars and drones, mirror the hypothetical scenarios posed by the Trolley Problem.
How do we make moral decisions in daily life?
We make countless moral decisions throughout the day, often without being fully aware of them, such as deciding how to act in an emergency.
Why is moral decision-making more complex for autonomous vehicles?
Autonomous vehicles need to make moral decisions in life-or-death situations ahead of time, based on programmed ethical guidelines.
What challenges did traditional research face in studying moral decisions?
The vast number of possible scenarios made it impractical to study using traditional methods, and cultural differences further complicated the research.
How did the team address these challenges in the Moral Machine Experiment?
They turned the moral dilemma into an online task, collecting data from millions of participants worldwide through a viral platform.
What was the goal of the Moral Machine Experiment?
The goal was to gather public opinions on moral decisions that autonomous vehicles might face, exploring who should be saved in various accident scenarios.
Moral Machine Experiment: How many moral decisions were gathered in the experiment?
Nearly 40 million moral decisions were collected from participants across 233 countries and territories.
Moral Machine Experiment: What are the three key principles that held true across cultures in the experiment?
1) Save humans
2) Save the greater number
3) Save children
Moral Machine Experiment: What does the principle “Save the greater number” imply?
People generally prefer to sacrifice a few individuals to save a larger group of people.
Moral Machine Experiment: Why is there a strong preference for saving children?
There is a cultural tendency to prioritize the preservation of young lives, viewing children as more vulnerable and deserving of protection.
Moral Machine Experiment: What are the three cultural clusters identified in the experiment?
1) Western countries
2) Eastern countries
3) Southern countries (Latin America & former French colonies).
Moral Machine Experiment: How did Eastern countries differ in their moral preferences?
Eastern countries showed greater respect for older individuals and did not prioritize saving children as strongly as Western countries.
Moral Machine Experiment: Were there some gender preferences?
In certain regions, particularly French-speaking countries, people showed a strong preference for saving women over men in life-or-death scenarios.
Moral Machine Experiment: What was found regarding social status?
In countries with higher economic inequality, people were more likely to spare executives over homeless individuals in moral decision-making.
Why is it important to align AI with human values in moral decision-making?
To ensure that AI systems, such as autonomous vehicles, make decisions that align with ethical standards and respect cultural differences.
What challenge does cultural diversity pose to programming ethics in AI?
AI systems need to account for diverse cultural values, as people from different countries may have varying moral preferences.
Moral Machine Experiment: What concern does the experiment raise regarding autonomous vehicles across borders?
There is concern that autonomous vehicles might require different ethical settings in different countries, leading to confusion or inconsistency in moral decision-making.
Who conducted the Moral Machine Experiment?
The experiment was conducted by researchers at the MIT Media Lab and collaborators studying the moral decisions expected of autonomous vehicles.
What is the relevance of the Moral Machine Experiment to artificial intelligence?
The experiment provides insights into how AI systems, such as driverless cars, should be programmed to make ethical decisions in culturally diverse settings.
What is moral reasoning in humans?
Moral reasoning involves evaluating actions based on principles of right and wrong, good and bad, which are influenced by cognitive and emotional processes.
What is the purpose of the Moral Turing Test?
The Moral Turing Test aims to determine whether people can distinguish between moral judgments made by humans and those made by AI systems like Large Language Models.
How is the Moral Turing Test similar to the original Turing Test?
Like the original Turing Test, the Moral Turing Test asks whether people can tell whether responses come from a human or a machine, but specifically regarding moral reasoning.
What are some examples of moral dilemmas faced by AI?
Examples include self-driving cars making decisions in crash scenarios and AI systems determining which patients should receive organ transplants.
How do AI systems handle moral dilemmas?
AI systems typically make decisions based on algorithms, often prioritizing utilitarian outcomes that maximize benefits for the greatest number.
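A minimal sketch of what such a utilitarian-style decision rule could look like, assuming made-up well-being scores and option names (this is an illustration, not a description of any real autonomous-vehicle system):

```python
# Hypothetical utilitarian-style decision rule: pick the option whose
# expected outcome maximizes aggregate well-being. All names and numbers
# below are illustrative assumptions.
from typing import Dict, List


def utilitarian_choice(options: Dict[str, List[float]]) -> str:
    """Return the option whose summed well-being score is highest."""
    return max(options, key=lambda name: sum(options[name]))


if __name__ == "__main__":
    # Each option maps to the well-being impact on every affected person
    # (positive = benefit, negative = harm); values are invented.
    crash_options = {
        "swerve": [-1.0, +0.8, +0.8, +0.8],          # harms one bystander, spares three passengers
        "stay_on_course": [+0.9, -0.7, -0.7, -0.7],  # spares the bystander, harms the passengers
    }
    print(utilitarian_choice(crash_options))  # -> "swerve"
```

Note how the rule only compares aggregate scores; it has no notion of duties or prohibitions, which is exactly where deontological objections apply.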
What is utilitarianism?
Utilitarianism is a moral theory that suggests an action is morally right if it produces the greatest good for the greatest number, regardless of the intrinsic morality of the act itself.
How does AI’s moral reasoning relate to utilitarianism?
AI’s moral reasoning often mirrors utilitarian principles, focusing on maximizing outcomes for the greater good, even in morally challenging scenarios.
Why are LLMs like ChatGPT often perceived as morally intelligent?
LLMs are readily available, prompt in their responses, and highly sophisticated, which leads people to overestimate their moral reasoning abilities.
What are the limitations of AI’s moral reasoning?
While LLMs may seem rational and intelligent, they lack true understanding of morality, and their decisions are based on imitation, not ethical reasoning.
What did the results of the Moral Turing Test suggest?
The results suggested that AI responses were perceived as more rational and intelligent than human responses, but this does not mean that AI is genuinely morally intelligent.
Why should we be cautious about AI’s perceived moral intelligence?
AI mimics moral reasoning without truly understanding it, and uncritically trusting AI’s advice could lead to harmful consequences.
Why is human oversight necessary in AI decision-making?
As AI becomes more involved in moral decision-making, human oversight ensures that AI’s actions align with human values and ethical standards.
What are the environmental concerns related to AI?
AI models, especially large ones, have a significant carbon footprint due to the energy required for training and processing, contributing to greenhouse gas emissions.
What is the main limitation of AI in decision-making?
AI excels at data-driven decisions but struggles with moral, ethical, and empathetic considerations that are crucial in real-world situations.
How does the “trolley problem” relate to AI decision-making?
The trolley problem illustrates a moral dilemma that AI cannot easily resolve, as it requires balancing human emotions, ethics, and the consequences of a decision—factors that AI cannot fully comprehend.
Can AI make decisions like a human in subjective scenarios?
No, AI struggles to replicate human empathy, ethics, and moral judgment, which are vital in complex decision-making scenarios.
What is an example of a fatal mistake made by AI?
In a self-driving car incident, an Uber test vehicle failed to correctly identify a jaywalking pedestrian in time, resulting in a fatality. This highlights AI’s inability to fully understand human context in real-world situations.
How did Amazon’s AI recruitment tool show bias?
The AI tool was trained on historical data that was male-dominated, leading the system to favor male candidates and penalize those with female-oriented activities on their resumes.
What was the issue with Microsoft’s chatbot, Tay?
Tay, a self-learning AI chatbot, started making racist and derogatory remarks after learning from interactions with Twitter users. This demonstrated AI’s inability to filter harmful content and engage in ethical reasoning.
Why is it dangerous for AI to make decisions in sensitive areas like healthcare?
AI may make harmful or inappropriate suggestions due to its lack of understanding of human emotions or context, as shown by an AI chatbot suggesting suicidal behavior to a patient.
What role do leaders play in ensuring AI decisions are ethical?
Leaders must foster a culture of ethics, remove bias from data, and ensure human involvement in AI decision-making to prevent harmful or biased outcomes.
What does “keeping humans in the loop” mean in the context of AI?
It refers to the need for human oversight in AI decision-making processes, ensuring that AI decisions align with ethical and human values.
How can data bias be mitigated in AI systems?
By carefully analyzing and cleansing the data used to train AI models to remove implicit biases related to gender, race, or other identities.
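One hedged sketch of what a single cleansing step could look like: rebalancing training examples across a sensitive attribute so no group dominates. The field names and records are made-up assumptions, and real bias mitigation involves far more than resampling:

```python
# Illustrative bias-mitigation step: oversample under-represented groups so
# every value of a sensitive attribute is equally frequent in the training set.
import random
from collections import defaultdict


def rebalance(records, attribute, seed=0):
    """Oversample minority groups until all groups match the largest one."""
    random.seed(seed)
    groups = defaultdict(list)
    for record in records:
        groups[record[attribute]].append(record)
    target = max(len(group) for group in groups.values())
    balanced = []
    for group in groups.values():
        balanced.extend(group)
        balanced.extend(random.choices(group, k=target - len(group)))
    return balanced


resumes = [
    {"gender": "male", "label": 1}, {"gender": "male", "label": 0},
    {"gender": "male", "label": 1}, {"gender": "female", "label": 1},
]
print(len(rebalance(resumes, "gender")))  # 6: three examples per gender
```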
Why is it important to validate algorithms before deployment?
Algorithms should be tested for unintended consequences or biases before being used in real-world applications, ensuring that they perform as expected without causing harm.
What is the concept of augmented intelligence?
Augmented intelligence refers to AI systems that support human decision-making by providing insights, but where humans are still ultimately responsible for making the final decision.
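A small sketch of this idea, under the assumption of a hypothetical model with a confidence score and an arbitrary review threshold (none of these names or values come from the source):

```python
# Hypothetical "augmented intelligence" flow: the model only recommends;
# low-confidence cases are escalated to a human reviewer.
from typing import Tuple


def ai_recommend(case: str) -> Tuple[str, float]:
    """Stand-in for a real model: returns (recommendation, confidence)."""
    return "approve", 0.62


def decide(case: str, review_threshold: float = 0.8) -> str:
    recommendation, confidence = ai_recommend(case)
    if confidence >= review_threshold:
        return recommendation  # still logged for later human audit
    # Below the threshold, the system does not act on its own.
    return f"escalate to human reviewer (AI suggested '{recommendation}')"


if __name__ == "__main__":
    print(decide("loan application #123"))
```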
What is the main takeaway from the article about AI’s readiness?
AI is not yet capable of making fully independent, ethical decisions that involve human values, and humans must remain involved to ensure that AI’s decisions are contextually appropriate.
What is the difference between AI and augmented intelligence?
AI provides data-driven insights and recommendations, while augmented intelligence involves humans in the decision-making loop to ensure that decisions are morally, ethically, and contextually sound.
What is the risk of AI being used in sensitive decision-making areas like elections or healthcare?
AI’s lack of empathy, ethical reasoning, and understanding of human context can lead to biased or harmful decisions, especially in areas with significant social consequences.
What is ethics?
- Moral principles that govern a person’s decision-making and behavior
- A practice-oriented subdiscipline of philosophy that deals with moral principles and their systematic study
What is “ethos”?
Ethos is the Greek word for character, custom, or habit, and the root of the word “ethics”.
In rhetoric (the art of effective speaking and writing), there are three ways to persuade an audience:
* ethos (credibility/trustworthiness/authority of the speaker)
* pathos (emotional appeal)
* logos (reasoning appeal).
Why is it difficult to implement “fairness” into algorithms?
Different interpretations of fairness are possible (see the sketch after this list):
* Equal resources for everyone? (equality)
* Equal opportunities, i.e. resources according to need? (equity)
* Random allocation?
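A minimal sketch of why this matters in practice: the same budget allocated under an equality rule versus an equity rule yields different results. The names, needs, and numbers are invented for illustration:

```python
# Illustrative contrast between two fairness interpretations.

def allocate_equally(budget: float, needs: dict) -> dict:
    """Equality: everyone receives the same share, regardless of need."""
    share = budget / len(needs)
    return {person: share for person in needs}


def allocate_equitably(budget: float, needs: dict) -> dict:
    """Equity: shares are proportional to each person's need."""
    total_need = sum(needs.values())
    return {person: budget * need / total_need for person, need in needs.items()}


needs = {"Ann": 1.0, "Ben": 3.0}          # Ben needs three times as much support
print(allocate_equally(100, needs))       # {'Ann': 50.0, 'Ben': 50.0}
print(allocate_equitably(100, needs))     # {'Ann': 25.0, 'Ben': 75.0}
```

An algorithm must commit to one of these rules (or another one entirely), which is precisely the difficulty of implementing “fairness”.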
What is an “ethical dilemma”?
An ethical dilemma is a decision-making problem involving two (or more) conflicting moral imperatives, neither of which is unambiguously acceptable or preferable. Each available option involves some form of compromise or sacrifice, making it hard to determine the “correct” course of action.
* No clear right or wrong
* Necessity of compromise
* Conflicting principles
* Significant consequences (e.g. on wellbeing)
* Often high complexity
What is one criticism of utilitarianism?
Consequences are often unclear and cannot be reliably predicted in advance, so it is hard to know which action will actually maximize well-being.
What is the Method of Moral Dilemma Discussion?
The Method of Moral Dilemma Discussion (MDD) was introduced by Blatt & Kohlberg (1975) and further developed into the Konstanz Method of Dilemma Discussion (KMDD) by Lind (2003).
Basic assumption: moral reasoning can be trained by discussing moral dilemmas and weighing arguments against each other in groups whose participants feel safe to express their arguments.
Its positive effect on fostering moral-democratic competencies has been supported by many studies.
What is the standardized procedure for the (Konstanz) Method of Dilemma Discussion?
- Read the dilemma story
- Form an individual opinion and collect arguments
- Build a pro group and a con group (e.g. based on a first vote)
- The pro and con groups sit opposite each other and exchange arguments ping-pong style
- Second vote: Has anyone been convinced by the opposing side?