Module 4: Responsible & Ethical AI Flashcards
What is practical ethics?
Practical ethics is the application of ethical theory and the development of good practices to solve real-world moral dilemmas. The goal of practical ethics is to provide concrete guidance for moral decision making and problem solving.
What are key aspects of practical ethics?
Applied focus. Practical ethics aims to address actual ethical dilemmas that arise in domains like medicine, law, politics, business, and now AI. It can offer recommendations, not just theories.
Multidisciplinary. Practical ethics draws on moral philosophy, social sciences like psychology and sociology, domain expertise, law, computer science, and other fields to inform the analysis of complex issues.
Context-sensitive. Practical ethics emphasizes that situational nuances matter in moral decision making.
Pluralistic approach. Different ethical frameworks like consequentialism, deontology, and virtue ethics can provide insights for evaluating issues. Practical ethics may blend these approaches.
How can a company benefit from incorporating guidance from practical ethics?
(1) Building trust and reputation
(2) Avoiding scandals
(3) Attracting and retaining talent
(4) Strengthening culture
(5) Supporting risk management
(6) Encouraging and guiding innovation
(7) Promoting long-term thinking
Which three ethical frameworks are applied in the context of AI?
(1) Consequentialism
(2) Deontology
(3) Virtue Ethics
Explain consequentialism
Consequentialism is an ethical theory that judges the morality of an action based on the consequences of that action.
At its core, consequentialism focuses on the outcomes or results of an action to determine whether it is right or wrong. The most common form is utilitarianism, which aims to maximize overall utility. Utility is often defined in terms of pleasure, happiness, or the satisfaction of desires. Under utilitarianism, the morally correct action in any situation is the one that produces the greatest net utility for all affected. Utilitarianism is forward-looking, circumstantially relative, and focused on end consequences.
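To make the utilitarian calculus concrete, here is a toy sketch in Python. The actions, affected parties, and utility numbers are all invented for illustration and carry no claim about how utility should actually be measured.

```python
# Toy utilitarian comparison. All actions, parties, and utility
# values are hypothetical, purely to illustrate the arithmetic.
actions = {
    "deploy_model_now": {"users": 50, "moderators": -10, "company": 20},
    "delay_for_audit": {"users": 30, "moderators": 5, "company": -5},
}

def net_utility(effects):
    # Act utilitarianism: sum the (signed) utility for every affected party.
    return sum(effects.values())

totals = {action: net_utility(effects) for action, effects in actions.items()}
print(totals)                       # {'deploy_model_now': 60, 'delay_for_audit': 30}
print(max(totals, key=totals.get))  # the utility-maximizing action
```

The shortcomings discussed in the next card show up immediately: choosing the numbers is the hard, subjective part, and the arithmetic itself says nothing about who bears the losses.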
What are advantages/disadvantages of consequentialism?
A key advantage of consequentialism is that it provides a single, quantifiable metric for determining moral value, and utilitarian calculations aim to be impartial, objective, and amenable to measurement. One shortcoming, however, is that the choice of metric is highly subjective, and quantifying value can be challenging. Moreover, consequences are often unpredictable, and it is unclear where the calculation should stop.
What is a common critique of consequentialism?
Critics of utilitarianism and other forms of consequentialism argue that always maximizing utility can lead to actions that many consider immoral, like severely violating individual rights for the greater good. Utilitarianism struggles with situations in which utility is maximized by something most would consider unethical.
How do consequentialists counteract the common critique?
In response, some consequentialists grant moral weight to following general moral rules (as opposed to acts) that tend to maximize utility. Rule consequentialists judge acts by whether they adhere to utility-maximizing rules, not only by their case-specific outcomes.
Explain deontology
Deontology is an ethical framework that judges the morality of actions based on adherence to ethical duties and rules rather than focusing on consequences.
Explain the categorical imperative
The categorical imperative, a fundamental principle of Kantian ethics, asserts that one should act only according to maxims that could be willed as a universal law without contradiction, such as, “It’s wrong to lie.”
Morality, in Kant’s view, is grounded in reason and the intrinsic value of individuals, providing a principled foundation for ethical behavior.
Explain virtue ethics
Virtue ethics emphasizes virtuous character traits and living a good life, rather than rules or consequences. The key question in virtue ethics is, “What kind of person should I be?” Rather than focusing on universal duties or maximizing utility, virtue ethicists ask what character traits we should cultivate to live well.
Virtue ethicists believe we should aspire to ideals of human excellence. Virtues are nurtured through practice, habit, and modeling virtuous exemplars. One way to approach ethical dilemmas from a virtue ethics point of view is to ask what a virtuous agent would do in a particular situation.
What critiques are leveraged against virtue ethics?
Critics argue that virtue ethics lacks clear guidance for moral decisions compared to duty-based or consequentialist approaches. Different virtues can also conflict, and it is unclear how they should be weighed against one another.
How do proponents of virtue ethics respond to criticism?
Virtue ethicists counter that practical wisdom helps navigate hard cases, and that they are at no disadvantage relative to other theories: moral duties can also conflict, and consequences are not always comparable. Virtue ethics also integrates well with common morality, given that most people seem to learn about morality through habituation in the context of socialization.
How is virtue ethics applied to organizations/society?
For organizations and societies, virtues might include justice, accountability, environmental stewardship, and responsible innovation. Virtue ethics is seeing renewed interest across disciplines like moral psychology and business ethics. In the context of AI, some scholars have suggested that to build AI that is ethical, we must build it in a virtue ethics way, which would mean having it learn from experience and habit, as children do. Otherwise, morality may be so complex that we could never code it in a top-down approach.
Which strand of practical ethics does AI ethics draw on, and why?
Medical ethics
Ethical concerns have a long history in the field of medicine, given its direct involvement in matters of life and death. AI ethics has largely inherited the core principles of medical ethics, covered in the following cards: nonmaleficence, beneficence, justice, and autonomy, often with explainability added as a fifth.
What is nonmaleficence?
The principle of nonmaleficence asserts an obligation to avoid harming others or inflicting injuries. Part of what it means to avoid harming others is a prohibition on imposing risks of harm that are not justified or that outweigh potential benefits.
In other words, not only should you not go around hurting others (e.g., subjecting them to algorithms that can harm them), but you should also not impose unnecessary or unjustified risk on others (e.g., subject them to untested algorithms that could be harmful).
Importantly, it matters who is making the decision, and who will bear the brunt of the harm if things go badly. It is more ethically acceptable to impose risks on people who stand to benefit from whatever the proposed action is.
What is beneficence?
The principle of beneficence refers to the moral obligation to act for the benefit of others. Beneficence requires taking positive steps to help others, rather than simply refraining from harm.
A common misunderstanding is that beneficence is solely an outcome-focused, consequentialist concept. However, duty-based deontological frameworks also include beneficence as an obligation we must fulfill, independent of whether doing so maximizes utility.
There are limits to the duty of beneficence, however. No one individual can alleviate all suffering in the world, so reasonable constraints apply. Considerations like scarce resources, competing obligations, reasonableness, and demandingness (i.e., there’s only so much ethics can demand of individuals) should factor into determining the extent of our duty of beneficence. Additionally, the recipient’s right to autonomy may preclude unwanted “benefits” that disrespect personal agency and choice.
What is justice?
Justice refers to the moral obligation to act in accordance with principles of fairness, equality, impartiality, and proportionality. In ethics, justice requires giving each person his or her proper due while upholding duties toward fairness and equality.
There are different concepts of justice. Procedural justice demands fair processes and impartiality. Distributive justice focuses on equitable allocation of benefits and burdens in society. Restorative justice aims to repair harms through reconciling victims and offenders. Interactional justice concerns respect and fairness between individuals. Social justice refers to just institutions in society that provide for basic rights and needs.
Justice is concerned with ensuring human rights are respected, resources are distributed equitably, opportunities are available to all, the law is applied impartially, and no one is discriminated against unfairly. Violations of justice may lead to human rights abuses, discrimination, corruption, inequality, and exploitation of vulnerable groups.
However, there are debates around what constitutes a just distribution of goods or a fair process. Different principles of justice - like egalitarianism, utilitarianism, meritocracy, or need-based allocation - can conflict. There are also disagreements around what goods justice should be concerned with distributing, like resources, opportunities, power, or welfare.
What is autonomy?
Autonomy refers to the capacity of people to make their own informed, un-coerced decisions about their lives and actions. As an ethical principle, autonomy commands respecting and supporting others’ abilities to determine their own course in life.
Autonomy has roots in humanistic and existentialist traditions. It depends on capacities for self-awareness, independent decision making, critical reflection, and personal freedom. Infringing on someone’s autonomy contravenes her right to direct her own life.
What is explainability?
The ethical principle of explainability (sometimes called explicability) has gained significant attention in the context of AI. It refers to the idea that AI systems, especially those with decision-making capabilities, should provide transparent and understandable explanations for their actions or decisions.
One reason explainability is thought to be important is accountability: if an AI system causes harm or makes a questionable decision, understanding how it reached that decision is necessary for assigning responsibility and correcting the problem.
Explainability is also thought to further trust. Trust is a fundamental component of the adoption and acceptance of AI technologies. If users or stakeholders cannot understand how a system reaches its conclusions, they are less likely to trust it.
What counts as a good explanation will likely vary depending on the kind of AI, but one popular approach is to develop counterfactual explanations, which describe how the input would have had to differ for the system to reach a different decision (e.g., "the loan would have been approved if the applicant's income were higher").
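As a minimal sketch of the counterfactual idea, assuming a hypothetical two-feature credit model (nothing here comes from a real system), one can search for a small input change that flips a black-box decision:

```python
import numpy as np

def model(x):
    # Hypothetical approval rule: approve (1) if the weighted score exceeds 60.
    return int(0.5 * x[0] + 0.5 * x[1] > 60)

def counterfactual(x, step=1.0, max_iter=1000):
    # Greedily nudge the cheapest (smallest) feature upward until the decision flips.
    x_cf = x.copy()
    for _ in range(max_iter):
        if model(x_cf) != model(x):
            return x_cf  # closest flipping input found by this greedy search
        x_cf[np.argmin(x_cf)] += step
    return None  # no counterfactual found within the search budget

applicant = np.array([50.0, 40.0])  # [income, savings]; currently rejected
print(counterfactual(applicant))    # [61. 60.]: "approved if income were 61 and savings 60"
```

Part of the appeal of counterfactuals is that they are actionable and understandable without exposing the model's internals.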
Provide an overview of different types of explainability
Local explainability focuses on explaining the decisions of a specific AI model on a single instance or prediction. Local explanations provide insights into why a particular decision was made for a particular case.
Global explainability looks at an AI model’s overall behavior and decision-making processes. It provides a more comprehensive understanding of how the model operates across various inputs.
Model-specific explainability techniques are tailored to a particular model architecture. For example, decision trees yield intuitive rules for explaining their decisions, whereas deep neural networks may require different methods. In contrast, model-agnostic methods are designed to work with any AI model, making them more versatile: they treat the model as a black box and do not rely on its specific architecture or algorithms.
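A minimal sketch of a model-agnostic, local explanation (simplified, in the spirit of perturbation-based methods such as LIME, with an invented stand-in model): score each feature by how much the prediction changes when that feature is ablated.

```python
import numpy as np

def black_box(x):
    # Stand-in for any opaque model; only its predictions are used.
    return 1.0 / (1.0 + np.exp(-(2.0 * x[0] - 1.0 * x[1] + 0.5 * x[2])))

def local_importance(predict, x):
    # Model-agnostic and local: perturb one feature of this single input
    # at a time and record how far the prediction moves.
    base = predict(x)
    scores = np.zeros(len(x))
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] = 0.0  # ablate feature i
        scores[i] = abs(base - predict(x_pert))
    return scores

x = np.array([1.0, 2.0, 0.5])
print(local_importance(black_box, x))  # higher score = more influential for this input
```

Because only predict() is called, the same routine works for a decision tree, a neural network, or any other model; averaging such scores over many inputs is one way to move from a local explanation toward a global picture of the model's behavior.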