Module 4: Responsible & Ethical AI Flashcards

1
Q

What is practical ethics?

A

Practical ethics is the application of ethical theory and the development of good practices to solve real-world moral dilemmas. The goal of practical ethics is to provide concrete guidance for moral decision making and problem solving.

2
Q

What are key aspects of practical ethics?

A

Applied focus. Practical ethics aims to address actual ethical dilemmas that arise in domains like medicine, law, politics, business, and now AI. It can offer recommendations, not just theories.

Multidisciplinary. Practical ethics draws on moral philosophy, social sciences like psychology and sociology, domain expertise, law, computer science, and other fields to inform the analyses of complex issues.

Context-sensitive. Practical ethics emphasizes that situational nuances matter in moral decision making.

Pluralistic approach. Different ethical frameworks like consequentialism, deontology, and virtue ethics can provide insights for evaluating issues. Practical ethics may blend these approaches.

3
Q

How can a company benefit from incorporating guidance from practical ethics?

A

(1) Building trust and reputation
(2) Avoiding scandals
(3) Attracting and retaining talent
(4) Strengthening culture
(5) Supporting risk management
(6) Encouraging and guiding innovation
(7) Promoting long-term thinking

4
Q

Which three ethical frameworks are applied in the context of AI?

A

(1) Consequentialism
(2) Deontology
(3) Virtue Ethics

5
Q

Explain consequentialism

A

Consequentialism is an ethical theory that judges the morality of an action based on the consequences of that action.

At its core, consequentialism focuses on the outcomes or results of an action to determine whether it is right or wrong. The most common form is utilitarianism, which aims to maximize overall utility. Utility is often defined in terms of pleasure, happiness, or the satisfaction of desires. Under utilitarianism, the morally correct action in any situation is the one that produces the greatest net utility for all affected. Utilitarianism is forward-looking, circumstantially relative, and focused on end consequences.

6
Q

What are advantages/disadvantages of consequentialism?

A

A key advantage of consequentialism is that it provides a single, quantifiable metric for determining moral value. One shortcoming, however, is that the choice of metric is highly subjective, and quantification of value can be challenging. Utilitarian calculations aim to be impartial, objective, and amenable to scientific measurement of utility. However, consequences are often unpredictable, and it is unclear where the calculation should stop.

7
Q

What is a common critique of consequentialism?

A

Critics of utilitarianism and other forms of consequentialism argue that always maximizing utility can lead to actions that many consider immoral, like severely violating individual rights for the greater good. Utilitarianism struggles with situations in which utility is maximized by something most would consider unethical.

8
Q

How do consequentialists counteract the common critique?

A

In response, some consequentialists grant moral weight to following general moral rules (as opposed to acts) that tend to maximize utility. Rule consequentialists judge acts by whether they adhere to utility-maximizing rules, not only by their case-specific outcomes.

9
Q

Explain deontology

A

Deontology is an ethical framework that judges the morality of actions based on adherence to ethical duties and rules rather than focusing on consequences.

10
Q

Explain the categorical imperative

A

The categorical imperative, a fundamental principle of Kantian ethics, asserts that one should act only according to maxims that could be willed as a universal law without contradiction, such as, “It’s wrong to lie.”

Morality, in Kant’s view, is grounded in reason and the intrinsic value of individuals, providing a principled foundation for ethical behavior.

11
Q

Explain virtue ethics

A

Virtue ethics emphasizes virtuous character traits and living a good life, rather than rules or consequences. The key question in virtue ethics is, “What kind of person should I be?” Rather than focusing on universal duties or maximizing utility, virtue ethicists ask what character traits we should cultivate to live well.

Virtue ethicists believe we should aspire to ideals of human excellence. Virtues are nurtured through practice, habit, and modeling virtuous exemplars. One way to approach ethical dilemmas from a virtue ethics point of view is to ask what a virtuous agent would do in a particular situation.

12
Q

What critiques are leveled against virtue ethics?

A

Critics argue virtue ethics lacks clear guidance for moral decisions compared to duty-based or consequentialist approaches. Because different virtues can conflict, how to weigh them is unclear.

13
Q

How do proponents of virtue ethics respond to criticism?

A

Virtue ethicists counter that practical wisdom helps navigate hard cases, and that they are at no disadvantage relative to other theories: moral duties can also conflict, and consequences are not always comparable. Virtue ethics also integrates well with common morality, given that most people seem to learn about morality through habituation in the context of socialization.

14
Q

How is virtue ethics applied to organizations/society?

A

Applied to organizations and society, virtues might include justice, accountability, environmental stewardship, and responsible innovation. Virtue ethics is seeing renewed interest across disciplines like moral psychology and business ethics. In the context of AI, some scholars have suggested that to build AI that is ethical, we must build it in a virtue ethics way, which would imply having it learn from experience and habit, as children do. Otherwise, morality is so complex that we might never be able to code it in a top-down approach.

15
Q

Which strand of practical ethics does AI ethics draw on most, and why?

A

Medical ethics

Ethical concerns have a long history in the field of medicine, given its direct involvement in matters of life and death.

16
Q

What is nonmaleficence?

A

The principle of nonmaleficence asserts an obligation to avoid harming others or inflicting injuries. Part of what it means to avoid harming others is a prohibition on imposing risks of harm that are not justified or that outweigh potential benefits.

In other words, not only should you not go around hurting others (e.g., subjecting them to algorithms that can harm them), but you should also not impose unnecessary or unjustified risk on others (e.g., subject them to untested algorithms that could be harmful).

Importantly, it matters who is making the decision, and who will bear the brunt of the harm if things go badly. It is more ethically acceptable to impose risks on people who stand to benefit from whatever the proposed action is.

17
Q

What is beneficence?

A

The principle of beneficence refers to the moral obligation to act for the benefit of others. Beneficence requires taking positive steps to help others, rather than simply refraining from harm.

A common misunderstanding is that beneficence is solely an outcome-focused, consequentialist concept. However, duty-based deontological frameworks include beneficence as an obligation we must fulfill above and beyond what may maximize utility.

There are limits to the duty of beneficence, however. No one individual can alleviate all suffering in the world, so reasonable constraints apply. Considerations like scarce resources, competing obligations, reasonableness, and demandingness (i.e., there’s only so much ethics can demand of individuals) should factor into determining the extent of our duty of beneficence. Additionally, the recipient’s right to autonomy may preclude unwanted “benefits” that disrespect personal agency and choice.

18
Q

What is justice?

A

Justice refers to the moral obligation to act in accordance with principles of fairness, equality, impartiality, and proportionality. In ethics, justice requires giving each person his or her proper due while upholding duties toward fairness and equality.

There are different concepts of justice. Procedural justice demands fair processes and impartiality. Distributive justice focuses on equitable allocation of benefits and burdens in society. Restorative justice aims to repair harms through reconciling victims and offenders. Interactional justice concerns respect and fairness between individuals. Social justice refers to just institutions in society that provide for basic rights and needs.

Justice is concerned with ensuring human rights are respected, resources are distributed equitably, opportunities are available to all, the law is applied impartially, and no one is discriminated against unfairly. Violations of justice may lead to human rights abuses, discrimination, corruption, inequality, and exploitation of vulnerable groups.

However, there are debates around what constitutes a just distribution of goods or a fair process. Different principles of justice, like egalitarianism, utilitarianism, meritocracy, or need-based allocation, can conflict. There are also disagreements around what goods justice should be concerned with distributing, like resources, opportunities, power, or welfare.

19
Q

What is autonomy?

A

Autonomy refers to the capacity of people to make their own informed, un-coerced decisions about their lives and actions. As an ethical principle, autonomy commands respecting and supporting others’ abilities to determine their own course in life.

Autonomy has roots in humanistic and existentialist traditions. It depends on capacities for self-awareness, independent decision making, critical reflection, and personal freedom. Infringing on someone’s autonomy contravenes her right to direct her own life.

20
Q

What is explainability?

A

The ethical principle of explainability (sometimes called explicability) has gained significant attention in the context of AI. It refers to the idea that AI systems, especially those with decision-making capabilities, should provide transparent and understandable explanations for their actions or decisions.

One reason explainability is thought to be important is accountability: if no one can explain a system's decision, it is hard to hold anyone responsible for it.

Explainability is also thought to further trust. Trust is a fundamental component of the adoption and acceptance of AI technologies. If users or stakeholders cannot understand how a system reaches its conclusions, they are less likely to trust it.

What counts as a good explanation will likely vary depending on the kind of AI, but one popular approach is to develop counterfactual explanations, which identify how an input would have to change for the decision to change.
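
As an illustration, below is a minimal sketch of a brute-force counterfactual search in Python. The loan-style decision rule, the feature names, and all numbers are invented for the example; a real system would query a trained model instead of this hand-written rule.

```python
import itertools

# Hypothetical stand-in for a trained model (rule and numbers invented):
# approve the loan (1) when income minus twice the debt exceeds 10.
def predict(income, debt):
    return 1 if income - 2 * debt > 10 else 0

def nearest_counterfactual(income, debt, step=0.5, radius=40):
    """Brute-force counterfactual search: among grid perturbations of the
    two features, return the closest one that flips the decision."""
    original = predict(income, debt)
    deltas = [i * step for i in range(-radius, radius + 1)]
    flips = [(di, dd) for di, dd in itertools.product(deltas, deltas)
             if predict(income + di, debt + dd) != original]
    if not flips:
        return None
    di, dd = min(flips, key=lambda d: d[0] ** 2 + d[1] ** 2)
    return income + di, debt + dd

# For a denied applicant, the result reads as a counterfactual explanation:
# "with this income and debt, other things equal, you would have been approved."
print(nearest_counterfactual(income=15, debt=4))  # -> (15.5, 2.5)
```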

21
Q

Provide an overview of different types of explainability

A

Local explainability focuses on explaining the decisions of a specific AI model on a single instance or prediction. Local explanations provide insights into why a particular decision was made for a particular case.

Global explainability looks at an AI model’s overall behavior and decision-making processes. It provides a more comprehensive understanding of how the model operates across various inputs.

Model-specific explainability refers to the fact that some AI models have specific explainability techniques tailored to their architecture. For example, decision trees have intuitive rules for explaining their decisions, whereas deep neural networks may require different methods. In contrast, model-agnostic methods are designed to work with any AI model, making them more versatile. They don’t rely on the specific architecture or algorithms used in the model.
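
To make the model-agnostic idea concrete, below is a minimal sketch of permutation importance, a global, model-agnostic technique that works for any black-box predict function. The synthetic data and the stand-in model are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the label depends strongly on feature 0, weakly on
# feature 1, and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = (2.0 * X[:, 0] + 0.3 * X[:, 1] > 0).astype(int)

# Stand-in black box; any object exposing predict(X) -> labels would do.
def predict(X):
    return (2.0 * X[:, 0] + 0.3 * X[:, 1] > 0).astype(int)

def permutation_importance(predict, X, y, n_repeats=20):
    """Shuffle one feature at a time and measure the accuracy drop.
    A bigger drop means the model leans on that feature more."""
    baseline = (predict(X) == y).mean()
    drops = []
    for j in range(X.shape[1]):
        accs = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            accs.append((predict(X_perm) == y).mean())
        drops.append(round(float(baseline - np.mean(accs)), 3))
    return drops

print(permutation_importance(predict, X, y))  # feature 0 dominates
```

Because the audit only calls predict, the same code works unchanged for a decision tree or a deep neural network, which is exactly what makes the method model-agnostic.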

22
Q

Why can problem specification lead to bias?

A

An algorithm may exhibit bias from its inception if the goals it is designed to achieve contain inherent problems. Operationalizing complex goals is a nuanced task, and often, the selected target variables may fail to capture real-world objectives accurately.

23
Q

Why can data lead to bias?

A

If they rely on historical data, ML algorithms tend to perpetuate biases from the past; this propensity is commonly known as historical bias. In addition to possibly perpetuating bias related to features such as sex and race, the use of historical data can more broadly lead to inaccurate or malfunctioning algorithms, especially when lab data does not align with real-world trends.

A different but related data challenge is that historical data rarely show counterfactual outcomes. This is called the selective labels problem (a small simulation at the end of this answer illustrates it). For example, a company probably doesn't track the career progression of those it didn't hire; therefore, it will never know whether it indeed hired the best candidate. A bank has data on the people to whom it gave loans, but not on the people to whom it denied loans. Those who were denied might have become even better clients than those who were approved, but because the bank lacks that data, it will continue to select people who resemble past borrowers.

Another related but distinct kind of bias stemming from data is sampling bias, which is well known in science. Sampling bias arises when the data sample is not random; if it is not, the trends shown by the population under study may not generalize to another population.
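
As referenced above, here is a small, hedged simulation of the selective labels problem in lending. The score distribution, the repayment rule, and the approval cutoff are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic applicants: a visible credit score plus a true (usually
# unobserved) repayment outcome. All numbers are invented.
n = 10_000
score = rng.normal(600, 80, size=n)
repays = rng.random(n) < (score - 400) / 400  # repayment odds rise with score

# Legacy policy: only applicants above the cutoff get loans, so outcome
# labels exist only for them -- the selective labels problem.
approved = score > 620

print("repayment rate among approved:", repays[approved].mean())
print("repayment rate among denied:  ", repays[~approved].mean())
# A model trained only on the approved pool never sees the second number,
# so it cannot learn that many denied applicants would have repaid.
```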

24
Q

Why can Modeling, Validation, and Algorithm Design lead to bias?

A

Choices related to optimization functions, the application of different regression models, consideration of subgroups, and how information is presented can all introduce biases.

25
Q

Why can deployment lead to bias?

A

Some studies indicate that when human beings receive a suggestion from a computer, they often opt to defer to the automated system. There are a few hypotheses as to why. It might be because deferring is convenient and time saving. It might also be because automated systems appear more "objective," while people know their own fallibilities, which can create self-doubt. Perhaps deferring to an algorithm shields people from responsibility: at best, responsibility seems shared, whereas going against the recommendation of an algorithm might expose people who make mistakes to harsher judgments and blame. This tendency to do as the algorithm says creates unintended incentive effects, where individuals might relinquish responsibility, allowing them to attribute blame to the algorithm in case of any issues.

26
Q

When does bias count as discrimination?

A

In general, algorithmic bias is likely to lead to discrimination when it results in disfavoring people based on their race, sex, ethnicity, age, or any other classification protected by law. Such disadvantages typically violate legal protections.

27
Q

What is fairness?

A

Fairness entails the absence of bias or preference toward an individual or group based on irrelevant characteristics, such as their race. An algorithm is considered fair when it does not exhibit problematic biases. There are two primary types of fairness: group and individual fairness.

28
Q

Why is it impossible to achieve full fairness?

A

When base rates between populations differ, which is almost always the case, it is impossible to satisfy demographic parity, predictive rate parity, and equal opportunity simultaneously.

Automating fairness becomes feasible only when base rates are equal, which is seldom the case in reality. Fairness can ultimately be considered a moral or ethical judgment, not a mathematical one, and it can involve making imperfect compromises and trade-offs that might need to change in response to changing circumstances.
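
To see the tension concretely, here is a minimal sketch in Python. The groups, labels, and predictions are toy values chosen so the classifier is deliberately perfect: with base rates of 0.6 vs. 0.3, predictive rate parity and equal opportunity then hold, but demographic parity cannot, and forcing demographic parity would break the other two.

```python
import numpy as np

def rates(y_true, y_pred):
    """Selection rate (demographic parity), precision (predictive rate
    parity), and true positive rate (equal opportunity) for one group."""
    selection = y_pred.mean()
    precision = y_true[y_pred == 1].mean() if y_pred.any() else float("nan")
    tpr = y_pred[y_true == 1].mean() if y_true.any() else float("nan")
    return selection, precision, tpr

# Toy groups with different base rates: 60% vs. 30% true positives.
a_true = np.array([1] * 6 + [0] * 4)
a_pred = a_true.copy()  # perfect predictions for group A
b_true = np.array([1] * 3 + [0] * 7)
b_pred = b_true.copy()  # perfect predictions for group B

print(rates(a_true, a_pred))  # (0.6, 1.0, 1.0)
print(rates(b_true, b_pred))  # (0.3, 1.0, 1.0) -> demographic parity fails
```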

29
Q

What is procedural fairness?

A

Fairness is not only about outcome, but about procedure. Procedural fairness provides reassurances, not only that a fair outcome will be sought, but that it will be sought through impartial and just processes.

In the context of AI ethics, the challenge is to create corporate structures that can carry out both procedural and outcome fairness. For instance, having an ethics committee that can weigh consequentialist, deontological, and virtue ethics considerations to develop and implement best practices can help achieve both outcome and procedural fairness.

30
Q

How can problematic biases and unfairness be avoided?

A

What is most important is to be aware of trade-offs and to make decisions that are justifiable to the population at large, the stockholders, the stakeholders, regulators, and those who lose out.

Technological solutions. Toolkits are being developed to assess fairness properties of a system. Companies can create their own internal "auditing" systems to identify potential biases that their AIs display.

Data quality. Ensuring that data is diverse, updated, accurate, representative, and free from past discriminatory tendencies goes a long way toward avoiding biases, but again, it is not a panacea on its own.

Auditing. Algorithmic auditing is likely the best way to identify and correct biases, and there are private companies that offer this service; a minimal sketch of a selection-rate audit follows this list. Auditing will include technological tools and statistical analyses, but also a fresh and diverse look at your systems.

Ethical committees or similar structures. Instituting forums in which potentially problematic biases can be discussed, and in which decisions about trade-offs can be made in light of consequentialist, deontological, and virtue ethics considerations, can help ensure procedural fairness and contribute to outcome fairness.

Training for board and C-suite. Given the wide range of risks to which a firm might be exposed if AI models are designed and implemented in a manner that is not responsible or ethical, firm leadership should be educated as to the risks associated with AI and the importance of ethical/responsible AI practices to help mitigate those risks.
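
As mentioned under Auditing above, here is a hedged sketch of what a minimal internal selection-rate audit could look like. The four-fifths-style threshold is a common heuristic rather than a legal test, and the function name, groups, and decisions are all hypothetical.

```python
import numpy as np

def audit_selection_rates(groups, decisions, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` times
    the highest group's rate (a four-fifths-style disparity heuristic)."""
    groups = np.asarray(groups)
    decisions = np.asarray(decisions)
    rates = {g: decisions[groups == g].mean() for g in np.unique(groups)}
    top = max(rates.values())
    return {g: {"rate": round(float(r), 3), "flag": r < threshold * top}
            for g, r in rates.items()}

# Hypothetical hiring decisions, ten candidates per demographic group.
groups = ["A"] * 10 + ["B"] * 10
decisions = [1] * 6 + [0] * 4 + [1] * 3 + [0] * 7
print(audit_selection_rates(groups, decisions))
# Group B's 0.3 rate is below 0.8 * 0.6, so it is flagged for human review.
```

A flag here is a prompt for investigation by an ethics committee, not an automatic judgment of discrimination.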

31
Q

What is privacy?

A

In the context of AI, someone has privacy with respect to some person or institution and in reference to some personal data point if that person or institution has no access to that personal data point. In other words, we have privacy to the extent that others don’t have access to our personal information. Privacy is important because, among other things, it protects us from possible abuses of power. The more someone knows about you, the easier it is for them to interfere with your life.

32
Q

Why is privacy an ethical issue?

A

Privacy is an ethical issue because the lack of it can lead to wrongs, harms, and risks for individuals, institutions, and society at large. In ethics, wrongs are sometimes distinguished from harms. Wrongs can sometimes lead to harm, but they are immoral even when they don’t lead to harm.

33
Q

Why do people suffer from privacy violations?

A

Individuals can suffer discrimination, blackmail, exposure and public shaming, identity theft, and the like.

Privacy losses are also a potential liability to institutions. Every personal data point is a potential lawsuit, a potential fine.

Finally, privacy losses can result in harm to society. The extent of personal data collection, for example, has made it relatively easy for anyone (including foreign adversaries) to learn of sensitive information about military personnel or politicians and blackmail them, which can endanger national security and democracy in various ways.

34
Q

What are the most important ethical principles related to privacy and cybersecurity to ensure best practices?

A

Right to privacy. The right to privacy is generally considered a moral right.

Data minimization. Data minimization is the most effective way to protect privacy: collecting only the data that is necessary to fulfill a specific purpose.

Right to be forgotten. The right to be forgotten is a person's right to have private information about them removed from Internet searches and other directories (thereby making it less accessible).

Control over data. Giving people control over their data is another measure that can minimize potential abuses of data.

Contextual integrity. The use of personal data should adhere to contextual norms of privacy. When people give up their data in a particular context, they have certain expectations about how that data will be used.

Data deletion. Personal data should be deleted as soon as it is no longer necessary. Routine data deletion is a way to protect individuals, and it is also a way to keep data accurate.

Data security. Data security is part of complying with due diligence. It's good practice to use all technical tools available to keep data safe, from strong encryption (and strong passwords) to thorough anonymization of data and cryptographic methods such as differential privacy (a minimal sketch follows this list).
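
As one concrete flavor of those cryptographic methods, below is a minimal sketch of the Laplace mechanism from differential privacy applied to a count query. The records, the query, and the epsilon value are illustrative assumptions, not a production design.

```python
import numpy as np

rng = np.random.default_rng(42)

def dp_count(values, predicate, epsilon=0.5):
    """Release a count with Laplace noise. A count has sensitivity 1
    (adding or removing one person changes it by at most 1), so noise
    with scale 1/epsilon gives epsilon-differential privacy."""
    true_count = sum(predicate(v) for v in values)
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical records: user ages. Query: how many users are over 40?
ages = [23, 35, 41, 29, 52, 61, 38, 44]
print(dp_count(ages, lambda a: a > 40))  # noisy answer near the true 4
```

The noise hides any single individual's contribution while keeping aggregate statistics usable.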

35
Q

Why are power asymmetries in legislative/regulatory bodies a problem?

A

Many of the tech companies at the cutting edge of AI are often more powerful, in terms of wealth, influence, and expertise, than some national governments and regulatory agencies. These asymmetries can make it hard for legislative and regulatory bodies to respond effectively to potential risks posed by AI and its uses.

36
Q

Why is institutional opaqueness a problem?

A

Most AI systems have been developed by private companies that may not be subject to the same transparency requirements as public institutions or universities. As a result, the public, academics, journalists, policymakers, and regulatory agencies may have little detailed information about the practices that went into designing and training large language models, for example, from the datasets used to details about whether and how the systems were tested and tuned for safety.

37
Q

Why is algorithmic opaqueness a problem?

A

AI systems commonly use proxies to simplify the training process, representing complex or difficult-to-measure objectives with a simpler, easier-to-measure metric.

For example, an AI system designed to generate news articles might use word count as a proxy for article quality, rather than trying to measure the quality of the content directly. By using proxies, AI systems can make progress toward a goal without needing to optimize directly for the goal itself, which can be a challenging and computationally expensive task.

However, this approach can also introduce issues, such as optimizing for a goal that is not truly aligned with the system's overall objective; and if outside observers, be they academics or regulatory bodies, don't know what proxies the model is using, it is difficult to govern that model.
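
A toy sketch of this proxy mismatch, with all numbers invented: word count stands in for a hidden quality score, and optimizing the proxy selects the wrong article.

```python
# Toy proxy mismatch, all numbers invented: "quality" is the hidden goal,
# word count is the easy-to-measure proxy being optimized.
articles = [
    {"title": "short-but-excellent", "words": 300, "quality": 0.9},
    {"title": "long-but-padded", "words": 1200, "quality": 0.4},
    {"title": "middling", "words": 800, "quality": 0.7},
]

best_by_proxy = max(articles, key=lambda a: a["words"])
best_by_goal = max(articles, key=lambda a: a["quality"])

print(best_by_proxy["title"])  # long-but-padded: the proxy rewards padding
print(best_by_goal["title"])   # short-but-excellent: the actual objective
```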

38
Q

What structural problem does the field of AI ethics face?

A

If we compare AI ethics with medical ethics, the lack of structure seems evident.

Even though AI ethics is gradually becoming mainstream, a computer scientist can still complete an entire education without taking a course in AI ethics, and there is no licensing or certification for the profession. Although boards are increasingly worried about AI risks, it is still rare to see AI ethicists as board members.

39
Q

Which is the only country with national laws regulating AI?

A

China

40
Q

How can unpredictability in AI output be addressed?

A

One potential option to minimize the risk of unpredictability is to subject AIs to randomized controlled trials (RCTs) to test their safety. Another option, complementary to RCTs, is to audit AIs periodically for safety and accuracy.

41
Q

Are LLMs truth telling?

A

Large language models (LLMs) are not based on an understanding of truth or a knowledge of the world. Rather, they make statistical inferences and probabilistic “guesses” to construct responses. Given the input and their training, they are designed to give plausible responses. But plausible responses are not necessarily truthful.

42
Q

What are privacy and copyright related issues AI companies face?

A

When companies scrape data off the internet or collect data from their users, questions arise about whether they have a legitimate claim to that data. There is the worry that the privacy of data subjects has been violated by collecting the personal data of millions of unsuspecting internet users. It is unclear, for example, whether LLMs can comply with Europe's General Data Protection Regulation, as European citizens have the right to ask companies what data they hold on them and to have that data modified or deleted. It is far from clear that the companies that develop and sell access to these LLMs can comply with such requests. Finally, there is the concern that copyright has been violated by LLMs ingesting material like books. At the time of writing, various lawsuits related to these matters are in process.

43
Q

What is the relationship between ethics and law?

A

Ethics is considered a complement to law and necessary to ground, inform, and shape law. Societies tend to regulate behavior according to what they deem morally acceptable. Ethics helps one distinguish between just and unjust laws. Laws, however, are narrow in scope; they typically establish minimal requirements of behavior for social institutions to function well. Ethics goes beyond that—it identifies moral issues, reflects on the kind of society we want to live in based on ideas of what a good life looks like, and makes recommendations accordingly. Ethics, therefore, can be considered more ambitious than the law.

44
Q

What is GDPR about?

A

The European GDPR was implemented in May 2018. It is designed to regulate personal data, which the GDPR defines as any information concerning an identified or identifiable person. Under the GDPR, data subjects have a right to receive concise and transparent information about their data; access their personal data upon request; request erasure of their personal data; and object to processing of personal data for marketing or other purposes unrelated to the service being offered. Because the GDPR applies to organizations processing the data of people in the EU wherever those organizations are based, its extraterritorial jurisdiction has made it hugely effective across the world: it has pushed some international corporations to improve their standards everywhere, because it is too complicated to run one system for European residents and a different one for the rest of the world.

45
Q

What is the DMA (Digital Markets Act) about?

A

The Digital Markets Act (DMA) is a landmark piece of legislation passed by the European Union in 2022 that aims to create a fairer and more competitive digital marketplace. The DMA targets large online platforms with "gatekeeper" status, defined as companies that hold a dominant position in a market and have the ability to distort competition.

Key provisions of the DMA include:

Ban on self-preferencing: Gatekeepers are prohibited from favoring their own services or products over those of their competitors. This includes practices such as ranking their own products higher in search results or making it difficult for users to switch to rivals.

Open access to data: Gatekeepers must provide access to their data to third-party developers and businesses, allowing them to create innovative services that compete with the gatekeepers’ offerings.

Transparency obligations: Gatekeepers must be transparent about their algorithms and practices, allowing users and regulators to understand how they operate and identify potential anti-competitive behavior.

No tying and bundling: Gatekeepers cannot require users to purchase additional, unwanted services or features in order to access their core services.

Interoperability of messaging services: Gatekeepers must ensure that their messaging services are interoperable with other messaging services, allowing users to communicate seamlessly across platforms.

46
Q

What is the Digital Services Act about?

A

The Digital Services Act (DSA) is a regulation in EU law that aims to update the Electronic Commerce Directive 2000 regarding illegal content, transparent advertising, and disinformation. It was adopted by the European Parliament and the Council of the European Union in October 2022 and came into force in 2023.

The DSA applies to online intermediaries and platforms, including marketplaces, social networks, content-sharing platforms, app stores, and online travel and accommodation platforms. It sets out obligations for these platforms to:

Prevent the dissemination of illegal content: Platforms must proactively identify and remove illegal content, such as hate speech, child sexual abuse material, and counterfeit goods. They must also have clear and effective reporting mechanisms for users to flag illegal content.

Be more transparent about their content-moderation practices: Platforms must publish clear information about their policies and procedures for identifying and removing illegal content. They must also provide users with access to their content-moderation data, allowing them to see how their content has been handled.

Address the issue of disinformation: Platforms must take measures to prevent the spread of disinformation, such as providing clear labels on political advertising and promoting fact-checking initiatives.

Protect users from algorithmic bias: Platforms must ensure that their algorithms are not biased against certain groups of users. They must also be transparent about how their algorithms work and how they are used to personalize user experiences.

Allow users to opt out of receiving personalized content: Very large online platforms are required to allow their users to opt out of receiving personalized content — which typically relies on tracking and profiling user activity — when viewing content recommendations.

47
Q

What is the EU AI Act?

A

On March 13, 2024, the EU Parliament approved the Artificial Intelligence Act, commonly known as the EU AI Act, with an aim to establish a common regulatory and legal framework for AI. On May 21, 2024, the Council of the European Union approved the AI Act, which is the final stage in the legislative process.

The AI Act has been designed to ensure proportionate risk mitigation over a range of AI functions. For instance, a company offering an AI service to screen job applicants would have to take steps to prevent their systems from unduly hurting individuals’ access to opportunities. The regulation also imposes a legally binding requirement to notify people when they are interacting with a chatbot, biometric systems, or emotion recognition. Companies will also need to label deepfakes and content generated by AI, as well as design systems to make AI-generated media detectable.

48
Q

What is prohibited based on the EU AI Act?

A

The following systems are expected to be prohibited with just six months for companies to ensure compliance:

Biometric categorization systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);

Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;

Emotion recognition in the workplace and educational institutions;

Social scoring based on social behavior or personal characteristics;

AI systems that manipulate human behavior to circumvent people's free will;

AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).

Non-compliance can lead to substantial fines, ranging from €35 million or 7% of global turnover to €7.5 million or 1.5% of turnover, depending on the offense and company size.

49
Q

How is privacy regulated in the US?

A

Health Insurance Portability and Accountability Act (HIPAA)

HIPAA was enacted in 1996 and protects the privacy and security of health information. It applies to health care providers, health plans, and other organizations that use or store electronic health information (EHI). HIPAA requires these organizations to implement safeguards to protect EHI from unauthorized access, use, disclosure, alteration, or destruction.

State-level Privacy Regulations

The United States has a patchwork of state privacy regulations that govern how businesses can collect, use, and share personal information about consumers. These regulations vary in scope and enforcement, but they are all designed to protect consumer privacy. See Appendix for a sampling of key U.S. state-level privacy regulations.

50
Q

How is the US dealing with cybersecurity regulation?

A

On May 12, 2021, President Biden issued an Executive Order (EO) on Improving the Nation’s Cybersecurity. The EO is a comprehensive and ambitious plan to strengthen the cybersecurity of the United States against evolving threats.

Key provisions of the EO:

Require the federal government to adopt and implement a zero-trust architecture for all federal networks and systems.

Increase the security of software supply chains and promote the use of open-source software.

Enhance the cybersecurity of critical infrastructure, such as energy, transportation, and healthcare.

Expand public-private partnerships to improve cybersecurity.

Invest in education and training for cybersecurity professionals.

51
Q

What is the US doing to regulate AI?

A

Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

On Oct. 30, 2023, President Biden issued an EO on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The order is a comprehensive plan to address the national security, economic, and ethical challenges posed by AI.

National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF)

NIST developed the AI RMF in 2023. This voluntary framework serves as a guide for organizations designing, developing, deploying, or using AI systems. Its overarching goal is to promote the responsible and trustworthy development of AI while mitigating potential harms. The AI RMF focuses on four key functions: Govern, Map, Measure, and Manage.

AI Safety and Security Board

Established by the U.S. Department of Homeland Security on April 26, 2024, the Artificial Intelligence Safety and Security Board (AISSB) advises the Secretary, the critical infrastructure community, other private sector stakeholders, and the broader public on the safe, secure, and responsible development and deployment of AI technology in our nation’s critical infrastructure.

52
Q

What is China doing to regulate AI?

A

One significant regulation is the Provisional Administrative Measures of Generative Artificial Intelligence Services (Generative AI Measures), published by the Cyberspace Administration of China (CAC), which took effect on Aug. 15, 2023. These measures apply to the use of generative AI technology to provide services for generating text, pictures, sounds, videos, and other content within the territory of China. They impose various obligations on generative AI service providers, including prohibiting the generation of illegal content, taking measures to prevent the generation of discriminatory content, and not infringing on others' rights, including privacy and personal information rights.

China has enacted several comprehensive laws aimed at protecting personal information, like the Personal Information Protection Law (PIPL) and the Internet Information Service Algorithmic Recommendation Management Provisions. These regulations mandate data minimization, user consent, and transparency in algorithm decision-making.
