4. AI Ethics and Potential Harm Flashcards

1
Q

What are common principles outlined in the OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data?

A
  1. Data minimization or collection limitation: Maintain data for only as long as it is needed and obtain it by lawful and fair means (a code sketch of this principle follows the list)
  2. Use limitation: Limit data to uses specified by the organization unless the data subject has consented to, or there is a legal exception for, alternate uses
  3. Safeguards or security: Reasonable security safeguards should protect personal data
  4. Notice or openness: Companies should be clear and open, to the extent required by law, about how they manage personal data, and should explain their practices and policies regarding personal data
  5. Access or individual participation: Allows a person to understand the data an organization has about them and to obtain, amend, correct or otherwise challenge it
  6. Accountability: Companies should be accountable for complying with the principles and obligations in the other FIPs
  7. Purpose specification: The organization commits to disclose the specific purposes for which it will use data, then uses data only for those compatible purposes
  8. Data quality and relevance: Personal data should be relevant to the purposes for which it is to be used and should be accurate, complete and timely to be fair to data subjects
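
To make the first principle concrete, here is a minimal Python sketch of data minimization: records are filtered against an allowlist of fields tied to a declared purpose before storage. The field names and purpose map are hypothetical illustrations, not drawn from the OECD text.

  # Minimal sketch of the data minimization principle: keep only the
  # fields a stated purpose actually needs before storing a record.
  # The purposes and field names below are hypothetical examples.

  ALLOWED_FIELDS = {
      "billing": {"name", "address", "payment_token"},
      "analytics": {"account_age_days", "plan_tier"},  # no direct identifiers
  }

  def minimize(record: dict, purpose: str) -> dict:
      """Drop every field not required for the declared purpose."""
      allowed = ALLOWED_FIELDS[purpose]
      return {k: v for k, v in record.items() if k in allowed}

  user = {"name": "A. Person", "address": "...", "payment_token": "tok_1",
          "ssn": "...", "account_age_days": 412, "plan_tier": "pro"}
  print(minimize(user, "analytics"))  # {'account_age_days': 412, 'plan_tier': 'pro'}

The design choice worth noting is that omission is the default: any field not explicitly allowed for a purpose is dropped.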
2
Q

What are some AI-specific ethical considerations?

A

The OECD has a set of principles specific to promoting trustworthy AI use:

  1. Inclusive growth, sustainable development and well-being (Highlights the potential for trustworthy AI to contribute to overall growth and prosperity for individuals, society and the planet, and advance global development objectives)
  2. Human-centered values and fairness (States that AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and include appropriate safeguards to ensure fairness and justice)
  3. Transparency and explainability (Calls for transparency and responsible disclosure around AI systems so that people understand when they are engaging with them and can challenge outcomes)
  4. Robustness, security and safety (States that AI systems must function in a robust, secure and safe way throughout their lifetimes, and potential risks should be continually assessed and managed)
  5. Accountability (Proposes that organizations and individuals who develop, deploy or operate AI systems should be held accountable for their proper functioning in line with the OECD’s values-based principles for AI)
3
Q

What are key ethical issues for AI?

A
  1. Lawfulness
  2. Safety for people and the planet
  3. Protection from unfair bias
  4. Transparency and explainability of AI use
  5. Appropriate individual choice about the use of personal information to develop AI
  6. The ability of individuals to choose human intervention in key AI-driven decisions that impact their legal rights or well-being
  7. Organizational accountability for ensuring the AI an organization develops and uses is secure
4
Q

What foundational controls should be in place to mitigate ethical risks posed by the use of AI?

A
  1. Organizations should develop ethical principles for AI
  2. Organizations should develop a cross-functional and demographically diverse oversight body to review higher-risk AI use cases that create ethical gray areas for the organization
  3. Organizations should assess whether they have appropriate policies and procedures for associated risks, such as unfair bias/disparate impact, privacy, cybersecurity and data governance, and enhance those policies and procedures as necessary to apply to AI use cases
5
Q

What are the roles and positions required to create a culture of ethical AI?

A
  1. Legal and compliance - relevant policies and procedures to ensure bias mitigation
  2. Equitable design - diversity of thought in development, training, testing and monitoring (lack of diversity increases the likelihood of biased inputs and outcomes)
  3. Transparency and explainability (also known as interpretability) - AI models and products with embedded AI should be labeled as such both internally and externally, and decisions made by AI should be explained to the consumer; this extends to third-party due diligence and contracts
  4. Privacy and cybersecurity - PI used to develop AI should be disclosed in privacy notices; privacy regulation compliance for automated profiling; access to delete PI used to train AI models; data minimization (PI that is unlikely to improve the model should be left out by default); cyber intrusion risk mitigation (exfiltration of confidential data or PI, or poisoning of the model)
  5. Data governance - quality and integrity of data used to develop and train AI
6
Q

Who is impacted by risks posed by AI?

A
  1. Individuals (civil rights, economic opportunity, safety)
  2. Groups (discrimination towards subgroups)
  3. Society (democratic process, public trust in governmental institutions, educational access, jobs redistribution)
  4. Companies/institutions (reputational, cultural, economic, acceleration risks)
  5. Ecosystems (natural resources, environment, supply chain)
7
Q

How can bias in AI systems cause harm?

A
  1. Implicit bias: Discrimination or prejudice toward a particular group or individual
  2. Sampling bias: Data is skewed toward a subset of a group and therefore may favor that subset of the larger group
  3. Temporal bias: A model is trained and functions properly at the time, but may not work well at a future point, requiring new ways to address the data
  4. Overfitting to training data: A model works for the training data but does not work for new data because it is too closely fitted to the training data (see the sketch after this list)
  5. Edge cases and outliers: Any data outside the boundaries of the training dataset (e.g., errors arising from data that is incorrect, duplicative or unnecessary)
    * Noise: Data that negatively impacts the machine learning of the model
    * Outliers: Data points outside the normal distribution of the data; can impact how the model operates and its effectiveness
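
A minimal sketch of item 4, overfitting, using only numpy: a degree-9 polynomial fit to 10 noisy training points drives training error toward zero while held-out error stays large. The synthetic sine data and polynomial degrees are arbitrary choices for illustration.

  # Overfitting sketch: a high-degree polynomial fits noisy training
  # points almost perfectly but generalizes poorly to held-out data.
  import numpy as np

  rng = np.random.default_rng(0)

  def noisy_signal(x):
      # Underlying pattern plus measurement noise.
      return np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)

  x_train = np.linspace(0, 1, 10)
  x_test = np.linspace(0.05, 0.95, 10)
  y_train, y_test = noisy_signal(x_train), noisy_signal(x_test)

  for degree in (3, 9):
      coeffs = np.polyfit(x_train, y_train, degree)
      train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
      test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
      print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
  # The degree-9 polynomial interpolates all 10 training points (train MSE
  # near zero) but typically does far worse on the held-out points: it has
  # memorized the noise rather than the underlying pattern.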
8
Q

List potential forms of AI bias and discrimination:

A
  1. Employment and hiring discrimination
    * AI-based systems used for recruiting and hiring
    * If the system is biased, it may discriminate against applicants based on gender, race, ethnicity or economic status (a screening sketch follows this list)
    * Amazon, 2014: implemented an AI system to help with recruiting and hiring; during testing, the team found the system was biased against women
      - This happened because the system was trained on historical resume data that came predominantly from men
      - Engineers tried to retrain the system, but this is difficult to do once the model has already been trained a certain way; the project was eventually abandoned in 2017
  2. Insurance and social benefit discrimination
    * If the system is not appropriately modeled and developed, there can be a discriminatory impact against particular groups of individuals, often based on economic status
  3. Housing discrimination
    * Tenant selection and mortgage qualification can be affected if a biased AI system is used
  4. Education discrimination
    * AI systems used to select individuals to attend a school
    * A biased system can discriminate against qualified individuals based on race, gender or economic background
  5. Credit discrimination
    * Financial lending discrimination and individuals unable to get loans
    * Differential pricing of goods and services
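
One common statistical screen for the hiring scenario above, not named in the card, is the EEOC "four-fifths rule": a group's selection rate below 80% of the most-selected group's rate is treated as evidence of adverse impact. A minimal sketch with made-up counts:

  # Adverse-impact screen using the four-fifths (80%) rule.
  # The applicant and selection counts below are made up.

  def selection_rates(outcomes: dict) -> dict:
      """outcomes maps group -> (selected, applicants)."""
      return {g: sel / total for g, (sel, total) in outcomes.items()}

  def four_fifths_check(outcomes: dict) -> dict:
      rates = selection_rates(outcomes)
      best = max(rates.values())
      # Impact ratio: each group's rate relative to the best-off group.
      return {g: round(r / best, 2) for g, r in rates.items()}

  hiring = {"group_a": (48, 100), "group_b": (30, 100)}
  ratios = four_fifths_check(hiring)
  print(ratios)  # {'group_a': 1.0, 'group_b': 0.62}
  flagged = [g for g, r in ratios.items() if r < 0.8]
  print("possible adverse impact:", flagged)  # ['group_b']

A ratio below 0.8 is a screen, not proof of discrimination; it flags a use case for the kind of oversight review described in the earlier cards.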

9
Q

List privacy concerns with AI:

A
  1. Personal data used as part of AI training data
    * Screen out personal data: If you don’t need personal data, it should not be used in the system; personal data could be shared with individuals who should not have access to it if it is part of the larger set of data used to train the system
    * Deidentification: Removing identifiers from the data, such as name, address or Social Security number; however, it is possible to reidentify an individual if the data is aggregated or combined with another dataset
    * AI systems use massive amounts of data, typically across multiple datasets; it is easy to combine deidentified data with identified data and reidentify individuals, leading to privacy issues (a sketch of this follows the list)
  2. Appropriation of personal data for model training
    * AI models are trained on large sources of data
    * Data may come from social media or large datasets with information about individuals; individuals may have consented to one particular use of their data, but not to training an AI system
  3. Inference: AI models make predictions or decisions about individuals
    * In some cases, the systems can be used to identify individuals, but they are not always accurate
    * Personal data can be attributed to the wrong individual
  4. Lack of transparency of use
    * AI systems should notify individuals when AI is being used (e.g., when interacting with chatbots)
  5. Inaccurate models
    * Accuracy of data is very important; AI systems are only as good as the data that trains them
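
A minimal sketch of the reidentification risk described above: removing direct identifiers is not enough when quasi-identifiers (here ZIP code, birthdate and sex, the classic combination) survive and can be joined against a public dataset. All records are fabricated.

  # Reidentification by linkage: a "deidentified" record with direct
  # identifiers removed can still be matched to a public record through
  # quasi-identifiers. All records here are fabricated.

  DIRECT_IDENTIFIERS = {"name", "ssn"}
  QUASI_IDENTIFIERS = ("zip", "birthdate", "sex")

  def deidentify(record: dict) -> dict:
      return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

  medical = deidentify(
      {"name": "J. Doe", "ssn": "...", "zip": "02138",
       "birthdate": "1960-07-31", "sex": "F", "diagnosis": "..."}
  )
  voter_roll = [  # public dataset that still carries names
      {"name": "J. Doe", "zip": "02138", "birthdate": "1960-07-31", "sex": "F"},
      {"name": "B. Roe", "zip": "02139", "birthdate": "1971-01-02", "sex": "M"},
  ]

  key = tuple(medical[q] for q in QUASI_IDENTIFIERS)
  matches = [v["name"] for v in voter_roll
             if tuple(v[q] for q in QUASI_IDENTIFIERS) == key]
  print(matches)  # ['J. Doe'] -- the "anonymous" record is reidentified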
10
Q

Describe potential group harms associated with AI:

A
  1. Facial recognition algorithms: Many AI systems using face recognition exhibit demographic differentials (the ability to match two images of the same person varies from one demographic group to another); a sketch of measuring such a differential follows this list
    * A NIST study found AI facial recognition to be unreliable across many kinds of systems
    * Studies have found that people with darker skin tones and females are much more difficult to recognize, leading to discrimination and bias
    * AI facial recognition software used by the London police once showed an 81% inaccuracy rate; this can lead to biased policing, and the ability to track individuals online using such systems can lead to discrimination
  2. Mass surveillance: A large potential harm, particularly for marginalized groups
    * If mass surveillance is used, protected groups or those harmed in the past may not receive as much privacy protection and may be targeted for surveillance (due to race, religion, sexual orientation, etc.)
  3. Civil rights
    * Harms to freedom of assembly and protest due to tracking and profiling of individuals linked to certain beliefs or actions
  4. Deepening of racial and socio-economic divides
    * Discrimination against population subgroups
    * Distrust among groups
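
A minimal sketch of how a demographic differential in face verification might be measured: compute the false non-match rate (FNMR) per group over labeled genuine-pair attempts. The attempt data is fabricated and the group labels are placeholders.

  # Measuring a demographic differential in face verification: the
  # false non-match rate (FNMR) per group over genuine-pair attempts.
  # Every attempt below is a genuine pair, so each False is a false
  # non-match. The counts are fabricated for illustration.
  from collections import defaultdict

  attempts = ([("group_a", True)] * 95 + [("group_a", False)] * 5
              + [("group_b", True)] * 80 + [("group_b", False)] * 20)

  totals, misses = defaultdict(int), defaultdict(int)
  for group, matched in attempts:
      totals[group] += 1
      if not matched:
          misses[group] += 1

  for group in totals:
      fnmr = misses[group] / totals[group]
      print(f"{group}: FNMR = {fnmr:.1%}")
  # group_a: FNMR = 5.0%  vs  group_b: FNMR = 20.0% -- a 4x differential
  # that would disproportionately burden group_b.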
11
Q

What are potential social harms associated with AI?

A
  1. Spread of disinformation
  2. Ideological bubbles or echo chambers
    * Individuals exposed only to information that agrees with information they encountered in the past
    * Unable to see differing views or understand broader societal implications
    * Causes isolation and more division; groups only exposed to their specific ideas and values
  3. Deepfakes: Audio, video or images manipulated to create an alternate reality
    * Harmful in elections
  4. Safety
    * Lethal autonomous weapons that identify targets to attack
    * Concern that without sufficient oversight, systems could evolve and may be able to attack randomly without being monitored
12
Q

Describe the potential corporate and institutional risks associated with AI:

A
  1. Reputational
    * Loss of customers and renewals
    * Increased queries due to concerns over the AI being used; new customers hesitant because of concerns over the AI used
    * Negative brand impact
    * Share price drop and investor flight
    * Company becoming a target for campaigners
  2. Cultural
    * Assumption that AI is more correct than humans, so we are less likely to challenge its outcomes, even though AI is created by humans
    * Built-in bias that AI is technology- and data-driven and therefore can produce a superior outcome, which is not necessarily the case
  3. Acceleration
    * Not all risks can be anticipated from the beginning, due to the volume of data that AI can process, the speed of processing, and the complexity of the algorithm
    * AI impact may be wider and greater than with other software and technology solutions
    * Generative AI has been created without the necessary controls in place; it can be very difficult to see the warning signs when things move at such speed
  4. Legal and regulatory
    * Industry laws and regulations may apply to AI use (e.g., pharmaceutical, telecommunications, financial)
    * Privacy law implications; competition law; trade; tax
    * Breach of legal and regulatory obligations can lead to sanctions, fines, and orders to stop processing
    * Given the nature of AI to continue to learn and evolve, it can be difficult to anticipate what forms risks may take, particularly new risks; therefore, it is essential to apply AI principles and ethics rigorously to the development and testing of AI to mitigate these potential harms
    * Engage key stakeholders to understand potential harm
13
Q

What are the ecological harms associated with AI?

A
  • One study found that training a single common large AI model can emit more than 626,000 pounds of carbon dioxide (the equivalent of five times the lifetime emissions of an American car; a quick arithmetic check follows below)
  • Another study of the top four natural language processing models found that the energy consumed over the training process matches the energy mix used by Amazon’s AWS, the largest cloud service provider
  • An additional study found that each casual use of generative AI is like dumping out a small bottle of water on the ground
  • To address this, many organizations are seeking alternatives to the use of electrical power (e.g., the possibility of using batteries to power systems, though this can also have an environmental impact: lithium extraction for lithium batteries demands enormous water usage)
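
A quick arithmetic check of the first bullet's figures. The roughly 126,000-pound lifetime-car figure is the one commonly cited alongside this study (fuel plus manufacturing) and is an assumption here, not stated on the card:

  # Sanity-checking the card's figure: 626,000 lbs of CO2 versus ~5x the
  # lifetime emissions of an average American car. The 126,000-lb
  # car-lifetime figure is an assumption, not stated on the card.
  model_training_lbs = 626_000
  car_lifetime_lbs = 126_000

  ratio = model_training_lbs / car_lifetime_lbs
  print(f"{ratio:.1f} car lifetimes")  # ~5.0, matching the card's claim

  # Same figure in metric tons, for comparison with emissions reports.
  LBS_PER_METRIC_TON = 2204.62
  print(f"{model_training_lbs / LBS_PER_METRIC_TON:.0f} t CO2")  # ~284 t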
14
Q

How can AI be used to help the environment?

A
  • Self-driving cars developed with AI systems can help reduce emissions
  • AI use in agriculture has produced higher yields
  • AI use in satellite images can help identify disaster-stricken areas so they can receive help
  • Weather forecasting
15
Q

What are the characteristics of trustworthy AI?

A
  1. Human-centric (AI should amplify human capacity and should have a positive impact on the human condition)
  2. Accountability (Organizations ultimately need to be responsible for the AI they deliver, irrespective of the number of contributors)
  3. Transparency (AI must be understandable to the intended audience, e.g., technical, legal or end users)
16
Q

What are the potential opportunities that AI can produce?

A
  1. AI can be faster and more accurate in its results across a broader range of data
  2. AI used in medical assessments can be incredibly accurate, more so than humans, particularly when evaluating scans and other medical outcomes
  3. AI can also help with legal predictions, and can review case law, issues and regulations far more broadly, quickly and accurately than humans
  4. Like big data, AI can process a huge volume and wide variety of data at tremendous velocity
  5. AI, in its automation of processing, can also help remove human error and bias from decision-making; it can automate and accelerate otherwise mundane and repetitive tasks, which is often where inconsistencies occur
17
Q

What needs to be done to ensure that the intended audience understands the value of AI?

A
  • There can often be suspicion about the use of technology when it replaces people or a more human approach
  • The security and integrity of AI must be ensured to prevent reverse engineering of data to identify individuals
  • Need to ensure AI will honor and enable privacy rights
18
Q

Describe trustworthy AI:

A
  1. Trustworthy AI is part of the operating model
    * Achieved by practicing responsible AI processes
    * Operationalized with a risk management framework
    * A risk management framework will address privacy measures and requirements and ensure accountability of the organization
  2. Organization and AI following the stated processes (ethical scaling)
  3. Confirm AI systems are safe and secure
  4. Ensure the integrity of the AI (transparent and explainable as well as fair and nondiscriminatory)
  5. AI enables human oversight and promotes human values
19
Q

What are the steps for operationalizing responsible AI?

A
  1. Understand where AI is used and its role in the organization (a sketch of a simple use-case registry follows this list)
  2. Set clear technical standards that are shared and adhered to
  3. Develop AI playbooks (Help ensure AI follows the rules of the organization; include guidelines about what should and shouldn’t be done with AI within the organization)
  4. Update internal legal and organizational structures to reflect new roles and responsibilities (Everyone should be clear on the role they have to play)
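
A hypothetical sketch of steps 1 and 3 together: a registry recording where AI is used, plus one playbook rule gating high-risk use cases behind oversight review. All names, tiers and rules are invented for illustration, not taken from any specific playbook.

  # Hypothetical AI use-case registry plus a single playbook rule:
  # high-risk use cases must be reviewed by the oversight body before
  # deployment. Names, tiers and rules are invented for illustration.
  from dataclasses import dataclass

  @dataclass
  class AIUseCase:
      name: str
      owner: str
      risk_tier: str          # "low" | "medium" | "high"
      reviewed_by_board: bool = False

  REGISTRY = [
      AIUseCase("support_chatbot", "customer-ops", "low"),
      AIUseCase("resume_screening", "hr", "high"),
  ]

  def may_deploy(uc: AIUseCase) -> bool:
      """Playbook rule: high-risk use cases need oversight-board review."""
      return uc.risk_tier != "high" or uc.reviewed_by_board

  for uc in REGISTRY:
      status = "ok to deploy" if may_deploy(uc) else "blocked pending review"
      print(f"{uc.name} ({uc.risk_tier}): {status}")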
20
Q

What are best practices for establishing and implementing responsible AI practices?

A
  1. Entails a collective effort across the organization
  2. Requires a cultural shift supported by leadership; top-down approach
  3. Involves establishing governance structures that bring together diverse roles within the organization
    * This can be helped by technical standards, risk management frameworks, AI playbooks and guidelines, which can bridge the gap between high-level policy declarations and down-to-earth practical implementation