Module 2 Flashcards
Describe harms in the context of AI
Harms caused by the use of an AI system to a person’s civil liberties, rights, physical or psychological safety, or economic opportunity
What should we look at when trying to identify bias in a system?
Examine how the model was trained, along with the inputs it receives and the outputs it produces
List 5 different ways bias can happen
- Implicit bias
- Sampling bias
- Temporal bias
- Overfitting to training data
- Edge cases and outliers
Describe the result of implicit bias in AI
Discrimination or prejudice toward a particular group or individual (often unconscious)
Describe the result of sampling bias in AI
The training data is skewed toward a subset of the population, so the model may favour that subset
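A minimal sketch of sampling bias, assuming scikit-learn and synthetic data (the groups and thresholds are hypothetical): a model trained on a sample dominated by one group performs noticeably worse on the under-represented group.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, threshold):
    """One demographic group: same feature, different true decision boundary."""
    x = rng.normal(0, 1, (n, 1))
    return x, (x[:, 0] > threshold).astype(int)

# Skewed training sample: 1000 examples from group A, only 20 from group B.
xa, ya = make_group(1000, 0.0)
xb, yb = make_group(20, 1.0)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# The learned boundary tracks the over-represented group.
print("group A accuracy:", model.score(*make_group(500, 0.0)))  # high
print("group B accuracy:", model.score(*make_group(500, 1.0)))  # noticeably lower
```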
Describe the result of temporal bias in AI
A time-based bias: a model may work well when it is trained but perform poorly later as real-world conditions change
Describe the result of overfitting to training data in AI
The model performs well on its training data but fails to generalize to new data
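A minimal sketch of overfitting, assuming scikit-learn and a synthetic noisy dataset: an unconstrained decision tree memorizes its training data yet scores noticeably worse on held-out data.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic dataset with 20% label noise, so memorization cannot generalize.
X, y = make_classification(n_samples=500, n_features=20, flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unconstrained tree grows until it fits the training data perfectly.
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("train accuracy:", tree.score(X_train, y_train))  # ~1.0
print("test accuracy:", tree.score(X_test, y_test))     # noticeably lower
```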
Describe the result of edge cases and outliers in AI
The model behaves unpredictably when it encounters data outside the boundaries of its training dataset
Provide 2 examples of edge cases
- Errors: data that are incorrect, duplicated or unnecessary
- Noise: data that negatively impact the model’s learning
What are outliers?
Data points outside the normal distribution of the data, which can affect how the model operates and how effective it is
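One common way to flag outliers is the z-score rule; a minimal sketch assuming NumPy, with a conventional (not source-specified) cutoff of 3 standard deviations:

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(50, 5, 1000), [120, -30]])  # two injected outliers

# Flag points more than 3 standard deviations from the mean.
z_scores = (data - data.mean()) / data.std()
print(data[np.abs(z_scores) > 3])  # recovers the injected extreme values
```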
Describe the issue Amazon discovered in one of their AI systems
- They wanted to implement an AI system to help them in recruiting and hiring
- In training, they used historical resume data that came predominantly from men, not women
- As a result, the system rated women as unqualified
- They decided they were not able to get the model to work the way they wanted, and they abandoned the project
Describe how bias can happen in facial recognition systems
- The ability to match 2 images of the same person will vary from one demographic group to another
- This variation makes such systems unreliable for facial recognition across demographic groups
List 6 potential areas where discrimination can impact an individual
- Employment and hiring
- Insurance and social benefits
- Housing (tenant selection or qualifying for a mortgage)
- Education (selection)
- Credit (lending)
- Differential pricing of goods and services
Provide an example of acceptable bias
Lending less money to an applicant based on their income (a legitimate, relevant factor)
What have studies found about discrimination against women in facial recognition systems?
Facial recognition systems find females much harder to recognize than males
Provide an example of a facial recognition system that caused issues in law enforcement
The London police face recognition system once had an 81% inaccuracy rate
Describe deidentification
Removing identifiers from data (name, address, SIN…)
What are the risks of using deidentified data in an AI system?
- The data may not be completely deidentified, particularly if the dataset is combined with other data
- AI systems often use multiple datasets, making it easy to recombine data and reidentify individuals (see the sketch below)
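A minimal sketch of a linkage attack, assuming pandas; the records and column names below are hypothetical. Joining a “deidentified” dataset with a public one on shared quasi-identifiers reattaches identities:

```python
import pandas as pd

# "Deidentified" health data: names removed, but quasi-identifiers remain.
health = pd.DataFrame({
    "zip": ["12345", "67890"],
    "birth_date": ["1980-01-02", "1975-06-15"],
    "diagnosis": ["diabetes", "asthma"],
})

# Public auxiliary data (e.g. a voter roll) sharing the same quasi-identifiers.
voters = pd.DataFrame({
    "name": ["A. Smith", "B. Jones"],
    "zip": ["12345", "67890"],
    "birth_date": ["1980-01-02", "1975-06-15"],
})

# The join reattaches names to the "anonymous" medical records.
print(health.merge(voters, on=["zip", "birth_date"])[["name", "diagnosis"]])
```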
What are the risks of appropriating personal data for model training?
- Data is often sourced from social media or large datasets with data about individuals
- Individuals may not have consented to this non-intended use
What is AI inference?
The process by which a trained AI model makes predictions or decisions from input data
What are the risks of AI inference?
If data is misidentified, you may be using someone else’s data to make a decision about an individual
How can you correct a lack of transparency in AI systems?
Have notices whereby individuals know they are interacting with AI
How can inaccurate data in AI models cause privacy issues?
- AI systems are only as good as the data used to train and operate them
- Models may become increasingly inaccurate over time due to drift (see the sketch below)
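A minimal sketch of drift, assuming scikit-learn and a hypothetical decision boundary that shifts over time: a model trained under earlier conditions loses accuracy once the underlying relationship changes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def sample(n, boundary):
    """Concept drift: the true decision boundary moves over time."""
    x = rng.normal(0, 1, (n, 1))
    return x, (x[:, 0] > boundary).astype(int)

model = LogisticRegression().fit(*sample(1000, 0.0))  # conditions at training time

print("accuracy before drift:", model.score(*sample(1000, 0.0)))  # high
print("accuracy after drift:", model.score(*sample(1000, 0.8)))   # degraded
```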
List 3 ways AI can affect employment
- Job loss due to AI doing a job that was previously done by a human
- Job loss through AI-driven discriminatory hiring practices
- Job opportunities may fail to reach key demographics due to AI-driven tools that have bias built-in
What are group harms?
Harm to a group such as discrimination against a population subgroup
In what areas can group harms occur?
- Mass surveillance – potential harm, especially for marginalized groups
- Harms to freedom of assembly and protest due to tracking and profiling
Why are group harms so dangerous?
Potential of deepening racial and socio-economic inequities and increasing mistrust amongst groups
What is a deep fake?
Audio, video or images altered to portray a different reality
List 3 harmful effects AI can have on society
- Harm to democratic participation and process
- Spread of disinformation, fostering ideological bubbles or echo chambers
- Safety issues due to lack of human oversight
How do disinformation, ideological bubbles and echo chambers affect individuals?
- Individuals are exposed only to information that aligns with their past views; they are not exposed to differing viewpoints or to broader societal implications
- They can become isolated and divided
How can safety issues in AI cause harm?
- Lethal autonomous weapons may misidentify targets and kill the wrong people
- Without proper oversight, such weapons could carry out lethal actions with no human monitoring or control
List 4 ways that AI can be used to help the environment
- Self-driving cars reduce emissions
- Agriculture can produce higher yields
- Satellites can identify disaster-stricken areas
- Weather forecasting
List the results of 3 studies showing the negative impacts that AI can have on the environment
- One study found that training a single large AI model can emit 626,000 pounds of carbon dioxide (equivalent to 5 times the lifetime emissions of an average car)
- Another study of the top 4 natural language processing models assessed the energy used during their training against the energy mix of Amazon’s AWS, one of the largest cloud service providers
- Another study found that each casual use of generative AI consumes roughly a small bottle of water, largely for data centre cooling
What 3 factors can exacerbate risks in AI?
- Scale
- Scope
- Speed of processing
List the key stakeholders involved in identifying potential harms in AI
- Customer or other impacted individual
- AI risk owner
- AI risk manager
- AI subject matter expert
- Compliance, legal and the privacy officer
- IT leaders – CISO, CTO, CIO
- Executive business leaders
- Board of directors
What is the role of the customer or other impacted individual when identifying harms posed by an AI system?
Identify any equality, diversity and inclusion impacts that might arise
Who is the AI risk owner of an AI system?
The business unit or area that has overall responsibility for risk management (sales, HR…)
Who is the AI risk manager of an AI system?
Could be a business, technical or operations person who is responsible for implementing and managing the AI risk management framework
Who is considered an AI subject matter expert?
Data scientists, developers or software engineers
Why is it important to involve IT leaders such as the CISO, CTO and CIO in the development of AI systems?
They need to be engaged so they can ensure the use of AI is done in a secure and compliant manner and that the AI integrates appropriately into the existing technology framework and infrastructure of the organization
What is the role of the executive business leaders in AI system development?
They need to ensure that the AI is aligned to the business goals and objectives of the organization
What is the role of the board of directors in AI system development?
The role of the board is to conduct oversight, governance and strategy to ensure the organization operates appropriately and responsibly at all times
List the 5 risk categories that need to be explored to identify organizational harms
- Reputational
- Cultural & societal
- Economic
- Acceleration
- Legal & regulatory
List 2 reputational risks related to AI
- Reputation and credibility of the organization
- Adverse consequences
Provide examples of the adverse consequences of AI in terms of an organization’s reputation
- Customers leave or they don’t renew
- Customers have increased queries to the organization
- Reduced number of new customers
- Vendor or partner cancellation
- Negative brand impacts which take a long time to repair
- Share price drops
- Impacts on ESG and other ratings that might be significant to the organization
- Investor flight
- Become a target for campaigners
- Exacerbation of existing risks
List 2 cultural and societal risks related to AI
- There is a misconception that AI is more correct than humans, so we are less likely to challenge its outcomes, even though AI is created by humans
- There can be a built-in bias that AI, being technology- and data-driven, produces superior outcomes, which is not necessarily the case
List 2 economic risks related to AI
- Litigation costs including class actions and punitive damages
- Costs of remediation if something goes wrong with AI
Provide examples of remediation costs if AI goes wrong
- Putting in place an alternative process to deal with the issues while the AI is being fixed
- Putting in place additional measures to protect affected individuals while the AI issue is being resolved
- Internal resources that are required to remediate the issue
- Reimbursing customers
Describe acceleration risks
- The volume of data AI can process, the speed of that processing and the complexity of the algorithms mean that not all risks can be anticipated from the very beginning, so the impact may be wider and greater than with other software and technology solutions
- Some generative AI has been created without the necessary controls in place
- As you speed by, it is difficult to see the warning signs
List 3 potential impacts of breaching legal and regulatory requirements
- Sanctions
- Fines
- Orders to stop processing
How can we anticipate harms caused by AI?
- Start with what we know: look at the requirements that are already in place and ensure that AI complies with them
- Identify gaps
- Address new and evolving risks by ongoing monitoring – this is not a one-off exercise, requires continuous learning
What are the 3 characteristics of a trustworthy AI system?
- Human-centric
- Accountable
- Transparent
What are the characteristics of untrustworthy AI?
- Black box decision making
- Unfair outcomes
- Lack of explainability
- Diminished human experience
Describe what it means to build AI that is human-centric
- AI needs to amplify human agency
- Have a positive, not a negative impact on the human condition
- It needs to help, not hinder
Describe what it means to build AI that is accountable
Organizations need to be ultimately responsible for the AI that they are delivering, irrespective of the number of contributors
Describe what it means to build AI that is transparent
Understandable to the intended audience:
- If the intended audience is technical, legal, etc., it needs to speak to them
- If the intended audience is the user, it needs to be understandable to them
How can AI bring value in medical assessments?
AI can be incredibly accurate, more so than humans, particularly when analyzing scans and other medical results
How can AI bring value in legal predictions?
AI can review case law, issues and regulations far more broadly and quickly than any human, producing legal predictions that are far more accurate
How do we embed trustworthy AI as part of the operating model of an organization?
- Apply responsible AI processes which are operationalized with a risk management framework
- Ensure it is robust to withstand the challenges of more users, more scale, more data and more use
- Ensure it is safe and secure, not hackable (the integrity of the AI has to be ensured)
- Make it transparent and explainable to those who use it and those who need to understand how it is being used
- Prioritize fairness and non-discrimination
- Enable human oversight and promote human values
What should an AI risk management framework address?
- Privacy measures and requirements
- Accountability of the organization
- Auditability
How do we operationalize responsible AI practices?
- Understand where AI is used and its role within the organization (is that role minor or critical?)
- Technical standards need to be clear, shared and adhered to
- Development of AI playbooks can help ensure AI follows the rules of the organization
- Guidelines around what we should and shouldn’t be doing with AI within the organization
What challenges do we face when thinking about AI?
- Novelty
- Who is responsible for what
- Legal and operational uncertainty about the requirements, frameworks, oversight, controls and expertise
What is required to establish trustworthy AI principles?
- A collective effort as well as a cultural shift (this is a top-down approach)
- Also need governance structures that bring together diverse roles within the organization
Provide examples of published guidance on ethical AI
- The OECD’s AI Principles (2019)
- The U.S. White House Office of Science and Technology Policy put forth the Blueprint for an AI Bill of Rights (2022)
- Many other national and international organizations, as well as technology and industry leaders (UNESCO Principles, Asilomar AI Principles, The Institute of Electrical and Electronics Engineers Initiative on Ethics of Autonomous and Intelligent Systems, CNIL AI Action Plan…)
List key ethical issues for AI
- Lawfulness
- Safety for people and the planet
- Protection from unfair bias
- AI use that is transparent and explainable
- Appropriate individual choices about the use of personal information to develop AI
- Option for human intervention in AI-driven decisions impacting legal rights or well-being
- Organizational accountability to ensure AI is secure and responsibly managed
What 5 areas should an organization explore to ensure they are creating ethical AI?
- Legal and compliance
- Equitable design
- Transparency and interpretability
- Privacy and cybersecurity
- Data governance
What legal and compliance issues should an organization consider in implementing ethical AI?
Legal and compliance guidance, including relevant policies and procedures, should be in place to ensure legal review of AI and the execution of existing bias-mitigation processes. If no such processes exist, the organization should develop them for AI.
What equitable design issues should an organization consider in implementing ethical AI?
- Consider whether there is diversity of thought in teams responsible for development of AI, training, testing and monitoring
- A cross-functional and demographically diverse group should be put in place to evaluate higher-risk AI products/processes that could result in biased outcomes or other ethical concerns
What transparency and interpretability issues should an organization consider in implementing ethical AI?
- AI models and products with embedded AI should be labeled as such both internally and externally, per the FTC’s guidance on transparency
- Consumers should be notified when they are interacting with AI or receiving output/decisions generated by AI
- Decisions made by AI should be explainable to a consumer (This also applies when AI is provided by a third party, so contracts should ensure a third party can provide explanations of AI-generated decisions)
What privacy and cybersecurity issues should an organization consider in implementing ethical AI?
- Use of personal information to develop and train AI should be disclosed in privacy notices
- Consent must be obtained in compliance with applicable privacy regulations
- Consumers should be able to access and delete their personal information used to develop and train AI models in compliance with applicable laws
- Data minimization: personal data unlikely to improve the model should not be included
- AI must be developed to mitigate the risk of cyber intrusion, such as exfiltration of confidential or personal information, or poisoning of the model
What data governance issues should an organization consider in implementing ethical AI?
Organizations must ensure the quality and integrity of the data used to develop and train the models