Module 2 Flashcards

1
Q

Describe harms in the context of AI

A

Harms are the negative impacts of an AI system's use on a person's civil liberties, rights, physical or psychological safety, or economic opportunity

2
Q

What should we look at when trying to identify bias in a system?

A

Examine how the model was trained, along with the inputs it receives and the outputs it produces

3
Q

List 5 different ways bias can happen

A
  • Implicit bias
  • Sampling bias
  • Temporal bias
  • Overfitting to training data
  • Edge cases and outliers
4
Q

Describe the result of implicit bias in AI

A

Discrimination or prejudice toward a particular group or individual (often unconscious)

5
Q

Describe the result of sampling bias in AI

A

Data gets skewed toward a subset of the group, so it may favour a subset
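As a minimal sketch of this effect, with entirely hypothetical numbers: a sample drawn mostly from one group misestimates the population, so anything trained on it will favour that group.

```python
import random

random.seed(0)  # deterministic for illustration

# Hypothetical population: two equal-sized groups with different average scores.
group_a = [random.gauss(50, 5) for _ in range(5000)]
group_b = [random.gauss(70, 5) for _ in range(5000)]
population = group_a + group_b

# A biased sample drawn 90% from group A (sampling bias).
biased_sample = group_a[:900] + group_b[:100]

def mean(xs):
    return sum(xs) / len(xs)

true_mean = mean(population)        # close to 60
biased_mean = mean(biased_sample)   # skewed toward group A, close to 52
print(round(true_mean, 1), round(biased_mean, 1))
```

Any model fit to the biased sample would inherit this skew toward group A.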

6
Q

Describe the result of temporal bias in AI

A

Time-based bias: a model may perform well when first trained but degrade later as conditions change over time
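A toy sketch of the idea, with hypothetical numbers: a rule tuned on historical data misfires once the data distribution shifts.

```python
# A rule tuned on historical data: "flag any price above the historical maximum."
prices_2015 = [200, 210, 190, 205]        # training-era data (hypothetical)
threshold = max(prices_2015)              # 210

# Years later the distribution has shifted (e.g. inflation): the rule
# that worked at training time now flags everything.
prices_2025 = [310, 320, 305]
flagged = [p for p in prices_2025 if p > threshold]
print(len(flagged), "of", len(prices_2025), "recent prices flagged as anomalous")
```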

7
Q

Describe the result of overfitting to training data in AI

A

The model performs well on its training data but fails to generalize to new data that comes in
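Overfitting at its extreme is memorization. A toy sketch (not any particular model):

```python
# Extreme overfitting: a "model" that memorizes its training set verbatim.
# The training pairs follow an obvious pattern, (a, b) -> a + b.
train = {(1, 2): 3, (2, 3): 5, (4, 4): 8}

def memorizing_model(a, b):
    return train.get((a, b))  # perfect recall of seen examples, no generalization

# 100% accuracy on the training data...
assert all(memorizing_model(a, b) == y for (a, b), y in train.items())

# ...but nothing learned about unseen inputs.
print(memorizing_model(5, 5))  # None, even though the pattern is obvious
```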

8
Q

Describe the result of edge cases and outliers in AI

A

The model behaves unpredictably when it encounters data outside the boundaries of its training dataset

9
Q

Provide 2 examples of edge cases

A
  • Errors – data that are incorrect, duplicated or unneeded
  • Noise – data that negatively impact the model’s learning
10
Q

What are outliers?

A

Data points outside the normal distribution of the data, which can affect how the model operates and how effective it is
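A minimal sketch of outlier detection, using hypothetical measurements and a simple z-score rule (values more than 2 standard deviations from the mean):

```python
from statistics import mean, stdev

# Hypothetical measurements with one value far outside the normal range.
data = [10, 11, 9, 10, 12, 11, 10, 95]

mu, sigma = mean(data), stdev(data)
outliers = [x for x in data if abs(x - mu) / sigma > 2]  # > 2 standard deviations

print(outliers)      # [95]
print(round(mu, 1))  # 21.0 - the single outlier dragged the mean far above 10-12
```

Note how one outlier roughly doubles the mean, illustrating how such points affect how the model operates.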

11
Q

Describe the issue Amazon discovered in one of their AI systems

A
  • They wanted to implement an AI system to help them in recruiting and hiring
  • The model was trained on resumes from men only, not women
  • All women were judged as unqualified
  • They decided they were not able to get the model to work the way they wanted, and they abandoned the project
12
Q

Describe how bias can happen in facial recognition systems

A
  • The ability to match 2 images of the same person will vary from one demographic group to another
  • This makes such systems unreliable across demographic groups
13
Q

List 6 potential areas where discrimination can impact an individual

A
  • Employment and hiring
  • Insurance and social benefits
  • Housing (tenant selection or qualifying for a mortgage)
  • Education (selection)
  • Credit (lending)
  • Differential pricing of goods and services
14
Q

Provide an example of bias that is ok

A

Loaning less money to someone based on income

15
Q

What have studies found in relation to discrimination against women in facial recognition systems?

A

Females are much harder to recognize than males

16
Q

Provide an example of a facial recognition system that caused issues in law enforcement

A

The London police face recognition system once had an 81% inaccuracy rate

17
Q

Describe deidentification

A

Removing identifiers from data (name, address, SIN…)
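A minimal sketch of deidentification as field removal; the field names here are assumptions for illustration only.

```python
# Drop direct identifiers from a record before further use.
DIRECT_IDENTIFIERS = {"name", "address", "sin"}  # assumed field names

def deidentify(record):
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

row = {"name": "A. Smith", "address": "1 Main St", "sin": "000-000-000",
       "age": 42, "postal_code": "K1A 0A1", "diagnosis": "flu"}
print(deidentify(row))  # identifiers removed; other attributes remain
```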

18
Q

What are the risks of using deidentified data in an AI system?

A
  • Deidentification may be incomplete, particularly if the dataset is combined with other data
  • With AI systems, it is easy to recombine data and reidentify individuals as they often use multiple datasets
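The recombination risk can be sketched as a simple linkage attack: remaining quasi-identifiers (here, an assumed postal code and birth year) are enough to join a "deidentified" dataset back to named individuals in another dataset.

```python
# Hypothetical deidentified records (direct identifiers already removed).
deidentified = [
    {"postal_code": "K1A 0A1", "birth_year": 1980, "diagnosis": "flu"},
]
# Hypothetical public dataset containing names plus the same quasi-identifiers.
public_roll = [
    {"name": "A. Smith", "postal_code": "K1A 0A1", "birth_year": 1980},
]

# Join on the quasi-identifiers to re-link records to individuals.
reidentified = [
    {**d, "name": p["name"]}
    for d in deidentified
    for p in public_roll
    if (d["postal_code"], d["birth_year"]) == (p["postal_code"], p["birth_year"])
]
print(reidentified)  # the diagnosis is tied to a name again
```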
19
Q

What are the risks of appropriating personal data for model training?

A
  • Data is often sourced from social media or large datasets with data about individuals
  • Individuals may not have consented to this non-intended use
20
Q

What is AI inference?

A

The process by which an AI model makes predictions or decisions based on input data

21
Q

What are the risks of AI inference?

A

If you have data that is mis-identified, you may be using someone else’s data to make a decision

22
Q

How can you correct a lack of transparency in AI systems?

A

Have notices whereby individuals know they are interacting with AI

23
Q

How can inaccurate data in AI models cause privacy issues?

A
  • AI systems are only as good as the data that trains them and the data that is used with them
  • Models may become more inaccurate due to drift over time
24
Q

List 3 ways AI can affect employment

A
  • Job loss due to AI doing a job that was previously done by a human
  • Job loss through AI-driven discriminatory hiring practices
  • Job opportunities may fail to reach key demographics due to AI-driven tools that have bias built-in
25
Q

What are group harms?

A

Harm to a group such as discrimination against a population subgroup

26
Q

In what areas can group harms occur?

A
  • Mass surveillance – potential harm, especially for marginalized groups
  • Harms to freedom of assembly and protest due to tracking and profiling
27
Q

Why are group harms so dangerous?

A

Potential of deepening racial and socio-economic inequities and increasing mistrust amongst groups

28
Q

What is a deep fake?

A

Audio, video or images altered to portray a different reality

29
Q

List 3 harmful effects AI can have on society

A
  • Harm to democratic participation and process
  • Spread of disinformation, fostering ideological bubbles or echo chambers
  • Safety issues due to lack of human oversight
30
Q

How do disinformation, ideological bubbles and echo chambers affect individuals?

A
  • Individuals are exposed only to information they have agreed with in the past; they are not exposed to differing views or to the broader societal implications
  • They can become isolated and divided
31
Q

How can safety issues in AI cause harm?

A
  • Lethal autonomous weapons that misidentify targets to kill
  • Without proper oversight, they could at some point carry out killings with no human monitoring at all
32
Q

List 4 ways that AI can be used to help the environment

A
  • Self-driving cars reduce emissions
  • Agriculture can produce higher yields
  • Satellites can identify disaster-stricken areas
  • Weather forecasting
33
Q

List the results of 3 studies showing the negative impacts that AI can have on the environment

A
  • One study found that training several common large AI models emits 626,000 pounds of carbon dioxide (equivalent to 5 times the lifetime emissions of an average car)
  • Another study found that the energy used to train the top 4 natural language processing models matches the energy mix used by Amazon’s AWS, one of the largest cloud service providers
  • Another study found that each casual use of generative AI is like dumping out a small bottle of water on the ground
34
Q

What 3 factors can exacerbate risks in AI?

A
  • Scale
  • Scope
  • Speed of processing
35
Q

List the key stakeholders involved in identifying potential harms in AI

A
  • Customer or other impacted individual
  • AI risk owner
  • AI risk manager
  • AI subject matter expert
  • Compliance, legal and the privacy officer
  • IT leaders – CISO, CTO, CIO
  • Executive business leaders
  • Board of directors
36
Q

What is the role of the customer or other impacted individual when identifying harms posed by an AI system?

A

Identify any equality, diversity and inclusion impacts that might arise

37
Q

Who is the AI risk owner of an AI system?

A

The business unit or area that has overall responsibility for risk management (sales, HR…)

38
Q

Who is the AI risk manager of an AI system?

A

Could be a business, technical or operations person who is responsible for implementing and managing the AI risk management framework

39
Q

Who is considered an AI subject matter expert?

A

Data scientists, developers or software engineers

40
Q

Why is it important to involve IT leaders such as the CISO, CTO and CIO in the development of AI systems?

A

They need to be engaged so they can ensure the use of AI is done in a secure and compliant manner and that the AI integrates appropriately into the existing technology framework and infrastructure of the organization

41
Q

What is the role of the executive business leaders in AI system development?

A

They need to ensure that the AI is aligned to the business goals and objectives of the organization

42
Q

What is the role of the board of directors in AI system development?

A

The role of the board is to conduct oversight, governance and strategy to ensure the organization operates appropriately and responsibly at all times

43
Q

List the 5 risk categories that need to be explored to identify organizational harms

A
  • Reputational
  • Cultural & societal
  • Economic
  • Acceleration
  • Legal & regulatory
44
Q

List 2 reputational risks related to AI

A
  • Reputation and credibility of the organization
  • Adverse consequences
45
Q

Provide examples of the adverse consequences of AI in terms of an organization’s reputation

A
  • Customers leave or they don’t renew
  • Customers have increased queries to the organization
  • Reduced number of new customers
  • Vendor or partner cancellation
  • Negative brand impacts which take a long time to repair
  • Share price drops
  • Impact to ESG and other ratings that might be significant to the organization
  • Investor flight
  • Become a target for campaigners
  • Exacerbation of existing risks
46
Q

List 2 cultural and societal risks related to AI

A
  • There is a misconception that AI is more correct than humans, so we are less likely to challenge its outcomes, even though AI is created by humans
  • There can be a built-in assumption that, because AI is technology- and data-driven, it produces superior outcomes, which is not necessarily the case
47
Q

List 2 economic risks related to AI

A
  • Litigation costs including class actions and punitive damages
  • Costs of remediation if something goes wrong with AI
48
Q

Provide examples of remediation costs if AI goes wrong

A
  • Putting in place an alternative process to deal with the issues while the AI is being fixed
  • Putting in place additional measures to protect affected individuals while the AI issue is being resolved
  • Internal resources that are required to remediate the issue
  • Reimbursing customers
49
Q

Describe acceleration risks

A
  • The volume of data AI can process, the speed of that processing, and the complexity of the algorithms mean that not all risks can be anticipated from the very beginning, so the impact may be wider and greater than with other software and technology solutions
  • Some generative AI has been created without the necessary controls in place
  • As you speed by, it is difficult to see the warning signs
50
Q

List 3 potential impacts of breaching legal and regulatory risks

A
  • Sanctions
  • Fines
  • Orders to stop processing
51
Q

How can we anticipate harms caused by AI?

A
  • Start with what we know – look at the requirements that are already in place and ensure the AI complies with them
  • Identify gaps
  • Address new and evolving risks through ongoing monitoring – this is not a one-off exercise; it requires continuous learning
52
Q

What are the 3 characteristics of a trustworthy AI system?

A
  • Human-centric
  • Accountable
  • Transparent
53
Q

What are the characteristics of untrustworthy AI?

A
  • Black box decision making
  • Unfair outcomes
  • Lack of explainability
  • Diminished human experience
54
Q

Describe what it means to build AI that is human-centric

A
  • AI needs to amplify human agency
  • Have a positive, not a negative impact on the human condition
  • It needs to help, not hinder
55
Q

Describe what it means to build AI that is accountable

A

Organizations need to be ultimately responsible for the AI that they are delivering, irrespective of the number of contributors

56
Q

Describe what it means to build AI that is transparent

A

Understandable to the intended audience:
- If the intended audience is technical, legal, etc., it needs to speak to them
- If the intended audience is the user, it needs to be understandable to them

57
Q

How can AI bring value in medical assessments?

A

Can be incredibly accurate, more so than humans, particularly when looking at scans and other medical outcomes

58
Q

How can AI bring value in legal predictions?

A

AI can review case law, issues and regulations far more broadly and quicker than any human and produce legal predictions in a far more accurate way

59
Q

How do we embed trustworthy AI as part of the operating model of an organization?

A
  • Apply responsible AI processes which are operationalized with a risk management framework
  • Ensure it is robust to withstand the challenges of more users, more scale, more data and more use
  • Ensure it is safe and secure, not hackable (the integrity of the AI has to be ensured)
  • Make it transparent and explainable to those who use it and those who need to understand how it is being used
  • Prioritize fairness and non-discrimination
  • Enable human oversight and promote human values
60
Q

What should an AI risk management framework address?

A
  • Privacy measures and requirements
  • Accountability of the organization
  • Auditability
61
Q

How do we operationalize responsible AI practices?

A
  • Understand where AI is used and its role within the organization (is it small or critical)
  • Technical standards need to be clear, shared and adhered to
  • Development of AI playbooks can help ensure AI follows the rules of the organization
  • Guidelines around what we should and shouldn’t be doing with AI within the organization
62
Q

What challenges do we face when thinking about AI?

A
  • Novelty
  • Who is responsible for what
  • Legal and operational uncertainty about the requirements, frameworks, oversight, controls and expertise
63
Q

What is required to establish trustworthy AI principles?

A
  • A collective effort as well as a cultural shift (this is a top-down approach)
  • Also need governance structures that bring together diverse roles within the organization
64
Q

Provide examples of published guidance on ethical AI

A
  • The OECD’s AI Principles (2019)
  • The U.S. White House Office of Science and Technology Policy put forth the Blueprint for an AI Bill of Rights (2022)
  • Many other national and international organizations, as well as technology and industry leaders (UNESCO Principles, Asilomar AI Principles, The Institute of Electrical and Electronics Engineers Initiative on Ethics of Autonomous and Intelligent Systems, CNIL AI Action Plan…)
65
Q

List key ethical issues for AI

A
  • Lawfulness
  • Safety for people and the planet
  • Protection from unfair bias
  • AI use that is transparent and explainable
  • Appropriate individual choices about the use of personal information to develop AI
  • Option for human intervention in AI-driven decisions impacting legal rights or well-being
  • Organizational accountability to ensure AI is secure and responsibly managed
66
Q

What 5 areas should an organization explore to ensure they are creating ethical AI?

A
  • Legal and compliance
  • Equitable design
  • Transparency and interpretability
  • Privacy and cybersecurity
  • Data governance
67
Q

What legal and compliance issues should an organization consider in implementing ethical AI?

A

Legal and compliance guidance, including relevant policies and procedures, should be in place to ensure legal review of AI and the execution of existing bias-mitigation processes. If no such process exists, the organization should develop one for AI.

68
Q

What equitable design issues should an organization consider in implementing ethical AI?

A
  • Consider whether there is diversity of thought in teams responsible for development of AI, training, testing and monitoring
  • A cross-functional and demographically diverse group should be put in place to evaluate higher-risk AI products/processes that could result in biased outcomes or other ethical concerns
69
Q

What transparency and interpretability issues should an organization consider in implementing ethical AI?

A
  • AI models and products with embedded AI should be labeled as such both internally and externally, per the FTC’s guidance on transparency
  • Consumers should be notified when they are interacting with AI or receiving output/decisions generated by AI
  • Decisions made by AI should be explainable to a consumer (This also applies when AI is provided by a third party, so contracts should ensure a third party can provide explanations of AI-generated decisions)
70
Q

What privacy and cybersecurity issues should an organization consider in implementing ethical AI?

A
  • Use of personal information to develop and train AI should be disclosed in privacy notices
  • Consent must be obtained in compliance with applicable privacy regulations
  • Consumers should be able to access and delete their personal information used to develop and train AI models in compliance with applicable laws
  • Data minimization: personal data unlikely to improve the model should not be included
  • AI must be developed to mitigate the risk of cyber intrusion, such as exfiltration of confidential or personal information, or poisoning of the model
71
Q

What data governance issues should an organization consider in implementing ethical AI?

A

Organizations must ensure the quality and integrity of the data used to develop and train the models