Risks and Harms Flashcards

1
Q

Name some potential individual harms from AI.

A
  • Harm to civil liberties
  • Harm to rights
  • Harm to physical or psychological safety
  • Harm to economic opportunity
2
Q

Implicit Bias

A

Unconscious discrimination or prejudice toward a particular group or individual.

3
Q

Sampling Bias

A

The training data is skewed toward a subset of the population, so the model may favor that subset.
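
A minimal Python sketch of the idea (illustrative only; the population, groups, and numbers are hypothetical): a sample drawn mostly from one subgroup yields a statistic that misrepresents the full population.

```python
# Sampling bias sketch: the sample over-represents group A, so the
# sample mean drifts toward group A's average (hypothetical data).
import random

random.seed(0)

# Hypothetical population: group A averages ~50, group B averages ~70.
population = [("A", random.gauss(50, 5)) for _ in range(5000)] + \
             [("B", random.gauss(70, 5)) for _ in range(5000)]

# Biased sample: 90% drawn from group A, only 10% from group B.
group_a = [v for g, v in population if g == "A"]
group_b = [v for g, v in population if g == "B"]
sample = random.sample(group_a, 900) + random.sample(group_b, 100)

population_mean = sum(v for _, v in population) / len(population)  # ~60
sample_mean = sum(sample) / len(sample)                            # ~52

print(f"population mean:    {population_mean:.1f}")
print(f"biased sample mean: {sample_mean:.1f}")
```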

4
Q

Temporal Bias

A

The model performs well when it is trained, but may not work later as data and conditions change over time.
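
A minimal Python sketch (all data, spend values, and thresholds are hypothetical): a decision rule that matched behavior at training time loses accuracy once the underlying pattern drifts.

```python
# Temporal bias sketch: a rule learned at training time degrades after
# the underlying behavior shifts (hypothetical data).
import random

random.seed(1)

def truth(spend, threshold):
    """Ground truth at a point in time: conversion happens above a spend threshold."""
    return spend > threshold

# Rule "learned" at training time, when the true threshold was 100.
predict = lambda spend: spend > 100

# At training time the rule matches reality closely.
train = [(x, truth(x, 100)) for x in (random.uniform(0, 200) for _ in range(1000))]
acc_then = sum(predict(x) == y for x, y in train) / len(train)

# Later, behavior drifts and the true threshold moves to 150.
later = [(x, truth(x, 150)) for x in (random.uniform(0, 200) for _ in range(1000))]
acc_now = sum(predict(x) == y for x, y in later) / len(later)

print(f"accuracy at training time: {acc_then:.2f}")  # ~1.00
print(f"accuracy later:            {acc_now:.2f}")   # ~0.75
```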

5
Q

Overfitting to Training Data

A

The model performs well on its training data but fails to generalize to new data.
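
A minimal sketch, assuming scikit-learn is available (the dataset is synthetic): an unconstrained decision tree memorizes noisy training data, so training accuracy is near perfect while accuracy on held-out data is much lower.

```python
# Overfitting sketch: a fully grown decision tree memorizes label noise
# in the training split and generalizes poorly to the test split.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data with 20% label noise so memorization is clearly visible.
X, y = make_classification(n_samples=500, n_features=20, flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(max_depth=None, random_state=0).fit(X_train, y_train)

print(f"train accuracy: {model.score(X_train, y_train):.2f}")  # ~1.00
print(f"test accuracy:  {model.score(X_test, y_test):.2f}")    # noticeably lower
```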

6
Q

Edge Cases and Outliers

A

Inputs that fall outside the boundaries of the training dataset.
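
A minimal Python sketch (the feature and ranges are hypothetical): flagging inputs that fall outside the range seen during training, since any prediction there is an extrapolation.

```python
# Edge case / outlier sketch: compare new inputs against the range of the
# training data and flag anything outside it (hypothetical ages).
def training_bounds(values):
    return min(values), max(values)

train_ages = [22, 25, 31, 38, 45, 52, 60]  # hypothetical training feature
lo, hi = training_bounds(train_ages)

for age in [30, 17, 95]:
    if lo <= age <= hi:
        print(f"age={age}: inside training range {lo}-{hi}, prediction is interpolation")
    else:
        print(f"age={age}: outside training range {lo}-{hi}, treat the prediction with caution")
```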

7
Q

Areas of potential discrimination

A
  • Employment and hiring
  • Insurance and social benefits
  • Housing
  • Education
  • Credit
  • Differential pricing of goods and services
8
Q

Name some AI privacy concerns.

A
  • Using personal data in training and use of systems
  • Easy to recombine data and reidentify individuals
  • Appropriation of personal data for model training
  • Inferences could identify the wrong individual
  • Lack of transparency
  • Inaccurate models
  • Information about protected classes is often considered sensitive
9
Q

What is an Inference?

A

A prediction or decision produced by an AI model when it is applied to input data.
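
A minimal sketch, assuming scikit-learn and NumPy are available (the data is made up): fitting is the training step, and inference is the later step where the fitted model produces a prediction for new input.

```python
# Inference sketch: a model is fit on labeled data, then applied to
# previously unseen input to produce a prediction (hypothetical data).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Training phase: fit the model on labeled historical examples.
X_train = np.array([[1.0], [2.0], [3.0], [4.0]])
y_train = np.array([0, 0, 1, 1])
model = LogisticRegression().fit(X_train, y_train)

# Inference phase: the fitted model makes a prediction for new input.
new_input = np.array([[2.6]])
print("predicted class:", model.predict(new_input)[0])
print("class probabilities:", model.predict_proba(new_input)[0])
```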

10
Q

What are some economic risks of AI?

A
  • Job loss
  • AI-driven discriminatory hiring practices
  • Job opportunities may fail to reach key demographics due to AI-driven tools
11
Q

What is the definition of group harms from AI?

A

Harm to a group, such as discrimination against a population subgroup.

12
Q

Name some potential group harms from AI.

A
  • Mass surveillance (especially for marginalized groups)
  • Freedom of assembly and protest (due to tracking and profiling)
  • Deepening of racial and socio-economic inequities
  • Societal harm: harm to democratic participation and processes
  • Spread of disinformation, fostering ideological bubbles (echo chambers)
  • Deepfakes
  • Safety issues
  • Lack of human oversight
13
Q

What are deepfakes?

A

Audio, video, or images altered to portray a false or fabricated reality.

14
Q

How can AI help the environment?

A
  • Self-driving cars (reduced emissions)
  • Agriculture (higher crop yields)
  • Disaster relief (using satellites)
  • Weather forecasting
15
Q

What are some environmental harms from AI?

A
  • High carbon emissions
  • High energy consumption
  • High water usage (e.g., lithium battery production)
16
Q

What are some mitigations for the harms from AI?

A
  • Alternative energy sources such as batteries
  • Implement appropriate AI governance and processes
17
Q

Name some potential organizational harms from AI.

A
  • Reputational
  • Cultural and societal
  • Economic (litigation, remediation)
  • Acceleration risk
  • Legal and regulatory
18
Q

Who are some of the key AI stakeholders?

A
  • AI risk owner (the business unit, whether revenue-generating or not)
  • Customers or others impacted
  • AI risk manager
  • AI subject matter expert (data scientist, AI engineer)
  • Compliance/Legal/Privacy
  • IT leaders (CISO, CTO, CIO)
  • Executive business leaders
  • Board of Directors
19
Q

What are some steps an organization can take to mitigate the organizational risks of AI?

A
  • Review requirements
  • Identify gaps
  • Conduct ongoing monitoring