Module 4: Implementing Responsible AI Governance and Risk Management: Interoperability of AI Risk Management Flashcards

1
Q

What are some categories of risks with AI algorithms and models?

A
  • Security and operational
  • Privacy
  • Business
2
Q

What are some Security risks of AI models?

A
  • Hallucinations
  • Deepfakes
  • Training data poisoning
  • Data leakage
  • Filter bubbles (a.k.a. echo chambers)
  • Erosion of individual freedom
  • False sense of security
  • Vulnerability to attack
  • Misuse of AI
3
Q

What are some Operational risks of AI models?

A
  • High cost to run AI environment
  • Data corruption and poisoning
4
Q

What is the definition of a hallucination?

A

Instances where a generative AI model creates content that either contradicts the source or creates factually incorrect output under the appearance of fact.

5
Q

What are some Privacy Risks of AI models?

A
  • Data persistence
  • Data repurposing
  • Data spillover
  • Data collection/derived data (raises consent, transparency and deletion issues)
6
Q

To understand why privacy rights matter in AI, you must go beyond the legal implications and focus on the consequences of infringing on privacy rights. Which type of privacy taxonomy captures this focus?

A

Privacy Harms Taxonomy

7
Q

Name some potential Privacy Harms from AI models.

A
  • Use of force
  • Safety and certification
  • Privacy
  • Personhood
  • Displacement of labor
  • Justice
  • Accountability
  • Using labels to discriminate
8
Q

What are some examples of Privacy Harms Taxonomies?

A

1) MITRE PANOPTIC Privacy Threat Model
- Combines two taxonomies: contextual domains (the context in which a privacy attack occurs) and privacy activities (the activities that constitute an attack)
- Data-driven structure to support privacy threat assessment, risk modeling, and red teaming

2) Ryan Calo
- Subjective privacy harms: perceived as internal to the person being harmed (e.g., anxiety from surveillance)
- Objective privacy harms: perceived as external to the person being harmed; can occur when personal data is used to take adverse action (e.g., refusing a loan)

3) Citron and Solove
- Harm types: physical, reputational, relationship, economic, discrimination, psychological, autonomy

9
Q

What are some examples of AI Harms Taxonomies?

A

1) Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction
- Builds on existing taxonomies, classifications, and terminologies
- Five major themes: representational, allocative, quality-of-service, interpersonal, and social system/societal harms

2) Taxonomy of Human Rights Risks Connected to Generative AI
- Explores how generative AI may adversely impact the rights enumerated in the Universal Declaration of Human Rights
