Domain 6 Flashcards

Implementing Responsible AI Governance and Risk Management

1
Q

What are the 3 broad categories of risk associated with AI algorithms and models?

A
  1. security and operational risk
  2. privacy risk
  3. business risk
2
Q

What are the common security risks of generative AI?

A
  1. hallucinations
  2. deepfakes
  3. training data poisoning
  4. data leakage
  5. filter bubbles/echo chambers
3
Q

What are the general security and operational risks of AI?

A
  1. erosion of individual freedoms
  2. false sense of security
  3. vulnerability to adversarial machine learning attacks
  4. misuse of AI
  5. high operational costs
  6. data corruption and poisoning
4
Q

What are the risks that endanger an individual’s privacy?

A
  1. data persistence
  2. data repurposing
  3. spillover data
  4. data collected/derived from the AI algorithm/model itself
5
Q

What are the business risks to the organization?

A
  1. bias and discrimination
  2. job displacement
  3. dependence on AI vendors
  4. lack of transparency
  5. intellectual property infringement
  6. regulation and legal risks
6
Q

What is a harms taxonomy?

A

A list of negative consequences that could befall the data subject or organization if certain pieces of information are leaked or misused.

7
Q

What are the 3 approaches to identifying privacy harms?

A
  1. Panopticon
  2. Ryan Calo
  3. Citron and Solove
8
Q

Which approach is most helpful for identifying privacy harm?

A. Panopticon
B. Ryan Calo
C. Citron and Solove

A

C. Citron and Solove

9
Q

Which approach is further broken down into subjective and objective privacy harms?

A. Panopticon
B. Ryan Calo
C. Citron and Solove

A

B. Ryan Calo

10
Q

What are the 7 AI governance principles that should be integrated into the company?

A
  1. adopt a pro-innovation mindset
  2. ensure planning and design are consensus-driven
  3. ensure the team is outcome-focused
  4. ensure the framework is law, industry, and technology agnostic
  5. adopt a non-prescriptive approach to allow for intelligent self-management
  6. ensure governance is risk-centric
  7. create policies to manage third-party risk, to ensure end-to-end accountability
11
Q

What are examples of different risk methodologies?

A
  1. probability and severity harms matrix (sketched below)
  2. HUDERIA risk index number
  3. confusion matrix (sketched below)
  4. risk mitigation hierarchy
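
A minimal sketch of two of these methodologies, assuming an illustrative 1-5 probability and severity scale; the tier thresholds and the risk_score, risk_tier, and confusion_matrix helpers are hypothetical, not taken from any specific framework:

```python
# Sketch of two risk methodologies: a probability-and-severity harms
# matrix and a confusion matrix. Scales and thresholds are illustrative
# assumptions, not from any specific framework.


def risk_score(probability: int, severity: int) -> int:
    """Harms matrix score: probability x severity, each rated 1-5."""
    assert 1 <= probability <= 5 and 1 <= severity <= 5
    return probability * severity


def risk_tier(score: int) -> str:
    """Map a 1-25 score to an illustrative tier."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"


def confusion_matrix(actual: list[int], predicted: list[int]) -> dict[str, int]:
    """Count true/false positives and negatives for a binary classifier."""
    counts = {"TP": 0, "FP": 0, "TN": 0, "FN": 0}
    for a, p in zip(actual, predicted):
        if p == 1:
            counts["TP" if a == 1 else "FP"] += 1
        else:
            counts["TN" if a == 0 else "FN"] += 1
    return counts


if __name__ == "__main__":
    # A harm rated probability 4 and severity 5 scores 20 -> "high" tier.
    score = risk_score(4, 5)
    print(score, risk_tier(score))

    # Toy classifier evaluation; the FP count ties to the false-positive
    # failure mode listed in the next card.
    cm = confusion_matrix(actual=[1, 0, 1, 1, 0, 0], predicted=[1, 1, 0, 1, 0, 0])
    print(cm)  # {'TP': 2, 'FP': 1, 'TN': 2, 'FN': 1}
```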
12
Q

What are the reasons AI systems fail?

A
  1. brittleness
  2. hallucinations
  3. embedded bias
  4. catastrophic forgetting
  5. uncertainty
  6. false positives