Domain 6 Flashcards
Implementing Responsible AI Governance and Risk Management
What are the 3 broad categories of risk associated with AI algorithms and models?
- security and operational risk
- privacy risk
- business risk
What are the common security risks of generative AI?
- hallucinations
- deepfakes
- training data poisoning
- data leakage
- filter bubbles/echo chambers
What are the general security and operational risks of AI?
- erosion of individual freedoms
- false sense of security
- vulnerable to adversarial machine learning attacks
- misuse of AI
- high operational costs
- data corruption and poisoning
What privacy risks endanger an individual's privacy?
- data persistence
- data repurposing
- spillover data
- data collected/derived from the AI algorithm/model itself
What are the business risks to the organization?
- bias and discrimination
- job displacement
- dependence on AI vendors
- lack of transparency
- intellectual property infringement
- regulation and legal risks
What is a harms taxonomy?
A list of negative consequences that could befall the data subject or organization if certain pieces of information are leaked or misused.
What are the 3 approaches to identifying privacy harms?
- Panopticon
- Ryan Calo
- Citron and Solove
Which approach is most helpful for identifying privacy harm?
A. Panopticon
B. Ryan Calo
C. Citron and Solove
C. Citron and Solove
Which approach is further broken down into subjective and objective privacy harms?
A. Panopticon
B. Ryan Calo
C. Citron and Solove
B. Ryan Calo
What are the 7 AI governance principles that should be integrated into the company?
- adopt a pro-innovation mindset
- ensure planning and design are consensus-driven
- ensure the team is outcome-focused
- ensure the framework is law, industry, and technology agnostic
- adopt a non-prescriptive approach to allow for intelligent self-management
- ensure governance is risk-centric
- create policies to manage third-party risk and ensure end-to-end accountability
What are examples of different risk methodologies?
- probability and severity harms matrix (see the sketch after this list)
- HUDERIA risk index number
- confusion matrix
- risk mitigation hierarchy
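As a rough illustration of the first methodology, here is a minimal Python sketch of a probability-and-severity harms matrix. The five-point scales and the example harms are hypothetical, not taken from any specific framework; the point is only that each harm's likelihood and impact combine into a single score used to rank mitigation priority.

```python
# Minimal sketch of a probability-and-severity harms matrix.
# Scales and harm entries are hypothetical, for illustration only.

PROBABILITY = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
SEVERITY    = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

def risk_score(probability: str, severity: str) -> int:
    """Combine likelihood and impact into a single risk score (1-25)."""
    return PROBABILITY[probability] * SEVERITY[severity]

harms = [
    ("biased loan decisions", "possible", "major"),
    ("training data leakage", "unlikely", "severe"),
    ("chatbot hallucination", "likely", "minor"),
]

# Rank harms so mitigation effort goes to the highest-scoring risks first.
for name, prob, sev in sorted(harms, key=lambda h: risk_score(h[1], h[2]), reverse=True):
    print(f"{name}: {prob} x {sev} -> score {risk_score(prob, sev)}")
```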
What are the reasons why AI systems fail?
- brittleness
- hallucinations
- embedded bias
- catastrophic forgetting
- uncertainty
- false positives (see the sketch below)
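To make the false-positives failure mode concrete (and to illustrate the confusion matrix listed among the risk methodologies above), here is a minimal Python sketch; the labels and predictions are hypothetical.

```python
# Minimal sketch of a confusion matrix and false positive rate for a
# binary classifier. The labels and predictions below are made up.

actual    = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]  # 1 = harmful content, 0 = benign
predicted = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]  # model's decisions

tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)

print(f"confusion matrix: TP={tp} FP={fp} TN={tn} FN={fn}")
print(f"false positive rate: {fp / (fp + tn):.2f}")  # share of benign items wrongly flagged
```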