Responsible AI Practices Flashcards
What is Responsible AI?
Refers to practices and principles
Ensures AI systems are transparent and trustworthy
Mitigates potential risks and negative outcomes
Applicable throughout the lifecycle of an AI application - Design, Development, Deployment, Monitoring, Evaluation, etc.
Applicable to both traditional and generative AI.
What are the challenges of Responsible AI?
Bias - predictions that are biased against historically disadvantaged groups.
This can arise from:
* Data Bias - training data is biased or underrepresents certain groups
* Algorithm Bias - assumptions and simplifications of the model to optimize performance may lead to bias
* Interaction Bias - arises from the way humans interact with the system and the context of deployment - e.g. a face-recognition system tested on certain groups may perform poorly for other groups.
* Bias Amplification - existing societal biases amplified
How can bias be mitigated?
- Ensure training data is diverse and representative
- Audit algorithms for potential bias
- Incorporate fairness metrics and constraints in the development process
- Promote transparency and explainability
- Involve diverse stakeholders in AI development
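The "fairness metrics" bullet above can be made concrete with a small sketch. One widely used metric is the demographic parity difference: the gap in positive-prediction rates between two groups (the function name and sample data are illustrative, and it assumes exactly two groups):

```python
def demographic_parity_difference(predictions, groups, positive=1):
    """Absolute gap in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        member_preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(1 for p in member_preds if p == positive) / len(member_preds)
    a, b = rates.values()  # assumes exactly two groups
    return abs(a - b)

# Toy example: group A receives positive predictions far more often than B.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5 (0.75 vs 0.25)
```

A value near 0 suggests both groups receive positive outcomes at similar rates; a fairness constraint would cap this gap during training or model selection.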
What are the unique challenges of Generative AI?
- Toxicity - content that is offensive, disturbing, or inappropriate
- Hallucinations - assertions or claims that sound plausible but are verifiably incorrect - e.g. nonexistent scientific citations
- Intellectual property - LLMs can sometimes reproduce text or code verbatim from training data. Even when the content is original, there may be copyright issues - e.g. "a cat in the style of Picasso"
- Plagiarism and Cheating - e.g. college essays written by Gen AI
- Disruption of the nature of work - worry that some professions may become obsolete.
What are the core dimensions of responsible AI?
Mnemonic: Friendly Elephants Prefer Vast Tropical Grasslands, Staying Cool.
F - Fairness
E - Explainability
P - Privacy and Security
V - Veracity and Robustness
T - Transparency
G - Governance
S - Safety
C - Controllability
What is Fairness?
Promotes inclusion and minimizes discriminatory outputs
What is Explainability?
Humans must be able to understand how the model makes decisions.
What does Privacy and Security involve?
Data is protected from theft and disclosure
Individuals control if and when their data is used.
Unauthorized users cannot access an individual's data.
What is transparency?
How information about an AI system is communicated to its stakeholders.
This helps stakeholders make informed choices about how they use the system.
e.g. Information on development process, capabilities, and limitations of the system, types of testing performed.
E.g. AI model cards
What is veracity and robustness?
Reliability even under unexpected conditions, uncertainty, and errors.
Resilience to changes in input parameters and data distributions.
What is Governance?
Set of processes that define, implement, and enforce responsible AI practices within an organization.
Governance policies used to enforce compliance with regulatory obligations.
What is Safety?
Develop AI in a way that is safe and beneficial for individuals and society as a whole.
Use of guardrails
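The "guardrails" idea can be sketched in a few lines: intercept a model response, block it if it touches a denied topic, and mask PII otherwise. This is a toy illustration only (the denylist, regex, and function name are hypothetical); production guardrails such as Guardrails for Amazon Bedrock are far more sophisticated:

```python
import re

DENYLIST = {"explosives", "stolen credentials"}     # hypothetical denied topics
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")   # crude email matcher

def apply_content_guardrail(text):
    """Block denied topics entirely; redact email-shaped PII otherwise."""
    lowered = text.lower()
    if any(term in lowered for term in DENYLIST):
        return "[BLOCKED: policy violation]"
    return EMAIL_RE.sub("[EMAIL REDACTED]", text)

print(apply_content_guardrail("Reach me at jane@example.com"))
# → Reach me at [EMAIL REDACTED]
```

Real guardrails add semantic topic classifiers, toxicity scoring, and configurable PII entity types rather than simple string and regex matching.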
What is controllability?
Monitor and guide an AI system’s behavior to align with human values.
Business benefits of Responsible AI?
Increased trust and reputation
Regulatory compliance
Mitigate risks such as bias, privacy violations, security breaches
Competitive edge
Improved decision making
Improved products and businesses
What are Amazon services and tools that help with responsible AI?
- Model Evaluation on Bedrock - automatic and human evaluation
* Automatic evaluation - predefined metrics for accuracy, robustness, and toxicity
* Human evaluation - evaluates subjective qualities such as friendliness, style, and brand alignment; performed by either the customer's own employees or an AWS-managed team
- SageMaker Clarify - performs model evaluation; can detect bias (e.g. gender and age); can use a human workforce to review model responses
- Guardrails for Amazon Bedrock - filter out undesirable content, PII
- SageMaker Data Wrangler - corrects data imbalances (random undersampling/oversampling etc. to rebalance data)
- SageMaker Clarify - provides explainability metrics -e.g. which feature contributed most to the prediction
- Monitoring and human reviews - SageMaker Model Monitor monitors the quality of SageMaker ML models in production; you can set up scheduled monitoring jobs to detect deviations in model performance.
- Amazon Augmented AI (A2I) - coordinates human review workflows for ML predictions; manages large numbers of human reviewers.
- Amazon SageMaker tools for Governance include Role Manager, Model Cards, and Model Dashboard (to monitor model behavior in production)
- AWS AI Service Cards - document the intended use cases, limitations, and responsible AI design considerations of AWS AI services
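The Data Wrangler bullet above mentions random oversampling to correct class imbalance. A minimal pure-Python sketch of that idea (the function and toy data are illustrative, not the Data Wrangler implementation):

```python
import random

def random_oversample(rows, labels, seed=0):
    """Duplicate minority-class rows at random until every class
    matches the majority class count."""
    rng = random.Random(seed)
    by_label = {}
    for row, y in zip(rows, labels):
        by_label.setdefault(y, []).append(row)
    target = max(len(group) for group in by_label.values())
    out_rows, out_labels = [], []
    for y, group in by_label.items():
        extras = [rng.choice(group) for _ in range(target - len(group))]
        for row in group + extras:
            out_rows.append(row)
            out_labels.append(y)
    return out_rows, out_labels

# Toy imbalanced set: 1 "pos" vs 3 "neg" rows → balanced to 3 vs 3.
rows, labels = random_oversample([[1], [2], [3], [4]],
                                 ["pos", "neg", "neg", "neg"])
print(labels.count("pos"), labels.count("neg"))  # 3 3
```

Random undersampling is the mirror operation (dropping majority-class rows); both change the training distribution, so they should be applied to training data only, never to the evaluation set.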