Responsible Artificial Intelligence Practices Flashcards
Algorithm Bias
The algorithms and models used in AI systems can introduce biases even if the training data is unbiased. This can happen because of inherent assumptions or simplifications the algorithms make, particularly for underrepresented groups, or because machine learning models optimize for performance, not necessarily for fairness.
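To illustrate the performance-versus-fairness tension, a minimal sketch (with hypothetical toy data and labels) compares overall accuracy against a simple group fairness measure, the gap in positive-prediction rates between two groups:

```python
import numpy as np

# Hypothetical predictions for two demographic groups (0 and 1).
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])

# A model tuned only for accuracy can still treat groups unevenly.
accuracy = (y_true == y_pred).mean()

# Demographic parity gap: difference in positive-prediction rates between groups.
rate_g0 = y_pred[group == 0].mean()
rate_g1 = y_pred[group == 1].mean()
parity_gap = abs(rate_g0 - rate_g1)

print(f"accuracy={accuracy:.2f}, positive-rate gap={parity_gap:.2f}")
```

Here the model scores 0.75 accuracy yet predicts the favorable outcome three times as often for one group, which an accuracy-only objective would never surface.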
Interaction Bias
Biases can also arise from the way humans interact with AI systems or the context in which the AI is deployed. For example, if an AI system for facial recognition is primarily tested on a certain demographic group, it may perform poorly on other groups.
Bias Amplification
If not properly designed and monitored, AI systems can amplify and perpetuate existing societal biases. This can lead to unfair treatment or discrimination against certain groups, even when unintentional. As AI adoption grows, the risk of bias amplifying further increases, especially through social media platforms.
Toxicity
Toxicity is the possibility of generating content (whether text, images, or other modalities) that is offensive, disturbing, or otherwise inappropriate. This is a primary concern with generative AI. Even defining and scoping toxicity is difficult.
Intellectual Property
Protecting intellectual property was a problem with early LLMs because they occasionally reproduced text or code passages verbatim from their training data, raising privacy and other concerns. Even with improvements in this regard, models can still reproduce training content in ways that are more ambiguous and nuanced.
Transparency
The transparency dimension of responsible AI refers to communicating information about an AI system so that stakeholders can make informed choices about their use of the system. This information includes development processes, system capabilities, and limitations.
Veracity and robustness in AI
refer to the mechanisms that ensure an AI system operates reliably, even in the face of unexpected situations, uncertainty, and errors.
Safety in responsible AI
refers to the development of algorithms, models, and systems in such a way that they are responsible, safe, and beneficial for individuals and society as a whole.
With model evaluation on Amazon Bedrock,
you can evaluate, compare, and select the best foundation model for your use case in just a few clicks. Amazon Bedrock offers a choice of automatic evaluation and human evaluation.
Model evaluation on Amazon Bedrock: Automatic evaluation
offers predefined metrics such as accuracy, robustness, and toxicity.
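A rough sketch of starting an automatic evaluation job with the boto3 bedrock client's create_evaluation_job API; the job name, role ARN, output bucket, model identifier, and built-in dataset shown here are placeholder assumptions, and the exact request shape should be confirmed against the current AWS documentation:

```python
import boto3

bedrock = boto3.client("bedrock")

# Placeholder identifiers; replace with your own role, bucket, and model.
response = bedrock.create_evaluation_job(
    jobName="faq-assistant-model-eval",
    roleArn="arn:aws:iam::111122223333:role/BedrockEvalRole",
    evaluationConfig={
        "automated": {
            "datasetMetricConfigs": [
                {
                    "taskType": "QuestionAndAnswer",
                    "dataset": {"name": "Builtin.BoolQ"},
                    "metricNames": [
                        "Builtin.Accuracy",
                        "Builtin.Robustness",
                        "Builtin.Toxicity",
                    ],
                }
            ]
        }
    },
    inferenceConfig={
        "models": [
            {"bedrockModel": {"modelIdentifier": "anthropic.claude-3-haiku-20240307-v1:0"}}
        ]
    },
    outputDataConfig={"s3Uri": "s3://my-eval-results-bucket/output/"},
)
print(response["jobArn"])
```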
Model evaluation on Amazon Bedrock: Human evaluation offers
subjective or custom metrics such as friendliness, style, and alignment to brand voice. For human evaluation, you can use your in-house employees or an AWS-managed team as reviewers.
SageMaker Clarify supports FM evaluation.
You can automatically evaluate FMs for your generative AI use case with metrics such as accuracy, robustness, and toxicity to support your responsible AI initiative.
Guardrails for Amazon Bedrock gives you the ability to
configure thresholds across the different categories to filter out harmful interactions.
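A minimal sketch of configuring content filter strengths with the boto3 bedrock client's create_guardrail API; the guardrail name, blocked-message text, and chosen strengths are placeholder assumptions:

```python
import boto3

bedrock = boto3.client("bedrock")

# Placeholder guardrail; filter strengths range from NONE to HIGH per category.
response = bedrock.create_guardrail(
    name="support-chat-guardrail",
    description="Filters harmful content in a customer support assistant.",
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
            {"type": "SEXUAL", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        ]
    },
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that response.",
)
print(response["guardrailId"])
```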
Guardrails for Amazon Bedrock helps you detect PII in
user inputs and FM responses. Based on the use case, you can selectively reject inputs containing PII or redact PII in FM responses.
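A similar sketch (again with placeholder names and messages) rejects inputs containing one PII type while redacting another in model responses, using the guardrail's sensitive information policy:

```python
import boto3

bedrock = boto3.client("bedrock")

# Placeholder guardrail: BLOCK rejects the request, ANONYMIZE redacts the entity.
response = bedrock.create_guardrail(
    name="pii-aware-guardrail",
    sensitiveInformationPolicyConfig={
        "piiEntitiesConfig": [
            {"type": "US_SOCIAL_SECURITY_NUMBER", "action": "BLOCK"},
            {"type": "EMAIL", "action": "ANONYMIZE"},
        ]
    },
    blockedInputMessaging="Please remove personal information and try again.",
    blockedOutputsMessaging="The response was blocked because it contained personal information.",
)
print(response["guardrailArn"])
```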
SageMaker Clarify helps identify potential
bias in machine learning models and datasets without the need for extensive coding. You specify input features, such as gender or age, and SageMaker Clarify runs an analysis job to detect potential bias in those features.
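A sketch of a pre-training bias analysis with the SageMaker Python SDK's clarify module; the role ARN, S3 paths, dataset columns, and the choice of "gender" as the facet are placeholder assumptions:

```python
from sagemaker import Session, clarify

session = Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # placeholder

processor = clarify.SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

# Placeholder CSV with a binary label column and a "gender" facet column.
data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/loan-applications/train.csv",
    s3_output_path="s3://my-bucket/clarify-bias-report/",
    label="approved",
    headers=["gender", "age", "income", "approved"],
    dataset_type="text/csv",
)

bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],   # favorable outcome
    facet_name="gender",             # feature to analyze for bias
    facet_values_or_threshold=[0],   # group of interest
)

# Runs a processing job and writes a bias report (class imbalance, DPL, etc.) to S3.
processor.run_pre_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    methods="all",
)
```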
SageMaker Data Wrangler offers three balancing operators:
random undersampling, random oversampling, and Synthetic Minority Oversampling Technique (SMOTE) to rebalance data in your unbalanced datasets.
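Data Wrangler applies these operators through its visual interface; as a conceptual illustration outside Data Wrangler, the same three techniques can be sketched with the imbalanced-learn library on a hypothetical toy dataset:

```python
from collections import Counter

from imblearn.over_sampling import RandomOverSampler, SMOTE
from imblearn.under_sampling import RandomUnderSampler
from sklearn.datasets import make_classification

# Toy dataset with a 9:1 class imbalance.
X, y = make_classification(
    n_samples=1000, n_features=5, n_informative=3,
    weights=[0.9, 0.1], random_state=42,
)
print("original:", Counter(y))

# Random undersampling: drop majority-class rows until classes match.
X_under, y_under = RandomUnderSampler(random_state=42).fit_resample(X, y)
print("undersampled:", Counter(y_under))

# Random oversampling: duplicate minority-class rows.
X_over, y_over = RandomOverSampler(random_state=42).fit_resample(X, y)
print("oversampled:", Counter(y_over))

# SMOTE: synthesize new minority-class rows by interpolating between neighbors.
X_smote, y_smote = SMOTE(random_state=42).fit_resample(X, y)
print("SMOTE:", Counter(y_smote))
```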