Responsible AI Practices Flashcards

In the first section of this course, you will be introduced to what responsible AI is. You will learn how to define responsible AI, understand the challenges that responsible AI attempts to overcome, and explore the core dimensions of responsible AI.

1
Q

What is responsible AI?

A

Responsible AI refers to practices and principles that ensure that AI systems are transparent and trustworthy while mitigating potential risks and negative outcomes.

2
Q

What proactive measures should companies take to ensure responsible AI in their systems?

A
Companies should confirm that any AI system they build or adopt meets the following criteria:
  • Transparency and Accountability: It is transparent and accountable, with monitoring and oversight mechanisms in place.
  • Leadership Accountability: It is managed by a leadership team accountable for responsible AI strategies.
  • Expertise in Development: It is developed by teams or consultants with expertise in responsible AI principles and practices.
  • Guideline Compliance: It is built following responsible AI guidelines.
3
Q

What type of AI requires responsible AI?

A

Responsible AI is not exclusive to any one form of AI. It should be considered when you are building traditional or generative AI systems.

4
Q

What are the basic differences between traditional AI and generative AI?

A

__Traditional AI__
- Traditional machine learning models perform tasks based on the data you provide.
- They can make predictions such as ranking, sentiment analysis, image classification, and more.
- Each model can perform only one task and needs to be carefully trained on the data.
- As they train, they analyze the data for patterns and use those patterns to make predictions.

__Generative AI__
- Generative AI runs on foundation models (FMs) that are pre-trained on massive amounts of general domain data.
- These models can perform multiple tasks and generate content based on user input, usually in the form of a prompt.
- The generated content comes from learning patterns and relationships, enabling the model to predict the desired outcome.

5
Q

What potential business value can companies gain from the innovations and diverse strengths of foundation models (FMs)?

A

__Business Value of Foundation Models__
- Creativity: Create new content and ideas, including conversations, stories, images, videos, and music.
- Productivity: Radically improve productivity across all lines of business, use cases, and industries.
- Connectivity: Connect and engage with customers and across organizations in new ways.

6
Q

How many types of bias are there, and what are they?

A

__Data Bias__
Definition:
If the training data used to train an AI model is biased or underrepresents certain groups, the resulting model may exhibit biases in its predictions or decisions.

Example:
If an AI system for hiring is trained on historical data that reflects past adverse decisions toward individuals or groups based on their characteristics, it may perpetuate those biases in its recommendations.

__Algorithm Bias__
Definition:
The algorithms and models used in AI systems can introduce biases, even if the training data is unbiased. This can happen due to inherent assumptions or simplifications made by the algorithms, particularly for underrepresented groups.

Key Point:
Machine learning models often optimize for performance, not necessarily for fairness.

__Interaction Bias__
Definition:
Biases can arise from the way humans interact with AI systems or the context in which the AI is deployed.

Example:
If an AI system for facial recognition is primarily tested on a certain demographic group, it may perform poorly on other groups.

__Bias Amplification__
Definition:
AI systems can amplify and perpetuate existing societal biases if not properly designed and monitored.

Key Point:
This can lead to unfair treatment or discrimination against certain groups, even if unintentional. With increased adoption of AI, especially through social media platforms, there is a heightened risk of bias amplifying further.
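
To make data bias concrete, the following minimal sketch checks a hypothetical hiring dataset for two warning signs described above: underrepresentation of a group and diverging outcome rates across groups. The column names (`gender`, `hired`) and the data are illustrative.

```python
# A minimal sketch of checking a dataset for representation and label bias.
# The DataFrame and its column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "gender": ["F", "M", "M", "F", "M", "M", "M", "F"],
    "hired":  [0,   1,   1,   0,   1,   0,   1,   1],
})

# Representation: is any group underrepresented in the training data?
print(df["gender"].value_counts(normalize=True))

# Label bias: does the favorable-outcome rate differ sharply across groups?
print(df.groupby("gender")["hired"].mean())
```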

7
Q

What practices are aimed at mitigating bias throughout the development and operation of an AI system?

A
  • Ensuring diverse and representative data is used for training AI models.
  • Carefully auditing algorithms and models for potential biases.
  • Incorporating fairness metrics and constraints into the AI development process (see the sketch after this list).
  • Promoting transparency and explainability in AI systems to understand their decision-making processes.
  • Involving diverse stakeholders and communities in the design and deployment of AI systems.
  • Continuously monitoring and updating AI systems to address emerging biases.
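
As one concrete illustration of the fairness-metrics item above, the sketch below computes the disparate impact ratio, a common fairness metric, for a hypothetical set of model predictions. The arrays are illustrative, and the 0.8 threshold is the commonly cited "four-fifths rule".

```python
# A minimal sketch of one common fairness metric: the disparate impact
# ratio (favorable-prediction rate for one group divided by the rate for
# another). Predictions and group labels here are hypothetical.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])              # model predictions
group  = np.array(["F", "F", "M", "M", "M", "M", "F", "F"])

rate_f = y_pred[group == "F"].mean()
rate_m = y_pred[group == "M"].mean()
disparate_impact = rate_f / rate_m

# A common rule of thumb flags ratios below 0.8 ("four-fifths rule").
print(f"Disparate impact ratio: {disparate_impact:.2f}")
```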
8
Q

What are some of the challenges related to generative AI?

A

__Toxicity__
Definition:
Toxicity is the possibility of generating content (whether text, images, or other modalities) that is offensive, disturbing, or otherwise inappropriate. It is a primary concern with generative AI, and it is hard even to define and scope: determining what constitutes toxic content is subjective, and the boundary between restricting toxic content and censorship can be murky and dependent on context and culture.

Examples of Controversy:
- Should quotations that would be considered offensive out of context be suppressed if they are clearly labeled as quotations?
- What about opinions that might be offensive to some users but are clearly labeled as opinions?

Technical Challenges:
Offensive content might be worded in a very subtle or indirect fashion, without the use of obviously inflammatory language.

__Hallucinations__
Definition:
Hallucinations are assertions or claims that sound plausible but are verifiably incorrect. Considering the next-word distribution sampling employed by large language models (LLMs), it is perhaps not surprising that in more objective or factual use cases, LLMs are susceptible to hallucinations.

Example:
A common phenomenon with current LLMs is creating nonexistent scientific citations. Suppose that an LLM is prompted with the request, “Tell me about some papers by” a particular author. The model is not actually searching for legitimate citations but generating them from the distribution of words associated with that author. The result might include realistic titles and topics in the author's area, but these might not be real articles, and they might list plausible coauthors rather than actual ones.

__Intellectual Property__
Definition:
Protecting intellectual property was a problem with early LLMs because the models occasionally reproduced passages of text or code verbatim from their training data, raising privacy and other concerns. Even improvements in this regard have not prevented reproductions of training content that are more ambiguous and nuanced.

Example of Controversy:
Consider the following prompt for a generative image model: “Create a painting of a skateboarding cat in the style of [name of a famous artist].” If the model is able to do so in a convincing yet original manner because it was trained on images of the specific artist, objections to such mimicry might arise.

__Plagiarism and Cheating__
Definition:
The creative capabilities of generative AI give rise to worries that it will be used to write college essays, writing samples for job applications, and other forms of cheating or illicit copying. Debates on this topic are happening at universities and many other institutions, and attitudes vary widely.

Example of Debate:
Some are in favor of explicitly forbidding any use of generative AI in settings where content is being graded or evaluated, while others argue that educational practices must adapt to, and even embrace, the new technology. But the underlying challenge of verifying that a given piece of content was authored by a person is likely to present concerns in many contexts.

__Disruption of the Nature of Work__
Definition:
The proficiency with which generative AI is able to create compelling text and images, perform well on standardized tests, write entire articles on given topics, and successfully summarize or improve the grammar of provided articles has created some anxiety. There is a concern that some professions might be replaced or seriously disrupted by the technology.

Key Point:
Although this might be premature, it does seem that generative AI will have a transformative effect on many aspects of work. It is possible that many tasks previously beyond automation could be delegated to machines.

9
Q

How does responsible AI benefit businesses?

A

Flashcard 1
Q: What is one benefit of increased trust and reputation in AI applications?
A: Customers are more likely to interact with AI applications if they believe the system is fair and safe, which enhances the company's reputation and brand value.

Flashcard 2
Q: How does regulatory compliance relate to responsible AI?
A: As AI regulations emerge, companies with robust responsible AI frameworks can improve compliance with guidelines on data privacy, fairness, accountability, and transparency.

Flashcard 3
Q: What risks can responsible AI practices help mitigate?
A: Responsible AI practices help mitigate risks such as bias, privacy violations, security breaches, and unintended negative impacts on society, reducing legal liabilities and financial costs.

Flashcard 4
Q: What competitive advantage do companies gain by prioritizing responsible AI?
A: Companies that prioritize responsible AI can differentiate themselves from competitors and gain a competitive edge, especially as consumer awareness of AI ethics grows.

Flashcard 5
Q: How does responsible AI contribute to improved decision-making?
A: AI systems built with fairness, accountability, and transparency are more reliable and less likely to produce biased or flawed outputs, leading to better data-driven decisions.

Flashcard 6
Q: In what way does responsible AI enhance products and business?
A: Responsible AI encourages a diverse and inclusive approach to AI development, drawing on varied perspectives and experiences to drive more creative and innovative solutions.

10
Q

What is Amazon SageMaker?

A

Amazon SageMaker is a fully managed ML service. With SageMaker, data scientists and developers can quickly and confidently build, train, and deploy ML models into a production-ready hosted environment. It provides a UI experience for running ML workflows that makes SageMaker ML tools available across multiple integrated development environments (IDEs).
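
As a rough illustration of that build/train/deploy flow, the following sketch uses the SageMaker Python SDK. The training script name, S3 URIs, and IAM role ARN are hypothetical placeholders.

```python
# A minimal sketch of the SageMaker Python SDK build/train/deploy flow.
# Script name, S3 paths, and role ARN below are placeholders.
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()

estimator = SKLearn(
    entry_point="train.py",                                # hypothetical training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",   # placeholder role ARN
    instance_type="ml.m5.xlarge",
    framework_version="1.2-1",
    sagemaker_session=session,
)

# Train the model on data stored in S3 (placeholder URI).
estimator.fit({"train": "s3://my-bucket/train/"})

# Deploy the trained model to a production-ready hosted endpoint.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```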

11
Q

What is Amazon Bedrock?

A

Amazon Bedrock is a fully managed service that makes available high-performing FMs from leading AI startups and Amazon for your use through a unified API. You can choose from a wide range of FMs to find the model that is best suited for your use case.
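
As a minimal sketch of that unified API, the following uses the boto3 `bedrock-runtime` client's Converse API. The region and model ID are illustrative; any Bedrock-supported FM ID could be substituted.

```python
# A minimal sketch of calling an FM through Amazon Bedrock's unified API.
# The region and model ID are illustrative assumptions.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[{"role": "user",
               "content": [{"text": "Summarize responsible AI in one sentence."}]}],
)

print(response["output"]["message"]["content"][0]["text"])
```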

12
Q

How can Guardrails for Amazon Bedrock help implement safeguards in generative AI applications based on specific use cases and responsible AI policies?

A

__Consistent level of AI safety__
Guardrails for Amazon Bedrock evaluates user inputs and FM responses based on use case-specific policies and provides an additional layer of safeguards regardless of the underlying FM. Guardrails can be applied to FMs including Anthropic Claude, Meta Llama 2, Cohere Command, AI21 Labs Jurassic, and Amazon Titan Text, as well as to fine-tuned models. Customers can create multiple guardrails, each configured with a different combination of controls, and use these guardrails across different applications and use cases. Guardrails for Amazon Bedrock can also be integrated with Agents for Amazon Bedrock to build generative AI applications aligned with your responsible AI policies.

__Block undesirable topics__
Organizations recognize the need to manage interactions within generative AI applications for a relevant and safe user experience. They want to further customize interactions to remain on topics relevant to their business and align with company policies. By using a short, natural language description, Guardrails for Amazon Bedrock gives you the ability to define a set of topics to avoid within the context of your application. Guardrails for Amazon Bedrock detects and blocks user inputs and FM responses that fall into the restricted topics. For example, a banking assistant can be designed to avoid topics related to investment advice.

__Filter harmful content__
Guardrails for Amazon Bedrock provides content filters with configurable thresholds to filter harmful content across hate, insults, sexual, and violence categories. Most FMs already provide built-in protections to prevent the generation of harmful responses. In addition to these protections, Guardrails for Amazon Bedrock gives you the ability to configure thresholds across the different categories to filter out harmful interactions. Guardrails for Amazon Bedrock automatically evaluates both user queries and FM responses to detect and help prevent content that falls into restricted categories. For example, an e-commerce site can design its online assistant to avoid using inappropriate language such as hate speech or insults.

__Redact PII to protect user privacy__
Guardrails for Amazon Bedrock helps you detect PII in user inputs and FM responses. Based on the use case, you can selectively reject inputs containing PII or redact PII in FM responses. For example, you can redact users’ personal information while generating summaries from customer and agent conversation transcripts in a call center.
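
The following minimal sketch ties these capabilities together: it defines a guardrail with a denied topic (investment advice), content filters, and PII anonymization, then applies it to a model invocation. The guardrail name, blocked messages, model ID, and region are hypothetical.

```python
# A minimal sketch of creating a guardrail and attaching it to a model
# invocation. Names, messages, model ID, and region are hypothetical.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

guardrail = bedrock.create_guardrail(
    name="banking-assistant-guardrail",                        # placeholder name
    blockedInputMessaging="Sorry, I can't discuss that topic.",
    blockedOutputsMessaging="Sorry, I can't provide that information.",
    # Block an undesirable topic, described in natural language.
    topicPolicyConfig={"topicsConfig": [{
        "name": "investment-advice",
        "definition": "Recommendations about specific investments or securities.",
        "type": "DENY",
    }]},
    # Filter harmful content with configurable thresholds per category.
    contentPolicyConfig={"filtersConfig": [
        {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        {"type": "INSULTS", "inputStrength": "HIGH", "outputStrength": "HIGH"},
    ]},
    # Redact (anonymize) PII detected in FM responses.
    sensitiveInformationPolicyConfig={"piiEntitiesConfig": [
        {"type": "EMAIL", "action": "ANONYMIZE"},
    ]},
)

# Apply the guardrail when invoking a model through the runtime client.
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
response = runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[{"role": "user",
               "content": [{"text": "Which stocks should I buy?"}]}],
    guardrailConfig={"guardrailIdentifier": guardrail["guardrailId"],
                     "guardrailVersion": "DRAFT"},
)
```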

13
Q

What is SageMaker Clarify?

A

SageMaker Clarify helps identify potential bias in machine learning models and datasets without the need for extensive coding. You specify input features, such as gender or age, and SageMaker Clarify runs an analysis job to detect potential bias in those features. SageMaker Clarify then provides a visual report with a description of the metrics and measurements of potential bias so that you can identify steps to remediate the bias.
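
A minimal sketch of running such a pre-training bias analysis with the SageMaker Python SDK follows; the S3 paths, column names, and IAM role are hypothetical placeholders.

```python
# A minimal sketch of a SageMaker Clarify pre-training bias analysis.
# Paths, column names, and the role ARN are placeholders.
from sagemaker import clarify, Session

session = Session()
processor = clarify.SageMakerClarifyProcessor(
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role ARN
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/train.csv",        # placeholder input
    s3_output_path="s3://my-bucket/clarify-output/",      # placeholder output
    label="hired",
    headers=["age", "gender", "experience", "hired"],
    dataset_type="text/csv",
)

bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],   # favorable label value
    facet_name="gender",             # feature to analyze for potential bias
)

# Runs a processing job and writes a bias report to s3_output_path.
processor.run_pre_training_bias(data_config=data_config, data_bias_config=bias_config)
```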

14
Q

What is Amazon SageMaker Data Wrangler?

A

You can use Amazon SageMaker Data Wrangler to balance your data when it is imbalanced. SageMaker Data Wrangler offers three balancing operators to rebalance imbalanced datasets: random undersampling, random oversampling, and Synthetic Minority Oversampling Technique (SMOTE).
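
Data Wrangler applies these operators through its visual interface; to illustrate the underlying SMOTE technique itself, the following sketch uses the open-source imbalanced-learn library on a synthetic dataset.

```python
# A sketch of the SMOTE technique (not Data Wrangler itself) using the
# open-source imbalanced-learn library on synthetic, imbalanced data.
from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
print("Before:", Counter(y))   # roughly 900 majority vs. 100 minority samples

# SMOTE synthesizes new minority-class samples by interpolating neighbors.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print("After:", Counter(y_res))  # classes are now balanced
```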

15
Q

What is Amazon SageMaker Model Monitor?

A

Amazon SageMaker Model Monitor monitors the quality of SageMaker machine learning models in production.
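
As a minimal sketch of how that monitoring is set up with the SageMaker Python SDK: baseline the training data, then schedule recurring checks of a deployed endpoint against that baseline. The role ARN, S3 URIs, endpoint name, and schedule name are hypothetical placeholders.

```python
# A minimal sketch of scheduling data-quality monitoring for a deployed
# endpoint. Role, S3 URIs, and names below are placeholders.
from sagemaker.model_monitor import DefaultModelMonitor, CronExpressionGenerator
from sagemaker.model_monitor.dataset_format import DatasetFormat

monitor = DefaultModelMonitor(
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role ARN
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

# Compute baseline statistics and constraints from the training data.
monitor.suggest_baseline(
    baseline_dataset="s3://my-bucket/train.csv",          # placeholder input
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri="s3://my-bucket/baseline/",             # placeholder output
)

# Compare live endpoint traffic against the baseline every hour.
monitor.create_monitoring_schedule(
    monitor_schedule_name="my-endpoint-monitor",          # placeholder name
    endpoint_input="my-endpoint",                         # placeholder endpoint
    output_s3_uri="s3://my-bucket/monitor-reports/",
    statistics=monitor.baseline_statistics(),
    constraints=monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)
```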

16
Q

Name and describe the AWS AI Governance tools:

A

__Amazon SageMaker Role Manager__
With SageMaker Role Manager, administrators can define minimum permissions in minutes.

__Amazon SageMaker Model Cards__
With SageMaker Model Cards, you can capture, retrieve, and share essential model information, such as intended uses, risk ratings, and training details, from conception to deployment.

__Amazon SageMaker Model Dashboard__
With SageMaker Model Dashboard, you can keep your team informed on model behavior in production, all in one place.
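
As one programmatic illustration, model information can also be captured as a SageMaker Model Card through boto3. The card name and content fields below are hypothetical, and the Content JSON must follow the model card schema.

```python
# A minimal sketch of registering model documentation with SageMaker Model
# Cards via boto3. Card name and content values are hypothetical.
import json
import boto3

sm = boto3.client("sagemaker")

sm.create_model_card(
    ModelCardName="credit-scoring-model-card",  # placeholder name
    ModelCardStatus="Draft",
    Content=json.dumps({
        "model_overview": {
            "model_description": "Scores loan applications.",  # placeholder
        },
        "intended_uses": {
            "purpose_of_model": "Internal credit risk screening.",  # placeholder
            "risk_rating": "Medium",
        },
    }),
)
```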
