Responsible Artificial Intelligence Practices Flashcards

1
Q

Algorithm Bias

A

The algorithms and models used in AI systems can introduce biases even when the training data is unbiased. This can happen because of inherent assumptions or simplifications the algorithms make, particularly for underrepresented groups, or because machine learning models optimize for performance, not necessarily for fairness.

2
Q

Interaction Bias

A

Biases can also arise from the way humans interact with AI systems or the context in which the AI is deployed. For example, if an AI system for facial recognition is primarily tested on a certain demographic group, it may perform poorly on other groups.

3
Q

Bias amplification

A

AI systems can amplify and perpetuate existing societal biases if they are not properly designed and monitored. This can lead to unfair treatment or discrimination against certain groups, even when unintentional. As AI adoption grows, so does the risk of bias amplifying further, especially through social media platforms.

4
Q

Toxicity

A

Toxicity is the possibility of generating content (whether text, images, or other modalities) that is offensive, disturbing, or otherwise inappropriate. It is a primary concern with generative AI, and toxicity is hard even to define and scope.

5
Q

Intellectual Property

A

Protecting intellectual property was a problem with early LLMs because they had a tendency to occasionally reproduce text or code passages verbatim from their training data, raising privacy and other concerns. Even improvements in this regard have not prevented reproductions of training content that are more ambiguous and nuanced.

6
Q

Transparency

A

The transparency dimension of responsible AI refers to the practice of communicating information about an AI system so that stakeholders can make informed choices about their use of the system. This information includes development processes, system capabilities, and limitations.

7
Q

Veracity and robustness in AI

A

refer to the mechanisms that ensure an AI system operates reliably, even in the face of unexpected situations, uncertainty, and errors.

8
Q

Safety in responsible AI

A

refers to the development of algorithms, models, and systems in such a way that they are responsible, safe, and beneficial for individuals and society as a whole.

9
Q

Model evaluation on Amazon Bedrock

A

With model evaluation on Amazon Bedrock, you can evaluate, compare, and select the best foundation model for your use case in just a few clicks. Amazon Bedrock offers a choice of automatic evaluation and human evaluation.

10
Q

Model evaluation on Amazon Bedrock: Automatic evaluation

A

offers predefined metrics such as accuracy, robustness, and toxicity.

11
Q

Model evaluation on Amazon Bedrock: Human evaluation offers

A

subjective or custom metrics such as friendliness, style, and alignment to brand voice. For human evaluation, you can use your in-house employees or an AWS-managed team as reviewers.

12
Q

SageMaker Clarify supports FM evaluation.

A

You can automatically evaluate FMs for your generative AI use case with metrics such as accuracy, robustness, and toxicity to support your responsible AI initiative.

13
Q

Guardrails for Amazon Bedrock gives you the ability to

A

configure thresholds across the different categories to filter out harmful interactions.
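As a rough sketch, the content-filter portion of such a guardrail could be expressed as a boto3 request body. The category names and strength values below follow the Bedrock CreateGuardrail API as I understand it, but treat the exact field names as an assumption and verify them against the current API reference.

```python
# Sketch (assumed API shape): per-category filter strengths for
# Guardrails for Amazon Bedrock. Higher strength filters more aggressively.
content_policy = {
    "filtersConfig": [
        {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        {"type": "INSULTS", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
        {"type": "VIOLENCE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
    ]
}

# You would then pass this to the Bedrock control-plane client, e.g.:
#   bedrock = boto3.client("bedrock")
#   bedrock.create_guardrail(
#       name="my-guardrail",
#       blockedInputMessaging="This input was blocked.",
#       blockedOutputsMessaging="This response was blocked.",
#       contentPolicyConfig=content_policy,
#   )
```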

14
Q

Guardrails for Amazon Bedrock helps you detect PII in

A

user inputs and FM responses. Based on the use case, you can selectively reject inputs containing PII or redact PII in FM responses.

15
Q

SageMaker Clarify helps identify potential

A

bias in machine learning models and datasets without the need for extensive coding. You specify input features, such as gender or age, and SageMaker Clarify runs an analysis job to detect potential bias in those features.
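To make the idea concrete: one of the pretraining bias metrics Clarify reports is the Difference in Proportions of Labels (DPL) between facet groups. The hand-rolled version below is purely illustrative, not Clarify's implementation.

```python
# Illustrative DPL: difference in positive-label proportion between the
# advantaged facet group and everyone else. A value near 0 suggests no
# label imbalance across the facet; large values flag potential bias.
def dpl(labels, facet, advantaged):
    adv = [y for y, f in zip(labels, facet) if f == advantaged]
    dis = [y for y, f in zip(labels, facet) if f != advantaged]
    return sum(adv) / len(adv) - sum(dis) / len(dis)

# Toy dataset: label 1 = favorable outcome, facet = gender
labels = [1, 1, 1, 0, 1, 0, 0, 0]
facet  = ["m", "m", "m", "m", "f", "f", "f", "f"]
print(dpl(labels, facet, advantaged="m"))  # 0.75 - 0.25 = 0.5
```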

16
Q

SageMaker Data Wrangler offers three balancing operators:

A

random undersampling, random oversampling, and Synthetic Minority Oversampling Technique (SMOTE) to rebalance data in your unbalanced datasets.
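Data Wrangler applies these operators for you; the sketch below only illustrates the idea behind two of them, random oversampling (duplicating minority rows) and SMOTE-style synthesis (interpolating between minority neighbors), on a toy 1-D dataset.

```python
import random
random.seed(0)

# Toy imbalanced dataset: (feature, label); class 1 is the minority.
majority = [(x, 0) for x in [1.0, 1.2, 0.9, 1.1, 1.3, 0.8]]
minority = [(x, 1) for x in [5.0, 5.4]]

# Random oversampling: duplicate minority rows until the classes match.
oversampled = minority + [random.choice(minority)
                          for _ in range(len(majority) - len(minority))]

# SMOTE-style synthesis (sketch): create new minority points by
# interpolating between a minority sample and a minority neighbor.
def smote_point(a, b):
    t = random.random()
    return (a[0] + t * (b[0] - a[0]), 1)

synthetic = [smote_point(minority[0], minority[1]) for _ in range(4)]
```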

17
Q

You can use Amazon SageMaker Data Wrangler to

A

balance your data when it is imbalanced.

18
Q

SageMaker Experiments is a capability of

A

SageMaker that you can use to create, manage, analyze, and compare your machine learning experiments.

19
Q

SageMaker Clarify is integrated with Amazon SageMaker Experiments to provide scores detailing which features

A

contributed the most to your model prediction on a particular input for tabular, natural language processing (NLP), and computer vision models.

20
Q

Amazon SageMaker Model Monitor monitors the

A

quality of your models.
You can set up continuous monitoring with a real-time endpoint (or a batch transform job that runs regularly), or on-schedule monitoring for asynchronous batch transform jobs. With SageMaker Model Monitor, you can set alerts that notify you when there are deviations in the model quality. With early and proactive detection of these deviations, you can take corrective actions.

21
Q

Amazon Augmented AI (Amazon A2I) is a service that helps build the workflows required for human review of ML predictions.

A

Amazon A2I brings human review to all developers and removes the undifferentiated heavy lifting associated with building human review systems or managing large numbers of human reviewers.

22
Q

Amazon SageMaker Model Cards:

A

With SageMaker Model Cards, you can capture, retrieve, and share essential model information, such as intended uses, risk ratings, and training details, from conception to deployment.

24
Q

AWS AI Service Cards are a new resource to help you better understand AWS AI services.

A

AI Service Cards are a form of responsible AI documentation that provides a single place to find information on the intended use cases and limitations, responsible AI design choices, and deployment and performance optimization best practices for AWS AI services.

25
Q

When selecting a model for your AI application, you must

A

narrowly define your use case. This is important because you can tune your model for that specific use case.

26
Q

Responsible agency in responsible AI refers to

A

an AI system’s capacity to make good judgments and act in a socially responsible manner.

27
Q

Data augmentation can be used to

A

generate new instances of underrepresented groups. This can help to balance the dataset and prevent biases towards more represented groups.
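For image data, a minimal sketch of this idea is applying label-preserving transforms to existing minority-class samples, so each transform yields a new training example without collecting new data.

```python
import numpy as np

# Stand-in for a minority-class image (a 3x3 pixel array).
img = np.arange(9).reshape(3, 3)

# Label-preserving transforms that generate new instances of the class.
augmented = [
    np.fliplr(img),   # horizontal flip
    np.flipud(img),   # vertical flip
    np.rot90(img),    # 90-degree rotation
]
# Each array in `augmented` is a new sample with the same label,
# growing the underrepresented class.
```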

28
Q

SageMaker Model Dashboard can be used as

A

a central place to keep the team informed on model behavior in production.

29
Q

Transparency helps to understand

A

HOW a model makes decisions.

This helps to provide accountability and builds trust in the AI system. Transparency also makes auditing a system easier.

30
Q

Explainability helps to understand

A

WHY the model made the decision that it made. It gives insight into the limitations of a model.

This helps developers with debugging and troubleshooting the model. It also allows users to make informed decisions on how to use the model.

31
Q

There are several explainability frameworks available that can help summarize and interpret the decisions made by AI systems. Name 3

A

SHapley Additive exPlanations (SHAP), Local Interpretable Model-Agnostic Explanations (LIME), and Counterfactual Explanations.
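The idea behind SHAP can be shown exactly on a tiny model: a feature's Shapley value is its marginal contribution to the prediction, averaged over all feature orderings. The two-feature toy model below is illustrative only; real SHAP implementations approximate this efficiently.

```python
from itertools import permutations

def value(coalition):
    # v(S): model output when only features in S are "present";
    # absent features contribute their baseline of 0.
    # Toy linear model: f(x) = 2*x1 + 3*x2, evaluated at x1=2, x2=1.
    x = {"x1": 2.0, "x2": 1.0}
    weights = {"x1": 2.0, "x2": 3.0}
    return sum(weights[f] * x[f] for f in coalition)

features = ["x1", "x2"]
shapley = {f: 0.0 for f in features}
orders = list(permutations(features))
for order in orders:
    seen = set()
    for f in order:
        # Average each feature's marginal contribution over all orderings.
        shapley[f] += (value(seen | {f}) - value(seen)) / len(orders)
        seen.add(f)

print(shapley)  # {'x1': 4.0, 'x2': 3.0}
```

For this additive model the Shapley values simply recover each feature's own contribution (2*2 = 4 and 3*1 = 3), which is exactly what makes them a useful attribution baseline.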

32
Q

To help with transparency, Amazon offers

A

AWS AI Service Cards and Amazon SageMaker Model Cards. The difference between them is that with AI Service Cards, Amazon provides transparent documentation on Amazon services that help you build your AI services. With SageMaker Model Cards, you can catalog and provide documentation on models that you create or develop yourself.

34
Q

SageMaker Model Cards

A

document critical details about your ML models in a single place for streamlined governance and reporting.

Catalog details include information such as the intended use and risk rating of a model, training details and metrics, evaluation results and observations, and additional callouts such as considerations, recommendations, and custom information.

35
Q

Two AWS tools for explainability

A

SageMaker Clarify
Amazon SageMaker Autopilot (which uses tools provided by SageMaker Clarify)

36
Q

SageMaker Autopilot explanatory functionality determines the contribution of individual features or inputs to the model’s output and provides insights into the relevance of different features. You can use it to understand why a model made a prediction after training or use it to provide per-instance explanation during inference.

A
37
Q

Bias-variance tradeoff is when you optimize your model with the right balance between bias and variance. This means that you need to optimize your model so that it is not underfitted or overfitted. The goal is to achieve a trained model with the lowest bias and lowest variance tradeoff for a given data set.

A
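The tradeoff is easy to see numerically: on noisy quadratic data, a degree-1 polynomial underfits (high bias), a very high degree overfits (high variance), and degree 2 balances the two. The sketch below uses NumPy's polynomial fitting; the specific degrees and noise level are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of y = x^2 (training) and a clean test grid.
x = np.linspace(-1, 1, 20)
y = x**2 + rng.normal(0, 0.1, x.size)
x_test = np.linspace(-1, 1, 50)
y_test_true = x_test**2

def fit_err(degree):
    """Train/test mean squared error of a polynomial fit of this degree."""
    coefs = np.polyfit(x, y, degree)
    train = np.mean((np.polyval(coefs, x) - y) ** 2)
    test = np.mean((np.polyval(coefs, x_test) - y_test_true) ** 2)
    return train, test

# Degree 1 underfits, degree 9 drives training error down but chases
# noise; degree 2 matches the true model.
for d in (1, 2, 9):
    print(d, fit_err(d))
```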
38
Q

Dimension reduction is

A

an unsupervised machine learning algorithm that attempts to reduce the dimensionality (number of features) within a dataset while still retaining as much information as possible.
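A minimal sketch of one such algorithm, principal component analysis (PCA) via the SVD: two correlated features are projected onto the single direction that retains most of the variance. The data and dimensions here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# 2-D data that is almost 1-D: the second feature is ~2x the first.
t = rng.normal(size=(100, 1))
X = np.hstack([t, 2 * t]) + rng.normal(0, 0.05, (100, 2))

Xc = X - X.mean(axis=0)              # center each feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X_reduced = Xc @ Vt[:1].T            # keep 1 of 2 dimensions

# Fraction of total variance captured by the kept component.
explained = S[0] ** 2 / (S ** 2).sum()
print(X_reduced.shape, round(explained, 3))
```

Despite halving the number of features, nearly all of the dataset's variance survives the projection, which is the "retain as much information as possible" goal in practice.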

39
Q

Which service offers the most comprehensive set of human-in-the-loop capabilities for incorporating human feedback across the ML lifecycle to improve model accuracy and relevancy?

A

SageMaker Ground Truth

40
Q

Design for amplified decision-making is

A

a design principle that uses technology to help decision-makers in high-pressure environments make decisions carefully

41
Q

To avoid underfitting and overfitting the data, the model should be trained with low bias and low variance.

A

If either the bias or the variance is high, the model will not perform well.