Responsible AI Practices Flashcards

1
Q

Data Bias

A

If the training data used to train an AI model is biased or underrepresents certain groups, the resulting model may exhibit biases in its predictions or decisions.
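
A quick way to check for this kind of underrepresentation is to look at group counts before training. A minimal sketch in Python with pandas; the file path and the "group" column are placeholders, not part of the source material.

    import pandas as pd

    # Hypothetical training data; "group" stands in for any sensitive attribute.
    df = pd.read_csv("training_data.csv")

    # Share of each group in the dataset; very small shares suggest underrepresentation.
    representation = df["group"].value_counts(normalize=True)
    print(representation)

    # Flag groups below an illustrative 5% threshold.
    underrepresented = representation[representation < 0.05]
    print("Potentially underrepresented groups:", list(underrepresented.index))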

2
Q

Explainability

A

Answers the question WHY. Explainability refers to the ability of an AI model to clearly explain or provide justification for its internal mechanisms and decisions so that they are understandable to humans.

2
Q

Interaction bias

A

Biases can also arise from the way humans interact with AI systems or the context in which the AI is deployed. For example, if an AI system for facial recognition is primarily tested on a certain demographic group, it may perform poorly on other groups.

3
Q

Algorithm bias

A

The algorithms and models used in AI systems can introduce biases, even if the training data is unbiased. This can happen due to the inherent assumptions or simplifications made by the algorithms.

4
Q

Transparency

A

Answers the question HOW. The practice of communicating information about an AI system, including its development processes, system capabilities, and limitations.

5
Q

Veracity and robustness

A

Veracity and robustness in AI refer to the mechanisms that ensure an AI system operates reliably, even in the face of unexpected situations, uncertainty, and errors.

6
Q

Governance

A

The governance dimension refers to the set of processes that are used to define, implement, and enforce responsible AI practices within an organization.

7
Q

Safety

A

Safety in responsible AI refers to the development of algorithms, models, and systems in such a way that they are responsible, safe, and beneficial for individuals and society as a whole.

8
Q

Controllability

A

The controllability dimension in responsible AI refers to a framework for how you might monitor and guide an AI system’s behavior to align with human values and intent.

9
Q

Amazon Bedrock

A

Fully managed service that makes high-performing foundation models (FMs) from leading AI startups and Amazon available for your use through a unified API. You can choose from a wide range of FMs to find the model that is best suited for your use case. Amazon Bedrock also offers a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI.
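
As an illustration of the unified API, here is a minimal boto3 sketch that sends a prompt through the Bedrock Converse API; the region and model ID are example values and assume your account has been granted access to that model.

    import boto3

    # Bedrock runtime client; region and model ID are illustrative assumptions.
    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    response = client.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
        messages=[{"role": "user", "content": [{"text": "Summarize responsible AI in one sentence."}]}],
    )

    # The Converse API returns the generated text under output -> message -> content.
    print(response["output"]["message"]["content"][0]["text"])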

10
Q

Amazon SageMaker Model Monitor

A

monitors the quality of SageMaker machine learning models in production. You can set up continuous monitoring with a real-time endpoint (or a batch transform job that runs regularly), or on-schedule monitoring for asynchronous batch transform jobs.
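
A rough sketch of setting up scheduled data-quality monitoring with the SageMaker Python SDK; the role ARN, S3 paths, and endpoint name are placeholders, and exact arguments can vary by SDK version.

    from sagemaker.model_monitor import CronExpressionGenerator, DefaultModelMonitor
    from sagemaker.model_monitor.dataset_format import DatasetFormat

    # Placeholder role and resources for illustration only.
    monitor = DefaultModelMonitor(
        role="arn:aws:iam::123456789012:role/SageMakerRole",
        instance_count=1,
        instance_type="ml.m5.xlarge",
    )

    # Compute baseline statistics and constraints from the training data.
    monitor.suggest_baseline(
        baseline_dataset="s3://my-bucket/train.csv",
        dataset_format=DatasetFormat.csv(header=True),
        output_s3_uri="s3://my-bucket/baseline",
    )

    # Hourly data-quality checks against traffic captured from a live endpoint.
    monitor.create_monitoring_schedule(
        monitor_schedule_name="my-endpoint-data-quality",
        endpoint_input="my-endpoint",
        output_s3_uri="s3://my-bucket/monitoring",
        statistics=monitor.baseline_statistics(),
        constraints=monitor.suggested_constraints(),
        schedule_cron_expression=CronExpressionGenerator.hourly(),
    )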

11
Q

Amazon Augmented AI (Amazon A2I)

A

a service that helps build the workflows required for human review of ML predictions.

12
Q

Amazon tools that can be used to balance your dataset

A

SageMaker Clarify and SageMaker Data Wrangler

13
Q

What is curating a dataset?

A

Curating datasets is the process of labeling, organizing, and preprocessing the data so that a model trained on it can perform accurately.

14
Q

What is data preprocessing?

A

Preprocess the data to ensure it is accurate, complete, and unbiased. Techniques such as data cleaning, normalization, and feature selection can help to eliminate biases in the dataset.
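
A minimal preprocessing sketch with pandas and scikit-learn showing cleaning and normalization; the file path and column names are placeholders.

    import pandas as pd
    from sklearn.preprocessing import StandardScaler

    # Hypothetical dataset; "age" and "income" are placeholder feature names.
    df = pd.read_csv("applicants.csv")

    # Cleaning: drop duplicates and fill missing numeric values with the column median.
    df = df.drop_duplicates()
    df[["age", "income"]] = df[["age", "income"]].fillna(df[["age", "income"]].median())

    # Normalization: scale numeric features to zero mean and unit variance.
    df[["age", "income"]] = StandardScaler().fit_transform(df[["age", "income"]])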

15
Q

What is data augmentation?

A

Use data augmentation techniques to generate new instances of underrepresented groups. This can help to balance the dataset and prevent biases towards more represented groups.
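
One simple augmentation strategy for tabular data is random oversampling of the underrepresented class; a minimal pandas sketch, with the file path and column names as placeholders (techniques such as SMOTE are another option).

    import pandas as pd

    df = pd.read_csv("training_data.csv")  # placeholder path

    # Assume label == 1 is the underrepresented class.
    majority = df[df["label"] == 0]
    minority = df[df["label"] == 1]

    # Resample the minority class with replacement until the classes are balanced.
    minority_upsampled = minority.sample(n=len(majority), replace=True, random_state=42)
    balanced = pd.concat([majority, minority_upsampled]).sample(frac=1, random_state=42)

    print(balanced["label"].value_counts())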

16
Q

SageMaker Clarify

A

provides purpose-built tools to gain greater insights into ML models and data based on metrics such as accuracy, robustness, toxicity, and bias to improve model quality and support responsible AI initiatives.
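
A rough sketch of a pre-training bias analysis with the SageMaker Python SDK's Clarify processor; the role ARN, S3 paths, label column, and facet are placeholders, and argument names may differ by SDK version.

    from sagemaker import clarify

    # Placeholder role and resources for illustration only.
    processor = clarify.SageMakerClarifyProcessor(
        role="arn:aws:iam::123456789012:role/SageMakerRole",
        instance_count=1,
        instance_type="ml.m5.xlarge",
    )

    data_config = clarify.DataConfig(
        s3_data_input_path="s3://my-bucket/train.csv",
        s3_output_path="s3://my-bucket/clarify-output",
        label="approved",          # placeholder label column
        dataset_type="text/csv",
    )

    # Measure bias with respect to a sensitive attribute (facet) before training.
    bias_config = clarify.BiasConfig(
        label_values_or_threshold=[1],
        facet_name="gender",       # placeholder facet column
    )

    processor.run_pre_training_bias(data_config=data_config, data_bias_config=bias_config)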

17
Q

AI Service Cards

A

form of documentation on responsible AI. They provide teams with a single place to find information on the intended use cases and limitations, responsible AI design choices, and deployment and performance optimization best practices for AWS AI services.

18
Q

Which 2 AWS services or features help with monitoring and human review

A

1) Amazon SageMaker Model Monitor & 2) Amazon Augmented AI (Amazon A2I)

19
Q

AWS services for finding the best model

A

Model evaluation on Amazon Bedrock

20
Q

AWS model for guarding against harmful content

A

Guardrails for Amazon Bedrock
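
A hedged sketch of attaching a guardrail to a Bedrock Converse call with boto3; the guardrail ID, version, and model ID are placeholders for resources you would create in your own account.

    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    response = client.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
        messages=[{"role": "user", "content": [{"text": "Tell me about your loan products."}]}],
        # Placeholder guardrail: filters or masks content according to your configured policies.
        guardrailConfig={"guardrailIdentifier": "example-guardrail-id", "guardrailVersion": "1"},
    )

    print(response["output"]["message"]["content"][0]["text"])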

21
Q

Name some explainability frameworks

A

1) SHapley Additive exPlanations (SHAP)
2) Local Interpretable Model-Agnostic Explanations (LIME)
3) Counterfactual Explanations
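
As a small illustration of the first framework, a SHAP sketch for a tree model using the shap library; the dataset and model are stand-ins.

    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    # Train a stand-in model on a public regression dataset.
    X, y = load_diabetes(return_X_y=True, as_frame=True)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    # SHAP attributes each prediction to per-feature contributions (Shapley values).
    explainer = shap.Explainer(model, X)
    shap_values = explainer(X)

    # Global view of which features drive the model's predictions.
    shap.plots.beeswarm(shap_values)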

22
Q

AWS AI Service Cards

A

AI Service Cards are a form of responsible AI documentation that provides customers with a single place to find information on the intended use cases and limitations, responsible AI design choices, and the deployment and operation best practices for AWS AI services. These are not customizable.

23
Q

Amazon SageMaker Model Cards

A

Use to document critical details about your ML models in a single place for streamlined governance and reporting.

24
Q

Overfitting

A

When a model performs well on the training data but does not perform well on the evaluation data (bias is low and variance is high).
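
A quick way to see overfitting is to compare training and evaluation scores; a large gap signals high variance. A minimal scikit-learn sketch on a toy dataset.

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # An unconstrained tree tends to memorize the training data.
    model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

    train_acc = model.score(X_train, y_train)  # typically close to 1.0
    eval_acc = model.score(X_test, y_test)     # noticeably lower when overfitting
    print(f"train={train_acc:.3f} eval={eval_acc:.3f} gap={train_acc - eval_acc:.3f}")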

25
Q

Cross validation

A

a technique for evaluating ML models by training several ML models on subsets of the available input data and evaluating them on the complementary subset of the data. Cross-validation should be used to detect overfitting.
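
A minimal scikit-learn sketch of 5-fold cross-validation; fold scores well below the training score point to overfitting.

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)

    # Train on 4 folds, evaluate on the held-out fold, repeated 5 times.
    scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5)
    print("Fold scores:", scores.round(3), "mean:", round(scores.mean(), 3))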

26
Q

Regularization

A

a method that penalizes extreme weight values to help prevent linear models from overfitting training data examples.
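
A minimal sketch of L2 regularization with scikit-learn's Ridge; the alpha parameter controls how strongly extreme weights are penalized.

    from sklearn.datasets import load_diabetes
    from sklearn.linear_model import LinearRegression, Ridge

    X, y = load_diabetes(return_X_y=True)

    plain = LinearRegression().fit(X, y)
    regularized = Ridge(alpha=1.0).fit(X, y)  # larger alpha = stronger penalty on weights

    # Regularization shrinks the coefficients toward zero.
    print("unregularized max |weight|:", round(abs(plain.coef_).max(), 2))
    print("ridge max |weight|:        ", round(abs(regularized.coef_).max(), 2))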

27
Q

Dimension reduction

A

an unsupervised machine learning technique that attempts to reduce the dimensionality (number of features) within a dataset while still retaining as much information as possible.
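
A minimal PCA sketch with scikit-learn, projecting a dataset onto a few principal components while keeping most of the variance.

    from sklearn.datasets import load_breast_cancer
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    X, _ = load_breast_cancer(return_X_y=True)

    # PCA is scale-sensitive, so standardize the features first.
    X_scaled = StandardScaler().fit_transform(X)

    # Project the 30 original features onto 5 principal components.
    pca = PCA(n_components=5)
    X_reduced = pca.fit_transform(X_scaled)

    print("Reduced shape:", X_reduced.shape)
    print("Variance retained:", round(pca.explained_variance_ratio_.sum(), 3))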

28
Q

Interpretability

A

the degree to which a human can understand the cause of a decision.

Interpretability is the degree of access into a system that allows a human to interpret the model’s output based on its weights and features.

29
Q

Model safety

A

the ability of an AI system to avoid causing harm in its interactions with the world. This includes avoiding social harm, such as bias in decision-making algorithms, and avoiding privacy and security vulnerability exposures.

30
Q

Model control

A

You can influence the model’s predictions and behavior by changing aspects of the training data. Higher controllability provides more transparency into the model and allows correcting undesired biases and outputs.

Model controllability is measured by how much control you have over the model by changing the input data.

31
Q

HCD (Human Centered Design)

A

an approach to creating products and services that are intuitive, easy to use, and meet the needs of the people who will be using them.

32
Q

3 key principles of human centered design for explainable AI

A

Design for amplified decision-making.

Design for unbiased decision-making.

Design for human and AI learning.

33
Q

Reflexivity

A

An aspect of design for amplified decision-making; it involves designing technology that prompts users to reflect on their decision-making process and encourages them to take responsibility for their choices.

34
Q

3 aspects of unbiased decision making

A

Identify and assess potential biases.

Design decision-making processes and tools that are transparent and fair.

Train decision-makers to recognize and mitigate biases.

35
Q

Key aspects for designing for human and AI learning

A

1) Cognitive apprenticeship (creating learning environments where AI systems learn from human instructors)
2) Personalization (tailor-making learning experiences and tools)
3) User-centered design (designing learning environments and tools that are intuitive and accessible to different types of learners)

36
Q

SageMaker Ground Truth

A

offers the most comprehensive set of human-in-the-loop capabilities for incorporating human feedback across the ML lifecycle to improve model accuracy and relevancy.

37
Q

Which AWS solution should the developer consider to increase the explainability of their system?

A

SageMaker Autopilot, which provides explainable insights into how ML models make predictions.

38
Q

Interpretability

A

A model that provides transparency into a system so that a human can explain the model’s output based on its weights and features is an example of interpretability.

38
Q

Controllability in a model

A

A model whose predictions and behavior you can influence by changing aspects of the training data.