AI Practice Test #1 (OLD) Flashcards

1
Q

Amazon Bedrock

A

https://aws.amazon.com/bedrock/agents/

https://aws.amazon.com/bedrock/faqs/

https://docs.aws.amazon.com/bedrock/latest/userguide/general-guidelines-for-bedrock-users.html

2
Q

Agents for Amazon Bedrock

A

Agents for Amazon Bedrock is a fully managed capability that makes it easier for developers to create generative AI-based applications that can complete complex tasks for a wide range of use cases and deliver up-to-date answers based on proprietary knowledge sources.

Agents are software components or entities designed to autonomously or semi-autonomously perform specific actions or tasks based on predefined rules or algorithms. With Amazon Bedrock, agents manage and execute multi-step tasks related to infrastructure provisioning, application deployment, and operational activities. For example, you can create an agent that helps customers process insurance claims or an agent that helps customers make travel reservations. You don’t have to provision capacity, manage infrastructure, or write custom code. Amazon Bedrock manages prompt engineering, memory, monitoring, encryption, user permissions, and API invocation.

https://docs.aws.amazon.com/bedrock/latest/userguide/agents.html
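As a minimal sketch of how an application hands a task to an agent: the boto3 "bedrock-agent-runtime" client's invoke_agent call takes the parameters below. The agent ID, alias ID, session ID, and utterance are placeholders invented for illustration, not real resources.

```python
import json

def build_invoke_agent_request(agent_id, alias_id, session_id, text):
    """Assemble the keyword arguments for an invoke_agent call.

    In a real application these kwargs would be passed to a boto3
    "bedrock-agent-runtime" client; here we only construct them.
    """
    return {
        "agentId": agent_id,        # the agent created in the Bedrock console
        "agentAliasId": alias_id,   # alias pointing at a prepared agent version
        "sessionId": session_id,    # Bedrock keeps conversation memory per session
        "inputText": text,          # the end-user request the agent should act on
    }

# Placeholder values for illustration only
request = build_invoke_agent_request(
    "AGENT123", "ALIAS456", "session-1",
    "File a claim for policy 98765")
print(json.dumps(request, indent=2))
```

Note that session memory is keyed by sessionId, so reusing the same session ID lets the agent carry context across turns.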

3
Q

Knowledge Bases for Amazon Bedrock

A

With Knowledge Bases for Amazon Bedrock, you can give FMs and agents contextual information from your company’s private data sources for Retrieval Augmented Generation (RAG) to deliver more relevant, accurate, and customized responses.

4
Q

Watermark detection for Amazon Bedrock

A

The watermark detection mechanism allows you to identify images generated by Amazon Titan Image Generator, a foundation model that allows users to create realistic, studio-quality images in large volumes and at low cost, using natural language prompts. With watermark detection, you can increase transparency around AI-generated content, mitigate harmful content generation, and reduce the spread of misinformation.

5
Q

Guardrails for Amazon Bedrock

A

Guardrails for Amazon Bedrock help you implement safeguards for your generative AI applications based on your use cases and responsible AI policies. Guardrails help control the interaction between users and FMs by filtering undesirable and harmful content, redacting personally identifiable information (PII), and enhancing content safety and privacy in generative AI applications.

6
Q

Generative AI

A

Generative AI can automate the creation of new data based on existing patterns, enhancing productivity and innovation

Generative AI in the AWS cloud environment is advantageous because it automates the creation of new data from existing patterns, which can significantly boost productivity and drive innovation. This capability allows businesses to generate new insights, designs, and solutions more efficiently.

via - https://aws.amazon.com/what-is/generative-ai/

Incorrect options:

Generative AI can replace all human roles in software development - Generative AI is not designed to replace all human roles in software development but to assist and enhance human capabilities by automating certain tasks and creating new data based on patterns. So, this option is incorrect.

Generative AI ensures 100% security against all cyber threats - While generative AI can improve security by identifying patterns and anomalies, it does not guarantee 100% security against all cyber threats. Security in the cloud involves a combination of multiple strategies and tools. Therefore, this option is incorrect.

Generative AI can perform all cloud maintenance tasks without any human intervention - Generative AI can assist in cloud maintenance tasks by predicting issues and suggesting solutions, but it cannot perform all maintenance tasks without human oversight and intervention. So, this option is not the right fit.

References:

https://aws.amazon.com/what-is/generative-ai/

https://aws.amazon.com/ai/generative-ai/services/

7
Q

Prompt Engineering

A

https://aws.amazon.com/what-is/prompt-engineering/

8
Q

Negative Prompting

A

Negative prompting refers to guiding a generative AI model to avoid certain outputs or behaviors when generating content. In the context of AWS generative AI, like those using Amazon Bedrock, negative prompting is used to refine and control the output of models by specifying what should not be included in the generated content.
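As an illustration, the request body below uses the negativeText field from the documented Amazon Titan Image Generator schema to steer the model away from unwanted elements. The prompt text and generation settings are invented for this example.

```python
import json

# Negative prompting: "negativeText" lists what the generated image
# should NOT contain, alongside the positive prompt in "text".
body = {
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {
        "text": "a studio photo of a wooden chair",
        "negativeText": "people, text, watermark, blurry",  # things to avoid
    },
    "imageGenerationConfig": {
        "numberOfImages": 1,
        "height": 512,
        "width": 512,
    },
}

# Serialized payload as it would be sent in an InvokeModel request body
payload = json.dumps(body)
```

The same idea applies to text models: the negative instruction narrows the output space rather than describing what to produce.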

9
Q

Few-shot Prompting

A

In few-shot prompting, you provide a few examples of a task to the model to guide its output.
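A quick sketch of the idea: labeled examples are prepended to the new input so the model can infer the task format. The reviews and labels below are invented.

```python
# Few-shot prompting: show the model a few solved examples, then the
# unsolved case in the same format.
examples = [
    ("The movie was a delight", "positive"),
    ("I want my money back", "negative"),
]
query = "The plot dragged on forever"

lines = []
for text, label in examples:
    lines.append(f"Review: {text}\nSentiment: {label}")
# The final entry leaves "Sentiment:" blank for the model to complete.
lines.append(f"Review: {query}\nSentiment:")
prompt = "\n\n".join(lines)
print(prompt)
```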

10
Q

Chain-of-thought prompting

A

Chain-of-thought prompting is a technique that breaks down a complex question into smaller, logical parts that mimic a train of thought. This helps the model solve problems in a series of intermediate steps rather than directly answering the question. This enhances its reasoning ability. It involves guiding the model through a step-by-step process to arrive at a solution or generate content, thereby enhancing the quality and coherence of the output.
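The technique can be sketched as a prompt that includes one worked example with visible intermediate steps, plus a cue to reason stepwise. The arithmetic example is invented.

```python
# Chain-of-thought prompting: the worked example shows the reasoning
# steps, and the trailing cue asks the model to do the same.
cot_prompt = (
    "Q: A cinema sold 120 tickets at $8 each. What was the revenue?\n"
    "A: Each ticket is $8 and 120 tickets were sold, "
    "so revenue = 120 * 8 = $960.\n\n"
    "Q: A shop sold 45 mugs at $3 each. What was the revenue?\n"
    "A: Let's think step by step."
)
print(cot_prompt)
```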

11
Q

Zero-shot Prompting

A

Zero-shot prompting is a technique used in generative AI where the model is asked to perform a task or generate content without having seen any examples of that specific task during training. Instead, the model relies on its general understanding and knowledge to respond.
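For contrast with few-shot prompting, a zero-shot prompt states only the task, with no worked examples. The review text is invented.

```python
# Zero-shot prompting: task description plus the input, no examples.
# The model relies entirely on its pretrained knowledge of the task.
zero_shot_prompt = (
    "Classify the sentiment of the following review as positive or negative.\n"
    "Review: The seats were uncomfortable and the screen flickered.\n"
    "Sentiment:"
)
print(zero_shot_prompt)
```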

12
Q

GPT

A

Generative Pre-trained Transformer

The company should use GPT (Generative Pre-trained Transformer) to interpret natural language inputs and generate coherent outputs, such as SQL queries, by leveraging its understanding of language patterns and structures

This is the correct option because GPT models are specifically designed to process and generate human-like text based on context and input data. GPT can be fine-tuned to understand specific domain language and generate accurate SQL queries from plain text input. It uses advanced natural language processing (NLP) techniques to parse input text, understand user intent, and generate the appropriate SQL statements, making it highly suitable for the task.

https://aws.amazon.com/what-is/gpt/

13
Q

GAN

A

Generative Adversarial Network

A generative adversarial network (GAN) is a deep learning architecture. It trains two neural networks to compete against each other to generate more authentic new data from a given training dataset. For instance, you can generate new images from an existing image database or original music from a database of songs.

A GAN is called adversarial because it trains two different networks and pits them against each other. One network generates new data by taking an input data sample and modifying it as much as possible. The other network tries to predict whether the generated data output belongs in the original dataset. In other words, the predicting network determines whether the generated data is fake or real. The system generates newer, improved versions of fake data values until the predicting network can no longer distinguish fake from original.

via - https://aws.amazon.com/what-is/gan/

14
Q

Amazon Comprehend

A

Amazon Comprehend is built for analyzing and extracting insights from text, such as identifying sentiment, entities, and key phrases. It does not have the capability to generate SQL queries from natural language input. Therefore, it does not meet the company’s need for text-to-SQL conversion.

15
Q

ResNet

A

Residual Neural Network

ResNet is a deep neural network architecture used mainly in computer vision tasks, such as image classification and object detection. It is not capable of handling natural language input or generating text-based outputs like SQL queries, making it irrelevant to the company’s needs.

16
Q

WaveNet

A

WaveNet is a deep generative model created by DeepMind to synthesize audio data, particularly for generating realistic-sounding speech. It is not built to handle text input or produce SQL queries, making it completely unsuitable for this task.

17
Q

Amazon SageMaker Data Wrangler - Use Case

A

Fix bias by balancing the dataset

When the number of samples in the majority class (bigger) is considerably larger than the number of samples in the minority (smaller) class, the dataset is considered imbalanced. This skew is challenging for ML algorithms and classifiers because the training process tends to be biased towards the majority class. Data Wrangler supports several balancing operators as part of the Balance data transform.

Incorrect options:

Monitor the quality of a model - This option is incorrect because monitoring model quality is a feature of SageMaker Model Monitor, not SageMaker Data Wrangler. SageMaker Model Monitor is designed to track model quality as well as performance in production.

Build ML models with no code - SageMaker Data Wrangler is not designed for building machine learning models without coding. SageMaker Canvas, another tool in the SageMaker suite, specifically targets no-code model building, allowing users to create and deploy models using a visual interface.

Store and share the features used for model development - SageMaker Feature Store is specifically designed to store and share machine learning features. It allows data scientists and engineers to create a centralized, consistent, and standardized set of features that can be easily accessed and reused across different teams and projects, making it the ideal choice for sharing features during model development. SageMaker Data Wrangler is not designed for this use case.

Reference:

https://aws.amazon.com/blogs/machine-learning/balance-your-data-for-machine-learning-with-amazon-sagemaker-data-wrangler/
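The balancing idea can be sketched in plain Python: random oversampling duplicates minority-class rows until every class matches the majority count. This is a toy stand-in for Data Wrangler's Balance data transform, which also offers undersampling and SMOTE; the labels and data are invented.

```python
import random

def oversample(rows, label_key="label", seed=0):
    """Duplicate minority-class rows until all classes reach the majority size."""
    rng = random.Random(seed)
    by_class = {}
    for row in rows:
        by_class.setdefault(row[label_key], []).append(row)
    target = max(len(v) for v in by_class.values())  # majority class size
    balanced = []
    for cls_rows in by_class.values():
        balanced.extend(cls_rows)
        # Sample (with replacement) enough extra rows to reach the target
        balanced.extend(rng.choices(cls_rows, k=target - len(cls_rows)))
    return balanced

# Imbalanced toy dataset: 8 "ok" rows vs 2 "fraud" rows
data = [{"label": "ok"}] * 8 + [{"label": "fraud"}] * 2
balanced = oversample(data)
```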

18
Q

Amazon SageMaker Data Wrangler

A

You can split a machine learning (ML) dataset into train, test, and validation datasets with Amazon SageMaker Data Wrangler.

Data used for ML is typically split into the following datasets:

Training – Used to train an algorithm or ML model. The model iteratively uses the data and learns to provide the desired result.

Validation – Introduces new data to the trained model. You can use a validation set to periodically measure model performance as it trains and also tune any hyperparameters of the model. However, validation datasets are optional.

Test – Used on the final trained model to assess its performance on unseen data. This helps determine how well the model generalizes.

Data Wrangler is a capability of Amazon SageMaker that helps data scientists and data engineers quickly and easily prepare data for ML applications using a visual interface. It contains over 300 built-in data transformations so you can quickly normalize, transform, and combine features without writing code.

References:

https://aws.amazon.com/blogs/machine-learning/create-train-test-and-validation-splits-on-your-data-for-machine-learning-with-amazon-sagemaker-data-wrangler/
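A toy version of the split Data Wrangler performs visually: shuffle once, then cut at fixed fractions. The 70/15/15 ratios are illustrative, not prescribed.

```python
import random

def split_dataset(rows, seed=42):
    """Shuffle and split into train (70%), validation (15%), test (15%)."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)  # fixed seed for reproducibility
    n = len(rows)
    n_train = int(n * 0.70)
    n_val = int(n * 0.15)
    train = rows[:n_train]
    validation = rows[n_train:n_train + n_val]
    test = rows[n_train + n_val:]      # remainder goes to the test set
    return train, validation, test

train, validation, test = split_dataset(range(100))
```

Putting the remainder in the test split guarantees no row is dropped when the fractions don't divide evenly.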

19
Q

Amazon SageMaker Clarify

A

SageMaker Clarify is used to evaluate models and explain the model predictions.

20
Q

Amazon SageMaker Feature Store

A

Amazon SageMaker Feature Store is a fully managed, purpose-built repository to store, share, and manage features for machine learning (ML) models.

https://aws.amazon.com/sagemaker/feature-store/

21
Q

Amazon SageMaker Ground Truth

A

Amazon SageMaker Ground Truth is a data labeling service provided by AWS that enables users to build highly accurate training datasets for machine learning quickly. The service helps automate the data labeling process through a combination of human labeling and machine learning.

https://aws.amazon.com/sagemaker/groundtruth/

22
Q

Context window

A

The context window defines how much text (measured in tokens) the AI model can process at one time to generate a coherent output. It determines the limit of input data that the model can use to understand context, maintain conversation history, or generate relevant responses. The context window is measured in tokens (units of text), not characters, making it the key concept for understanding data processing limits in AI models.

via - https://aws.amazon.com/blogs/security/context-window-overflow-breaking-the-barrier/
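The token-vs-character distinction can be illustrated with a crude count. Real tokenizers split text into subwords, so a whitespace split is only a rough stand-in for how a model would count tokens.

```python
# Context limits are counted in tokens, not characters.
# Whitespace splitting is a rough approximation of tokenization.
text = "Context windows are measured in tokens, not characters."
char_count = len(text)
rough_token_count = len(text.split())
print(char_count, rough_token_count)
```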

23
Q

Character count

A

Character count measures the number of characters in a piece of text, but AI models typically do not limit their input based on characters alone. Instead, they rely on tokens, which can represent words, subwords, or punctuation marks. The concept that defines how much text can be processed at one time is the context window, which is measured in tokens, not character count.

24
Q

Tokens

A

While tokens are the individual units of text that the model processes, the concept that describes the total amount of text the model can handle at one time is the context window, not tokens themselves. Tokens are components within the context window, and the model’s capacity is defined by how many tokens can fit within this window, rather than just the tokens themselves.

25
Q

Embeddings

A

Embeddings are vector representations that encode the semantic meaning of words or phrases, enabling the AI model to understand relationships and context in text data. However, embeddings do not define the amount of text or the number of characters considered at one time; they are a representation technique used within the model once the text is processed.
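A sketch of how embeddings capture semantic relationships: cosine similarity scores how closely two vectors point in the same direction. The 3-dimensional vectors below are invented toys; real embedding models produce hundreds or thousands of dimensions.

```python
import math

def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction, 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy vectors: "king" and "queen" are placed near each other,
# "banana" far away, to mimic semantic closeness.
king   = [0.90, 0.80, 0.10]
queen  = [0.85, 0.82, 0.12]
banana = [0.10, 0.20, 0.95]

print(cosine(king, queen), cosine(king, banana))
```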

26
Q

Amazon SageMaker Model Dashboard

A

Amazon SageMaker Model Dashboard is a centralized repository of all models created in your account. The models are generally the outputs of SageMaker training jobs, but you can also import models trained elsewhere and host them on SageMaker. Model Dashboard provides a single interface for IT administrators, model risk managers, and business leaders to track all deployed models and aggregate data from multiple AWS services to provide indicators about how your models are performing.

Model risk managers, ML practitioners, data scientists, and business leaders can get a comprehensive overview of models using the Model Dashboard. The dashboard aggregates and displays data from Amazon SageMaker Model Cards, Endpoints, and Model Monitor services to display valuable information such as model metadata from the model card and model registry, endpoints where the models are deployed, and insights from model monitoring.

https://docs.aws.amazon.com/sagemaker/latest/dg/model-dashboard-faqs.html

27
Q

Amazon SageMaker JumpStart

A

Amazon SageMaker JumpStart is a machine learning (ML) hub that can help you accelerate your ML journey. With SageMaker JumpStart, you can evaluate, compare, and select Foundation Models (FMs) quickly based on pre-defined quality and responsibility metrics to perform tasks like article summarization and image generation. Pretrained models are fully customizable for your use case with your data, and you can easily deploy them into production with the user interface or SDK.

28
Q

Amazon SageMaker Feature Store

A

Amazon SageMaker Feature Store is a fully managed, purpose-built repository to store, share, and manage features for machine learning (ML) models. Features are inputs to ML models used during training and inference. For example, in an application that recommends a music playlist, features could include song ratings, listening duration, and listener demographics.

29
Q

Amazon SageMaker Data Wrangler

A

Amazon SageMaker Data Wrangler reduces the time it takes to aggregate and prepare tabular and image data for ML from weeks to minutes. With SageMaker Data Wrangler, you can simplify the process of data preparation and feature engineering, and complete each step of the data preparation workflow (including data selection, cleansing, exploration, visualization, and processing at scale) from a single visual interface.

SageMaker Data Wrangler is a tool designed for data preparation and feature engineering in the machine learning pipeline. It allows users to clean, transform, and process data but does not offer features for creating interactive visualizations or dashboards. Therefore, it is not suitable for the company’s need to visualize sales data for business intelligence purposes.

https://docs.aws.amazon.com/sagemaker/latest/dg/data-wrangler-analyses.html

30
Q

Amazon QuickSight

A

Amazon QuickSight, a business intelligence (BI) service that allows users to easily create and share interactive dashboards and visualizations from various data sources, including up-to-date sales data, enabling real-time insights and reporting

This is the correct option because Amazon QuickSight is specifically designed for creating interactive visualizations and dashboards for a wide range of data sources, including sales data. It provides an easy-to-use interface for business intelligence tasks, enabling the company to quickly generate insights and monitor trends. QuickSight also supports real-time data analysis, making it ideal for up-to-date reporting on sales performance over the last 12 months.

https://docs.aws.amazon.com/quicksight/latest/user/working-with-visual-types.html

31
Q

CloudWatch Dashboard

A

CloudWatch Dashboards are primarily used for monitoring AWS infrastructure and services, such as server metrics, application logs, and performance monitoring. They are not designed for creating visualizations or dashboards for sales data or other business metrics, and therefore do not meet the company’s requirement for business intelligence and reporting.

32
Q

SageMaker Canvas

A

SageMaker Canvas is focused on enabling users to build and deploy machine learning models without coding. It is not a tool for data visualization or creating business dashboards. While it can help with data analysis through machine learning, it does not provide the capabilities required for creating interactive visualizations or dashboards for sales data.

33
Q

Decision Trees

A

Decision Trees are highly interpretable models that provide a clear and straightforward visualization of the decision-making process. They work by splitting the data into subsets based on the most significant features, resulting in a tree-like structure where each branch represents a decision rule. This makes it easy to understand how different characteristics of movies contribute to the final classification. Their high interpretability and transparency align with the company’s need to document the inner mechanisms of how the model affects the output, making Decision Trees the most suitable choice for this task.

via - https://docs.aws.amazon.com/whitepapers/latest/model-explainability-aws-ai-ml/interpretability-versus-explainability.html

34
Q

Logistic Regression

A

Logistic Regression is primarily designed for binary classification problems. While it can be adapted for multiclass classification, it may not perform effectively with a large number of categories or a complex dataset like a massive movie database. Additionally, logistic regression does not provide an easily interpretable structure that illustrates how each feature influences the final output, making it less suitable for the company’s requirements.

35
Q

Neural Networks

A

Although neural networks are powerful tools for handling large and complex datasets, they are often considered “black-box” models due to their lack of transparency. Neural networks involve multiple layers of neurons and nonlinear transformations, making it difficult to understand and document the inner workings of the model. Given the company’s need for transparency and an understanding of how the model affects the output, neural networks are not the best choice.

36
Q

Support Vector Machines (SVMs)

A

While SVMs are effective for classification tasks, especially in high-dimensional spaces, they do not inherently provide an interpretable way to understand the decision-making process. SVMs create a hyperplane to separate classes, but it is not straightforward to explain how individual features impact the final classification. This lack of interpretability makes SVMs less suitable for a company that wants to document and understand the inner workings of the model.

37
Q

Amazon OpenSearch Service

A

Amazon OpenSearch Service, which is designed to provide fast search capabilities and supports full-text search, indexing, and similarity scoring

Amazon OpenSearch Service is the most suitable choice because it is specifically built to handle search and analytics workloads, including fast index lookups and similarity scoring. OpenSearch supports full-text search, vector search, and advanced data indexing, which are essential for the Retrieval-Augmented Generation (RAG) framework. It enables the chatbot or model to quickly find and rank relevant documents based on their similarity to the query, making it highly effective for applications that require rapid data retrieval and relevance ranking.

via - https://aws.amazon.com/blogs/big-data/amazon-opensearch-services-vector-database-capabilities-explained/
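The retrieval step of RAG can be sketched as "score every document against the query, keep the top matches." Word-overlap scoring below is a toy stand-in for the vector similarity search OpenSearch performs; the documents and query are invented.

```python
def retrieve(query, documents, k=2):
    """Return the k documents sharing the most words with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,  # highest overlap first
    )
    return scored[:k]

docs = [
    "Refund requests must be filed within 30 days",
    "Our offices are closed on public holidays",
    "To request a refund contact the billing team",
]
# The retrieved documents would be fed to the FM as grounding context.
top = retrieve("how do I get a refund", docs)
print(top)
```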

38
Q

Knowledge Bases for Amazon Bedrock

A

Knowledge Bases for Amazon Bedrock takes care of the entire ingestion workflow of converting your documents into embeddings (vector) and storing the embeddings in a specialized vector database. Knowledge Bases for Amazon Bedrock supports popular databases for vector storage, including vector engine for Amazon OpenSearch Serverless, Pinecone, Redis Enterprise Cloud, Amazon Aurora (coming soon), and MongoDB (coming soon). If you do not have an existing vector database, Amazon Bedrock creates an OpenSearch Serverless vector store for you.

https://aws.amazon.com/bedrock/knowledge-bases/

39
Q

Amazon DocumentDB

A

Amazon DocumentDB (with MongoDB compatibility), a managed NoSQL document database service designed for storing semi-structured data to facilitate search capabilities

Amazon DocumentDB is primarily designed for storing and querying semi-structured JSON data. While it provides scalability and managed support for document-based workloads, it is not optimized for full-text search or similarity searches. DocumentDB lacks the native capabilities for efficient indexing and retrieval needed for RAG, making it a less suitable choice.

40
Q

Amazon DynamoDB

A

Amazon DynamoDB, a fully managed NoSQL database service that offers low-latency data retrieval to handle fast index lookups as well as search operations

Amazon DynamoDB is a key-value and document database designed for fast and predictable performance with low latency, suitable for high-throughput transactional workloads. However, it does not natively support advanced search capabilities or similarity scoring needed for RAG applications. Its primary focus is on rapid data retrieval based on primary keys, not on the complex search and retrieval functions required for this scenario.

41
Q

Amazon Aurora

A

Amazon Aurora, a managed relational database service that is optimized for high-performance transactional workloads that can be useful for search operations

Amazon Aurora is a high-performance relational database service that is excellent for OLTP (Online Transaction Processing) workloads. While it provides advanced indexing features for relational data, it is not optimized for full-text search, fast similarity lookups, or the types of search capabilities required for RAG applications. Aurora’s primary strengths lie in transactional integrity and scalability for relational datasets, not in search and retrieval tasks.

42
Q

AWS Trainium instances

A

AWS Trainium instances are designed with energy efficiency in mind, providing optimal performance per watt for machine learning workloads. Trainium, AWS’s custom-designed machine learning chip, is specifically engineered to offer the best performance at the lowest power consumption, reducing the carbon footprint of training large-scale models. This makes Trainium instances the most environmentally friendly choice among the options listed. Trn1 instances powered by Trainium are up to 25% more energy efficient for DL training than comparable accelerated computing EC2 instances.

via - https://aws.amazon.com/machine-learning/trainium/

43
Q

Accelerated Computing P type instances

A

Accelerated Computing P type instances, powered by high-end GPUs like NVIDIA Tesla, are optimized for maximum computational throughput, particularly for machine learning and HPC tasks. However, they consume significant amounts of power and are not specifically designed with energy efficiency in mind, making them less suitable for an environmentally conscious choice.

https://aws.amazon.com/ec2/instance-types/

44
Q

Accelerated Computing G type instances

A

Accelerated Computing G type instances, such as those powered by NVIDIA GPUs, are designed for graphics-heavy applications like gaming, rendering, or video processing. While they offer high computational power for specific tasks, they are not specifically optimized for energy efficiency or low environmental impact, making them less suitable for a company focused on minimizing its carbon footprint.

https://aws.amazon.com/ec2/instance-types/

45
Q

Compute Optimized C type instances

A

Compute Optimized C type instances are designed to maximize compute performance for applications such as web servers, gaming, and scientific modeling. While they provide excellent compute power, they are not optimized for energy efficiency in the same way as AWS Trainium instances, making them less ideal for reducing environmental impact.

https://aws.amazon.com/ec2/instance-types/

46
Q

Amazon Polly

A

Amazon Polly is used to deploy high-quality, natural-sounding human voices in dozens of languages

Amazon Polly is a cloud service that converts text into lifelike speech. You can use Amazon Polly to develop applications that increase engagement and accessibility. Amazon Polly supports multiple languages and includes a variety of lifelike voices.
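As a sketch, these are the keyword arguments a boto3 Polly synthesize_speech call takes; only the request dict is built here. "Joanna" is one of Polly's documented voices; the text is invented.

```python
# Keyword arguments for a Polly synthesize_speech call (constructed only,
# not sent — sending requires a boto3 "polly" client and AWS credentials).
tts_request = {
    "Text": "Your order has shipped.",
    "OutputFormat": "mp3",   # also supports ogg_vorbis and pcm
    "VoiceId": "Joanna",
    "Engine": "neural",      # neural voices sound more natural than standard
}
print(tts_request)
```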

47
Q

Amazon Comprehend

A

Amazon Comprehend service uses machine learning to find insights and relationships in the text

Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find insights and relationships in text; no machine learning experience is required. Amazon Comprehend helps you uncover the insights and relationships in your unstructured data.

48
Q

Amazon Transcribe

A

Amazon Transcribe uses machine learning models to convert speech to text.

49
Q

Amazon Lex

A

Amazon Lex is the AWS service used to build conversational interfaces for applications using voice and text.

50
Q

Amazon Rekognition

A

Amazon Rekognition is a cloud-based image and video analysis service that makes it easy to add advanced computer vision capabilities to your applications.

51
Q

Batch inference

A

The company should use batch inference, thereby allowing it to run multiple inference requests in a single batch

You can use batch inference to run multiple inference requests asynchronously, and improve the performance of model inference on large datasets. Amazon Bedrock offers select foundation models (FMs) from leading AI providers like Anthropic, Meta, Mistral AI, and Amazon for batch inference at 50% of on-demand inference pricing.

Batch inference is the most cost-effective choice when reducing inference costs on Amazon Bedrock. By processing large numbers of data points in a single batch, the company can lower the cost per inference as the model handles multiple requests simultaneously. This approach is ideal when there is no need for immediate responses, allowing for more efficient use of resources and minimizing computational expenses.

https://docs.aws.amazon.com/bedrock/latest/userguide/inference.html

https://aws.amazon.com/about-aws/whats-new/2024/08/amazon-bedrock-fms-batch-inference-50-price/

https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-deployment.html

The company should use batch inference, which processes multiple data points at once in large batches, suitable for processing large datasets in a single operation when immediate real-time responses are not required

Batch inference is the most suitable choice for processing a large payload of several gigabytes with Amazon SageMaker when there is no need for immediate responses. This method allows the company to run predictions on large volumes of data in a single batch job, which is more cost-effective and efficient than processing individual requests in real-time. Batch inference can handle large datasets and is ideal for scenarios where waiting for the responses is acceptable, making it the best fit for this use case.

SageMaker Batch Transform will automatically split your input file of several gigabytes (GBs) into whatever payload size is specified if you use “SplitType”: “Line” and “BatchStrategy”: “MultiRecord”.

https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-deployment.html

https://repost.aws/questions/QUlefH1ni4QOaulUT4870D5g/sagemaker-batch-transform
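The SplitType/BatchStrategy pairing above can be sketched as the relevant fields of a CreateTransformJob request. These are real API fields, but the job, model, and bucket names here are hypothetical:

```json
{
  "TransformJobName": "catalog-batch-job",
  "ModelName": "my-trained-model",
  "BatchStrategy": "MultiRecord",
  "MaxPayloadInMB": 6,
  "TransformInput": {
    "DataSource": {
      "S3DataSource": {
        "S3DataType": "S3Prefix",
        "S3Uri": "s3://my-bucket/input/records.jsonl"
      }
    },
    "ContentType": "application/jsonlines",
    "SplitType": "Line"
  },
  "TransformOutput": {
    "S3OutputPath": "s3://my-bucket/output/"
  },
  "TransformResources": {
    "InstanceType": "ml.m5.xlarge",
    "InstanceCount": 1
  }
}
```

With "SplitType": "Line" and "BatchStrategy": "MultiRecord", SageMaker splits the multi-gigabyte input on newlines and packs as many records as fit under MaxPayloadInMB into each request to the model.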

52
Q

Real-time inference

A

https://docs.aws.amazon.com/bedrock/latest/userguide/inference.html

Real-time inference is optimized for scenarios where low latency is crucial, and responses are needed immediately. It is not suitable for processing large payloads of several gigabytes.

53
Q

Serverless inference

A

https://docs.aws.amazon.com/bedrock/latest/userguide/inference.html

Serverless inference provides automatic scaling and is ideal for unpredictable traffic patterns or sporadic workloads, but it is not specifically designed for handling large, continuous payloads efficiently.

54
Q

On-demand inference

A

On-demand inference offers flexibility by charging only for the resources used during each inference, making it suitable for unpredictable or variable usage patterns. However, it is generally more costly when used frequently or over long periods because it does not benefit from cost savings associated with bulk processing. For a company looking to reduce costs, on-demand inference may not be the most economical option.

55
Q

Asynchronous inference

A

Asynchronous inference is designed to handle longer-running tasks and large payloads by processing them in the background. This option is ideal for requests with large payload sizes (up to 1 GB), long processing times (up to one hour), and near real-time latency requirements. Asynchronous inference also enables you to save on costs by autoscaling the instance count to zero when there are no requests to process, so you only pay when your endpoint is processing requests.

https://docs.aws.amazon.com/sagemaker/latest/dg/async-inference.html

56
Q

Top P

A

Influences the percentage of most-likely candidates that the model considers for the next token

Top P represents the percentage of most likely candidates that the model considers for the next token. Choose a lower value to decrease the size of the pool and limit the options to more likely outputs. Choose a higher value to increase the size of the pool and allow the model to consider less likely outputs.

via - https://docs.aws.amazon.com/bedrock/latest/userguide/inference-parameters.html

The percentage of most-likely candidates that the model considers for the next token.

Choose a lower value to decrease the size of the pool and limit the options to more likely outputs.

Choose a higher value to increase the size of the pool and allow the model to consider less likely outputs.

In technical terms, the model computes the cumulative probability distribution for the set of responses and considers only the top P% of the distribution.

For example, if you choose a value of 0.8 for Top P, the model selects from the top 80% of the probability distribution of tokens that could be next in the sequence.
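The cumulative-probability pool described above can be sketched in a few lines of Python (toy token probabilities, not real model outputs):

```python
# Nucleus (Top P) sampling pool: keep the smallest set of most-likely tokens
# whose cumulative probability reaches the Top P threshold.
def top_p_pool(probs, p):
    """probs: dict of token -> probability. Returns the tokens kept for sampling."""
    pool, cumulative = [], 0.0
    for token, prob in sorted(probs.items(), key=lambda kv: -kv[1]):
        pool.append(token)
        cumulative += prob
        if cumulative >= p:
            break
    return pool

probs = {"cat": 0.5, "dog": 0.25, "fish": 0.15, "rock": 0.10}
print(top_p_pool(probs, 0.7))   # ['cat', 'dog'] -- 0.5 + 0.25 reaches 0.7
print(top_p_pool(probs, 0.99))  # all four tokens once the pool widens
```

A lower p shrinks the pool to the most likely tokens; a higher p lets less likely tokens back in.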

57
Q

Stop sequences

A

The inference parameter Stop sequences specifies the sequences of characters that stop the model from generating further tokens. If the model generates a stop sequence that you specify, it will stop generating after that sequence.
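A minimal sketch of this behavior in plain Python string handling (not the Bedrock API itself):

```python
# Emulate a Stop sequences parameter: truncate generated text at the earliest
# occurrence of any configured stop sequence.
def apply_stop_sequences(text, stop_sequences):
    """Cut text at the first stop sequence found, if any."""
    cut = len(text)
    for seq in stop_sequences:
        idx = text.find(seq)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

generated = "The answer is 42.\nHuman: next question"
print(apply_stop_sequences(generated, ["\nHuman:", "###"]))
# The answer is 42.
```

Here "\nHuman:" acts as the stop sequence, so the model's output is trimmed before it starts imitating the next user turn.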

58
Q

Top K

A

The inference parameter Top K represents the number of most likely candidates that the model considers for the next token.

The number of most-likely candidates that the model considers for the next token.

Choose a lower value to decrease the size of the pool and limit the options to more likely outputs.

Choose a higher value to increase the size of the pool and allow the model to consider less likely outputs.

For example, if you choose a value of 50 for Top K, the model selects from 50 of the most probable tokens that could be next in the sequence.
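Top K is simpler than Top P: keep a fixed count of the most probable tokens rather than a probability mass. A toy sketch:

```python
# Top K sampling pool: keep the k most probable candidate tokens.
def top_k_pool(probs, k):
    """probs: dict of token -> probability. Returns the k most likely tokens."""
    ranked = sorted(probs.items(), key=lambda kv: -kv[1])
    return [token for token, _ in ranked[:k]]

probs = {"cat": 0.5, "dog": 0.25, "fish": 0.15, "rock": 0.10}
print(top_k_pool(probs, 2))  # ['cat', 'dog']
```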

59
Q

Temperature

A

The inference parameter Temperature is a value between 0 and 1, and it regulates the creativity of the model’s responses.

Affects the shape of the probability distribution for the predicted output and influences the likelihood of the model selecting lower-probability outputs.

Choose a lower value to influence the model to select higher-probability outputs.

Choose a higher value to influence the model to select lower-probability outputs.

In technical terms, the temperature modulates the probability mass function for the next token. A lower temperature steepens the function and leads to more deterministic responses, and a higher temperature flattens the function and leads to more random responses.
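The steepening/flattening effect can be illustrated with a temperature-scaled softmax over toy logits:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Lower temperature steepens the distribution; higher temperature flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cool = softmax_with_temperature(logits, 0.2)  # near-deterministic
warm = softmax_with_temperature(logits, 1.0)  # more spread out
print(cool[0] > warm[0])  # True: low temperature concentrates mass on the top token
```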

60
Q

Linear regression

A

Linear regression refers to supervised learning models that, based on one or more inputs, predict a value from a continuous scale. An example of linear regression is predicting a house price. You could predict a house’s price based on its location, age, and number of rooms after you train a model on a set of historical sales training data with those variables.
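A minimal least-squares fit over hypothetical house data shows the idea (one input feature for brevity; real models would use location, age, rooms, and more):

```python
# Ordinary least squares with one feature: size in square meters -> price.
# Hypothetical data chosen so that price is exactly 3 * size (in $1000s).
sizes = [50, 80, 100, 120]
prices = [150, 240, 300, 360]

n = len(sizes)
mean_x = sum(sizes) / n
mean_y = sum(prices) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, prices)) / \
        sum((x - mean_x) ** 2 for x in sizes)
intercept = mean_y - slope * mean_x

predicted = slope * 90 + intercept  # predict the price of a 90 m^2 house
print(round(predicted))  # 270
```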

61
Q

Neural network

A

A neural network solution is a more complex supervised learning technique. To produce a given outcome, it takes some given inputs and performs one or more layers of mathematical transformation based on adjusting data weightings. An example of a neural network technique is predicting a digit from a handwritten image.

62
Q

Document classification

A

Document classification is an example of semi-supervised learning. Semi-supervised learning is when you apply both supervised and unsupervised learning techniques to a common problem. This technique relies on using a small amount of labeled data and a large amount of unlabeled data to train systems. When applying categories to a large document base, there may be too many documents to physically label. For example, these could be countless reports, transcripts, or specifications. Training on the unlabeled data helps identify similar documents for labeling.

63
Q

Association rule learning

A

This is an example of unsupervised learning. Association rule learning techniques uncover rule-based relationships between inputs in a dataset. For example, the Apriori algorithm conducts market basket analysis to identify rules like coffee and milk often being purchased together.
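The coffee-and-milk rule can be quantified with the two basic Apriori measures, support and confidence, over toy baskets:

```python
# Toy market-basket analysis for the rule "coffee -> milk".
transactions = [
    {"coffee", "milk", "bread"},
    {"coffee", "milk"},
    {"bread", "eggs"},
    {"coffee", "bread"},
]

def support(itemset):
    """Fraction of all baskets containing every item in itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Of the baskets containing the antecedent, the fraction also containing the consequent."""
    return support(antecedent | consequent) / support(antecedent)

print(support({"coffee", "milk"}))       # 0.5: half of all baskets
print(confidence({"coffee"}, {"milk"}))  # 0.666...: milk is in 2 of 3 coffee baskets
```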

64
Q

Clustering

A

Clustering is an unsupervised learning technique that groups certain data inputs, so they may be categorized as a whole. There are various types of clustering algorithms depending on the input data. An example of clustering is identifying different types of network traffic to predict potential security incidents.

65
Q

Benchmark datasets

A

Benchmark datasets are the most suitable option for evaluating an LLM for bias and discrimination with the least administrative effort. These datasets are specifically designed and curated to include a variety of scenarios that test for potential biases in model outputs. They are pre-existing and standardized, meaning that the company does not need to spend time or resources creating or manually curating data. Using these datasets allows for a quick, cost-effective, and consistent evaluation of model fairness across different contexts.

https://docs.aws.amazon.com/bedrock/latest/userguide/model-evaluation-prompt-datasets-builtin.html

https://docs.aws.amazon.com/bedrock/latest/userguide/model-evaluation-prompt-datasets.html

66
Q

Human-monitored benchmarking

A

Human-monitored benchmarking involves a team of human reviewers who manually assess the model’s outputs for bias. While this approach can provide nuanced feedback, it requires substantial administrative effort to coordinate, train, and manage human reviewers. It is labor-intensive and costly, and the potential for human error or subjective judgment may lead to inconsistent evaluations. Therefore, it is not the most efficient option if the goal is to minimize administrative overhead.

67
Q

User-generated data

A

Randomly selected user-generated data involves analyzing real user interactions for bias, but this approach is not ideal due to the lack of standardization. It requires considerable effort to manually select, curate, and evaluate the data for bias, and there is a risk that the selected samples may not cover all relevant bias scenarios comprehensively. This method also involves privacy and ethical considerations, adding further complexity and administrative effort.

68
Q

Internally generated synthetic data

A

Internally generated synthetic data allows for custom scenarios to be tested, but it requires a significant investment in resources, expertise, and time to create and maintain these datasets. Designing synthetic data that accurately reflects real-world biases and discrimination scenarios is complex, making it an impractical choice when aiming to minimize administrative effort.

69
Q

SageMaker Feature Store

A

SageMaker Feature Store is specifically designed to store and manage machine learning features. It allows data scientists and engineers to create a centralized, consistent, and standardized set of features that can be easily accessed and reused across different teams and projects, making it the ideal choice for sharing variables during model development. SageMaker Feature Store also supports feature versioning and governance, which helps maintain the integrity and accuracy of the data used in model development.

https://aws.amazon.com/sagemaker/feature-store/

70
Q

SageMaker Data Wrangler

A

SageMaker Data Wrangler - SageMaker Data Wrangler is primarily a tool for data preparation and feature engineering, not for storing or sharing features across different teams. While Data Wrangler provides capabilities for cleaning, transforming, and visualizing data, it does not offer the functionality needed to maintain a centralized repository for sharing variables during model development.

https://aws.amazon.com/sagemaker/data-wrangler/

71
Q

SageMaker Clarify

A

SageMaker Clarify is focused on detecting bias in data and explaining model predictions to ensure transparency and fairness. It does not provide a mechanism for storing or sharing features and is not designed to support collaborative feature management, making it unsuitable for the company’s goal of sharing variables for model development.

72
Q

SageMaker Model Monitor

A

SageMaker Model Monitor is designed to track the performance of machine learning models in production by monitoring data drift, bias, and other deviations. It does not offer any features for storing or sharing variables during the development phase, and its purpose is primarily focused on post-deployment model monitoring rather than feature management.

73
Q

Good Prompting technique

A

The following are the constituents of a good prompting technique:

(1) Instructions – a task for the model to do (description, how the model should perform)

(2) Context – external information to guide the model

(3) Input data – the input for which you want a response

(4) Output Indicator – the output type or format

via - https://aws.amazon.com/what-is/prompt-engineering/

74
Q

Hyperparameters

A

Hyperparameters are values that can be adjusted for model customization to control the training process and, consequently, the output custom model. In other words, hyperparameters are external configurations set before the training process begins. They control the training process and the structure of the model but are not adjusted by the training algorithm itself. Examples include the learning rate, the number of layers in a neural network, etc.

75
Q

Model parameters

A

Model parameters are values that define a model and its behavior in interpreting input and generating responses. Model parameters are controlled and updated by providers, and you can also update them to create a new model through the process of model customization. In other words, model parameters are the internal variables of the model that are learned and adjusted during the training process. These parameters directly influence the output of the model for a given input. Examples include the weights and biases in a neural network.

76
Q

RAG

A

Retrieval-Augmented Generation

Utilize a Retrieval-Augmented Generation (RAG) system by indexing all product catalog PDFs and configuring the LLM chatbot to reference this system for answering queries

Using a RAG approach is the least costly and most efficient solution for providing up-to-date and relevant responses. In this approach, you convert all product catalog PDFs into a searchable knowledge base. When a customer query comes in, the RAG framework first retrieves the most relevant pieces of information from this knowledge base and then uses an LLM to generate a coherent response based on the retrieved context. This method does not require re-training the model or modifying every incoming query with large datasets, making it significantly more cost-effective. It ensures that the chatbot always has access to the most recent information without needing expensive updates or processing every time.

https://aws.amazon.com/what-is/retrieval-augmented-generation/

https://aws.amazon.com/bedrock/knowledge-bases/
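A highly simplified sketch of the retrieve-then-generate flow (word-overlap retrieval over hypothetical catalog snippets; a production system would use embeddings, a vector store, and an LLM call instead):

```python
# Minimal RAG sketch: retrieve the most relevant catalog snippet, then build
# an augmented prompt for the LLM. Catalog contents are made up.
catalog = [
    "Model X100 blender: 1.5 L jug, 600 W motor, 2-year warranty.",
    "Model Z200 toaster: 4 slots, defrost mode, 1-year warranty.",
]

def retrieve(query, documents):
    """Return the document sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(documents, key=lambda d: len(query_words & set(d.lower().split())))

def build_prompt(query):
    """Augment the user query with retrieved context before calling an LLM."""
    context = retrieve(query, catalog)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What warranty does the X100 blender have?"))
```

Because the answer is grounded in the retrieved snippet, updating the catalog updates the chatbot's knowledge with no model retraining.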

77
Q

Foundation Models

A

Foundation Models serve as a broad base for various AI applications by providing generalized capabilities, whereas Large Language Models are specialized for understanding and generating human language

Foundation Models provide a broad base with generalized capabilities that can be applied to various tasks such as natural language processing (NLP), question answering, and image classification. The size and general-purpose nature of FMs make them different from traditional ML models, which typically perform specific tasks, like analyzing text for sentiment, classifying images, and forecasting trends.

Generally, an FM uses learned patterns and relationships to predict the next item in a sequence. For example, with image generation, the model analyzes the image and creates a sharper, more clearly defined version of the image. Similarly, with text, the model predicts the next word in a string of text based on the previous words and their context. It then selects the next word using probability distribution techniques.

In contrast, Large Language Models are specifically designed for tasks involving the understanding and generation of human language, making them more specialized. LLMs are specifically focused on language-based tasks such as summarization, text generation, classification, open-ended conversation, and information extraction.

https://aws.amazon.com/what-is/foundation-models/

78
Q

Large Language Model

A

Large language models, also known as LLMs, are very large deep learning models that are pre-trained on vast amounts of data. The underlying transformer is a set of neural networks that consist of an encoder and a decoder with self-attention capabilities. The encoder and decoder extract meanings from a sequence of text and understand the relationships between words and phrases in it.

https://aws.amazon.com/what-is/large-language-model/

79
Q

Tokens

A

Tokens are the correct answer because they represent the fundamental units of text that the AI model processes. Tokens can be whole words, parts of words (sub-words), or even single characters, depending on the model’s tokenization strategy. In generative AI, the model breaks down text into these tokens to better understand the structure, meaning, and context, enabling it to generate coherent language outputs.
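A toy greedy sub-word tokenizer illustrates how text breaks into whole words and word fragments. Real models learn their vocabularies (e.g. with byte-pair encoding); this vocabulary is made up:

```python
# Greedy longest-match sub-word tokenization over a tiny, hand-picked vocabulary.
vocab = {"un", "break", "able", "the", "code"}

def tokenize(text):
    tokens = []
    for word in text.lower().split():
        start = 0
        while start < len(word):
            # Take the longest vocabulary fragment starting at `start`.
            for end in range(len(word), start, -1):
                if word[start:end] in vocab:
                    tokens.append(word[start:end])
                    start = end
                    break
            else:
                tokens.append(word[start])  # fall back to a single character
                start += 1
    return tokens

print(tokenize("the unbreakable code"))
# ['the', 'un', 'break', 'able', 'code'] -- "unbreakable" becomes three sub-words
```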

80
Q

Embeddings

A

Embeddings are not the correct answer because they are a way of representing tokens (words, sub-words, or phrases) as numerical vectors to capture their semantic relationships in a high-dimensional space. Embeddings help the model understand the meaning and context of tokens, but they are not the units of text themselves.
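Cosine similarity over (made-up) embedding vectors shows how these numerical representations capture semantic relatedness:

```python
import math

# Hypothetical 3-dimensional embeddings; real models use hundreds of dimensions.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.88, 0.82, 0.12],
    "apple": [0.1, 0.2, 0.95],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

sim_royal = cosine_similarity(embeddings["king"], embeddings["queen"])
sim_fruit = cosine_similarity(embeddings["king"], embeddings["apple"])
print(sim_royal > sim_fruit)  # True: related tokens sit closer in the vector space
```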

81
Q

Vectors

A

Vectors are mathematical constructs used to represent the relationships between different words or tokens in a model. While vectors are crucial for understanding how words are related to each other in the embedding space, they do not directly represent the units of text (tokens) processed by the model.

82
Q

Context window

A

The context window is the total amount of text (measured in tokens) that a model can process at once. It does not refer to the individual units of text, like words or sub-words, but rather to the overall capacity for text input. Therefore, it is not the correct answer for identifying the basic units of text that the model handles.

83
Q

Reinforcement learning

A

Reinforcement learning involves an agent interacting with an environment by taking actions and receiving rewards or penalties, learning a policy to maximize cumulative rewards over time

Reinforcement learning works by having an agent take actions in an environment, receiving rewards or penalties based on the actions, and learning a policy that aims to maximize cumulative rewards over time. This process involves continuously adjusting actions based on the feedback received to improve performance.

https://aws.amazon.com/what-is/reinforcement-learning/

Reinforcement learning does not use supervised learning algorithms to label data. Rather, it focuses on learning from interaction with the environment.

Reinforcement learning is not an unsupervised learning technique and does not cluster data points without feedback.

While data transformation can be part of feature engineering, reinforcement learning specifically involves learning optimal actions based on feedback from the environment rather than transforming data into a new feature space.
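The reward-driven adjustment loop can be reduced to a single tabular Q-learning step. States, values, and rewards here are toy numbers; alpha and gamma are the standard learning-rate and discount hyperparameters:

```python
# One tabular Q-learning update: move the action-value estimate toward the
# reward plus the discounted value of the best next action.
alpha, gamma = 0.5, 0.9   # learning rate, discount factor

q = {("s0", "left"): 0.0, ("s0", "right"): 0.0,
     ("s1", "left"): 2.0, ("s1", "right"): 4.0}

# The agent takes "right" in s0, receives reward 1, and lands in s1.
state, action, reward, next_state = "s0", "right", 1.0, "s1"
best_next = max(q[(next_state, a)] for a in ("left", "right"))
q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])

print(q[("s0", "right")])  # 0.5 * (1 + 0.9 * 4 - 0) = 2.3
```

Repeating this update over many interactions is what lets the policy converge toward actions that maximize cumulative reward.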

84
Q

Amazon SageMaker JumpStart - Key features

A

(1) You can evaluate, compare, and select Foundation Models quickly based on pre-defined quality and responsibility metrics

(2) Pre-trained models are fully customizable for your use case with your data

Amazon SageMaker JumpStart is a machine learning (ML) hub that can help you accelerate your ML journey. With SageMaker JumpStart, you can evaluate, compare, and select FMs quickly based on pre-defined quality and responsibility metrics to perform tasks like article summarization and image generation. Pretrained models are fully customizable for your use case with your data, and you can easily deploy them into production with the user interface or SDK. You can also share artifacts, including models and notebooks, within your organization to accelerate model building and deployment, and admins can control which models are visible to users within their organization.

Your inference and training data will not be used or shared to update or train the base model that SageMaker JumpStart surfaces to customers.

SageMaker JumpStart provides proprietary and public models.

Amazon SageMaker Canvas provides a no-code interface in which you can create highly accurate machine learning models without any machine learning experience or writing a single line of code.

https://aws.amazon.com/sagemaker/jumpstart/

https://aws.amazon.com/sagemaker/faqs/

85
Q

Amazon SageMaker Ground Truth

A

To train a machine learning model, you need a large, high-quality, labeled dataset. Ground Truth helps you build high-quality training datasets for your machine learning models. With Ground Truth, you can use workers from Amazon Mechanical Turk, a vendor company that you choose, or an internal, private workforce, along with machine learning, to create a labeled dataset. You can use the labeled dataset output from Ground Truth to train your models, including as a training dataset for an Amazon SageMaker model.

Depending on your ML application, you can choose from one of the Ground Truth built-in task types to have workers generate specific types of labels for your data. You can also build a custom labeling workflow to provide your UI and tools to workers labeling your data. You can choose your workforce from:

The Amazon Mechanical Turk workforce of over 500,000 independent contractors worldwide.

A private workforce that you create from your employees or contractors for handling data within your organization.

A vendor company that you can find in the AWS Marketplace that specializes in data labeling services.

https://aws.amazon.com/sagemaker/groundtruth/

86
Q

Amazon SageMaker Feature Store

A

Amazon SageMaker Feature Store is a fully managed, purpose-built repository to store, share, and manage features for machine learning (ML) models. Features are inputs to ML models used during training and inference. For example, in an application that recommends a music playlist, features could include song ratings, listening duration, and listener demographics.

87
Q

Amazon SageMaker JumpStart

A

Amazon SageMaker JumpStart is a machine learning (ML) hub that can help you accelerate your ML journey. With SageMaker JumpStart, you can evaluate, compare, and select Foundation Models (FMs) quickly based on pre-defined quality and responsibility metrics to perform tasks like article summarization and image generation. Pretrained models are fully customizable for your use case with your data, and you can easily deploy them into production with the user interface or SDK.

88
Q

Amazon SageMaker Canvas

A

SageMaker Canvas offers a no-code interface that can be used to create highly accurate machine learning models without any machine learning experience or writing a single line of code. SageMaker Canvas provides access to ready-to-use models, including foundation models from Amazon Bedrock or Amazon SageMaker JumpStart, or you can build a custom ML model using AutoML powered by SageMaker Autopilot.

89
Q

Amazon Bedrock Guardrails:

A

The company should instruct the model to stick to the prompt by adding explicit instructions to ignore any unrelated or potentially malicious content

This is the correct approach because providing explicit instructions within the prompt helps guide the model’s behavior, reducing the likelihood of generating inappropriate or unsafe content. By clarifying what the model should focus on and what it should ignore, the company can enforce boundaries that align with its safety standards. This method is straightforward and leverages prompt engineering to mitigate risks effectively.

https://aws.amazon.com/bedrock/guardrails/

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-bedrock-guardrail.html

90
Q

Amazon Transcribe Medical

A

Amazon Transcribe Medical is an automatic speech recognition (ASR) service that makes it easy for you to add medical speech-to-text capabilities to your voice-enabled applications. Conversations between healthcare providers and patients provide the foundation of a patient’s diagnosis, treatment plan, and clinical documentation workflow, so it is critically important that this information is accurate. However, traditional transcription methods such as dictation recorders and scribes are expensive, time-consuming, and disruptive to the patient experience. Some organizations use existing medical transcription software but find it inefficient and low in quality.

Driven by state-of-the-art machine learning, Amazon Transcribe Medical accurately transcribes medical terminologies such as medicine names, procedures, and even conditions or diseases. Amazon Transcribe Medical can serve a diverse range of use cases such as transcribing physician-patient conversations for clinical documentation, capturing phone calls in pharmacovigilance, or subtitling telehealth consultations.

https://aws.amazon.com/transcribe/medical/

91
Q

Amazon Transcribe

A

Amazon Transcribe is an automatic speech recognition service that uses machine learning models to convert audio to text. You can use Amazon Transcribe as a standalone transcription service or add speech-to-text capabilities to any application. Amazon Transcribe is not specifically trained for medical terminologies or patient conditions and diseases. Hence, Amazon Transcribe Medical is optimal for this use case.

92
Q

Amazon Rekognition

A

Amazon Rekognition is a cloud-based image and video analysis service that makes it easy to add advanced computer vision capabilities to your applications. The service is powered by proven deep learning technology and it requires no machine learning expertise to use. Amazon Rekognition includes a simple, easy-to-use API that can quickly analyze any image or video file that’s stored in Amazon S3. Rekognition is not an automatic speech recognition (ASR) service.

93
Q

Amazon Polly

A

Amazon Polly uses deep learning technologies to synthesize natural-sounding human speech, so you can convert articles to speech. With dozens of lifelike voices across a broad set of languages, use Amazon Polly to build speech-activated applications. Amazon Polly enables existing applications to speak as a first-class feature and creates the opportunity for entirely new categories of speech-enabled products, from mobile apps and cars to devices and appliances. Polly is not an automatic speech recognition (ASR) service.

94
Q

Amazon Bedrock

A

Amazon Bedrock is the easiest way to build and scale generative AI applications with foundation models. Amazon Bedrock is a fully managed service that makes foundation models from Amazon and leading AI startups available through an API, so you can choose from various FMs to find the model that’s best suited for your use case. With Bedrock, you can speed up developing and deploying scalable, reliable, and secure generative AI applications without managing infrastructure.

https://aws.amazon.com/bedrock/

95
Q

Amazon SageMaker JumpStart

A

Amazon SageMaker JumpStart is a machine learning hub with foundation models, built-in algorithms, and prebuilt ML solutions that you can deploy with just a few clicks. With SageMaker JumpStart, you can access pre-trained models, including foundation models, to perform tasks like article summarization and image generation. Pretrained models are fully customizable for your use case with your data, and you can easily deploy them into production with the user interface or SDK.

https://aws.amazon.com/sagemaker/jumpstart/

96
Q

Amazon Q

A

Amazon Q is a generative AI–powered assistant for accelerating software development and leveraging companies’ internal data. Amazon Q generates code, tests, and debugs. It has multistep planning and reasoning capabilities that can transform and implement new code generated from developer requests.

https://aws.amazon.com/q/

97
Q

AWS Trainium

A

AWS Trainium is the machine learning (ML) chip that AWS purpose-built for deep learning (DL) training of 100B+ parameter models. Each Amazon Elastic Compute Cloud (Amazon EC2) Trn1 instance deploys up to 16 Trainium accelerators to deliver a high-performance, low-cost solution for DL training in the cloud.

https://aws.amazon.com/machine-learning/trainium/

98
Q

AWS Inferentia

A

AWS Inferentia is an ML chip purpose-built by AWS to deliver high-performance inference at a low cost. AWS Inferentia accelerators are designed by AWS to deliver high performance at the lowest cost in Amazon EC2 for your deep learning (DL) and generative AI inference applications.

99
Q

Knowledge Bases for Amazon Bedrock

A

With Knowledge Bases for Amazon Bedrock, you can give FMs and agents contextual information from your company’s private data sources for RAG to deliver more relevant, accurate, and customized responses

Knowledge Bases for Amazon Bedrock takes care of the entire ingestion workflow of converting your documents into embeddings (vectors) and storing the embeddings in a specialized vector database. Knowledge Bases for Amazon Bedrock supports popular databases for vector storage, including the vector engine for Amazon OpenSearch Serverless, Pinecone, Redis Enterprise Cloud, Amazon Aurora (coming soon), and MongoDB (coming soon). If you do not have an existing vector database, Amazon Bedrock creates an OpenSearch Serverless vector store for you.

via - https://aws.amazon.com/bedrock/knowledge-bases/

https://aws.amazon.com/bedrock/faqs/

100
Q

Watermark detection for Amazon Bedrock

A

The watermark detection mechanism allows you to identify images generated by Amazon Titan Image Generator, a foundation model that allows users to create realistic, studio-quality images in large volumes and at low cost, using natural language prompts. With watermark detection, you can increase transparency around AI-generated content by mitigating harmful content generation and reducing the spread of misinformation. You cannot use a watermark detection mechanism to implement RAG workflow in Amazon Bedrock.

https://aws.amazon.com/about-aws/whats-new/2024/04/watermark-detection-amazon-titan-image-generator-bedrock/

101
Q

Continued pretraining in Amazon Bedrock

A

In the continued pretraining process, you provide unlabeled data to pre-train a model by familiarizing it with certain types of inputs. You can provide data from specific topics to expose a model to those areas. The continued pretraining process will tweak the model parameters to accommodate the input data and improve its domain knowledge. You can use continued pretraining or fine-tuning for model customization in Amazon Bedrock. You cannot use continued pretraining to implement RAG workflow in Amazon Bedrock.

102
Q

Guardrails for Amazon Bedrock

A

Guardrails for Amazon Bedrock help you implement safeguards for your generative AI applications based on your use cases and responsible AI policies. They help control the interaction between users and FMs by filtering undesirable and harmful content, redacting personally identifiable information (PII), and enhancing content safety and privacy in generative AI applications. You cannot use guardrails to implement a RAG workflow in Amazon Bedrock.

103
Q

Confusion matrix

A

A confusion matrix is a tool specifically designed to evaluate the performance of classification models by displaying the number of true positives, true negatives, false positives, and false negatives. This matrix provides a detailed breakdown of the model’s performance across all classes, making it the most suitable choice for evaluating a classification model’s accuracy and identifying potential areas for improvement. It provides a comprehensive overview of the model’s performance by detailing how many instances were correctly or incorrectly classified in each category. This enables the company to understand where the model is performing well and where it may need adjustments, such as improving the classification of specific material types.

https://docs.aws.amazon.com/machine-learning/latest/dg/multiclass-model-insights.html
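A minimal sketch of how the four cells are tallied for a 2-class problem (the labels and predictions below are made up for illustration):

```python
# Minimal confusion-matrix sketch for a binary spam classifier.
actual    = ["spam", "spam", "ham", "ham", "spam", "ham"]
predicted = ["spam", "ham",  "ham", "spam", "spam", "ham"]

labels = ["spam", "ham"]
# matrix[i][j] counts instances of actual labels[i] predicted as labels[j]
matrix = [[0] * len(labels) for _ in labels]
for a, p in zip(actual, predicted):
    matrix[labels.index(a)][labels.index(p)] += 1

# With "spam" as the positive class:
tp = matrix[0][0]  # true positives:  spam predicted spam
fn = matrix[0][1]  # false negatives: spam predicted ham
fp = matrix[1][0]  # false positives: ham predicted spam
tn = matrix[1][1]  # true negatives:  ham predicted ham
print(matrix)  # [[2, 1], [1, 2]]
```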

104
Q

Root Mean Squared Error (RMSE)

A

Root Mean Squared Error (RMSE) is a metric commonly used to measure the average error in regression models by calculating the square root of the average squared differences between predicted and actual values. However, RMSE is not suitable for classification tasks, as it is designed to measure continuous outcomes, not discrete class predictions.
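As a quick illustration, RMSE can be computed in a few lines (the numbers are made up):

```python
import math

# RMSE: square root of the mean of squared prediction errors.
actual    = [3.0, 5.0, 2.0, 7.0]
predicted = [2.5, 5.0, 3.0, 6.5]

mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
rmse = math.sqrt(mse)
print(round(rmse, 4))  # 0.6124
```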

105
Q

Mean Absolute Error (MAE)

A

Mean Absolute Error (MAE) measures the average magnitude of errors in a set of predictions without considering their direction. MAE is typically used in regression tasks to quantify the accuracy of a continuous variable’s predictions, not for classification tasks where the outputs are categorical rather than continuous.

106
Q

Correlation matrix

A

A correlation matrix measures the statistical correlation between different variables or features in a dataset, typically used to understand the relationships between continuous variables. A correlation matrix is not designed to evaluate the performance of a classification model, as it does not provide any insight into the accuracy or errors of categorical predictions.
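Each entry of a correlation matrix is the Pearson correlation between a pair of features; a sketch of that computation on made-up data:

```python
import math

# Pearson correlation between two feature columns. A full correlation
# matrix is just this computed for every pair of features.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

height = [150, 160, 170, 180]
weight = [50, 60, 70, 80]
r = pearson(height, weight)
print(r)  # 1.0 — perfectly linearly related
```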

107
Q

Inference

A

Inference refers to the stage where a trained machine learning model is deployed to make predictions or generate outputs based on new input data. During inference, the model uses the patterns and relationships it learned during training to provide accurate and meaningful results. In this scenario, the user sends input data to the SageMaker model, which then performs inference to generate the corresponding output or prediction.

https://docs.aws.amazon.com/sagemaker/latest/dg/deploy-model.html

https://aws.amazon.com/blogs/machine-learning/create-train-test-and-validation-splits-on-your-data-for-machine-learning-with-amazon-sagemaker-data-wrangler/

108
Q

Training (ML Model)

A

Training is the process of teaching a machine learning model to recognize patterns by adjusting its internal parameters based on a labeled dataset. During training, the model learns from data by minimizing errors and improving accuracy. However, the scenario described does not involve modifying the model’s parameters; it only involves using the trained model to make predictions, making “training” an incorrect choice.

109
Q

Validation

A

Validation is a step used to evaluate and fine-tune the model during the training process by checking its performance on a validation dataset, which is separate from the training dataset. The purpose is to optimize the model’s hyperparameters and prevent overfitting. Since the scenario involves using the model to predict outcomes from new input data, rather than evaluating or fine-tuning it, “validation” is not the correct term.

110
Q

Testing

A

Testing is the final evaluation phase of a model, where its performance is assessed on an unseen test dataset after the training and validation phases are complete. It provides an unbiased estimate of the model’s generalization ability to new data. However, in the given scenario, the focus is on generating predictions from a model already trained, rather than testing its performance, so “testing” is not the correct answer.

111
Q

Amazon Kendra

A

Amazon Kendra is a highly accurate and easy-to-use enterprise search service that’s powered by machine learning (ML). It allows developers to add search capabilities to their applications so their end users can discover information stored within the vast amount of content spread across their company. This includes data from manuals, research reports, FAQs, human resources (HR) documentation, and customer service guides, which may be found across various systems such as Amazon Simple Storage Service (S3), Microsoft SharePoint, Salesforce, ServiceNow, RDS databases, or Microsoft OneDrive.

When you type a question, the service uses ML algorithms to understand the context and return the most relevant results, whether that means a precise answer or an entire document. For example, you can ask a question such as “How much is the cash reward on the corporate credit card?” and Amazon Kendra will map to the relevant documents and return a specific answer (such as “2%”). Kendra provides sample code so you can get started quickly and easily integrate highly accurate searches into your new or existing applications.

https://aws.amazon.com/kendra/faqs/

112
Q

Amazon Textract

A

Amazon Textract is a machine learning (ML) service that automatically extracts text, handwriting, layout elements, and data from scanned documents. It goes beyond simple optical character recognition (OCR) to identify, understand, and extract specific data from documents. Textract is not a search service.

113
Q

Amazon SageMaker Data Wrangler

A

Amazon SageMaker Data Wrangler reduces the time it takes to aggregate and prepare tabular and image data for ML from weeks to minutes. With SageMaker Data Wrangler, you can simplify the process of data preparation and feature engineering, and complete each step of the data preparation workflow (including data selection, cleansing, exploration, visualization, and processing at scale) from a single visual interface.

114
Q

Amazon Comprehend

A

Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find meaning and insights in text. By utilizing NLP, you can extract important phrases, sentiments, syntax, key entities such as brand, date, location, person, etc., and the language of the text. Comprehend can analyze text, but cannot extract it from documents or images.

115
Q

Amazon Bedrock Guardrails

A

Amazon Bedrock Guardrails detects sensitive information such as personally identifiable information (PII) in input prompts or model responses. You can also configure sensitive information specific to your use case or organization by defining it with regular expressions (regex).

This option dynamically scans and redacts confidential information from the model’s responses and it provides a practical and efficient solution. It allows the company to continue using the fine-tuned model without the need to retrain or delete it. This method provides real-time filtering of outputs, ensuring that any sensitive data is removed before it is presented to the end user, effectively maintaining data privacy and security.

https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-sensitive-filters.html
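A local sketch of the regex-based redaction idea (the patterns below are illustrative only, not production-grade PII detection, and this is not the Guardrails implementation):

```python
import re

# Illustrative regex filters for two kinds of sensitive information.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    # Replace each match with a labeled placeholder before the text
    # is shown to the end user.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

response = "Contact jane.doe@example.com, SSN 123-45-6789."
redacted = redact(response)
print(redacted)  # Contact [EMAIL], SSN [SSN].
```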

116
Q

Encryption

A

This option is incorrect because encryption protects data during storage and transmission but does not prevent the model from generating responses that may contain confidential information. The primary concern is to avoid the disclosure of sensitive data in the model outputs, which encryption does not address. Encryption would not prevent the model from exposing confidential information, making it an ineffective solution for this particular problem.

117
Q

Amazon Q in QuickSight

A

With Amazon Q in QuickSight, customers get a generative BI assistant that allows business analysts to use natural language to build BI dashboards in minutes and easily create visualizations and complex calculations. These dashboard-authoring capabilities empower business analysts to swiftly build, uncover, and share valuable insights using natural language prompts. You can simplify data understanding for business users through a context-aware Q&A experience, executive summaries, and customizable data stories — all designed to use insights to inform and drive decisions.

118
Q

Amazon Q Developer

A

Amazon Q Developer assists developers and IT professionals with all their tasks—from coding, testing, and upgrading applications, to diagnosing errors, performing security scanning and fixes, and optimizing AWS resources.

https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/what-is.html

119
Q

Amazon Q Business

A

Amazon Q Business is a fully managed, generative-AI-powered assistant that you can configure to answer questions, provide summaries, generate content, and complete tasks based on your enterprise data. It allows end users to receive immediate, permissions-aware responses from enterprise data sources with citations, for use cases such as IT, HR, and benefits help desks.

https://docs.aws.amazon.com/amazonq/latest/qbusiness-ug/what-is.html

120
Q

Amazon Q in Connect

A

Amazon Connect is the contact center service from AWS. Amazon Q helps customer service agents provide better customer service. Amazon Q in Connect enriches real-time customer conversations with relevant company content. It recommends what to say or what actions an agent should take to better assist customers.

121
Q

OpenSearch Serverless vector store

A

Knowledge Bases for Amazon Bedrock takes care of the entire ingestion workflow of converting your documents into embeddings (vectors) and storing the embeddings in a specialized vector database. Knowledge Bases for Amazon Bedrock supports popular databases for vector storage, including vector engine for Amazon OpenSearch Serverless, Pinecone, Redis Enterprise Cloud, Amazon Aurora, and MongoDB.

If you do not have an existing vector database, Amazon Bedrock creates an OpenSearch Serverless vector store for you.

via - https://aws.amazon.com/bedrock/knowledge-bases/


122
Q

Redis Enterprise Cloud

A

Redis Cloud combines the simplicity and versatility of Redis with the scalability and reliability required by enterprise-grade applications. This managed service is designed to elevate your data strategy, whether you’re optimizing existing applications or building new, data-intensive experiences.

123
Q

Amazon Aurora

A

Amazon Aurora is a relational database management system (RDBMS) built for the cloud with full MySQL and PostgreSQL compatibility. Aurora gives you the performance and availability of commercial-grade databases at one-tenth the cost.

124
Q

Foundation Models (FMs)

A

FMs use self-supervised learning to create labels from input data; however, fine-tuning an FM is a supervised learning process.

In supervised learning, you train the model with a set of input data and a corresponding set of paired labeled output data. Unsupervised machine learning is when you give the algorithm input data without any labeled output data. Then, on its own, the algorithm identifies patterns and relationships in and between the data. Self-supervised learning is a machine learning approach that applies unsupervised learning methods to tasks usually requiring supervised learning. Instead of using labeled datasets for guidance, self-supervised models create implicit labels from unstructured data.

Foundation models use self-supervised learning to create labels from input data. This means no one has instructed or trained the model with labeled training data sets.

Fine-tuning a pre-trained foundation model is an affordable way to take advantage of its broad capabilities while customizing it on your own small corpus of data. Fine-tuning involves further training a pre-trained language model on a specific task or domain-specific dataset, allowing it to address business requirements. Fine-tuning is a customization method that does change the weights of your model.

Fine-tuning an FM is a supervised learning process.

125
Q

BLEU

A

Bilingual Evaluation Understudy

BLEU score is the most appropriate metric for this use case. It is one of the most widely used metrics for evaluating machine translation quality. BLEU compares machine-generated translations with one or more human reference translations by analyzing n-gram overlaps. It provides a quantitative measure of translation accuracy, where a higher BLEU score indicates closer alignment with the reference translation. This makes BLEU particularly suited for assessing the performance of translation models. A BLEU score is typically a number between 0 and 1; it calculates the similarity of the machine translation to the reference human translation. A higher score represents better quality in natural language understanding (NLU).

via - https://aws.amazon.com/blogs/machine-learning/build-a-multilingual-automatic-translation-pipeline-with-amazon-translate-active-custom-translation/
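The core precision idea behind BLEU can be sketched at the unigram level (real BLEU combines clipped precisions for n-grams up to 4 plus a brevity penalty; this simplified version is for illustration only):

```python
from collections import Counter

# Clipped unigram precision: candidate words count only up to the number
# of times they appear in the reference.
def unigram_precision(candidate, reference):
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    clipped = sum(min(count, ref[word]) for word, count in cand.items())
    return clipped / sum(cand.values())

reference = "the cat is on the mat"
candidate = "the cat sat on the mat"
score = unigram_precision(candidate, reference)
print(round(score, 3))  # 0.833 — 5 of 6 candidate words match
```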

126
Q

ROUGE

A

Recall-Oriented Understudy for Gisting Evaluation

ROUGE is a metric used mainly for evaluating the quality of automatic text summarization by measuring the overlap of n-grams, word sequences, and word pairs between machine-generated and reference summaries. Although it measures some aspects of textual similarity, it is not specifically tailored for translation tasks and does not effectively capture the nuances needed to evaluate translation accuracy.

127
Q

Accuracy (ML Evaluation)

A

Accuracy is a broad metric typically used to evaluate classification tasks where the model’s output is compared against the correct label. For translation, which involves producing contextually and semantically accurate text rather than simply classifying outputs, accuracy is too simplistic and does not account for the complexities of language, such as syntax and grammar, making it an unsuitable metric for this purpose.

128
Q

BERT

A

Bidirectional Encoder Representations from Transformers

While BERT score is a more advanced metric that uses contextual embeddings to assess the semantic similarity between translated and reference texts, it is less established than BLEU score for evaluating translation tasks. BERT score may provide deeper insights into semantic similarities, but BLEU score remains the standard and most commonly used metric for translation evaluation due to its simplicity and effectiveness in capturing n-gram overlaps, making it the preferred choice.

129
Q

Amazon Comprehend

A

Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to uncover insights and relationships in text. It is specifically designed for tasks such as sentiment analysis, entity recognition, key phrase extraction, and language detection. For the scenario of analyzing customer reviews, Amazon Comprehend can directly determine the overall sentiment of a text (positive, negative, neutral, or mixed), making it the ideal service for this purpose. By using Amazon Comprehend, e-commerce platforms can effectively analyze customer feedback, understand customer satisfaction levels, and identify common themes or concerns.

https://docs.aws.amazon.com/comprehend/latest/dg/how-sentiment.html

130
Q

Amazon Bedrock

A

Amazon Bedrock is an AI service that provides access to foundation models (large language models, including those for NLP tasks) via an API. While Amazon Bedrock is not specifically an NLP service like Amazon Comprehend, it can be used to fine-tune pre-trained foundation models for various tasks, including sentiment analysis. With the proper configuration and fine-tuning, Bedrock can analyze text data to determine sentiment, making it a versatile option for advanced users who may need more customizable solutions than Amazon Comprehend.

https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-templates-and-examples.html

131
Q

Amazon Rekognition

A

Amazon Rekognition is a service designed for analyzing images and videos, not text. It can identify objects, people, text within images, and even detect inappropriate content in images and videos. However, it does not provide any capabilities for natural language processing or sentiment analysis, making it unsuitable for analyzing written customer reviews.

132
Q

Amazon Textract

A

Amazon Textract is an OCR (Optical Character Recognition) service that extracts printed or handwritten text from scanned documents, PDFs, and images. It is useful for digitizing text but does not offer any features for analyzing or interpreting the sentiment of the extracted text. Since Textract focuses on text extraction rather than understanding or analyzing the content, it is not suitable for sentiment analysis tasks.

133
Q

Amazon Personalize

A

Amazon Personalize is a service that provides personalized recommendations, search, and ranking for websites and applications based on user behavior and preferences. While it can help improve customer experience by suggesting products or content based on historical data, it does not offer natural language processing or sentiment analysis capabilities. Thus, it is not the correct choice for analyzing written customer reviews to determine sentiment.

134
Q

OCR

A

Optical Character Recognition

135
Q

SageMaker Canvas

A

SageMaker Canvas is a perfect choice for a company without coding expertise because it provides a fully no-code environment where users can build machine learning models through a user-friendly visual interface. It simplifies the entire machine learning process, from data import and preparation to model building and deployment, without the need to write any code. This makes it highly suitable for business analysts and non-technical users.

Amazon SageMaker Canvas enables you to build your own AI/ML models without having to write a single line of code. You can build ML models for common use cases such as regression and forecasting and can access and evaluate foundation models (FMs) from Amazon Bedrock. You can also access public FMs from Amazon SageMaker JumpStart for content generation, text extraction, and text summarization to support generative AI solutions.

https://aws.amazon.com/sagemaker/canvas/

136
Q

SageMaker Built-in Algorithms

A

SageMaker Built-in Algorithms offer a range of pre-built machine learning algorithms that can be used for various tasks such as classification, regression, and clustering. However, using these algorithms typically requires knowledge of coding to manage data preparation, model training, and tuning. As such, this option is less suitable for a company with no coding expertise.

137
Q

SageMaker Data Wrangler

A

SageMaker Data Wrangler can be a valuable tool for companies looking to clean and prepare data with minimal coding. It provides a visual interface for data wrangling tasks such as data cleaning, transformation, and feature engineering. However, it cannot be used to build machine learning models.

138
Q

SageMaker Clarify

A

SageMaker Clarify automatically evaluates foundation models for your generative AI use case with metrics such as accuracy, robustness, and toxicity to support your responsible AI initiative. SageMaker Clarify explains how input features contribute to your model predictions during model development and inference. Evaluate your FM during customization using automatic and human-based evaluations. SageMaker Clarify cannot be used to build machine learning models.

139
Q

Model Invocation logging

A

The company should enable model invocation logging, which allows for detailed logging of all requests and responses during model invocations in Amazon Bedrock

You can use model invocation logging to collect invocation logs, model input data, and model output data for all Amazon Bedrock invocations in your AWS account. With invocation logging, you can collect the full request data, response data, and metadata associated with all calls performed in your account. Logging can be configured with the destination resources where the log data will be published. Supported destinations include Amazon CloudWatch Logs and Amazon Simple Storage Service (Amazon S3). Only destinations from the same account and region are supported. Model invocation logging is disabled by default.

This is the correct option because enabling invocation logging on Amazon Bedrock allows the company to capture detailed logs of all model requests and responses, including input data, output predictions, and any errors that occur during model execution. This method provides comprehensive monitoring capabilities, enabling the company to effectively track, audit, and troubleshoot model performance and usage.

https://docs.aws.amazon.com/bedrock/latest/userguide/model-invocation-logging.html

https://aws.amazon.com/blogs/mt/monitoring-generative-ai-applications-using-amazon-bedrock-and-amazon-cloudwatch-integration/

140
Q

AWS CloudTrail

A

While AWS CloudTrail is useful for tracking API calls and monitoring who accessed which AWS resources, it does not capture the actual input and output data involved in model invocations. CloudTrail logs are primarily intended for auditing access and managing security rather than monitoring detailed data flow or model performance on Amazon Bedrock.

141
Q

Amazon EventBridge

A

Amazon EventBridge is designed to react to changes and events across AWS resources and trigger workflows or automate responses. Although it can track when a model invocation occurs, it does not provide detailed logging of the input and output data associated with these invocations, limiting its usefulness for comprehensive monitoring purposes.

142
Q

AWS Config

A

AWS Config is specifically designed for monitoring and managing AWS resource configurations and compliance, not for tracking or logging the input and output data of machine learning models on Amazon Bedrock. AWS Config focuses on configuration management and does not provide the level of detail required to monitor data traffic or model performance in machine learning applications.

143
Q

Plagiarism

A

Plagiarism involves presenting someone else’s work, ideas, or creations as one’s own without proper attribution. Detecting the use of generative AI tools to produce essays would help the committee identify instances where applicants might have submitted content that is not genuinely their own, thus maintaining the integrity of the admissions process. Plagiarism detection tools are intended to identify copied or non-original content, and the use of generative AI to produce essays could result in plagiarism if applicants do not properly attribute the AI’s role in generating the content.

144
Q

Hallucination

A

This option is incorrect because “hallucination” in AI refers to the creation of false information, which is not the main issue the admissions committee is dealing with. The committee is focused on detecting essays that are not genuinely authored by the applicants, not on the factual accuracy of the content itself.

145
Q

Bias

A

This option is incorrect because the admissions committee is not concerned with bias in this context. Bias in AI-generated outputs would involve content that unfairly favors or discriminates against certain groups, which is not the same issue as ensuring that essays are the original work of the applicants.

146
Q

Misinterpretation

A

Misinterpretation occurs when the meaning or intent of a text is misunderstood or conveyed incorrectly. While misinterpretation might affect how an essay is read, it is not the core issue the admissions committee is focused on; their primary goal is to verify that the content is original and authored by the applicant.

147
Q

Amazon Titan

A

Exclusive to Amazon Bedrock, the Amazon Titan family of models incorporates Amazon’s 25 years of experience innovating with AI and machine learning across its business. Amazon Titan foundation models (FMs) provide customers with a breadth of high-performing image, multimodal, and text model choices, via a fully managed API. Amazon Titan models are created by AWS and pretrained on large datasets, making them powerful, general-purpose models built to support a variety of use cases, while also supporting the responsible use of AI. Use them as is or privately customize them with your own data.

148
Q

Stable Diffusion

A

Stable Diffusion is a generative artificial intelligence (generative AI) model that produces unique photorealistic images from text and image prompts.

https://aws.amazon.com/what-is/stable-diffusion/

149
Q

Llama

A

Llama is a series of large language models trained on publicly available data. They are built on the transformer architecture, enabling them to handle input sequences of any length and produce output sequences of varying lengths. A notable feature of Llama models is their capacity to generate coherent and contextually appropriate text.

150
Q

Jurassic

A

The Jurassic family of models from AI21 Labs supports use cases such as question answering, summarization, draft generation, advanced information extraction, and ideation for tasks requiring intricate reasoning and logic.

https://aws.amazon.com/bedrock/ai21/

151
Q

Claude

A

Claude is Anthropic’s frontier, state-of-the-art large language model that offers important features for enterprises like advanced reasoning, vision analysis, code generation, and multilingual processing.

https://aws.amazon.com/bedrock/claude/

152
Q

Amazon SageMaker Clarify

A

Amazon SageMaker Clarify provides tools to help explain how machine learning (ML) models make predictions. These tools can help ML modelers and developers and other internal stakeholders understand model characteristics as a whole prior to deployment and to debug predictions provided by the model after it’s deployed.

SageMaker Clarify uses a model-agnostic feature attribution approach. You can use this to understand why a model made a prediction after training, and to provide per-instance explanation during inference. It includes a scalable and efficient implementation of SHAP, based on the concept of a Shapley value from cooperative game theory, which assigns each feature an importance value for a particular prediction.

Clarify produces partial dependence plots (PDPs) that show the marginal effect features have on the predicted outcome of a machine learning model. Partial dependence helps explain target response given a set of input features.

via - https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-model-explainability.html

153
Q

Amazon SageMaker Canvas

A

Through the no-code interface of SageMaker Canvas, you can create highly accurate machine-learning models — without any machine-learning experience or writing a single line of code. SageMaker Canvas provides access to ready-to-use models including foundation models from Amazon Bedrock or Amazon SageMaker JumpStart or you can build your custom ML model using AutoML powered by SageMaker AutoPilot. With SageMaker Canvas, you can use SageMaker Data Wrangler to easily access and import data from 50+ sources, prepare data using natural language and 300+ built-in transforms, build and train highly accurate models, generate predictions, and deploy models to production.

154
Q

Amazon SageMaker Feature Store

A

Amazon SageMaker Feature Store is a fully managed, purpose-built repository to store, share, and manage features for machine learning (ML) models. Features are inputs to ML models used during training and inference. For example, in an application that recommends a music playlist, features could include song ratings, listening duration, and listener demographics.

155
Q

Amazon SageMaker Model Monitor

A

Amazon SageMaker Model Monitor monitors the quality of Amazon SageMaker machine learning models in production.

156
Q

Labeled Data

A

Labeled data is annotated with output labels that provide specific information about each data point and is used for supervised learning

Labeled data is data that comes with predefined labels or annotations. Each data point has an associated label that provides information about the data. This type of data is crucial for supervised learning, where the model learns to predict the output from the input data.

Examples: Image classification: Images labeled with the objects they contain. Sentiment analysis: Text labeled with the sentiment it expresses (e.g., positive, negative). Spam detection: Emails labeled as “spam” or “not spam.”

https://aws.amazon.com/what-is/data-labeling/

https://aws.amazon.com/compare/the-difference-between-machine-learning-supervised-and-unsupervised/

157
Q

Unlabeled Data

A

Unlabeled data lacks such annotations (labels) and is used for unsupervised learning

Unlabeled data is data that does not come with any labels, annotations, or explicit instructions about what it represents. This type of data is often used in unsupervised learning, where the model attempts to find patterns or structures in the data without predefined labels.

Examples: Clustering: Grouping similar data points, such as customer segmentation based on purchasing behavior. Dimensionality reduction: Reducing the number of features in a dataset while preserving important information, such as using Principal Component Analysis (PCA).

158
Q

Reduce the number of tokens in the Input

A

For the given use case, reducing the number of tokens in the input is the most effective way to minimize costs associated with the use of a generative AI model on Amazon Bedrock. Each token represents a piece of text that the model processes, and the cost is directly proportional to the number of tokens in the input. By reducing the input length, the company can decrease the amount of computational power required for each request, thereby lowering the cost of usage.

https://docs.aws.amazon.com/bedrock/latest/userguide/bedrock-pricing.html
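A back-of-the-envelope sketch of why trimming input tokens lowers cost. Bedrock text models bill separately for input and output tokens; the per-1,000-token prices below are made-up placeholders, not actual Bedrock pricing.

```python
# Hypothetical per-1,000-token prices (placeholders, not real pricing).
PRICE_PER_1K_INPUT = 0.003
PRICE_PER_1K_OUTPUT = 0.015

def request_cost(input_tokens, output_tokens):
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT + \
           (output_tokens / 1000) * PRICE_PER_1K_OUTPUT

# Trimming a verbose prompt from 2,000 to 500 input tokens cuts the
# input portion of the cost by 75% while the output stays the same.
verbose = request_cost(2000, 300)
trimmed = request_cost(500, 300)
print(round(verbose - trimmed, 4))  # savings per request
```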

159
Q

Reducing the temperature inference parameter for the model

A

Reducing the temperature affects the creativity and randomness of the model’s output but has no effect on the cost related to input processing. The cost of using a generative AI model is primarily determined by the number of tokens processed, not by the temperature setting. Thus, adjusting the temperature is irrelevant to cost reduction.
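To see why temperature changes output character but not cost, here is a sketch of how temperature rescales next-token probabilities at sampling time (logit values are made up):

```python
import math

# Softmax with temperature: divide logits by T before normalizing.
# Lower T sharpens the distribution toward the top token; the number
# of input tokens processed (and hence the cost) is unchanged.
def softmax_with_temperature(logits, temperature):
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
base = softmax_with_temperature(logits, 1.0)
sharp = softmax_with_temperature(logits, 0.2)
print([round(p, 3) for p in base])
print([round(p, 3) for p in sharp])  # lower T -> much sharper distribution
```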

160
Q

Reduce the top-P inference parameter for the model

A

Reducing the top-P value affects the diversity and variety of the model’s generated output but does not influence the cost of processing the input. Since the cost is based on the number of tokens in the input, changing the top-P value does not contribute to reducing expenses.
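Top-P (nucleus) sampling can be sketched as follows: keep the smallest set of highest-probability tokens whose cumulative probability reaches P, then renormalize before sampling (token probabilities are made up):

```python
# Nucleus filter: rank tokens by probability, accumulate until the
# cumulative mass reaches p, drop the rest, and renormalize.
def top_p_filter(probs, p):
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for token, prob in ranked:
        kept.append((token, prob))
        cumulative += prob
        if cumulative >= p:
            break
    total = sum(prob for _, prob in kept)
    return {token: prob / total for token, prob in kept}

probs = {"cat": 0.5, "dog": 0.3, "fish": 0.15, "axolotl": 0.05}
filtered = top_p_filter(probs, 0.8)
print(filtered)  # only "cat" and "dog" survive, renormalized
```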

161
Q

Reduce the batch size while training the model

A

This option acts as a distractor. Modifying the batch size while training the model has no impact on the cost of model usage during inference. You should also note that you cannot train the base Foundation Models (FMs) using Amazon Bedrock; rather, you can only customize the base FMs, which creates your own private copy of the base FM that is served using Provisioned Throughput mode.

162
Q

Large Language Model (LLM)

A

Large language models (LLMs) are a class of Foundation Models (FMs). For example, OpenAI’s generative pre-trained transformer (GPT) models are LLMs. LLMs are specifically focused on language-based tasks such as summarization, text generation, classification, open-ended conversation, and information extraction.

163
Q

Retrieval-Augmented Generation (RAG)

A

Retrieval-Augmented Generation (RAG) is the process of optimizing the output of a large language model, so it references an authoritative knowledge base outside of its training data sources before generating a response. Large Language Models (LLMs) are trained on vast volumes of data and use billions of parameters to generate original output for tasks like answering questions, translating languages, and completing sentences. RAG extends the already powerful capabilities of LLMs to specific domains or an organization’s internal knowledge base, all without the need to retrain the model. It is a cost-effective approach to improving LLM output so it remains relevant, accurate, and useful in various contexts.

Depending on the configuration, the Amazon Q Business web application workflow can use an LLM, RAG, or both.

164
Q

Diffusion Model

A

Diffusion models create new data by iteratively making controlled random changes to an initial data sample. They start with the original data and add subtle changes (noise), progressively making it less similar to the original. This noise is carefully controlled to ensure the generated data remains coherent and realistic. After adding noise over several iterations, the diffusion model reverses the process. Reverse denoising gradually removes the noise to produce a new data sample that resembles the original.
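The forward (noising) half of this process can be sketched with a toy example: repeatedly add small Gaussian perturbations to a numeric sample. This is only the forward process on a made-up vector; real diffusion models use a carefully scheduled noise process and train a neural network to perform the reverse denoising.

```python
import random

def forward_diffuse(sample, steps, noise_scale=0.1, seed=0):
    """Forward diffusion sketch: iteratively add small Gaussian noise,
    recording the sample after each step."""
    rng = random.Random(seed)
    noisy = list(sample)
    trajectory = [list(noisy)]  # step 0 is the original data
    for _ in range(steps):
        noisy = [x + rng.gauss(0.0, noise_scale) for x in noisy]
        trajectory.append(list(noisy))
    return trajectory

traj = forward_diffuse([1.0, -1.0, 0.5], steps=20)
```

Each step makes the sample progressively less similar to the original; a trained diffusion model would then learn to walk this trajectory in reverse to generate new data.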

165
Q

Generative adversarial network (GAN)

A

GANs work by training two neural networks in a competitive manner. The first network, known as the generator, generates fake data samples by adding random noise. The second network, called the discriminator, tries to distinguish between real data and the fake data produced by the generator. During training, the generator continually improves its ability to create realistic data while the discriminator becomes better at telling real from fake. This adversarial process continues until the generator produces data that is so convincing that the discriminator can’t differentiate it from real data.

https://aws.amazon.com/what-is/retrieval-augmented-generation/

166
Q

Variational autoencoders (VAE)

A

VAEs use two neural networks—the encoder and the decoder. The encoder neural network maps the input data to a mean and variance for each dimension of the latent space. It generates a random sample from a Gaussian (normal) distribution. This sample is a point in the latent space and represents a compressed, simplified version of the input data. The decoder neural network takes this sampled point from the latent space and reconstructs it back into data that resembles the original input.

167
Q

Transfer learning

A

Transfer learning, a method where a model pre-trained on one task is adapted to improve performance on a different but related task by leveraging knowledge from the original task

Transfer learning is the most suitable approach in this scenario. It allows a model to utilize the knowledge learned from one task or dataset to improve its performance on a new, but related task. For a company using multiple models for different use cases, transfer learning can help optimize performance by adapting insights from the latest data generated by other models. This approach reduces the need for extensive data and computational resources while ensuring that the models benefit from shared knowledge across related domains or tasks.

via - https://aws.amazon.com/what-is/transfer-learning/

168
Q

Incremental training

A

While incremental training is useful for continuously updating a model with new data, it focuses on enhancing a single model’s performance with its own data rather than learning from the data of other models. This approach is not designed for optimizing multiple models by leveraging knowledge across different use cases, making it less suitable for the company’s objective.

169
Q

Self-supervised learning

A

Self-supervised learning is effective for tasks where labeled data is scarce or unavailable, as it allows models to learn useful representations from large amounts of unlabeled data. However, it does not directly address the need to optimize multiple models using the latest data from other models. This approach is more suitable for foundational training rather than model optimization across different use cases.

170
Q

Reinforcement learning

A

Reinforcement learning is designed for scenarios where a model needs to learn a sequence of actions to achieve a specific goal by maximizing cumulative rewards. It is primarily used in decision-making problems, such as game playing or robotic control. It is not well-suited for optimizing multiple models by learning from the latest data of other models, as it does not involve leveraging cross-model data or knowledge sharing.

171
Q

Amazon Forecast

A

Predict product demand to accurately vary inventory and pricing at different store locations

Amazon Forecast is a fully managed service that uses statistical and machine learning algorithms to deliver highly accurate time-series forecasts. Based on the same technology used for time-series forecasting at Amazon.com, Forecast provides state-of-the-art algorithms to predict future time-series data based on historical data and requires no machine learning experience.

Here are some common use cases for Amazon Forecast:

Retail demand planning – Predict product demand, allowing you to accurately vary inventory and pricing at different store locations.

Supply chain planning – Predict the quantity of raw goods, services, or other inputs required by manufacturing.

Resource planning – Predict requirements for staffing, advertising, energy consumption, and server capacity.

Operational planning – Predict levels of web traffic, AWS usage, and IoT sensor usage.

https://aws.amazon.com/forecast/

172
Q

Amazon Personalize

A

Recommendations tailored to a user’s profile, behavior, preferences, and history

You can use Amazon Personalize for recommendations that are tailored to a user’s profile, behavior, preferences, and history.

173
Q

Amazon Transcribe

A

Detect and categorize toxic audio and foster a safe and inclusive online environment

You can use Amazon Transcribe to detect and categorize toxic audio and foster a safe and inclusive online environment.

174
Q

Amazon Lex

A

Design conversational solutions that respond to frequently asked questions for technical support and HR benefits

Amazon Lex is the right fit for this option. Amazon Lex is a fully managed artificial intelligence (AI) service with advanced natural language models to design, build, test, and deploy conversational interfaces in applications.

175
Q

Top K

A

Top K represents the number of most likely candidates that the model considers for the next token. Choose a lower value to decrease the size of the pool and limit the options to more likely outputs. Choose a higher value to increase the size of the pool and allow the model to consider less likely outputs.

via - https://docs.aws.amazon.com/bedrock/latest/userguide/inference-parameters.html
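The mechanics can be sketched with a toy next-token distribution (the tokens and probabilities below are invented for illustration): keep only the k most likely candidates, then renormalize so the kept probabilities sum to 1.

```python
def top_k_filter(probs: dict, k: int) -> dict:
    """Keep the k most likely next-token candidates and renormalize."""
    kept = dict(sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k])
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

# Hypothetical next-token probabilities from a language model
probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "zebra": 0.05}
```

With k=2, only "the" and "a" remain in the sampling pool; a larger k would let the model consider less likely tokens like "zebra".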

176
Q

Temperature

A

Temperature is a value between 0 and 1, and it regulates the creativity of the model’s responses.

Use a lower temperature if you want more deterministic responses.

Use a higher temperature if you want more creative or different responses for the same prompt on Amazon Bedrock.
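The effect of temperature can be sketched by scaling logits before the softmax (the logit values below are invented for illustration): dividing by a low temperature sharpens the distribution toward the top candidate, while a higher temperature flattens it.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Softmax over temperature-scaled logits: low temperature sharpens
    the distribution (more deterministic), high temperature flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical next-token logits
cold = softmax_with_temperature(logits, 0.2)  # near-deterministic
hot = softmax_with_temperature(logits, 1.0)   # more varied
```

At temperature 0.2 the top token receives almost all of the probability mass, so sampling is nearly deterministic; at 1.0 the lower-ranked tokens keep a meaningful chance of being chosen, producing more varied output.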

177
Q

Top P

A

Top P represents the percentage of most likely candidates that the model considers for the next token. Choose a lower value to decrease the size of the pool and limit the options to more likely outputs. Choose a higher value to increase the size of the pool and allow the model to consider less likely outputs.
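This nucleus-sampling idea can be sketched as follows (same invented toy distribution as above): walk the candidates from most to least likely, keeping tokens until their cumulative probability reaches P, then renormalize.

```python
def top_p_filter(probs: dict, p: float) -> dict:
    """Keep the smallest set of most-likely tokens whose cumulative
    probability reaches p, then renormalize."""
    kept, cumulative = {}, 0.0
    for tok, prob in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[tok] = prob
        cumulative += prob
        if cumulative >= p:
            break
    total = sum(kept.values())
    return {tok: pr / total for tok, pr in kept.items()}

# Hypothetical next-token probabilities
probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "zebra": 0.05}
```

With P=0.8 the pool stops at "the" and "a" (0.5 + 0.3); raising P to 0.95 admits "cat" as well, allowing less likely outputs.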

178
Q

Stop sequences

A

Stop sequences specify the sequences of characters that stop the model from generating further tokens. If the model generates a stop sequence that you specify, it will stop generating after that sequence.
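The behavior can be sketched as post-hoc truncation (a simplification: real models stop generating tokens once the sequence appears, and whether the matched sequence itself is returned varies by model; here we drop it).

```python
def apply_stop_sequences(generated, stop_sequences):
    """Truncate generated text at the earliest occurrence of any
    configured stop sequence."""
    cut = len(generated)
    for seq in stop_sequences:
        idx = generated.find(seq)
        if idx != -1:
            cut = min(cut, idx)
    return generated[:cut]

raw = "The answer is 42.\nHuman: what about 43?"
clean = apply_stop_sequences(raw, ["\nHuman:"])
```

A common use is chat-style prompting, where a turn marker like "\nHuman:" is configured as a stop sequence so the model does not continue the conversation on the user's behalf.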

179
Q

Hallucination

A

Where the model generates seemingly accurate information that is, in fact, incorrect or fabricated

The term “hallucination” refers to a phenomenon in which a language model generates responses that sound plausible and appear factual but are actually false or unsupported by any underlying data. Hallucinations occur because the model relies on patterns learned during training rather than verified knowledge. This is a known limitation of LLMs, which can create convincing text that may mislead users if not carefully monitored or verified against reliable sources.

https://aws.amazon.com/blogs/machine-learning/best-practices-to-build-generative-ai-applications-on-aws/

180
Q

Data Drift

A

Data drift occurs when the distribution or characteristics of the input data change over time, which can cause the model’s performance to degrade. However, data drift does not explain why an LLM would produce plausible but incorrect responses; it is more related to changes in the data environment rather than the inherent behavior of the model in generating misleading information.

181
Q

Overfitting

A

Capturing Noise or Irrelevant Details

Overfitting occurs when a model learns the training data too well, capturing noise or irrelevant details, which results in poor performance on new data. However, overfitting does not specifically describe the generation of plausible but incorrect responses; rather, it is about the model’s failure to generalize beyond the examples it has been trained on.

182
Q

Underfitting

A

Model is too simple

Underfitting happens when a model is too simple to learn the complexities of the data, leading to poor performance on both training and unseen datasets. While underfitting does cause incorrect responses, it is due to the model’s inability to learn from data, not because it is generating fabricated or misleadingly plausible information.

183
Q

Amazon SageMaker Clarify

A

SageMaker Clarify automatically evaluates foundation models for your generative AI use case with metrics such as accuracy, robustness, and toxicity to support your responsible AI initiative. SageMaker Clarify explains how input features contribute to your model predictions during model development and inference. You can also evaluate your FM during customization using automatic and human-based evaluations.

SageMaker Clarify is integrated with SageMaker Experiments to provide a feature importance graph detailing the importance of each input for your model’s overall decision-making process after the model has been trained. These details can help determine if a particular model input has more influence than it should on overall model behavior. SageMaker Clarify also makes explanations for individual predictions available through an API.

https://aws.amazon.com/sagemaker/clarify/

184
Q

Amazon SageMaker Ground Truth

A

Amazon SageMaker Ground Truth offers the most comprehensive set of human-in-the-loop capabilities, allowing you to harness the power of human feedback across the ML lifecycle to improve the accuracy and relevancy of models. You can complete a variety of human-in-the-loop tasks with SageMaker Ground Truth, from data generation and annotation to model review, customization, and evaluation, either through a self-service or an AWS-managed offering.

185
Q

Amazon SageMaker Canvas

A

SageMaker Canvas offers a no-code interface that can be used to create highly accurate machine-learning models - without any machine-learning experience or writing a single line of code. SageMaker Canvas provides access to ready-to-use models including foundation models from Amazon Bedrock or Amazon SageMaker JumpStart or you can build your own custom ML model using AutoML powered by SageMaker AutoPilot.

186
Q

Amazon SageMaker JumpStart

A

Amazon SageMaker JumpStart is a machine learning (ML) hub that can help you accelerate your ML journey. With SageMaker JumpStart, you can evaluate, compare, and select Foundation Models (FMs) quickly based on pre-defined quality and responsibility metrics to perform tasks like article summarization and image generation. Pretrained models are fully customizable for your use case with your data, and you can easily deploy them into production with the user interface or SDK.

187
Q

Data Augmentation

A

Augment the data by generating new instances of data for underrepresented groups

This option involves creating additional examples to balance the dataset. This approach helps to increase the representation of underrepresented groups in the training data, thereby reducing bias in the model’s outputs. By augmenting the dataset, the model is exposed to a more diverse set of inputs, improving its ability to generate fair and unbiased images. By ensuring that all groups are adequately represented in the training data, the model learns to generate unbiased images that fairly reflect the diversity of the population. Data augmentation is a widely used technique to address data imbalance and mitigate bias in machine learning models, particularly in tasks involving image generation.

https://aws.amazon.com/what-is/data-augmentation/
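One simple augmentation strategy is to oversample underrepresented classes until the dataset is balanced. The sketch below uses invented record keys ("label", "feature") and duplicates minority examples with light numeric jitter; real image-generation pipelines would instead apply transformations such as crops, flips, or color shifts.

```python
import random
from collections import Counter

def augment_minority(records, label_key="label", seed=0):
    """Duplicate (with light jitter) examples from underrepresented
    classes until every class matches the size of the largest one."""
    rng = random.Random(seed)
    counts = Counter(r[label_key] for r in records)
    target = max(counts.values())
    augmented = list(records)
    for label, count in counts.items():
        pool = [r for r in records if r[label_key] == label]
        for _ in range(target - count):
            sample = dict(rng.choice(pool))               # copy an example
            sample["feature"] += rng.gauss(0.0, 0.01)     # small perturbation
            augmented.append(sample)
    return augmented

records = (
    [{"label": "cat", "feature": 1.0} for _ in range(30)]
    + [{"label": "zebra", "feature": 2.0} for _ in range(5)]
)
balanced = augment_minority(records)
```

After augmentation each class contributes equally to training, so the model is less likely to underperform on the previously underrepresented group.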

188
Q

Model regularization

A

Regularization techniques, such as L1 or L2 regularization, are designed to prevent overfitting by adding a penalty to the loss function during training. These techniques do not address the issue of data imbalance or bias, and therefore do not solve the problem of biased outputs from the image generation model.

189
Q

Amazon Transcribe

A

Amazon Transcribe converts audio input into text, which opens the door for various text analytics applications on voice input

https://aws.amazon.com/transcribe/faqs/

190
Q

Amazon Comprehend

A

Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find insights and relationships in text, no machine learning experience is required. Amazon Comprehend uses machine learning to help you uncover the insights and relationships in your unstructured data.

By using Amazon Comprehend on the converted text data from Amazon Transcribe, you can perform sentiment analysis or extract entities and key phrases.

https://docs.aws.amazon.com/managedservices/latest/userguide/comprehend.html

191
Q

Amazon Rekognition

A

Amazon Rekognition is a cloud-based image and video analysis service that makes it easy to add advanced computer vision capabilities to your applications. Amazon Rekognition is not useful for analysis of audio files.

192
Q

Amazon Translate

A

Amazon Translate is used to translate unstructured text documents or to build applications that work in multiple languages, while Amazon Transcribe converts audio input into text. Sentiment analysis is not possible using these services.

193
Q

Amazon Bedrock

A

Amazon Bedrock is a fully managed service that offers a choice of high-performing FMs and a broad set of capabilities. You can easily experiment with various top FMs, privately customize them with your data, and create managed agents that execute complex business tasks.

Amazon Bedrock is the easiest way to build and scale generative AI applications with foundation models. Amazon Bedrock makes foundation models from Amazon and leading AI startups available through an API, so you can choose from various FMs to find the model that’s best suited for your use case. With Amazon Bedrock, you can speed up developing and deploying scalable, reliable, and secure generative AI applications without managing infrastructure.

194
Q

Amazon Q Developer

A

Code generation is one of the most promising applications for generative AI. With Amazon Q Developer, a generative AI-powered assistant for software development, you can significantly boost developer productivity.

195
Q

Amazon Q in QuickSight

A

Amazon Q in QuickSight helps business analysts easily create and customize visuals using natural-language commands. The new Generative BI authoring capabilities extend the natural-language querying of QuickSight Q beyond answering well-structured questions (such as “What are the top 10 products sold in California?”) to help analysts quickly create customizable visuals from question fragments (such as “Top 10 products”), clarify the intent of a query by asking follow-up questions, refine visualizations, and complete complex calculations.

196
Q

AWS Inferentia

A

AWS Inferentia accelerators are designed by AWS to deliver high performance at the lowest cost in Amazon EC2 for your deep learning (DL) and generative AI inference applications.

https://aws.amazon.com/machine-learning/inferentia/

197
Q

Amazon Bedrock

A

Leverage Amazon Bedrock to make a separate copy of the base FM and train this private copy using the labeled training dataset

Amazon Bedrock is a fully managed service that makes high-performing foundation models (FMs) from leading AI startups and Amazon available for your use through a unified API. Using Amazon Bedrock, you can easily experiment with and evaluate top foundation models for your use cases, privately customize them with your data using techniques such as fine-tuning and Retrieval Augmented Generation (RAG), and build agents that execute tasks using your enterprise systems and data sources.

With Amazon Bedrock, you can privately customize FMs, retaining control over how your data is used and encrypted. Amazon Bedrock makes a separate copy of the base FM and trains this private copy of the model. Your data includes prompts, information used to supplement a prompt, and FM responses. Customized FMs remain in the Region where the API call is processed.

With Amazon Bedrock, your data, including prompts and customized foundation models, stays within the AWS Region where the API call is processed and encrypted in transit as well as at rest. You can use AWS PrivateLink to ensure private connectivity between your models and on-premises networks without exposing traffic to the internet.

198
Q

Amazon Rekognition

A

Amazon Rekognition is a cloud-based image and video analysis service that makes it easy to add advanced computer vision capabilities to your applications. The service is powered by proven deep learning technology and it requires no machine learning expertise to use. Amazon Rekognition includes a simple, easy-to-use API that can quickly analyze any image or video file that’s stored in Amazon S3.

You can add features that detect objects, text, and unsafe content, analyze images/videos, and compare faces to your application using Rekognition’s APIs. With Amazon Rekognition’s face recognition APIs, you can detect, analyze, and compare faces for a wide variety of use cases, including user verification, cataloging, people counting, and public safety.

Amazon Rekognition offers pre-trained and customizable computer vision (CV) capabilities to extract information and insights from your images and videos.

199
Q

Amazon SageMaker

A

Amazon SageMaker is a fully managed machine learning (ML) service. With SageMaker, data scientists and developers can quickly and confidently build, train, and deploy ML models into a production-ready hosted environment. It provides a UI experience for running ML workflows that makes SageMaker ML tools available across multiple integrated development environments (IDEs).

200
Q

AWS DeepRacer

A

AWS DeepRacer is an autonomous 1/18th scale race car designed to test RL models by racing on a physical track. Using cameras to view the track and a reinforcement learning model to control throttle and steering, the car shows how a model trained in a simulated environment can be transferred to the real world.

https://aws.amazon.com/deepracer/

201
Q

Amazon Textract

A

Amazon Textract is a machine learning (ML) service that automatically extracts text, handwriting, layout elements, and data from scanned documents. It goes beyond simple optical character recognition (OCR) to identify, understand, and extract specific data from documents.

https://aws.amazon.com/textract/

202
Q

Amazon Bedrock On-demand pricing

A

On-demand pricing is the most appropriate option for a company that is uncertain about the time commitment or extent of its usage. This pricing model allows the company to pay for Amazon Bedrock services based on actual usage without requiring any upfront payment or long-term contract. It provides flexibility and scalability, making it suitable for organizations that need to adapt their usage according to evolving needs or have unpredictable workloads.

Amazon Bedrock pricing:

via - https://aws.amazon.com/bedrock/pricing/

203
Q

Amazon Bedrock Provisioned Throughput

A

Provisioned Throughput is a pricing model that reserves a certain amount of capacity in advance for a discounted rate. It is less suitable in this scenario because it is designed for situations where the usage is consistent and predictable. This model involves committing to a certain level of capacity, which may lead to unnecessary costs if the actual usage is lower than anticipated. Since the company lacks clarity on its time commitment and usage patterns, Provisioned Throughput does not offer the flexibility needed.

204
Q

EC2 Spot Instances

A

Spot Instances are a pricing model offered by AWS for EC2 compute instances that lets you use spare EC2 capacity at significantly reduced rates. Spot Instances can be interrupted by AWS with little notice. This is not applicable as a pricing model for Amazon Bedrock. This option just acts as a distractor.

205
Q

EC2 Reserved Instances

A

Reserved Instances offer a lower rate for EC2 compute resources in exchange for a one- or three-year commitment. This is not applicable as a pricing model for Amazon Bedrock. This option just acts as a distractor.

206
Q

Amazon S3

A

Amazon S3, which is a scalable object storage service fully integrated with Amazon Bedrock

Amazon S3 is the recommended storage solution for datasets used in model validation when working with Amazon Bedrock. S3 provides a highly scalable and durable storage platform, and it is natively integrated with Amazon Bedrock, allowing easy access to large datasets required for model customization, training, and validation tasks. It supports a wide range of data formats and allows seamless access from various AWS services.

207
Q

Amazon EFS

A

Amazon EFS, which is a managed file storage service that allows shared access to file data for Amazon Bedrock - This option is incorrect because Amazon EFS is designed for file-based workloads that require shared access across multiple instances, such as content management or web serving. It is not optimized for storing large datasets used in machine learning workflows with Amazon Bedrock, and it does not provide native integration for this purpose.

208
Q

Amazon EBS

A

This option is incorrect because Amazon EBS is designed for block storage and is typically attached to EC2 instances for use cases that require high performance and low latency, such as databases or enterprise applications. It is not suitable or supported for storing datasets used in model validation with Amazon Bedrock, as it lacks the scalability and native integration provided by Amazon S3.

209
Q

Amazon RDS

A

Amazon RDS is designed for managing relational databases and is not intended for storing large, unstructured datasets typically used in machine learning and model validation. It is not integrated with Amazon Bedrock for storing datasets required for model customization and validation tasks.

210
Q

GAN

A

Generative Adversarial Network

The company should use a Generative Adversarial Network (GAN) for creating realistic synthetic data while preserving the statistical properties of the original data

This is the correct answer because GANs are specifically designed for generating synthetic data that is statistically similar to real data. They consist of two neural networks—a generator and a discriminator—that work against each other to create highly realistic synthetic data. GANs have been successfully used in various domains, including image generation, text synthesis, and more, to produce data that retains the underlying patterns and structures of the original dataset, making them highly suitable for this purpose.

via - https://aws.amazon.com/what-is/gan/

211
Q

Support Vector Machines (SVMs)

A

SVMs are used for classification and regression, where the algorithm finds the optimal hyperplane that best separates different classes in the data. SVMs do not generate new data or create synthetic datasets, so they are not suitable for a task that requires generating synthetic data based on existing datasets.

212
Q

Convolutional Neural Network (CNN)

A

CNNs are designed for tasks such as image and video recognition, object detection, and similar applications involving grid-like data (such as pixels in an image). While CNNs are excellent at feature extraction and classification in images, they are not suitable for generating synthetic data, especially for non-visual data types.

213
Q

WaveNet

A

WaveNet is tailored for audio data generation, specifically for tasks such as speech synthesis and audio signal processing. While it is powerful within its specific domain, it is not designed for generating synthetic data outside of audio, making it an unsuitable choice for general-purpose data generation.

214
Q

Domain Adaptation Fine-Tuning

A

Domain Adaptation Fine-Tuning, which involves fine-tuning the model on domain-specific data to adapt its knowledge to that particular domain

Domain Adaptation Fine-Tuning is an effective approach because it takes a pre-trained Foundation Model and further adjusts its parameters using domain-specific data. This process helps the model learn the nuances, terminology, and context specific to the domain, enhancing its ability to generate accurate and relevant outputs in that field. Fine-tuning allows the model to specialize while retaining the general knowledge acquired during initial training.

215
Q

Continued Pre-Training

A

Continued Pre-Training, which involves further training the model on a large corpus of domain-specific data, enhancing its ability to understand domain-specific terms, jargon, and context

Continued Pre-Training is another appropriate strategy for making a Foundation Model an expert in a specific domain. By pre-training the model on a large dataset specifically from the target domain, the model can learn the distinct characteristics, language patterns, and specialized knowledge relevant to that domain. This approach effectively builds upon the model’s existing knowledge, enhancing its domain expertise without starting training from scratch.

via - https://aws.amazon.com/blogs/aws/customize-models-in-amazon-bedrock-with-your-own-data-using-fine-tuning-and-continued-pre-training/

216
Q

Supervised Learning

A

While supervised learning can be used to train models for specific tasks, it is less effective for turning a Foundation Model into a domain expert unless a large amount of labeled domain-specific data is available. This approach is more task-specific and does not generalize as well as fine-tuning or continued pre-training for overall domain expertise.

217
Q

Incremental Learning

A

Incremental Learning is designed to help models adapt to new data over time without forgetting previously learned knowledge, but it is not specifically tailored for domain specialization. It may help in scenarios where the model needs to adapt continuously, but it lacks the focused, intensive learning required to make the model an expert in a specific domain.

218
Q

Reinforcement Learning

A

Reinforcement Learning is not suitable for making a model an expert in a specific domain, as it focuses on learning optimal behaviors through rewards and penalties in interactive scenarios. It does not involve training the model with domain-specific data or knowledge, making it ineffective for enhancing domain expertise.

219
Q

Use case for a generative AI-powered model

A

Using generative AI to create photorealistic images from textual descriptions

This is a legitimate use case for a generative AI model. Generative models such as DALL-E, Midjourney, and Stable Diffusion are designed to transform text prompts into high-quality, photorealistic images. These models use advanced techniques like Generative Adversarial Networks (GANs) or diffusion models to generate novel visual content based on the input description. This capability is at the core of what generative AI is designed to achieve, enabling applications in digital art, marketing, media creation, and other creative industries.

via - https://aws.amazon.com/what-is/generative-ai/

220
Q

Use case for regression models or supervised learning algorithms

A

Utilizing generative AI to predict housing prices based on historical market data

This is not a valid use case for generative AI. Predicting housing prices involves analyzing structured data, such as historical sales data, economic indicators, and location factors, typically using regression models or supervised learning algorithms. These predictive models are designed to find patterns and relationships within data to forecast future values, which is fundamentally different from generating new content.

221
Q

Use case for Long Short-Term Memory (LSTM) networks or ARIMA models

A

Applying generative AI for financial analysis to forecast stock market trends

Financial analysis and stock market forecasting rely on time-series analysis and other statistical methods that are designed to interpret historical data and predict future outcomes. Techniques such as Long Short-Term Memory (LSTM) networks or ARIMA models are commonly used for these tasks.

Generative AI, which is primarily focused on creating new content like text, images, or music, is not suitable for this kind of predictive analysis.

222
Q

Use case for Convolutional Neural Networks (CNNs)

A

Classifying medical images to detect anomalies or diagnose diseases using generative AI

This is another example of a task that is outside the scope of generative AI. Classifying medical images involves discriminative models designed for classification and detection, such as Convolutional Neural Networks (CNNs). These models are trained to recognize patterns in labeled data and are used for diagnostic purposes, not for generating new images or content.

223
Q

Model Explainability - Shapley values

A

Shapley values provide a local explanation by quantifying the contribution of each feature to the prediction for a specific instance

Use Shapley values to explain individual predictions

Shapley values are a local interpretability method that explains individual predictions by assigning each feature a contribution score based on its marginal effect on the prediction. This method is useful for understanding the impact of each feature on a specific instance’s prediction.
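For small feature counts, exact Shapley values can be computed by averaging each feature's marginal contribution over every order in which features are "revealed" (moved from a baseline value to the instance's value). The linear model below is a made-up toy; production tools such as SageMaker Clarify use efficient approximations instead of this brute-force enumeration.

```python
from itertools import permutations

def shapley_values(predict, instance, baseline):
    """Exact Shapley values by brute force: average each feature's marginal
    contribution over all orderings of feature reveals (baseline -> instance)."""
    n = len(instance)
    contrib = [0.0] * n
    orders = list(permutations(range(n)))
    for order in orders:
        current = list(baseline)
        prev = predict(current)
        for i in order:
            current[i] = instance[i]   # reveal feature i
            new = predict(current)
            contrib[i] += new - prev   # marginal contribution in this order
            prev = new
    return [c / len(orders) for c in contrib]

# Hypothetical linear model: prediction = 2*x0 + 3*x1
linear_model = lambda x: 2 * x[0] + 3 * x[1]
phi = shapley_values(linear_model, instance=[1.0, 1.0], baseline=[0.0, 0.0])
```

For a linear model the Shapley value of each feature is simply its coefficient times its deviation from the baseline, and the values sum to the difference between the instance's prediction and the baseline prediction (the efficiency property).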

224
Q

Model Explainability - Partial Dependence Plots (PDP)

A

PDP provides a global explanation by showing the marginal effect of a feature on the model’s predictions across the dataset.

PDP is used to understand broader trends in model behavior.

Partial Dependence Plots (PDP), on the other hand, provide a global view of the model’s behavior by illustrating how the predicted outcome changes as a single feature is varied across its range, holding all other features constant. PDPs help understand the overall relationship between a feature and the model output across the entire dataset.
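The PDP computation itself can be sketched as follows (the model and dataset below are invented toys): for each grid value of the chosen feature, overwrite that feature in every row, predict, and average.

```python
def partial_dependence(predict, dataset, feature_index, grid):
    """For each grid value, set the chosen feature to that value in every
    row (others held as observed) and average the model's predictions."""
    pd_curve = []
    for value in grid:
        preds = []
        for row in dataset:
            modified = list(row)
            modified[feature_index] = value
            preds.append(predict(modified))
        pd_curve.append(sum(preds) / len(preds))
    return pd_curve

# Hypothetical linear model and toy dataset
model_fn = lambda x: 2 * x[0] + 3 * x[1]
data = [[0.0, 1.0], [1.0, 2.0], [2.0, 0.0]]
curve = partial_dependence(model_fn, data, feature_index=0, grid=[0.0, 1.0, 2.0])
```

Plotting the grid against the resulting curve shows the marginal effect of the feature; for this linear toy model the curve is a straight line with slope equal to the feature's coefficient.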

225
Q

Inference - Asynchronous inference

A

Asynchronous inference is the most suitable choice for this scenario. It allows the company to process smaller payloads without requiring real-time responses by queuing the requests and handling them in the background. This method is cost-effective and efficient when some delay is acceptable, as it frees up resources and optimizes compute usage. Asynchronous inference is ideal for scenarios where the payload size is less than 1 GB and immediate results are not critical.

via - https://docs.aws.amazon.com/sagemaker/latest/dg/async-inference.html

226
Q

Inference - Batch inference

A

Batch inference is generally used for processing large datasets all at once. While it does not require immediate responses, it is typically more efficient for handling larger payloads (several gigabytes or more). For smaller payloads of less than 1 GB, batch inference might be overkill and less cost-efficient compared to asynchronous inference.

227
Q

Inference - Real-time inference

A

Real-time inference is optimized for scenarios where low latency is essential, and responses are needed immediately. It is not suitable for cases where the system can afford to wait for responses, as it might lead to higher costs and resource consumption without providing any additional benefit for this particular use case.

228
Q

Inference - Serverless inference

A

Serverless inference is a good choice for workloads with unpredictable traffic or sporadic requests, as it scales automatically based on demand. However, it may not be as cost-effective for scenarios where workloads are predictable, and some waiting time is acceptable. Asynchronous inference provides a more targeted solution for handling delayed responses at a lower cost.

229
Q

Model customization - Continued pre-training

A

Continued pre-training uses unlabeled data to pre-train a model

In the continued pre-training process, you provide unlabeled data to pre-train a model by familiarizing it with certain types of inputs. You can provide data from specific topics to expose the model to those areas. Continued pre-training tweaks the model parameters to accommodate the input data and improve its domain knowledge.

For example, you can train a model with private data, such as business documents that are not publicly available for training large language models. Additionally, you can continue to improve the model by retraining it with more unlabeled data as it becomes available.

https://aws.amazon.com/blogs/machine-learning/best-practices-to-build-generative-ai-applications-on-aws/

230
Q

Model customization - fine-tuning

A

Fine-tuning uses labeled data to train a model

While fine-tuning a model, you provide labeled data to train a model to improve performance on specific tasks. By providing a training dataset of labeled examples, the model learns to associate what types of outputs should be generated for certain types of inputs. The model parameters are adjusted in the process and the model’s performance is improved for the tasks represented by the training dataset.

https://aws.amazon.com/blogs/machine-learning/best-practices-to-build-generative-ai-applications-on-aws/

231
Q

Generative AI model - Transformer models

A

Transformer models use a self-attention mechanism and implement contextual embeddings

Transformer models are a type of neural network architecture designed to handle sequential data, such as language, in an efficient and scalable way. They rely on a mechanism called self-attention to process input data, allowing them to understand and generate language effectively. Self-attention allows the model to weigh the importance of different words in a sentence when encoding a particular word. This helps the model capture relationships and dependencies between words, regardless of their position in the sequence.

Transformer models use self-attention to weigh the importance of different words in a sentence, allowing them to capture complex dependencies. Positional encodings provide information about word order, and the encoder-decoder architecture enables effective processing and generation of sequences. This makes transformers highly effective for tasks like language translation, text generation, and more.

via - https://aws.amazon.com/what-is/generative-ai/
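The self-attention mechanism described above reduces to a small computation: similarity scores between queries and keys, a softmax, and a weighted average of the values. A minimal pure-Python sketch of scaled dot-product attention (single head, no learned projections, toy numbers):

```python
import math

def softmax(xs):
    m = max(xs)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(Q, K, V):
    """Scaled dot-product attention: each output row is a weighted
    average of the value rows, weighted by query-key similarity."""
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# One query attending over two key/value positions
out = self_attention([[1.0, 0.0]],
                     [[1.0, 0.0], [0.0, 1.0]],
                     [[1.0, 2.0], [3.0, 4.0]])
```

Because the query aligns with the first key, the first value row receives more weight; in a real transformer, learned projections and positional encodings feed into this same computation.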

232
Q

Generative AI model - Generative Adversarial Networks (GANs)

A

Generative Adversarial Networks (GANs) work by training two neural networks in a competitive manner. The first network, known as the generator, takes random noise as input and generates fake data samples from it. The second network, called the discriminator, tries to distinguish between real data and the fake data produced by the generator.

233
Q

Generative AI model - Variational autoencoders (VAEs)

A

Variational autoencoders (VAEs) learn a compact representation of data called latent space. You can think of it as a unique code representing the data based on all its attributes. VAEs use two neural networks—the encoder and the decoder. The encoder neural network maps the input data to a mean and variance for each dimension of the latent space, and a point is sampled from that distribution. The decoder neural network takes the sampled point from the latent space and reconstructs it back into data that resembles the original input.

234
Q

Generative AI model - Diffusion models

A

Diffusion models work by first corrupting data with noise through a forward diffusion process and then learning to reverse this process to denoise the data. They use neural networks to predict and remove the noise step by step, ultimately generating new, structured data from random noise.
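The forward (noising) process can be sketched in a few lines; the learned reverse (denoising) network is the hard part and is omitted here. The schedule values below are invented for illustration:

```python
import math
import random

def forward_diffuse(x0, betas, rng):
    """Forward diffusion: at each step, shrink the signal slightly and
    mix in Gaussian noise; after many steps only noise remains."""
    x = list(x0)
    for beta in betas:
        x = [math.sqrt(1 - beta) * xi + math.sqrt(beta) * rng.gauss(0, 1)
             for xi in x]
    return x

# After T steps the surviving signal fraction is prod(sqrt(1 - beta)),
# which for 1000 steps at beta = 0.02 is vanishingly small
signal = [5.0, -5.0, 5.0]
noised = forward_diffuse(signal, betas=[0.02] * 1000, rng=random.Random(0))
```

Training a diffusion model means teaching a network to predict and subtract the noise added at each step, so that running the chain backwards from pure noise produces structured data.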

235
Q

Embedding models

A

Embedding models are algorithms trained to encapsulate information into dense representations in a multi-dimensional space. Data scientists use embedding models to enable ML models to comprehend and reason with high-dimensional data.

https://aws.amazon.com/what-is/embeddings-in-machine-learning/
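Reasoning over embeddings typically means comparing vectors by cosine similarity. The three-dimensional vectors below are invented for illustration; real embedding models produce hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors
    (1.0 means identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Invented 3-d "embeddings" for three words
king = [0.9, 0.8, 0.1]
queen = [0.85, 0.82, 0.12]
apple = [0.1, 0.2, 0.95]
```

Semantically related items land close together in the space, so their cosine similarity is higher than that of unrelated items.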

236
Q

Embedding Model - BERT

A

Bidirectional Encoder Representations from Transformers (BERT)

Embedding models are algorithms trained to encapsulate information into dense representations in a multi-dimensional space. Data scientists use embedding models to enable ML models to comprehend and reason with high-dimensional data.

BERT is the correct answer because it is specifically designed to capture the contextual meaning of words by looking at both the words that come before and after them (bidirectional context). Unlike older models that use static embeddings, BERT creates dynamic word embeddings that change depending on the surrounding text, allowing it to understand the different meanings of the same word in various contexts. This makes BERT ideal for tasks that require understanding the nuances and subtleties of language.

237
Q

Embedding Model - PCA

A

Principal Component Analysis (PCA) - PCA is a statistical method used for reducing the dimensions of large datasets to simplify them while retaining most of the variance in the data. While it can be applied to various fields, including image compression and data visualization, PCA does not understand or differentiate the contextual meanings of words in natural language processing. Thus, it is not a suitable choice for understanding word meanings in different phrases.
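Although PCA is the wrong tool for contextual word meaning, its mechanics are simple: find the direction of maximum variance in centered data. A minimal sketch using power iteration on the covariance matrix (data invented; production code would use a library such as scikit-learn):

```python
def first_principal_component(data, iters=50):
    """Power iteration on the covariance matrix: converges to the
    direction of maximum variance (the first principal component)."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    # sample covariance matrix of the centered data
    cov = [[sum(r[i] * r[j] for r in centered) / (n - 1) for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]   # renormalize each iteration
    return v

# Points lying exactly on y = 2x: the component should point along (1, 2)
pts = [[float(x), 2.0 * x] for x in range(10)]
v = first_principal_component(pts)
```

Projecting onto this direction keeps most of the variance in fewer dimensions, which is exactly the dimensionality reduction described above.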

238
Q

Embedding Model - Word2Vec

A

Word2Vec is an early embedding model that creates vector representations of words based on their co-occurrence in a given text. However, it uses static embeddings, meaning each word has a single vector representation regardless of the context. This limitation makes Word2Vec less effective at differentiating words with multiple meanings across different phrases since it cannot adjust the embedding based on context, unlike BERT.

239
Q

Embedding Model - SVD

A

Singular Value Decomposition (SVD) - SVD is a matrix decomposition method used in various applications like data compression and noise reduction. Although it can be part of older methods for text analysis, such as Latent Semantic Analysis (LSA), it is not designed to handle the dynamic, context-dependent meanings of words in sentences. Therefore, it is not suitable for differentiating contextual meanings of words across various phrases.

240
Q

Provisioned Throughput

A

The company should use Provisioned Throughput mode, which allows it to reserve a specific amount of capacity in advance

With fine-tuning, you can increase model accuracy by providing your own task-specific labeled training dataset and further specialize your FMs. With continued pre-training, you can train models using your own unlabeled data in a secure and managed environment with customer managed keys. Continued pre-training helps models become more domain-specific by accumulating more robust knowledge and adaptability—beyond their original training.

Once the fine-tuning job is complete, you receive a unique model ID for your custom model. Your fine-tuned model is stored securely by Amazon Bedrock. To test and deploy your custom model, you need to purchase Provisioned Throughput. This mode is designed for situations with a predictable, sustained workload, such as serving consistent inference traffic from a custom model.

https://aws.amazon.com/blogs/aws/customize-models-in-amazon-bedrock-with-your-own-data-using-fine-tuning-and-continued-pre-training/

241
Q

Amazon Bedrock playground

A

The Amazon Bedrock playgrounds provide you a console environment to experiment with running inference on different models and with different configurations, before deciding to use them in an application. There are playgrounds for text, chat, and image models. Within each playground you can enter prompts and experiment with inference parameters. Prompts are usually one or more sentences of text that set up a scenario, question, or task for a model. You cannot use Amazon Bedrock playground to facilitate fine-tuning of the model. This option acts as a distractor.

242
Q

Inference - Batch Inference

A

With batch inference, you can run multiple inference requests asynchronously to process a large number of requests efficiently by running inference on data that is stored in an S3 bucket. You can use batch inference to improve the performance of model inference on large datasets. You cannot use batch inference to facilitate fine-tuning of the model. This option acts as a distractor.

243
Q

ML Algorithm - Reinforcement Learning (RL)

A

The company should leverage reinforcement learning (RL), where rewards are generated from positive customer feedback to train the chatbot in optimizing its responses

Reinforcement learning is the most suitable approach for self-improvement in this context. By leveraging RL, the chatbot can learn from customer interactions in real-time. Positive customer feedback serves as a reward signal that guides the chatbot to improve its responses over time. The chatbot adapts its behavior based on rewards or penalties, refining its conversational skills through continuous feedback loops. This dynamic learning process is effective for environments where responses need to be optimized based on direct user interaction and satisfaction.

https://aws.amazon.com/what-is/reinforcement-learning/
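The feedback loop can be illustrated with the simplest RL setting, a multi-armed bandit: each candidate response has an unknown reward probability (standing in for positive customer feedback), and epsilon-greedy learning discovers the best one. The reward probabilities below are simulated for the example:

```python
import random

def train_bandit(reward_probs, episodes=5000, eps=0.1, seed=0):
    """Epsilon-greedy bandit: learn a value estimate for each candidate
    response from simulated positive-feedback rewards."""
    rng = random.Random(seed)
    q = [0.0] * len(reward_probs)   # estimated reward per response
    n = [0] * len(reward_probs)     # times each response was tried
    for _ in range(episodes):
        if rng.random() < eps:      # explore a random response
            a = rng.randrange(len(q))
        else:                       # exploit the current best estimate
            a = max(range(len(q)), key=lambda i: q[i])
        r = 1.0 if rng.random() < reward_probs[a] else 0.0
        n[a] += 1
        q[a] += (r - q[a]) / n[a]   # incremental mean update
    return q

# Response 1 yields positive feedback 80% of the time (made-up numbers)
q = train_bandit([0.2, 0.8, 0.5])
```

The agent improves purely from reward signals, with no labeled dataset or retraining cycle, which is the property that makes RL a fit for the chatbot scenario.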

244
Q

ML Algorithm - supervised learning

A

While supervised learning can be effective for training chatbots with labeled data (such as examples of positive and negative customer interactions), it is not ideal for ongoing self-improvement. Supervised learning requires extensive datasets and retraining the model whenever new data is available, making it less adaptive in real-time environments. Additionally, this approach does not directly learn from the immediate feedback provided by customers, which is crucial for dynamic improvement.

245
Q

ML Algorithm - incremental training

A

Incremental training allows a model to update itself with new data while retaining knowledge from old data. However, it may not be sufficient for optimizing chatbot performance in real-time, especially without incorporating direct feedback signals like those in reinforcement learning. Incremental learning is less dynamic than reinforcement learning and may struggle to keep up with fast-changing customer preferences or conversation styles.

246
Q

ML Algorithm - Transfer learning

A

Transfer learning is used when a model trained in one domain or task can benefit from applying its knowledge to a different but related domain. While transfer learning can improve chatbot performance by leveraging pre-trained models, it does not provide the framework for continuous, self-improvement based on ongoing customer interactions. Therefore, it is not the most effective approach for a chatbot seeking to improve through real-time conversations.

247
Q

Securing Generative AI - Building and training a generative AI model from scratch

A

This option involves the company designing, developing, and training a generative AI model entirely from the ground up. It requires the company to take full responsibility for managing the entire machine learning pipeline, including data collection, preprocessing, model design, training, and deployment. Additionally, the company is responsible for securing all the underlying infrastructure, data storage, access controls, and ensuring compliance with relevant security and privacy regulations. This scenario entails the maximum level of security ownership because the company controls all aspects of the AI development lifecycle.

via - https://aws.amazon.com/blogs/security/securing-generative-ai-an-introduction-to-the-generative-ai-security-scoping-matrix/

248
Q

Securing Generative AI - Refining an existing third-party generative AI foundation model by fine-tuning it with data specific to the company

A

In this scenario, the company leverages an existing third-party foundation model and fine-tunes it with its own proprietary data. While this reduces some of the security responsibilities related to the initial model development and training, the company still bears significant security responsibilities. These responsibilities include securing its proprietary data, managing the infrastructure where the fine-tuning occurs, and ensuring compliance with data protection regulations. However, since the foundation model is pre-built by a third party, the overall security ownership is less than when building a model from scratch.

249
Q

Securing Generative AI - Building its own application using an existing third-party generative AI foundation model

A

This option involves the company using a third-party generative AI foundation model to build a custom application. While the company retains some security responsibilities, such as securing the application and managing the infrastructure where it runs, it does not have to manage the security of the model itself or the foundational aspects of the AI service. Therefore, the security ownership is shared with the third-party provider, and the company’s responsibilities are more limited compared to scenarios that involve training or fine-tuning models.

250
Q

Securing Generative AI - Consuming a public third-party generative AI service

A

In this scenario, the company consumes a fully managed, public third-party generative AI service where the third-party provider is responsible for most security aspects, including data protection, model security, and infrastructure management. The company’s responsibilities are limited to securing its own data before sending it to the service and managing user access. This scenario entails the least amount of security ownership since the third-party provider handles the majority of the security controls.

251
Q

AWS intelligent document processing (IDP)

A

AWS intelligent document processing (IDP), with AI services such as Amazon Textract, allows you to take advantage of industry-leading machine learning (ML) technology to quickly and accurately process data from any scanned document or image. Generative artificial intelligence (generative AI) complements Amazon Textract to further automate document processing workflows. Features such as normalizing key fields and summarizing input data support faster cycles for managing document process workflows, while reducing the potential for errors.

252
Q

Generative AI powered summarization chatbot

A

A generative AI-powered summarization chatbot leverages large language models to generate concise summaries of text. With prompt engineering, the summarization chatbot can be tailored to accurately extract detailed key points, entities, or legal clauses from complex legal documents.

253
Q

Amazon Comprehend

A

Amazon Comprehend is an effective choice because it is specifically designed to process large volumes of unstructured text data, such as legal documents, and extract key entities, phrases, and insights. It uses machine learning to accurately identify and extract relevant information, like names, dates, and specific legal clauses, making it well-suited for the law firm’s needs. The service can be integrated with other AWS tools to automate and scale the document review process, thereby enhancing efficiency.

254
Q

Amazon Textract

A

Amazon Textract is also a suitable choice because it is designed to extract structured data from scanned documents, such as PDFs or images, which is common in legal settings. Textract can identify key fields, like dates, names, and amounts, and extract them into a structured format. This makes it particularly useful for handling large volumes of physical or scanned documents where the information is not readily accessible as text. Combined with NLP tools like Amazon Comprehend, Textract can provide a comprehensive solution for extracting both structured and unstructured information from legal documents.

255
Q

Convolutional Neural Network (CNN)

A

CNNs are designed for tasks such as image and video recognition, object detection, and similar applications involving grid-like data (such as pixels in an image). While CNNs are excellent at feature extraction and classification in images, they are not inherently designed for document parsing or extraction tasks.

256
Q

Amazon Personalize

A

Amazon Personalize is designed for building machine learning models to provide personalized recommendations, such as product or content suggestions. It does not have any built-in capabilities for text analysis or document processing, making it completely irrelevant for the law firm’s goal of extracting key points from legal documents.

257
Q

WaveNet

A

WaveNet is tailored for audio data generation, specifically for tasks such as speech synthesis and audio signal processing. It does not have the capabilities to analyze legal documents or extract key information, making it an incorrect choice for this task.

258
Q

Model evaluation

A

Model evaluation is the process of evaluating and comparing model outputs to determine the model that is best suited for a use case

259
Q

Model inference

A

Model inference is the process of a model generating an output (response) from a given input (prompt).

260
Q

Amazon Q Developer

A

Amazon Q Developer is powered by Amazon Bedrock.

Amazon Q Developer is a generative artificial intelligence (AI) powered conversational assistant that can help you understand, build, extend, and operate AWS applications. You can ask questions about AWS architecture, your AWS resources, best practices, documentation, support, and more. Amazon Q is constantly updating its capabilities so your questions get the most contextually relevant and actionable answers.

261
Q

Hyperparameters

A

Hyperparameters are external configuration variables that data scientists use to manage machine learning model training.

They are manually set before training a model.

You can customize the hyperparameters of the training job that are used to fine-tune the model. The hyperparameters available for each fine-tunable model differ depending on the model.

Examples: Epochs, Learning Rate, Batch Size

262
Q

Hyperparameters - Epochs

A

One epoch is one cycle through the entire dataset. Multiple intervals complete a batch, and multiple batches eventually complete an epoch. Multiple epochs are run until the accuracy of the model reaches an acceptable level, or when the error rate drops below an acceptable level.

263
Q

Hyperparameters - Learning rate

A

The amount by which values are changed between epochs. As the model is refined, its internal weights are nudged and error rates are checked to see whether the model improves. A typical learning rate is 0.1 or 0.01: 0.01 is a much smaller adjustment and could cause training to take a long time to converge, whereas 0.1 is much larger and can cause training to overshoot. It is one of the primary hyperparameters that you might adjust when training your model. Note that for text models, a much smaller learning rate (5e-5 for BERT) can result in a more accurate model.

264
Q

Hyperparameters - Batch size

A

The number of records from the dataset that is to be selected for each interval to send to the GPUs for training.
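The three hyperparameters above fit together in a single training loop: epochs control passes over the data, batch size controls how many records feed each gradient step, and the learning rate scales every weight update. A minimal sketch on a toy linear-regression task (the data and defaults are invented for illustration):

```python
def train_linear(data, epochs=200, learning_rate=0.05, batch_size=2):
    """Minibatch gradient descent for y = w*x: each epoch is one pass
    over the dataset, split into batches of `batch_size` records; the
    learning rate scales every weight update."""
    w = 0.0
    for _ in range(epochs):
        for start in range(0, len(data), batch_size):
            batch = data[start:start + batch_size]
            # gradient of mean squared error with respect to w
            grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
            w -= learning_rate * grad
    return w

# Data generated from y = 3x, so training should recover w close to 3
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0), (4.0, 12.0)]
w = train_linear(data)
```

Raising the learning rate too far makes the per-batch updates overshoot, while lowering it slows convergence, matching the learning-rate card above.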

265
Q

Amazon Q

A

Amazon Q is a generative AI–powered assistant that allows you to create pre-packaged generative AI applications

Amazon Q is a generative AI-powered assistant for accelerating software development and leveraging companies’ internal data. Amazon Q generates code, tests, and debugs. It has multistep planning and reasoning capabilities that can transform and implement new code generated from developer requests. Amazon Q also makes it easier for employees to get answers to questions across business data.

You cannot choose the underlying Foundation Model with Amazon Q.

266
Q

Amazon Bedrock

A

Amazon Bedrock provides an environment to build and scale generative AI applications using a Foundation Model (FM)

Amazon Bedrock provides an environment to build and scale generative AI applications with FMs. It is a fully managed service that offers a choice of high-performing FMs from leading AI companies. It also provides a broad set of capabilities around security, privacy, and responsible AI. It also supports fine-tuning, Retrieval Augmented Generation (RAG), and agents that execute tasks.

Amazon Bedrock offers a choice of high-performing Foundation Models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API.

267
Q

AWS DeepRacer

A

The AWS DeepRacer vehicle is a Wi-Fi-enabled, physical vehicle that can drive itself on a physical track by using a reinforcement learning model.

You can manually control the vehicle or deploy a model for the vehicle to drive autonomously.

The autonomous mode runs inference on the vehicle’s compute module. Inference uses images that are captured from the camera that is mounted on the front.

A Wi-Fi connection allows the vehicle to download software. The connection also allows the user to access the device console to operate the vehicle by using a computer or mobile device.

https://docs.aws.amazon.com/deepracer/latest/developerguide/what-is-deepracer.html

268
Q

Amazon SageMaker Clarify

A

SageMaker Clarify is specifically designed to help identify and mitigate bias in machine learning models and datasets. It provides tools to analyze both data and model predictions to detect potential bias, generate reports, and help ensure that models are fair and transparent. It can help identify and measure bias within the data preparation stage and throughout the model’s lifecycle. This capability is essential for building trustworthy AI systems that do not inadvertently discriminate against specific groups.

https://aws.amazon.com/sagemaker/clarify/

269
Q

Hyperparameter Tuning

A

The company should use hyperparameters for model tuning, which involves adjusting parameters such as regularization, learning rates, and dropout rates to enhance the model’s ability to generalize well to new data

Hyperparameter tuning is the most effective solution in this scenario because it allows the company to adjust the settings that control the learning process of the model. By fine-tuning hyperparameters, such as increasing regularization, applying early stopping, or adjusting dropout rates, the model can avoid overfitting the training data and better generalize to new, unseen data in production. This approach helps improve the model’s performance across various data distributions.

via - https://aws.amazon.com/what-is/overfitting/
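One of the regularization knobs mentioned above, the L2 (ridge) penalty, has a closed form in the one-feature case, which makes its effect easy to see: a larger penalty shrinks the learned weight toward zero. The observations below are invented:

```python
def ridge_weight(data, lam):
    """Closed-form one-feature ridge regression for y = w*x: the L2
    penalty `lam` shrinks the weight toward zero, trading training
    fit for better generalization."""
    sxy = sum(x * y for x, y in data)
    sxx = sum(x * x for x, _ in data)
    return sxy / (sxx + lam)

# Invented noisy observations of roughly y = 3x
data = [(1.0, 3.1), (2.0, 5.8), (3.0, 9.3)]
unregularized = ridge_weight(data, 0.0)
regularized = ridge_weight(data, 10.0)
```

Tuning `lam` is exactly the kind of hyperparameter adjustment the card describes: it trades a slightly worse fit on the training data for a model less prone to overfitting.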

270
Q

Data Science Process Phase - Exploratory Data Analysis (EDA)

A

The company is in the Exploratory Data Analysis (EDA) phase, which involves examining the data through statistical summaries and visualizations to identify patterns, detect anomalies, and form hypotheses. This phase is crucial for understanding the dataset’s structure and characteristics, making it the most appropriate description of the current activities. Tasks like calculating statistics and visualizing data are fundamental to EDA, helping to uncover patterns, detect outliers, and gain insights into the data before any modeling is done. EDA serves as the foundation for building predictive models by providing a deep understanding of the data.
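The statistical summaries and outlier checks described here need nothing beyond the standard library for a small sample. A sketch using Python's `statistics` module and the 1.5*IQR outlier rule (sample values invented):

```python
import statistics

def eda_summary(values):
    """Basic EDA stats plus outlier detection via the 1.5*IQR rule."""
    q1, median, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return {
        "mean": statistics.mean(values),
        "median": median,
        "stdev": statistics.stdev(values),
        "outliers": [v for v in values if v < low or v > high],
    }

# Invented sample with one obvious anomaly
summary = eda_summary([10, 12, 11, 13, 12, 95])
```

Summaries like this reveal skew and anomalies (here the value 95) before any modeling begins, which is the point of the EDA phase.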

271
Q

Data Science Process Phase - Data Preparation

A

Data preparation involves cleaning and preprocessing the data to make it suitable for analysis or modeling. This may include handling missing values, removing duplicates, or transforming variables, but it does not typically involve calculating statistics and visualizing data. While data preparation is an important step, it does not encompass the exploratory analysis activities described in the question.

272
Q

Data Science Process Phase - Data Augmentation

A

Data augmentation is a technique used primarily in machine learning to artificially increase the size and variability of the training dataset by creating modified versions of the existing data, such as flipping images or adding noise. It is not related to the tasks of calculating statistics or visualizing data, which are part of EDA.

273
Q

Data Science Process Phase - Model Evaluation

A

Model evaluation refers to assessing the performance of a machine learning model using specific metrics such as accuracy, precision, recall, or F1 score. Model evaluation does not involve exploratory tasks like calculating statistics or visualizing data; instead, it focuses on validating the effectiveness of a trained model. Therefore, this phase does not align with the company’s current activities.
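The metrics named here are straightforward to compute from the counts of true/false positives and negatives. A minimal sketch for the binary case:

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for binary labels (0/1)."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)
    accuracy = sum(1 for t, p in pairs if t == p) / len(pairs)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1
```

Precision penalizes false positives, recall penalizes false negatives, and F1 is their harmonic mean, which is why evaluation reports usually quote more than one of them.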

274
Q

AWS Artifact

A

The company should use AWS Artifact to facilitate on-demand access to AWS compliance reports and agreements, as well as allow users to receive notifications when new compliance documents or reports, including ISV compliance reports, are available

This is the correct option because AWS Artifact is specifically designed to provide access to a wide range of AWS compliance reports, including those from Independent Software Vendors (ISVs). AWS Artifact allows users to configure settings to receive notifications when new compliance documents or reports are available. This capability makes it an ideal choice for a company that needs timely email alerts regarding the availability of ISV compliance reports.

The new third-party reports tab on the AWS Artifact Reports page provides on-demand access to security compliance reports of Independent Software Vendors (ISVs) who sell their products through AWS Marketplace.

You can subscribe to notifications and create configurations to get notified when a new report or agreement, or a new version of an existing report or agreement becomes available on AWS Artifact.

275
Q

AWS Audit Manager

A

AWS Audit Manager is focused on helping users automate evidence collection for auditing purposes and assess their AWS environment against specific compliance frameworks. It does not offer functionality for accessing or receiving notifications about ISV compliance reports from AWS.

276
Q

AWS Trusted Advisor

A

AWS Trusted Advisor is a service that provides guidance to optimize AWS resources by analyzing security, cost, performance, and fault tolerance. However, it does not provide features for managing or receiving notifications about compliance reports, including ISV compliance reports. Therefore, it is not suitable for this requirement.

277
Q

AWS Config

A

AWS Config is a tool for monitoring and recording AWS resource configurations and evaluating them against desired configurations. While it is useful for maintaining configuration compliance, it does not deal with external compliance reports or provide notification capabilities for ISV compliance documents, making it irrelevant to the company’s needs.

278
Q

Bias - Sampling bias

A

Sampling bias occurs when the data used to train the model does not accurately reflect the diversity of the real-world population. If certain ethnic groups are underrepresented or overrepresented in the training data, the model may learn biased patterns, causing it to flag individuals from those groups more frequently. In this scenario, sampling bias leads to discriminatory outcomes and unfairly targets specific groups based on ethnicity.

279
Q

Bias - Measurement bias

A

Measurement bias involves inaccuracies in data collection, such as faulty equipment or inconsistent measurement processes. This type of bias does not inherently affect the demographic composition of the dataset and, therefore, is not directly responsible for bias based on ethnicity in the model’s outputs.

280
Q

Bias - Observer bias

A

Observer bias relates to human errors or subjectivity during data analysis or observation. Since the AI model processes the data autonomously without human intervention, observer bias is not a factor in the biased outcomes of the model.

281
Q

Bias - Confirmation bias

A

Confirmation bias involves selectively searching for or interpreting information to confirm existing beliefs. This type of bias does not apply to the AI system in this scenario, as there is no indication that the model is designed to reinforce any preconceptions or assumptions related to ethnicity.

282
Q

Few-shot prompting

A

The data should include the user input along with the correct user intent, providing examples of user queries and their corresponding intents

This is the correct answer because few-shot prompting involves providing the model with examples that pair the user input with the correct user intent. These examples help the model learn how to map various user queries to their appropriate intents. By repeatedly seeing this pairing, the model can generalize from the examples and improve its ability to recognize user intent in new, unseen queries.
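In practice, few-shot prompting just means assembling the labeled examples into the prompt ahead of the new query. A sketch (the intent labels and template format are invented for illustration):

```python
def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt: labeled (input, intent) pairs
    followed by the new query with the intent left blank."""
    parts = [f"User: {text}\nIntent: {intent}" for text, intent in examples]
    parts.append(f"User: {query}\nIntent:")
    return "\n\n".join(parts)

# Hypothetical labeled examples of user queries and their intents
examples = [
    ("Where is my package?", "order_tracking"),
    ("I want my money back", "refund_request"),
]
prompt = build_few_shot_prompt(examples, "Has my order shipped yet?")
```

The model sees the input-to-intent pattern in the examples and is prompted to continue it for the final, unlabeled query.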

via - https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/what-is.html