Amazon Bedrock Flashcards

Understand the Amazon Bedrock Platform

1
Q

What is Amazon Bedrock?

A

Amazon Bedrock is a fully managed service that makes foundation models (FMs) from Amazon and leading artificial intelligence (AI) companies available through an API. Amazon Bedrock has a broad set of capabilities to build generative artificial intelligence (generative AI) applications with security, privacy, and responsible AI.

2
Q

What does Amazon Bedrock do?

A

Amazon Bedrock is a fully managed service that offers leading foundation models (FMs) and a set of capabilities to quickly build and scale generative artificial intelligence (generative AI) applications. The service also helps ensure privacy and security.

3
Q

What does Amazon Bedrock include?

A
  • Foundation models that include a choice of base FMs and customized FMs
  • Playgrounds for chat, text, and images with quick access to FMs for experimentation and use through the console
  • Safeguards such as watermark detection and guardrails
  • Orchestration and automation for your application with knowledge bases and agents
  • Assessment and deployment with model evaluation and provisioned throughput
4
Q

What are the three key benefits of Amazon Bedrock?

A
  • Efficiently build with FMs
  • Securely build generative AI applications
  • Deliver customized experiences by using your organization’s data
5
Q

Which problems does Amazon Bedrock solve?

A
  • Customers want access to multiple models so they can choose the model that best fits their needs.
  • Customers want the models fine-tuned with their data to be private.
  • Customers do not want to manage their infrastructure.
6
Q

What are the benefits of using Amazon Bedrock?

A
  • Prompts and responses are not shared with AWS or third-party providers.
  • Amazon Bedrock makes it possible to customize foundation models (FMs) privately.
  • Amazon Bedrock provides access to a wide variety of foundation models (FMs) from Amazon and third-party providers.
7
Q

How can you use Amazon Bedrock to architect a generative AI application?

A

(Refer to the chatbot architecture diagram; a rough code sketch of the same idea follows below.)
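
The original card points to an architecture diagram that is not reproduced here. As a rough illustration only, not the diagram itself, the following is a minimal chatbot sketch built on the Amazon Bedrock Converse API through boto3; the Region, model ID, and inference settings are placeholder assumptions.

```python
# Minimal chatbot sketch using the Amazon Bedrock Converse API (boto3).
# Assumes boto3 is installed, AWS credentials are configured, and the chosen
# model (a placeholder Anthropic Claude ID here) is enabled for the account.
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # placeholder; any enabled chat FM works

messages = []  # conversation history kept on the client side


def chat(user_text: str) -> str:
    """Send the running conversation to the FM and return its reply."""
    messages.append({"role": "user", "content": [{"text": user_text}]})
    response = bedrock_runtime.converse(
        modelId=MODEL_ID,
        messages=messages,
        inferenceConfig={"maxTokens": 512, "temperature": 0.5},
    )
    # Keep the assistant turn so later calls have the full context.
    messages.append(response["output"]["message"])
    return response["output"]["message"]["content"][0]["text"]


print(chat("What kinds of questions can you answer?"))
```

In a production chatbot, this loop would typically sit behind an API layer, pull grounding documents from a knowledge base, and apply guardrails; the sketch only shows the model-invocation core.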

8
Q

What are typical use cases for Amazon Bedrock?

A

Text generation

Create new pieces of original content, such as short stories, essays, social media posts, and webpage copy.

Virtual assistants

Build assistants that understand user requests, automatically break down tasks, engage in dialogue to collect information, and take actions to fulfill requests.

Text and image search

Search and synthesize relevant information to answer questions and provide recommendations from a large amount of text and image data.

Text summarization

Get concise summaries of long documents, such as articles, reports, research papers, technical documentation, and books, to quickly and effectively extract important information.

Image generation

Quickly create realistic and visually appealing images for advertising campaigns, websites, presentations, and more.

Guardrails

Implement safeguards customized to your application requirements and responsible artificial intelligence (AI) policies.

9
Q

What else should you know about Amazon Bedrock?

A

Model access
As an Amazon Bedrock user, you will need to request access to FMs before they are available to use. If you want to add models for text, chat, and image generation, you need to choose Model access in the left navigation pane of the Amazon Bedrock console. Model access is subject to IAM permissions for the account and the users. Your use of Amazon Bedrock and its models is subject to the seller’s pricing terms, end-user license agreement (EULA), and Amazon Bedrock service terms.
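
Model access itself is granted in the console, but as a small sketch (assuming boto3 and configured credentials), you can enumerate the foundation models a Region offers with the Bedrock control-plane API:

```python
# Sketch: list the foundation models offered in a Region (boto3).
# This shows what Bedrock offers; access is still requested on the console's
# Model access page and remains subject to IAM permissions.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

for model in bedrock.list_foundation_models()["modelSummaries"]:
    print(model["providerName"], model["modelId"], model.get("outputModalities"))
```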

Shared responsibility model
AWS is responsible for protecting the infrastructure that runs all the services offered by Amazon Bedrock. You are responsible for managing and encrypting your data and applying correct user access controls to data and API calls made to Amazon Bedrock. You are also responsible for the accuracy of the results returned by the customized FMs and third-party FMs.

Quotas
Service quotas, also called limits, are the maximum number of service resources allowed for your AWS account. Amazon Bedrock implements several account-level limits.

Corporate networks
Amazon Bedrock is a fully managed service that is not configured in your virtual private cloud (VPC). You can access the service endpoint from a VPC over PrivateLink. You can also connect your corporate networks to Amazon Bedrock through one of your VPCs.
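
As a hedged sketch of the PrivateLink setup described above, the call below creates an interface VPC endpoint for the Bedrock runtime; the VPC, subnet, and security group IDs are placeholders, and the service name shown is an assumption based on the usual com.amazonaws.<region>.bedrock-runtime naming.

```python
# Sketch: create an interface VPC endpoint so clients in a VPC reach the
# Amazon Bedrock runtime over PrivateLink instead of the public internet.
# All resource IDs below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.bedrock-runtime",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)
print(endpoint["VpcEndpoint"]["VpcEndpointId"])
```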

Regional availability
Currently, Amazon Bedrock is available in select Regions, with plans to expand.

Regional boundaries
With Amazon Bedrock, all your content is processed inside the same Region where the relevant API call was made. Data does not cross any Regional boundaries.

PII data
You can include personally identifiable information (PII) in input prompts to Amazon Titan or third-party models. Third-party models handle that data according to their own terms, and Amazon Titan always accepts it as input.

When Amazon Titan generates output from the prompt, any PII that was also present in the input prompt remains in cleartext in the output. PII in the output that was not present in the input prompt is masked. For more information about how a third-party model handles PII, see that model provider's EULA.

Service enhancement
None of your data is used to improve or enhance the base FMs. Your data is not shared with any model providers.

Model customization
When you fine-tune a base FM, your data is used to fine-tune a copy of the base FM. This FM copy is private to you. Neither the fine-tuned FM nor the data used to fine-tune it is shared with any other customers or model providers.

Identification of these fine-tuned models uses standard Amazon Resource Names (ARNs), and only the AWS account that created the model can access it. Amazon Bedrock does not expose any of the model-specific tuning details, such as the weights, and you cannot export any of the custom model artifacts.
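
A rough sketch of starting a fine-tuning job follows; the job name, role ARN, S3 locations, and hyperparameter keys and values are placeholder assumptions (valid hyperparameters differ by base model).

```python
# Sketch: fine-tune a private copy of a base FM with your own data (boto3).
# The resulting custom model is accessible only to this AWS account.
# Role ARN, S3 URIs, and hyperparameter values are placeholders.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

job = bedrock.create_model_customization_job(
    jobName="titan-text-finetune-demo",
    customModelName="my-private-titan-text",
    roleArn="arn:aws:iam::111122223333:role/BedrockCustomizationRole",
    baseModelIdentifier="amazon.titan-text-express-v1",
    customizationType="FINE_TUNING",
    trainingDataConfig={"s3Uri": "s3://my-bucket/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://my-bucket/output/"},
    hyperParameters={"epochCount": "2", "learningRate": "0.00001"},
)
print(job["jobArn"])
```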

Model evaluation
With model evaluation on Amazon Bedrock, you can efficiently evaluate, compare, and select the best FM for your use case. Amazon Bedrock offers a choice of automatic evaluation and human evaluation.

You can use automatic evaluation with predefined metrics, such as accuracy, robustness, and toxicity. You can use human evaluation workflows for subjective or custom metrics, such as friendliness, style, and alignment to brand voice. For human evaluation, you can use your in-house employees or an AWS managed team as reviewers. You can evaluate the model by using the curated datasets available with the service or by using your own datasets.

Guardrails
Guardrails for Amazon Bedrock evaluates user inputs and FM responses based on use-case-specific policies and provides an additional layer of safeguards regardless of the underlying FM. Guardrails can be used with FMs available on Amazon Bedrock, including Anthropic Claude, Meta Llama, Cohere Command, AI21 Labs Jurassic, and Amazon Titan Text. Guardrails can also be used with fine-tuned FMs and with Agents for Amazon Bedrock. Customers can create multiple guardrails, each configured with a different combination of controls, and use these guardrails across different applications and use cases.
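
As a minimal sketch (assuming a guardrail has already been created and its identifier and version are known; the values below are placeholders), a guardrail can be attached to a Converse call like this:

```python
# Sketch: apply an existing guardrail to a model invocation through the
# Converse API. The guardrail ID/version and model ID are placeholders.
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": "Tell me about your products."}]}],
    guardrailConfig={
        "guardrailIdentifier": "abcd1234efgh",  # placeholder guardrail ID
        "guardrailVersion": "1",
    },
)
print(response["stopReason"])  # reports when the guardrail intervened
print(response["output"]["message"]["content"][0]["text"])
```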

Watermark detection
Watermark detection is a safeguard feature in Amazon Bedrock. It helps detect whether an image was generated by an Amazon Titan Image Generator model on Bedrock.

10
Q

How much does Amazon Bedrock cost?

A

With the On-Demand mode, you pay for only what you use, with no time-based term commitments. For text generation models, you are charged for every input token processed and every output token generated. For embeddings models, you are charged for every input token processed. A token, typically a few characters, is the basic unit that a model uses to understand user input and prompts and to generate results. For image generation models, you are charged for every image generated.
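
As a worked illustration of the On-Demand billing unit (the per-1,000-token prices below are made-up placeholders, not actual Amazon Bedrock pricing):

```python
# Worked example of On-Demand token billing. The prices are hypothetical
# placeholders; check the Amazon Bedrock pricing page for real rates.
PRICE_PER_1K_INPUT_TOKENS = 0.0003   # hypothetical USD
PRICE_PER_1K_OUTPUT_TOKENS = 0.0015  # hypothetical USD

input_tokens = 2_000   # tokens in the prompt
output_tokens = 500    # tokens in the generated response

cost = (input_tokens / 1_000) * PRICE_PER_1K_INPUT_TOKENS \
     + (output_tokens / 1_000) * PRICE_PER_1K_OUTPUT_TOKENS
print(f"Estimated request cost: ${cost:.6f}")  # $0.001350 at these placeholder rates
```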

With the Provisioned Throughput mode, you purchase model units for a specific base or custom model. This mode is primarily designed for large, consistent inference workloads that need guaranteed throughput. Custom models can be accessed only by using Provisioned Throughput. A model unit provides a certain throughput, measured as the maximum number of input or output tokens processed per minute.
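
A sketch of purchasing Provisioned Throughput programmatically follows; the model ID, name, and number of model units are placeholders, and commitment terms (omitted here) affect pricing.

```python
# Sketch: purchase Provisioned Throughput model units for a model (boto3).
# Model ID, provisioned model name, and unit count are placeholders.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

pt = bedrock.create_provisioned_model_throughput(
    provisionedModelName="my-provisioned-titan",
    modelId="amazon.titan-text-express-v1",
    modelUnits=1,
)
print(pt["provisionedModelArn"])  # use this ARN as the modelId when invoking
```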

11
Q

What are the basic technical concepts of Amazon Bedrock?

A

You can use Amazon Bedrock from the AWS Management Console or with an API.

AWS Console
You can use the Amazon Bedrock playgrounds to interact with FMs to generate text or an image or to have a conversation using chat. Amazon Bedrock supports the selection of an FM from a set of model providers.

By using the playgrounds in Amazon Bedrock, you can submit a natural language command (prompt) to the FM and get a response or an answer. You can influence the response from the model by adjusting model inference parameters, such as the temperature, so that the answer can vary from being factual to creative. You can provide prompts to generate text, generate images, summarize text, receive answers to questions, or have a conversation by using chat.

Using the Amazon Bedrock console, you can use capabilities such as safeguards, orchestration, model assessment, and deployments.

AWS API
You can use a single Amazon Bedrock API to securely access FMs. By using the same API, you can privately pass prompts and responses between the user and the FM. The Amazon Bedrock API can be used through the AWS SDK to build a generative AI application and to integrate with other AWS services.
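
As a minimal sketch of the single-API pattern described above (assuming boto3, configured credentials, and access to the Amazon Titan Text model shown; other providers use different request body schemas):

```python
# Sketch: pass a prompt to an FM and read the response with the Bedrock
# runtime InvokeModel API. The body follows the Amazon Titan Text format.
import json

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "inputText": "Summarize the benefits of managed AI services in two sentences.",
    "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.7},
}
response = bedrock_runtime.invoke_model(
    modelId="amazon.titan-text-express-v1",
    contentType="application/json",
    accept="application/json",
    body=json.dumps(body),
)
result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```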

12
Q

How do you interact with the Amazon Bedrock playground?

A

You can access Amazon Bedrock with the AWS Management Console to use the text, chat, or image playground. In the text or image playground, you can choose an FM, enter a prompt in the text field, and run the prompt to generate a response.

In the chat playground, you can interact with the FM of your choice through a conversational interface. In the image playground, you can use the Amazon Titan Image Generator model or a Stable Diffusion FM for text-to-image generation.

13
Q

What inference parameters can you control in the Amazon Bedrock playgrounds?

A

The following parameters control randomness and diversity:

Temperature: LLMs use probability to construct the words in a sequence. For any given sequence, there is a probability distribution of options for the next word in the sequence. When you set the temperature closer to 0, the model tends to select the higher-probability words. When you set the temperature further from 0, the model might select a lower-probability word.

Top P: This parameter restricts token selection to the smallest set of candidate tokens whose combined (cumulative) probability exceeds the Top P value. A higher value for Top P, such as 0.9, means the output is chosen from a larger pool of tokens, which increases diversity; however, too high a value can make the output incoherent. Lower values shrink the pool of candidate tokens, which makes the next token more predictable.

The following parameters control length:

Response length: The response length configures the maximum number of tokens to use in the generated response.

Stop sequences: A stop sequence is a sequence of characters. If the model encounters a stop sequence, it stops generating further tokens. Different models support different types of characters in a stop sequence and different maximum sequence lengths and might support the definition of multiple stop sequences.
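
Outside the playground, the same knobs can be set on an API request. A sketch using the Converse API's inferenceConfig follows; the model ID and parameter values are placeholder assumptions.

```python
# Sketch: set temperature, Top P, response length, and stop sequences on a
# request via the Converse API. Model ID and values are placeholders.
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.converse(
    modelId="amazon.titan-text-express-v1",
    messages=[{"role": "user", "content": [{"text": "Write a haiku about cloud computing."}]}],
    inferenceConfig={
        "temperature": 0.2,         # closer to 0 -> favor higher-probability tokens
        "topP": 0.9,                # sample from the smallest set with cumulative probability > 0.9
        "maxTokens": 100,           # cap on the response length in tokens
        "stopSequences": ["\n\n"],  # stop generating when this sequence appears
    },
)
print(response["output"]["message"]["content"][0]["text"])
```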
