Amazon Bedrock Getting Started Flashcards

1
Q

Bedrock capabilities

A

Foundation models that include a choice of base FMs and customized FMs

Playgrounds for chat, text, and images with quick access to FMs for experimentation and use through the console

Safeguards such as watermark detection and guardrails

Orchestration and automation for your application with knowledge bases and agents

Assessment and deployment with model evaluation and provisioned throughput

2
Q

True or False: You can access Bedrock FMs through a single API.

A

True. You can use a single API to securely access customized FMs and FMs provided by Amazon and other AI companies. By using the same API, you can privately and more efficiently pass prompts and responses between the user and the FM.
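A minimal sketch of the idea (assumptions: boto3 is installed and credentials are configured; the model IDs shown are illustrative): the same Converse request shape serves any Bedrock FM, and only the modelId changes.

```python
def build_request(model_id: str, prompt: str) -> dict:
    # One request shape for the Bedrock Converse API, reused for every FM.
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
    }

# Same structure, different foundation models:
titan = build_request("amazon.titan-text-express-v1", "Define RAG in one sentence.")
claude = build_request("anthropic.claude-3-haiku-20240307-v1:0", "Define RAG in one sentence.")

# With credentials and model access in place, the invocation is identical either way:
# import boto3
# client = boto3.client("bedrock-runtime")
# reply = client.converse(**titan)["output"]["message"]["content"][0]["text"]
```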

3
Q

Can you establish private connectivity between your on-premises network and Bedrock?

A

You can use AWS PrivateLink with Amazon Bedrock to establish private connectivity between your FMs and on-premises networks without exposing your traffic to the internet.

4
Q

Do you need to manage infrastructure with Bedrock?

A

With the Amazon Bedrock serverless experience, you don’t need to manage the infrastructure. You can fine-tune and deploy FMs without creating instances, implementing pipelines, or setting up storage.

5
Q

Knowledge Bases for Amazon Bedrock supports popular databases for vector storage, including

A

the vector engine for Amazon OpenSearch Serverless, Pinecone, Redis Enterprise Cloud, Amazon Aurora, and MongoDB. If you do not have an existing vector database, Amazon Bedrock creates an OpenSearch Serverless vector store for you.

6
Q

Continued pre-training is available for

A

Amazon Titan Text Express and Amazon Titan models on Amazon Bedrock. This type of training adapts the model from a general domain to a more specific domain, such as medicine, law, or finance, while preserving most of the capabilities of the Amazon Titan base model.

7
Q

You can fine-tune an FM by using Amazon Bedrock and providing your own

A

labeled training dataset to improve the model’s performance on specific tasks. Amazon Bedrock makes a separate copy of the base FM and trains this private copy of the model.

8
Q

For prompt-based learning and RAG, you are not customizing the FM. However, when you fine-tune an FM,

A

you are customizing the FM and creating a private copy of the FM.

9
Q

With Amazon Bedrock, all your content is processed inside

A

the same Region where the relevant API call was made. Data does not cross any Regional boundaries.

10
Q

True or False: All of your data is used to improve or enhance the base FMs, and your data is shared with model providers.

A

False. None of your data is used to improve or enhance the base FMs. Your data is not shared with any model providers.

11
Q

When you fine-tune a base FM, your data is used to fine-tune a copy of the base FM.

A

This FM copy is private to you. Neither the fine-tuned FM nor the data used to fine-tune it is shared with any other customers or model providers.

Identification of these fine-tuned models uses standard Amazon Resource Names (ARNs), and only the AWS account that created the model can access it. Amazon Bedrock does not expose any of the model-specific tuning details, such as the weights, and you cannot export any of the custom model artifacts.

12
Q

Watermark Detection

A

Watermark detection is a safeguard feature in Amazon Bedrock. It helps detect whether an image was generated by an Amazon Titan Image Generator model on Bedrock.

13
Q

Bedrock Charges - On-Demand

A

For text generation models, you are charged for every input token processed and every output token generated. For embeddings models, you are charged for every input token processed. A token consists of a few characters and refers to the basic unit that a model uses to understand user input and prompts to generate results. For image generation models, you are charged for every image generated.
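A toy cost calculation for the on-demand model. The per-1,000-token prices below are hypothetical placeholders to show the arithmetic, not real Bedrock rates.

```python
# Illustrative only: substitute the actual per-model rates from AWS pricing.
PRICE_PER_1K_INPUT = 0.0008   # USD per 1,000 input tokens (placeholder)
PRICE_PER_1K_OUTPUT = 0.0016  # USD per 1,000 output tokens (placeholder)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    # On-demand text generation bills input and output tokens separately.
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT \
         + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT

# 2,000 input tokens + 500 output tokens at the placeholder rates:
cost = request_cost(2000, 500)  # 0.0016 + 0.0008 = 0.0024 USD
```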

14
Q

Bedrock Charges - Provisioned Throughput

A

With this mode, you can purchase model units for a specific base or custom model. The Provisioned Throughput mode is primarily designed for large, consistent inference workloads that need guaranteed throughput. Custom models can be accessed only by using Provisioned Throughput. A model unit provides a certain throughput, which is measured by the maximum number of input or output tokens processed each minute.

15
Q

Playgrounds available in Amazon Bedrock through the AWS Management Console

A

The text, chat, and image playgrounds.

16
Q

Temperature

A

LLMs use probability to construct the words in a sequence. For any given sequence, there is a probability distribution of options for the next word in the sequence. When you set the temperature closer to 0, the model tends to select the higher-probability words. When you set the temperature further from 0, the model might select a lower-probability word.
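The effect can be sketched with temperature-scaled softmax over a toy logit vector. This is an illustration of the mechanism, not Bedrock's internal code; real models apply it across their whole vocabulary before sampling.

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by temperature, then normalize into probabilities.
    scaled = [l / temperature for l in logits]
    m = max(scaled)                            # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                       # toy next-token scores
cool = softmax_with_temperature(logits, 0.2)   # near one-hot: top word dominates
warm = softmax_with_temperature(logits, 2.0)   # flatter: low-probability words viable
```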

17
Q

Top P

A

This parameter controls choosing from the smallest number of tokens where the combined, or cumulative, probability of the tokens exceeds the Top P parameter. A higher value for Top P, such as 0.9, implies that the output will be chosen at random from a larger number of tokens, which increases diversity. However, a higher value can cause the output to become incoherent. Lower values decrease the number of tokens available for selection, which increases the predictability of the next token.
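A toy nucleus-selection sketch over a made-up next-token distribution, showing how Top P picks the smallest set of tokens whose cumulative probability exceeds the threshold (illustrative only; models apply this internally during sampling).

```python
def nucleus(probs: dict, top_p: float) -> list:
    # Rank tokens by descending probability; keep the smallest prefix
    # whose cumulative probability exceeds top_p.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append(token)
        cumulative += p
        if cumulative > top_p:
            break
    return kept

dist = {"cat": 0.5, "dog": 0.3, "fish": 0.15, "axolotl": 0.05}
high = nucleus(dist, 0.9)  # larger pool -> more diverse output
low = nucleus(dist, 0.4)   # smaller pool -> more predictable output
```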

18
Q

Response Length

A

The response length configures the maximum number of tokens to use in the generated response.

19
Q

Stop Sequences

A

A stop sequence is a sequence of characters. If the model encounters a stop sequence, it stops generating further tokens. Different models support different types of characters in a stop sequence and different maximum sequence lengths and might support the definition of multiple stop sequences.
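Cards 16 through 19 map onto the inference configuration that can be passed to Bedrock's Converse API. A sketch under stated assumptions (boto3 installed, credentials and model access configured; the values and model ID are illustrative):

```python
# Illustrative inference configuration for the Bedrock Converse API.
inference_config = {
    "temperature": 0.2,               # closer to 0 -> prefer high-probability words
    "topP": 0.9,                      # nucleus sampling cutoff
    "maxTokens": 512,                 # response length cap
    "stopSequences": ["END_OF_CARD"], # generation halts at this sequence
}

# With credentials in place, the config is passed alongside the messages:
# import boto3
# client = boto3.client("bedrock-runtime")
# client.converse(
#     modelId="amazon.titan-text-express-v1",
#     messages=[{"role": "user", "content": [{"text": "Hello"}]}],
#     inferenceConfig=inference_config,
# )
```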
