Generative AI Model Fundamentals Flashcards

1
Q

What is self-attention?

A

Self-attention is a mechanism used in neural networks, particularly in natural language processing (NLP) models, that allows each element in a sequence (such as a word in a sentence) to focus on, or "attend to," other elements in the same sequence. This lets the model learn relationships between words.
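A minimal sketch of scaled dot-product self-attention, the core computation behind this mechanism. It uses toy hand-picked vectors and plain Python lists for illustration; real models use learned projection matrices and tensor libraries.

```python
import math

def softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(q, k, v):
    """Scaled dot-product attention.
    q, k, v: lists of n vectors (queries, keys, values), each of dim d."""
    d = len(k[0])
    out = []
    for qi in q:
        # Score each key against this query, scaled by sqrt(d).
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d) for kj in k]
        weights = softmax(scores)  # attention weights over the sequence, sum to 1
        # Output is the attention-weighted mix of value vectors.
        out.append([sum(w * vj[t] for w, vj in zip(weights, v))
                    for t in range(len(v[0]))])
    return out

# 3 tokens with dim-2 embeddings; queries = keys = values for simplicity.
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(self_attention(x, x, x))
```

Each output row is a blend of every token's value vector, weighted by how strongly that token "attends to" the others.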

2
Q

What are tokens?

A

Numerical representations of words or parts of words.
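A toy illustration of the idea: a vocabulary maps text pieces to integer token IDs. Real tokenizers (e.g., BPE) split words into subword pieces; this sketch just splits on spaces, and the vocabulary is made up.

```python
# Hypothetical vocabulary mapping words to integer token ids.
vocab = {"the": 0, "cat": 1, "sat": 2, "un": 3, "happy": 4}

def tokenize(text):
    # Look up each whitespace-separated word in the vocabulary.
    return [vocab[w] for w in text.split()]

print(tokenize("the cat sat"))  # [0, 1, 2]
```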

3
Q

What are embeddings?

A

Mathematical representations (vectors) that encode the meaning of a token.
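A sketch of the key property: tokens with similar meanings get similar vectors, which cosine similarity can measure. The embedding values here are invented for illustration; real models learn them during training.

```python
# Toy embedding table: each token id maps to a small vector.
# Values are illustrative only; real embeddings are learned.
embeddings = {
    0: [0.9, 0.1, 0.0],   # "cat"
    1: [0.8, 0.2, 0.1],   # "dog"  -- close to "cat": similar meaning
    2: [0.0, 0.1, 0.95],  # "the"  -- far from both
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

# Semantically related tokens end up with similar vectors.
print(cosine_similarity(embeddings[0], embeddings[1]))  # high
print(cosine_similarity(embeddings[0], embeddings[2]))  # low
```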

4
Q

What is Top P?

A

The model picks the next word from the smallest set of options whose combined probability reaches a threshold "p" (such as 90%), focusing on the most likely words.
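A sketch of nucleus (Top P) sampling over a made-up probability distribution: keep adding the most likely tokens until their cumulative probability reaches p, then sample only from that set.

```python
import random

def top_p_sample(probs, p=0.9):
    """Sample from the smallest set of tokens whose probabilities sum to >= p."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    nucleus, total = [], 0.0
    for tok, pr in ranked:
        nucleus.append((tok, pr))
        total += pr
        if total >= p:
            break  # the "nucleus" now covers probability mass p
    toks, weights = zip(*nucleus)
    return random.choices(toks, weights=weights)[0]

# Illustrative next-token distribution.
probs = {"cat": 0.5, "dog": 0.3, "fish": 0.15, "zebra": 0.05}
print(top_p_sample(probs, p=0.8))  # only "cat" or "dog" can be chosen
```

With p = 0.8, "cat" (0.5) plus "dog" (0.3) already reach the threshold, so unlikely words like "zebra" are excluded.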

5
Q

What is Top K?

A

The model picks the next word from the top “k” most likely options, limiting choices to the top few possibilities.
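A matching sketch for Top K sampling, using the same made-up distribution: keep only the k most likely tokens and sample among them.

```python
import random

def top_k_sample(probs, k=2):
    """Sample only from the k most likely tokens."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    toks, weights = zip(*ranked)
    return random.choices(toks, weights=weights)[0]

# Illustrative next-token distribution.
probs = {"cat": 0.5, "dog": 0.3, "fish": 0.15, "zebra": 0.05}
print(top_k_sample(probs, k=2))  # only "cat" or "dog" can be chosen
```

Unlike Top P, the cutoff here is a fixed count of candidates rather than a probability mass.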

6
Q

What is Temperature?

A

Controls the randomness/creativity of the model's output: lower temperature makes the output more deterministic, higher temperature makes it more varied.
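A sketch of how temperature works mechanically: the model's raw scores (logits) are divided by the temperature before the softmax, so low temperatures sharpen the distribution toward the top choice and high temperatures flatten it. The logit values are made up for illustration.

```python
import math

def apply_temperature(logits, temperature=1.0):
    """Scale logits by 1/T, then softmax: T < 1 sharpens, T > 1 flattens."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    es = [math.exp(s - m) for s in scaled]
    z = sum(es)
    return [e / z for e in es]

logits = [2.0, 1.0, 0.1]  # illustrative raw scores for three candidate tokens
print(apply_temperature(logits, 0.5))  # sharp: top token dominates
print(apply_temperature(logits, 2.0))  # flat: choices become more random
```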

7
Q

What is the context window?

A

The maximum number of tokens an LLM can process at once.

8
Q

What is the Jurassic-2 foundation model good at?

A

Multilingual LLM for text generation

9
Q

What is stability.ai good at?

A

Image generation

10
Q

How can you use a foundation model in SageMaker?

A

By using SageMaker JumpStart.
