Optimizing Foundation Models Flashcards

1
Q

Embedding is the process by which

A

text, images, and audio are given numerical representation in a vector space.

2
Q

Embedding is usually performed by

A

a machine learning (ML) model.

3
Q

Enterprise datasets, such as documents, images, and audio, are passed to ML models as tokens and vectorized. These vectors in an n-dimensional space, along with the metadata about them, are stored in purpose-built vector databases for faster retrieval.

A
4
Q

Two words that relate to each other will have similar

A

embeddings.

5
Q

Here is an example of two words: sea and ocean. Their embeddings are randomly initialized and start out dissimilar. As the training progresses, their embeddings become more

A

similar because they often appear close to each other and in similar context.

6
Q

The core function of vector databases is to

A

compactly store billions of high-dimensional vectors representing words and entities. Vector databases provide ultra-fast similarity searches across these billions of vectors in real time.

7
Q

The most common algorithms used to perform the similarity search are

A

k-nearest neighbors (k-NN)

cosine similarity.
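The idea behind both can be sketched in a few lines of pure Python. The vectors below are toy 3-dimensional values made up for illustration; real embeddings come from an ML model and have hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def knn(query, vectors, k=2):
    """Return the k stored items most similar to the query vector."""
    scored = [(cosine_similarity(query, v), name) for name, v in vectors.items()]
    scored.sort(reverse=True)  # highest similarity first
    return [name for _, name in scored[:k]]

# Toy 3-dimensional "embeddings" (illustrative values, not from a real model).
vectors = {
    "sea":   [0.9, 0.1, 0.0],
    "ocean": [0.8, 0.2, 0.1],
    "car":   [0.1, 0.9, 0.4],
}

print(knn([0.85, 0.15, 0.05], vectors, k=2))  # the two water-related words rank first
```

A vector database applies the same similarity logic, but with index structures (for example, approximate nearest-neighbor indexes) so the search stays fast across billions of vectors.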

8
Q

Agents - Intermediary operations

A

Agents can act as intermediaries, facilitating communication between the generative AI model and various backend systems. The generative AI model handles language understanding and response generation. The various backend systems include items such as databases, CRM platforms, or service management tools.

9
Q

Agents - Action launch

A

Agents can be used to run a wide variety of tasks. These tasks might include adjusting service settings, processing transactions, retrieving documents, and more. These actions are based on the user’s specific needs, as understood by the generative AI model.

10
Q

Agents - Feedback integration

A

Agents can also contribute to the AI system’s learning process by collecting data on the outcomes of their actions. This feedback helps refine the AI model, enhancing its accuracy and effectiveness in future interactions.

11
Q

Human evaluation involves real users interacting with the AI model to provide feedback based on their experience. This method is particularly valuable for assessing qualitative aspects of the model, such as the following:

Human evaluation is often used for iterative improvements and tuning the model to better meet user expectations.

A

User experience: How intuitive and satisfying is the interaction with the model from the user’s perspective?
Contextual appropriateness: Does the model respond in a way that is contextually relevant and sensitive to the nuances of human communication?
Creativity and flexibility: How well does the model handle unexpected queries or complex scenarios that require a nuanced understanding?

12
Q

Benchmark datasets, on the other hand, provide a quantitative way to evaluate generative AI models. These consist of predefined data and associated metrics that offer a consistent, objective means to measure model performance, such as

A

Accuracy
Speed and efficiency
Scalability

13
Q

Creating a benchmark dataset is a

A

manual process that is necessary to properly evaluate the performance of LLMs used in RAG systems.

14
Q

In practice, a combination of

A

both human evaluation and benchmark datasets is often used to provide a comprehensive overview of a model’s performance.

15
Q

LLM as a judge

A

a technique that automates the evaluation of LLM performance against a benchmark dataset by using another LLM to score the model’s outputs.

16
Q

Fine-tuning is critical because it helps

A

Increase specificity
Improve accuracy
Reduce biases
Boost efficiency

17
Q

Fine-tuning - Instruction tuning

A

This approach involves retraining the model on a new dataset that consists of prompts followed by the desired outputs. The dataset is structured so that the model learns to follow specific instructions better. This method is particularly useful for improving the model’s ability to understand and execute user commands accurately, making it highly effective for interactive applications like virtual assistants and chatbots.
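As an illustration, instruction-tuning data is often stored as prompt/completion pairs in JSON Lines. The schema below is a common convention sketched for this example, not a requirement of any particular training tool.

```python
import json

# Toy instruction-tuning examples: each record pairs a prompt (instruction)
# with the desired output. The field names here are illustrative; the exact
# schema depends on the fine-tuning tool being used.
examples = [
    {"prompt": "Summarize: The meeting is moved to 3 PM on Friday.",
     "completion": "Meeting rescheduled to Friday, 3 PM."},
    {"prompt": "Translate to French: Good morning.",
     "completion": "Bonjour."},
]

# Write one JSON object per line (JSON Lines format).
with open("instruction_tuning.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```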

18
Q

Fine-tuning - Reinforcement learning from human feedback (RLHF)

A

This approach is a fine-tuning technique where the model is initially trained using supervised learning to predict human-like responses. Then, it is further refined through a reinforcement learning process, where a reward model built from human feedback guides the model toward generating more preferable outputs. This method is effective in aligning the model’s outputs with human values and preferences, thereby increasing its practical utility in sensitive applications.

19
Q

Fine-tuning - Adapting models for specific domains

A

This approach involves fine-tuning the model on a corpus of text or data that is specific to a particular industry or sector. An example of this would be legal documents for a legal AI or medical records for a healthcare AI. This specificity enables the model to perform with a higher degree of relevance and accuracy in domain-specific tasks, providing more useful and context-aware responses.

20
Q

Fine-tuning - Transfer learning

A

This approach is a method where a model developed for one task is reused as the starting point for a model on a second task. For foundational models, this often means taking a model that has been trained on a vast, general dataset, then fine-tuning it on a smaller, specific dataset. This method is highly efficient in using learned features and knowledge from the general training phase and applying them to a narrower scope with less additional training required.

21
Q

Fine-tuning - Continuous pretraining

A

This approach involves extending the training phase of a pre-trained model by continuously feeding it new and emerging data. This approach is used to keep the model updated with the latest information, vocabulary, trends, or research findings, ensuring its outputs remain relevant and accurate over time.

22
Q

The data preparation for fine-tuning is distinct from initial training due to the following reasons

A

Specificity: The dataset for fine-tuning is much more focused, containing examples that are directly relevant to the specific tasks or problems the model needs to solve.
High relevance: Data must be highly relevant to the desired outputs. Examples include legal documents for a legal AI or customer service interactions for a customer support AI.
Quality over quantity: Although the initial training requires massive amounts of data, fine-tuning can often achieve significant improvements with much smaller, but well-curated datasets.

23
Q

Key steps in fine-tuning data preparation: data curation

A

Data curation: Although it is a continuation of the initial data preparation, this step involves a more rigorous selection process to ensure that every piece of data is highly relevant and contributes to the model’s learning in the specific context.

24
Q

ROUGE is a set of metrics used to evaluate

A

automatic summarization of texts, in addition to machine translation quality in NLP.

25
Q

The main idea behind ROUGE is to count the number of

A

overlapping units. This includes words, N-grams, or sentence fragments between the computer-generated output and a set of reference (human-created) texts.
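That counting can be sketched in pure Python for ROUGE-N (recall variant: the fraction of the reference’s N-grams that the generated text covers):

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def rouge_n_recall(candidate, reference, n=1):
    """Fraction of the reference's n-grams that also appear in the candidate."""
    cand = Counter(ngrams(candidate.lower().split(), n))
    ref = Counter(ngrams(reference.lower().split(), n))
    # Clip each overlap count so a repeated n-gram is only credited as often
    # as it occurs in the reference.
    overlap = sum(min(cand[g], count) for g, count in ref.items())
    total = sum(ref.values())
    return overlap / total if total else 0.0

score = rouge_n_recall("the cat sat on the mat", "the cat is on the mat", n=1)
```

Here the reference has six unigrams and five of them appear in the candidate, so the ROUGE-1 recall is 5/6.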

26
Q

ROUGE-N:

A

This metric counts overlapping N-grams between the generated and reference texts (for example, ROUGE-1 uses unigrams and ROUGE-2 uses bigrams). It primarily assesses the fluency of the text and the extent to which it includes key ideas from the reference.

27
Q

ROUGE-L: This metric uses the

A

longest common subsequence between the generated text and the reference texts. It is particularly good at evaluating the coherence and order of the narrative in the outputs.
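The longest common subsequence can be computed with standard dynamic programming; a minimal sketch of a ROUGE-L recall score built on it:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two token lists."""
    # dp[i][j] = LCS length of a[:i] and b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l_recall(candidate, reference):
    """LCS length divided by the reference length (the recall side of ROUGE-L)."""
    cand, ref = candidate.lower().split(), reference.lower().split()
    return lcs_length(cand, ref) / len(ref) if ref else 0.0
```

Unlike ROUGE-N, the subsequence does not need to be contiguous, which is why ROUGE-L rewards outputs that preserve the order of the reference’s content.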

28
Q

BLEU is a metric used to evaluate the

A

quality of text that has been machine-translated from one natural language to another.
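BLEU is precision-oriented: it counts how many of the candidate’s N-grams appear in the reference, clipping repeated N-grams so degenerate outputs cannot score well. A minimal sketch of this modified (clipped) unigram precision, one ingredient of the full BLEU score (which also combines several N-gram orders and a brevity penalty):

```python
from collections import Counter

def clipped_precision(candidate, reference, n=1):
    """BLEU's modified n-gram precision: candidate n-gram counts are
    clipped by how often each n-gram occurs in the reference."""
    def ngrams(tokens):
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    cand = Counter(ngrams(candidate.lower().split()))
    ref = Counter(ngrams(reference.lower().split()))
    clipped = sum(min(count, ref[g]) for g, count in cand.items())
    total = sum(cand.values())
    return clipped / total if total else 0.0

# The classic degenerate case: "the the the the" repeats a common word,
# but clipping caps its credit at the reference's count of "the" (two).
p = clipped_precision("the the the the", "the cat is on the mat")
```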

29
Q

BERTScore is increasingly used alongside traditional metrics like BLEU and ROUGE for a

A

more comprehensive assessment of language generation models. This is especially true in cases where capturing the deeper semantic meaning of the text is important.

Because BERTScore evaluates semantic similarity rather than relying on exact lexical matches, it is capable of capturing meaning in a more nuanced manner.

30
Q

Key steps in fine-tuning data preparation: labeling

A

Labeling: In fine-tuning, the accuracy and relevance of labels are paramount. They guide the model’s adjustments to specialize in the target domain.

31
Q

Key steps in fine-tuning data preparation: governance and compliance

A

Governance and compliance: Considering fine-tuning often uses more specialized data, ensuring data governance and compliance with industry-specific regulations is critical.
Representativeness and bias checking: It is essential to ensure that the fine-tuning dataset does not introduce or perpetuate biases that could skew the model’s performance in undesirable ways.

32
Q

Key steps in fine-tuning data preparation: Feedback integration:

A

For methods like RLHF, incorporating user or expert feedback directly into the training process is crucial. This is more nuanced and interactive than the initial training phase.

33
Q

Amazon Augmented AI (Amazon A2I) is a service that makes it easy to build human review workflows for machine learning predictions.

A

It allows developers to incorporate human review into their machine learning applications to improve model accuracy and ensure compliance with regulatory or business requirements. With Amazon A2I, developers can create human review workflows, manage the workforce, and integrate human review into their applications.

34
Q

Federal Risk and Authorization Management Program (FedRAMP) focuses on cloud services for federal agencies.

A

While relevant for cloud products, it does not provide the comprehensive security standards and guidelines that the NIST framework does for all federal information systems.

35
Q

Amazon SageMaker Clarify is an essential tool for machine learning specialists who are concerned about

A

bias and transparency in their models. It helps detect bias at multiple stages, including during data preparation, after model training, and even during inference.
Amazon SageMaker Clarify enhances model transparency by providing detailed explanations for ML predictions. It offers feature importance scores, illustrating how each input feature influences the model’s predictions. This transparency allows stakeholders to understand the reasoning behind the model’s decisions, thereby building confidence in the model’s outputs. Using SageMaker Clarify, organizations can meet regulatory requirements and ensure that their ML models are fair and interpretable.