Domain 2: Gen AI Fundamentals (24%) Flashcards
____ is a subset of deep learning. Like deep learning, it is a multipurpose technology, but it generates new, original content rather than finding or classifying existing content.
Generative AI
_____ look for statistical patterns across modalities, such as natural language and images.
Gen AI foundation models
_____ are very large, complex neural network models with billions of parameters that are learned during the training phase, also known as pre-training.
Gen AI foundation models
The more parameters a model has, the more _____ it requires, and the more sophisticated the tasks it can perform.
memory
Gen AI models are built with _____, _____, _____, and _____ all working together.
neural networks, system resources, data, and prompts
The current core element of generative AI is the _____.
transformer network
_____ are pre-trained on massive amounts of text data from the internet, and this pre-training process builds up a broad knowledge base.
Large Language Models (LLMs)
A ____ is natural language text that requests the generative AI model to perform a specific task.
prompt
The process of compressing one model (known as the teacher) into a smaller model (known as the student) that emulates the teacher's predictions as faithfully as possible.
distillation
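A minimal sketch of a distillation training loss in PyTorch (illustrative; the temperature value and loss scaling are assumptions, and real setups often combine this with the ordinary hard-label loss):

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, T=2.0):
        # Soften both output distributions with temperature T, then make the
        # student's distribution match the teacher's via KL divergence.
        student_log_probs = F.log_softmax(student_logits / T, dim=-1)
        teacher_probs = F.softmax(teacher_logits / T, dim=-1)
        # Scale by T*T so gradient magnitudes stay comparable across temperatures.
        return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (T * T)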
A prompt that contains more than one example demonstrating how the large language model should respond.
few-shot prompting
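For instance, a few-shot sentiment prompt might look like this (illustrative example, not from the source material):

    Review: "Great service and friendly staff." Sentiment: positive
    Review: "The food arrived cold." Sentiment: negative
    Review: "I would definitely come back." Sentiment: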
A second, task-specific training pass performed on a pre-trained model to refine its parameters for a specific use case.
fine-tuning
A form of fine-tuning that improves a generative AI model's ability to follow instructions. It involves training the model on a series of instruction prompts, typically covering a wide variety of tasks; the resulting model generates useful responses to zero-shot prompts across a variety of tasks.
instruction tuning
An algorithm for performing parameter-efficient tuning that fine-tunes only a subset of a large language model's parameters.
Low-Rank Adaptation (LoRA)
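A minimal sketch of the low-rank update LoRA learns (illustrative; the layer shape and rank are assumed values). The frozen pre-trained weight W stays fixed, and only the small factors A and B are trained:

    import torch

    d, k, r = 768, 768, 8                          # assumed layer shape and low rank r
    W = torch.randn(d, k)                          # frozen pre-trained weight (not updated)
    A = torch.randn(r, k, requires_grad=True)      # trainable low-rank factor
    B = torch.zeros(d, r, requires_grad=True)      # trainable; zero init so B @ A starts at 0
    W_effective = W + B @ A                        # only d*r + r*k new parameters are learned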
A system that picks the ideal model for a specific inference query.
model cascading
The algorithm that determines the ideal model for inference in model cascading. It is typically a machine learning model that gradually learns how to pick the best model for a given input, though it can sometimes be a simpler, non-machine-learning algorithm.
model router
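A deliberately simple, non-ML router sketch (the model names and the length heuristic are hypothetical; a production router would usually be a learned model):

    def route(prompt: str) -> str:
        # Send short queries to a cheaper model and longer or more
        # open-ended queries to a larger, more capable model.
        if len(prompt.split()) < 20:
            return "small-model"       # hypothetical model identifier
        return "large-model"           # hypothetical model identifier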
A prompt that contains exactly one example demonstrating how the large language model should respond.
one-shot prompting
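For instance, a one-shot translation prompt might look like this (illustrative example):

    Translate English to French.
    English: cheese -> French: fromage
    English: bread -> French: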
A set of techniques for fine-tuning a large pre-trained language model (PLM) more efficiently than full fine-tuning. It typically tunes far fewer parameters than full fine-tuning, yet generally produces a model that performs as well (or almost as well) as one built with full fine-tuning.
parameter-efficient tuning
Models or model components (such as an embedding vector) that have already been trained. Sometimes, you’ll feed pre-trained embedding vectors into a neural network. Other times, your model will train the embedding vectors themselves rather than rely on the pre-trained embeddings.
pre-trained model
The initial training of a model on a large dataset. Some pre-trained models are clumsy giants and must typically be refined through additional training.
pre-training
Any text entered as input to a large language model to condition the model to behave in a certain way. These can be as short as a phrase or arbitrarily long (for example, the entire text of a novel).
prompt
A capability of certain models that enables them to adapt their behavior in response to arbitrary text input (prompts). In this paradigm, a large language model responds to a prompt by generating text.
prompt-based learning
The art of creating prompts that elicit the desired responses from a large language model. Humans perform this work; writing well-structured prompts is an essential part of ensuring useful responses from a large language model.
prompt engineering
Using feedback from human raters to improve the quality of a model's responses. Human raters rank or score the model's outputs, and that feedback serves as a reward signal, so the model adjusts its future responses accordingly.
Reinforcement Learning from Human Feedback (RLHF)
An optional part of a prompt that identifies a target audience for a generative AI model's response. Without it, a large language model provides an answer that may or may not be useful for the person asking the question. With it, a large language model can answer in a way that's more appropriate and more helpful for a specific target audience.
role prompting
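For instance, a role prompt might begin like this (illustrative example):

    You are a physics teacher speaking to a class of ten-year-olds.
    Explain why the sky is blue.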
A technique for tuning a large language model for a particular task without resource-intensive fine-tuning. Instead of retraining all the weights in the model, this technique automatically adjusts a learned prompt (trainable embedding vectors prepended to the input) to achieve the same goal.
soft prompt tuning
A hyperparameter that controls the degree of randomness of a model's output. Higher values result in more random output, while lower values result in less random output.
temperature
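A minimal sketch of how temperature rescales a model's output distribution before sampling (pure Python; the logit values are made up):

    import math

    def softmax_with_temperature(logits, temperature=1.0):
        # Dividing the logits by the temperature flattens the distribution when
        # temperature > 1 (more random) and sharpens it when temperature < 1.
        scaled = [x / temperature for x in logits]
        m = max(scaled)
        exps = [math.exp(x - m) for x in scaled]
        total = sum(exps)
        return [e / total for e in exps]

    print(softmax_with_temperature([2.0, 1.0, 0.1], temperature=0.5))  # sharper
    print(softmax_with_temperature([2.0, 1.0, 0.1], temperature=2.0))  # flatter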
A prompt that does not provide an example of how you want the large language model to respond.
zero-shot prompting
The process that a trained machine learning model uses to draw conclusions from brand-new data.
inference
The output that a large language model generates in response to a prompt.
completion
The amount of textual information that the AI can take into account at any given time when processing language.
context window
The smallest units of text that an AI model processes.
tokens
This breaks text down into those units (tokens) to make it manageable for computational models.
tokenizer
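A minimal sketch using the tiktoken library (an assumption for illustration; each model family ships its own tokenizer and vocabulary):

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")     # one particular tokenizer vocabulary
    ids = enc.encode("Tokenization breaks text into small units.")
    print(ids)               # a list of integer token ids
    print(enc.decode(ids))   # decoding recovers the original text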
Providing examples inside the context window is called _____. With this, you can help an LLM learn more about the task being asked of it by including examples or additional data in the prompt.
in-context learning
The input that you send into your generative model is called the _____, which consists of instructions and content.
prompt
T/F: The larger a model is, the more likely it is to work without additional in-context learning or further training. This observation that a model's capability increases with size has supported the development of larger and larger models.
True
LLMs encode a deep statistical representation of language. This understanding is developed during the _____ phase, when the model learns from vast amounts of unstructured data.
pre-training
During _____, the model's weights are updated to minimize the loss of the training objective, and the encoder generates an embedding, or vector representation, for each token.
pre-training
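A minimal sketch of the per-token embedding lookup mentioned above (PyTorch; the vocabulary size, embedding dimension, and token ids are assumed values):

    import torch
    import torch.nn as nn

    vocab_size, embedding_dim = 50_000, 512              # assumed sizes
    embedding = nn.Embedding(vocab_size, embedding_dim)  # one learnable vector per token id
    token_ids = torch.tensor([[101, 2023, 2003, 102]])   # hypothetical token ids
    vectors = embedding(token_ids)                       # one embedding vector per token
    print(vectors.shape)                                 # torch.Size([1, 4, 512])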