Prompt Engineering for LLMs Flashcards
RLHF
Reinforcement Learning from Human Feedback
Base Model
Transformer Architecture
Tokenizer
Fine Tuning
Hallucinations
Confabulations
HuggingFace
tiktoken
Autoregressive Models
Temperature and Probabilities
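Temperature rescales the model's logits before they are turned into probabilities: low temperature sharpens the distribution toward the top token, high temperature flattens it. A minimal sketch (the function name and example logits are illustrative):

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw logits into a probability distribution.

    Lower temperature sharpens the distribution (near-greedy);
    higher temperature flattens it (more random sampling).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, temperature=0.5)  # sharper
hot = softmax_with_temperature(logits, temperature=2.0)   # flatter
```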
Intermediate Results
HHH
helpful, honest, and harmless
SFT
Supervised fine-tuning (SFT) model
Reward Model
RLHF model
Instruct Models
ChatML
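ChatML frames each chat message with `<|im_start|>` and `<|im_end|>` special tokens, with the role on the first line. A sketch of the rendering (the helper name is illustrative):

```python
def to_chatml(messages):
    """Render a list of {role, content} dicts in ChatML framing."""
    parts = []
    for m in messages:
        # Each message: <|im_start|>role \n content <|im_end|>
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # A trailing opened assistant turn cues the model to generate.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is RLHF?"},
])
```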
Prompt Injection
An approach to controlling a model's behavior by inserting text into the prompt so that it conditions the model's subsequent output.
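In this sense, injection is simply templating: a line of instruction placed into the prompt steers how the model handles the surrounding text. A hypothetical template (function and wording are illustrative):

```python
def build_prompt(user_text, injected_instruction):
    """Insert an instruction into the prompt to condition behavior."""
    return (
        "You are a customer-support assistant.\n"
        f"{injected_instruction}\n"
        f"User: {user_text}\n"
        "Assistant:"
    )

prompt = build_prompt(
    "Where is my order?",
    "Always answer in one short sentence.",
)
```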
Chat Completion API
feed-forward pass
- context retrieval
- Snippetization
- Snippet scoring and prioritizing
- Prompt assembly
Context retrieval
Snippetizing content
Scoring and prioritizing snippets
Prompt assembly
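The four steps above can be sketched end to end. This is a toy version (word-overlap scoring and a word budget stand in for real relevance scoring and token counting; all names are illustrative):

```python
def snippetize(document, size=2):
    """Split a document (a list of sentences) into snippets."""
    return [" ".join(document[i:i + size]) for i in range(0, len(document), size)]

def score(snippet, query):
    """Toy relevance score: word overlap with the query."""
    return len(set(query.lower().split()) & set(snippet.lower().split()))

def assemble(snippets, query, budget=20):
    """Take the highest-scoring snippets until the word budget is spent."""
    chosen, used = [], 0
    for s in sorted(snippets, key=lambda s: score(s, query), reverse=True):
        words = len(s.split())
        if used + words <= budget:
            chosen.append(s)
            used += words
    return "\n".join(chosen) + f"\n\nQuestion: {query}"

doc = ["FAISS is a similarity search library.", "It was built at Facebook.",
       "Bread needs flour and water.", "Knead the dough well."]
prompt = assemble(snippetize(doc, size=1), "What is FAISS?")
```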
RAG
Retrieval Augmented Generation
contrastive pre-training
FAISS
Facebook AI Similarity Search
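At its simplest, similarity search is brute-force nearest-neighbor lookup over embedding vectors. The sketch below does in plain NumPy what FAISS's flat L2 index computes (FAISS adds heavy optimization and approximate indexes like HNSW on top):

```python
import numpy as np

def l2_search(index_vectors, queries, k):
    """Brute-force L2 nearest-neighbor search over indexed vectors."""
    # Squared L2 distance between every query and every indexed vector.
    dists = ((queries[:, None, :] - index_vectors[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(dists, axis=1)[:, :k]
    return idx, np.take_along_axis(dists, idx, axis=1)

rng = np.random.default_rng(0)
xb = rng.standard_normal((100, 8)).astype("float32")  # indexed embeddings
xq = xb[:3] + 0.01                                    # queries near rows 0-2
ids, dists = l2_search(xb, xq, k=2)
```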
HNSW
Hierarchical Navigable Small World
neural retrieval
lexical retrieval
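The contrast is easy to see in code: lexical retrieval matches surface terms, so a paraphrase scores zero even when it is semantically identical, whereas neural retrieval would embed both texts and score by vector similarity. A toy lexical scorer (real systems such as BM25 add term weighting and length normalization):

```python
def lexical_score(query, doc):
    """Toy lexical retrieval: fraction of query terms found in the doc."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

docs = ["the cat sat on the mat",   # exact term match
        "feline resting on a rug"]  # paraphrase: no shared content terms
scores = [lexical_score("cat mat", d) for d in docs]
```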
hierarchical summarization
rumor problem
In-context learning
The closer a piece of information is to the end of the prompt, the more impact it has on the model.
lost in the middle phenomenon
While the model can easily recall information at the beginning and end of the prompt, it struggles with information placed in the middle.
Conversational Agents
Plan-and-Solve Prompting
Branch-to-Solve Merge
LLM frameworks
LangChain, Semantic Kernel, AutoGen, DSPy, …
DAG
Directed Acyclic Graph
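LLM frameworks often model a prompt pipeline as a DAG: each step depends on earlier steps, and execution order comes from a topological sort. A sketch using the standard library (the step names are illustrative):

```python
from graphlib import TopologicalSorter

# A prompt pipeline as a DAG: each step maps to the steps it depends on.
pipeline = {
    "retrieve": set(),
    "snippetize": {"retrieve"},
    "score": {"snippetize"},
    "assemble": {"score"},
    "call_llm": {"assemble"},
}

# Topological sort yields a valid execution order (no step runs
# before its dependencies; a cycle would raise CycleError).
order = list(TopologicalSorter(pipeline).static_order())
```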
AutoGen