Prompting Flashcards

1
Q

Prompt Engineering

A

Crafting an input prompt; a trial-and-error process. Helps steer the model toward the parts of its training most relevant to the desired response.

2
Q

Key Elements of Context

A
  1. Persona/Role
  2. Purpose
  3. Clear/Precise
  4. Concise
  5. Specific
  6. Details
  7. Examples
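
A minimal sketch pulling these elements into a single prompt; the topic, wording, and word limit are illustrative only:

```python
# Illustrative prompt combining the key context elements above.
prompt = (
    "You are a senior Python instructor.\n"                 # 1. Persona/Role
    "Explain list comprehensions to beginners in a friendly "
    "tone, as a short bulleted list.\n"                     # 2. Purpose: tone, format, audience
    "Cover only list comprehensions, in under 100 words.\n" # 3-5. Clear, concise, specific
    "Assume the reader already knows basic for-loops.\n"    # 6. Details/context
    "Example of the style I want: '- A one-line way to "
    "build a list.'"                                        # 7. Examples
)
print(prompt)
```
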
3
Q

Persona / Role

A

Works best when:
* Quality of answer is subjective.
* You’re trying to emulate a specific style.

4
Q

Purpose

A

Define the purpose of the prompt:
* Tone, e.g., funny, professional, casual?
* Format: paragraph, bulleted list, essay, code, etc.
* Audience: who is this for?

5
Q

Clarity / Precision

A

Get rid of unnecessary information, jargon, confusing phrases, or mistakes.

These can lead the AI down the wrong path.

6
Q

Conciseness

A

Fewer words → fewer tokens → lower cost.
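
A back-of-the-envelope sketch of that chain; the words-to-tokens ratio and the per-token price below are made-up placeholders, not real model pricing:

```python
# Rough illustration: fewer words -> fewer tokens -> lower cost.
PRICE_PER_1K_TOKENS = 0.01   # hypothetical price, not a real rate
WORDS_PER_TOKEN = 0.75       # rough heuristic, not an exact conversion

verbose = ("Could you please, if at all possible, provide me with a detailed "
           "summary of the attached report at your earliest convenience?")
concise = "Summarize the attached report."

def estimate_tokens(text: str) -> int:
    return round(len(text.split()) / WORDS_PER_TOKEN)

for label, prompt in [("verbose", verbose), ("concise", concise)]:
    tokens = estimate_tokens(prompt)
    cost = tokens / 1000 * PRICE_PER_1K_TOKENS
    print(f"{label}: ~{tokens} tokens, ~${cost:.5f}")
```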

7
Q

Chain of Thought Prompting

A

Ask the AI to show its thought process (or provide a worked thought process in the prompt) as it solves a given problem. This technique forces the AI to reason about a problem and solve it the way a person would: by breaking it down and solving it step by step. Also, by asking the AI to show its thought process, we can identify where something went wrong if it gives an incorrect answer. Use “show me your thought process” or similar wording in the prompt.
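
A minimal sketch of such a prompt; the problem and wording are illustrative:

```python
# Chain-of-thought prompt: explicitly ask for the reasoning, not just the answer.
problem = ("A train travels 120 km in 2 hours, then 60 km in 1 hour. "
           "What is its average speed for the whole trip?")
prompt = (
    f"{problem}\n"
    "Show me your thought process step by step, then give the final answer."
)
print(prompt)
```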

8
Q

Use Cases for Chain of Thought Prompting

A
  • solving multi-step/complex problems requiring logical reasoning
  • addressing tasks that require logical deduction
  • solving mathematical or analytical challenges
  • explaining complex concepts or processes
9
Q

Complex Tasks

A

Avoid assuming the AI will work out the structure on its own. Instead, break complex tasks down into simple sub-problems with a logical order and clear relationships (see the sketch after this list).

  • Improves understanding: the AI focuses on one task at a time, reducing the processing required and increasing accuracy.
  • Reduces errors: the step-by-step nature of this technique makes it easier to detect and correct errors during task execution.
  • Better accuracy: clear instructions for each subtask ensure the AI applies the appropriate logic and principles, improving overall accuracy.
  • Better logical flow: a structured, step-by-step approach maintains a coherent and understandable flow in problem-solving.
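
A sketch of one complex request decomposed into ordered sub-problems; the task and wording are illustrative:

```python
# Decompose one complex request into ordered, dependent sub-tasks.
subtasks = [
    "Step 1: List the three key findings in the sales figures below.",
    "Step 2: For each finding, suggest one likely cause.",
    "Step 3: Using Steps 1-2, draft a three-bullet executive summary.",
]
prompt = ("Work through the following steps in order, "
          "showing the result of each step:\n" + "\n".join(subtasks))
print(prompt)
```
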
10
Q

Iteration

A

Trying multiple prompts until you get an adequate response.

11
Q

Specificity

A

Include specific limitations or requirements. However, avoid too much information as it may confuse the AI or cause it to focus on less important aspects of your request.

12
Q

Details & Context

A

Include any details essential to understanding the request, such as historical context or related concepts.

Example: lines of code and error message.
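
A sketch of a debugging prompt that packs in those essential details; the code line and error message are invented for illustration:

```python
# Include the failing code and the exact error so the AI has the needed context.
code_snippet = "average = sum(prices) / len(prices)"
error_message = "ZeroDivisionError: division by zero"
prompt = (
    "This line of Python fails when the list is empty.\n"
    f"Code: {code_snippet}\n"
    f"Error: {error_message}\n"
    "Explain why this happens and suggest a fix."
)
print(prompt)
```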

13
Q

Examples

A

Adding examples clarifies what you mean and gives the AI a baseline to work from.

14
Q

Types of Prompting

A
  1. Zero Shot
  2. One Shot
  3. Few-Shot
15
Q

Zero-Shot Prompting

A

No examples provided. Use for simple tasks where the answer is straightforward.
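
A minimal illustrative zero-shot prompt (no examples, just the task):

```python
# Zero-shot: state the task directly, with no worked examples.
prompt = ("Classify the sentiment of this review as positive or negative: "
          "'The battery died after a week.'")
print(prompt)
```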

16
Q

One-Shot Prompting

A

One example provided. Use when you want to convey a particular format, context, or style.
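
A minimal illustrative one-shot prompt (a single worked example pins down the format):

```python
# One-shot: one worked example shows the desired output format.
prompt = (
    "Convert product names to lowercase, hyphenated slugs.\n"
    "Example: 'Red Winter Jacket' -> 'red-winter-jacket'\n"
    "Now convert: 'Blue Running Shoes' ->"
)
print(prompt)
```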

17
Q

Few-Shot Prompting

A

Allows a model to learn from a very small number of examples or “shots” (typically 2-5) for a new task. Examples are provided directly in the prompt or input to the model, rather than through additional training.

Does not modify the underlying model weights or architecture. Fast to implement and flexible enough to adapt to new tasks or classes on the fly. Can be combined with fine-tuning for better results. Valuable in scenarios where obtaining large labeled datasets is expensive, time-consuming, or impractical. Examples: facial recognition, diagnosing rare diseases, or translation for low-resource languages.
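
A sketch of a few-shot prompt; the tickets and labels are invented for illustration:

```python
# Few-shot: a handful of labeled examples go directly into the prompt;
# no model weights are changed.
examples = [
    ("The checkout page keeps timing out.", "bug"),
    ("Please add a dark mode option.", "feature request"),
    ("How do I export my data as CSV?", "question"),
]
new_ticket = "The app crashes when I upload a photo."

prompt = "Label each support ticket:\n"
prompt += "\n".join(f"Ticket: {text}\nLabel: {label}" for text, label in examples)
prompt += f"\nTicket: {new_ticket}\nLabel:"
print(prompt)
```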

18
Q

Retrieval Augmented Generation (RAG)

A

Combines what the model learned during training with newly retrieved information, blending the two to generate better answers. Fresh data can be pulled in at query time to supplement the training data, so the AI can give more accurate responses to our questions, even when they are about recent events or custom data.

Implementations range from complex, API-based pipelines to simply uploading documents to a chatbot.

19
Q

RAG Components

A
  1. Retriever Model - converts the query into a numerical representation (an embedding) the computer can work with and searches the Knowledge Base to find the most relevant documents.
  2. Knowledge Base - repository of documents, articles, research papers, etc. to supplement training data.
20
Q

RAG Steps

A
  1. Query
  2. Retriever Model searches for relevant text in Knowledge Base
  3. Retriever Model adds relevant text to Query
  4. LLM generates a response.
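
A toy sketch of these four steps; the keyword-overlap retriever and the stubbed call_llm are stand-ins for a real embedding-based retriever, vector store, and LLM API:

```python
# Step 0: a tiny in-memory Knowledge Base.
KNOWLEDGE_BASE = [
    "The Model X-200 supports USB-C charging up to 65W.",
    "Returns are accepted within 30 days of purchase.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Step 2: rank Knowledge Base documents by naive keyword overlap."""
    q_words = set(query.lower().split())
    ranked = sorted(KNOWLEDGE_BASE,
                    key=lambda doc: len(q_words & set(doc.lower().split())),
                    reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    """Step 4 placeholder: swap in a real LLM client here."""
    return f"[LLM response to a prompt of {len(prompt)} characters]"

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))                           # Steps 1-2
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"  # Step 3
    return call_llm(prompt)                                        # Step 4

print(answer("Are returns accepted after purchase?"))
```
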
21
Q

Benefits of RAG

A
  1. Precise, reliable answers from up-to-date data.
  2. Tailors AI responses for personalized information.
  3. Information can be instantly updated without retraining.
  4. Saves time for users and businesses.
  5. Improves user satisfaction due to accurate, relevant responses.
  6. Can ask AI to point out where it found answers/details.
22
Q

Fine Tuning

A

Process of taking a pre-trained model and further training it on a smaller, targeted dataset to adapt it for specific tasks, domains/industries, or company needs, e.g., medical research, legal analysis, customer support. Leverages the general knowledge and skills of a large, powerful model and applies them to a specific field or objective. Reduces training time and computational resources compared to training a model from scratch. There are different types of fine tuning.
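
A minimal sketch of one form of fine-tuning, using Hugging Face Transformers to further train a small pre-trained encoder on a toy support-ticket classification task; the model name, data, and hyperparameters are illustrative, not recommendations:

```python
# Further train a pre-trained model on a small, targeted labeled dataset.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Hypothetical domain-specific examples (0 = account issue, 1 = billing issue).
data = {"text": ["Reset my password please", "Refund my last order"],
        "label": [0, 1]}
dataset = Dataset.from_dict(data)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="finetuned-model",
                         num_train_epochs=1,
                         per_device_train_batch_size=2)
Trainer(model=model, args=args, train_dataset=dataset).train()
```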

23
Q

Benefits of Few-Shot Prompting

A
  1. Fast to implement
  2. Only a slight latency overhead
  3. Flexible to adapt to new tasks; rapid prototyping
  4. Personalization
  5. Doesn’t require a lot of data
  6. Can be integrated with fine-tuning for better results.
  7. Minimal maintenance
24
Q

Downsides of Few-Shot Prompting

A
  1. Slightly higher ongoing cost (longer prompts)
  2. Lower accuracy (fewer examples; may not be up-to-date)
  3. Worse for complex tasks
  4. Slight latency
25
Q

When Few-Shot Prompting is Valuable

A

Scenarios where obtaining large labeled datasets is expensive, time-consuming, or impractical. Quick adaptation to new tasks; rapid prototyping; personalization; rare scenarios.

Ex: facial recognition, diagnosing rare diseases, or translating low-resource languages.

26
Q

When RAG is Valuable

A

Question-answering systems/chatbots with access to specific Knowledge Bases; content generation with factual accuracy; summarization requiring external information; tech support with product docs.

27
Q

When Fine Tuning is Valuable

A

Specialized chatbot or virtual assistant; domain-specific text generation; domain-specific sentiment analysis; customized summarization; translation.

28
Q

Downsides of RAG

A
  1. Increased latency due to retrieval
  2. Medium training cost
  3. Medium implementation speed
  4. Must maintain Knowledge Base
  5. Medium accuracy
29
Q

Downsides of Fine Tuning

A
  1. Requires lots of data (100K+)
  2. Must adjust model weights
  3. Slower speed to implement
  4. Higher training cost
  5. Periodic retraining