Module 3 Flashcards
Discover the Art of Prompting
What is prompt engineering?
The practice of developing effective prompts that elicit useful output from generative AI.
What is an LLM?
A large language model (LLM) is an AI model trained on large amounts of text to identify patterns between words, concepts, and phrases, so that it can generate responses to prompts.
How do LLMs learn to generate useful responses to prompts?
LLMs are trained on millions of sources of text, including books, articles, websites, and more, to learn patterns and relationships in human language. The more high-quality data the model receives, the better its performance.
How do LLMs predict the next word in a sequence?
LLMs use statistics to analyze the relationships between words in a sequence and compute probabilities for thousands of possible next words. They predict the next word based on the highest probability.
What is a simple example of how an LLM predicts the next word?
In the incomplete sentence “After it rained, the street was…”, an LLM might predict the next word as “wet” (high probability), “clean” (lower probability), or “dry” (extremely low probability).
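The prediction step above can be sketched as a toy example (the candidate words and probabilities are illustrative, not taken from a real model):

```python
# Toy illustration of next-word prediction. A real LLM computes
# probabilities over thousands of possible tokens; here we hardcode
# three candidates for "After it rained, the street was..."
candidates = {
    "wet": 0.80,    # high probability
    "clean": 0.15,  # lower probability
    "dry": 0.01,    # extremely low probability
}

# The model predicts the candidate with the highest probability.
prediction = max(candidates, key=candidates.get)
print(prediction)  # wet
```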
What are some limitations of LLMs?
Limitations of LLMs include biases in training data, insufficient content about specific topics, and the tendency to hallucinate or generate factually inaccurate text.
What is an example of hallucination in LLMs?
If an LLM provides incorrect information about a company’s history, such as the wrong founding date or number of employees, this is an example of hallucination.
What are the components of a good prompt framework for generative AI?
Task,
Context,
References,
Evaluate,
Iterate.
What are some common uses of LLMs to boost productivity and creativity?
Content creation,
summarization,
classification,
extraction,
translation,
editing,
problem-solving.
What are 3 ways to make prompts more effective?
Consider what you want the LLM to produce - The LLM generates more useful output when your prompt includes a specific instruction, like create, summarize, classify, extract, translate, edit, or solve.
Provide necessary context - Detailed instructions, with specific guidance about the style or format of the output you want, lead to more useful output.
Assign the LLM a role - Giving the LLM a role, job, or function reinforces the purpose of the prompt and helps guide it to produce useful output.
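Putting the three tips together, a prompt might look like this (the bakery scenario and wording are made up for illustration):

```python
# Hypothetical prompt combining a role, a specific instruction (task),
# and context about the desired style and format.
role = "You are a marketing copywriter for a small bakery."
task = "Create three taglines for our new sourdough loaf."
context = "Keep each tagline under eight words, in a warm, playful tone."

# Joining the pieces yields one complete, specific prompt.
prompt = f"{role}\n{task}\n{context}"
print(prompt)
```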
What is an iterative process?
It involves creating a first version, evaluating it, and improving it in subsequent versions until the desired outcome is achieved. Prompt engineering should follow this same cycle.
Why might different LLMs respond differently to similar prompts?
Each LLM is developed with unique training data and programming techniques, leading to variations in background knowledge and response capabilities.
What does the term “shot” mean in prompt engineering?
In prompt engineering, “shot” is a synonym for “example.”
What are the different names for prompting techniques based on the number of examples given to the LLM?
Zero-shot prompting (no examples),
one-shot prompting (one example),
few-shot prompting (two or more examples).
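The three techniques differ only in how many examples the prompt contains, which can be sketched with a made-up sentiment-classification task (the example sentences are invented for demonstration):

```python
# Zero-shot: no examples, just the instruction.
zero_shot = "Classify the sentiment of: 'The service was slow.'"

# One-shot: one worked example before the instruction.
one_shot = (
    "Example: 'I loved the food.' -> positive\n"
    "Classify the sentiment of: 'The service was slow.'"
)

# Few-shot: two or more worked examples before the instruction.
few_shot = (
    "Example: 'I loved the food.' -> positive\n"
    "Example: 'The music was too loud.' -> negative\n"
    "Classify the sentiment of: 'The service was slow.'"
)

# Counting the examples in each prompt distinguishes the techniques.
print(few_shot.count("Example:"))  # 2
```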
What is chain-of-thought prompting?
A technique that involves requesting a large language model to explain its reasoning processes step by step, from input to output.
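In practice, chain-of-thought prompting just means adding an explicit request for step-by-step reasoning to the prompt; the question below is an invented illustration:

```python
# A chain-of-thought prompt asks the model to show its reasoning
# step by step before giving the final answer.
question = (
    "A bakery sold 14 loaves in the morning and 9 in the afternoon. "
    "How many loaves did it sell in total?"
)
cot_prompt = (
    f"{question}\n"
    "Explain your reasoning step by step, then state the final answer."
)
print(cot_prompt)
```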