Prompt Engineering Flashcards
Understand all intricacies, processes and implementations of Prompt Engineering
What are the principles of prompting?
- Write Clear and Specific Instructions
- Give the model time to “think”
What are some examples/guidelines of principle 1?
Clear Prompts are not necessarily short
use delimiters to clearly indicate distinct parts of the input
Ask for structured output i.e json/html
Check whether conditions are satisfied
Few-Shot Prompting
What are some examples/guidelines of principle 2?
Instruct the model to think longer on the problem
specify steps needed to complete the task
Instruct the model to work out its own solution before rushing to a conclusion
What are prompt injections? how can we avoid them?
When a user gives a malicious prompt/input designed to manipulate the behaviour of a language model in an unintended way.
This can be avoided using delimiters, all input being inserted in delimiters will allow the language model to differentiate between actual input, and instructions from the LLM owner.
What is few-shot prompting?
When you include a few examples of input and output pairs directly in the prompt.
Beneficial to guide LLM’s behaviour
Tailor specific outputs
What is model hallucination? How can we mitigate it?
When a Large Language model very confidently fabricates a response. This happens as many people use LLM’s as a search engine, rather than a reasoning mechanism.
We can reduce hallucinations by first finding relevant info, then answering the question. Based on the relevant information.
What is iterative prompt development?
It is a life cycle of prompt developing
It starts with the prompt idea then goes to implementation, then after experimental result we do error analysis on the implementation, and then repeat until we are satisfied.
Summary:
Idea ==> Implementation (code/data) ==> Experimental Result ==> error analysis ==> repeat
How can we take advantage of an LLM’s summarizing ability? What is the disadvantage of using ‘summarize’?
LLM’s are very powerful and useful for creating summaries.
They are capable of making a summary within a word limit, as well as with a focus on a particular topic.
However, even when adding the focus, summaries still include non relevant info, so we can use ‘extract’ instead as it will only get relevant info
How does inference work in LLM’s
LLM’s have very strong inference ability. They have the ability to infer sentiment from any given prompt, as well as infer topics (Given a long piece of text, what is this text about?)
They can also index topics, i.e given a list of topics, index them in a given text.
What is 0-shot learning
When given no training data, simply just a prompt, an algorithm delivers.
How are LLM’s language translation ability?
As LLM’s are trained with sources in many languages, these models have very good translation ability.
What are an LLM’s strongest capabilites?
As it is a language model, its capabilities are strongest in language:
- expanding
- inferring
- summarizing
- transforming
- translating
What are the roles within a language model:
system, user, assistant
System is provided with initial context
user is the user of the LLM
role: is the role assigned to LLM usually by the assistant
What is an example of using expanding in LLM’s?
generate customer service emails that are tailored to each customer’s review.
What is an example of using summarizing in LLM’s?
Summarizing product reviews for sentiment