Prompt Engineering Flashcards

Understand all intricacies, processes and implementations of Prompt Engineering

1
Q

What are the principles of prompting?

A
  1. Write Clear and Specific Instructions
  2. Give the model time to “think”
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
2
Q

What are some examples/guidelines of principle 1?

A

Clear Prompts are not necessarily short

use delimiters to clearly indicate distinct parts of the input

Ask for structured output i.e json/html

Check whether conditions are satisfied

Few-Shot Prompting

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
3
Q

What are some examples/guidelines of principle 2?

A

Instruct the model to think longer on the problem

specify steps needed to complete the task

Instruct the model to work out its own solution before rushing to a conclusion

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
4
Q

What are prompt injections? how can we avoid them?

A

When a user gives a malicious prompt/input designed to manipulate the behaviour of a language model in an unintended way.

This can be avoided using delimiters, all input being inserted in delimiters will allow the language model to differentiate between actual input, and instructions from the LLM owner.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
5
Q

What is few-shot prompting?

A

When you include a few examples of input and output pairs directly in the prompt.

Beneficial to guide LLM’s behaviour

Tailor specific outputs

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
6
Q

What is model hallucination? How can we mitigate it?

A

When a Large Language model very confidently fabricates a response. This happens as many people use LLM’s as a search engine, rather than a reasoning mechanism.

We can reduce hallucinations by first finding relevant info, then answering the question. Based on the relevant information.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
7
Q

What is iterative prompt development?

A

It is a life cycle of prompt developing

It starts with the prompt idea then goes to implementation, then after experimental result we do error analysis on the implementation, and then repeat until we are satisfied.

Summary:
Idea ==> Implementation (code/data) ==> Experimental Result ==> error analysis ==> repeat

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
8
Q

How can we take advantage of an LLM’s summarizing ability? What is the disadvantage of using ‘summarize’?

A

LLM’s are very powerful and useful for creating summaries.

They are capable of making a summary within a word limit, as well as with a focus on a particular topic.

However, even when adding the focus, summaries still include non relevant info, so we can use ‘extract’ instead as it will only get relevant info

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
9
Q

How does inference work in LLM’s

A

LLM’s have very strong inference ability. They have the ability to infer sentiment from any given prompt, as well as infer topics (Given a long piece of text, what is this text about?)

They can also index topics, i.e given a list of topics, index them in a given text.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
10
Q

What is 0-shot learning

A

When given no training data, simply just a prompt, an algorithm delivers.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
11
Q

How are LLM’s language translation ability?

A

As LLM’s are trained with sources in many languages, these models have very good translation ability.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
12
Q

What are an LLM’s strongest capabilites?

A

As it is a language model, its capabilities are strongest in language:

  • expanding
  • inferring
  • summarizing
  • transforming
  • translating
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
13
Q

What are the roles within a language model:

A

system, user, assistant

System is provided with initial context

user is the user of the LLM

role: is the role assigned to LLM usually by the assistant

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
14
Q

What is an example of using expanding in LLM’s?

A

generate customer service emails that are tailored to each customer’s review.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
15
Q

What is an example of using summarizing in LLM’s?

A

Summarizing product reviews for sentiment

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
16
Q

What is an example of translation with LLM’s?

A

Imagine you are in charge of IT at a large multinational e-commerce company. Users are messaging you with IT issues in all their native languages. Your staff is from all over the world and speaks only their native languages. You need a universal translator!

17
Q

What is an example of transforming?

A

Transforming a message to be more formal and business-friendly

18
Q

What is an example of inferring?

A

Extracting sentiment from a product review

19
Q

What is prompt engineering?

A

Prompt engineering is the process of designing and optimizing the inputs (prompts) given to a language model to elicit desired outputs or behaviors.

20
Q

What are “prompt templates”?

A

Prompt templates are predefined structures or formats for prompts that can be reused or adapted for different tasks, helping ensure consistency and clarity in the model’s responses.

21
Q

What is the difference between few-shot and one-shot prompting?

A

Few-shot prompting uses multiple examples, while one-shot prompting uses only a single example to guide the model’s response.