Essentials of Prompt Engineering Flashcards

1
Q

Elements of a prompt Instructions:

A

This is a task for the large language model to do. It provides a task description or instruction for how the model should perform.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
2
Q

Elements of a prompt Context:

A

This is external information to guide the model.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
3
Q

Elements of a prompt - Input data:

A

This is the input for which you want a response.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
4
Q

Elements of a prompt Output indicator

A

This is the output type or format.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
5
Q

Negative prompting

A

is used to guide the model away from producing certain types of content or exhibiting specific behaviors. It involves providing the model with examples or instructions about what it should not generate or do.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
6
Q

Good prompt broken down

A

Instructions: Given a list of customer orders and available inventory, determine which orders can be fulfilled and which items have to be restocked.

*
Context: This task is essential for inventory management and order fulfillment processes in ecommerce or retail businesses.

*
Input data:

Orders:

Order 1: Product A (5 units), Product B (3 units)
Order 2: Product C (2 units), Product B (2 units)
Inventory:

Product A: 8 units
Product B: 4 units
Product C: 1 unit
*
Output indicator: Fulfillment status:

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
7
Q

Randomness and diversity

A

This is the most common category of inference parameter. Randomness and diversity parameters influence the variation in generated responses by limiting the outputs to more likely outcomes or by changing the shape of the probability distribution of outputs.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
8
Q

Three most common Randomness and Diversity parameters

A

Temperature, Top P, Top K

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
9
Q

When interacting with FMs, you can often configure these to limit or influence the model response.

A

inference parameters

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
10
Q

Two most common categories of inference parameters

A

Randomness and diversity
length

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
11
Q

Temperature

A

This parameter controls the randomness or creativity of the model’s output. A higher temperature makes the output more diverse and unpredictable, and a lower temperature makes it more focused and predictable. Temperature is set between 0 and 1

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
12
Q

Top p

A

is a setting that controls the diversity of the text by limiting the number of words that the model can choose from based on their probabilities. Top p is also set on a scale from 0 to 1.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
13
Q

Top K

A

Top k limits the number of words to the top k most probable words, regardless of their percent probabilities.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
14
Q

Low top K setting

A

With a low setting, like 10, the model will only consider the 10 most probable words for the next word in the sequence. This can help the output be more focused and coherent

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
15
Q

High top K setting

A

This can lead to more diverse and creative output, because the model has a larger pool of potential words to choose from

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
16
Q

In general low temperature, top p and top k result in

A

Less creative, more coherent, and repetitive responses and vice versa

17
Q

The length inference parameter category refers to the settings that

A

control the maximum length of the generated output and specify the stop sequences that signal the end of the generation process.

18
Q

The maximum length setting determines the

A

maximum number of tokens that the model can generate during the inference process. This parameter helps to prevent the model from generating excessive or infinite output, which could lead to resource exhaustion or undesirable behavior.

19
Q

Stop sequences are

A

special tokens or sequences of tokens that signal the model to stop generating further output. When the model encounters a stop sequence during the inference process, it will terminate the generation regardless of the maximum length setting.

Stop sequences can be predefined or dynamically generated based on the input or the generated output itself. In some cases, multiple stop sequences can be specified, allowing the model to stop generation upon encountering any of the defined sequences.

20
Q

Stop sequences are particularly useful in tasks where

A

the desired output length is variable or difficult to predict in advance. For example, in conversational artificial intelligence (AI) systems, the stop sequence could be an end-of-conversation token or a specific phrase that indicates the end of the response.

21
Q

Zero-shot prompting is a technique where

A

a user presents a task to a generative model without providing any examples or explicit training for that specific task.

21
Q

Improper settings in maximum length and stop sequences can lead to

A

incomplete outputs, or conversely, to excessive and potentially nonsensical generations.

22
Q

Few-shot prompting is a technique that involves providing a language model with

A

contextual examples to guide its understanding and expected output for a specific task. In this approach, you supplement the prompt with sample inputs and their corresponding desired outputs, effectively giving the model a few shots or demonstrations to condition it for the requested task

23
Q

When employing a few-shot prompting technique, consider the following tips:

A

Make sure to select examples that are representative of the task that you want the model to perform and cover a diverse range of inputs and outputs.

Experiment with the number of examples.

24
Q

Chain-of-thought (CoT) prompting is a technique that

A

divides intricate reasoning tasks into smaller, intermediary steps. This approach can be employed using either zero-shot or few-shot prompting techniques. CoT prompts are tailored to specific problem types. To initiate the chain-of-thought reasoning process in a machine learning model, you can use the phrase “Think step by step.” It is recommended to use CoT prompting when the task requires multiple steps or a series of logical reasoning.

25
Q

Poisoning refers to the

A

intentional introduction of malicious or biased data into the training dataset of a model. This can lead to the model producing biased, offensive, or harmful outputs, either intentionally or unintentionally.

26
Q

Hijacking and prompt injection refer to the technique of

A

influencing the outputs of generative models by embedding specific instructions within the prompts themselves

27
Q

Exposure refers to the risk of exposing

A

sensitive or confidential information to a generative model during training or inference. An FM can then inadvertently reveal this sensitive data from their training corpus, leading to potential data leaks or privacy violations

28
Q

Prompt leaking refers to the

A

unintentional disclosure or leakage of the prompts or inputs (regardless of whether these are protected data or not) used within a model. Prompt leaking does not necessarily expose protected data. But it can expose other data used by the model, which can reveal information of how the model works and this can be used against it.

29
Q

Jailbreaking attempts involve

A

crafting carefully constructed prompts or input sequences that aim to bypass or exploit vulnerabilities in the AI system’s filtering mechanisms or constraints. The goal is to “break out” of the intended model limitations.