Essentials of Prompt Engineering Flashcards

1
Q

Elements of a prompt

A

1) Instructions: what you want the LLM to do
2) Context: external information to guide the model
3) Input data: the input you want a response for
4) Output indicator: the desired output type or format
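The four elements above can be assembled into a single prompt string. A minimal sketch; the labels and layout are illustrative, not a required format:

```python
# Assemble the four prompt elements into one prompt string.
# The section labels below are illustrative; models do not require this exact format.
def build_prompt(instructions, context, input_data, output_indicator):
    return (
        f"{instructions}\n\n"
        f"Context: {context}\n\n"
        f"Input: {input_data}\n\n"
        f"Output format: {output_indicator}"
    )

prompt = build_prompt(
    instructions="Summarize the customer review below.",
    context="The review is for a wireless keyboard sold in our store.",
    input_data="Great keys, but the battery died after two days.",
    output_indicator="One sentence, neutral tone.",
)
print(prompt)
```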

2
Q

Negative prompting

A

Guiding a model toward a desired output by specifying what you don’t want in the output

3
Q

Temperature

A

A value between 0 and 1 that controls the randomness or creativity of the output. A higher temperature makes the model’s output more diverse and unpredictable.
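Under the hood, temperature typically rescales the model’s logits before the softmax step. A minimal pure-Python sketch with made-up logits:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by the temperature, then apply softmax.

    A lower temperature sharpens the distribution (more deterministic);
    a higher temperature flattens it (more random).
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)                        # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                   # made-up next-token logits
cold = softmax_with_temperature(logits, 0.2)   # near-deterministic
hot = softmax_with_temperature(logits, 1.0)    # more evenly spread
print(cold[0], hot[0])  # the top token dominates far more at low temperature
```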

4
Q

Top P

A

A scale from 0 to 1 that controls the diversity of the output by limiting the set of tokens the model can choose from. With a top P of 0.25, the model considers only the tokens that make up the top 25% of the cumulative probability distribution.
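A minimal sketch of top-P (nucleus) filtering over a toy token distribution; the tokens and probabilities are made up:

```python
def top_p_filter(probs, p):
    """Keep the smallest set of tokens whose cumulative probability
    reaches p, then renormalize. `probs` maps token -> probability."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = {}, 0.0
    for token, prob in ranked:
        kept[token] = prob
        cumulative += prob
        if cumulative >= p:
            break
    total = sum(kept.values())
    return {token: prob / total for token, prob in kept.items()}

probs = {"cat": 0.5, "dog": 0.3, "fish": 0.15, "newt": 0.05}
# With p=0.25, "cat" alone (0.5) already covers the top 25% of probability mass,
# so it is the only candidate left after filtering.
print(top_p_filter(probs, 0.25))
```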

5
Q

Top K

A

Top K limits the choice to the k most probable tokens, regardless of their individual probabilities. For instance, if top K is set to 50, the model will only consider the 50 most likely tokens for the next position in the sequence.
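A minimal sketch of top-K filtering over a made-up token distribution:

```python
def top_k_filter(probs, k):
    """Keep only the k most probable tokens, then renormalize."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    total = sum(prob for _, prob in ranked)
    return {token: prob / total for token, prob in ranked}

probs = {"cat": 0.5, "dog": 0.3, "fish": 0.15, "newt": 0.05}
# With k=2 only "cat" and "dog" survive, renormalized to roughly 0.625 / 0.375.
print(top_k_filter(probs, 2))
```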

6
Q

Length

A

The length inference parameter category refers to the settings that control the maximum length of the generated output and specify the stop sequences that signal the end of the generation process

7
Q

Maximum length

A

The maximum length setting determines the maximum number of tokens that the model can generate during the inference process

8
Q

Stop sequences

A

Stop sequences are special tokens or sequences of tokens that signal the model to stop generating further output
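A sketch of how a client might truncate raw model output at the earliest stop sequence; the sequences shown are illustrative:

```python
def apply_stop_sequences(text, stop_sequences):
    """Truncate generated text at the earliest occurrence of any stop sequence."""
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

raw = "Answer: 42\nHuman: next question"
print(apply_stop_sequences(raw, ["\nHuman:", "###"]))  # -> "Answer: 42"
```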

9
Q

zero-shot prompting

A

Zero-shot prompting is a technique where a user presents a task to a generative model without providing any examples or explicit training for that specific task

10
Q

Few-shot prompt

A

Providing a language model with sample inputs and their corresponding desired outputs
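A few-shot prompt can be assembled from example pairs like this; the Input:/Output: labels are one common convention, not a requirement:

```python
# Build a few-shot prompt from (input, output) example pairs,
# ending with the new query so the model completes the final "Output:".
def few_shot_prompt(task, examples, query):
    lines = [task, ""]
    for sample_input, sample_output in examples:
        lines.append(f"Input: {sample_input}")
        lines.append(f"Output: {sample_output}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = few_shot_prompt(
    task="Classify the sentiment as positive or negative.",
    examples=[("I love this phone.", "positive"),
              ("The battery is terrible.", "negative")],
    query="The screen is gorgeous.",
)
print(prompt)
```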

11
Q

Chain of thought prompting

A

Chain-of-thought (CoT) prompting is a technique that divides intricate reasoning tasks into smaller, intermediate steps, for example by adding the phrase “think step by step.” CoT is recommended when tasks involve multiple steps or require a series of logical reasoning steps.
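A minimal sketch of wrapping a question in a chain-of-thought instruction; the exact wording is illustrative:

```python
# Chain-of-thought prompting: ask the model to show its intermediate
# reasoning steps before the final answer. The phrasing is one common choice.
def chain_of_thought_prompt(question):
    return (
        f"{question}\n"
        "Think step by step, showing each intermediate step, "
        "then state the final answer on its own line."
    )

print(chain_of_thought_prompt(
    "A train travels 60 km in 45 minutes. What is its average speed in km/h?"
))
```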

12
Q

3 of the most common types of prompt misuses and risks

A

1) Poisoning, hijacking, and prompt injection
2) Exposure and prompt leaking
3) Jailbreaking

13
Q

Poisoning

A

The intentional introduction of malicious or biased data into the training dataset of a model

14
Q

Hijacking and prompt injection

A

The technique of influencing the outputs of generative models by embedding specific instructions within the prompts themselves.

15
Q

Jailbreaking

A

The practice of modifying or circumventing the constraints and safety measures implemented in a generative model or AI assistant to gain unauthorized access or functionality.

16
Q

Exposure

A

Exposure refers to the risk of exposing sensitive or confidential information to a generative model during training or inference

17
Q

Prompt Leaking

A

Prompt leaking refers to the unintentional disclosure or leakage of the prompts or inputs (regardless of whether these are protected data or not) used within a model.