Essentials of Prompt Engineering Flashcards
Elements of a prompt
1) Instructions: what you want the LLM to do
2) Context: external information that guides the model
3) Input data: the input you want a response for
4) Output indicator: the output type or format you want (see the sketch below)
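A minimal Python sketch, using made-up example text, showing how the four elements combine into a single prompt string:

    # Hypothetical example values for each prompt element
    instructions = "Summarize the report below in three bullet points."   # what to do
    context = "The report covers Q3 sales for the retail division."       # external guidance
    input_data = "<full report text goes here>"                           # input to respond to
    output_indicator = "Summary:"                                         # desired output type

    prompt = f"{instructions}\n\nContext: {context}\n\n{input_data}\n\n{output_indicator}"
    print(prompt)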
Negative prompting
Guiding a model toward a desired output by stating what you do not want included in the output
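A minimal sketch of a negative prompt; the product description is a made-up example:

    # The second sentence is the negative part: it names what to leave out
    prompt = (
        "Describe the new smartphone for a product listing page. "
        "Do not mention the price, and do not compare it to competitor devices."
    )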
Temperature
A setting between 0 and 1 that controls the randomness or creativity of the output. A higher temperature makes the model's output more diverse and unpredictable; a lower temperature makes it more focused and deterministic.
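A minimal NumPy sketch, with made-up logits, showing how temperature rescales token scores before sampling:

    import numpy as np

    def softmax_with_temperature(logits, temperature):
        # Lower temperature sharpens the distribution; higher temperature flattens it
        scaled = np.array(logits, dtype=float) / temperature
        exp = np.exp(scaled - scaled.max())   # subtract max for numerical stability
        return exp / exp.sum()

    logits = [2.0, 1.0, 0.5]                      # hypothetical scores for three candidate tokens
    print(softmax_with_temperature(logits, 0.2))  # near-deterministic: mass piles on one token
    print(softmax_with_temperature(logits, 1.0))  # flatter distribution: more diverse choices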
Top P
A setting on a scale from 0 to 1 that controls the diversity of the text by limiting the set of words the model can choose from. With a top P of 0.25, the model only considers the most probable words whose combined probability makes up the top 25% of the total probability distribution
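A minimal NumPy sketch of top P (nucleus) filtering over a made-up probability distribution:

    import numpy as np

    def top_p_filter(probs, p):
        probs = np.array(probs, dtype=float)
        order = np.argsort(probs)[::-1]              # tokens from most to least probable
        cumulative = np.cumsum(probs[order])
        cutoff = np.searchsorted(cumulative, p) + 1  # smallest set whose mass reaches p
        keep = order[:cutoff]
        filtered = np.zeros_like(probs)
        filtered[keep] = probs[keep]
        return filtered / filtered.sum()             # renormalize over the kept tokens

    # With p=0.75, only the 0.5 and 0.3 tokens survive (together they cover 75% of the mass)
    print(top_p_filter([0.5, 0.3, 0.15, 0.05], p=0.75))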
Top K
Top K limits the choice to the K most probable words, regardless of their individual probabilities. For instance, if top K is set to 50, the model will only consider the 50 most likely words for the next word in the sequence
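A minimal NumPy sketch of top K filtering over the same kind of made-up distribution:

    import numpy as np

    def top_k_filter(probs, k):
        probs = np.array(probs, dtype=float)
        keep = np.argsort(probs)[::-1][:k]    # indices of the k most probable tokens
        filtered = np.zeros_like(probs)
        filtered[keep] = probs[keep]
        return filtered / filtered.sum()      # renormalize over the kept tokens

    # With k=2, only the two most likely tokens can be sampled
    print(top_k_filter([0.5, 0.3, 0.15, 0.05], k=2))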
Length
The length category of inference parameters refers to the settings that control the maximum length of the generated output and specify the stop sequences that signal the end of the generation process
Maximum length
The maximum length setting determines the maximum number of tokens that the model can generate during the inference process
Stop sequences
Stop sequences are special tokens or sequences of tokens that signal the model to stop generating further output
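A minimal sketch of how maximum length and stop sequences end generation; make_toy_model is a stand-in for a real model that simply replays a fixed token list:

    def make_toy_model(tokens):
        it = iter(tokens)
        return lambda _context: next(it, "")      # returns "" once the toy tokens run out

    def generate(next_token, prompt, max_tokens, stop_sequences):
        output = ""
        for _ in range(max_tokens):               # maximum length caps the token count
            output += next_token(prompt + output)
            for stop in stop_sequences:
                if stop in output:
                    return output.split(stop)[0]  # truncate at the stop sequence
        return output

    toy = make_toy_model(["The", " answer", " is", " 42", ".", "\nHuman:", " next question"])
    print(generate(toy, "Q: ...", max_tokens=50, stop_sequences=["\nHuman:"]))
    # prints "The answer is 42." -- generation stops before the stop sequence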
zero-shot prompting
Zero-shot prompting is a technique where a user presents a task to a generative model without providing any examples or explicit training for that specific task
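A minimal sketch of a zero-shot prompt; the review text is a made-up example:

    # The task is stated directly, with no worked examples in the prompt
    prompt = (
        "Classify the sentiment of the following review as positive, negative, or neutral.\n"
        "Review: The battery died after two days.\n"
        "Sentiment:"
    )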
Few-shot prompt
Providing a language model with sample inputs and their corresponding desired outputs
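A minimal sketch that builds a few-shot prompt from made-up example pairs:

    # Labeled examples come first, then the new input the model should complete
    examples = [
        ("The battery died after two days.", "negative"),
        ("Shipping was fast and the fit is perfect.", "positive"),
    ]
    prompt = "Classify the sentiment of each review.\n"
    for review, label in examples:
        prompt += f"Review: {review}\nSentiment: {label}\n"
    prompt += "Review: It works, but the manual is confusing.\nSentiment:"
    print(prompt)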
Chain of thought prompting
Chain-of-thought (CoT) prompting is a technique that divides intricate reasoning tasks into smaller intermediate steps, for example by adding the phrase “think step by step.” CoT is recommended when tasks have multiple steps or require a series of logical steps
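A minimal sketch of a chain-of-thought prompt with a made-up word problem:

    # The "think step by step" cue asks the model to show its intermediate reasoning
    prompt = (
        "A store sells pens in packs of 12. A teacher needs 150 pens.\n"
        "How many packs must the teacher buy? Think step by step."
    )
    # A CoT-style answer works through the steps:
    # 150 / 12 = 12.5, and packs are whole units, so the teacher buys 13 packs.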
3 of the most common types of prompt misuses and risks
1) Poisoning, hijacking, and prompt injection
2) Exposure and prompt leaking
3) Jailbreaking
Poisoning
The intentional introduction of malicious or biased data into the training dataset of a model
Hijacking and prompt injection
The technique of influencing the outputs of generative models by embedding specific instructions within the prompts themselves.
Jailbreaking
The practice of modifying or circumventing the constraints and safety measures implemented in a generative model or AI assistant to gain unauthorized access or functionality.