Guidelines Flashcards

1
Q

Principle # 1

A

Write clear and specific instructions.
Clear is not equal to short prompts. Longer prompts provide models with more clarity.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
2
Q

Principle # 2

A

Give model time to think. If the task is too complex to perform, you can tell the model to spend longer on the task which means they can spend more computational resources to complete the task. This is applicable if the model is giving reasoning errors or not producing the accurate results.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
3
Q

Principal # 1: Tactic # 1

A

Use delimiters e.g. triple quotes “””,
Triple backticks, ```,
Triple dashes —,
Angle brackets: <>,
XML tags: <tag></tag>
It can be any punctuations that separate specific pieces of text from the prompt.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
4
Q

Principal # 1: Tactic # 2:

A

Ask for structured output e.g. HTML or JSON

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
5
Q

Principle # 1: Tactic # 3:

A

Check whether conditions are satisfied. Check assumptions required to do the task. E.g. given a paragraph with recipe making steps, include prompt to extract steps in the sequence of steps.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
6
Q

Principle # 1: Tactic # 4

A

Few shot prompting i.e. give successful examples of completing tasks. Then ask model to perform the task.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
7
Q

Principle # 2: Tactic # 1

A

Specify steps or output format to complete the task

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
8
Q

Principle # 2: Tactic # 2

A

Instruct the model to work out its own solution before rushing to a conclusion.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
9
Q

LLM Model Limitations

A

The model does not perfectly memorize the information it has seen so it does not know the boundary of it’s knowledge. Make statements that sound plausible but are not true. This is called Hallucinations.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
10
Q

Reducing Hallucinations

A

Hallucinations is a known weakness of the model.
1) Use tactics
2) First find relevant information then answer the question based on the relevant information.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly