LLMs Flashcards

1
Q

What is a corpus?

A

A corpus (plural: corpora) is the body of text the AI was trained on.

2
Q

What is context?

A

The section of the prompt the AI uses in its prediction of the next word.

3
Q

Explain the Markov assumption

A

The future evolution of a process is independent of its history and depends only on its current state (the last step).

4
Q

Describe the process of a unigram n-gram model?

A

It counts the occurrences of each word in the corpus, assigns each word a weighting based on its count, then suggests words according to these weightings.
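
A minimal sketch of this idea in Python (assuming a toy corpus string; the function names are made up for illustration):

```python
import random
from collections import Counter

def train_unigram(corpus_text):
    # Count how often each word appears in the corpus.
    counts = Counter(corpus_text.split())
    total = sum(counts.values())
    # Turn the counts into weightings (probabilities).
    return {word: count / total for word, count in counts.items()}

def suggest_word(weights):
    # Suggest a word at random, in proportion to its weighting.
    words = list(weights)
    return random.choices(words, weights=[weights[w] for w in words])[0]

weights = train_unigram("the cat sat on the mat and the cat slept")
print(suggest_word(weights))  # "the" is the most likely suggestion
```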

5
Q

Describe the process of a bigram n-gram model?

A

Given the last word of the context, it predicts what is likely to be the next word. It counts each word pair in the corpus and assigns weightings based on the counts. It then takes the last word of the context and suggests a word based on the probabilities of the word pairs whose first word matches that last word of the prompt.
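
A minimal sketch of a bigram model in Python (assuming a toy corpus string; the function names are hypothetical):

```python
import random
from collections import Counter, defaultdict

def train_bigram(corpus_text):
    # Count every adjacent word pair in the corpus.
    words = corpus_text.split()
    pair_counts = defaultdict(Counter)
    for first, second in zip(words, words[1:]):
        pair_counts[first][second] += 1
    return pair_counts

def suggest_next(pair_counts, context):
    # Look at the last word of the context and pick the next word
    # according to the counts of the pairs that start with it.
    last_word = context.split()[-1]
    candidates = pair_counts.get(last_word)
    if not candidates:
        return None  # the last word never appeared in the corpus
    words = list(candidates)
    return random.choices(words, weights=[candidates[w] for w in words])[0]

pairs = train_bigram("the cat sat on the mat and the cat slept on the sofa")
print(suggest_next(pairs, "I saw the"))  # most likely "cat"
```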

6
Q

Describe the process of a trigram n-gram model?

A

Given the last two words of the context, it predicts what is likely to be the next word. It counts the occurrences of each three-word sequence in the corpus and assigns weightings based on the counts. It then takes the last two words of the context and suggests a word based on the weightings of the three-word sequences whose first two words match the last two words of the prompt.

7
Q

Why do n-gram models using a higher n number fall short?

A

The last n−1 words of the context need to appear as an exact sequence in the corpus for the model to recognise them, look up a weighting, and suggest the next word. The larger n is, the less likely such an exact match exists; if the exact context is not in the corpus, the model cannot suggest the next word.

8
Q

What are the drawbacks of n-gram models?

A

They cannot link information from different sections of a text, since they only look at a fixed window of the most recent words.

9
Q

What is a deterministic model?

A

A model with no randomness: given the same input, it always produces the same output, e.g. an LLM run with a temperature of zero.

10
Q

What is an LLM?

A

Large Language Models are computational neural networks notable for their ability to achieve general-purpose language generation and other natural language processing tasks such as classification.

11
Q

What is a limitation of LLMs and how is this circumvented?

A

LLMs can only suggest words that are included in the corpus. To circumvent this, we use very large training data sets, such as social media text or the wider Internet.

12
Q

What is the temperature of an AI model and what does a low and high temperature give? What are the use cases?

A

Temperature - how random an LLM is.

A temperature of zero gives zero randomness, and a temperature of 2 gives a high degree of randomness. At a low temperature, only the most likely word is selected, whereas at a very high temperature the choice approaches all words being equally likely.

Different temperatures have different uses. A low temperature is good for predictable text, such as a cover letter, whereas a higher temperature can produce more creative text, such as poetry.
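
A minimal sketch of how temperature reshapes the word probabilities before sampling (the candidate words and their probabilities are made up for illustration; real LLMs apply temperature to raw logits, but the effect is the same):

```python
import random

def apply_temperature(probs, temperature):
    # Low temperature sharpens the distribution towards the most likely word;
    # high temperature flattens it towards equal likelihood.
    if temperature == 0:
        # Deterministic: always pick the most likely word.
        best = max(probs, key=probs.get)
        return {word: (1.0 if word == best else 0.0) for word in probs}
    scaled = {word: p ** (1 / temperature) for word, p in probs.items()}
    total = sum(scaled.values())
    return {word: s / total for word, s in scaled.items()}

candidates = {"letter": 0.6, "poem": 0.3, "banana": 0.1}
for t in (0, 1, 2):
    adjusted = apply_temperature(candidates, t)
    word = random.choices(list(adjusted), weights=list(adjusted.values()))[0]
    print(t, adjusted, "->", word)
```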

13
Q

What is an epoch?

A

An epoch is a training run, or a pass through the corpus.

14
Q

What is the training progression of an LLM?

A

Before training starts, and even after the first few epochs, the predictions stored and given by the model are essentially random; any coherence is coincidental.

Going through training runs increases coherence and relevancy, but a lot of training runs (thousands to tens of thousands) are needed.

15
Q

What is the best method of correction and why?

A

A human could correct the LLM, but given the high number of epochs and the large amount of training data, it is much more efficient for the model to correct itself by comparing its predictions against the original text.

16
Q

What is the measure of loss and what is the target figure? What is the word describing an LLM that is overtrained?

A

During each epoch, the neural network compares its prediction with the original data. The model's prediction is likely off by some amount. The difference between the predicted and actual values is called the loss.

Each epoch reduces the loss, though how much we can reduce the loss and make the AI more accurate decreases over time.

Although reducing the loss does make an LLM more coherent and its output better, a loss of zero would mean the output is exactly the same as the original text, defeating the purpose of the LLM: it could not output novel text. This is called overfitting.

A model is overfit when it replicates patterns in the training data so well that it cannot account for new data or generate new patterns.

To prevent overfitting, we need to monitor training closely once the model begins performing well.

17
Q

What is preprocessing and why is it needed?

A

Raw data (full of mistakes and inconsistencies) needs to be turned into a clean data set via two processes: tokenisation and preprocessing.

Preprocessing makes all letters lowercase and removes punctuation. Because a computer sees M and m as different characters, making everything lowercase merges the counts of words that would otherwise have separate lowercase and uppercase variations.
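
A minimal sketch of this preprocessing step in Python (the function name is made up for illustration):

```python
import string

def preprocess(raw_text):
    # Lowercase everything so "Model" and "model" count as the same word,
    # then strip punctuation marks.
    lowered = raw_text.lower()
    return lowered.translate(str.maketrans("", "", string.punctuation))

print(preprocess("My Model, my model."))  # "my model my model"
```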

18
Q

What considerations should we make with regards to punctuation and capitalisation during preprocessing?

A

Sometimes, such as in spam filtering, it is important to take capitalisation into account and not weed it out during preprocessing.

One also needs to account for punctuation, but a punctuation mark can mean multiple things: it is not reliable to assume a capital letter and a full stop mean a new sentence. Instead, we can add a special character to mark sentence boundaries explicitly.

19
Q

What is tokenisation?

A

Tokenization is breaking a corpus into units that the model can train on. Words and punctuation are split into separate tokens.

20
Q

What is white-space tokenisation?

A

Whitespace tokenisation splits the text into tokens wherever there is a space.
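
A minimal sketch of whitespace tokenisation in Python (the function name is made up for illustration):

```python
def whitespace_tokenise(text):
    # Split the text into tokens wherever there is whitespace.
    return text.split()

print(whitespace_tokenise("The model was trained on a large corpus"))
# ['The', 'model', 'was', 'trained', 'on', 'a', 'large', 'corpus']
```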

21
Q

What are the drawbacks of using syllables as tokens and what is done to overcome this?

A

We can also split plural and tensed words to isolate the stem (e.g. in started, starts, and starting, start is the stem), but this requires knowledge of the grammar of the language.

Instead, one can build an LLM that uses characters as tokens.

22
Q

What is Byte-Pair Encoding and what is its process?

A

Byte-Pair Encoding is a tokenization algorithm that builds tokens from characters.

It repeatedly searches for the most common pair of adjacent tokens, merges that pair into a single new token, and replaces all instances of the pair in the corpus. If a single character no longer appears on its own because it has been absorbed into merged tokens, it can be removed from the vocabulary of the LLM.
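
A minimal sketch of one BPE merge step in Python (a toy illustration with made-up function names; real tokeniser libraries implement this with many more details):

```python
from collections import Counter

def most_common_pair(tokens):
    # Count every pair of adjacent tokens and return the most frequent one.
    pairs = Counter(zip(tokens, tokens[1:]))
    return pairs.most_common(1)[0][0]

def merge_pair(tokens, pair):
    # Replace every occurrence of the pair with a single merged token.
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

tokens = list("lower lowest")   # start from individual characters
for _ in range(3):              # perform a few merge steps
    tokens = merge_pair(tokens, most_common_pair(tokens))
print(tokens)                   # e.g. ['lowe', 'r', ' ', 'lowe', 's', 't']
```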

23
Q

What is a neural network?

A

A neural network is a linked collection of nodes split into layers. It has an input layer, an output layer, and any number of hidden layers in between. Each layer consists of nodes, and the nodes are connected between layers. Each connection is assigned a random weight (or zero).

During operation, the input nodes pass information to the nodes of the hidden layer. The strength of the signal is adjusted according to the weights and passed on to all nodes of the next layer as a value between 0 and 1.

Each node is looking for a specific feature. Generally speaking, the deeper into the NN a node sits, the more complex the feature it is looking for.

Eventually, an output is given. In supervised learning, a human analyses the accuracy of the output. If it is accurate, positive feedback is given and the weights of the contributing nodes are increased so the result is more likely to be given again. If it is wrong, the weights are decreased so the result is less likely to be given.

The architecture is the arrangement of neurons and hidden layers, whereas the weights define the calculation.

If the variables are plotted on a graph, the line separating the classes of outputs (similar to a regression line showing the trend) is the decision boundary.
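
A minimal sketch of a forward pass through one hidden layer (the weights and inputs are arbitrary made-up numbers; real networks also use biases and are trained rather than hand-set):

```python
import math

def sigmoid(x):
    # Squash a signal into a value between 0 and 1.
    return 1 / (1 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    # Each hidden node sums its weighted inputs and squashes the result;
    # the output node then does the same with the hidden activations.
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs))) for ws in hidden_weights]
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

# Two inputs, two hidden nodes, one output; weights chosen arbitrarily.
print(forward([0.5, 1.0], [[0.2, -0.4], [0.7, 0.1]], [0.6, -0.3]))
```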

24
Q

How is information inputted into a neural network?

A

To be inputted into a neural network, data needs to be in numerical form.

25
Q

What are the positives of neural networks?

A

NNs are good at finding trends in lots of variables.

26
Q

What is deep learning and what are the implications for NNs? What does deep learning allow?

A

Deeper networks have more hidden layers and enable deep learning. This needs more compute, and the reasoning of the AI model is more complex. The deeper a network gets, the harder it becomes to tell what the AI is doing. If the AI model is making an important decision (e.g. loan approval), we need to know the reason behind the decision.

Deeper neural networks allow us to combine more complex calculations to define more complex criteria and features by adding more (deeper) hidden layers.

27
Q

What considerations should a Developer building a neural network bear in mind?

A

Calculations require compute: the more calculations needed, the more computing power is needed. This constrains neural network design.

28
Q

What are the steps to building a neural network?

A
  1. Design the neural network and collect data
    The NN should be designed to avoid known biases and risks.
    There are libraries containing premade NNs, but it is important to use a mix of premade NNs and your own.
    One should also create the training data sets. It may be possible to use existing data sets, but one may need to build one.

  2. Training epochs and NN adjustment

29
Q

What data is needed to train a neural network?

A

A training data set and a separate testing data set.

The NN is run on the same training data in each epoch, then validated at the end with the testing data.

30
Q

How can a NN be refined?

A

If the outputs of a NN are not clear after several hundred testing rounds, new variables or new data are needed.

31
Q

What is backpropagation?

A

The weights of the neurons contributing to the error are reduced, and the weights of those giving good answers are increased.

32
Q

What is the loss function? What is the error of the loss function?

A

A measure of how far an AI model's output is from the expected output. If it is high, the model is way off.

The error of the loss function is the value showing the error level at each epoch.
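
A minimal worked example using one common loss, mean squared error (the numbers are made up; many other loss functions exist, e.g. cross-entropy for language models):

```python
def mean_squared_error(predicted, actual):
    # Average of the squared differences between predictions and targets:
    # the larger the value, the further off the model is.
    return sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)

print(mean_squared_error([0.9, 0.2, 0.4], [1.0, 0.0, 0.0]))  # ≈ 0.07
```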

33
Q

What is the global optimal solution? How can one get to it?

A

A globally optimal solution is one where there are no other feasible solutions with better objective function values.

One can get to the GOS by using multiple agents, starting from multiple points, and having a large, robust data set.

34
Q

What is a locally optimal solution?

A

A locally optimal solution is one where there are no other feasible solutions “in the vicinity” with better objective function values.

35
Q

What is the Learning Rate?

A

The degree of backpropagation applied in each epoch, i.e. how much the weights are adjusted on each update.
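
A minimal sketch of how the learning rate scales each weight update in gradient descent (the toy loss and starting weight are made up for illustration):

```python
def gradient_descent_step(weight, gradient, learning_rate):
    # Move the weight against the gradient of the loss;
    # the learning rate controls how large the adjustment is per step.
    return weight - learning_rate * gradient

weight = 2.0
for epoch in range(3):
    gradient = 2 * weight          # gradient of the toy loss weight**2
    weight = gradient_descent_step(weight, gradient, learning_rate=0.1)
    print(epoch, weight)           # the weight shrinks towards the minimum at 0
```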

36
Q

What is overfitting?

A

Overfitting is an undesirable machine learning behavior that occurs when the machine learning model gives accurate predictions for training data but not for new data. This can also lead to coincidental connections.

37
Q

What is a Confusion Matrix?

A

A confusion matrix shows a model's correct and incorrect results, broken down by predicted class versus actual class (true and false positives and negatives).

38
Q

What is the Precision of an AI model?

A

How much you should trust your programme when it gives a positive result: precision = correct positive answers / all positive answers given, i.e. the percentage of the model's positive answers that are correct.

39
Q

What is the recall of an AI model?

A

Recall is how well the AI finds the things you are looking for: recall = correctly found cases / all actual cases, i.e. the percentage of the actual positive cases the model correctly identifies.
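
A minimal worked example computing precision and recall from confusion-matrix counts (the spam-filter numbers are made up for illustration):

```python
def precision(tp, fp):
    # Of everything the model flagged as positive, what fraction was right?
    return tp / (tp + fp)

def recall(tp, fn):
    # Of everything that was actually positive, what fraction did the model find?
    return tp / (tp + fn)

# Hypothetical spam filter: 80 spam caught, 20 legitimate emails wrongly
# flagged, 10 spam missed.
print(precision(tp=80, fp=20))  # 0.8
print(recall(tp=80, fn=10))     # ≈ 0.89
```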

40
Q

What is one reason AI development is accelerating so quickly?

A

The efficiency of algorithms is going up, so less computing power is needed per task. The amount of available computing power (compute) is also increasing, so the effectiveness of AI is going up roughly four-fold.

41
Q

What is Deep Evidential Learning (DEL)?

A

Deep learning that provides predictions and the uncertainty of those predictions. It can give the confidence of a model’s output. This is important in medical settings and in self-driving cars.

42
Q

What are the uses of DEL?

A

Applications include medical diagnosis, autonomous vehicles, and financial forecasting.

43
Q

What are the two types of uncertainty?

A

Uncertainty can be divided into two types:

  1. Aleatoric Uncertainty: the inherent noise in the data, for example variability in measurements or natural randomness.
  2. Epistemic Uncertainty: this stems from the model's lack of knowledge, for instance uncertainty due to insufficient training data or model parameters.