13 - Epilogue Flashcards
What are LLMs?
LLMs (Large Language Models) are neural networks trained to predict the next word in a sequence based on the words that came before it.
What is theory of mind?
Theory of mind is a cognitive ability that allows humans to make inferences about someone else’s beliefs or state of mind using external behavioral cues.
How does ChatGPT demonstrate theory of mind?
ChatGPT can infer that Alice will experience a headache after using the wrong glasses, based on the context provided.
What does the training of an LLM involve?
It involves repeatedly predicting the next word in a sequence from the training text and adjusting the model's parameters to minimize the loss between the predicted and actual next word.
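A minimal sketch of the loss computation described above, assuming a toy vocabulary and a model that has already produced a probability for each candidate next word (the words and numbers here are illustrative, not from the book):

```python
import numpy as np

# Toy vocabulary and the probabilities a model might assign to the next word
vocab = ["the", "cat", "sat", "mat"]
predicted_probs = np.array([0.1, 0.2, 0.6, 0.1])  # must sum to 1

# The word that actually came next in the training text
actual_next = "sat"
target_index = vocab.index(actual_next)

# Cross-entropy loss: small when the model puts high probability on the actual word
loss = -np.log(predicted_probs[target_index])
print(f"loss = {loss:.3f}")  # training adjusts parameters to push this toward 0
```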
What is word embedding?
Word embedding is the process of converting each word into a vector in a high-dimensional space.
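A sketch of an embedding lookup, assuming a small made-up vocabulary and a randomly initialized embedding matrix (in a real LLM these vectors are learned during training):

```python
import numpy as np

vocab = {"the": 0, "cat": 1, "sat": 2, "mat": 3}
embedding_dim = 8                      # real models use hundreds or thousands of dimensions
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), embedding_dim))  # one row of numbers per word

# "Embedding" a word is just looking up its row in the matrix
vector_for_cat = embeddings[vocab["cat"]]
print(vector_for_cat.shape)  # (8,)
```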
What is the function that LLMs approximate during training?
LLMs approximate a conditional probability distribution for the next word given a sequence of input words.
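In symbols (notation assumed here, not quoted from the book), the model estimates

P(w_{t+1} | w_1, w_2, ..., w_t)

that is, one probability for each candidate next word, conditioned on all the words seen so far.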
What does the softmax function do in LLMs?
The softmax function converts the network's raw output scores into probabilities that sum to 1.
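A minimal softmax implementation in Python/NumPy (a standard formulation, not code from the book), showing raw scores being turned into probabilities that sum to 1:

```python
import numpy as np

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    shifted = logits - np.max(logits)   # subtract the max for numerical stability
    exps = np.exp(shifted)
    return exps / exps.sum()

logits = np.array([2.0, 1.0, 0.1, -1.0])  # one score per word in the vocabulary
probs = softmax(logits)
print(probs, probs.sum())  # a probability distribution; the sum is 1
```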
What is generative AI?
Generative AI learns a probability distribution over data and samples from it to produce outputs.
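A sketch of the "sample from the distribution" step, reusing the toy vocabulary and probabilities from the loss example above (illustrative values, not from the book):

```python
import numpy as np

vocab = ["the", "cat", "sat", "mat"]
probs = np.array([0.1, 0.2, 0.6, 0.1])   # output of softmax

rng = np.random.default_rng()
next_word = rng.choice(vocab, p=probs)   # usually "sat", sometimes another word
print(next_word)
```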
What is emergent behavior in LLMs?
Emergent behavior refers to capabilities that arise in larger models that smaller models do not exhibit.
What issues were prevalent in AI before LLMs?
Bias and discrimination in AI systems were prominent concerns.
What was a notable incident of bias in AI?
Google Photos mistakenly tagged African Americans as gorillas, highlighting issues of racial bias.
What is a concern regarding data representation in ML?
Incomplete data can lead to biased predictions, particularly if certain groups are underrepresented.
What is the difference between correlation and causation in ML?
ML systems learn correlations in their training data; treating those correlations as causal relationships can lead to erroneous predictions.
What must ML engineers do to avoid bias?
They must ensure training data is diverse and representative and may need to explicitly de-bias the data.
True or False: LLMs are capable of reasoning.
This is debated; some see LLMs as sophisticated pattern matchers, while others see glimmers of reasoning ability.
What is a significant advantage of LLMs for programmers?
LLMs can generate code based on natural language descriptions of problems.
Fill in the blank: LLMs are trained on a corpus of _______.
[training text]
What does backpropagation do in LLM training?
Backpropagation computes how the prediction loss changes with respect to each of the network's parameters, so the parameters can be adjusted to reduce that loss.
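A toy illustration of the idea (not the actual algorithm, which applies the chain rule through many layers of a network): compute the gradient of a simple loss with respect to one parameter, then nudge the parameter downhill.

```python
# Toy gradient-descent step: loss(w) = (w - 3)**2, minimized at w = 3
w = 0.0
learning_rate = 0.1

for step in range(25):
    grad = 2 * (w - 3)         # derivative of the loss with respect to w
    w -= learning_rate * grad  # adjust the parameter to reduce the loss

print(round(w, 3))  # close to 3.0
```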
What is the size of GPT-3 compared to GPT-2?
GPT-3 has 175 billion parameters, while GPT-2 has 1.5 billion.
What is the role of the neural network in an LLM?
It acts as a function approximator for predicting the next word in a sequence.
What does the term ‘stochastic parrots’ refer to?
It refers to the notion that LLMs may simply repeat patterns without true understanding.
What is a potential danger of LLMs?
They can perpetuate and amplify societal biases present in the training data.
What is one way bias can enter machine learning?
Through the use of incomplete or unrepresentative training data.
What is the significance of the Simons Institute workshop in 2023?
It highlighted ongoing discussions and research on the implications of LLMs.