13 - Epilogue Flashcards

1
Q

What are LLMs?

A

LLMs are Large Language Models that predict the next word in a sequence based on prior words.

2
Q

What is theory of mind?

A

Theory of mind is a cognitive ability that allows humans to make inferences about someone else’s beliefs or state of mind using external behavioral cues.

3
Q

How does ChatGPT demonstrate theory of mind?

A

ChatGPT can infer that Alice will experience a headache after using the wrong glasses, based on the context provided.

4
Q

What does the training of an LLM involve?

A

It involves predicting the next word in a sequence and adjusting parameters to minimize the loss between predicted and actual words.
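A minimal sketch of one such training step, using a toy bigram model in NumPy (the vocabulary, weights, and learning rate are illustrative assumptions, not from the text):

```python
import numpy as np

# Toy vocabulary; indices stand in for word IDs.
vocab = ["the", "cat", "sat", "down"]
V = len(vocab)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(V, V))  # parameters: current word -> logits for next word

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# One training example: after "cat" the actual next word is "sat".
x, y = vocab.index("cat"), vocab.index("sat")

# Forward pass: predict a distribution over the next word.
probs = softmax(W[x])
loss = -np.log(probs[y])   # cross-entropy between prediction and the actual next word

# Backward pass: gradient of the loss for this row of W, then a small update.
grad = probs.copy()
grad[y] -= 1.0             # d(loss)/d(logits) for softmax + cross-entropy
W[x] -= 0.1 * grad         # adjust parameters to reduce the loss

print(f"loss before update: {loss:.3f}")
print(f"loss after update:  {-np.log(softmax(W[x])[y]):.3f}")
```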

5
Q

What is word embedding?

A

Word embedding is the process of converting words into vectors in a high-dimensional space, where words with related meanings end up close together.
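A toy sketch of an embedding lookup table (the four-word vocabulary and 3-dimensional random vectors are made up; real models learn these vectors during training):

```python
import numpy as np

vocab = {"king": 0, "queen": 1, "apple": 2, "orange": 3}
rng = np.random.default_rng(1)

# Embedding matrix: one dense vector (row) per word in the vocabulary.
embedding = rng.normal(size=(len(vocab), 3))

def embed(word):
    return embedding[vocab[word]]

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# After training, related words tend to have similar vectors.
print(embed("king"))
print(cosine_similarity(embed("king"), embed("queen")))
```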

6
Q

What is the function that LLMs approximate during training?

A

LLMs approximate a conditional probability distribution for the next word given a sequence of input words.
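As a toy illustration (an assumption, not the book's example), the conditional distribution P(next word | previous word) can be estimated from bigram counts in a tiny corpus; an LLM approximates the same kind of distribution, but conditioned on a much longer context:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count how often each word follows each previous word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

# Normalize counts into a conditional probability distribution P(next | prev).
def p_next(prev):
    total = sum(counts[prev].values())
    return {w: c / total for w, c in counts[prev].items()}

print(p_next("the"))   # roughly {'cat': 0.67, 'mat': 0.33}
print(p_next("cat"))   # {'sat': 0.5, 'ran': 0.5}
```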

7
Q

What does the softmax function do in LLMs?

A

The softmax function converts a vector of raw output scores (logits) into probabilities that sum to 1.
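A numerically stable implementation sketch (the input scores are arbitrary):

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability; this does not change the result.
    shifted = logits - np.max(logits)
    exps = np.exp(shifted)
    return exps / exps.sum()

scores = np.array([2.0, 1.0, 0.1])   # raw output scores (logits) for three candidate words
probs = softmax(scores)
print(probs)          # approximately [0.659 0.242 0.099]
print(probs.sum())    # 1.0
```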

8
Q

What is generative AI?

A

Generative AI learns a probability distribution over data and samples from it to produce outputs.
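A minimal sketch of that sampling step (the candidate words and probabilities are made up; in a real LLM they would come from the model's softmax output):

```python
import numpy as np

rng = np.random.default_rng(42)

candidates = ["sat", "ran", "slept"]
probs = np.array([0.6, 0.3, 0.1])   # model's predicted distribution over the next word

# Generative step: sample the next word from the distribution rather than
# always taking the most likely one, which is why outputs vary between runs.
next_word = rng.choice(candidates, p=probs)
print(next_word)
```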

9
Q

What is emergent behavior in LLMs?

A

Emergent behavior refers to capabilities that arise in larger models that smaller models do not exhibit.

10
Q

What issues were prevalent in AI before LLMs?

A

Concerns about bias and discrimination in AI systems were prevalent.

11
Q

What was a notable incident of bias in AI?

A

Google Photos mistakenly tagged African Americans as gorillas, highlighting issues of racial bias.

12
Q

What is a concern regarding data representation in ML?

A

Incomplete data can lead to biased predictions, particularly if certain groups are underrepresented.

13
Q

What is the difference between correlation and causation in ML?

A

ML systems learn correlations in data, not causal relationships; mistaking a learned correlation for causation can lead to erroneous predictions.

14
Q

What must ML engineers do to avoid bias?

A

They must ensure training data is diverse and representative and may need to explicitly de-bias the data.

15
Q

True or False: LLMs are capable of reasoning.

A

This is debated; some see LLMs as sophisticated pattern matchers, while others see glimmers of reasoning ability.

16
Q

What is a significant advantage of LLMs for programmers?

A

LLMs can generate code based on natural language descriptions of problems.

17
Q

Fill in the blank: LLMs are trained on a corpus of _______.

A

training text.

18
Q

What does backpropagation do in LLM training?

A

Backpropagation adjusts the network’s parameters to minimize prediction loss.

19
Q

What is the size of GPT-3 compared to GPT-2?

A

GPT-3 has 175 billion parameters, while GPT-2 has 1.5 billion.

20
Q

What is the role of the neural network in an LLM?

A

It acts as a function approximator for predicting the next word in a sequence.
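A sketch of that idea with invented sizes: a tiny two-layer network maps a vector standing in for the embedded context to one score per vocabulary word, which softmax turns into next-word probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, vocab_size = 8, 5          # illustrative sizes, far smaller than a real LLM
context = rng.normal(size=d_model)  # stand-in for the embedded input sequence

# Two-layer network: its learned weights make it a function approximator
# from context vectors to scores for each possible next word.
W1, b1 = rng.normal(size=(16, d_model)), np.zeros(16)
W2, b2 = rng.normal(size=(vocab_size, 16)), np.zeros(vocab_size)

hidden = np.tanh(W1 @ context + b1)
logits = W2 @ hidden + b2

probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs)  # probabilities over the next word, summing to 1
```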

21
Q

What does the term ‘stochastic parrots’ refer to?

A

It refers to the notion that LLMs may simply repeat patterns without true understanding.

22
Q

What is a potential danger of LLMs?

A

They can perpetuate and amplify societal biases present in the training data.

23
Q

What is one way bias can enter machine learning?

A

Through the use of incomplete or unrepresentative training data.

24
Q

What is the significance of the Simons Institute workshop in 2023?

A

It highlighted ongoing discussions and research on the implications of LLMs.

25
Q

What is bias in the context of AI algorithms?

A

Bias refers to systematic inaccuracies in an algorithm's predictions, influenced by how the data is interpreted and the questions posed to it.

26
Q

What is a significant concern regarding LLMs as discussed in the text?

A

Concerns include AI being biased, toxic, or dangerous.

27
Q

What example is provided to illustrate bias in AI?

A

An exchange in which GPT-4 resolves an ambiguous sentence by assuming the nurse is the one who is pregnant, revealing a gendered (sexist) interpretation.

28
Q

What is RLHF?

A

Reinforcement Learning from Human Feedback, a technique for fine-tuning AI models using human feedback on their outputs.

29
Q

What is the implication of certainty in AI predictions according to researchers Celeste Kidd and Abeba Birhane?

A

AIs that state predictions with certainty, regardless of whether they are factual, risk distorting the beliefs humans form.

30
Q

How do humans form beliefs based on data?

A

Humans sample a small subset of the available data and form beliefs with high certainty; those beliefs then become stubbornly resistant to revision.

31
Q

What is the problem of credit assignment in neural networks?

A

The challenge of determining how to adjust the weights of connections in a network when it makes an error.

32
Q

What is backpropagation?

A

An algorithm used to train artificial neural networks by adjusting weights based on the error of predictions.
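A hand-written sketch of backpropagation on a tiny two-layer network (the sizes, data, and learning rate are invented for illustration): the error is propagated backwards through the layers, which is how each weight receives its share of the credit or blame.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny regression problem: learn y = sum(x) with a two-layer network.
X = rng.normal(size=(64, 3))
y = X.sum(axis=1, keepdims=True)

W1, b1 = rng.normal(scale=0.5, size=(3, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)
lr = 0.05

for step in range(200):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    loss = np.mean((pred - y) ** 2)

    # Backward pass: propagate the error from the output layer to earlier layers
    # (the chain rule assigns credit to each weight for the error).
    d_pred = 2 * (pred - y) / len(X)
    dW2, db2 = h.T @ d_pred, d_pred.sum(axis=0)
    d_h = d_pred @ W2.T * (1 - h ** 2)       # derivative of tanh
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)

    # Gradient descent: adjust weights to reduce the prediction error.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final loss: {loss:.4f}")
```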

33
Q

What did Daniel Yamins discover while working on his machine vision project?

A

The architecture that worked best for recognizing objects was a convolutional neural network (CNN).

34
Q

What was the focus of Yamins’ research in relation to CNNs?

A

To see if a CNN could predict biological neural responses to novel images.

35
Q

What did the researchers find when they compared CNN activity to monkey brain activity?

A

Activity in the CNN's layers predicted activity in the corresponding brain areas of the monkey's visual system.

36
Q

What was a significant finding from DiCarlo’s lab regarding AlexNet?

A

AlexNet was used to model the ventral visual stream of macaques, correlating artificial neuron activity with monkey neural sites.

37
Q

What is the ventral visual stream responsible for?

A

Recognizing people, places, and things.

38
Q

What are LLMs beginning to hint at in terms of human cognition?

A

Hints of theory of mind and complex pattern matching.

39
Q

What ongoing debate in cognitive science is mentioned regarding language acquisition?

A

Whether aspects of human language depend on innate abilities or can be learned through exposure.

40
Q

True or False: LLMs can learn syntax and grammar from statistical patterns in human written language.

A

True.

41
Q

What is a key difference between biological neurons and artificial neurons?

A

Biological neurons communicate with discrete spikes, while artificial neurons output continuous values.

42
Q

What is the energy consumption comparison between LLMs and human brains?

A

LLMs consume about 1,664 watts, while human brains use 20 to 50 watts.

43
Q

What is a potential requirement for achieving human-like general intelligence in AI?

A

It remains an open question whether disembodied AIs can develop such intelligence or whether they need to be embodied.

44
Q

Fill in the blank: The backpropagation algorithm is used to train _______.

A

artificial neural networks.

45
Q

What does the phrase ‘form follows function’ imply in the context of CNNs and brain activity?

A

The structure of the CNN corresponds to the functions of specific brain areas.

46
Q

What is one key challenge in AI that needs to be addressed alongside its development?

A

Concerns about bias and the impact of AI predictions on human cognition.