AIML 131 Midterm Flashcards

1
Q

What is overfitting in the context of supervised ML?

A

When the model is too complex and learns incidental detail and noise from the training data, so it performs well on training examples but poorly on new data.

2
Q

What are error surfaces in neural networks?

A

Surfaces that represent the network's error as a function of its weight values; training searches this surface for a low point.

3
Q

What is the concept of ‘feature space’ in ML?

A

The space of all possible combinations of input features.

4
Q

How are modern neural networks organized?

A

Into layers: an input layer, one or more hidden layers in between, and an output layer, with weighted connections between adjacent layers.

5
Q

What is the purpose of dimensionality reduction models in unsupervised learning?

A

To simplify the representation of data by reducing the number of dimensions while keeping as much of the important structure as possible.

6
Q

What are some examples of supervised ML models for regression?

A

Linear regression, polynomial regression
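
A minimal sketch of linear regression with scikit-learn; the data here is made up for illustration:

import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1], [2], [3], [4]])    # one input feature per row
y = np.array([2.1, 3.9, 6.2, 7.8])    # roughly y = 2x
model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)  # learned slope and intercept
print(model.predict([[5]]))           # prediction for an unseen input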

7
Q

What is gradient descent in the context of neural networks?

A

An optimization method that repeatedly adjusts the weights in the downhill direction on the error surface, reducing the error step by step.
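
A minimal sketch on a one-weight model, fitting y = w*x by gradient descent (data and learning rate are made up):

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]  # generated by y = 2x, so w should approach 2
w = 0.0
lr = 0.05  # learning rate: the size of each downhill step
for step in range(100):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # step against the gradient, downhill on the error surface
print(w)  # close to 2.0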

8
Q

How does DBSCAN clustering work?

A

By identifying dense regions: points with enough neighbours within a given radius become cluster cores, nearby points join them, and points in no dense region are treated as noise.
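
A minimal sketch with scikit-learn's DBSCAN (eps is the radius, min_samples the point count; the values and data are illustrative):

import numpy as np
from sklearn.cluster import DBSCAN

X = np.array([[1.0, 1.0], [1.1, 1.0], [0.9, 1.1],   # dense blob 1
              [5.0, 5.0], [5.1, 4.9], [4.9, 5.1],   # dense blob 2
              [10.0, 0.0]])                          # isolated point
labels = DBSCAN(eps=0.5, min_samples=3).fit_predict(X)
print(labels)  # e.g. [0 0 0 1 1 1 -1]; -1 marks noise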

9
Q

How do reinforcement learners improve over time?

A

By trying actions, receiving reward or punishment as feedback, and adjusting so that rewarded actions become more likely.

10
Q

How are input words represented in LLMs?

A

As word embeddings: learned vectors in which words with similar meanings cluster together.

11
Q

How does the attention mechanism address the bottleneck problem in LLMs?

A

It gives each output word direct access to the most relevant input words, instead of forcing everything through a single fixed-size summary.
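
A toy sketch of the idea with NumPy (the vectors are made up; this is scaled dot-product attention, the form transformers use):

import numpy as np

def attention(query, keys, values):
    # Score how relevant each input word is to this output step...
    scores = keys @ query / np.sqrt(len(query))
    weights = np.exp(scores) / np.exp(scores).sum()  # ...softmax into weights...
    return weights @ values  # ...and mix the input representations accordingly.

keys = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # one row per input word
print(attention(np.array([1.0, 0.0]), keys, values=keys))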

12
Q

What is ‘parameter space’ in ML?

A

The space of all possible combinations of model parameters.

13
Q

How do transformers improve upon the attention mechanism?

A

By replacing recurrent networks with self-attention mechanisms.

14
Q

How does reinforcement learning differ from supervised learning?

A

In reinforcement learning the algorithm learns behaviours from reward and punishment feedback, rather than from labelled input-output examples.

15
Q

How does supervised learning work?

A

The machine learns to map inputs to outputs from training examples labelled with the correct output.

16
Q

What role do rewards and punishments play in reinforcement learning?

A

They guide the learner to improve its actions.

17
Q

How do you evaluate a regression model?

A

Using error measures such as SSE, MSE, MAE, and RMSE, ideally computed on held-out test data.

18
Q

What is SSE, MSE, MAE, RMSE?

A

Sum of Squared Errors, Mean Squared Error, Mean Absolute Error, Root Mean Squared Error.
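
A minimal sketch computing all four on made-up predictions:

import math

y_true = [3.0, 5.0, 2.0]
y_pred = [2.5, 5.0, 4.0]
errors = [t - p for t, p in zip(y_true, y_pred)]

sse = sum(e ** 2 for e in errors)                # sum of squared errors
mse = sse / len(errors)                          # mean squared error
mae = sum(abs(e) for e in errors) / len(errors)  # mean absolute error
rmse = math.sqrt(mse)                            # root mean squared error
print(sse, mse, mae, rmse)  # 4.25, ~1.417, ~0.833, ~1.190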

19
Q

What is the bottleneck problem in recurrent networks?

A

The whole input must be compressed into a single fixed-size state, so the network remembers recent words better and struggles with long-range dependencies.

20
Q

How does k-means clustering work?

A

By choosing k initial cluster centroids, assigning each item to its nearest centroid, recomputing each centroid as the mean of its assigned items, and repeating until the assignments stabilize.
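
A minimal sketch with scikit-learn's KMeans (toy 2-D data, k = 2):

import numpy as np
from sklearn.cluster import KMeans

X = np.array([[1.0, 1.0], [1.2, 0.8], [8.0, 8.0], [8.1, 7.9]])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)           # which cluster each item was assigned to
print(km.cluster_centers_)  # final centroids: the mean of each cluster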

21
Q

What are some key concepts related to language models (LLMs)?

A

Predicting the next word, learning from large corpora, self-supervision.

22
Q

What are clustering models used for in unsupervised learning?

A

Discovering groups of similar items in unlabelled data, which identifies the different types of input.

23
Q

What is principal component analysis (PCA)?

A

An algorithm that reduces the dimensionality of data by projecting it onto the directions of greatest variance (the principal components), preserving as much variance as possible.
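
A minimal sketch with scikit-learn's PCA, reducing toy 3-D data to 2-D:

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))         # 100 points in 3 dimensions
pca = PCA(n_components=2)
X2 = pca.fit_transform(X)             # the same points in 2 dimensions
print(X2.shape)                       # (100, 2)
print(pca.explained_variance_ratio_)  # fraction of variance each component keeps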

24
Q

How do you evaluate a classifier model?

A

Using accuracy, error rate, and, for binary classifiers, a confusion matrix.
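
A minimal sketch with made-up binary predictions:

from sklearn.metrics import accuracy_score, confusion_matrix

y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1]
acc = accuracy_score(y_true, y_pred)
print(acc, 1 - acc)                      # accuracy (4/6) and error rate
print(confusion_matrix(y_true, y_pred))  # rows = true class, columns = predicted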

25
Q

What are recurrent networks and how are they used in LLMs?

A

Networks that carry a memory of earlier words while processing text one word at a time; early language models used them to handle sequential input.

26
Q

What are transformers in the context of LLMs?

A

Models that use self-attention mechanisms for encoding and decoding text.

27
Q

What are word embeddings used for in LLMs?

A

To represent words in an n-dimensional space.
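
A toy sketch with made-up 2-D embeddings (real LLM embeddings have hundreds of dimensions); cosine similarity measures how close two word vectors point:

import numpy as np

emb = {"cat": np.array([0.9, 0.1]),
       "dog": np.array([0.8, 0.2]),
       "car": np.array([0.1, 0.9])}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(emb["cat"], emb["dog"]))  # high: related meanings
print(cosine(emb["cat"], emb["car"]))  # low: unrelated meanings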

28
Q

How are neural networks structured?

A

Organized into layers of neuron-like units with connections.
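
A toy forward pass through one hidden layer (the weights are made-up numbers, just to show the layered structure):

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

x = np.array([0.5, -1.0])                              # input layer: 2 units
W1 = np.array([[0.1, 0.4], [-0.3, 0.2], [0.5, -0.1]])  # weights into 3 hidden units
W2 = np.array([[0.7, -0.2, 0.3]])                      # weights into 1 output unit
hidden = sigmoid(W1 @ x)       # each hidden unit sums its weighted inputs
output = sigmoid(W2 @ hidden)  # the output unit sums the weighted hidden units
print(output)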

29
Q

How does an autoencoder neural network work for dimensionality reduction?

A

It learns to map each input back onto itself through a bottleneck layer.
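
A minimal sketch using scikit-learn's MLPRegressor as a stand-in autoencoder, with a 2-unit hidden layer as the bottleneck (the data is made up):

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 2))     # hidden 2-D structure
X = Z @ rng.normal(size=(2, 10))  # the same structure embedded in 10 dimensions

# The 2-unit hidden layer is the bottleneck; the targets are the inputs themselves.
ae = MLPRegressor(hidden_layer_sizes=(2,), max_iter=5000, random_state=0).fit(X, X)
print(ae.predict(X[:1]))  # reconstruction of the first input from just 2 numbers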

30
Q

What are some examples of supervised ML models for classification?

A

k-nearest neighbour, decision trees
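
A minimal sketch of both on scikit-learn's built-in iris data:

from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)   # class = vote of the 3 nearest examples
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)  # class = learned if/else feature tests
print(knn.predict(X[:1]), tree.predict(X[:1]))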

31
Q

What is underfitting in the context of supervised ML?

A

When the model is too simple to capture the real structure in the data, so it performs poorly even on the training examples.

32
Q

What are word representations in LLMs?

A

Input words can be represented one-hot (a vector with a single 1 in that word's vocabulary slot), and the model outputs a probability for each vocabulary word being the next word.
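
A toy sketch of the one-hot scheme over a three-word vocabulary (vocabulary and probabilities are made up):

vocab = ["the", "cat", "sat"]

def one_hot(word):
    # A vector with a 1 in the word's slot and 0 everywhere else.
    return [1 if w == word else 0 for w in vocab]

print(one_hot("cat"))  # [0, 1, 0]

# The model's output is then a probability for each vocabulary word being next:
next_word_probs = {"the": 0.1, "cat": 0.2, "sat": 0.7}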

33
Q

How does training a neural network work?

A

Adjusting weights to improve performance using training data.

34
Q

How do unsupervised ML models differ from supervised ML models?

A

They do not have labeled target outputs in training data.

35
Q

How do neural networks learn to predict the next word in natural language text?

A

By training on large text corpora to output a probability distribution over possible next words.

36
Q

What is reinforcement learning?

A

An algorithm learns to produce good outputs through trial and error, guided by rewards and punishments.
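
A toy sketch of trial and error with rewards: a two-action bandit where the learner keeps a running value estimate per action (all numbers are made up):

import random

values = [0.0, 0.0]  # estimated value of each action
counts = [0, 0]
for t in range(1000):
    # Mostly exploit the best-looking action, sometimes explore at random.
    if random.random() < 0.1:
        a = random.randrange(2)
    else:
        a = values.index(max(values))
    reward = random.gauss(1.0 if a == 1 else 0.2, 0.1)  # action 1 pays better
    counts[a] += 1
    values[a] += (reward - values[a]) / counts[a]  # running average of rewards
print(values)  # estimates approach [0.2, 1.0]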