AIML 131 Midterm Flashcards
What is overfitting in the context of supervised ML?
When the model is too complex and learns noise and detail specific to the training data, so it performs poorly on new data.
What are error surfaces in neural networks?
Surfaces showing how the error varies across different weight combinations; training moves downhill on this surface.
What is the concept of ‘feature space’ in ML?
The space of all possible combinations of input features.
How are modern neural networks organized?
Organized into layers: an input layer, one or more hidden layers, and an output layer.
What is the purpose of dimensionality reduction models in unsupervised learning?
Simplifying the representation of data.
What are some examples of supervised ML models for regression?
Linear regression, polynomial regression
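A minimal sketch of linear regression with NumPy's `polyfit` (toy data, values invented; a higher degree would give polynomial regression):

```python
import numpy as np

# Toy data generated from y = 2x + 1 (illustrative values only)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0

# Degree 1 fit is ordinary linear regression
slope, intercept = np.polyfit(x, y, 1)
```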
What is gradient descent in the context of neural networks?
Optimization method to minimize the error by adjusting weights.
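A toy sketch of the idea, minimizing a one-weight squared error by stepping against the gradient (error function invented for illustration):

```python
# Minimize the toy error surface E(w) = (w - 3)^2.
# Its gradient is dE/dw = 2 * (w - 3).
w = 0.0
learning_rate = 0.1
for _ in range(100):
    gradient = 2.0 * (w - 3.0)
    w -= learning_rate * gradient   # step downhill on the error surface
```

After enough steps, `w` converges to the minimum at 3.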
How does DBSCAN clustering work?
Identifying dense regions based on radius and point count.
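A minimal 1-D DBSCAN sketch in pure Python (hypothetical helper names, made-up data): a point is a core point if at least `min_pts` points lie within radius `eps` of it, and clusters grow by absorbing the neighbourhoods of core points.

```python
def region_query(points, i, eps):
    # indices of all points within eps of point i
    return [j for j, p in enumerate(points) if abs(p - points[i]) <= eps]

def dbscan(points, eps, min_pts):
    labels = [None] * len(points)   # None = unvisited, -1 = noise
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neighbours = region_query(points, i, eps)
        if len(neighbours) < min_pts:
            labels[i] = -1          # noise (may be claimed by a cluster later)
            continue
        labels[i] = cluster
        queue = list(neighbours)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster          # border point joins the cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            more = region_query(points, j, eps)
            if len(more) >= min_pts:         # j is also a core point: expand
                queue.extend(more)
        cluster += 1
    return labels

labels = dbscan([0.0, 0.1, 0.2, 5.0, 5.1, 5.2, 9.9], eps=0.3, min_pts=3)
```

Here the two dense groups become clusters 0 and 1, and the isolated point 9.9 is labelled noise (-1).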
How do reinforcement learners improve over time?
By trying outputs at random and receiving feedback to adjust actions.
How are input words represented in LLMs?
By creating word embeddings that cluster similar meanings.
How does the attention mechanism address the bottleneck problem in LLMs?
It provides direct access to important input words for each output word.
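A toy numerical sketch of dot-product attention (shapes and values invented, not from any real model): each output is a weighted average of input vectors, with weights from a softmax over query-key scores.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

inputs = np.array([[1.0, 0.0],    # input word 1
                   [0.0, 1.0]])   # input word 2
query = np.array([10.0, 0.0])     # strongly matches input word 1

scores = inputs @ query           # dot-product relevance scores
weights = softmax(scores)         # attention weights, sum to 1
context = weights @ inputs        # weighted mix of input vectors
```

The output "attends" almost entirely to input word 1, giving it direct access to that word regardless of position.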
What is ‘parameter space’ in ML?
The space of all possible combinations of model parameters.
How do transformers improve upon the attention mechanism?
By replacing recurrent networks with self-attention mechanisms.
How does reinforcement learning differ from supervised learning?
In reinforcement learning, the algorithm learns behaviours from reward and punishment feedback rather than from labeled examples.
How does supervised learning work?
Machine learns to map inputs to outputs with labeled examples.
What role do rewards and punishments play in reinforcement learning?
They guide the learner to improve its actions.
How do you evaluate a regression model?
Using error measures like SSE, MSE, MAE, RMSE.
What is SSE, MSE, MAE, RMSE?
Sum of Squared Errors, Mean Squared Error, Mean Absolute Error, Root Mean Squared Error.
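The four measures computed side by side on small made-up predictions and targets:

```python
y_true = [3.0, 5.0, 2.0]
y_pred = [2.0, 5.0, 4.0]

errors = [t - p for t, p in zip(y_true, y_pred)]
sse = sum(e ** 2 for e in errors)                 # Sum of Squared Errors
mse = sse / len(errors)                           # Mean Squared Error
mae = sum(abs(e) for e in errors) / len(errors)   # Mean Absolute Error
rmse = mse ** 0.5                                 # Root Mean Squared Error
```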
What is the bottleneck problem in recurrent networks?
They remember recent words better and struggle with long dependencies.
How does k-means clustering work?
Choosing k cluster centroids, assigning items, and recomputing centroids.
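A minimal 1-D k-means sketch following those three steps (made-up data and initial centroids):

```python
import numpy as np

points = np.array([1.0, 1.2, 0.8, 8.0, 8.2, 7.8])
centroids = np.array([0.0, 10.0])   # initial guesses for k = 2

for _ in range(10):
    # assignment step: index of the nearest centroid for each point
    assign = np.argmin(np.abs(points[:, None] - centroids[None, :]), axis=1)
    # update step: each centroid moves to the mean of its assigned points
    centroids = np.array([points[assign == k].mean() for k in range(2)])
```

The centroids settle at the means of the two obvious groups (1.0 and 8.0).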
What are some key concepts related to language models (LLMs)?
Predicting the next word, learning from large corpora, self-supervision.
What are clustering models used for in unsupervised learning?
Grouping similar inputs together to identify different types of input.
What is principal component analysis (PCA)?
Algorithm to reduce the dimensions of data while preserving variance.
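A PCA sketch via the covariance matrix (toy data invented for illustration): the principal components are the eigenvectors with the largest eigenvalues, i.e. the directions of greatest variance.

```python
import numpy as np

X = np.array([[1.0, 1.1], [2.0, 1.9], [3.0, 3.2], [4.0, 3.8]])
Xc = X - X.mean(axis=0)                 # centre the data

cov = np.cov(Xc.T)                      # 2x2 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
top = eigvecs[:, -1]                    # first principal component

X_reduced = Xc @ top                    # project 2-D data down to 1-D
```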
How do you evaluate a classifier model?
Using accuracy, error rate, and (for binary classifiers) a confusion matrix of true/false positives and negatives.
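Those measures computed for a binary classifier on made-up predictions (1 = positive, 0 = negative):

```python
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

accuracy = (tp + tn) / len(y_true)
error_rate = 1.0 - accuracy
confusion = [[tp, fn],      # actual positives
             [fp, tn]]      # actual negatives
```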
What are recurrent networks and how are they used in LLMs?
Networks that remember earlier words in sequential data.
What are transformers in the context of LLMs?
Models that use self-attention mechanisms for encoding and decoding text.
What are word embeddings used for in LLMs?
To represent words in an n-dimensional space.
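A toy illustration of "similar words are close in embedding space" using cosine similarity (the 3-D vectors are invented, real embeddings have hundreds of dimensions):

```python
import numpy as np

emb = {
    "cat": np.array([0.9, 0.8, 0.1]),
    "dog": np.array([0.8, 0.9, 0.2]),
    "car": np.array([0.1, 0.2, 0.9]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

sim_cat_dog = cosine(emb["cat"], emb["dog"])
sim_cat_car = cosine(emb["cat"], emb["car"])
```

As intended, "cat" is closer to "dog" than to "car".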
How are neural networks structured?
Organized into layers of neuron-like units with connections.
How does an autoencoder neural network work for dimensionality reduction?
It learns to map each input back onto itself through a bottleneck layer.
What are some examples of supervised ML models for classification?
k-nearest neighbour, decision trees
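A minimal 1-nearest-neighbour classifier sketch (pure Python, made-up 1-D training data with two classes):

```python
train = [(1.0, "a"), (1.2, "a"), (8.0, "b"), (8.4, "b")]

def knn_predict(x):
    # label of the training point closest to x (k = 1)
    return min(train, key=lambda item: abs(item[0] - x))[1]

label = knn_predict(7.5)
```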
What is underfitting in the context of supervised ML?
When the model is too simple and doesn’t learn real structure.
What are word representations in LLMs?
Input words can be represented as one-hot vectors over the vocabulary; the model outputs a probability distribution over possible next words.
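One-hot encoding over a tiny made-up vocabulary: each word becomes a vector with a single 1 at its vocabulary index.

```python
vocab = ["the", "cat", "sat"]

def one_hot(word):
    return [1 if w == word else 0 for w in vocab]

vec = one_hot("cat")   # [0, 1, 0]
```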
How does training a neural network work?
Adjusting weights to improve performance using training data.
How do unsupervised ML models differ from supervised ML models?
They do not have labeled target outputs in training data.
How do neural networks learn to predict the next word in natural language text?
By training on large corpora, predicting probability distributions.
What is reinforcement learning?
Algorithm learns good outputs through trial and error with rewards and punishments.