LESSON 11 - Unsupervised Learning Foundations Flashcards
What is one characteristic of unsupervised learning that distinguishes it from supervised learning?
Unsupervised learning does not require explicit labels, and it focuses on discovering statistical regularities and patterns in the environment without constant feedback.
How is reinforcement learning different from supervised learning in terms of the feedback provided?
In reinforcement learning, the feedback is not an explicit correct/incorrect label for each example; it is a scalar reward signal, such as +1 or -1, tied to the outcomes of the actions the agent takes.
What is the main advantage of unsupervised learning, particularly in industrial settings?
Its main advantage is that no labels are needed, which makes it well suited to tasks such as anomaly detection in industrial settings, where labeled examples are often unavailable.
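A minimal sketch of label-free anomaly detection, using scikit-learn's IsolationForest on synthetic "sensor" readings (the data and the contamination rate are illustrative assumptions, not from the lesson):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 3))   # typical operating readings
spikes = rng.normal(loc=6.0, scale=1.0, size=(10, 3))    # a few abnormal readings
readings = np.vstack([normal, spikes])                   # no labels attached

# The detector is fit without any labels; it flags points that are easy to
# isolate, i.e. that deviate from the bulk of the data.
detector = IsolationForest(contamination=0.02, random_state=0).fit(readings)
flags = detector.predict(readings)                       # +1 = inlier, -1 = anomaly
print("flagged as anomalous:", int((flags == -1).sum()))
```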
What is the causality issue mentioned in unsupervised learning, and how is it addressed?
The causality issue is that establishing cause and effect requires active interaction rather than purely passive observation. Transfer learning is suggested as a way to address it: unsupervised learning is performed first and then followed by supervised learning.
How does unsupervised learning contribute to transfer learning in a sequential manner?
Unsupervised learning is applied first to extract features, and supervised learning then builds on those features, making it easier to handle new examples accurately.
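A minimal sketch of this two-stage sequence, assuming PCA as the unsupervised feature extractor and logistic regression as the supervised stage (the digits dataset and the number of components are illustrative choices):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stage 1 (unsupervised): fit the feature extractor without looking at labels.
pca = PCA(n_components=20).fit(X_train)

# Stage 2 (supervised): a classifier trained on the extracted features.
clf = LogisticRegression(max_iter=2000).fit(pca.transform(X_train), y_train)
print("test accuracy:", clf.score(pca.transform(X_test), y_test))
```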
In the context of unsupervised learning, what role does feature extraction play in preparing for supervised learning?
Unsupervised learning involves describing objects and extracting features, which is useful for supervised learning, where decision boundaries (lines or hyperplanes) are then drawn to separate the classes.
How does hierarchical processing in neural networks contribute to making classification problems linearly separable?
Hierarchical processing in neural networks, as in the example of the car and plane manifolds, gradually untangles initially indistinguishable objects as processing advances along the visual pathway, until the classes become linearly separable.
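A minimal sketch of the underlying idea, assuming a toy concentric-circles dataset: the two classes cannot be separated by a line in the input space, but a single non-linear hidden layer produces a representation from which a linear readout succeeds:

```python
from sklearn.datasets import make_circles
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Two concentric rings: no single line separates them in the raw input space.
X, y = make_circles(n_samples=500, noise=0.05, factor=0.4, random_state=0)

linear = LogisticRegression().fit(X, y)                    # no hidden stage
deep = MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000,
                     random_state=0).fit(X, y)             # one non-linear stage

print("linear readout on raw input   :", linear.score(X, y))  # near chance
print("after a non-linear hidden layer:", deep.score(X, y))   # near 1.0
```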
What is the primary goal of learning representations in neural networks, and how is data compressed in the process?
The main goal of learning representations is to extract the essential features. Data is compressed by removing unnecessary features, such as redundant, highly correlated ones, so the number of features shrinks while the relationships in the data are still approximately preserved.
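A minimal sketch of one simple form of this compression, assuming hypothetical sensor features where two columns carry the same information: the highly correlated duplicate is dropped while the relationships in the data are essentially preserved:

```python
import numpy as np

rng = np.random.default_rng(0)
temp_c = rng.normal(20.0, 5.0, size=1000)        # hypothetical temperature sensor
temp_f = temp_c * 9 / 5 + 32                     # redundant: same information rescaled
vibration = rng.normal(size=1000)                # an unrelated feature
X = np.column_stack([temp_c, temp_f, vibration])

# Drop the later member of any pair whose absolute correlation is near 1.
corr = np.abs(np.corrcoef(X, rowvar=False))
drop = {j for i in range(X.shape[1]) for j in range(i + 1, X.shape[1])
        if corr[i, j] > 0.95}
X_compressed = np.delete(X, sorted(drop), axis=1)
print("features before/after:", X.shape[1], "->", X_compressed.shape[1])  # 3 -> 2
```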
How is Principal Component Analysis (PCA) used to find directions of maximum variability in a dataset?
PCA is a statistical technique that identifies directions of maximum variability in a dataset. It finds the first principal component, the direction with the highest variance, and then subsequent, mutually orthogonal components in decreasing order of variance.
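A minimal sketch, assuming a synthetic 2-D cloud stretched along one direction: the first component found by scikit-learn's PCA aligns with that direction, and the explained variance ratios come out in decreasing order:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Anisotropic cloud: large spread along one axis, small along the other, then rotated.
raw = rng.normal(size=(500, 2)) * np.array([5.0, 0.5])
angle = np.deg2rad(30)
rotation = np.array([[np.cos(angle), -np.sin(angle)],
                     [np.sin(angle),  np.cos(angle)]])
X = raw @ rotation.T

pca = PCA(n_components=2).fit(X)
print("component directions (rows):\n", pca.components_)
print("explained variance ratio   :", pca.explained_variance_ratio_)  # decreasing
```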
What is the purpose of decomposing and compressing a matrix in PCA, and what are the vectors representing principal components used for?
The data matrix is decomposed and compressed so that most of the variability is captured while the representation stays close to the original. The vectors representing the principal components are then used to reconstruct the data from fewer dimensions.
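A minimal sketch of compress-then-reconstruct, assuming the 64-dimensional digits images and an arbitrary choice of 16 retained components:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)               # 1797 images, 64 features each

pca = PCA(n_components=16).fit(X)                 # compress 64 -> 16 dimensions
codes = pca.transform(X)                          # the compressed representation
X_hat = pca.inverse_transform(codes)              # rebuild from the 16 components

rel_error = np.mean((X - X_hat) ** 2) / np.mean((X - X.mean(axis=0)) ** 2)
print("fraction of variance kept (approx):", round(1 - float(rel_error), 3))
```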
In unsupervised learning, how does reinforcement learning differ from other approaches in terms of interaction with the environment?
Reinforcement learning involves active interaction with the environment: the agent takes actions and receives feedback in the form of scalar rewards such as +1 or -1, rather than explicit correct/incorrect labels.
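A minimal sketch of this interaction loop, with a hypothetical two-action environment that returns only a scalar +1/-1 reward (the reward probabilities, exploration rate, and learning rate are illustrative):

```python
import random

def environment(action):
    """Hypothetical environment: action 1 pays off more often than action 0."""
    p_good = 0.8 if action == 1 else 0.2
    return +1 if random.random() < p_good else -1

random.seed(0)
values = {0: 0.0, 1: 0.0}                    # running value estimate per action
for _ in range(1000):
    explore = random.random() < 0.1
    action = random.choice([0, 1]) if explore else max(values, key=values.get)
    reward = environment(action)             # only a scalar +1 / -1 comes back
    values[action] += 0.1 * (reward - values[action])

print("learned action values:", values)      # action 1 ends up preferred
```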
What is the key difference between supervised learning and unsupervised learning when it comes to the availability of external teaching signals?
In supervised learning, a constant external teaching signal is available, whereas in unsupervised learning the agent tries to discover statistical regularities on its own, without explicit labels.
How does transfer learning leverage unsupervised learning in the context of feature extraction?
Transfer learning starts with unsupervised learning, extracting features and preparing the model. Later, supervised learning is applied to build upon the extracted features for specific tasks.
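A minimal sketch of the same leverage when labels are scarce, assuming the unsupervised stage sees the whole unlabeled pool while the supervised stage gets only a small labeled subset (dataset, subset size, and component count are illustrative):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)

pca = PCA(n_components=20).fit(X)                 # stage 1: whole pool, no labels used
X_small, y_small = X[:200], y[:200]               # stage 2: small labeled subset only
clf = LogisticRegression(max_iter=2000).fit(pca.transform(X_small), y_small)

print("accuracy on the remaining data:", clf.score(pca.transform(X[200:]), y[200:]))
```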
Explain the role of clustering and feature extraction in unsupervised learning.
Clustering and feature extraction are techniques used in unsupervised learning to identify prototypes and reduce the number of features, respectively, without relying on explicit labels.
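A minimal sketch of the clustering side, assuming two synthetic blobs: k-means recovers one prototype (centroid) per cluster without ever seeing labels:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
blob_a = rng.normal(loc=(0.0, 0.0), scale=0.5, size=(100, 2))
blob_b = rng.normal(loc=(5.0, 5.0), scale=0.5, size=(100, 2))
points = np.vstack([blob_a, blob_b])              # no labels attached

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print("prototypes (cluster centers):\n", kmeans.cluster_centers_)
```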
What is the significance of hierarchical processing in neural networks, and how does it contribute to linear separability?
Hierarchical processing transforms initially indistinguishable objects into distinguishable ones, contributing to linear separability in neural networks, particularly in the context of visual pathways and object recognition.