LESSON 11 - Unsupervised learning foundation Flashcards

1
Q

What is one characteristic of unsupervised learning that distinguishes it from supervised learning?

A

Unsupervised learning does not require explicit labels, and it focuses on discovering statistical regularities and patterns in the environment without constant feedback.

2
Q

How is reinforcement learning different from supervised learning in terms of the feedback provided?

A

In reinforcement learning, the feedback is not an explicit correct/incorrect label; instead, the agent receives a reward signal such as +1 or -1 that depends on the outcome of the action it takes.
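
A minimal sketch of this kind of feedback in Python (the two-armed bandit environment and its payoff probabilities are hypothetical illustrations, not from the lesson):

import random

def environment(action):
    # Hypothetical two-armed bandit: action 1 pays off more often than action 0.
    win_probability = 0.8 if action == 1 else 0.3
    return +1 if random.random() < win_probability else -1

value = [0.0, 0.0]   # running estimate of each action's worth
counts = [0, 0]

for step in range(1000):
    # Mostly pick the action that currently looks best, sometimes explore.
    if random.random() < 0.1:
        action = random.choice([0, 1])
    else:
        action = max((0, 1), key=lambda a: value[a])
    reward = environment(action)                  # feedback is only +1 or -1
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]   # incremental average

print("Estimated action values:", value)          # the better action ends up with the higher value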

3
Q

What is the main advantage of unsupervised learning, particularly in industrial settings?

A

Unsupervised learning does not require labeled data, which is a major advantage and makes it particularly useful for tasks like anomaly detection in industrial settings.
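
A minimal sketch of label-free anomaly detection in Python, assuming scikit-learn is available; the sensor readings are synthetic and no fault labels are used:

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 3))   # routine operation
faults = rng.normal(loc=6.0, scale=1.0, size=(5, 3))     # a few unusual readings
readings = np.vstack([normal, faults])

detector = IsolationForest(contamination=0.01, random_state=0).fit(readings)
flags = detector.predict(readings)                        # +1 = normal, -1 = anomaly
print("Flagged as anomalous:", np.where(flags == -1)[0])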

4
Q

What is the causality issue mentioned in unsupervised learning, and how is it addressed?

A

The causality issue is that unsupervised learning relies on passive observation, whereas establishing causal relationships requires active observation. Transfer learning is suggested as a way to address this: unsupervised learning is performed first and then followed by supervised learning.

5
Q

How does unsupervised learning contribute to transfer learning in a sequential manner?

A

Unsupervised learning is initially used to extract features and then followed by supervised learning, making it easier to perform tasks accurately when new examples are introduced.
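
A minimal sketch of this unsupervised-then-supervised sequence, assuming scikit-learn; the digits dataset and the choice of 20 components are illustrative:

from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pca = PCA(n_components=20).fit(X_train)          # unsupervised step: no labels used
classifier = LogisticRegression(max_iter=1000).fit(pca.transform(X_train), y_train)

print("Test accuracy on PCA features:", classifier.score(pca.transform(X_test), y_test))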

6
Q

In the context of unsupervised learning, what role does feature extraction play in preparing for supervised learning?

A

Unsupervised learning describes objects and extracts features, which is useful for subsequent supervised learning, where hyperplanes or lines are chosen to separate classes in classification tasks.

7
Q

How does hierarchical processing in neural networks contribute to making classification problems linearly separable?

A

Hierarchical processing in neural networks, as seen in the example of car and plane manifolds, enables linear separability by transforming initially indistinguishable object representations into distinguishable ones as processing advances along the visual pathway.
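
A toy illustration of the same idea (not the car/plane manifolds themselves), assuming scikit-learn and numpy: two concentric rings cannot be separated by a line in the raw input space, but a single nonlinear processing stage makes them linearly separable:

import numpy as np
from sklearn.datasets import make_circles
from sklearn.svm import LinearSVC

X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

raw_score = LinearSVC(max_iter=10000).fit(X, y).score(X, y)   # roughly chance level
X_lifted = np.column_stack([X, (X ** 2).sum(axis=1)])         # crude stand-in for one processing stage
lifted_score = LinearSVC(max_iter=10000).fit(X_lifted, y).score(X_lifted, y)   # close to 1.0

print("linear accuracy in input space:", raw_score)
print("linear accuracy after one nonlinear stage:", lifted_score)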

8
Q

What is the primary goal of learning representations in neural networks, and how is data compressed in the process?

A

The main goal of learning representations is to extract essential features. Data is compressed by removing unnecessary features, such as correlated ones, reducing the number of features while approximately capturing the relationships.
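
A minimal sketch of this kind of compression, assuming pandas and numpy; the 0.95 correlation threshold and the toy columns are illustrative choices:

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "a": rng.normal(size=200),
    "c": np.ones(200),                                       # constant: carries no information
})
df["b"] = df["a"] * 2 + rng.normal(scale=0.01, size=200)     # nearly a copy of "a"

constant = [col for col in df.columns if df[col].nunique() <= 1]
corr = df.drop(columns=constant).corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
redundant = [col for col in upper.columns if (upper[col] > 0.95).any()]

compressed = df.drop(columns=constant + redundant)
print("kept features:", list(compressed.columns))            # only "a" survives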

9
Q

How is Principal Component Analysis (PCA) used to find directions of maximum variability in a dataset?

A

PCA is a statistical technique that identifies directions of maximum variability in a dataset. It finds the first principal component, representing the direction with the highest variance, and subsequent components in decreasing order of variance.
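
A minimal sketch of this in Python, using only numpy: center the data, form the covariance matrix, and take its eigenvectors ordered by eigenvalue (the variance each direction captures). The synthetic 2-D data is an illustration:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])   # stretched along one axis

X_centered = X - X.mean(axis=0)
covariance = np.cov(X_centered, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(covariance)

order = np.argsort(eigenvalues)[::-1]                # largest variance first
print("variance along each component:", eigenvalues[order])
print("first principal component:", eigenvectors[:, order[0]])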

10
Q

What is the purpose of decomposing and compressing a matrix in PCA, and what are the vectors representing principal components used for?

A

A data matrix is decomposed and compressed in PCA to capture most of the variability while keeping a representation close to the original. The vectors representing the principal components are then used to reconstruct the data from fewer dimensions.
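
A minimal sketch of compression and reconstruction with the leading principal components, assuming scikit-learn; the digits dataset and 16 components are illustrative:

import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)                  # 64-dimensional images

pca = PCA(n_components=16).fit(X)                    # keep 16 of 64 dimensions
codes = pca.transform(X)                             # compressed representation
X_reconstructed = pca.inverse_transform(codes)       # map back to 64 dimensions

print("kept variance fraction:", pca.explained_variance_ratio_.sum())
print("mean squared reconstruction error:", np.mean((X - X_reconstructed) ** 2))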

11
Q

In unsupervised learning, how does reinforcement learning differ from other approaches in terms of interaction with the environment?

A

Reinforcement learning involves active interaction with the environment: actions are taken and feedback is received as reward values such as +1 or -1, rather than as explicit correct/incorrect labels.

12
Q

What is the key difference between supervised learning and unsupervised learning when it comes to the availability of external teaching signals?

A

In supervised learning, a constant external teaching signal is available, while in unsupervised learning the agent tries to discover statistical regularities without explicit labels.

13
Q

How does transfer learning leverage unsupervised learning in the context of feature extraction?

A

Transfer learning starts with unsupervised learning, extracting features and preparing the model. Later, supervised learning is applied to build upon the extracted features for specific tasks.

14
Q

Explain the role of clustering and feature extraction in unsupervised learning.

A

Clustering and feature extraction are techniques used in unsupervised learning to identify prototypes and reduce the number of features, respectively, without relying on explicit labels.
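
A minimal sketch of the clustering half of this card, assuming scikit-learn: k-means groups unlabeled points, and its cluster centers act as prototypes (the feature-extraction half is illustrated by the PCA sketches above). The blob data is synthetic:

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.7, random_state=0)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)   # no labels used
print("prototype (cluster center) coordinates:")
print(kmeans.cluster_centers_)
print("first ten cluster assignments:", kmeans.labels_[:10])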

15
Q

What is the significance of hierarchical processing in neural networks, and how does it contribute to linear separability?

A

Hierarchical processing transforms initially indistinguishable objects into distinguishable ones, contributing to linear separability in neural networks, particularly along the visual pathway in object recognition.

16
Q

How does the removal of unnecessary features contribute to the compression of data in neural networks?

A

The removal of unnecessary features, such as correlated or constant ones, reduces the number of dimensions in the data, effectively compressing it while capturing essential relationships.

17
Q

What does Principal Component Analysis (PCA) aim to achieve in a given dataset, and how are principal components represented?

A

PCA aims to find directions of maximum variability in a dataset. Principal components are vectors representing these directions, ordered by the amount of variance they capture.

18
Q

In the context of PCA, what is the role of orthogonal components when compressing images?

A

Orthogonal components are used to compress images in PCA. By representing images in terms of these orthogonal components, the original data can be reconstructed with reduced dimensions.

19
Q

How is unsupervised learning utilized in convolutional networks to map and compress images?

A

Convolutional networks take every pixel as an input and compress images by projecting them into a lower-dimensional space through hidden layers, ultimately aiming to decompress (reconstruct) the original image effectively.
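
A minimal convolutional autoencoder sketch, assuming PyTorch; the architecture, layer sizes, and 28x28 input are illustrative choices, not the lecture's model:

import torch
from torch import nn

class ConvAutoencoder(nn.Module):
    def __init__(self, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, stride=2, padding=1),    # 28x28 -> 14x14
            nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, stride=2, padding=1),   # 14x14 -> 7x7
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * 7 * 7, latent_dim),                      # compressed code
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 16 * 7 * 7),
            nn.ReLU(),
            nn.Unflatten(1, (16, 7, 7)),
            nn.ConvTranspose2d(16, 8, kernel_size=4, stride=2, padding=1),   # 7x7 -> 14x14
            nn.ReLU(),
            nn.ConvTranspose2d(8, 1, kernel_size=4, stride=2, padding=1),    # 14x14 -> 28x28
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))   # compress, then try to decompress

model = ConvAutoencoder()
images = torch.rand(32, 1, 28, 28)                       # stand-in batch of images
loss = nn.functional.mse_loss(model(images), images)     # reconstruction objective
print("untrained reconstruction loss:", loss.item())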

20
Q

Explain the concept of self-supervised learning and its application in neural networks.

A

Self-supervised learning involves training a neural network to predict the next element or fill in omissions in data. It does not rely on traditional labels, making it a form of unsupervised learning with a focus on data prediction.
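
A tiny self-supervised sketch, assuming PyTorch: the target for each step is simply the next value in the sequence, so the data supplies its own labels. The sine-wave signal and the small network are illustrative:

import torch
from torch import nn

signal = torch.sin(torch.linspace(0, 20, 500))
window = 10
inputs = torch.stack([signal[i:i + window] for i in range(len(signal) - window)])
targets = signal[window:].unsqueeze(1)                 # "label" = the next element

model = nn.Sequential(nn.Linear(window, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for epoch in range(200):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(inputs), targets)
    loss.backward()
    optimizer.step()

print("final next-step prediction loss:", loss.item())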

21
Q

How does unsupervised learning contribute to the analysis of psychological studies, as exemplified in the Big Five personality traits?

A

Unsupervised learning, specifically the extraction of features, can be applied to psychological studies like the Big Five, summarizing different aspects of personality by approximating them with a reduced set of main features.
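
A minimal sketch of summarizing many questionnaire items with a few latent factors, assuming scikit-learn; the synthetic responses are an illustration, not real Big Five data, and the five components simply mirror the five traits:

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_people, n_items = 300, 25
traits = rng.normal(size=(n_people, 5))                  # hidden personality factors
loadings = rng.normal(size=(5, n_items))                 # how each item reflects the factors
responses = traits @ loadings + rng.normal(scale=0.5, size=(n_people, n_items))

fa = FactorAnalysis(n_components=5, random_state=0).fit(responses)
scores = fa.transform(responses)                         # 25 items -> 5 summary features
print("per-person summary shape:", scores.shape)         # (300, 5)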

22
Q

What distinguishes convolutional networks from other unsupervised learning models in terms of mapping pixels to data samples?

A

Convolutional networks take raw pixels directly as input and compress them through hidden layers, projecting images into a lower-dimensional space.

23
Q

How is the gradual change in latent code utilized in generating smooth transitions between digits or faces?

A

The gradual change in latent code allows for smooth transitions between digits or faces, enabling the generation of images with subtle changes, providing a more continuous and natural appearance.
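
A minimal sketch of latent-space interpolation, assuming PyTorch; the decoder here is an untrained stand-in for a trained generator, so the point is only the linear blend between two latent codes:

import torch
from torch import nn

latent_dim = 16
decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                        nn.Linear(128, 28 * 28), nn.Sigmoid())   # stand-in generator

z_start = torch.randn(latent_dim)            # latent code of the first digit/face
z_end = torch.randn(latent_dim)              # latent code of the second

frames = []
for alpha in torch.linspace(0, 1, steps=8):
    z = (1 - alpha) * z_start + alpha * z_end            # gradual change in the code
    frames.append(decoder(z).reshape(28, 28))            # decode each intermediate code

print("generated", len(frames), "frames of shape", tuple(frames[0].shape))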

24
Q

What does the decrease in the loss function during training signify, and how can it be helpful in assessing the learning progress?

A

The decrease in the loss function during training indicates that the model is learning and fitting the data better. It can be helpful in assessing learning progress, with a lower loss indicating improved performance.
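
A minimal sketch of watching the loss fall during training, using plain numpy gradient descent on a synthetic linear-regression problem; the data and learning rate are illustrative:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=200)

w = np.zeros(3)
learning_rate = 0.1
for epoch in range(1, 51):
    predictions = X @ w
    loss = np.mean((predictions - y) ** 2)
    gradient = 2 * X.T @ (predictions - y) / len(y)
    w -= learning_rate * gradient
    if epoch % 10 == 0:
        print(f"epoch {epoch:2d}  loss {loss:.4f}")       # steadily shrinks toward the noise floor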

25
Q

In the context of neural networks, how is overfitting identified, and what measures can be taken to address it?

A

Overfitting is identified when there is a significant difference between training and test phase performance. Measures to address overfitting may include introducing a validation set or adjusting regularization techniques.
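
A minimal sketch of spotting overfitting with a held-out validation set, assuming scikit-learn; the sine data and polynomial degrees are illustrative:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=60)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

for degree in (3, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression()).fit(X_train, y_train)
    print(f"degree {degree:2d}  train R^2 {model.score(X_train, y_train):.2f}"
          f"  validation R^2 {model.score(X_val, y_val):.2f}")   # a large gap signals overfitting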