LESSON 14 - Unsupervised deep learning Flashcards

1
Q

What is the stochastic nature of RBMs, and how does the activation value of a neuron relate to its likelihood of turning on?

A

RBMs are stochastic because neuron activations are probabilistic rather than deterministic: the activation value of a neuron is the probability that it turns on (takes the value 1), and its actual binary state is sampled from that probability.
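A minimal NumPy sketch of this sampling rule (the layer sizes and weights below are arbitrary illustrations, not values from the lesson): the activation value is a sigmoid probability, and the neuron's binary state is drawn from it.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy RBM: 6 visible units, 4 hidden units (illustrative sizes).
W = rng.normal(scale=0.1, size=(6, 4))   # visible-to-hidden weights
b_h = np.zeros(4)                        # hidden biases

v = rng.integers(0, 2, size=6)           # a binary visible pattern

# The activation value of each hidden neuron is a probability...
p_h = sigmoid(v @ W + b_h)

# ...and its binary state is sampled stochastically from it.
h = (rng.random(4) < p_h).astype(int)
```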

2
Q

Describe the process of learning in RBMs, including the bottom-up and top-down phases during training.

A

In RBMs, learning alternates two phases. In the bottom-up phase, an input pattern presented to the visible neurons drives the activity of the hidden neurons; in the top-down phase, the hidden states drive the visible neurons, so that the activity of the visible layer is determined by the internal states of the model.
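The two phases can be sketched as a single up-down pass in NumPy (illustrative, untrained weights; a real RBM would use learned parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

W = rng.normal(scale=0.1, size=(6, 4))   # illustrative weights
b_v, b_h = np.zeros(6), np.zeros(4)

v_data = rng.integers(0, 2, size=6)      # input pattern on the visible layer

# Bottom-up phase: visible activity drives the hidden neurons.
p_h = sigmoid(v_data @ W + b_h)
h = (rng.random(4) < p_h).astype(int)

# Top-down phase: the hidden states drive the visible neurons,
# producing a reconstruction shaped by the model's internal state.
p_v = sigmoid(h @ W.T + b_v)
v_recon = (rng.random(6) < p_v).astype(int)
```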

3
Q

What is the significance of stacking multiple RBMs in the context of deep learning, and how does it contribute to hierarchy?

A

Stacking multiple RBMs allows for the creation of deep networks. The hidden neurons of the first RBM become the visible neurons of the second, and so on, forming a hierarchy of representations that progresses from simple to more complex features.
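A sketch of the greedy layer-wise scheme (all sizes are illustrative, and the per-layer training step is only indicated by a hypothetical placeholder comment):

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Illustrative layer sizes: 784 visible -> 256 hidden -> 64 hidden.
sizes = [784, 256, 64]
weights = [rng.normal(scale=0.01, size=(m, n)) for m, n in zip(sizes, sizes[1:])]

data = rng.random((10, 784))  # placeholder standing in for real inputs

# Greedy layer-wise stacking: the hidden activities of one RBM
# serve as the "visible" data for the RBM above it.
layer_input = data
for W in weights:
    # train_rbm(layer_input, W) would run here (hypothetical helper)
    layer_input = sigmoid(layer_input @ W)

# layer_input now holds the top-level (most abstract) representation.
```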

4
Q

Explain the concept of sparsity in the context of RBMs and its role in improving selectivity.

A

Sparsity in RBMs means preventing too many hidden neurons from being active for the same stimulus. This improves selectivity: each neuron becomes tuned to specific features rather than responding indiscriminately.
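One simple way to encourage sparsity (a sketch, not necessarily the exact penalty used in the lesson) is to nudge each hidden unit's mean activation toward a small target value:

```python
import numpy as np

rng = np.random.default_rng(3)

# p_h: hidden activation probabilities for a batch of stimuli
# (random placeholder, shape: batch_size x n_hidden).
p_h = rng.random((32, 16))

target_sparsity = 0.05             # desired mean activation per hidden unit
mean_activation = p_h.mean(axis=0)

# Push each unit's average activity toward the target; this term
# would be added to the hidden-bias update during training.
sparsity_grad = target_sparsity - mean_activation
```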

5
Q

How does a linear classifier on top of a deep belief network (DBN) enhance its capabilities, and what is the advantage of using semi-supervised learning?

A

A linear classifier on top of a DBN maps the learned representations onto class labels. Because the DBN itself is trained without labels, supervision is needed only for the small labelled portion of the dataset used to train the classifier; this semi-supervised scheme greatly reduces the amount of labelled data required.
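A sketch of the semi-supervised setup using scikit-learn (the features here are random placeholders standing in for the DBN's top-layer representations):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# Placeholder features standing in for the DBN's top hidden layer,
# obtained without any labels; 1000 examples, 64 features each.
features = rng.random((1000, 64))
labels = rng.integers(0, 10, size=1000)

# Semi-supervised setting: labels available for only a small subset.
n_labelled = 100
clf = LogisticRegression(max_iter=1000)
clf.fit(features[:n_labelled], labels[:n_labelled])

# The linear read-out then predicts labels for the unlabelled rest.
preds = clf.predict(features[n_labelled:])
```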

6
Q

Elaborate on the advantage of using a trained DBN for multiple tasks. How does it relate to the concept of hierarchy?

A

A trained DBN can perform multiple tasks by attaching only a small additional read-out layer per task. Because the hierarchy of hidden layers provides increasingly abstract representations, the same learned features can be reused across different tasks.
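A sketch of the multi-task idea (the task names and output sizes are hypothetical): the same DBN features feed several small linear read-out layers.

```python
import numpy as np

rng = np.random.default_rng(5)

# Shared top-layer DBN features (random placeholder).
features = rng.random((1000, 64))

# One small linear read-out per task, all reusing the same features.
readouts = {
    "digit_identity": rng.normal(scale=0.01, size=(64, 10)),
    "digit_parity":   rng.normal(scale=0.01, size=(64, 2)),
}

outputs = {task: features @ W for task, W in readouts.items()}
```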

7
Q

In the study’s results, what does the graph with different bars represent, and what does it reveal about the advantages of a hierarchical structure?

A

The bar graph shows resilience to noise in decoding tasks. The hierarchy of hidden layers in the DBN makes representations more abstract and robust, as evidenced by accuracy that remains consistently high compared to a single-level RBM.

8
Q

How is feedback implemented in a DBN, and what are the three main ways it can be utilized, as mentioned in the text?

A

Feedback in a DBN is implemented through bidirectional connections. The three main ways feedback can be utilized are: generating prototypes, cross-modal interactions, and mixing to solve ambiguities.

9
Q

Explain the first way feedback is utilized in DBNs, known as the “generation of prototypes.”

A

In the “generation of prototypes,” the DBN is activated with a class label, and generation starts from the top hidden layer and propagates down to the visible layer. This results in a prototype image representing the specified class.
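A simplified sketch of prototype generation (illustrative weights; a full DBN would typically let the top layers settle via sampling before the deterministic top-down pass shown here):

```python
import numpy as np

rng = np.random.default_rng(6)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Illustrative weights: a label layer on top of a two-layer DBN.
W_label_top = rng.normal(scale=0.1, size=(10, 64))   # labels -> top hidden
W_top_vis   = rng.normal(scale=0.1, size=(64, 784))  # top hidden -> visible

# Clamp a one-hot class label...
label = np.zeros(10)
label[3] = 1.0

# ...and propagate activity top-down to the visible layer.
h_top = sigmoid(label @ W_label_top)
prototype = sigmoid(h_top @ W_top_vis)   # prototype image for class 3
```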

10
Q

How can cross-modal interactions be achieved in a DBN, and what do they involve?

A

Cross-modal interactions in a DBN involve processing inputs from different modalities, such as images and text, in separate streams at first, and then integrating them in a shared joint layer where the two are considered together.
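A sketch of the two-stream architecture (all sizes and weights are illustrative): each modality has its own stream, and their outputs are integrated in a shared joint layer.

```python
import numpy as np

rng = np.random.default_rng(7)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Modality-specific streams.
W_img   = rng.normal(scale=0.1, size=(784, 128))  # image stream
W_txt   = rng.normal(scale=0.1, size=(300, 128))  # text stream
W_joint = rng.normal(scale=0.1, size=(256, 64))   # shared joint layer

img, txt = rng.random(784), rng.random(300)

# Each modality is first processed independently...
h_img = sigmoid(img @ W_img)
h_txt = sigmoid(txt @ W_txt)

# ...then the two streams are integrated in a joint layer.
h_joint = sigmoid(np.concatenate([h_img, h_txt]) @ W_joint)
```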

11
Q

What is the purpose of mixing in DBNs to solve ambiguities, and how does it exploit the recurrent nature of the network?

A

Mixing in DBNs to solve ambiguities involves iteratively propagating activity up and down the network. This exploits the recurrent nature of the network to eliminate noise in the initial state, converging to low-energy states representing trained patterns.
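A sketch of the mixing loop (illustrative, untrained weights; with a trained RBM the up-down passes would settle onto a stored pattern):

```python
import numpy as np

rng = np.random.default_rng(8)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

W = rng.normal(scale=0.1, size=(784, 256))  # illustrative RBM weights
b_v, b_h = np.zeros(784), np.zeros(256)

v = (rng.random(784) < 0.5).astype(float)   # noisy initial image

# Mixing: repeatedly propagate activity up and down the network.
# With trained weights, each pass nudges the state toward a
# low-energy configuration corresponding to a stored pattern.
for _ in range(20):
    h = (rng.random(256) < sigmoid(v @ W + b_h)).astype(float)
    v = (rng.random(784) < sigmoid(h @ W.T + b_v)).astype(float)
```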

12
Q

How does the recurrent nature of DBNs contribute to handling noisy images and maintaining low-energy states?

A

The recurrent nature of DBNs helps in handling noisy images: activations are repeatedly updated until the network settles into a stable, low-energy state. This eliminates noise and aligns the state with learned representations.
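For reference, the standard RBM energy function behind the phrase "low-energy state", written as a small helper (shapes are illustrative):

```python
import numpy as np

def rbm_energy(v, h, W, b_v, b_h):
    """Energy of a joint (visible, hidden) configuration:
    E(v, h) = -v.T @ W @ h - b_v.T @ v - b_h.T @ h.
    Lower energy corresponds to configurations the trained
    network favours; mixing descends toward such states."""
    return -(v @ W @ h) - (b_v @ v) - (b_h @ h)

rng = np.random.default_rng(11)
W = rng.normal(scale=0.1, size=(6, 4))       # illustrative parameters
v = rng.integers(0, 2, size=6)
h = rng.integers(0, 2, size=4)
print(rbm_energy(v, h, W, np.zeros(6), np.zeros(4)))
```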

13
Q

What application is mentioned for generating images using DBNs, and what term is used to describe this process?

A

DBNs can be used for generating images, particularly in the context of art creation. The term “prototypes” is used to describe the generated images.

14
Q

In the context of representation learning, how is a DBN utilized to build hierarchical models?

A

DBNs are used for representation learning: they build hierarchical models of the data, with each hidden layer capturing features at a higher level of abstraction than the layer below it.

15
Q

What is the fundamental difference between the bottom-up and top-down phases of learning in RBMs?

A

In the bottom-up phase, input patterns presented to the visible neurons drive the hidden neurons; in the top-down phase, the direction reverses and the activity of the visible neurons is determined by the internal states of the model.

16
Q

How does sparsity in RBMs contribute to improving selectivity, and where is it particularly applied in the network?

A

Sparsity in RBMs prevents too many hidden neurons from being active for the same stimulus, improving selectivity. It is particularly applied to the top layer of the network.

17
Q

Explain the term “Markov Blanket” in the context of RBMs and why it is useful during inference.

A

The Markov blanket of a unit is the set of its neighbors; for a hidden unit in an RBM this is the set of all visible units. It is useful during inference because, once the Markov blanket is observed, the unit is conditionally independent of every other unit, so all hidden units can be sampled in parallel given the visible layer.
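A sketch of why this matters computationally (illustrative weights): conditioned on the visible layer, the hidden units are mutually independent, so they can all be sampled in one vectorized step.

```python
import numpy as np

rng = np.random.default_rng(9)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

W = rng.normal(scale=0.1, size=(784, 256))  # illustrative weights
v = (rng.random(784) < 0.5).astype(float)   # observed visible layer

# The visible layer is the Markov blanket of every hidden unit.
# Conditioned on it, the hidden units are mutually independent,
# so all 256 of them are sampled in one vectorized step.
p_h = sigmoid(v @ W)
h = (rng.random(256) < p_h).astype(int)
```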

18
Q

How is the training of RBMs related to generative models, and what is the significance of the positive and negative phases?

A

RBMs are generative models trained to minimize the discrepancy between the observed data and the data the model generates. In the positive phase, an input pattern is clamped on the visible units and data-driven correlations between visible and hidden units are computed; in the negative phase, the model generates plausible data on its own and the corresponding model-driven correlations are computed. The weights are updated in proportion to the difference between the two sets of correlations.
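A minimal CD-1-style sketch of the two phases (biases omitted for brevity; an illustration of the idea, not necessarily the exact procedure from the lesson):

```python
import numpy as np

rng = np.random.default_rng(10)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

W = rng.normal(scale=0.01, size=(6, 4))          # toy RBM weights
v_data = rng.integers(0, 2, size=6).astype(float)
lr = 0.1

# Positive phase: clamp the data, measure data-driven correlations.
p_h_data = sigmoid(v_data @ W)
pos_corr = np.outer(v_data, p_h_data)

# Negative phase: let the model generate plausible data (one
# reconstruction step, CD-1 style) and measure its correlations.
h = (rng.random(4) < p_h_data).astype(float)
v_model = sigmoid(h @ W.T)
neg_corr = np.outer(v_model, sigmoid(v_model @ W))

# Update: move the model's statistics toward the data's statistics.
W += lr * (pos_corr - neg_corr)
```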

19
Q

In the context of DBNs, how does the hierarchical structure contribute to resilience against noise, as mentioned in the study’s results?

A

The hierarchical structure of DBNs, with progressively abstract representations in hidden layers, contributes to resilience against noise. The study’s results show that accuracy remains high even with increased noise, indicating robustness.

20
Q

What is the role of feedback connections in a DBN, and how do they differ from traditional feed-forward connections?

A

Feedback connections in a DBN are bidirectional, allowing information to propagate both up and down the layers. Unlike traditional feed-forward connections, these bidirectional connections enable recurrent interactions.

21
Q

Explain the concept of “cross-modal interactions” in the context of DBNs and how it relates to processing different types of inputs.

A

Cross-modal interactions in DBNs involve initially processing inputs of different types, such as images and text, in independent streams, and later integrating them in a joint layer where the modalities are considered together.

22
Q

What is the significance of using feedback connections to generate prototypes in a DBN, and how is it related to class labels?

A

Feedback connections in a DBN can be used to generate prototypes by activating a class label and propagating the generation from the top hidden layer down to the visible layer. This process creates a prototype image associated with the specified class label.

23
Q

How does mixing in DBNs help solve ambiguities, and what role does the recurrent nature of the network play in this process?

A

Mixing in DBNs involves iterative propagation of activity up and down the network, helping to solve ambiguities in the presence of noise. The recurrent nature of the network allows it to reach low-energy states corresponding to trained patterns.

24
Q

In the context of DBNs, what advantage does the use of a linear classifier on top provide, and how does it relate to semi-supervised learning?

A

A linear classifier on top of a DBN maps the learned representations onto class labels. Semi-supervised learning, which uses supervision for only a small part of the dataset, allows the classifier to be trained efficiently.

25
Q

How does the concept of a “Markov Blanket” relate to the efficiency of parallel sampling in RBMs?

A

Because every hidden unit in an RBM has the same Markov blanket, namely the set of all visible units, observing the visible layer renders the hidden units conditionally independent of one another. During inference, all hidden units can therefore be sampled in parallel (and, symmetrically, all visible units can be sampled in parallel given the hidden layer).