Recurrent Neural Networks Flashcards

1
Q

What are Recurrent Networks?

A

Neural networks with feedback (recurrent) connections: hidden activations are fed back in later timesteps, so the network keeps an internal state that acts as a memory of past inputs.
2
Q

For which scenarios are Recurrent Networks optimal?

A

When past inputs must be retained and used later.
Especially for problems with a temporal component.

3
Q

Describe a Simple Recurrent Network

A

A feedforward network (e.g. an MLP) extended by a context layer that stores a copy of the hidden layer's activations from timestep t-1 and feeds it back as additional input at timestep t (Elman network).
4
Q

What is a context layer?

A

copies the output of a hidden layer at t-1 and feeds it back into the network as input

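The copy-back mechanism of the context layer can be sketched in a few lines of NumPy. All dimensions and weights here are arbitrary illustrations, not values from the course:

```python
import numpy as np

# Hypothetical dimensions, purely for illustration
n_in, n_hidden, n_out = 3, 5, 2
rng = np.random.default_rng(0)

W_in = rng.normal(size=(n_hidden, n_in))       # input -> hidden
W_ctx = rng.normal(size=(n_hidden, n_hidden))  # context -> hidden
W_out = rng.normal(size=(n_out, n_hidden))     # hidden -> output

def srn_step(x, context):
    # The hidden state sees the current input AND the context layer,
    # which holds a copy of the hidden activations from t-1
    h = np.tanh(W_in @ x + W_ctx @ context)
    y = W_out @ h
    return y, h  # h becomes the context for the next timestep

context = np.zeros(n_hidden)
for x in rng.normal(size=(4, n_in)):  # a short input sequence
    y, context = srn_step(x, context)
```

Note that the context is not a separate trained representation; it is literally the previous hidden state passed back in as extra input.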
5
Q

Which alternative to the SRN exists if you want to use a feedforward network?

A

Time Delay Neural Network

6
Q

What are the advantages of Time Delay Neural Networks?

A
  • Feedforward architecture
    ▪ easy parallelization
    ▪ efficient learning
  • No memory fading over the length of the context
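The idea behind these advantages can be sketched as follows: a TDNN concatenates a fixed window of past timesteps into one input vector, so every position is an ordinary feedforward pass. Window size and layer widths below are hypothetical:

```python
import numpy as np

# Hypothetical setup: the last 3 timesteps are concatenated into one
# feedforward input vector (the fixed "time delay" window)
window, n_features, n_hidden, n_out = 3, 4, 8, 2
rng = np.random.default_rng(1)

W1 = rng.normal(size=(n_hidden, window * n_features))
W2 = rng.normal(size=(n_out, n_hidden))

def tdnn_forward(sequence):
    outputs = []
    # Each window position is an independent feedforward pass, which is
    # why TDNNs parallelize easily and learn efficiently
    for t in range(window - 1, len(sequence)):
        x = sequence[t - window + 1 : t + 1].reshape(-1)
        outputs.append(W2 @ np.tanh(W1 @ x))
    return np.array(outputs)

seq = rng.normal(size=(10, n_features))
out = tdnn_forward(seq)  # one output per window position
```

Because every timestep inside the window enters the input directly, nothing fades: the oldest step in the window has the same access to the weights as the newest.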
7
Q

What are the drawbacks of Time Delay Neural Networks?

A

Drawbacks of TDNNs:
  • Length of the context is fixed
  • For large contexts:
    ▪ large input dimensionality
    ▪ many weights/parameters

8
Q

What is an Echo State Network?

A
  • a variation of the SRN
  • different input histories of an SRN lead to different activations even when the recurrent/input weights are random
  • therefore it is sufficient to train only the output weights
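A minimal ESN sketch, assuming a toy one-step-ahead sine prediction task (reservoir size, scaling, and the task itself are illustrative choices, not from the course):

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_res = 1, 50

# Random, untrained input and recurrent (reservoir) weights
W_in = rng.normal(scale=0.5, size=(n_res, n_in))
W_rec = rng.normal(size=(n_res, n_res))
# Scale W_rec so its spectral radius is below 1 (echo state property)
W_rec *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_rec)))

def run_reservoir(inputs):
    h = np.zeros(n_res)
    states = []
    for x in inputs:
        h = np.tanh(W_in @ x + W_rec @ h)
        states.append(h)
    return np.array(states)

# Hypothetical toy task: one-step-ahead prediction of a sine wave
u = np.sin(np.linspace(0, 8 * np.pi, 400))[:, None]
H = run_reservoir(u[:-1])   # reservoir states, shape (399, 50)
target = u[1:, 0]           # next value of the input signal

# Only the output weights are trained: a single least-squares fit
W_out, *_ = np.linalg.lstsq(H, target, rcond=None)
pred = H @ W_out
```

The recurrent and input weights are never touched; training reduces to fitting a linear readout on the recorded reservoir states.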
9
Q

What is the echo state property?

A

The largest absolute eigenvalue (spectral radius) of W^rec must not be much larger than 1; in practice it is usually scaled to below 1.

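A small numerical illustration of what the property buys you (sizes and the 0.9 scaling are arbitrary choices): when the spectral radius is below 1, the reservoir state depends only on the recent input history, so two different initial states driven by the same inputs converge to the same trajectory.

```python
import numpy as np

rng = np.random.default_rng(3)
W_rec = rng.normal(size=(30, 30))

# Rescale so the largest absolute eigenvalue (spectral radius) is 0.9
W_rec *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_rec)))

# Drive two DIFFERENT initial states with the SAME input sequence;
# the influence of the initial condition should "wash out" over time
inputs = rng.normal(size=(200, 30))
h1 = rng.normal(size=30)
h2 = rng.normal(size=30)
for x in inputs:
    h1 = np.tanh(x + W_rec @ h1)
    h2 = np.tanh(x + W_rec @ h2)

gap = np.linalg.norm(h1 - h2)  # shrinks toward 0 as history dominates
```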
10
Q

What is the advantage of an ESN?

A

The optimization problem can be solved with linear algebra, without iterative training.

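Concretely, since only a linear readout is trained, the output weights follow from one solve of the ridge-regression normal equations. The data below is synthetic; in an ESN, `H` would be the matrix of recorded reservoir states:

```python
import numpy as np

rng = np.random.default_rng(4)
# H: matrix of collected hidden/reservoir states, y: training targets
# (synthetic stand-ins here, purely for illustration)
H = rng.normal(size=(100, 20))
true_w = rng.normal(size=20)
y = H @ true_w

# Ridge regression: output weights from one linear-algebra solve of
# (H^T H + lam*I) w = H^T y -- no gradient descent, no iteration
lam = 1e-6
W_out = np.linalg.solve(H.T @ H + lam * np.eye(20), H.T @ y)
```

The small regularizer `lam` keeps the solve well-conditioned when reservoir states are correlated.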
11
Q

What is the drawback of an SRN?

A

Because of their simple structure, SRNs cannot be arranged hierarchically, so no more complex structure develops, since learning takes place on only one level (the output layer). They are good as basic classifiers.

12
Q

What are Extreme Learning Machines?

A

Feedforward networks whose hidden-layer weights are chosen randomly and left untrained; only the output weights are learned. They are the feedforward counterpart of Echo State Networks.
13
Q

What is the advantage of Extreme Learning Machines?

A

Fast training

14
Q

What is the drawback of Extreme Learning Machines?

A

Difficult / data-dependent setting of parameters, e.g. scaling of the weights and their sparseness

15
Q

What is a Hopfield Network?

A
  • based on the idea of associative memory: stored patterns are retrieved from partial or noisy cues
16
Q

How does learning work in Hopfield Networks?

A

Weight update: the connections (synaptic weights) between the neurons are updated according to Hebb's rule.
w_ij = 1/n * sum over p patterns of x_i^p * x_j^p
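Hebb's rule above is just an outer product over the stored patterns, which makes a minimal NumPy sketch short (pattern count, size, and noise level are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100
# Three bipolar (+1/-1) patterns to be stored
patterns = rng.choice([-1, 1], size=(3, n))

# Hebb's rule: w_ij = (1/n) * sum over patterns of x_i * x_j
W = patterns.T @ patterns / n
np.fill_diagonal(W, 0)  # no self-connections

def recall(x, steps=5):
    # Repeated threshold updates; the state should settle into the
    # nearest stored pattern
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)
    return x

# Recover a stored pattern from a corrupted cue: flip 10 of 100 bits
noisy = patterns[0].copy()
noisy[:10] *= -1
restored = recall(noisy)
```

With only 3 patterns in 100 neurons the network is far below capacity, so recall from the noisy cue is reliable.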

17
Q

Hopfield Networks

How many patterns can be stored in memory?

A

With the Hebbian rule, only about 0.138 * N patterns can be stored reliably, where N is the number of neurons.
18
Q

Hopfield Network

Was ist die Energy of the Network’s Activations?

A

E = -1/2 * sum over i,j of w_ij * x_i * x_j. Low-energy states correspond to the stored patterns; asynchronous updates never increase the energy.
19
Q

What is the idea behind extending the Hebb rule so that 2N patterns can be stored?

A
20
Q

What are the limitations of Hopfield Networks?

A
  • allow only a few patterns
  • allow only static patterns, not dynamic ones
21
Q

Explain Conway's Game of Life

A
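Conway's Game of Life is a 2D cellular automaton: each cell is alive or dead, a live cell survives if it has exactly 2 or 3 live neighbors, a dead cell becomes alive with exactly 3 live neighbors, and every other cell dies or stays dead. A minimal NumPy sketch of one update step, demonstrated on the period-2 "blinker" oscillator (the toroidal wrap-around is an implementation choice):

```python
import numpy as np

def step(grid):
    """One Game of Life update on a 2D array of 0s and 1s."""
    # Count the 8 neighbors of every cell via shifted copies
    # (np.roll wraps around, i.e. a toroidal grid)
    neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # Birth: dead cell with exactly 3 neighbors;
    # survival: live cell with 2 or 3 neighbors
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# A "blinker": three cells in a row oscillate with period 2
grid = np.zeros((5, 5), dtype=int)
grid[2, 1:4] = 1
after_two = step(step(grid))  # back to the starting configuration
```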
22
Q

Which of the statements about Recurrent Neural Networks (RNNs) are true?
1. Considering an unlimited number of timesteps, a simple recurrent network can be translated into a feed-forward MLP.
2. In an echo-state network (ESN), only the weights between the hidden and the output layer are trained, thus the error function can be differentiated to a linear system.
3. Simple recurrent networks retain absolute knowledge of each previous input in the sequence.
4. RNNs can be used for sequence classification, but not for sequence prediction.

A

2

23
Q

Complete the following statement: The predictive performance of decision trees can be improved using…?
1. Random Trees
2. Bagging
3. Boosting

A

All three

24
Q

Which of the following ensemble learning techniques is more likely to yield better predictive performance than the other?
1. Bagging better than Boosting
2. Boosting better than Bagging
3. Both always show the same performance

A

2

25
Q

Which of the following statements about Dropout in neural networks are true?
1. Dropout can prevent a network from underfitting the training data.
2. The choice of neurons to be deactivated is made based on their prior reliability.
3. Dropout applied on the input level can be viewed as bagging of input features.
4. Dropout strongly resembles ensemble learning when using a network repeatedly with alternating deactivated neural units.

A

3 and 4

26
Q

Complete the following statement: During Bagging or Bootstrap Aggregation, …?
1. …subsets of the original training data are sampled with replacement.
2. …all models in the ensemble operate on the same resampled subsets of training data, but with different random initialization of model parameters.
3. …an input sample is classified by taking the vote of the most reliable predictor as final output label.

A

1
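
Statement 1 can be illustrated in two lines of NumPy (dataset size and ensemble size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
data = np.arange(10)  # stand-in for a training set of 10 samples

# Bagging: every ensemble member gets its own bootstrap sample, drawn
# from the original training data WITH replacement; duplicates are
# expected, and on average ~36.8% of points are left out per sample
n_models = 5
samples = [rng.choice(data, size=len(data), replace=True)
           for _ in range(n_models)]
```

Because each model sees a different resample (not the same subset, ruling out statement 2), the ensemble's vote averages out the variance of the individual models.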