Neural Networks Flashcards

1
Q

Describe Hebb's rule

A
2
Q

Describe the structure of a perceptron

A
3
Q

What is the formula for the inner activation?

A
4
Q

How does updating the weights work?

A
5
Q

How do you minimize E?

A

Via gradient descent
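A minimal sketch of one gradient-descent step on a squared-error cost E (the linear unit, the variable names, and the learning rate are illustrative assumptions, not from the deck):

  import numpy as np

  def gradient_descent_step(w, x, t, eta=0.1):
      """One step on E = 1/2 * (t - y)^2 for a linear unit y = w . x."""
      y = np.dot(w, x)          # current prediction
      grad_E = -(t - y) * x     # dE/dw for the squared error
      return w - eta * grad_E   # step against the gradient, scaled by the learning rate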

6
Q

Describe the rule for the change of weights mathematically

A
7
Q

How does the perceptron algorithm work?

A
8
Q

How does linear separability work with perceptrons?

A
9
Q

What are sigmoid neurons?

A

They use the sigmoid (logistic) activation function σ(z) = 1 / (1 + e^(−z)) instead of a hard threshold

10
Q

Why not choose all-same input weights for a layer?

A

All units will have the same activations → information loss

11
Q

Why not choose all-same output weights for a layer?

A

All learning signals will be the same → symmetry-breaking problem
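A small illustrative sketch (hypothetical two-unit tanh layer) of how identical weights produce identical activations and hence identical learning signals, which small random initialization avoids:

  import numpy as np

  rng = np.random.default_rng(0)
  x = rng.normal(size=3)                       # one input vector

  W_same = np.full((2, 3), 0.5)                # all-same weights:
  print(np.tanh(W_same @ x))                   # both units produce identical activations

  W_rand = rng.normal(scale=0.1, size=(2, 3))  # small random weights break the symmetry
  print(np.tanh(W_rand @ x))                   # units now compute different features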

12
Q

How does error backpropagation work?

A
13
Q

Differentiate the sigmoid activation function

A
14
Q

Describe batch update

A
15
Q

Describe online update

A
16
Q

Describe minibatch updates

A
17
Q

What is the momentum term? What are its advantages?

A
18
Q

Which of the statements about convolutional neural networks (CNNs) are true?
1. With CNNs, one can put emphasis on certain objects and/or areas in an image to increase classification accuracy.
2. The kernel size has to be chosen greater than the size of the input image.
3. CNNs can capture dependencies between pixels, which distinguishes them from “classic” feed-forward neural networks.
4. The filters used in CNNs are manually designed for each dataset.

A

1 and 3
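A minimal sketch (assuming PyTorch; the layer sizes are arbitrary) of a convolutional layer whose 3x3 kernel is much smaller than the input image and is learned from data rather than hand-designed, while its local receptive field captures dependencies between neighbouring pixels:

  import torch
  import torch.nn as nn

  conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1)

  img = torch.randn(1, 1, 28, 28)   # one grayscale image (batch, channels, H, W)
  features = torch.relu(conv(img))  # learned filters respond to local pixel neighbourhoods
  print(features.shape)             # torch.Size([1, 8, 28, 28])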

19
Q

Which of the following statements about the Perceptron algorithm are true?
1. At the start of training a perceptron, all weights are initialized with 0.
2. Before updating the weights, the errors for all datapoints seen so far have to be calculated.
3. The value of the cost function depends both on the weight parameters of the perceptron and the input.
4. The weights are updated by the gradient of the cost function scaled by the learning rate.

A

3 and 4
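A short sketch of the classic perceptron learning loop (the {-1, +1} label convention and the small random initialization are assumptions): the weights do not have to start at 0, each misclassified point triggers an immediate update, and the update is the gradient of the cost scaled by the learning rate:

  import numpy as np

  def train_perceptron(X, t, eta=0.1, epochs=10):
      """X: (n, d) inputs, t: (n,) labels in {-1, +1}; bias omitted for brevity."""
      rng = np.random.default_rng(0)
      w = rng.normal(scale=0.01, size=X.shape[1])   # 0-initialization is not required
      for _ in range(epochs):
          for x_i, t_i in zip(X, t):                # update right after each datapoint
              if t_i * np.dot(w, x_i) <= 0:         # misclassified
                  w += eta * t_i * x_i              # gradient step, scaled by the learning rate
      return w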

20
Q

Which of the following statements about Receiver Operating Characteristic (ROC) curves is true?
1. ROC curves are used to measure the harmonic mean of precision and recall.
2. ROC curves are used to visualize the relationship between sensitivity and specificity.
3. ROC curves plot a function dependent on the False Positive rate.
4. The AUC (Area Under Curve) can be used as a measure of model performance.

A

2 and 4
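A minimal sketch (assuming scikit-learn, with made-up labels and scores) that traces the true-positive rate (sensitivity) as a function of the false-positive rate (1 − specificity) and computes the AUC:

  import numpy as np
  from sklearn.metrics import roc_curve, roc_auc_score

  y_true = np.array([0, 0, 1, 1, 0, 1])                # hypothetical labels
  y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7])  # hypothetical model scores

  fpr, tpr, thresholds = roc_curve(y_true, y_score)    # TPR vs. FPR at each threshold
  print(fpr, tpr)
  print("AUC:", roc_auc_score(y_true, y_score))        # single-number performance measure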

21
Q

Which of the functions below are commonly used as activation functions in deep neural networks?
1. Hyperbolic Tangent (tanh)
2. Hyperbolic Secant (sech)
3. Softmax
4. Rectified Linear Unit (ReLU)
5. Sigmoid

A

All except 2
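A short NumPy sketch of the four commonly used activation functions (softmax applied over a vector, the others element-wise); sech is the one that is not used in practice:

  import numpy as np

  def sigmoid(z):
      return 1.0 / (1.0 + np.exp(-z))

  def tanh(z):
      return np.tanh(z)

  def relu(z):
      return np.maximum(0.0, z)

  def softmax(z):
      e = np.exp(z - np.max(z))   # shift by max(z) for numerical stability
      return e / e.sum()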

22
Q

One major disadvantage of linear activation functions is that they often suffer from vanishing gradients, i.e. weight adaptation saturates too quickly during the training process. True or false?

A

False

23
Q

Describe the hyperbolic tangent (tanh) activation function

A
24
Q

What are the advantages of tanh?

A
  • Outputs are zero-centered, which can help with faster convergence in training
  • Stronger gradients compared to the sigmoid function
25
Q

What are the disadvantages of tanh?

A

Can still suffer from the vanishing gradient problem for very large or small inputs

26
Q

Describe softmax as an activation function

A
27
Q

Describe the Rectified Linear Unit (ReLU) activation function

A
28
Q

What are the advantages of ReLU?

A
  • Computationally efficient
  • Helps mitigate the vanishing gradient problem
  • Allows for sparse activation in neural networks
29
Q

What are the disadvantages of ReLU?

A

Can suffer from the “dying ReLU” problem, where neurons can become inactive and stop learning

30
Q

What are the advantages of the sigmoid function?

A
  • Outputs can be interpreted as probabilities
  • Smooth gradient
31
Q

What are the disadvantages of the sigmoid function?

A
  • Outputs are not zero-centered
  • Can suffer from vanishing gradient problem for very large or small inputs
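A small sketch comparing the gradients of these activations, to illustrate the saturation (vanishing gradient) of sigmoid and tanh for large |z| and the zero gradient of ReLU for negative inputs ("dying ReLU"):

  import numpy as np

  z = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])

  sig = 1.0 / (1.0 + np.exp(-z))
  print("sigmoid':", sig * (1 - sig))         # ~0 at |z| = 10: vanishing gradient
  print("tanh':   ", 1 - np.tanh(z) ** 2)     # also saturates, but stronger gradients near 0
  print("ReLU':   ", (z > 0).astype(float))   # exactly 0 for z <= 0: units can 'die'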