FCFF neural networks Flashcards

1
Q

Problems of delta rule

A

1) Does not converge for non-linearly separable problems (e.g. XOR; see the sketch below)
2) Does not minimize the number of mistakes
3) No way to combine non-linearities
4) For small values of the potential P, the transform is almost linear
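
A minimal sketch of the rule itself, assuming a single sigmoid unit trained on squared error (the names w, x, t, eta are illustrative):

```python
import numpy as np

def delta_rule_step(w, x, t, eta=0.1):
    # one delta-rule update for a single sigmoid unit
    u = 1.0 / (1.0 + np.exp(-(w @ x)))          # unit output
    return w + eta * (t - u) * u * (1 - u) * x  # gradient step on (t-u)^2/2
```

On a non-linearly separable set such as XOR, repeating this update never drives the error to zero, which is problem 1 above.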

2
Q

Possible task for multi-layer networks in addition to classification

A

Signal processing, function modeling, model parameter mapping

3
Q

Cost functions for regression

A

MAE, RMSE, RMSLE

4
Q

MAE formula

A

sum(abs(t-u))/n
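
A one-line NumPy version of this formula (t = targets, u = predictions, names as in the card):

```python
import numpy as np

def mae(t, u):
    # mean absolute error: sum(abs(t - u)) / n
    return np.abs(t - u).mean()
```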

5
Q

RMSE formula

A

sqrt(sum((t-u)^2)/n) (NB: n is inside the square root)
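
The same formula in NumPy (t = targets, u = predictions):

```python
import numpy as np

def rmse(t, u):
    # note: the division by n happens inside the square root
    return np.sqrt(np.mean((t - u) ** 2))
```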

6
Q

RMSLE formula

A

sqrt(sum((log(t+1)-log(u+1))^2)/n)
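
A NumPy version, using log1p for log(x + 1) (assumes t and u are non-negative):

```python
import numpy as np

def rmsle(t, u):
    # RMSE computed on log(x + 1) instead of x
    return np.sqrt(np.mean((np.log1p(t) - np.log1p(u)) ** 2))
```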

7
Q

Binary cross-entropy

A

-(t*log(u) + (1-t)*log(1-u))
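
A NumPy version, averaged over samples; the clipping epsilon avoids log(0) and is an implementation choice, not part of the card's formula:

```python
import numpy as np

def bce(t, u, eps=1e-12):
    # t in {0, 1}, u in (0, 1); clip predictions away from 0 and 1
    u = np.clip(u, eps, 1 - eps)
    return -np.mean(t * np.log(u) + (1 - t) * np.log(1 - u))
```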

8
Q

Categorical cross-entropy

A

-sum(t*log(u))
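
A NumPy version for one-hot targets (the row-per-sample layout is an assumption):

```python
import numpy as np

def cce(t, u, eps=1e-12):
    # t: one-hot targets, u: predicted class probabilities, one row per sample
    return -np.mean(np.sum(t * np.log(np.clip(u, eps, 1.0)), axis=1))
```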

9
Q

What happens in backpropagation

A

The error signal is propagated from the output layer back to the hidden layers so that all the synaptic weights can be updated

10
Q

What happens in the forward step

A

An input pattern enters the network; each neuron processes it, and the resulting values flow through the network, layer by layer, until the output layer produces a result. The actual output is then compared with the expected output
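
A minimal forward-pass sketch for a two-layer network (the sigmoid activation and the names W1, W2 are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, W2):
    h = sigmoid(W1 @ x)   # hidden-layer activations
    u = sigmoid(W2 @ h)   # network output, compared with the target t
    return h, u
```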

11
Q

Backpropagation formula

A

Generalized delta rule: Δw_ij = η * δ_j * x_i, where
δ_k = (t_k - u_k) * f'(net_k) for output units, and
δ_j = f'(net_j) * sum_k(w_jk * δ_k) for hidden units
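
A minimal sketch of one full forward/backward update implementing the rule above, assuming squared-error loss and sigmoid units (all names are illustrative):

```python
import numpy as np

def backprop_step(x, t, W1, W2, eta=0.1):
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    # forward step
    h = sigmoid(W1 @ x)
    u = sigmoid(W2 @ h)
    # backward step: deltas for output and hidden layers
    delta_out = (t - u) * u * (1 - u)             # (t - u) * f'(net)
    delta_hid = (W2.T @ delta_out) * h * (1 - h)  # f'(net) * sum(w * delta)
    # weight updates: delta_w = eta * delta * input
    W2 += eta * np.outer(delta_out, h)
    W1 += eta * np.outer(delta_hid, x)
    return W1, W2
```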

12
Q

Vanishing gradient problem

A

Early hidden layers learn much more slowly than later hidden layers, because the error gradient shrinks as it is propagated backwards through the network; weights in the early hidden layers may therefore undergo erratic updating
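
A back-of-the-envelope illustration: the sigmoid derivative is at most 0.25, so the gradient shrinks geometrically as it is multiplied backwards through the layers (the 0.9 weight magnitude is illustrative):

```python
grad = 1.0
for layer in range(10):
    # sigmoid derivative peaks at 0.25; 0.9 stands in for a typical weight
    grad *= 0.25 * 0.9
print(grad)  # ~3e-7: almost no error signal reaches the earliest layers
```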

13
Q

Solution for vanishing gradient problem

A

1) Removing full connectivity, exploiting local receptive fields and convolutional processing
2) Using autoencoder units
3) Automatic weight control of neurons that saturate during training, by means of dropout (a sketch follows this list)
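
A minimal sketch of (inverted) dropout, as mentioned in point 3; the 1/(1-p) rescaling is a common implementation choice, not from the card:

```python
import numpy as np

def dropout(h, p=0.5, rng=None):
    # silence each hidden unit with probability p during training;
    # rescaling keeps the expected activation unchanged
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(h.shape) >= p
    return h * mask / (1.0 - p)
```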

14
Q

What is a problem of linear activation in multilayer network?

A

The combination of hyperplanes generated by the internal neurons divides the input space into closed and partially closed regions with irregular, piecewise-linear boundaries
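
A related, easy-to-check fact: with purely linear activations, stacked layers collapse into a single linear map, so extra layers add no expressive power (the sizes here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))
x = rng.normal(size=3)

# two linear layers are equivalent to one linear map
print(np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x))  # True
```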

15
Q

Capability of approximating functions of ffNN

A

FFNNs with a single hidden layer of sigmoidal units are capable of uniformly approximating any continuous multivariate function to any desired degree of accuracy (the universal approximation theorem)
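
An illustration of the theorem using scikit-learn's MLPRegressor, assuming that dependency is acceptable (the hyperparameters are illustrative, not from the card):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()

# one hidden layer of sigmoidal ("logistic") units, per the theorem
net = MLPRegressor(hidden_layer_sizes=(50,), activation="logistic",
                   solver="lbfgs", max_iter=5000,
                   random_state=0).fit(X, y)
print(np.max(np.abs(net.predict(X) - y)))  # small worst-case error
```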

16
Q

Effect of small and large values of the learning rate eta for FFNN

A

Small values imply a good approximation but slow convergence.
Large values imply fast convergence but risk ending in a local minimum or oscillating.
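
A toy illustration on f(w) = w^2, whose gradient is 2w (the specific eta values are illustrative):

```python
def descend(eta, w=1.0, steps=20):
    for _ in range(steps):
        w -= eta * 2 * w   # gradient of w^2 is 2w
    return w

print(descend(0.01))  # small eta: slow, still far from the minimum at 0
print(descend(0.45))  # moderate eta: converges quickly to ~0
print(descend(1.10))  # large eta: the step overshoots, oscillates and diverges
```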