True/False Flashcards
The mean-field theory for the Hopfield network yields the exact value for the critical storage capacity.
False
That the energy cannot increase under the deterministic Hopfield dynamics is a consequence of the fact that the weights are symmetric.
True
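A minimal Python sketch (random symmetric weights, zero thresholds, and ±1 states are assumptions made here) illustrating why the answer is True: under asynchronous deterministic updates with symmetric weights the energy H = -1/2 sum_ij w_ij s_i s_j never increases.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 50
    W = rng.normal(size=(N, N))
    W = (W + W.T) / 2            # symmetric weights
    np.fill_diagonal(W, 0.0)     # zero diagonal, the standard convention
    s = rng.choice([-1, 1], size=N).astype(float)

    def energy(s):
        return -0.5 * s @ W @ s  # H = -1/2 sum_ij w_ij s_i s_j (thresholds zero)

    E = energy(s)
    for _ in range(10 * N):
        i = rng.integers(N)                  # asynchronous update: one neuron at a time
        s[i] = 1.0 if W[i] @ s >= 0 else -1.0
        E_new = energy(s)
        assert E_new <= E + 1e-12            # the energy never increases
        E = E_new
    print("final energy:", E)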
The stochastic update rule for the Hopfield network is different from the Metropolis algorithm.
True
All stored patterns are local minima of the energy function.
False
The detailed balance condition is a necessary condition for the Markov-Chain Monte-Carlo algorithm to converge.
False
That the energy cannot increase under the deterministic Hopfield dynamics is valid only if the thresholds are put to zero.
False
For a given 𝛼, the one-step error probability for the deterministic Hopfield network is lower when the diagonal weights are set to zero.
False
In the limit of N → ∞ the order parameter m_μ can have more than one component of order unity, while the other components are small.
True
The stochastic update rule for the Hopfield network is identical to the Metropolis algorithm.
False
Not all local minima of the energy function of the Hopfield network correspond to stored patterns.
True
That the energy cannot increase under the deterministic Hopfield dynamics is a consequence of the fact that the diagonal weights are set to zero.
False
That the energy cannot increase under the deterministic Hopfield dynamics holds also when the thresholds are zero.
True
A perceptron that solves the parity problem with N inputs contains at least N^2 hidden neurons.
False
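A hedged sketch of one standard construction showing why N^2 hidden neurons are not needed: N threshold hidden units, where unit k fires when at least k inputs are 1, together with output weights alternating +1, -1, ... compute N-bit parity.
    from itertools import product

    def parity_net(x):
        """Parity of 0/1 inputs using len(x) hidden threshold units."""
        n = len(x)
        s = sum(x)
        hidden = [1 if s >= k else 0 for k in range(1, n + 1)]     # unit k fires if sum >= k
        out = sum(((-1) ** k) * h for k, h in enumerate(hidden))   # output weights +1, -1, +1, ...
        return 1 if out >= 0.5 else 0

    n = 4
    assert all(parity_net(x) == sum(x) % 2 for x in product([0, 1], repeat=n))
    print("parity reproduced for all", 2 ** n, "inputs")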
Increasing the number of hidden neurons in the network increases the risk of overfitting.
True
Two hidden layers are necessary to approximate any real-valued function with N inputs and one output in terms of a perceptron.
False
Using stochastic gradient descent in backpropagation ensures that the energy either decreases or stays constant.
False
In minimisation with a Lagrange multiplier, the function multiplying the Lagrange multiplier can also assume negative values.
False
Some of the functions with 5 Boolean-valued inputs and one Boolean-valued output are linearly separable.
True
Different layers of a deep network learn at different speeds because their effects on the output are different.
True
The weights in a perceptron are symmetric.
False
L1-regularisation reduces small weights more than L2-regularisation.
True
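A small numeric illustration (assuming plain weight-decay penalties λ|w| for L1 and λw²/2 for L2, and a learning rate η): the L1 decrement η·λ·sign(w) stays constant while the L2 decrement η·λ·w shrinks with |w|, so small weights are reduced more by L1.
    import numpy as np

    eta, lam = 0.1, 0.5
    for w in [1.0, 0.1, 0.01]:
        dw_l1 = eta * lam * np.sign(w)   # gradient of lam * |w|
        dw_l2 = eta * lam * w            # gradient of lam * w**2 / 2
        print(f"w={w:5.2f}  L1 step={dw_l1:.4f}  L2 step={dw_l2:.4f}")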
Weight decay helps against overfitting.
True
Pruning increases the risk of overfitting.
False
In minimisation with a Lagrange multiplier, the function multiplying the Lagrange multiplier must be equal to or larger than zero.
True
Back-propagation is a form of unsupervised learning.
False
To make use of back-propagation, it is necessary to know the target outputs of the input patterns in the training set.
True
“Early stopping” in back-propagation helps to avoid being stuck in local minima of energy.
False
“Early stopping” in back-propagation is a way to avoid overfitting.
True
Using a stochastic path through weight space in back-propagation helps to avoid being stuck in local minima of energy.
True
Using a stochastic path through weight space in back-propagation prevents overfitting.
False
Using a stochastic path through weight space in back-propagation ensures that the energy either decreases or stays constant.
False
There are 2^(2^n) functions with n Boolean-valued inputs and one Boolean-valued output.
True
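A brute-force check of the count for a small case (n = 2 here): a Boolean function assigns one of two output values to each of the 2^n possible input patterns, giving 2^(2^n) functions.
    from itertools import product

    n = 2
    inputs = list(product([0, 1], repeat=n))                # the 2^n possible input patterns
    functions = list(product([0, 1], repeat=len(inputs)))   # one output bit per input pattern
    print(len(functions), "==", 2 ** (2 ** n))              # 16 == 16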
None of the functions with 5 Boolean-valued inputs and one Boolean-valued output are linearly separable.
False
There are precisely 24 functions with 3 Boolean-valued inputs and one Boolean-valued output (equal to zero or one) where exactly three of the possible inputs map to zero.
False
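A quick count of why the answer is False: with 3 Boolean inputs there are 2^3 = 8 possible input patterns, and choosing which three of them map to zero gives C(8,3) = 56 functions, not 24.
    from itertools import product
    from math import comb

    outputs = product([0, 1], repeat=8)                   # all functions of 3 Boolean inputs
    count = sum(1 for f in outputs if f.count(0) == 3)    # exactly three inputs map to zero
    print(count, "==", comb(8, 3))                        # 56 == 56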
Oja’s learning rule is a form of unsupervised learning.
True
The dimension of the output space of a Kohonen network must be equal to the dimension of the input space.
False
The number of neurons in the input layer of a perceptron is equal to the number of input patterns.
False
You need access to the state of all neurons in a multilayer perceptron when updating all weights through backpropagation.
True
Consider the Hopfield network. If a pattern is stable it must be an eigenvector of the weight matrix.
False
If you store two orthogonal patterns in a Hopfield network, they must always turn out unstable.
False
The Kohonen algorithm learns convex distributions better than concave ones.
True
The number of N-dimensional Boolean functions is 2^N.
False. It is 2^(2^N): (the number of possible output values)^(the number of possible input patterns).
The weight matrices in a perceptron are symmetric.
False
Using g(b)=b as the activation function and putting all thresholds to zero in a multilayer perceptron allows you to solve some linearly inseparable problems.
False
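A small sketch (random weights chosen only for illustration) of why the answer is False: with g(b) = b and zero thresholds the layers compose into the single linear map W2·W1, so the network can only realise linearly separable decision boundaries.
    import numpy as np

    rng = np.random.default_rng(1)
    W1 = rng.normal(size=(4, 2))   # first hidden layer, thresholds zero
    W2 = rng.normal(size=(1, 4))   # output layer, thresholds zero

    x = rng.normal(size=2)
    two_layer = W2 @ (W1 @ x)      # g(b) = b in every layer
    single_layer = (W2 @ W1) @ x   # equivalent single linear map
    print(np.allclose(two_layer, single_layer))   # True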
You need at least four radial basis functions for the XOR-problem to be linearly separable in the space spanned by the radial basis functions.
False
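A hedged sketch of the classical two-RBF embedding of XOR (Gaussian basis functions centred at (0,0) and (1,1), widths set to 1 for illustration): in the two-dimensional space of basis-function outputs the four XOR points become linearly separable, so four basis functions are not required.
    import numpy as np

    centres = np.array([[0.0, 0.0], [1.0, 1.0]])
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0])                      # XOR targets

    phi = np.exp(-np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) ** 2)
    for point, target in zip(phi, y):
        print(point.round(3), target)
    # The two target-1 points coincide in phi-space, and the target-0 points lie on the
    # other side of the line phi1 + phi2 = 0.9, so one linear threshold separates them.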
Consider p>2 patterns uniformly distributed on a circle. None of the eigenvalues of the covariance matrix of the patterns is zero.
True
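A quick numeric check for one convenient case (p = 5 equally spaced points on the unit circle, an assumption made for illustration): the 2×2 covariance matrix has two strictly positive eigenvalues.
    import numpy as np

    p = 5
    angles = 2 * np.pi * np.arange(p) / p
    X = np.column_stack([np.cos(angles), np.sin(angles)])   # p > 2 points on the unit circle
    C = np.cov(X, rowvar=False)
    print(np.linalg.eigvalsh(C))   # both eigenvalues strictly positive (about 0.625 each)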
Assume that the weight vector in Oja’s rule corresponds to a stable steady state after a given iteration. The weight vector may change in the next iteration.
True
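A short sketch of a single stochastic Oja update (learning rate and data chosen only for illustration): even when w equals the leading principal direction, which is the steady state of the averaged dynamics, one update with an individual pattern still moves it.
    import numpy as np

    rng = np.random.default_rng(2)
    eta = 0.05
    X = rng.normal(size=(1000, 2)) * np.array([2.0, 0.5])   # leading principal direction (1, 0)

    w = np.array([1.0, 0.0])           # stable steady state of the averaged dynamics
    x = X[rng.integers(len(X))]        # one randomly drawn pattern
    y = w @ x
    w_new = w + eta * y * (x - y * w)  # single Oja update: dw = eta * y * (x - y * w)
    print(w, "->", w_new)              # the weight vector still changes after one stochastic step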
If your Kohonen network is supposed to learn the distribution P(ξ), it is important to generate the patterns ξ^(μ) before you start training the network.
False
All one-dimensional Boolean problems are linearly separable.
True
In Kohonen’s algorithm, the neurons have fixed positions in the output space.
True
Some elements of the covariance matrix are variances.
True