Module 13: Neural Networks Flashcards
Which of the following are true about linear classifiers? Please check all that apply.
They can be used for both regression and classification.
All sets of data points are linearly separable.
When using the perceptron learning rule, the weights are updated when the actual output does not match the hypothesis output.
The learning rule must be applied to one example at a time.
Answers:
They can be used for both regression and classification.
When using the perceptron learning rule, the weights are updated when the actual output does not match the hypothesis output.
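The perceptron learning rule described above can be sketched as follows. This is a minimal illustration (hypothetical data, AND function): the weights change only when the prediction disagrees with the label, and the rule is applied one example at a time.

```python
# Minimal perceptron learning-rule sketch on linearly separable AND data.

def predict(w, b, x):
    s = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s >= 0 else 0

def perceptron_train(data, lr=0.1, epochs=20):
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in data:            # one example at a time
            y_hat = predict(w, b, x)
            if y_hat != y:           # update only on a mismatch
                w = [wi + lr * (y - y_hat) * xi for wi, xi in zip(w, x)]
                b += lr * (y - y_hat)
    return w, b

# Linearly separable AND data
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = perceptron_train(data)
print([predict(w, b, x) for x, _ in data])  # → [0, 0, 0, 1]
```

Because the data is linearly separable, the rule converges to a separating boundary; on non-separable data it would cycle forever.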
Which of the following best characterizes the difference between parametric and nonparametric models?
A parametric model has a fixed number of parameters.
T/F
For linearly separable data, there exists only one decision boundary that separates the classes.
False
There could be multiple such boundaries, but how well they generalize may vary.
Y/N
Is it possible that the assignment of observations to clusters does not change between successive iterations in K-Means?
Yes
As the algorithm approaches convergence, the assignments may stop changing between iterations.
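This unchanged-assignment condition is exactly the usual K-Means stopping criterion. A minimal 1-D sketch (hypothetical data, k = 2) that stops as soon as the assignments do not change between successive iterations:

```python
# K-means on 1-D points with a convergence check on the assignments.

def kmeans_1d(points, centers, max_iter=100):
    assignments = None
    for _ in range(max_iter):
        # Assignment step: each point goes to its nearest center.
        new_assign = [min(range(len(centers)),
                          key=lambda j: abs(p - centers[j]))
                      for p in points]
        if new_assign == assignments:   # no change → converged
            break
        assignments = new_assign
        # Update step: move each center to the mean of its points.
        for j in range(len(centers)):
            members = [p for p, a in zip(points, assignments) if a == j]
            if members:
                centers[j] = sum(members) / len(members)
    return assignments, centers

points = [1.0, 1.2, 0.8, 9.0, 9.5, 10.0]
assignments, centers = kmeans_1d(points, centers=[0.0, 5.0])
print(assignments)  # → [0, 0, 0, 1, 1, 1]
```

Here the assignments are identical on the second pass, so the loop exits early rather than running all `max_iter` iterations.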
Relative to single-layer perceptrons, neural networks gain their power from
stacking of layers
The multi-layer architecture of a neural network provides enough parameters to achieve better performance on the task.
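The classic illustration of this extra power is XOR, which no single-layer perceptron can represent but one hidden layer can. The weights below are hand-picked for illustration, not learned:

```python
# A two-layer network computing XOR with hand-chosen weights.

def step(z):
    return 1 if z >= 0 else 0

def xor_net(x1, x2):
    # Hidden layer: h1 fires for OR, h2 fires for AND.
    h1 = step(x1 + x2 - 0.5)    # x1 OR x2
    h2 = step(x1 + x2 - 1.5)    # x1 AND x2
    # Output: OR and NOT AND → XOR.
    return step(h1 - h2 - 0.5)

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # → [0, 1, 1, 0]
```

Stacking the second layer lets the network combine two linear boundaries into a decision region no single linear boundary can produce.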
In neural networks, nonlinear activation functions: Please check all that apply.
- Make it possible to compute the gradient in backpropagation, as opposed to using a step function, which isn’t differentiable.
- Help to learn nonlinear decision boundaries.
- Are applied only to the output units.
- Always output values between 0 and 1.
Answers:
Make it possible to compute the gradient in backpropagation, as opposed to using a step function, which isn’t differentiable.
Help to learn nonlinear decision boundaries.
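A small sketch of why differentiability matters: the step function has zero gradient everywhere it is defined, so there is nothing to backpropagate, while the sigmoid has a nonzero gradient. The ReLU line also shows why "always output values between 0 and 1" is false:

```python
import math

def step(z):
    return 1.0 if z >= 0 else 0.0   # derivative is 0 wherever it exists

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_grad(z):
    s = sigmoid(z)
    return s * (1.0 - s)            # d/dz sigmoid(z), nonzero everywhere

def relu(z):
    return max(0.0, z)              # outputs are NOT bounded in [0, 1]

print(sigmoid(0.0), sigmoid_grad(0.0))  # → 0.5 0.25
print(relu(3.0))                        # → 3.0
```

The sigmoid gradient feeds directly into the chain rule during backpropagation; with the step function, every weight gradient would be zero.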
T/F
When performing k-means clustering, each observation always starts in its own cluster, and then pairs of clusters are merged in each iteration.
False
This is only true of agglomerative clustering.
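For contrast, a minimal agglomerative sketch (hypothetical 1-D points, single linkage): every observation starts in its own cluster, and the two closest clusters are merged at each step — the behavior the question wrongly attributes to k-means.

```python
# Agglomerative clustering on 1-D points with single linkage.

def agglomerative_1d(points, target_k):
    clusters = [[p] for p in points]      # each point starts alone
    while len(clusters) > target_k:
        # Find the closest pair of clusters (single linkage).
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)    # merge the closest pair
    return clusters

print(agglomerative_1d([1.0, 1.1, 5.0, 5.2], target_k=2))  # → [[1.0, 1.1], [5.0, 5.2]]
```

K-means instead fixes the number of clusters up front and only reassigns points; it never merges clusters.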