10.2 Deep learning Flashcards
A neural network embedding maps instances from a ____-dimensional input space to a ____-dimensional representation space.
- high / low
Embeddings generally map instances from a high-dimensional input space (such as raw pixels) to a low-dimensional space that represents useful features of the input and in which learning is easier.
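As a minimal sketch (assuming PyTorch, which the flashcards do not mention), the encoder below maps flattened 28×28 images, a 784-dimensional pixel space, into a 32-dimensional embedding space; the layer sizes are purely illustrative.

```python
import torch
import torch.nn as nn

# Illustrative encoder: high-dimensional pixels -> low-dimensional embedding.
encoder = nn.Sequential(
    nn.Flatten(),            # (batch, 1, 28, 28) -> (batch, 784)
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 32),      # 32-dimensional embedding space
)

images = torch.randn(16, 1, 28, 28)   # dummy batch of images
embeddings = encoder(images)
print(embeddings.shape)               # torch.Size([16, 32])
```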
Which operation in a convolutional neural network produces an output that is smaller than the input?
- All of these
All of these will produce an output smaller than the input. The (valid) output of a convolution with kernel size > 1 is always smaller than the input image, regardless of the stride, and max pooling with stride > 1 also produces an output smaller than its input.
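A quick shape check (again a PyTorch sketch with illustrative sizes) shows both effects: a valid 3×3 convolution shrinks a 32×32 input to 30×30, and 2×2 max pooling with stride 2 halves it to 16×16.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)                      # 32x32 RGB input

conv = nn.Conv2d(3, 8, kernel_size=3, padding=0)   # "valid" convolution, stride 1
pool = nn.MaxPool2d(kernel_size=2, stride=2)       # max pooling with stride 2

print(conv(x).shape)   # torch.Size([1, 8, 30, 30])  -- 32 - 3 + 1 = 30
print(pool(x).shape)   # torch.Size([1, 3, 16, 16])  -- 32 / 2 = 16
```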
If a deep neural network is overfitting, which of these options would help prevent that? (select all that are correct)
- Reduce training epochs
- Add dropout after some layers
Dropout is a regularization technique that can be added to neural networks to reduce overfitting, and training for fewer epochs (early stopping) can also help prevent overfitting. Changing the layer type is not generally helpful for preventing overfitting (unless it reduces the number of weights to train), and adding more layers may make the problem worse (more parameters to train means more chance to overfit to the training data).
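The sketch below (PyTorch, with illustrative layer sizes, dropout rates, and dummy validation losses) shows both remedies: dropout inserted after some layers, and an early-stopping loop that halts once the validation loss stops improving.

```python
import torch.nn as nn

# Model with dropout added after some layers.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),      # randomly zeroes 50% of activations during training
    nn.Linear(256, 10),
)

# Early-stopping skeleton: stop once validation loss stops improving.
# The losses here are dummy values standing in for a real validation run.
val_losses = [0.90, 0.70, 0.62, 0.61, 0.63, 0.66, 0.70]
best, patience, bad = float("inf"), 2, 0
for epoch, val in enumerate(val_losses):
    if val < best:
        best, bad = val, 0
    else:
        bad += 1
        if bad >= patience:
            print(f"stop at epoch {epoch}, best val loss {best:.2f}")
            break
```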
Which is not generally true of the current state-of-the-art deep neural networks?
- They show human-like generalization to tasks for which they were not trained
State-of-the-art networks often meet or exceed human performance on the tasks for which they were trained, but they often do not generalize like humans (for example, they are sensitive to adversarial image attacks). Neural networks learn non-linear functions of their input data due to their non-linear activation functions. Current state-of-the-art networks need large datasets for training because of their very large number of parameters.
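One way to see the role of the activation function (a hedged PyTorch sketch, not part of the flashcards): stacking two Linear layers with no activation is still an affine map, so the linearity check below comes out at essentially zero, while inserting a ReLU between them breaks it.

```python
import torch
import torch.nn as nn

linear_only = nn.Sequential(nn.Linear(4, 8), nn.Linear(8, 1))
with_relu   = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

x, y = torch.randn(1, 4), torch.randn(1, 4)
# Affine check: f(x + y) - (f(x) + f(y) - f(0)) is ~0 only for the
# activation-free stack; the ReLU network generally violates it.
for name, f in [("linear_only", linear_only), ("with_relu", with_relu)]:
    gap = f(x + y) - (f(x) + f(y) - f(torch.zeros(1, 4)))
    print(name, gap.abs().item())
```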