Techniques Flashcards

1
Q

Visualizing activations of DL models

A

The filters of any layer in any model architecture can be visualized, but only the early layers tend to be interpretable with this technique. The last layer is most useful for a nearest-neighbor approach: visualizing an image's nearest neighbors in last-layer feature space shows which images the model treats as similar, which helps explain its predictions. The last-layer features can also be visualized with PCA and t-SNE (e.g. via TensorBoard embeddings).
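The PCA/t-SNE step can be sketched as follows. This is a minimal sketch: the feature matrix here is random data standing in for real last-layer activations (a real pipeline would extract them from the model, e.g. VGG's penultimate layer), and the scikit-learn calls approximate what TensorBoard's embedding projector does:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Stand-in for last-layer features of 100 images (assumption: in practice
# these would come from the model, e.g. a 512-d or 4096-d embedding).
features = rng.normal(size=(100, 512))

# PCA first to reduce dimensionality and noise before t-SNE.
reduced = PCA(n_components=50).fit_transform(features)

# t-SNE down to 2-D for plotting; nearby points are images the model
# considers similar in feature space.
embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(reduced)
print(embedding.shape)  # (100, 2)
```

Plotting `embedding` with each point colored by its label (or thumbnailed with its image, as TensorBoard does) then shows which classes the model clusters together.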

2
Q

Guided backpropagation

A

Visualizing filter weights directly can be uninformative. Instead, we reuse the backpropagation machinery to synthesize an input that activates the filters, which gives a better visualization. Because we pick which neurons' activations are propagated back, the technique is called guided backprop.

  1. Define the loss function as the activation of a particular layer to maximize, e.g. block5_conv1 of VGG, and run gradient ascent on the input image. It is important to smooth the updates by normalizing the pixel gradients. This loss typically converges quickly.
  2. Normalize the resulting image (e.g. rescale to [0, 1]) so it can be visualized.
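The two steps above can be sketched as follows. This is a minimal NumPy sketch, not the full method: the "filter activation" is a hypothetical linear map standing in for a real layer such as block5_conv1 (for a real network, the framework's autodiff would supply the gradient instead of the closed form used here); the gradient normalization and final rescaling follow the steps above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "filter": a linear map over the flattened image (assumption;
# a real layer is nonlinear, and autodiff would compute its gradient).
w = rng.normal(size=3 * 32 * 32)
img = rng.normal(size=3 * 32 * 32) * 0.1

def activation(x):
    return w @ x  # the quantity gradient ascent maximizes

# Step 1: gradient ascent with the pixel gradient normalized by its RMS,
# which keeps the step size stable regardless of gradient magnitude.
for _ in range(50):
    grad = w  # d(w @ x)/dx = w for this linear stand-in
    grad = grad / (np.sqrt((grad ** 2).mean()) + 1e-8)
    img = img + 0.1 * grad  # ascent, not descent

# Step 2: rescale the result to [0, 1] so it can be viewed as an image.
vis = (img - img.min()) / (img.max() - img.min() + 1e-8)
```

The RMS normalization is the "smoothing" the card refers to: without it, the raw gradient's scale varies wildly between layers and steps, and the synthesized image saturates or barely changes.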