lesson_7_flashcards
What is gradient-based visualization?
A method for understanding neural networks by computing gradients of the loss or activations with respect to the input, in order to visualize which input features matter most.
What are saliency maps?
Visualizations showing regions of an input image that most affect the loss or activations, derived from gradients.
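A minimal PyTorch sketch of a saliency map, assuming a pretrained classifier `model` and a preprocessed image tensor of shape (1, C, H, W); names are illustrative, not from the lesson:

```python
import torch

def saliency_map(model, image, target_class):
    """Gradient of the target class score with respect to the input pixels.
    Assumes `model` is a trained classifier and `image` is preprocessed."""
    model.eval()
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]  # scalar class score (logit)
    score.backward()                       # populates image.grad
    # Absolute gradient, collapsed over channels, gives a (H, W) saliency map.
    return image.grad.detach().abs().max(dim=1)[0].squeeze(0)
```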
What is guided backpropagation?
A visualization method that modifies backpropagation through ReLUs so that only positive gradients flow backward, highlighting input features that positively influence the target activation.
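One way to sketch this in PyTorch is with backward hooks on the ReLU modules; this assumes the model's ReLUs are separate `nn.ReLU` modules with `inplace=False`:

```python
import torch
import torch.nn as nn

def guided_backprop(model, image, target_class):
    """Guided backpropagation sketch: at every ReLU, only positive gradients
    are allowed to flow back (on top of the usual forward-activation mask)."""
    def relu_backward_hook(module, grad_input, grad_output):
        # Replace the gradient w.r.t. the ReLU input with its positive part.
        return (torch.clamp(grad_input[0], min=0.0),)

    handles = [m.register_full_backward_hook(relu_backward_hook)
               for m in model.modules() if isinstance(m, nn.ReLU)]

    image = image.clone().requires_grad_(True)
    model(image)[0, target_class].backward()
    for h in handles:
        h.remove()
    return image.grad.detach()
```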
What is Grad-CAM?
A visualization technique that highlights important regions of an input by weighting the feature maps of a convolutional layer (usually the last one) by their gradient-based importance, producing a coarse class-specific heatmap.
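A compact Grad-CAM sketch in PyTorch, assuming `conv_layer` is the model's last convolutional module; the hooks and names are illustrative:

```python
import torch
import torch.nn.functional as F

def grad_cam(model, conv_layer, image, target_class):
    """Weight the feature maps of `conv_layer` by the spatial average of their
    gradients, keep only positive contributions, and normalize to [0, 1]."""
    acts, grads = [], []
    h1 = conv_layer.register_forward_hook(lambda m, i, o: acts.append(o.detach()))
    h2 = conv_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))

    model.zero_grad()
    model(image)[0, target_class].backward()
    h1.remove(); h2.remove()

    A, dA = acts[0], grads[0]                    # (1, K, h, w)
    weights = dA.mean(dim=(2, 3), keepdim=True)  # per-channel importance
    cam = F.relu((weights * A).sum(dim=1))       # (1, h, w)
    return (cam / (cam.max() + 1e-8)).squeeze(0)
```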
What is style transfer?
A method to generate images combining the content of one image and the style of another by optimizing an image to minimize a loss that mixes content features and style features.
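A bare-bones optimization loop sketch, assuming an `extract_features` helper (e.g. a VGG-based feature extractor) and content/style loss functions passed in by the caller; all names and weights are illustrative:

```python
import torch

def run_style_transfer(extract_features, content_loss_fn, style_loss_fn,
                       content_img, style_img, num_steps=300,
                       style_weight=1e6, content_weight=1.0):
    """Start from the content image and update its pixels to minimize a
    weighted sum of content and style losses."""
    img = content_img.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([img], lr=0.02)

    with torch.no_grad():
        content_targets = extract_features(content_img)
        style_targets = extract_features(style_img)

    for _ in range(num_steps):
        optimizer.zero_grad()
        feats = extract_features(img)
        loss = (content_weight * content_loss_fn(feats, content_targets)
                + style_weight * style_loss_fn(feats, style_targets))
        loss.backward()
        optimizer.step()
    return img.detach()
```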
What is a Gram matrix in style transfer?
A matrix of correlations between feature-map channels within a layer, used to match textures between the style image and the generated image.
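A small sketch of the Gram matrix and a Gram-matching style loss, which could serve as the `style_loss_fn` in the loop sketched above; assumes feature maps of shape (1, C, H, W):

```python
import torch
import torch.nn.functional as F

def gram_matrix(features):
    """Channel-by-channel correlation matrix of one layer's feature maps:
    input (1, C, H, W), output (C, C), normalized by the number of entries."""
    _, c, h, w = features.shape
    f = features.view(c, h * w)
    return f @ f.t() / (c * h * w)

def style_loss_fn(feats, style_feats):
    """Mean squared difference between Gram matrices, summed over layers."""
    return sum(F.mse_loss(gram_matrix(f), gram_matrix(s))
               for f, s in zip(feats, style_feats))
```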
What are adversarial examples?
Inputs intentionally perturbed to mislead a model, causing it to make confident but incorrect predictions.
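A minimal sketch of one classic attack, the Fast Gradient Sign Method (FGSM), assuming inputs normalized to [0, 1] and integer class labels:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Perturb each pixel a small step in the direction that increases
    the loss for the true label."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()  # keep pixels in a valid range
```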
What is the texture bias in CNNs?
CNNs often rely more on texture than on shape for classification, unlike humans, and can be misled when texture and shape cues conflict (e.g., a cat-shaped image rendered with elephant-skin texture being labeled as an elephant).
What is robustness testing in neural networks?
Evaluating a network’s performance against input perturbations, noise, or adversarial attacks to understand its reliability.
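One simple robustness check is to sweep over increasing noise levels and track accuracy; this sketch assumes a PyTorch model, a DataLoader, and inputs in [0, 1]:

```python
import torch

@torch.no_grad()
def accuracy_under_noise(model, loader, sigmas=(0.0, 0.05, 0.1, 0.2)):
    """Evaluate accuracy while adding Gaussian noise of increasing strength;
    a sharp drop indicates brittleness to small perturbations."""
    model.eval()
    results = {}
    for sigma in sigmas:
        correct, total = 0, 0
        for x, y in loader:
            noisy = (x + sigma * torch.randn_like(x)).clamp(0, 1)
            correct += (model(noisy).argmax(dim=1) == y).sum().item()
            total += y.numel()
        results[sigma] = correct / total
    return results
```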
How are adversarial defenses implemented?
By augmenting training data with adversarial examples (adversarial training) or applying input transformations such as added noise or blurring.
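A sketch of one adversarial-training step that mixes clean and perturbed inputs, reusing the `fgsm_attack` sketch above; the 50/50 weighting is an assumption, not a prescription:

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One training step on a batch augmented with FGSM-perturbed copies."""
    x_adv = fgsm_attack(model, x, y, epsilon)   # generate perturbed inputs
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(x), y)
                  + F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```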
What does it mean to optimize an input image in a neural network?
Using gradients to modify the image itself, either to maximize a class score or to find inputs that strongly activate particular neurons.
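A gradient-ascent sketch for class visualization: start from noise and repeatedly nudge the pixels to raise the target class score. The step size, regularization, and input shape are illustrative assumptions:

```python
import torch

def visualize_class(model, target_class, steps=200, lr=1.0,
                    weight_decay=1e-4, shape=(1, 3, 224, 224)):
    """Optimize the input image to maximize one class score, with a small
    L2 penalty so the pixels stay bounded."""
    model.eval()
    img = torch.randn(shape, requires_grad=True)
    for _ in range(steps):
        score = model(img)[0, target_class]
        loss = -score + weight_decay * img.pow(2).sum()  # maximize the score
        loss.backward()
        with torch.no_grad():
            img -= lr * img.grad
            img.grad.zero_()
    return img.detach()
```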
What is the role of layer-wise visualization in CNNs?
It reveals features learned at different depths, from edges in early layers to object parts and entire objects in deeper layers.
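To inspect features at different depths, one common approach is to collect activations with forward hooks; this sketch assumes PyTorch and that `layer_names` match entries in `model.named_modules()`:

```python
import torch

def capture_activations(model, image, layer_names):
    """Register forward hooks on named layers and collect their feature maps
    for one input, so early and deep representations can be compared."""
    activations, handles = {}, []
    modules = dict(model.named_modules())
    for name in layer_names:
        def hook(module, inp, out, name=name):
            activations[name] = out.detach()
        handles.append(modules[name].register_forward_hook(hook))
    with torch.no_grad():
        model(image)
    for h in handles:
        h.remove()
    return activations
```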
What is the significance of redundancy in convolutional kernels?
Redundancy across learned kernels can make feature extraction more robust, but it can also indicate inefficiency or overparameterization in training.
What are key applications of visualization in neural networks?
Debugging networks, understanding biases, and gaining insights into learned representations for interpretability.
What is the limitation of neural network visualizations?
They often rely on subjective interpretations and may not comprehensively represent the model’s behavior or distributed representations.