lesson_6_flashcards

1
Q

What is a convolutional neural network (CNN)?

A

A neural network architecture that uses convolutional layers to extract spatial features, alternating with pooling layers for downsampling.
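
A minimal sketch of this layout in PyTorch (layer sizes and the 10-class head are illustrative assumptions):

import torch.nn as nn

# Convolution -> ReLU -> pooling, repeated, then a small classifier head.
tiny_cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # spatial feature extraction
    nn.ReLU(),
    nn.MaxPool2d(2),                              # downsampling
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                    # assumes 32x32 inputs, 10 classes
)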

2
Q

What is a receptive field in CNNs?

A

The region of the input image that influences a particular activation in deeper layers, growing with network depth.
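
A rough sketch of how the receptive field grows with depth (the kernel/stride values are illustrative assumptions):

# Standard recurrence: each layer adds (kernel - 1) * current jump to the
# receptive field, and strided layers multiply the jump.
def receptive_field(layers):
    r, j = 1, 1                       # receptive field and jump at the input
    for kernel, stride in layers:
        r += (kernel - 1) * j
        j *= stride
    return r

# e.g. two 3x3 convs, a 2x2 stride-2 pool, then another 3x3 conv
print(receptive_field([(3, 1), (3, 1), (2, 2), (3, 1)]))   # -> 10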

3
Q

What is transfer learning in deep learning?

A

A method to reuse features learned from large datasets like ImageNet for new tasks, reducing the need for large labeled datasets.
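
A minimal sketch of the idea, assuming a torchvision ResNet-18 pretrained on ImageNet and a 10-class target task (recent torchvision versions use the weights= argument):

import torch.nn as nn
from torchvision import models

# Reuse ImageNet features, then swap the classifier head for the new task.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 10)   # 10 target classes (assumed)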

4
Q

What are advanced convolutional network architectures?

A

Architectures like AlexNet, VGG, Inception, and ResNet that introduce innovations like small filters, residual connections, and modular designs for scalability.

5
Q

What is the role of skip connections in ResNet?

A

They allow gradients to bypass layers, improving gradient flow and enabling the training of very deep networks.
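
A minimal PyTorch sketch of a residual block (channel count and layer choices are illustrative):

import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        return self.relu(out + x)   # skip connection: gradients also flow through the identity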

6
Q

What are the benefits of small convolution filters (e.g., 3x3) over large ones?

A

They use fewer parameters and less computation, and when stacked they cover the same receptive field as a single larger filter.
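
A quick count for the example in the question (ignoring biases; the channel count C is an assumption):

C = 64                            # example channel count
one_5x5 = 5 * 5 * C * C           # a single 5x5 layer
two_3x3 = 2 * (3 * 3 * C * C)     # two stacked 3x3 layers, same 5x5 receptive field
print(one_5x5, two_3x3)           # 102400 vs 73728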

7
Q

What is the backward pass in a convolution layer?

A

The step of backpropagation that computes the gradients of the loss with respect to the layer's weights and its input; via the chain rule, both gradients can themselves be written as convolution/cross-correlation operations.
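
A minimal 1-D NumPy sketch of these gradients, assuming the cross-correlation forward convention from card 10 (shapes are illustrative):

import numpy as np

x = np.random.randn(8)                         # input
w = np.random.randn(3)                         # kernel
y = np.correlate(x, w, mode='valid')           # forward: valid cross-correlation, length 6

g = np.random.randn(y.size)                    # upstream gradient dL/dy

# Chain rule: dL/dw is the cross-correlation of the input with the upstream gradient.
dw = np.correlate(x, g, mode='valid')
# dL/dx is the "full" convolution of the upstream gradient with the kernel.
dx = np.convolve(g, w, mode='full')            # same length as x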

8
Q

What is the role of data augmentation in CNN training?

A

It enhances generalization by artificially expanding the training set with transformations such as rotations, flips, and added noise.
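
A typical sketch with torchvision transforms (the particular transforms and magnitudes are assumptions):

from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),               # small random rotations
    transforms.ColorJitter(brightness=0.2),      # mild photometric perturbation
    transforms.ToTensor(),
])
# Pass `augment` as the dataset's transform so each epoch sees varied samples.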

9
Q

What is the main limitation of transfer learning?

A

It performs poorly when the target task differs significantly from the source task, for example transferring from natural images to sketches.

10
Q

What is the difference between cross-correlation and convolution in forward passes?

A

Cross-correlation does not flip the kernel, while convolution does; deep learning often uses cross-correlation for simplicity.
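
A quick 1-D NumPy check of the relationship (values are arbitrary): convolution equals cross-correlation with a flipped kernel.

import numpy as np

x = np.array([1., 2., 3., 4., 5.])
w = np.array([1., 0., -1.])

conv = np.convolve(x, w, mode='valid')            # convolution flips the kernel
xcorr = np.correlate(x, w[::-1], mode='valid')    # flip manually, then cross-correlate
print(np.allclose(conv, xcorr))                   # True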

11
Q

What are modular designs in advanced CNN architectures?

A

Repeated patterns of layers, such as 3x3 convolutions in VGG or parallel filters in Inception, to increase depth and scalability.
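
A sketch of the repeated VGG-style pattern in PyTorch (channel counts are illustrative assumptions):

import torch.nn as nn

def vgg_block(in_ch, out_ch):
    # The same pattern is repeated with growing channel counts to build depth.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
    )

features = nn.Sequential(vgg_block(3, 64), vgg_block(64, 128), vgg_block(128, 256))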

12
Q

What is the importance of initialization in CNNs?

A

Proper initialization ensures stable gradient flow and faster convergence during training.
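
A common sketch in PyTorch, assuming He (Kaiming) initialization for ReLU layers (the card does not name a specific scheme):

import torch.nn as nn

def init_weights(module):
    # Scale weights so activation variance stays roughly constant across layers.
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        nn.init.kaiming_normal_(module.weight, nonlinearity='relu')
        if module.bias is not None:
            nn.init.zeros_(module.bias)

# model.apply(init_weights)  # apply to every submodule of an existing model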

13
Q

How does transfer learning reduce computation?

A

By freezing the pre-trained convolutional layers and updating only the final layers, it reduces the number of parameters that must be trained on the small target dataset.
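
A minimal sketch of the freezing step, again assuming a pretrained torchvision ResNet-18 and a 10-class target task:

import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False                   # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 10)    # new head trains from scratch

# Only the unfrozen head parameters are handed to the optimizer.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3)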

14
Q

What is semi-supervised learning in the context of CNNs?

A

A learning paradigm where models are trained on a small labeled dataset alongside a larger unlabeled dataset.
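
The card does not name a method; one common instance is pseudo-labeling, sketched here assuming an already-trained model and a confidence threshold:

import torch

def pseudo_label(model, unlabeled_batch, threshold=0.95):
    # Use the model's confident predictions on unlabeled data as extra training targets.
    with torch.no_grad():
        probs = torch.softmax(model(unlabeled_batch), dim=1)
    conf, labels = probs.max(dim=1)
    mask = conf >= threshold
    return unlabeled_batch[mask], labels[mask]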

15
Q

How do residual connections improve optimization?

A

By enabling identity mappings, they prevent gradient degradation and allow efficient training of very deep networks.
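
A short note on why the identity path matters (standard notation, not from the card): for a residual block

y = F(x) + x
∂L/∂x = ∂L/∂y · (I + ∂F/∂x)

so the upstream gradient always reaches earlier layers through the identity term, even when ∂F/∂x is small.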
