Computer Vision II Flashcards

1
Q

Where is pooling located?

A

In the architecture block

2
Q

What’s pooling?

A

It’s a way to reduce the feature map size: instead of applying a kernel with learnable weights, the average or maximum of the values at the kernel position is taken.

3
Q

What are the 2 pooling layer types?

A

Two pooling layer types: either specify the kernel size (classical pooling) or the desired output size (adaptive pooling).
No learnable weights are involved, and thus the kernel size can be adapted per batch.
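
A minimal PyTorch sketch of both types (the tensor shapes are illustrative assumptions):

import torch
import torch.nn as nn

x = torch.randn(1, 64, 16, 16)           # (batch, channels, height, width)

# Classical pooling: the kernel size (and stride) is specified
classical = nn.MaxPool2d(kernel_size=2, stride=2)
print(classical(x).shape)                # torch.Size([1, 64, 8, 8])

# Adaptive pooling: the desired output size is specified instead
adaptive = nn.AdaptiveAvgPool2d(output_size=1)
print(adaptive(x).shape)                 # torch.Size([1, 64, 1, 1])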

4
Q

What are Bottleneck layers?

A

Convolutions with a kernel size of 1 × 1 are cheaper than those with larger kernel sizes. Bottleneck layers make use of this by first reducing the number of channels, then applying the computationally more expensive convolution, and finally increasing the number of channels to the original size again.
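
A minimal PyTorch sketch of this pattern (the channel numbers are illustrative assumptions):

import torch.nn as nn

bottleneck = nn.Sequential(
    nn.Conv2d(256, 64, kernel_size=1),             # 1x1: reduce channels (256 -> 64)
    nn.Conv2d(64, 64, kernel_size=3, padding=1),   # 3x3: the expensive convolution on fewer channels
    nn.Conv2d(64, 256, kernel_size=1),             # 1x1: expand back to the original 256 channels
)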

5
Q

What’s momentum?

A

In the loss landscape, small bumps are local minima that may be far away from the optimum. Momentum is therefore desired for the weight optimization steps as well: part of the previous update direction is kept, so the optimizer can roll over such bumps instead of getting stuck.

6
Q

Does 1cycle training schedule the momentum parameter as well?

A

Yes
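
For example, PyTorch's OneCycleLR can cycle the momentum inversely to the learning rate; a minimal sketch (the model, step count, and momentum range are illustrative assumptions):

import torch
from torch.optim.lr_scheduler import OneCycleLR

model = torch.nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
sched = OneCycleLR(opt, max_lr=0.1, total_steps=100,
                   cycle_momentum=True,        # schedule momentum as well
                   base_momentum=0.85, max_momentum=0.95)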

7
Q

What’s the formula for momentum?

A

m_t = β · m_{t-1} + (1 − β) · g_t
θ_t = θ_{t-1} − γ · m_t

g_t = gradient at step t
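
The same update written as a small, self-contained code sketch (the toy loss and hyperparameter values are assumptions):

import torch

beta, gamma = 0.9, 0.01
theta = torch.randn(5)                 # parameters
m = torch.zeros_like(theta)            # momentum buffer, m_0 = 0

for _ in range(100):
    g = 2 * theta                      # gradient of the toy loss sum(theta**2)
    m = beta * m + (1 - beta) * g      # m_t = beta * m_{t-1} + (1 - beta) * g_t
    theta = theta - gamma * m          # theta_t = theta_{t-1} - gamma * m_t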

8
Q

What’s RMSProp? What’s the formula?

A

Root Mean Square Propagation. It's an optimizer that adapts the learning rate per weight.
v_t = α · v_{t-1} + (1 − α) · g_t²

θ_t = θ_{t-1} − γ · g_t / √(v_t + ε)
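
The analogous code sketch for RMSProp (toy loss and hyperparameter values are assumptions):

import torch

alpha, gamma, eps = 0.99, 0.01, 1e-8
theta = torch.randn(5)
v = torch.zeros_like(theta)

for _ in range(100):
    g = 2 * theta                                   # gradient of the toy loss sum(theta**2)
    v = alpha * v + (1 - alpha) * g ** 2            # v_t = alpha * v_{t-1} + (1 - alpha) * g_t^2
    theta = theta - gamma * g / (v + eps).sqrt()    # theta_t = theta_{t-1} - gamma * g_t / sqrt(v_t + eps)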

9
Q

What’s ADAM?

A

Adaptive Moment Estimation.
It combines the ideas of momentum and RMSProp in one algorithm.

10
Q

Which of the following statements is true about ResNets?
1. In the original ResNet the input to an operation is skipped forward and concatenated with the output for every network operation.
2. More layers in a network make the model more powerful and therefore always lead to improved performance.
3. In the two loss landscapes shown on slide 16, it is visible that ResNets cause steeper edges that make the optimizer “roll down the hill” faster.
4. Pooling is not allowed in a ResNet-Block as the size of the resulting feature map would not match the size of the input feature map.
5. If the number of input channels does not equal the desired number of output channels a true identity path is never possible.

A

5

11
Q

Which of the following statements is true about ResNets? (Multiple Choice)
1. A convolution with a kernel size of 1 x 1 would not make sense in the stem of a state-of-the-art ResNet, as these convolutions do not reduce the size of the feature map.
2. The reduction in number of operations from a 9 x 9 kernel to a 3 x 3 kernel is proportionally the same as from a 3 x 3 kernel to a 1 x 1 kernel if one disregards the bias related computations.
3. One reason for the effectiveness of ResNets is that the input is kept close to the network output via the skip connections.
4. Bottleneck layers do not necessarily have fewer kernels than plain ResNet layers.
5. The plus sign in the fifth line of the code on slide 18 is not the typical ResNet addition of a skip connection to the output.

A

All of them

12
Q

Which of the following statements is true? (Multiple Choice)
1. If the value of a particular parameter has changed a lot in the last few updates, this means that active learning is taking place and the learning rate is most likely in a favourable range for that parameter.
2. With momentum the optimizer cannot get stuck in local optima.
3. When preprocessing the dataset, the image size for batch_tfms must be less than or equal to the image size for item_tfms.
4. The momentum parameter is a hyperparameter and its optimal value can be different for every task similar to the learning rate.
5. At the beginning of the training (due to the high stochasticity of the initialisation) and at the end of the training (to overcome the last bumps) a high momentum is desired.

A

3,4,5

13
Q

What is the primary purpose of using skip connections in ResNet architectures?

A) To increase the model’s depth
B) To reduce the model’s computational cost
C) To bring the input closer to the output and smooth the loss function
D) To perform data augmentation

A

Answer: C) To bring the input closer to the output and smooth the loss function
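
A minimal residual block sketch illustrating this idea (layer sizes and structure are simplified assumptions, not the exact original ResNet block):

import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.conv2(self.relu(self.conv1(x)))
        return self.relu(out + x)   # skip connection: the input is added to the output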

14
Q

Which optimizer is known for adapting the learning rate for each parameter individually?

A) SGD
B) Adam
C) RMSprop
D) Adagrad

A

Answer: B) Adam

15
Q

Given a convolutional layer output of size 16x16, apply a 2x2 max-pooling layer with a stride of 2. What is the resulting output size?

A

8x8
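
The general formula without padding is output = ⌊(input − kernel) / stride⌋ + 1 = ⌊(16 − 2) / 2⌋ + 1 = 8.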

16
Q

A ResNet block has an input dimension of 64 channels and an output dimension of 256 channels with a bottleneck layer of dimension 64. Calculate the number of parameters in the bottleneck layer (assume kernel size 1x1).

A

The number of parameters in the bottleneck layer:

Parameters = (1 × 1 × 64 × 256) + 256 = 16384 + 256 = 16640
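
A quick way to verify this count in PyTorch:

import torch.nn as nn

conv = nn.Conv2d(64, 256, kernel_size=1)    # 1x1 convolution, 64 -> 256 channels, with bias
n_params = sum(p.numel() for p in conv.parameters())
print(n_params)                             # 16640 = 64*256 weights + 256 biases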

17
Q

Describe the concept of adaptive pooling and its advantage in building fully convolutional networks.

A

Answer:
Adaptive pooling adjusts the output size of the pooling layer to a specified dimension, regardless of the input size. This allows for creating fully convolutional networks that can process images of varying sizes while producing a fixed-size output, which is useful for tasks requiring a specific output dimension, such as image classification.
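
A small PyTorch sketch of this behaviour (feature-map sizes are arbitrary example values):

import torch
import torch.nn as nn

pool = nn.AdaptiveAvgPool2d(output_size=(1, 1))

for size in [(7, 7), (5, 10)]:              # feature maps from differently sized input images
    x = torch.randn(1, 512, size[0], size[1])
    print(pool(x).shape)                    # always torch.Size([1, 512, 1, 1])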

18
Q

Explain how bottleneck layers and simpler stem layers can reduce computational costs in deep neural networks like ResNet.

A

Answer:
Bottleneck layers reduce the number of parameters and computations by using a smaller number of channels for intermediate layers before expanding to a larger number of channels. This compression reduces the computational load without significantly affecting the network’s capacity. Simpler stem layers, which are the initial layers of the network, also help by using fewer operations to process the input, thereby speeding up computations.
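
A rough parameter-count comparison illustrating the savings (channel numbers are illustrative assumptions):

import torch.nn as nn

def n_params(m):
    return sum(p.numel() for p in m.parameters())

plain = nn.Conv2d(256, 256, kernel_size=3, padding=1)       # 3x3 directly on 256 channels

bottleneck = nn.Sequential(
    nn.Conv2d(256, 64, kernel_size=1),                       # reduce
    nn.Conv2d(64, 64, kernel_size=3, padding=1),             # expensive conv on fewer channels
    nn.Conv2d(64, 256, kernel_size=1),                        # expand
)

print(n_params(plain), n_params(bottleneck))   # ~590k vs ~70k parameters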

19
Q

If a neural network using SGD with momentum shows signs of getting stuck in local minima, how might the use of momentum help?

A

Answer:
Momentum helps by incorporating a fraction of the previous update’s direction and magnitude into the current update. This allows the network to maintain its direction and potentially escape local minima by smoothing out the oscillations and speeding up convergence towards the global minimum.

20
Q

A custom-built ResNet achieves 87% accuracy on the Imagenette dataset. What steps might you take to further improve this accuracy?

A

Answer:

Fine-tuning the learning rate using a learning rate finder.
Adding more data augmentation techniques to improve generalization.
Increasing the model depth while carefully adding more skip connections.
Using advanced optimization techniques like Adam or learning rate schedulers.
Experimenting with different batch sizes and number of epochs to find optimal training settings.
Applying regularization techniques such as dropout or weight decay to prevent overfitting.
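
A hedged fastai-style sketch of a few of the steps above (the dataset choice, image sizes, and hyperparameter values are assumptions, not prescribed settings):

from fastai.vision.all import *

path = untar_data(URLs.IMAGENETTE_160)
dls = ImageDataLoaders.from_folder(path, valid='val',
                                   item_tfms=Resize(224),
                                   batch_tfms=aug_transforms())   # extra data augmentation
learn = vision_learner(dls, resnet34, metrics=accuracy)
learn.lr_find()                           # learning rate finder
learn.fit_one_cycle(10, lr_max=3e-3,      # 1cycle schedule with a tuned learning rate
                    wd=0.01)              # weight decay as regularization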

21
Q

Write the formulation for Adam

A

Check notes
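
For reference, the standard Adam update, written with the same symbols as the momentum and RMSProp cards above (textbook form with bias correction, not copied from the notes):

m_t = β₁ · m_{t-1} + (1 − β₁) · g_t
v_t = β₂ · v_{t-1} + (1 − β₂) · g_t²
m̂_t = m_t / (1 − β₁^t),  v̂_t = v_t / (1 − β₂^t)
θ_t = θ_{t-1} − γ · m̂_t / (√(v̂_t) + ε)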