Lesson 4 - CONV: relevant architectures & components Flashcards

1
Q

What is data augmentation?

A

Apply a set of operations to a given data sample to produce additional training samples

2
Q

Why do we want to do data augmentation?

A

To make the model more general and more robust to changes in the input

–> increase training data
–> introduce variability

3
Q

What operations can we do when doing data augmentation?

A

Mirroring, cropping, rotation, color shifting,…
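
A minimal sketch of such an augmentation pipeline, assuming torchvision is available and `img` is a PIL image; the specific parameter values are arbitrary examples:

```python
import torchvision.transforms as T

# One random instance of each listed operation; every call to `augment`
# draws new random parameters, so a single image yields many new samples.
augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),      # mirroring
    T.RandomResizedCrop(size=224),      # cropping (resized back to 224x224)
    T.RandomRotation(degrees=15),       # rotation
    T.ColorJitter(brightness=0.2, contrast=0.2,
                  saturation=0.2, hue=0.05),  # color shifting
    T.ToTensor(),
])

# augmented_samples = [augment(img) for _ in range(8)]
```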

4
Q

When doing data augmentation, can I apply any operation I like, completely at random?

A

No, you should still ensure that your “new” image preserves the content (and label) of the old one. For example, if you have a picture of a tree and you crop it, you must be sure not to crop out the tree

5
Q

What is dropout? What are its benefits?

A

With dropout, we deactivate each neuron with a given probability during training

Benefits:
–> Avoid overfitting
–> Promote ensemble learning
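
A minimal sketch of dropout in PyTorch; the layer size and dropout probability are arbitrary illustration values:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

drop = nn.Dropout(p=0.5)   # each activation is zeroed with probability 0.5
x = torch.ones(1, 8)       # toy activations

drop.train()               # dropout is only active in training mode
print(drop(x))             # roughly half the entries are 0, survivors scaled by 1/(1-p)

drop.eval()                # at test time dropout is a no-op
print(drop(x))             # all ones again
```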

6
Q

What does it mean when you deactivate a neuron?

A

You set the neuron's output to 0.
Setting the output to 0 stops that neuron's information from being propagated further
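
A tiny illustration (toy numbers, purely for this card): once an activation is set to 0, it contributes nothing to the next layer's weighted sum.

```python
import torch

w = torch.tensor([0.3, -1.2, 0.8])    # next-layer weights
x = torch.tensor([1.0,  2.0, 3.0])    # neuron outputs
mask = torch.tensor([1.0, 0.0, 1.0])  # "deactivate" the second neuron

print(torch.dot(w, x))          # all neurons contribute
print(torch.dot(w, mask * x))   # the zeroed neuron no longer influences the result
```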

7
Q

How does dropout help to avoid overfitting the model?

A

By cancelling out some features, you force the model to learn based on the remaining features

8
Q

Why does dropout promote ensemble learning?

A

–> by forcing the model to rely on different features, it learns to combine multiple features; that is where the “ensemble” idea comes from

–> also, each dropout mask leaves a different surviving sub-network, so training effectively trains many sub-networks at once

9
Q

Considering relevant architectures, what is the Neocognitron (1982)?

A
  • goal: recognition of position-shifted / shape-distorted patterns
  • proposed the cell-plane arrangement (convolution)
  • hierarchical structure
  • convolution/sub-sampling combination
10
Q

Considering relevant architectures, what is LeNet-5 (1998)?

A
  • 7 layers: 3 conv, 2 subsampling, 2 FC
  • addressed handwritten digit recognition task
  • the MNIST dataset was proposed along with it
  • one of the first uses of ConvNets trained with backprop
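
A sketch of this layer arrangement in PyTorch, following the conv / subsampling / FC layout above (simplified: average pooling stands in for the original trainable subsampling, and tanh activations are assumed):

```python
import torch
import torch.nn as nn

lenet5 = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(),     # C1: 32x32 -> 28x28
    nn.AvgPool2d(2),                               # S2: 28x28 -> 14x14
    nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(),    # C3: 14x14 -> 10x10
    nn.AvgPool2d(2),                               # S4: 10x10 -> 5x5
    nn.Conv2d(16, 120, kernel_size=5), nn.Tanh(),  # C5: 5x5 -> 1x1
    nn.Flatten(),
    nn.Linear(120, 84), nn.Tanh(),                 # F6
    nn.Linear(84, 10),                             # output: 10 digit classes
)

x = torch.randn(1, 1, 32, 32)   # MNIST digit padded to 32x32
print(lenet5(x).shape)          # torch.Size([1, 10])
```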
11
Q

Considering relevant architectures, what is AlexNet (2012)?

A
  • 5 conv layers + 3 fc layers
  • trained across 2 GPUs (model parallelism)
  • 60M param., 650K neurons
  • No need to pair convolutional with pooling layers
  • ReLU for convolutional layers
  • data augmentation and dropout
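
A quick way to see the 5-conv + 3-FC layout and check the parameter count, using torchvision's single-GPU AlexNet variant (assumes a recent torchvision where the `weights` argument exists):

```python
import torchvision

model = torchvision.models.alexnet(weights=None)   # 5 conv layers + 3 FC layers
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")         # on the order of 60M
```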
12
Q

What were the enablers of AlexNet?

A
  • scientific community
  • hardware developments
  • open-access datasets
13
Q

In 2014, we went very deep with VGG-net. What do you know about it? How is it different?

A
  • fixed-size 3x3 kernels
  • “same” convolutions to preserve spatial resolution
  • trained by splitting data across 4 copies of the same model
    –> data parallelism
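
A sketch of a VGG-style block: stacked fixed-size 3x3 kernels with padding 1 (“same” convolutions) keep the spatial resolution, and only the pooling layer halves it; the channel counts are arbitrary examples:

```python
import torch
import torch.nn as nn

block = nn.Sequential(
    nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(128, 128, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),
)

x = torch.randn(1, 64, 56, 56)
print(block(x).shape)   # torch.Size([1, 128, 28, 28]) -- halved only by the pooling
```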
14
Q

VGG-net used stacked kernels and same convolutions. What are the benefits of that?

A
  • smaller kernels = fewer parameters to estimate
  • larger receptive field with fewer parameters (two stacked 3x3 convs cover a 5x5 receptive field)
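
A quick check of the parameter argument (ignoring biases, C input and C output channels; C = 64 is an arbitrary example):

```python
C = 64
one_5x5 = 5 * 5 * C * C          # single 5x5 conv
two_3x3 = 2 * (3 * 3 * C * C)    # two stacked 3x3 convs, same 5x5 receptive field
print(one_5x5, two_3x3)          # 102400 vs 73728 -> ~28% fewer parameters
```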
15
Q

In 2014, GoogLeNet went even deeper.

A
  • branching architecture
  • aggregates the outputs of the different branches [Inception modules]
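
A minimal sketch of the branching idea behind an Inception module: parallel branches whose outputs are concatenated along the channel dimension (the branch widths are arbitrary here, not GoogLeNet's actual numbers, and the 1x1 reduction convolutions of the real module are omitted):

```python
import torch
import torch.nn as nn

class TinyInception(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 16, kernel_size=1)               # 1x1 branch
        self.b2 = nn.Conv2d(in_ch, 16, kernel_size=3, padding=1)    # 3x3 branch
        self.b3 = nn.Conv2d(in_ch, 16, kernel_size=5, padding=2)    # 5x5 branch
        self.b4 = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)  # pooling branch

    def forward(self, x):
        # aggregate the branch outputs by concatenating along the channels
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)

x = torch.randn(1, 32, 28, 28)
print(TinyInception(32)(x).shape)   # torch.Size([1, 80, 28, 28]) = 16+16+16+32 channels
```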
16
Q

What can you tell about ResNet (2015)?

A
  • provided a skip mechanism to assist the backpropagation of gradients
  • enables going deeper (18, 34, …, 152 layers!)
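
A minimal sketch of the skip mechanism: the block computes F(x) and adds the input back, so gradients can flow directly through the identity path (BatchNorm and the strided/projection variants of the real blocks are omitted):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.conv2(self.relu(self.conv1(x)))
        return self.relu(out + x)   # skip connection: "+ x" gives gradients a direct path

x = torch.randn(1, 64, 32, 32)
print(ResidualBlock(64)(x).shape)   # torch.Size([1, 64, 32, 32])
```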