Convolutional Neural Networks Flashcards

1
Q

A convolutional layer consists of filters, what do these filters do?

A

Each convolutional layer is made up of a set of filters (kernels); each filter slides across the input and extracts a particular kind of feature, such as edges or textures. The output of a convolutional layer is a feature map (also called an activation map), one per filter.
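As a minimal sketch of what one filter does, here is a hand-written vertical-edge kernel applied to a tiny hypothetical image with NumPy (the image, kernel, and `convolve2d` helper are all illustrative, not from any particular library):

```python
import numpy as np

# A hypothetical 5x5 grayscale "image" with a vertical edge down the middle.
image = np.array([
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
], dtype=float)

# A simple vertical-edge filter (Sobel-like, hand-written for illustration).
kernel = np.array([
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
], dtype=float)

def convolve2d(img, k):
    """Valid cross-correlation of img with kernel k (no padding, stride 1)."""
    kh, kw = k.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

feature_map = convolve2d(image, kernel)
print(feature_map)  # large values where the edge is, zeros elsewhere
```

The resulting feature map responds strongly (value 3) at the edge and is zero in the flat regions, which is exactly the "extracts a set of features like edges" behaviour described above.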

2
Q

Why do many convolutional layers end with ReLU activation functions?

A

The purpose of activation functions is mainly to add non-linearity to the network, which would otherwise be a purely linear model. A convolutional layer by itself is linear, exactly like a fully connected layer.

ReLU works well because its gradient is trivial (1 for positive inputs, 0 for negative ones), which makes backpropagation cheap and helps avoid vanishing gradients.
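A quick sketch of ReLU and its derivative in NumPy (illustrative helper functions, not from a specific framework), showing why backpropagating through it is so simple:

```python
import numpy as np

def relu(x):
    # ReLU: max(0, x), applied elementwise.
    return np.maximum(0, x)

def relu_grad(x):
    # Derivative of ReLU: 1 where x > 0, else 0 - trivial to backpropagate.
    return (x > 0).astype(float)

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(x))       # [0.  0.  0.  1.5 3. ]
print(relu_grad(x))  # [0. 0. 0. 1. 1.]
```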

3
Q

What is the advantage of multiple convolutional layers?

A

Convolutional layers detect patterns in images through their filters. Stacking multiple conv layers lets later layers detect patterns in the patterns found by earlier layers, building up from edges to textures to object parts.

4
Q

What does a max pooling layer do?

A

A max pooling layer takes in a feature map (often the output of a convolutional layer) and outputs a spatially downsampled version of it. Note that pooling itself does not add depth; the depth comes from the number of filters in the preceding convolutional layers, and pooling leaves it unchanged.

It looks at each window of the input, keeps only the maximum value, then moves on by the defined stride, reducing the spatial dimensions that way.
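A sketch of 2x2 max pooling with stride 2 in NumPy (the `max_pool2d` helper and the example feature map are illustrative):

```python
import numpy as np

def max_pool2d(fmap, size=2, stride=2):
    """Keep the max of each size x size window, moving by `stride`."""
    h, w = fmap.shape
    oh, ow = (h - size) // stride + 1, (w - size) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            window = fmap[i * stride:i * stride + size,
                          j * stride:j * stride + size]
            out[i, j] = window.max()
    return out

fmap = np.array([
    [1, 3, 2, 0],
    [4, 6, 1, 2],
    [0, 1, 5, 7],
    [2, 3, 8, 6],
], dtype=float)

pooled = max_pool2d(fmap)
print(pooled)  # [[6. 2.] [3. 8.]] - each 2x2 window reduced to its max
```

Each spatial dimension is halved (4x4 in, 2x2 out) while every value kept is the strongest response in its window.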

5
Q

What does a fully connected layer do?

A

The job of the fully connected layer is to compress the information of its input into a feature vector with one entry per class.

100 classes → 1x100 vector
10 classes → 1x10 vector
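A fully connected layer is just a matrix multiply plus a bias. A minimal sketch, assuming a flattened 128-dimensional input and 10 classes (all sizes and the random weights are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

num_classes = 10
flattened = rng.standard_normal(128)          # flattened conv features (made up)
W = rng.standard_normal((num_classes, 128))   # weight matrix (random, illustrative)
b = np.zeros(num_classes)                     # bias, one per class

# The fully connected layer compresses 128 features into 10 class scores:
scores = W @ flattened + b
print(scores.shape)  # (10,) - one score per class
```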
6
Q

What are dropout layers?

A

Dropout layers essentially turn off certain nodes in a layer with some probability, p. This ensures that all nodes get an equal chance to try and classify different images during training, and it reduces the likelihood that only a few, heavily-weighted nodes will dominate the process.
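A sketch of (inverted) dropout in NumPy - the `dropout` helper is illustrative, and the 1/(1-p) scaling is the common convention that keeps the expected activation unchanged during training:

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout(activations, p=0.5, training=True):
    """Zero each node with probability p during training; scale survivors
    by 1/(1-p) so the expected activation stays the same."""
    if not training:
        return activations
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

a = np.ones(10)
dropped = dropout(a, p=0.5)
# Roughly half the entries are zeroed; the survivors are scaled to 2.0.
```

At inference time (`training=False`) the layer passes activations through untouched.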

7
Q

Why do you end many CNNs with Softmax functions?

A

Softmax turns the feature vector received from an FC layer into a probability distribution over the classes. The most likely class for the given image is then simply the entry with the highest probability.
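A sketch of softmax in NumPy (the example scores are made up; subtracting the max before exponentiating is a standard numerical-stability trick):

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability, then normalize exponentials.
    e = np.exp(z - np.max(z))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])  # hypothetical FC-layer output, 3 classes
probs = softmax(scores)
print(probs)             # non-negative, sums to 1.0
print(np.argmax(probs))  # 0 - the most likely class
```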

8
Q

When is cross entropy a good choice for a loss function?

A

When you are working with classification tasks; it measures how far the predicted probability distribution (e.g. a softmax output) is from the true class label.
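A sketch of cross-entropy for a single example with hard labels (the probability vectors are made up): the loss is the negative log of the probability assigned to the correct class, so confident correct predictions are cheap and confident wrong ones are expensive.

```python
import numpy as np

def cross_entropy(probs, true_class):
    # Negative log-probability assigned to the correct class.
    return -np.log(probs[true_class])

confident = np.array([0.9, 0.05, 0.05])  # confident, correct (class 0)
unsure = np.array([0.4, 0.3, 0.3])       # uncertain prediction

print(cross_entropy(confident, 0))  # ~0.105 - low loss
print(cross_entropy(unsure, 0))     # ~0.916 - higher loss
```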

9
Q

Why do you use zero-padding?

A

Without padding, every convolution shrinks the spatial dimensions of the feature maps and pixels near the borders contribute to fewer outputs, so a deep network quickly runs out of resolution. Zero-padding (with a suitable width) keeps the output the same size as the input, letting you stack many layers without losing border information.
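A quick check with NumPy's `np.pad`, assuming a 28x28 input and a 3x3 kernel: padding by 1 pixel keeps the convolution output the same size as the input.

```python
import numpy as np

image = np.ones((28, 28))  # hypothetical 28x28 input

# Pad with a 1-pixel border of zeros:
padded = np.pad(image, pad_width=1, mode="constant", constant_values=0)
print(padded.shape)  # (30, 30)

# Output size of a stride-1 convolution: n + 2p - k + 1
n, k, p = 28, 3, 1
print(n + 2 * p - k + 1)  # 28 - same as the input
```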

10
Q

What’s a good rule of thumb for learning rate decay?

A

decay = alpha_init / epochs
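A sketch of how this rule of thumb might be used in a 1/t-style schedule (the initial learning rate and epoch count are assumed values, and the schedule shown is one common choice, not the only one):

```python
alpha_init = 0.1   # initial learning rate (assumed value)
epochs = 100       # planned training length (assumed value)

# Rule of thumb: spread the decay over the whole run.
decay = alpha_init / epochs  # 0.001

# One common way to apply it (1/t decay):
for epoch in range(3):
    alpha = alpha_init / (1 + decay * epoch)
    print(epoch, alpha)  # the learning rate shrinks slowly each epoch
```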
