ANN Lecture 5 - Convolutional Neural Networks Flashcards

1
Q

What is wrong with using only fully connected layers for image classification?

A

Local patterns are lost, because the input image gets flattened into one long vector, so neighboring pixels are treated no differently from distant ones.

2
Q

Problem of Template Matching

A

The template matches an image patch only if the two are identical. Even a single pixel of noise means there is no match.

3
Q

Cross Correlation

A

Slide a kernel over a signal and compute the dot product at each position.
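A minimal 1D sketch of this sliding dot product in NumPy (the function name `cross_correlate_1d` is illustrative, not from the lecture):

```python
import numpy as np

def cross_correlate_1d(signal, kernel):
    """Slide the kernel over the signal; take the dot product at each position."""
    k = len(kernel)
    return np.array([np.dot(signal[i:i + k], kernel)
                     for i in range(len(signal) - k + 1)])

signal = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
kernel = np.array([1.0, 0.0, -1.0])   # a simple difference kernel
print(cross_correlate_1d(signal, kernel))  # [-2. -2. -2.]
```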

4
Q

Convolutional Neural Network - Input Shape

A

Batch size, height, width, channels

5
Q

Convolutional Neural Network - Convolutional Layer

A
  • The kernels that slide over the image are learned. That means the kernel values are the learnable weights.
  • For each kernel we get one feature map.
  • All the neurons in one feature map share the same weights.
  • Each neuron will have a different activation, as it “sees” a different part of the input.
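The weight-sharing idea above can be sketched in NumPy: one kernel, applied with the same weights at every position, yields one feature map (the helper name `conv2d_single_kernel` is illustrative):

```python
import numpy as np

def conv2d_single_kernel(image, kernel):
    """One kernel, shared across all positions, produces one feature map."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # the same weights (the kernel) are applied at every position
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.ones((2, 2)) / 4.0          # a 2x2 averaging kernel
fmap = conv2d_single_kernel(image, kernel)
print(fmap.shape)  # (3, 3) -- one feature map per kernel
```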
6
Q

CNN - Convolution Layer Parameters

A
  • The number of different kernels (which defines the number of feature maps in the output)
  • The shape of the kernels: (kernel size, kernel size, channels of the input to this layer)
  • The padding: valid or same (the feature map has the same size as the input)
  • The stride: how far you slide the kernel per step
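How padding and stride interact with the output size can be sketched with the standard formulas (a hedged helper, not from the lecture):

```python
import math

def conv_output_size(input_size, kernel_size, stride, padding):
    """Spatial output size of a conv layer for 'valid' or 'same' padding."""
    if padding == "same":
        # 'same' keeps the input size for stride 1; general form:
        return math.ceil(input_size / stride)
    if padding == "valid":
        # no padding: the kernel must fit entirely inside the input
        return (input_size - kernel_size) // stride + 1
    raise ValueError(f"unknown padding: {padding}")

print(conv_output_size(28, 3, 1, "valid"))  # 26
print(conv_output_size(28, 3, 1, "same"))   # 28
print(conv_output_size(28, 3, 2, "valid"))  # 13
```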
7
Q

Pooling

A

Pooling helps reduce the size of the feature maps and aids generalization by losing the exact position of the information.
Max Pooling: take the maximum value within the kernel window.
Average Pooling: take the average over all values within the kernel window.
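Both variants can be sketched in NumPy with non-overlapping windows (the function name `pool2d` is illustrative):

```python
import numpy as np

def pool2d(fmap, size=2, mode="max"):
    """Non-overlapping pooling: halves each spatial dimension for size=2."""
    h, w = fmap.shape
    fmap = fmap[:h - h % size, :w - w % size]      # crop to a multiple of size
    blocks = fmap.reshape(h // size, size, w // size, size)
    if mode == "max":
        return blocks.max(axis=(1, 3))             # max pooling
    return blocks.mean(axis=(1, 3))                # average pooling

x = np.array([[1., 2., 5., 6.],
              [3., 4., 7., 8.],
              [0., 1., 2., 3.],
              [1., 0., 3., 4.]])
print(pool2d(x, mode="max"))
# [[4. 8.]
#  [1. 4.]]
```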

8
Q

Read-Out-Network

A
  • After multiple convolution and pooling layers, the size of the feature maps is greatly reduced.
  • The feature maps get flattened into one vector.
  • One or more fully connected layers are applied.
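These three steps can be sketched in NumPy (the shapes 8×4×4 and 10 output units are made-up example values):

```python
import numpy as np

rng = np.random.default_rng(0)

# assume the conv/pool stack left us with 8 feature maps of size 4x4
feature_maps = rng.random((8, 4, 4))

flat = feature_maps.reshape(-1)        # flatten to one vector of length 128
W = rng.random((10, flat.size))        # fully connected layer: 10 output units
b = np.zeros(10)
logits = W @ flat + b                  # one dense layer of the read-out network
print(flat.shape, logits.shape)        # (128,) (10,)
```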