6.19 - Pattern Recognition and Categorisation in FFNNs Flashcards

Understand hierarchical processing in feed-forward neural networks.

1
Q

When is a network “feed forward”?

A

When neurons are connected only to downstream neighbors, so signals never feed back. For example, a “chain of neurons” 1 > 2 > 3 is feed forward.

2
Q

What is a perceptron and what does this network architecture look like?

A

The perceptron is an algorithm for supervised learning of binary classifiers in a feed-forward neural network. It is a linear classifier: after training, it can decide whether an input vector belongs to a specific class. Architecturally, it is a single layer of weights connecting the inputs directly to an output unit that applies a threshold to the weighted sum.

3
Q

What is the perceptron XOR problem?

A

The fact that a one-layer perceptron cannot learn the “exclusive or” (XOR) logical function.

A one-layer perceptron can only produce linear separation boundaries, and the two XOR classes are not linearly separable.

4
Q

What is the role of convolutional layers?

A

Convolutional layers extract features.

These features can be used in hierarchies to recognize patterns.

The convolution is performed on the input data with the use of a filter or kernel (these terms are used interchangeably) to produce a feature map. We execute a convolution by sliding the filter over the input. At every location, an element-wise multiplication between the filter and the underlying patch of the input is performed, and the results are summed into one value of the feature map.

In a typical illustration, a filter (for example 3x3) slides over the input, and the sum at each position fills one cell of the feature map. The area of the input covered by the filter is also called the receptive field. Because one and the same filter is used for the entire image, convolutional neural networks are largely location-invariant and need far fewer parameters, which helps prevent overfitting. This means they are able to recognize features independent of their location in the input image.

Numerous convolutions are performed on our input, where each operation uses a different filter. This results in different feature maps. In the end, we take all of these feature maps and put them together as the final output of the convolution layer.
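
A minimal NumPy sketch of this sliding-window operation (the 5x5 input and the 3x3 filter values are arbitrary, for illustration only):

```python
import numpy as np

def conv2d(image, kernel):
    """Slide the kernel over the image; at each location, multiply
    element-wise with the underlying patch and sum into the feature map."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    feature_map = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(feature_map.shape[0]):
        for j in range(feature_map.shape[1]):
            patch = image[i:i + kh, j:j + kw]           # the receptive field
            feature_map[i, j] = np.sum(patch * kernel)  # element-wise product, then sum
    return feature_map

image = np.arange(25, dtype=float).reshape(5, 5)        # toy 5x5 input
kernel = np.array([[1., 0., -1.],
                   [1., 0., -1.],
                   [1., 0., -1.]])                      # arbitrary 3x3, vertical-edge-like filter
print(conv2d(image, kernel))                            # 3x3 feature map
```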

5
Q

What is the role of Regularization?

A

Regularization simplifies network connections.

We want neural networks that generalize well, i.e. whose weights make them perform well on unseen data rather than only on one specific training dataset. In supervised learning, we measure the performance of a network by comparing its predictions to the ground-truth labels; the mismatch between the two, summarized by an error function, is the loss or cost.

With regularization, the loss is calculated by adding a regularization term to the error. This penalizes model complexity: pushing weights towards values close to zero decreases the penalty and simplifies the model, which helps to prevent overfitting.
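
A minimal sketch of this idea, assuming a mean-squared-error data term and an L2 penalty (the weighting factor lam is illustrative):

```python
import numpy as np

def regularized_loss(y_true, y_pred, weights, lam=0.01):
    """Total loss = data error + regularization term."""
    error = np.mean((y_true - y_pred) ** 2)   # how wrong the predictions are
    penalty = lam * np.sum(weights ** 2)      # discourages large (complex) weight settings
    return error + penalty

print(regularized_loss(np.array([1., 0.]),
                       np.array([0.9, 0.2]),
                       np.array([0.5, -2.0])))   # ≈ 0.0675 (= 0.025 error + 0.0425 penalty)
```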

6
Q

Explain the difference between L1 and L2 regularisation.

A

L1: a cost proportional to the sum of the absolute values of the weights (the L1 norm). It tends to drive many weights exactly to zero, giving sparse models.

L2: a cost proportional to the sum of the squared weights (the squared Euclidean length of the weight vector). It shrinks all weights towards zero without making them exactly zero.
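
A small sketch of the two penalties on an example weight vector (the values are arbitrary):

```python
import numpy as np

weights = np.array([0.5, -1.2, 0.0, 3.0])   # example weight vector

l1_penalty = np.sum(np.abs(weights))        # L1: sum of absolute values (≈ 4.7)
l2_penalty = np.sum(weights ** 2)           # L2: sum of squares (≈ 10.69)
```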

7
Q

What is the difference between regression and classification?

A

Classification: predicting a label (a discrete class/category) given inputs.

Regression: predicting a quantity (a continuous numerical value) given inputs.

8
Q

What is overfitting?

A

Overfitting is a phenomenon that occurs when a machine-learning or statistical model is tailored to a particular training dataset and is unable to generalize to unseen data. This is especially a problem in complex models, such as deep neural networks.

In other words, an overfitted model performs well on training data but fails to generalize.

Usually, the more parameters the model has, the more functions it can represent, and the more likely it is to overfit.
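
A toy sketch of the idea, assuming NumPy and a polynomial-fitting example: a very flexible model can fit the noisy training points almost perfectly while typically doing worse than a simpler model on unseen test points.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 8)
y_train = np.sin(2 * np.pi * x_train) + 0.2 * rng.standard_normal(8)   # noisy samples

simple = np.polyfit(x_train, y_train, deg=3)     # few parameters
flexible = np.polyfit(x_train, y_train, deg=7)   # as many parameters as training points

x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)              # noise-free ground truth

def mse(coeffs, x, y):
    return np.mean((np.polyval(coeffs, x) - y) ** 2)

print("simple  :", mse(simple, x_train, y_train), mse(simple, x_test, y_test))
print("flexible:", mse(flexible, x_train, y_train), mse(flexible, x_test, y_test))
# The flexible fit has (near-)zero training error; compare the test errors to see the gap.
```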

9
Q

How does the perceptron algorithm work?

A

(1) Initialize the parameters w and b at 0.
(2) Keep cycling through the training data (x, y), with labels y in {-1, +1}.
(3) If y(w*x + b) <= 0 (the point is misclassified), update:

I. the weights: w = w + y*x

II. the bias: b = b + y

Repeat until no training point is misclassified (or a maximum number of passes is reached).
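
A minimal NumPy sketch of these steps, assuming labels in {-1, +1} and a fixed number of passes (the AND-style toy data is just for illustration):

```python
import numpy as np

def train_perceptron(X, y, epochs=100):
    """Perceptron learning rule. X: (n_samples, n_features), y: labels in {-1, +1}."""
    w = np.zeros(X.shape[1])                      # (1) initialize w at 0
    b = 0.0                                       #     ... and b at 0
    for _ in range(epochs):                       # (2) keep cycling through the data
        for x_i, y_i in zip(X, y):
            if y_i * (np.dot(w, x_i) + b) <= 0:   # (3) point misclassified?
                w = w + y_i * x_i                 #  I. update the weights
                b = b + y_i                       # II. update the bias
    return w, b

# Toy linearly separable problem (logical AND with labels in {-1, +1})
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))   # reproduces the labels y
```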

10
Q

Describe the behavior and significance of the rectified linear unit (ReLU) function.

A

ReLU is a nonlinear activation function.

It returns zero for any negative input and returns the input value itself for any positive input: ReLU(x) = max(0, x).

Therefore, it introduces nonlinearity into the network.

Any relationship or function can be roughly approximated by aggregating many ReLU functions together.
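
A one-line NumPy sketch of the function described above:

```python
import numpy as np

def relu(x):
    """Rectified linear unit: 0 for negative inputs, the input itself otherwise."""
    return np.maximum(0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 0.5, 2.0])))   # negative inputs map to 0, positive pass through
```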

11
Q

Why can we not approximate nonlinear relationships with linear activation functions?

A

A stack of linear (affine) layers collapses into a single affine layer: W2(W1*x + b1) + b2 = (W2*W1)*x + (W2*b1 + b2). Affine transformations preserve lines and parallelism, so no matter how many such layers we stack, the network can only produce linear decision boundaries.

12
Q

What is a moving average in a CNN?

A

A moving average is a sliding windowed average: for every point t in a time series, one computes the average of the N points around it.

This local average is a simple convolution (with a uniform kernel of weight 1/N) used to smooth out noise in data by replacing each data point with the average of neighbouring values in a moving window.
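
A minimal sketch using NumPy's convolve (the toy series and the window size N = 3 are arbitrary):

```python
import numpy as np

signal = np.array([1.0, 2.0, 8.0, 3.0, 4.0, 9.0, 5.0])   # noisy toy time series
N = 3
kernel = np.ones(N) / N                                   # uniform 1/N averaging filter

smoothed = np.convolve(signal, kernel, mode='valid')      # slide the window, average inside it
print(smoothed)   # each value is the mean of 3 neighbouring points
```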

13
Q

How are convolutional neural networks shift-invariant?

A

Because the same filter is applied at every location, shifting a pattern in the input mainly shifts the corresponding feature map; after pooling and the later layers, the final output is left relatively unaffected.

14
Q

What does it mean that images are discrete (digital)?

A

They sample continuous 2D space on a regular grid of pixels.

15
Q

In the context of image processing, what is filtering?

A

Forming a new image whose pixels are some function of the original pixel values.

16
Q

When can we say that a convolutional filter is a linear system?

A

When applying the filter to the sum of two pictures gives the same result as applying the filter to each picture individually and summing the filtered pictures together:

F(Image1 + Image2) = F(Image1) + F(Image2)

(the superposition principle; full linearity also requires scaling, F(a*Image) = a*F(Image))
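
A small numerical check of this property for an averaging filter, assuming SciPy is available (the random 5x5 images are arbitrary):

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
img1 = rng.random((5, 5))
img2 = rng.random((5, 5))
kernel = np.ones((3, 3)) / 9.0   # simple 3x3 averaging filter

lhs = convolve2d(img1 + img2, kernel, mode='valid')                                    # filter the sum
rhs = convolve2d(img1, kernel, mode='valid') + convolve2d(img2, kernel, mode='valid')  # sum the filtered images
print(np.allclose(lhs, rhs))     # True: convolution obeys superposition
```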

17
Q

Is a threshold-based image segmentation filter a linear system?

A

No. Thresholding is not a linear transformation, as it ‘loses’ information about the original pixel values.

Let’s test it: is the result of applying a threshold to the sum of two images equal to applying a threshold to each image separately and summing?

F(Image1 + Image2) =? F(Image1) + F(Image2)

Answer: in general, no.

In other words, the superposition principle does not hold for threshold-based filters.
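
A tiny counterexample in NumPy (the threshold t = 1.0 and the pixel value 0.6 are arbitrary):

```python
import numpy as np

def threshold(img, t=1.0):
    """Binary threshold filter: 1 where a pixel exceeds t, 0 elsewhere."""
    return (img > t).astype(float)

img1 = np.array([[0.6]])
img2 = np.array([[0.6]])

lhs = threshold(img1 + img2)             # threshold of the sum: 0.6 + 0.6 = 1.2 > 1.0 -> 1
rhs = threshold(img1) + threshold(img2)  # sum of thresholds: 0 + 0 -> 0
print(lhs, rhs, np.allclose(lhs, rhs))   # superposition fails: allclose is False
```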

18
Q

What is an invariance?

Give 2 examples.

A

Some “essence” of “something”: a property of something that stays the same after a transformation.

For example:

  • Time invariance in word recognition: we can recognize words irrespective of when we hear them.
  • Luminance invariance in shape recognition: we can recognize shapes despite variations in luminance.
  • Space invariance in odor recognition: we can recognize odors irrespective of where we smell them.

19
Q

What is the premise of the parallel distributed processing (PDP) approach to memory?

A

PDP is a connectionist approach. It stresses that neural representations of concepts are hierarchically distributed throughout the network instead of being stored in localized structures.

20
Q

Is the following statement true or false?

Single neurons invariantly represent higher-level concepts.

A

True - some researchers (e.g., Quiroga) have found single neurons that respond invariantly to higher-level concepts, such as the “Jennifer Aniston neuron”, which responds to many different images of the same person.

Note that this finding has been debated.

21
Q

What is a Machine Learning (ML) problem?

A

An ML problem is a set of choices:

(1) a dataset - what goes in?
(2) a model - what does the job?
(3) a cost function - how incorrect is the model?
(4) an optimization procedure - how to improve?
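
A toy sketch that maps these four choices onto code, assuming NumPy, a synthetic dataset, a linear model, a squared-error cost, and plain gradient descent:

```python
import numpy as np

# (1) dataset: synthetic inputs and targets
rng = np.random.default_rng(0)
X = rng.random(100)
y = 3.0 * X + 0.5 + 0.01 * rng.standard_normal(100)

# (2) model: a single linear unit, y_hat = w*x + b
w, b = 0.0, 0.0

# (3) cost function: mean squared error ("how incorrect is the model?")
def cost(w, b):
    return np.mean((y - (w * X + b)) ** 2)

# (4) optimization procedure: gradient descent on the cost ("how to improve?")
lr = 0.1
for _ in range(2000):
    y_hat = w * X + b
    w -= lr * (-2 * np.mean((y - y_hat) * X))
    b -= lr * (-2 * np.mean(y - y_hat))

print(round(w, 2), round(b, 2), cost(w, b))   # w, b approach the true parameters 3.0 and 0.5
```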

22
Q

How is the visual system organized?

A

Hierarchically (mostly).

From low-level simple features to high-level complex features.

Like convolutional neural networks, the visual system breaks the problem of object recognition into a hierarchy of feature decomposition, with the lower levels extracting simpler features and the higher levels combining them into more complex features.

23
Q

How are convolutional networks like networks in the visual system?

A

Both break the problem of object recognition into a hierarchy of feature decomposition: lower layers (or visual areas) extract simple features such as edges and orientations, while higher layers (or areas) combine them into increasingly complex features, up to whole objects.