SECTION 1: Single layer perceptrons & the basics of ANNs Flashcards

1
Q

What are synaptic efficiencies analogous to in Artificial Neural Networks (including single-layer perceptrons)? (2p)

A

Biological synaptic efficiencies are analogous to the weights in an ANN. The weights regulate how strong an impact each input signal has, i.e. how strongly the signal is passed on.

2
Q

When a neuron ‘fires’ in a biological brain, what is the equivalent in an artificial neuron?
(only one correct answer) (2p)

A. the threshold activation function?
B. the activation of the artificial neuron (i.e. the dot product of the input vector and the weights)
C. connecting the inputs to the particular (e.g. output) neuron
D. the output of the artificial neuron when it has passed through the transfer function, e.g. a Threshold function or Sigmoid function
E. the weight connecting the input neuron to the output neuron

A

D. the output of the artificial neuron when it has passed through the transfer function, e.g. a Threshold function or Sigmoid function,
is equivalent to a neuron firing in a biological brain: the biological 'firing' (emitting an action potential) corresponds to the artificial neuron producing its output once the activation has passed through the transfer function.

3
Q

Which of the following, given appropriate connection weights, can a Single layer Perceptron linearly classify? (may be more than one correct answer) (4p)

A. AND-gated inputs
B. OR-gated inputs
C. XOR-gated inputs
D. Not XOR-gated inputs

A

A. and B.

A Single Layer Perceptron can linearly classify both AND-gated and OR-gated inputs.

XOR- and Not XOR-gated inputs are not linearly separable, and can thus only be classified by an MLP (which can transform the input space); see the sketch below.
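A minimal sketch in NumPy (weights and thresholds here are hand-picked for illustration, not learned):

```python
import numpy as np

def perceptron(x, w, theta):
    """Single threshold unit: fires (1) when the weighted sum exceeds theta."""
    return int(np.dot(w, x) > theta)

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]

# AND: both inputs must be active, so the threshold sits between 1 and 2.
print([perceptron(x, [1, 1], 1.5) for x in inputs])  # -> [0, 0, 0, 1]

# OR: one active input is enough, so the threshold sits between 0 and 1.
print([perceptron(x, [1, 1], 0.5) for x in inputs])  # -> [0, 1, 1, 1]

# XOR would need [0, 1, 1, 0]: no single w and theta can separate
# (0,1) and (1,0) from (0,0) and (1,1) with one straight line.
```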

4
Q
Observe the following input-output threshold-based rule:
w • x > Ө  →  y = 1
w • x <= Ө  →  y = 0,
where:
w represents the weight vector,
x represents the input vector,
Ө is a threshold, and
y is the output of the network.

A. Rewrite the rule (both upper and lower parts) as augmented weight vectors. (3p)

B. How is Ө now used by the ANN? (3p)

A

A. Augment the weight vector with Ө and the input vector with a constant extra input of -1:

w' = (Ө, w1, …, wn),  x' = (-1, x1, …, xn)

The rule then becomes:

w' • x' > 0  →  y = 1
w' • x' <= 0  →  y = 0

B. Ө is no longer used as a separate threshold. It is now an ordinary weight (a bias weight) attached to an extra input node whose value is always -1, so the ANN can learn Ө in exactly the same way as the other weights.
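A quick numerical check of the augmented form (the weight values and input are made up for illustration):

```python
import numpy as np

w, theta = np.array([0.4, 0.7]), 0.5
x = np.array([1.0, 0.0])

# Original rule: compare the activation to the threshold.
y_original = int(np.dot(w, x) > theta)

# Augmented rule: fold theta into the weight vector as an extra weight,
# paired with a constant extra input of -1, and compare to 0.
w_aug = np.concatenate(([theta], w))   # (0.5, 0.4, 0.7)
x_aug = np.concatenate(([-1.0], x))    # (-1, 1, 0)
y_augmented = int(np.dot(w_aug, x_aug) > 0)

assert y_original == y_augmented  # both forms give the same output
```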

5
Q

Describe the Threshold function and the Sigmoid function. What are their main features and how do they differ? (4p)

A
Threshold function:
- is used in the SLP
- gives a binary output (0 or 1), i.e. the neuron is either fully on or fully off (fires or doesn't fire)
- the activation is compared to the threshold Ө (if it is bigger than Ө, the neuron fires); with an augmented weight vector the comparison is against 0
- the output is 'more angular': it jumps abruptly from 0 to 1 at the threshold

Sigmoid function:
- is used in the MLP
- gives a graded output between 0 and 1, following a smooth S-shaped curve
- is differentiable everywhere, which is what makes gradient-based training (backpropagation) possible

Main difference: the Threshold function is discontinuous and all-or-nothing, while the Sigmoid is a smooth, differentiable approximation of it. A sketch of both follows below.
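A small sketch of the two functions in NumPy (the sample activations are arbitrary):

```python
import numpy as np

def threshold(a, theta=0.0):
    """Step function: all-or-nothing output."""
    return np.where(a > theta, 1.0, 0.0)

def sigmoid(a):
    """Smooth S-shaped curve: graded output in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-a))

a = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(threshold(a))  # [0. 0. 0. 1. 1.]
print(sigmoid(a))    # [0.119 0.378 0.5 0.622 0.881] (rounded)
```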
6
Q

Describe the notion of decision boundary for linearly separable problems with reference to a Single Layer Perceptron. (2p)

A
The decision boundary (DB) separates the two classes in an SLP. For an SLP the DB is the hyperplane where w • x = Ө (a straight line in a 2D input space): examples that meet the criterion w • x > Ө are counted as one class, and the rest as the other class. A problem is linearly separable precisely when one such straight boundary can split the classes.

Where to draw the DB can be tricky. We don't want overfitting (a boundary fitted so specifically to the training examples that it doesn't generalize well to new ones) or underfitting (a boundary that generalizes too much). See the sketch below.
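As an illustration, a 2D decision boundary written out explicitly (the weights and threshold are made up):

```python
import numpy as np

w, theta = np.array([2.0, 1.0]), 1.0

# The decision boundary is the line w1*x1 + w2*x2 = theta,
# i.e. x2 = (theta - w1*x1) / w2 in this 2D case.
def side(x):
    return "class 1" if np.dot(w, x) > theta else "class 0"

print(side(np.array([1.0, 1.0])))   # class 1 (one side of the line)
print(side(np.array([0.0, 0.0])))   # class 0 (the other side)
```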

7
Q

What is a Perceptron (aka SLP)?

A

The simplest and oldest model of a neuron, as we know it. It takes some inputs, sums them up, applies an activation function and passes the result to the output layer. No magic here.

8
Q

What is a Feed Forward network (FF)?

A

a neural network where all nodes are fully connected and activation flows from the input layer to the output, without back loops.
There is one layer between input and output (the hidden layer).

In most cases this type of network is trained using the Backpropagation method.

9
Q

What is a Multi-Layer Perceptron (MLP)?

A

has an input layer, an output layer and at least one hidden layer (in the very simplest version), uses the Sigmoid function, and can be trained with backpropagation.

It is a feedforward, fully-connected ANN with at least one hidden layer, where feedforward means every neuron in a layer (e.g. the input layer) is connected to every neuron in the next layer (e.g. the hidden layer).

Thanks to its structure it can solve linearly non-separable problems, i.e. problems where the classification can't be made using only one decision boundary; see the XOR sketch below.
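As a sketch, a two-layer network with hand-picked weights (not learned; chosen purely to illustrate) computes XOR by combining two linear boundaries:

```python
import numpy as np

def step(a):
    return (a > 0).astype(float)

def mlp_xor(x):
    # Hidden layer: two threshold units, h1 = OR(x), h2 = AND(x).
    h = step(np.array([[1, 1], [1, 1]]) @ x - np.array([0.5, 1.5]))
    # Output: fires when OR is on but AND is off, i.e. XOR.
    return step(np.array([1, -1]) @ h - 0.5)

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, mlp_xor(np.array(x, dtype=float)))  # -> 0, 1, 1, 0
```

The hidden layer transforms the input space so that the two classes become linearly separable for the output unit, which is exactly what an SLP cannot do.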

10
Q

What is a Recurrent Neural Network (RNN)?

A

This type of NN is mainly used when context is important, i.e. when decisions from past iterations or samples can influence the current one. The most common example of such a context is text: a word can be analysed only in the context of the previous words or sentences.
RNNs can process texts by "keeping in mind" e.g. the ten previous words, as sketched below.
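A minimal sketch of the recurrence that carries context forward (sizes and random weights are arbitrary; a real RNN would learn them):

```python
import numpy as np

rng = np.random.default_rng(0)
W_x = rng.normal(size=(4, 3))  # input -> hidden weights
W_h = rng.normal(size=(4, 4))  # hidden -> hidden weights (the recurrence)

def rnn_step(x_t, h_prev):
    # The new state depends on the current input AND the previous state,
    # which is how earlier samples influence the current decision.
    return np.tanh(W_x @ x_t + W_h @ h_prev)

h = np.zeros(4)
for x_t in rng.normal(size=(5, 3)):  # a sequence of 5 inputs
    h = rnn_step(x_t, h)
```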

11
Q

What is a Long-Short Term Memory network (LSTM)?

A

This type of network introduces a memory cell: a special cell that can process data even when the data have time gaps (or lags). LSTM networks can process video by "keeping in mind" something that happened many frames ago. LSTM networks are also widely used for writing and speech recognition. One step of the cell is sketched below.
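A compact sketch of one LSTM step using the standard gate equations (shapes and random parameters are illustrative, not trained); the cell state c is what lets information survive long gaps:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_step(x, h, c, W, U, b):
    """One step: W, U, b each hold parameters for the four gate blocks."""
    f = sigmoid(W[0] @ x + U[0] @ h + b[0])   # forget gate
    i = sigmoid(W[1] @ x + U[1] @ h + b[1])   # input gate
    o = sigmoid(W[2] @ x + U[2] @ h + b[2])   # output gate
    g = np.tanh(W[3] @ x + U[3] @ h + b[3])   # candidate values
    c = f * c + i * g       # memory cell: old content kept or overwritten
    h = o * np.tanh(c)      # hidden state / output
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.normal(size=(4, n_hid, n_in))
U = rng.normal(size=(4, n_hid, n_hid))
b = np.zeros((4, n_hid))
h, c = np.zeros(n_hid), np.zeros(n_hid)
h, c = lstm_step(rng.normal(size=n_in), h, c, W, U, b)
```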

12
Q

What is a Gated Recurrent Unit (GRU)?

A

GRUs are LSTMs with different gating.

Sounds simple, but the lack of an output gate makes it easier to repeat the same output for a concrete input multiple times; GRUs are currently used the most in sound (music) and speech synthesis.

The actual composition, though, is a bit different: all the LSTM gates are combined into a so-called update gate, and the reset gate is closely tied to the input.

They are less resource-consuming than LSTMs and almost as effective; compare the sketch below with the LSTM step above.
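For comparison with the LSTM sketch above, one GRU step using the standard equations (again with illustrative, untrained parameters); note there is no separate memory cell and no output gate:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h, W, U, b):
    z = sigmoid(W[0] @ x + U[0] @ h + b[0])             # update gate
    r = sigmoid(W[1] @ x + U[1] @ h + b[1])             # reset gate
    h_cand = np.tanh(W[2] @ x + U[2] @ (r * h) + b[2])  # candidate state
    return (1 - z) * h + z * h_cand                     # blend old and new state

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4, 3))   # input weights for z, r, candidate
U = rng.normal(size=(3, 4, 4))   # recurrent weights
b = np.zeros((3, 4))
h = gru_step(rng.normal(size=3), np.zeros(4), W, U, b)
```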

13
Q

What is an Autoencoder (AE)?

A

used for classification, clustering and feature compression.

Feedforward fully-connected ANN where the output layer has the same number of neurons as the input layer. Its main idea is basically to "copy" the input, i.e. to reproduce the input at the output.

When you train FF neural networks for classification, you mostly feed them X examples in Y categories and expect one of the Y output cells to be activated. This is called "supervised learning".

AEs, on the other hand, can be trained unsupervised. Their structure (the number of hidden cells is smaller than the number of input cells, while the number of output cells equals the number of input cells), combined with training so that the output is as close to the input as possible, forces AEs to generalize the data and search for common patterns. See the sketch below.
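A sketch of the bottleneck idea (layer sizes and weights are arbitrary and untrained; a real AE would adjust them, e.g. with backpropagation, to minimize the reconstruction error):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 8, 3            # bottleneck: fewer hidden cells than inputs

W_enc = rng.normal(size=(n_hid, n_in))   # encoder weights
W_dec = rng.normal(size=(n_in, n_hid))   # decoder weights

def autoencode(x):
    code = np.tanh(W_enc @ x)     # compressed representation
    return W_dec @ code           # reconstruction, same size as the input

x = rng.normal(size=n_in)
x_hat = autoencode(x)
loss = np.mean((x - x_hat) ** 2)  # unsupervised training would minimize this
```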

14
Q

What is a Convolutional Neural network (CNN)?

A

a class of deep neural networks, most commonly applied to analyzing visual imagery.

Feedforward but not fully-connected ANN containing specialist layers (convolutional layers, pooling layers)

A convolutional layer is structured as pairs of

  • a feature map and
  • a pooling map

CNNs use a variation of multilayer perceptrons designed to require minimal preprocessing. They are also known as shift-invariant or space-invariant artificial neural networks, based on their shared-weights architecture and translation-invariance characteristics. A small sketch of the convolution and pooling operations follows below.
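A minimal sketch of how a feature map and a pooling map are computed (tiny made-up image and kernel; in a real CNN the kernel weights are learned and shared across the whole image):

```python
import numpy as np

image = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 "image"
kernel = np.array([[1.0, 0.0],
                   [0.0, -1.0]])                   # made-up 2x2 filter

# Convolution: slide the kernel over the image. The same (shared)
# weights are applied everywhere, which gives shift invariance.
fmap = np.zeros((4, 4))
for i in range(4):
    for j in range(4):
        fmap[i, j] = np.sum(image[i:i+2, j:j+2] * kernel)

# Pooling: downsample the feature map, here max over 2x2 blocks.
pooled = fmap.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(fmap.shape, pooled.shape)  # (4, 4) (2, 2)
```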
