Introduction Flashcards

1
Q

What is a perceptron?

A

Perceptron: a network of threshold nodes for pattern classification.

Perceptron learning rule: the first learning algorithm.
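
A minimal sketch of a single threshold node trained with the perceptron learning rule (the AND target and all names here are illustrative assumptions, not from the card):

    import numpy as np

    # Perceptron output: weighted sum passed through a hard threshold.
    def predict(w, b, x):
        return 1 if np.dot(w, x) + b > 0 else 0

    # Perceptron learning rule: w <- w + eta * (target - output) * x
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 0, 0, 1])            # AND: linearly separable
    w, b, eta = np.zeros(2), 0.0, 0.1
    for _ in range(20):                   # a few epochs suffice here
        for x, t in zip(X, y):
            err = t - predict(w, b, x)
            w += eta * err * x
            b += eta * err
    print([predict(w, b, x) for x in X])  # -> [0, 0, 0, 1]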

2
Q

State three differences between the human brain and a von Neumann machine

A

Von Neumann machine:
One or a few high-speed (ns) processors with considerable computing power;
One or a few shared high-speed buses for communication;
Sequential memory access by address.

Human brain:
Large number (10^11) of low-speed (ms) processors with limited computing power;
Large number (10^15) of low-speed connections;
Content-addressable recall (CAM).

3
Q

How is the human brain adaptive compared to a von Neumann machine?

A

Adaptation by changing the connectivity (topology & the thickness of connections)

4
Q

What is the Hebbian rule of learning?

A

Hebbian rule of learning: increase the connection strength between neurons i and j whenever both i and j are activated; more generally, increase the connection strength whenever both nodes are simultaneously ON or simultaneously OFF.
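
A minimal sketch, assuming bipolar (+1/-1) activations so that the product form strengthens a connection exactly when both nodes are simultaneously ON or simultaneously OFF:

    import numpy as np

    eta = 0.1
    x = np.array([1, -1, 1])    # bipolar activations: +1 = ON, -1 = OFF

    # Hebbian update: dw_ij = eta * x_i * x_j. The weight grows for
    # agreeing pairs (+1,+1) or (-1,-1) and shrinks for disagreeing ones.
    W = np.zeros((3, 3))
    W += eta * np.outer(x, x)
    print(W)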

5
Q

Whose work was the origin of automata theory?

A

McCulloch & Pitts (1943)

6
Q

Which was the first mathematical model for biological neurons?

A

McCulloch & Pitts (1943)

7
Q

Why did the US government stop funding ANN research? State 3 reasons.

A

A single-layer perceptron cannot represent (learn) simple functions such as XOR.

Multi-layer networks of non-linear units may have greater power, but there was no learning algorithm for such nets.

Scaling problem: connection weights may grow infinitely.
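
An illustrative sketch of the first point: a single threshold unit trained with the perceptron rule never reaches zero error on XOR, because no linear separator exists (the code itself is an assumption, not from the card):

    import numpy as np

    def predict(w, b, x):
        return 1 if np.dot(w, x) + b > 0 else 0

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 1, 1, 0])            # XOR: not linearly separable
    w, b, eta = np.zeros(2), 0.0, 0.1
    for _ in range(1000):                 # however long we train ...
        for x, t in zip(X, y):
            err = t - predict(w, b, x)
            w += eta * err * x
            b += eta * err
    misses = sum(predict(w, b, x) != t for x, t in zip(X, y))
    print(misses)                         # ... at least 1 example stays wrong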

8
Q

What caused a renewed enthusiasm in ANN research in the 90’s?

A

– New techniques:
• Backpropagation learning for multi-layer feed-forward nets (with non-linear, differentiable node functions)
• Physics-inspired models (Hopfield net, Boltzmann machine, etc.)
• Unsupervised learning (LVQ nets, Kohonen nets)

– Impressive applications (character recognition, speech recognition, text-to-speech transformation, process control, associative memory, etc.)

9
Q

What does generalisation in the context of ML mean?

A

We want the network to perform well on data that was not used during the training process.
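
A minimal sketch of how generalisation is usually estimated, by holding out a test set the network never trains on (scikit-learn and its load_digits toy dataset are assumptions here, not from the card):

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    # Hold out data the network never sees during training.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              random_state=0)
    net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
    net.fit(X_tr, y_tr)
    print("train accuracy:", net.score(X_tr, y_tr))
    print("test accuracy:", net.score(X_te, y_te))   # generalisation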

10
Q

In the context of supervised learning, what does training mean?

A

A process of tweaking the parameters to minimize the error of the predictions (the outputs should be close to the targets).
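
A minimal sketch of such training, assuming a toy linear model, squared error, and gradient descent (all values illustrative):

    import numpy as np

    # Toy supervised data: targets generated by t = 2x + 1.
    X = np.array([0.0, 1.0, 2.0, 3.0])
    T = 2 * X + 1

    w, b, eta = 0.0, 0.0, 0.05
    for _ in range(500):
        Y = w * X + b                  # predictions (outputs)
        err = Y - T                    # gap between outputs and targets
        w -= eta * (err * X).mean()    # tweak the parameters to
        b -= eta * err.mean()          # reduce the prediction error
    print(w, b)                        # -> close to 2 and 1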

11
Q

What does backpropagation mean?

A

The backpropagation algorithm searches for weight values that minimize the total error of the network.

12
Q

What are the two steps of backpropagation?

A

– Forward pass: the network is activated on one example and the error of each neuron of the output layer is computed.

– Backward pass: the network error is used for updating the weights. Starting at the output layer, the error is propagated backwards through the network, layer by layer, with the help of the generalized delta rule. Finally, all weights are updated.
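
A minimal numpy sketch of the two passes for a tiny 2-2-1 sigmoid network, trained on one example (biases omitted for brevity; all names are illustrative assumptions):

    import numpy as np

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    rng = np.random.default_rng(0)
    W1, W2 = rng.normal(size=(2, 2)), rng.normal(size=(1, 2))
    x, t, eta = np.array([1.0, 0.0]), np.array([1.0]), 0.5

    for _ in range(1000):
        # Forward pass: activate the network, compute the output error.
        h = sigmoid(W1 @ x)
        y = sigmoid(W2 @ h)
        # Backward pass: generalized delta rule, output layer first.
        d2 = (y - t) * y * (1 - y)           # output-layer delta
        d1 = (W2.T @ d2) * h * (1 - h)       # propagated to hidden layer
        W2 -= eta * np.outer(d2, h)          # finally, update all weights
        W1 -= eta * np.outer(d1, x)
    print(y)                                 # -> close to the target 1.0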

13
Q

What is a downside of backpropagation?

A

There is no guarantee of convergence, especially when the learning rate is too large or too small; and even when training converges, it may end up in a local rather than the global minimum of the error.

14
Q

How do you solve the convergence problem of backpropagation in practice?

A

Try several starting configurations and learning rates.
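
A hypothetical sketch of that practice on a toy one-parameter objective (the objective and the values tried are illustrative assumptions, not from the card):

    import numpy as np

    def train(seed, eta):
        rng = np.random.default_rng(seed)
        w = rng.normal()                  # random starting configuration
        for _ in range(200):
            w -= eta * 2 * (w - 3)        # gradient step on (w - 3)^2
        return (w - 3) ** 2, eta, w

    # Try several starting configurations and learning rates,
    # then keep the run with the smallest final error.
    best = min(train(seed, eta)
               for seed in range(5)
               for eta in (0.001, 0.01, 0.1))
    print(best)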

15
Q

Give three examples of MLP applications and describe in a line what each does

A

NetTalk: a network that reads aloud texts

ALVINN: a neural network that drives a car

Falcon: a real-time system for detecting fraud with credit card transactions

16
Q

Briefly describe the evolution of NNs

A

In the 1940s, the first mathematical model of a biological neuron by McCulloch & Pitts

1949: Hebbian rule

1950s and 60s: single-layer perceptron

1980s and 90s: multi-layer perceptrons such as NETtalk, ALVINN and Falcon

2006:
– Restricted Boltzmann Machine as a building block for DNNs
– Contrastive Divergence algorithm
– Combining supervised with unsupervised learning
– Deep Belief Networks, Stacked Auto-Encoders

2012: AlexNet (ImageNet Challenge)

2013: DeepMind “solves” the Atari 2600 games

2016-: AlphaGo, AlphaGoZero, AlphaZero (Go, Chess, …?)

17
Q

What is the Perceptron convergence theorem?

A

Perceptron convergence theorem: “everything that can be represented by a perceptron can be learned”

18
Q

What are the enabling factors of the huge success of neural networks?

A

• Availability of “Big Data”: the more data we have, the better we can train a network.

• Powerful hardware (GPUs): speeding up training by a factor of 100-1000 reduces the training time from years to hours.

• New algorithms and architectures: leaving the standard MLP behind …

19
Q

What are key deep learning architectures?

A
• Convolutional networks: when adding layers, enforce hidden nodes to learn “local features”; that reduces the number of parameters.
• Recurrent networks: networks for modeling sequential data.
• Autoencoders: networks that learn intrinsic features of the data.
• RBMs, VAEs, GANs: generative models of the data.

20
Q

Give an example of a CNN application with very high accuracy

A

Handwritten digit recognition on MNIST: 99.67% accuracy

21
Q

What is MNIST? Describe the dataset and the accuracy achieved on it

A

MNIST is a benchmark dataset of handwritten digits.

Training set: 60,000 images

Test set: 10,000 images

Each image: 28x28 pixels (zero-padded to 32x32 as input to LeNet-style CNNs)

Accuracy: 99.7%

22
Q

Which digits were misclassified most often on MNIST?

A

There were 33 misclassified digits, with most of the confusion between 7 and 9, and between 3 and 5.