Introduction Flashcards
What is a perceptron?
Perceptron: a network of threshold nodes for pattern classification.
The perceptron learning rule was the first learning algorithm.
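A minimal sketch of a threshold-unit perceptron trained with the perceptron learning rule; the AND data set, learning rate and epoch count below are illustrative choices, not part of the original card.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Perceptron learning rule: adjust weights only on misclassified examples."""
    w = np.zeros(X.shape[1])   # weights
    b = 0.0                    # bias (threshold)
    for _ in range(epochs):
        for xi, target in zip(X, y):
            out = 1 if xi @ w + b > 0 else 0      # threshold activation
            w += lr * (target - out) * xi          # no change when output is correct
            b += lr * (target - out)
    return w, b

# Linearly separable example: logical AND
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
preds = [1 if xi @ w + b > 0 else 0 for xi in X]   # → [0, 0, 0, 1]
```

Because AND is linearly separable, the rule converges; on XOR (next card) it never would.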
State 3 differences between human brain and Von Neumann machine
Von Neumann:
One or a few high-speed (ns) processors with considerable computing power;
One or a few shared high-speed buses for communication;
Sequential memory access by address.
Human Brain:
Large number (10^11) of low-speed (ms) processors with limited computing power;
Large number (10^15) of low-speed connections;
Content-addressable recall (CAM).
How is human brain adaptive compared to a von Neumann machine?
Adaptation by changing the connectivity (topology & the thickness of connections)
What is Hebbian rule of learning?
Hebbian rule of learning: increase the connection strength between nodes i and j whenever both are activated together (in the bipolar variant, whenever both nodes are simultaneously ON or simultaneously OFF).
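The bipolar form of the rule can be sketched as an outer-product update: with activations in {+1, −1}, the product x_i · x_j is positive exactly when both nodes are ON or both are OFF. The learning rate and the 3-node example are illustrative.

```python
import numpy as np

def hebbian_update(w, x, lr=0.1):
    """Hebbian rule: delta_w[i, j] = lr * x[i] * x[j] (outer product)."""
    return w + lr * np.outer(x, x)

x = np.array([1, -1, 1])      # bipolar activations
w = np.zeros((3, 3))
w = hebbian_update(w, x)
# w[0, 2] grows (both ON: +1 * +1), w[0, 1] shrinks (one ON, one OFF)
```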
Whose work was the origin of automata theory?
McCulloch & Pitts (1943)
Which was the first mathematical model for biological neurons?
McCulloch & Pitts (1943)
Why did the US government stop funding ANN research? State 3 reasons.
A single-layer perceptron cannot represent (learn) simple functions such as XOR.
Multi-layer networks of non-linear units may have greater power, but there was no learning algorithm for such nets.
Scaling problem: connection weights may grow without bound.
What caused a renewed enthusiasm in ANN research in the 90’s?
– New techniques
• Backpropagation learning for multi-layer feedforward nets (with non-linear, differentiable node functions)
• Physics-inspired models (Hopfield net, Boltzmann machine, etc.)
• Unsupervised learning (LVQ nets, Kohonen nets)
– Impressive applications (character recognition, speech recognition, text-to-speech transformation, process control, associative memory, etc.)
What does generalisation in the context of ML mean?
We want the network to perform well on data that was not used during training.
In the context of supervised learning, what does training mean?
A process of tweaking the parameters to minimize the prediction errors (outputs should be close to targets).
What does back propagation mean?
The backpropagation algorithm searches for weight values that minimize the total error of the network.
What are the two steps of back propagation?
– Forward pass: the network is activated on one example and the error of each neuron of the output layer is computed.
– Backward pass: the network error is used to update the weights. Starting at the output layer, the error is propagated backwards through the network, layer by layer, with the help of the generalized delta rule. Finally, all weights are updated.
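The two passes can be sketched for a tiny 2-2-1 sigmoid network. The network sizes, random initialization, training example and learning rate here are illustrative assumptions, not from the card itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 2))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(1, 2))   # hidden -> output weights
lr = 0.5

x = np.array([1.0, 0.0])   # one training example
t = np.array([1.0])        # its target

# Forward pass: activate the network on the example, compute output error
h = sigmoid(W1 @ x)        # hidden activations
o = sigmoid(W2 @ h)        # output activation
error = t - o

# Backward pass: propagate the error layer by layer using the
# generalized delta rule (sigmoid derivative is o * (1 - o)),
# then update all weights
delta_o = error * o * (1 - o)
delta_h = (W2.T @ delta_o) * h * (1 - h)
W2 += lr * np.outer(delta_o, h)
W1 += lr * np.outer(delta_h, x)
```

After this single forward/backward cycle the output for `x` moves slightly closer to the target; training repeats both passes over many examples.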
What is a downside of back propagation?
There are no guarantees of convergence, especially when the learning rate is too large or too small; even when it does converge, it may settle in a local minimum rather than the global minimum.
How do you solve the convergence problem of back propagation in practice?
Try several starting configurations and learning rates, and keep the best result.
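The restart strategy can be sketched as a simple search loop. The `train` function below is a stand-in (1-D gradient descent on a toy quadratic loss), and the grid of learning rates and number of restarts are arbitrary choices for illustration.

```python
import random

def train(w0, lr, steps=100):
    """Stand-in for network training: gradient descent on (w - 3)^2."""
    loss = lambda w: (w - 3.0) ** 2
    grad = lambda w: 2.0 * (w - 3.0)
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w, loss(w)

random.seed(0)
best = None
for lr in (0.001, 0.01, 0.1):          # several learning rates
    for _ in range(3):                  # several random starting points
        w0 = random.uniform(-10.0, 10.0)
        w, final_loss = train(w0, lr)
        if best is None or final_loss < best[1]:
            best = (w, final_loss)      # keep the run with the lowest error
```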
Give three examples of MLP and describe in a line what they do
NetTalk: a network that reads aloud texts
ALVINN: a Neural network that drives a car
Falcon: a real-time system for detecting fraudulent credit card transactions