05 Neural Networks Flashcards
The Learning Problem
Improve over task T with respect to performance measure P based on experience E.
Supervised learning
Training data includes the desired outputs (targets): the algorithm is given examples with the correct responses and learns to generalise from them to new inputs.
Unsupervised learning
Training data does not include desired outputs; instead, the algorithm tries to identify similarities between the inputs, so that inputs that have something in common are categorised together.
Reinforcement learning
The algorithm is told when the answer is wrong, but is not told how to correct it. The algorithm must balance exploration of the unknown environment with exploitation of immediate rewards to maximize long-term reward.
Evolutionary learning
Biological organisms adapt to improve their survival rates and chance of having offspring in their environment, using the idea of fitness (how good the current solution is).
The Machine Learning Process
- Data Collection and Preparation
- Feature Selection and Extraction
- Algorithm Choice
- Parameters and Model Selection
- Training
- Evaluation
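As a rough sketch only, the steps above might map onto code like the following, which assumes scikit-learn and uses its bundled iris dataset as stand-in data; the dataset, model choice, and split size are illustrative and not part of the original notes.

```python
# Illustrative sketch of the machine learning process (assumes scikit-learn).
from sklearn.datasets import load_iris              # data collection and preparation
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Perceptron          # algorithm choice
from sklearn.metrics import accuracy_score           # evaluation

X, y = load_iris(return_X_y=True)                    # prepared data
X = X[:, :2]                                         # feature selection (first two features, for illustration)

# Parameters and model selection: hold out part of the data for evaluation.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = Perceptron(max_iter=1000)                    # parameters chosen for illustration
model.fit(X_train, y_train)                          # training

print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))  # evaluation
```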
We are born with about _____ neurons. A neuron may connect to as many as _____ other neurons
We are born with about 100 billion neurons. A neuron may connect to as many as 10,000 other neurons
Hebb’s Rule
- Strength of a synaptic connection is proportional to the correlation of two connected neurons.
- If two neurons consistently fire simultaneously, the synaptic connection between them is strengthened (if they fire at different times, it is weakened).
- “Cells that fire together, wire together.”
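A minimal sketch of a Hebbian weight update, assuming a rate-based formulation in which the weight change is proportional to the product of pre- and post-synaptic activity (the learning rate and activation values below are illustrative):

```python
import numpy as np

def hebbian_update(w, x, y, eta=0.1):
    """Hebb's rule: dw_i = eta * x_i * y.

    Weights grow when the pre-synaptic input x_i and the
    post-synaptic output y are active together, and shrink
    when their signs disagree.
    """
    return w + eta * x * y

w = np.zeros(3)
x = np.array([1.0, -1.0, 1.0])   # pre-synaptic activity (illustrative)
y = 1.0                          # post-synaptic activity (illustrative)
w = hebbian_update(w, x, y)
print(w)   # [ 0.1 -0.1  0.1] -- correlated inputs are strengthened
```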
How realistic is the McCulloch and Pitts neuron model?
Not Very.
– Real neurons are much more complicated.
– Inputs to a real neuron are not necessarily summed linearly.
– Real neurons do not output a single response, but a SPIKE TRAIN (a sequence of pulses).
– Weights w_i can be positive or negative, whereas in biology connections are either excitatory OR inhibitory.
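For reference, a minimal sketch of the McCulloch and Pitts model being criticised above: a linear weighted sum of inputs compared against a threshold (the weights and threshold are chosen for illustration):

```python
import numpy as np

def mcculloch_pitts(x, w, theta):
    """McCulloch-Pitts neuron: fire (output 1) if the weighted
    sum of the inputs reaches the threshold theta, else output 0."""
    h = np.dot(w, x)          # linear sum of inputs -- the simplification criticised above
    return 1 if h >= theta else 0

# Example with two inputs, illustrative weights and threshold.
print(mcculloch_pitts(np.array([1, 1]), np.array([0.6, 0.6]), theta=1.0))  # 1 (fires)
print(mcculloch_pitts(np.array([1, 0]), np.array([0.6, 0.6]), theta=1.0))  # 0 (does not fire)
```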
Neural Networks: Updating the weights
Aim: minimize the error at the output
The learning rate η
η controls the size of the weight changes.
• Why not η = 1?
– Weights change a lot whenever the answer is wrong.
– This makes the network unstable.
• Small η
– Weights need to see the inputs more often before they change significantly.
– Network takes longer to learn.
– But, more stable network.
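As a sketch (not taken from the notes), the standard perceptron update w_i ← w_i + η(t − y)x_i shows where η enters: it scales every weight change, so a large η moves the weights a lot on each error and a small η moves them a little:

```python
import numpy as np

def perceptron_update(w, x, y, t, eta):
    """One weight update: w_i <- w_i + eta * (t - y) * x_i.

    (t - y) is the output error; eta scales how far the weights
    move towards reducing that error on this example.
    """
    return w + eta * (t - y) * x

w = np.array([0.2, -0.4])
x = np.array([1.0, 1.0])
t, y = 1, 0                                     # target was 1, neuron output 0 -> error

print(perceptron_update(w, x, y, t, eta=1.0))   # big jump:   [1.2  0.6]
print(perceptron_update(w, x, y, t, eta=0.1))   # small step: [0.3 -0.3]
```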
Bias Input
• What happens when all the inputs to a neuron are zero?
– It doesn't matter what the weights are;
– The only way we can control whether the neuron fires or not is through the threshold.
• That's why the threshold should be adjustable.
– Changing the threshold requires an extra parameter that we need to write code for.
• We add to each neuron an extra input with a fixed value (e.g. −1); the weight on this bias input then acts as an adjustable threshold and is learned like any other weight.
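A minimal sketch of this bias-input trick, assuming the common convention of a fixed extra input of −1 whose weight takes over the role of the threshold (the weight values are illustrative):

```python
import numpy as np

def neuron_with_bias(x, w):
    """Threshold neuron with a bias input.

    The input vector is extended with a fixed -1; its weight w[-1]
    plays the role of the threshold and is learned like any other
    weight, so no separate threshold parameter is needed.
    """
    x = np.append(x, -1.0)        # fixed bias input
    return 1 if np.dot(w, x) >= 0 else 0

# Even when all "real" inputs are zero, the bias weight decides whether the neuron fires.
w = np.array([0.5, 0.5, -0.2])    # last entry is the bias weight (illustrative)
print(neuron_with_bias(np.array([0.0, 0.0]), w))  # 1: -1 * -0.2 = 0.2 >= 0, so it fires
```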
A single layer perceptron can only learn _____ problems.
A single layer perceptron can only learn linearly separable problems.
The Boolean AND function is linearly separable, whereas the Boolean XOR function (and the parity problem in general) is not.
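As an illustration (weights chosen by hand, not learned), a single threshold unit that computes AND; no choice of weights and threshold for a single unit can compute XOR, which is the point of the flashcard above:

```python
def and_unit(x1, x2):
    """Single-layer threshold unit computing Boolean AND.

    The line 0.5*x1 + 0.5*x2 = 0.75 separates (1,1) from the
    other three inputs, so AND is linearly separable.
    """
    return 1 if 0.5 * x1 + 0.5 * x2 >= 0.75 else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", and_unit(a, b))   # only (1, 1) fires
```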
In contrast to perceptrons, multilayer networks can learn not only multiple _______, but the boundaries may be _____.
In contrast to perceptrons, multilayer networks can learn not only multiple decision boundaries, but the boundaries may be nonlinear.
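A sketch of how an extra layer makes XOR possible: two hand-set hidden threshold units compute OR and AND, and the output unit fires when OR is on but AND is off (all weights are illustrative, not learned):

```python
def step(h):
    return 1 if h >= 0 else 0

def xor_net(x1, x2):
    """Two-layer threshold network computing XOR with fixed weights."""
    h_or  = step(x1 + x2 - 0.5)            # hidden unit 1: OR
    h_and = step(x1 + x2 - 1.5)            # hidden unit 2: AND
    return step(h_or - h_and - 0.5)        # output: OR and not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))   # 0 1 1 0 -- a nonlinear decision boundary
```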
Linear Models can only identify flat decision boundaries like ___
Linear Models can only identify flat decision boundaries like straight lines, planes, hyperplanes, …