Lecture 4 Notes Flashcards
What is the primary limitation of McCulloch and Pitts neurons?
They can compute anything but cannot learn; their weights and thresholds must be set by hand.
How does a McCulloch Pitts neuron determine if it fires?
If the sum of incoming neurons multiplied by their synapse weights exceeds a threshold.
What is the formula to update the state of a McCulloch Pitts neuron?
x_i(t+1) = step( ∑_j w_ij x_j(t) - u_i ), where u_i is neuron i's threshold.
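The update rule above can be sketched in Python (a minimal sketch; the AND-gate weights and threshold are illustrative assumptions):

```python
def step(u):
    # Heaviside step: fire (1) iff the weighted input is non-negative
    return 1 if u >= 0 else 0

def mp_update(weights, inputs, threshold):
    # x_i(t+1) = step( sum_j w_ij * x_j(t) - u_i )
    total = sum(w * x for w, x in zip(weights, inputs))
    return step(total - threshold)

# AND gate: both inputs must fire to reach the threshold of 2
print(mp_update([1, 1], [1, 1], 2))  # 1
print(mp_update([1, 1], [1, 0], 2))  # 0
```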
What is a perceptron?
A simple neural network that can learn, using sensory neurons connected to motor neurons.
What type of network structure does a perceptron have?
Feed-Forward with one synaptic layer and two node layers.
Weights start at 0; inputs and outputs are vectors of 1s and 0s, one entry per node.
What does the learning rule for a perceptron involve when the output is wrong?
- Decrease weights for a wrong 1 (fired but shouldn't have): Wij = Wij - Xj
- Increase weights for a wrong 0 (should have fired but didn't): Wij = Wij + Xj
What is the better learning rule for a perceptron?
Delta W_ij = alpha (c_i - o_i) x_j, i.e. learning rate times (correct value minus actual output) times the input; in matrix form, Delta W = alpha (c - o) x^T (transposed x).
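The delta rule above can be sketched as follows (a minimal sketch; the OR-gate training data, alpha, and epoch count are illustrative assumptions, and the bias is folded in as an always-on extra input):

```python
def predict(w, x):
    # Threshold unit with the bias folded in as an always-on input
    xb = list(x) + [1]
    return 1 if sum(wj * xj for wj, xj in zip(w, xb)) >= 0 else 0

def train_perceptron(samples, n_in, alpha=0.1, epochs=20):
    # Delta rule: w_j += alpha * (correct - output) * x_j; weights start at 0
    w = [0.0] * (n_in + 1)
    for _ in range(epochs):
        for x, c in samples:
            xb = list(x) + [1]
            o = predict(w, x)
            for j in range(len(w)):
                w[j] += alpha * (c - o) * xb[j]
    return w

# OR is linearly separable, so the perceptron converges on it
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = train_perceptron(data, 2)
```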
What type of functions can a perceptron learn?
Only linearly separable Boolean functions.
What role does the bias play in a perceptron?
It always fires, and its weight plays the role of the McCulloch-Pitts threshold, so the threshold can be learned like any other weight.
What is the effect of a high alpha value in perceptron learning?
It results in fast learning but may not settle on a solution.
What is the sigmoid function used for in neural networks?
It replaces the step function with a smooth activation so the output is differentiable (needed for gradient descent): σ(u) = 1 / (1 + e^(-Bu)).
What is the range of the sigmoid function?
0 to 1.
What is the formula for the derivative of the sigmoid function?
σ'(u) = B σ(u)(1 - σ(u)).
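Both formulas can be checked numerically (a minimal sketch; beta defaults to 1):

```python
import math

def sigmoid(u, beta=1.0):
    # sigma(u) = 1 / (1 + e^(-beta*u)); squashes any input into (0, 1)
    return 1.0 / (1.0 + math.exp(-beta * u))

def sigmoid_deriv(u, beta=1.0):
    # d sigma / du = beta * sigma(u) * (1 - sigma(u))
    s = sigmoid(u, beta)
    return beta * s * (1 - s)

print(sigmoid(0))        # 0.5 (midpoint of the 0-to-1 range)
print(sigmoid_deriv(0))  # 0.25 (the steepest point of the curve)
```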
What is meant by gradient descent in neural networks?
Moving in the direction opposite the gradient of the error (downhill) to minimize it.
What does Mean Squared Error measure? What do error functions need
E = 0.5 ∑ (c - o)². Error functions must be zero when the output is correct and grow more positive as errors get larger.
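A minimal sketch of the error function and its two required properties:

```python
def mse(correct, output):
    # E = 0.5 * sum_i (c_i - o_i)^2
    # Zero when every output matches; grows quadratically with larger errors
    return 0.5 * sum((c - o) ** 2 for c, o in zip(correct, output))

print(mse([1, 0], [1, 0]))  # 0.0 (perfect output)
print(mse([1, 0], [0, 1]))  # 1.0 (both outputs wrong)
```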
What is the purpose of the backpropagation algorithm?
To update weights in a multi-layer neural network based on error.
What is Hebb’s Rule in neural networks?
If neuron A repeatedly takes part in firing neuron B (their activity is positively correlated), the synaptic strength from A to B grows.
What does Hebb’s Rule state about negative correlation?
It means synaptic strength decreases.
What is a characteristic of the Hopfield Network?
It simulates associative (content-addressable) memory, similar to a hash table: a partial or noisy pattern acts as the key that retrieves the full stored pattern as the value.
What is competitive learning in neural networks?
Only the weights from input to the winning output neuron are changed.
What is the effect of leaky learning in competitive learning?
Updates loser weights by a smaller degree to prevent dead units.
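Winner-take-all with leaky updates can be sketched as follows (a minimal sketch; the alpha and leak rates are illustrative assumptions):

```python
def competitive_step(W, x, alpha=0.5, leak=0.05):
    # Each row of W holds one output unit's weight vector. The unit whose
    # weights are closest to the input wins and moves toward x at rate alpha.
    # Leaky learning: losers also move toward x, but only at the much smaller
    # leak rate, so no unit is left permanently "dead" (never winning).
    dists = [sum((wi - xi) ** 2 for wi, xi in zip(w, x)) for w in W]
    winner = dists.index(min(dists))
    for i, w in enumerate(W):
        rate = alpha if i == winner else leak
        for j in range(len(w)):
            w[j] += rate * (x[j] - w[j])
    return winner

# Unit 1 already sits on (1, 1), so it wins; unit 0 drifts slightly toward x
W = [[0.0, 0.0], [1.0, 1.0]]
print(competitive_step(W, (1, 1)))  # 1
```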
What does feature mapping involve in competitive learning?
Lateral inhibition with positive weights to close neighbors and negative weights to distant ones.
Fill in the blank: The formula for updating weights in backpropagation is Delta W ij = ______.
-alpha (∂E / ∂W_ij)
True or False: A single-layer perceptron can learn any Boolean function.
False.
What are the steps in the Error Backpropagation Algorithm?
- Shuffle data
- Find h
- Find O
- Find delta W and delta V
- Add change to W and V
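The steps above can be sketched for one hidden layer (a minimal sketch; the network shape, sigmoid activations, and learning rate are illustrative assumptions):

```python
import math, random

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def forward(V, W, x):
    # V: input->hidden weights, W: hidden->output weights (bias as last column)
    xb = x + [1.0]
    h = [sigmoid(sum(vj * xj for vj, xj in zip(v, xb))) for v in V]  # find h
    hb = h + [1.0]
    o = [sigmoid(sum(wj * hj for wj, hj in zip(w, hb))) for w in W]  # find O
    return h, o

def backprop_epoch(V, W, data, alpha=0.5):
    random.shuffle(data)  # shuffle data
    for x, c in data:
        h, o = forward(V, W, x)
        xb, hb = x + [1.0], h + [1.0]
        # output-layer deltas: (c_i - o_i) * o_i * (1 - o_i)
        do = [(ci - oi) * oi * (1 - oi) for ci, oi in zip(c, o)]
        # hidden-layer deltas: push the output error back through W
        dh = [hk * (1 - hk) * sum(do[i] * W[i][k] for i in range(len(W)))
              for k, hk in enumerate(h)]
        # add the changes (delta W, delta V) to W and V
        for i in range(len(W)):
            for j in range(len(hb)):
                W[i][j] += alpha * do[i] * hb[j]
        for k in range(len(V)):
            for j in range(len(xb)):
                V[k][j] += alpha * dh[k] * xb[j]
```

Running `backprop_epoch` repeatedly on a small dataset (e.g. OR) drives the mean squared error down over epochs.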
Formula for assigning weight values in a Hopfield network
W_ij = (1/N) ∑_p x_i^(p) x_j^(p), summed over all stored patterns p (with no self-connections: W_ii = 0).
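The storage rule, plus a simple recall loop, can be sketched as follows (a minimal sketch using ±1 states; the synchronous update and zeroed diagonal are common conventions, illustrative here):

```python
def hopfield_weights(patterns):
    # W_ij = (1/N) * sum over patterns p of x_i^(p) * x_j^(p)
    # Patterns use +1/-1 states; diagonal left at 0 (no self-connections)
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, x, steps=5):
    # Synchronously update every unit a few times so the state settles
    for _ in range(steps):
        x = [1 if sum(wij * xj for wij, xj in zip(row, x)) >= 0 else -1
             for row in W]
    return x

# Store one pattern, then recover it from a one-bit-corrupted key
W = hopfield_weights([[1, -1, 1, -1]])
print(recall(W, [-1, -1, 1, -1]))  # [1, -1, 1, -1]
```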