Lecture 4 Notes Flashcards

1
Q

What is the primary limitation of McCulloch and Pitts neurons?

A

They can compute any Boolean function, but they cannot learn: their weights and thresholds must be set by hand.

2
Q

How does a McCulloch Pitts neuron determine if it fires?

A

It fires if the sum of the incoming neurons' outputs, each multiplied by its synapse weight, exceeds the neuron's threshold.

3
Q

What is the formula to update the state of a McCulloch Pitts neuron?

A

xi(t+1) = step( ∑j wij xj(t) − ui ), where ui is neuron i's threshold.
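This update can be sketched in a few lines of NumPy; the weight matrix, thresholds, and the AND wiring below are illustrative, not from the lecture:

```python
import numpy as np

def step(u):
    """Heaviside step: 1 where u >= 0, else 0."""
    return (u >= 0).astype(int)

def mp_update(x, W, u):
    """Synchronous McCulloch-Pitts update: x_i(t+1) = step(sum_j W_ij x_j(t) - u_i)."""
    return step(W @ x - u)

# Neuron 0 acts as an AND of neurons 1 and 2 (illustrative weights/thresholds)
W = np.array([[0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
x = np.array([0, 1, 1])                            # neurons 1 and 2 are firing
print(mp_update(x, W, np.array([1.5, 0.5, 0.5])))  # -> [1 0 0]
```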

4
Q

What is a perceptron?

A

A simple neural network that can learn, using sensory neurons connected to motor neurons.

5
Q

What type of network structure does a perceptron have?

A

Feed-forward, with one synaptic layer and two node layers (input and output).
  • Weights start at 0
  • Inputs and outputs are 0/1 vectors, one component per node

6
Q

What does the learning rule for a perceptron involve when the output is wrong?

A
  • Wrong 1 (output 1 but should be 0): decrease the weight, Wij = Wij − Xj
  • Wrong 0 (output 0 but should be 1): increase the weight, Wij = Wij + Xj
7
Q

What is the better learning rule for a perceptron?

A

ΔWij = α (ci − oi) xj; as a matrix update, ΔW = α (c − o) xᵀ (an outer product with the transposed input).
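As a sketch, this delta rule can train a perceptron on the AND function (a linearly separable case). The zero initial weights and the bias-as-extra-input encoding follow the notes; the data layout and learning rate are illustrative:

```python
import numpy as np

def train_perceptron(X, c, alpha=1.0, epochs=10):
    """Delta rule: W_j += alpha * (c - o) * x_j.
    X: inputs with a trailing bias column of 1s; c: target outputs (0/1)."""
    w = np.zeros(X.shape[1])               # weights start at 0, as in the notes
    for _ in range(epochs):
        for x, target in zip(X, c):
            o = 1 if w @ x >= 0 else 0     # step activation
            w += alpha * (target - o) * x  # changes weights only when wrong
    return w

# Learn AND; the last input component is the always-on bias
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
c = np.array([0, 0, 0, 1])
w = train_perceptron(X, c)
print([1 if w @ x >= 0 else 0 for x in X])  # -> [0, 0, 0, 1]
```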

8
Q

What type of functions can a perceptron learn?

A

Only linearly separable Boolean functions.

9
Q

What role does the bias play in a perceptron?

A

It is an input that always fires (fixed at 1); its weight takes over the role of the threshold in McCulloch-Pitts, so the threshold can be learned like any other weight.

10
Q

What is the effect of a high alpha value in perceptron learning?

A

It results in fast learning but may not settle on a solution.

11
Q

What is the sigmoid function used for in neural networks?

A

It replaces the step function so that the activation is differentiable (needed for gradient descent): σ(u) = 1 / (1 + e^(−βu)).

12
Q

What is the range of the sigmoid function?

A

0 to 1.

13
Q

What is the formula for the derivative of the sigmoid function?

A

dσ/du = β σ(u)(1 − σ(u)).
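Both the sigmoid and its derivative are easy to check numerically; a minimal sketch (β defaults to 1):

```python
import math

def sigmoid(u, beta=1.0):
    """sigma(u) = 1 / (1 + e^(-beta*u)); output lies in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-beta * u))

def sigmoid_deriv(u, beta=1.0):
    """d sigma / du = beta * sigma(u) * (1 - sigma(u))."""
    s = sigmoid(u, beta)
    return beta * s * (1.0 - s)

# Central-difference check of the derivative at u = 0.7
h = 1e-6
numeric = (sigmoid(0.7 + h) - sigmoid(0.7 - h)) / (2 * h)
print(abs(numeric - sigmoid_deriv(0.7)) < 1e-8)  # -> True
```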

14
Q

What is meant by gradient descent in neural networks?

A

Moving in the direction opposite to the gradient of the error (i.e. downhill) in order to minimize it.

15
Q

What is the Mean Squared Error, and what properties does an error function need?

A

E = ½ ∑i (ci − oi)². An error function must be zero for a correct output and grow more positive as errors get larger.
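A small sketch of the MSE, showing both required properties on made-up vectors:

```python
def mse(correct, output):
    """E = 0.5 * sum((c_i - o_i)^2): zero when correct, grows with error."""
    return 0.5 * sum((c - o) ** 2 for c, o in zip(correct, output))

print(mse([1, 0, 1], [1, 0, 1]))      # -> 0.0  (zero for a correct output)
print(mse([1, 0, 1], [0.5, 0.5, 1]))  # -> 0.25 (larger errors, larger E)
```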

16
Q

What is the purpose of the backpropagation algorithm?

A

To update weights in a multi-layer neural network based on error.

17
Q

What is Hebb’s Rule in neural networks?

A

If neuron A's firing repeatedly contributes to neuron B's firing (their activity is positively correlated), the synapse from A to B strengthens.

18
Q

What does Hebb’s Rule state about negative correlation?

A

If the firing of A and B is negatively correlated, the synaptic strength decreases.

19
Q

What is a characteristic of the Hopfield Network?

A

It simulates associative memory and is similar to a hash table. (key-value pairs)

20
Q

What is competitive learning in neural networks?

A

Only the weights from input to the winning output neuron are changed.

21
Q

What is the effect of leaky learning in competitive learning?

A

Updates loser weights by a smaller degree to prevent dead units.
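A minimal NumPy sketch of one leaky competitive-learning step; the learning rates alpha and leak are illustrative:

```python
import numpy as np

def competitive_step(W, x, alpha=0.1, leak=0.01):
    """One winner-take-all update: move the winning output unit's weights
    toward the input x at rate alpha, and every loser's at the much smaller
    leak rate, so that no output unit stays dead forever."""
    winner = int(np.argmax(W @ x))             # output with strongest response
    for i in range(W.shape[0]):
        rate = alpha if i == winner else leak  # leaky learning
        W[i] += rate * (x - W[i])
    return winner

W = np.array([[1.0, 0.0], [0.0, 1.0]])
print(competitive_step(W, np.array([0.9, 0.2])))  # -> 0 (unit 0 wins)
```

With leak = 0 this reduces to plain competitive learning, where only the winner's weights change.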

22
Q

What does feature mapping involve in competitive learning?

A

Lateral connections: positive (excitatory) weights to close neighbors and negative (inhibitory) weights to distant ones, so nearby outputs come to respond to similar inputs.

23
Q

Fill in the blank: The formula for updating weights in backpropagation is Delta W ij = ______.

A

ΔWij = −α (∂E / ∂Wij)

24
Q

True or False: A single-layer perceptron can learn any Boolean function.

A

False. It can only learn linearly separable Boolean functions; XOR, for example, cannot be learned.

25
Q

What are the steps in the Error Backpropagation Algorithm?

A
  • Shuffle data
  • Find h
  • Find O
  • Find delta W and delta V
  • Add change to W and V
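The steps above can be sketched for a network with one hidden layer, assuming sigmoid activations and the MSE error; the names V (input to hidden) and W (hidden to output) and all shapes and rates are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def backprop_epoch(X, C, V, W, alpha=0.5):
    """One epoch over the dataset (X: inputs, C: correct outputs)."""
    for n in rng.permutation(len(X)):            # shuffle data
        x, c = X[n], C[n]
        h = sigmoid(V @ x)                       # find h (hidden activations)
        o = sigmoid(W @ h)                       # find O (network outputs)
        delta_o = (c - o) * o * (1 - o)          # output error term
        delta_h = (W.T @ delta_o) * h * (1 - h)  # error backpropagated to hidden
        W += alpha * np.outer(delta_o, h)        # find delta W, add change to W
        V += alpha * np.outer(delta_h, x)        # find delta V, add change to V
    return V, W
```

The delta terms use the sigmoid derivative σ(u)(1 − σ(u)) from card 13; repeating this epoch on shuffled data is stochastic gradient descent on the MSE.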
26
Q

Formula for assigning weight values in a Hopfield network

A

Wij = (1/N) ∑p Xi^(p) Xj^(p), summing over all stored patterns p = 1…P.
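A sketch of this Hebbian storage rule plus recall, assuming ±1-valued patterns and a zeroed diagonal (no self-connections), both standard choices for Hopfield networks:

```python
import numpy as np

def hopfield_weights(patterns):
    """W_ij = (1/N) * sum_p x_i^(p) * x_j^(p) for +-1 patterns."""
    P = np.array(patterns, dtype=float)  # shape (num_patterns, N)
    N = P.shape[1]
    W = (P.T @ P) / N
    np.fill_diagonal(W, 0.0)             # no self-connections
    return W

def recall(W, x, steps=10):
    """Iterate x <- sign(W x); the state falls into a stored attractor."""
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)
    return x

stored = [1, -1, 1, -1, 1, -1]
W = hopfield_weights([stored])
noisy = [1, 1, 1, -1, 1, -1]                     # one bit flipped
print(recall(W, np.array(noisy)).tolist())       # -> [1, -1, 1, -1, 1, -1]
```

This is associative memory in action: the noisy key retrieves the stored value, much like the hash-table analogy in card 19.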