5 NDC Feedforward Rate Networks and Backpropagation Flashcards

1
Q

How does an INDIVIDUAL rate neuron model perform simple computations?
What does this computation depend on?

A

by transforming its input into a desired output
this depends on the weights and the threshold

2
Q

What is the purpose of a linear decision boundary in rate neuron models?

A

to categorise inputs into TWO groups: one group elicits a strong response, the other a weak one

3
Q

What is the shape of the response curve in the rate neuron model?

A

sigmoidal

4
Q

What type of neuron is in the Leaky Integrate and Fire model?
Are the dynamics of this model linear, quadratic or cubic?

A

- a single point neuron
- linear dynamics

5
Q

What does the rate in the rate neuron model represent?

A

the neuron's graded activity, i.e. its firing rate

6
Q

What does tau represent in the sigmoidal response curve of the rate neuron model?
As tau changes, what happens to this curve?

A

- a parameter that characterises the time scale
- smaller tau → steeper sigmoidal curve, and vice versa

7
Q

What does u represent in the sigmoidal response curve of the rate neuron model?

A

- the strength of the input the neuron receives

8
Q

What type of values go in a truth table?

A

binary: 1 and 0

9
Q

In a linear rate neuron model with two inputs, what is the equation?

And also with a threshold?

A

y = w1·x1 + w2·x2

with a threshold, the neuron responds when the weighted sum exceeds theta: θ < w1·x1 + w2·x2
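The two-input thresholded neuron can be sketched in Python (a minimal illustration; the weight and threshold values are example choices, not taken from the cards):

```python
def threshold_neuron(x1, x2, w1, w2, theta):
    """Two-input rate neuron: respond (1) when the weighted sum exceeds theta."""
    return 1 if w1 * x1 + w2 * x2 > theta else 0

# With w1 = w2 = 1, theta = 1.5 implements AND and theta = 0.5 implements OR
and_outputs = [threshold_neuron(x1, x2, 1, 1, 1.5) for x1 in (0, 1) for x2 in (0, 1)]
or_outputs = [threshold_neuron(x1, x2, 1, 1, 0.5) for x1 in (0, 1) for x2 in (0, 1)]
```

Lowering θ from 1.5 to 0.5 turns AND into OR, which is why the threshold decreases when the operation changes from AND to OR.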

10
Q

In a rate neuron model as you change operation from AND to OR, what happens to the threshold (theta)?

A

the threshold decreases

11
Q

What kind of equation is the decision boundary of the rate neuron model?
What is the equation? What is m?

A

- linear (the equation of a straight line)
- x2 = m·x1 + c
- m is the gradient; rearranging w1·x1 + w2·x2 = θ gives m = −w1/w2 and c = θ/w2
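The rearrangement can be checked in Python (a sketch assuming w2 ≠ 0; the numbers are example values):

```python
def boundary_line(w1, w2, theta):
    """Rearrange the boundary w1*x1 + w2*x2 = theta into x2 = m*x1 + c."""
    m = -w1 / w2    # gradient
    c = theta / w2  # intercept
    return m, c

# Any point on the rearranged line satisfies the original boundary equation
m, c = boundary_line(1.0, 2.0, 4.0)
x1 = 3.0
x2 = m * x1 + c
```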

12
Q

When theta equals zero, where does the (linear) decision boundary pass through?
Why?

A

theta = 0: the boundary goes through the origin of the axes

because theta determines the y-intercept c (c = θ/w2), so θ = 0 gives c = 0

13
Q

What are the three different activation functions? and their curve shapes?

A

linear: a straight line (an "I" shape)
threshold: a square-cornered "S" (step shape)
logistic sigmoid: a smooth "S"
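The three activation functions can be written out directly (a sketch; exact parameterisations vary between textbooks):

```python
import math

def linear(u):
    return u  # straight line: an "I" shape

def threshold(u, theta=0.0):
    return 1.0 if u > theta else 0.0  # step: a square-cornered "S"

def logistic_sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))  # smooth "S" shape
```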

14
Q

When is the response of the rate neuron model greatest?

A

when the pattern of inputs matches the pattern of weights

15
Q

In the logistic activation function eqn., what is w3 equal to?

A

minus theta (−θ), so the threshold enters the weighted sum as a bias weight

16
Q

What is supervised learning?

A

learning from examples, i.e. from a "teacher" that provides the desired output for each input

17
Q

What is the point of using stochastic gradient descent in the rate neuron model?

A

to iteratively adjust the weights in the direction that reduces the error

18
Q

What is the equation for gradient descent?
What does it mean?

A

๐ž๐ซ๐ซ๐จ๐ซ[๐จ๐ฎ๐ญ๐ฉ๐ฎ๐ญ]=(targetโˆ’๐จ๐ฎ๐ญ๐ฉ๐ฎ๐ญ)^2

to get the error of the output, find the difference between the desired/target output and the output we have now (and the square it)

19
Q

What is the output of a rate neuron model? y = ?

A

the weighted sum of its inputs:
y = ∑ wi·xi

20
Q

What is the learning rate in the gradient descent recipe? What does it do?
Why can't it be too big or too small?

A

alpha: the magnitude of the adjustment of the weights / the size of the steps

too big: the steps can overshoot and miss the optimal low-error point
too small: the model takes a very long time to reach the ideal weights

21
Q

Chain Rule:
If we have a function A[B] of a function B[C],
finding the derivative of A[B[C]] with respect to C is given by what?

A

dA[B[C]]/dC = dA/dB × dB/dC
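A quick numerical check of the rule, using the hypothetical example functions A[B] = B² and B[C] = 3C:

```python
def B(c):
    return 3.0 * c  # inner function B[C]

def A(b):
    return b ** 2   # outer function A[B]

def chain_derivative(c):
    dA_dB = 2.0 * B(c)  # dA/dB, evaluated at B[C]
    dB_dC = 3.0         # dB/dC
    return dA_dB * dB_dC  # dA[B[C]]/dC = dA/dB × dB/dC

# A[B[C]] = (3C)^2 = 9C^2, so the derivative should be 18C
```

The result can be compared against a finite-difference estimate of the same derivative.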

22
Q

In the gradient descent graph, what are the axes?

A

y-axis: error[wi], x-axis: wi

23
Q

As the learning rate (alpha) decreases what happens to the speed of the descent ?

A

gradient descent is slower (smaller steps)

24
Q

What is the chain rule of the gradient descent?

A

d error[output[wi]] / dwi = (d error/d output) × (d output/dwi)

25
Q

What is the gradient descent recipe?

A
  • Identify the variables you need to change; w_i for a neuron.
  • Define a cost function that you need to minimise to get the desired output:
    The squared error, the squared difference between the neuron's actual output and desired output.
  • Use the chain rule to find the derivative of the cost function with respect to the variable being updated; Δw_i
    The derivative tells you how much change in the error will result from a change in w_i.
    Moving in the direction of the derivative will increase the error; move in the opposite direction.
    Multiply the derivative by a learning rate to adjust the step size of the learning rule.
    w_i ← w_i − α·Δw_i
26
Q

What is the truth table like for the XOR (exclusive or) computation?

A

the neuron responds (outputs 1) when the light or the bell is on, but not when both are on at the same time

27
Q

What are hidden neurons?

A

neurons between the inputs and the output neuron in a feedforward network; their activity is not directly observed

28
Q

What sort of network do you use to solve the XOR problem?
What computation are the left and right hidden neurons doing?

A

- a feedforward neural network with hidden neurons
- left hidden neuron: AND
- right hidden neuron: OR
(the output neuron responds when the OR neuron is active but the AND neuron is not)
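The XOR network can be sketched with threshold units (a minimal illustration; the weights and thresholds are example values that realise the AND and OR hidden neurons):

```python
def step(u, theta):
    return 1 if u > theta else 0

def xor_net(x1, x2):
    """Feedforward network: two hidden neurons (AND, OR) feed one output neuron."""
    h_and = step(x1 + x2, 1.5)  # left hidden neuron: AND
    h_or = step(x1 + x2, 0.5)   # right hidden neuron: OR
    # Output responds when OR is active but AND is not
    return step(h_or - h_and, 0.5)
```

No single linear decision boundary separates the XOR truth table, which is why the hidden layer is needed.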