Introduction to neural networks Flashcards

1
Q

order the 8 levels of organisation from large to small

A

person
brain
system
map
network
neuron
synapse
molecule

2
Q

briefly describe the evolution of computational neuroscience

A

the neuron doctrine emerged in the late 19th century, when Golgi and Cajal were the first to visualise neurons
neurons are made up of a soma, dendrites and an axon
in the early to mid 20th century neurons began to be viewed as electrical circuits, with the Hodgkin-Huxley model describing how action potentials (APs) are generated and maintained
synapses allow neurons to communicate via the release of excitatory (glutamate) or inhibitory (GABA) neurotransmitters
spiking: short electrical transients called APs are what allow synaptic communication, with APs propagating down the neuron to its synapses
computational neuroscience lets higher levels of organisation, such as networks, be studied, while at lower levels it provides computational models of single neurons, e.g. the Hodgkin-Huxley and leaky integrate-and-fire (LIF) models

3
Q

describe the perceptron

A

a simple model that reduces a neuron to a single point in space with inputs (dendrites) and outputs (axon/synapses), where synaptic weight modulates how much each input affects the output (the higher the synaptic weight, the more effect that input has on the output)

4
Q

why is computational modelling of neuroscience important?

A

we need to develop theories of neuronal computation to explain experimental observations and identify new neuroscience questions

5
Q

what are formal models?

A

explicitly stated theories where all assumptions must be declared

6
Q

what are the 4 ideas of what makes a good model?

A

- it copies components of the original system to the finest details
- it produces behaviour indistinguishable from the brain
- it makes testable predictions
- it discovers underlying mechanisms

7
Q

discuss models that copy components of the original system to the finest details

A

efforts to compute/simulate an entire human brain are underway, and with AI this goal has never been closer
however, if we understood the brain well enough to build a complete computational model, the model itself would be redundant
there are also questions as to whether a computational model could completely simulate a human brain, e.g. taking both nature and nurture into consideration

8
Q

discuss models that have behaviour indistinguishable from the human brain

A

artificial neural networks have shown human-level behaviour in complex tasks, e.g. beating human champions at videogames
however, in examples like this the AI is trained by playing against itself at a greatly accelerated rate, doing the equivalent of 180 years of training to reach human level, so it is not really a model indistinguishable from human learning

9
Q

discuss models that make testable predictions and discover underlying mechanisms

A

these are good models because they present a falsifiable theory by generating testable predictions, and the models can be improved to make the predictions more accurate
these models rely on abstraction, where a simple model stands in for a complex mechanism: simple models are better models because they can generate great insights into neuronal networks

10
Q

what is the weakness (or strength) of all models?

A

they are abstractions: to an extent, they are all wrong

11
Q

Give the linear equation for output of perceptron

A

output = weight × input
y = w1x1 + w2x2 (in general, y = ∑i WiXi)
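as a concrete illustration (a sketch, not from the deck; the function name and example weights are made up), the weighted sum in Python:

```python
# Minimal sketch of a linear perceptron: the output is the
# weighted sum of the inputs.
def linear_output(weights, inputs):
    return sum(w * x for w, x in zip(weights, inputs))

# e.g. w1 = 0.5, w2 = 1.5 with both inputs active:
print(linear_output([0.5, 1.5], [1, 1]))  # 0.5*1 + 1.5*1 = 2.0
```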

12
Q

draw an image of the linear neuron model perceptron

A

see book

13
Q

when would a perceptron have the highest possible output?

A

when the input pattern matches the weight pattern
(w = x)

14
Q

what does negative synaptic weight represent

A

inhibitory PSP

15
Q

what does positive synaptic weight represent?

A

excitatory PSP

16
Q

what is non-linear activation model of a neuron/perceptron called?

A

threshold neuron

17
Q

what’s the difference between a threshold and a linear neuron model?

A

instead of the output being the weighted sum of the inputs directly, there is a threshold, so the output is all or nothing: either y = 0 or y = 1, depending on the relationship between the linear weighted sum and the threshold (see the sketch below)
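a minimal sketch of this in Python (the function name and example values are illustrative; active is taken to mean the weighted sum exceeds θ, as in the later card on activity):

```python
# Sketch of a threshold neuron: the output is all-or-nothing,
# 1 only when the weighted sum of inputs exceeds the threshold theta.
def threshold_output(weights, inputs, theta):
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum > theta else 0

print(threshold_output([0.5, 1.5], [1, 1], theta=1.0))  # sum 2.0 > 1.0 -> 1
print(threshold_output([0.5, 1.5], [1, 0], theta=1.0))  # sum 0.5 <= 1.0 -> 0
```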

18
Q

draw a threshold neuron model

A

look in book

19
Q

outline the metaphor of the neuron as a plumbing system

A

replace the neuron with a reservoir of water
in order to release water (fire an AP), the water level has to be above the tap; the tap position = threshold potential
the inputs are reservoirs of their own, with the amount of water in each reservoir corresponding to the strength of that input, and the height of each tap corresponding to its weight: a tap near the bottom of the reservoir means an increased weight (so that input has a bigger effect on the output)

20
Q

how is weight measured biologically?

A

size of PSP

21
Q

what does the water reservoir metaphor not allow for?

A

negative weights

22
Q

outline the AND operation with pavlovian conditioning

A

a reward is given only if both stimuli are high/active, e.g. both a loud sound and a bright light

23
Q

draw the truth table for AND operation with light and tone

A

Light Tone Reward
0 0 0
0 1 0
1 0 0
1 1 1

24
Q

how would the AND operation be presented in a model neuron

A

2 inputs (x1, x2), each of which can be either 1 or 0. The neuron only reaches threshold when both inputs are 1, so only then does it give output y = 1 (see the sketch below)
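a sketch of this in Python, reusing the threshold-neuron function from above (the weights and the threshold value 1.5 are illustrative choices, not from the deck):

```python
# AND with a single threshold neuron: weights w1 = w2 = 1 and a
# threshold between 1 and 2 (here theta = 1.5).
def threshold_output(weights, inputs, theta):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > theta else 0

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, threshold_output([1, 1], [x1, x2], theta=1.5))
# only the input (1, 1) clears the threshold, reproducing the AND truth table
```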

25
Q

outline the OR operation with pavlovian conditioning

A

the animal gets a reward when either one of the 2 stimuli is present or both are, e.g. a bright light OR a loud sound, or both bright and loud

26
Q

changing from the AND to the OR operation represents what change to the neuron?

A

either the threshold moving down or the input weights moving up (see the sketch after the OR truth table below)

27
Q

draw the truth table for OR operation with light and tone

A

Light Tone Reward
0 0 0
0 1 1
1 0 1
1 1 1
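a sketch verifying the previous card's point: the same neuron as in the AND sketch, with only the threshold lowered (the value 0.5 is illustrative):

```python
# Same threshold neuron as the AND sketch; only theta changes.
# With theta = 0.5, any single active input clears the threshold.
def threshold_output(weights, inputs, theta):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > theta else 0

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, threshold_output([1, 1], [x1, x2], theta=0.5))
# the output matches the OR truth table above
```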

28
Q

when active, is w1x1+w2x2 larger or smaller than threshold?

A

larger
(and vice versa for inactive)

29
Q

rearrange in 2 steps w1x1+w2x2>θ to make x2 the subject

A

w2x2 > θ - w1x1
x2 > -w1/w2 · x1 + θ/w2
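the same two steps written out in LaTeX (a restatement; note the inequality only keeps its direction if w2 > 0, an assumption the deck makes implicitly):

```latex
w_1 x_1 + w_2 x_2 > \theta
\;\Rightarrow\; w_2 x_2 > \theta - w_1 x_1
\;\Rightarrow\; x_2 > -\frac{w_1}{w_2}\, x_1 + \frac{\theta}{w_2}
\qquad (\text{assuming } w_2 > 0)
```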

30
Q

what are the unknowns and knowns in this equation: x2> -w1/w2 x1+ θ/w2

A

input (x) unknown
weight (w) known

31
Q

substitute terms in x2 > -w1/w2 x1 + θ/w2 to make it a line equation

A

x2=m x1+c

32
Q

what do m and c stand for: x2=m x1+c

A

m = slope = -w1/w2
c = intercept = θ/w2

33
Q

if you change threshold in line equation, what does it change about the line?

A

the intercept point

34
Q

what is the line formed from x2 = m x1 + c called?

A

decision boundary
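a small sketch computing the boundary's slope and intercept from the neuron's parameters (w1, w2 and θ here are illustrative AND-gate values):

```python
# Decision boundary x2 = m*x1 + c of a two-input threshold neuron.
w1, w2, theta = 1.0, 1.0, 1.5

m = -w1 / w2      # slope
c = theta / w2    # intercept

# the neuron is active for points above the line:
x1, x2 = 1, 1
print(x2 > m * x1 + c)  # True: (1, 1) lies above the AND boundary
```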

35
Q

draw the tick cross tables for XOR problem

A

see book

36
Q

what is supervised learning?

A

learning from examples or from a teacher, where you are told whether you are right or wrong, e.g. teaching a neuron which levels of light and tone result in reward by teaching it the weights of its inputs

37
Q

what is stochastic gradient descent

A

an optimisation algorithm in machine learning whereby the system finds the parameters which minimise the error (the difference between the output we want and the output we get), updating the parameters after each example

38
Q

what is the delta rule

A

a rule that uses gradient descent to minimise error and train neurons- finding the weights which minimise error

39
Q

error[output]=

A

(target-output)^2

40
Q

explain error[output]=(target-output)^2

A

error as a function of the output equals the squared difference between target and output
it is squared because error cannot be negative: the lowest possible error is 0

41
Q

what does it mean if error=0

A

target = output, which is the goal

42
Q

output[weight]=

A

∑i WiXi

43
Q

explain output[weight] = ∑i WiXi

A

the output, as a function of the weights, is equal to the sum over all inputs of weight × input

44
Q

error[outputs[weights]]=

A

(target - ∑i WiXi)^2

45
Q

explain error[output[weights]] = (target - ∑i WiXi)^2

A

the error, as a function of the output, which is in turn a function of the weights, is equal to the squared difference between the target and the weighted sum of the inputs

46
Q

how do we find the minimum error?

A

we use gradient descent: plotting error against weight, we find the weight at which the error is minimal by taking the derivative of the error with respect to that weight

47
Q

when represented on a graph, the derivative is highest when the gradient is

A

steepest

48
Q

what angle does the slope make on the graph when the derivative is positive?

A

0-90 degrees

49
Q

what angle does the slope make on the graph when the derivative is negative?

A

90-180 degrees

50
Q

if the derivative is negative, how does the weight need to be altered?

A

increased

51
Q

if the derivative is positive, how does the weight need to be altered?

A

decreased

52
Q

write a learning equation where wi moves in the opposite direction to the derivative

A

wi ← wi - alpha · d error[wi]/d wi

53
Q

what is alpha in this equation: wi ← wi - alpha · d error[wi]/d wi

A

the learning rate: how big a step in weight is taken on each update to minimise error

54
Q

what is the advantage and disadvantage of a higher learning rate?

A

learning is faster (you approach the minimum in fewer trials), however you may overshoot and miss the minimum error

55
Q

what’s the advantage and disadvantage of a lower learning rate?

A

you are more likely to settle at the minimum error, however it will take a larger number of trials (will take longer)

56
Q

how could you programme in variation in learning rate to speed up learning

A

you could choose to make the learning rate a function of the gradient descent curve, so when the curve is steep (large derivative) you can speed up the learning rate

57
Q

turn d error[output[weights]]/d wi into an ABC equation

A

dA[B[C]]/dC

58
Q

dA[B[C]]/dC=

A

dA/dB x dB/dC

59
Q

from dA[B[C]]/dC=dA/dB x dB/dC you can derive…

A

d error[output[weights]]/d wi = d error/d output × d output/d wi, using error[output] = (target - output)^2 and output[weights] = ∑i WiXi

60
Q

what’s the final result of the chain rule?

A

d error[output[weights]]/d wi = -2(target - output) · xi
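the full chain-rule computation behind this result, written out in LaTeX (using error = (target - output)^2 and output = ∑i WiXi from the cards above):

```latex
\frac{d\,\mathrm{error}}{d\,\mathrm{output}} = -2(\mathrm{target} - \mathrm{output}),
\qquad
\frac{d\,\mathrm{output}}{d w_i} = x_i,
```

```latex
\frac{d\,\mathrm{error}}{d w_i}
  = \frac{d\,\mathrm{error}}{d\,\mathrm{output}} \cdot
    \frac{d\,\mathrm{output}}{d w_i}
  = -2(\mathrm{target} - \mathrm{output}) \cdot x_i
```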

61
Q

what’s the equation for moving in the opposite direction?

A

wi ← wi + alpha (target - output) xi
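a sketch of this update rule in a training loop (the task, alpha and the trial count are illustrative; since this linear neuron has no bias, the weights settle at the least-squares solution, about 1/3 each, rather than a perfect AND):

```python
# Delta-rule learning on the AND task with a linear neuron.
patterns = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
alpha = 0.1

for _ in range(200):
    for (x1, x2), target in patterns:
        output = w[0] * x1 + w[1] * x2   # linear weighted sum
        surprise = target - output       # target minus prediction
        w[0] += alpha * surprise * x1    # wi <- wi + alpha * surprise * xi
        w[1] += alpha * surprise * x2

print(w)  # both weights approach ~0.33
```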

62
Q

what’s the word to describe target-output?

A

surprise

63
Q

wi ← wi + alpha (target - output) xi
this equation shows that the change in weight for each stimulus is given by multiplying:

A

the learning rate (alpha)
the difference between outcome and predicted outcome (surprise)
strength of stimulus (Xi)

64
Q

when is update in weights the highest?

A

when the surprise is greatest

65
Q

can a single neuron/perceptron do the AND operation

A

yes

66
Q

can a single neuron/perceptron do the OR operation?

A

yes

67
Q

can a single neuron/perceptron do the XOR operation?

A

no

68
Q

show how 3 perceptrons can do the XOR operation

A

view notes
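since the notes are not reproduced here, a sketch of one standard construction (the weights and thresholds are illustrative choices): XOR(x1, x2) = (x1 OR x2) AND NOT (x1 AND x2), using two hidden perceptrons and one output perceptron:

```python
# XOR from 3 threshold perceptrons: an OR unit, an AND unit, and an
# output unit that fires when OR is on but AND is off.
def threshold_output(weights, inputs, theta):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > theta else 0

def xor(x1, x2):
    h_or = threshold_output([1, 1], [x1, x2], theta=0.5)    # OR unit
    h_and = threshold_output([1, 1], [x1, x2], theta=1.5)   # AND unit
    # negative weight -2 lets the AND unit inhibit the output unit
    return threshold_output([1, -2], [h_or, h_and], theta=0.5)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, xor(x1, x2))  # 0, 1, 1, 0: the XOR truth table
```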

69
Q

do real neural networks show linearities?

A

no

70
Q

what is different in the firing shape between linear and non-linear neurons

A

hard-threshold neurons have an all-or-nothing fire/no-fire step shape, whereas non-linear neurons fit a sigmoidal shape

71
Q

draw and describe the difference between logistic sigmoid activation function and step activation function

A

look in book for the perceptron diagram, equation and description
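the diagram itself is not reproduced here; as a stand-in, a small sketch of the two functions being contrasted (the sample z values are arbitrary):

```python
import math

# A hard step (threshold) activation versus the smooth logistic sigmoid.
def step(z, theta=0.0):
    return 1 if z > theta else 0

def logistic(z):
    return 1 / (1 + math.exp(-z))  # rises smoothly from 0 to 1

for z in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(z, step(z), round(logistic(z), 3))
```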

72
Q

what are the 5 key types of neural networks

A

point-neuron (single perceptron)
feed-forward network
multilayer network
recurrent network
hybrid network
