Introduction to neural networks Flashcards
order the 8 levels of organisation from large to small
person
brain
system
map
network
neuron
synapse
molecule
briefly describe the evolution of computational neuroscience
the neuron doctrine emerged in the late 19th century, when Golgi and Cajal were the first to visualise neurons
neurons are made up of an axon, a soma and dendrites
in the early to mid 20th century neurons began to be viewed as electrical circuits, with the Hodgkin-Huxley model describing how APs are generated and maintained
synapses allow neurons to communicate via the release of excitatory (glutamate) or inhibitory (GABA) NTs
spiking- short electrical transients called APs are what allow synaptic communication, with action potentials being propagated down axons to the synapses
computational neuroscience allows higher levels of neuroscience to be studied, such as networks, and at lower levels allows computational models of single neurons- the Hodgkin-Huxley and leaky integrate-and-fire (LIF) models
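The LIF model mentioned above can be sketched in a few lines. This is a minimal sketch assuming simple Euler integration; all parameter values are illustrative assumptions, not from the source:

```python
# Minimal leaky integrate-and-fire (LIF) neuron, integrated with Euler steps.
# All parameter values are illustrative assumptions.
def lif(I=2.0, v_rest=0.0, v_th=1.0, v_reset=0.0, tau=10.0, dt=0.1, steps=1000):
    v, spikes = v_rest, 0
    for _ in range(steps):
        v += dt * (-(v - v_rest) + I) / tau  # leak toward rest plus input drive
        if v > v_th:                         # threshold crossing -> spike (AP)
            spikes += 1
            v = v_reset                      # reset after the spike
    return spikes
```

With a strong enough input the membrane potential repeatedly crosses threshold and the neuron spikes; with a weak input it settles below threshold and never fires.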
describe the perceptron
a simple model of a neuron reduced to a single point in space, with inputs (dendrites) and outputs (axon/synapses), where synaptic weight modulates how inputs affect outputs (the higher the synaptic weight, the more effect that input has on the output)
why is computational modelling of neuroscience important?
we need to develop theories of neuronal computation to explain experimental observations and identify new neuroscience questions
what are formal models?
explicitly stated theories where all assumptions must be declared
what are the 4 ideas of what makes a good model?
- it copies components of the original system to the finest details
- it produces behaviour indistinguishable from the brain
- it makes testable predictions
- it discovers underlying mechanisms
discuss models that copy components of the original system to the finest details
projects are currently underway to compute/simulate the entire human brain, and with AI this goal has never been closer
however, if we understood the brain well enough to make a complete computational model, the model would be redundant
there are also doubts as to whether a computational model could completely simulate a human brain, e.g. taking both nature and nurture into consideration
discuss models that have behaviour indistinguishable from the human brain
artificial neural networks have shown human-level behaviour in complex tasks e.g. beating human champions at videogames
however, in an example like this the AI is trained by playing against itself at an accelerated rate, doing the equivalent of 180 years of training to reach human level, so it isn't really a model indistinguishable from human learning
discuss models that make testable predictions and discover underlying mechanisms
these are good models as they present a theory that is falsifiable by generating testable predictions, and the models can be improved to make the predictions more accurate
these models rely on abstraction, where a simple model stands in for a complex mechanism- simple models are often better models as they can generate great insight into neuronal networks
what is the weakness (or strength) of all models?
they are abstractions- to an extent they are all wrong
Give the linear equation for output of perceptron
output = weight × input
y = w1x1 + w2x2 (y = ∑i wixi)
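The weighted-sum equation above can be sketched directly in code; the function name and input/weight values are hypothetical:

```python
# Linear neuron: output y is the weighted sum of inputs, y = sum_i w_i * x_i.
def linear_neuron(inputs, weights):
    return sum(w * x for w, x in zip(weights, inputs))

# A negative weight acts like an inhibitory input (values illustrative).
y = linear_neuron([1.0, 0.5], [0.8, -0.4])
```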
draw an image of the linear neuron model perceptron
see book
when would a perceptron have the highest possible output?
when the input pattern matches the weight pattern
(X = W)
what does negative synaptic weight represent
inhibitory PSP
what does positive synaptic weight represent?
excitatory PSP
what is non-linear activation model of a neuron/perceptron called?
threshold neuron
what’s the difference between a threshold and a linear neuron model?
instead of the output being the weighted sum of the inputs, we have a threshold, so the output is all or nothing- either Y=0 or Y=1, dependent on the relationship between the linear weighted sum and the threshold
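The threshold (all-or-nothing) version can be sketched by passing the weighted sum through a step function; theta stands for the threshold θ, and all names are assumptions:

```python
# Threshold neuron: output is 1 only if the weighted sum exceeds theta.
def threshold_neuron(inputs, weights, theta):
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s > theta else 0
```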
draw a threshold neuron model
look in book
outline the metaphor of the neuron as a plumbing system
replace neuron with reservoir of water
in order to release water (fire AP) out, the water level has to be above the tap. The tap position = threshold potential.
- the inputs are reservoirs of their own, with the amount of water in each reservoir corresponding to the amount of input, and the height of each tap corresponding to its weight- if a tap is at the bottom of its reservoir, the weight is increased (so that input has a bigger effect on the output)
how is weight measured biologically?
size of PSP
what does the water reservoir metaphor not allow for?
negative weights
outline the AND operation with pavlovian conditioning
a reward is given only if both stimuli are high/active, e.g. both a loud sound and a bright light
draw the truth table for AND operation with light and tone
Light Tone Reward
0 0 0
0 1 0
1 0 0
1 1 1
how would the AND operation be presented in a model neuron
2 inputs (x1, x2), each either 1 or 0. The weighted sum only reaches threshold when both inputs are 1, so the neuron gives output y=1 only for (1, 1)
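One illustrative weight/threshold choice that implements the AND card: w1 = w2 = 1 and θ = 1.5 are assumed values (any choice with w1, w2 < θ < w1 + w2 works):

```python
# AND neuron: fires only when both binary inputs are active.
# Weight and threshold values are illustrative assumptions.
def and_neuron(x1, x2, w1=1.0, w2=1.0, theta=1.5):
    return 1 if w1 * x1 + w2 * x2 > theta else 0
```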
outline the OR operation with pavlovian conditioning
the animal gets a reward when either one of the 2 stimuli is present, or both are present, e.g. bright light OR loud sound, or both bright and loud
changing from the AND to the OR operation represents what change to the neuron?
either the threshold moving down or the input weights moving up
draw the truth table for OR operation with light and tone
Light Tone Reward
0 0 0
0 1 1
1 0 1
1 1 1
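Keeping the same weights and lowering the threshold (the change the cards above describe) turns AND into OR; θ = 0.5 is an assumed value with 0 < θ < min(w1, w2):

```python
# OR neuron: same weighted-sum rule, but a lower threshold means a single
# active input is enough to fire. Values are illustrative assumptions.
def or_neuron(x1, x2, w1=1.0, w2=1.0, theta=0.5):
    return 1 if w1 * x1 + w2 * x2 > theta else 0
```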
when active, is w1x1+w2x2 larger or smaller than threshold?
larger
(and vice versa for inactive)
rearrange in 2 steps w1x1+w2x2>θ to make x2 the subject
w2x2 > θ - w1x1
x2 > -(w1/w2)x1 + θ/w2
(assuming w2 > 0, so the inequality direction is preserved when dividing)
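A quick numeric check of the two-step rearrangement over the binary truth-table inputs; the weight and threshold values are hypothetical, and the division assumes w2 > 0:

```python
# Verify: w1*x1 + w2*x2 > theta  <=>  x2 > -(w1/w2)*x1 + theta/w2 (for w2 > 0).
w1, w2, theta = 0.5, 1.0, 0.7  # illustrative values
for x1 in (0, 1):
    for x2 in (0, 1):
        original = w1 * x1 + w2 * x2 > theta
        rearranged = x2 > -(w1 / w2) * x1 + theta / w2
        assert original == rearranged
```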