Week 2: How to Model the Brain Flashcards
The McCulloch-Pitts model of neurons (1943) has X1, X2, X3 making synaptic connections
to a receiver neuron Y
The simplest approximation we can make of the McCulloch-Pitts model of neurons is
to add the inputs of the X neurons (X1 + X2 + X3), which gives the output activity of the Y neuron
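A minimal Python sketch of this unweighted sum; the input activity values are made up for illustration:

    # Simplest MCP approximation: the output activity of Y is the plain sum of its inputs
    x1, x2, x3 = 0.5, 1.0, 0.2   # illustrative input activities
    y = x1 + x2 + x3              # y = 1.7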
We make the MCP model more realistic, reflecting that some inputs are more important than others, by adding
synaptic weights
Although hardly used, the MCP model is the grandfather
of all neuron models
A disadvantage of the MCP model is that it ignores
the properties of ion channels, different types of synapses, etc.
We can calculate the output Y in the weighted McCulloch-Pitts neuron model as
Y = w1X1 + w2X2 + w3X3
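A small Python sketch of this weighted sum; the weight and input values are illustrative only:

    # Weighted MCP activation: each input is scaled by its synaptic weight
    w1, w2, w3 = 0.9, 0.1, 0.5        # synaptic weights (illustrative)
    x1, x2, x3 = 0.5, 1.0, 0.2        # input activities (illustrative)
    y = w1 * x1 + w2 * x2 + w3 * x3   # y = 0.65; the large w1 gives X1 the most influence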
The McCulloch-Pitts formula means that inputs with large w
influence Y more
Some synapses are stronger than others due to
learning
We can write w1X1 + w2X2 + w3X3 in the McCulloch-Pitts model more concisely, since realistically there are more than 3 neurons giving input to the receiver neuron Y, by
writing the sigma formula Y = Σ (i = 1 to N) wi Xi, with N = an arbitrary number of input neurons
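The same idea for N inputs, written as a short Python sketch (the loop mirrors the sigma sum; the function and variable names are my own):

    def activation(w, x):
        """Y = sum over i = 1..N of w_i * x_i, for an arbitrary number N of inputs."""
        assert len(w) == len(x)
        return sum(wi * xi for wi, xi in zip(w, x))

    y = activation([0.9, 0.1, 0.5, 0.3], [0.5, 1.0, 0.2, 0.8])   # N = 4 here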
The transfer function introduces
one more step between Y and the final output of the neuron
McCulloch-Pitts model of neurons (1943): the transfer function G is… (4)
Define a threshold value Θ
if Y ≥ Θ then G(Y) = 1 (neuron active)
if Y < Θ then G(Y) = 0 (neuron silent)
also called ‘step function’
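A sketch of this step transfer function in Python; the particular threshold value is arbitrary:

    def g_step(y, theta=1.0):
        """MCP transfer function: 1 (active) if the activation Y reaches the
        threshold theta, otherwise 0 (silent)."""
        return 1 if y >= theta else 0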
The final output from the McCulloch-Pitts neuron model is
Y = activation of the neuron
G(Y) = r = 1 or 0
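Putting the weighted sum and the step function together, a hedged sketch of the whole MCP neuron; the names and example values are illustrative:

    def mcp_neuron(w, x, theta=1.0):
        y = sum(wi * xi for wi, xi in zip(w, x))   # activation Y
        return 1 if y >= theta else 0              # output r = G(Y)

    r = mcp_neuron([0.9, 0.1, 0.5], [0.5, 1.0, 0.2])   # Y = 0.65 < 1.0, so r = 0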
In the final McCulloch-Pitts neuronal model, Y is referred to as
the activation of the neuron, which is a fairly abstract notion
In the final McCulloch-Pitts neuronal model, Y could be thought of as the internal state of the neuron:
it is either in a state that leads to action potentials or in one that does not (the neuron is silent)
In the final McCulloch-Pitts neuronal model, the output is r:
it is some measure of the neuron's output given its activation
In the final McCulloch-Pitts neuronal model,
we tentatively (not definitely) identify r (the output) with
the firing rate (the number of action potentials fired per second)
The linear neuron model does not use a step function as the transfer function since
real neurons have a lot of variability in their firing (not just firing at 0 or 1)
Diagram of the linear neuron model's transfer function
Linear Neuron Model’s Transfer function is
piece-wise linear
Linear Neuron Model’s Final Output is (2)
G(Y) = r = Y
r can have values between 0 and infinity (since G is the identity for Y ≥ 0 and has no upper bound)
In the linear neuron model's transfer function,
if Y < 0 then the neuron is silent (r = 0) because
there can be no negative firing rates, so negative outputs are off limits
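A minimal Python sketch of this piecewise-linear transfer function, following the two cards above (r = Y for Y ≥ 0, clipped to 0 for negative activations):

    def g_linear(y):
        """Linear model transfer: r = Y when Y >= 0, r = 0 otherwise,
        since negative firing rates are off limits."""
        return y if y >= 0 else 0.0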
In the linear neuron model, it seems unreasonable to have… (2)
the firing rate grow without bound as the input increases
We cannot have neurons fire, for instance, a million spikes per second
In the linear neuron model, it seems unreasonable to have the firing rate grow without bound as the input increases, so…
therefore, in the sigmoid neuron model, (2)
the firing rate cannot go faster than a given frequency
we introduce a saturating transfer function
In the sigmoid neuron model, (3)
as Y grows, the output r = G(Y) grows
as Y grows further, we hit the level at which the output r saturates
with this transfer function, the output does not grow to infinity as the input grows
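As a concrete example, a sketch using the logistic function scaled by an assumed maximum rate r_max; the lecture may use a different saturating curve, so treat the exact formula and parameters as assumptions:

    import math

    def g_sigmoid(y, r_max=100.0, theta=1.0, slope=1.0):
        """Saturating transfer: r grows with Y but levels off below r_max
        (an assumed maximum firing rate in spikes per second)."""
        return r_max / (1.0 + math.exp(-slope * (y - theta)))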
The McCulloch-Pitts, linear neuron, and sigmoid neuron models differ in how they map summed inputs to firing rate due to
having different transfer functions (G)
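To see the three transfer functions side by side, a short usage sketch applying the functions defined in the sketches above to the same activations:

    for y in (-1.0, 0.5, 2.0, 10.0):
        print(y, g_step(y), g_linear(y), g_sigmoid(y))
    # g_step jumps from 0 to 1 at the threshold, g_linear grows without bound,
    # and g_sigmoid saturates near r_max.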