03 Spiking NNs Flashcards
Cortex
The most human-specific part of the brain.
Different parts of the cortex are in charge of different tasks.
But: the structure is the same in all of them (supports the “single learning algorithm” theory).
6 layers; vertical columns span all layers.
Connections between layers (e.g., layer 6 can be connected to layer 6 and layer 3).
Structure of Neurons
Dendrites – input
Soma – summation
Axon – output
Synapses – connection
Brain - learning theory
Cortex: unsupervised learning
Basal ganglia: reinforcement learning
Cerebellum: supervised learning
Hypotheses
- The brain optimizes cost functions
- Cost functions are diverse across areas and time
- Specialized circuits for key problems
Postsynaptic Potential (PSP)
Postsynaptic potentials are changes in the membrane potential of the postsynaptic terminal of a chemical synapse.
Strength of the postsynaptic potential (PSP) depends on:
- Amount of neurotransmitter released from the axon terminal
- Number of ion channels (receptors) in the dendrite
- In simulators, abstracted by synaptic strength (weight)
Plasticity: change in one of these quantities
Synaptic plasticity
In neuroscience, synaptic plasticity is the ability of synapses to strengthen or weaken over time, in response to increases or decreases in their activity.
In other words: synaptic plasticity enables learning
Plasticity depends on precise timing of spikes
- LTP – Long Term Potentiation (+)
- LTD – Long Term Depression (-)
- Hebbian rule: “Neurons that fire together wire together.” Learning is local and incremental (see the sketch below)
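As a minimal sketch of what “local and incremental” means, here is a plain Hebbian weight update in Python; all names, sizes, and the learning rate are illustrative, not from the lecture. The change to each weight uses only the activities of the two neurons it connects.

```python
import numpy as np

# Minimal Hebbian update: the weight change uses only locally available
# quantities (pre- and postsynaptic activity), applied incrementally.
def hebbian_update(w, pre, post, eta=0.01):
    """w: weight matrix (post x pre); pre/post: activity vectors."""
    return w + eta * np.outer(post, pre)

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=(3, 5))   # 5 presynaptic, 3 postsynaptic neurons
pre = rng.random(5)                   # presynaptic firing rates
post = w @ pre                        # postsynaptic activity
w = hebbian_update(w, pre, post)      # neurons that fire together wire together
```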
Wiki: SNN
Spiking neural networks (SNNs) are artificial neural network models that more closely mimic natural neural networks.[1] In addition to neuronal and synaptic state, SNNs also incorporate the concept of time into their operating model. The idea is that neurons in the SNN do not fire at each propagation cycle (as it happens with typical multi-layer perceptron networks), but rather fire only when a membrane potential – an intrinsic quality of the neuron related to its membrane electrical charge – reaches a specific value. When a neuron fires, it generates a signal which travels to other neurons which, in turn, increase or decrease their potentials in accordance with this signal.
Classic spiking neuron model
Differential equations with respect to time
PSP shape (kernel)
Input: Current
Output: Spikes
Variables: membrane potential V(t)
Parameters:
- Threshold: V_th
- Resting potential: V_rest
- Leak (membrane time constant): t_m
- Refractory period: t_ref
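Taking the leaky integrate-and-fire (LIF) neuron as the classic model, a minimal simulation sketch follows: it integrates τ_m · dV/dt = −(V − V_rest) + R · I(t) with forward Euler, emits a spike and resets when V reaches V_th, and clamps V during the refractory period. All parameter values are illustrative, not from the lecture.

```python
import numpy as np

# Leaky integrate-and-fire (LIF) neuron, forward-Euler integration.
# tau_m * dV/dt = -(V - V_rest) + R * I(t); spike and reset at V_th.
def simulate_lif(I, dt=0.1, tau_m=10.0, V_rest=-65.0, V_th=-50.0,
                 V_reset=-65.0, R=1.0, t_ref=2.0):
    V = V_rest
    refractory = 0.0              # time left in refractory period (ms)
    spikes, trace = [], []
    for step, current in enumerate(I):
        if refractory > 0:
            refractory -= dt      # clamp the neuron during refractoriness
            V = V_reset
        else:
            V += dt / tau_m * (-(V - V_rest) + R * current)
            if V >= V_th:         # threshold crossing -> emit a spike
                spikes.append(step * dt)
                V = V_reset
                refractory = t_ref
        trace.append(V)
    return np.array(spikes), np.array(trace)

spike_times, V_trace = simulate_lif(I=np.full(1000, 20.0))  # constant input
```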
Rate Coding
Spiking rate is computed over discrete time intervals
Input vectors map to output vectors
Rate-based networks = Analog networks
Drawbacks:
- Computing spike rates is slow
- Inefficient
Common use: cognition and images
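A minimal sketch of rate coding, assuming a Poisson spike generator for the encoding step (one common convention, not the only one) and window-based spike counts for decoding; all names and numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.001                               # 1 ms time resolution
n_steps = 1000                           # 1 s of activity

# Encode: value -> Poisson spike train whose rate reflects the value.
rate_hz = 40.0
spike_train = rng.random(n_steps) < rate_hz * dt   # boolean spike train

# Decode: count spikes in discrete windows, divide by the window length.
window_steps = 100                       # 100 ms windows
counts = spike_train.reshape(-1, window_steps).sum(axis=1)
rates = counts / (window_steps * dt)     # estimated rate per window (Hz)
```

Note the drawback mentioned above: the decoder must wait a full window before it can report a rate.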
Binary Coding
When a neuron fires, it is said to be “active” for a given amount of time ∆t
We can sample the spike train at any time
Same principle for values ∈ R: an exponential filter instead of a binary one (simulates PSPs)
Common use: stochastic inference
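A sketch of both variants under the ∆t convention above: a box kernel makes the neuron read as “active” for ∆t after each spike, and an exponential trace replaces the box for real-valued codes. All constants and spike times are illustrative.

```python
import numpy as np

dt = 0.001
spike_train = np.zeros(1000, dtype=bool)
spike_train[[100, 400, 420, 800]] = True       # example spike times (1 ms bins)

# Binary coding: the neuron reads as "active" for delta_t after each spike,
# so the code can be sampled at any time.
active_steps = 50                               # delta_t = 50 ms
kernel = np.ones(active_steps)
active = np.convolve(spike_train, kernel)[:len(spike_train)] > 0

# Real-valued variant: exponential filter (PSP-like trace) instead of a box.
tau = 0.02                                      # 20 ms decay constant
trace = np.zeros(len(spike_train))
for t in range(1, len(spike_train)):
    trace[t] = trace[t - 1] * np.exp(-dt / tau) + float(spike_train[t])
```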
Gaussian Coding
Dealing with spatial stimuli
Neurons have spatial positions
We fit a Gaussian on the spiking rates
Common use: proprioception in muscles
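A sketch of Gaussian (population) coding under these assumptions: each neuron has a preferred position, its rate is a Gaussian of the distance to the stimulus, and decoding uses a simple rate-weighted average (one of several possible readouts). All values are illustrative.

```python
import numpy as np

# Each neuron has a preferred position; its firing rate follows a Gaussian
# of the distance between the stimulus and that preferred position.
preferred = np.linspace(0.0, 1.0, 11)   # preferred positions of 11 neurons
sigma, r_max = 0.1, 100.0               # tuning width, peak rate (Hz)

def encode(x):
    return r_max * np.exp(-(x - preferred) ** 2 / (2 * sigma ** 2))

def decode(rates):
    # Population-vector readout: rate-weighted mean of preferred positions.
    return np.sum(rates * preferred) / np.sum(rates)

rates = encode(0.42)
print(decode(rates))                    # close to 0.42
```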
Synchronous coding schemes
Need to define a reference time, e.g. a spike
Information is encoded with respect to the reference
Different schemes:
- time-to-first spike
- temporal coding
- rank order coding
- correlation coding
Support complex computations with few neurons
Very efficient, but not very robust to noise
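A sketch of the time-to-first-spike scheme, assuming values normalized to [0, 1] and a fixed latest spike time t_max; stronger inputs spike earlier relative to the reference time t = 0. The linear mapping and all names are illustrative choices.

```python
import numpy as np

t_max = 0.05   # latest allowed first-spike time (50 ms after the reference)

# Time-to-first-spike: stronger inputs (closer to 1) fire earlier
# relative to a common reference time t = 0.
def encode(values):
    """values in [0, 1] -> first-spike latencies in seconds."""
    return (1.0 - np.asarray(values)) * t_max

def decode(latencies):
    return 1.0 - np.asarray(latencies) / t_max

latencies = encode([0.9, 0.2, 0.5])   # strongest value spikes first
print(decode(latencies))              # recovers [0.9, 0.2, 0.5]
```

A single spike per neuron carries the value, which is why this family of schemes is so efficient; but a small jitter in spike time directly distorts the decoded value, hence the noise sensitivity.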
Dealing with correlations
Repeating spatio-temporal spiking patterns
Requires spike-train analysis tools
Common use: decoding stimuli in spike trains
Tool: the Elephant framework
So far not used for learning (very complex); used for analyzing data: train a model that produces spike trains, then analyze the spatio-temporal correlations of those spike trains to gain insights.
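As a stand-in for the kind of analysis such tools provide (this is not the Elephant API), here is a minimal cross-correlogram between two binned spike trains; a peak at a nonzero lag reveals a repeating temporal relationship. All numbers are illustrative.

```python
import numpy as np

def cross_correlogram(train_a, train_b, max_lag):
    """Correlation of two binary spike trains; positive lag = b after a."""
    a = train_a.astype(float) - train_a.mean()
    b = train_b.astype(float) - train_b.mean()
    lags = np.arange(-max_lag, max_lag + 1)
    return lags, np.array([np.sum(a * np.roll(b, -lag)) for lag in lags])

rng = np.random.default_rng(2)
train_a = rng.random(10_000) < 0.02            # ~20 Hz at 1 ms bins
train_b = np.roll(train_a, 5)                  # same pattern, 5 ms later
lags, corr = cross_correlogram(train_a, train_b, max_lag=20)
print(lags[np.argmax(corr)])                   # peak at +5 ms
```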
I/O - encoding and decoding
It's up to you how to encode inputs and decode outputs
Different coding schemes can be used within the same network
This makes modelling a network a whole lot more complex
For our brain, it is not known which codes are used. Theory: different codings are used in different areas.
Synaptic plasticity as learning
Hebb’s postulate: learning is local and cooperative
- Local: the weights are adjusted with respect to local information
- Cooperative: the weights are adjusted on simultaneous activation
Learning happens at different timescales:
- short-term plasticity
- long-term plasticity → relevant for learning in this lecture
Different types of long-term synaptic plasticity
Spike-timing-dependent plasticity
* Depends on the relative timing of pre- and postsynaptic spikes (see the sketch after this list)
Rate-based plasticity
* Depends on rate (frequency) of pre- and postsynaptic firing
Voltage-based plasticity
* The synapse has access to the post-synaptic membrane potential
Reward-based plasticity
* Plasticity controlled by a global reward signal (e.g., the neuromodulator dopamine)
Structural plasticity
* Learning by rewiring connections instead of just changing the synaptic weights
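A minimal sketch of the pair-based STDP rule mentioned above: a pre-before-post spike pair strengthens the synapse (LTP), post-before-pre weakens it (LTD), with exponentially decaying time windows. The amplitudes and time constants are illustrative, not from the lecture.

```python
import numpy as np

# Pair-based STDP: pre-before-post potentiates the synapse (LTP),
# post-before-pre depresses it (LTD), with exponential time windows.
def stdp_dw(delta_t, A_plus=0.01, A_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """delta_t = t_post - t_pre in ms; returns the weight change."""
    return np.where(delta_t > 0,
                    A_plus * np.exp(-delta_t / tau_plus),     # LTP
                    -A_minus * np.exp(delta_t / tau_minus))   # LTD

print(stdp_dw(np.array([5.0, -5.0])))   # small potentiation, small depression
```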