03 Spiking NNs Flashcards

1
Q

Cortex

A

The “human-specific” part of the brain.

Different parts of the cortex are in charge of different tasks

But: the structure is the same in all of them (supports the “single learning algorithm” theory)

6 layers; vertical columns pass through the layers
Connections between layers (e.g., layer 6 can be connected to layer 6 and layer 3)

2
Q

Structure of Neurons

A

Dendrites – input

Soma – summation

Axon – output

Synapses – connection

3
Q

Brain - learning theory

A

Cortex: unsupervised learning

Basal ganglia: reinforcement learning

Cerebellum: supervised learning

Hypotheses

  1. The brain optimizes cost functions
  2. Cost functions are diverse across areas and time
  3. Specialized circuits for key problems
4
Q

Postsynaptic Potential (PSP)

A

Postsynaptic potentials are changes in the membrane potential of the postsynaptic terminal of a chemical synapse.

Strength of post-synaptic potential (PSP) depends on:

  • Amount of neurotransmitter released from the axon terminal
  • Number of ion channels (receptors) in dendrites
  • In simulators, abstracted by synaptic strength (weight)

Plasticity: change in one of these quantities

5
Q

Synaptic plasticity

A

In neuroscience, synaptic plasticity is the ability of synapses to strengthen or weaken over time, in response to increases or decreases in their activity.

In other words: synaptic plasticity enables learning

Plasticity depends on the precise timing of spikes

  • LTP – Long Term Potentiation (+)
  • LTD – Long Term Depression (-)
  • Hebbian rule: “Neurons that fire together wire together.” Learning is local and incremental
6
Q

Wiki: SNN

A

Spiking neural networks (SNNs) are artificial neural network models that more closely mimic natural neural networks.[1] In addition to neuronal and synaptic state, SNNs also incorporate the concept of time into their operating model. The idea is that neurons in the SNN do not fire at each propagation cycle (as it happens with typical multi-layer perceptron networks), but rather fire only when a membrane potential – an intrinsic quality of the neuron related to its membrane electrical charge – reaches a specific value. When a neuron fires, it generates a signal which travels to other neurons which, in turn, increase or decrease their potentials in accordance with this signal.

7
Q

classic spiking neuron model

A

Differential equations with respect to time

PSP shape (kernel)

Input: Current

Output: Spikes

Variables: Membrane potential: V(t)

Parameters:

  • Threshold: V_th
  • Resting potential: V_rest
  • Leak (membrane time constant): t_m
  • Refractory period: t_ref
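
A minimal sketch (assuming Python/NumPy, with illustrative parameter values not taken from the lecture) of how such a model, here a leaky integrate-and-fire neuron, can be integrated over time:

```python
import numpy as np

# Euler integration of a leaky integrate-and-fire (LIF) neuron.
# Parameter values are illustrative assumptions, not lecture values.
dt = 1e-3                       # time step [s]
tau_m = 20e-3                   # membrane time constant (leak) [s]
v_rest, v_th = -70.0, -55.0     # resting and threshold potential [mV]
t_ref = 2e-3                    # refractory period [s]
R = 10.0                        # membrane resistance [MOhm], assumed

v, refractory, spikes = v_rest, 0.0, []
I = 2.0 * np.ones(1000)         # input current [nA] for 1 s, assumed constant

for step, i_in in enumerate(I):
    if refractory > 0:          # neuron is silent during the refractory period
        refractory -= dt
        continue
    # dV/dt = (-(V - V_rest) + R*I) / tau_m
    v += dt * (-(v - v_rest) + R * i_in) / tau_m
    if v >= v_th:               # threshold crossing -> output spike
        spikes.append(step * dt)
        v = v_rest              # reset membrane potential
        refractory = t_ref

print(f"{len(spikes)} spikes in 1 s")
```
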
8
Q

Rate Coding

A

Spiking rate is computed over discrete time intervals

Input vectors map to output vectors

Rate-based networks = Analog networks

Drawbacks:

  • Computing spike rates is slow
  • Inefficient

Common use: cognition and images
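
A minimal sketch, assuming NumPy and made-up spike times, of computing a rate code over discrete time bins:

```python
import numpy as np

# Count spikes in discrete time bins and convert to rates.
# Spike times, window length and bin size are made-up assumptions.
spike_times = np.array([0.012, 0.031, 0.045, 0.078, 0.101, 0.140])   # [s]
t_max, bin_size = 0.2, 0.05                                          # [s]

bins = np.arange(0.0, t_max + bin_size, bin_size)
counts, _ = np.histogram(spike_times, bins=bins)
rates = counts / bin_size      # spikes per second in each interval
print(rates)
```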

9
Q

binary coding

A

When a neuron fires, it is said to be “active” for a given amount of time ∆t

We can sample the spike train at any time

Same principle for values ∈ R: an exponential filter instead of a binary one (simulates PSPs)

Common use: stochastic inference
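
A minimal sketch, assuming NumPy and made-up spike positions, of reading out the binary code and its real-valued (exponential-filter) variant:

```python
import numpy as np

# Read out a binary code and its real-valued variant from a spike train.
# A neuron counts as "active" for delta_t after each spike; for values in R
# the spikes are low-pass filtered with an exponential kernel (PSP-like).
# Spike indices, delta_t and tau are illustrative assumptions.
dt, T = 1e-3, 0.2
t = np.arange(0.0, T, dt)
spike_train = np.zeros_like(t)
spike_train[[20, 55, 60, 130]] = 1.0            # assumed spike positions

delta_t = 10e-3
window = np.ones(int(round(delta_t / dt)))
active = np.convolve(spike_train, window)[: len(t)] > 0   # binary: active or not

tau = 20e-3
kernel = np.exp(-np.arange(0.0, 5 * tau, dt) / tau)
analog = np.convolve(spike_train, kernel)[: len(t)]       # exponential filter

print(active[:40].astype(int))
print(analog.max())
```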

10
Q

Gaussian Coding

A

Deals with spatial stimuli

Neurons have spatial positions
We fit a Gaussian to the spiking rates

–> common use: proprioception in muscles
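
A minimal sketch, assuming NumPy and toy firing rates, of decoding a spatial stimulus by fitting a Gaussian (mean and width) to the rates of spatially arranged neurons:

```python
import numpy as np

# Neurons at spatial positions fire at different rates; the stimulus location
# is decoded by fitting a Gaussian (mean and width) to the rate profile.
# Positions and rates are assumed toy values.
positions = np.linspace(0.0, 1.0, 11)                                 # neuron positions
rates = np.array([0, 1, 4, 9, 15, 20, 16, 8, 3, 1, 0], dtype=float)   # spikes/s

mu = np.sum(positions * rates) / np.sum(rates)        # Gaussian mean ~ stimulus position
sigma = np.sqrt(np.sum(rates * (positions - mu) ** 2) / np.sum(rates))
print(f"decoded position ~ {mu:.2f}, width ~ {sigma:.2f}")
```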

11
Q

Synchronous coding schemes

A

need to define a reference time, e.g. a spike

information is encoded with respect to the reference

different schemes:

  • time-to-first spike
  • temporal coding
  • rank order coding
  • correlation coding

support complex computations with few neurons

very efficient, but not very robust to noise
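
A minimal sketch of one synchronous scheme, time-to-first-spike coding, assuming NumPy; the linear latency mapping and t_max are assumptions for illustration:

```python
import numpy as np

# Time-to-first-spike coding: stronger inputs spike earlier relative to a
# shared reference time. The linear mapping and t_max are assumptions.
def encode_latency(x, t_max=20e-3):
    """Map intensities in [0, 1] to spike latencies after the reference."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    return (1.0 - x) * t_max       # x = 1 -> immediate spike, x = 0 -> latest spike

def decode_latency(latency, t_max=20e-3):
    return 1.0 - np.asarray(latency) / t_max

latencies = encode_latency([0.9, 0.2, 0.5])
print(latencies, decode_latency(latencies))
```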

12
Q

Dealing with correlations

A

Repeating spatio-temporal spiking patterns

requires spike train analysis tools

common use: decoding stimuli in spike trains

Tool: the Elephant framework
So far not used for learning, very complex.
Used for analyzing data (train a model that produces a spike train, then analyze the spatio-temporal correlations of the spike trains to gain insights)

13
Q

I/O - encoding and decoding

A

It's up to you how to encode inputs and decode outputs

different coding schemes can be used within the same network

makes modelling a network a whole lot more complex

For our brain, it is not known. Theory: different codings are used in different areas.

14
Q

Synaptic plasticity as learning

A

Hebb’s postulate: learning is local and cooperative

  • Local: the weights are adjusted with respect to local information
  • Cooperative: the weights are adjusted on simultaneous activation

Learning happens at different timescales:

  • short-term plasticity
  • long-term plasticity –> relevant for learning in this lecture
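
A minimal sketch, assuming NumPy and an illustrative learning rate, of a Hebbian (local, cooperative) weight update:

```python
import numpy as np

# Hebbian weight update: uses only local pre- and postsynaptic activity and
# grows when both are active together. Learning rate and activities are
# illustrative assumptions.
eta = 0.01

def hebbian_update(w, pre, post):
    """dw = eta * post * pre (local and cooperative)."""
    return w + eta * np.outer(post, pre)

w = np.zeros((2, 3))                 # 3 presynaptic -> 2 postsynaptic neurons
pre = np.array([1.0, 0.0, 1.0])      # presynaptic activity
post = np.array([1.0, 1.0])          # postsynaptic activity
print(hebbian_update(w, pre, post))
```
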
15
Q

Different types of long-term synaptic plasticity

A

Spike-timing-dependent plasticity
* Depends on the relative spike timing of pre- and postsynaptic neurons (see the sketch after this list)

Rate-based plasticity
* Depends on rate (frequency) of pre- and postsynaptic firing

Voltage-based plasticity
* The synapse has access to the post-synaptic membrane potential

Reward-based plasticity
* Plasticity controlled by a global reward signal (e.g. the neuromodulator dopamine)

Structural plasticity
* Learning by rewiring connections instead of just changing the synaptic weights
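
A minimal sketch, assuming NumPy, of the pair-based STDP window (LTP for pre-before-post, LTD for post-before-pre); amplitudes and time constants are assumed illustrative values:

```python
import numpy as np

# Pair-based STDP window: pre-before-post (dt > 0) gives potentiation (LTP),
# post-before-pre gives depression (LTD). Amplitudes and time constants are
# assumed illustrative values.
A_plus, A_minus = 0.01, 0.012
tau_plus, tau_minus = 20e-3, 20e-3

def stdp_dw(t_pre, t_post):
    dt = t_post - t_pre
    if dt > 0:                                   # pre before post -> LTP
        return A_plus * np.exp(-dt / tau_plus)
    return -A_minus * np.exp(dt / tau_minus)     # post before pre -> LTD

print(stdp_dw(0.010, 0.015))   # potentiation
print(stdp_dw(0.015, 0.010))   # depression
```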

16
Q

Spiking networks as binary recurrent networks

A

Spiking networks can be viewed as recurrent binary networks

This way, they can be implemented efficiently with deep learning frameworks

Every forward step advances the state of the network by one discrete time interval
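
A minimal sketch, assuming NumPy, of one such discrete-time forward step: the membrane state is updated and binary spikes are emitted where the threshold is crossed. Shapes, decay and threshold are assumptions:

```python
import numpy as np

# One discrete-time step of a spiking network treated as a recurrent binary
# network: leaky state update, binary spikes where the threshold is crossed,
# soft reset. Shapes, decay and threshold are assumptions.
def step(v, spikes, x, w_in, w_rec, decay=0.9, v_th=1.0):
    v = decay * v + x @ w_in + spikes @ w_rec   # membrane state update
    new_spikes = (v >= v_th).astype(float)      # binary activation (spike / no spike)
    v = v - new_spikes * v_th                   # soft reset after spiking
    return v, new_spikes

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
w_in = rng.normal(0.0, 0.5, (n_in, n_hid))
w_rec = rng.normal(0.0, 0.1, (n_hid, n_hid))

v, s = np.zeros(n_hid), np.zeros(n_hid)
for _ in range(10):                             # unroll over discrete time
    v, s = step(v, s, rng.random(n_in), w_in, w_rec)
print(s)
```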

17
Q

Backprop for spikes: challenges and solutions

- non-differentiability

A

The activation function for spikes is the step function.
This is not differentiable.

–> surrogate gradients

The gradient is approximated with a surrogate,
i.e. replaced by a “fake” gradient that does what we want it to (sketch below)
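
A minimal sketch, assuming PyTorch, of a surrogate gradient: the forward pass keeps the step function, the backward pass substitutes a smooth approximation (here the derivative of a fast sigmoid; the steepness is an assumed hyperparameter):

```python
import torch

# Surrogate gradient: the forward pass keeps the non-differentiable step
# function, the backward pass uses a smooth "fake" gradient (derivative of a
# fast sigmoid). The steepness SCALE is an assumed hyperparameter.
class SurrogateSpike(torch.autograd.Function):
    SCALE = 10.0

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()                  # step function: spike / no spike

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (1.0 + SurrogateSpike.SCALE * v.abs()) ** 2
        return grad_output * surrogate          # smooth stand-in for the true gradient

v = torch.tensor([-0.2, 0.1, 0.5], requires_grad=True)
spikes = SurrogateSpike.apply(v)
spikes.sum().backward()
print(v.grad)
```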

synchronous forward/backward steps
–> local error computation

18
Q

Backprop for spikes: challenges and solutions

- time dynamics

A

Time dynamics are not taken into account
–> eligibility traces (not backprop-through-time)

Include a time convolution in your update function. In short: an extra variable is included that stores temporal information…
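
A minimal sketch, assuming NumPy, of such an eligibility trace: a per-synapse variable that low-pass filters pre/post activity and is combined with a learning signal instead of backprop-through-time. Decay, learning rate and the learning signal are assumptions:

```python
import numpy as np

# Eligibility trace: a per-synapse variable that low-pass filters past
# pre/post activity, so temporal credit is assigned without
# backprop-through-time. Decay, learning rate and the learning signal
# are illustrative assumptions.
def update_trace(e, pre, post, decay=0.9):
    """e_t = decay * e_{t-1} + local activity term."""
    return decay * e + np.outer(post, pre)

rng = np.random.default_rng(0)
eta = 0.01
e = np.zeros((2, 3))
w = np.zeros((2, 3))

for _ in range(5):
    pre = (rng.random(3) > 0.5).astype(float)    # presynaptic spikes
    post = (rng.random(2) > 0.5).astype(float)   # postsynaptic spikes
    e = update_trace(e, pre, post)               # trace stores temporal information
    learning_signal = 1.0                        # e.g. a broadcast error, assumed
    w += eta * learning_signal * e               # weight update uses the trace
print(w)
```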

19
Q

Backprop for spikes: challenges and solutions

- feedback alignment

A

The backpropagated error δ_j requires symmetric feedback weights (the weight-transport problem)

Synapses only work in one direction; information cannot be fed backward.

–> feedback alignment
“Fake” synapses carry the error directly to each layer (fixed random feedback weights); the gradient is approximated …
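
A minimal sketch, assuming NumPy, of feedback alignment for one hidden layer: a fixed random matrix B replaces the transposed forward weights in the backward pass. Shapes, data and learning rate are assumptions:

```python
import numpy as np

# Feedback alignment for one hidden layer: a fixed random matrix B carries
# the error backward instead of the transposed forward weights W2.T
# ("fake synapses"). Shapes, data and learning rate are assumptions.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 6, 2
W1 = rng.normal(0.0, 0.5, (n_in, n_hid))
W2 = rng.normal(0.0, 0.5, (n_hid, n_out))
B = rng.normal(0.0, 0.5, (n_out, n_hid))    # fixed random feedback weights

x = rng.random(n_in)
target = np.array([1.0, 0.0])

h = np.tanh(x @ W1)                         # forward pass
y = h @ W2
err = y - target                            # output error

delta_h = (err @ B) * (1.0 - h ** 2)        # backward pass uses B, not W2.T
W2 -= 0.1 * np.outer(h, err)
W1 -= 0.1 * np.outer(x, delta_h)
print(err)
```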