Plasticity, Self-organisation and Hebbian Learning Flashcards
What is the difference between neural plasticity and learning?
Learning is a cognitive process, while plasticity is a neural mechanism.
What is homeostatic plasticity?
The capacity of neurons to change their parameters to regulate their own excitability, a compensatory mechanism.
Describe a conductance-based synapse
$I_{syn} = \bar{g}_{syn} \, n \, (V - V_{syn})$
When a signal arrives from the presynaptic neuron, a current is generated in the postsynaptic neuron that is a function of the maximal synaptic conductance, the proportion of open receptors ($n$), and the driving potential for that type of synapse.
There is a flow of ions modulated by the membrane potential of the postsynaptic neuron and the synaptic conductance.
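A minimal numerical sketch of this relation, with all parameter values assumed for illustration:

```python
# Conductance-based synaptic current: I_syn = gbar_syn * n * (V - V_syn).
# All values below are assumed, illustrative numbers.
gbar_syn = 1.0e-9   # maximal synaptic conductance (S)
n = 0.5             # proportion of open receptors (dimensionless)
V = -65.0e-3        # postsynaptic membrane potential (V)
V_syn = 0.0         # synaptic reversal potential (V), e.g. an excitatory synapse

I_syn = gbar_syn * n * (V - V_syn)
print(I_syn)  # negative here => inward, depolarizing current in this sign convention
```

Note that the current depends on the postsynaptic membrane potential $V$: as $V$ approaches $V_{syn}$, the driving force and hence the current shrink to zero.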
Describe a current-based synapse model
A model that approximates the conductance-modulated flow of ions as an injected current that does not depend on the membrane potential.
This simplification is known as a current-based synapse; it is less biologically plausible than the conductance-based synapse model.
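A small sketch of a current-based synapse, where each presynaptic spike injects a fixed current that decays exponentially, independent of the membrane potential (the kernel shape, time constant, and spike times are all assumed):

```python
import numpy as np

# Current-based synapse sketch: the current is injected directly and does not
# depend on the postsynaptic membrane potential (assumed exponential kernel).
dt = 0.1           # time step (ms)
tau_syn = 5.0      # synaptic time constant (ms), assumed
I_max = 1.0        # current injected per presynaptic spike (arbitrary units)
spike_times = [5.0, 20.0]  # presynaptic spike times (ms), assumed

t = np.arange(0.0, 50.0, dt)
I_syn = np.zeros_like(t)

I = 0.0
for k, tk in enumerate(t):
    I *= np.exp(-dt / tau_syn)                        # decay between spikes
    if any(abs(tk - s) < dt / 2 for s in spike_times):
        I += I_max                                    # fixed injection per spike
    I_syn[k] = I
```

Contrast this with the conductance-based model, where the same spike train would produce a current that also varies with $V$.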
What is the conductance-based model of a synapse?
Just as in HH ion channels, it is equal to
total conductance × receptor activation × driving force
This graph represents the time course of synaptic potentials for different receptors. In terms of temporal characteristics, GABAB postsynaptic current is slow or fast?
Slow.
This graph represents the time course of synaptic potentials for different receptors. Which one is the fastest receptor?
AMPA
What does η define in this equation for synaptic plasticity?
The learning rate
What type of plasticity is represented in this equation?
Hebbian Learning
What is the problem with this learning rule?
The weight growth is unbounded: weights grow indefinitely as long as the neurons continue to be active together.
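The unbounded growth is easy to see numerically. A sketch of the plain Hebbian rule $\Delta w = \eta \, a_i a_j$ with persistently co-active neurons (all values assumed):

```python
# Plain Hebbian rule: dw = eta * a_i * a_j. With both neurons persistently
# active, the weight grows without bound (illustrative values assumed).
eta = 0.1
a_i, a_j = 1.0, 1.0   # both neurons maximally active on every step
w = 0.5               # initial weight, assumed

for _ in range(100):
    w += eta * a_i * a_j   # each step adds a positive increment

print(w)  # ≈ 10.5 after 100 steps, and it keeps growing with more steps
```

Nothing in the rule ever produces a negative weight change, so the weight diverges whenever co-activity persists.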
What is the difference between supervised and unsupervised models?
An unsupervised model (such as Hebbian learning) self-organizes and tries to make sense of the data independently without a predefined label or error. No indication of “what should be learned”.
In a supervised learning model, errors are predefined, there is a cost function and the algorithm uses gradient descent to reduce the error.
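A toy supervised-learning sketch (the problem, values, and cost are all assumed): gradient descent on a squared-error cost for a single weight, with the label predefined:

```python
# Supervised learning sketch: minimize the squared error (w*x - target)^2
# by gradient descent. Input, target, and learning rate are assumed values.
x, target = 2.0, 4.0   # input and predefined label
w, lr = 0.0, 0.05      # initial weight and learning rate

for _ in range(200):
    error = w * x - target    # predefined error signal
    w -= lr * 2 * error * x   # gradient of (w*x - target)^2 w.r.t. w

print(w)  # converges toward 2.0, the weight that reproduces the label
```

The contrast with Hebbian learning is that the update is driven by an explicit error against a label, not by the correlation of activities.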
What are the properties of reward/reinforcement learning?
(1) Success is predefined
(2) There exists a reward function
(3) There is a memory trace to remember previous steps
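The three properties above can be sketched together: a reward function gates the weight change, and an eligibility (memory) trace of recent co-activity bridges the delay between action and reward (all parameters assumed):

```python
# Reward-gated learning sketch with a memory (eligibility) trace.
# Decay rate, learning rate, and the step sequence are assumed values.
eta, trace_decay = 0.1, 0.9
w, trace = 0.5, 0.0

steps = [(1.0, 1.0, 0.0),   # (a_pre, a_post, reward): co-activity, no reward yet
         (0.0, 0.0, 1.0)]   # reward arrives one step later

for a_pre, a_post, reward in steps:
    trace = trace_decay * trace + a_pre * a_post  # remembers previous steps
    w += eta * reward * trace                     # reward function gates learning

print(w)  # the delayed reward still strengthens the earlier co-active synapse
```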
Explain what the terms in the Hebbian learning formula mean.
At a certain time (implicitly defined here), neurons will change their weights as a function of the correlation between their activities and the learning rate ($\eta$).
If we assume that $a_i$ and $a_j$ are bounded sigmoid outputs taking values only between 0 and 1, the neurons are maximally co-active when both are equal to 1.
How can we stabilize Hebbian models?
One way is to subtract the average expected activity from the activities of both $a_i$ and $a_j$.
With this stabilization, if one neuron's activity is above the expected average while the other's is below it, the weight change is negative, leading to depression.
We see long-term potentiation as well as long-term depression in the same function.
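A covariance-style sketch of this stabilization, $\Delta w = \eta \, (a_i - \langle a \rangle)(a_j - \langle a \rangle)$, with the average activity and learning rate assumed:

```python
# Covariance-style stabilization of the Hebbian rule: subtract the expected
# average activity from both terms. Parameter values are assumed.
eta = 0.1
a_mean = 0.5  # expected average activity

def dw(a_i, a_j):
    return eta * (a_i - a_mean) * (a_j - a_mean)

print(dw(1.0, 1.0))  # ≈ 0.025  -> potentiation (both above average)
print(dw(0.9, 0.1))  # ≈ -0.016 -> depression (activities on opposite sides)
```

The same rule therefore yields both long-term potentiation and long-term depression, depending on where the activities sit relative to the expected average.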
What is the relationship between spike-timing-dependent plasticity and Hebbian learning?
Spike-timing-dependent plasticity is the “spiking version” of Hebbian learning. It transforms the Hebbian requirement that neurons be co-active into the requirement that their spikes occur within a certain time window of each other.
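A sketch of a standard exponential STDP window (amplitudes and time constants are assumed values): the weight change depends on the timing difference $\Delta t = t_{post} - t_{pre}$, with pre-before-post pairings potentiating and post-before-pre pairings depressing.

```python
import math

# Exponential STDP window sketch. A_plus/A_minus and the time constants
# are assumed, illustrative values (time in ms).
A_plus, A_minus = 0.01, 0.012
tau_plus, tau_minus = 20.0, 20.0

def stdp(dt_ms):
    """Weight change for a spike pair with dt = t_post - t_pre (ms)."""
    if dt_ms > 0:
        return A_plus * math.exp(-dt_ms / tau_plus)    # pre before post: LTP
    return -A_minus * math.exp(dt_ms / tau_minus)      # post before pre: LTD

print(stdp(10.0))   # positive: pre fired 10 ms before post
print(stdp(-10.0))  # negative: post fired 10 ms before pre
```

As $|\Delta t|$ grows, the change decays toward zero, which is the spiking analogue of requiring co-activity within a time window.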