6.17 - Self-organisation and Hebbian Learning Flashcards
What is the difference between topography and topology of neural networks?
Topography.
A topographic map is the ordered projection of a sensory surface, like the retina or the skin, or an effector system, like the musculature, to one or more structures of the central nervous system. Topographic maps can be found in all sensory systems and in many motor systems.
Topology.
Network Topology refers to the layout of a network. How different nodes in a network are connected to each other and how they communicate is determined by the network’s topology.
Make the adjacency matrix for this network
Note: this is an undirected network
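The network figure itself is not reproduced here, so as an illustration, here is how the adjacency matrix of a small hypothetical undirected network (4 nodes, edges chosen for the example) would be built. For an undirected network the matrix is symmetric: an edge between i and j sets both A[i, j] and A[j, i].

```python
import numpy as np

# Hypothetical example network (the card's figure is not shown here):
# 4 nodes with undirected edges (0,1), (0,2), (1,2), (2,3).
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4

A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = 1
    A[j, i] = 1  # undirected: the adjacency matrix is symmetric

print(A)
```

Note that for a directed network only A[i, j] would be set, and the matrix need not be symmetric.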
What is the difference between neural plasticity and learning?
Learning is a cognitive process, while plasticity is a neural mechanism.
What is homeostatic plasticity?
The capacity of neurons to change their parameters to regulate their own excitability, a compensatory mechanism.
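One well-known homeostatic mechanism is synaptic scaling, which can be sketched as follows (the function and parameter values are illustrative, not from the card): the neuron multiplicatively rescales all its input weights so that its firing rate drifts back toward a target set point.

```python
import numpy as np

def scale_weights(w, rate, target_rate, tau=10.0):
    """Illustrative synaptic scaling: multiplicative, compensatory.

    If the neuron fires above its target rate the weights shrink;
    if it fires below the target rate the weights grow.
    """
    return w * (1.0 + (target_rate - rate) / (tau * target_rate))

w = np.array([0.5, 1.0, 1.5])
w_high = scale_weights(w, rate=20.0, target_rate=10.0)  # overactive -> scale down
w_low = scale_weights(w, rate=5.0, target_rate=10.0)    # underactive -> scale up
```

Because the scaling is multiplicative, the relative strengths of the synapses (learned by e.g. Hebbian plasticity) are preserved while overall excitability is regulated.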
Describe a conductance-based synapse
I_syn = g_syn (V − V_syn)
When a signal arrives from the presynaptic neuron, a current is generated in the postsynaptic neuron which is a function of the synaptic conductance and the driving potential for that type of synapse.
There is a flow of ions modulated by the membrane potential of the postsynaptic neuron and the synaptic conductance.
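A minimal numerical sketch of this equation (values are illustrative): the synaptic current is the product of the conductance and the driving force, so it depends on the postsynaptic membrane potential.

```python
def conductance_synapse_current(g_syn, V, V_syn):
    """Conductance-based synaptic current: I_syn = g_syn * (V - V_syn).

    g_syn reflects the proportion of open ion channels;
    (V - V_syn) is the driving force, with V the postsynaptic
    membrane potential and V_syn the synaptic reversal potential.
    """
    return g_syn * (V - V_syn)

# Excitatory example: reversal potential 0 mV, membrane at -65 mV.
I = conductance_synapse_current(g_syn=1.0, V=-65.0, V_syn=0.0)
# Negative (inward) current: the synapse depolarizes the neuron.
```

When V reaches the reversal potential V_syn, the driving force and hence the current vanish, regardless of how large the conductance is.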
Describe a current-based synapse model
A model that approximates the conductance-modulated flow of ions as an injected current that does not depend on the postsynaptic membrane potential.
This simplification is known as a current-based synapse; it is less biologically plausible than the conductance-based model.
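The difference can be sketched side by side (illustrative values): as the membrane potential approaches the reversal potential, the conductance-based current shrinks to zero, while the current-based current is unchanged.

```python
def current_based(w, s):
    """Current-based synapse: injected current depends only on the
    synaptic weight w and presynaptic activation s, never on V."""
    return w * s

def conductance_based(g, V, V_syn):
    """Conductance-based synapse: depends on the driving force (V - V_syn)."""
    return g * (V - V_syn)

# Sweep the membrane potential toward the reversal potential (0 mV here).
for V in (-70.0, -35.0, 0.0):
    I_g = conductance_based(g=1.0, V=V, V_syn=0.0)  # shrinks toward 0
    I_c = current_based(w=1.0, s=1.0)               # constant
```

This is why the current-based model misses saturation effects near the reversal potential, at the benefit of simpler mathematics.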
Why is the conductance-based model a more accurate representation of a synapse?
The conductance-based model captures two elements that the current-based model omits:
(1) Proportion of ion channels open
(2) Driving force for a synapse
In terms of temporal characteristics, is the GABA-B postsynaptic current slow or fast?
Slow.
What does η define in this synaptic plasticity equation?
The learning rate
What is problematic with this learning rule?
The weights are not bounded. They will grow more and more as the neurons continue to be activated together.
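This instability is easy to demonstrate with a small sketch (values are illustrative): under the plain Hebbian update dw = eta * a_i * a_j, two persistently co-active neurons only ever strengthen their connection.

```python
# Plain Hebbian update: dw = eta * a_i * a_j
eta = 0.1
w = 0.5
a_i, a_j = 1.0, 1.0  # both neurons persistently co-active

history = []
for _ in range(100):
    w += eta * a_i * a_j  # the product a_i * a_j is never negative
    history.append(w)
# The weight grows without bound: no term in the rule can decrease it.
```

Since a_i and a_j are non-negative activities, the weight change is never negative, so nothing bounds the growth.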
What is the difference between supervised and unsupervised models?
An unsupervised model (such as Hebbian learning) self-organizes and tries to make sense of the data independently without a predefined label or error. No indication of “what should be learned”.
In a supervised learning model, errors are predefined: there is a cost function, and the algorithm adjusts its parameters (typically via gradient descent) to reduce the error.
What are the properties of reward/reinforcement learning?
(1) Success is predefined
(2) There exists a reward function
(3) There is a memory trace to remember previous steps
The formula below depicts the principle underlying Hebbian learning. Explain what this formula means.
At each point in time (time is left implicit in the formula), a synaptic weight changes as a function of the correlation between the activities of the two neurons it connects, scaled by the learning rate η (eta).
If we assume that a_i and a_j are bounded sigmoid outputs taking values between 0 and 1, then the neurons are maximally co-active when both equal 1.
How can we stabilize Hebbian models?
One way is to subtract the average expected activity from the activities of both ai and aj.
The sign of the weight change now depends on whether the activities lie above or below their expected averages, so the same rule can both strengthen and weaken a synapse.
We thus obtain long-term potentiation as well as long-term depression from the same function.
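This covariance-style stabilization can be sketched as follows (the mean activity value is illustrative): subtracting the expected average from both activities makes the weight change signed.

```python
# Covariance-style Hebbian rule: dw = eta * (a_i - a_mean) * (a_j - a_mean)
eta = 0.1
a_mean = 0.5  # expected average activity (illustrative value)

def dw(a_i, a_j):
    return eta * (a_i - a_mean) * (a_j - a_mean)

ltp = dw(0.9, 0.9)  # both above average -> positive change (potentiation)
ltd = dw(0.9, 0.1)  # one above, one below -> negative change (depression)
```

Unlike the plain Hebbian rule, the update can now be negative, so weights no longer grow monotonically.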
What is the relationship between spike-timing-dependent plasticity and Hebbian learning?
Spike-timing-dependent plasticity (STDP) is the "spiking version" of Hebbian learning. It refines the Hebbian idea that neurons must be co-active into the idea that their spikes must occur within a certain time window of each other, with the sign of the weight change depending on the order of the pre- and postsynaptic spikes.
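A common way to formalize this is an exponential STDP window (the amplitude and time-constant values below are illustrative): a presynaptic spike shortly before a postsynaptic spike potentiates the synapse, the reverse order depresses it, and the effect decays with the interval.

```python
import numpy as np

def stdp(delta_t, A_plus=0.1, A_minus=0.12, tau=20.0):
    """Illustrative exponential STDP window.

    delta_t = t_post - t_pre in ms.
    Pre-before-post (delta_t > 0) -> potentiation;
    post-before-pre (delta_t < 0) -> depression.
    """
    if delta_t > 0:
        return A_plus * np.exp(-delta_t / tau)
    elif delta_t < 0:
        return -A_minus * np.exp(delta_t / tau)
    return 0.0

ltp = stdp(10.0)   # pre fires 10 ms before post -> positive change
ltd = stdp(-10.0)  # post fires 10 ms before pre -> negative change
```

In the limit of a very wide time window, this recovers the original Hebbian notion of plain co-activity.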