6.17 - Self-organisation and Hebbian Learning Flashcards

1
Q

What is the difference between topography and topology of neural networks?

A

Topography.

A topographic map is the ordered projection of a sensory surface, like the retina or the skin, or an effector system, like the musculature, to one or more structures of the central nervous system. Topographic maps can be found in all sensory systems and in many motor systems.

Topology.

Network topology refers to the layout of a network: how the different nodes are connected to each other and how they communicate is determined by the network's topology.

2
Q

Make the adjacency matrix for this network

A

Note: this is an undirected network, so its adjacency matrix is symmetric (A[i, j] = A[j, i]).
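
The graph itself appears only as an image in the original card, so as an illustration here is how an adjacency matrix is built for a small hypothetical undirected network (nodes 0–3, edges chosen arbitrarily). The symmetry of the matrix reflects the undirected edges.

```python
import numpy as np

# Hypothetical undirected network (the original graph is an image):
# nodes 0-3, edges given as unordered pairs.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n_nodes = 4

A = np.zeros((n_nodes, n_nodes), dtype=int)
for i, j in edges:
    # Undirected edge: fill both (i, j) and (j, i), so A stays symmetric
    A[i, j] = 1
    A[j, i] = 1

print(A)
# [[0 1 1 0]
#  [1 0 1 0]
#  [1 1 0 1]
#  [0 0 1 0]]
```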

3
Q

What is the difference between neural plasticity and learning?

A

Learning is a cognitive process, while plasticity is a neural mechanism.

4
Q

What is homeostatic plasticity?

A

The capacity of neurons to change their own parameters in order to regulate their excitability; it acts as a compensatory mechanism that keeps activity within a stable operating range.

5
Q

Describe a conductance-based synapse

A

I_syn = g_syn (V − V_syn)

When a signal arrives from the presynaptic neuron, a current is generated in the postsynaptic neuron that is a function of the synaptic conductance g_syn and the driving force (V − V_syn), where V_syn is the reversal potential for that type of synapse.

The resulting flow of ions is therefore modulated both by the membrane potential of the postsynaptic neuron and by the synaptic conductance.
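
A minimal numerical sketch of this idea (my own illustration, not from the card): the synaptic conductance jumps when a presynaptic spike arrives and decays exponentially, and the resulting current depends on the postsynaptic potential through the driving force (V − V_syn). The time constant, reversal potential, and spike times are assumed values.

```python
dt = 0.1                      # ms, integration time step
tau_syn = 5.0                 # ms, conductance decay time constant (assumed)
E_syn = 0.0                   # mV, synaptic reversal potential V_syn (excitatory, assumed)
g_inc = 0.5                   # nS, conductance increment per presynaptic spike (assumed)
V = -65.0                     # mV, postsynaptic membrane potential (held fixed here)
spike_times = [5.0, 20.0]     # ms, presynaptic spike times (assumed)

g_syn = 0.0
for step in range(400):
    t = step * dt
    # Conductance jumps when a presynaptic spike arrives, then decays exponentially
    if any(abs(t - ts) < dt / 2 for ts in spike_times):
        g_syn += g_inc
    g_syn -= dt * g_syn / tau_syn

    # Conductance-based synaptic current: depends on the driving force (V - V_syn)
    I_syn = g_syn * (V - E_syn)
```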

6
Q

Describe a current-based synapse model

A

A model that approximates the conductance-modulated flow of ions as an injected current that does not depend on the postsynaptic membrane potential.

This simplification is known as a current-based synapse, and it is less biologically plausible than the conductance-based model.
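
For contrast, a sketch of the current-based simplification under the same assumed spike train as above: the injected current follows an exponential time course, but the postsynaptic membrane potential V no longer appears in the expression.

```python
dt = 0.1                      # ms, integration time step
tau_syn = 5.0                 # ms, decay time constant of the injected current (assumed)
w = 0.3                       # nA, current increment per presynaptic spike (assumed weight)
spike_times = [5.0, 20.0]     # ms, presynaptic spike times (assumed)

I_syn = 0.0
for step in range(400):
    t = step * dt
    # Current jumps at each presynaptic spike and then decays on its own;
    # the postsynaptic membrane potential V plays no role in this expression.
    if any(abs(t - ts) < dt / 2 for ts in spike_times):
        I_syn += w
    I_syn -= dt * I_syn / tau_syn
```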

7
Q

Why is the conductance-based model a more accurate representation of a synapse?

A

The conductance-based model adds two elements:

(1) The proportion of ion channels that are open (the conductance g_syn)
(2) The driving force for the synapse (V − V_syn)

8
Q

In terms of temporal characteristics, is the GABAB postsynaptic current slow or fast?

A

Slow.

9
Q

What does η define in this synaptic plasticity equation?

A

The learning rate: it scales the size of each weight update.

10
Q

What is problematic with this learning rule?

A

The weights are not bounded: they will grow without limit as long as the neurons continue to be activated together.
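
A small sketch (my own illustration, with assumed values) of why plain Hebbian weights explode: for a linear neuron a_j = w · a_i, the update Δw = η a_i a_j = η a_i² w feeds back on w, so the weight grows exponentially.

```python
eta = 0.1    # learning rate (assumed value)
a_i = 1.0    # presynaptic activity, held constant for illustration
w = 0.5      # initial weight

for step in range(20):
    a_j = w * a_i             # linear postsynaptic response
    w += eta * a_i * a_j      # plain Hebbian update: positive feedback on w

# Each step multiplies w by (1 + eta * a_i**2), so w grows exponentially:
print(w)     # ~ 0.5 * 1.1**20 ≈ 3.4
```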

11
Q

What is the difference between supervised and unsupervised models?

A

An unsupervised model (such as Hebbian learning) self-organizes and tries to make sense of the data on its own, without predefined labels or error signals; there is no indication of "what should be learned".

In a supervised model, the error is predefined: there is a cost function, and the algorithm (for example via gradient descent) adjusts its parameters to reduce that error.

12
Q

What are the properties of reward/reinforcement learning?

A

(1) Success is predefined
(2) There exists a reward function
(3) There is a memory trace to remember previous steps

13
Q

The formula below depicts the principle underlying Hebbian learning. Explain what this formula means.

A

At a given time (implicit here), the weight between two neurons changes as a function of the product (correlation) of their activities, scaled by the learning rate η.

If we assume that a_i and a_j are bounded by a sigmoid so that they only take values between 0 and 1, the weight change is largest when both activities are 1, i.e. when the neurons are maximally co-active.
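
The formula itself is shown as an image on the card; assuming the standard form Δw_ij = η a_i a_j, a minimal sketch with sigmoid-bounded activities (all values assumed):

```python
import numpy as np

def sigmoid(x):
    # Bounded activation in (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

eta = 0.05                 # learning rate eta (assumed value)
a_i = sigmoid(2.0)         # presynaptic activity, roughly 0.88
a_j = sigmoid(1.5)         # postsynaptic activity, roughly 0.82

# Hebbian update: weight change proportional to the product of the two activities
delta_w = eta * a_i * a_j
# delta_w approaches its maximum (eta) only when both activities approach 1,
# i.e. when the two neurons are maximally co-active.
```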

14
Q

How can we stabilize Hebbian models?

A

One way is to subtract the expected average activity from both a_i and a_j.

With this mean-subtracted rule, if one neuron's activity falls below its expected average while the other's is above it, the product is negative and the weight is depressed; when both activities exceed their averages, the weight is potentiated.

We therefore get long-term potentiation as well as long-term depression from the same function.
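
A sketch of this mean-subtracted (covariance-style) rule, using assumed average activities of 0.5, showing how the same function yields both positive and negative weight changes:

```python
eta = 0.05                       # learning rate (assumed)
a_i_mean, a_j_mean = 0.5, 0.5    # expected average activities (assumed)

def delta_w(a_i, a_j):
    # Mean-subtracted (covariance-style) Hebbian rule
    return eta * (a_i - a_i_mean) * (a_j - a_j_mean)

print(delta_w(0.9, 0.8))   # both above average         -> positive (LTP-like)
print(delta_w(0.9, 0.2))   # postsynaptic below average -> negative (LTD-like)
```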

15
Q

What is the relationship between spike-timing-dependent plasticity and Hebbian learning?

A

Spike-timing-dependent plasticity is the "spiking version" of Hebbian learning. It refines the Hebbian requirement that neurons be co-active into a requirement on relative spike timing: the weight change depends on the interval between the presynaptic and postsynaptic spikes (typically potentiation when the presynaptic neuron fires shortly before the postsynaptic neuron, and depression when it fires shortly after).
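
A sketch of a standard exponential STDP window (the amplitudes and time constants are assumptions, not taken from the card):

```python
import numpy as np

A_plus, A_minus = 0.01, 0.012     # potentiation / depression amplitudes (assumed)
tau_plus, tau_minus = 20.0, 20.0  # ms, time constants of the STDP window (assumed)

def stdp(delta_t):
    """Weight change as a function of delta_t = t_post - t_pre (in ms)."""
    if delta_t > 0:
        # Presynaptic spike precedes the postsynaptic spike -> potentiation
        return A_plus * np.exp(-delta_t / tau_plus)
    # Presynaptic spike follows the postsynaptic spike -> depression
    return -A_minus * np.exp(delta_t / tau_minus)

print(stdp(+10.0))   # pre 10 ms before post -> positive weight change
print(stdp(-10.0))   # pre 10 ms after post  -> negative weight change
```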

16
Q

Write a weighted adjacency matrix for this graph:

A
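
The graph and its matrix appear only as images on the original card, so as an illustration here is a weighted adjacency matrix for a small hypothetical undirected graph: entry W[i, j] holds the weight of the edge between nodes i and j, and 0 means there is no edge.

```python
import numpy as np

# Hypothetical weighted undirected graph (the original graph is an image):
# each edge is (node, node, weight).
weighted_edges = [(0, 1, 0.5), (0, 2, 1.2), (1, 2, 0.8), (2, 3, 2.0)]
n_nodes = 4

W = np.zeros((n_nodes, n_nodes))
for i, j, w in weighted_edges:
    # Undirected graph: keep the matrix symmetric; entries hold edge weights
    W[i, j] = w
    W[j, i] = w

print(W)
```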