Computational Neuroscience 2 Flashcards

1
Q

Learning is achieved by…

A

…modifying the synaptic weights of the network

2
Q

The first learning algorithm that automatically changed the weights in a neural network came from…

A

…Widrow and Hoff, who were trying to devise an automatic speech-to-text system

3
Q

Widrow-Hoff rule

A
  1. Initial weights can have random, or zero, values.
  2. A CS is presented and the output is recorded.
  3. The output is compared to a desired output; if there is an error, the weights are adjusted to reduce the error (supervised learning).
    This is the same error-correction principle as in the Rescorla-Wagner model; see the sketch below.
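A minimal sketch of the rule in Python; the function and variable names (widrow_hoff_update, cs, desired) are illustrative, not from the original material:

import numpy as np

def widrow_hoff_update(w, cs, desired, lr=0.1):
    """One trial of the Widrow-Hoff (delta) rule: present a CS,
    compare the output to the desired output, and adjust the
    weights in proportion to the error."""
    output = np.dot(w, cs)       # step 2: present the CS, record the output
    error = desired - output     # step 3: compare with the desired output
    return w + lr * error * cs, error   # adjust weights to reduce the error

# Step 1: initial weights can be random or zero.
w = np.zeros(3)
cs = np.array([1.0, 0.0, 1.0])   # an arbitrary CS input pattern
for trial in range(30):
    w, err = widrow_hoff_update(w, cs, desired=1.0)
print(np.dot(w, cs))   # the output converges towards the desired value 1.0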
4
Q

Extinction trials

A

Trials in which the learned association is ‘unlearned’. This is achieved by repeatedly exposing the animal to the CS without the US.
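Under the error-correction view of the previous cards, extinction is just learning towards a desired output of zero; a minimal self-contained sketch (parameter values are illustrative):

import numpy as np

# Start from a fully conditioned weight (associative strength 1.0) and
# present the CS alone, i.e. with a desired output of 0 (US absent).
w, cs, lr = np.array([1.0]), np.array([1.0]), 0.2
for trial in range(20):
    w = w + lr * (0.0 - np.dot(w, cs)) * cs   # error-correction update, US = 0
print(float(np.dot(w, cs)))   # the conditioned response fades towards zero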

5
Q

The blocking effect

A

Pavlov suggested that it was only the co-occurrence of the CS and US that was key to conditioning.
However, by the late 60s several studies showed that, in order to produce conditioning, a stimulus needs to have predictive power about the US.
Rabbits are separated into two groups: a pre-trained group and a sit-exposure group.
Phase 1
• The pre-trained group is trained with the tone as CS before the airpuff US.
• Meanwhile the sit-exposure animals sit in the experimental chamber.
Phase 2
• Both groups are trained to respond to a CS consisting of simultaneous tone and light (read the paper).
• The pre-trained animals learn little about the light: the tone already predicts the US, so the light adds no predictive power and conditioning to it is ‘blocked’. A simulation follows below.
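Blocking falls directly out of the Rescorla-Wagner error-correction rule; a minimal simulation of the two phases (learning rate and trial counts are illustrative):

import numpy as np

def rescorla_wagner(v, stimuli, us, lr=0.2):
    """One Rescorla-Wagner trial: the change in associative strength is
    driven by the difference between the US and the summed prediction
    of all stimuli that are present."""
    error = us - np.dot(v, stimuli)
    return v + lr * error * stimuli

v = np.zeros(2)   # associative strengths for [tone, light]

# Phase 1 (pre-trained group only): the tone alone predicts the US.
for trial in range(50):
    v = rescorla_wagner(v, np.array([1.0, 0.0]), us=1.0)

# Phase 2 (both groups): simultaneous tone + light predict the US.
for trial in range(50):
    v = rescorla_wagner(v, np.array([1.0, 1.0]), us=1.0)

print(v)   # tone ~1.0, light ~0.0: the tone already predicts the US, so the
           # prediction error is ~0 and the light gains almost no strength

Running only Phase 2 from zero weights (the sit-exposure group) instead splits the associative strength roughly equally between tone and light.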

6
Q

Error correction learning in the brain

A

The Rescorla-Wagner model requires a teaching signal to act as a feedback mechanism.
Therefore, it is not, on its own, a model of learning at the level of the neuron.
However, there is evidence that certain brain structures do provide a feedback signal that the system can use as a teaching signal.
• One example is the cerebellum, an area heavily involved in learning reflex responses.
• In 1968, Richard Thompson suggested that there was a teaching signal in the cerebellum that the brain could use in conditioning.

7
Q

Error correction in the cerebellum

A

• CS information reaches the cerebellum.
• Information about the US (airpuff) travels through the inferior olive; climbing fibers carry this information to the cerebellum.
• The output of the cerebellum is related to the conditioned response (eyeblink).
• Part of the output also travels back to the inferior olive through inhibitory connections.
• The pathway activated by the US acts as the desired output, so the difference between the US and the CR determines the activity of the inferior olive:
Inferior olive activity = US - CR
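A schematic sketch of this loop as an error-correction update (a conceptual caricature, not a biophysical model; all names are illustrative):

import numpy as np

def cerebellar_trial(w, cs, us, lr=0.1):
    """One schematic trial of the cerebellar loop: the inferior olive
    carries US minus CR, and that signal adjusts the CS weights."""
    cr = np.dot(w, cs)     # cerebellar output drives the conditioned response
    io = us - cr           # olive activity: US input minus inhibitory CR feedback
    w = w + lr * io * cs   # climbing-fiber signal acts as the teaching signal
    return w, cr, io

w = np.zeros(2)
cs = np.array([1.0, 0.0])   # a tone CS
for trial in range(50):
    w, cr, io = cerebellar_trial(w, cs, us=1.0)
print(cr, io)   # the CR approaches the US, and olive activity decays to ~0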

8
Q

Generalisation

A

The capacity to respond to stimuli that were not present in the training set, or the degree to which learning about one stimulus transfers to other stimuli.

9
Q

Perceptron

A

A supervised learning algorithm for binary classification: a single-layer network with a threshold output unit whose weights are adjusted by an error-correction rule (Rosenblatt, 1958).
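A minimal sketch of Rosenblatt-style perceptron training on a toy linearly separable problem (logical AND); all names and parameters are illustrative:

import numpy as np

def perceptron_train(X, y, epochs=20, lr=1.0):
    """Perceptron rule: a threshold unit whose weights are updated
    only on the trials where it misclassifies an input."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(X, y):
            pred = 1 if np.dot(w, x) + b > 0 else 0
            w += lr * (target - pred) * x   # no error, no change
            b += lr * (target - pred)
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])   # logical AND is linearly separable
w, b = perceptron_train(X, y)
print([1 if np.dot(w, x) + b > 0 else 0 for x in X])   # [0, 0, 0, 1]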

10
Q

Generalisation gradients in pigeons

A

Come back to

11
Q

Generalisation in one layer network

A

Networks with local representation cannot generalise: each stimulus activates its own separate node, so weights learned for one stimulus never influence the response to any other.

12
Q

Distributed representation

A

Also known as coarse coding.
It allows for the possibility of generalisation.
The representation is also more robust: the loss of a few neurons will not dramatically degrade the performance of the network.

13
Q

In distributed representation, each stimulus is…

A

…mapped onto more than one node. The mapping can be topographic, but that is not necessary for the network to work. A key factor is that the degree of overlap between the representations of two stimuli reflects their degree of similarity, as the sketch below illustrates.
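A toy illustration of why overlap matters, assuming a simple dot-product readout (the patterns are made up for illustration):

import numpy as np

# Local (one-hot) codes: two stimuli share no active nodes, so whatever
# is learned about one says nothing about the other.
local_a = np.array([1, 0, 0, 0, 0, 0], dtype=float)
local_b = np.array([0, 1, 0, 0, 0, 0], dtype=float)

# Distributed codes: similar stimuli activate overlapping sets of nodes,
# and the size of the overlap encodes their similarity.
dist_a = np.array([1, 1, 1, 0, 0, 0], dtype=float)
dist_b = np.array([0, 1, 1, 1, 0, 0], dtype=float)   # shares 2 of 3 nodes with dist_a

# Treat the trained weight vector as a copy of stimulus A's pattern
# (a simple Hebbian association) and probe with stimulus B.
print(np.dot(local_a, local_b))   # 0.0 -> no transfer between local codes
print(np.dot(dist_a, dist_b))     # 2.0 -> graded transfer, proportional to overlap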
