Lecture 11-12 Flashcards

computational methods

1
Q

Three main categories of learning strategies

A

Unsupervised learning: the neural network receives input from the outside world, and its synaptic weights change as a consequence of this input.

Supervised learning: the neural network receives input from the outside world and also the desired output, so that the network can change its synaptic weights to reach such output.

Reinforcement learning: the neural network receives input from the outside world, and a reward/punishment teaching signal which biases the learning towards a desired output.

2
Q

Biological examples of the learning strategies

A
  • Unsupervised learning: for example, receptive fields.
  • Supervised learning: links with biological mechanisms still unclear. A good
    candidate is learning in the cerebellum (teaching signals).
  • Reinforcement learning: classical conditioning.
3
Q

BCM rule

A

Tries to explain learning in the visual system; an extension of the Hebb rule.
It solves two important aspects of the stability problem of the Hebb rule:

BCM adds a depression factor, so that synapses can also weaken.
It also adds a (sliding) threshold, so that a synapse cannot become over-activated.
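
The two points above can be sketched in a few lines. This is a minimal illustrative model, not the lecture's exact formulation; the parameter names and values are assumptions:

```python
import numpy as np

# BCM sketch: dw = eta * x * y * (y - theta).
# The (y - theta) factor lets a synapse depress when postsynaptic activity y
# is below the sliding threshold theta; theta itself tracks recent activity
# (y^2), so the synapse cannot run away into ever-stronger potentiation.

def bcm_step(w, x, theta, eta=0.01, tau=0.1):
    y = float(np.dot(w, x))                  # postsynaptic activity
    w = w + eta * x * y * (y - theta)        # potentiate if y > theta, depress if y < theta
    theta = theta + tau * (y ** 2 - theta)   # sliding modification threshold
    return w, theta

w, theta = np.array([0.5, 0.5]), 0.1
for _ in range(20):
    w, theta = bcm_step(w, np.array([1.0, 0.0]), theta)
```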

4
Q

Similarities and differences between BCM and Oja

A

Similarity: both try to solve the stability problem of the Hebb rule.
Difference: Oja adds a forgetting term, so that strong synapses cannot keep getting stronger.
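
Oja's forgetting term can be shown in a short sketch (constants are illustrative): the Hebbian term y*x is balanced by -y²*w, which keeps the weight vector bounded instead of letting it grow without limit.

```python
import numpy as np

# Oja's rule: dw = eta * (y*x - y^2 * w).
# The -y^2 * w "forgetting" term shrinks large weights, so the norm of w
# converges toward 1 rather than exploding as in the plain Hebb rule.

def oja_step(w, x, eta=0.05):
    y = float(np.dot(w, x))
    return w + eta * (y * x - y ** 2 * w)

w = np.array([1.0, 1.0])              # starts with norm sqrt(2) > 1
for _ in range(300):
    w = oja_step(w, np.array([1.0, 0.0]))
```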

5
Q

Why do extra rules have to be added for spikes?

A

Spikes do not arise naturally in computer programs, so explicit rules for handling spike timing have to be added.

6
Q

Spike-Timing-Dependent Plasticity (STDP) rule

A

Presynaptic spike before postsynaptic spike = potentiation (LTP)
Postsynaptic spike before presynaptic spike = depression (LTD)
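
The exponential STDP window commonly used in models captures both cases; the amplitudes and time constant below are assumptions, not lecture values:

```python
import math

# STDP window: the sign of dt = t_post - t_pre decides the direction of the
# weight change, and the size of the change decays with the spike interval.
def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """dt_ms > 0: pre-before-post -> LTP; dt_ms < 0: post-before-pre -> LTD."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)    # potentiation
    return -a_minus * math.exp(dt_ms / tau_ms)       # depression
```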

7
Q

triplet rule

A

You only get LTP with POST-PRE-POST activation.

8
Q

Which spikes and depolarizations are necessary for LTP?

A

A presynaptic spike and postsynaptic depolarization (no postsynaptic spike needed).

9
Q

burst rule

A

A rule that does not follow the Hebb rule: bursts of spikes that follow each other quickly induce LTD.
Fish use this in the water to detect magnetic fields.

10
Q

Boolean functions AND & OR

A

Two synapses fire onto a dendrite; the dendrite only fires onward if both synapses fire (AND) or if at least one of the two fires (OR).
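
This can be sketched as a single threshold unit: with both synaptic weights fixed at 1, only the firing threshold decides whether the dendrite computes AND or OR.

```python
# Threshold unit on a dendrite: s1 and s2 are binary (0/1) synaptic inputs.
def dendrite_fires(s1, s2, threshold):
    return (s1 + s2) >= threshold

def AND(s1, s2): return dendrite_fires(s1, s2, threshold=2)  # both synapses must fire
def OR(s1, s2):  return dendrite_fires(s1, s2, threshold=1)  # one firing synapse is enough
```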

11
Q

Perceptron & classifiers

A

Perceptrons are used as classifiers: several features are combined to compute a dividing line, for example between dogs and cats.
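
A minimal sketch of perceptron learning (the two features and the labels below are made up for illustration): the learned weights (w, b) define the dividing line w·x + b = 0.

```python
import numpy as np

# Classic perceptron learning rule: whenever an example falls on the wrong
# side of the line, rotate/shift the line toward it.
def train_perceptron(X, y, epochs=50, lr=0.1):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):                # yi is +1 or -1
            if yi * (np.dot(w, xi) + b) <= 0:   # misclassified example
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Two hypothetical features per animal; +1 = "dog", -1 = "cat".
X = np.array([[2.0, 2.0], [1.5, 2.5], [-1.0, -1.0], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
w, b = train_perceptron(X, y)
```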

11
Q

How is the problem solved that the classifier's dividing line cannot be drawn because the two classes overlap in their parameters?

A

With an extra hidden layer, for example, which makes it possible to draw two dividing lines.

11
Q

universal approximation theorem

A

It can be mathematically proven that a feedforward (FF) neural network with at least one hidden layer is able to approximate any continuous function, and can therefore in principle solve any classification problem given enough training, input, etc.

12
Q

deep neural networks

A

FF neural networks can contain more than one hidden layer. This raises their computing power even further and allows them to solve problems like face recognition, speech-to-text translation, and real-world object classification.

When the number of hidden layers is sufficiently large, FF neural networks are usually referred to as deep neural networks.

13
Q

backpropagation

A

In a nutshell, backpropagation works like this:
1. Compare the real vs desired output of the model
2. Slightly change the weights in the direction that decreases the error
3. Repeat this process as you travel backwards in the network.
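
The three steps above can be sketched for a tiny 2-2-1 network trained on a single example (all sizes, constants, and values are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x, target = np.array([0.5, -0.2]), 1.0
W1 = rng.normal(size=(2, 2))        # input -> hidden weights
w2 = rng.normal(size=2)             # hidden -> output weights

def forward():
    h = sigmoid(W1 @ x)
    return h, sigmoid(w2 @ h)

_, y = forward()
loss_before = 0.5 * (y - target) ** 2

for _ in range(200):
    h, y = forward()
    delta_out = (y - target) * y * (1 - y)   # 1. compare real vs desired output
    delta_h = delta_out * w2 * h * (1 - h)   # 3. propagate the error backwards
    w2 -= 0.5 * delta_out * h                # 2. nudge weights to decrease the error
    W1 -= 0.5 * np.outer(delta_h, x)         #    ...in the earlier layer too

_, y = forward()
loss_after = 0.5 * (y - target) ** 2
```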

14
Q

Problems and limitations of deep neural networks:

A

* Catastrophic forgetting: networks tend to forget previously learned information when learning new things.
* Overfitting: if the networks are trained too much, they reach conclusions that do not generalize.
* The existence of many local minima often prevents reaching the optimal solution.
* They are not biologically realistic at the moment.

15
Q

reinforcement learning

A

The neural network receives input from the outside world, and a reward/punishment teaching signal which biases the learning towards a desired output.
* Repetition/classical conditioning. Function of dopamine in the brain.
* Basal ganglia: action selection.
* Cerebellum: error-driven learning.
* Cortex: predictive coding.

16
Q

Temporal differences (TD) rule

A

Adds reward in the future: the value of the current state is updated using the reward received plus the predicted value of the next state (the reward prediction error).
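
As a sketch (learning rate and discount factor are illustrative), the TD error compares the current value estimate V(s) with the received reward plus the discounted value predicted for the next state:

```python
# TD rule: delta = r + gamma * V(s') - V(s), then move V(s) toward the target.
def td_update(V, s, s_next, r, alpha=0.1, gamma=0.9):
    delta = r + gamma * V[s_next] - V[s]   # reward prediction error
    V[s] += alpha * delta                  # update the value estimate
    return delta

V = {"cue": 0.0, "reward_state": 0.0}
delta = td_update(V, "cue", "reward_state", r=1.0)
```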

17
Q

the actor-critic model

A
  • The actor selects a behavior (e.g. go vs no-go) based on sensory input and
    expected reward, and such behavior leads to a reward;
  • The critic compares the reward obtained with the reward expected, and
    calculates the reward prediction error (via TD rule);
  • Both actor and critic learn (i.e. modify their expectations) with the help of
    the prediction error.
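
The loop above can be sketched on a go/no-go choice (all numbers are made up; here "go" always yields the reward): both the actor and the critic learn from the same prediction error.

```python
import numpy as np

rng = np.random.default_rng(1)
V = 0.0                          # critic's expected reward
prefs = np.zeros(2)              # actor's preferences for [no-go, go]

for _ in range(300):
    p = np.exp(prefs) / np.exp(prefs).sum()   # softmax action selection
    a = rng.choice(2, p=p)                    # actor selects a behavior
    r = 1.0 if a == 1 else 0.0                # behavior leads to a reward
    delta = r - V                             # critic: reward prediction error
    V += 0.1 * delta                          # critic updates its expectation
    prefs[a] += 0.1 * delta                   # actor learns from the same error
```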