Animal learning Flashcards

1
Q

What is supervised learning?

A

Supervised learning involves presenting a network with example inputs together with information about what the correct outputs should be, and modifying the synaptic weights so that the output matches the target. The delta rule is an example of a supervised learning rule.
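A minimal sketch of supervised learning for a single linear unit, assuming the delta rule as the update; the learning rate and example data are illustrative, not from the notes:

```python
# Supervised learning sketch: each training example pairs an input with
# the correct (target) output, and the weights move to reduce the mismatch.
alpha = 0.1                                         # learning rate (assumed)
examples = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0)]   # (input, target) pairs
w = [0.0, 0.0]                                      # synaptic weights
for s, target in examples:
    output = sum(wi * si for wi, si in zip(w, s))   # network's actual output
    # delta rule: weight change is proportional to (target - output)
    w = [wi + alpha * (target - output) * si for wi, si in zip(w, s)]
print(w)
```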

2
Q

What is unsupervised learning?

A

Hebbian learning: “cells that fire together wire together”. No correct output is supplied during the learning process.
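A minimal sketch of a Hebbian update, assuming a simple rate-based rule (weight change proportional to pre- times postsynaptic activity); the numbers are illustrative. In contrast to the supervised sketch above, no target appears anywhere in the update:

```python
# Hebbian learning sketch: the weight change depends only on the
# correlation of pre- and postsynaptic activity; no target is supplied.
eta = 0.05                                 # learning rate (assumed)
pre = [1.0, 0.0, 1.0]                      # presynaptic firing rates
w = [0.1, 0.1, 0.1]                        # synaptic weights
post = sum(wi * xi for wi, xi in zip(w, pre))  # postsynaptic activity
w = [wi + eta * xi * post for wi, xi in zip(w, pre)]
print(w)  # weights onto co-active inputs grow; silent inputs are unchanged
```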

3
Q

How was the delta rule discovered?

A

The delta rule was discovered in neural network research, as a way of modifying the synaptic weights of an artificial neuron.

4
Q

How was the Rescorla-Wagner model discovered?

A

It was discovered as a way of describing how associative strengths should change during a classical conditioning task.

5
Q

What is the calculation for the Rescorla-Wagner model?

A

Gradient descent to reduce the error with respect to a specific synaptic weight (or associative strength):
1. Define the output as the sum of the inputs multiplied by the weights, e.g. m = s1 − s2 becomes m = w1s1 + w2s2 with w1 = 1 and w2 = −1.
2. Define the error as the squared difference between the output and the target.
3. Perform gradient descent on this error function: find the derivative with respect to each weight (chain rule) and add a minus sign.
The result: ΔWi = α (target − ∑WjSj) Si.
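A sketch of this calculation in code, assuming a single linear unit, squared error, and illustrative values for α, the inputs, and the target; repeated gradient descent steps drive the prediction towards the target:

```python
# Gradient descent on the squared error of a linear unit reproduces the
# delta rule: dE/dWi = -(target - sum(Wj*Sj)) * Si, so stepping down the
# gradient gives the update alpha * surprise * Si.
alpha = 0.1
S = [1.0, 1.0]          # inputs (e.g. two conditioned stimuli, both present)
W = [0.0, 0.0]          # associative strengths / synaptic weights
target = 1.0            # outcome (e.g. food present)

for trial in range(50):
    output = sum(w * s for w, s in zip(W, S))
    surprise = target - output
    # delta rule: step each weight down the error gradient
    W = [w + alpha * surprise * s for w, s in zip(W, S)]

print(W)  # both weights approach 0.5, so the summed prediction reaches 1.0
```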

6
Q

What does each part of the Rescorla-Wagner model mean?

A

ΔWi = α (target − ∑WjSj) Si

ΔWi = the change in weight i
α = the learning rate
target − output (∑WjSj) = the surprise; the greater the surprise, the greater the learning
Si = the input.
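A worked one-trial example with assumed numbers, showing what each term contributes:

```python
# One Rescorla-Wagner trial with illustrative numbers.
alpha = 0.2                        # learning rate (assumed)
S = [1.0, 1.0]                     # both stimuli present on this trial
W = [0.3, 0.1]                     # current associative strengths
target = 1.0                       # the outcome occurs

output = sum(w * s for w, s in zip(W, S))   # prediction = 0.4
surprise = target - output                  # surprise  = 0.6
dW = [alpha * surprise * s for s in S]      # change    = 0.12 per stimulus
print(output, surprise, dW)
```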

7
Q

Why is the chain rule used to derive the animal learning model but not used in homeostasis?

A

The chain rule is a helpful trick in calculus for finding the derivative of a function of a function. Because changing the synaptic weights changes the output, which in turn changes the error, the chain rule is needed.
Homeostasis does not involve changing weights, and therefore the chain rule is not needed for homeostasis.
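A sketch of the derivation in math notation, assuming squared error and a linear output (t = target, m = output, wᵢ and sᵢ as in the previous cards):

```latex
% The error E depends on a weight w_i only through the output m,
% so the chain rule splits the derivative into two factors.
E = (t - m)^2, \qquad m = \sum_j w_j s_j
\frac{\partial E}{\partial w_i}
  = \frac{\partial E}{\partial m} \cdot \frac{\partial m}{\partial w_i}
  = -2 (t - m) \, s_i
% Adding a minus sign (moving down the gradient) and absorbing the
% factor of 2 into the learning rate alpha gives the delta rule:
\Delta w_i = \alpha (t - m) s_i
```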

8
Q

How can blocking be explained by the Rescorla-Wagner model?

A

The Rescorla-Wagner model predicts that learning occurs when there is a difference between the predicted outcome and the actual outcome (surprise).
Therefore, in a blocking design there is no surprise during the second phase, and therefore no learning about the added cue.
BLOCKING:
1. The animal is conditioned so that the bell predicts food.
2. The bell and a light together then predict food.
3. The light on its own produces no response: no learning about the light has occurred.
This is cue competition.
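A simulation sketch of blocking under the Rescorla-Wagner rule; the stimulus names, α, and trial counts are assumptions for illustration:

```python
# Rescorla-Wagner simulation of blocking.
alpha = 0.2
W = {"bell": 0.0, "light": 0.0}

def trial(present, target):
    """One conditioning trial: update the weights of the stimuli present."""
    output = sum(W[s] for s in present)
    surprise = target - output
    for s in present:
        W[s] += alpha * surprise

# Phase 1: bell alone predicts food.
for _ in range(50):
    trial(["bell"], target=1.0)

# Phase 2: bell + light together predict food.
for _ in range(50):
    trial(["bell", "light"], target=1.0)

print(W)  # bell ~ 1.0, light ~ 0.0: the bell already predicts food,
          # so there is no surprise left for the light to learn from
```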

9
Q

What is backpropagation?

A

A supervised learning algorithm for updating the synaptic weights in a neural network. Gradient descent gives us the delta rule, which tells us how to modify the weights of a single neuron to reduce the error that is due to its output.
However, an extra step is required when considering networks of neurons, because of the way that negative output values are dealt with in networks.
The principle is the same as the delta rule (gradient descent), but the chain rule is applied more times.
The result is that the error is propagated back through the weights before each weight is updated.
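A minimal backpropagation sketch for a network with one hidden layer; the architecture, activation function, learning rate, and XOR dataset are assumptions for illustration:

```python
import math, random

# Tiny one-hidden-layer network trained by backpropagation (a sketch).
# Hidden units use a sigmoid; the output is linear; loss is squared error.
random.seed(0)
sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))

n_in, n_hid = 3, 3   # 2 inputs + 1 constant bias input
W1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
W2 = [random.uniform(-1, 1) for _ in range(n_hid + 1)]  # +1 output bias
alpha = 0.1

# Toy dataset: XOR, which a single neuron (the delta rule alone) cannot solve.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    xb = x + [1.0]                                   # append bias input
    h = [sigmoid(sum(w * xi for w, xi in zip(row, xb))) for row in W1]
    hb = h + [1.0]                                   # append bias unit
    y = sum(w * hi for w, hi in zip(W2, hb))
    return xb, h, hb, y

for epoch in range(20000):
    for x, t in data:
        xb, h, hb, y = forward(x)
        err = t - y                                  # output error (surprise)
        # Backward pass: the output error is sent back through W2, then the
        # chain rule is applied once more through each hidden unit's sigmoid.
        deltas = [err * W2[j] * h[j] * (1 - h[j]) for j in range(n_hid)]
        for j in range(n_hid + 1):
            W2[j] += alpha * err * hb[j]
        for j in range(n_hid):
            for i in range(n_in):
                W1[j][i] += alpha * deltas[j] * xb[i]

for x, t in data:
    print(x, t, round(forward(x)[3], 2))  # predictions typically approach XOR targets
```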
