Problem 4 Flashcards
Hebb's law
Suggests that if 2 neurons on either side of a synapse are activated simultaneously, the strength of that synapse is selectively increased
Weight
Is determined by the degree of correlated activity between the 2 neurons
e.g.: activate simultaneously = increase; activate separately = reduce
Strong positive vs strong negative weights
- Strong positive weights
- -> nodes that tend to be both positive / both negative at the same time
- Strong negative weights
- -> nodes whose activities tend to be opposite
There are certain drawbacks to the Hebbian learning method.
Name them.
- Interference
- -> the number of associations that can be stored before they begin to interfere with each other is limited
- Saturation
- -> when the weights keep increasing, all units will be active whenever one presents an input to the network
Neo-Hebbian learning
Solution to Saturation
Involves a “forgetting” term that decays each weight a little on every update, so the weights cannot grow without bound
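A minimal sketch of the idea; the symbol names (`e` for the learning rate, `phi` for the forgetting rate) are illustrative assumptions, not notation from the source:

```python
def neo_hebbian_update(w, a_pre, a_post, e=0.1, phi=0.05):
    """One neo-Hebbian update: Hebbian growth plus a "forgetting" decay.

    e   : learning rate (illustrative value)
    phi : forgetting rate (assumed name); removes a little of the weight
          on every update, which keeps the weights from saturating
    """
    return w + e * a_pre * a_post - phi * w
```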
Differential Hebbian learning
Interested in whether the neurons change their activity in the same way
–> not interested in whether the neurons are active at the same time
Drive reinforcement theory
Builds upon differential hebbian learning and introduces time + time difference
–> in order to capture classical conditioning, we have to account for the prediction of the US based on the CS
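A minimal sketch of the differential Hebbian idea, assuming the activity *changes* are already computed; the names and values are illustrative:

```python
def differential_hebbian_update(w, d_pre, d_post, e=0.1):
    """Differential Hebbian update: correlate CHANGES in activity.

    d_pre  : change in presynaptic activity since the last time step
    d_post : change in postsynaptic activity since the last time step

    The weight grows only when both activities change in the same
    direction; the activity LEVELS themselves do not matter.
    Drive reinforcement theory additionally introduces a time lag
    between d_pre and d_post, so earlier CS changes can predict
    later US changes.
    """
    return w + e * d_pre * d_post
```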
Pattern associator
Refers to a particular kind of network, which is presented with pairs of patterns during training
What are the results after successful learning of the pattern associator?
- Recalls one of the patterns at output when the other is presented at input
- Responds to novel inputs by generalizing from the experience it had with similar patterns
Name the advantages of the pattern associator NWs.
- Tolerance of noisy input
- Resistance to internal damage
- Ability to extract a central prototype from similar examples
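A toy sketch of the noise-tolerance advantage, assuming random ±1 patterns and a single Hebbian-trained pair; the sizes and noise level are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
p_in  = rng.choice([-1, 1], size=20)   # input pattern, e.g. sight of chocolate
p_out = rng.choice([-1, 1], size=10)   # output pattern, e.g. taste of chocolate

e = 0.1
W = e * np.outer(p_out, p_in)          # Hebbian training on one pattern pair

cue = p_in.copy()
cue[:3] *= -1                          # corrupt 3 elements of the input
recall = np.sign(W @ cue)              # threshold the net input
print(np.array_equal(recall, p_out))   # True: recall survives the noise
```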
Explain the NS model on the basis of the “taste and sight of chocolate”.
Suggests that during learning in the NS, 2 patterns are presented simultaneously
- Representing the taste of chocolate
- -> reaching the dendrites via unmodifiable synapses
- Representing the sight of chocolate
- -> via modifiable synapses
THUS: learning takes place via the modification of synapses
CN model (Connectionist network model)
Like in the NS, this model suggests that during learning 2 patterns are presented simultaneously
- P1 must be produced at the output units
- P2 is presented to the input units
THUS: pattern association takes place by modifying the strength of the connections between input + output units
In the CN model there are several terms that are equivalent to the terms used in the brain.
Name the equivalent terms for the following:
- Axon -> input line
- Dendrite -> output unit
- Synaptic strength -> weight
Hebb rule for weight change
Δw_ij = e · a_i · a_j
–> the rule is in multiplicative form because, in order for a synapse to increase in strength, both pre- and postsynaptic activity must be present
e
Refers to the learning rate constant which specifies how much a synapse alters in any one pairing of the two patterns
a_i
Refers to the activity of element i in pattern 1
a_j
Refers to the activity of element j in pattern 2
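As a sketch, the same rule in matrix form over all element pairs; the function name and the use of `numpy.outer` are illustrative choices:

```python
import numpy as np

def hebb_weight_change(a1, a2, e=0.1):
    """Δw_ij = e * a_i * a_j for every pairing of elements i and j.

    Multiplicative form: Δw_ij is nonzero only when BOTH a_i and a_j
    are nonzero, i.e. pre- and postsynaptic activity are both present.
    """
    return e * np.outer(a1, a2)   # full matrix of weight changes
```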
Pattern associators have several important properties.
Name 6 of them.
- Generalization
- -> if a recall cue is similar to an already learnt pattern, the network produces a similar response to the new pattern
- Fault tolerance
- -> even if some synapses on neuron i are damaged, the net input may still be a good approximation
- Distributed representations
- -> one has to know the state of most elements to know which pattern is represented
- Prototype extraction + noise removal
- Speed
- -> recall is fast
- Interference
- -> not necessarily a bad thing; it underlies generalization and prototype extraction
Autoassociative memories
Are capable of retrieving a piece of data upon presentation of only partial info from that piece of data
e.g.: Hopfield NWs, capable of recalling data when presented with only a portion of that data
Hopfield NWs can take on 2 different forms.
Name them.
- Asynchronous
- -> one unit is updated at a time
- Synchronous
- -> all units are updated at the same time
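A minimal sketch of the two update modes, assuming ±1 unit states and a weight matrix `W`; the function and parameter names are illustrative:

```python
import numpy as np

def hopfield_step(W, s, mode="asynchronous", rng=None):
    """One update step of a Hopfield NW with weight matrix W and state s (±1)."""
    rng = rng or np.random.default_rng()
    s = s.copy()
    if mode == "asynchronous":
        i = rng.integers(len(s))            # one randomly chosen unit...
        s[i] = 1 if W[i] @ s >= 0 else -1   # ...is updated at a time
    else:
        s = np.where(W @ s >= 0, 1, -1)     # all units updated at the same time
    return s
```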
Autoassociator
Refers to a particular form of pattern associator which is trained with the delta rule
–> its aim is to reproduce the same pattern at output that was present at input
What are the 3 main advantages of autoassociators?
- Store independent memories on the same set of connections
- Perform well with incomplete/noisy input
- Automatically form prototypical instances of categories
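A minimal sketch of delta-rule training for an autoassociator, where the target output is the input pattern itself; the hyperparameters are illustrative:

```python
import numpy as np

def train_autoassociator(patterns, e=0.05, epochs=200):
    """Delta-rule training: the target output IS the input pattern itself."""
    n = patterns[0].size
    W = np.zeros((n, n))
    for _ in range(epochs):
        for p in patterns:
            error = p - W @ p            # delta rule: target minus actual output
            W += e * np.outer(error, p)  # reduce the error on this pattern
    return W
```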
Recurrent connections
Refer to connections whereby the output line of each unit is connected to the dendrites of the other units
–> present in autoassociators
Competitive learning
Refers to a variant of Hebbian learning
–> here output units are in competition for input patterns
Why do these competitive neurons compete with each other in competitive learning?
To determine which of them is most similar to that particular input vector
–> thus the more strongly one particular unit responds to an incoming stimulus, the more it shuts down the other members of the cluster
What is the basic structure of competitive learning models?
A set of hierarchically layered units, with each layer connecting to the layer above it
–> connections within a layer are inhibitory; connections between layers are excitatory
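A minimal sketch using an explicit winner-take-all step, a common simplification of the within-layer inhibition described above; the names are illustrative:

```python
import numpy as np

def competitive_step(W, x, e=0.1):
    """One winner-take-all learning step.

    W : (units, inputs) matrix; each row is one unit's weight vector
    x : one input vector
    """
    winner = np.argmax(W @ x)               # unit most similar to the input
    W[winner] += e * (x - W[winner])        # move the winner's weights toward x
    W[winner] /= np.linalg.norm(W[winner])  # keep the weight vector normalized
    return winner
```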