Week 5: Hopfield Network Flashcards
Why associative memory models? - (4)
- Introduce learning
- Introduce fundamental ideas about associating patterns of neural activity
- Associating patterns or sequences of patterns is needed for episodic memory
- The hippocampal anatomy maps very well onto these ideas
The Hopfield Network uses neurons that are
very simple: standard artificial neurons with no dynamics
Representation of the Hopfield (1982) Associative Memory Network shows: (2)
- All the neurons are connected to each other
- Neuron s_i is connected to neuron s_j with a weight of w_ij
Assumption of the Hopfield Associative Memory Network
Assume a fully connected network with symmetric connections (w_ij = w_ji)
Properties of the Hopfield (1982) Associative Memory Network (5)
- Simple connectionist neurons
- No dynamics
- We impose the update schedule
- Sign function as a transfer function
- Units can be active (s_i = 1) or inactive (s_i = -1)
In the Hopfield network, the sign function as the transfer function means: (2)
- If the input value is below 0, the activity is set to -1
- If the input value is above 0, it is set to 1
Equation of activity of a neuron in the Hopfield Associative Memory Network
s_i = sign( Σ_j w_ij s_j ), i.e., each unit takes the sign of the weighted sum of its inputs
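A minimal sketch of this update in Python (the array names s and W and the tie-breaking at exactly 0 are illustrative assumptions, not from the slides):

```python
import numpy as np

def sign(x):
    # Sign transfer function: +1 if the input is above 0, -1 if below
    # (the convention for an input of exactly 0 varies; here it maps to -1).
    return 1 if x > 0 else -1

def update_unit(s, W, i):
    # s_i = sign( sum_j w_ij * s_j ): unit i takes the sign of the
    # weighted sum of the activities of the other units
    # (assumes w_ii = 0, so the unit's own state does not contribute).
    s[i] = sign(W[i] @ s)
    return s
```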
What do we mean by symmetric connections in Hopfield Associative Memory Network?
We mean that the weight in one direction is the same as the weight in the other direction (w_ij = w_ji)
Hebbian learning proposes that:
neurons that “fire together, wire together”; in other words, if sender and receiver are both active: (3)
- The sender likely contributed to making the receiver fire!
- Thus, the connection between sender and receiver is strengthened
- That is, the weight increases
In Hebbian learning,
how do we change the weights of synaptic connections between neurons mathematically in the Hopfield network?
We take one of the weights (w_ij) and add to it the product of the activities of the pre- and postsynaptic neurons, times a very tiny number epsilon (ε); that is, Δw_ij = ε s_i s_j
The symbol ε in weight equation means: (2)
- A tiny number, as we don’t want to change the weights in the network too quickly
- In most cases you want to incrementally learn something new (so you have multiple presentations of two stimuli to associate them together)
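A sketch of one Hebbian learning step in Python, assuming the rule Δw_ij = ε s_i s_j applied to all pairs at once via an outer product (the function name and the ε value are illustrative):

```python
import numpy as np

def hebbian_step(W, s, epsilon=0.01):
    # Delta w_ij = epsilon * s_i * s_j for every pair: weights between
    # units in the same state grow, weights between units in different
    # states shrink. epsilon is tiny so learning is incremental over
    # multiple presentations rather than happening in one jump.
    W = W + epsilon * np.outer(s, s)
    np.fill_diagonal(W, 0.0)  # assume no self-connections (w_ii = 0)
    return W
```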
The first step of Hopfield network learning
Impose a pattern we want to learn, then let the learning rule act
What do we mean by imposing a pattern?
To impose a pattern, we clamp the activity of a subset of neurons for one pattern and let the learning rule act to change the synaptic weights in the network
Diagram of an example of imposing a pattern, for instance Pattern 1 - (6)
- In Pattern 1 a given number of neurons are active (orange) and we keep them active
- That is, the activity (i.e., firing state) of these neurons cannot be updated
- Then we let the learning rule act between all these neurons
- The connection between a blue and an orange neuron is not strengthened (-1 [inactive] × 1 [active] = -1, so the weight decreases)
- The connection between two orange neurons is strengthened (1 × 1 = 1), so that in the future we don’t need to force these neurons to be active: one neuron makes the other fire
- The connection between two silent neurons (two blue: -1 × -1 = +1) also has its weight increased, meaning one silent neuron keeps the other silent
Learning rule outcomes when imposing a pattern, for a pair of
neurons: (3)
- Both -1: weight goes up = connection strengthened
- Both +1: weight goes up = connection strengthened
- Mixed: weight goes down, which may lead to pruning of the connection
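A short self-contained sketch of imposing a pattern and letting the rule act, reproducing the three cases in the table above (the network size and ε are illustrative):

```python
import numpy as np

N = 4
W = np.zeros((N, N))
pattern = np.array([1, 1, -1, -1])  # units 0,1 active (orange); 2,3 silent (blue)
epsilon = 0.01

# Clamp the pattern and let the Hebbian rule act repeatedly.
for _ in range(10):
    W += epsilon * np.outer(pattern, pattern)
np.fill_diagonal(W, 0.0)  # no self-connections

print(W[0, 1] > 0)  # both active:  connection strengthened -> True
print(W[2, 3] > 0)  # both silent:  connection strengthened -> True
print(W[0, 2] < 0)  # mixed states: connection weakened     -> True
```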
With more neurons in the Hopfield network (imposing a pattern and letting the learning rule act)
we can store many more patterns
In the Hopfield network, patterns of activation are: (2)
- Learned as ‘stable states’ under the rule for updating activations
- If we clamp the activity of neurons for one pattern (some active, some silent) and let the learning rule act, the weights will change until there is no more change in the set of active neurons
Stable states mean
the update rule produces no more changes in the active neurons
When the pattern of activation does not change anymore we say…
that a stable state has been reached
We can apply the update rule to units in the Hopfield network model in two ways: (2)
Asynchronously: One unit is updated at a time, picked at random or in a pre-defined order
Synchronously: All units are updated at the same time.
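A sketch of both schedules in Python (the function names are illustrative). A standard design note for context: with symmetric weights, asynchronous updating is guaranteed to settle into a stable state, whereas synchronous updating can oscillate between two patterns:

```python
import numpy as np

def update_async(s, W, rng):
    # Asynchronous: update one unit at a time, here picked at random
    # (a fixed, pre-defined order would also be asynchronous).
    i = rng.integers(len(s))
    s[i] = 1 if W[i] @ s > 0 else -1
    return s

def update_sync(s, W):
    # Synchronous: all units updated at the same time, each using the
    # previous activities of all the others.
    return np.where(W @ s > 0, 1, -1)
```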
Under the update rule, many different
patterns can be learned in the same network, but the memory capacity is limited to ~0.14N (where N is the number of neurons) in the Hopfield network
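A worked example of the capacity limit (the numbers are illustrative, not from the slides): a network of N = 1000 neurons can store roughly 0.14 × 1000 ≈ 140 random patterns; storing many more than that makes recall unreliable.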
Memory in the Hopfield network is
“content addressable”, performing “pattern completion of a partial cue”
What does it mean that memory in the Hopfield network is content addressable?
Content addressable simply means that part of the content of the memory is sufficient as an address to find the complete memory
What does it mean that memory in the Hopfield network performs ‘pattern completion of a partial cue’? - (3)
- Say we learned a pattern of three active neurons and are given a partial cue where only the neuron on the top left is active
- Then updating the network will cause the neuron on the top middle, as well as the other active neuron, to become active because of the learned connections
- We complete the pattern from the time of learning from a partial input
The memory capacity is limited in the Hopfield network, meaning we may form
overlapping memories, which can cause spurious memories (fake memories formed from combinations of real memories)
The Hopfield network does not work
in isolation
Memory of what…? The Hopfield network does not exist in isolation (it is a toy model) - (3)
- Say we have a memory of “I saw a magenta turtle that was squeaking”
- That should trigger the activity of neurons in visual cortex that represented the magenta turtle, as well as neurons in auditory cortex that represent the squeaking sound
- There are direct connections from neurons in the Hopfield network to these other neurons
Pattern completion will proceed in the associative memory store of the Hopfield network, but will also
extend to reactivate the neurons in the sensory cortices that were active when you first memorised the thing (e.g., “the magenta turtle squeaking”)
Why is this a toy model? (3)
- The hippocampus has extensive connections to virtually all association areas (polymodal) in neocortex
- But not necessarily direct connections to early (unimodal) sensory cortices
- So the sketch is a severe simplification
Continuous vs discrete attractors diagram
Hopfield memories are discrete attractors! - (2)
- We don’t want a continuous attractor, as it is too easy to get interference between different patterns
We want to separate our memories
Once learning is done, we can perform recall, which involves:
Starting from a pattern similar to a memorized pattern of activation, changing activations according to the sign of the input (updating until no changes occur) to recover the original pattern
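A minimal end-to-end sketch of recall in Python, assuming one-shot Hebbian (outer-product) weights: store one pattern, corrupt a quarter of the units as the partial cue, then update asynchronously until nothing changes:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100

# Store one random pattern with the outer-product Hebbian rule.
pattern = rng.choice([-1, 1], size=N)
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)  # no self-connections

# Partial cue: the stored pattern with a quarter of the units flipped.
cue = pattern.copy()
flipped = rng.choice(N, size=N // 4, replace=False)
cue[flipped] *= -1

# Asynchronous updates until a full pass produces no change (stable state).
s = cue.copy()
changed = True
while changed:
    changed = False
    for i in rng.permutation(N):
        new = 1 if W[i] @ s > 0 else -1
        if new != s[i]:
            s[i] = new
            changed = True

print(np.array_equal(s, pattern))  # True: the original pattern is recovered
```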
Example of recall once learning is done - (3)
- In the original pattern all three neurons are active; two of them are now active (via “moo!” through auditory cortex)
- Pattern completion: these neurons will drive, via the synaptic connections, the other neuron to become active (the visual representation of a cow)
- Then we recall a nearby (i.e., similar) pattern: that “I saw a purple cow that was mooing”
To support a pattern of activation, connection weights should be
positive between units in the same state (i.e., 1/1 or -1/-1) and negative between units in different states (1/-1 or -1/1), i.e., s_i s_j w_ij > 0
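To see why (a one-line check added for clarity): unit i is stable when sign(Σ_j w_ij s_j) = s_i, i.e., when s_i Σ_j w_ij s_j = Σ_j s_i s_j w_ij > 0; if every term s_i s_j w_ij is positive, the sum is positive, so no unit changes state.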
The learning rule sets the weights so that
to-be-remembered patterns of activity are stable or attractor states.