Week 5: Hopfield Network Flashcards

1
Q

Why associative memory models? - (4)

A
  1. Introduce learning
  2. Introduce fundamental ideas about associating patterns of neural activity
  3. Associating patterns or sequences of patterns is needed for episodic memory
  4. The hippocampal anatomy maps very well onto these ideas
2
Q

The Hopfield Network uses very simple neurons, which are

A

standard artificial neurons with no dynamics

3
Q

Representation of the Hopfield (1982) Associative Memory Network:

A
4
Q

Representation of the Hopfield (1982) Associative Memory Network shows (2)

A

All the neurons are connected with each other

Neuron Si is connected to neuron Sj with a weight of wij

5
Q

Assumption of the Hopfield Associative Memory Network

A

Assume a fully connected network with symmetric connections (Wij = Wji)

6
Q

Properties of the Hopfield (1982) Associative Memory Network (5)

A
  1. Simple connectionist neurons
  2. No dynamics
  3. We impose the update schedule
  4. Sign function as a transfer function
  5. Units can be active (Si = 1) or inactive (Si = -1)
7
Q

In the Hopfield network, the sign function as the transfer function means: (2)

A

If a neuron’s summed input is below 0, its activity is set to -1

If the summed input is above 0, its activity is set to 1
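
For example (hypothetical weights): if the summed input to neuron Si is (0.3)(+1) + (-0.2)(-1) = 0.3 + 0.2 = 0.5 > 0, then Si is set to 1; had the sum been negative, Si would be set to -1.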

8
Q

Equation for the activity of a neuron in the Hopfield Associative Memory Network

A
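
Si = sgn( Σj wij · Sj )

(the standard Hopfield update, consistent with the cards above: each neuron takes the sign of its summed weighted input from the other neurons, with no self-connection, wii = 0)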
9
Q

What do we mean by symmetric connections in Hopfield Associative Memory Network?

A

We mean that the weight in one direction is the same as the weight in the other direction (wij = wji)

10
Q

Hebbian learning proposes that

A

neurons that “fire together wire together”

11
Q

Hebbian learning proposes that:

neurons that “fire together, wire together”, which in other words

means: if sender and receiver are both active (3)

A
  • Sender likely contributed to making the receiver fire!
  • Thus, it strengthens the connection between sender and receiver
  • That is, the weight increases
12
Q

In Hebbian learning,

the weights of synaptic connections between neurons in the Hopfield Network are changed mathematically by

A

We take one of the weights (wij) and add to it the product of the activities of the pre- and post-synaptic neurons times a very tiny number (epsilon)
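
Written in the notation of the earlier cards: wij ← wij + ε · Si · Sj, i.e. the weight change is Δwij = ε Si Sj.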

13
Q

The symbol ε in the weight equation means: (2)

A
  1. A tiny number, as we don’t want to change the weights in the network too quickly
  2. In most cases you want to incrementally learn something new (so you have multiple presentations of two stimuli to associate them together)
14
Q

The first step of the Hopfield network learning

A

Impose a pattern we want to learn, then let the learning rule act

15
Q

What do we mean by imposing a pattern?

A

To impose a pattern, we clamp the activity of a subset of neurons for one pattern and let the learning rule act to change those synaptic weight connections in the network

16
Q

Diagram example of imposing a pattern, for instance Pattern 1 - (6)

A

In Pattern 1, a given number of neurons are active (orange) and we keep them active

by saying that the activity (i.e., firing state) of these neurons cannot be updated

Then we let the learning rule act between all these neurons

Connections between blue and orange neurons are not strengthened (-1 [inactive] × 1 [active] = -1, so the weight decreases)

Connections between orange and orange neurons are strengthened (1 × 1 = 1), so that in future we don’t have to force these neurons to be active: one neuron makes the other one fire.

Connections between two silent neurons (two blue: -1 × -1 = +1) also have their weight increased, meaning that one neuron silences the other one.

17
Q

Learning rule table when imposing a pattern:

neuron pairs (3)

A
  1. Both -1: weight goes up = connection strengthened
  2. Both 1: weight goes up = connection strengthened
  3. Mixed: weight goes down, which may lead to pruning of connections.
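
A minimal numerical sketch of this in Python (hypothetical ε and pattern; the rule wij ← wij + ε·Si·Sj from the earlier cards):

  import numpy as np

  eps = 0.01                             # tiny learning rate (epsilon)
  pattern = np.array([1, 1, -1, -1])     # imposed (clamped) pattern: +1 active, -1 silent

  # Hebbian update: add eps * Si * Sj to every connection weight
  W = np.zeros((len(pattern), len(pattern)))
  W += eps * np.outer(pattern, pattern)
  np.fill_diagonal(W, 0)                 # no self-connections

  print(W)  # same-state pairs get +eps, mixed pairs get -eps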
18
Q

With more neurons in the Hopfield network (imposing a pattern and letting the learning rule act)

A

We can have many more patterns

19
Q

In the Hopfield network, patterns of activation are (2)

A

learned as ‘stable states’ under the rule for updating activations

If we clamp the activity of neurons for one pattern (some active, some silent) and let the learning rule act, the weights will change until there is no more change in the set of active neurons
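
Put another way (assuming the sign update rule above): a stored pattern S is a stable state when, for every neuron i, sgn( Σj wij · Sj ) = Si, so applying the update rule leaves the pattern unchanged.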

20
Q

Stable states mean

A

the update rule produces no more changes in the active neurons

21
Q

When pattern of activation does not change anymore we say…

A

we say a stable state has been reached

22
Q

We can apply the update rule to the units in the Hopfield network model in two ways: (2)

A

Asynchronously: One unit is updated at a time, picked at random or in a pre-defined order

Synchronously: All units are updated at the same time.
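
A minimal sketch of the two update schemes in Python (hypothetical function names; assuming the sign update rule, with ties at 0 resolved to +1):

  import numpy as np

  def update_async(W, S, order=None):
      # asynchronous: update one unit at a time, in a random or pre-defined order
      S = S.copy()
      order = np.random.permutation(len(S)) if order is None else order
      for i in order:
          S[i] = 1 if W[i] @ S >= 0 else -1
      return S

  def update_sync(W, S):
      # synchronous: all units are updated at the same time
      return np.where(W @ S >= 0, 1, -1)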

23
Q

Under the update rule, many different

A

patterns can be learned in the same network, but the memory capacity is limited to ~ 0.14N (where N is the number of neurons) in the Hopfield network
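
For example, a network of N = 100 neurons can reliably store only about 0.14 × 100 ≈ 14 patterns before recall starts to break down.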

24
Q

Memory in the Hopfield network is

A

“content addressable”, performing “pattern completion of a partial cue”

25
Q

What does it mean by memory in the Hopfield network is content addressable?

A

Content addressable simply means that part of the content of a memory is sufficient as an address to find (retrieve) the complete memory

26
Q

What does it mean that memory in the Hopfield network performs ‘pattern completion of a partial cue’? - (3)

A

Say we learned a pattern of three active neurons and we give a partial cue in which only the neuron on the top left is active.

Then we update the network, which, because of the learned connections, will cause the neuron in the top middle to become active, as well as the other active neuron of the pattern

i.e., we complete the pattern that was present at the time of learning from a partial input.

27
Q

The memory capacity is limited in the Hopfield network, meaning we may form

A

overlapping memories, which can cause spurious memories (fake memories formed via combinations of real memories) to be formed

28
Q

The Hopfield network does not work

A

in isolation

29
Q

Memory of what…? The Hopfield network does not exist in isolation (a toy model) - (3)

A

Say we have a memory of “I saw a magenta turtle that was squeaking”

That should trigger the activity of neurons in the visual cortex that represented the magenta turtle, as well as neurons in the auditory cortex that represent the squeaking sound

There are direct connections from neurons in the Hopfield network to these other (sensory) neurons

30
Q

Pattern completion will proceed in the associative memory store in the Hopfield network but will also

A

extend to reactivate the neurons in the sensory cortices that were active when you first memorised the thing (e.g., “the magenta turtle squeaking”)

31
Q

Why is this a toy model? (3)

A

The hippocampus has extensive connections to virtually all association areas (polymodal) in the neocortex

But not necessarily direct connections to early (unimodal) sensory cortices

So the sketch below is a severe simplification

32
Q

Continuous vs discrete attractors diagram

A
33
Q

Hopfield memories are discrete attractors! - (2)

A

We don’t want a continuous attractor, as it is too easy to get interference between different patterns

We want to separate our memories

34
Q

Once learning is done we can perform recall, which involves:

A

Start from a pattern similar to a memorized pattern of activation, then change activations according to the sign of the input (update until no changes occur) to recover the original pattern
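
A minimal recall sketch in Python (hypothetical pattern and cue; Hebbian weights with ε folded into 1, and the asynchronous sign update from the earlier cards):

  import numpy as np

  # learn one pattern with the Hebbian rule
  pattern = np.array([1, 1, 1, -1, -1, -1])
  W = np.outer(pattern, pattern).astype(float)
  np.fill_diagonal(W, 0)                      # no self-connections

  # partial cue: one of the active neurons is missing
  S = np.array([1, 1, -1, -1, -1, -1])

  # recall: keep updating units until nothing changes (a stable state)
  changed = True
  while changed:
      changed = False
      for i in range(len(S)):
          new = 1 if W[i] @ S >= 0 else -1
          if new != S[i]:
              S[i], changed = new, True

  print(S)  # recovers the memorized pattern [1, 1, 1, -1, -1, -1]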

35
Q

Example of recall once learning is done - (3)

A

In the original pattern all three neurons were active; now two of them become active (driven by the “moo!” via the auditory cortex)

and, by pattern completion, these neurons will drive the remaining neuron to become active via the synaptic connections (the visual representation of a cow)

so we recall a nearby (i.e., similar) pattern: “I saw a purple cow that was mooing”

36
Q

To support a pattern of activation, the connection weights should be

A

positive between units in the same state (i.e., 1/1 or -1/-1) and negative between units in different states (1/-1 or -1/1), i.e., si · sj · wij > 0.
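
A quick check with hypothetical numbers: if si = 1, sj = 1 and wij = 0.5, then si · sj · wij = 0.5 > 0, so the weight supports the pattern; if si = 1 and sj = -1, the condition requires a negative weight, e.g. wij = -0.5 gives (1)(-1)(-0.5) = 0.5 > 0.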

37
Q

The learning rule sets the weights so that

A

to-be-remembered patterns of activity are stable or attractor states.