Hopfield Flashcards

1
Q

What are associative networks?

A

Networks that act as pattern associators.

2
Q

What are the two types of association networks and how do they differ?

A

Autoassociative networks correlate a pattern with itself.

Heteroassociative networks associate one pattern with another one. Resonating networks are a special class of heteroassociative networks.

Each of these network types has a different structure and dynamics.

3
Q

What do Autoassociative networks do and what is their general architecture?

A

Autoassociative networks can regenerate a noise-free, complete pattern from one that is noisy or incomplete. Their architectures are generally single-layered, e.g. the Hopfield network.

4
Q

What is the most obvious use of autoassociative networks and how?

A

The most obvious use is to create clean patterns. This is often done as part of a larger process; for example, the input to a classification network may be put through an autoassociative stage first to filter the data.

5
Q

How do heteroassociative networks work and what could this be used for?

A

It is possible to have a classification scheme or heteroassociation process that outputs a distributed pattern instead of simply activating a single node. This output pattern might then be passed through an autoassociative processor to clarify a pattern that might otherwise be ambiguous.

6
Q

What is the underlying concept in Hopfield networks?

A

A single network of interconnected, binary-valued neurons can store multiple stable states.

7
Q

What papers published the concepts underlying Hopfield networks and what do they present about the network?

A

The concepts underlying the Hopfield network were published by Hopfield [1982, 1984] and by Hopfield and Tank [Hopfield & Tank, 1986; Tank & Hopfield, 1987]. These papers show that the network's convergence from a presented pattern follows a path that minimizes the total energy of the network.
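
As a rough illustration of that energy argument (a sketch only, not taken from the papers; the function name is hypothetical and threshold terms are omitted), the commonly used form of the energy for bipolar states s and symmetric weights W can be computed as:

import numpy as np

def hopfield_energy(weights, state):
    # Standard quadratic energy E = -0.5 * s^T W s (threshold/bias terms omitted).
    return -0.5 * state @ weights @ state

Each asynchronous update of a neuron leaves this value unchanged or lowers it, which is why the network settles into a stable state.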

8
Q

What are the features of a Hopfield network?

A

Suppose we create a network of binary-valued neurons in which each neuron is connected to every other neuron, but not back to itself. (There is no direct feedback in this network.) Assume all the connection strengths are symmetric: for any two neurons i and j, a_ij = a_ji, where a_ij is the strength of the connection between them.

This network can have a set of stable states. In each stable state, every binary-valued neuron takes on a value (0 or 1) such that, when it acts on its neighbors via the connection strengths described above, no neuron's value changes.

If we give this network an input pattern close to one of these states and let the neurons affect each other, the network can converge to that nearby state.
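
A minimal sketch of these dynamics (my own illustration, not from the source; it uses the common +1/-1 coding in place of the 0/1 values described above, and the function and parameter names are hypothetical):

import numpy as np

def recall(weights, pattern, max_sweeps=100):
    # Update neurons one at a time (asynchronously) until no neuron changes,
    # i.e. a stable state is reached. Assumes bipolar states (+1/-1),
    # symmetric weights, and a zero diagonal (no self-feedback).
    state = pattern.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in np.random.permutation(len(state)):
            new_value = 1 if weights[i] @ state >= 0 else -1
            if new_value != state[i]:
                state[i] = new_value
                changed = True
        if not changed:
            break
    return state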

9
Q

How can a Hopfield network be used as an autoassociator?

A

The implication of this capability is that we can use the network as an autoassociator. First, we create a network that stores a number of stable states, building a matrix that holds the connection weights between the neurons. Then we present a pattern to the network, which gives each neuron a value, and let the neurons affect each other via their connection weights. If the pattern is at all close to one of the patterns or stable states the network has learned, the neuron values will change to those of that learned pattern. This gives us the potential ability to input a noisy or partial pattern into the network and regenerate the closest learned pattern.
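
A minimal sketch of this storage-and-recall procedure (my own illustration using the usual Hebbian outer-product rule, not a description of a specific implementation; the names and example patterns are hypothetical, and it reuses the recall sketch from the previous card):

import numpy as np

def train(patterns):
    # Hebbian (outer-product) storage of bipolar (+1/-1) patterns.
    # Produces a symmetric weight matrix with a zero diagonal (no self-feedback).
    n = patterns.shape[1]
    weights = np.zeros((n, n))
    for p in patterns:
        weights += np.outer(p, p)
    np.fill_diagonal(weights, 0.0)
    return weights / len(patterns)

# Hypothetical usage: store two patterns, then present a corrupted copy of the first.
stored = np.array([[1, -1, 1, -1, 1, -1],
                   [1, 1, 1, -1, -1, -1]])
weights = train(stored)
noisy = np.array([1, -1, 1, -1, 1, 1])   # last element flipped
# recall(weights, noisy), using the earlier sketch, converges back to the first stored pattern.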
