Task 4 - M&M Flashcards

1
Q

How does competitive learning work?

- How does it find a categorisation?

A
  • It is unsupervised.
  • The network finds a categorisation for itself, based on the similarity between the input patterns and the number of output units available.
2
Q

What do competitive learning networks learn?

A

They learn to categorise input patterns into related sets, with one output unit firing for each set.

3
Q

What happens when an input pattern is presented to a competitive learning network?

A

The output units compete with each other to determine which has the largest response.

4
Q

What happens to the connections from active and inactive input units to output units in competitive learning?

A

Connections from active input units to the winning output unit are strengthened, and those from input units which were inactive are weakened.

5
Q

How are the weights set in competitive learning?

A

The weights are set by the prior learning of the network, not by an explicit external teacher.

6
Q

What are the 3 phases of competitive learning?

A

Excitation, competition and weight adjustment.

7
Q

To which connections are weight adjustments made in competitive learning?

A

Weight adjustments are made only to connections feeding into the winning output unit.
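
Pulling cards 3-7 together, here is a minimal sketch of one competitive-learning trial; the learning rate eta and the weight normalisation are illustrative assumptions, not details given in the cards:

```python
import numpy as np

def competitive_step(W, x, eta=0.1):
    """One trial of competitive learning: excitation, competition,
    weight adjustment (cards 3-7)."""
    responses = W @ x                   # excitation: each output responds
    winner = int(np.argmax(responses))  # competition: largest response wins
    # Weight adjustment, only for connections into the winning unit:
    # connections from active inputs strengthen, inactive ones weaken.
    W[winner] += eta * (x - W[winner])
    W[winner] /= W[winner].sum()        # keep each weight vector bounded
    return winner

# Toy run: the network finds a categorisation for itself; each of the
# two output units comes to stand for one cluster of similar inputs.
rng = np.random.default_rng(0)
W = rng.random((2, 4))
W /= W.sum(axis=1, keepdims=True)
x1, x2 = np.array([1., 1., 0., 0.]), np.array([0., 0., 1., 1.])
for _ in range(20):
    competitive_step(W, x1); competitive_step(W, x2)
print(competitive_step(W, x1) != competitive_step(W, x2))  # True
```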

8
Q

Which kind of learning rule does competitive learning use?

A

It uses a local learning rule: the weight change depends only on information that is locally available at the connection.

9
Q

In which way are competitive networks a feature of many real brain circuits?

A
  • They can remove redundancy: a single output neuron is allocated to represent a set of inputs which co-occur.
  • They can produce outputs for different input patterns which are less correlated with each other than the inputs were.
10
Q

Which kind of associator is the auto-associator?

A

It is a form of pattern associator.

11
Q

What is the aim of the auto-associator?

A

To reproduce at output the same pattern that was presented at input.

12
Q

What is the difference between a pattern associator and an auto-associator?

A

In an auto-associator, the output line of each unit is connected back to the dendrites of the other units -> recurrent connections.

13
Q

What is the net input in an auto-associator?

A

Net input = external input + internal input, where the internal input is generated by feedback from the other units within the auto-associator.

14
Q

What are two features of auto-associators?

A
  • Pattern completion
  • Noise resistance
(both demonstrated in the sketch below)
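
A minimal Hopfield-style sketch of both features, assuming binary ±1 units and one-shot Hebbian storage of a single pattern (neither detail is specified in the cards):

```python
import numpy as np

# Store one pattern in a recurrent auto-associator: Hebbian outer
# product of the pattern with itself, no self-connections.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)

def recall(x, steps=5):
    """Feed activity back through the recurrent connections until stable."""
    for _ in range(steps):
        x = np.sign(W @ x)
    return x

# Pattern completion: a partial cue (half the units silenced) is restored.
partial = pattern.copy(); partial[4:] = 0
print(np.array_equal(recall(partial), pattern))  # True

# Noise resistance: a cue with two flipped units is still recalled.
noisy = pattern.copy(); noisy[[0, 3]] *= -1
print(np.array_equal(recall(noisy), pattern))    # True
```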

15
Q

What does the pattern associator learn?

A

It learns to associate one stimulus with another.

16
Q

How does the training work for a pattern associator?

- What happens if learning is successful?

A
  • Training: pairs of patterns are presented together.
  • If learning is successful, the network will recall one of the patterns at output when the other is presented at input.

17
Q

What is the pattern associator able to do after training?

A

After training it can also respond to novel inputs, generalising from its experience with similar patterns.

18
Q

How does pattern association take place?

A

Pattern association takes place by modifying the strengths of the connections between input units and output units.

19
Q

What are properties of pattern associators? (6)

A
  1. Generalisation: they generalise during recall (see the sketch after this list).
  2. Fault tolerance: graceful degradation; small amounts of damage still allow the correct response.
  3. The importance of distributed representations: the activity of all elements is used to encode a particular stimulus; generalisation and graceful degradation are only achieved if representations are distributed.
  4. Prototype extraction and noise removal: recognition of a prototype that has never been seen.
  5. Speed: fast, due to parallel processing.
  6. Interference is not necessarily a bad thing: it allows generalisation, noise reduction and prototype extraction; one reason why it is tolerated is that the ability to generalise between stimuli is more useful than a 100% accurate memory of specific past events.
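
A minimal sketch of property 1 (generalisation) for a tiny Hebbian pattern associator; the binary patterns, the outer-product learning rule and the zero threshold are illustrative assumptions:

```python
import numpy as np

# Train: weights are the summed outer products of each target/input pair.
inputs  = np.array([[1., 1., 0., 0.], [0., 0., 1., 1.]])
targets = np.array([[1., 0.], [0., 1.]])
W = sum(np.outer(t, x) for x, t in zip(inputs, targets))

def recall(x):
    # Threshold the weighted sums to obtain a binary output pattern.
    return (W @ x > 0).astype(float)

print(recall(inputs[0]))                   # [1. 0.] -> recall of a trained pair
# Generalisation: a novel input overlapping the first trained pattern
# evokes the same output.
print(recall(np.array([1., 0., 0., 0.])))  # [1. 0.]
```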
20
Q

What is a recurrent network?

A

A network in which the output line of each unit is connected back to the dendrites of the other units.

21
Q

What is the delta rule about?

A

An output unit whose response is too low can be corrected by increasing the weights of connections from units in the previous layer which provide a positive input to it, and by decreasing the weights of the connections which provide a negative input; the reverse applies when the response is too high.
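
A minimal sketch of the rule in its usual error-times-input form; the learning rate eta is an illustrative parameter:

```python
import numpy as np

def delta_rule_step(w, x, target, eta=0.5):
    """One delta-rule update for a single linear output unit.

    The weight change is proportional to the error (target - output)
    times each connection's input: when the output is too low, weights
    of positive inputs grow and weights of negative inputs shrink,
    as the card describes.
    """
    error = target - w @ x
    return w + eta * error * x

w = np.zeros(3)
x = np.array([1.0, -1.0, 0.5])
for _ in range(20):
    w = delta_rule_step(w, x, target=1.0)
print(round(w @ x, 3))  # 1.0 -> the output has converged to the target
```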

22
Q

What does saturation mean in context of Hebbian Learning?

A

If we continue to train the network, it cannot learn endlessly: Hebbian weights only ever increase, so they eventually reach their maximum values and saturate.

23
Q

What is sparseness?

A

A measure of the proportion of units in an area which will be active in response to an input.

24
Q

What does sparse input enable?

A

Sparse input enables an auto-associator to store more memories (e.g. the dentate gyrus).
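
A minimal sketch of the proportion-active measure from the previous card (the 0/1 coding of activity is an illustrative assumption):

```python
import numpy as np

def sparseness(activity):
    """Proportion of units active in response to an input (card 23)."""
    return (np.asarray(activity) > 0).mean()

dense  = np.array([1, 1, 1, 0, 1, 1, 0, 1])
sparse = np.array([0, 0, 1, 0, 0, 0, 0, 1])
print(sparseness(dense))   # 0.75
print(sparseness(sparse))  # 0.25 -> leaves room for more stored memories
```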

25
Q

What is a difference between the Delta and Hebbian learning rule?

A
  • Delta rule: the weights change gradually, in smaller and smaller steps as the error shrinks.
  • Hebb rule: only one learning trial may be required to establish the necessary connections.

26
Q

Is it possible with Hebb’s law to build a computer model?

A

No; on its own, Hebb’s law is incomplete and inadequate for building a computer model.

27
Q

What are the problems of Hebb’s law when it comes to using it as a computer model?

A
  1. It does not specify by how much the connection between the neurons should increase.
  2. It does not specify how to compute the activity of the two neurons.
  3. Nothing ever allows the connection strength to decrease.
  4. It does not specify the exact conditions under which the connections should strengthen.
28
Q

What does Neo-Hebbian Learning consist of in its simplest form? (dynamical differential equations)

A

Two sets of dynamical differential equations: one governing the activity change of an arbitrary neurode in the network at a given instant in time, and the other governing the weight change of an arbitrary connection in the network at any instant in time.
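
A generic sketch of such a pair of equations in additive form; the decay and gain constants A, B, C and the exact terms are illustrative assumptions, since the card does not give the equations themselves:

```latex
% Activity equation: the activation x_i of neurode i decays passively
% and is driven by external input I_i plus weighted internal input.
\frac{dx_i}{dt} = -A\,x_i + I_i + \sum_j w_{ij}\,y_j

% Weight equation: the connection w_{ij} decays passively and grows
% when pre- and post-synaptic activity co-occur.
\frac{dw_{ij}}{dt} = -B\,w_{ij} + C\,x_i\,y_j
```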

29
Q

What is an instar?

A

From the point of view of the neurode, it receives a large number of stimulus signals coming from somewhere “outside” its boundaries
-> from its perspective, it is the center of an inwardly radiating collection of such signals

30
Q

What is an outstar?

A
  • The neurode sends its single output signal to a large number of other neurodes in the network.
  • Envisioning those outgoing signals as more or less evenly distributed around the neurode, you can imagine an outwardly radiating star of output signals moving out from it.
31
Q

What is each neurode in terms of instars and outstars?

A

Every neurode is both the center of an instar, receiving incoming stimuli from the outside, and the center of an outstar, transmitting its output to other instars or to the outside world.

32
Q

What do you look at in Neo-Hebbian learning?

A

You look only at the change in connection strength.

33
Q

What are problems of Neo- Hebbian learning?

A
  • The behavior of the outstar does not match actual classical conditioning in detail.
  • The weights only increase in strength, BUT in a biological system they cannot possibly increase without bounds.
34
Q

How does the connection strength change in Differential Hebbian Learning?

A

The connection strength changes according to the change (difference) in the receiving neurode’s activation and the change in the incoming stimulus signal.
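
A minimal sketch of that weight change; eta is an illustrative learning rate:

```python
def diff_hebbian_dw(y_prev, y_now, x_prev, x_now, eta=0.1):
    """Differential Hebbian weight change.

    Delta-w is proportional to the change in the receiving neurode's
    activation times the change in the incoming stimulus signal.
    If either change is zero (constant activity), no learning occurs;
    since each change can be positive or negative, the weight change
    can be positive or negative too.
    """
    return eta * (y_now - y_prev) * (x_now - x_prev)

print(diff_hebbian_dw(0.2, 0.8, 0.1, 0.9))  # > 0: both rising together
print(diff_hebbian_dw(0.5, 0.5, 0.1, 0.9))  # 0.0: constant activation
```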

35
Q

What are the problems of Differential Hebbian Learning?

A
  • The outstar exists only in a single, continuously changing moment of time called now -> it can only learn from stimuli that appear simultaneously, BUT in Pavlov’s experiment the dogs learned more quickly when the bell was rung before the food appeared.
  • The outstar’s acquisition curve is linear, BUT in real life it is S-shaped.
36
Q

What do you look at in Differential Hebbian Learning?

A

You look at the differences (changes) in activity on either side of a connection, rather than at the activity levels themselves.

37
Q

What happens in Differential Hebbian Learning if either neurode has a constant activity level?

A
  • If either neurode has a constant activity level, it has an activity change of zero -> no learning occurs.
  • Since the activity changes can be either positive or negative, the weight change can also be either positive or negative.
38
Q

Which two equations can be found in the Drive Reinforcement Theory (=DRT)?

A
  • Activity equation: describes the activation level of each neurode in the network.
  • Weight-change equation: describes how the connection strengths change during learning.
39
Q

How can incoming signals contribute to the weighted sum in Drive Reinforcement Theory (=DRT)?

A
  • Each incoming signal must individually be greater than the threshold before it can contribute to the weighted sum (see the sketch below).
  • An incoming signal less than or equal to the threshold value is treated as a zero signal.
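
A minimal sketch of that thresholding rule; the threshold value and the example signals are illustrative:

```python
def drt_weighted_sum(signals, weights, threshold=0.5):
    """Weighted sum in DRT: a signal contributes only if it individually
    exceeds the threshold; signals at or below it count as zero."""
    return sum(w * s for s, w in zip(signals, weights) if s > threshold)

signals = [0.9, 0.5, 0.2, 0.7]
weights = [1.0, 1.0, -2.0, -1.0]  # junctions are fixed excitatory/inhibitory
print(round(drt_weighted_sum(signals, weights), 2))  # 0.2: only 0.9 and 0.7 count
```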
40
Q

How do synaptic junctions work in DRT?

A

Each synaptic junction is predetermined to be either an excitatory (positively weighted) or an inhibitory (negatively weighted) junction, and it always remains either positive or negative.

41
Q

Can weights be zero in DRT?

A

No: since each junction always remains either positive (excitatory) or negative (inhibitory), a weight can never be zero.