Task 5 Flashcards
What is neo-Hebbian learning?
- mathematical theory of classical conditioning
- tries to integrate decay of connections
- allows a computation of future activity and weights for all neurodes in the network (instar + outstar)
- all neurodes are both: instar and outstar
problems:
- goes against classical conditioning
- shape of acquisition curve (= learning within training time)
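The decay idea can be put in code. This is a minimal, hedged sketch assuming a Grossberg-style neo-Hebbian rule of the form dw/dt = -decay·w + rate·pre·post (the parameter names and values are illustrative, not from a specific source):

```python
# Hedged sketch of a neo-Hebbian weight update, assuming the rule
# dw/dt = -decay * w + rate * pre * post. The decay term lets unused
# connections weaken, which plain Hebbian learning cannot do.
def neo_hebbian_step(w, pre, post, decay=0.1, rate=0.5, dt=1.0):
    """One discrete Euler step: passive decay plus Hebbian growth."""
    return w + dt * (-decay * w + rate * pre * post)

# with no activity the weight decays; with joint activity it grows
w_idle = neo_hebbian_step(1.0, pre=0.0, post=0.0)    # 0.9
w_active = neo_hebbian_step(1.0, pre=1.0, post=1.0)  # 1.4
```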
What is an instar?
a neurode receives a large number of stimulus signals coming from outside its boundaries
= inwardly pointing star of incoming stimuli
what is an outstar?
neurode that sends its output to a large number of other neurodes
-> a single neurode sends its output to every neurode in the grid of instars
= outwardly radiating star of output signals moving away from the neurode
what is Hebbian learning?
- one of the basic learning models in the field of neural networks
- significant learning occurs only when activity of receiving neurode and the currently received signals are strong
criticism:
only allows connections to increase in strength (no provision for decrease)
= inadequate to build a computer model
-> solved with neo-Hebbian learning
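The increase-only criticism is easy to see in a toy version of the rule. A minimal sketch (the exact rule form Δw = η·pre·post and the names are my assumption): with nonnegative activities the product can never be negative, so a connection can only strengthen:

```python
# Plain Hebbian update: delta_w = eta * pre * post.
# With nonnegative activities the product is never negative, so a
# weight can only grow or stay put - the criticism noted above.
def hebbian_step(w, pre, post, eta=0.1):
    return w + eta * pre * post

w = 0.0
for pre, post in [(1.0, 1.0), (0.5, 0.0), (0.8, 0.9)]:
    w = hebbian_step(w, pre, post)  # never decreases across steps
```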
What are alternatives to hebbian learning?
- neo-Hebbian learning
- differential Hebbian learning
- drive-reinforcement-theory
-> all propose a differential learning law
what is differential Hebbian learning?
the connection strength changes according to the product of the change in the receiving neurode's activation and the change in the incoming stimulus signal
- positive = learning
- negative = forgetting
problems:
- inconsistent with classical conditioning in animals
- difficulties with the problem of timing
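A toy version of the sign logic (the product-of-changes form is my paraphrase of the rule above): the weight moves with the product of the two changes, so matched changes strengthen the connection (learning) and opposed changes weaken it (forgetting):

```python
# Differential Hebbian sketch:
# delta_w = eta * (change in input signal) * (change in activation)
def diff_hebbian_step(w, d_pre, d_post, eta=0.1):
    return w + eta * d_pre * d_post

w_learn = diff_hebbian_step(1.0, d_pre=0.5, d_post=0.5)    # same sign -> learning
w_forget = diff_hebbian_step(1.0, d_pre=0.5, d_post=-0.5)  # opposed -> forgetting
```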
What is the drive-reinforcement theory (DRT)?
considers incoming stimulus at the current time + recent history of incoming stimuli over a period of time
- activity equation
> activity increases, learning strengthens
> weight change = product of the activity changes
> the more activity, the more it wires - weights can approach 0 (cannot cross 0)
> consistent with biology (no synapse is sometimes excitatory, sometimes inhibitory) - the change in incoming activity is usually restricted to positive changes only (no learning if the incoming signal is decreasing in strength)
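The points above can be sketched as code. This is a loose, simplified take on Klopf-style DRT under my own assumptions (the eligibility coefficients, the |w| factor, and the zero floor are modelled in the spirit of the rule, not taken from a specific source): only recent positive input changes count, and a weight may shrink toward zero but never cross it:

```python
# Hedged DRT-style update (simplified, illustrative parameters):
# delta_w = (change in activity) * |w| * weighted sum of RECENT
# input changes, where negative input changes are ignored
# (no learning when the incoming signal is weakening).
def drt_step(w, d_post, d_pre_history, coeffs, eps=1e-6):
    trace = sum(c * max(0.0, dx) for c, dx in zip(coeffs, d_pre_history))
    w_new = w + d_post * abs(w) * trace
    # a synapse keeps its sign: weights may approach 0 but not cross it
    return max(w_new, eps) if w > 0 else min(w_new, -eps)

# rising activity + a recent positive input change -> learning;
# the negative entry in the history contributes nothing
w1 = drt_step(0.5, d_post=1.0, d_pre_history=[0.2, -0.3], coeffs=[1.0, 0.5])
```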
Memory formation in the hippocampus
- two interlocking sheets of cells (3 stages)
> dentate gyrus (DG)
> cornu ammonis (CA3 + CA1)
- episodic memory: when the hippocampus is damaged, it is impossible to form new ones
- procedural memory: still possible when the hippocampus is damaged
-> the type of memory the hippocampus is involved in requires combining information from different sources to form consciously retrievable memories of specific events/facts
what is the dentate gyrus?
- input to the hippocampus from other neocortical areas over the perforant path
- output is carried by mossy fibres to CA3 region
- sparse input
What are the two areas of cornu ammonis ?
CA3
- receives input from dentate gyrus and perforant path
- intrinsic, recurrent excitatory connections as dominant source of input
- output to CA1
CA1
- receives input from CA3
- output leaves the hippocampus and returns to the neocortical areas which provided the perforant path inputs
What is a computational theory of hippocampal operation?
- internal structure of processing route in the hippocampus as basis for computational theory of how episodic memory might be formed
- explains how you can recall something after having seen it only once
competitive learning network
= perforant path acts as a competitive learning system
> DG granule cells send input to CA3 = sparse representation of incoming signal to the hippocampus
> any given input pattern excites relatively few CA3 cells
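A hedged toy of this sparsification step (the winner-take-all scoring, the unit count, and the top-k choice are my assumptions, not the hippocampal circuit itself): only the best-matching few units fire, so any given input excites relatively few cells:

```python
# Competitive-learning sketch: each unit scores the input pattern
# against its weight vector; only the top-k winners fire, giving a
# sparse representation of the incoming signal.
def sparse_code(pattern, weights, k=1):
    scores = [sum(w * x for w, x in zip(row, pattern)) for row in weights]
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return set(ranked[:k])

weights = [[2.0, 0.0], [0.0, 2.0], [1.0, 1.0]]  # three units, made-up weights
winners = sparse_code([1.0, 0.0], weights)      # only one unit fires
```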
What are the four stages of information processing ?
- competitive learning (DG)
- Autoassociation (CA3)
- Competitive learning (CA1)
- pattern association (CA1 + entorhinal cortex)
What is the network model?
= neural network simulation of hippocampal operation
developed to test whether the network could store a large number of unrelated patterns after only a single presentation of each one, and afterwards retrieve them from partial cues
how was the network model tested?
- 100 random patterns
- activity was passed around the network and connections were modified (-> Hebbian learning)
- recall cues were given (fragments of the original pattern)
- entorhinal firing resulting from processing of retrieval cue is closer to the originally learned pattern than the retrieval cue is
= cued recall
-> this implementable qualitative model allows us to see how performance changes as various aspects of the simulation are changed (= which parts contributed to the performance of the whole)
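The store-then-cue test can be miniaturized. As a hedged stand-in, not the paper's actual model: a Hopfield-style autoassociator (Hebbian outer-product storage of ±1 patterns, recurrent recall from a corrupted cue) shows single-presentation storage and cued recall in a few lines:

```python
# Hopfield-style autoassociator standing in for CA3's recurrent
# collaterals (an assumption for illustration, not the simulation
# described above).
def store(patterns, n):
    """Hebbian outer-product storage: one pass per pattern."""
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, cue, steps=5):
    """Iterate the recurrent dynamics starting from a partial cue."""
    s = list(cue)
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(len(s))) >= 0 else -1
             for i in range(len(s))]
    return s

pattern = [1, -1, 1, -1, 1, -1]
W = store([pattern], 6)
cue = [-1, -1, 1, -1, 1, -1]   # first element corrupted
result = recall(W, cue)        # recovers the stored pattern
```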
What might still be missing:
- non-specific inputs
- autoassociation in CA3
- role of CA3 recurrence