Task 5 Flashcards
What is Hebb’s Law?
When a neuron stimulates another neuron while the receiving neuron is actively firing, the connection between them strengthens.
What are some limitations of Hebb’s Law?
It does not specify how much connections should strengthen.
It lacks a mechanism for reducing connection strength.
It does not define the conditions under which learning should occur.
Why is Hebbian learning difficult to implement in computer models?
Because it allows connection strength to increase indefinitely, leading to computational instability.
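The runaway growth described above can be seen in a few lines of Python. This is a minimal sketch of the plain Hebbian rule Δw = lr·x·y; the learning rate, input value, and loop length are illustrative, not from the source:

```python
def hebbian_step(w, x, y, lr=0.1):
    # Plain Hebbian update: the weight grows whenever pre-synaptic (x)
    # and post-synaptic (y) activity are positive together.
    # Nothing in the rule ever decreases the weight.
    return w + lr * x * y

# Let the post-synaptic response feed back through the growing weight:
# each step multiplies w by (1 + lr), so it diverges.
w = 0.1
for _ in range(1000):
    y = w * 1.0               # response to a constant input x = 1.0
    w = hebbian_step(w, 1.0, y)
print(f"{w:.3e}")             # astronomically large after 1000 steps
```

The divergence is exactly the instability the card describes: without a decay or normalization term, repeated co-activation grows the weight without bound.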
What is Neo-Hebbian learning?
A refined mathematical formulation of Hebb’s Law using differential equations to describe neuron activity and weight changes over time.
What is a neurode?
An artificial neuron with multiple input signals and one output signal.
What are instar and outstar neurodes?
Instar – A neurode that receives input signals.
Outstar – A neurode that sends output signals to multiple receivers.
How does instar-outstar learning relate to neural networks?
Each neurode is both an instar (receiving input) and an outstar (sending output), allowing complex learning and adaptation.
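A hedged sketch of the standard Grossberg-style instar and outstar updates: when the unit fires, instar weights move toward the incoming pattern and outstar weights move toward the desired outgoing pattern. The learning rate and vectors are illustrative:

```python
import numpy as np

def instar_update(w, x, y, lr=0.1):
    # Instar: when the neurode fires (y > 0), its incoming weights
    # move a step toward the current input pattern x.
    return w + lr * y * (x - w)

def outstar_update(w, target, x_src, lr=0.1):
    # Outstar: when the source unit fires (x_src > 0), its outgoing
    # weights move a step toward the desired output pattern.
    return w + lr * x_src * (target - w)

# Repeated presentation drives the instar weights onto the input pattern.
w = np.zeros(3)
x = np.array([1.0, 0.0, 1.0])
for _ in range(200):
    w = instar_update(w, x, 1.0)
```

Both rules are bounded (weights converge to the presented pattern), which already avoids the runaway growth of plain Hebbian learning.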
What is differential Hebbian learning?
A variation of Hebb’s Law where connection strength changes based on the difference in neuron activation rather than absolute activation.
How does differential Hebbian learning differ from simple Hebbian learning?
If neuron activity stays constant, no learning occurs.
Weight changes can be positive or negative, allowing for adaptive learning.
Why is differential Hebbian learning a better biological model?
Because biological neurons must allow for both strengthening and weakening of connections.
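The difference from plain Hebbian learning fits in one line: the weight tracks the product of *changes* in activity, Δw = lr·Δx·Δy. A minimal sketch with illustrative values:

```python
def diff_hebbian_step(w, dx, dy, lr=0.1):
    # Differential Hebbian update: driven by changes in activity
    # (dx, dy), not by absolute activation levels.
    return w + lr * dx * dy

# Constant activity (dx = dy = 0): no learning occurs.
w_same = diff_hebbian_step(0.5, 0.0, 0.0)    # stays 0.5

# Activities rise together: the connection strengthens.
w_up = diff_hebbian_step(0.5, 1.0, 1.0)      # ~0.6

# Pre rises while post falls: the connection weakens.
w_down = diff_hebbian_step(0.5, 1.0, -1.0)   # ~0.4
```

Because Δw can be negative, weights can shrink as well as grow, which is what keeps this variant stable.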
What is Drive-Reinforcement Theory (DRT)?
A learning model based on differential Hebbian learning that includes time-dependent memory formation, making it closer to classical conditioning.
How does DRT improve upon traditional Hebbian learning?
It incorporates time-based learning, not just present-moment stimulus.
It explains S-shaped learning curves, where learning starts slow, speeds up, then slows again.
It considers recent history of stimuli, rather than only current inputs.
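The time-based aspect can be sketched loosely after Klopf's drive-reinforcement rule: the current change in output is paired with recent changes in input, weighted by an eligibility window. The window length, coefficients, and the |w| gating convention here are assumptions for illustration, not values from the source:

```python
def drt_step(w, dx_history, dy, coeffs=(0.5, 0.3, 0.2), lr=1.0):
    # Pair the *current* change in output (dy) with *recent* changes in
    # input (dx_history, most recent first).  Each term is gated by |w|
    # (Klopf's convention), so weights start away from zero; only
    # increases in input are eligible (negative dx is ignored).
    return w + lr * dy * sum(c * abs(w) * max(dx, 0.0)
                             for c, dx in zip(coeffs, dx_history))

# The input changed two steps before the output did; a pure coincidence
# rule would miss it, but the eligibility window still credits it.
w = drt_step(0.1, dx_history=[0.0, 0.0, 1.0], dy=1.0)
```

The decaying coefficients weight recent input changes more strongly, which is the "recent history of stimuli" point above.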
What happens when the hippocampus is damaged?
New episodic memories cannot be formed.
Old memories remain intact.
Procedural memory (skills) is unaffected.
How does the hippocampus receive information?
It gets input from the parahippocampal gyrus and entorhinal cortex, which collect data from multiple sensory areas.
What are the three main stages of information processing in the hippocampus?
Dentate gyrus (DG) – Filters and sparsely encodes inputs.
CA3 region – Performs autoassociation for memory recall.
CA1 region – Transmits processed information back to the cortex.
What is the perforant path?
The main input pathway connecting the neocortex to the hippocampus, providing sensory and contextual information.
What is the role of the dentate gyrus (DG)?
Creates sparse representations of input signals.
Reduces overlap between similar memories.
Improves the storage capacity of the hippocampus.
What is the role of CA3 in memory?
Acts as an autoassociative memory network.
Uses recurrent connections to strengthen memory recall.
Allows cued recall (retrieving a full memory from partial input).
How do recurrent connections in CA3 aid memory?
They allow CA3 neurons to reinforce each other, improving long-term recall and pattern completion.
What is the role of CA1?
Transmits processed memory data back to the cortex.
Further refines memory recall before sending it to storage.
How does sparse encoding improve memory?
By ensuring that similar events are stored with minimal overlap, reducing confusion during recall.
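One common way to model the dentate gyrus's sparse coding is a k-winners-take-all transform; this toy sketch (sizes, seed, and threshold are illustrative) shows how two similar inputs end up sharing far fewer active units after sparsification:

```python
import numpy as np

def k_winners(x, k):
    # Keep only the k most active units -- a crude stand-in for
    # sparse, competitive coding in the dentate gyrus.
    out = np.zeros_like(x)
    out[np.argsort(x)[-k:]] = 1.0
    return out

rng = np.random.default_rng(0)
a = rng.random(100)
b = a + 0.1 * rng.random(100)       # a similar, slightly perturbed input

# Fraction of the 100 units active in both representations:
dense_overlap = np.mean((a > 0.5) & (b > 0.5))               # large
sparse_overlap = np.mean(k_winners(a, 5) * k_winners(b, 5))  # at most 0.05
```

With only 5 of 100 units active, the two codes can overlap on at most 5 units, so similar events interfere far less at recall time.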
What happens when recurrent CA3 connections are removed?
Memory recall worsens.
CA3 activation patterns become more random.
Later memory stages (CA1, entorhinal cortex) also perform worse.
How was the hippocampal memory model tested?
100 random memory patterns were stored using Hebbian learning.
Fragments of the patterns were later used as retrieval cues.
Recall counted as successful when the retrieved pattern was strongly correlated with the originally stored pattern.
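The test above can be reproduced at toy scale with a Hopfield-style autoassociator: store random patterns with Hebbian (outer-product) learning, then cue with a corrupted fragment. The sizes here (200 units, 5 patterns) are deliberately far below the 100 patterns in the described experiment, since a plain Hopfield network at this size could not hold that many:

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 200, 5                             # toy scale, illustrative only
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian (outer-product) storage, zero self-connections.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

def recall(cue, steps=10):
    # Iterate the recurrent dynamics until the state settles.
    s = cue.copy().astype(float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

# Cue with a fragment: half the units randomized.
cue = patterns[0].copy()
noise_idx = rng.choice(N, N // 2, replace=False)
cue[noise_idx] = rng.choice([-1, 1], size=N // 2)
out = recall(cue)
overlap = np.mean(out == patterns[0])     # near 1.0 on successful recall
```

The recurrent weights pull the corrupted state back toward the stored pattern, which is the pattern-completion role attributed to CA3 above.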
How does episodic memory formation differ from procedural memory formation?
Episodic memory – Forms quickly, stores individual events separately.
Procedural memory – Forms gradually, blends multiple experiences into a knowledge base.
How does the hippocampus prioritize which memories to store?
It favors memories linked to emotional or motivational responses.
Amygdala inputs help encode memories with emotional significance.
How does Hebbian learning relate to AI and machine learning?
It provides a model for adaptive learning, where AI can strengthen connections based on usage.
How do hippocampal models inspire artificial memory systems?
They suggest autoassociation networks for better memory recall.
They inspire competitive learning to refine inputs before storage.
They model time-sensitive learning, which helps in sequential decision-making.
What are some real-world applications of neural memory models?
Cognitive AI – Creating AI systems that remember and learn from past experiences.
Medical diagnostics – Using memory models to predict patient outcomes.
Neuroscience research – Understanding memory disorders like Alzheimer’s.
What is the main principle of Hebbian learning?
“Neurons that fire together, wire together” – strengthening connections between frequently co-activated neurons.
Why is Hebbian learning biologically plausible but computationally problematic?
It lacks mechanisms to reduce connection strength, leading to unbounded growth and computational instability.
How does Hebbian learning explain synaptic plasticity?
It models how neural connections strengthen or weaken based on repeated use, a key principle of neuroplasticity.
What is synaptic scaling, and how does it counteract Hebbian learning?
It is a homeostatic process that ensures neurons don’t become overactive by scaling down excessive synaptic strength.
What is the difference between Hebbian and Neo-Hebbian learning?
Hebbian learning – Describes general strengthening of neural connections.
Neo-Hebbian learning – Uses mathematical equations to simulate dynamic neuron activity over time.
What role do differential equations play in Neo-Hebbian learning?
They allow for time-dependent adjustments in learning, capturing gradual weight changes in artificial neural networks.
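A hedged sketch of one common Neo-Hebbian form, dw/dt = −a·w + b·x·y (passive decay plus Hebbian growth), integrated with Euler steps. The constants a, b and the activities are illustrative assumptions, not values from the source:

```python
def integrate_weight(x, y, a=0.5, b=1.0, dt=0.01, steps=2000, w0=0.0):
    # Euler integration of  dw/dt = -a*w + b*x*y :
    # decay (-a*w) balances Hebbian growth (b*x*y) over time.
    w = w0
    for _ in range(steps):
        w += dt * (-a * w + b * x * y)
    return w

# With constant activity the weight settles at the equilibrium b*x*y/a,
# rather than diverging as in the plain Hebbian rule.
w_eq = integrate_weight(x=1.0, y=0.8)    # approaches 1.0 * 0.8 / 0.5 = 1.6
```

The decay term is what bounds the weight: growth and decay cancel at a finite equilibrium, giving the gradual, time-dependent adjustment the card describes.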
How do instar and outstar neurodes contribute to learning?
Instar – Receives incoming signals from multiple sources.
Outstar – Sends output signals to multiple receiving neurons.
What is differential Hebbian learning?
A variation of Hebbian learning where weight changes depend on changes in neural activity, not just absolute activation levels.
Why is differential Hebbian learning considered more biologically realistic?
It allows for both strengthening and weakening of connections, preventing runaway synaptic growth.
How does differential Hebbian learning improve computational learning models?
It introduces stability by limiting excessive synaptic strength.
It enables adaptive learning, where synapses can increase or decrease in response to stimuli.
How does Drive-Reinforcement Theory (DRT) relate to classical conditioning?
It incorporates time-based reinforcement, mimicking how biological organisms learn over time.
How does DRT differ from simple Hebbian learning?
It considers past stimuli, not just current inputs.
It mimics the S-shaped learning curve seen in animal conditioning.
It weights recent experiences more strongly, making learning more dynamic.
Why does Hebbian learning fail to explain classical conditioning?
Hebbian learning only strengthens connections based on simultaneous stimuli, while conditioning depends on time-separated stimuli.
What is the primary function of the hippocampus?
It enables episodic memory formation by creating associations between different elements of an experience.
How does hippocampal damage affect memory?
Episodic memory formation is impaired.
Old memories remain intact.
Procedural memory (skills) is unaffected.
How does the hippocampus process sensory information?
It receives inputs from the parahippocampal gyrus and entorhinal cortex, integrating data from multiple cortical areas.
What is the role of recurrent connections in CA3?
They allow neurons to reinforce each other’s activation, improving memory recall and pattern completion.
How does CA1 contribute to long-term memory storage?
It refines hippocampal output and sends memory information back to cortical storage areas.
How does sparse encoding in the dentate gyrus improve memory efficiency?
By ensuring similar memories are stored with minimal overlap, reducing interference.
What happens when recurrent CA3 connections are disabled?
Memory recall worsens.
CA3 activation becomes less structured.
CA1 and entorhinal recall also degrade.
How does the hippocampus prioritize which memories to store?
It prioritizes emotionally or motivationally significant experiences, integrating amygdala-based emotional inputs.
How do hippocampal models inspire AI memory networks?
They suggest autoassociation networks for better recall.
They inspire competitive learning to refine memory encoding.
They model time-sensitive learning, helping AI predict and recall sequential events.
What are some real-world applications of neural memory models?
Cognitive AI – AI systems that simulate human-like memory recall.
Medical diagnostics – Using memory models to predict patient outcomes.
Neuroscience research – Understanding memory disorders like Alzheimer’s disease.
How does Drive-Reinforcement Theory (DRT) improve AI learning algorithms?
By integrating time-based learning mechanisms, allowing AI to learn from past experiences over time.