Week 10: flashcards
List of systems and topics we modelled (7)
- Spinal cord (Hodgkin-Huxley neurons)
- WM (integrate-and-fire neurons)
- HD (rate model)
- Hopfield network, memory matrices, spatial memory models (connectionist neurons)
- Learning rules (Hebbian, competitive, BCM)
- Perceptron (supervised learning)
- Reinforcement learning (Q-learning, TD learning, model-based)
Marr’s 3 levels
The example used to introduce the framework of Marr’s 3 levels is trying to understand how a computer does arithmetic, at 3 levels (3)
- Computational level
- Algorithmic level
- Implementation level
At the computational level we try to understand what the computer is doing:
arithmetic, like a + b = c
At the algorithmic level we try to understand how the computer does arithmetic (2)
particularly the way we cast the problem and solve it
e.g. as binary addition: 00100101 + 01001000 = 01101101 (37 + 72 = 109)
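This way of casting addition can be sketched as a carry-propagation loop; a minimal Python illustration (the specific numbers are arbitrary):

```python
# Minimal sketch of the algorithmic level: a + b cast as binary
# addition with explicit carry propagation (non-negative ints only).
def binary_add(a, b):
    while b:
        carry = (a & b) << 1   # bit positions where both are 1 carry left
        a = a ^ b              # sum of bits, ignoring carries
        b = carry              # propagate until no carry remains
    return a

print(binary_add(0b00100101, 0b01001000))  # 37 + 72
```

The implementation level then asks how operations like `&`, `^` and the shift are realised in transistors.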
At the implementation level we try to understand how the computer does arithmetic (2)
How it implements the solution in hardware
e.g. transistors on the computer’s chips
David Marr originally proposed 4 levels, but
levels 3 and 4 are commonly combined
Marr wrote 3 extremely influential (still today)
papers on cortex, cerebellum and hippocampus, and an influential theory of vision
Marr’s 3 levels can be used to classify
the models we learned in the module
Marr’s 3 levels applied to the lamprey locomotor network
The lamprey locomotor network focuses on
how the lamprey spinal cord controls swimming (generating a travelling wave of muscle activation)
Marr’s 3 levels applied to the lamprey locomotor network
Computational level problem
Has to generate rhythmic activity with a delay between spinal cord segments
Marr’s 3 levels applied to the lamprey locomotor network
Algorithmic level
We use coupled oscillators to do this
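The coupled-oscillator idea can be sketched with phase oscillators; a minimal illustration with made-up parameters (N segments, coupling strength, desired lag), not the lamprey model itself:

```python
import math

# Minimal sketch: a chain of phase oscillators with nearest-neighbour
# coupling. Each segment prefers to lag its rostral neighbour by
# PHASE_LAG, producing a travelling wave down the "spinal cord".
N = 10                       # number of segments
OMEGA = 2 * math.pi * 1.0    # intrinsic frequency (1 Hz)
K = 5.0                      # coupling strength
PHASE_LAG = 2 * math.pi / N  # desired intersegmental lag
DT = 0.001

phases = [0.0] * N           # start all segments in phase

def step(phases):
    new = []
    for i, p in enumerate(phases):
        dp = OMEGA
        if i > 0:            # pull toward lagging the rostral neighbour
            dp += K * math.sin(phases[i - 1] - p - PHASE_LAG)
        if i < N - 1:        # pull toward leading the caudal neighbour
            dp += K * math.sin(phases[i + 1] - p + PHASE_LAG)
        new.append(p + DT * dp)
    return new

for _ in range(20000):       # simulate 20 s so the lags settle
    phases = step(phases)

# Adjacent segments should now be offset by roughly PHASE_LAG.
lags = [(phases[i] - phases[i + 1]) % (2 * math.pi) for i in range(N - 1)]
print([round(l, 2) for l in lags])
```

The point is the algorithmic claim: a constant phase lag between segments is what generates the travelling wave, regardless of how each oscillator is implemented.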
There is multiple realisability at the algorithmic and implementation levels, meaning (2)
More than one algorithm can exist for a given computation
More than one implementation can exist for a given algorithm
Marr’s 3 levels
Algorithmic level = coupled oscillators for lamprey locomotion
Example of the ‘meaning’ flashcard (more than one algorithm can exist for a given computation, more than one implementation can exist for a given algorithm)
Two ways of producing oscillations at the algorithmic level (2)
* Escape from inhibition (W3)
* Alternatively, intrinsic oscillations with bursting neurons (neurons fire a burst of spikes and shut down on their own)
Marr’s 3 levels
Algorithmic level = coupled oscillators for lamprey locomotion
Example of the ‘meaning’ flashcard (more than one algorithm can exist for a given computation, more than one implementation can exist for a given algorithm)
Two ways of producing oscillations at the algorithmic level: how do we decide which one is the better model? (3)
With experiments! Cut the connection between hemi-segments in a fictive locomotion experiment.
Do we still get oscillations? Are the neurons still bursting?
If we do, it suggests the hemi-segments have individual bursting capability
Marr’s 3 levels
Example of the ‘meaning’ flashcard (more than one algorithm can exist for a given computation, more than one implementation can exist for a given algorithm)
Example: multiple ways of realising oscillations at the implementation level (4)
- Escape from inhibition
- Vary the type of inhibition between the two sides: glycinergic or GABAergic inhibition
- OR
- For intrinsic oscillations (if we chose that algorithm): bursting neurons with ion channel configuration 1, or bursting neurons with ion channel configuration 2 (more than one configuration of ionic channels can lead to bursting)
How do we decide between the multiple ways of producing oscillations at the implementation level, for instance for the lamprey locomotor network?
With experiments: record the ion channels, check whether there is an sAHP and which channel is responsible, etc.
When you model, what is considered implementation level (in the real brain) is in part (2)
determined by your chosen model
You cannot model ion channels with IF neurons, but you could (within the scope of the model) say something about the implementation level
Spike-frequency adaptation mechanism recap (4)
- With spike-frequency adaptation, the interval between spikes becomes larger over time
- Fewer inhibitory APs arrive at the contra-lateral side
- The other side has time to become active and inhibit the previously active side
- Escape from inhibition
Spike-frequency adaptation is due to
the sAHP, which is a consequence of Ca2+ flowing in during each AP; Ca2+ triggers activation of KCa channels, which is hyperpolarising, so it is harder for the next spike to be emitted
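The mechanism can be sketched with a leaky integrate-and-fire neuron plus an adaptation variable standing in for the KCa current (all parameters are made up for illustration):

```python
# Minimal sketch (assumed parameters): a leaky integrate-and-fire
# neuron with an adaptation current w standing in for the slow
# Ca2+-activated K+ (KCa) current behind the sAHP. Each spike
# increments w, which opposes the drive, so inter-spike intervals
# grow over time: spike-frequency adaptation.
DT = 0.1        # time step (ms)
TAU_M = 10.0    # membrane time constant (ms)
TAU_W = 200.0   # adaptation time constant (ms), slow like the sAHP
V_TH, V_RESET = 1.0, 0.0
I = 1.5         # constant input drive
B = 0.1         # adaptation increment per spike (the Ca2+ influx)

v, w = 0.0, 0.0
spike_times = []
t = 0.0
while t < 500.0:
    v += DT / TAU_M * (-v + I - w)  # leaky integration, minus adaptation
    w += DT / TAU_W * (-w)          # adaptation decays slowly
    if v >= V_TH:                   # spike: reset, and adapt
        v = V_RESET
        w += B
        spike_times.append(t)
    t += DT

isis = [t1 - t0 for t0, t1 in zip(spike_times, spike_times[1:])]
print([round(x, 1) for x in isis[:5]])
```

The printed inter-spike intervals grow, which is exactly what lets the contra-lateral side escape from inhibition.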
Example: Marr’s 3 levels applied to WM
The question is
How does the brain generate/implement WM?
Example: Marr’s 3 levels applied to WM
Computational level
Maintain information in population of neurons on short time scales
Example: Marr’s 3 levels applied to WM
Algorithmic level: we can think of two algorithms
One with oscillations (e.g., the Lisman-Idiart model) vs attractors (persistent activity: cells maintain their own activity level via recurrent connections)
Example: Marr’s 3 levels applied to WM
Implementation levels for oscillation-based WM (3)
Idiart and Lisman specify where the oscillations come from, underpinned in terms of ionic channels (the ADP)
OR
Various possible attractor network implementations, similar to ring attractor networks
Marr’s 3 levels of HD
Computational Level:
How does the brain generate a head-direction signal?
Marr’s 3 levels of HD
Algorithm level (2)
A ring attractor with excitatory and inhibitory connections from the same neurons, which violates Dale’s principle: a single neuron is either excitatory or inhibitory
So this model is good at the algorithmic level but not the implementation level, as it violates this principle
Marr’s 3 levels
HD
Different implementation levels (2)
- Could have 3 rings, separating excitatory and inhibitory connections (the excitatory ring projects to the inhibitory ring, which projects back)
- OR 2 rings OR 1
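The algorithmic-level ring attractor can be sketched as follows; all parameters (ring size, weight profile, cue) are illustrative assumptions, and like the lecture's 1-ring model it ignores Dale's principle:

```python
import math

# Minimal sketch of a 1-ring attractor: N rate units on a ring with
# local excitation and broad inhibition from the same units. A brief
# cue leaves a self-sustaining bump of activity, a candidate
# algorithm for holding a head-direction estimate.
N = 32
DT, TAU = 0.01, 0.1

def weight(i, j):
    d = min(abs(i - j), N - abs(i - j))           # distance on the ring
    return 4.0 * math.exp(-(d / 3.0) ** 2) - 1.0  # near: excitation; far: inhibition

W = [[weight(i, j) for j in range(N)] for i in range(N)]
r = [0.0] * N

for step in range(500):
    cue = 2.0 if step < 100 else 0.0              # transient input at unit 8
    new_r = []
    for i in range(N):
        drive = sum(W[i][j] * r[j] for j in range(N)) + (cue if i == 8 else 0.0)
        target = max(0.0, min(drive, 1.0))        # rectified, saturating rate
        new_r.append(r[i] + DT / TAU * (target - r[i]))
    r = new_r

print(round(r[8], 2), round(r[24], 2))            # bump centre vs opposite side
```

After the cue is switched off, the bump stays put at the cued direction while the far side of the ring is silent; a 2- or 3-ring version would implement the same algorithm without violating Dale's principle.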
What do Marr’s 3 levels mean for modelling? (3)
- a model does not need to capture all 3 levels to be useful!
- We can have a model that captures the algorithmic level but stays agnostic with regard to implementation
- We can have a model that provides a correct algorithmic account but is wrong with regards to mechanism at level of implementation
The neuron is the (building block used to create causal mechanisms) (3)
- Basic structural unit of the brain
- Basic functional unit of the brain
- Neurons form networks, interact via synapses
Neurons’ causal mechanism (2)
- A chain of processing through neural networks
- which eventually leads to cognition and motor output
If we want to define this concept, then a
neural mechanism is a structure of causally interacting elements, specified in terms of constituents of the brain and their interactions
If we want to make (mechanistic) models of brain function we need (2)
models, connected to causally interacting networks
and to be aware that all models are simplifications
A neural representation is
a specific way of coding information (e.g., location, direction)
Example of neural representation
HD cell tuning curve, place cell firing locations (the red dots), grid cell firing pattern
Neural computation is
mapping between neural representations
Example of neural computation
The synaptic weights between BVCs and PCs ‘compute’ the PC output from the BVC input
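This kind of computation can be sketched as a weighted sum; the rates, weights and threshold below are made-up numbers for illustration:

```python
# Minimal sketch: a neural computation as a mapping between
# representations. Boundary-vector-cell (BVC) rates are mapped
# through synaptic weights to one place cell (PC) rate via a
# weighted sum and a rectifying threshold.
bvc_rates = [0.9, 0.1, 0.8, 0.0]      # input representation (BVC firing rates)
weights   = [0.7, 0.0, 0.6, 0.1]      # synapses from each BVC onto the PC
threshold = 0.5

drive = sum(w * r for w, r in zip(weights, bvc_rates))
pc_rate = max(0.0, drive - threshold)  # rectified output representation
print(round(pc_rate, 2))
```

The weights are the computation: change them and the same BVC input maps to a different place field.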
Braitenberg Vehicles (3)
- Type of model
- Not for predicting measurements
- Tools for thinking about the brain
Braitenberg vehicles help make sense of
whole modelling enterprise
A brain (or, more abstractly, a control mechanism) is essential for
guiding “the vehicle” and producing adaptive behaviour
Braitenberg was inspired by
Norbert Wiener (a key foundational influence on cybernetics, the study of goal-directed mechanisms, formalising the notion of feedback)
Vehicle 1 has a
sensor directly connected to the motor
Vehicle 1 properties (3)
The sensor drives the motor
The strength of the signal to the motor depends on the intensity of the sensory signal
Say, the sensor senses the concentration of nutrients, or oxygen, or the intensity of light
In Vehicle 1 there is a linear relationship between
the motor signal and the sensor signal
How might Vehicle 1 move? (3)
- Moves faster when light intensity increases
- Speeds up and slows down, seemingly at random, as the sensory signal varies
- It might change direction due to external forces but not of its own volition
How do we interpret the motion of Vehicle 1? (2)
- It seems to dislike bright light
- It seems afraid of being discovered, and seeks darkness to remain hidden
Vehicle 2 is produced from
some mutation during self-duplication, or from fusion: two type-1 vehicles joined together
Vehicle 2 can have (3)
- Ipsilateral connections (A)
- Contralateral connections (B)
- Both (C) = same as Vehicle 1
How does Vehicle 2 move?
If Vehicles 2a and 2b are within the vicinity of a source (5)
V2a will avoid the source (the wheel closer to it is more strongly driven)
V2b will approach the source (the wheel closer to it receives less drive than the other)
V2a and V2b both dislike sources (both hit them hard if approaching a source straight on)
But V2a is a ‘coward’
V2b is ‘aggressive’
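The avoid/approach asymmetry can be sketched with a differential-drive simulation; the geometry, gains and inverse-square-style intensity falloff are all assumptions made for illustration:

```python
import math

# Minimal sketch of Vehicle 2: two light sensors driving two wheels.
# Ipsilateral wiring (2a, crossed=False) turns the vehicle away from
# the source; contralateral wiring (2b, crossed=True) turns it toward
# the source. We track the closest approach to the source.
def simulate(crossed, steps=5000, dt=0.01):
    x, y, heading = 0.0, 0.0, 0.0      # start at origin, facing +x
    sx, sy = 3.0, 3.0                  # light source, ahead and to the left
    base, gain, offset, width = 0.5, 4.0, 0.3, 0.3
    min_d = math.hypot(sx - x, sy - y)

    def intensity(px, py):             # light falls off with distance
        return 1.0 / (1.0 + (px - sx) ** 2 + (py - sy) ** 2)

    for _ in range(steps):
        # sensor positions, offset left/right of the midline
        lx = x + offset * math.cos(heading + math.pi / 2)
        ly = y + offset * math.sin(heading + math.pi / 2)
        rx = x + offset * math.cos(heading - math.pi / 2)
        ry = y + offset * math.sin(heading - math.pi / 2)
        il, ir = intensity(lx, ly), intensity(rx, ry)
        if crossed:                    # 2b: left sensor -> right wheel
            vl, vr = base + gain * ir, base + gain * il
        else:                          # 2a: left sensor -> left wheel
            vl, vr = base + gain * il, base + gain * ir
        v = (vl + vr) / 2              # differential-drive kinematics
        heading += dt * (vr - vl) / width
        x += dt * v * math.cos(heading)
        y += dt * v * math.sin(heading)
        min_d = min(min_d, math.hypot(sx - x, sy - y))
    return min_d

print(round(simulate(crossed=True), 2), round(simulate(crossed=False), 2))
```

The crossed vehicle closes in on the source while the uncrossed one steers wide of it, even though both share the same "more light, faster wheels" rule.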
Vehicle 3 comes from adding a
different ‘neurotransmitter’ => (implying) changing the positive connection from the sensor to a negative, inhibitory connection
How does vehicle 3 move? (2)
V3a approaches the source and comes to rest near it; it ‘adores’ the source
V3b comes to rest facing away from the source, and any small perturbation (a flash of light) will lead it away; it’s an ‘explorer’
In Vehicle 3, the more sensory signal the vehicle receives,
the less motor output you get
Vehicle 3C
combines 4 types of sensors
What types of sensors does Vehicle 3C have? (4)
Light (+ uncrossed)
Temperature (+ crossed)
Oxygen (- crossed)
Nutrient density (- uncrossed)
What behaviour does vehicle 3C manifest? (3)
It dislikes heat (turns away from hot places)
It hates light bulbs and attacks them head on
It prefers well-oxygenated areas (slows down) and areas rich in nutrients (if both are low, it speeds up to get out of there)
For Vehicle 3C we may be tempted to ascribe
values or goals to it
Up to Vehicles 1, 2, 3 and 3C we had sensory-motor mappings of this kind: (4)
- The more of X, the faster/slower the wheels turn
- More sensory signal, less motor signal
- More sensory signal, more motor signal
- Oddly limiting, so maybe more up to some point, then less = Vehicle 4a (introducing non-linear mappings)
In Vehicle 4 we (3)
- observe a multitude of interesting behaviours in agents like Vehicle 4a, e.g. complex trajectories (Vehicle 4a moving in a figure 8 around a light source)
- especially when combined with the properties of Vehicle 3c
- as well as other motor dependencies
We can add thresholds to the sensors in Vehicle 4a, which (4)
introduces sudden onsets of behaviour
There is some input from the sensor but the vehicle does nothing; then at a given intensity it moves in a given direction: either instantly at a given value, or slowly ramping up, or a slow increase, then a plateau, then nothing if the intensity gets too intense
These look like ‘decisions’; if decisions are being made, we may be tempted to say these vehicles have some form of “will”
Or they need to integrate the stimulus over time to get moving (they need convincing)
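These nonlinear sensor-to-motor mappings can be sketched as simple functions; the thresholds, gains and peak location below are made-up numbers:

```python
# Minimal sketch of Vehicle-4-style nonlinear mappings: a hard
# threshold gives a sudden onset of behaviour ('decision'), and a
# 'more up to a point, then less' curve makes the vehicle ignore
# both weak and overly intense stimuli.
def thresholded(intensity, theta=0.5, gain=2.0):
    """No motor response below theta, then the motor kicks in abruptly."""
    return gain * intensity if intensity >= theta else 0.0

def up_then_down(intensity, peak=0.5):
    """Motor drive rises up to `peak`, then falls off again."""
    return max(0.0, 1.0 - abs(intensity - peak) / peak)

print(thresholded(0.4), thresholded(0.6))
print(up_then_down(0.1), up_then_down(0.5), up_then_down(0.9))
```

Swapping these mapping functions into a Vehicle-2-style simulation is all it takes to produce the "sudden decision" behaviour described above.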
What does it all mean in Braitenburg vehicles? (6)
We have discovered that with very simple rules – by combining only a few – we can create vehicle with wonderfully complex behaviour.
We understand their behaviour as a consequence of their simple internal workings
But what if we didn’t know about their internal workings? Their behaviour still looks complex.
We might be tempted to assume complex mechanism underneath!
Braitenberg called this: the law of uphill analysis and downhill invention (downhill invention: building something, then figuring out what it can do; uphill analysis: working out how something is made from its behaviour alone, which is hard)
Guessing from the outside is hard => When we analyse a mechanism from the outside we tend to overestimate its complexity
Going through the internal workings and playing out behaviours (even if initially unexpected) is much easier.