Space and the Parietal Lobe Flashcards
Computational level - describe and specify the problems in a ______ manner, but do not say how these problems are _____. Is the aim to learn a function or estimate uncertainty? eg: memory
–> WHAT IS THE GOAL?
Algorithmic level - forms a _____ between the computational and _______ levels, describing _____ the identified computational problems can be ______. This is where the usefulness of ______ and machine _______ comes in.
–> STEPS to solve the problem
The implementation level is the physical substrate or _______ and its organisation, in which the computation is _______. This could be biological (neurons and synapses) or artificial (silicon and transistors). Connecting one neuron to another, etc
–> IMPLEMENTING the ALGORITHM
generic
solved
bridge, implementational, how, solved, Bayesian, learning
mechanism
performed
When trying to understand something, it can be useful to see what it’s like when someone ________ do the function. This is exemplified in people with ________. This is NOT due to _______, inability to ____ or abnormal _______ tone.
cannot, dyspraxia, weakness, move, muscle
MAPS REPRESENT WHERE THINGS ARE
What are the two types of coordinate systems?
Allocentric - a map representing where things are relative to an external reference (e.g. north) - useful for long-term storage (doesn't change every time you move - stable reference frame). It is always relative to something
–> most of the LTM with hippocampus is related to this
Egocentric - a map representing things relative to where you are.
–> for controlling arms, legs, looking, etc
We have both - the retinotopic maps of the visual cortex and the somatosensory map are egocentric. BUT you can also have a visual map of campus, perhaps relative to the particular gate you enter campus from, or relative to north…this is NOT egocentric, as you're facing a different way each time you enter campus…
Maps in the brain are a _________ of where things are. These are always ______ to something. You can give coordinates relative to a landmark, to the person, to north, etc. These are different coordinate ______.
When we control our actions we have to use vision, walking, reaching, pointing….and these all require different reference ______. Ultimately for movement we have to control our limbs and this is ________ (relative to ourselves)
representation
relative
frames
frames
egocentric
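The allocentric-to-egocentric conversion can be sketched as a translation plus a rotation. This is a minimal toy example, not from the lecture; the function name and the gate scenario are illustrative assumptions:

```python
import math

def to_egocentric(landmark_xy, self_xy, heading_rad):
    """Convert an allocentric (north-referenced map) position into
    egocentric coordinates (relative to where you are and face).

    landmark_xy : (x, y) of the landmark on the map (x = east, y = north)
    self_xy     : our own (x, y) on the same map
    heading_rad : direction we face, radians anticlockwise from east
    """
    # Translate: where is the landmark relative to our position?
    dx = landmark_xy[0] - self_xy[0]
    dy = landmark_xy[1] - self_xy[1]
    # Rotate by minus our heading so "straight ahead" becomes +ahead.
    cos_h, sin_h = math.cos(-heading_rad), math.sin(-heading_rad)
    ahead = dx * cos_h - dy * sin_h
    left = dx * sin_h + dy * cos_h
    return ahead, left

# A gate 10 m north of us while we face north: 10 m straight ahead.
ahead, left = to_egocentric((0, 10), (0, 0), math.pi / 2)
```

The same allocentric map entry yields a different egocentric answer each time the heading changes, which is exactly why the campus map above is allocentric but limb control needs the egocentric output.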
GENERAL THEME
This lecture is about _______ what we perceive (sensory) to movement
- -> maps, coordinate system for action
- role of sensory signals vs internal signals for knowing where the body is
Coordinating vision into movement (the transformation)
We need TWO coordinate systems for this - egocentric and allocentric
connecting
Dyspraxia is the….
lack of function or action. Not connecting sensory experience with movement.
- -> same as apraxia - disorder of movement not due to
- weakness
- inability to move
- abnormal muscle tone/posture
- other motor deficits
- intellectual deterioration
us + cube –> how can we figure out how to move it??
What about pointing to it/reaching out?
- Eyes have retinotopic map - angle specified by which neurons the cube stimulates.
- But retinotopic map is not enough to walk to the cube. Eyes and body have to line up.
- So we need to line up retinal coordinates with body coordinates
- pointing to or reaching out to the cube is A LOT more complicated.
–> you should know the calculation from eye to head to body, but not for the arm.
GENERAL THEME: we need different calculations in the body and be able to coordinate them to do different things
us + cube example…what does this tell us about different reference frames?
- -> this is basically a description of some of the problems the brain needs to solve to accomplish its goals.
- -> In some cases we have maps…in other cases on-the-fly computation of the coordinates of only individual objects (instead of the whole map) –> some/most of this happens in the parietal lobe.
- Angles can be added to create torso-relative and hand-relative coordinates, most probably in parietal lobe
- We need different coordinate frames for different actions (eg: eye centred, body centred, head/ear centred, hand-centred)
- Maps specify coordinates of everything in a particular coordinate frame.
In reaching (the direction of something relative to the hand) we need to know eye orientation relative to head, head relative to neck, etc. You also need to know your eyes' position at all times.
- There is a command which compensates for eye motion so you don’t think the world has moved.
- BUT the maps for hand-relative and torso-relative coordinates have not been found in the brain –> maybe location is calculated spontaneously each time an object is attended to? Locations can change quite quickly anyway…like swatting a fly…skiing or skating
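The "angles can be added" idea can be sketched in one dimension. This is a hypothetical illustration: the function name, the joint chain, and all the numbers are assumptions for the sketch, not the brain's actual circuit:

```python
def target_relative_to_hand(retinal_angle, eye_in_head, head_on_trunk,
                            hand_on_trunk):
    """All angles in degrees; positive = rightward (illustrative only)."""
    # Chain the angles outward: retinal direction, plus how far the eye
    # is turned in the head, plus how far the head is turned on the trunk,
    # gives the target's direction relative to the trunk.
    target_on_trunk = retinal_angle + eye_in_head + head_on_trunk
    # Subtract the hand's direction relative to the trunk to get the
    # hand-relative direction needed for reaching.
    return target_on_trunk - hand_on_trunk

# Target 5 deg right on the retina, eyes 10 deg right in the head, head
# 15 deg right on the trunk, hand pointing 20 deg right of midline:
angle = target_relative_to_hand(5, 10, 15, 20)  # 10 deg right of the hand
```

Note the computation needs the current value of every joint angle, which is why the gaze angle (and neck angle, etc.) must be known at all times.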
GENERAL THEME: we need different calculations in the body and be able to coordinate them to do different things
- eye-centred calculations
- body-centred (somatosensory)
- head/ear-centred
- hand-centred
- -> maps specify coordinate of everything in a particular coordinate frame
THIS IS PART OF THE DORSAL STREAM - where things are relative to the body.
–> full maps have NOT been found for hand-relative and torso-relative coordinates (maybe location is calculated on-the-fly each time an object is attended to?)
Eg: pointing a laser-pointer - the brain doesn't hold the different calculations in mind…it only calculates the one it needs in the moment
–> very hard to study this
The where/how visual pathway is mediated by the ______ lobe (more for location)
But the ventral pathway is important too - for recognising objects - then the brain has to calculate where the object is in space and relative to the hand…ultimately sending signals to eye muscles to look/reach out.
parietal
ROLLER SKATING - constantly updating maps and coordinates
Sensing location does not just have to come from vision, it can also come directly from the ______ (about posture, etc)
muscles
–> We have to go from retinotopic coordinates, to head-centric coordinates, then use knowledge of neck twisting, etc
UNLESS WE ARE BURT OR AN OWL we have to know where our eyes are relative to our head
We also have to know where our eyes are relative to our _____. We have receptors (cranial nerves) which tell us how much muscles are contracted, etc. Muscle spindles to detect contraction, etc, to detect up/down/left/right, etc
MOVING EYES left-right-left quickly - how do we know that the eyes have moved relative to the head vs the world has moved because of an earthquake?
We need the gaze angle (or eye position) to
- _______ the direction for pointing, walking, etc.
- resolve whether the ______ moved, or the ______ moved, or the world moved.
- -> how do we know what things move relative to each other?? Monkey moving up the tree vs the tree moving down past the monkey?
- -> all about resolving ambiguity
How does the brain do this??
head
calculate
eyes
object
What are some possibilities for sensing the gaze angle (or eye position)? (Two options)
(1) direct sensing of eye position (proprioceptive sensation) - NOT correct…why???
- proprioception is sensors on the muscles that signal how much they're flexed
- -> press on the corner of your eye - the eye is moved passively, but the brain does not compensate for this motion…instead you see the world move. If the brain directly sensed eye position it would compensate - so obviously it's not sensing eye movement directly…
(2) remembering where you told your eyes to move, and using this to update the coordinates of objects (efference copy - a copy of the movement command is used to update locations)
Brindley and Merton 1960 experiment
- got forceps and pulled on one of the eye muscles - objects were perceived to move in the opposite direction
- -> moving the eye did this
- held the eye still and got people to try to move their eyes - they couldn't move them BUT perceived the world to move. The eye movement command is being used to update locations: it shouldn't have updated anything (the eye was held still), but the circuit is hard-wired and didn't anticipate the eye being held. Telling the eye to move makes us perceive movement even when the eye doesn't move.
- -> the eye movement command was used to update locations - hard-wired circuit
- -> the eye movement command gets sent to parietal cortex (hub where information comes together about different locations) to:
- update representations of where things are - get input into your consciousness so you don’t feel like things are moving around when you move your eyes
- compensate for retinal motion, so you don’t perceive world to move when your eyes move.
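The efference-copy logic above can be captured in one line of arithmetic: perceived world motion = retinal image shift + copy of the commanded eye movement. This is my own toy formulation of the idea, not the lecture's model; signs and numbers are illustrative:

```python
def perceived_world_motion(actual_eye_move, commanded_eye_move,
                           actual_world_move=0.0):
    """Angles in degrees; positive = rightward (illustrative convention)."""
    # The retinal image shifts by (world motion minus eye motion).
    retinal_shift = actual_world_move - actual_eye_move
    # The brain adds back the efference copy of the COMMAND (not the
    # actual movement) to estimate how the world moved.
    return retinal_shift + commanded_eye_move

# Normal saccade: eye moves as commanded -> world perceived as stable.
perceived_world_motion(10, 10)   # -> 0
# Press on the eye: passive movement, no command -> illusory motion
# opposite to the push (the press-on-the-eye demo).
perceived_world_motion(5, 0)     # -> -5
# Brindley & Merton: command issued but eye held still -> world seems
# to jump in the direction of the attempted movement.
perceived_world_motion(0, 10)    # -> 10
```

Because the copy of the command, not a sensed eye position, is what gets added back, the model reproduces both illusions: motion with no command, and command with no motion, each produce a non-zero "world moved" percept.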
KEY POINTS
More than one coordinate system needed
• Direction of things relative to Eye (retinotopic map)
• Used to look at things
• Angles added to it to create torso-relative and hand-relative coordinates, probably in the parietal lobe
• Direction of things relative to Hand, used for reaching
• Need to know orientation of Eyes relative to Head, Head relative to Neck, etc. all the way down to Hands
• Need to know your eyes' position
• You keep a copy of the command telling eyes where to move
• Command also used to compensate for eye motion, so you don't think the world moved
• Full map not been found in brain for hand-relative and torso-relative coordinates.
• Location calculated on-the-fly each time object is attended?
• What brain areas?
- summary involves making calculations of head/hand/eyes relative to the world, and what algorithm do you need for this?
- sometimes we have maps, other times we have on-the-fly calculations
The way you're using "vision" vs. "perception" here sounds like how researchers usually use the words "sensation" vs. "perception", with sensation meaning processing very close to the receptors (e.g., retina) but perception involving more interpretive stages such as cortex. The word "vision" tends to encompass both sensation and perception.