MOD A Flashcards
Who coined the term Cognitive Neuroscience?
Gazzaniga and Miller
What is cognitive neuroscience?
A multidisciplinary field that scientifically investigates the relationship between mental functions and cognitive processes (the mind) and their biological substrate (the brain). It brings together cognition, which embraces all mental processes, from sensation and perception to attention, memory, and language, and neuroscience, which is the study of the neural basis of mind and behavior.
How does cognitive neuroscience study the relationship between mental processes and their biological substrate?
Through neuroimaging techniques, MRI, PET, et cetera, electrophysiology (single unit, intra-cranial, scalp recordings…), neuromodulation (TMS, TES…), neuropsychology, genetics, optogenetics, simulations etc.
These methods have different levels of invasiveness. For instance, single-unit recordings have the maximum spatial and temporal resolution, but also maximum invasiveness: to record from single units with an electrode, we need to open the skull and penetrate the brain. There are also less invasive techniques in which you don’t need to open the scalp and you can still record from, and sometimes interact with, the brain.
What are the brain axes?
X axis: coronal section, left (negative values) to right (positive values) direction
Y axis: sagittal section, rostro-caudal direction, anterior (positive values) to posterior (negative values)
Z axis: horizontal section, dorso-ventral direction, upper (positive values) to lower (negative values)
Who first introduced the idea that the brain is the physical substrate of the mind, and is therefore considered the father of neuroscience?
Alcmeon of Croton
Who marked the transition stage from a ventricle-centric view to a cerebral-centric view?
Descartes
Which Greek philosophers had a cerebro-centric view?
Hippocrates, Plato, Alcmaeon
Who had a heart centric view?
Egyptians, Aristotle
What were the ancient views of the mind?
- cerebro-centric view : the brain is the site of the mind/the soul
- heart-centric view: the heart is the site of the mind/the soul (not completely wrong: in some cases it has been demonstrated that the heart precedes the brain in action)
- ventricle-centric view: the cerebrospinal fluid is the essence of the soul
We shouldn’t believe that the brain alone has primacy in generating the mind; there is a continuum between the brain and the rest of the body.
What was Plato’s thought about the mind and knowledge?
According to Plato, the brain is the seat of the soul
He described the soul as a chariot kept under tension by three forces: 1) the charioteer, representing the intellect, reason, and rationality, which must guide the rest of the soul towards truth by governing it; 2) the white horse, representing rational thought and moral principles; and 3) the black horse, representing appetites, instincts, and impulses, the irrational part of the soul. So he put emotion on one side and cognition on the other.
Plato had an innatist view, which he described through the Reminiscence theory: all knowledge originates before birth in a hypothetical place called the Hyperuranium, where humans are exposed to the truest sense of things. After birth, this knowledge is gradually retrieved through reminiscence: everything we experience in this world is, for Plato, just an imperfect copy of what we already learned before birth. According to Plato, this is the only possible explanation for human knowledge of certain abstract concepts, which cannot be explained using empirical evidence.
What was Aristotle’s thought about the mind and knowledge?
He believed that the soul enables a body to engage in the necessary activities of life, and its functions are built one upon another:
- We start from the most basic functions, the nutritive functions, which we share with all living things: plants, animals, humans, etc.
- On top of that, there is another soul, the sensitive soul, which we share with animals.
- Finally, there is the rational soul, which is typically human.
The human being has all three souls, all three functions, and according to Aristotle this is what makes us unique.
Aristotle is also considered the father of research on memory: mental images have a crucial role, since they play a pivotal role in memory. Through associations we learn new concepts, which is somehow true even nowadays: association is important for memory, and indeed it is exploited in mnemonic techniques (analogy, memory techniques, et cetera).
According to Aristotle, individuals are born without built-in mental content; therefore, their knowledge comes from experience. When we are born we are a blank slate, a writing tablet that is still empty, called grammateion in Greek and tabula rasa in Latin, and all knowledge, everything written on that tablet, will come from experience.
This is Aristotle’s empiricist view: memory is not a pre-birth place from which one could retrieve innate knowledge; before learning and experience, intellect and memory are nothing.
What was the contribution of Hippocrates and Galen?
They are known for hypothesizing a relationship between physiological functioning and personality, within the humoral theory. They believed that four organic fluids (humors) gave rise to four different temperaments. According to Hippocrates, there are four types of personality, based on which of the four fluids prevails in the body:
• The sanguine personality, a very happy personality, where there is more red fluid than the other three fluids.
• The choleric, angry personality, more yellow bile
• The phlegmatic, very calm, detached personality, more phlegm
• The melancholic, depressive personality, more black bile
If these four humors are well balanced, the organism is healthy. If they are out of balance, one prevails over the others and illness follows.
The contribution of this theory lies in the fact that mental processes were linked to something biological.
What was Descartes’ view?
Descartes had an innatist position (in line with Plato’s idea)
Made a distinction between:
• res cogitans: soul that operates according to the free will rules, “the thing that thinks”
• res extensa: body, seen as a machine that follows natural laws and can be investigated, “the thing that occupies space”
Descartes focused on input and output, considering human behaviour as the result of brain processes and functions. According to the theological norms of that period, the soul could not be investigated; for this reason he separated the spirit (res cogitans) from the body-machine (res extensa), because he needed to allow the scientific study of human beings and behavior. For Descartes, human behavior is explainable in mechanistic terms (reflexes), so there was no need to invoke the soul. Comparing the human body to a machine that could be studied like other machines makes Descartes the father of the modern cognitivist theories that compare the mind to an information-processing system.
Descartes established that the pineal gland was the seat of the soul: the place in which all our thoughts are formed had to be in the innermost part of the brain, and the pineal gland is probably the only structure that is not split into two halves, so this uniqueness gave it a special status.
What did Thomas Willis do?
He gave a very detailed description of the brain and nervous system, and was the first to list the 12 cranial nerves in a numbered fashion.
He also described the system that supplies blood to the brain, the circle of Willis, which is important for neurologists and neuropsychologists to know because occlusions of some of these vessels cause specific deficits: an occlusion of the middle cerebral artery typically causes aphasias if on the left side, or neglect (spatial attention problems) if on the right side.
He was a pioneer in the study of the functions of the nervous system, and recognized the cortex, instead of the ventricles, as the basis of cognition.
He had the intuition that gyrification is linked to cognitive complexity. Gyrification refers to the number and complexity of the gyri of the brain. The brain is composed of gyri and sulci; the gyri are the most prominent features and have characteristic shapes. Apart from the main ones, each brain has its own structure, shape, and number of gyri.
A pioneer in the study of the dysfunctions of the nervous system, he coined the term Neurology and described many brain pathological conditions, including epilepsy and psychiatric conditions (though without improving their treatment).
What did the Empiricists believe?
Hobbes, Locke, Hume. Like Aristotle, they believed that knowledge is not innate but is gathered in our mind through associative principles. For instance, temporal contiguity: something that occurs right after something else is learned more easily. Cause and effect is another associative principle that can help increase our knowledge; further principles are similarity and contrast.
What is the date of birth of modern psychology?
1879, the founding of the first lab of experimental psychology in Leipzig by Wundt
What did Wundt do?
He founded the first laboratory of experimental psychology in Leipzig in 1879, considered the date of birth of modern psychology. Wundt’s lab in Leipzig was the first place where a human being, the experimenter, was formally studying another human being’s mind, the experimentee.
He was the first researcher to systematize the study of psychology. He claimed that psychology should get rid of subjectivism, although one of the main methods in his lab was a subjective one, introspection: self-observation with training, systematized through systematic and experimental methods. Introspection, as he meant it, is looking inside. But he also used very objective methods that are still used by psychologists nowadays: for instance, a chronoscope, a precision tool to measure the durations of cyclic phenomena and processes, which combined mechanical watchmaking with new technologies such as electric telegraphy. In the first reaction-time experiments, the so-called Hipp chronoscope was used, which allowed temporal intervals to be measured with very fine resolution, in the order of milliseconds.
Through the chronoscope he first measured Reaction Times.
What can we say about Titchener?
Considered the father of Structuralism, which states that the mind should be conceived and studied as the sum of its components. The psychologist, following the elementist criterion, has to decompose, fractionate, every conscious mental state into elementary components. This is done through introspection, the subjective method that Titchener learned in Wundt’s lab. The themes mostly investigated were the elements of consciousness: sensations, images, affective states. Each element of consciousness is characterized by some attributes:
• Quality, for instance an acute sound
• Intensity, how strong the sound is
• Duration, how long the sound lasts
• Clarity, how much at the center of my consciousness is the phenomenon that I’m experiencing in my consciousness
According to Titchener, we should try to avoid committing the so-called stimulus error: attributing meanings or names to the raw data of conscious experience, which should instead be reported without interference. To do this, a long-lasting, very complex training is needed, according to Titchener.
What did William James do?
Considered the father of Functionalism, he interpreted psychic phenomena as non-separable evidence, contrasting with the elementism of structuralism. Psychic phenomena should be studied in their evolution, in their dynamism. The mind is seen as the function of an organism that constantly adapts to its environment; in this sense it is inspired by Charles Darwin’s theory of evolution. Since brain and mind keep changing, we cannot capture many phenomena by dissecting the elements that compose them.
Contribution of Weber (psychophysics)?
He studied the minimal difference in stimulus intensity capable of modifying the reaction to it:
Just Noticeable Difference (JND): the threshold for noticing a difference between two stimuli that differ very little from each other along one dimension.
He noticed that the just noticeable difference between two stimuli is directly proportional to the initial intensity of the stimulus, so that the ratio between the difference and the initial intensity is a constant (Weber’s fraction, k). This holds along a given range, of course, not at the extremes, and each of us has a slightly different k.
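As an illustrative sketch (not part of the original notes), Weber’s law can be written in a couple of lines; the value of k used here is hypothetical:

```python
# Weber's law: the just noticeable difference (JND) grows in
# proportion to the baseline intensity I, with a constant k
# (Weber's fraction) that differs slightly from person to person.

def jnd(intensity, k=0.05):
    """Smallest detectable change for a given baseline intensity.
    k = 0.05 is a hypothetical Weber fraction, for illustration only."""
    return k * intensity

# A weak stimulus of intensity 100 needs a change of 5 units to be
# noticed, a strong one of 1000 needs 50: the ratio JND/I stays constant.
for intensity in (100, 1000):
    print(intensity, jnd(intensity), jnd(intensity) / intensity)
```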
Contribution of Fechner?
He generalized the Law of the Just Noticeable Difference to other physical variables and sensory modalities. For instance, in economics there is an effect known as the Weber-Fechner Law of Pricing: we tend to view prices not in absolute terms, but relative to other prices we are thinking about at that moment.
Contribution of Hermann von Helmholtz (physiologist)?
He studied sensory modalities such as vision and hearing and proposed models for the functioning of these senses. He expanded Müller’s law of specific nerve energies, according to which the sensations we receive do not depend much on the type of stimulation exerted on the sense organs, but on the specific sense organ that is stimulated. So mechanical pressure on the skin produces a totally different sensation from mechanical pressure on the retina, where the receptors for vision, cones and rods, transmit a totally different sensation, in that case little spots of light. The mechanical force is exactly the same on skin and on retina; what changes is the organ that receives that type of energy.
This should more generally be called the law of the specific cortical region, as it is ultimately the cortical region where the input is transduced and transmitted that codes for the type of sensation we receive, and these cortical regions are also plastic. People who become blind might, for instance, recycle their occipital lobes for auditory stimuli later on, so it’s not fixed; there are sensitive or critical periods where the opportunity for developing a specificity for a certain modality is maximal, but it’s not all or none.
One reason why Helmholtz has remained famous in physiology is his attempt to study the velocity of impulse propagation in nerves.
He used a frog model through which he found out that the speed of impulse propagation varies as a function of:
- Diameter of the axon (the larger the axon is, the faster the propagation is)
- Presence of myelin sheath (faster propagation if myelin is present)
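The two factors above can be sketched numerically. This is a hypothetical illustration, not from the notes: the "~6 m/s per µm of diameter" slope for myelinated fibers is a classic physiological rule of thumb, and the unmyelinated formula is a rough approximation, both used only to show the qualitative relationships Helmholtz identified.

```python
import math

def conduction_velocity(diameter_um, myelinated):
    """Rough estimate of nerve conduction velocity in m/s.
    Assumptions: ~6 m/s per micrometer of diameter for myelinated
    fibers (rule of thumb); roughly sqrt(diameter) for unmyelinated
    fibers. Illustrative values only."""
    if myelinated:
        return 6.0 * diameter_um            # faster, roughly linear in diameter
    return 1.0 * math.sqrt(diameter_um)     # much slower without myelin

# A 10 µm myelinated axon conducts far faster than an unmyelinated one,
# and a larger axon conducts faster than a smaller one.
print(conduction_velocity(10, True))    # myelinated
print(conduction_velocity(10, False))   # unmyelinated
print(conduction_velocity(20, True))    # larger diameter
```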
Contribution of Exner?
He defined “Reaction Time” as the time interval between
1. the presentation of a stimulus
2. the subject’s response, the response latency
There is a lot of intra- and inter-individual variability in RTs. Somebody might be slower, and that might, for instance, be an advantage in terms of accuracy; in different environments the accurate responder may win out over the quicker one, and vice versa. That’s how natural selection acts.
Contribution of Franciscus Donders?
He was an ophthalmologist, an oculist, and he’s considered the father of mental chronometry.
He studied reaction times and devised clever ways to exploit the reaction-time methodology.
Reaction Time measures were used by Wundt, then forgotten for a while, but strongly re-introduced by cognitivism. Reaction times were one of the “objective” methods to study mental processes.
What are Mental Chronometry and Reaction Times?
Mental chronometry draws inferences about the content, duration, and temporal sequence of mental operations through the scientific study of reaction times, which are a proxy for the processing speed inside our brain during the performance of different cognitive tasks. It’s a proxy because we don’t know how long each step in the cascade from sensation to explicit response lasts; we only know the end of the cascade. Reaction time is just one value at the end of the cascade and depends on many factors.
A modern technique to track the timing and duration of mental operations is ERPs, evoked potentials or event-related potentials: deflections that are ordered, sequenced in time from the onset of a critical event, which could be a stimulus, for instance, but could also be a response of the participant, onwards or backwards. Since these deflections are time-locked to a given event, you can exploit the different positive and negative deflections to track the timing of neural events that precede, perhaps, an overt, manifest response. So it’s another way to do mental chronometry, in a much more refined fashion.
Physiologist S. Exner (1871) called “Reaction time” the time interval between the presentation of a stimulus and the subject’s response, that is the response latency
Importance of inter- and intra-individual differences
You can exploit the response-time difference between one task and another to infer how long each added processing stage lasted, through cognitive subtraction, which derives from Donders’ subtraction method. But keep in mind that these inferences rest on the pure insertion assumption: that inserting a single cognitive process does not affect any other process, either before or after it. This assumption has sometimes, unfortunately, been proven false, which makes the whole logic fall down.
What are ERPs?
Evoked potentials or event-related potentials: deflections that are ordered, sequenced in time from the onset of a critical event, which could be a stimulus, for instance, but could also be a response of the participant, onwards or backwards.
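A minimal sketch of how an ERP is obtained (not part of the original notes, using synthetic data with hypothetical values): many EEG epochs time-locked to the same event are averaged, so that random noise cancels out and the event-related deflection emerges.

```python
import random

random.seed(0)
n_trials, n_samples = 200, 50

# Hypothetical "true" event-related deflection between samples 20 and 30
signal = [1.0 if 20 <= t < 30 else 0.0 for t in range(n_samples)]

# Each epoch = the time-locked signal buried in much larger random noise
epochs = [[signal[t] + random.gauss(0, 2) for t in range(n_samples)]
          for _ in range(n_trials)]

# Averaging across trials: noise averages toward zero,
# the time-locked deflection survives
erp = [sum(ep[t] for ep in epochs) / n_trials for t in range(n_samples)]

# The deflection should clearly emerge around samples 20-30
print(max(erp[20:30]), sum(erp[:20]) / 20)
```

This is the core logic behind ERP analysis: single trials are far too noisy, but the time-locked average reveals the ordered positive and negative deflections used for mental chronometry.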
What are the most common procedures of mental chronometry, based on reaction times?
Simple RTs (press a button when you see a stimulus): simple reaction times are a common procedure to study mental chronometry, the simplest task you can ask a participant to perform that yields a response (the simplest condition of all is just passively fixating stimuli, but if you want a response, simple reaction-time tasks are the simplest ones). They tell you something about the timing of stimulus detection: how fast the participant was in detecting a given stimulus in any modality. So they basically enable the analysis of stimulus detection.
Go/no-go (when the stimulus appears, press button A if the stimulus is a square, and do not press anything if it is not a square). In the Go/no-go task, you ask the participant not to react to any stimulus that appears, but only to a specific category of stimuli, for instance squares and not circles. So you have Go stimuli that prompt a response and no-go stimuli that require withholding the response. Whether inhibition is involved, or rather the selection of a non-response, is a matter of debate in the literature, but still, this is the task. In this case, you don’t only have to detect a stimulus, you also have to discriminate the stimulus feature that prompts you to respond or not to respond; it could be the color, or the tone of an acoustic stimulus, and so on. Here the focus is not only on stimulus detection but also on stimulus discrimination.
Choice RTs (press button A if the stimulus is a square, button B if it is a circle): an even more complex task, asking participants to choose among different responses. Two is the simplest example, but you can have 4, 8, or 10 responses if you use your fingers, for instance. The task is: press button A if the stimulus is a square, press button B if it is a circle, and you can of course change the feature of the stimulus that prompts one response or the other. So what are the processing stages in this case?
1. Stimulus detection: you have to detect whether the stimulus is present or not (especially demanding if the stimulus is close to threshold)
2. Stimulus discrimination, discriminate between the 2 or more types of stimuli
3. Response choice
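The three tasks above are exactly what Donders’ subtraction logic exploits. A minimal sketch, with hypothetical mean reaction times in milliseconds (the numbers are invented for illustration):

```python
# Donders' subtraction logic: each task adds one processing stage,
# so subtracting mean RTs isolates the duration of the added stage.
# All values below are hypothetical, for illustration only.

mean_rt = {
    "simple":  220,   # stage: detection only
    "go_nogo": 300,   # stages: detection + discrimination
    "choice":  380,   # stages: detection + discrimination + response choice
}

# Subtraction isolates the added stage (under the pure insertion assumption)
discrimination_time  = mean_rt["go_nogo"] - mean_rt["simple"]   # 80 ms
response_choice_time = mean_rt["choice"]  - mean_rt["go_nogo"]  # 80 ms

print(discrimination_time, response_choice_time)
```

Note that these estimates are only valid under the pure insertion assumption discussed above: inserting a stage must leave the other stages unchanged, which is not always true.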
What is the central idea of Localizationism?
The hypothesis that specific functions (attention, memory, language, thought, reasoning) are performed by specific areas or regions within the brain. The fundamental idea is that the brain is a sort of mosaic, composed of independent modules. This led to the idea that brain shape may lead to different personality, behavioural, and cognitive features in each individual. The implication of a strong localizationist view is that brain areas work in isolation; of course, this is the extreme view. All human beings have roughly similar microscopic brain structures; therefore, the localizationists assumed that if we are built the same way in terms of brain structures, then we probably use these structures for doing similar things: these structures might have similar functions across individuals.
Who was Fieschi?
He was a criminal who had a particular brain shape, so apparently, according to a localizationist view, the two things were related. It was probably a coincidence due to inter-individual differences, which are absolutely common.
What is Gall known for?
He is best known for pioneering the study of the localization of mental functions in the brain, between the late 18th and early 19th century, through Phrenology, of which he is considered the father.
Although phrenologists were unable to measure brain size directly, they claimed that the observation of bumps on the scalp was a good proxy for the underlying brain structure, and thus for the cognitive processes implemented in those brain structures.
Gall found 27 abilities, which were, in his opinion, related to specific areas of the brain.
A more modern version of this is, for instance, voxel-based morphometry, one of the most popular morphometry methods. It is among the most widely used neuroimaging techniques to investigate how differences in cerebral volume correlate with clinical deficits or cognitive capacities, based on brain images.
While it is totally false that brain shape changes the external surface of the scalp, it does change, even if subtly, the internal vault of the skull. This can be exploited scientifically when studying brain evolution, using the skull bones of our ancestors found by paleoanthropologists, and also endocasts, which can be filled, for instance, with wax, to study how the brain of these ancestors might have appeared, at least superficially. So the study of endocasts somehow retains something of the phrenology logic.
Gall also established the difference between gray and white matter and was among the first to describe aphasia syndromes: language deficits due to lesions in the brain, in the frontal lobes. So he actually already guessed correctly that frontal lobe lesions give non-fluent aphasia.
Jean-Pierre Flourens
Flourens criticised Gall’s theory, so he was against Localizationism. He believed that the soul is a unitary spiritual entity, and carried out empirical experiments to demonstrate the equipotentiality of the cortex, causing brain lesions, slice by slice, in birds.
Whatever lesion was caused, the bird showed similar deficits. According to him, therefore, the brain acts as a functional whole, although some specific functions could still be performed by some primary regions. So mental functions are represented widely throughout the whole brain, against the localizationist view.
Jean Baptiste Bouillaud
reported clinical cases with loss of speech after frontal lesions, similar to what Gall had done
Contribution of Marc Dax?
Marc Dax had the intuition that language is localized in the frontal lobes.
Actually, he found out that the left hemisphere is probably dominant for language, because lesions there, and not in the right hemisphere, cause problems in the language domain, such as verbal memory deficits, speech problems, et cetera.
He didn’t write a scientific article at the time; he just wrote some handwritten notes that he then forgot in his journal. When his son published this work almost 30 years later, Broca had already become famous as the first to discover that the left frontal lobe is responsible for aphasia.
What’s the importance of scientific publications?
- Establishing priority in scientific discoveries (see Dax vs. Broca)
- Knowledge dissemination: Scientific publications spread new discoveries and insights to the global research community.
- Peer review: The publication process includes rigorous peer review, ensuring quality and reliability of research.
- Progress tracking: Publications create a historical record of scientific advancements and evolving theories.
- Collaboration catalyst: Published work often sparks new collaborations and further research.
- Funding justification: Published research helps justify continued funding for scientific projects.
- Policy influence: Scientific publications can inform and shape public policy decisions.
- Educational resource: Publications serve as valuable teaching materials in academic settings.
- Innovation driver: Published findings can lead to practical applications and technological innovations.
- Public engagement: Open-access publications help bridge the gap between scientists and the general public.
- Career advancement: Publications are crucial for researchers’ professional development and recognition.
Gratiolet
He had a holistic view of the brain; indeed, he was in favor of the equipotentiality of the human brain. Mental processes are a product of the brain as a whole and cannot be fractionated into different regions.
Aubertin
He had a localizationist view of the brain, supported by his studies showing that stimulating the anterior lobe, that is, the frontal lobe, would render the patient transiently speechless, halting language production for as long as the inhibitory stimulation lasted, without loss of consciousness. This is an interesting observation: different cognitive processes can be dissociated from each other. You can still be aware of what’s happening around you, yet lose another faculty, in that case the production of language.
Contribution of Paul Broca?
In 1861 he reported the case of a patient known as Leborgne, or Monsieur Tan, because after suffering from epilepsy he was hospitalized, having soon lost his speech faculty, and the only syllable that he could repetitively pronounce was “tan”. Broca later suggested a possible localization of speech functions in the third frontal convolution.
Later on, he also described another patient, Lelong, with a similar deficit but a much more circumscribed lesion, so it was now even easier to localize the language production function: the third (lower) frontal convolution of the left frontal lobe. Broca thus became more confident about an association between this area and speech capacity.
The lesions observed later, by reconstructing through neuroimaging what was destroyed in the brains of these patients, extend well beyond the so-called Broca’s area. For instance, in Leborgne the superior longitudinal fasciculus in the ventral area was also destroyed, not only Broca’s area proper.
By more modern reconstructions, in Monsieur Tan the stroke lesion disconnected all perisylvian language networks, including the arcuate fasciculus, as well as some other frontoparietal connections, the frontal aslant tract, etc. It might be that this whole disconnection pattern led to the language deficits in this patient, not only the focal cortical lesion that was visible in the autopsy performed by Broca (Connectionism).
Hugues Duffau
A modern neurosurgeon, representative of an associationist view of the brain, who maintains that Broca’s area is not the speech area.
He stated that massive surgical resection of brain regions dogmatically considered critical in a localizationist view can be achieved with no functional deficit.
Contribution of Carl Wernicke?
In 1874, Wernicke described a type of aphasia complementary to Broca’s: the sensory aphasia, as he called it, nowadays called Wernicke’s aphasia.
Wernicke also introduced the first network view of language comprehension and production, which represents a movement from localizationism to associationism.
He described patients who were able to produce language, but not to comprehend, to understand, what somebody else was telling them in oral communication: complementary to Broca’s patients.
He hypothesized what was happening in a language network, without the possibility of observing that network with modern neuroimaging. He proposed that language is comprehended in the superior-posterior temporal lobe, an area nowadays known as Wernicke’s area, and that this information is then passed on to Broca’s area for production. So if you have a lesion in Wernicke’s area, you get a fluent aphasia: you can speak, but you cannot understand what somebody else is telling you. If you have a lesion in Broca’s area, you have a Broca’s aphasia, a non-fluent aphasia: you cannot speak, but you understand what somebody else is telling you.
But if you have a lesion in the arcuate fasciculus, a fascicle of white matter tracts that connects Wernicke’s area, posteriorly, to Broca’s area, anteriorly, Wernicke hypothesized that you could have yet another type of deficit: conduction aphasia, the inability to repeat what somebody else is telling you. You are able to understand and to produce language, but you are not able to repeat what somebody else is telling you, because this highway is broken.
What’s the Paris Aphasia Debate?
A dispute between Pierre Marie and Dejerine
• Dejerine supported a localizationist view of aphasias, in support of Broca’s theory
• Pierre Marie defended a more holistic view, and denied the special role of the left frontal lobe in language
They even fought a duel, because one offended the other. Pierre Marie published a very provocative paper against Dejerine’s and Broca’s view, entitled ‘The Third Frontal Convolution Plays No Special Role at All in the Function of Language’.
The case of Phineas Gage
This case demonstrated that the anterior part of the frontal lobe, until then treated as a silent region of the brain because destroying it produced no manifest immediate symptom, was instead so important that its destruction completely changed a person’s personality, supporting Localizationism.
As Harlow, the doctor who described the case, put it, Gage’s mind was radically changed after an accident in which a rod destroyed his frontal pole, so decidedly that his friends and acquaintances said he was no longer himself. He was very rational before the accident, and afterwards he became totally instinctive, irrational, impulsive, with a lot of coprolalic language, etc.
The case was revisited by Damasio in a paper published in Science in 1994, with a digital reconstruction of the lesioned areas in Phineas Gage’s brain. Damasio’s main message was a localizationist one: the medial frontal cortex destroyed in Gage was responsible for abstract representations of values, moral judgments, etc. So a strictly localizationist view by Antonio Damasio. However, some advertisements documented public appearances by Gage, who was found to have worked in Chile as a stagecoach driver, so he was somehow rehabilitated from his deficit, although the stagecoach job is a rather routine one: it doesn’t require much planning, because the horses go back and forth along the same route. So the heavily dysfunctional, socially unadapted Gage described by Harlow in his article was probably accurate only for the period right after the accident. Thanks to neuroplasticity, Gage probably became far better adapted socially than previously thought.
The lesion caused by the rod passing through his brain would not only destroy the frontal pole, but also many connections between this region and other brain regions. The uncinate fasciculus, which is part of the limbic system, would have been destroyed, thus explaining Gage's aggressive and impulsive behavior. Revisited in terms of connectivity, the case therefore also supports a connectionist view.
Hermann Munk
He probably provided the first clear and influential evidence for the visual functions of the occipital lobe. In 1881, he produced lesions of the occipital lobe in dogs and monkeys, reporting, for instance, anopsias of different types. This evidence was in favour of a localizationist view.
John Hughlings Jackson
He is also famous in neurology for the Jacksonian march: some types of epileptic seizures that originate along the Rolandic fissure start and advance following the order of representation of the soma, our body, in the motor cortex. For instance, if they start from the hand, they spread to the arm, then to the shoulder, and then to the other parts of the body. This is called the Jacksonian march.
He also hypothesized different roles for the left and right hemispheres. In most people, he said:
• the left brain hemisphere is the leading one
• the right hemisphere is the automatic one.
So, there is a major and a minor, there is a dominant and a non-dominant hemisphere, according to Jackson. A view that was revolutionized by the findings by Sperry, and later on by his student, Mike Gazzaniga, through the well-known studies of split-brain patients.
Split brain patients
Patients in which the surgeon has cut the main connection tract between the two hemispheres, the corpus callosum, typically for medical reasons, for instance to prevent epilepsy from spreading from one hemisphere to the other. The iatrogenic consequence, the side effect, is that the two hemispheres now work almost in isolation: when you present something to the left, it is probably only the right hemisphere that processes it, and vice versa. If you ask the patient to perform a task with the right hand, only the left hemisphere does it, and the opposite with the left hand.
These studies are evidence against holism, since the two hemispheres do two relatively different things, which contradicts the equipotentiality of the brain cortex.
Roger Sperry and Michael Gazzaniga
Through the study of split-brain patients like patient W.J., they found that the left hemisphere is not good at everything. For instance, in some visuo-motor spatial tasks, like rearranging blocks to reconstruct a visual pattern, the left hand, governed by the right hemisphere, is dominant even in right-handed people, and performs the task much more easily than the right hand, governed by the left hemisphere, which until then was treated as the major, dominant one. The right hand was incapable in tasks of matching a set of blocks to a pattern on cards, for instance. The left hand instead executed the task much better, and according to Gazzaniga's work it even tried to help the right hand when the right hand was failing.
Anyway, these works and many others support the idea that in many people the left hemisphere is dominant for language skills, while the right hemisphere is dominant for visual-spatial perception and action. In the popular imagination, the split-brain studies would eventually give rise to depictions of the right brain as the creative, artistic, irrational side, and the left brain as the logical, analytical side.
Charles Gross
He is a cognitive neuroscientist and historian of neuroscience. He said that considering the right hemisphere as the creative one in absolute terms has become exaggerated and simplified to the point of being incorrect. It is true that certain cognitive activities tend to use one hemisphere more than the other, but for almost all cognitive activities you need both. So this is a myth that needs to be disproven.
What is the contribution of cognitive neuropsychology in the 70s to Localizationism?
Cognitive neuropsychology is a branch of cognitive psychology that aims to understand how the structure and function of the brain relate to specific psychological processes. Because of this focus on mind-brain relationships, cognitive neuropsychology is the ancestral discipline of cognitive neuroscience. At the time, neuropsychology had the key role of helping to localize lesions according to the specific deficits, in a very rough way, because initially there was no neuroimaging. So cognitive neuropsychology starts from brain damage, observes the patient's behavior and deficit profile, and infers what function the lesioned region would perform in the healthy brain.
What are single-case patients studies, and what are their advantages and disadvantages ?
Single-case patient studies are detailed descriptions and investigations of single individuals.
Advantages:
- No averaging artifact: if you average the performance of a group, you may get a number that paradoxically is not representative of any of your patients. With a bimodal distribution, for instance, if you don't look at your data, the average is an illusory value that is not representative of anything.
- Detailed description
- Vivid, powerful, convincing
- Compatible with clinical work: clinical practice is rarely compatible with the long data collection from many patients that group studies require
- Can identify exceptions to the rule: single cases are also important to advance theories by disproving some theories or hypotheses. Because you need a lot of evidence to build up a robust theory in science, but just one exception, if well documented, could be important and critical to destroy the theory and, you know, showing that there is a need for refinement or a rejection of the theory itself.
- Can study rare and unusual events: some cases are so rare that it is impossible to find many patients with the same condition, i.e. a group with enough power for group studies. Think, for instance, of multiple personality disorder: the "Three Faces of Eve" case documented by Thigpen and Cleckley in the 50s; or H.M., described by Scoville and Milner, who at the beginning was a unique case; or W.J., a split-brain patient who was also a unique case.
Disadvantages
- Very limited generalization: you lack the possibility to generalize findings because what you found in a patient might be just idiosyncratic for that patient
- Potential for selection bias: you might look for that rare case because it’s very fashionable, it’s very interesting, it’s very nice, but maybe you are just cherry-picking something that is not really replicable
- Potential for subjective interpretation: you might not be objective about how to describe the patient because of different reasons.
What was Semenza's study about?
A single case study about progressive macrographia for block letters.
The case of a patient who had neglect for the top and left parts of space, so she tended to trace lines towards the bottom and the right. She had no problem writing in cursive style; she only had problems writing in block letters, where her letters became progressively larger. This is progressive macrographia, never before described in these terms. When the first part of a compound word was written in cursive she had no problem; when she switched to block letters, the macrographic pattern started. The case was difficult to interpret: the pattern is probably due to many reasons, and she also had several lesions of different nature. But it tells you that writing in one allograph (writing style) might require different sub-modules than writing in another allograph, and that neglect for one part of the visual field could somehow be related to our capacity to write in an ordered and constant way.
Patient with tumor lesions, both anatomical and functional multifocal cortical damage (PET: hypometabolism in bilateral FC, hypermetabolism in the right temporo-parietal WM, rIFG, CC).
Macrographia is usually associated with damage in the cerebellum and basal ganglia and with dystonic tremor. Shows signs of visuo-spatial neglect for the left and the top of extra-personal space. Macrographia is limited to block letters, which also go in a right-downward direction, no deficits for cursive and numbers, even for words half cursive – half block.
Interpretation: the top-left neglect attracts her attention downward and to the right but affects only block letters because cursive and numbers are extremely automatized (= require less attention and control).
Types of neuropsychological dissociations from single case studies
Classical: when a patient is impaired on Task X but normal on Task Y
Strong: neither task is performed at a normal level, but Task X is performed much better than Task Y
Double: patient A is impaired on Task X but normal on Task Y, and patient B is normal on Task X but impaired on Task Y. Helpful to uncover the underlying functional architecture of human cognition. The existence of a double dissociation is widely considered to largely (but not entirely) rule out task difficulty as a competing explanation.
Oskar and Cecile Vogt
Neuroanatomists who contributed to Localizationism by developing a myeloarchitectonic map of the brain with about 250 distinct areas. Most of the human cortex is neocortex, which is made up of 6 layers:
• Molecular layer (layer I): few stellate neurons and pyramidal neurons (disposed horizontally), some apical dendrites and glial cells. Input layer, considered a target of inter-hemispheric cortico-cortical afference.
• External granular layer (layer II): many stellate neurons, few pyramidal neurons. Input layer, considered a target of inter-hemispheric cortico-cortical afference.
• External pyramidal layer (layer III): small and medium pyramidal neurons. Output layer for cortico-cortical efference.
• Internal granular layer (layer IV): stellate and pyramidal neurons, main target of thalamo-cortical and intra-hemispheric cortico-cortical afference. Extremely relevant in primary sensory areas (especially V1).
• Internal pyramidal layer (layer V): large pyramidal neurons, which produce extra-cortical efference and connections with lower structures (brainstem, SC). Extremely relevant in M1.
• Multiform layer (layer VI): few large pyramidal neurons, mostly small multiform neurons. Output layer -> efference to the thalamus.
Korbinian Brodmann
He divided the cerebral cortex into 52 areas based on cytoarchitectonic differences (distribution, density, type, shape, size of cell bodies).
It is tempting to hypothesize that clear regional neuroanatomic differences in different parts of the cortex imply functional specialization (specific organization to carry out a specific function).
Structural segregation, according to him, means also functional segregation. There is some evidence in favor of this, which would be a strong confirmation of localizationism.
Hubel & Wiesel
Electrophysiologists. In vivo studies with single-neuron recordings along the cat's visual pathway (lateral geniculate nucleus and primary visual cortex). They found that:
Simple cells respond to specific orientations of edges or bars of light. They have distinct receptive fields with excitatory and inhibitory regions. The cell fires most when a bar of light is at the right orientation and in the correct position within its receptive field. Example: A simple cell might respond strongly to a vertical line but not to a horizontal one.
Complex cells also respond to specific orientations but are more sensitive to movement of stimuli.
Unlike simple cells, complex cells are less dependent on the exact position of the stimulus within the receptive field. They respond to a moving bar of light in a particular direction.
This finding suggested that there is a hierarchical functional specialization in the visual cortex, supporting a localizationist view
Hubel and Wiesel's studies led to the discovery of ocular dominance columns. These are vertical columns inside V1, each less than a millimetre wide, that respond preferentially to input from one eye or the other. Two neighbouring dominance columns, one for each eye, make up a hypercolumn. Inside a hypercolumn you basically have all the ingredients to start dissecting everything you need from the visual input: ocular dominance columns, blobs that code for colour, neurons that code for low and high spatial frequencies, and neurons that code for the orientation of segments.
Ocular dominance columns are important for so-called stereopsis, the perception of depth. This is one of the possible sources of evidence about depth in our visual world, but not the only one: if you cover one eye, you can still perceive depth differences through monocular cues. In binocular vision, the input from the two eyes is not perfectly aligned, and the misalignment (disparity) between where a light source hits the two retinas is inversely proportional to the distance of the source. Indeed, if the source is too far away, like the stars, this mechanism no longer works; but if the object is very close by, the angle with respect to the fovea, for instance, diverges strongly between the two eyes.
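The inverse relation between disparity and distance can be sketched with a simplified pinhole-camera model of the two eyes. This is only an illustrative sketch: the function name and the numerical values (a ~6.5 cm inter-pupillary baseline and a ~17 mm effective focal length for the eye) are assumptions chosen for the example, not figures from the source.

```python
def depth_from_disparity(disparity: float, baseline: float = 0.065,
                         focal_length: float = 0.017) -> float:
    """Simplified pinhole model of stereopsis: estimated depth Z is
    inversely proportional to binocular disparity d, via Z = f * B / d.
    baseline (B) and focal_length (f) are illustrative values in metres."""
    return focal_length * baseline / disparity

# Halving the disparity doubles the estimated distance,
# showing the inverse proportionality described above.
near = depth_from_disparity(0.0002)
far = depth_from_disparity(0.0001)
print(round(far / near, 6))  # → 2.0
```

The model also shows why stereopsis fails for very distant objects: as Z grows, the disparity d = f * B / Z shrinks below what the visual system can resolve.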
Study by Rodrigo Quiroga
In his study, localization of function was shown to be present even in single neurons of the medial temporal lobe, in epileptic patients who had deep electrodes implanted for medical reasons. Quiroga and colleagues studied the activity of these neurons while participants viewed different images, including faces of famous people. They found that, independently of the appearance of the image, if that image referred to a given identity, a given actress or a given politician, some selected neurons fired specifically for that identity. The face could be in profile, far away, small or big, but that neuron would fire for that identity.
In the literature there are two extreme hypotheses on how neurons encode different percepts:
• One is based on an implicit representation over a very broad, spread-out, distributed population of neurons that, in a multivariate way, would code for a given meaning or percept. The nuanced activation of a matrix of neurons would somehow code for the identity of a given person, like a face in this case
• The other extreme theory is that the representation of percepts is based on explicit representations by highly selective "cardinal" neurons, which is what this type of evidence favours: the so-called grandmother neurons, neurons that fire when you see your grandmother, whatever the viewpoint.
This paper could be somehow another strong evidence for localizationism.
Wilder Penfield
A neurosurgeon. Through intraoperative electrical stimulation of different parts of the brain, especially in awake surgery, when the patient is kept awake without general anesthesia while the brain is opened, operated on, and stimulated, he found that different regions have different roles.
Unfortunately, this could be found out only for eloquent regions, regions where stimulation gives a very quick, easy-to-spot symptom. If you stimulate a given area of the motor cortex, the body district commanded by that area starts moving: very easy to spot. If you are in the visual cortex, electrical stimulation produces phosphenes, either static phosphenes or moving ones if you are in a visual region that codes for movement, like area MT. But in silent regions, like the higher-level lateral frontal cortex, it is very difficult to understand what these regions are doing.
Penfield is very famous because he found that activation, stimulation of the temporal cortex hosting the hippocampus, for instance, evokes in the patient episodic memory. The patient reports to remember some episodes from his past life.
He is very well known for the sensory-motor homunculus in the motor and somatosensory cortices, M1 and S1. It is called a homunculus ("little man") because it looks deformed: every part of our body is represented along this strip of cortex, but not to scale. There is a magnification factor that depends, in S1, on tactile sensitivity: since we are much more sensitive to tactile stimulation on the fingertips, more cortex is dedicated to the input coming from that part of the body. The opposite holds, for instance, for the back: few sensory receptors, and very little cortex dedicated to the input from those parts of the body. This is called a somatotopic map: for each part of the soma there is a representation in a part of the cortex. The real somatosensory and motor maps are not so clean, and there is some overlap between neighbouring regions, which code for more than one body district. The model is good for teaching purposes, but when you really look inside, the situation is a bit messier than this.
The H.M. Case
Henry Molaison was a patient who received a bilateral medial temporal lobectomy: almost an entire lobe was removed, especially the part containing the hippocampus, the parahippocampal cortices and other neighbouring regions, in order to cure intractable, very severe epilepsy. The surgery was a success: the patient's epilepsy really diminished. However, there was a severe iatrogenic side effect (iatrogenic means originating from the medical cure): anterograde amnesia. The patient was no longer able to learn new information with episodic memory. He was still able to learn with procedural, implicit memory, memory that you cannot declare or talk about but that your motor system (basal ganglia, etc.) encapsulates into schemas that you learn to perform very easily, for instance writing. (Localizationism)
Patient HM had bilateral disconnection of the limbic system, fornix, important for acquisition of episodic memory, disconnection again of the uncinate fasciculus, which is useful to build up emotional memories, so memory and emotions are very related. The cingulum, that connects the hippocampus and parahippocampal gyrus with retrosplenial cortex, also important for memory (Connectionism)
The person who studied him most was Brenda Milner, who wrote her PhD thesis on the H.M. case and became one of the most famous neuropsychologists in the world. The study of H.M. by Brenda Milner was a real revolution. She documented that the amnesia (actually there was also a bit of retrograde amnesia, not only anterograde) persisted in spite of normal, actually above-average, intelligence and other cognitive capacities. Many perceptual tasks, for instance, were performed very well. The patient could sustain a normal conversation, but a few moments after the conversation was finished he no longer recognized the doctor, who had to introduce herself again and again every time a new conversation started. So Brenda Milner studied this patient as a sort of Rosetta Stone to discover the possible dissociations, not only between cognitive systems but within the memory system itself, and the study of this patient led, for instance, to the classical taxonomy of human memory summarized below.
Taxonomy of the memory
• Short-term memory:
• Definition: a temporary storage system that holds a limited amount of information for a short duration, typically around 15-30 seconds without rehearsal.
• Capacity: limited to about 7 ± 2 items (Miller’s Law), though more recent studies suggest it may be closer to 4 items.
• Function: primarily serves as a passive holding space for immediate sensory input before it is either forgotten or transferred to long-term memory.
• Example: Remembering a phone number long enough to dial it.
• Working memory:
• Definition: an active system used for manipulating and processing temporarily stored information. It is often considered a refined or functional subset of STM.
• Components (Baddeley and Hitch Model):
• Central Executive: Directs attention and coordinates tasks.
• Phonological Loop: Processes auditory and verbal information.
• Visuospatial Sketchpad: Handles visual and spatial information.
• Episodic Buffer: Integrates information from different sources.
• Function: crucial for problem-solving, reasoning, and complex cognitive tasks, as it allows us to hold and work with information simultaneously.
• Example: Solving a math problem in your head or understanding a complex sentence.
• Long-term memory
• Definition: LTM is the system used for storing information over prolonged periods, ranging from hours to a lifetime.
• Capacity: LTM is believed to have an essentially unlimited capacity.
• Types of LTM:
• Explicit (Declarative) Memory: the memory that you can talk about explicitly, regulated by the medial temporal lobe, two types:
1. Episodic Memory: Personal experiences and specific events, autobiographical facts (e.g. What was your final high school exam?)
2. Semantic Memory: General knowledge and facts, without spatial temporal information regarding the source of this memory (e.g. What is the definition of a final exam?)
• Implicit (Non-Declarative) Memory:
1. Procedural Memory: Skills and habits (e.g., riding a bike)
2. Priming and Conditioning: Unconscious associations and reflexive responses.
• Function: LTM allows us to retain knowledge and experiences, shaping our identity and enabling learning over time.
• Example: Remembering your childhood home or the capital of a country.
Limits of Localizationism?
No explanation on how different regions/networks interact and communicate with each other to produce functions -> e.g. goal-directed vs stimulus-driven attention involve a complex network, which can be divided into a ventral vs dorsal component. These two may interact or work separately depending on the situation => need to study connections rather than localization.
No one-to-one correspondence between region and function: many regions are engaged in very different functions -> e.g. ACC engaged in different task conditions as it is involved in attention control, error monitoring, energization of other parts of the brain. Complex tasks also activate a similar set of regions (multiple demand system, includes frontal and parietal regions) which is aspecific -> independent from task features. => a localizationist approach cannot fully explain the complexity of the mind-brain relationship.
Michael Posner
He confirmed through PET what was already known from neuropsychology, by administering different tasks, scaffolding on one another (the subtraction method), to participants inside the PET scanner. If you passively view words on a screen, the occipital lobes activate; in particular, if you subtract a lower-level visual stimulus, the visual word form area activates in the lateral inferior part of the occipital lobe. If you listen to words, Wernicke's area and other areas in the temporal cortex activate. If you have to speak words, the premotor cortex comes online. And if you have to generate a verb starting from a noun, Broca's area and other frontal and temporal areas get activated. So brain imaging techniques were used, especially at the beginning, to localize functions, and that developed into the recent field of cognitive neuroscience.
Ferguson's study
It shows that several brain regions, including the nucleus accumbens, which is part of the reward system, are active during spiritual experiences: religious people "feeling the Spirit" inside the MRI scanner while the functional MRI BOLD response was recorded. You see bilateral nucleus accumbens activation in this spiritual experience. The nucleus accumbens is considered a neural region where motivation and action meet, playing an important role not only in religious experiences but also in sexual behavior, reward, stress management, drug seeking, feeding, et cetera. Of course, the "God spot" is a very controversial claim; this highly localizationist, reductionist view is not very well received by the field nowadays.
And this could be seen as evidence in favor of localization.
Huth’s study
The experimenters employed a well-designed experimental protocol, scanning participants for an extended period and mapping across the entire brain how the semantics of different words, listened to through a continuous narrative, are represented. Even if the whole brain somehow represents many words, the authors found that every word tended to occupy its own little portion of cortical surface in a relatively specific manner.
And this could be seen as evidence in favor of localization.
Phylogeny meaning
The branch of science that tries to reconstruct the evolutionary history and relationships among and within groups of organisms, such as species. Comparing species can provide insights into evolutionary history, but also into our own ontogeny and, sometimes, clinical conditions.
What’s Ontogeny?
The study of the developmental history of an organism within its own lifetime, much more limited in time, as opposed to phylogeny, which refers to the evolutionary history of the species the individual belongs to.
Charles Darwin
In Darwin’s theory of evolution there are a lot of variations of each characteristic for each species.
Natural selection is a force that acts by selecting the characteristics that are winning within a given habitat, a given environment, so it’s the fittest within a given context that would survive and would transmit these characteristics to the offspring.
Genetics was not known yet at that time, but he kind of hypothesized something related to genetics. So, the implication of this is that if we have common ancestors, then with another species, we should have a lot of commonalities with that other species, inherited by the common ancestor.
Hourglass model
A relatively recent model of the relationship between evolution and ontogenetic development, which states that there is an intermediate stage of embryonic development during which the embryos of many species, including the human one, resemble each other.
Both inter-species comparisons of morphology and of gene expression reveal an intermediate period during which embryogenesis shows a lot of similarity across species, compared to earlier and later stages of ontogenetic development, in many phyla.
What’s a phylum?
A group of species with common basic morphological characteristics. For instance, as humans we belong to the phylum Chordata, animals characterized by the presence of a notochord, the precursor of the spinal column.
What’s a clade?
A clade is a group of organisms that have evolved from common ancestors. The study of the evolution of the brain of primates, our clade, could allow us to better understand the origin of our own cognition and its neural basis, and could also provide information on the sources of normal and pathological variability of human neuroanatomy, a major challenge for neuroscience today.
Which are the phases of transition of brain capacity in human ancestors?
The transitions were:
the emergence of the primate cerebral cortex, not only through brain size expansion but also through the unique features of the areas contained in those brains
natural selection as the main evolutionary driver was replaced by an interaction between ecological, neural and cognitive factors. So culture could somehow shape our brain evolution in this new transition.
These brain transitions accelerated the expansion of the hominid brain, reaching the threshold of neural capacity required for the emergence of language.
Extrapolating these developments enabled researchers to predict a third phase, which we are probably living now as a living experiment, and which may accelerate, or maybe slow down, human cognitive capacities: artificial intelligence. We are going to interact with artificial intelligence in unprecedented ways; we will probably not see, but our sons and daughters will see, what the final reaccommodation of our brain will be at the end of this phase.
Which are the techniques to study brain evolution?
Radiometric dating
Endocasts
Genome base phylogeny
What’s Radiometric dating?
A technique for studying the evolution of brains, especially of extinct species that lived many thousands of years ago. For instance, carbon-14 (radiocarbon) dating determines the age of organic material by measuring the decay of a radioactive isotope of carbon. While an individual is alive, and only while it is alive, its tissues, for instance bones, absorb traces of carbon-14; when the individual dies, this radioactivity decays at a very well-known rate. So by measuring the residual carbon-14 inside the skull of a fossil finding, you can understand approximately when that individual was alive. This works only for the most recent tens of thousands of years, but there are other radioactive tracers, based for instance on uranium or potassium, that go back millions of years.
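The dating logic above is simple exponential decay, and can be sketched in a few lines. This is an illustrative sketch, not a lab procedure: the function name is invented for the example, and only the well-established carbon-14 half-life (about 5730 years) is taken as given.

```python
import math

C14_HALF_LIFE_YEARS = 5730  # well-established half-life of carbon-14

def age_from_c14_fraction(remaining_fraction: float) -> float:
    """Estimate a sample's age from the fraction of its original
    carbon-14 still present, using exponential decay:
    N(t) = N0 * (1/2)**(t / half_life)  =>  t = half_life * log2(N0 / N)."""
    return C14_HALF_LIFE_YEARS * math.log2(1.0 / remaining_fraction)

# A sample retaining 25% of its C-14 has gone through two half-lives:
print(round(age_from_c14_fraction(0.25)))  # → 11460
```

The same formula with different half-lives is what makes uranium- and potassium-based methods reach back millions of years: their isotopes simply decay much more slowly.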
What’s Endocast technique?
A method through which we infer the shape of the brain from the shape of the skull. Endocasts from fossil crania make it possible to reconstruct the overall shape of the brain contained inside a skull. An endocast is the internal cast of any hollow object, like the skull; in the study of brain evolution it indicates specifically the cast of the cranial vault. The brain, especially its surface, leaves an imprint of its shape inside the skull, so the brain can be reconstructed, for instance, by filling the internal cavity of a skull with wax casts or plaster casts. The brain's shape does not change the outside of the cranium, contrary to what phrenologists like Gall believed; but Gall was not totally wrong, because the inside is imprinted by the shape of the brain's surface. There are digital endocast reconstructions of an extant human, of a chimpanzee (a still-existing species), and of two Australopithecus africanus specimens.
What’s Genome based phylogeny?
A method that allows us to study the evolution of the brain through the capacity to sequence genomes, which has allowed a much more precise reconstruction of our phylogeny: genome-based phylogeny.
In this technique, the similarity between two species is defined as the number of genes the two species have in common, divided by the total number of genes contained in the genomes of the two species.
The evolutionary distance between two species could be interpreted in terms of evolutionary events, such as the acquisition and loss of genes, whereas the underlying properties, the gene content, could be interpreted in terms of functions lost or gained through the gain or loss of genes.
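The similarity measure described above can be sketched as follows. Interpreting "total number of genes" as the union of the two gene sets makes this the Jaccard index; gene names are invented for illustration:

```python
def gene_similarity(genome_a: set, genome_b: set) -> float:
    """Shared genes divided by the total distinct genes across two genomes."""
    shared = genome_a & genome_b  # genes in common
    total = genome_a | genome_b   # all distinct genes in either genome
    return len(shared) / len(total)

# Toy example: 3 shared genes out of 5 distinct genes overall
a = {"geneA", "geneB", "geneC", "geneD"}
b = {"geneB", "geneC", "geneD", "geneE"}
print(gene_similarity(a, b))  # 0.6
```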
What are the differences between our brain and our ancestors?
For instance, the Neanderthal brain looks more elongated compared to the extant human brain, which is more round-like, more globular. So if you try to morph a human brain to match the Neanderthal brain, or vice versa, you would see that you have to stretch some specific parts of one brain to match the other. For instance, in the human brain some regions that are very important for acquiring very complex, fine motor skills, which probably made the success of Homo sapiens with respect to the Neanderthals, are more developed: for instance the cerebellum, the visual cortex, and some basal ganglia (although everything in evolution is an inference and sometimes a speculation). So endocranial globularity changed: how round the shape became from Neanderthal to extant human gives a hint about what changed evolutionarily in our brain and cognition.
Different primates, from different clades but also within the same clade, have brains with different gyrification complexity, patterns, and shapes. They also share some common characteristics: almost universal in the primate brain is the central sulcus, or Rolandic fissure. However, many minor circumvolutions do vary between one species and another. The most striking difference is in terms of size.
However, we cannot make inferences just by looking at brain size differences in absolute terms. We must take body size into account in order to normalize brain size, through a ratio. When we do so (body weight in kilograms on the X-axis, brain weight in grams on the Y-axis), almost everybody, reptiles, fish, birds, mammals, and primates, lies nicely on a regression line, but there is a group of outliers whose brain is simply too heavy when compared to their body weight.
Of course, the more encephalized animals, like the primates, lie above the line, and the less encephalized ones below it, but more or less everybody falls along this regression line. The human brain, however, weighs too much with respect to the body. How is that possible? It is metabolically very demanding. Of course, it's tempting to say that this allows higher-level cognition, but it costs a lot.
What are the hypotheses about how the brain evolves?
Concerted brain hypothesis
Allometry
Mosaic brain hypothesis
How does the brain evolve according to the Concerted Brain hypothesis?
The Concerted Evolution Theory proposes that brain regions evolve together due to shared developmental pathways and genetic constraints. Essentially, this theory argues that changes in the overall brain size or structure lead to coordinated changes in different brain areas. The evolution of brain regions is seen as a unified process, where all regions grow together rather than evolving independently.
How does the brain evolve according to the Allometry hypothesis?
The Allometric Theory explains how brain size scales relative to body size across species. It suggests that, in general, brain size increases in a predictable and proportional manner with body size. However, the increase is typically sublinear, meaning that as an animal gets larger, its brain grows, but at a slower rate than body size. This growth is governed by developmental and metabolic constraints that limit how brain regions can scale.
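The sublinear scaling the Allometric Theory describes corresponds to an exponent below 1 in a log-log regression of brain weight on body weight. A minimal sketch with invented (body kg, brain g) pairs, not measured values:

```python
import math

# Illustrative (body kg, brain g) pairs, NOT measured data, loosely
# mimicking the mammalian brain-body regression.
data = [(0.02, 0.4), (1.0, 10.0), (60.0, 200.0), (3000.0, 4500.0)]

# Least-squares slope in log-log space = allometric exponent
xs = [math.log10(body) for body, _ in data]
ys = [math.log10(brain) for _, brain in data]
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
print(f"allometric exponent ~ {slope:.2f}")  # below 1, i.e. sublinear scaling
```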
How does the brain evolve according to the Mosaic Brain hypothesis?
Different parts of the brain evolved independently and at different rates in response to specific environmental pressures and functional demands. This contrasts with the concerted evolution model, which suggests that all brain regions evolve together in a more coordinated way due to constraints like overall brain size or metabolic demands.
Barton and Harvey
Barton and Harvey showed that patterns of covariance across mammalian brain components correspond to their anatomo-functional connectivity. So, to understand differential brain evolution, we need to study connectivity.
What’s Connectionism?
In contrast to lesion deficit or clinical anatomy correlation approaches typical of localizationism, the connectionist framework emerged to highlight the importance of associations between different regions.
According to the associationist view there is a small number of brain centers, some of which are undeniably specialized, and the combination of these specialized centers with associative cortex through connections could explain a wide variety of higher functions and their deficits.
All behavior and cognitive operations in the brain are produced by the action of distributed networks, not single regions, but distributed networks, cohesively connected with each other.
A connectionist approach seems more respectful of the nature of the brain, which is very complex: composed of 86 billion neurons and, importantly, trillions of connections, all anatomically organized over multiple spatial scales and functionally interacting over multiple temporal scales, from milliseconds (think of response times, or early perceptual and sensory processes) to the whole lifespan.
Many 19th and 20th century neuroscientists (Ramón y Cajal, the histologist Golgi, Meynert, Wernicke, Brodmann, etc.) were all well aware of the importance of connectivity and networks in understanding the nervous system and its function.
So there is a big change in focus, in perspective: from gray matter structures to white matter connections. In other words, from surface structural anatomy (sulci, gyri, etc.) to white matter connections, bundles of axons that connect different brain areas, nearby but also distant ones, either anatomically or functionally.
Carolus Stephanus (Charles Estienne)
Carolus Stephanus, a French doctor, wrote an impressive anatomy treatise, one of the finest of all, very elegantly and artistically printed. However, the anatomy itself is not as accurate as that of other neuroanatomists, like Vesalius or Steno. In this book both normal and pathological anatomy are shown.
Nicolaus Steno
among the first to reason about the functions that white matter might have in the brain. He also correctly interpreted the cerebral circumvolutions, the gyri, as the seat of superior cognitive functions, in contrast to the theories of Descartes that were dominant in that period.
Raymond Vieussens
in the 17th century, made the first illustration showing the organization of projection fibers in the corticospinal tract. However, just looking at the anatomy does not provide many insights into the specific functional meaning of these brain connections.
Isaac Newton
He wrote about the extraordinary role of white matter connections in the brain, and surprisingly enough, he did so towards the end of his book on mathematical principles. He put forward the idea that fibers contain information that is useful for the brain to work and interact with the whole body, the organism.
Johann C. Reil
coined the term psychiatry.
He was a fine anatomist and described for the first time, for instance, the arcuate fasciculus, very important for the language network. The lesion of it would cause conduction aphasia, where the patient can talk, can understand language, but cannot repeat sentences.
He also described other fiber bundles, like for instance the uncinate fasciculus, which connects frontal with temporal structures, now known to be part of the limbic system (so emotional regulation, etc.), and also other associative regions.
Theodor Meynert
Meynert is very famous for a classification of white matter tracts that is still in use today. He distinguished, for instance, three different classes of fibers.
• Projection fibers: white matter tracts that connect the brain with the rest of the body, running from a brain region downstream. The most famous projection fibers are the corticospinal tracts, like the one that originates in the motor cortex and descends to the spinal cord.
• Commissural fibers are fibers that connect the two hemispheres, typically homologous regions in the two hemispheres. The most important commissural fiber in humans is the corpus callosum.
• Association fibers connect different areas within each hemisphere, typically along the rostrocaudal axis. In the short range these are U-shaped fibers that connect nearby regions, but there are also long-range association fibers. The discovery and description of association fibers boosted the connectionist, associationist theories. These intra-hemispheric connections indeed connect different, even distant, regions within each hemisphere, so there must be a reason why this is so: they probably help integrate the functions of different regions within each hemisphere, and therefore provide possible evidence for an associationist view of the brain.
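Meynert's three-way classification above can be encoded as a toy rule. The endpoint labels and function name are ours, for illustration only:

```python
def classify_fiber(endpoint_a: str, endpoint_b: str) -> str:
    """Meynert's scheme: endpoints are 'left', 'right', or 'body'."""
    if "body" in (endpoint_a, endpoint_b):
        return "projection"    # brain <-> rest of the body (corticospinal tract)
    if endpoint_a != endpoint_b:
        return "commissural"   # across hemispheres (corpus callosum)
    return "association"       # within one hemisphere (arcuate fasciculus)

print(classify_fiber("left", "body"))    # projection
print(classify_fiber("left", "right"))   # commissural
print(classify_fiber("left", "left"))    # association
```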
What’s Diffusion Tensor Imaging (DTI) and Tractography?
Diffusion Tensor Imaging (DTI) is a type of MRI (Magnetic Resonance Imaging) technique used to visualize and study the white matter of the brain, specifically the fiber tracts that connect different regions of the brain. Tractography is the process of creating 3D images of these fiber tracts using DTI data. Together, DTI and tractography provide a powerful way to visualize and study the brain’s structural connectivity.
What was Catani’s article about?
Meynert’s classification of white matter tracts is visualized in an article by Catani and colleagues through Diffusion Tensor Imaging tractography (DTI), i.e. with MRI in healthy participants, superimposed on medial and lateral views of the brain surface. You have projection fibers, like the corona radiata and the fornix, and commissural fibers: this is how the corpus callosum looks if you use tractography to track where the fibers start and where they end on the cortex, and you also see the anterior commissure. Then association fibers: the cingulum; the inferior longitudinal fasciculus, which connects occipital with temporal lobe structures; the arcuate fasciculus with all its branches; the uncinate fasciculus, connecting frontal and temporal structures like the amygdala; the inferior fronto-occipital fasciculus, important, in the left hemisphere, for language; etc.
Constantin Von Monakow
Monakow is well known for the construct of diaschisis.
“The generally accepted theory according to which aphasia, agnosia, apraxia etc. are due to
destruction of narrowly circumscribed appropriate praxia, gnosia, and phasia centres, must be finally discarded on the basis of more recent clinical and anatomical studies. It is just in the case of these focal symptoms that the concept of complicated dynamic disorders in the whole cortex becomes indispensable.”
A clear associationist view of neuropsychological deficits described up to then.
What is diaschisis?
According to von Monakow, diaschisis is a sudden, temporary breakdown of a pattern of brain activity (function) in a portion of the brain connected (through white matter pathways) to a localized damaged brain portion.
There are different types:
• Commissural diaschisis: contralateral functional depression (especially in the homologous area).
• Associative diaschisis: ipsilateral functional depression.
• Thalamic-cortical diaschisis: depression in the cortex due to thalamic damage.
• Cerebro-spinal diaschisis: depression in the spinal cord (SC) due to cortical damage.
• Crossed cerebellar diaschisis: depression in the cerebellum due to cortical damage.
Some aspects of diaschisis emphasized by von Monakow
• Damage to one brain area can, by loss of excitation, produce cessation of function in regions adjacent to, or remote from, but connected to, the primary site of lesion.
• Diaschisis is a clinical diagnosis whose presumptive mechanism is loss of excitation to impacted regions, rather than a neural lesion of those regions themselves. This is not always the case, but it was the assumption at the time.
• Diaschisis undergoes gradual regression in well-defined phases, with resolution paralleled by resumption of function in the areas of diaschisis. Symptomatology might ameliorate over time: recovery can occur spontaneously, due to neuroplasticity, for instance.
• The wave of diaschisis follows neuroanatomical pathways, spreading from the site of injury along well-known connectivity. Von Monakow described the phenomenon of diaschisis with the example of a cortical lesion to the central gyrus.
For instance, corticocerebellar diaschisis: a stroke in the cortex also causes a decrease in metabolic activity in the contralateral cerebellum, although the cerebellum itself is intact, not destroyed by the lesion.
Nowadays we know that diaschisis is a bit more complex than Monakow first described. It might also result in inhibition of an area that used to be excited by the lesioned area, or excitation of an area that used to be inhibited by it. This latter pattern is important for understanding, for instance, maladaptive hyperactivation of contralateral regions that counteracts recovery of function in motor stroke: intact regions in the hemisphere opposite to the lesion start being over-activated because they lack the inhibition from homologous regions that are now lesioned, and this hyperactivation counteracts recovery of function, which is instead much better predicted by peri-lesional activation within the same hemisphere, as Nick Ward and others have demonstrated. These are very complex inhibition-excitation phenomena, but Monakow at least had the merit of introducing the concept of diaschisis, in favor of an associationist, connectionist view of the brain.
Wernicke-Lichtheim (classical) model of aphasia
Wernicke first described a network for language comprehension and production.
This model was then updated by Lichtheim, a German neurologist who published a paper describing a connectionist model of language processing stemming from Broca's and Wernicke's findings but extending them into a new model that tried to describe and predict all possible types of aphasia. This is the classical view of aphasias. The model was both neuroanatomical and functional in its basis and predicted the different language consequences of damage to various brain regions:
1. Broca’s aphasia: a lesion in the Broca's node of the network causes non-fluent aphasia, where you can understand what other people are telling you but cannot produce language.
2. Wernicke’s aphasia: a lesion in the Wernicke's node causes fluent aphasia, where you can speak but cannot understand what others are telling you.
3. Conduction aphasia: a lesion in the connection between Wernicke's and Broca's areas.
4. Transcortical motor aphasia: a lesion between Broca's area and the concept field (which some early neuropsychologists tried to localize in the parietal cortex). You can talk and repeat words: an infrequent type of aphasia with difficulties in spontaneous speech but good repetition skills, at least for simple sentences or words.
5. Subcortical motor aphasia: a lesion between Broca's area and the peripheral speech production area.
6. Transcortical sensory aphasia: a lesion between the concept field and Wernicke's area; an infrequent type of aphasia that might occur when a lesion functionally isolates Wernicke's area from the rest of the brain, leaving the input-to-output pathway sufficiently unimpaired that repetition is preserved also in this case.
7. Subcortical sensory aphasia: a lesion between Wernicke's area and the peripheral auditory input area.
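The seven lesion sites above can be collected into a toy lookup table; the node and connection labels are ours, not standard nomenclature:

```python
# Toy encoding of the Wernicke-Lichtheim diagram: lesioning a node or a
# connection predicts a type of aphasia. Keys are our own invented labels.
LICHTHEIM_PREDICTIONS = {
    "broca":               "Broca's (non-fluent) aphasia",
    "wernicke":            "Wernicke's (fluent) aphasia",
    "wernicke->broca":     "conduction aphasia",
    "concepts->broca":     "transcortical motor aphasia",
    "broca->articulation": "subcortical motor aphasia",
    "wernicke->concepts":  "transcortical sensory aphasia",
    "audition->wernicke":  "subcortical sensory aphasia",
}

print(LICHTHEIM_PREDICTIONS["wernicke->broca"])  # conduction aphasia
```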
McCarthy and Warrington model
a two-route model of speech production. It accounts, for instance, for conduction aphasia, when the verbal input is isolated from the articulatory output, and for transcortical motor aphasia, when impaired spontaneous speech occurs with relatively preserved repetition, which is however impaired when repetition requires semantic processing of very complex sentences. The specific hypothesized connections in these diagrams might explain the behavior of single-case patients with double dissociations, for instance between conduction aphasia and transcortical motor aphasia, which somehow bring evidence for these different pathways.
Hugo Liepmann
First case of APRAXIA reported in 1900 (single case): a 48-year-old imperial counselor with striking features: intact spontaneous movement, but when asked explicitly to make a gesture, imitate, or manipulate something (even in imagination), he did so in a clumsy way. He could understand the command; there was no visual impairment and no paralysis. This was attributed to a disconnection between the left hand motor area and posterior (sensory) brain regions.
What’s apraxia?
Apraxia is an acquired inability to perform purposeful skilled actions while the overall cognitive profile is intact; it is more common with left hemispheric damage (parietal/premotor).
Liepmann in 1908 made a distinction between various forms of apraxia based on connections that may be selectively lesioned:
• Melo-kinetic/limb-kinetic apraxia: innervatory apraxia, damage to PM areas => difficulty with fine (finger) movements (executive apraxia) -> idea of the action but no command to put it in practice.
• Ideomotor apraxia: damage to the intraparietal sulcus, inferior parietal lobule, supramarginal gyrus and also connections to PM cortices => difficulty in determining the nature of movements, disruption of kinesthetic memories -> no imitation of gestures, tool-use pantomime and gestures by verbal command. NB! Actual, spontaneous use of tools may remain intact (affordance).
• Ideational apraxia: ventral lesions (TPJ, angular gyrus), no conceptualization of the action -> deficit in conceiving and executing hierarchical complex motor plans => disorganized sequence of movements. Classical test is to ask to demonstrate tool use with a complex sequence of actions.
Heinrich Lissauer
reported one of the earliest cases of impaired visual recognition of objects: agnosia.
It is caused by different possible disconnections between visual centers.
Different types of agnosia:
Apperceptive agnosia: inability to correctly perceive an object as a coherent whole because of perceptual deficits (intact knowledge). Faces tend to pop out (no damage to the FFA). These patients cannot copy drawings or stimuli of any kind (even letters!), which implies compromised visual perception. How do we test these patients? For instance, with copying of objects: in apperceptive agnosia the patient is unable to copy even very simple shapes like letters of the alphabet or geometric shapes. But if you ask the person what the object is, they can very easily describe it correctly.
Associative agnosia: inability to ascribe meaning to an object despite an accurate perception of that object, because of deficits in accessing the stored object representations => a disconnection syndrome. It often co-occurs with anomia. These patients can copy stimuli but cannot give meaning to them: they are able to copy an object with their own graphical capacities, but when you ask them what they copied, what the object or animal was, they are unable to retrieve its name and identity; for instance, they might say it's a kind of dog or something like that. So, intact perception but compromised recognition of the object: the problem is connectional, an inability to access the stored object representation.
Dejerine
he described a syndrome called alexia without agraphia. He used the theoretical framework of centers and connections that would somehow be destroyed to account for it. Dejerine supported a localizationist view of aphasias, in defense of Broca, against the holistic view of Pierre Marie. However, for other types of syndromes he also believed in the importance of brain connections.
Alexia is an acquired reading deficit in previously literate people: people who could read normally lose this capacity due to a lesion. It is "without agraphia" because these patients can still write with no problem. It is a disconnection syndrome because it is probably due to damage to white matter tracts that connect the occipital lobe with other regions, due to lesions, for instance, in the splenium of the corpus callosum. The angular gyrus was believed to be the verbal center that stores visual memories of letters and words (a lesion there causes agraphia).
Papez circuit
An early example of a connected network of regions in the brain, first described in 1937 by James Papez. It represents a circular network of brain regions and connecting fibers associated with emotion and episodic memory. It is a circuit because, starting from the hippocampus, you go to the fornix, then the mammillary bodies, the anterior thalamic nuclei, the cingulate cortex, and then back to the hippocampus.
It’s interesting to note that Papez believed this circuit was mostly involved in emotions: he studied lesions of this circuit in rabies and could observe aggressive responses after lesions there. However, the original model did not include the amygdala, which is one of the key regions for emotional regulation and plays a crucial role in the extended circuit. The updated version of the Papez circuit is now known as the limbic system.
Goodale and Milner
Goodale and Milner proposed a connectionist model of the brain in their study of the two visual streams:
• The ventral visual stream that deals with the capacity to identify objects
• The dorsal visual stream, which progressively reconstructs where the objects are and how, given their position, location, and depth, we can interact with them.
These are two separate visual pathways for perception and action, which consist mostly of association fibers, but also projection fibers (in the same hemisphere).
Norman Geschwind
defended the classical models of aphasia, apraxia, alexia, and other disorders as the most correct ones to interpret these syndromes, and proposed just some updates to these still valid models, for instance in his work "Disconnexion syndromes in animals and man". A lot of clinical syndromes could be explained in terms of brain network structure. Normal functions are often not localized to a specific cortical area but are represented by large-scale networks. For instance, spatial attention was linked to a network of frontal, parietal, and subcortical regions interconnected by white matter tracts that implement its different aspects: anchoring, keeping attention, moving attention, or reorienting attention after disengaging it from one spatial location to another, all require a network of frontal, parietal, and subcortical regions.
According to Geschwind, the left hemisphere is dominant for complex gestures, and he focused more on white matter lesions when explaining apraxia, especially in the arcuate fasciculus, which disconnects the motor from the posterior areas, than on single regions, as in some of Liepmann's views (e.g., the angular gyrus as a storage place for gestural meanings). He also produced similar disconnection accounts for conduction aphasia.
Ramon y Cajal
He was a Spanish neuroanatomist and histologist. He used, at the time, revolutionary techniques of silver impregnation to visualize the complex branching processes of individual neurons, hypothesizing for the first time that neurons were interconnected with each other, but in a discrete way, with separate spaces that we now know as synapses, through which these neurons could communicate. This model contradicted the dominant model at the time, the reticular theory of Camillo Golgi,
an Italian anatomist who, paradoxically enough, invented the impregnation method that Cajal used. According to the reticular theory, neurons are all interconnected without any discrete separation. Electron microscopy in the 1950s finally resolved the controversy between Cajal and Golgi, in favour of Cajal.
This model of discrete neurons interconnected by synapses ideally fits well with connectionist views adopting, nowadays, graph-theoretical representations: neurons could be seen as nodes, and the axonal projections or synaptic junctions as the edges, the links between these nodes. However, the analogy is only apparent, as Cajal was more interested in microscopic connections between neurons than in connections between entire brain areas, which is what today's graph-theoretical applications of connectomics study. His work is, however, important for various principles, or laws, about the organization of neurons. One of these is the conservation law: the activity and maintenance of neurons requires substantial metabolic energy (we now know the brain consumes about 20% of the body's energy). So there are mechanisms in the nervous system of animals that try to minimize axonal wiring costs, to spare cellular material and space, for instance for cooling, for food, for volume, etc. For the sake of efficiency, there are also mechanisms that try to minimize the conduction delay in the transmission of information between one neuron and another; this spares not energy but time, maximizing integrative capacity through a very efficient organizational topology.
Connectomic models
Connectomics has begun to refine Ramón y Cajal's conservation laws in terms of a competition between minimization of wiring costs and maximization of integrative topology, to spare energy and time. So he had a very modern view of how the brain works. This neuroanatomical connectionism should not be confused with the connectionist movement in cognitive science aimed at explaining mental abilities using artificial neural networks, also called parallel distributed processing (PDP) models, neural networks, etc., which are hyper-simplified models of the brain composed of a large number of units. The units are the analogues of neurons, together with weights that measure the strength of the connections between one unit and another. A minimal architecture has an input layer, a hidden layer, and an output layer mimicking responses. Experiments with simulations using this type of model have demonstrated that these models can learn skills such as object recognition, face recognition, some simple types of decision-making, and reading, for instance recognizing letters and producing words.
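A minimal flavor of such weight-based learning, reduced here to a single unit trained with the classic perceptron rule rather than a full multilayer network with backpropagation, can be sketched as:

```python
def step(x: float) -> int:
    """Threshold activation: fire (1) if the weighted input is positive."""
    return 1 if x > 0 else 0

# One unit with two inputs plus a bias, learning logical AND.
# Weights start at zero and are nudged in proportion to the error.
w = [0.0, 0.0, 0.0]  # w1, w2, bias
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
for _ in range(20):  # a few passes over the four patterns suffice
    for (x1, x2), target in data:
        out = step(w[0] * x1 + w[1] * x2 + w[2])
        err = target - out
        for i, xi in enumerate((x1, x2, 1)):
            w[i] += 0.1 * err * xi  # perceptron learning rule

print([step(w[0] * x1 + w[1] * x2 + w[2]) for (x1, x2), _ in data])
# [0, 0, 0, 1] -> the unit has learned AND
```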
There are various drawbacks in these models:
• Biological implausibility: these models are not necessarily related to the brain; connectionists did not even attempt to model the brain in a biologically plausible fashion. For instance, they did not care about the fact that the biological brain contains different kinds of neurons and different kinds of neurotransmitters.
• Training method: a huge number of learning cycles with backpropagation, unrealistic for the brain; it is unclear whether the brain really contains the mechanisms connectionists use to make these artificial networks learn. Backpropagation adjusts the connection weights between units to correct errors, and then the cycle restarts over and over until the weights are efficient enough for the network to reach the desired skill. These cycles can run for thousands of iterations, which is not very plausible: we can learn with very simple associative mechanisms, Pavlovian or Skinnerian learning, even with a single exposure. If you get stuck in an elevator once, you may, if you are an anxious person, start developing a phobia of small enclosed spaces, claustrophobia, etc. So, this is implausible in terms of the real biology of the brain.
• Coverage: good for associative learning but not particularly good for rule-based, high-level processing. These models handled very simple associative learning well, but for much more complex rule-based learning, which involves more than stimulus-response encoding, they were not so good: language, reasoning, and executive functions were not very well simulated.
Artificial intelligence tools don't allow you to check and understand biological plausibility. They might even be better at predicting which brain activation is most likely to occur, given a certain stimulus, than neuroimaging methods, but it is mysterious how they reach that result. You cannot track the computation, because it is so complex and full of steps, thousands and thousands of them, that it is impossible to understand what the model does to reach that result. So the result is good, but it doesn't explain the brain.
Which are the localizationist methods of modern neuroscience?
As long as a neuroimaging analysis does not take into account the connectivity between the different brain regions that are activated, it falls within the realm of localizationism. The same goes for structural MRI, of course.
• Voxel lesion symptom mapping (VLSM): an imaging analysis technique that, starting from structural MRI, establishes a relationship between a precise lesion in some voxels of the brain and clinical deficits (symptoms) in a patient or, mostly, a group of patients.
• Voxel based morphometry (VBM): measures volume differences in brain tissue through a voxel-wise comparison of multiple brain regions.
• Cortical thickness: a finer way than VBM to measure variations in the thickness of the cortical mantle, within and between brains.
• Task-related neuroimaging/electrophysiology: when it does not involve connectivity, it also falls within localizationism.
Which are the Connectionist methods of modern neuroscience?
Functional connectivity (task and resting state): the study of synchronization, basically correlation between regions' activation time courses, even though it is a non-directional tool. It can be measured both in spontaneous activity during resting state and during task-related activations, for instance with psychophysiological interaction as implemented in the SPM software that we will see later on.
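Since functional connectivity is, at its simplest, the correlation between two regions' time courses, a minimal sketch (with invented time series) could be:

```python
# Functional connectivity as Pearson correlation between two
# regions' activity time courses. The signals below are made up.
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

roi_a = [0.1, 0.5, 0.3, 0.9, 0.2, 0.7]   # region A time course
roi_b = [0.2, 0.6, 0.2, 1.0, 0.1, 0.8]   # region B, co-fluctuating with A
roi_c = [0.9, 0.1, 0.8, 0.2, 0.9, 0.1]   # region C, anti-correlated

r_ab = pearson(roi_a, roi_b)   # high: A and B are "functionally connected"
r_ac = pearson(roi_a, roi_c)   # negative: no synchronization with A
```

Note that a high correlation says nothing about direction: that is exactly why effective-connectivity methods are needed on top of this.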
Effective connectivity: like functional connectivity, but with a directionality estimated through mathematical tools such as dynamic causal modeling, structural equation modeling, etc.
Tractography/Anatomical connectivity: uses diffusion MRI sequences (DTI and related diffusion-weighted acquisitions) to mathematically reconstruct white matter tracts whose properties can then be studied, through indices such as fractional anisotropy, which tells you how directionally constrained water molecules are, as opposed to how unconstrained they are. If diffusion is directional, you are in front of a white matter tract with a preferred orientation; if it is chaotic, it means you are, for instance, in the cerebrospinal fluid. Tractography is an indirect reconstruction of the brain's fibers; there are also finer anatomical connectivity measures that directly trace the tracts connecting different brain regions. These are more invasive, and indeed are used only in animal models because the animal must then be sacrificed: for instance anterograde or retrograde tract tracing through viral tools or other types of molecular tracking, which can then be imaged and indicate the direction of connectivity.
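Fractional anisotropy can be illustrated with its standard formula over the three eigenvalues of the diffusion tensor; the eigenvalues below are invented for illustration:

```python
# Fractional anisotropy (FA) from the three diffusion-tensor eigenvalues:
#   FA = sqrt(3/2) * ||lambda - mean|| / ||lambda||
# FA = 0  -> isotropic diffusion (e.g. cerebrospinal fluid)
# FA -> 1 -> highly directional diffusion (coherent white-matter tract)
def fractional_anisotropy(l1, l2, l3):
    m = (l1 + l2 + l3) / 3
    num = ((l1 - m) ** 2 + (l2 - m) ** 2 + (l3 - m) ** 2) ** 0.5
    den = (l1 ** 2 + l2 ** 2 + l3 ** 2) ** 0.5
    return (1.5 ** 0.5) * num / den

fa_tract = fractional_anisotropy(1.7, 0.3, 0.2)  # one long axis: a tract
fa_csf = fractional_anisotropy(1.0, 1.0, 1.0)    # unconstrained water: 0
```

A tractography pipeline thresholds maps like this and follows the principal eigenvector voxel by voxel to reconstruct fiber trajectories.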
Which are the main white matter tracts?
Fornix
Cingulum
Uncinate fasciculus
Anterior thalamic radiation
Superior longitudinal fasciculus
Corticospinal tract
Fronto striatal projections
Cortical pontine tracts
Arcuate fasciculus
Optic radiation
Inferior frontal occipital fasciculus
Frontal aslant tract
Frontomarginal tract and Fronto-orbitopolar tract
What’s the fornix?
a projection tract that connects the medial temporal lobe (the hippocampus, for instance) to the mammillary bodies and hypothalamus; part of the Papez circuit and of the limbic system. It is important for emotions, but also for episodic memory.
What’s the cingulum?
the dorsal part is formed by association fibers important for executive control, emotion regulation, and pain processing, while the ventral part connects the cingulate cortex with a portion of the parahippocampal gyrus, so it is important, for instance, for episodic memory.
In general the cingulum connects the anterior and posterior cingulate cortex, so it is a white matter tract that supports the default mode network, important for autobiographical memories, representation of the self in space and time, and mind wandering.
What’s the uncinate fasciculus?
association fibers that connect the limbic system (the amygdala) with the orbitofrontal cortex; important, for instance, for emotional control and emotional memories.
What’s the Anterior thalamic radiation?
A very vast radiation of connections from the thalamus to the cortex and vice versa; important for executive functions, for instance, but also for motor functions and the planning of complex behavior.
What’s the Superior longitudinal fasciculus?
The superior longitudinal fasciculus, with its three branches, is important for language (mostly, but not only, on the left side), spatial attention (on the right side), memory, emotions, etc.
What’s the corticospinal tract?
Descends from M1 toward the spinal cord; carries the motor commands for motor behavior.
Fronto striatal projections?
important for controlling cognitive, motor, emotional processes
Cortical pontine tracts?
Important for motor activity
Arcuate fasciculus?
language processing: on the left, production of language; on the right, other aspects such as prosody control. On the right side the arcuate fasciculus is also important for visuo-spatial processing.
Optic radiation?
projects from the lateral geniculate nucleus of the thalamus to V1, the primary visual cortex; important for vision. You become blind if this tract is cut.
Inferior frontal occipital fasciculus?
one of the longest association tracts we have, running from the frontal lobes to the occipital cortex; important for emotional reactivity to visual stimuli, language, recognition of familiar faces, etc.
Frontal aslant tract?
important for speech and language functions, for instance verbal fluency and lexical retrieval (you see a picture and have to retrieve the name of the depicted object). It connects, for instance, Broca’s area with the premotor cortex. It is also important for working memory functions.
Frontomarginal tract and Fronto-orbitopolar tract?
Small U-shaped association fibers in the frontal pole: bundles of frontal white matter that connect medial and lateral regions of the frontopolar cortex, probably important for sequential planning, decision making, and reasoning in which one conclusion follows from another, what Koechlin called branching: the capacity to temporally sequence behavior with a lot of nesting and hierarchies involved, suspending an action and resuming it after finishing another task, et cetera.
Thiebaut de Schotten study?
It shows the effects of lesions in different white matter tracts. If you have a lesion, for instance, in the fronto-parietal segment, which involves both the arcuate fasciculus and the superior longitudinal fasciculus on the right, you would have spatial neglect with specific symptoms, such as problems in line bisection and line cancellation tasks, but no visual field defects as checked, for instance, with an ophthalmologic examination. You have visual field defects instead if the lesion involves the optic radiation, in which case everything that relies on the visual modality is, of course, compromised as well.
Epelbaum study?
Shows the importance of connectivity in real medical applications. The patient was an epileptic patient who had to undergo surgical removal of the epileptogenic source in the brain because the epilepsy was not pharmacologically treatable.
The neurosurgeons were very worried that the patient could develop problems such as the incapacity to read, alexia, and wanted to avoid that. So they asked cognitive neuroscientists to tell them which area to spare while resecting the epileptogenic region. The neuroscientists had the patient perform a language task within the MRI scanner and observed which region activated while the patient was reading. The neurosurgeon spared this activated region; however, at the end of the surgery the patient developed alexia anyway.
You can explain that through a connectionist view. What happened was that the neurosurgeon spared the region itself, probably the patient's visual word form area, but cut almost all the connections between this area and the inferior temporal and occipital lobes behind it.
Before surgery, there was a lot of connectivity as reconstructed through DTI tractography; after surgery, this connectivity was gone. The occipital regions could no longer communicate with the visual word form area that recognizes letters and words, and therefore the patient developed pure alexia.
What’s Holism?
According to the holistic perspective each region is, to some extent, connected with the rest of the brain and contributes, with its little bits of information processing, to a brain function. So, a lesion affecting one area will have consequences on the functionality and connectivity of the whole brain, changing the activity, the dynamic activity of the whole brain.
Who is McIntosh?
According to Randy McIntosh’s hypothesis, the functional relevance of a brain area composed of different nodes depends on the context in which it operates, on how the other regions connected to it are or are not dynamically linked with it. A region can participate in several cognitive processes and behaviors: there is no one-to-one correspondence between a function and a region; it depends on the state, on the working context of that region. It is therefore possible that certain critical nodes serve as switchers, to use McIntosh’s terminology, behavioral catalysts: they can switch from one function to another as their connectivity dynamically changes from one network to another, participating in different networks’ activity depending on the needs. One example is the hippocampus: it can exert functional connectivity with different cortical regions when we are explicitly learning new information, but it can also be connected with subcortical regions when we are implicitly learning a skill. The same hippocampus, the same nodes, connected with different regions depending on the learning context.
John Hughlings Jackson?
Holism has also been opposed to connectionism. Even though connectionist models were very successful in accounting for a number of disorders, some remained skeptical and rejected them because, in their view, these models still intrinsically contained localizationist principles. Among them was Jackson, a British neurologist who described hierarchical organization and distributed systems, networks, in the brain. Why is he counted among the holists? He did not assume any one-to-one correspondence between localization and function, and he observed that after any lesion, functions showed some recovery and were never totally lost. If a patient has a speech problem after a lesion in the language areas, the point is that the patient can still say a few words, which means other regions must also support language function. Similarly for movement: even if the hands could not move voluntarily in some types of apraxia, they could still move in automatic responses. According to Jackson, this means such functions are not lost but can be supported by spared regions. In his view, the nervous system is not a series of separate centers connected to each other, but a hierarchical organization of distributed neurons, where behavior is mediated by their connections throughout the whole brain. Counter-intuitively for a holist, however, Jackson was among the first to describe inter-hemispheric differences, with the left hemisphere dominant for language and motor functions and the right hemisphere important for spatial attention and visual processing. He was the founder of the British journal Brain, which is still very prestigious and in the first quartile even nowadays.
Pierre Marie
An enemy of Dejerine, who was instead a localizationist, Pierre Marie was against both localizationism and connectionism. In favor of holism, he stated that there is only one form of aphasia, the posterior aphasia, which was due to general, non-linguistic intellectual deficits. The anterior aphasia, according to Pierre Marie, is the posterior aphasia plus some anarthria, that is, motor problems. In his view these accounts made a connectionist model of aphasia, a network of specialized centers, completely useless.
Henry Head
He stated in his own work that to localize the damage that disturbs speech and to localize speech in the normal brain are two different things. In other words, he claimed that neuropsychology plays little role in revealing how the cognitive system works in normal circumstances.
Kurt Goldstein?
developed a holistic model of cerebral functioning with important implications for theories of recovery. Goldstein obtained a rich body of empirical data from his own work as a clinician in a hospital for brain-injured German soldiers, veterans of World War I, whom he had the opportunity to try to understand and cure through rehabilitation; he was disappointed by the fact that localization of function did not empirically work for those patients. Although analysis through isolation is a necessary first step in the scientific study of the brain, Goldstein adopted a holistic approach. Two kinds of clinical observations were important in developing this view in Goldstein’s case.
He stated that localization of performance no longer means to us an excitation of a certain place, but a dynamic process which occurs in the entire nervous system. He had a holistic theory of the brain and the organism based on the Gestalt theory, where a lot of phenomenological forces interact with each other in an integrated way that is difficult to reduce in a localizationist model.
Donald Hebb
Canadian psychologist, best known for his theory of Hebbian learning, which suggests that connections strengthen when two neurons are activated together. Hebb’s work aligns with holism because it emphasises the interconnectedness and integration of neural processes, rather than focusing on isolated brain regions or single neurons.
Hebbian rule?
The Hebbian rule is a principle of learning in neuroscience that explains how synaptic connections between neurons strengthen through repeated activity. It’s often summarized as:
“Neurons that fire together, wire together.”
When two neurons are activated simultaneously or in close succession, the synapse (connection) between them becomes stronger. Over time, this strengthening makes it easier for one neuron to activate the other, forming the basis for learning and memory. Conversely, if two neurons are rarely active together, their connection may weaken.
The later discovery of associative long-term potentiation (LTP) is the empirical corroboration of these speculations, which Donald Hebb had produced many years before. Hebb’s work also anticipated the now well-known phenomenon of spike timing-dependent plasticity, which requires a precise temporal sequencing between activations. Temporal contiguity is an important requisite for the new connections that support learning.
The Hebbian rule explains how experiences lead to changes in neural circuits, forming the basis for learning. It supports the idea that we learn by associating stimuli that occur together. In essence, the Hebbian rule describes how the brain’s wiring adapts based on experience.
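A minimal sketch of the rule, with arbitrary learning-rate and decay values rather than anything from Hebb's own formalism:

```python
# Toy Hebbian update: the weight between two units grows when they are
# co-active ("fire together, wire together") and decays otherwise.
# eta (learning rate) and decay are illustrative, not principled, values.
def hebb_update(w, pre, post, eta=0.5, decay=0.05):
    if pre and post:
        return w + eta * (1 - w)   # co-activation strengthens the synapse
    return w * (1 - decay)         # lack of pairing lets it weaken

w = 0.1
for _ in range(5):                 # five paired activations
    w = hebb_update(w, pre=1, post=1)
w_paired = w                       # strong connection after pairing

w = 0.1
for _ in range(5):                 # five trials where only 'pre' fires
    w = hebb_update(w, pre=1, post=0)
w_unpaired = w                     # connection has weakened
```

The `1 - w` factor is just a bound keeping the toy weight below 1; real synapses implement saturation through biophysical limits on LTP.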
Reverberation and Cell assembling?
Reverberation and cell assemblies are key concepts introduced by Donald Hebb to explain how the brain processes information, stores memories, and supports learning.
A cell assembly is a group of neurons that become connected through repeated simultaneous activation. When you experience something (like seeing a face or hearing a song), specific neurons fire together. If this happens repeatedly, the connections between these neurons strengthen. Once formed, a cell assembly can be reactivated even by a partial stimulus. For example, seeing part of a familiar object can trigger the memory of the whole object.
Reverberation refers to the continuous, self-sustaining activation of a cell assembly, even after the original stimulus is gone. This happens because the neurons within a cell assembly can stimulate each other, keeping the activation going for a short period. Reverberation is critical for short-term memory and attention. It allows information to stay active in your mind briefly, giving your brain time to process it or transfer it into long-term memory.
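Pattern completion by a cell assembly can be sketched with a tiny Hopfield-style network, a later formalization in the Hebbian spirit rather than Hebb's own model; the stored pattern here is invented:

```python
# A cell assembly as pattern completion: a pattern stored via a Hebbian
# outer-product weight matrix is reactivated from a partial cue.
pattern = [1, 1, -1, 1, -1, -1]           # the "assembly" (+1 = active unit)
n = len(pattern)

# Hebbian weights: w[i][j] = p_i * p_j, no self-connections
w = [[pattern[i] * pattern[j] if i != j else 0 for j in range(n)]
     for i in range(n)]

def recall(cue, steps=3):
    """Iterate the network: each unit takes the sign of its total input."""
    s = list(cue)
    for _ in range(steps):                # a few synchronous update rounds
        s = [1 if sum(w[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

partial = [1, 1, 0, 0, 0, 0]              # only part of the stimulus arrives
completed = recall(partial)               # reverberation restores the whole
```

The repeated update rounds are a crude stand-in for reverberation: activity circulating within the assembly until the full stored state is reinstated.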
Karl Lashley?
Famous for coining the term neuropsychology.
He is also considered one of the fathers of holism.
He theorized the law of integration: “the whole is more than the sum of its parts”. It is the interaction of different components of the brain that leads to the functions that can be implemented by the brain itself.
He used an empirical approach to arrive at these theories, the training-ablation method, which he learned in Franz’s lab.
So he would train, for instance, a rat in navigating a maze, a labyrinth, through a lot of trials and errors, and then he would use systematic lesions and remove sections of cortical tissue to see if the rat forgets the acquired memory of spatial locations.
Through his research with rats, he found that forgetting depended mostly on the amount of brain tissue removed rather than on the specific location of the lesion. He called this the law of mass action, and he believed it was a general rule governing how brain tissue responds, independent of the type of learning: deficits in memory are proportional to the amount of cortical tissue removed. Therefore, he inferred, memory is not localized in one area but is distributed across the cortex.
More generally, a functional deficit following a cortical lesion relates much more to the amount than to the location of the lesion, so specific functions are shared by all neurons rather than localized, at least in the associative cortex. He knew that primary regions were very specialized, but he held this for the associative cortex.
Because of this law of mass action, even nowadays, when we want to localize specific functions through, for instance, the study of patients with brain lesions, we need to demonstrate that the loss of function does not mainly depend on the volume of the lesion. How can we do that analytically? We can use lesion volume as a regressor of no interest in our statistical analysis: if location matters and not the amount of lesion per se, the correlation between function and lesion location should remain significant after lesion volume has been accounted for.
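A hypothetical sketch of this analytic move, with fabricated patient data: first regress the deficit on lesion volume, then ask whether lesion location still explains the residual deficit.

```python
# Lesion volume as a nuisance regressor, toy version with invented data.
def residualize(y, x):
    """Residuals of y after simple linear regression on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))
    return [b - (my + beta * (a - mx)) for a, b in zip(x, y)]

volume = [10, 12, 30, 32, 11, 31]    # lesion volume per patient (cc)
in_area = [1, 1, 1, 0, 0, 0]         # does the lesion hit the critical area?
deficit = [8, 9, 12, 5, 2, 4]        # symptom severity

resid = residualize(deficit, volume)  # deficit NOT explained by volume

# patients lesioned in the critical area still show larger residual deficits,
# so location matters over and above lesion size
mean_in = sum(r for r, a in zip(resid, in_area) if a) / 3
mean_out = sum(r for r, a in zip(resid, in_area) if not a) / 3
```

In a real VLSM analysis the same logic is applied voxel-wise within a general linear model, with lesion volume entered as a covariate of no interest.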
Another law that Lashley came up with was the law of the equipotentiality. Any part of a functional area in the brain can perform the function associated with that area. If following a lesion, any part of a functional area is spared, the function could be still maintained. In other words, Lashley hypothesized that a piece of functional area could perform the function of the entire area, even when the rest of the area is lesioned. And this is the law of equipotentiality, a principle for recovery of function. In other words, all cortical areas, if they are connected in implementing a given function, could substitute other areas in sustaining learning in case of damage.
Criticism of Lashley’s theories
We now know that mass action was a misinterpretation of the empirical evidence by Lashley, because running a maze, a complex navigation task, requires multiple cortical areas, each implementing different functions: visual perception, movement planning, retrieval of spatial memory in hippocampal regions, etc.
So the task engages many functions besides memory; this type of paradigm suffers from the so-called task impurity issue: you never tap only one function with a task, since a task is typically multi-componential.
Removing small individual portions of the brain did not impair the rats’ performance much, but removing large sections took out multiple cortical areas at once, affecting various functions such as sight, motor coordination, and memory, and thereby making the animal unable to run the maze. This is not incompatible with a localizationist view: you impair different areas, each of which plays a role in a sub-component of a very complex task, and you get the performance deficit Lashley observed without any need for holism.
Criticism of the law of equipotentiality: Lashley was observing the phenomenon of neuroplasticity. Up to a certain extent, some areas of the brain are able to vicariously take over the functions of other areas that become unavailable because of damage, although not to the extent initially argued by Lashley; the degree of neuroplasticity may depend on many factors, including maturational ones (think of sensitive periods, for instance). A further criticism: equipotentiality is not compatible with cases such as HM, in whom a lesion confined to a specific portion of the brain, iatrogenically produced to cure another problem (epilepsy, in that case), caused a very specific deficit in episodic memory. This is incompatible with the law of equipotentiality because HM never recovered that function, even though the rest of the brain was spared.
Eva Irle?
She wrote an interesting review covering 283 published studies of cortical and subcortical lesions in monkeys and demonstrated that the localization of the lesion matters for understanding what type of deficit is produced. Of course, if the task performed by the monkey is visual and you lesion the occipital cortex, the monkey will show a deficit across a range of tasks. But lesions in other regions cause constellations of deficits that can be differentiated from one region to another: parietal cortex lesions mostly impair performance on tactile discrimination tasks, frontal lesions progressively create deficits the more working memory is required while leaving performance on simpler visual discrimination tasks intact, et cetera. (A criticism of holism.)
John Duncan’s Multiple Demands System?
A quasi-holistic view: a model of how multi-componential, complex behavior is implemented in the brain. There are common patterns of activity representing the brain’s response to many different kinds of cognitive challenges, similar to the task-positive network.
Empirical evidence for the multiple demand system came from different sources. From neuropsychology, the study of brain-lesioned patients, and from animal lesions: Duncan’s group observed that when there is a lesion in any node of this frontoparietal multiple-demand system, the patient has problems with classical fluid intelligence (g-factor) tasks, showing errors such as omission of relevant components, insertion of irrelevant ones, unfulfilled goals, etc.
Two other sources are neuroimaging, where common multiple-demand regions are activated across many cognitive operations, and single-unit recording in animals, where the same neuronal population can flexibly learn to code different aspects of the stimuli depending on the task at hand in a given moment, especially in this frontoparietal multiple-demand system and particularly in prefrontal regions.
A review by Duncan and colleagues shows that, independently of the nature of the task, if the task is difficult and complex enough you get a cluster of activations (the review focuses on prefrontal regions) in both hemispheres: response conflict, task novelty, working memory load, perceptual difficulty, etc.
What is the evidence from single-neuron recordings? Duncan cites work by Freedman and colleagues, who taught monkeys to discriminate figures created by morphing, in steps, between a cat at one extreme and a dog at the other, with all intermediate combinations: 80% cat, 60% cat, 60% dog, 80% dog, 100% dog. So there is a continuum of visual similarity going from one extreme, the cat, to the other, the dog. They taught the monkey to discriminate cats from dogs with an arbitrary boundary along this perceptual continuum: to get a drop of juice, the monkey needed to respond selectively to one category and not to the other, and after some training it learned the task very well. What happens to the neurons in the brain? When you record firing rates with single electrodes in prefrontal cortex, you see that some neurons became very selective for the category the monkey had learned. One neuron, for instance, "loved" dogs: its firing rate increased a lot whenever a dog exemplar was shown, while it did not like cats, so its firing rate dropped for cat exemplars. What is surprising is that even very similar images, 60% cat or 60% dog, were sharply categorized in the neuron's firing pattern. So this is not a perceptual neuron but a decision-making neuron, able to discriminate cats from dogs even when the exemplars are very close to each other and hard to tell apart. The next bit of evidence is even more surprising.
They took a monkey, among the ones that were trained, with implanted electrodes, and they started to teach the monkey a completely different classification of the same stimuli.
Now the monkey had to discriminate between a new category composed of some cats and some dogs, another category again composed of cats and dogs, and a classification scheme completely orthogonal to the previous one. So the monkey had to forget the previous classification and learn this completely arbitrary new one. When the researchers looked inside the brain, they found that the very neurons that had discriminated the previous categories were now able to code for the new orthogonal categories and had forgotten the old ones: very flexible neurons that could do whatever the task at hand required.
The multiple demand system is a hypothesis that John Duncan, in the UK, put forward about how different brain regions cohesively take care of difficult, complex tasks related, for instance, to the general intelligence factor (g-factor). As he observed in meta-analyses, electrophysiological recordings, MRI studies, etc., there are frontoparietal regions, also including the supplementary motor area and anterior cingulate cortex, that handle difficult task conditions whatever those conditions, stimuli, or materials are: task switching, planning of different actions, task sequencing, and so on. These regions come online together when dealing with such cognitive challenges. Lesions anywhere inside that system explain deficits in general intelligence, with the amount of deficit proportional to the amount of lesion inside the system, while lesions outside the system cause more specific deficits but no specific problem with general intelligence and complex tasks.
John Duncan’s Multiple Demands System?
A quasi-holistic view, It’s a model of how multi-componential behavior, complex behavior is implemented in the brain. There are common patterns of activity representing the brain’s
response to many different kinds of cognitive challenges, similar to task positive network.
Empirical evidence for the multiple demand system came from different sources, neuropsychology, so study of brain lesion patients, animal lesions, for instance, what the Duncan’s group observed was that when you have a lesion in any node of this frontoparietal multiple-demand system, the patient had problems with classical intelligence, fluid intelligence tasks, the g-factor tasks, like omission of relevant components, insertion of irrelevant ones, goals that are unfulfilled, and etc.
Two other examples are neuroimaging, so common multiple demand regions are activated for many cognitive operations and single unit recording in animals. The same neuronal population could learn to code for different aspects of the stimuli, depending on the task at hand in a given moment, in a flexible way, especially in this multiple-demand system fronto parietal, especially prefrontal regions.
A review by Duncan and colleagues shows that independently of the nature of the task, if the task is difficult and complex enough, you get a cluster of activations, this is only focusing on prefrontal regions, that are activated in both hemispheres, response conflict, task novelty, working memory, perceptual difficulty, etc.
What is the evidence from single-neuron recordings? Duncan, Friedman, and colleagues taught monkeys to discriminate between different figures, which were created by morphing in different steps animals like a cat on the one extreme and a dog on the other extreme. With all combinations of them, for istance 80% a cat, 60% cat, 60% dog, 80% dog, 100% dog. So there is a continuum in the visual similarity, going from one extreme, the cat, to the other extreme, the dog. What they did was to teach the monkey to discriminate cats and dogs with arbitrary boundaries between these continuous perceptual exemplars. If the monkey wanted a juice drop, needed to discriminate between them, and selectively respond to one or not to the other. And the monkey, after some training, could learn this task very well. What happens to the neurons in the brain? When you go with single electrodes to record firing rate or other indices of activation, in single neurons in prefrontal cortex, you see that some of these neurons became very selective with the category that the monkey learned. So they were, for instance in this case, this was a neuron that loved dogs, so it was activated, firing rate increased a lot, whenever a dog exemplar of the category dog was shown to the monkey. While this neuron didn’t like cats. So now here you see that the activation is not present, so there is a lowering of the firing rate for cats. What is surprising here? What is surprising is the fact that even very similar images, 60% cat or 60% dog, were extremely categorized in these patterns of firing rate inside that neuron. So this is not a perceptual neuron, this is a decision-making neuron that could discriminate between cats and dogs, even if some of these exemplars are very close to each other and very confusing, very difficult to discriminate. But it is even more surprising the next bit of evidence.
They took a monkey, among the ones that were trained, with implanted electrodes, and they started to teach the monkey a completely different classification of the same stimuli.
Now the monkey had to discriminate between one category composed of cats and dogs and another category also composed of cats and dogs, a classification completely orthogonal to the previous one. So the monkey had to forget the previous classification and learn this completely arbitrary new one. When they went inside the brain and looked at what was happening, they found that the same neurons that had been able to discriminate the previous categories now coded for the new, orthogonal category and no longer for the previous one. Very flexible neurons, then, that could do whatever was needed by the task at hand.
The multiple demand system is a hypothesis that John Duncan (UK) put forward about how different brain regions cohesively take care of difficult, complex tasks related, for instance, to the general intelligence factor (g-factor). As he observed in meta-analyses and in electrophysiological recordings, MRI studies, etc., there are frontoparietal regions, also including the supplementary motor area and anterior cingulate cortex, that take care of difficult task conditions, whatever those conditions, stimuli, or materials are: task switching, planning of different actions, task sequencing, and so on. These regions come online together when dealing with such cognitive challenges. Lesions
inside that system, wherever in the system, would explain deficits in general intelligence, with the amount of deficit proportional to the amount of lesion inside the system, while lesions outside the system would cause more specific deficits but no particular problem with general intelligence and complex tasks.
John Duncan’s Multiple Demand System?
A quasi-holistic view: it’s a model of how multi-componential, complex behavior is implemented in the brain. There are common patterns of activity representing the brain’s
response to many different kinds of cognitive challenges, similar to the task-positive network.
Empirical evidence for the multiple demand system came from different sources. In neuropsychology, the study of brain-lesion patients (and animal lesion work), Duncan’s group observed that when there is a lesion in any node of this frontoparietal multiple-demand system, the patient has problems with classical fluid intelligence (g-factor) tasks: omission of relevant components, insertion of irrelevant ones, goals left unfulfilled, etc.
Two other sources are neuroimaging, where common multiple-demand regions are activated across many cognitive operations, and single-unit recordings in animals, where the same neuronal population can flexibly learn to code for different aspects of the stimuli depending on the task at hand, especially in this frontoparietal multiple-demand system and particularly in prefrontal regions.
A review by Duncan and colleagues shows that, independently of the nature of the task, if the task is difficult and complex enough you get a cluster of activations (here focusing only on prefrontal regions) in both hemispheres: response conflict, task novelty, working memory, perceptual difficulty, etc.
What does the term connectome mean?
The concept of the connectome is key to contemporary thinking about brain networks within neuroscience. The word was first coined by Olaf Sporns and Giulio Tononi, and in parallel and independently by Patric Hagmann. It is a term that defines a matrix representing all possible pairwise connections (or correlations) between the neural units of the brain. A connectome should ideally represent knowledge about the wiring diagram of our brain: how our brain regions are connected to each other.
More general concepts of the connectome include either a matrix of anatomical connections between large-scale brain areas, or between individual neurons at the micro scale, or the matrix of functional interactions revealed by the analysis of some physiological signal: processes unfolding either slowly, as in the case of the fluctuations measured with the BOLD response in fMRI data, or as fast as the high-frequency neural oscillations detectable with electrophysiological means such as local field potentials and single-unit recordings, or with non-invasive electrophysiology such as the electroencephalogram, the magnetoencephalogram, and so on.
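The functional flavor of the connectome can be sketched as a pairwise correlation matrix over regional time series. Below is a minimal pure-Python illustration with made-up signals (the regions, coupling strength, and noise level are invented for the example, not real BOLD data):

```python
import math
import random

# Toy "functional connectome": pairwise Pearson correlations between
# the time series of three simulated regions (hypothetical data).
random.seed(0)

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Regions 0 and 1 share a common driving signal; region 2 is independent noise.
drive = [random.gauss(0, 1) for _ in range(200)]
region0 = [d + 0.3 * random.gauss(0, 1) for d in drive]
region1 = [d + 0.3 * random.gauss(0, 1) for d in drive]
region2 = [random.gauss(0, 1) for _ in range(200)]

series = [region0, region1, region2]
connectome = [[pearson(a, b) for b in series] for a in series]
# The coupled regions show a strong functional edge; the independent one does not.
print(round(connectome[0][1], 2), round(connectome[0][2], 2))
```

The same matrix shape holds for anatomical connectomes; only the meaning of the entries changes (tract density instead of signal correlation).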
We have way too many neurons in the human brain, an enormity that makes this enterprise impossible, at least with the technology we have nowadays. But even with the lower ambition of modeling the connectome in an approximated way, we now have some advantages with respect to the initial connectionist models by cognitive scientists in the previous century.
One of these advantages is the fact that we now have many mathematical and conceptual advances in the scientific study of complex networks: first of all, graph theory.
Moreover, we also have the development of sophisticated technologies for measuring nervous systems that were not available before: genetics, optical microscopy, multi-electrode recording, histological gene expression, etc.
Scholtens work?
Scholtens has significantly contributed to the understanding of the connectome by exploring how the brain’s structural connectivity underpins cognitive functions and how alterations in these networks can lead to neurological disorders
The connectome of C. elegans?
The connectome of a little worm, C. elegans, which has a fairly simple nervous system composed of 302 neurons linked by roughly 7,000 connections. Even so, it took a tedious amount of time to map the C. elegans connectome: 12 years.
Sebastian Seung?
A US physicist working in theoretical neuroscience. He uses the term human connectome as a metaphor, because in humans we cannot reach the rich, synaptic level of precision that can be reached with simpler animal models such as C. elegans. In an inspiring TED talk a few years ago, he described the connectome with the metaphor of a riverbed: the connectome is like the riverbed, which guides the water flowing through it but is also shaped by that water. Out of metaphor, neural activity is guided by the hardwired white matter connections, but it can also reshape those connections through its activity and through interaction with environmental experiences. The metaphor illustrates how thinking and neural activity alter the connectome continuously, in a very dynamic way, adding to the difficulty of mapping the human connectome: it is not a static object but an ever-changing one. As we start studying it in a hypothetical living brain, it is going to change very quickly due to experience, neuroplasticity, etc.
Human Connectome Project?
An NIH-funded big project, launched in 2009, whose goal is to build a neural map, a connectome, that would shed light on both anatomical and functional connectivity in the healthy human brain, as well as producing a wealth of data that can be shared among qualified scientists to analyze from different angles and attack different scientific questions. So it is an interesting experiment in open science as well, covering both healthy and pathological conditions (developmental, neurodegenerative, psychiatric). Many other projects have been inspired by the HCP (Human Connectome Project); for instance, ADNI is used to track, also longitudinally, the neurodegenerative changes in the brains of Alzheimer's disease patients.
Blue Brain Project?
Launched in 2005 by Henry Markram, it was a Swiss brain research initiative that aimed to create a digital reconstruction of the mouse brain through large-scale simulations, ambitious enough to promise biologically realistic models within 10 years.
Human Brain Project?
Also by Markram, this project then became a European flagship project worth around a billion euros. The Human Brain Project was a 10-year research project, at least in the initial intentions, based on supercomputer simulations, with the promise of modeling the whole human connectome in just a decade, a promise that was not fulfilled. Even in the first two years it was quite clear that this promise was far from being reached, and so Markram was asked to step down from the project. The project became a European competitive project where a lot of PIs tried to come up with new ideas to develop it further. Nowadays there is an emanation of it, called EBRAINS, that exploits the infrastructure developed inside the Human Brain Project for new interdisciplinary collaborative efforts. So: an interesting, well-funded project that, however, did not fulfill the promise of modeling the whole human connectome in a realistic way within 10 years.
Cajal Blue Brain Project?
Based in Madrid, it combines neuroscientific experimentation and computer simulation, with a strong focus on analysis. It was also a great initiative for the development of new technology.
China Brain Project?
A 15-year project approved by the Chinese National People's Congress in 2016 and still ongoing, focusing on research. It involves not only cognitive neuroscience but also artificial intelligence, brain-machine interface technologies, and neuroethics from the point of view of the values of Chinese culture.
Brain/MINDS?
a project in Japan which focuses mostly on brain diseases from a connectome perspective.
BRAIN Initiative?
In the US, launched by the Obama administration in 2013 with a blend of public and private funding. Many of these big national or global brain projects have data-sharing policies that allow qualified researchers to access their products: the datasets produced and the infrastructure developed. From time to time there are also competitive calls linked to some of these projects, in which any PI can participate and try to gain access to the data.
Neuralink project?
Launched by Elon Musk in 2016, it aims to develop implantable brain-machine interfaces (BMIs) that allow direct communication between the human brain and external devices such as computers.
Graph Theory
The origin of graph theory dates back to 1735 and to Euler, with the Prussian town of Königsberg, nowadays the Russian town of Kaliningrad. A problem of this town was to understand whether it was possible to walk through it along an itinerary that crossed each of the seven bridges over its river exactly once. Euler solved this conundrum by representing the four land masses divided by the river as four nodes, abstracted as four letters, and the connections between them, the bridges, as lowercase letters: the edges, in graph-theory terminology.
This solution purposely ignored the details of the city's geography and focused on what later became known as topology. The topology of a graph defines how the links between the elements of a system are organized: elements are known as nodes and links between nodes as edges. Graph theory as developed nowadays gives us many indices to measure the integrative capacity of a network, or system of networks, and its segregation or modularity: two forces that must reach an efficient balance, integration and segregation.
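Euler's argument can be sketched in a few lines of Python: represent the four land masses as nodes and the seven bridges as edges, then check the degree-parity condition (in a connected graph, a walk crossing every edge exactly once exists only if zero or two nodes have odd degree). This is a minimal sketch; the letter labels are arbitrary.

```python
# Euler's Königsberg argument as a degree-parity check (a sketch).
# Nodes are the four land masses; edges are the seven bridges.
bridges = [
    ("A", "B"), ("A", "B"),  # two bridges between A and B
    ("A", "C"), ("A", "C"),  # two bridges between A and C
    ("A", "D"), ("B", "D"), ("C", "D"),
]

def degrees(edges):
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return deg

def has_euler_walk(edges):
    # A connected graph admits a walk using every edge exactly once
    # iff it has zero or two nodes of odd degree.
    odd = [n for n, d in degrees(edges).items() if d % 2 == 1]
    return len(odd) in (0, 2)

print(degrees(bridges))         # all four land masses have odd degree
print(has_euler_walk(bridges))  # False: no such walk exists
```

Since all four Königsberg land masses have odd degree, the desired walk is impossible, which is exactly Euler's conclusion.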
What’s integration?
Integration is the capacity of different areas of the brain, even distant ones, to talk to each other.
What’s segregation?
It indicates the modularity of more local regions that work in a cluster-like fashion.
Whats the Clustering Coefficient in Graph Theory?
It tells us the probability that two nodes, each directly connected to a third node, are also directly linked to each other: the clustering of connectivity in a very local fashion. It is an index of network segregation.
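As a rough sketch, the local clustering coefficient can be computed on a toy adjacency structure like this (the graph below is invented purely for illustration):

```python
# Local clustering coefficient: among the pairs of neighbours of a node,
# the fraction that are themselves connected (a minimal sketch).
graph = {
    0: {1, 2, 3},
    1: {0, 2},
    2: {0, 1},
    3: {0, 4},
    4: {3},
}

def clustering(g, node):
    nbrs = list(g[node])
    k = len(nbrs)
    if k < 2:
        return 0.0  # undefined for fewer than two neighbours
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in g[nbrs[i]])
    return 2 * links / (k * (k - 1))

print(clustering(graph, 0))  # neighbours 1,2,3: only pair (1,2) linked
print(clustering(graph, 1))  # neighbours 0 and 2 are linked: fully clustered
```

Averaging this quantity over all nodes gives the network-level segregation index mentioned above.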
What’s the Characteristic path length?
It is the minimum number of edges, or connections, required to link any two nodes in a network, averaged over node pairs. You might have many ways to connect two nodes; the path length is the minimum number of connections needed to go from one node to another, possibly distant, node. That is why it measures the integrative capacity of a network: how easily information can flow from one region of the network to another. It is an index of network integration.
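The definition can be sketched with breadth-first search on a toy chain graph (node names here are arbitrary):

```python
from collections import deque

# Characteristic path length: the shortest-path distance (number of edges)
# averaged over all node pairs, computed by breadth-first search (a sketch).
graph = {
    "A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"],
}

def shortest_path_length(g, source, target):
    seen = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == target:
            return seen[node]
        for nbr in g[node]:
            if nbr not in seen:
                seen[nbr] = seen[node] + 1
                queue.append(nbr)
    return float("inf")  # disconnected pair

def characteristic_path_length(g):
    nodes = list(g)
    dists = [shortest_path_length(g, u, v)
             for i, u in enumerate(nodes) for v in nodes[i + 1:]]
    return sum(dists) / len(dists)

print(shortest_path_length(graph, "A", "D"))  # 3 hops along the chain
print(characteristic_path_length(graph))
```

A lower average value means information can flow between any two regions in fewer steps, i.e. higher integration.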
Betweenness Centrality?
It is the number of shortest paths that pass through a node. A node with higher betweenness centrality has more control over the network, because more information has to pass through it. Betweenness centrality is thus a measure of the hub-like activity of a given node and an index of control over the network; it takes into account the importance of the connections.
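On small unweighted graphs, betweenness can be computed with Brandes' algorithm. The sketch below (a toy star graph with invented labels) shows that the centre of a star mediates every shortest path between the leaves:

```python
from collections import deque

# Betweenness centrality via Brandes' algorithm (unweighted, undirected):
# for each node, how many shortest paths between other pairs pass through it.
def betweenness(g):
    bc = {v: 0.0 for v in g}
    for s in g:
        # BFS from s, counting shortest paths (sigma) and predecessors.
        stack, preds = [], {v: [] for v in g}
        sigma = {v: 0 for v in g}; sigma[s] = 1
        dist = {v: -1 for v in g}; dist[s] = 0
        queue = deque([s])
        while queue:
            v = queue.popleft()
            stack.append(v)
            for w in g[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Accumulate dependencies back from the leaves.
        delta = {v: 0.0 for v in g}
        while stack:
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # Each undirected pair was counted from both endpoints.
    return {v: b / 2 for v, b in bc.items()}

# A star graph: the centre mediates every shortest path between the leaves.
star = {"hub": ["a", "b", "c"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
print(betweenness(star))  # hub carries all 3 leaf pairs; leaves carry none
```

The hub scores 3 (one per leaf pair) while the leaves score 0, exactly the "control over information flow" intuition above.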
Nodal degree?
The number of connections of a node to other nodes in the network. It is another measure of how central a node is within the network, again representing hub-like activity; an index of hub-like activity.
What’s a hub?
Hubs are nodes that have a large number of connections (high nodal degree). New nodes connect preferentially to hubs
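The claim that new nodes connect preferentially to hubs can be illustrated with a Barabási–Albert-style growth sketch (network size, seed graph, and random seed are arbitrary demo choices, not from the source):

```python
import random

# Preferential attachment (a Barabási–Albert-style sketch): each new node
# links to an existing node with probability proportional to its degree,
# so early well-connected nodes grow into hubs.
random.seed(1)

def grow_network(n_nodes):
    edges = [(0, 1)]   # seed: two connected nodes
    targets = [0, 1]   # node list repeated in proportion to degree
    for new in range(2, n_nodes):
        partner = random.choice(targets)  # degree-proportional choice
        edges.append((new, partner))
        targets.extend([new, partner])
    return edges

edges = grow_network(200)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
# The maximum degree far exceeds the mean (just under 2): a hub has emerged.
print(max(degree.values()), sum(degree.values()) / len(degree))
```

The resulting degree distribution is heavy-tailed: most nodes keep one or two links while a few accumulate many, which is the signature of hubs.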
What’s modularity?
Modularity is the decomposition of a network into subsets of nodes that are more densely connected with each other than with the rest of the system (the other modules). The clustering coefficient is one index of modularity.
Hierarchical modular organization?
It is the organization of many real-life systems, including not only our brain but also, for instance, airport systems: each topological module of the airline network corresponds more or less to a continent or a political territory. There are only a few big airports, known as hubs, that reach airports on another continent. All the other smaller airports connect to networks on other continents through the hubs within their own geographical unit. Typically, most intermodular communications are mediated by these hubs that link different modules. Interestingly enough, many real-life complex systems share these topological properties of modularity, suggesting that it perhaps represents an efficient, near-universal characteristic of complex networks.
Small-world networks?
Networks, such as airport systems or the brain, that reach an optimal balance between low path length (integration) and high clustering (segregation). The right balance allows an efficient system, while inefficient systems are either regular networks, which have very few modules and only local efficiency, or random networks, which have only global efficiency. Remember Cajal's conservation law, which now becomes a very modern perspective: he hypothesized that in the nervous system there is a force that tries to minimize axonal wiring cost and conduction delay in the transmission of information between neurons, for instance sparing time in transmitting signals from one neuron to another. In modern topological graph-theory terms, this would be the maximization of the integrative capacity of the brain.
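The Watts-Strogatz intuition behind small worlds can be sketched as follows: start from a ring lattice (high clustering but long paths) and add a few random long-range shortcuts; the characteristic path length drops sharply while clustering stays high. Network size and the number of shortcuts below are arbitrary demo choices.

```python
import random
from collections import deque

# Small-world sketch: ring lattice vs. ring lattice plus random shortcuts.
random.seed(2)

def ring_lattice(n, k=2):
    # each node linked to its k nearest neighbours on each side
    g = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            g[i].add((i + d) % n)
            g[(i + d) % n].add(i)
    return g

def avg_clustering(g):
    total = 0.0
    for v, nbrs in g.items():
        nb = list(nbrs)
        k = len(nb)
        links = sum(1 for i in range(k) for j in range(i + 1, k)
                    if nb[j] in g[nb[i]])
        total += 2 * links / (k * (k - 1)) if k > 1 else 0.0
    return total / len(g)

def avg_path_length(g):
    total, pairs = 0, 0
    for s in g:  # BFS from every node
        dist = {s: 0}
        q = deque([s])
        while q:
            v = q.popleft()
            for w in g[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

ring = ring_lattice(60)
shortcut = ring_lattice(60)
for _ in range(10):  # add a few random long-range edges
    a, b = random.sample(range(60), 2)
    shortcut[a].add(b)
    shortcut[b].add(a)

print(avg_clustering(ring), avg_path_length(ring))
print(avg_clustering(shortcut), avg_path_length(shortcut))
```

A handful of shortcuts is enough to cut the average path length while clustering barely moves: integration rises at almost no cost in segregation, which is the small-world balance described above.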
Examples of simple graphs
- Binary, Undirected Graphs: Connections between nodes are either present (1) or absent (0), with no directionality. A basic network where nodes represent brain regions and edges represent the presence of a connection; such graphs can represent functional connectivity. They can be projected onto the brain’s anatomical structure or represented abstractly.
- Binary, Directed Graphs: Similar to binary graphs but with arrows indicating the direction of information flow. Tract tracing (in animal models) can reveal the origin and destination of neural signals, allowing for directed graphs.
- Weighted, Undirected Graphs: Edges have weights representing the strength of connections but lack directionality. Derived from Diffusion Tensor Imaging (DTI), which measures white matter integrity. Fractional anisotropy values indicate how water diffuses along axons, inferring connection strength. Edge thickness reflects connection strength. These graphs can be:
- Anatomically mapped onto the brain.
- Topologically displayed to highlight network modules and connectivity patterns.
- Non-Anatomically Plausible Graphs: Abstract representations emphasizing modular organization rather than spatial arrangement. fMRI-based graphs showing different brain modules in distinct colors, with connector hubs (critical for inter-modular communication) highlighted as squares.
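The simple graph types above can be written down directly as adjacency matrices. The sketch below uses three invented regions and made-up weights purely for illustration:

```python
# The simple graph types as adjacency matrices (toy illustration,
# with hypothetical regions R1-R3, not real connectivity data).
regions = ["R1", "R2", "R3"]

# Binary, undirected: symmetric 0/1 matrix (an edge is present or absent).
binary_undirected = [
    [0, 1, 1],
    [1, 0, 0],
    [1, 0, 0],
]

# Binary, directed: possibly asymmetric 0/1 matrix (row -> column flow,
# as revealed e.g. by tract tracing in animal models).
binary_directed = [
    [0, 1, 0],
    [0, 0, 1],
    [1, 0, 0],
]

# Weighted, undirected: symmetric matrix of connection strengths
# (e.g. DTI-derived estimates of white matter connections).
weighted_undirected = [
    [0.0, 0.8, 0.1],
    [0.8, 0.0, 0.4],
    [0.1, 0.4, 0.0],
]

def is_symmetric(m):
    n = len(m)
    return all(m[i][j] == m[j][i] for i in range(n) for j in range(n))

print(is_symmetric(binary_undirected))   # True: no directionality
print(is_symmetric(binary_directed))     # False: direction matters
print(is_symmetric(weighted_undirected)) # True: weights, no direction
```

Symmetry of the matrix is exactly what distinguishes undirected from directed graphs, and non-binary entries are what distinguishes weighted from binary ones.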
Advanced Applications of Graph Theory in Brain Research
Tractography with MRI: Builds weighted, undirected graphs representing structural connectivity using DTI. Weights reflect the density of axonal connections but lack directional information.
Use: Studying brain organization and identifying connectivity patterns.
Clinical Applications (e.g., Alzheimer’s Disease):
1. Comparison Studies: Graph theory measures can compare brain connectivity in Alzheimer’s patients versus healthy controls.
2. Findings: Alzheimer’s patients typically show reduced connectivity in key networks:
- Default Mode Network (DMN): Linked to autobiographical memory and self-referential thought.
- Executive Control Network: Involved in cognitive functions like planning and decision-making.
- Inter-network Connectivity: Decreased communication between major brain networks in patients.
Drawbacks of graph theory in connectome
The robustness and stability of graph theory measures have yet to be fully demonstrated. There are issues in replicating the graph-theory solution for a given data set from one time to another.
Sometimes there are convergences between solutions reached with structural MRI (or structural data in general) and solutions reached with functional connectivity data, but they do not always match, as some papers would claim.
Convergence of data measured at different frequency and spatial scales and with different techniques is another issue. Would the human connectome represented with fMRI connectivity data be similar to the human connectome coming from source data like EEG or MEG? Well, the answer is controversial.
What would be the impact of different analysis pipelines for network generation and analysis? You change a little parameter while generating the graph solution and the solution changes, which is quite worrying, because researchers have many degrees of freedom in analyzing these very complex data with these very complex analytical tools. This too is a matter of concern.
Dimensions in biological networks
Biological networks such as the brain are constrained in three spatial dimensions and fluctuate dynamically over time, the fourth dimension: how activation in different brain regions, at different spatial locations, changes over time. For brain network analysis, the dimensions of space and time must somehow be incorporated and complemented by a fifth dimension that we call topology. Graph topology is the definition of the organization of the links (edges) between the system's elementary units, called nodes.
Are there other important biological elements apart from neurons with their axonal and dendritic connections?
The connectionist/associationist and holistic views are basically concerned with connections in the brain, which mainly happen through synapses and axons. Therefore, these views are mostly interested in the axons and in what supports the efficiency of electrical transmission from the presynaptic neuron down to the axon terminal, where it triggers the chemical signaling known as the synapse.
But don't forget other main characters in this game, such as the oligodendrocytes, a special type of glial cell which is very important because they myelinate: they produce the myelin sheath that insulates axons, increasing the speed and efficiency of action-potential transmission from the presynaptic to the postsynaptic neuron. So oligodendrocytes provide electrical insulation by myelinating axons, which achieves faster and more efficient transmission.
In the periphery we don't have oligodendrocytes to myelinate neurons; there we have Schwann cells, which perform the same role.
Astrocytes are the most abundant type of neuroglial cell. One of their main roles is promoting the myelinating activity of oligodendrocytes through the provision of ATP, the most important energy currency of our body and, of course, brain. They are also involved in other functions: providing nutrients to neurons (they can fuel neurons with glucose during periods of high glucose consumption and glucose shortage, or provide lactate, another source of neural energy for metabolic needs); modulating synaptic exchange through uptake and release of neurotransmitters in the space outside the neurons; and guiding vasomodulation, serving as intermediate cells in the neuronal regulation of blood flow. Speaking of which, if you have fine techniques such as optogenetics that allow you to activate glial cells like astrocytes specifically while measuring fMRI BOLD activity, as the group of Tanaka and colleagues did, you surprisingly find that part of the BOLD response depends not on neuronal activity but on astrocytes. So whenever we see activations coming from fMRI, we should ask ourselves: is the source of this activation only neuron-related, or is it also due to a contribution from the astrocytes?
Tanaka and colleagues
Through optogenetics they found that part of the BOLD response depends not on neuronal activity but on astrocytes. So whenever we see activations coming from fMRI, we should ask ourselves: is the source of this activation only neuron-related, or is it also due to a contribution from the astrocytes?
Theory of primacy of prefrontal cortex?
Based on the allometric view, most evolutionary reconstructions in the past put a lot of emphasis on the differential structural changes of the frontal lobes across primate species. The frontal lobes are very tempting because they are known to be the seat of high-level cognition: executive functions, planning, emotional regulation, cognitive regulation, language production, etc., what makes us human. In other words, according to some hypotheses, it is their disproportional enlargement that used to be considered largely responsible for the uniqueness of human cognitive specialization.
Terence Deacon
The neuroscientist Terence Deacon argues that the emergence of symbolic capacities such as language, planning, emotional regulation, and executive functions is due to the larger prefrontal cortex in human brain evolution.
Study by Hill and colleagues
It shows how much we should expand a rhesus monkey's brain to match a human brain. They concluded that there is a resemblance between what happens in phylogeny, that is evolution, and in ontogeny, that is development.
Semendeferi and colleagues
They challenged the theory of the primacy of the prefrontal cortex size.
An important problem with the earlier studies is that they were based on the analysis of one or two hemispheres of a few primates and other non-primate mammals, usually excluding the apes, our closest relatives in evolution, and on the way they compared the frontal cortices.
Semendeferi applied a normalization of the frontal cortex against the rest of the brain (the whole brain minus the frontal cortex). According to this study by Semendeferi and colleagues, human frontal cortices do not appear disproportionately large in comparison to those of the great apes once you correct for the volume of the rest of the brain. So the theory that prefrontal cortex size relates to better cognitive capacities would be a false myth. These authors suggest that the special cognitive abilities attributed to a frontal advantage in humans might instead be due to differences in individual cortical areas and perhaps to other characteristics, like richer interconnectivity within and outside the frontal cortex, none of which would require a larger overall relative size of the frontal lobe during hominid evolution.
Speaking of connectivity differences, macaque and human connectomes show substantially similar connectivity in most homologous regions, so the two brains seem to be wired quite similarly, at least macroscopically. However, there is some reconfiguration of connectivity during primate evolution, mainly in the connectivity of some parietal and cingulate regions, where preservation across the two species is lacking, showing a sort of rewiring during evolution.
Similarity of grey matter variability in monkey and humans
Gray matter intra-species variability is due to genetic and epigenetic variability among individuals of the same species, on which natural selection then acts, because, as Darwin taught us, natural selection needs variability across individuals. In the human and macaque brain, this intra-species variability tends to occur in the same regions: inter-individual variability within human brains is similar to inter-individual variability within monkey brains. The regions that vary most among humans are those that vary most across monkeys.
There is less variance within each of the two species in two very ancient parts of the brain, the hippocampus and the olfactory system, the so-called archicortex. So at the level of the hippocampus and the piriform gyrus, the olfactory system, variability is very low, both in humans and in rhesus monkeys. But as you move away from these ancient brain centers, the level of variability increases. The neocortex, especially associative cortex, shows an explosion of inter-individual variability. This would help build new levels of complexity, testing from one individual to another new configurations of cognitive capacities, on which evolutionary pressure would then act, selecting the individuals best adapted to a given environment.
Thiebaut de Schotten and Croxson study
Thiebaut de Schotten and Croxson correlated the grey matter variability within species with the evolutionary macaque-to-human expansion of brain regions, that is, how much you need to stretch a macaque’s brain to match the human brain. The two phenomena are positively correlated, apart from the occipital cortex, which the authors explain by the fact that in humans the occipital cortex undergoes stretching and then folds into the interhemispheric fissure; this mismatch is probably due to anatomical constraints, not just functional ones.
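The core of this analysis is a region-by-region correlation between two brain maps. The sketch below is not the authors’ actual pipeline; region names and all values are synthetic, invented purely to show the shape of the computation.

```python
# Hypothetical sketch: correlate, region by region, intra-species grey
# matter variability with macaque-to-human expansion. All values and
# region labels are synthetic illustrations, not real data.
import numpy as np

regions = ["hippocampus", "piriform", "parietal_assoc",
           "prefrontal", "temporal_assoc"]
variability = np.array([0.10, 0.12, 0.65, 0.80, 0.70])  # synthetic variability
expansion   = np.array([1.1,  1.2,  3.0,  3.8,  3.2])   # synthetic expansion factors

r = np.corrcoef(variability, expansion)[0, 1]  # Pearson correlation
print(f"r = {r:.2f}")
```

In the made-up numbers, ancient regions (hippocampus, piriform) have low variability and low expansion while associative cortex has high values of both, which yields a strong positive correlation, mirroring the qualitative finding described above.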
Smaers and colleagues study
investigated whether the ratio between white matter and gray matter volume in the (left) prefrontal cortex changed across species: monkeys, apes, and humans. Apes and monkeys lie on the regression line, with a substantial balance between gray and white matter volume in the prefrontal cortex, while humans deviate from this line: they have more white matter than gray matter in the prefrontal cortex. This suggests a stronger hierarchical interaction among prefrontal, associative areas, or, in graph-theory or holistic terms, a higher level of integration within the prefrontal cortex of humans.
Bartolomeo and colleagues
When techniques such as axonal tract tracing are used in monkeys (a very invasive way to track white-matter tracts in the monkey brain) and non-invasive neuroimaging techniques like DTI are used in humans, you can gather evidence about brain connectivity in both species and somehow compare it, which is what Bartolomeo and colleagues did for different white-matter tracts in both monkey and human brains.
They found commonalities and differences. The cingulum and the uncinate fasciculus resemble each other in the two species in terms of their pattern of connectivity and overall structure. Of course, in these comparisons the rhesus monkey’s brain has been enlarged in areas where it is in reality much smaller than the human brain.
The cingulum, for instance, is an important white-matter tract that interconnects regions belonging to the default mode network, the network that gets activated when you are asked to think about nothing inside the MRI scanner. Actually, your brain never thinks about nothing: autobiographical memories, details in space and time, awareness, mind-wandering, etc., are all implemented in default mode network areas, connected through the cingulum. Probably rhesus monkeys do the same.
Another similarity is in the uncinate fasciculus, which connects the ventromedial prefrontal cortex and the prefrontal pole with the anterior temporal lobes and the two amygdalae, the almond-shaped nuclei that are very important for emotional processing, such as fear processing. This suggests the function of this important tract: to regulate emotions, to keep emotions in check. Cutting this tract, you might end up with psychopathic, antisocial behavior.
Differences between human and monkey white matter tracts
Some branches of the arcuate fasciculus are important for connecting the different language regions of the human brain. Only some portions of this fasciculus are similar in the two species, not in terms of its curvilinear shape, because it forms an arc only in humans and not in monkeys, but in terms of the connectivity pattern. The other branches are typically human, with a much richer number of fibers, connecting many more regions than in the rhesus monkey.
Another tract that changes between rhesus monkeys and humans is the inferior fronto-occipital fasciculus, connecting very distant regions of the frontal lobe with regions in the occipital lobe; it has never been found or described in the rhesus monkey. In the monkey there is a tract that connects the frontal lobes with the mid-posterior temporal lobe, but it stops there: it does not reach the occipital lobes. Probably this tract became much longer in humans because the visual cortex underwent many changes in terms of gyrification, extending inside the posterior interhemispheric fissure and thereby stretching and elongating this fronto-occipital fasciculus.
DTI images from the Croxson study serve as a good proxy for the evolution and variability of white matter across the two species. These images make clear that what is variable within humans are, for instance, the associative regions of the neocortex and especially short-range, U-shaped white-matter tracts, associative tracts that connect nearby regions for better integration, which you do not find much in the macaque brain. The occipital lobe is also a hotspot of variability, because it underwent the anatomical pressure and infolding within the interhemispheric fissure. What does not change much, instead, is the archicortex: ancient subcortical regions (e.g. the hippocampus) that do not vary much either across humans (within species) or between humans and macaques (between species, in evolution).
Synaptogenesis
Acquiring new knowledge and learning new skills implies a change in the efficiency of neural transmission. Neurons communicate with each other by exchanging signals by means of neurotransmitters released into the synaptic cleft. Better neural communication means that electric signals travel more efficiently and more quickly, improving connections. This can be achieved through many mechanisms. One of them is the generation of new synapses, new connections between one neuron and another. This process of formation of new synapses is known as synaptogenesis; it starts very early, in the prenatal period, as soon as neurons start to appear, but of course continues up to the adult brain. So it is a structural neuroplastic change that accompanies us throughout the whole life span. At the level of synaptogenesis, too, we can talk about the nature-versus-nurture debate: which of the two counts more in the formation of new synapses. Synaptogenesis makes synapses more numerous and makes communication between already existing neurons more efficient.
Neurogenesis
Another mechanism of neuroplasticity, known as neurogenesis, refers to the formation and growth of completely new neurons in the brain. It is much more typical of the early stages of development, although in the adult brain one region still shows neurogenesis and is related to learning new memories: the hippocampus. The hippocampus shows neurogenesis throughout the whole life span; it is the one exception to the rule that neurogenesis stops at early developmental stages. There is a dysregulation of hippocampal neurogenesis in some neurodegenerative diseases, for instance Alzheimer’s and Parkinson’s disease, although the data available from human brains are limited, so we rely more on animal models of these diseases.
Myelination
A third structural neuroplasticity factor in human development is myelination, that is, the formation of myelin around the axons of our brain. Myelin is a lipidic, fatty substance that surrounds neuronal axons as a shield, insulating the axons and increasing the speed and frequency at which the electric impulses called action potentials travel along them. In other words, myelin insulation makes neurons more efficient in terms of information transfer and communication with other neurons. Most myelination, of course, occurs within the first few years of life, but it continues in part throughout the whole lifetime.
The first circuits to get myelinated are mostly motor and sensory circuits. The primary regions get myelinated first, then the limbic system and the inter-hemispheric fiber bundles, and finally the frontal, parietal, and temporal associative areas, which are the latest to develop both ontogenetically and phylogenetically and, being the last to develop, are also the last to be wired and myelinated.
The phases of development
a productive phase: the brain contains far more synapses than the adult brain; the peak of synaptogenesis, for instance, occurs at around one year of age
a regression phase: a slower but constant phase of pruning, in which many exuberant synapses are eliminated until they reach the density typical of the adult brain. The timing is not always the same; it changes depending on the brain area. The prefrontal cortex, for instance, is much later in terms of these phenomena: a lot of pruning occurs between five and seven years in the dorsolateral prefrontal cortex, while most of the pruning in the visual cortex has already occurred in the perinatal period. How do these mechanisms occur? According to the Hebbian rule, if connected neurons do not show co-activation for too long, that is probably one of the triggering events. Neurotrophic factors and the related phenomenon of neural death would also play a role.
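The Hebbian account of pruning can be turned into a toy simulation: synapses whose pre- and post-synaptic neurons rarely fire together get eliminated. The activity patterns, threshold, and random seed below are arbitrary illustrations, not a biological model.

```python
# Toy sketch of Hebbian pruning: synapses between neurons that rarely
# co-activate are cut. Everything here is an arbitrary illustration.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_timesteps = 6, 200
activity = rng.random((n_neurons, n_timesteps)) > 0.5  # binary firing patterns

# make neurons 0 and 1 fire together perfectly ("cells that fire together...")
activity[1] = activity[0]

# co-activation rate for every pair of neurons
coactivation = (activity[:, None, :] & activity[None, :, :]).mean(axis=2)

threshold = 0.3                      # arbitrary survival threshold
synapses = coactivation > threshold  # "...wire together"; the rest are pruned
np.fill_diagonal(synapses, False)    # no self-synapses

print("surviving synapse pairs:", int(synapses.sum() // 2))
```

Independent neurons firing half the time co-activate on roughly a quarter of timesteps, below the threshold, so most of their synapses are pruned, while the perfectly correlated pair (neurons 0 and 1) co-activates about half the time and survives.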
Pre- and post-natal brain development occurs similarly in humans and other mammals, with one key difference: speed. The human brain develops at a much slower pace than other mammals’ brains, extending across a much longer temporal window (18-25 years). Development consists of an initial hyperproduction followed by pruning, as we have seen. This prolonged neurodevelopment has some advantages:
• Cognitive flexibility: extended plasticity supports adaptability, problem solving, creativity.
• Complex learning: provides time to acquire skills like language, culture and technology
• Social development: fosters strong caregiver bonds and sophisticated social skills
• Emotional regulation: enables better self control and decision making
• Cultural advancement: facilitates mastering tools, innovation and passing knowledge
• Experience integration: allows diverse life experiences to shape the brain for nuanced thinking
• Recovery opportunity: extended plasticity and recovery from early adversities
Pruning and apoptosis
The neuroplasticity of the brain not only builds up new structures, but it also destroys some other structures that are not useful at all. These regressive mechanisms of neuroplasticity in the brain during development are pruning and apoptosis.
So after synaptogenesis, during normal development, mostly until the teenage years but not only, we can also observe regressive mechanisms such as pruning, that is, cutting out non-relevant connections, as a farmer would prune plants to get more efficient growth, and apoptosis, programmed neuronal death.
What’s a map?
A point-for-point correspondence between a portion of a brain area and a specific area external to it. As a metaphor we can use geographical maps: a geographical map is a representation of something external to it. What is the external world represented inside our brain? It could be, for instance, a body portion, either from a motor or a sensory point of view; in this case we talk about a somatotopic map, where the body is represented inside the brain. Or it could be the external world itself, thinking for instance of the auditory and visual systems. The visual system maps the external world, already from the retina, in an ordered but magnified way. This is called a retinotopic map, and later also an eccentricity map, from the fovea to the periphery, represented in an inverted way in the visual cortex.
What’s a gradient?
A gradient in the brain is a vector along which a cortical characteristic gradually and continuously and smoothly changes, in a spatially continuous order.
Activity-dependent sorting mechanisms cause neural selectivity to change smoothly in a brain map along the cortical surface; the representation of body areas in the brain is a continuum rather than discrete, as initially described by the neurosurgeon Penfield for the sensorimotor cortex. It is much smoother, there is a lot of overlap, and the overlap is also dynamic. If one unfortunately breaks a hand or an arm, the whole map reorganizes, taking into account that the arm is immobilized, even if only transiently. The same happens in animals that undergo deafferentation of fibers coming from peripheral parts of the body: other parts of the somatotopic representation literally conquer the territory left deafferented.
What’s a hierarchy?
A hierarchy, strictly speaking, implies a control system from higher-order to lower-order regions. Some regions of the brain at the top of the hierarchy would, as a cascade, somehow control and rule the regions below them. There are regions that receive inputs from regions downstream, but just receiving input from downstream does not per se imply a hierarchy if there is no feedback control over the regions below; such regions are just using the products of some operations but are not going back to modulate the activity of the regions from which they received the input. So you can talk about a hierarchy only if feedback control occurs.
What’s a somatotopic map?
A somatotopic map is a point-for-point correspondence between an area of the body and a specific point in the central nervous system. Primary motor and sensory regions possess somatotopic maps of this type. We talk about maps because they are ordered in space, although in a distorted way; that is why we have a homunculus: the hands, the tips of the fingers, and the tongue are magnified with respect to other districts of the body, which have fewer receptors, from the somatosensory point of view, or fewer fibers, from the motor point of view.
What’s an eccentricity map?
In the eccentricity maps of the visual cortex, what is magnified is what is most important for our attention. When we observe a scene, we try to put the center of interest of the outside world onto our fovea, the most sensitive part of the retina, which contains many cones. Cones have characteristics that are then reverberated in the parvocellular stream and are very useful for understanding the spatial details of the objects we observe, for visual acuity, the capacity to distinguish between two spots that are close to each other. This capacity is not possessed by the periphery of the retina, outside the fovea, where instead we have an abundance of another type of photoreceptor, the rods, which are very sensitive to light. Indeed, if we want to detect a very weak light, it is better to do so with the periphery of our gaze rather than with the fovea: we do not make out the details, but at least we perceive the presence of light.
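The foveal magnification of the eccentricity map can be expressed numerically. A commonly cited estimate for human V1 (the Horton and Hoyt figures) puts cortical magnification at roughly 17.3/(e + 0.75) millimeters of cortex per degree of visual angle at eccentricity e; treat the constants below as approximate, not as the only accepted values.

```python
# Hedged illustration of cortical magnification in an eccentricity map:
# the amount of V1 cortex per degree of visual field falls off sharply
# with eccentricity. Constants follow the commonly cited Horton & Hoyt
# approximation for human V1 and should be taken as approximate.

def cortical_magnification(eccentricity_deg):
    """Approximate mm of V1 cortex per degree of visual angle."""
    return 17.3 / (eccentricity_deg + 0.75)

for e in [0.0, 1.0, 5.0, 20.0]:
    print(f"{e:5.1f} deg -> {cortical_magnification(e):5.2f} mm/deg")
```

The fovea (0 degrees) gets over twenty times more cortex per degree than the far periphery, which is exactly the magnified, distorted mapping the flashcard describes.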
Example of hierarchical organization: the two visual streams
A specific example of maps and hierarchical organization can be found in the visual system, which is divided into two main pathways or streams. Here the hierarchy implies that each stage has to manage visual information coming from the previous stage, and it has also been demonstrated that, through feedback connectivity, higher-level regions in the visual system can somehow control lower-level ones.
• The ventral visual stream shows a progressive coding for the identity of visual objects, first decomposed into their smallest possible components: segments of different orientations, colors, spatial frequencies. It runs from V1, the primary visual cortex, to the temporal lobe, and derives from parvocellular neurons.
• The dorsal visual stream is responsible for processing spatial relationships, motion, and the coordination of action. It goes from the visual cortex upwards towards parietal structures. It codes for spatial information rather than identity, since it originates from magnocellular neurons: the size of the object, its spatial location, geometrical organization, movement, trajectories, depth, etc.
So the hierarchical organization:
Early stages: basic features
Intermediate stages: simple patterns
Higher stages: complex object recognition/ spatial navigation and action coordination
Catani study
With both post-mortem and in vivo tools for observing white-matter structures, including histology and 3D dissection of the occipital lobe, the typical U-shaped connections that allow the progressive travel of information from one visual region to the next become visible. In this study they performed an in vivo tractography reconstruction of U-shaped fibers along the inferior occipito-temporal cortex, flanking the inferior longitudinal fasciculus.
Which deficits are due to lesions in the dorsal stream?
Simultanagnosia, optic ataxia, and oculomotor apraxia (which together give Balint syndrome), and akinetopsia
Which deficits are due to lesions in the ventral stream?
Pure Alexia, Achromatopsia, Visual Agnosia, Acquired Prosopagnosia
Which deficits are due to lesions in the temporal pole?
Semantic dementia, other semantic deficits
Simultanagnosia
A problem linked to the dorsal visual stream: the inability to perceive more than a single object at a time, to perceive objects in the visual field as a whole. For instance, you see a tree but not the rest of the forest around that tree.
It was described by Wolpert in 1924, but even earlier, in 1909, by Balint, who used the term “psychic paralysis of gaze” for the same type of syndrome.
Typically, simultanagnosia results from a bilateral parieto-occipital lesion, more dorsal with respect to the V1 location.
A classic test is the kitchen scene, used to determine whether a patient has simultanagnosia, that is, whether he or she can see more than one thing at a time happening within the same scene. A patient with simultanagnosia would, for instance, describe what is happening in a single area of the kitchen, totally neglecting the other parts of the scene.
Another classical experimental paradigm in psychology is the Navon test, where participants are asked to focus either on the global shape or on the local one (e.g., a big H made of many little letter Ts). There can also be compatibility between the two levels (a big T made of many little Ts).
Optic Ataxia
A manifestation of problems in the dorsal visual stream: the inability to voluntarily move the hand to reach a specific target object presented in the visual modality. Sometimes it is more accentuated in a crossed fashion, e.g., the right hand moving towards the left hemifield or vice versa. It is a visuo-motor coordination disorder, consequent to unilateral or, in the worst-case scenario, bilateral lesions of the posterior parietal cortex in the dorsal visual stream, but it can also occur due to disruption of connectivity within the dorsal visual stream or between the dorsal visual stream and premotor/motor regions in the frontal lobes.
Oculomotor Apraxia
The difficulty in voluntarily moving the gaze to where fixation is needed, that is, the inability to voluntarily perform purposeful eye movements, for instance when asked to follow a moving object in space.
What is the Balint syndrome?
The co-occurrence of simultanagnosia, optic ataxia, and oculomotor apraxia.
Akinetopsia
A possible, although very rare, problem when the dorsal visual stream is lesioned. It is rare because damage to the occipital lobe usually disturbs more than one visual function, so a pure akinetopsic patient is quite hard to find; when it occurs, lesions are located in the posterior zone of the visual cortex where movements are coded, in area MT/V5, the middle temporal area. The main characteristic of this condition is that patients are unable to perceive moving objects as such: they perceive static snapshots of the moving object.
Achromatopsia
An example of lesions causing problems in the ventral visual stream: blindness to colors, seeing the external world basically in black and white. The cause would be, for instance, damage to an area of the ventral visual stream called V4, which codes for colors, receiving a lot of input from the blobs in V1.
Paolo Bartolomeo and colleagues in Paris reported two patients, Madame R and Madame D. In Madame R, area V4 was unilaterally disconnected from the visual input because of a lesion, so she developed contralesional hemiachromatopsia, which is quite intriguing: she could see half of the world very well, with all its colors, and the other half in black and white. Madame D instead had a bilateral lesion in the ventral visual stream, so she developed a full-field achromatopsia: her whole visual field was perceived in black and white.
Pure Alexia
An example comes from an epileptic patient who had to be operated on because the epilepsy was pharmacologically intractable. Given the location of the epileptogenic focus, the neurosurgeons were concerned about possible post-operative problems with the patient’s reading capabilities. So they asked for help from neuroscientists, who scanned the patient with fMRI before the surgical operation, asking her to read inside the scanner to see which areas of the ventral visual stream would light up during the reading task. The areas were the typical ones, such as the visual word form area, laterally, especially on the left side. The region of interest was marked during neuronavigation inside the operating room; the neurosurgeon knew he had to avoid and bypass that region when reaching the epileptogenic site, and that is what he did, but the patient developed pure alexia anyway: she was no longer able to read after the operation. This happened because, as a tract reconstruction from the visual word form area to more posterior visual regions demonstrated, the neurosurgeon had cut some white-matter tracts connecting these two regions. The visual word form area no longer received input from downstream visual regions and, because of that, could no longer recognize letters.
Visual Agnosia
The patient cannot recognize an object presented in the visual modality. A typical dissociation is with the tactile modality: if you present the same object in the tactile modality and the patient is able to touch and explore it, the patient names the object correctly (provided, of course, the object is known to the patient); similarly for the haptic modality. Why? Because in these patients the ventral visual pathway, for instance the inferior longitudinal fasciculus or the areas it connects in the ventral visual stream, is typically damaged, while the somatosensory regions coding for tactile and haptic information and the semantic center in the temporal pole are intact and connected, so the patient can name the object through this alternative modality and route. They are connected, for instance, through portions of white-matter tracts such as the third branch of the superior longitudinal fasciculus.
Acquired Prosopagnosia
The inability to identify familiar faces following brain damage to key regions coding for the identity of faces. A dissociation with respect to other, non-face objects is possible: in a pure case of acquired prosopagnosia, perception of non-face objects is typically intact, so the patient would recognize any type of object except familiar faces. The lesion is variable, often bilateral; when unilateral, it is more often on the right side. An elective site is the fusiform face area, but other sites are also important for coding the identity of faces. There is some preference in terms of hemispheric asymmetries for face recognition: the right side is slightly superior in performance on these types of tasks with respect to the left side. Not all-or-none, but a bit better.
Posteriorly to the fusiform face area there is an area important for coding locations in general, and posteriorly to it an area important for coding places, scenarios, landscapes, etc.
So basically we are in the ventral visual pathway, where there is a division of labor even in these later, higher-level centers, not only in V1; here, though, the coding is not for very simple features like orientation or color, as in V1, but for entire semantic categories: faces, buildings, etc. By the way, the name of this place-selective region is the parahippocampal place area.
Study by Bentin
He described very well an event-related potential (ERP) component known as the N170, so called because it is a negativity that typically shows up about 170 milliseconds after the onset of a face in the visual scene. In this particular study, he compared the ERP evoked by a normal, canonical upright face with that evoked by an inverted face. When the inverted face was shown, there was a delay of at least 10 milliseconds, which is a lot for an ERP of this latency, and an increase in processing load, that is, an increase in the amplitude of the component. Another thing to notice: odd-numbered scalp electrodes are on the left side and even-numbered ones on the right, so the hemispheric asymmetry is clearly visible: this component is much more pronounced on the right side than on the left. The neural sources of the N170 are unclear, with indications that they could involve not only the fusiform face area but also the occipital face area and the superior temporal sulcus. One could try to reconstruct the sources through EEG source analysis, because from ERPs recorded at the scalp alone it is impossible to know which generator inside the brain produced the recorded fluctuation; source analysis, however, has its own limits. Alternatively, one can study brain-lesioned patients, that is, patients with selective lesions in one or more of the putative generator areas of this face-specific ERP component, the N170. That is what the authors of another study, Dalrymple and colleagues, did. Some patients had lesions in both the occipital face area and the fusiform face area; other patients had lesions in only one of these areas, so there was a combination of lesions across different regions in these patients. In the figures, a light color means that an area was not lesioned and showed activation in fMRI.
A dark color means that the module was not activated and was probably destroyed by a lesion. One patient, despite a lesion in the fusiform face area, showed the N170 effect for faces versus non-faces in the ERPs. So this single case shows that the fusiform face area is sometimes not necessary to produce a face-selective component, despite what previous studies had claimed. To obtain a nullification of the N170 effect, you need a lesion in both the fusiform face area and the occipital face area.
It’s not a very clean study in terms of the availability of lesions. For instance, none of the five patients had lesions in the superior temporal sulcus, so we don’t know the role of that sulcus, because we never observed the effect of its damage. Nor do we know exactly the contribution of the occipital face area, because it was never lesioned alone in any patient in a pure, clean way: in the two patients in which it was lesioned, it was lesioned together with the fusiform face area.
Working with lesion patients is a dirty job in the sense that you have to take whatever becomes available. These are very rare cases and you get what nature gives you; it would be unethical to selectively lesion one region or another to see whether, with such experimental manipulations, you could dissociate their respective roles, the interaction between their activations, and so on.
So, to produce a deficit in the N170 ERP component, this study showed that brain damage was needed in at least two areas of the face circuit. But of course there was no complete coverage of lesions across the different nodes of the face circuit, so the study is not very conclusive. It did suggest, however, that an intact fusiform face area is not sufficient to produce an N170.
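The ERP logic described above, averaging EEG epochs time-locked to face onset so that the N170 emerges from the noise, can be sketched on synthetic data. All signal parameters here (sampling rate, latencies, amplitudes, noise level) are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                                  # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.4, 1 / fs)          # epoch: -100 to +400 ms around face onset

def make_epochs(n_trials, latency_s, amplitude_uv):
    """Synthetic single-trial EEG: a Gaussian negativity buried in noise."""
    component = -amplitude_uv * np.exp(-((t - latency_s) ** 2) / (2 * 0.02 ** 2))
    return component + rng.normal(0.0, 2.0, size=(n_trials, t.size))

upright = make_epochs(100, 0.170, 4.0)    # N170 at ~170 ms for upright faces
inverted = make_epochs(100, 0.180, 6.0)   # delayed and larger for inverted faces

# The ERP is the average across trials: noise cancels, the component survives.
erps = {"upright": upright.mean(axis=0), "inverted": inverted.mean(axis=0)}

# Measure peak latency and amplitude in a 130-230 ms search window.
win = (t >= 0.13) & (t <= 0.23)
results = {}
for name, erp in erps.items():
    i = np.argmin(erp[win])               # N170 is a negativity -> take the minimum
    results[name] = (float(t[win][i]), float(erp[win][i]))
    print(f"{name}: peak at {results[name][0] * 1000:.0f} ms, {results[name][1]:.1f} uV")
```

Running this reproduces the qualitative pattern from the study: the inverted-face ERP peaks later and with a larger (more negative) amplitude than the upright-face ERP.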
Study by Cohen
The right fusiform face area seems critical for face recognition, and this study is a bit more robust in the sense that 44 patients were analyzed: a lot of patients with prosopagnosia as their distinctive clinical condition, whose disconnectome was reconstructed starting from their lesion. Each patient had a focal lesion; you project that focal lesion onto an atlas of the main white matter tracts of the human brain, and you estimate which part of the white matter connectivity would be destroyed, disconnected, in that patient. Then you build a collective map by superimposing the disconnectome of each patient on the disconnectomes of all the others. They observed that all 44 lesion locations were always connected through negative correlations with frontal regions, especially the left frontal cortex. And if the fusiform face area was not directly lesioned, it was at least positively connected with the lesioned area of the brain. So this area was either lesioned itself or disconnected from another lesioned area in order to produce prosopagnosia. This is an interesting associative perspective on prosopagnosia: for all lesions causing prosopagnosia (n = 44), lesion locations demonstrated positive (right fusiform gyrus) and negative (left frontal cortex) correlations to a specific set of locations.
So they basically did lesion network mapping: regions consistently damaged in patients with prosopagnosia were superimposed on a map based on healthy brains to visualize how such lesions would affect the networks they belong to -> the FFA is always impaired, especially because of connectivity damage. All lesions were also negatively correlated with the lPFC -> prosopagnosia comes from lesions in a network that includes the right FFA and the left PFC.
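The superimposition logic of lesion network mapping can be sketched in miniature: each patient's lesion is projected onto a normative connectome, and we ask which region is connected (positively or negatively) to every lesion location. The region names, connectivity values, and patient lesions below are invented for illustration, not the study's data:

```python
import numpy as np

regions = ["rFFA", "OFA", "STS", "lPFC", "V1", "M1"]
idx = {r: i for i, r in enumerate(regions)}

# Toy normative functional connectivity matrix (symmetric):
# positive = correlated activity, negative = anticorrelated.
C = np.array([
    # rFFA  OFA   STS   lPFC  V1    M1
    [ 1.0,  0.6,  0.5, -0.4,  0.3,  0.0],  # rFFA
    [ 0.6,  1.0,  0.4, -0.3,  0.5,  0.0],  # OFA
    [ 0.5,  0.4,  1.0, -0.2,  0.2,  0.1],  # STS
    [-0.4, -0.3, -0.2,  1.0, -0.1,  0.2],  # lPFC
    [ 0.3,  0.5,  0.2, -0.1,  1.0,  0.1],  # V1
    [ 0.0,  0.0,  0.1,  0.2,  0.1,  1.0],  # M1
])

# Each "patient" is a set of lesioned regions; none lesions rFFA directly.
patients = [["OFA"], ["STS"], ["OFA", "V1"]]

def network_map(lesion):
    """Strongest connection of every region to any lesioned location."""
    rows = [idx[r] for r in lesion]
    return C[rows].max(axis=0)

maps = np.array([network_map(p) for p in patients])
lesioned = sum(patients, [])

# Regions positively connected to ALL lesions (excluding lesioned ones),
# and regions negatively connected to all lesions.
pos = [r for r in regions if (maps[:, idx[r]] > 0).all() and r not in lesioned]
neg = [r for r in regions if (maps[:, idx[r]] < 0).all()]
print("always positively connected:", pos)
print("always negatively connected:", neg)
```

With these toy values, rFFA is always positively connected to the lesion (mirroring "lesioned or disconnected"), and lPFC is always negatively correlated, the qualitative pattern the study reports.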
Semantic Dementia
Cortical atrophy in the anterior temporal lobe, which leads to damage to the semantic system (e.g. the meaning of words is lost). Different from Alzheimer's disease, where cortical thinning affects other regions and is more related to memory.
Study by Adolphs and colleagues (with Damasio)
A rare case of a patient suffering from Urbach–Wiethe disease, a very rare recessive genetic disorder with many problems, including skin infections and fever; more than half of the patients with this disease also show bilateral, very symmetric calcification of the medial temporal lobes, including the amygdala. The amygdala becomes calcified and no longer works properly. The result is bilateral amygdala damage, which manifests as an insensitivity to the intensity of fear expressed by faces: a very selective deficit. The authors confirmed this with the double dissociation between the recognition of facial expressions and the recognition of the identity of a face, which is coded in anatomically separate regions (the fusiform face area and others); the amygdala is required to link a visual representation of facial expressions on the one hand with a representation that constitutes the concept of fear on the other. So when the patient was asked to draw the different primary emotions, she was able to draw many types of emotions, but fearful faces were not easy for her to represent and draw.
What’s blind sight?
A dissociation between the ventral and the dorsal visual streams: patients who claim not to see anything in the visual field, either the whole visual field or part of it (as in hemianopia), when forced to respond are nonetheless quite able to act on objects that appear in the visual field but that they cannot consciously perceive. They could, for instance, detect above chance whether a light was presented in the visual field or not, yet they did not know what was presented; if you ask them, they say "I saw nothing, but if I had to guess, I would guess that there was something there."
Goodale and Milner
In action blindsight, using a pointing paradigm, a patient with left hemianopia, the incapacity to consciously see what was presented visually on the left side, showed clear above-chance manual localization of unseen targets: "Where was the target shown?" "I don't know." "Well, try to point your hand towards that target," and the patient was able to do so, not perfectly, but above chance. What does this mean? It means that the dorsal visual stream does not necessarily need input from V1, which was lesioned in that patient, in order to process information from the visual world, information it is tuned to process, for instance the location of an object in space. So this is an intriguing dissociation where a patient does not see an object but can act on that object without seeing it: a dissociation between a lesioned ventral visual stream and an intact dorsal visual stream. How is that possible? Maybe there is some residual signal within the noise in a visual cortex that was not totally destroyed, or maybe there is input from other subcortical nuclei coding for visual information and projecting directly to the dorsal visual stream, bypassing the ventral visual stream at V1; either explanation is possible for this pattern.
Labeled Line model of working memory in prefrontal cortex
Labeled line model
• What: object working memory (inferior, ventrolateral prefrontal cortex)
• Where: spatial working memory (dorsal, caudal sulcus principalis)
Supported by Goldman-Rakic's studies on monkeys
Courtney study
fMRI study, where the task administered to participants was to remember the location or, in other blocks, the identity of three faces in a memory set. In the spatial task the participant indicated with a left or right button press whether the test location was the same as one of the three locations shown and kept in working memory; in the face memory task the participant indicated whether the test face (the probe face) had the same identity as one of the three faces shown before and then removed from the display, regardless of location. So one task concerned identity, the other location. The elegance of this study is that the sensory-perceptual stimulus is exactly the same: faces at different spatial locations with different identities. What changes is the task that participants are asked to perform on the same stimuli, so the low-level features are matched between the two conditions. What happens in this task? When the block concerned the identity of the faces, you see preferentially more ventral activations along the ventrolateral prefrontal cortex; when instead the location of the face was the key feature to keep in mind, you see more extensive activation in the dorsal parts of the lateral wall of the prefrontal cortex, in line with Goldman-Rakic's labeled line model.
D'Esposito's take on the labeled line model
D'Esposito presented a review of working memory studies reporting fMRI results that show spatial and non-spatial hotspots both dorsally and ventrally. There is no clear-cut distinction, at least in terms of fMRI activations, in this review paper. What is more easily distinguishable is whether the task required just maintaining information in short-term memory, or whether it also involved some manipulation of that content in working memory. If manipulation was also involved, activation was preferentially seen in the dorsal prefrontal cortex, a piece of evidence that then also inspired Petrides and other colleagues.
Petrides
fMRI study. In a control condition participants just had to state whether they had already seen an abstract shape or not, a simple old/new recognition memory task: did you see it or didn't you. In the monitoring condition, instead, participants saw pairs of abstract paintings and were required to select one of them and touch it. They were told that some of the pairs of stimuli would come back again, and that in such cases they would have to select the stimulus that they had not touched previously. The authors concluded that the ventral prefrontal cortex represents just the content: the temporary storage of information in short-term memory. But if you have to work with that content to perform a more complex task, such as this monitoring task, a more dorsal, dorsolateral prefrontal activation is required: working with memory.
Fuster’s model
Interpreted executive functions as our capacity to mediate cross-temporal contingencies between events, stimuli, and responses, in an increasingly complex and supra-ordinate way, from very simple actions to more abstract rules and conceptual knowledge.
The lowest level consists of primary sensory and motor areas that process raw inputs and control basic motor functions; then secondary sensory and motor areas that process more complex information; then association cortices for high-level integration; and finally the prefrontal cortex at the top of the hierarchy, with working memory, decision making, planning and goal setting, and attentional control.
He established this through monkey studies, with a delayed response task (the monkey sees where the food is, then the scene is covered by a screen; when the scene is uncovered the monkey has to remember where the food is), and with delayed matching-to-sample and non-matching-to-sample tasks.
Miller
According to Miller, a major function of the prefrontal cortex is to extract information about regularities across experiences and so to impart rules that can be used to guide thought and action. The prefrontal cortex, according to Miller, learns the rules of the game. It provides an infrastructure, computational capacities used for synthesizing a great amount of information, the statistical structure of our environment, and for coming up with the rules of this apparently random or chaotic environment. This is critical for complex forms of behavior to emerge in primates.
What is the evidence in favor of this view? From the literature, most lateral prefrontal cortex neurons are able to reflect the association between a cue and a reward, for instance reward expectancy: the integration between stimuli, the actions performed on those stimuli, and the consequences of those actions, either positive (reward) or negative (punishment). Neurons in the lateral prefrontal cortex code for these rewards. The activity of a good part of lateral prefrontal neurons reflects associations between objects and actions, for instance motor actions such as the saccades instructed in some gaze tasks. Prefrontal cortex neurons can also reflect learned associations between different modalities, visual and auditory stimuli, as in Fuster's studies. Hence the view of the prefrontal cortex as a supramodal cortex aimed at integrating different sources of information with each other.
Badre
There is a very nice review by David Badre that tries to summarize the different models developed to explain whether there is an increase in abstraction and a hierarchical organization as we move along the y-axis of the frontal lobe, from M1 to the frontopolar cortex.
According to David Badre, neural evidence is needed to establish whether a hierarchical representation of rules in the lateral prefrontal cortex is biologically plausible and actually present, or whether this hierarchical representation does not require anatomically distinct regions along that axis.
What are the possible organization principles that have been put forward in the literature on hierarchical organization along the rostrocaudal axis of frontal lobes?
• In posterior regions, in general, a lot of scientists agree that what is represented is the control of temporally proximate, concrete actions of limited scope.
• As we move along the axis towards more rostral, anterior prefrontal regions, these regions become capable of bridging temporal gaps that are more and more extended. They have the capacity for branching: the capacity to keep in mind abstract representations of sets of rules, associations between different items, et cetera, in a temporally extended fashion.
However, these different models do not agree on what they mean by abstraction. What is being abstracted more and more as we move towards the frontal pole? What does abstraction mean? Moreover, they do not agree on what we mean by hierarchy either. Do they all really imply a hierarchy, that is, control from higher-level regions over lower-level regions, from which the higher-level regions are in turn fed with new information, the products of operations, et cetera?
For instance, even if a task could ideally be represented in a hierarchical way, this does not guarantee that the actual system itself consists of structurally hierarchical processing nodes or levels. Indeed, simulations with neural networks by Botvinick and Plaut have shown that hierarchical representations of goals and sub-goals, nested one inside the other, can be represented very well even within a single layer of a neural network, without any need for a structurally defined hierarchy.
Domain generality in working memory rostro-caudal axis of prefrontal cortex
• in posterior/caudal prefrontal regions we have content-based, very concrete, domain-specific representations of contents
• as we move towards more anterior/rostral prefrontal regions, these representations become domain-independent
What are hints about a hierarchical organization in this sense?
Activation during preparatory interval in anterior dorsolateral prefrontal cortex/frontopolar cortex correlates (connectivity) with activation in superior frontal gyrus (BA 8) or inferior frontal gyrus (BA 44) depending on whether the prepared task is spatial or verbal, respectively
So, anterior dorsolateral prefrontal cortex and frontopolar cortex change not only their activation, but also the pattern of connectivity with more posterior regions in a way that is meaningfully linked to the task at hand.
• If the task content requires information from frontal eye fields, because the task requires movements of the eyes, then connectivity of these rostral prefrontal regions increases with frontal eye fields.
• If instead the task requires more visual-identity information, these regions change their connectivity with the more relevant posterior regions that code for visual identity.
• If the task is verbal, they strengthen their functional connectivity with verbal regions, such as Broca's area, Brodmann area 44.
So depending on the type of task, the same core rostral prefrontal regions change their connectivity with more posterior regions. Modulation of connectivity with lower-level regions depending on task demands is thus a hint of hierarchical organization, although it does not fully demonstrate hierarchical organization by itself.
Domain generality in the anterior prefrontal cortex, combined with dynamic functional connectivity to more posterior regions depending on the specific task domain, is how these models interpret abstraction: abstraction in the sense of domain generality or, synonymously, task independence, independence from the material and from the cognitive domain (verbal, spatial, et cetera).
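The connectivity-modulation idea above can be illustrated with a toy computation: the same "anterior PFC" signal correlates with an FEF-like time series during a spatial task and with a Broca-like time series during a verbal task. The ROI names and time series are synthetic assumptions, and functional connectivity is computed, as is common, as a Pearson correlation between ROI time series:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500  # time points per task condition

# Anterior PFC time series in the two task conditions.
apfc_spatial = rng.normal(size=n)
apfc_verbal = rng.normal(size=n)

# Posterior ROIs share variance with aPFC only in the matching task:
# FEF couples during the spatial task, "Broca" during the verbal task.
fef = 0.8 * apfc_spatial + 0.6 * rng.normal(size=n)
broca = 0.8 * apfc_verbal + 0.6 * rng.normal(size=n)

def fc(x, y):
    """Functional connectivity as the Pearson correlation of two ROI series."""
    return float(np.corrcoef(x, y)[0, 1])

print(f"spatial task: aPFC-FEF r={fc(apfc_spatial, fef):.2f}, "
      f"aPFC-Broca r={fc(apfc_spatial, broca):.2f}")
print(f"verbal task:  aPFC-FEF r={fc(apfc_verbal, fef):.2f}, "
      f"aPFC-Broca r={fc(apfc_verbal, broca):.2f}")
```

The same anterior region thus shows strong coupling with whichever posterior region is task-relevant and near-zero coupling with the other, which is the signature these models take as a hint of hierarchical control.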
Christoff and Gabrieli
came up with a model in which, as you move along the caudal-rostral axis of the lateral prefrontal cortex, what changes and increases is relational complexity.
Levels of relational complexity along the y-axis
• Concrete features: in the simplest form, the capacity to code for concrete features. If the task only asks you to detect a stimulus color, you just need a very posterior prefrontal region, the ventrolateral prefrontal cortex, whose role is to maintain the rules involved in decoding very simple, specific item properties: stimulus parameters and characteristics such as color, shape, et cetera. There is a stimulus, there is a response: a very simple S-R association, handled by posterior lateral prefrontal regions.
• First-order: you can increase the relational complexity between the items to be elaborated and evaluated in order to produce a behavior, a response, an action; for instance, when your task is to establish whether the colors of two different stimuli match. You are now evaluating not just a simple stimulus-response association, but a simple relationship between concrete properties of different stimuli, or between different features of the same stimulus. Since this is already a first-order relation to evaluate, the area most involved is more anterior and dorsal: the dorsolateral prefrontal cortex.
• Second-order: you can go further with relational complexity, for instance with a second-level relationship to evaluate: evaluating relationships among relationships. The task could be: "Does the mismatching dimension (color or shape) of a target pair match the mismatching dimension of another pair?" You have two stimuli, a red square and a red circle. What makes them different? In this case it is shape, because the color is the same, red; that is the mismatching dimension. Does that mismatching dimension, shape, coincide with the mismatching dimension of another pair of stimuli? To hierarchically increase the representation of relationships in this way, you need a more anterior region of the prefrontal cortex: the frontopolar cortex.
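The first- and second-order relations just described can be written out concretely. This is an illustrative sketch using (color, shape) stimuli; the function names and example pairs are invented, not the authors' actual stimuli:

```python
def mismatching_dimension(a, b):
    """First-order relation: which dimension differs within a pair of stimuli?

    Each stimulus is a (color, shape) tuple. Returns "color", "shape",
    or None if zero or both dimensions mismatch.
    """
    color_a, shape_a = a
    color_b, shape_b = b
    if color_a != color_b and shape_a == shape_b:
        return "color"
    if shape_a != shape_b and color_a == color_b:
        return "shape"
    return None

def second_order_match(pair1, pair2):
    """Second-order relation: do the two pairs mismatch on the SAME dimension?"""
    d1 = mismatching_dimension(*pair1)
    d2 = mismatching_dimension(*pair2)
    return d1 is not None and d1 == d2

# Pair 1 mismatches on shape (both red); pair 2 also mismatches on shape.
pair1 = (("red", "square"), ("red", "circle"))
pair2 = (("blue", "triangle"), ("blue", "star"))
print(second_order_match(pair1, pair2))  # True: both pairs mismatch on shape
```

The nesting is the point: `second_order_match` operates on the *outputs* of `mismatching_dimension`, a relation over relations, which is the kind of representation the model assigns to the frontopolar cortex.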
So there is a hierarchical organization, as we saw in the ventral visual stream: starting from very simple sub-features such as orientations, angles or colors, the system comes to represent more and more complex features, such as entire semantic categories, faces, buildings, etc. The difference here is that now behavior is necessary: you need to evaluate what you are elaborating through perception in order to come up with a response, and that is more typical of prefrontal regions.
Anyway, you cannot go much further in increasing relational complexity, because the frontal pole ends at a certain point and beyond it there is only empty space. But you can increase this capacity, for instance by elaborating many different relational complexities of n levels in a highly skilled expert's brain. One idea, supported by some empirical findings, is that super-experts do not keep engaging higher-level cognitive control functions; they learn to rely on representations stored in procedural memory. They have a repertoire of representations already stored as schemata in their knowledge of the specific field of expertise; that is why they are so quick and efficient.
In the typical learning curve of our brain, when you start learning a new skill, frontoparietal and associative regions come online intensively. But once the skill has been learned it becomes automatized, and these regions can go back to baseline, unless some event occurs that is not known, has not been stored, and needs extra effort. Think for instance about driving a car: typically you don't need to control everything, you can go on a kind of automatic pilot. But if a cyclist suddenly crosses the street, you need to take back control.