The cognitive neuropsychology of face perception Flashcards
(36 cards)
OBJECTIVES
Discuss the basic theory of human face perception and relate this to the understanding of acquired prosopagnosia.
Discuss in detail the cognitive mechanisms implicated in acquired prosopagnosia.
Discuss the relative merits of treatment regimes for patients who are suffering from acquired prosopagnosia.
What is prosopagnosia?
Prosopagnosia = face blindness: an impairment in the recognition and/or perception of faces.
What is the Cog N approach?
In essence, we draw models: we represent cognition and behaviour in models built from a variety of evidence, and use them to help understand behaviour/cognition. When cognition breaks down, we can frame the deficit on a model, gain insight into what has failed, and reflect on effective therapy.
Why are faces so important?
The human face is one example of a social device that conveys a myriad of information. We look at a face and extract information such as age, gender, attractiveness, competence, etc.; all sorts of complex social information is processed from it. We carry out such processing/recognition at the millisecond level. So-called “thin slice” studies show that we only have to be shown a face for 250 ms to extract some complex social traits, such as the competence to win an election, for example. It is something we are all very much expert at doing, and we do it many hundreds or thousands of times a day.
What is Bruce and Young’s (1986) model called?
The Functional Model for Face Perception
Face perception is a very complex social process, and a variety of models have been created to help us understand what happens when we see/process a face. None has the reputation or weight of this one: it is a classic model in cognitive psychology and has inspired a great deal of work. Bruce and Young collected a range of evidence from behavioural and neuropsychological studies and collated it into this model (a box-and-arrow diagram).
What type of model is Bruce and Young’s (1986) model?
A sequential model
This model encompasses what we knew about face perception. When we see a face it is inputted into the model (at the top right-hand corner): there is a single point at which the face percept enters our perceptual system, and because there is one entry point it is essentially a sequential model.
First stage of the Functional Model for Face Perception
The structural encoding stage.
(Visual face input) → view-centred descriptions → expression-independent descriptions
Structural encoding is the first thing to happen: the first time you see a face you start to process the structure of the face and map it to an internally stored configuration. You start to identify whether or not it is actually a face, and whether you are looking at it face-on or from the side – these are the view-centred descriptions. This internally stored configuration is also processed into an expression-independent description, a representation of the face’s structure abstracted away from whatever expression it happens to display, so that later stages can work on identity regardless of expression. All of this is carried out in the structural encoding stage.
Leading on from this there are a variety of different branches: the expression analysis and facial speech analysis branches.
If we see a face and the corners of the mouth are turned up, for example, what does this mean? This configuration is different from a standard face, so we go to the expression analysis box. We all know that the processing of expressions is incredibly complicated: the feeling we experience when we see somebody else smiling, angry, disgusted, etc. is incredibly complex and involves a range of different cortical sites (but the model puts it all into one box). This is one of its limitations, but it is how the model works: it identifies that there is a facial expression and analyses it over there. Facial speech analysis is separated out as well: we see a face and start processing its initial configuration, but what is the face actually doing? Facial speech analysis is processed after structural encoding of the face. Straight away, as soon as you see a face, you start processing its emotions and whether or not it is speaking to you – BUT the model separates expression analysis from facial speech analysis and treats them as two distinct components.
Directed visual processing
If you are looking for the face of somebody you are going to meet somewhere, someone you are attracted to, or a person of a certain age or height, your attention is directed to a particular set of faces. That happens very early on in this model.
Bottom right corner of the B&Y model. Face Recognition Units, Person Identity Nodes, Name Generation (next card)
Here we have the idea of face recognition units (FRUs) and person identity nodes (PINs). The FRU is where you start to map identity/meaning/some form of 3D context onto the inputted structural configuration – you start to process it as a face rather than merely an image of a face. These can be considered together, because it makes sense that once you start assigning a social context to that inputted exemplar, you begin to identify a person, e.g. “that is somebody I know” or “that is my girlfriend I have been waiting for”. This then feeds into the cognitive system. In their model, B&Y did not want to break down the areas of the brain responsible for each of these tasks; they just put them in big boxes: the cognitive system – something that goes on in your brain, the process where a person is identified and where a face is actually processed – and finally name generation, the generation of a semantic label/title that helps you identify who that particular face belongs to.
Concluding point about B&Y model
We have sequential processing of the various tasks that take place when we see a face, working down through the codes:
- Pictorial codes
- Expression codes
- Facial speech codes
- Visually derived semantic codes
- Structural codes
- Identity-specific codes
- Name codes
At each stage you build upon the internally stored representation of a face to activate face processing. It is a good model, but it is fundamentally flawed: there are aspects we can start to tease apart, and the most restrictive aspect of this particular model is the fact that it is a sequential process. We see a face, then start analysing the structure, then… Intuitively it makes sense, but later evidence says otherwise.
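To make the “sequential” point concrete, here is a minimal sketch in Python (purely illustrative; the function bodies and return values are assumptions, only the stage names come from the model): each stage consumes only the output of the stage before it, so a breakdown early in the chain blocks everything downstream.

```python
# Illustrative sketch of the sequential character of the Bruce & Young (1986)
# model. The stage names follow the model; the functions themselves are hypothetical.

def structural_encoding(face_image):
    """Build view-centred and expression-independent descriptions."""
    return {"view_centred": face_image, "expression_independent": face_image}

def face_recognition_unit(structural_code):
    """Signal familiarity: does this structure match a known face?"""
    return {"familiar": True, "structural_code": structural_code}

def person_identity_node(fru_output):
    """Retrieve identity-specific semantics for a familiar face."""
    return {"identity": "known person", **fru_output}

def name_generation(pin_output):
    """Retrieve the name label last of all."""
    return "name of " + pin_output["identity"]

def recognise(face_image):
    # Strictly sequential: each stage depends on the one before it,
    # which is exactly the assumption the IAC model later relaxes.
    return name_generation(
        person_identity_node(
            face_recognition_unit(
                structural_encoding(face_image))))

print(recognise("some face image"))
```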
What does IAC stand for? Who created it?
The Interactive Activation and Competition (IAC) model
Burton, Bruce & Johnston 1990
The IAC (background)
The B&Y model has inspired a lot of work, but a refinement of it is the IAC. Burton, Bruce & Johnston (1990) saw that the functional model for face perception was effectively flawed because it is a sequential process: in reality, even though it is intuitively appealing to assume that we see the basic structure of a face and build upon it in a sequential process, that is not what actually happens.
From an evolutionary point of view, if we see somebody who is attractive we process that before we process/recognise the face, so there has to be a parallel-processing element to the model. Basically, there are a number of components and they activate the exemplar – the recognised semantic concept driven by a face – in parallel. A number of aspects come together to identify individuals.
The IAC
Developed from the Bruce and Young (1986) model; it allows a computer simulation of the face identification pathway of the earlier model.
The main difference is that it proposes pools of units that correspond to the functional stages of the earlier model (FRUs, PINs, etc.) and that drive the identification of the person behind the face. There are also pools of units that collect semantic information.
There are pools of semantic information units (SIUs) as well, and connections between pools operate in an excitatory fashion, e.g. Paul McCartney’s face may also prime John Lennon’s face via the semantic information they share.
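As a rough illustration of how such a simulation might look, here is a toy IAC-style network in Python (the unit names, weights and update rule are my own assumptions, not Burton, Bruce & Johnston’s actual parameters): units in different pools excite each other, units within a pool compete, and a shared SIU is what lets one face prime another.

```python
# Toy interactive activation and competition (IAC) style network, illustrating
# between-pool excitation, within-pool inhibition (competition) and semantic
# priming via shared SIUs. Unit names, weights and constants are illustrative.

EXCITATION, INHIBITION, DECAY = 0.10, 0.05, 0.90

pools = {
    "FRU": ["mccartney_face", "lennon_face"],   # face recognition units
    "PIN": ["mccartney", "lennon"],             # person identity nodes
    "SIU": ["beatle", "musician"],              # semantic information units
}

# Bidirectional excitatory links between pools.
links = [
    ("mccartney_face", "mccartney"), ("lennon_face", "lennon"),
    ("mccartney", "beatle"), ("mccartney", "musician"),
    ("lennon", "beatle"), ("lennon", "musician"),
]

activation = {unit: 0.0 for units in pools.values() for unit in units}

def step(external_input):
    """One update cycle: decay + external input + excitation - within-pool inhibition."""
    new = {}
    for units in pools.values():
        for unit in units:
            net = external_input.get(unit, 0.0)
            for a, b in links:                   # excitation from linked units
                if unit == a:
                    net += EXCITATION * activation[b]
                elif unit == b:
                    net += EXCITATION * activation[a]
            for rival in units:                  # competition within the pool
                if rival != unit:
                    net -= INHIBITION * activation[rival]
            new[unit] = max(0.0, DECAY * activation[unit] + net)
    activation.update(new)

# Present McCartney's face for a few cycles...
for _ in range(10):
    step({"mccartney_face": 1.0})

# ...and Lennon's PIN now carries some activation ("priming") via the shared
# "beatle"/"musician" SIUs, even though Lennon's face was never shown.
print(f"mccartney PIN: {activation['mccartney']:.3f}, lennon PIN: {activation['lennon']:.3f}")
```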
These are the main differences and similarities between the models. Why are we talking about them? Cognitive neuropsychology models behaviour and cognition, and face perception – a complex social process – can be modelled quite effectively, as B&Y shows. It needs some refinement, and with the IAC we can incorporate those aspects to develop a more effective model. But behind every model there is a patient suffering from a serious condition.
What about neural models of face perception?
Look at the physiology behind these models: look at neurological models and see how they map onto the box-and-arrow diagrams. One of the main limitations of B&Y is that it lacks neurological specificity. B&Y were explicit about this limitation in the key paper: they acknowledged that their model lacks neurological specificity and cannot be mapped onto areas of the brain.
What is Haxby, Hoffman & Gobbini’s (2006) model called?
The Distributed Model for Human Face Recognition
Haxby created a model that was thoroughly informed by the brain – this one has neurological specificity. HH&G (2006) collected together information from brain-imaging studies and mapped the various processes onto the substrates in the brain that mediate face perception.
Describe the distributed model for Human face Recognition
2 main aspects – the core system and the extended system.
In the core system, visual analysis takes place: there are substrates that drive the low-level perception of a face, and the model tells us the areas of the brain where these processes occur. The inferior occipital gyri, at the back of the brain near the occipital pole, carry out the early perception of facial features – this is where the structural encoding of a face would take place. Then, moving to the side of the head, the lateral temporal cortex contains the superior temporal sulcus (STS), where all the moving, changeable aspects of the face are processed: whether it is emotion or speech, it is all processed in this one area of the brain.
Straight away you can notice the difference between B&Y and Haxby: B&Y explicitly separated expression analysis from speech analysis, whereas Haxby has them processed together – Haxby has informed and developed/refined the B&Y model. On the underside of the brain, in the lateral fusiform gyrus (running from the back of the temporal lobe forwards), the invariant, static aspects of the face, such as identity, are processed; the contextual assignment of a face (“this is Jane who works in a bookshop”) is also processed and activated from here.
Describe the distributed model for Human face Recognition. The core system projects on to the extended system…
In the extended system there are further areas of the brain that drive/mediate complex social processing. The STS projects to the intraparietal sulcus, which drives things like shifts in gaze direction, and to the auditory cortex for prelexical speech perception: seeing someone’s face move primes your perception of the word. This is evidenced by the McGurk fusion illusion – you see a face mouthing “ga-ga” while the audio is “ba-ba”, and what you perceive is the fused “tha-tha”. Then we have emotion processing: the amygdala, insula, limbic system and frontal cortex are all thought to mediate some form of emotion processing. Finally, the lateral fusiform projects forwards into the anterior temporal cortex, where name and biographical information is processed.
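As a quick way to hold the mapping in mind, the core/extended split described above can be written out as a simple lookup structure (a Python sketch; the one-line summaries are paraphrases of these notes, not Haxby et al.’s wording).

```python
# The distributed model as a region -> process lookup (paraphrased summary).
haxby_model = {
    "core": {
        "inferior occipital gyri": "early perception of facial features (structural encoding)",
        "superior temporal sulcus": "changeable aspects: expression, gaze, facial speech",
        "lateral fusiform gyrus": "invariant aspects: identity and its context",
    },
    "extended": {
        "intraparietal sulcus": "spatially directed attention, e.g. shifts in gaze direction",
        "auditory cortex": "prelexical speech perception (cf. the McGurk effect)",
        "amygdala / insula / limbic system / frontal cortex": "emotion processing",
        "anterior temporal cortex": "name and biographical information",
    },
}

for system, regions in haxby_model.items():
    for region, process in regions.items():
        print(f"[{system}] {region}: {process}")
```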
Why do we talk about the distributed model for Human face Recognition?
For two reasons: (1) it clearly maps processes to the brain, and (2) it is possible to map certain aspects of it onto the earlier B&Y model. In doing so we can take the two models together and inform our understanding of the behaviour: B&Y say this behaviour happens first and some more behaviour happens afterwards, while Haxby says where in the brain it happens. So for certain processes involved in face perception we know how they happen, when they happen, and also where in the brain they happen.
1st **Brain break**
Face perception is a complex social process yet it is readily mapped into various models.
It’s fairly unique in that cognitive psychological and cognitive neuroscientific models overlap, i.e. one can infer neurological specificity for some fairly complex social processes (we can take aspects of the B&Y model – not the whole thing – and map them onto Haxby).
Applying various cognitive neuroscientific tools together can be informative about the cognitive systems that underpin face perception (it helps us to elaborate/identify/understand the mechanisms that drive these behaviours).
What happens when it breaks down?
The case of acquired prosopagnosia (AP)
A severe cognitive disorder characterised by an inability to recognise faces that is not due to low-level visual problems and is not attributable to broader cognitive alterations such as amnesia.
It is, however, often comorbid: you tend to find an array of other issues manifesting alongside it, such as amnesia and agnosia. Sufferers can recognise that a set of features forms a face, but they cannot discriminate between faces or process unfamiliar faces. Acquired prosopagnosia usually occurs after an injury to the brain is sustained, and the ability to recognise faces is lost.
A brief history and prevalence of AP
Early reports of AP date from around 1844; however, it was considered a form of visual agnosia (object blindness) until Hoff & Pötzl (1937) published the first case study showing a selective deficit for faces. Prior to their publication, such patients were thought to be object blind.
Valentine et al (2006) screened 91 patients who had suffered brain damage 6 months prior to testing and reported that 70 patients (77%) had face perception difficulties.
Quite rare when it occurs alone, but it has a relatively high comorbidity: you very rarely find a patient who presents with pure prosopagnosia, but it does appear in the context of other disorders.
Subtypes of prosopagnosia (IMPORTANT)
There is a basic dichotomy of prosopagnosia that mirrors the general dichotomy in visual agnosia: apperceptive and associative prosopagnosia.
Apperceptive prosopagnosics have problems with the generation of the facial percept itself. In associative prosopagnosia, face perception is largely intact, but patients cannot assign any meaning to a face – that is more like a semantic deficit. An apperceptive deficit is a problem with the low-level processing of the stimulus, whereas an associative deficit is higher level.
What can the models tell us ?
Apperceptive prosopagnosics have an issue identifying (they know something is going on but cannot recognise that it is a face), so it is clear they have a breakdown in the early structural encoding aspects of the B&Y model. B&Y may not offer any neurological specificity, but when you look at Haxby there is some. Putting the two models side by side, we now know that people suffering from apperceptive prosopagnosia have problems with early structural encoding of the face, and that it is in the inferior occipital gyri that these deficits will occur. Now we have a very powerful tool with which we can focus some kind of remedy or therapy.
Associative prosopagnosics can more or less recognise that something is a face, but cannot work out whose it is. Because the actual processing of the face is intact, we can assume these patients have intact access to the structural encoding aspects of B&Y and intact early visual processing according to Haxby. But the later aspects (according to B&Y), the FRUs and PINs, are what they cannot access, and we know from Haxby that the anterior temporal cortex is implicated in these processes. So when you find someone presenting with associative prosopagnosia, you can say they may have problems with the anterior temporal cortex, and put them in an MRI scanner to confirm. You can then focus tasks/therapy/practical work on remediating the processes going on there, without wasting time targeting other things: you have a selective tool with which to build a focused regime of therapy tailored to these patients.
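Pulling the last two cards together, the clinical reasoning can be summarised as a small lookup (a hedged sketch: the stage and substrate assignments follow the notes above, and the therapy-focus entries only capture the generic targeting idea, not a validated treatment protocol).

```python
# Mapping subtype -> implicated B&Y stage -> likely neural substrate (per the
# notes above) -> where a focused therapy regime would be aimed.
prosopagnosia_subtypes = {
    "apperceptive": {
        "b_and_y_stage": "structural encoding",
        "substrate": "inferior occipital gyri",
        "therapy_focus": "training the early perceptual analysis of faces",
    },
    "associative": {
        "b_and_y_stage": "FRUs / PINs (identity and semantics)",
        "substrate": "anterior temporal cortex",
        "therapy_focus": "re-linking intact face percepts to identity and semantic information",
    },
}

def suggested_focus(subtype: str) -> str:
    """Return a one-line summary of where therapy would be targeted."""
    info = prosopagnosia_subtypes[subtype]
    return f"{subtype}: target {info['substrate']} ({info['b_and_y_stage']}) – {info['therapy_focus']}"

print(suggested_focus("associative"))
```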