The cognitive neuropsychology of face perception Flashcards
OBJECTIVES
Discuss the basic theory of human face perception and relate this to the understanding of acquired prosopagnosia.
Discuss in detail the cognitive mechanisms implicated in acquired prosopagnosia.
Discuss the relative merits of treatment regimes for patients who are suffering from acquired prosopagnosia.
What is prosopagnosia?
Prosopagnosia = face blindness; an impairment in the recognition and/or perception of faces.
What is the cognitive neuropsychology (Cog N) approach?
In essence, we draw models: we represent cognition and behaviour in models built from a variety of evidence, and we use those models to help understand behaviour and cognition. When cognition breaks down, we can frame the deficit on a model, gain insight, and reflect on effective therapy.
Why are faces so important?
The human face is one example of a social device that conveys a myriad of information. From a face we extract age, gender, attractiveness, competence and all sorts of complex social information, and we carry out such processing/recognition at the millisecond level. So-called “thin slice” studies show that we only have to be shown a face for 250 ms to extract complex social traits, such as the competence to win an election. It is something we are all very much expert at doing, and we do it many hundreds or thousands of times a day.
What is Bruce and Young’s (1986) model called?
The Functional Model for Face Perception
Face perception is a very complex social process, and a variety of models have been created to help us understand what happens when we see and process a face. None has the reputation or the weight of this one: it is a classic model in cognitive psychology and has inspired a great deal of work. Bruce and Young collected a range of evidence from behavioural and neuropsychological studies and collated it into this model (a box-and-arrow diagram).
What type of model is Bruce and Young’s (1986) model?
A sequential model
This model encompasses what was known about face perception at the time. When we see a face, it is inputted into the model (at the top right-hand corner); there is a single point at which the face percept enters our perceptual system, and because there is one entry point it is essentially a sequential model.
First stage of the Functional Model for Face Perception
The structural encoding stage.
(Visual face input) → View-centred descriptions → Expression-independent descriptions
Structural encoding is the first thing to happen. The first time you see a face you start to process its structure and map it to an internally stored configuration: you start to identify whether it is actually a face, and whether you are looking at it face-on or from the side – these are view-centred descriptions. This internally stored configuration is also processed into an expression-independent description, where displays of emotion start to be extracted at this very early stage. All of this is carried out in the structural encoding stage.
Leading on from this there are a variety of different branches: the expression analysis and facial speech analysis branches.
If we start to see a face and the corners of the mouth are turned up, for example, what does this mean? This configuration is different from a neutral face, so it goes to the expression analysis box. We all know that the processing of expressions is incredibly complicated: the feeling we experience when we see somebody else smiling, angry or disgusted is incredibly complex and involves a range of different cortical sites, yet the model puts it all into one box. This is one of its limitations, but it is how the model works: it identifies that there is a facial expression, and the expression is analysed in that box. Facial speech analysis is separated out as well: we see a face and start processing its initial configuration, but what is the face actually doing? Facial speech analysis, too, takes place after structural encoding. So as soon as you see a face you start processing its emotions and whether or not it is speaking to you, BUT the model separates expression analysis from facial speech analysis as two distinct components.
Directed visual processing
If you see the face of somebody you are going to meet somewhere, someone you are attracted to, or you are looking for a person of a certain age or height, your attention is directed to a particular set of faces. That happens very early on in this model.
Bottom right corner of the B&Y model. Face Recognition Units, Person Identity Nodes, Name Generation (next card)
Here we have the idea of face recognition units (FRUs) and person identity nodes (PINs). The FRU is where you start to map identity, meaning, some form of 3D context onto the inputted structural configuration: you start to process it as a face rather than as an image of a face. FRUs and PINs can be considered together, because it makes common sense that once you start attaching a social context to that inputted exemplar, you start to identify a person, e.g. “that is somebody I know” or “that is my girlfriend I have been waiting for”. This then feeds into the cognitive system. For the model, Bruce and Young did not want to break down the areas of the brain responsible for each of these tasks; they just put them in big boxes. The cognitive system is something that goes on in your brain: the process where a person is identified and the face is actually processed. Finally, name generation concerns the generation of a semantic label or title that helps you identify who that particular face belongs to.
Concluding point about B&Y model
We have sequential processing of the various tasks that take place when we see a face, going down through: pictorial codes, expression codes, facial speech codes, visually derived semantic codes, structural codes, identity-specific codes and name codes. At each stage you build upon the internally stored representation of a face to activate face processing. It is a good model but it is fundamentally flawed: there are aspects we can start to tease apart, and the most restrictive aspect of this particular model is the fact that it is a sequential process. We see a face, start analysing the structure, then... Intuitively it makes sense, but later evidence says otherwise. A sketch of this sequential structure follows below.
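To make the sequential claim concrete, here is a minimal Python sketch of the model’s face-identification route. Only the stage names come from Bruce and Young (1986); the functions, data structures and return values are hypothetical placeholders for illustration.

```python
# Minimal sketch of the sequential face-identification route in Bruce &
# Young's (1986) functional model. Stage names follow the model; all
# function bodies and data values are hypothetical placeholders.

def structural_encoding(visual_input):
    """Stage 1: build view-centred and expression-independent descriptions."""
    return {"view_centred": visual_input, "expression_independent": visual_input}

def face_recognition_unit(structural_code):
    """Stage 2: match the structural code against stored, known faces (FRU)."""
    return {"familiar": True, "structural_code": structural_code}

def person_identity_node(fru_output):
    """Stage 3: access identity-specific semantics about the person (PIN)."""
    return {"identity": "my neighbour", "semantics": ["lives next door"]}

def name_generation(pin_output):
    """Stage 4: retrieve the name label -- reachable only via the PIN."""
    return "Alice"  # hypothetical name

# Strictly sequential: each stage can begin only once the previous stage has
# finished -- the property the IAC model later relaxes.
name = name_generation(
    person_identity_node(
        face_recognition_unit(
            structural_encoding("visual face input"))))
print(name)
```

The point the sketch makes visible is the sequential constraint itself: nothing downstream (e.g. name generation) can begin until every earlier stage has completed.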
What does IAC stand for? Who created it?
The Interactive activation and competition model (IAC)
Burton, Bruce & Johnston 1990
The IAC (background)
The B&Y model has inspired a lot of work, but a refinement of their model is the IAC. Burton, Bruce & Johnston (1990) saw that the functional model for face perception was effectively flawed because it is a sequential process: in reality, even though it is intuitively appealing to assume that we see the basic structure of a face and build upon it in a sequential process, that is not what happens.
From an evolutionary point of view, if we see somebody attractive we process that before we process/recognise the face, so there has to be a parallel-processing element to the model. Basically, there are a number of components that activate the exemplar – the recognised semantic concept driven by a face – in parallel. A number of aspects are brought together to identify individuals.
The IAC
Developed from the Bruce and Young (1986) model, it allows a computer simulation of the face-identification pathway of the earlier model.
The main difference is that it proposes pools of units that correspond to the functional stages of the earlier model (FRUs, PINs etc.) and drive the identification of the person behind the face. There are also pools of units that hold semantic information.
These pools of semantic information units (SIUs) operate in an excitatory fashion: e.g., Paul McCartney’s face may also prime John Lennon’s face, because both identities connect to shared SIUs (such as “Beatle”).
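Because the IAC was designed to be computer-simulable, a toy sketch may help. This is a minimal spreading-activation illustration in Python: the pools (FRUs, PINs, SIUs) and the excitatory between-pool links come from the model as described above, but the specific units, weights and update rule are simplified assumptions, and the within-pool inhibitory (“competition”) connections are omitted for brevity.

```python
# Toy spreading-activation sketch of the IAC idea (Burton, Bruce & Johnston,
# 1990). The pools (FRUs, PINs, SIUs) come from the model; the particular
# units, weights and update rule below are simplified assumptions, and
# within-pool inhibition ("competition") is omitted for brevity.

# Excitatory links between pools (FRU <-> PIN <-> SIU).
links = {
    "FRU:McCartney": ["PIN:McCartney"],
    "PIN:McCartney": ["FRU:McCartney", "SIU:Beatle"],
    "SIU:Beatle":    ["PIN:McCartney", "PIN:Lennon"],
    "PIN:Lennon":    ["FRU:Lennon", "SIU:Beatle"],
    "FRU:Lennon":    ["PIN:Lennon"],
}

activation = {unit: 0.0 for unit in links}
activation["FRU:McCartney"] = 1.0   # we are shown Paul McCartney's face

EXCITE, DECAY = 0.4, 0.8            # assumed parameters, not from the paper

for step in range(5):
    # Synchronous update: each unit sums the activation flowing in from
    # units that link to it, then decays towards rest.
    incoming = {u: sum(activation[src] for src, targets in links.items()
                       if u in targets)
                for u in links}
    for u in links:
        activation[u] = DECAY * activation[u] + EXCITE * incoming[u]

# Lennon's PIN is now above zero even though his face was never shown:
# seeing McCartney primes Lennon via the shared "Beatle" SIU.
print(sorted(activation.items(), key=lambda kv: -kv[1]))
```

Unlike the sequential sketch of the B&Y pipeline, every unit here updates on every time step, so identification emerges from parallel activation rather than from completing one stage before the next begins.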
Those are the main differences and similarities between the models. Why are we talking about them? Cognitive neuropsychology models behaviour and cognition, and face perception, a complex social process, can be modelled quite effectively, as B&Y shows. It needs some refinement, and with the IAC we can incorporate those aspects to refine and develop a more effective model. But behind every model there is a patient suffering from a serious condition.
What about neural models of face perception?
We can look at the physiology behind these models: examine neurological models and see how they map onto the box-and-arrow diagrams. One of the main limitations of B&Y is that it lacks neurological specificity. Bruce and Young were explicit about this limitation in the key paper: their model, they acknowledged, lacks neurological specificity; we cannot map it onto areas of the brain.