The cognitive neuropsychology of face perception Flashcards

1
Q

OBJECTIVES

A

Discuss the basic theory of human face perception and relate this to the understanding of acquired prosopagnosia.
Discuss in detail the cognitive mechanisms implicated in acquired prosopagnosia.
Discuss the relative merits of treatment regimes for patients who are suffering from acquired prosopagnosia.

2
Q

What is prosopagnosia?

A

Prosopagnosia = face blindness: an impairment in the recognition or perception of faces.

3
Q

What is the Cog N approach?

A

In essence, draw models: represent cognition and behaviour in models built from a variety of evidence, and use them to help understand that behaviour and cognition. When cognition breaks down we can frame the deficit on a model, gain insight, and reflect on effective therapy.

4
Q

Why are faces so important?

A

The human face is one example of a social device that conveys a myriad of information. We look at a face and extract information such as age, gender, attractiveness, and competence; all sorts of complex social information is processed from it. We carry out such processing, recognition, and cognition at the millisecond level. So-called “thin slice” studies show that we only have to be shown a face for 250 ms to extract some complex social traits, such as the competence to win an election, for example. It is something we are all very much expert at doing, and we do it many hundreds or thousands of times a day.

5
Q

What is Bruce and Young’s (1986) model called?

A

The Functional Model for Face Perception

Face perception is a very complex social process, and a variety of models have been created to help us understand what happens when we see and process a face. None has the reputation or the weight of this one: it is a classic model in cognitive psychology and has inspired a great deal of work. Bruce and Young collected a range of evidence from behavioural and neuropsychological studies and collated it into this model (a box-and-arrow diagram).

6
Q

What type of model is Bruce and Young’s (1986) model?

A

A sequential model
This model encompasses what we knew about face perception. When we see a face it is input into the model (at the top right-hand corner of the diagram); there is a single point at which the face percept enters our perceptual system, and because there is one entry point it is essentially a sequential model.

7
Q

First stage of the Functional Model for Face Perception

A

The structural encoding stage.
(Visual face input) → view-centred descriptions → expression-independent descriptions

Structural encoding is the first thing to happen. The first time you see a face you start to process its structure and map it to an internally stored configuration: you begin to identify whether it is actually a face, and whether you are looking at it face-on or from the side. These are view-centred descriptions. We also process this internally stored configuration into an expression-independent description, where we start to extract expressions and displays of emotion at this very early stage. All of this is carried out in the structural encoding stage.

8
Q

Leading on from this there are a variety of branches: the expression analysis and facial speech analysis branches.

A

If we start seeing a face and notice, for example, that the corners of the mouth are turned up, what does this mean? This configuration is different from a neutral face, so it goes to the expression analysis box. We all know that the processing of expressions is incredibly complicated: what we experience when we see somebody smiling, angry, or disgusted is complex and involves a range of different cortical sites, yet the model puts it all into one box. This is one of its limitations, but it is how the model works: identify that there is a facial expression, and it is analysed over there. Facial speech analysis is separated out as well: you see a face and start processing its initial configuration, but what is the face actually doing? Facial speech analysis, too, is processed after structural encoding of the face. As soon as you see a face you start processing its emotions and whether or not it is speaking to you, BUT Bruce and Young separated expression analysis from facial speech analysis, putting them in as two separate components.

9
Q

Directed visual processing

A

If you are watching for the face of somebody you are going to meet somewhere, someone you are attracted to, or a person of a certain age or height, your attention is directed to a particular set of faces. This happens very early on in the model.

10
Q

Bottom right corner of the B&Y model. Face Recognition Units, Person Identity Nodes, Name Generation (next card)

A

Here we have the idea of face recognition units (FRUs) and person identity nodes (PINs). The FRU is where you start to map identity, meaning, and some form of 3D context onto the inputted structural configuration: you start to process it as a face rather than as an image of a face. The FRUs and PINs can be considered together, because it makes common sense that once you start assigning a social context to the inputted exemplar you begin to identify a person, e.g. “that is somebody I know” or “that is my girlfriend I have been waiting for”. This then feeds into the cognitive system. For the model, Bruce and Young did not want to break down which areas of the brain were responsible for each of these tasks; they just put them in big boxes. The cognitive system is “something that goes on in your brain”: the process by which a person is identified, where the face is actually processed. Finally, name generation is about producing the semantic label or title that identifies who that particular face belongs to.

11
Q

Concluding point about B&Y model

A
We have sequential processing of the various types of tasks that take place when we see a face, moving down through:
Pictorial codes
Expression codes
Facial speech codes
Visually derived semantic codes
Structural codes
Identity-specific codes
Name codes
At each stage you build upon the internally stored representation of a face to activate face processing. It is a good model but it is fundamentally flawed: there are aspects we can start to tease apart. The most restrictive aspect of this particular model is the fact that it is a sequential process (we see a face, start analysing the structure, and only then move on). Intuitively it makes sense, but later evidence says otherwise…
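To make the seriality concrete, here is a minimal Python sketch. It is illustrative only, not code from Bruce and Young: the stage names follow the model, the function bodies are placeholders, and the one thing it demonstrates is a single entry point feeding one fixed order of stages (the very property criticised above).

```python
from typing import Callable, Dict, List

# Each B&Y stage modelled as one step in a fixed pipeline.
# Bodies are placeholders; only the ordering matters here.

def structural_encoding(face: Dict) -> Dict:
    face["encoded"] = True                 # view-centred -> expression-independent
    return face

def face_recognition_unit(face: Dict) -> Dict:
    face["familiar"] = True                # is this a familiar face? (placeholder)
    return face

def person_identity_node(face: Dict) -> Dict:
    face["identity"] = "a known person"    # identity-specific semantics (placeholder)
    return face

def name_generation(face: Dict) -> Dict:
    face["name"] = "(name retrieved)"      # name retrieval happens last of all
    return face

STAGES: List[Callable[[Dict], Dict]] = [
    structural_encoding,     # structural codes
    face_recognition_unit,   # FRU
    person_identity_node,    # PIN
    name_generation,         # name codes
]

def recognise(face_input: Dict) -> Dict:
    # One entry point, one fixed order: the model's seriality.
    for stage in STAGES:
        face_input = stage(face_input)
    return face_input

print(recognise({"image": "a face"}))
```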
12
Q

What does IAC stand for? Who created it?

A

The interactive activation and competition (IAC) model

Burton, Bruce & Johnston (1990)

13
Q

The IAC (background)

A

The B&Y model has inspired a lot of work, but the IAC is a refinement of it. Burton, Bruce & Johnston (1990) saw that the functional model for face perception was effectively flawed because it is a sequential process: in reality, even though it is intuitively appealing to assume that we see the basic structure of a face and build upon it sequentially, that is not what happens.
From an evolutionary point of view, if we see somebody who is attractive, we process that before we process and recognise the face, so there has to be a parallel-processing element to the model. Basically, there are a number of components and they activate the exemplar, the recognised semantic concept driven by a face, in parallel. A number of aspects come together to identify individuals.

14
Q

The IAC

A

Developed from the Bruce and Young (1986) model, it allows a computer simulation of the face identification pathway of the earlier model.
The main difference is that it proposes pools of units that correspond to the functional stages of the earlier model (FRUs, PINs, etc.) and to the identification of the person behind the face. There are pools of units that work on semantic information.
These pools of semantic information units (SIUs) operate in an excitatory fashion, e.g. Paul McCartney’s face may also prime John Lennon’s face.

These are the main differences and similarities between the models. Why are we talking about these models? The Cog N approach models behaviour and cognition: face perception, a complex social process, can be modelled quite effectively, as B&Y shows, and it can be refined with the IAC; we can incorporate those aspects to develop a more effective model. But behind every model there is a patient suffering from a serious condition.
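As a study aid, here is a tiny runnable sketch of the priming effect just described. It is not Burton, Bruce & Johnston’s implementation: the unit names, weights, and the simple additive update rule are illustrative assumptions (the published model uses full IAC dynamics, but the structure this toy preserves is theirs: excitatory links between pools and inhibitory competition within pools).

```python
# Toy IAC-style network: FRUs, PINs, and one shared SIU ("Beatle").
# Between-pool links are excitatory; units within a pool inhibit each other.

units = ["FRU:McCartney", "FRU:Lennon",
         "PIN:McCartney", "PIN:Lennon",
         "SIU:Beatle"]

EXCITE, INHIBIT, DECAY = 0.1, 0.05, 0.1   # illustrative parameters

links = [("FRU:McCartney", "PIN:McCartney"),   # bidirectional excitation
         ("FRU:Lennon",    "PIN:Lennon"),
         ("PIN:McCartney", "SIU:Beatle"),
         ("PIN:Lennon",    "SIU:Beatle")]

def pool(u):
    return u.split(":")[0]        # units sharing a prefix share a pool

def step(act, external):
    new = {}
    for u in units:
        net = external.get(u, 0.0)
        for a, b in links:                     # between-pool excitation
            if u == a: net += EXCITE * act[b]
            if u == b: net += EXCITE * act[a]
        for v in units:                        # within-pool competition
            if v != u and pool(v) == pool(u):
                net -= INHIBIT * act[v]
        new[u] = max(0.0, act[u] * (1 - DECAY) + net)
    return new

act = {u: 0.0 for u in units}
for _ in range(20):                  # clamp input onto McCartney's FRU
    act = step(act, {"FRU:McCartney": 0.2})

# Seeing McCartney excites his PIN, the shared SIU, and thence Lennon's PIN:
print(f"PIN:Lennon = {act['PIN:Lennon']:.3f}")   # above resting level: primed
```

Running the sketch, Lennon’s PIN ends above its resting level purely because of the shared semantic unit: the parallel, interactive priming behaviour that the IAC adds over a strictly sequential pipeline.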

15
Q

What about neural models of face perception?

A

Look at the physiology behind these models: look at neurological models and see how they map onto the box-and-arrow diagrams. One of the main limitations of B&Y is that it lacks neurological specificity. B&Y were explicit about this limitation in the key paper: their model, they said, lacks neurological specificity and cannot be mapped to areas of the brain.

16
Q

What is Haxby, Hoffman & Gobbini’s (2006) model called?

A

The Distributed Model for Human Face Recognition
LOOK AT PICTURE!!
Haxby created a model which was informed thoroughly by the brain: this one has neurological specificity. Haxby, Hoffman & Gobbini (2006) collated information from brain-imaging studies and mapped the various processes onto the brain substrates that mediate face perception.

17
Q

Describe the Distributed Model for Human Face Recognition

A

Two main aspects: the core system and the extended system.
In the core system, visual analysis takes place: these are the substrates that drive low-level perception of a face, and the model tells us the areas of the brain where these processes occur. The inferior occipital gyri, at the back of the brain near the occipital pole, handle early perception of facial features; this is where the structural encoding of a face would take place. Moving to the side of the head, in the lateral temporal cortex, sits the superior temporal sulcus (STS), where all the moving aspects of the face are processed: whether it is emotion or speech, it is all handled in this one area of the brain.
Straight away you can notice the difference between B&Y and Haxby: B&Y deliberately separated expression analysis from speech analysis, whereas Haxby has them processed together, so Haxby has informed and refined the B&Y model. On the underside of the head, in the fusiform gyrus (starting at the back and running forward towards the temporal pole), the invariant, static aspects of the face are processed: things like identity, plus the contextual assignment of a face (“this is Jane who works in a bookshop”), are activated from here.

18
Q

Describe the Distributed Model for Human Face Recognition. The core system projects onto the extended system…

A

In the extended system there are further areas of the brain that drive and mediate complex social processing. The STS projects to the intraparietal sulcus, which drives things like shifts in gaze direction, and to the auditory cortex for prelexical speech perception: you see someone’s face move and it primes your perception of the word. This is evidenced by the McGurk fusion illusion: you see a face saying “ga-ga” while hearing “ba-ba”, so what you perceive is a fused “tha-tha”. Then there is emotion processing: the amygdala, insula, limbic system, and frontal cortex are all thought to mediate some form of emotion processing. Finally, the lateral fusiform projects forward to the anterior temporal cortex, where name and biographical information is processed.
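Since this card and the previous one together enumerate a region-to-function mapping, here it is collected as a small lookup table. This is a revision aid built only from the lecture content above, not an exhaustive rendering of Haxby, Hoffman & Gobbini.

```python
# Core system: visual analysis of faces (roles as described in the cards above).
CORE_SYSTEM = {
    "inferior occipital gyri": "early perception of facial features (structural encoding)",
    "superior temporal sulcus (STS)": "changeable aspects: expression, facial speech, gaze",
    "lateral fusiform gyrus": "invariant aspects: identity and contextual assignment",
}

# Extended system: regions the core system projects onto.
EXTENDED_SYSTEM = {
    "intraparietal sulcus": "shifts of attention / gaze direction (from STS)",
    "auditory cortex": "prelexical speech perception, cf. McGurk fusion (from STS)",
    "amygdala, insula, limbic system, frontal cortex": "emotion processing",
    "anterior temporal cortex": "name and biographical information (from lateral fusiform)",
}

def locate(keyword: str) -> list:
    """Return the regions whose (lecture-derived) role mentions the keyword."""
    both = {**CORE_SYSTEM, **EXTENDED_SYSTEM}
    return [region for region, role in both.items() if keyword in role]

print(locate("identity"))   # -> ['lateral fusiform gyrus']
```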

19
Q

Why do we talk about the Distributed Model for Human Face Recognition?

A

For two reasons: (1) it clearly maps processes to the brain; (2) it is possible to map certain aspects of it onto the earlier B&Y model, and in doing so the two models can jointly inform our understanding of the behaviour. B&Y say this behaviour happens here and then more behaviour happens afterwards; Haxby says that may be so, but it happens in this area of the brain. We therefore know, for the processes involved in face perception, how they happen, when they happen, and also where in the brain they happen.

20
Q

1st **Brain break **

A

Face perception is a complex social process yet it is readily mapped into various models.
It’s fairly unique in that cognitive psychological and cognitive neuroscientific models overlap, i.e. one can infer neurological specificity for some fairly complex social processes (we can take aspects of the B&Y model, not the whole thing, and map them to Haxby).
Applying various cognitive neuroscientific tools together can be informative about the cognitive systems that underpin face perception (it helps us to identify and understand the mechanisms that drive these behaviours).
What happens when it breaks down?

21
Q

The case of acquired prosopagnosia (AP)

A

A severe cognitive disorder characterised by an inability to recognise faces that is not due to low-level visual problems and not attributable to cognitive alterations such as amnesia.
It is, however, rather comorbid: you tend to find an array of different issues manifesting alongside it, such as amnesia and agnosia. Sufferers can recognise that a face is a face, but they cannot discriminate between faces or process unfamiliar faces. Acquired prosopagnosia usually occurs after an injury to the brain is sustained, after which the ability to recognise faces is lost.

22
Q

A brief history and prevalence of AP

A

Early reports of AP date from around 1844; however, it was considered a form of visual agnosia (object blindness) until Hoff & Pötzl (1937) published the first case study showing a selective deficit for faces. Prior to their publication, sufferers were simply thought to be object blind, unable to see objects.
Valentine et al. (2006) screened 91 patients who had suffered brain damage 6 months prior to testing and reported that 70 patients (77%) had face perception difficulties.
It is quite rare when studied alone but has relatively high comorbidity: you very rarely find a patient who manifests with pure prosopagnosia, but it does appear in the context of other disorders.

23
Q

Subtypes of prosopagnosia (IMPORTANT)

A

There is a basic dichotomy of prosopagnosia that mirrors the general dichotomy in visual agnosia: apperceptive and associative prosopagnosia.
Apperceptive prosopagnosics have problems with the generation of the facial percept, whereas in associative prosopagnosics face perception is largely intact but they cannot assign any meaning to a face, which is more like a semantic deficit. The apperceptive deficit is a problem with the low-level processing of the stimulus; the associative deficit is more high-level.

24
Q

What can the models tell us?

A

Apperceptive prosopagnosics have an issue identifying faces (they know something is there but cannot recognise that it is a face): clearly they have a breakdown in the early structural encoding stage of the B&Y model. B&Y may not have any neurological specificity, but when you look at Haxby there is some. Putting the two models side by side, we now know that people suffering from apperceptive prosopagnosia have problems with early structural encoding of the face, and that it is in the inferior occipital gyri that these deficits will occur. Now we have a very powerful tool on which to focus some kind of remedy or therapy.

Associative prosopagnosics can more or less recognise that something is a face, but cannot work out whose it is. Because the actual processing of the face is intact, we can assume these patients have intact access to the structural encoding stage of B&Y, i.e. intact early visual processing according to Haxby. But they do not have access to the later stages (in B&Y terms, the FRUs and PINs), and we know from Haxby that the anterior temporal cortex is implicated in these processes. So when someone presents with associative prosopagnosia, you can say they may have problems with their anterior temporal cortex, and put them in an MRI scanner to confirm. You can then focus the tasks, therapy, and practical work on remediating the processes implicated there; you do not need to waste time targeting other things. You have a selective tool with which to build a focused therapy regime for these patients.
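The same cross-model mapping, collected as a revision table (this only restates the lecture’s attributions above; it is not from any published source):

```python
# Subtype -> presentation -> locus in each model (per the card above).
SUBTYPE_MAP = {
    "apperceptive": {
        "presentation": "cannot generate the facial percept; may not see a face as a face",
        "B&Y locus": "early structural encoding",
        "Haxby locus": "inferior occipital gyri",
    },
    "associative": {
        "presentation": "percept intact; cannot assign identity or meaning to the face",
        "B&Y locus": "face recognition units (FRUs) and person identity nodes (PINs)",
        "Haxby locus": "anterior temporal cortex",
    },
}
```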

25
Q

What do the models tell us so far?

A

The functional dissociation between apperceptive and associative prosopagnosia can be mapped to the B&Y model quite clearly.
However, an understanding of the patients can also inform and refine our understanding of the models.

The double dissociation of HJA and LM

26
Q

HJA & LM

A

HJA: male from Solihull, a businessman who suffered a very severe stroke while on a business trip to Germany. He was unconscious for a period of time and was flown back to England; as he started to recover he began to manifest some quite severe visual disturbances. HJA had amnesia and agnosia: he couldn’t take on board new images and couldn’t remember new images, but he also manifested dense prosopagnosia (which has a high comorbidity). He was severely visually impaired: he could recognise and perceive motion, but he couldn’t perceive objects.

LM: female, German (now deceased), quite young when she had a stroke, which was caused by an overdose of pills. Her behavioural manifestation was that she was densely motion-blind, i.e. akinetopsic (she suffered akinetopsia): she couldn’t perceive motion at all. She saw the world as a series of static snapshots and could not process movement.

27
Q

The HJA/LM dissociation

A

HJA suffered a stroke to large parts of his ventral temporal cortex, the underside of the brain, his ventral circuits. LM suffered a stroke to her lateral temporal cortex: her ventral circuits were intact, and it was the lateral circuits where she had a massive issue.
HJA suffered from profound agnosia, while LM suffered akinetopsia (she was motion blind).
Please note that nobody has published this dissociation together: you must locate the key articles on HJA and those on LM and bring them together. HJA has been published on his own and LM on her own; it is when we bring the literatures together that we see some quite interesting things.

28
Q

The HJA/LM dissociation 2

A

Left = HJA. In a study of speech reading using the McGurk fusion illusion, HJA could not do the task at all when presented with a static photograph. However, when the same speech, the same phoneme, was presented in a dynamic modality, HJA had no problem whatsoever: he uses dynamic information to gain access to his stored semantic representations of the world. LM did not have access to that kind of information: she could not see things in the dynamic domain. When she was given a face-reading task with moving material she couldn’t really see anything, but in the static modality, when a photograph was presented to her, she had no problem accessing the semantic information. This is a functional dissociation: two patients with two discrete lesions (each lacking access to a different area of the brain) manifest two discrete forms of behaviour. (Now we know we have necessity for these areas of the brain, so look at where they are.)

29
Q

The HJA/LM dissociation (model slide)

A

Looking at the key model (B&Y): expression analysis and facial speech analysis are separate there, but HJA and LM show us that perhaps they should be considered as one unit, one thing: facial motion units.

30
Q

The HJA/LM dissociation (blue and red model slide)

A

The patients can be mapped quite neatly onto Haxby’s model. LM had a stroke which took out most of her lateral temporal cortex, including the superior temporal sulcus; she didn’t have access to those areas of the brain. HJA did have access to them, but he didn’t have access to the lateral fusiform gyrus: he couldn’t process identity from a static exemplar. Straight away, the existence of HJA and LM allows us to add necessity to this model. If you take an area out, the model predicts that the person won’t have access to the corresponding behaviour in the extended system; we know this because HJA couldn’t do it, and vice versa for LM: anyone with damage to that part of the brain won’t be able to do these kinds of things, and LM couldn’t do them when material was presented in the dynamic modality. HJA and LM allow us to inform our understanding of the models.

31
Q

2nd ** BRAIN BREAK **

A

Prosopagnosia is a serious neuropsychological condition that can be divided into two forms, these being associative and apperceptive.
The study of prosopagnosia can be informed by its subsequent modelling, and vice versa.
Further study of neuropsychological patients reveals that motion-specific components which may underlie face perception are dissociable as well, and can be modelled. This is crucial: once you can model something you can start to talk about remedy, therapy, and the like.
How do we remedy it?

32
Q

Treatment of Acquired Prosopagnosia

A

There have been few attempts to remedy AP, and those that have been carried out have been largely unsuccessful.
Ellis & Young (1988) studied an 8-year-old child with AP (following surgical complications) over a period of 18 months, and De Haan et al. (1991) attempted to rehabilitate an adult sufferer of AP; both failed, and neither remedy was informed by modelling.
Powell et al. (2008) carried out a series of interventions developed from contemporary face perception theory and did show some success. Powell et al. did something very clever: they looked at Ellis & Young and De Haan et al. and observed that those studies had not developed therapy or rehabilitation informed by our understanding of the models, by the processes mapped to those models; they had not adopted a Cog N approach. Powell et al. ran three procedures that were informed by the B&Y model.

33
Q

Powell et al. (2008) actually showed some success in remediation/rehabilitation.

A

20 patients with general impairments of face perception, alongside broader cognitive deficits, took part in three different training procedures.
The procedures were each specifically designed to enhance the learning of faces.
Perhaps more importantly, each of the procedures was mapped to key ‘codes’ in the B&Y model (an example of the models informing therapy).

34
Q

Powell et al 2008 (2)

A

Semantic association task: participants were provided with additional verbal material about the faces they were being trained to learn.
Caricature training: participants were provided with caricatured versions of the faces they were being trained to learn.
Part recognition training: participants’ attention was directed towards particular facial features during training.
The therapist sat down with the patient and asked them to concentrate on one part of the face (“can you focus on this part of the picture, just this part, not the whole thing”): part-based processing that informed the structural encoding of a face.

35
Q

Powell et al. graph

A

The graph is taken from the actual paper; look at the difference between the different coloured bars: exposure only (participants saw the material but weren’t actually trained on the tasks) versus the rehabilitation groups. There is an improvement in correct responses: people trained on these specific tasks, which were informed by our models of face perception, actually showed an improvement in subsequent behaviour. This is an amazing finding, and one that is rare in mainstream Cog N, but in its essence it is pure Cog N: Powell et al. took the tools, designed the therapy, and improved people’s behaviour through an understanding of these models.

36
Q

Brain Break 3

A

It is very difficult to remediate acquired face processing deficits; however, some success has been seen in the work by Powell et al. (2008).
They designed studies based closely on existing models of face perception.
However, this was a proof-of-principle study and as such no follow-up work was carried out: they just wanted to show that it could be done. What was needed was long-term interventions, integration-back-into-the-community studies, and the like, but they did not do this.