lecture 7 - how the brain creates perception, illusions and hallucinations Flashcards

1
Q

illusions

A
  1. Ames’ Window - linked to ‘perceptual constancy’
  2. The moon illusion - linked to ‘perceptual constancy’
  3. Lotto’s Cubes - linked to ‘perceptual constancy’
  4. Van Lier’s Stars - linked to after-effects in general
  5. Ugly faces - linked to after-effects in general

2
Q

Ames’ window - diagram in notes

A
3
Q

what is perception

A

Perception isn’t seeing an “image” (a “picture”).
Perception is interpretation:
scene interpretation, object recognition, face recognition, word recognition.

It’s knowing the difference between meaningless stuff and meaningful stuff,
and it happens automatically (once your brain has learnt).

Therefore perception is the product of learning - for a baby, there is as yet no difference between meaningful and meaningless stuff.

4
Q

illusions

A

Illusions are the brain’s interpretation of confusing sensory signals.
They reveal the clever things our brains do during the everyday perception we take for granted. They reveal the inner workings of the system we normally have no awareness of.
They are rare in real life.
INTERESTING QUESTION: Consider when these illusions would start ‘working’ for a baby.
KEY CONCEPT 2: Illusions
Illusions are not just proof we are fallible…
Think also about cognitive examples where something well learnt interferes in unusual tasks (e.g. the Stroop effect - would that happen for young children?)
We need to keep clear in our discussion the difference between
- the ‘stimulus’ of the illusion (which we might be misinterpreting in a narrow sense)
and
- ‘what the object would most likely be in real life’ (which our perception is designed to see and nearly always gets right)

5
Q

the moon illusion

A

The moon looks bigger near the horizon, but its actual size (and the size of its image on the retina) has not changed.

6
Q

perceived size

A

Perceived size depends on perceived distance and on the size of nearby objects.
Your brain has learnt that small stimuli far away are actually large objects, while big stimuli close by are actually small objects.
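
A minimal sketch of that size-distance scaling (the projection geometry is standard; the function names and example numbers are my own illustrative assumptions, not lecture material):

```python
# Sketch of Emmert's-law-style size scaling. Names and numbers are illustrative.
import math

def retinal_angle(object_size_m, distance_m):
    """Visual angle (radians) subtended by an object at a given distance."""
    return 2 * math.atan(object_size_m / (2 * distance_m))

def perceived_size(angle_rad, perceived_distance_m):
    """Invert the projection: the brain 'scales up' small retinal images
    that it believes come from far away."""
    return 2 * perceived_distance_m * math.tan(angle_rad / 2)

# A 2 m object at 100 m and a 2 cm object at 1 m cast near-identical retinal images...
far_angle = retinal_angle(2.0, 100.0)
near_angle = retinal_angle(0.02, 1.0)
print(f"{far_angle:.4f} rad vs {near_angle:.4f} rad")   # both ~0.02 rad

# ...but given correctly perceived distances, perceived sizes differ hugely.
print(perceived_size(far_angle, 100.0))   # ~2.0 m: a large, distant object
print(perceived_size(near_angle, 1.0))    # ~0.02 m: a small, nearby object
```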

7
Q

depth illusion

A

Image in notes. The two figures are actually the same size. What about the facial expressions? They are identical, yet we see fear on the part of the pursued and rage on the part of the pursuer.

8
Q

What have we learnt so far:

A

Basic features such as size and shape are perceived (i.e. automatically interpreted) relative to their context and your lifetime of experience.
What about colour?
That’s even more basic, isn’t it? Determined by the receptors in our retinae?

9
Q

Lotto’s cubes

A

Colour also depends on perceived lighting, shape and shadow.
So colour (a ‘basic’ sensory feature) actually takes into account the environment, what the lighting is likely to be, shape and shadow.

10
Q

colour vision

A

An example of how basic signals combine with adaptation and interpretation.
The raw signal depends on cones in the retina, and the types of cone we have are entirely determined by our genetics (look up colour blindness if you are interested).
The firing rate of these cones tells us what colour an object is.
But is it that simple?

11
Q

Perception comes from COMPARING the activity of different neurons

A

In our eyes there are three types of ‘cone’ light receptor. They are activated by different wavelengths of visible light: long wavelengths (L cone), middle wavelengths (M cone), short wavelengths (S cone).
Colour is indicated by the comparison of activity between these types of cone. But is it that simple?

Adapted to red - the L cone becomes less sensitive relative to the others, leading to a greenish after-effect.
Adapted to green - the M cone becomes less sensitive relative to the others, so the after-effect is pinkish.
Adapted to yellow - the L and M cones become less sensitive relative to the S cone, so the after-effect is bluish.
Adapted to blue - the S cone becomes less sensitive relative to the others, so the after-effect is yellowish.

Graphs in notes.
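
A minimal sketch of colour-by-comparison with adaptation. The per-cone gain numbers and the crude opponent code are illustrative assumptions, not the lecture’s actual model:

```python
# Sketch: colour as a comparison of L/M/S cone activity, with adaptation
# modelled as a per-cone sensitivity (gain) reduction. Numbers are illustrative.

def cone_response(stimulus, gains):
    """Raw cone drive scaled by each cone's current sensitivity."""
    return tuple(s * g for s, g in zip(stimulus, gains))

def opponent_code(lms):
    """Crude opponent comparison: red-green = L - M, blue-yellow = (L+M)/2 - S."""
    L, M, S = lms
    return {"red_green": L - M, "blue_yellow": (L + M) / 2 - S}

white = (1.0, 1.0, 1.0)      # a neutral stimulus drives all three cones equally
fresh = (1.0, 1.0, 1.0)      # unadapted sensitivities
print(opponent_code(cone_response(white, fresh)))
# {'red_green': 0.0, 'blue_yellow': 0.0} -> neutral, no colour signalled

# After staring at red, the L cone has become less sensitive than the others:
adapted_to_red = (0.7, 1.0, 1.0)
print(opponent_code(cone_response(white, adapted_to_red)))
# {'red_green': -0.3, 'blue_yellow': -0.15} -> the comparison now tips towards
# green: a greenish after-effect, even though the stimulus is still white
```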

12
Q

Van Lier’s stars

A
13
Q

simultaneous contrast - for lightness

A

illusion in notes

14
Q

orientation adaptation and after-effect

A

2 - Perceptual filling in… so perception is created within the brain (as part of interpretation).

3 - Some signals are ignored: e.g. after-images. Perception is affected by context (e.g. lines …).

Why? Because an after-effect is an ambiguous signal - the brain does not always know whether to believe it.
In the real world, faint colours that represent real objects are normally bounded by edges, whereas faint colours that are irrelevant (because they are due to lighting, shadow or after-effects) tend not to be bounded by edges.
So the brain has learnt to ‘believe’ colours that are bounded by edges. This is why we don’t see after-effects all the time, every day.

15
Q

what have we learnt so far?

A

Basic features such as size, shape and colour are perceived (i.e. automatically interpreted) relative to their context and your lifetime of experience.
Perception is based on comparison - locally, with context, with what you’ve seen before.
This achieves some computationally complex interpretations, taking into account distance, shapes, lighting and shadows, and gaps in the signal (filling in).
Thus, perception is a creative process based on what’s already in your brain as well as the incoming (sometimes ambiguous) signals…
This creativity means it is only a small step to dreams and hallucinations, which are created in your perceptual systems.

16
Q

recognising faces

A

Faces are perceived (i.e. automatically interpreted) relative to their context and your lifetime of experience.
Face perception is based on comparison - locally, with context, with what you’ve seen before.
This achieves some computationally complex interpretations; face recognition is a very difficult processing problem (computers are bad at it, and so are we sometimes).
Thus, face perception is a creative process based on what’s already in your brain as well as the incoming (sometimes ambiguous) signals…

17
Q

face perception requires learnt expertise

A

“The Thatcher illusion”

Expertise with configuration - but only the right way up, which is how most of the learning occurred.

Shows us … perception is … the product of learning.

Flashed face distortion effect - neurons see faces as exaggeratedly different.

18
Q

face perception is based on comparison

A

– locally, with context,
with what you’ve seen before.

19
Q

We’re all hallucinating all the time; when we agree about our hallucinations, we call it reality - Anil Seth (2021), Being You

A

Mental imagery - not confused with external reality (How? Why?). Experienced by most people (but not all: aphantasia).

Hallucinations - often accepted as externally real. Normally considered rare (but on a continuum? common in the right circumstances?).

Synaesthesia - can be vivid and externalised (but understood as unreal). Evoked rather than spontaneous (role of learning?).

In all cases, people often express surprise on discovering that others don’t experience the world as they do.

20
Q

what is perception

A

Perception isn’t seeing an “image”.
- e.g. we do not see the physical amount of ‘colour’ entering the eye from each location, and even after adaptation, we do not see raw colour information provided by the cones
Perception is interpretation
e.g. it integrates information from edges and colour, and tries to interpret colour appropriately using shadows and shapes
Perception is complex (but normally very clever and automatic for us)
e.g. we are not aware that our colour vision is doing all this for us, and because it is automatic, it’s very hard to get colours exactly right when you try to be an artist.
Perception is learnt
Some basic things are innate or learnt very young, but the majority of perception that we take for granted is learnt in the first few years of life. e.g. We take the ability to match shapes and colours for granted, but babies take a surprisingly long time to do it well (they can discriminate between colours from early on, but that’s not the same as perceiving them as adults do, and being able to match them between objects etc).

* Brain receives fragments of info from approx 1 million axons in each of the optic nerves and then combines and organises these fragments into the perception of a scene  - objects having different forms, colours and textures, residing at different locations in three-dimensional space. 
* When our bodies or our eyes move, exposing the photoreceptors to entirely new patterns of visual information, our perception of the scene before us does not change. We see a stable world because the brain keeps track of our own movements and those of our eyes and compensates for the constantly changing patterns of neural firing that these movements cause.
* Perception is the process by which we recognise what is represented by the information provided by our sense organs. This process gives unity and coherence to this input.
* Perception is a rapid, automatic and unconscious process.
* Occasionally we do see something ambiguous and must reflect about what it might be or gather further evidence to determine what it is, but this situation is more problem-solving than perception.
* If we look at a scene carefully, we can describe the elementary sensations that are present, but we don't become aware of the elements before we perceive the objects and the background of which they are a part.
* Our awareness of the process of visual perception comes only after it is complete; we are presented with a finished product, not the details of the process. We can also accurately judge objects’ relative locations in space and their movements.
21
Q

perception of form - figure and ground

A
  • Most of what we see can be classified as either object or background. Objects are things having particular shapes and particular locations in space. Backgrounds are in essence formless and serve mostly to help us judge the location of objects we see in front of them.
  • Psychologists use the terms figure and ground to label an object and its background, respectively. The classification of an item as a figure or as a part of the background is not an intrinsic property of the item. Rather, it depends on the behaviour of the observer.
  • If you are watching some birds fly overhead, they are figures and the blue sky and the clouds behind them are part of the background. If, instead, you are watching the clouds move, then the birds become background. If you are looking at a picture hanging on a wall, it is an object. Sometimes, we receive ambiguous clues about what is object and what is background.
    One of the most important aspects of form perception is the existence of a boundary. If the visual field contains a sharp and distinct change in brightness, colour or texture, we perceive an edge. If this edge forms a continuous boundary, we will probably perceive the space enclosed by the boundary as a figure.
22
Q

Organisation of elements - principles of Gestalt

A
  • Most figures are defined by a boundary. But the presence of a boundary is not necessary for the perception of form.
  • When small elements are arranged in groups, we tend to perceive them as larger figures.
  • Illusory contours - lines that do not physically exist but are still perceived.
  • In the early twentieth century, a group of psychologists, Max Wertheimer (1880–1943), Wolfgang Köhler (1887–1967) and Kurt Koffka (1886–1941), devised a theory of perception called Gestalt psychology (see Chapter 1); Gestalt is the German word for ‘form’. They maintained that the task of perception was to recognise objects in the environment according to the organisation of their elements. They argued that in perception the whole is more than the sum of its parts. Because of the characteristics of the visual system of the brain, visual perception cannot be understood simply by analysing the scene into its elements. Instead, what we see depends on the relations of these elements to one another (Wertheimer, 1912).
  • Elements of a visual scene can combine in various ways to produce different forms. Gestalt psychologists have observed that several principles of grouping can predict the combination of these elements.
  • The fact that our visual system groups and combines elements is useful because we can then perceive forms even if they are fuzzy and incomplete.
  • The real world presents us with objects partly obscured by other objects and with backgrounds that are the same colour as parts of the objects in front of them.
  • The laws of grouping discovered by Gestalt psychologists describe the ability to distinguish a figure from its background.
  • The adjacency/proximity principle states that elements that are closest together will be perceived as belonging together (Wertheimer, 1912) (see the sketch after this list).
  • The similarity principle states that elements that look similar will be perceived as part of the same form.
  • Good continuation is another Gestalt principle and refers to predictability or simplicity.
  • Often, one object partially hides another, but an incomplete image is perceived. The law of closure states that our visual system often supplies missing information and ‘closes’ the outline of an incomplete figure.
    The final Gestalt principle of organisation relies on movement. The principle of common fate states that elements that move in the same direction will be perceived as belonging together and forming a figure. In the forest, an animal is camouflaged if its surface is covered with the same elements found in the background – spots of brown, tan and green – because its boundary is obscured. There is no basis for grouping the elements on the animal. If the animal is stationary, it remains well hidden. However, once it moves, the elements on its surface will move together, and the animal’s form will quickly be perceived.
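
A minimal sketch of the proximity principle as distance-based grouping; the dot positions, the threshold and the greedy grouping rule are all illustrative assumptions:

```python
# Sketch: the proximity principle as simple distance-based grouping.
import math

def group_by_proximity(points, threshold):
    """Greedily place each point in the first group containing a near neighbour."""
    groups = []
    for p in points:
        for g in groups:
            if any(math.dist(p, q) <= threshold for q in g):
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

# Two tight clusters of dots separated by a large gap:
dots = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
print(len(group_by_proximity(dots, threshold=2.0)))  # 2 -> perceived as two figures
```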
23
Q

Models of pattern perception - templates and prototypes

A
  • One explanation for our ability to recognise shapes of objects is that as we gain experience looking at things, we acquire templates, which are special kinds of visual memories stored by the visual system. A template is a type of pattern used to manufacture a series of objects (Selfridge and Neisser, 1960). When a particular pattern of visual stimulation is encountered, the visual system searches through its set of templates and compares each of them with the pattern provided by the stimulus. If it finds a match, it knows that the pattern is a familiar one. Connections between the appropriate template and memories in other parts of the brain could provide the name of the object and other information about it, such as its function, when it was seen before, and so forth.
  • The template model of pattern recognition has the virtue of simplicity. However, it is unlikely that it could work because the visual system would have to store an unreasonably large number of templates. Despite the fact that you may look at your hand and watch your fingers wiggling about, you continue to recognise the pattern as belonging to your hand. How many different templates would your visual memory have to contain just to recognise a hand?
    A more flexible model of pattern perception suggests that patterns of visual stimulation are compared with prototypes rather than templates. Prototypes (Greek for ‘original model’) are idealised patterns of a particular shape; they resemble templates but are used in a much more flexible way. The visual system does not look for exact matches between the pattern being perceived and the memories of shapes of objects but accepts a degree of disparity; for instance, it accepts the various patterns produced when we look at a particular object from different viewpoints.
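
A minimal sketch of the difference, assuming toy 3x3 binary patterns and an arbitrary tolerance (illustrative only, not Selfridge and Neisser’s actual mechanism):

```python
# Sketch: a template demands an (almost) exact match; a prototype accepts
# a degree of disparity. Patterns are toy 3x3 binary shapes.

def mismatch(pattern, stored):
    """Count of cells where the input and the stored pattern disagree."""
    return sum(p != s for p, s in zip(pattern, stored))

STORED_T = (1, 1, 1,
            0, 1, 0,
            0, 1, 0)

def template_match(pattern):
    return mismatch(pattern, STORED_T) == 0          # exact match only

def prototype_match(pattern, tolerance=2):
    return mismatch(pattern, STORED_T) <= tolerance  # graded similarity

slightly_odd_T = (1, 1, 1,
                  0, 1, 0,
                  1, 1, 0)   # one stray cell

print(template_match(slightly_odd_T))   # False: the rigid template rejects it
print(prototype_match(slightly_odd_T))  # True: the flexible prototype accepts it
```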
24
Q

feature detection models

A
  • Some psychologists suggest that the visual system encodes images of familiar patterns in terms of distinctive features – collections of important physical features that specify particular items (Selfridge, 1959).
  • We are better at distinguishing some stimuli from others. We are better at searching for the letter A among a series of Bs than we are searching for the letter B among a series of As; we are better at finding orange-coloured objects in a series of red ones than vice versa; we find it easier to find a tilted item in a series of vertical items than finding a vertical item in a series of tilted ones.
  • Similarly, we are better at finding a mobile object in a series of stationary ones than a stationary one in a series of mobile ones. We can detect bumps in a display of bumpy and flat surfaces better than we can the absence of bumps, and we are better at finding a single stimulus in an array of different stimuli when there are many more different stimuli. It appears, then, that some stimuli have more distinctive features than others and this enhances discrimination.
  • An experiment by Neisser (1964) supports the hypothesis that perception involves analysis of distinctive features. Figure 6.10 shows one of the tasks he asked people to do. The figure shows two columns of letters. The task is to scan through them until you find the letter Z, which occurs once in each column.
    You probably found the letter in the left column much faster than you did the one in the right column. Why? The letters in the left column share few features with those found in the letter Z, so the Z stands out from the others. In contrast, the letters in the right column have many features in common with the target letter, and thus the Z is ‘camouflaged’. (A toy version of this idea is sketched after this list.)
  • The distinctive-features model appears to be a reasonable explanation for the perception of letters, but what about more natural stimuli, which we encounter in places other than the written page?
  • Biederman (1987, 1990) suggests a model of pattern recognition that combines some aspects of prototypes and distinctive features. He suggests that the shapes of objects that we encounter can be constructed from a set of 36 different shapes that he refers to as geons. Biederman suggests, the visual system recognises objects by identifying the particular sets and arrangements of geons that they contain.
  • Even if Biederman is correct that our ability to perceive categories of common objects involves recognition of geons, it seems unlikely that the geons are involved in perception of particular objects. For example, it is difficult to imagine how we could perceive faces of different people as assemblies of different sets of geons. The geon hypothesis appears to work best for the recognition of prototypes of generic categories: telephones or torches in general rather than the telephone on your desk or the torch a friend lent you.
    Biederman points out that particular features of figures – cusps and joints formed by the ends of line segments – are of critical importance in recognising drawings of objects, presumably because the presence of these joints enables the viewer to recognise the constituent geons. Figure 6.12 shows two sets of degraded images of drawings of five common objects. One set, (a), shows the locations of cusps and joints; the other, (b), does not. Biederman (1990) observed that people found the items with cusps and joints much easier to recognise.
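
A toy sketch of the feature-overlap idea behind Neisser’s result; the per-letter feature inventory below is a crude assumption of mine, not a published feature set:

```python
# Sketch: search difficulty as feature overlap between target and distractors.

FEATURES = {
    "Z": {"horizontal", "diagonal"},
    "O": {"curve"},                    # round letters share little with Z
    "G": {"curve", "horizontal"},
    "N": {"vertical", "diagonal"},     # angular letters share more with Z
    "X": {"diagonal"},
}

def shared_features(target, distractor):
    """How many of the target's distinctive features a distractor also has."""
    return len(FEATURES[target] & FEATURES[distractor])

round_column = ["O", "G"]
angular_column = ["N", "X"]
print(sum(shared_features("Z", d) for d in round_column))    # 1 -> Z 'pops out'
print(sum(shared_features("Z", d) for d in angular_column))  # 2 -> Z is camouflaged
```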
25
Q

Top-down processing - the role of context

A
  • We often perceive objects under conditions that are less than optimum; the object is in a shadow, camouflaged against a similar background or obscured by fog. Nevertheless, we usually manage to recognise the item correctly. We are often helped in our endeavour by the context in which we see the object.
  • Palmer (1975b) showed that even more general forms of context can aid in the perception of objects. He first showed his participants familiar scenes, such as a kitchen. Next, he used a tachistoscope to show them drawings of individual items and asked the participants to identify them.
  • A tachistoscope can present visual stimuli very briefly so that they are difficult to perceive (nowadays we would use a computer to perform the same function). Sometimes, participants saw an object that was appropriate to the scene, such as a loaf of bread. At other times, they saw an inappropriate but similarly shaped object, such as a letterbox
  • Palmer found that when the objects fitted the context that had been set by the scene, participants correctly identified about 84 per cent of them. But when they did not, performance fell to about 50 per cent. Performance was intermediate in the no-context control condition, under which subjects did not first see a scene. Thus, compared with the no-context control condition, an appropriate context facilitated recognition and an inappropriate one interfered with it.
  • The context effects demonstrated by experiments such as Palmer’s are not simply examples of guessing. That is, people do not think to themselves, ‘Let’s see, that shape could be either a letterbox or a loaf of bread. I saw a picture of a kitchen, so I suppose it’s a loaf of bread.’ The process is rapid, unconscious and automatic; thus, it belongs to the category of perception rather than to problem-solving, which is much slower and more deliberate. Somehow, seeing a kitchen scene sensitises the neural circuits responsible for the perception of loaves of bread and other items we have previously seen in that context.
  • Psychologists distinguish between two categories of information-processing models of pattern recognition: bottom-up processing and top-down processing. In bottom-up processing, also called data-driven processing, the perception is constructed out of the elements – the bits and pieces – of the stimulus, beginning with the image that falls on the retina. The information is processed by successive levels of the visual system until the highest levels (the ‘top’ of the system) are reached, and the object is perceived.
  • Top-down processing refers to the use of contextual information – to the use of the ‘big picture’. Presumably, once the kitchen scene is perceived, information is sent from the ‘top’ of the system down through lower levels. This information excites neural circuits responsible for perceiving those objects normally found in kitchens and inhibits others. Then, when the subject sees a drawing of a loaf of bread, information starts coming up through the successive levels of the system and finds the appropriate circuits already warmed up, so to speak.
    In most cases, perception consists of a combination of top-down and bottom-up processing. Figure 6.15 shows several examples of objects that can be recognised only by a combination of both forms of processing. Our knowledge of the configurations of letters in words provides us with the contexts that permit us to organise the flow of information from the bottom up.
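
A minimal sketch of top-down context combining with bottom-up evidence, loosely in the spirit of Palmer’s kitchen experiment; the objects, priors and numbers are all illustrative assumptions:

```python
# Sketch: context as a multiplicative prior over bottom-up sensory evidence.

def recognise(bottom_up, context_prior):
    """Combine sensory evidence with contextual expectation and normalise."""
    combined = {obj: bottom_up[obj] * context_prior.get(obj, 1.0)
                for obj in bottom_up}
    total = sum(combined.values())
    return {obj: score / total for obj, score in combined.items()}

# An ambiguous shape: could be a loaf of bread or a letterbox.
evidence = {"loaf": 0.5, "letterbox": 0.5}

kitchen_prior = {"loaf": 3.0, "letterbox": 0.5}  # kitchen 'warms up' bread circuits
street_prior = {"loaf": 0.5, "letterbox": 3.0}

print(recognise(evidence, kitchen_prior))  # loaf wins in a kitchen context
print(recognise(evidence, street_prior))   # letterbox wins in a street context
```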
26
Q

Direct perception - Gibson’s affordances

A
  • We saw in an earlier section on cross-cultural differences that context is important for visual perception. The psychologist J. J. Gibson took this notion a step further.
  • Over a period of 35 years, Gibson proposed a theory of perception which argued that perception was direct and did not depend on cognitive processes to bring together fragmented data (Gibson, 1950, 1966, 1979). Because of this, it is considered a direct theory of perception. Originally, Gibson was interested in distinguishing between unsuccessful and successful Second World War pilots. Some of the unsuccessful pilots were unable to land accurately and seemed unable to appreciate distance. However, Gibson found that even when these pilots were given training in depth perception – which may have remedied the problem – they continued to have difficulty.
  • According to Gibson, ‘perceiving is an act, not a response; an act of attention, not a triggered impression; an achievement, not a reflex’ (Gibson, 1979). Gibson’s view of perception was that classical optical science ignored the complexity of real events. For example, it would focus on the effects of trivial, basic or simple stimuli on perceptual response. Gibson abandoned the depth/space perception view of the world and, instead, suggested that our perception of surfaces was more important. Surfaces comprised ground (which we discussed earlier) and texture elements in surfaces that would be attached or detached. Attached features would include bumps and indentations in the surface, such as rocks or trees; detached features would include items such as animals (which are detached from the surface).
  • Given the complex world in which we live, we must be able to perceive not just simple stimuli but stimuli which mean something more to us. We must decide whether an object is throwable or graspable, whether a surface can be sat upon, and so on. We ask ourselves what can this object furnish us with, what does it afford us (Gibson, 1982)? These are the meanings that the environment has and Gibson called them affordances. Thus, Gibson highlighted the ecological nature of perception: we do not simply perceive simple stimuli, but these stimuli mean something more in a wider, more complex context. This was a radical departure in visual perception because it implied that the perception of object meaning is direct. Perception involves determining whether something is capable of being sat upon or is throwable.
  • However, the theory is not without its problems. Costall (1995), for example, suggests that some affordances may not be able to afford. Imagine the ground covered in frost and a frozen lake. According to Gibson, the ground afforded walking. However, although the frosty ground does, the frozen lake may not.
  • Similarly, although we might agree with Gibson that some surfaces are graspable or supporting, we might disagree quite reasonably with the notion that surfaces are edible, for example, that they afford eating. Our decision that something is edible appears to rely on more than direct perception of surfaces.
27
Q

face perception

A
  • One of the most important categories of visual stimuli we process is faces. The ability to recognise and identify faces is one of the most crucial social functions we perform. It helps us form relationships with people, spot faces in a crowd and provides us with non-verbal clues to another’s mental state. We use faces to determine a person’s trustworthiness and aggressiveness.
    We identify people better on the basis of their eyes than their mouth, and both are more important than the nose (Bruce et al, 1993), even when hairstyle, make-up and facial hair are removed or minimised. A three-dimensional image of a face – such as that seen in three-quarter profile – is better recognised than is a full-frontal photograph. Upright faces are better recognised and identified than are those upside down, but there is a curious phenomenon called the ‘Thatcher effect’, first described by the British psychologist, Peter Thompson (1980).
28
Q

sex of the face

A
  • We can usually discriminate between faces more quickly on the basis of their owners’ sex than their familiarity (Bruce et al, 1987). Men have larger noses and nasopharynxes, more prominent brows, a more sloping forehead and more deeply set eyes than do women (Enlow, 1982). Women have fuller cheeks and less facial hair, including eyebrows (Shepherd, 1989). Women have smaller noses, a more depressed bridge of the nose, a shorter upper lip, and larger eyes with darker shadows, especially young women (Liggett, 1974).
  • When facial features are presented in isolation, eyes are the most reliable indicator of sex, and the nose is the least reliable. With hair concealed, 96 per cent of participants were able to distinguish between faces based on sex (Burton et al, 1993). When individual facial features or pairs of features (such as brow and eyes, nose and mouth) were presented to participants, the features which afforded the best opportunity to make sex discriminations were (in this order): brow and eyes, brow alone, eyes alone, whole jaw, chin, nose and mouth, and mouth alone (Brown and Perrett, 1993). These findings suggest that all facial features carry some information about sex (except the nose) but suggest that it is difficult to find even one or two features which distinguish absolutely between men’s and women’s faces.
  • Women and girls are better able than men or boys to recognise faces, but they also remember more female faces than they do male ones (Herlitz and Loven, 2013). Women and girls are better at remembering faces of their own sex, a finding called an ‘own-gender bias’; the equivalent effect is not reported in men. A similar phenomenon, and a stronger one, is seen when the race of the face matches the race of the person doing the remembering (‘the own-race bias’).
29
Q

Attractiveness of the face 1

A
  • Each of us finds different faces attractive: some of us find faces friendlier than others, some meaner and others more sexually alluring. Although individual differences exist at this seemingly subjective level, studies have shown that some features of the face are generally regarded as more attractive than others.
  • Psychologists in the nineteenth century were interested in what makes a face attractive and constructed composites – averages of several different images – to produce a face which they believed was attractive (Galton, 1878; Stoddard, 1886). Recent work has provided a clearer account of what makes an attractive face; it has also helped to indicate which features of the face best allow us to remember a face, or which make a face distinctive.
  • Of the characteristics that seem to make faces attractive, three have been found to be most predictive (although not always). Facial symmetry, averageness and sexual dimorphism have been reported to influence perception of beauty in several studies.
  • Symmetry, for example, is considered to be an indicator of fitness and health. However, studies with large samples have found no relationship between facial symmetry and health, immune system function or socioeconomic status (Foo et al, 2017; Jones, 2018; Pound et al, 2014; Quinto-Sánchez et al, 2017).
  • Averageness has been found to be a predictor of attractiveness in many studies although, as Jones & Jaeger (2019) have noted this may be because distinctiveness is perceived as less attractive than averageness (rather than averageness being perceived as attractive). The association between averageness and health has also been questioned (Foo et al, 2017). Sexual dimorphism refers to the idea that we prefer faces that are stereotypically sex-based, that is, more masculine or more feminine.
  • Docherty et al (2020), for example, found that women preferred more masculine than more feminine men’s faces and, separately, that these judgements were unrelated to the women raters’ own attractiveness.
  • The more masculine or feminine the face, depending on the wearer, the more attractive they are. But, again, evidence is not consistent with studies finding that women do not prefer more masculine faces and that there is no association between the femininity of a woman’s face and her health and her immune system (which would be a key predictor of the feature as an evolutionary adaptation) (Jones, 2018; Scott & Fava, 2013).
  • In a series of three experiments, Jones & Jaeger did not find that feminised faces were preferred to unmanipulated faces and found that masculinised faces were not positively judged. They did find in two studies that averageness and dimorphism did predict attractiveness ratings, but effects were small.
  • A recent innovation in this type of research has been to allow participants to ‘manipulate’ their own faces so that they create a version of a face that they deem to be attractive.
  • Normally, in these experiments, participants are presented with facial stimuli that have been pre-manipulated by an experimenter.
    Ibánez-Berganza et al (2019), however, allowed participants to take a reference image and manipulate this in such a way that it reaches what, to them, was an example of an attractive face. In a sense, this provided a customised or boutique measure of attractiveness which might also reveal some general, universal characteristics. If the general characteristics are controversial, the specific features and proportions of these features deemed to be universally attractive have led to even further confusion. Using software called FACEXPLORE, Ibánez-Berganza et al found a high degree of subjectivity in the facial features and configurations that people found attractive (there were 95 volunteers).
30
Q

attractiveness of the face 2

A
  • The distinctiveness of the face – defined as the deviation from the norm – is unrelated to attractiveness (Bruce et al, 1994). Galton had hypothesised that averageness was attractiveness. That is, the more average looking the face, the more attractive it was likely to be. This hypothesis was tested and challenged by Perrett et al (1994), who compared the attractiveness ratings for average, attractive and highly attractive Caucasian female faces. They used special computer technology to construct an average composite of photographs of 60 female faces. The 15 faces rated as most attractive from the original 60 were then averaged. Finally, the attractiveness of this average was enhanced by 50 per cent to provide a ‘highly attractive’ composite.
  • Caucasian raters found the ‘attractive’ composite more attractive than the average composite and the highly attractive composite more attractive than the ‘attractive’ composite, thus disconfirming Galton’s hypothesis. Furthermore, when similar composites were made of Japanese women, the same results were obtained: both Caucasian and Japanese raters found the enhanced composite more attractive.
  • What distinguished an average face from an attractive one? The more attractive faces were those who had higher cheek bones, a thinner jaw and larger eyes relative to the size of the face. There was also a shorter distance between mouth and chin and between nose and mouth in the attractive faces.
  • Evolutionary psychologists argue that we are attracted to average faces because this behaviour evolved as a solution to attracting healthy mates – best to stick with what you know and can trust.
  • An alternative view is that we are simply attracted to the familiar – a well-known psychological phenomenon. If this were true, we should be attracted to average-looking stimuli that are non-faces too.
  • This is what Halberstadt and Rhodes (2000) found. They asked people to rate a selection of watches, birds and dogs for attractiveness and prototypicality (how typical they were of a category), or averageness. The researchers found that participants rated the average-looking stimuli as being the most attractive. One reason for this may be that we prefer averageness which ‘reflects a more general preference for familiar stimuli’.
    A directly gazing face is considered significantly more attractive than an indirectly gazing one and we are more likely to engage socially with people if they look at us directly.
    Strick et al (2008) paired novel objects – pictures of unknown peppermint brands – with an attractive or unattractive face which looked straight at the participant, or which averted its gaze. Objects paired with a directly gazing attractive face were more positively evaluated than were objects paired with an indirectly gazing attractive face or an unattractive face.
31
Q

familiarity of the face

A

Take a look at Figure 6.17 and before you come back to the text, sort these faces into different piles depending on how many people you can identify (not personally, just which faces you think are of the same person). Do that now.
Welcome back! How many different faces could you see? Three? Nine? 16?
The correct answer is two*.
This was an example of an exercise that participants in Jenkins et al’s (2011) study were asked to complete. If you knew the two people - and you are very unlikely to - the task should be relatively straightforward. If you did not know them, there is such a difference between these images of the same person that you might identify more than two. In Jenkins et al’s study, participants thought they recognised between 3 and 16 different people. Nine was the most common answer (Young & Burton, 2017). The reason why people inflate their identification is that the photos vary according to pose, lighting, expression and type of camera. This can have implications for the reliability of facial recognition by eyewitnesses.
*The correct selection
ABAAABABAB
AAAAABBBAB
BBBAAABBAA
BABAABBBBB

32
Q

Trustworthiness of the face

A
  • We often use faces to judge the trustworthiness or dominance of their owner.
  • Trustworthy faces are more likely to make us engage with their owners. Untrustworthy and dominant faces are responded to more slowly in reaction time tasks, and the slow response to each seems to be mediated by different brain regions (Getov et al, 2014).
  • More typical (‘average’) faces tend to be judged as more trustworthy than are less typical faces (Sofer et al, 2014). Having an untrustworthy face may even have serious consequences beyond those seen in social interaction.
  • Wilson and Rule (2015) asked 208 volunteers on MTurk to rate the trustworthiness of the faces of 742 convicted murderers from a Florida prison – half of the prisoners were on death row; half were serving life imprisonment. The untrustworthiness of the face predicted the type of prisoner; the more untrustworthy the face was perceived as being, the more likely it was that the prisoner was on death row.
    In a follow-up study, they examined the effect of facial untrustworthiness on the sentencing of those individuals who had been wrongly convicted and released. They found that the individuals who had been sentenced to death were perceived to have the least trustworthy faces.
33
Q

How does face processing develop?

A
  • Research with adults has found that there are regions in the brain which are selective for faces; they respond to faces but no other stimuli. These regions are the occipital face area and the fusiform face area in the occipital and temporal cortices (Powell et al, 2018).
  • In rhesus monkeys, which also appear to show a degree of face selectivity, the region is the superior temporal sulcus. Damage to these regions can produce face perception problems and stimulation can evoke illusions of faces (Schalk et al, 2017).
  • Studies with infants have also found activation in these areas when 4–6 month-olds perceive faces but not when they look at other stimuli (Deen et al, 2017).
  • Deen et al’s study was the first fMRI study of its kind with infants although previous fNIRS imaging studies had found similar results. The selectivity, however, appears to be stronger in adults than in infants (Powell et al, 2018) with evidence emerging at around five years of age and increasing through to adolescence.
  • Why should these regions – any region – be this stimulus-specific?
  • Powell et al argue that the imaging data show that these areas are predisposed to respond to this type of stimulus and repeated learning and experience strengthens this relationship. They also propose that three mechanisms might explain this specialism. One is that the extrastriate cortex is organised in such a way that it responds to the visual data that faces provide at the most basic level and that a part of this cortex may preferentially respond to face-like stimuli. They argue that one line of evidence which may support this is that interactions, particularly the mother’s, with infants are face to face and so the visual system of the infant is directed towards the shape and features of the face of the mother or caregiver.
    A second mechanism may involve an innate subcortical region (what the authors call an ‘innate subcortical face template’) and the third mechanism invokes the medial prefrontal cortex which is what enables us to derive social information from faces and the interaction with these faces. For example, as infants develop, their preference for looking at static faces wanes and their preference for dynamic faces increases; when a dynamic face becomes still, the infant will look away (Toda & Fogel, 1993). Infants also begin to respond to/orient towards their own names and to the social features of interaction during this period. The region which may mediate these responses is the medial PFC; there are connections between this region and the face-areas, but studies are not extensive.
34
Q

Theories of face perception

A
  • The mechanisms that allow us to perceive faces are considered to be different from those that allow us to perceive objects; face perception has been thought of as ‘special’ (Farah et al, 1998).
  • Face perception involves a number of operations. We can perceive general characteristics such as the colour, sex and age of a face; we can perceive whether a face expresses anger, sadness or joy; we can distinguish familiar from unfamiliar faces. What model of face processing can account for these operations?
  • Bruce and Young (1986) suggested that face processing is made up of three functions: perception of facial expression, perception of familiar faces and perception of unfamiliar faces.
  • Why does the model separate these functions? Bruce and Young reviewed extensive evidence which suggested that each of these functions is dependent on different cognitive abilities and that evidence from neuropsychology supported such a model.
  • Current views of face processing argue that we exploit three strategies when we recognise faces. One strategy involves recognising the features of a face, a second involves recognising the relations between features in a face (configural processing) and a third suggests that we recognise the whole face (the holistic approach) (Grüter et al, 2008). Configural processing works when faces are upright, but fails when they are inverted, à la the Thatcher effect. There is more on theories of face processing in a later section.
    Young and Bruce (2011) recently reflected on how well their model has endured. They note that the one factor they did not consider, and which should have been considered, was eye gaze.
35
Q

Depth perception

A
  • It requires that we perceive the distance of objects in the environment from us and from each other. We do this using two types of cue - binocular (two-eye) and monocular (one-eye).
    • Binocular cues arise because the visual fields of both eyes overlap; only animals that have eyes on the front of the head (e.g. cats) can obtain binocular cues.
    • Animals with eyes on the sides of their heads (e.g. fish) can only obtain monocular cues.
      One monocular cue involves movement and so must be experienced in the natural environment or in a motion picture. The other monocular cues can be represented in a drawing or a photograph. Most of these cues were originally discovered by artists and only later studied by psychologists (Zeki, 1998).
36
Q

Binocular cues

A
  • Convergence provides an important cue about distance. The eyes make conjugate movements so that both look at (converge on) the same point of the visual scene. If an object is very close to your face, your eyes are turned inwards. If it is farther away, they look more nearly straight ahead.
    • The eyes can be used like range finders (see the sketch after this list).
    • The brain controls the extraocular muscles, so it knows the angle between them, which is related to the distance between the object and the eyes. Convergence is most important for perceiving the distance of objects located close to us.
    • Retinal disparity also gives us information for the perception of distance.
    • Whenever your eyes are pointed towards a particular point, the images of objects at different distances will fall on different portions of the retina in each eye. The amount of disparity produced by the images of an object on the two retinas provides an important clue about its distance from us.
      The perception of depth resulting from retinal disparity is called stereopsis. A stereoscope is a device that shows two slightly different pictures, one for each eye. The pictures are taken by a camera equipped with two lenses, located a few inches apart, just as our eyes are. When you look through a stereoscope, you see a three-dimensional image.
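
A minimal sketch of the range-finder geometry. The 6.3 cm interocular distance is a typical textbook figure; the function and example angles are illustrative assumptions:

```python
# Sketch: distance to a fixated point from the convergence (vergence) angle,
# by simple triangle geometry.
import math

INTEROCULAR_M = 0.063  # typical adult interocular distance

def distance_from_convergence(convergence_angle_rad):
    """Distance to the fixated point, given the angle between the two eyes' lines of sight."""
    return (INTEROCULAR_M / 2) / math.tan(convergence_angle_rad / 2)

near = distance_from_convergence(math.radians(10))  # strongly converged eyes
far = distance_from_convergence(math.radians(1))    # nearly parallel eyes
print(f"{near:.2f} m vs {far:.2f} m")  # ~0.36 m vs ~3.61 m

# Beyond a few metres the vergence angle barely changes, which is why
# convergence is mostly useful for objects close to us.
```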
37
Q

Monocular cues

A
  • Interposition is one of the most important sources of information about the relative distances of objects.
    • If one object is placed between us and another object so that the closer object partially obscures our view of the more distant one, we can immediately perceive which object is closer to us.
    • Interposition works best when we are familiar with the objects and know what their shapes look like.
    • The principle of good form affects our perception of the relative location of objects: we perceive the object having the simpler border as being closer.
    • Another important monocular distance cue is provided by our familiarity with the sizes of objects (see the sketch after this list). For example, if a car casts a very small image on our retinas, we will perceive it as being far away. Knowing how large cars are, our visual system can automatically compute the approximate distance from the size of the retinal image.
    • linear perspective: the tendency for parallel lines that recede from us to appear to converge at a single point
    • Texture, especially the texture of the ground, provides another cue we use to perceive the distance of objects sitting on the ground. A coarser texture looks closer, and a finer texture looks more distant. The earth’s atmosphere, which always contains a certain amount of haze, can also supply cues about the relative distance of objects or parts of the landscape. Parts of the landscape that are further away become less distinct because of haze in the air so haze is a monocular distance cue.
      Shading (patterns of light and shadow) can provide us with cues about the 3D shapes of objects - it tells us which parts of the object are closer and which are further away.
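
A minimal sketch of the familiar-size computation mentioned in the list above; the car length and example angles are illustrative assumptions:

```python
# Sketch: inverting the projection equation. If we know an object's real size,
# its visual angle gives its approximate distance.
import math

def distance_from_familiar_size(known_size_m, visual_angle_rad):
    """Distance implied by a known object size and its retinal (visual) angle."""
    return known_size_m / (2 * math.tan(visual_angle_rad / 2))

CAR_LENGTH_M = 4.5  # roughly how large we 'know' cars to be
print(distance_from_familiar_size(CAR_LENGTH_M, math.radians(10)))  # ~26 m
print(distance_from_familiar_size(CAR_LENGTH_M, math.radians(1)))   # ~258 m: tiny image -> far away
```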
38
Q

distance and location

A
  • When we are able to see the horizon, we perceive objects near it as being distant and those above or below it as being nearer to us. So elevation provides an important monocular depth cue.
    • Another important source of distance information depends on our own movement.
    • Head and body movements cause the images from the scene before us to change; the closer the object, the more it changes relative to the background. The information contained in this relative movement helps us to perceive distance.
      The changes in the relative locations of the objects provide cues concerning their distance from the observer. The phenomenon is known as motion parallax
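
A minimal sketch of motion parallax, assuming sideways observer motion and a small-angle approximation; the speeds and distances are illustrative:

```python
# Sketch: for lateral motion, the angular speed of a point's image is roughly
# observer_speed / distance, so near objects sweep across the retina faster.

def angular_speed_rad_per_s(observer_speed_m_s, distance_m):
    """Approximate image angular velocity for a point abeam of the observer."""
    return observer_speed_m_s / distance_m

speed = 1.5  # walking pace, m/s
for d in (2, 20, 200):
    print(f"object at {d:4d} m -> {angular_speed_rad_per_s(speed, d):.4f} rad/s")
# the nearby fence post races past; the distant hill barely moves
```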
39
Q

constancies of visual perception

A
  • An important characteristic of the visual environment is that it is almost always changing as we move, as objects move, and as lighting conditions change. However, despite the changing nature of the image the visual environment casts on our retinas, our perceptions remain remarkably constant.
40
Q

Visual perception across cultures

A

From birth onwards, we explore our environment with our eyes. The patterns of light and dark, colour and movement, produce changes in the visual system of the brain. There is evidence, however, that perception is not absolute: it varies across cultures. Ecological variables such as those associated with geography, cultural codes and education influence perception.
* The visual stimulation we receive, particularly during infancy, affects the development of our visual system. If the environment lacks certain features – certain visual patterns – then an organism might fail to recognise the significance of these features if it encounters them later in life (Blakemore and Mitchell, 1973). But this is not the only type of environment that can influence perception.
* There may also be differences in the cultural codes found in pictorial representations (Russell et al, 1997). Although artists have learned to represent all the monocular depth cues (except for those produced by movement) in their paintings, not all cues are represented in the traditional art of all cultures. For example, many cultures do not use linear perspective.
It is quite rare for a member of one culture to be totally unable to recognise a depiction as a depiction (Russell et al, 1997). However, Deregowski et al (1972) found that when the Me’en tribe of Ethiopia, a culture unfamiliar with pictures, were shown a series of pictures from a children’s colouring book, they would smell them, listen to the pages while flexing them, examine their texture but would ignore the actual pictures. They did recognise depictions of indigenous animals, suggesting that the familiarity of a pictorial depiction is important for recognition within cultures.

41
Q

There are other geographical influences on perception.

A

People who live in ‘carpentered worlds’, that is, worlds in which buildings are built from long, straight pieces of material that normally join each other at right angles, are more likely to be subject to the Müller–Lyer illusion - where two vertical lines are actually equal in length, but the one on the left appears to be longer.
Segall et al (1966) presented the Müller–Lyer illusion (and several others) to groups of subjects from Western and non-Western cultures. Most investigators believe that the Müller–Lyer illusion is a result of our experience with the angles formed by the intersection of walls, ceilings and floors (Redding and Hawley, 1993). In fact, Segall and his colleagues did find that people from ‘carpentered’ cultures were more susceptible to this illusion: experience with straight lines forming right angles appeared to affect people’s perception.
Although the famous Müller–Lyer illusion can be demonstrated in modalities apart from vision (Mancini et al, 2010), explanations for it have been based on an understanding of the visual system.
* For example, people with damage to the extrastriate visual cortex in the occipital lobe are unable to perceive the illusion, fMRI data show activation of the bilateral lateral occipital cortex and the superior parietal cortex, and MEG research indicates that activation is seen at two times – once between 85 and 130 ms after the onset of the image and then again at 195–220 ms in the ventral visual pathway in the right temporal cortex, parietal and frontal cortex (Mancini et al, 2011). The MEG data suggest that forming the representation of an object involves the lateral occipital and inferior temporal areas (Weidner et al, 2010).
There is a small amount of evidence to suggest that differences in visual perception exist between Western people and those from East Asia: Westerners tend to perceive objects more analytically and in a more focused way; East Asians are more likely to attend to the context in which objects appear (i.e. they perceive a scene ‘holistically’) (Choi and Nisbett, 1998; Chua et al, 2005).

42
Q

focal v context effect

A
  • In one study where American and Japanese participants were asked to describe an underwater scene, Americans were more likely to describe objects in the water, but the Japanese reported 60 per cent more information about the background environment (Masuda and Nisbett, 2001). In a different scenario, Americans were also able to identify an object (a tiger) more accurately than were the Japanese when it appeared against a different background from that in which it was originally seen. This focal v. context effect could occur due to the types of eye movement made by different cultures.
    • To test this hypothesis, Chua et al (2005) asked American and Chinese participants to view scenes in which objects appeared against complex backgrounds. The eye movement of participants as they viewed the object and scene were then tracked. Compared with the Chinese, Americans focused on specific objects and more quickly. The Chinese made more saccades – eye movements – to the background
    • The researchers suggest that this effect could partly be explained by socialisation. East Asians grow up and live in complex social networks in which paying attention to context is important (perhaps more important than focusing on individual objects or people); Westerners, however, are educated to value individuality and independence (and eye movement is, therefore, directed accordingly). This extends even to cultural products. An analysis of advertising and popular texts in Western and Asian (Korea, Japan and China)/Mexican cultures found that the latter were more individualistic and less collectivistic (Morling and Lamoreaux, 2008).
    • Similar cultural effects can occur when rating facial expressions. Participants in one experiment were asked to rate the degree of emotion shown in cartoons depicting happy, sad, angry or neutral facial expressions. These faces were surrounded by other people expressing the same or different emotion (Masuda et al, 2008). The surrounding stimuli influenced the ratings of Japanese participants significantly more than they did Westerners. This was evidenced by eye-tracking data. The Japanese spent more time looking at the surrounding stimuli than did the Westerners. The lack of self-absorption of Japanese participants was also seen in an experiment in which people completed a verbal fluidity task – where there was an opportunity to cheat – in front of a mirror (Heine et al, 2008). North Americans were more self-critical and less likely to cheat in front of the mirror; the Japanese participants were unaffected by the presence of a mirror.
    • Different nations and different cultures as well as groups within those nations and cultures can also produce art that can be as similar as it is different. Masuda et al (2008) analysed the artistic styles in a total of 365 Western and 218 Eastern landscapes and 286 Western and 151 Eastern portrait paintings. Eastern landscape art was more likely to place the horizon higher than was Western art, which created more space for field information. For portrait paintings, the size of the models was smaller in the Eastern sample; conversely, the Western sample was less likely to include more background.
    • In a second study, groups of American and Taiwanese, Korean, Japanese and Chinese students were asked to draw and photograph landscapes and portraits. The use of context was greater in both types of stimuli in the Eastern sample. It was more likely to draw the horizon in a high position and draw more objects. It was also more likely to use the zoom function to minimise the size of the model in portrait photographs and make the context larger. Finally, American and East Asians students were asked to rate their preference for portrait photographs where the model and the background varied. Japanese participants were significantly less likely to prefer narrow backgrounds and larger models.
    • The findings are consistent with those of other studies. Miyamoto et al (2006) took photographs of significant cultural institutions in the US and Japan. These included schools, post offices, hotels, and so on. The institutions in Japan featured more objects and were visually more complex.
      Why do these differences occur? Masuda et al (2008) cite Cohen et al’s insider/outsider view of how we organise information about the world (Cohen and Gunz, 2002; Cohen et al, 2007). The insider is dominant in the West; this person dwells on their own private experiences and sees the world from their point of view. The outsider views the world from the point of view of an outsider looking at the self. It seems as if these roles can change. For example, people who have been exposed to Japanese scenes for a few minutes notice more context than those who are exposed to American scenes (Miyamoto et al, 2006).
43
Q

Size–weight illusion

A
  • As well as auditory–visual illusions, there are also visual–kinetic illusions such as the size–weight illusion (SWI). The SWI is seen when the smaller of two objects – one large, one small, both equally weighted – is perceived as heavier than the larger (Charpentier, 1891).
    • The illusion persists even when people are told that the objects weigh the same; participants do not even have to lift the object – pushing objects produces the same illusion (Plaisier and Smeets, 2012; Buckingham, 2014). Repeated exposure to the task does not diminish the illusion.
    • One size–weight illusion that is manipulable involves golf balls. Ellis and Lederman (1998) found that when they made practice golf balls the same weight as normal golf balls (practice ones are lighter), golfers judged the newly adjusted practice balls to be heavier than the normal golf balls. The effect was not observed in non-golfers.
    • The SWI is also found when people play with dolls: people will judge the weights of female and male dolls differently, but this differential judgement disappears when people close their eyes (Dijker, 2008). When told that books were important, people judged them to be physically heavier than did those who were not told this (Schneider et al, 2011). In one study, imagining having a heavy heart was associated with judging objects as heavier than imagining having a light heart (Min and Choi, 2016). Finally, one study found that a cold coin placed on the forehead of a person lying down was more likely to be perceived as heavy than was a coin at room temperature – this is called the Silver Thaler illusion (Buckingham, 2014).
    • Why do illusions such as this occur? According to Buckingham, there are three possible explanations. One, the sensorimotor hypothesis, argues that the SWI exists because of the mismatch between the assumed weight of the object we are about to hold (based on past experience) and its actual weight. We are more likely to lift large objects with a greater degree of force than smaller ones, for example. If we lift a small one, expecting it to be lighter (and it isn't), we over-estimate the weight on the basis of this sensorimotor mismatch; forming the expectation involves the lateral occipital and inferior temporal areas (Weidner et al, 2010).
      A second, the bottom-up hypothesis, argues that participants confuse an object's density with its weight; a small object of the same weight as a larger one is denser, and so is judged heavier. A third, the top-down hypothesis, argues that prior experience determines our sensory response to the different objects. Flanagan et al (2008), for example, trained participants to repeatedly lift large objects that had lower mass (so were lighter than anticipated). A group of participants who had lifted such objects 1,000 times a day did not show the SWI, suggesting the importance of prior experience.
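      A minimal numerical sketch of the mismatch idea above, in Python: perceived heaviness is modelled as the actual weight plus a fraction of the prediction error (actual minus expected weight). The density prior, gain and object values are made-up assumptions for illustration, not figures from the studies cited.

      # Sketch of the expectation-mismatch account of the size-weight illusion.
      # All numbers are illustrative assumptions, not values from the literature.
      def expected_weight(volume_cm3, density_prior=0.5):
          """Expected weight (g) inferred from visible size via a learnt density prior."""
          return density_prior * volume_cm3

      def perceived_heaviness(actual_g, volume_cm3, gain=0.5):
          """Perceived heaviness = actual weight + gain * (actual - expected).
          A positive prediction error (heavier than expected) inflates the percept."""
          error = actual_g - expected_weight(volume_cm3)
          return actual_g + gain * error

      # Two objects of equal weight (400 g) but different volume:
      print(perceived_heaviness(400, volume_cm3=200))   # 550.0 - the small object feels heavier
      print(perceived_heaviness(400, volume_cm3=1200))  # 300.0 - the large object feels lighter

      On this toy model, repeated lifting (as in Flanagan et al's training study) would amount to updating the density prior until the prediction error – and hence the illusion – disappears.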
44
Q

Brightness constancy

A

People can judge the whiteness or greyness of an object very well, even if the level of illumination changes. If you look at a sheet of white paper either in bright sunlight or in shade, you will perceive it as being white, although the intensity of its image on your retina will vary. If you look at a sheet of grey paper in sunlight, it may in fact reflect more light to your eye than will a white paper located in the shade, but you will still see the white paper as white and the grey paper as grey. This phenomenon is known as brightness constancy.
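A toy illustration of the ratio logic that could support brightness constancy, in Python: if the luminance reaching the eye is reflectance multiplied by illumination, then dividing luminance by an estimate of the illumination recovers the paper's surface shade whatever the lighting. The numbers are hypothetical.

    # Brightness constancy as reflectance recovery: luminance = reflectance * illumination.
    # Values are illustrative only.
    def perceived_shade(luminance, estimated_illumination):
        return luminance / estimated_illumination

    white_in_shade = perceived_shade(luminance=90.0, estimated_illumination=100.0)  # 0.9
    grey_in_sun = perceived_shade(luminance=300.0, estimated_illumination=1000.0)   # 0.3

    # The grey paper sends more light to the eye (300 vs 90), yet it is still
    # perceived as darker, because the recovered reflectance (0.3 vs 0.9) is lower.
    print(white_in_shade, grey_in_sun)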

45
Q

Form constancy

A
  • When we approach an object or when it approaches us, we do not perceive it as getting larger. Even though the image of the object on the retina gets larger, we perceive this change as being due to a decrease in the distance between ourselves and the object. Our perception of the object's size remains relatively constant.
  • The unchanging perception of an object’s size and shape when it moves relative to us is called form constancy. Psychologists also refer to size constancy, but size is simply one aspect of form.
  • In the nineteenth century, Hermann von Helmholtz suggested that form constancy was achieved by unconscious inference, a mental computation of which we are unaware. We know the size and shape of a familiar object. Therefore, if the image it casts upon our retina is small, we perceive it as being far away; if the image is large, we perceive it as being close. In either case, we perceive the object itself as being the same size (a geometric sketch of this inference follows this list).
  • Form constancy also works for rotation.
  • The process just described works for familiar objects. However, we often see unfamiliar objects whose size we do not already know. If we are to perceive the size and shape of unfamiliar objects accurately, we must know something about their distance from us. An object that produces a large retinal image is perceived as big if it is far away and small if it is close.
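  A geometric sketch of the inference, in Python (the scenario and numbers are illustrative): an object's visual angle shrinks with distance, but combining the angle with perceived distance returns a constant size.

    import math

    # Size constancy as geometry: the retinal (visual) angle falls with distance,
    # but angle combined with perceived distance recovers the object's true size.
    def visual_angle_rad(size_m, distance_m):
        return 2 * math.atan(size_m / (2 * distance_m))

    def inferred_size_m(angle_rad, perceived_distance_m):
        return 2 * perceived_distance_m * math.tan(angle_rad / 2)

    person = 1.8  # metres tall
    for d in (2.0, 10.0, 50.0):
        angle = visual_angle_rad(person, d)         # the retinal image shrinks with distance...
        print(round(inferred_size_m(angle, d), 2))  # ...but the inferred size stays 1.8 m

  Note that if perceived distance is over-estimated while the visual angle stays fixed, the inferred size grows – which is one standard account of the moon illusion.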
46
Q

perception of motion

A
  • Detection of movement is one of the most primitive aspects of visual perception. This ability is seen even in animals whose visual systems do not obtain detailed images of the environment.
    Our visual system can detect more than the mere presence of movement. We can see what is moving in our environment and can detect the direction in which it is moving.
47
Q

Adaptation and long-term modification

A
  • One of the most important characteristics of all sensory systems is that they show adaptation and rebound effects.
  • For example, when you stare at a spot of colour, the adaptation of neurons in your visual system will produce a negative after-image if you shift your gaze to a neutral background
  • Motion, like other kinds of stimuli, can give rise to adaptation and after-effects.
    Tootell et al (1995) presented participants with a display showing a series of concentric rings moving outwards, like the ripples in a pond. When the rings suddenly stopped moving, participants had the impression of the opposite movement – that the rings were moving inwards. During this time, the experimenters scanned the participants' brains to measure their metabolic activity. The scans showed increased activity in the motion-sensitive region of the visual association cortex, which lasted as long as the illusion did. So, the neural circuits that give rise to this illusion appear to be located in the same region that responds to actual moving stimuli.
48
Q

Interpretation of a moving retinal image

A
  • The visual system must know about eye movements to compensate for them in interpreting the significance of moving images on the retina. A simple demonstration suggests the source of this information. Close your left eye and look slightly down and to the left. Gently press your finger against the outer corner of the upper eyelid of your right eye and make your right eye move a bit. The scene before you appears to be moving, even though you know better. This sensation of movement occurs because your finger – not your eye muscles – moved your eye. When your eye moves normally, perceptual mechanisms in your brain compensate for this movement. Even though the image on the retina moves, you perceive the environment as being stationary. However, if the image moves because the object itself moves or because you push your eye with your finger, you perceive movement (a schematic of this compensation follows this answer).
    In general, if two objects of different size are seen moving relative to each other, the smaller one is perceived as moving and the larger one as standing still. We perceive people at a distance moving against a stable background and flies moving against an unmoving wall. So, when an experimenter moves a frame that encloses a stationary dot, we tend to see the dot move, not the frame. This phenomenon is also encountered when we perceive the moon racing behind the clouds, even though we know that the clouds, not the moon, are moving.
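    A schematic of that compensation, in Python – a bare-bones sketch of what the wider literature calls an 'efference copy', not a model from the text: perceived world motion is the retinal image motion minus the motion predicted from the brain's own eye-movement command.

    # Efference-copy account of why the world looks stable during normal eye movements.
    # perceived motion = retinal image motion - motion predicted from the eye command
    def perceived_world_motion(retinal_motion_deg_s, efference_copy_deg_s):
        return retinal_motion_deg_s - efference_copy_deg_s

    # Normal eye movement: the brain issued the command, so the image shift is predicted away.
    print(perceived_world_motion(retinal_motion_deg_s=5.0, efference_copy_deg_s=5.0))  # 0.0 - world seems stable

    # Pressing the eye with a finger: the image moves but no command was issued.
    print(perceived_world_motion(retinal_motion_deg_s=5.0, efference_copy_deg_s=0.0))  # 5.0 - scene seems to move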
49
Q

Does language affect our understanding of spatial relations?

A
  • An intriguing interaction can occur when people are asked to describe directions, and more intriguing still is the community which has helped illuminate this anomaly and which is described by Deutscher (2010). The Guugu Yimithirr are a population of around 1,000 who dwell 30 miles north of Cooktown in northeastern Australia and have a particular claim to fame.
  • Australia’s Indigenous peoples are two distinct cultural groups made up of Aboriginal and Torres Strait Islander people. When Captain Cook disembarked there in 1770 and encountered a strange animal, he was told that it was a ‘kanguroo’. Later explorers were baffled, however, because none of the indigenous people had heard of such an animal and, by all accounts, thought they were being taught the English word for it. Fifty years later another naval explorer, Phillip King, arrived and was told that the bouncy animal was a ‘minnor’ or ‘meenuah’. What created this confusion? And which was correct? The answer came in 1971 when an anthropologist called John Haviland discovered that the Guugu Yimithirr described one type of kangaroo as gangurru. The name given by them to other types of kangaroo was a variant of what was told to King – the word meant ‘meat’ or ‘edible animal’. They distinguished between the two types.
  • But the Guugu Yimithirr also have an unusual way of constructing other expressions: spatial relations. When we give directions, we do so using one of two frames of reference. If someone wants to find out where the nearest coffee shop is, you would either say ‘After the newsagent, turn left and then, after the hairdresser’s, turn right’ or ‘After the newsagent, turn west, then head north and turn north east’. The first, the one most people use, is egocentric and the two axes of right and left depend on the orientation of the body. The second type depends on geographical coordinates (which is, objectively, more accurate but needs to be computed and is, therefore, less easy to use day to day).
  • The peculiarity of the Guugu Yimithirr is that they use this geographic form, not the egocentric form, to describe spatial relations. They have no words for left, right, in front of or behind when referring to object location. Instead, they use cardinal directions: north, south, east and west. This means that any direction is given in relation to the compass rather than the speaker's body. If a person described the movement of an actor on a television programme, the directions would depend on the orientation of the television (not the actor); if the television was moved, the directions given would change. If they read a book, a man might be described as being to the west of a woman; if the book was rotated, he would now be to the north of her. Even memories are recalled in this way. (A similar phenomenon is seen in the Tzeltal highland people of southeast Mexico – they describe directions in relation to downhill, uphill and across.) The Guugu Yimithirr do, however, understand the concepts of left and right in English.
    A final note about language and space. One study exploited the fact that different languages have different writing systems (Bergen and Lau, 2012). Mandarin Chinese is written left to right and top to bottom. In Taiwan, characters are traditionally written top to bottom and right to left. In Bergen and Lau's experiment, Mandarin Chinese, Taiwanese and English speakers were asked to arrange the stages of development of, for example, a frog from tadpole onwards; the stages were depicted on cards which the participants arranged. The experimenters found that the English speakers plotted time from left to right, as did the majority of Mandarin Chinese speakers. The Taiwanese participants, however, were just as likely to plot time top to bottom as left to right, and some depicted the stages going from right to left, suggesting that the way in which time is spatially represented can be influenced by the writing system.
50
Q

Brain mechanisms of visual perception

A
  • Although the eyes contain the photoreceptors that detect areas of different brightnesses and colours, perception takes place in the brain. The optic nerves send visual information to the thalamus, which relays the information to the primary visual cortex located in the occipital lobe at the back of the brain.
  • In turn, neurons in the primary visual cortex send visual information to two successive levels of the visual association cortex. The first level, located in the occipital lobe, surrounds the primary visual cortex. The second level is divided into two parts, one in the middle of the parietal lobe and one in the lower part of the temporal lobe.
  • Visual perception by the brain is often described as a hierarchy of information processing. According to this scheme, circuits of neurons analyse particular aspects of visual information and send the results of their analysis on to another circuit, which performs further analysis.
    At each step in the process, successively more complex features are analysed. Eventually, the process leads to the perception of the scene and of all the objects in it. The higher levels of the perceptual process interact with memories: the viewer recognises familiar objects and learns the appearance of new, unfamiliar ones. Deprivation of the visual system or damage to it during the early years of development can have significant consequences for visual function.

diagram in notes figure 6.29

51
Q

Primary visual cortex

A
  • Our knowledge about the characteristics of the earliest stages of visual analysis came originally from investigations of the activity of individual neurons in the thalamus and primary visual cortex. For example, Hubel and Wiesel inserted microelectrodes – extremely fine wires having microscopically sharp points – into various regions of the visual system of cats and monkeys to detect the action potentials produced by individual neurons (Hubel and Wiesel, 1977, 1979). The signals detected by the microelectrodes are electronically amplified and sent to a recording device so that they can be studied later.
  • After positioning a microelectrode close to a neuron, Hubel and Wiesel presented various stimuli on a large screen in front of the anaesthetised animal. The anaesthesia makes the animal unconscious but does not prevent neurons in the visual system from responding. The researchers moved a stimulus around on the screen until they located the point where it had the largest effect on the electrical activity of the neuron. Next, they presented the animal with stimuli of various shapes in order to learn which ones produced the greatest response from the neuron.
  • From their experiments, Hubel and Wiesel (1977, 1979) concluded that the geography of the visual field is retained in the primary visual cortex. That is, the surface of the retina is 'mapped' on the surface of the primary visual cortex. However, this map on the brain is distorted, with the largest amount of area given to the centre of the visual field. The map is like a mosaic. Each piece of the mosaic (usually called a module) consists of a small block of tissue containing approximately 150,000 neurons.
  • All of the neurons within a module receive information from the same small region of the retina. The primary visual cortex contains approximately 2,500 of these modules. Because each module in the visual cortex receives information from a small region of the retina, that means that it receives information from a small region of the visual field – the scene that the eye is viewing. If you looked at the scene before you through a straw, you would see the amount of information received by an individual module.
  • Hubel and Wiesel found that neural circuits within each module analysed various characteristics of their own particular part of the visual field, that is, of their receptive field. Some circuits detected the presence of lines passing through the region and signalled the orientation of these lines (i.e. the angle they made with respect to the horizon). Other circuits detected the thickness of these lines. Others detected movement and its direction. Still others detected colours.
    Because each module in the primary visual cortex receives information about only a restricted area of the visual field, the information must be combined somehow for perception to take place. This combination takes place in the visual association cortex. Some parts of the visual cortex, such as V1, V2 and V4, respond to different aspects of colour perception (Bartolomeo et al, 2013), whereas V4 and the lateral occipital cortex respond to the shape of an object (Ales et al, 2013).
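    The orientation-detecting circuits described above are often modelled as orientation-tuned filters. Below is a toy Python sketch of such a tuning curve (the Gaussian shape, bandwidth and values are illustrative assumptions, not Hubel and Wiesel's recordings):

    import math

    # Toy orientation tuning: a model V1 neuron responds most strongly when a
    # line's orientation matches its preferred orientation, falling off smoothly.
    def response(line_deg, preferred_deg, bandwidth_deg=30.0):
        # The orientation of a line is circular over 180 degrees (0 deg = 180 deg).
        diff = (line_deg - preferred_deg + 90) % 180 - 90
        return math.exp(-(diff / bandwidth_deg) ** 2)

    # A 'horizontal detector' (preferred orientation 0 deg) probed with test lines:
    for line in (0, 30, 60, 90):
        print(line, round(response(line, preferred_deg=0.0), 3))
    # Prints 1.0 at 0 deg, falling to ~0 for a vertical (90 deg) line.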
52
Q

Visual association cortex

A
  • The first level of the visual association cortex, which surrounds the primary visual cortex, contains several subdivisions, each of which contains a map of the visual scene. Each subdivision receives information from different types of neural circuit within the modules of the primary visual cortex.
  • One subdivision receives information about the orientation and widths of lines and edges and is involved in perception of shapes.
    Another subdivision receives information about movement and keeps track of the relative movements of objects (and may help compensate for movements of the eyes as we scan the scene in front of us). Yet another subdivision receives information concerning colour (Zeki, 1993; Milner, 1998). You can see these subdivisions in Figure 6.30 in notes
  • The two regions of the second level of the visual association cortex put together the information gathered and processed by the various subdivisions of the first level. Information about shape, movement and colour is combined in the visual association cortex in the lower part of the temporal lobe. Three-dimensional form perception takes place here.
    The visual association cortex in the parietal lobe is responsible for perception of the location of objects. It integrates information from the first level of the visual association cortex with information from the motor system and the body senses about movements of the eyes, head and body
53
Q

The ‘special’ case of faces - evidence of neuroimaging

A
  • You saw in an earlier section that faces are thought to be special stimuli in visual perception. Their specialness has been enhanced by evidence that there are specific brain regions that appear to respond selectively to them. For example, the perception of unfamiliar faces recruits a specific set of brain areas in the occipital and temporal lobes; these include the fusiform gyrus, the inferior occipital gyrus (IOG) and the superior temporal sulcus (STS) (Natu and O'Toole, 2011), which you can see in the brain scans in Figure 6.31.
  • There is also evidence that the amygdala responds selectively to faces and contains neurons that are thought to be particularly responsive to faces (Todorov et al, 2013).
  • One study found that there were 25 regions of the brain which responded selectively to faces and these could be reduced to three clusters based on the connections between them (Zhen et al, 2013).
  • The IOG was the key component. When the study examined the brain's response to objects, connectivity between the IOG and the rest of the brain was reduced. Two specifically face-selective regions are the middle fusiform gyrus, called the fusiform face area (FFA), and the occipital face area.
  • Neuroimaging studies have shown that the face-specific effects in the STS are not consistent and may depend on whether the stimulus is dynamic or static. If the stimulus is moving, activation here is more consistent (Fox et al, 2009b). There is little evidence for connectivity between these two areas – the STS and FFA – which suggests that they undertake different independent roles in face recognition (Iidaka, 2013).
  • Haxby et al (2000) have proposed that the brain’s involvement in face processing can be explained by a distributed neural model. Specifically, they suggest that first, there are core areas dedicated to face processing specifically, and these areas are the ones described above. Second, there are regions which process the invariant features of the face – such as the position of the eyes, nose, mouth and so on – and these are the fusiform gyrus and the IOG. Features of the face which can change, such as its expression or gaze, are processed by the posterior STS
  • They suggest that connections between the lateral fusiform gyrus and the anterior temporal lobe mediate our ability to code personal identity in a face, as well as the name associated with it and the biography of the person. The superior temporal sulcus is connected to the intra-parietal sulcus, and this region allows us to direct our attention to faces. Other regions, such as the amygdala, insula and limbic system, mediate the ability to extract emotion from a face.
  • The recognition of famous faces appears to recruit the fusiform gyrus and the anterior or middle temporal cortex with personally familiar faces recruiting even more areas for reasons described below. The FFA adapts fairly quickly to repeated presentations, so activation becomes less when we see the same face over and over again. However, this activation re-starts when our viewpoint of the face changes (Andrews and Ewbank, 2004; Ewbank and Andrews, 2008).
  • Processing familiar faces is much more complex than processing an unfamiliar face or a famous one because the emotional and autobiographical information attributed to such a face is greater. We can normally categorise objects within 250–290 ms but our ability to recognise famous faces in a group of unknown faces takes us between 430 and 875 ms (Barragan-Jason et al, 2012). When a cap is set on reaction time, only very famous faces are recognised within 600 ms. In Barragan-Jason et al’s study, the minimum time required to recognise famous faces was 467 ms; that is, 180 ms more than when discriminating human faces from animals, and 160 ms slower than when distinguishing between the sex of faces.
    Gobbini and Haxby (2007) have argued that knowledge of the traits of the individual’s face and the ability to evaluate the mental state of the face of a familiar person is mediated by an area called the anterior paracingulate cortex. Biographical and semantic information associated with the face is mediated by the anterior temporal cortex. Autobiographical memories associated with the face activate the precuneus and posterior cingulate cortex, with emotion mediated by the typical regions already described above.
54
Q

Brain damage and visual perception

A

Schneider (1969) had proposed that there were two major visual system pathways: a geniculostriate pathway which was responsible for identifying stimuli and discriminating between patterns, and a retinotectal pathway which was responsible for locating objects in space. Schneider's theory has since been modified, although the idea that different brain regions are responsible for the perception of an object's qualities and its location remains valid.
Ungerleider and Mishkin (1982), for example, suggested that different parts of the brain were involved in object identification and object location: the appreciation of an object’s qualities was the role of the inferior temporal cortex; the ability to locate an object was the role of the posterior parietal cortex. Primates with posterior parietal cortex lesions make consistent errors in accurately reaching out for or grasping objects although their ability to discriminate between objects is intact. Similar damage in humans also results in difficulties performing visuospatial tasks such as estimating length and distance (Von Cramon and Kerkhoff, 1993; Jeannerod et al, 1994).

55
Q

parietal cortex

A
  • The parietal cortex (see Chapter 4) plays an important role in visually guiding movement and in grasping or manipulating objects (Sakata, 1997). Importantly, Ungerleider and Mishkin distinguished between a ventral and dorsal pathway or stream which projected from the primary visual cortex (PVC) to these areas. Thus, although originating in the PVC, the two pathways were independent and projected to different areas of the brain (to the occipitotemporal and the posterior half of the parietal cortex, respectively). The ventral stream was later extended to the ventrolateral and dorsolateral prefrontal cortex (Kravitz et al, 2011).
    Goodale and Milner (1992) and Milner and Goodale (1995) developed this idea that what was important was not ‘what’ and ‘where’, but ‘what’ and ‘how’. In Ungerleider and Mishkin’s model, the ventral stream processed the ‘what’ component of visual perception (identification of an object) whereas the dorsal stream processed the ‘where’ component (the spatial location of an object). Goodale and Milner’s research has focused on the ‘what’ and ‘how’ areas.
56
Q

occipital cortex

A
  • Goodale and Milner have made an extensive study of DF, a woman with substantial bilateral damage to the occipital cortex (but sparing the PVC) resulting from carbon monoxide poisoning (Goodale and Milner, 1992; Milner and Goodale, 1995). DF is unable to discriminate between geometric shapes and is unable to recognise or identify objects, despite having no language or visual sensory impairment (Milner et al, 1991). That is, she exhibits visual form agnosia (agnosia is described in more detail in a later section). DF is able to respond to objects. For example, she can place her hand into a slot of varying orientations or grasp blocks (Goodale et al, 1991). However, when she is asked to estimate the orientation of the slot or the width of the box by verbally reporting or by gesturing, she is unable to do so. Why?
    DF may be using the intact visuomotor processing system in the parietal cortex to perform the grasping and orientation tasks (Milner and Goodale, 1995; Milner, 1998). The guidance of motor behaviour relies on a primitive dorsal stream in the parietal cortex which is spared in DF. This is why the execution of her motor behaviour is accurate. When asked to indicate which of two boxes is a rectangle and which is a square, she can respond correctly when holding the boxes but less correctly when making a verbal response (Murphy et al, 1996). DF would make partial movements towards one of the boxes before correcting herself. When these initial reaches were analysed, they showed the same level of accuracy as if she had verbally reported which box was which.
57
Q

Did DF monitor the size of her anticipatory grip before making a decision?

A
  • There is evidence that she does. When asked to look at a series of lines of varying orientation and then copy them on a separate piece of paper, DF would outline the line in the air before making a copy. When asked not to do this, her copies were still relatively accurate. She found the task easier if she imagined herself drawing the line: when she was asked to copy the line immediately – thereby preventing rehearsal from taking place – she failed (Dijkerman and Milner, 1997). DF must have generated a motor image of the lines to allow her to accomplish this task, a behaviour which would have been made possible by intact functioning of the frontal and parietal lobes.
  • On the basis of DF’s behaviour, research from neuroimaging studies of motor movement and vision, and animal lesions to parietal and occipital areas, Milner and Goodale (1995) proposed that the dorsal stream sends information about object characteristics and orientation that is related to movement from the primary visual cortex to the parietal cortex. Damage to the ventral stream, which projects to the inferior temporal cortex, is what is responsible for DF’s inability to access perceptual information. The dorsal stream is automatic, non-conscious and involves visually guided action, not spatial perception whereas the ventral stream produces the representations that are available to conscious experience.
    Some have argued (Kravitz et al, 2011) that the dorsal stream is in fact three streams with one mediating spatial working memory (see Chapter 8), another mediating visually guided action and a third, spatial navigation. These go to the parieto-PFC, parieto-premotor cortex and parieto-medial temporal cortex, respectively. This is a challenge to the ‘what’ and ‘how’ model because Kravitz et al argue that the different streams support different functions because of the cortical areas they project to.
58
Q

Projections to the primary visual cortex

A
  • Two specific pathways – the parvocellular (P) and magnocellular (M) pathways – run from the retina to the cortex and terminate in different layers of the primary visual cortex (V1). Other layers of V1 project to other dorsal and ventral stream areas. Layers 2 and 3 of V1, for example, provide input to the ventral stream areas whereas layer 4B sends input to dorsal stream areas. Layer 4B also receives input from the M and P pathways and projects to areas such as V5, a region known to be involved in motion perception. Many other circuits such as this are made within the visual system but comparatively little is known about how functionally relevant such connections are or how different types of cell contribute to the circuitry.
    One study has shown that different types of neurons in area V1 receive different signals from the M and P pathways and forward this information to other specific cortical areas (Yabuta et al, 2001). The results of the study suggest that if two types of cell project to different layers, perhaps each type carries different types of information in the cortical visual system.
59
Q

Perceptual disorders

A
  • When the brain is damaged and visual perception is impaired, the patient is said to exhibit a perceptual disorder. There are several perceptual disorders and each is associated with damage to different parts of the visual system. It is important to note that these disorders are strictly perceptual, that is, there is no underlying impairment in sensation (patients retain visual acuity and the ability to tell light from dark, and so on). The basic visual sensory system itself is, therefore, unimpaired.
    Three of the most important perceptual disorders are blindsight, agnosia and spatial neglect. Each is important in its own way because they demonstrate how brain damage can affect different aspects of visual perception.
60
Q

blindsight 1

A
  • When the primary visual cortex is damaged, a person becomes blind in some portion of the visual field. Some individuals, however, can lose substantial areas of the PVC and yet show evidence of perceiving objects despite being ‘cortically blind’. This phenomenon is called blindsight (Weiskrantz, 1986, 1997) because although patients are unable to see properties of objects, they are aware of other aspects such as movement of objects.
  • According to Weiskrantz, blindsight is the 'visual capacity in a field defect in the absence of acknowledged awareness'. Moving objects are better detected than still ones; patients can locate objects by pointing and can detect movement and colour, despite being 'unable' to see the stimuli. (There are equivalent phenomena in the auditory, somatosensory and olfactory systems called deaf hearing, blindtouch and blindsmell.)
  • Much of the research in blindsight has been conducted with people with damage to the striate cortex (V1) and who have retained residual visual ability; they are cortically and clinically blind in the field affected by the V1 damage but they are able to make decisions in this visual field which indicates that there is some visual ability that is intact. If people do indicate an awareness of visual stimuli – the most well-known example is showing some awareness of moving objects – they are described as having type-2 blindsight (Foley, 2014).
  • The earliest case of blindsight was reported at the beginning of the last century (Riddoch, 1917). Riddoch was an army medical officer who had made a study of soldiers whose primary visual cortex had been damaged by gunshot wounds. Although none of the patients could directly describe objects placed in front of them (neither shape, form nor colour), they were conscious of the movement of the objects, despite the movement being 'vague and shadowy'. This suggested to Riddoch that some residual visual ability in the PVC remained which allowed the perception of object motion but no other aspect of visual perception. Some patients need to be prompted to 'guess' (type-1 blindsight), whereas others will report vague sensations (type-2 blindsight), although both types claim that they cannot see anything. The fact that the PVC is damaged in these patients led to the hypothesis that this area is responsible for conscious visual perception (Radoeva et al, 2008).
    Since Riddoch's study, several other cases of blindsight have been reported, notably Larry Weiskrantz's famous patient, DB (Weiskrantz, 1986). DB had undergone surgery for a brain tumour, which necessitated removal of an area of visual cortex in the right occipital lobe. The surgery resulted in a scotoma – an area of complete blindness in the visual field. DB could indicate whether a stick was horizontal or vertical, could point to the location of an object when instructed, and could detect whether an object was present or absent. Other tasks presented greater difficulty: DB could not distinguish a triangle from a cross, or a curved triangle from a normal one. The most intriguing feature of DB's behaviour, however, was a lack of awareness of the stimuli presented. According to DB, he 'couldn't see anything' when test stimuli were presented. Why could DB, and patients like DB, make perceptual decisions despite being unaware of visual stimuli?
61
Q

blindsight 2

A
  • One hypothesis suggests that perceptual tasks can be completed successfully because stray light emitted by stimuli makes its way into the intact field of vision by reflecting from surfaces outside the eye area – what is called extraocular scatter (Cowey, 2004). The stray light hypothesis, however, appears to be an unlikely explanation because DB is able to make perceptual decisions in the presence of strong ambient light, which reduces the influence of stray light emitted by stimuli. More to the point, this theory does not explain how DB can still make decisions based on the spatial dimensions of objects.
  • An alternative hypothesis is that the ability is attributable to degraded normal vision, possibly due to the presence of some residual striate cortex ('islands' of undamaged PVC) (Wessinger et al, 1999). Implicit in this hypothesis is the notion that residual abilities are not attributable to the functioning of another visual system pathway. There are 10 known pathways from the retina to the brain (Stoerig and Cowey, 1997). As you have seen, there appear to be two distinct pathways in the visual system which mediate different aspects of vision.
  • The visual location of objects, for example, is thought to be a function of a system which includes the superior colliculus, the posterior thalamus and areas 20 and 21, whereas the analysis of visual form, pattern or colour is thought to be a function of the geniculostriate system which sends projections from the retina to the lateral geniculate nucleus, then to areas 17, 18 and 19, and then to areas 20 and 21. Blindsight could, therefore, conceivably be due to a disconnection between these two systems. Again, there are arguments against this hypothesis.
    Curiously, DB, although unable to 'see' objects presented to him even 30 years after his deficit was first studied, appears to be aware of a visual 'after-image' after a stimulus on a monitor is switched off (Weiskrantz et al, 2002). He can describe the colour and spatial structure of the stimulus, a phenomenon that is correlated with increased PFC activity (Weiskrantz et al, 2003). It is unclear whether this ability is due to spared striate cortex, however, because DB has surgical clips which prevent him from undergoing an MRI scan that would demarcate any preserved cortex.
62
Q

visual agnosia

A
  • Patients with posterior lesions to the left or right hemisphere sometimes have considerable difficulty in recognising objects, despite having intact sensory systems. We saw an example of this in an earlier section when we discussed the perceptual impairments seen in patient DF. This disorder is called agnosia (literally ‘without knowledge’), a term coined by Sigmund Freud. Agnosia can occur in any sense (e.g. tactile agnosia refers to the inability to recognise an object by touch) but visual agnosia is the most common type (Farah, 1990; Farah and Ratcliff, 1994).
  • The existence of specific types of agnosia is a controversial topic in perception and neuropsychology. A distinction is usually made between two types of visual agnosia: associative and apperceptive. Apperceptive agnosia is the inability to recognise objects whereas associative agnosia is the inability to make meaningful associations to objects that are visually presented. Some neuropsychologists have argued that the boundaries between these two types are ‘fuzzy’ (DeRenzi and Lucchelli, 1993), and other sub-types of visual agnosia have been suggested (Humphreys and Riddoch, 1987a).
  • Apperceptive agnosics have a severe impairment in the ability to copy drawings, as patient DF did.
  • Associative agnosics, conversely, can copy accurately but are unable to identify their drawings. For example, Humphreys and Riddoch’s patient, HJA, spent six hours completing an accurate drawing but was unable to identify it when he had finished. Figure 6.33 shows you an example of HJA’s drawings.
  • There has been considerable debate concerning the specificity of visual object agnosia, that is, whether some patients are able to recognise some categories of object but not others (Newcombe et al, 1994). The commonest dissociation is seen between living and non-living things. Generally, it has been found that recognition of living objects (such as animals) is less accurate in agnosic patients than is recognition of non-living objects (Warrington and Shallice, 1984; Silveri et al, 1997).
  • Some psychologists, however, have argued that these studies do not show differences between the categories of object but between the ways in which these two different types of stimuli are presented. Parkin and Stewart (1993), for example, have suggested that it is more difficult to recognise drawings of animate than inanimate objects.
    An inanimate object, such as a cup, is a lot less detailed than an animate object, such as a fly. The dissociation seen in agnosic patients, therefore, may be due to the complexity and/or familiarity of the perceived stimulus. Stewart et al (1992) have suggested that when these artefacts are controlled for, these dissociations disappear
63
Q

prosopagnosia

A
  • A more category-specific form of agnosia is prosopagnosia. Some individuals with damage to specific areas of the posterior right hemisphere (and sometimes left and right hemispheres) show an impairment in the ability to recognise familiar faces. This condition is known as prosopagnosia (‘loss of knowledge for faces’).
  • Some patients are unable to recognise famous faces (Warrington and James, 1967) or familiar people such as spouses (DeRenzi, 1986). Some people are born with an impairment in the ability to recognise faces despite intact intelligence and vision; this condition is called developmental (or congenital) prosopagnosia and tends to run in families. Such cases, however, are not unitary and clear-cut.
  • The ability to recover some ability in face recognition in acquired and developmental cases is generally poor, despite attempts at rehabilitation (Bate and Bennetts, 2014; DeGutis et al, 2014). People with congenital prosopagnosia have a limited understanding of their own inabilities (Palermo et al, 2017).
  • Much of the recent neuropsychological work on face recognition has exploited neuroimaging techniques to determine whether regions of the human brain respond to faces selectively.
  • One controversy in the area surrounds whether such selective activation is specific to faces or to some other perceptual aspect of faces.
  • Kanwisher et al (1998), for example, found that the human fusiform face area (HFFA) was significantly activated when people viewed upright and inverted greyscale faces. Inverted two-tone faces, however, were associated with significantly reduced brain activation. The results suggest that the HFFA does not respond simply to low-level features of faces (if it did, the inverted and upright two-tone faces would have produced similar activation) but does respond to face stimuli. The authors acknowledge, however, that this may not be the only brain region specialised for face processing.
  • Another argument against the specificity hypothesis is that the area may respond to familiar stimuli which we are expert at identifying, rather than faces specifically.
  • To test this hypothesis, Rhodes et al (2004) set up two experiments in which people were either trained or were not trained to recognise Lepidoptera (moths and butterflies). Brain activation was monitored using fMRI while participants viewed faces and Lepidoptera. In the second experiment, experts in identifying moths and butterflies passively watched examples of the species while brain activity was recorded. In the first experiment, the FFA was more significantly activated when people watched faces than Lepidoptera, regardless of whether people had been trained to recognise examples of the species. In the second experiment, activation was greater in the FFA when the butterfly experts watched faces than Lepidoptera. There was no overlap in the areas activated by faces and moths and butterflies. The results suggest that the FFA contains neurons that allow ‘individuation’ of (i.e. discrimination between) faces.
    Extensive testing of one individual, Edward, a 53-year-old man with PhDs in theology and physics who experienced face processing difficulties in childhood, strongly suggests that the deficit in prosopagnosia is a face-specific one (Duchaine et al, 2006).
64
Q

visuospatial neglect

A
  • Patients with lesions in the right posterior parietal cortex sometimes have difficulty in perceiving objects to their left (Vallar, 1998; Guariglia et al, 2014). Around 50–80 per cent of patients with right hemisphere stroke are unable to attend automatically to any stimuli in left space (Halligan and Marshall, 1994; Guariglia et al, 2014). This is called visuospatial neglect (or unilateral spatial hemineglect) and occurs on the side of the body that is contralateral to the side of the brain damage. It is called neglect because patients cannot, or show an impairment in the ability to, respond to stimuli in the visual field opposite to the area of brain injury.
    Neglect for the left side is more common than right neglect (which would be caused by damage to the left hemisphere).
  • Recent research suggests that damage to a number of other, subcortical structures may also result in neglect (Molenberghs et al, 2011). The variety of structures involved may explain why some patients show different types of neglect depending on the domain of impairment – personal, perceptual or representational.
  • Patients may ignore visual stimuli in the left visual field: not putting the left arm of a pair of glasses on the ear, not eating what is on the left side of a plate of food, or not reading the left side of a newspaper. Problems in the representational domain involve being unable to describe the left side of a mentally imagined stimulus (such as remembering the location of a landmark). For this reason, psychologists administer a battery of tests rather than one individual test (a single test would not measure all types of neglect) (Guariglia et al, 2014). Guariglia et al suggested that differences between tests may explain the variety of degrees of reported neglect.
  • Spatial-neglect patients show a characteristic pattern of behaviour on visuospatial tests. For example, if they are required to bisect lines of varying length, they will err to the right. If they are presented with an array of stimuli (such as small lines) and asked to mark off as many as possible, they mark off those on the right-hand side but fail to mark off those on the left. Patients find this even more difficult when more target stimuli are present (Ten Brink et al, 2020).
  • When asked to draw (or mentally imagine a scene), patients fail to draw or report details from the left side of the object or image (Guariglia et al, 1993; Halligan and Marshall, 1994). Sometimes, patients will transfer details from the left to the right-hand side. See Figures 6.34 and 6.35. This is called allesthesia or allochiria (Meador et al, 1991).
  • Guariglia et al (2014) found that, in their study of 287 patients with right-sided brain injury, 45 per cent showed evidence of neglect based on standard test battery administration. Line bisection performance correlated with cancellation task, writing and perceptual task performance, but not with personal neglect, suggesting, as other data have, that personal neglect might be a variation of – or a different disorder from – perceptual neglect.
  • The reasons for spatial neglect are unclear (see Halligan and Marshall, 1994, and Mozer et al, 1997, for a discussion) but there are various methods for reducing the symptoms of the disorder, including optokinetic stimulation (when the whole field of vision moves to the left, we feel as if the body is rotating to the right and compensate by shifting to the left), neck-muscle vibration (stimulating the left neck muscles makes us feel that the muscles are stretched and shifts attention to the left), caloric stimulation (inserting cold or warm water into the contralesional ear stimulates the vestibular nerve, creating movement of the eyeball), and prism adaptation (patients wear prisms that automatically shift attention to the right) (Kerkhoff and Schenk, 2012).
  • A review of these methods used in studies from 1997 to 2012 found that only 12 randomised control trials had been published and that the prism adaptation technique was the most effective (Yang et al, 2013). There are significant effects of the disorder on patients’ lives and on the lives of their caregivers, including lack of independence, problems with looking after themselves, undertaking household chores, walking, reading and using a wheelchair (Bosma et al, 2020).
    Recovery from neglect is variable. One recent review reported that 98 of 142 patients showed recovery at six months; their cancellation task performance recovered (Moore et al, 2021). Patients with allocentric neglect recovered more slowly than did patients with egocentric neglect.