Visual perception Flashcards

1
Q

Three basic approaches to measuring perception

A

1. magnitude estimation and production
2. matching
3. detection and discrimination tasks

2
Q

magnitude estimation and production

A

provide observers with a ‘standard’
stimulus and a given value (e.g., ‘100’), then ask the observer to give a corresponding value to indicate
their perception (e.g., ‘200’ if the new stimulus appears twice as bright as the standard), or to adjust
a new stimulus until it appears, for instance, twice as bright as the standard. Stevens (e.g., 1956)
pioneered this approach and found that different sensory continua (e.g., brightness, loudness of
sounds, etc.) conformed to the general pattern ψ = c × I^m, where ψ is the subjective level of sensation,
c is a constant, I is the stimulus’s physical intensity and m is a constant specific to each modality. This
relationship is referred to as Stevens’ Power Law. A particular limitation of this approach is that it is
uncertain whether participants can actually use numbers to indicate the relative strength of their
percepts in the way the method assumes.
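The power law can be sketched in a few lines of code. This is a minimal illustration, not material from the lecture: the exponent m = 0.33 is an assumed, commonly quoted value for brightness, not a figure from these notes.

```python
# Sketch of Stevens' Power Law, psi = c * I^m.
# m = 0.33 is an assumed illustrative exponent for brightness.

def stevens_sensation(intensity, c=1.0, m=0.33):
    """Subjective sensation (psi) for a physical intensity I."""
    return c * intensity ** m

# With m < 1, doubling physical intensity raises sensation by
# less than a factor of two -- a compressive relationship:
low = stevens_sensation(100.0)
high = stevens_sensation(200.0)
ratio = high / low  # equals 2 ** 0.33, roughly 1.26
```

Note the compression: a 2× change in intensity yields only about a 1.26× change in sensation under this assumed exponent.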

3
Q

Stevens’ Power Law

A

Stevens (e.g., 1956)
pioneered this approach and found that different sensory continua (e.g., brightness, loudness of
sounds, etc.) conformed to the general pattern ψ = c × I^m, where ψ is the subjective level of sensation,
c is a constant, I is the stimulus’s physical intensity and m is a constant specific to each modality. This
relationship is referred to as Stevens’ Power Law. A particular limitation of this approach is that it is
uncertain whether participants can actually use numbers to indicate the relative strength of their
percepts in the way the method assumes.

4
Q

matching

A

By asking participants to match the
appearance of two stimuli, under two different conditions, one can measure the effect of the changing
conditions on subjective perception. An example is asking participants to match a grey square with
one from a selection of other grey squares, where each square is placed on a background of different
intensities. This can reveal how perceptions of colour can be impacted by the surrounding context.

5
Q

detection and discrimination tasks

A

provide measures of
an observer’s sensitivity to low levels of stimulation or barely detectable differences between stimuli.
Crucial to this approach is the concept of a threshold. An absolute threshold would be the weakest
level of stimulus that can be detected (e.g. the minimum luminance of a light flash that can be
detected).

A difference threshold is the smallest
detectable change in a stimulus (e.g., did you detect a change in luminance?).

Two approaches to measuring an observer’s threshold are (1) the method of adjustment and (2) the method of constant stimuli.

The method of adjustment asks the
observer to adjust a stimulus until it is just noticeable.

The method of constant stimuli, instead,
presents the observer with a stimulus of a given intensity on each trial and the observer indicates
whether they saw the stimulus or not.
Across trials, the intensity of the stimulus can vary between
values likely to be undetectable and others likely to be detectable. By plotting the proportion of times
the participant saw the stimuli at each intensity level, one can estimate the intensity level required to
make the stimulus detectable on 50% of occasions. This is defined as the threshold. Note that this
simple example of the method of constant stimuli would be prone to changes in observers’ response
biases – more sophisticated approaches, discussed later in the course, tackle this issue.
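The 50% point described above can be estimated very simply. The sketch below is illustrative only (the intensity levels and ‘seen’ proportions are invented), using linear interpolation rather than the more sophisticated fitting methods mentioned later in the course.

```python
# Estimating a detection threshold from method-of-constant-stimuli
# data: interpolate to the intensity detected on 50% of trials.
# All data values are invented for illustration.

def threshold_50(intensities, proportions_seen):
    """Interpolate the intensity detected on 50% of trials."""
    pairs = list(zip(intensities, proportions_seen))
    for (i0, p0), (i1, p1) in zip(pairs, pairs[1:]):
        if p0 <= 0.5 <= p1:
            # linear interpolation between the bracketing points
            return i0 + (0.5 - p0) * (i1 - i0) / (p1 - p0)
    raise ValueError("50% point is not bracketed by the data")

levels = [1, 2, 3, 4, 5]                 # stimulus intensities
seen   = [0.05, 0.20, 0.45, 0.80, 0.95]  # proportion of trials 'seen'
threshold = threshold_50(levels, seen)   # falls between intensity 3 and 4
```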

6
Q

Two approaches to measuring an observer’s threshold are (1) the method of adjustment and (2) the method of constant stimuli.

A

The method of adjustment asks the
observer to adjust a stimulus until it is just noticeable. The method of constant stimuli, instead,
presents the observer with a stimulus of a given intensity on each trial and the observer indicates
whether they saw the stimulus or not. Across trials, the intensity of the stimulus can vary between
values likely to be undetectable and others likely to be detectable. By plotting the proportion of times
the participant saw the stimuli at each intensity level, one can estimate the intensity level required to
make the stimulus detectable on 50% of occasions. This is defined as the threshold. Note that this
simple example of the method of constant stimuli would be prone to changes in observers’ response
biases – more sophisticated approaches, discussed later in the course, tackle this issue.

7
Q

Who conducted pioneering studies measuring thresholds for detecting flashes of light when
background intensities differed?

A

Weber (1830s). He measured thresholds for detecting a change in the luminance of a spot
of light, varying the intensity of the stimulus and the background. The intensity change required to
reach threshold was proportional to the stimulus’s original intensity, which can be formulated as
Weber’s Law: ΔI = I × C (a constant), or equivalently ΔI/I = C. See the graph of Weber’s Law on slide 10 - this is
an example of a psychometric function, describing the relationship between a physical stimulus and
a behavioural index of perception. The constant C (calculated from ΔI/I) differs across
sensory continua (e.g., 1/5 for the concentration of a saline solution, 1/11 for changes in the intensity of
a sound, 1/300 for detecting changes in the frequency of sine waves of medium frequency). Weber’s Law
holds for many sensory systems, although it breaks down as I approaches zero (due to internal
neural noise unrelated to the stimulus).
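Weber’s Law is easy to sketch in code. The Weber fractions used below are the lecture’s own examples; the baseline intensities are invented for illustration.

```python
# Sketch of Weber's Law, Delta-I / I = C: the just-noticeable change
# in a stimulus is a constant fraction of its baseline intensity.

def jnd(baseline_intensity, weber_fraction):
    """Smallest detectable change (Delta-I) at a given baseline."""
    return baseline_intensity * weber_fraction

C_SALINE = 1 / 5     # concentration of saline solution (from the notes)
C_LOUDNESS = 1 / 11  # intensity of sound (from the notes)

# The detectable change grows in proportion to the baseline:
small_change = jnd(100.0, C_SALINE)  # 20.0
large_change = jnd(200.0, C_SALINE)  # 40.0 -- twice the baseline, twice the JND
```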

8
Q

Draw the structure of the human eye

A

The human eye is the beginning of our visual system, using lenses to bend light and focus it onto photosensitive cells at the back of the eye. Some basic features of the eye: the sclera is the wall of the eye, made of a tough white material, except for the clear, protruding part at the front of the eye known as the
cornea.

The cornea acts as a fixed lens (i.e. it doesn’t adjust to bring items into focus). Just inside the
cornea is the anterior chamber, filled with clear fluid (‘aqueous humour’), which separates the cornea from the iris, a ring of muscle controlling the size of the pupil, and hence the amount of light entering the eye.

Light then passes through the crystalline lens, which assists the cornea in producing a focused
image on the retina (described below). The process of flattening the lens to bring distant objects into
focus, or making the lens rounder to bring near objects into focus, is referred to as accommodation. The
manipulation of the lens’s shape is carried out by the ciliary muscles. After the lens, light must pass
through the vitreous humour to reach the retina. The retina is a thin rim of neural tissue at the back of
the eye that is responsible for encoding patterns of light and shade. In the central retina there is a
yellowish region called the macula lutea, near the centre of which lies a pit (the fovea). Light
receptors here have particularly good acuity (the ability to distinguish fine detail in the image) due to their
greater numbers and smaller receptive fields (areas of the visual field from which they receive light
input). The optic disk is the area of the retina where nerve fibres exit the eye, projecting to the brain.
There are consequently no receptors there, and we thus have a ‘blind spot’ in each eye.

9
Q

Counter-intuitive property of the retina

A

The retina is not a single layer of cells but rather consists of at least five distinct layers of cells that
provide some pre-processing of signals before sending them to the brain. Arguably, these layers are
the wrong way round: before striking the photoreceptors right at the back of the retina, light must
first pass through the other cell layers. Note though that this has little effect since the cells are mostly
water and the retina is only about 0.2 mm thick. Outside the photoreceptor layer, right at the back of
the retina, is the dark pigment epithelium, which is unreflective and may function to ‘mop up’ stray
light not absorbed by the receptors, preventing this light from being reflected back into the retina and
blurring the retinal image.

10
Q

Name, colour and types of photoreceptors

A

The photoreceptors of the retina are termed ‘Rods’ and ‘Cones’, due to the shapes of their outer,
photoreceptive segments. Cones (numbering around 6 million) have their greatest density in the fovea
and have relatively small receptive fields. There are three types, which are differentially sensitive to long,
medium and short wavelengths of light. Cones support colour vision. Rods (around 120 million) are
found outside the fovea. They are achromatic, and have better sensitivity than cones at low light
levels. Receptors show a substantial response to light in their receptive field, but little response to
stimulation from surrounding areas (e.g., an annulus of light).

11
Q

Processing cells for photoreceptors

A

Photoreceptors transmit signals
to bipolar cells which, in turn, transmit information to ganglion cells, which then transmit this
information to the brain. Two further types of cell also process information in the retina: horizontal cells integrate
information from several photoreceptors, and amacrine cells form links to several different ganglion
cells. Unlike photoreceptors, which show ‘graded’ responses to stimuli (‘graded potentials’), ganglion
cells code visual information in ‘all-or-nothing’ spikes of activity referred to as ‘action
potentials’. The size of an action potential is not graded according to how salient or strong a stimulus is.

When measuring ganglion cell responses, therefore, the number of action potentials (‘spikes’) per
second is counted (the firing rate).

12
Q

Adaptation to light and darkness

A

The human visual system can operate efficiently over a huge range
of light levels, due partially to changes in pupil size, but in far greater part to mechanisms within the
retina itself. If the visual system did not adapt, neuronal responses would soon asymptote (level off)
and we would be blind to further increases or decreases in luminance. Consider that the light reflected
from a piece of paper illuminated by starlight may be one ten-millionth of that reflected when
illuminated by bright sunlight - a huge range. Yet retinal ganglion cells have a range of only
0-200 spikes per second! So we need to adapt. The range of light/dark adaptation can be tested by first
adapting to a high level of light, then measuring sensitivity to light flashes as the observer moves into a
dark room.

13
Q

Who tested the sensitivity to flashes of light as a function of time an observer
spent dark-adapting?

A

Hecht (1937) tested sensitivity to flashes of light as a function of the time an observer
spent dark-adapting. When red flashes were used, sensitivity to light (the ability to detect it) increased by
about 2 log units (100×) after ten minutes in the dark, but then got no better. However, when violet
flashes were used, the improvement of 2 log units during the first 10 minutes (as for small red flashes)
was followed by a further improvement of another 4 log units (or 10,000×!). This pattern of results points
to two systems - one that dark-adapts quickly but asymptotes after 10 minutes, and a second that dark-adapts
less quickly but to a far greater extent. The first is a photopic process, in which adaptation occurs for
‘light-seeing’ receptors (i.e., cones); it is chromatic and has high acuity. The second is a scotopic process, in which
adaptation is for seeing in the dark; it is driven by rods, achromatic, and has poor acuity.

14
Q

Colour vision - Trichromatic and Colour-opponent theories

A

Young (1807) was first to suggest vision might be trichromatic based on evidence from
metameric matching (matching the appearance of any single wavelength using mixtures of three
primary colours). Physiological support for this is provided by Brown & Wald (1966) using
microspectrophotometry - shining a thin monochromatic beam through individual receptors on a dissected retina and examining light absorption of different wavelengths. Peak absorptions of cones
cluster around three wavelengths (L, M and S).

Hering (1878) argued that rather than three, we
perceive four primary colours, with other colours being mixtures. Hering proposed the Opponent-process
theory of colour, suggesting that colours have opponent relationships (Red versus Green, Blue
versus Yellow), based on how colours can be thought of from a psychological perspective. If you mix
two complementary colours, you get a neutral colour, not a mixture of the two (i.e., we don’t see
reddish-green or yellowish-blue colours). Further evidence for opponent colours comes from afterimages,
aftereffects (e.g. the castle at the end of the lecture) and simultaneous colour contrast.

So both
Trichromatic and Colour-opponent theories seem to be correct: trichromatic at the receptor level and
opponent processing at later levels. The transition from the trichromatic receptor stage to four primaries
at the colour-opponent post-receptor stage occurs through combining cone inputs: L-cones provide the ‘red’
input, M-cones the ‘green’, and S-cones the ‘blue’. L and M cones together provide the ‘yellow’ input to colour-opponent
processes. The yellow primary is perceived when L and M cones are both responding equally, in the
relative absence of S-cone responses.
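The cone-to-opponent recombination can be sketched as follows. This is an illustrative simplification, not a formal model from the lecture; the channel formulas (red-green ≈ L − M, blue-yellow ≈ S − (L + M)) and all numbers are assumptions for demonstration.

```python
# Illustrative sketch of recombining three cone signals into two
# opponent channels. Formulas and values are simplified assumptions.

def opponent_channels(l_cone, m_cone, s_cone):
    red_green = l_cone - m_cone    # positive -> reddish, negative -> greenish
    yellow = l_cone + m_cone       # joint L+M activity provides the 'yellow' input
    blue_yellow = s_cone - yellow  # positive -> bluish, negative -> yellowish
    return red_green, blue_yellow

# Equal L and M responses with little S input: the red-green channel
# is balanced and the blue-yellow channel swings towards yellow,
# matching the notes' account of the yellow primary.
rg, by = opponent_channels(l_cone=1.0, m_cone=1.0, s_cone=0.1)
```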

15
Q

Visual perception 1 content

A

Measuring Perception, The Eye, Adaptation And Colour Vision
Visual perception 1 (VP1) introduces students to the lectures on human vision, outlining basic
approaches to measuring perception, a brief overview of the human eye, the importance of
adaptation (light and dark adaptation, in particular), and mechanisms of colour vision.

16
Q

Visual Perception 2 content

A

Contrast, Tuning, Univariance, Adaptive Independence and Orientation
VP2 introduces (i) the perception of contrast as a fundamental feature of coding in visual
neurons, (ii) ambiguity in neural signals and the need for groups of neurons to cooperatively code
visual features, (iii) the principles of Univariance and Adaptive Independence, and (iv) the perception of
orientation.

17
Q

Define receptive fields

A

Regions of the visual field in which
light stimulation causes the receptor to respond. By extension, other cells that are not themselves
stimulated by light, but whose responses are driven by photoreceptors (e.g., retinal ganglion cells,
visual cortical neurons) also have receptive fields.

18
Q

The way that light stimulation affects a visual
neuron’s response typically depends upon _______

A

The way that light stimulation affects a visual
neuron’s response typically depends upon where in the cell’s receptive field the light stimulation
occurs – yielding opposite responses in the central part of the receptive field versus the peripheral
parts. Because of this centre-surround ‘antagonism’, most ganglion cells and other visual cortical
neurons don’t respond to the overall level of light stimulation but to the relative stimulation of the
centre versus the periphery (the ‘surround’). That is, ganglion cells typically respond to the contrast
in stimulation between neighbouring regions of the visual field.

19
Q

Centre-surround antagonism-

Limulus (the horseshoe crab) by Hartline and
Graham (1932)

vertebrate eye (in the cat) by Kuffler (1953)

A

It refers to the tendency for
stimulation of the centre of a cell’s receptive field to have the opposite effect upon firing to that
elicited by stimulation of the surround. On-centre cells have excitatory inputs in the centre of their
receptive field and inhibitory inputs in the surround. Off-centre cells have the reverse pattern of
inputs. Centre-surround antagonism means that the retina responds poorly to large uniform regions
of light, as the inhibitory action of light striking an on-centre cell’s surround approximately cancels the
excitation from its centre. It is also responsible for a range of perceptual phenomena. For example, simultaneous
brightness contrast and Mach bands in stepped-intensity patterns can easily be accounted for in terms
of lateral antagonism (also see the Hermann Grid, not mentioned in the lecture). The tendency for
cells to code contrast in this manner is not limited to luminance (light intensity) but also characterises
the neural coding of colour, orientation, motion and other features of perception.
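Centre-surround antagonism can be sketched as a difference-of-Gaussians response profile. This is a standard textbook model rather than one the lecture specifies, and every parameter below is invented for illustration.

```python
# Sketch of an on-centre receptive field as a difference of Gaussians:
# a narrow excitatory centre minus a broad inhibitory surround.
# All parameters are invented for illustration.
import math

def on_centre_response(distance, sigma_c=1.0, sigma_s=3.0, w_s=0.9):
    """Net drive from a point of light at `distance` from the RF centre."""
    centre = math.exp(-distance ** 2 / (2 * sigma_c ** 2))          # excitatory
    surround = w_s * math.exp(-distance ** 2 / (2 * sigma_s ** 2))  # inhibitory
    return centre - surround

in_centre = on_centre_response(0.0)    # positive: light in the centre excites
in_surround = on_centre_response(2.0)  # negative: light in the surround inhibits
```

Because centre and surround roughly cancel under uniform light, such a cell signals contrast between neighbouring regions rather than overall light level, as the notes describe.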

20
Q

Three principles of neural coding.

A

(i) Neurons are preferentially activated by (or ‘tuned to’) particular features.

(ii) Principle of Adaptive Independence.

(iii) Principle of Univariance

21
Q

(i) Neurons are preferentially activated by (or ‘tuned to’) particular features.

A

(i) Neurons are preferentially activated by (or ‘tuned to’) particular features. In the lecture, an example cell from primary visual cortex (one of the visual areas
of the brain discussed in VP5) is used to illustrate this principle. The cell responds preferentially to bars
of 45 degrees orientation, and responds much less to bars of other orientations. Many cells show
differential sensitivity to, e.g., different wavelengths of light or different directions of motion.

22
Q

(ii) Principle of Adaptive Independence (+2 types of perceptual distortion)

A

In the previous lecture (VP1), we saw that the visual system can ‘adapt’ to different levels of light, varying the sensitivity of visual cells to light and thereby
maximising our ability to perceive differences in light intensity.

However, adaptation does not only
affect the visual system’s general response to light. Many different mechanisms ‘tuned to’ different
features can also be adapted independently of each other (hence the principle of ‘adaptive
independence’), yielding perceptual distortions.

Two types of such distortion are afterimages: the
perception of clear images on a blank background after stimulation of the eye has ceased; and
aftereffects: adaptation to a stimulus altering our perception of a second stimulus.

23
Q

Perceptual distortion

A

Many different mechanisms ‘tuned to’ different
features can also be adapted independently of each other (hence the principle of ‘adaptive
independence’), yielding perceptual distortions. Two types of such distortion are afterimages: the
perception of clear images on a blank background after stimulation of the eye has ceased; and
aftereffects: adaptation to a stimulus altering our perception of a second stimulus.

In these cases, adapting to a stimulus often yields a negative afterimage/aftereffect that is in some
sense perceptually ‘opposite’ to the adapting stimulus: they are thus termed ‘negative’
aftereffects/afterimages. The independent adaptation of individual mechanisms turns out to be very
useful for identifying how visual mechanisms work. A couple of examples are presented in this lecture.

24
Q

(iii) Principle of Univariance

A

Although individual neurons are more responsive to some properties than to others, their responses are still ambiguous regarding the presence or absence of a particular feature - why?

We have seen that a cell’s state of adaptation can affect its response to a stimulus - additionally, stimulus salience can also affect the response, more salient stimuli eliciting larger responses.
So multiple factors affect a cell’s response. In contrast, a neuron’s response varies along only one dimension
(it responds more or less). Hence, if a neuron is responding at 50% of its maximum firing rate, this
might either be due to the presence of a faint stimulus that it is tuned to, or a more salient stimulus
of an orientation that the cell prefers less.

We simply can’t tell from the firing of one cell - its response
is ambiguous - but by combining responses of multiple cells (pattern coding), we can (as illustrated in
the lecture).
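The ambiguity described above can be demonstrated numerically. This is a sketch only: the Gaussian tuning curve, the bandwidth and every number are invented assumptions, not values from the lecture.

```python
# Sketch of the Principle of Univariance: one cell's firing rate
# confounds stimulus salience with how well the stimulus matches
# the cell's preferred feature. All values are invented.
import math

def firing_rate(preferred, orientation, salience, bandwidth=20.0):
    """Response (fraction of max) of an orientation-tuned cell."""
    tuning = math.exp(-(orientation - preferred) ** 2 / (2 * bandwidth ** 2))
    return min(1.0, salience * tuning)

# A faint stimulus at the preferred orientation...
r_faint = firing_rate(preferred=45, orientation=45, salience=0.5)
# ...and a fully salient stimulus at an offset chosen so tuning = 0.5:
offset = 45 + math.sqrt(2 * 20.0 ** 2 * math.log(2))
r_salient = firing_rate(preferred=45, orientation=offset, salience=1.0)
# r_faint and r_salient are essentially equal: the single cell's
# response cannot distinguish the two stimuli -- hence pattern coding.
```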

25
Q

Perception of orientation - using orientation adaptation to reveal mechanisms of orientation coding

A

Retinal ganglion cells with circular, centre-surround receptive fields may be ‘tuned’ to
respond to luminance discontinuities over space. However, they cannot distinguish oriented straight
edges from a range of other stimuli. In order to detect the orientation of a bar, responses from several
neighbouring receptive fields must be integrated.
Perceptual effects of adaptation can help to reveal
mechanisms of orientation coding.

When observers adapt to a patch of lines oriented 10-15 degrees from vertical, this causes misperception of a second patch of vertical lines, which appear to be tilted
in the opposite direction to those in the adapted patch (the negative tilt aftereffect). A sharp peak in
the aftereffect arises at around 10-15 degrees of separation between the adapting and test angles,
suggesting that units in the visual system tuned to orientations 10-15 degrees apart may
mutually inhibit each other.

Intriguingly, Hubel & Wiesel (1977), in their recordings from primary visual
cortex in the macaque, found that within functional units of the cortex known as ‘hypercolumns’, cells
in neighbouring regions tended to code orientations that were around 10-15 degrees apart. Perhaps
the tilt aftereffect reflects inhibitory interactions between neighbouring regions in visual cortex:
these inhibitory interactions may help to fine-tune our perception of a line’s orientation. Perception of
orientation therefore seems to depend upon the relative firing of several orientation-sensitive
neurons coding a particular area of the visual field: another example of pattern coding.

26
Q

VP Lecture 3 content

A

Spatial frequency, Depth & Motion

Visual Perception 3 (VP3) introduces
(i) the need to perceive edges at different spatial scales, and the neural channels involved in spatial frequency perception
(ii) cues that vision uses to estimate the depth of objects from a two dimensional retinal image and
(iii) mechanisms for perceiving motion.

27
Q

Define spatial frequencies

A

The term ‘Spatial Frequency’ refers to the number of Cycles
Per Degree (CPD) of visual angle. We are more sensitive to some spatial frequencies than others.

28
Q

What is the contrast sensitivity function?

A

When sensitivity to contrast is mapped against spatial frequency, this yields the contrast sensitivity function
(C.S.F.).

29
Q

Campbell & Robson (1968) - a limited number of spatial frequency ‘channels’ in the
visual system that are tuned to a small range of frequencies

A

Campbell & Robson (1968) adapted observers to specific spatial frequencies.

Rather than decreased sensitivity for just the adapted spatial frequency or for all visible spatial frequencies, they found that sensitivity was reduced to a range of spatial frequencies around that to which observers had adapted.

Campbell & Robson concluded that there are a limited number of spatial frequency ‘channels’ in the
visual system that are tuned to a small range of frequencies. When observers adapt to a specific spatial
frequency only channels coding this frequency become adapted. Other channels remain unaffected.

Spatial frequency is of interest to vision scientists for two reasons.

First, there is evidence that visual
cortical neurones in both cat (Blakemore & Campbell, 1969) and monkey (De Valois et al. 1982) are
‘tuned to’ spatial frequencies. Second, the spatial frequency approach was built upon the work of Jean-Baptiste Joseph
Fourier, a French mathematician who demonstrated that any repeating pattern can be constructed
from a series of sinusoidal wave functions of different frequencies. It is possible that early vision may
employ ‘Fourier analysis’, decomposing each scene into its constituent sinusoidal wave functions.

30
Q

Define pattern coding/adaptation and examples (3)

A

Combining the responses of multiple cells - coding information in the pattern of activity across a population of neurons, rather than in any single cell’s firing.

Principle of Univariance- if a neuron is responding at 50% of its maximum firing rate, this
might either be due to the presence of a faint stimulus that it is tuned to, or a more salient stimulus
of an orientation that the cell prefers less. We simply can’t tell from the firing of one cell - its response
is ambiguous - but by combining responses of multiple cells (pattern coding)

Perception of orientation-Perhaps
the tilt after-effect reflects inhibitory interactions between neighbouring regions in visual cortex:
these inhibitory interactions may help to finetune our perception of a line’s orientation. Perception of
orientation therefore seems to depend upon the relative firing of several orientation-sensitive
neurons coding a particular area of the visual field: another example of pattern coding.

Individual motion mechanisms can be adapted. The motion aftereffect, commonly known as the
‘waterfall illusion’, is when an observer views a stimulus moving in one constant direction for around
30 seconds, and then focuses on a static stimulus. The effect is that the still image appears to move in the
opposite direction to that of the initial motion. Sutherland (1961) accounted for this in terms of spatial
pattern coding of motion, where cells with receptive fields at the same location code many different
possible directions of motion.

31
Q

What can visual perception do?

A
  1. Identify edges - spatial frequency + contrast
  2. Depth
  3. Motion
  4. Colour
  5. Light intensity
  6. Orientation
  7. Adaptation (Principles of adaptive independence-aftereffect which occurs in all that above)
32
Q

how do we interpret 2-D retinal images to create a 3-D perception?

A

One way is by using depth cues. Oculomotor cues to depth are based on the function of the eye muscles. These are most powerful at short range.

1. Binocular disparity
2. Motion parallax: the motion of an image across the retina can also provide a strong depth cue.
3. Static monocular cues: height in a scene, aerial perspective, shading, shadows, perspective, interposition, relative size
(‘texture’ cues), assumed size and familiar size.

Binocular disparity: When an object at a middle distance is focussed on the
fovea in each eye, the images of objects that are at the same distance away all appear at corresponding
points on the two retinas. The imaginary ellipse formed by all these locations at the same
distance as fixation is termed the ‘horopter’. However, objects that are further away than the focussed
object do not appear at corresponding points on the two retinae. Rather, those objects’ images are
displaced leftward in the left eye relative to their images in the right eye. In contrast, images of nearer
objects are displaced to the right in the left eye relative to the right eye. This disparity between the
locations of the images in the two eyes is termed binocular disparity. The degree and direction of such
disparity can be used as a powerful cue to the distance of an object from the observer. Disparities of objects nearer than a fixated point are termed crossed disparities; those of objects further than a
fixated point, uncrossed disparities. To exploit binocular disparity as a depth cue, the amount of accommodation must be taken into account. Additionally, the images in the two eyes must be fused.
This does not happen for all images in a scene, but only for those near the fixated distance around the
horopter. The region in which object images are fused is termed ‘Panum’s area’: outside this area,
we see double or diplopic images, though we are rarely aware of them.

Other depth cues: The motion of an image across the retina can also provide a strong depth cue. When
you are on a moving train and fixate a point outside through the window, you may have noticed that
objects further away than where you are focussing move across the retina in the same direction as
you are travelling, while nearer objects move in the opposite direction. This is known as motion
parallax.

There are also a variety of other static, monocular cues, termed ‘pictorial’ cues to depth:
Height in a scene, aerial perspective, shading, shadows, perspective, interposition, relative size
(‘texture’ cues), assumed size and familiar size. NB: Heuristics: (1) light comes from above, (2) faces
are convex

33
Q

How do we know an object is moving? Reichardt (1969)

A

The simplest case is when an object’s image moves
smoothly across the retina. Mechanisms for detecting this type of motion were first discovered in the
eyes of flies by Reichardt (1969). Remember that when two or more excitatory influences on a cell
occur simultaneously, they sum, making the cell more likely to reach its firing threshold. Reichardt
motion detectors use a delay in one of the inputs to a cell to ensure that if an object moves across one
receptor then another, at just the right speed, the two cell activities will coincide, causing the motion-detector
cell to fire. Note that in its simplest form, as illustrated, these detectors detect motion
in only one direction and at one speed (i.e. one delay). However, this shortcoming can easily be remedied by
adding further cells and connections.
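The delay-and-coincidence scheme above can be sketched in discrete time. This is a minimal illustration, not Reichardt’s actual model: the signals, the multiplication stage and the delay of two time steps are all invented assumptions.

```python
# Minimal discrete-time sketch of a Reichardt motion detector: the
# signal from receptor A is delayed and multiplied with the signal
# from receptor B, so motion from A to B at the matched speed makes
# the two signals coincide and drive the detector.

def reichardt_output(signal_a, signal_b, delay=2):
    """Summed coincidence of delayed-A with B across time steps."""
    total = 0.0
    for t in range(delay, len(signal_b)):
        total += signal_a[t - delay] * signal_b[t]
    return total

# A bright spot passes receptor A at t=1 and receptor B at t=3,
# i.e. rightward motion at exactly the speed the delay matches:
a = [0, 1, 0, 0, 0, 0]
b = [0, 0, 0, 1, 0, 0]
rightward = reichardt_output(a, b)  # delayed A coincides with B: responds
leftward = reichardt_output(b, a)   # opposite direction: no coincidence
```

Swapping the inputs models motion in the opposite direction, which this simple detector does not respond to, matching the one-direction, one-speed limitation noted above.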

34
Q

Individual motion mechanisms can be adapted - name the effect and explain it

A

The motion aftereffect, commonly known as the
‘waterfall illusion’, is when an observer views a stimulus moving in one constant direction for around
30 seconds, and then focuses on a static stimulus. The effect is that the still image appears to move in the
opposite direction to that of the initial motion. Sutherland (1961) accounted for this in terms of spatial
pattern coding of motion, where cells with receptive fields at the same location code many different
possible directions of motion. When we see a stimulus as static, this is not because the motion
detectors are not responding at all, but rather because they are signalling all directions of motion
equally. When one direction of motion is then adapted, the cells coding that direction become less
responsive, such that when a static stimulus is subsequently viewed there is a net bias in motion-responsive cells towards the opposite direction to that adapted, and motion is perceived in that direction.
Often, motion is ambiguous, particularly when there are several identical elements in a display. In such
cases, vision exploits heuristics to disambiguate motion: Examples are (1) Inertia - motion is assumed
to continue in the same direction as before, unless there is evidence to the contrary, (2) Rigidity -
points moving relative to one another are often assumed to remain the same distance in space from
one another, and so might be seen as moving in depth relative to the observer.

35
Q

What’s the content of Visual Perception 4?

A

Interocular transfer and perceptual constancy

(i) the measurement of ‘interocular transfer’ of
adaptation, and (ii) mechanisms and examples of perceptual constancy for size and lightness.

36
Q

Define interocular transfer and how to find its effect on adaptation

A

Interocular transfer refers to a change in threshold in an eye that was occluded, similar to, but of lower magnitude than, that in the fixating eye, in response to visual stimulation.
No interocular transfer: colour + light/dark adaptation
Yes: tilt aftereffects, motion aftereffect

We have seen in previous lectures that adaptation to specific visual attributes
(a direction of motion, a particular wavelength of light or a particular orientation) can help to specify
the nature of the neural mechanisms underpinning these aspects of vision. In this lecture, we discuss
the interocular transfer of the effects of adaptation, to help understand where in the visual system
these adaptation effects occur. When an adapting stimulus is viewed with only one eye, the
aftereffects of the adaptation on subsequent perception can be measured separately for a test stimulus viewed with the adapted eye versus a test stimulus viewed with the unadapted eye.

If the site of adaptation in the eye/brain is pre-cortical (along the geniculostriate pathway), the affected cells
are monocular (receiving input only from one eye) and an aftereffect should only be seen in the
adapted eye.

Conversely, if the site of adaptation is in the visual cortex, the affected cells are much
more likely to be binocular (receiving input from two eyes) and we should then expect the aftereffect
to be similar when viewed in the adapted versus unadapted eyes.
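The monocular-versus-binocular logic above is often quantified as a transfer ratio: the size of the aftereffect in the unadapted eye divided by that in the adapted eye. A minimal sketch with made-up numbers (the function name and the example values are assumptions for illustration):

```python
def interocular_transfer(effect_adapted_eye, effect_unadapted_eye):
    """Fraction of an aftereffect that transfers to the unadapted eye.
    Near 0 suggests a monocular (pre-cortical) site of adaptation;
    near 1 suggests a binocular (cortical) site."""
    if effect_adapted_eye == 0:
        raise ValueError("no aftereffect measured in the adapted eye")
    return effect_unadapted_eye / effect_adapted_eye

# Hypothetical aftereffect magnitudes (arbitrary units):
tilt_iot = interocular_transfer(3.0, 2.1)    # substantial transfer -> cortical site
colour_iot = interocular_transfer(3.0, 0.0)  # no transfer -> pre-cortical site
```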

37
Q

Define perceptual constancy and give two examples

A

Vision automatically adjusts perception according to current conditions: Perceptual Constancy

Across both size and lightness constancy, we see that retinal input alone cannot explain constancy,
and that our high-level interpretation of the scene has a significant impact.

The adjustments vision makes to compensate for these changes can make physically identical
retinal inputs look very different.

Size constancy: (distance + size) An object’s size cannot be worked out from its retinal image alone. As the object gets
further away, its retinal image shrinks proportionately.

Lightness constancy: (illumination + reflectance) In contrast to brightness (the perceptual correlate of light intensity), ‘lightness’
refers to how reflective a surface is. Dark surfaces reflect only a small proportion of the light striking
their surfaces, absorbing the rest, whereas lighter surfaces reflect a greater proportion of light. Light
striking the retina from an object is a function of the level of illumination and the surface reflectance
(‘lightness’) of an object.

To work out the object’s lightness, vision must therefore be able to discount how much of the light reflected to the eye is due to the illumination level and how much due to the low or high reflectance of the object. By doing this, we will perceive the same level of lightness despite changes in the surrounding illumination.

38
Q

Explain size constancy and give an example

A

As an object gets further away, its retinal image shrinks in proportion to its distance. Emmert’s Law (1881) states that the
perceived size of an object must therefore be scaled up according to its distance from the observer.

Examples of size scaling can be found in the ‘corridor illusion’, Ponzo, and Titchener illusions. In the
last two cases, we can see the effects of scaling, even when we don’t perceive any change in the
perceived distance of a stimulus. These are cases of misapplied size constancy: the scaling up of an
object’s perceived size is triggered by cues in the stimulus even though no conscious perception of
depth results from those cues. The Moon illusion is a further case of misapplied size constancy in the
real world (see Goldstein chapter 10).
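The geometry behind Emmert's Law can be sketched directly: the retinal angle an object subtends shrinks with distance, and size constancy scales that angle back up by the assumed distance. The function names and numbers below are illustrative only.

```python
import math

def retinal_angle_deg(size_m, distance_m):
    # Visual angle subtended at the eye by an object of a given size.
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

def scaled_size_m(retinal_deg, assumed_distance_m):
    # Emmert-style scaling: the same retinal angle is perceived as a
    # larger object when it is taken to lie further away.
    return 2 * assumed_distance_m * math.tan(math.radians(retinal_deg) / 2)

# A 1 m object at 2 m and a 2 m object at 4 m subtend identical angles...
near = retinal_angle_deg(1.0, 2.0)
far = retinal_angle_deg(2.0, 4.0)

# ...but scaling by distance recovers their different physical sizes.
s_near = scaled_size_m(near, 2.0)
s_far = scaled_size_m(far, 4.0)
```

This also shows why misapplied constancy produces illusions: feeding a wrong assumed distance into the second function inflates or shrinks the perceived size.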

39
Q

Explain lightness constancy and give an example

A

In contrast to brightness (perceptual correlate of light intensity), ‘lightness’
refers to how reflective a surface is. Dark surfaces reflect only a small proportion of the light striking
their surfaces, absorbing the rest, whereas lighter surfaces reflect a greater proportion of light. Light
striking the retina from an object is a function of the level of illumination and the surface reflectance
(‘lightness’) of an object. To work out the object’s lightness, vision must therefore be able to discount
how much of the light reflected to the eye is due to the illumination level and how much due to the
low or high reflectance of the object. By doing this, we will perceive the same level of lightness despite
changes in the surrounding illumination.

One simple method for achieving lightness constancy may derive from lateral antagonism. It is unlikely
that an object will have an entirely different level of illumination to its immediate surround. Therefore, if an object reflects more light than its background, this is likely to be due to the object being very
reflective rather than well illuminated.

Indeed, Wallach (1948) noted that the apparent lightness of
an illuminated circle depended on the ratio of the light it reflected and the light reflected by its
immediate background.
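Wallach's ratio observation can be sketched numerically: because the light reaching the eye is the product of illumination and reflectance, taking the ratio of target to surround luminance cancels the illumination term. The function names and values below are illustrative assumptions.

```python
def luminance(illumination, reflectance):
    # Light reaching the eye: illumination times surface reflectance.
    # Neither factor is recoverable from luminance alone.
    return illumination * reflectance

def ratio_lightness(target_refl, surround_refl, illumination):
    # Ratio rule: perceived lightness tracks target/surround luminance,
    # so the common illumination factor cancels out.
    return (luminance(illumination, target_refl)
            / luminance(illumination, surround_refl))

# The same grey paper (reflectance 0.4) on a white surround (0.9),
# under dim and bright illumination: the ratio is unchanged.
dim = ratio_lightness(0.4, 0.9, illumination=10.0)
bright = ratio_lightness(0.4, 0.9, illumination=1000.0)
```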

However, lateral antagonism cannot explain many examples of lightness
constancy. In some cases, two regions of the visual field may be physically identical and vary little in
terms of lateral antagonism, yet be perceived to have different lightnesses. Examples in the lecture
are White’s illusion (1979), Benary’s Cross (1924), Adelson’s chequered shadow illusion (1995). In
these cases, the visual system’s interpretation of the stimulus and the surrounding seems to affect the
perceived lightness of an object.

Among the clearest cases of such ‘interpretation effects’ are the Gelb
(1929) demonstration and Gilchrist’s (1980) finding that depth perception can influence lightness
perception.
Across both size and lightness constancy, we see that retinal input alone cannot explain constancy,
and that our high-level interpretation of the scene has a significant impact.

40
Q

What is the content of Visual Perception lecture 5?

A
  • Filling-in, Cortex and Conscious Vision
    (i) examples of ‘filling in’ of missing information in the input due to the blind-spot and camouflage, (ii) the visual brain - and a description of some of the brain’s
    visual processing areas, (iii) a brief introduction to conscious vision.
41
Q

Explain filling-in. Where does it apply in the visual system?

A

The optic disk, where ganglion cell axons leave the eye, has no photoreceptors.
So under
monocular viewing (using one eye), there is a blind spot in our visual field. Nonetheless, we perceive
no ‘black hole’ reflecting the lack of visual information at that region. Some theorists have suggested
that this is because vision ‘fills-in’ colour and texture information across this region of visual space
with information from surrounding regions (e.g., Ramachandran and Gregory).

In contrast, the
philosopher Daniel Dennett and others have proposed that the visual system does not need to fill-in
‘missing’ colour and texture information, and simply doesn’t represent those regions of visual space
(hence we don’t see anything there, including any black circles). The blind spot is at a peripheral region
of the retina at which we have relatively poor acuity, so this debate is difficult to resolve.

Single cell
recordings by Fiorani et al. (1992) showed that neurons appeared to respond to perceptual
information being filled in across the blind spot, though these are difficult to interpret without clear
perceptual correlates. Filling-in seems, however, to be a general feature of vision. The lecture
describes filling-in of illusory contours, and filling-in following the breakdown of border coding in the ‘twinkle’
aftereffect first reported by Ramachandran and Gregory.

42
Q

Describe the visual processing pathway beyond the retina

A

Retinal ganglion cells project mainly to the Lateral Geniculate Nuclei and then
primary visual cortex (V1), forming part of the occipital lobe at the back of the brain.

43
Q

Which experiments revealed the hypercolumn, and what is a hypercolumn?

A

- Hubel and Wiesel (1955, 1962)
- Small slabs of cortical tissue, 1 mm square (of cortical surface) by 2 mm deep, which may form the basic unit of cortical processing.

The responses of
cells in V1 (of the cat) were first studied systematically by Hubel and Wiesel (1955, 1962), who received
a Nobel prize for their work.

When a recording electrode was inserted perpendicular to the cortical
surface, the cells tended to have similar receptive field locations, to prefer the same orientation, and
to receive most of their input from the same eye. Hubel and Wiesel called these ‘orientation columns’.

In contrast, when the electrode was inserted parallel to the cortical surface, the receptive field
location, preferred orientation or dominant eye (the eye which gave most input to the cell) would vary
systematically, depending on which direction the electrode moved. Hubel and Wiesel interpreted
these patterns in terms of ‘hypercolumns’ - small slabs of cortical tissue 1mm square (of cortical
surface) by 2mm deep, which may form the basic unit of cortical processing.

44
Q

Properties of a hypercolumn

A

All cells in a hypercolumn
have approximately the same receptive field locations, with neighbouring hypercolumns coding
neighbouring regions of visual space. Each hypercolumn varied along two dimensions parallel to the cortical surface: along one dimension in terms of eye dominance, and along the other dimension in
terms of orientation.

Neighbouring groups of cells along this second dimension coded orientations
that were around 10-15 degrees apart, and the preferred orientations of cells varied systematically
forming a ‘pinwheel’. Cells also varied within a hypercolumn with regard to their preferred spatial
frequencies. Further subdivisions of cells in V1: Simple cells – responses specific to the size, orientation
and position of a stimulus. Complex cells – respond to moving stimuli, are not selective for position, but
are selective for direction of motion and orientation. Hypercomplex (‘end-stopped’) cells – respond to corners
or line ends moving in their preferred direction.

Beyond these properties, cells were also found sensitive to colour. Wong-Riley (1979) stained the
cortex to reveal patterns of cytochrome oxidase ‘blobs’ within hypercolumns. Cells within these blobs
(elegantly termed ‘blob cells’!) primarily code colour information, and show little selectivity for
orientation, or sensitivity to high spatial frequencies. In contrast, cells in between these blobs (termed
interblob cells) show responses to high spatial frequencies and colour edges, but not to patches of
colour.
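The two-dimensional organisation described in card 44 can be sketched as a toy data structure. This is purely illustrative (real hypercolumns are continuous tissue, and the 15-degree step is only an approximation of the reported 10-15 degree spacing):

```python
# One toy hypercolumn: all cells share a receptive-field location and are
# organised along two dimensions, eye dominance and preferred orientation.
EYES = ("left", "right")
ORIENTATIONS = range(0, 180, 15)  # preferred orientations in degrees

hypercolumn = {
    (eye, ori): {
        "receptive_field_deg": (2.0, 1.5),  # shared location in the visual field
        "dominant_eye": eye,
        "preferred_orientation": ori,
    }
    for eye in EYES
    for ori in ORIENTATIONS
}

# Two eye-dominance slabs crossed with twelve orientation columns:
n_cells = len(hypercolumn)
shared_rf = {cell["receptive_field_deg"] for cell in hypercolumn.values()}
```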

45
Q

Summarise visual cortical areas V1, V2, V4/V8 and V5

A

V1: Orientation, some colour
V2: Codes visual surfaces, including illusory contours
V4/V8: Colour
V5: High-level motion processing

Many other such areas have been studied, but here we also discuss V2, V5 (MT) and V4/V8, which are
extrastriate visual areas (‘extrastriate’ means anterior to the Line of Gennari, an anatomical marker
for the border between V1 and V2).

V2: Unlike V1, which only codes features that are in the stimulus, V2 codes visual surfaces, including illusory contours.

V4/V8: Zeki (1978) found neurons in visual cortex
that code colour responses in a more sophisticated way than in V1, showing evidence of colour
constancy. Hadjikhani et al. (1998) claim that it is a new region, V8, that is the crucial region for high-level colour coding, although Zeki claims V8 is not new but rather part of the V4 complex he had
already described.

The lesion that gives rise to ‘cerebral achromatopsia’ (cortical colour blindness)
involves damage to inferior occipitotemporal cortex, which had also been established in the 19th
Century (Verrey, 1888). Functional imaging studies provide supporting evidence of this region’s
important role in colour perception.

V5: Also called MT (middle temporal area), V5 is involved in high-level motion processing. Functional brain imaging and transcranial magnetic stimulation
(TMS) studies point to the importance of this region for motion perception. Zihl, von Cramon & Mai
(1983) studied a patient with akinetopsia (motion blindness) after damage to V5, who could see
changes in the stimulus but not perceive them as motion.

46
Q

Explain: ‘conscious vision represents the tip of
the information-processing iceberg’.

A

We are not aware of the many calculations performed by early visual mechanisms, only of the results of later interpretative processes (‘the end product’). For
example, we are not aware of the size of an object’s image on the retina, only of the brain’s conscious
interpretation of the object’s size.

Similarly, we are not aware of the amount of light coming from an
object - only of the brain’s interpretation of this information as illumination and lightness.

47
Q

How many objects are we consciously aware of at once, and how can this be investigated?

A

Conscious vision is also extremely limited in terms of how many objects we are aware of at any time.
While our intuition may tell us that we perceive many objects in a scene at once, phenomena such as
change blindness demonstrate that we are aware of only some of the objects in a scene at a time. To
what extent, then, can we process all objects in the visual scene at once?

The most powerful technique
for investigating this question is the ‘visual search’ paradigm. When an item differs from all other items in a
display because of a single unique feature that is processed at an early stage of vision (e.g. colour,
edge orientation), it perceptually ‘pops out’ from the other items. Accordingly, the time taken to find
that odd-one-out item is short and independent of the number of other items in the display. This
phenomenon is termed ‘efficient, parallel search’ and implies that each of the items in the display was
processed simultaneously in terms of the feature that distinguished the odd-one-out. However,
higher-level properties such as letter identity and face configuration behave differently from the
features of early vision: search for them is typically slower and depends on the number of items in the display.
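The contrast between pop-out and inefficient search is usually summarised by the slope of reaction time plotted against set size: roughly flat for pop-out, positive otherwise. A sketch with made-up numbers:

```python
def predicted_rt_ms(set_size, base_ms, slope_ms_per_item):
    # Standard linear summary of visual-search data:
    # RT = baseline + slope * number of items in the display.
    return base_ms + slope_ms_per_item * set_size

set_sizes = [4, 8, 16, 32]

# Pop-out (e.g. a red item among green): slope ~ 0 ms/item,
# so RT is independent of how many distractors are shown.
popout = [predicted_rt_ms(n, 450, 0) for n in set_sizes]

# Inefficient search (e.g. finding a letter by identity): positive
# slope, so RT grows with the number of items.
ineff = [predicted_rt_ms(n, 450, 25) for n in set_sizes]
```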
