From receptor signal to perception Flashcards
what do you look at and what do you see?
- 1. Looking: photoreceptor signals change very quickly and are noisy
○ Movements (eyes, head and body) change and stabilise gaze for very short periods of time; fast; main function is filtering, decomposing images into elementary features in peripheral layers of the visual system, with early segregation of signals fed into parallel visual streams for unconscious and conscious visual perception
○ Very little to do with what we consciously perceive
- 2. Actively looking and seeing: unconscious perception can be fast or slow, filtered depending on task (pathway), can be invariant, selective, is less noisy
○ Guides fast and slow actions, analyses scenes and objects; changes are perceived through task outcomes (e.g. consequences or a behavioural response, a change in internal state or top-down control (e.g. gaze, conscious decision-making in humans))
- 3. Actively seeing: conscious visual perception in humans is stable, slow, invariant, affected by filtering but less selective than unconscious perception, low noise
○ Only some pronounced eye, head or body movements result in a perceived change of the viewed scene or object; conscious vision seems relevant for some specific tasks and can exert some top-down control but mostly results from processes at levels 1 and 2, some of which are hard-wired
• Humans: difficult to generate evidence that clearly separates domains 2 and 3
• Animals: level 3 not assumed
Try to separate conscious and unconscious perception to see what special tasks conscious vision has to accomplish
Early stages of visual processing involve edge filtering and enhancement
• Lateral inhibition (LI, found in the centre-surround receptive fields (CS RFs) of many ganglion cells in the retina) can explain hard-wired phenomena in perception, such as the Hermann grid
• We see grey dots that are not there
• During eye movements the same part of the image is viewed by foveal receptors and ganglion cells or peripheral ones
• The CS RFs are smallest in the fovea (highest spatial resolution) and larger the further in the periphery of the retina they are (dots appear grey) - dominated by black tiles
• Move fovea to where you want to explore - dots become white
• Subtract signals of centre from surround
• Vary the size of the squares and you get the same effect - if the illusion were completely hard-wired into retinal ganglion cells, this shouldn't happen - probably further processing at the level of the LGN and cortex - other types of receptive fields - a modern criticism
Looking at the grid doesn't generate any useful behaviour - another criticism: these are artificial scenes/patterns
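The centre-surround account above can be sketched numerically. Below is a minimal, illustrative difference-of-Gaussians (DoG) model (the grid size and sigmas are arbitrary assumptions, not physiological values) applied to a synthetic Hermann grid: an RF centred on a street intersection has more white (four street arms) falling in its surround, so its net response is weaker than mid-street - the grey dots.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic Hermann grid: black tiles (0) separated by white streets (1).
size, tile, street = 90, 25, 5
period = tile + street
img = np.zeros((size, size))
for i in range(0, size, period):
    img[i:i + street, :] = 1.0  # horizontal streets
    img[:, i:i + street] = 1.0  # vertical streets

# Centre-surround RF modelled as a difference of Gaussians (DoG):
# narrow excitatory centre minus broad inhibitory surround.
dog = gaussian_filter(img, sigma=2.0) - gaussian_filter(img, sigma=6.0)

# Compare the response at a street intersection with a mid-street point:
# the surround samples more white at the intersection (four street arms),
# so inhibition is stronger and the net response is lower -> grey dot.
intersection = dog[32, 32]          # centre of a street crossing
mid_street = dog[32, 47]            # on a street, between crossings
print(intersection < mid_street)    # True: weaker response at the crossing
```

In this toy model, shrinking both sigmas (mimicking the smaller foveal RFs) makes the two responses converge, consistent with the dots vanishing wherever you fixate.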
see notes
Zanker (2010)
- Mach bands (optical illusion described by physicist Ernst Mach in 1865)
see notes
• From left to right - increasing brightness
• Perceived brightness exaggerated towards the darker or brighter area
• High-contrast edges remove the illusion
• But it can change back - hard-wired into perception
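The Mach-band exaggeration can be sketched in one dimension with the same lateral-inhibition idea (a hedged toy model; the ramp values, kernel length and sigmas are arbitrary choices): a zero-sum centre-minus-surround kernel leaves uniform regions untouched but undershoots at the dark foot of the ramp and overshoots at the bright knee.

```python
import numpy as np

# Luminance profile: dark plateau, linear ramp, bright plateau.
x = np.concatenate([np.full(50, 0.2),
                    np.linspace(0.2, 0.8, 40),
                    np.full(50, 0.8)])

def norm_gauss(n, sigma):
    t = np.arange(n) - n // 2
    g = np.exp(-t**2 / (2 * sigma**2))
    return g / g.sum()

# Zero-sum centre-minus-surround kernel: no response to uniform light.
kernel = norm_gauss(31, 2.0) - norm_gauss(31, 6.0)
response = np.convolve(x, kernel, mode='same')

# Ignore the array edges (convolution boundary effects) and find the bands.
interior = response[20:-20]
dark_band = interior.argmin() + 20    # undershoot near the dark foot (~index 50)
bright_band = interior.argmax() + 20  # overshoot near the bright knee (~index 90)
```

The dark band sits on the dark side and the bright band on the bright side of the ramp, matching how the bands are perceived at the edges of the gradient.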
see notes
• On-centre OFF-surround example
• Ganglion cells with CS RFs:
○ Enhance edges
○ Compress information (only respond when the stimulus is in the RF)
○ Filter info according to spatial frequencies (diff-sized RFs and varying sensitivity across the retina)
• A boundary falls somewhere on the retina
• Direct the fovea to it
• Eyes move, so there are a number of signals that vary over time
• A number of CS RFs process the edge
• CS RFs don't respond/change signal when illumination is uniform
• When illumination changes, get either an inhibitory or excitatory signal
• Ganglion cells have an average resting frequency
• Delta = change in frequency; 0 = no change in frequency
In the resting state the cell still fires
see notes
• Thinner stripes - higher spatial freq - can fit more stripes in
• One way to characterise the quality of vision - a refined perceptual test
Early stages of visual processing involve edge filtering and enhancement research
Bakshi and Ghosh (2020)
Skottun (2020)
Khosravy et al. (2017)
Ghosh et al. (2006)
Misson and Anderson (2017)
De Valois et al. (2014)
Bakshi and Ghosh (2020)
A novel modification of the Hermann grid stimulus is demonstrated. It is shown that introduction of extremely tiny squares at the corners of the grid squares in the classical stimulus, keeping the position and orientation of the grid squares fixed, can reduce the strength and even completely wipe out the illusory dark spots. The novel perturbing stimulus was investigated further and a gray-level intensity threshold was measured for the tiny corner squares beyond which the illusory blobs disappear completely. It was also found that this threshold remains practically unchanged over a wide range of grid square size for an observer.
Skottun (2020)
The Hermann Grid is made up of a series of vertical and horizontal bars. The Hermann Grid Illusion consists in the brightness of the intersections appearing different from that of the sections between intersections in spite of the luminance being the same. In the case of a light grid on a dark background the intersections tend to appear darker than the parts between intersections. It is here pointed out, in two different ways, that the stimulus power is less for the parts of the grid located at intersections than for parts of the grid between intersections. This is all in the stimuli and does not depend on vision or the visual system. Were we to assume that a stronger stimulus gives a brighter appearance this would make the parts between intersections appear brighter than the parts of the grid at intersections. This would be consistent with the Hermann Grid Illusion.
Khosravy et al. (2017)
The perceptual adaptation of the image (PAI) is introduced by inspiration from Chevreul-Mach Bands (CMB) visual phenomenon. By boosting the CMB assisting illusory effect on boundaries of the regions, PAI adapts the image to the perception of the human visual system and thereof increases the quality of the image. PAI is proposed for application to standard images or the output of any image processing technique. For the implementation of the PAI on the image, an algorithm of morphological filters (MFs) is presented, which geometrically adds the model of CMB effect. Numerical evaluation by improvement ratios of four no-reference image quality assessment (NR-IQA) indexes approves PAI performance where it can be noticeably observed in visual comparisons. Furthermore, PAI is applied as a postprocessing block for classical morphological filtering, weighted morphological filtering, and median morphological filtering in cancelation of salt and pepper, Gaussian, and speckle noise from MRI images, where the above specified NR-IQA indexes validate it. PAI effect on image enhancement is benchmarked upon morphological image sharpening and high-boost filtering.
Ghosh et al. (2006)
A re-scan of the well-known Mach band illusion has led to the proposal of a Bi-Laplacian of Gaussian operation in early vision. Based on this postulate, the human visual system at low-level has been modeled from two approaches that give rise to two new tools. On one hand, it leads to the construction of a new image sharpening kernel, and on the other, to the explanation of more complex brightness-contrast illusions and the possible development of a new algorithm for robust visual capturing and display systems.
Misson and Anderson (2017)
It is generally believed that humans perceive linear polarized light following its conversion into a luminance signal by diattenuating macular structures. Measures of polarization sensitivity may therefore allow a targeted assessment of macular function. Our aim here was to quantify psychophysical characteristics of human polarization perception using grating and optotype stimuli defined solely by their state of linear polarization. We show: (i) sensitivity to polarization patterns follows the spectral sensitivity of macular pigment; (ii) the change in sensitivity across the central field follows macular pigment density; (iii) polarization patterns are identifiable across a range of contrasts and scales, and can be resolved with an acuity of 15.4 cycles/degree (0.29 logMAR); and (iv) the human eye can discriminate between areas of linear polarization differing in electric field vector orientation by as little as 4.4 degrees. These findings, which support the macular diattenuator model of polarization sensitivity, are unique for vertebrates and approach those of some invertebrates with a well-developed polarization sense. We conclude that this sensory modality extends beyond Haidinger’s brushes to the recognition of quantifiable spatial polarization-modulated patterns. Furthermore, the macular origin and sensitivity of human polarization pattern perception makes it potentially suitable for the detection and quantification of macular dysfunction.
De Valois et al. (2014)
The detectability of luminance modulated gratings of different spatial frequencies was determined at five different adaptation levels for three macaque monkeys and five normal human observers. The human and macaque observers gave results which were identical in form and similar in absolute values. Both species showed optimal contrast sensitivity in the middle spatial frequency range of about 3–5 c/deg with both low and high frequency attenuation, at high light levels. Contrast sensitivity to high frequencies dropped rapidly as adaptation levels were lowered, with a resulting shift in peak sensitivity to lower spatial frequencies. At the lowest adaptation level studied, neither macaque nor human observers showed any low frequency attenuation in the spatial luminance contrast sensitivity function.
Filtering of images and scenes at different spatial frequencies
see notes
• Filter different spatial frequencies
• Place diff-sized stripes over an RF in the fovea - get a number of diff signals, and the ratio between white and black changes
• One way neurons with CS RFs code to filter out diff signals at high and low spatial freqs
• Changing frequency is useful for recognising emotions (Mona Lisa image)
• Neurons that record the intensity of high spatial freq components in an image are useful for recognising particular features - to know what the features are and what the small diffs in facial expression are
• Takes lots of processing power
• Time is a crucial factor in determining whether certain brain systems will use visual input filtered through low or high spatial freq filters
• Not mutually exclusive - evidence at the neuronal level for both
At the first stage, ganglion cells with CS RFs; then more complicated as you look at more central brain layers
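How RF size acts as a spatial-frequency filter can be illustrated with a small sketch (the sigmas, grating periods and 1-D simplification are all assumptions, not measured values): a small "foveal" DoG passes a fine grating that a large "peripheral" DoG averages away, while the large DoG still responds to a coarse grating.

```python
import numpy as np

def dog_amplitude(signal, s_centre, s_surround, n=101):
    """Response amplitude of a 1-D DoG receptive field to a signal."""
    t = np.arange(n) - n // 2
    def g(s):
        k = np.exp(-t**2 / (2 * s * s))
        return k / k.sum()
    kernel = g(s_centre) - g(s_surround)      # zero-sum centre-surround
    r = np.convolve(signal, kernel, mode='same')
    return r[n:-n].std()                      # amplitude away from the edges

x = np.arange(600)
fine = np.sign(np.sin(2 * np.pi * x / 10))    # high spatial frequency grating
coarse = np.sign(np.sin(2 * np.pi * x / 120)) # low spatial frequency grating

small_rf = dog_amplitude(fine, 1.5, 4.5)      # "foveal" RF: resolves fine stripes
large_rf = dog_amplitude(fine, 12.0, 36.0)    # "peripheral" RF: blurs them out
```

In this sketch `small_rf` is much larger than `large_rf`: a population of RF sizes therefore decomposes the same image into separate low- and high-frequency channels, the idea behind coarse vs fine (Mona Lisa) face information.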
Filtering of images and scenes at different spatial frequencies research
Estevez et al. (2016)
Logunova and Shelepina (2015)
Estevez et al. (2016)
The use of apodizing or superresolving filters improves the performance of an optical system in different frequency bands. This improvement can be seen as an increase in the OTF value compared to the OTF for the clear aperture.
In this paper we propose a method to enhance the contrast of an image in both its low and its high frequencies. The method is based on the generation of a synthetic Optical Transfer Function, by multiplexing the OTFs given by the use of different non-uniform transmission filters on the pupil. We propose to capture three images, one obtained with a clear pupil, one obtained with an apodizing filter that enhances the low frequencies and another one taken with a superresolving filter that improves the high frequencies. In the Fourier domain the three spectra are combined by using smoothed passband filters, and then the inverse transform is performed. We show that we can create an enhanced image better than the image obtained with the clear aperture. To evaluate the performance of the method, bar tests (sinusoidal tests) with different frequency content are used. The results show that a contrast improvement in the high and low frequencies is obtained.
Logunova and Shelepina (2015)
This paper discusses the process of interpreting scenes with the image of a human face, subjected to processing with spatial-frequency filters that simulate the characteristics of the receptive fields of the neurons of the primary visual cortex. A technique was used that makes it possible to give a quantitative evaluation of the interpretation of an image while carrying out tasks of identifying a period of emotional stress and the age-related features of the person. It was shown that, besides the horizontal components of the spatial-frequency spectrum, a substantial role is played in the process of interpreting images of faces by the diagonal components. Even though the visual system is less sensitive to the diagonal components than to the horizontal ones, the information contained in them makes it possible not only to distinguish age-related features, but also to give the supplementary information needed to identify an unfamiliar person when encountering that person again.
Inter- and intraspecific variations in acuity and contrast sensitivity
(Owsley, 2016)
see notes
• Ghim and Hodos (2006)
○ Range of diff filter functions that differ between individuals
○ Make inferences about how the world will look diff to diff individuals/species
see notes
Inter- and intraspecific variations in acuity and contrast sensitivity research
Gruber et al. (2013)
Billino and Pilz (2019)
Potier et al. (2018)
Feng et al. (2017)
Gruber et al. (2013)
Objective: In this article, we review the impact of vision on older people's night driving abilities. Driving is the preferred and primary mode of transport for older people. It is a complex activity where intact vision is seminal for road safety. Night driving requires mesopic rather than scotopic vision, because there is always some light available when driving at night. Scotopic refers to night vision, photopic refers to vision under well-lit conditions, and mesopic vision is a combination of photopic and scotopic vision in low but not quite dark lighting situations. With increasing age, mesopic vision decreases and glare sensitivity increases, even in the absence of ocular diseases. Because of the increasing number of elderly drivers, more drivers are affected by night vision difficulties. Vision tests, which accurately predict night driving ability, are therefore of great interest.
Methods: We reviewed existing literature on age-related influences on vision and vision tests that correlate with or predict night driving ability.
Results: We identified several studies that investigated the relationship between vision tests and night driving. These studies found correlations between impaired mesopic vision or increased glare sensitivity and impaired night driving, but no correlation was found among other tests; for example, useful field of view or visual field. The correlation between photopic visual acuity, the most commonly used test when assessing elderly drivers, and night driving ability has not yet been fully clarified.
Conclusions: Photopic visual acuity alone is not a good predictor of night driving ability. Mesopic visual acuity and glare sensitivity seem relevant for night driving. Due to the small number of studies evaluating predictors for night driving ability, further research is needed.
Billino and Pilz (2019)
Research on functional changes across the adult lifespan has been dominated by studies related to cognitive processes. However, it has become evident that a more comprehensive approach to behavioral aging is needed. In particular, our understanding of age-related perceptual changes is limited. Visual motion perception is one of the most studied areas in perceptual aging and therefore, provides an excellent domain on the basis of which we can investigate the complexity of the aging process. We review the existing literature on how aging affects motion perception, including different processing stages, and consider links to cognitive and motor changes. We address the heterogeneity of results and emphasize the role of individual differences. Findings on age-related changes in motion perception ultimately illustrate the complexity of functional dynamics that can contribute to decline as well as stability during healthy aging. We thus propose that motion perception offers a conceptual framework for perceptual aging, encouraging a deliberate consideration of functional limits and resources emerging across the lifespan.
Potier et al. (2018)
Animals are thought to use achromatic signals to detect small (or distant) objects and chromatic signals for large (or nearby) objects. While the spatial resolution of the achromatic channel has been widely studied, the spatial resolution of the chromatic channel has rarely been estimated. Using an operant conditioning method, we determined (i) the achromatic contrast sensitivity function and (ii) the spatial resolution of the chromatic channel of a diurnal raptor, the Harris's hawk Parabuteo unicinctus. The maximal spatial resolution for achromatic gratings was 62.3 c deg−1, but the contrast sensitivity was relatively low (10.8–12.7). The spatial resolution for isoluminant red-green gratings was 21.6 c deg−1—lower than that of the achromatic channel, but the highest found in the animal kingdom to date. Our study reveals that Harris's hawks have high spatial resolving power for both achromatic and chromatic vision, suggesting the importance of colour vision for foraging. By contrast, similar to other bird species, Harris's hawks have low contrast sensitivity possibly suggesting a trade-off with chromatic sensitivity. The result is interesting in the light of the recent finding that double cones—thought to mediate high-resolution vision in birds—are absent in the central fovea of raptors.
Feng et al. (2017)
In humans, geometrical illusions are thought to reflect mechanisms that are usually helpful for seeing the world in a predictable manner. These mechanisms deceive us given the right set of circumstances, correcting visual input where a correction is not necessary. Investigations of non-human animals' susceptibility to geometrical illusions have yielded contradictory results, suggesting that the underlying mechanisms with which animals see the world may differ across species. In this review, we first collate studies showing that different species are susceptible to specific illusions in the same or reverse direction as humans. Based on a careful assessment of these findings, we then propose several ecological and anatomical factors that may affect how a species perceives illusory stimuli. We also consider the usefulness of this information for determining whether sight in different species might be more similar to human sight, being influenced by contextual information, or to how machines process and transmit information as programmed. Future testing in animals could provide new theoretical insights by focusing on establishing dissociations between stimuli that may or may not alter perception in a particular species. This information could improve our understanding of the mechanisms behind illusions, but also provide insight into how sight is subjectively experienced by different animals, and the degree to which vision is innate versus acquired, which is difficult to examine in humans.
can you find the badger?
• Seeing and recognising objects, mates, predators or prey is imp for many tasks
• But visual scenes often crowded (and typically not black and white)
• Contrast enhancement of edges is imp for many visual tasks - objects characterised by their edges
• A major task of the visual system is to segregate objects and backgrounds, automatically and quickly - based on analysis of edges - how fast they move - motion information - happens automatically and quickly - may not be able to influence it easily or at all
Other tasks require further computations in order to extract info - e.g. face recognition task
Insects land preferably at the edges of objects (Egelhaaf et al., 2012; Kang et al., 2012)
• Moths actively choose spot and vary their orientation to align with the lines in the background for better camouflage against avian predators
• Recording natural landing behav of fly on cup - requires lots of coord and body posture control - controlling speed
Land at contrast edges - boundaries of objects
see notes
After landing they can reposition - the main orientation on the bark is signalled by contrast edges
see notes
Insects land preferably at the edges of objects (Egelhaaf et al., 2012; Kang et al., 2012) research
Egelhaaf et al. (2014)
Mauss and Borst (2020)
Kang et al. (2015)
Green et al. (2019)
Egelhaaf et al. (2014)
Despite their miniature brains insects, such as flies, bees and wasps, are able to navigate by highly aerobatic flight maneuvers in cluttered environments. They rely on spatial information that is contained in the retinal motion patterns induced on the eyes while moving around ("optic flow") to accomplish their extraordinary performance. Thereby, they employ an active flight and gaze strategy that separates rapid saccade-like turns from translatory flight phases where the gaze direction is kept largely constant. This behavioral strategy facilitates the processing of environmental information, because information about the distance of the animal to objects in the environment is only contained in the optic flow generated by translatory motion. However, motion detectors as are widespread in biological systems do not represent veridically the velocity of the optic flow vectors, but also reflect textural information about the environment. This characteristic has often been regarded as a limitation of a biological motion detection mechanism. In contrast, we conclude from analyses challenging insect movement detectors with image flow as generated during translatory locomotion through cluttered natural environments that this mechanism represents the contours of nearby objects. Contrast borders are a main carrier of functionally relevant object information in artificial and natural sceneries. The motion detection system thus segregates in a computationally parsimonious way the environment into behaviorally relevant nearby objects and—in many behavioral contexts—less relevant distant structures. Hence, by making use of an active flight and gaze strategy, insects are capable of performing extraordinarily well even with a computationally simple motion detection mechanism.
Mauss and Borst (2020)
○ Optic flow arising from self-motion provides a rich source of information.
○ Optic flow detection and related behaviors have been studied extensively in insects.
○ Translational flow affords spatial vision and estimation of travel speed.
○ Rotational flow mediates estimation and compensation of involuntary course changes.
○ All optic flow-based behaviors likely depend on the same local motion detectors.
Vision is an important sensory modality for navigation in roaming animals. In contrast to most vertebrates, insects usually must cope with low resolution retinal images and the inability to infer spatial features using accommodation or stereovision. However, during locomotion, the retinal input is dominated by characteristic panoramic image shifts, termed optic flow, that depend on self-motion parameters and environmental features. Therefore, optic flow provides a rich source of information guiding locomotion speed as well as the position and orientation of animals over time relative to their surroundings. Here, focusing on flight behavior, we describe the strategies and putative underlying neuronal mechanisms by which insects control their course through processing of visual motion cues.
Kang et al. (2015)
Camouflage can be attained via mechanisms such as background matching (resembling the general background) and disruptive coloration (hindering the detection of an animal's outline). However, despite much conceptual work with artificial stimuli there have to date been few studies of how such camouflage types work in real animals in their natural environments. Here, using avian vision models and image analysis, we tested which concealing mechanisms operate to provide camouflage during behavioral choice of a resting position in 2 bark-resting moths, Hypomecis roboraria and Jankowskia fuscaria. Our results suggest that both species reinforced their crypticity in terms of both background matching and disruptive coloration. However, the detailed mechanisms (such as achromatic/chromatic matching or pattern direction matching) that each species exploits differed between the 2 species. Our results demonstrate that an appropriate behavioral choice of background and body orientation is important to improve camouflage against natural predators, and highlight the mechanisms that confer camouflage to cryptic animals in their natural habitats.
Green et al. (2019)
Camouflage is driven by matching the visual environment, yet natural habitats are rarely uniform and comprise many backgrounds. Therefore, species often exhibit adaptive traits to maintain crypsis, including colour change and behavioural choice of substrates. However, previous work largely considered these solutions in isolation, whereas many species may use a combination of behaviour and appearance to facilitate concealment. Here we show that green and red chameleon prawns (Hippolyte varians) closely resemble their associated seaweed substrates to the vision of predatory fish, and that they can change colour to effectively match new backgrounds. Prawns also select colour-matching substrates when offered a choice. However, colour change occurs over weeks, consistent with seasonal changes in algal cover, whereas behavioural choice of matching substrates occurs in the short-term, facilitating matches within heterogeneous environments. We demonstrate how colour change and behaviour combine to facilitate camouflage against different substrates in environments varying spatially and temporally.
Insects discriminate and generalise stripe patterns and recognise illusory contours (van Hateren et al., 1990)
• Train bees to do more artificial tasks
• Reward with sucrose solution
• New pattern every time get sucrose
• Change in orientation or change in stripe thickness
• Can learn symmetry and asymmetry as feature
• Test with novel stim
• If they can extract that it isn't the pattern that matters but the orientation, they correctly choose the previously rewarded orientation even for stimuli they have never seen
Performance not great with rectangles but they can still recognise orientation
see notes
Insects discriminate and generalise stripe patterns and recognise illusory contours (van Hateren et al., 1990) research
Giurfa et al. (1996)
Roper et al. (2017)
Giurfa et al. (1996)
Symmetrical visual patterns have a salient status in human perception, as evinced by their prevalent occurrence in art, and also in animal perception, where they may be an indicator of phenotypic and genotypic quality. Symmetry perception has been demonstrated in humans, birds, dolphins and apes. Here we show that bees trained to discriminate bilaterally symmetrical from non-symmetrical patterns learn the task and transfer it appropriately to novel stimuli, thus demonstrating a capacity to detect and generalize symmetry or asymmetry. We conclude that bees, and possibly flower-visiting insects in general, can acquire a generalized preference towards symmetrical or, alternatively, asymmetrical patterns depending on experience, and that symmetry detection is preformed or can be learned as a perceptual category by insects, because it can be extracted as an independent visual pattern feature. Bees show a predisposition for learning and generalizing symmetry because, if trained to it, they choose it more frequently, come closer to and hover longer in front of the novel symmetrical stimuli than the bees trained for asymmetry do for the novel asymmetrical stimuli. Thus, even organisms with comparatively small nervous systems can generalize about symmetry, and favour symmetrical over asymmetrical patterns.
Roper et al. (2017)
The ability to generalize over naturally occurring variation in cues indicating food or predation risk is highly useful for efficient decision-making in many animals. Honeybees have remarkable visual cognitive abilities, allowing them to classify visual patterns by common features despite having a relatively miniature brain. Here we ask the question whether generalization requires complex visual recognition or whether it can also be achieved with relatively simple neuronal mechanisms. We produced several simple models inspired by the known anatomical structures and neuronal responses within the bee brain and subsequently compared their ability to generalize achromatic patterns to the observed behavioural performance of honeybees on these cues. Neural networks with just eight large-field orientation-sensitive input neurons from the optic ganglia and a single layer of simple neuronal connectivity within the mushroom bodies (learning centres) show performances remarkably similar to a large proportion of the empirical results without requiring any form of learning, or fine-tuning of neuronal parameters to replicate these results. Indeed, a model simply combining sensory input from both eyes onto single mushroom body neurons returned correct discriminations even with partial occlusion of the patterns and an impressive invariance to the location of the test patterns on the eyes. This model also replicated surprising failures of bees to discriminate certain seemingly highly different patterns, providing novel and useful insights into the inner workings facilitating and limiting the utilisation of visual cues in honeybees. Our results reveal that reliable generalization of visual information can be achieved through simple neuronal circuitry that is biologically plausible and can easily be accommodated in a tiny insect brain.
detecting objects that lack continuous edges
• Early 20th century, School of Gestalt (shape or form) psychology proposed that shape and object perception is underpinned by processes in the mind that are characterised by the Gestalt laws, for example, proximity or similarity (e.g. in colour or size)
• Local information (dots and blobs) is integrated across long distances in the image - illusory edges form global features
• Is slow - requires a lot more computation and eye scans of the whole scene to make sense of the world and resolve ambiguities
After a while the visual system becomes primed
see notes
Making sense of a sparse scene (Saygin et al., 2004)
Adding motion enhances the recognition of shape and action
see notes
• fMRI experiments in humans with point-light biological motion animations show strong activation of lateral and inferior temporal cortex ('what' visual stream) but also inferior frontal cortex, known to be activated by action observation
• Can also flicker them and it gives the same effect
• Adding motion enhances the recognition of shape and action
• From the way the lights move our brain can make a lot of sense of a sparse scene
• Lateral and inferior temporal cortex concerned with making sense of the world
• Frontal cortex important - where mirror neurons are located for observing action
Also activates this part of the brain when observing an action, not performing it
see notes
Making sense of a sparse scene (Saygin et al., 2004) research
Blake and Shiffrar (2007)
Sokolov et al. (2018)
Blake and Shiffrar (2007)
Humans, being highly social creatures, rely heavily on the ability to perceive what others are doing and to infer from gestures and expressions what others may be intending to do. These perceptual skills are easily mastered by most, but not all, people, in large part because human action readily communicates intentions and feelings. In recent years, remarkable advances have been made in our understanding of the visual, motoric, and affective influences on perception of human action, as well as in the elucidation of the neural concomitants of perception of human action. This article reviews those advances and, where possible, draws links among those findings.
Sokolov et al. (2018)
Visual perception of body motion is of substantial value for social cognition and everyday life. By using an integrative approach to brain connectivity, the study sheds light on architecture and functional principles of the underlying cerebro-cerebellar network. This circuitry is organized in a parallel rather than hierarchical fashion. This may explain why body-language reading is rather resilient to focal brain damage but severely affected in neuropsychiatric conditions with distributed network alterations. Furthermore, visual sensitivity to body motion is best predicted by specific top-down feedback to the early visual cortex, as well as functional communication (effective connectivity) and presence of white-matter pathways between the right fusiform gyrus and superior temporal sulcus. The findings allow better understanding of the social brain.
methodological approaches
• Psychophysics - links variation in stimuli with changes in behaviour (e.g. eye and body responses, verbal responses, task acquisition and execution)
• Neuroanatomy - provides info about connectivity in the sensory organs, brain and motor systems
• Theory, philosophy and computational modelling - proposes concepts and tests mechanistic and functional hypotheses, formulates mathematical algorithms to show how neurons do or may interact within and between brain areas
• Functional neurophysiology and neurogenetics - links neural response patterns to connectivity or to behaviour, tests concepts and algorithms
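Psychophysical data of the kind described above are usually summarised by fitting a psychometric function to response data. A minimal sketch, assuming a logistic function for a two-alternative forced-choice detection task (the functional form, threshold and slope values here are illustrative assumptions, not from the lecture):

```python
import math

def psychometric(contrast, threshold=0.05, slope=10.0,
                 guess_rate=0.5, lapse_rate=0.02):
    """Logistic psychometric function for a 2AFC detection task.

    Maps stimulus contrast to probability of a correct response,
    rising from the guess rate (0.5 for two alternatives) towards
    1 - lapse_rate as contrast increases past the threshold.
    All parameter values are illustrative.
    """
    p = 1.0 / (1.0 + math.exp(-slope * (math.log(contrast) - math.log(threshold))))
    return guess_rate + (1.0 - guess_rate - lapse_rate) * p

# Performance rises with contrast: near chance well below threshold,
# near ceiling well above it.
for c in (0.01, 0.05, 0.25):
    print(f"contrast {c:.2f}: P(correct) = {psychometric(c):.3f}")
```

Fitting such a curve to trial data is what lets psychophysics link graded stimulus variation to a measurable behavioural threshold.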
main projections from retina to thalamus, midbrain and cortex
• LGN (cortical pathway)
○ Processes visual info
○ Ganglion cells form the optic nerve, which leaves the eye through the blind spot and transfers signals between the back of the eye and the thalamus at the base of the brain
• Subcortical pathways
○ Pretectum
§ Mediates the pupillary reflex
○ Superior colliculus
§ Controls saccadic eye movements
see notes
the cortical pathway
see notes
thalamus is the principal synaptic relay before sensory information reaches the cerebral cortex
• High input - 10000 ganglion axons
• Thalamus gates and modulates the flow of info to cortex
• Info from different sensory modalities is processed in different areas of the thalamus
• Opportunity for multisensory modulation and interactions
• Not all info reaches cerebral cortex
• LGN composed of several layers
○ Magnocellular laminae receive input from magnocellular ganglion cells (large cell bodies), with separate laminae receiving input from the right and left eye
○ Retinotopic mapping, as well as the eye of origin of the info, is retained in the LGN
○ Further layers receiving input from the retina are the parvocellular layers - also subdivided for the right and left eye
see notes
Topography of primary visual cortex and surrounding areas (Tootell et al.)
• A and B
○ Field sign analysis of retinotopic cortical visual areas from the right and left hemispheres in a single subject
○ In A, anterior is to the left, and posterior to the right
○ In B, this is reversed
○ The field sign maps are based on 2 scans measuring polar angle (rotating thin ray stim) and 2 scans measuring eccentricity (expanding thin ring stim), acquired from echo-planar images in a 3-T scanner, using a bilateral, send-receive quadrature coil
○ Both stim extended 18-25 degrees in eccentricity
• C and D
○ Same data, in cortically ‘inflated’ format, now viewed from a more posterior-inferior vantage point
○ The left panel shows the right hemisphere
○ Human retinotopic areas revealed by the field sign analysis have been labelled (V1 etc.)
○ Cortical areas with a visual field sign (polarity) similar to that in the actual visual field are coded blue, and those areas showing a mirror-reversed field polarity are coded yellow
○ Also labelled is the foveal representation in V1 (black *)
○ Gyri and sulci in the folded state (e.g. A and B) are coded in lighter and darker shades of grey in the inflated format
○ Area V1 is larger than normal, extending well past the lips of the calcarine fissure
○ As in most subjects, the V1 representation of the extrafoveal horizontal meridian lies near the fundus of the calcarine fissure
V1 is important for conscious vision
see notes
Topography of primary visual cortex and surrounding areas (Tootell et al.) research
Kamitani and Tong (2005)
Wenliang and Seitz (2018)
Kamitani and Tong (2005)
The potential for human neuroimaging to read out the detailed contents of a person’s mental state has yet to be fully explored. We investigated whether the perception of edge orientation, a fundamental visual feature, can be decoded from human brain activity measured with functional magnetic resonance imaging (fMRI). Using statistical algorithms to classify brain states, we found that ensemble fMRI signals in early visual areas could reliably predict on individual trials which of eight stimulus orientations the subject was seeing. Moreover, when subjects had to attend to one of two overlapping orthogonal gratings, feature-based attention strongly biased ensemble activity toward the attended orientation. These results demonstrate that fMRI activity patterns in early visual areas, including primary visual cortex (V1), contain detailed orientation information that can reliably predict subjective perception. Our approach provides a framework for the readout of fine-tuned representations in the human brain and their subjective contents.
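The pattern-classification idea behind this result can be illustrated with a toy simulation: each voxel carries a weak but reliable orientation bias, and a classifier trained on average activity patterns can decode the orientation of held-out trials. This sketch uses a nearest-centroid classifier on synthetic data; the voxel counts, noise levels and classifier choice are illustrative assumptions, not the paper's actual fMRI pipeline:

```python
import random

random.seed(0)

N_VOX, N_ORI, N_TRAIN, N_TEST = 50, 8, 40, 20

# Each voxel gets a small random preference for each of 8 orientations,
# mimicking the weak orientation biases of early visual cortex voxels.
bias = [[random.gauss(0, 1) for _ in range(N_VOX)] for _ in range(N_ORI)]

def trial(ori, noise=2.0):
    """Simulate one noisy voxel activity pattern for a given orientation."""
    return [bias[ori][v] + random.gauss(0, noise) for v in range(N_VOX)]

def centroid(patterns):
    """Average pattern across trials (the 'training' step)."""
    return [sum(p[v] for p in patterns) / len(patterns) for v in range(N_VOX)]

centroids = [centroid([trial(o) for _ in range(N_TRAIN)]) for o in range(N_ORI)]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def decode(pattern):
    """Nearest-centroid readout: the orientation whose mean pattern is closest."""
    return min(range(N_ORI), key=lambda o: dist2(pattern, centroids[o]))

# Decode held-out trials; chance performance would be 1/8.
correct = sum(decode(trial(o)) == o for o in range(N_ORI) for _ in range(N_TEST))
accuracy = correct / (N_ORI * N_TEST)
print(f"decoding accuracy: {accuracy:.2f} (chance = {1/N_ORI:.2f})")
```

The point of the toy model: no single voxel discriminates the orientations well, but pooling many weakly biased voxels supports decoding well above chance.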
Wenliang and Seitz (2018)
Understanding visual perceptual learning (VPL) has become increasingly more challenging as new phenomena are discovered with novel stimuli and training paradigms. Although existing models aid our knowledge of critical aspects of VPL, the connections shown by these models between behavioral learning and plasticity across different brain areas are typically superficial. Most models explain VPL as readout from simple perceptual representations to decision areas and are not easily adaptable to explain new findings. Here, we show that a well-known instance of deep neural network (DNN), while not designed specifically for VPL, provides a computational model of VPL with enough complexity to be studied at many levels of analysis. After learning a Gabor orientation discrimination task, the DNN model reproduced key behavioral results, including increasing specificity with higher task precision, and also suggested that learning precise discriminations could transfer asymmetrically to coarse discriminations when the stimulus conditions varied. Consistent with the behavioral findings, the distribution of plasticity moved toward lower layers when task precision increased and this distribution was also modulated by tasks with different stimulus types. Furthermore, learning in the network units demonstrated close resemblance to extant electrophysiological recordings in monkey visual areas. Altogether, the DNN fulfilled predictions of existing theories regarding specificity and plasticity and reproduced findings of tuning changes in neurons of the primate visual areas. Although the comparisons were mostly qualitative, the DNN provides a new method of studying VPL, can serve as a test bed for theories, and assists in generating predictions for physiological investigations.
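The Gabor stimuli used in such orientation-discrimination tasks are simply a sinusoidal grating windowed by a Gaussian envelope. A minimal sketch of generating one; the patch size, wavelength and envelope width are arbitrary illustrative choices:

```python
import math

def gabor(size=64, orientation_deg=45.0, wavelength=8.0, sigma=10.0, phase=0.0):
    """Return a size x size Gabor patch as a list of rows.

    A sinusoidal grating (carrier) at the given orientation is multiplied
    by a circular Gaussian window (envelope); values lie in [-1, 1].
    """
    theta = math.radians(orientation_deg)
    c = size / 2.0
    patch = []
    for y in range(size):
        row = []
        for x in range(size):
            dx, dy = x - c, y - c
            # Coordinate along the grating's modulation axis.
            u = dx * math.cos(theta) + dy * math.sin(theta)
            envelope = math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))
            carrier = math.cos(2.0 * math.pi * u / wavelength + phase)
            row.append(envelope * carrier)
        patch.append(row)
    return patch

g = gabor()
print(len(g), len(g[0]))   # 64 64
```

Varying `orientation_deg` by a large or small step is what makes the discrimination task coarse or precise, the variable the paper links to where plasticity occurs in the network.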