ML3 Flashcards
Why study early vision? Name 3 reasons
- Image processing (particularly object recognition) is a common application of artificial deep networks
- Early vision is very thoroughly investigated:
  - Visual input is easy to control
  - There are good animal models of human vision
  - Computational aspects are well understood
- DCNNs are commonly used as simulations of neural processing
We believe that very similar principles operate in networks in the brain, such as the retina. Why?
We already saw how the retina separates images into multiple feature maps for different colours and spatial frequencies.
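As an illustration (not part of the original material), separating an image into low- and high-spatial-frequency feature maps can be sketched with a Gaussian low-pass filter; the function name and parameter values here are hypothetical:

```python
import numpy as np

def gaussian_blur(image, sigma=2.0, radius=4):
    """Separable Gaussian low-pass filter (output same size, edge-padded)."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()  # normalise so uniform regions are unchanged
    padded = np.pad(image, radius, mode='edge')
    # filter rows, then columns (the 2-D Gaussian is separable)
    rows = np.apply_along_axis(lambda r: np.convolve(r, g, mode='valid'), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, g, mode='valid'), 0, rows)

rng = np.random.default_rng(0)
image = rng.random((32, 32))

low = gaussian_blur(image)    # low-spatial-frequency feature map
high = image - low            # high-spatial-frequency feature map
# the two maps together reconstruct the input: low + high == image
```

The same idea extends to colour: each channel of the input would get its own pair of frequency maps.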
Organisation of early visual cortex. Explain the process
The information from the two eyes is combined. The left half of each retina projects to V1 in the left hemisphere, and the right half to V1 in the right hemisphere.
Do spatial relationships from the image become spatial relationships in the cortex?
Yes
Why is it important that spatial relationships from the image become spatial relationships in the cortex?
- This allows filtering and integration by dendritic trees with a limited cortical extent, i.e. a limited spatial extent in the feature maps.
- It also allows interaction over minimal distances: cells that wire together lie together for efficient interactions.
Name the 3 different cell groups carrying information from the retina
parasol, midget and bistratified cells
Are the parasol, midget and bistratified cells held in slightly different cell populations at different depths or layers in the grey matter?
Yes
Are grey matter layers equivalent to neural network layers?
No, but here we do see several stages resembling neural network layers: stages at different numbers of synaptic steps from the photoreceptor input are sometimes found in different grey matter layers of the same cortical area.
Do the spatial relationships allow these neurons to interact most easily when they are carrying information on the same feature type, as these are closest together?
Yes
Where do responses to specific edge orientations first emerge?
In V1. These cells respond when an edge has a specific orientation (the preferred orientation) and is shown in a specific position.
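Such orientation-selective responses are often modelled with a Gabor filter (a Gaussian-windowed sinusoid). The following is a minimal sketch with illustrative parameter values, not the course's implementation:

```python
import numpy as np

def gabor(size, theta, freq=0.2, sigma=3.0):
    """Gabor filter: a Gaussian envelope times a sinusoid at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

# Gratings at two orientations (hypothetical stimuli)
half = 10
y, x = np.mgrid[-half:half + 1, -half:half + 1]
vertical = np.cos(2 * np.pi * 0.2 * x)       # vertical grating (varies along x)
horizontal = np.cos(2 * np.pi * 0.2 * y)     # horizontal grating (varies along y)

f = gabor(21, theta=0.0)                     # filter preferring vertical gratings
resp_pref = np.abs((f * vertical).sum())     # response at the preferred orientation
resp_orth = np.abs((f * horizontal).sum())   # response at the orthogonal orientation
print(resp_pref > resp_orth)
```

The filter responds strongly only when the stimulus orientation matches its preferred orientation at its position, mirroring the position- and orientation-specific responses described above.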
Are the filters operating on image inputs?
No, the inputs are already outputs of the previous layer of filters
Why is it computationally useful that neurones that have similar responses are found together?
If we have a filter that will accept a range of orientations, the neuron implementing this filter can synapse with a group of nearby neurons.
Orientation selectivity
Where is contrast initially computed in an orientation-independent filter?
In the retinal ganglion cell
Artificial DCNNs often avoid this step, going directly from image to orientation
Where are the orientation-selective responses computed?
In V1, by operations combining these retinal ganglion cell (RGC) outputs (not the image content directly)
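The two stages above can be sketched in a few lines: an orientation-independent centre-surround (difference-of-Gaussians) filter stands in for the retinal ganglion cells, and a V1-style cell gains orientation selectivity purely by pooling those RGC-like outputs along a line. Kernel sizes and sigmas are illustrative assumptions:

```python
import numpy as np

def dog_kernel(size=7, sigma_c=1.0, sigma_s=2.0):
    """Centre-surround (difference-of-Gaussians) filter: orientation-independent,
    a rough model of a retinal ganglion cell receptive field."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = x**2 + y**2
    k = (np.exp(-r2 / (2 * sigma_c**2)) / sigma_c**2
         - np.exp(-r2 / (2 * sigma_s**2)) / sigma_s**2)
    return k - k.mean()          # zero mean: no response to uniform regions

def filter_map(image, k):
    """Apply the kernel at every valid position (2-D correlation)."""
    kh, kw = k.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i + kh, j:j + kw] * k).sum()
    return out

image = np.zeros((20, 20))
image[:, 10:] = 1.0                       # vertical luminance edge
rgc = filter_map(image, dog_kernel())     # orientation-independent RGC outputs

# A V1-style cell combines RGC outputs along a line; its orientation preference
# comes from *which* outputs it pools, not from reading the image directly.
vertical_cell = np.abs(rgc.sum(axis=0)).max()    # pool down each column
horizontal_cell = np.abs(rgc.sum(axis=1)).max()  # pool along each row
print(vertical_cell > horizontal_cell)
```

The vertically pooling cell responds far more strongly to the vertical edge, even though no single RGC output carries any orientation information.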
What happens with the orientation preferences?
They gradually change across the cortex, at a much finer scale than the spatial visual field maps