eyesight 3 Flashcards

1
Q

The way that photoreceptors converge onto bipolar cells and RGCs accounts for:

A
1. How much spatial detail we can see (resolution or acuity)
2. How little light we need to see (sensitivity)
* Usually, there is a trade-off between resolution and sensitivity

2
Q

How do eye doctors measure visual acuity?

A

Snellen chart: invented by eye doctor Herman Snellen (1862)
* Read letters, decreasing in size until you make several errors

Based on the ability to see Snellen letters. Letter size = 5 × stroke size (5-to-1 ratio: the height and width of each letter are five times the thickness of the strokes and gaps that define it)
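The 5:1 geometry pins down a physical letter size once a viewing distance is fixed. A minimal sketch, assuming the standard convention that a 20/20 letter subtends 5 minutes of arc (so each stroke subtends 1 arcmin) at 20 ft:

```python
import math

def snellen_letter_height_m(distance_m=6.096, letter_arcmin=5.0):
    """Physical height of a letter that subtends `letter_arcmin`
    minutes of arc at the given viewing distance (20 ft = 6.096 m)."""
    return distance_m * math.tan(math.radians(letter_arcmin / 60.0))

# By convention a 20/20 letter subtends 5 arcmin, so each 1-arcmin
# stroke is 1/5 of the letter height (the 5-to-1 ratio above)
letter = snellen_letter_height_m()                   # ~8.9 mm
stroke = snellen_letter_height_m(letter_arcmin=1.0)  # ~1.8 mm
```

At these tiny angles the tangent is effectively linear, which is why the stroke comes out at almost exactly one fifth of the letter height.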

3
Q

How do eye doctors define visual acuity?

A

Visual acuity = (distance at which the patient can just identify the letters) / (distance at which a person with "normal" vision can just identify them)

Nowadays, patients sit at 20 ft and the letter size is altered (e.g., 20/200 means you need to be 20 ft away to see something that a person with normal vision can see at 200 ft)
Normal vision = 20/20
Legal blindness = 20/200
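The Snellen fraction is literally a ratio, so it reduces to a single decimal score. A minimal sketch:

```python
def decimal_acuity(test_distance_ft, normal_distance_ft):
    """Snellen fraction expressed as a decimal.
    20/20 -> 1.0 (normal vision); 20/200 -> 0.1 (legal blindness)."""
    return test_distance_ft / normal_distance_ft

normal = decimal_acuity(20, 20)        # 1.0
legally_blind = decimal_acuity(20, 200)  # 0.1
```

Values below 1.0 mean worse-than-normal acuity; values above 1.0 (e.g., 20/15) mean better than normal.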

4
Q

How do vision scientists measure visual acuity? (gratings)

A

Identify the smallest visual angle of one cycle of a grating that we can perceive

Grating: alternating cycles of light and dark bars

Visual angle: the angle subtended by an object on the retina; the angle formed by lines going from the top and bottom (or left and right) of a cycle on the page, passing through the centre of the lens and ending on the retina (e.g., a thumb held up at arm's length takes up about 2 degrees of visual angle)
* 1 cm ≈ 1 degree at a viewing distance of 57 cm
* The smallest resolvable gratings have cycles of about 0.017 degree

Cycle (for a grating): one repetition of a black and a white stripe
Contrast: the difference in illumination between an object and the background, or between lighter and darker parts of the same object
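The 57 cm rule of thumb and the contrast definition both reduce to one-line formulas. A minimal sketch, using the standard full-angle formula for visual angle and the Michelson formula (one standard way to quantify grating contrast, not stated explicitly in the card):

```python
import math

def visual_angle_deg(size_cm, distance_cm):
    """Full angle subtended by an object: 2 * atan(size / (2 * distance))."""
    return math.degrees(2 * math.atan(size_cm / (2.0 * distance_cm)))

def michelson_contrast(l_max, l_min):
    """(Lmax - Lmin) / (Lmax + Lmin): 0 for a uniform field, 1 at maximum."""
    return (l_max - l_min) / (l_max + l_min)

angle = visual_angle_deg(1, 57)   # ~1.0 degree: the rule of thumb above
c = michelson_contrast(100, 80)   # ~0.11 for mildly different luminances
```

The 57 cm distance is chosen precisely because it makes 1 cm subtend almost exactly 1 degree.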

5
Q

Limit of spatial vision is determined primarily by:

A

spacing of photoreceptors in the retina

6
Q

How does contrast affect acuity?

Spatial frequency:

A

number of times a pattern repeats in each unit of space (# of cycles/degree)

  • Varying frequency alters level of detail
  • Varying amplitude alters contrast
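The two knobs in the bullets above map directly onto the parameters of a sine wave. A minimal sketch of a 1-D grating, where frequency controls detail and amplitude controls contrast:

```python
import math

def grating_luminance(x_deg, freq_cpd, amplitude, mean=0.5):
    """Luminance of a 1-D sine grating at position x (in degrees).
    freq_cpd (cycles/degree) sets the level of detail; amplitude sets
    the contrast around the mean luminance."""
    return mean + amplitude * math.sin(2 * math.pi * freq_cpd * x_deg)

# A 4 cycles/degree grating repeats every 0.25 degrees; a quarter-cycle
# in (x = 1/16 degree) the sine is at its peak
peak = grating_luminance(0.0625, freq_cpd=4, amplitude=0.5)    # 1.0
trough = grating_luminance(0.1875, freq_cpd=4, amplitude=0.5)  # 0.0
```

Setting `amplitude=0` yields a uniform gray field (zero contrast) at any frequency.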
7
Q

To detect fine gratings, we need at least:

A

two cones per cycle — otherwise, detail is lost.

Limit of Spatial Vision = Photoreceptor Spacing
Left Panel: When dark and light bars of a grating fall on different cones, the visual system can detect the pattern — we perceive stripes.

Right Panel: When the bars fall such that each cone receives both black and white (i.e., the pattern is too fine), cones respond the same → we perceive gray, not stripes.

8
Q

Gabor Patch test and result

A

A Gabor patch is a visual stimulus used in vision science to test contrast sensitivity.
- It looks like a fuzzy, striped grating (black-and-white ripples) that gradually fades out at the edges.

It combines:
- A sine wave grating (to test spatial frequency)
- A Gaussian envelope (to localize it in space)

Goal: find the minimum contrast (threshold) needed to detect the Gabor patch at different spatial frequencies.

Step-by-step:
- Present participants with a Gabor patch at a specific spatial frequency (e.g., thin or thick stripes).
- Gradually adjust the contrast until the participant just barely sees it; that is the threshold.
- Repeat this process across many frequencies (low → high).
- Plot the reciprocal of the threshold (i.e., sensitivity = 1 / threshold) for each frequency.

The result: the human contrast sensitivity function (CSF)
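The threshold-to-sensitivity step can be sketched in a few lines. The threshold values here are hypothetical, illustrative numbers, not real psychophysical data:

```python
# Hypothetical contrast thresholds (fraction of full contrast) measured
# at several spatial frequencies (cycles/degree) -- illustrative only
thresholds = {0.5: 0.05, 2.0: 0.01, 8.0: 0.005, 16.0: 0.02, 30.0: 0.3}

# Sensitivity is the reciprocal of threshold; plotting sensitivity
# against frequency traces out the CSF
sensitivity = {f: 1.0 / t for f, t in thresholds.items()}
peak_freq = max(sensitivity, key=sensitivity.get)  # mid frequencies win
```

With numbers shaped like these, sensitivity peaks at a medium frequency and falls off at both extremes, reproducing the inverted-U shape of the CSF.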

9
Q

Contrast Sensitivity Function (CSF)

A

Contrast Threshold: Smallest amount of contrast required to detect a pattern

The resulting graph shows sensitivity vs. spatial frequency.

Typically looks like an inverted U-shaped curve:

-We’re most sensitive to medium spatial frequencies
-We’re less sensitive to very low (broad patterns) and very high (fine detail) frequencies

10
Q

Nyquist Limit

A

Each cycle of a grating = one light + one dark stripe

To detect the pattern, at least two photoreceptors per cycle are needed (Nyquist limit)

Nyquist limit: the highest spatial frequency that a photoreceptor array can theoretically sample
* Corresponds to a cycle width of 0.017 degrees; beyond this, the grating blurs into gray
* Determined primarily by cone spacing

Example point on the CSF graph: for a 1 cycle/degree grating to be just distinguishable from uniform gray, the stripes need a contrast of only about 1%

For humans, the optimal spatial frequency is 8-10 cycles/degree; in this range, we can detect patterns even at very low contrast
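The two-receptors-per-cycle rule turns directly into a formula. A minimal sketch, where the 0.0085-degree foveal cone spacing is an assumed round number chosen to land near the 0.017-degree cycle width above:

```python
def nyquist_limit_cpd(receptor_spacing_deg):
    """Highest sampleable spatial frequency: each cycle (one light +
    one dark bar) needs two receptors, so f_N = 1 / (2 * spacing)."""
    return 1.0 / (2.0 * receptor_spacing_deg)

# An assumed foveal cone spacing of ~0.0085 degrees gives a limit of
# ~59 cycles/degree, i.e. a cycle width of about 0.017 degrees
limit = nyquist_limit_cpd(0.0085)
```

Any grating finer than this limit lands more than one full cycle on a single cone, so light and dark average out and the pattern is seen as gray.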

11
Q

Retinal Ganglion Cells & Spatial Frequency (stripes)

A

Low frequency yields weak response (spatial frequency is too low, big bars, falls on centre and surround, neuron does not fire)

Medium frequency yields strong response (spatial frequency is just right, light on centre, dark on surround, neuron excited and fires)

High frequency yields weak response (spatial frequency is too high, not excited). Bars are too small; light and dark average out over the receptive field.

Thus, retinal ganglion cells are “tuned” to spatial frequency.
Each cell acts like a filter, responding best to a specific spatial frequency that matches its receptive field size!
Response depends on the phase of the grating, i.e., the position of the grating in the cell’s receptive field

12
Q

Retinogeniculostriate pathway

A

The major retinal projection to the brain

Retina -> (1st relay site) Lateral Geniculate Nucleus (in the thalamus, major subcortical target - 90% of RGCs synapse here) -> Striate Cortex/primary visual cortex (V1)/area 17

13
Q

Signal splitting at: the optic chiasm

A

The retina is split into temporal and nasal parts:

Temporal retina: outer half (closer to the temples)

Nasal retina: inner half (closer to the nose)

Pathway of visual information:
1. Light from the left visual field hits the nasal retina of the left eye and the temporal retina of the right eye.
2. Retinal ganglion cell (RGC) axons leave the eye through the optic nerve.
3. At the optic chiasm:
   - Nasal axons cross to the opposite side
   - Temporal axons stay on the same side
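The crossing rule is mechanical enough to state as a tiny lookup. A minimal sketch, assuming eyes and hemispheres are labeled "left"/"right" (labels are mine, not from the card):

```python
def chiasm_destination(eye, retinal_half):
    """Hemisphere an RGC axon reaches after the optic chiasm.
    Nasal axons cross to the opposite side; temporal axons stay
    on the same side as their eye."""
    if retinal_half == "nasal":
        return "right" if eye == "left" else "left"  # crosses over
    return eye                                       # temporal: ipsilateral

# Left visual field -> nasal retina of left eye + temporal retina of
# right eye -> both end up in the right hemisphere
a = chiasm_destination("left", "nasal")     # "right"
b = chiasm_destination("right", "temporal") # "right"
```

This is exactly why each hemisphere ends up representing the entire contralateral visual field, even though each eye sees both fields.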

14
Q

The LGN

A

serves as a relay station that organizes and passes retinal input to the primary visual cortex (V1)

There are two LGNs, one in each hemisphere of the brain.
- Each LGN has 6 distinct layers
- Each layer receives input from one eye only
- Layers are organized to maintain retinotopic mapping (preserving the spatial layout of the retina)

Neuronal properties in the LGN:
- LGN neurons have center-surround ON/OFF receptive fields, similar to retinal ganglion cells
- They respond best to spots of light and gratings (patterns of light and dark)

15
Q

PARVOCELLULAR LGN layers

A

Small cell bodies

Receive information from P ganglion cells

Provide detailed form and position information

16
Q

MAGNOCELLULAR LGN layers

A

Large cell bodies

Receive information from M ganglion cells (big receptive fields)

Carry little detail; more sensitive to movement

17
Q

KONIOCELLULAR LGN layers

A

Very small cell bodies

Various functions, including relaying info from S-cones regarding blue-yellow pathway

thin layers in between parvo and magnocellular layers

18
Q

Organization of visual signals in the LGN:

RETINOTOPIC ORGANIZATION/ TOPOGRAPHICAL MAPPING

A
  • Each layer of LGN receives input from one eye (monocular)
  • Maps a complete half of the visual field
  • Objects in the right visual field activate all 6 layers of the left LGN (3 layers per eye)
  • Layers 1, 4 and 6 receive input from the contralateral eye (2,3,5 ipsilateral eye)
  • Topographical mapping: ordered mapping of the world in the LGN (and beyond) → but fovea has a larger representation!
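The layer-to-eye mapping in the bullets above is a fixed assignment, so it can be written as a one-line rule (the function name is mine, for illustration):

```python
def lgn_layer_input(layer):
    """Which eye drives a given LGN layer, per the standard mapping:
    layers 1, 4, 6 -> contralateral eye; layers 2, 3, 5 -> ipsilateral."""
    if layer not in range(1, 7):
        raise ValueError("LGN has layers 1-6")
    return "contralateral" if layer in (1, 4, 6) else "ipsilateral"
```

Note that every layer is monocular: the two eyes' signals stay segregated in the LGN and are first combined in cortex.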
19
Q

Organization of visual signals in the LGN:

FUNCTIONAL SEGREGATION

A
  • M ganglion cells from retina project to magnocellular layers (movement)
  • Specialized for large, fast-moving objects
  • P ganglion cells project to parvocellular layers
  • Specialized for processing details of stationary objects (form/colour)
  • Thus, visual info is processed through 2 parallel pathways that begin in the retina (nasal cross, temporal stay on same side)
20
Q

RGCs project to 3 other subcortical structures:

A
  1. Superior Colliculus (SC) – Eye & Head Movement: Located in the midbrain, RGCs form a retinotopic map in the SC (spatial organization mirrors the retina), Each SC controls the opposite visual field (e.g., right SC → left visual field)
    Integrates: Visual, auditory, and somatosensory inputs
    Sends motor output to brainstem to:
    Control saccades (fast eye movements that shift the fovea to objects of interest). Coordinate eye and head movement toward stimuli
  2. Pretectum & Edinger-Westphal Nucleus – Pupil and Lens Control: Involved in automatic reflexes of the eye
    Two key functions:
    A. Pupil light reflex: in bright light, motor signals travel via cranial nerve III to constrict the iris
    B. Lens accommodation reflex: if an image is out of focus, signals adjust the ciliary muscle, allowing the lens to change shape:
    flatten for far vision, round for near vision
  3. Suprachiasmatic Nucleus (SCN) – Biological Clock.
    Located in the hypothalamus
    Receives input from a special class of photosensitive RGCs (contain melanopsin). Tracks ambient light levels to regulate circadian rhythms
    Projects to the pineal gland, which releases melatonin (promotes sleep when it’s dark)
21
Q

Striate Cortex (V1 / Primary Visual Cortex)

A

Located in the occipital lobe, at the back of the brain

Named for its striped (striated) appearance due to alternating dense and sparse cell layers

Composed of 6 layers, each with different neuronal densities, inputs, and outputs

White matter lies below layer 6

Layer 4: Main input layer from the LGN; thickest layer
 * 4Cα (upper): receives magnocellular input (motion, low resolution, fast)
 * 4Cβ (lower): receives parvocellular input (fine detail, color)
 * 4A/4B: subdivisions involved in further processing and integration

Layers 2 & 3: send output to other cortical visual areas (e.g., V2, V4)

Layers 5 & 6: send feedback to subcortical structures, like the LGN and superior colliculus (SC)

22
Q

TOPOGRAPHICAL MAPPING

A

The primary visual cortex (V1) contains a topographic (retinotopic) map of the visual field:

Adjacent areas in the visual field are represented by adjacent neurons in V1

Right V1 processes the left visual field

Left V1 processes the right visual field

🧠 Think of it as a spatial map — like a projection of the retina onto the cortex, but flipped and inverted.

23
Q

CORTICAL MAGNIFICATION

A

The fovea (central retina) is overrepresented in V1:

Although it covers a small part of the visual field, it gets a large amount of cortical space

This is because the fovea has a high density of photoreceptors and retinal ganglion cells

✅ This means we see best (with most detail) at the center of our vision, where V1 devotes more neurons to processing information.

24
Q

Mapping the Visual Cortex (topography)

A

A. Lesion Studies (in animals)
Damage to specific parts of V1 → specific areas of vision are lost
Example: Lesion in the inferior calcarine sulcus → loss of the upper visual field
This helped researchers map which V1 regions correspond to which visual-field areas

B. fMRI Studies (in humans)
Present stimuli in different parts of the visual field
Measure blood flow (BOLD signal) in V1
Compare active regions vs. baseline (blank field) → build a functional map of visual space on the cortex

25
Eccentricity
distance from the center of the visual field (the fovea)
26
Visual Acuity Declines with Eccentricity
Fovea (center of gaze) has:
- Densely packed photoreceptors
- One-to-one connections between cones and RGCs
- A large cortical representation in V1 (cortical magnification)
Result: high-resolution, detailed vision at the center

Peripheral retina has:
- Fewer photoreceptors
- More convergence (many photoreceptors → fewer RGCs)
- Less cortical area devoted to processing
Result: lower visual acuity in peripheral vision

🧠 Behaviorally: to see peripheral objects clearly, we shift our gaze using eye or head movements, bringing them onto the fovea for detailed processing.
27
Visual Crowding in the Periphery
Definition: Crowding = when objects in the periphery are surrounded by other objects (clutter), we cannot recognize them clearly, even if each object is visible on its own.

🔹 Effects: items appear jumbled or merged. Impairs tasks like:
- Object recognition
- Reading
- Eye-hand coordination in cluttered environments

🔹 Example: you may easily recognize a letter in the periphery if it is alone, but fail to identify it when surrounded by other letters or shapes.
28
The ventriloquism effect and multimodal integration
When auditory and visual stimuli are presented at the same time but from different locations, we perceive the sound as coming from the visual source.

Classic example: a ventriloquist's dummy appears to speak, even though the real sound comes from the human performer.

🧠 Why does this happen?
✅ Multisensory integration principle: the brain combines information from multiple senses to create a coherent perception. Normally, it binds signals that are temporally and spatially aligned (from the same event).

🔁 But when there is a mismatch: if the sound and visual cues do not match in space, but do match in time, the brain resolves the conflict by favoring vision.

Example: the McGurk effect (the sound you hear depends on the mouth movement you see, not the actual sound; sight overrides sound).

Example: the rubber hand illusion. A rubber hand is placed in front of a person while their real hand is hidden. Both the rubber hand and the real hand are stroked synchronously, and the person starts to feel the rubber hand as their own.
✅ Why? Vision overrides somatosensory input: the brain trusts the visual match between the touch and what is seen. This alters body ownership and self-perception.

Why vision dominates (a.k.a. visual capture): vision has higher spatial accuracy than audition. Our visual system is much better at determining where things are located, so the brain "trusts" vision more when localizing the source of a sound.
29
multimodal integration: When Audition Overrides Vision
The "2 beeps / 1 flash" illusion (a.k.a. sound-induced flash illusion)

What happens: you see 1 flash but hear 2 beeps.
What you perceive: you "see" 2 flashes, even though only 1 actually occurred.

✅ Why? The brain trusts the auditory system's timing more than the visual system's. The auditory system has better temporal resolution, so it dominates in timing-based tasks.

🧠 Key idea: vision dominates for spatial tasks; audition dominates for temporal tasks.
30
Sensory Dominance Depends on the Context
Spatial (where?): 👁️ Vision → ventriloquism effect
Temporal (when?): 🔊 Audition → sound-induced flash illusion
Body ownership: 👁️ Vision over touch → rubber hand illusion
31
Multimodal (multisensory) neurons are found in both:
- Subcortical structures (e.g., superior colliculus)
- Cortical association areas (e.g., parietal and temporal lobes)
- Even primary sensory cortices (like V1 for vision or A1 for audition) show direct connections and can be influenced by other senses.
32
Multisensory Neurons
These neurons respond to two or more types of sensory input (e.g., both light and sound). They allow the brain to combine information from different senses to create a unified perception of space or events. 🎯 For example, one neuron might fire only when a sound and a flash come from the same location at the same time.
33
coordinated receptive field
A multisensory neuron's auditory and visual receptive fields overlap substantially, so stimuli from the same region of space drive both modalities.
34
Binding-by-Synchrony Hypothesis (Temporal Correlation Hypothesis)
Suggests that neurons coding different features of the same object fire in synchrony (i.e., at the same time). This synchronized firing acts as a temporal "tag" that links the features together in perception.

🔊👁️ Example: a neuron responding to the sound of a dog and one responding to its shape may fire together, helping the brain bind these into a single percept.

✅ Flexible and efficient, but hard to detect directly; still a theoretical model.
35
Convergent Coding ("Grandmother Cell Hypothesis") Binding Hypothesis
Proposes that one highly specific neuron responds only when all features of a known object are present. That neuron fires when it "recognizes" the exact combination (e.g., your grandmother's face, voice, and smell).

🚫 Problem: requires a huge number of specific neurons, one for every possible object/person we recognize → biologically inefficient.
36
Population Coding Binding Hypothesis
More widely supported. Each neuron responds to one feature (e.g., shape, color, sound frequency). Perception arises from the pattern of activity across many neurons, like a "neural barcode." Synchronous firing across these neurons can signal co-occurrence of features → binding.

✅ Efficient, flexible, and fits with known brain activity patterns.
37
Synesthesia
A neurological condition in which stimulation of one sensory modality triggers an involuntary experience in another. The brain blends sensory signals, creating multisensory perceptions from a single stimulus.

Examples:
- Grapheme-color: letters or numbers evoke specific colors (e.g., "A is red")
- Sound-color: sounds (e.g., music, voices) trigger color experiences
- Auditory-tactile: sounds create touch sensations (e.g., a bell causes a tingling feeling)
- Hearing-motion: seeing movement causes the perception of sound (e.g., a car "woosh")

Possible causes: overactive perceptual binding machinery; abnormal strengthening of weak cross-modal connections; or formation of intracortical pathways that do not exist in people without synesthesia. (Artists are more likely to have synesthesia.)