exam 1 Flashcards

1
Q

inattentional blindness

A

failing to see objects when attention is focused elsewhere (e.g. missing a gorilla in a video)

2
Q

change blindness

A

failing to notice changes in a scene due to attention limitations

3
Q

what are ensemble statistics in the nonselective pathway

A

summary properties of a scene (e.g. average color, texture, layout) that the nonselective pathway extracts without focused attention

4
Q

what is attentional extinction (due to parietal damage)

A

inability to notice stimuli on the neglected side when competing stimuli are present

5
Q

what is attentional neglect (due to parietal damage)

A

ignoring one side of space (often left) after right parietal damage

6
Q

how do neurons implement attention

A
  • enhancement
  • sharper tuning
  • altered tuning
7
Q

enhancement (neurons implementing attention)

A

stronger response to attended stimuli

8
Q

sharper tuning (neurons implementing attention)

A

more precise focus on relevant features

9
Q

altered tuning (neurons implementing attention)

A

changes in preferred stimulus properties

10
Q

how does attention affect neural activity?

A
  • enhances firing rates of neurons responding to attended stimuli
  • suppresses irrelevant stimuli
11
Q

attentional blink

A

a gap in perception when detecting two targets in rapid succession (the second target is often missed if it appears 200-500 ms after the first)

12
Q

rapid serial visual presentation (RSVP) paradigm

A

stimuli appear in quick succession at the same location (used to study the attentional blink and the limits of perception)

13
Q

feature integration theory (FIT)

A

features (color, shape) are processed separately and must be bound together by attention

14
Q

the binding problem

A

how does the brain combine features into a unified perception?

15
Q

what is guidance in visual search?

A

attention is guided by salient features and prior knowledge

16
Q

feature search

A
  • fast, parallel
  • target “pops out”
17
Q

conjunction search

A
  • slower, serial
  • target shares features with distractors
18
Q

spatial configuration search

A
  • even slower
  • requires recognizing relationships
19
Q

visual search paradigm

A

a task where participants find a target among distractors. The search efficiency is measured by reaction time vs. number of distractors

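The slope of reaction time against set size is what separates efficient from inefficient search. A minimal sketch, with invented reaction times (all numbers are assumptions, not real data):

```python
import numpy as np

# Slope of reaction time vs. set size indexes search efficiency:
# near 0 ms/item suggests parallel "pop-out" search; steep slopes
# suggest serial search. All RTs below are invented for illustration.
set_sizes = np.array([4, 8, 16, 32])
rts_feature = np.array([450, 452, 455, 451])       # hypothetical feature search
rts_conjunction = np.array([480, 560, 720, 1040])  # hypothetical conjunction search

slope_feature = np.polyfit(set_sizes, rts_feature, 1)[0]
slope_conjunction = np.polyfit(set_sizes, rts_conjunction, 1)[0]
print(f"feature search: {slope_feature:.1f} ms/item")
print(f"conjunction search: {slope_conjunction:.1f} ms/item")
```

The flat slope mimics a pop-out search; the steep one mimics a serial, item-by-item search.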
20
Q

the “spotlight” metaphor of attention

A

attention is like a beam of light, enhancing what it illuminates. However, attention can split, shift, or diffuse, challenging the metaphor

21
Q

differences between endogenous and exogenous attention cues

A
  • endogenous –> voluntary, based on goals (e.g. looking for a friend in a crowd)
  • exogenous –> involuntary, driven by sudden stimuli (e.g. flashing light)
22
Q

what is the Posner cueing task

A

a test where a cue directs attention to a location before a target appears
- valid cues (correct location) –> faster responses
- invalid cues (incorrect location) –> slower responses

23
Q

are different kinds of scenes processed differently

A

Yes, the brain categorizes scenes (e.g. urban vs. natural). Some areas, like the PPA (parahippocampal place area) specialize in scene recognition

24
Q

how much do we actually notice or remember from what we see

A
  • very little detail (most information is filtered out)
  • we retain the gist, rather than specific details
  • change blindness and inattentional blindness reveal our limited awareness
25
Q

if we can only attend to a few things at once, why does the world seem rich and detailed?

A
  • “gist perception” allows us to quickly understand scenes
  • the brain fills in missing details based on experience
  • rapid shifts of attention create an illusion of full perception
26
Q

what changes in the brain when we “pay attention”?

A
  • increased activity in visual and parietal cortex
  • enhanced neural responses to attended stimuli
  • suppression of irrelevant stimuli
27
Q

how do we find what we are looking for?

A
  • we use visual search and guidance from memory and expectations
  • the brain prioritizes salient features (e.g. color, shape)
  • top-down (goal-directed) and bottom-up (stimulus-driven) processes guide search
28
Q

is attention really like a spotlight?

A

Yes, in some ways (select specific locations, objects or features). But not exactly (attention can diffuse, tracking multiple things at once)

29
Q

“zoom lens” model of attention

A

suggests attention can widen/narrow as needed

30
Q

why can’t we process everything at once?

A
  • the brain has limited cognitive resources and selects relevant information
  • too much information would cause overload and slow down processing
  • attention prioritizes what’s most important for survival or goals
31
Q

visual problems that lead to abnormal development and stereoblindness

A
  • strabismus (lazy eye)
  • amblyopia
  • congenital cataracts
32
Q

strabismus

A
  • lazy eye
  • misalignment disrupts stereopsis
33
Q

amblyopia

A

weaker eye suppressed in the brain

34
Q

congenital cataracts

A

block visual input, preventing normal depth perception development

35
Q

stages in normal development of stereopsis

A
  • birth: no stereopsis
  • 3-6 months: binocular coordination improves
  • 4-6 months: stereopsis emerges
  • childhood: refined depth perception
36
Q

stereopsis

A

the brain's ability to combine the two images of the same object, one seen by each eye, into a single 3D image (depth perception)

37
Q

binocular rivalry

A

when two different images are shown to each eye, perception switches between them

38
Q

suppression

A

the brain ignores conflicting information from one eye (e.g. in amblyopia)

39
Q

how can misapplying depth cues lead to illusions

A
  • Ames room (distorted size due to forced perspective)
  • Ponzo illusion (parallel lines & how perspective tricks the brain)
  • hollow face illusion (the brain expects convex faces)
40
Q

how does the Bayesian approach apply to depth perception

A

the brain combines prior knowledge with sensory input to infer depth (helps resolve ambiguous or conflicting depth cues)

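The standard Gaussian form of this combination is reliability-weighted averaging: each estimate is weighted by its inverse variance. A minimal sketch, where every number is an assumed illustration:

```python
# Reliability-weighted (inverse-variance) combination of a prior belief and a
# sensory depth cue, the Gaussian form of Bayesian cue combination.
# All numbers are assumed for illustration.
prior_mean, prior_var = 2.0, 1.0  # prior belief: surface ~2 m away
cue_mean, cue_var = 3.0, 0.5      # disparity cue: ~3 m, more reliable

w_prior = (1 / prior_var) / (1 / prior_var + 1 / cue_var)
posterior_mean = w_prior * prior_mean + (1 - w_prior) * cue_mean
print(f"posterior depth estimate: {posterior_mean:.2f} m")
```

The estimate lands closer to the cue than to the prior because the cue has the smaller variance, i.e. the higher reliability.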
41
Q

physiological basis of stereopsis and depth perception

A
  • binocular neurons in V1 and beyond detect disparity
  • MT (middle temporal area) processes motion and depth cues
  • parietal cortex integrates depth information for spatial awareness
42
Q

what is the correspondence problem in stereoscopic vision

A

the brain must match the correct points from each eye’s image
solved by:
- feature matching (edges, textures)
- global processing (analyzing whole scenes)

43
Q

how do stereoscopes and stereograms create depth?

A
  • present slightly different images to each eye to mimic binocular disparity
  • the brain fuses them, creating 3D perception
44
Q

retinal disparity

A

differences in images between right and left eyes

45
Q

crossed disparity

A

objects closer than fixation point appear displaced outward

46
Q

uncrossed disparity

A

objects farther than fixation point appear displaced inward

47
Q

accommodation & convergence as triangulation cues

A
  • accommodation = the lens adjusts shape to focus on objects at different distances
  • convergence = the eyes rotate inward for near objects, outward for far objects
48
Q

motion parallax as a triangulation cue

A

as you move, near objects move faster across your vision than far objects. The brain uses this difference to estimate depth and distance

49
Q

difference between metrical and nonmetrical depth cues

A
  • metrical cues provide exact distance (e.g. binocular disparity)
  • nonmetrical cues provide relative depth but not exact distance (e.g. occlusion)
50
Q

recognizing pictorial depth cues

A
  • occlusion
  • relative size
  • linear perspective
  • texture gradient
  • shading & shadows
51
Q

occlusion (pictorial depth cues)

A

objects blocking others appear closer

52
Q

relative size (pictorial depth cues)

A

smaller objects appear further away

53
Q

linear perspective (pictorial depth cues)

A

converging lines suggest depth

54
Q

texture gradient (pictorial depth cues)

A

texture elements appear larger and coarser up close, finer and denser with distance

55
Q

shading and shadows (pictorial depth cues)

A

suggest 3D shape and position

56
Q

how does 3D vision develop?

A
  • infants rely more on monocular cues at birth
  • binocular coordination develops around 3-6 months
  • stereopsis emerges between 4-6 months
57
Q

how does the brain compute binocular depth cues

A
  • use binocular disparity
  • disparity is processed in visual cortex to determine depth
58
Q

how does the brain combine monocular depth cues

A
  • use relative size, interposition, texture gradient, motion parallax, shading, and perspective
  • integrates multiple cues for consistency in depth perception
59
Q

why do we have two eyes

A
  • provides binocular disparity which enhances depth perception (stereopsis)
  • expands the field of view
  • offers redundancy (one eye compensates if the other is damaged)
60
Q

how does the brain reconstruct a 3D world from 2D retinal images?

A
  • the brain combines monocular and binocular depth cues
  • uses perspective, shading, texture gradients, motion, and disparities between two eyes’ images
  • relies on experience and prior knowledge (Bayesian inference)
61
Q

how does color influence perceived flavor?

A
  • expectation effect (we expect a red drink to be cherry-flavored)
  • cross-modal interactions (the brain combines visual and taste signals)
  • marketing impact (people rate foods as tasting better when they have expected colors)
62
Q

why is color vision useful?

A
  • helps in finding food (ripe vs. unripe)
  • aids in recognizing objects & emotions
  • improves camouflage detection
63
Q

color constancy and how it works

A
  • the brain adjusts perceived color to remain consistent under different lighting
  • uses context, memory, and lighting cues
64
Q

predicting negative afterimage colors

A

after staring at a color, its opponent color appears in the afterimage (red –> green, and vice versa)

65
Q

how can context influence color perception?

A
  • color contrast (a color looks different depending on surrounding colors)
  • color constancy (the brain adjusts for lighting to keep colors looking stable)
66
Q

what is synesthesia?

A

a condition where one sense triggers another (e.g. seeing colors when hearing sound)

67
Q

forms of anomalous color vision

A
  • protanopia
  • deuteranopia
  • tritanopia
68
Q

protanopia

A

missing L-cones (red-green deficiency)

69
Q

deuteranopia

A

missing M-cones (red-green deficiency)

70
Q

tritanopia

A

missing S-cones (blue-yellow deficiency)

71
Q

does language influence color perception

A

some languages have fewer or more color words, which may affect perception (e.g. Russian has separate words for light and dark blue)

72
Q

opponent color theory

A

it states that colors are perceived in opposing pairs (red-green, blue-yellow)

73
Q

color cancellation experiments

A

adding a color cancels out its opponent (e.g. adding green cancels red)

74
Q

how is 3D color space represented?

A
  • RGB (red, green, blue) is used in screens and digital media
  • HSB (hue, saturation, brightness) is used in color design
  • CIE color space is a more precise mapping of all visible colors
75
Q

four ways cone outputs are pitted against each other in cone-opponent cells

A
  • L-M (red-green opponent) compares long and medium-wavelength cones
  • M-L (green-red opponent) is opposite of L-M
  • S-(L+M) (blue-yellow opponent) compares short wavelength cones to the sum of L and M
  • L+M (luminance channel) measures overall brightness
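The four combinations can be written out directly. A sketch with arbitrary cone-response values (all numbers are assumptions):

```python
# The four cone-opponent combinations computed from assumed cone responses
# (arbitrary units for one patch of light).
L, M, S = 0.8, 0.6, 0.2

red_green = L - M          # L-M channel
green_red = M - L          # M-L channel (sign-flipped L-M)
blue_yellow = S - (L + M)  # S-(L+M) channel
luminance = L + M          # L+M luminance channel

print(red_green, green_red, blue_yellow, luminance)
```

Note that L-M and M-L carry the same information with opposite sign, which is why they behave as an opponent pair.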
76
Q

additive color mixing

A
  • process: mixing light (e.g. red + green = yellow)
  • example: computer screen
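Additive mixing can be sketched channel-wise in RGB (the function name and values here are illustrative, not from the source):

```python
# Additive mixing: lights add channel-wise; RGB values clipped to 0-255.
def mix_lights(c1, c2):
    return tuple(min(a + b, 255) for a, b in zip(c1, c2))

red = (255, 0, 0)
green = (0, 255, 0)
print(mix_lights(red, green))  # red + green light -> yellow
```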
77
Q

subtractive color mixing

A
  • process: mixing pigments (e.g. cyan + yellow = green)
  • examples: paint
78
Q

Young-Helmholtz trichromatic theory of color vision

A

color vision results from three cone types corresponding to different wavelengths. The brain compares their responses to determine color

79
Q

principle of univariance

A

a single photoreceptor cannot distinguish between different wavelengths, only overall light intensity

80
Q

principle of metamers

A

two different light spectra can produce the same color perception because they stimulate cones the same way

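This can be shown numerically: two spectra are metamers when they produce identical cone responses. The 4-bin sensitivities and spectra below are invented for the sketch:

```python
import numpy as np

# Two physically different spectra that excite the cones identically are
# perceived as the same color. Sensitivities and spectra are made up (4 bins).
cone_sens = np.array([      # rows: S, M, L cones; columns: wavelength bins
    [0.9, 0.1, 0.0, 0.0],
    [0.0, 0.5, 0.5, 0.0],
    [0.0, 0.2, 0.5, 0.3],
])
spectrum_a = np.array([0.0, 9.0, 0.0, 9.0])
spectrum_b = np.array([1.0, 0.0, 9.0, 0.0])  # a different light...

print(cone_sens @ spectrum_a)  # ...but both produce the
print(cone_sens @ spectrum_b)  # same cone-response vector
```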
81
Q

spectral sensitivities of the three cone types

A
  • S-cones: blue, ~420 nm peak
  • M-cones: green, ~530 nm peak
  • L-cones: red, ~560 nm peak
82
Q

three types of cones that contribute to color vision

A
  • S-cones: short wavelengths, blue
  • M-cones: medium wavelengths, green
  • L-cones: long wavelengths, red
83
Q

three steps to color perception

A
  1. detection
  2. discrimination
  3. appearance
84
Q

detection (color perception)

A

cones in the retina detect light

85
Q

discrimination (color perception)

A

brain compares signals from different cones

86
Q

appearance (color perception)

A

brain interprets and assigns stable color

87
Q

if an orange looks green, will it taste different?

A

color affects the expectation of taste, but the actual taste depends on chemical composition; perception, however, can be influenced by color, making us think it tastes different

88
Q

why does an orange look orange in real life and in a photo, even though the physical basis is different?

A

in real life, we see the orange due to wavelengths of lights reflecting off it. In a photo, the camera captures those wavelengths and translates them into digital color signals that our screen displays. The brain interprets both using color constancy mechanisms

89
Q

does everyone see the same color?

A

most people see similar colors, but color vision varies due to genetics, lighting, and context. Perception is also affected by color blindness and cultural & linguistic differences

90
Q

what is color for?

A

color vision helps us detect and recognize objects more effectively. It enhances contrast between objects and backgrounds

91
Q

attention

A

a large set of selective mechanisms that enable us to focus on some stimuli at the expense of others

92
Q

why are faces a special case of object recognition

A

they are processed holistically (not separate features). The FFA is specialized for face perception

93
Q

object labels at different levels of description

A
  • superordinate = “animal”
  • entry-level = “dog”
  • subordinate = “golden retriever”
94
Q

strengths and weaknesses of object recognition models

A

pandemonium model:
- strength = explains feature-based recognition
- weakness = too simple for complex objects
template model:
- strength = works well for specific images
- weakness = fails with variations
structural description:
- strength = describes objects as 3D parts (geons)
- weakness = doesn’t explain texture or lighting effects
deep neural networks:
- strength = excellent for real-world images
- weakness = requires large datasets

95
Q

subtraction methods in the brain

A

compares brain activity with and without a stimulus

96
Q

decoding methods in the brain

A

uses machine learning to interpret brain activity

97
Q

receptive field properties of neurons that process objects and faces

A
  • IT cortex (recognizes complex shapes and objects)
  • FFA (specialized for faces)
98
Q

Bayesian approach in perception

A

the brain combines prior knowledge and sensory input to interpret the world

99
Q

methods the visual system uses to deal with occlusion

A
  • edge interpolation (filling in missing parts)
  • surface completion (assuming hidden parts continue)
  • contour continuation (using Gestalt rules)
100
Q

define figure-ground assignment

A

determines what is object (figure) and what is background (ground)

101
Q

principles figure-ground assignment

A
  • surroundedness (the enclosed region is usually the figure)
  • size (smaller objects are typically figures)
  • symmetry (symmetric shapes are often figures)
102
Q

accidental viewpoints in perception

A

occurs when an object aligns in a way that misleads perception (e.g. a person standing in front of the Eiffel Tower may appear to be holding it)

103
Q

Gestalt psychology

A

the principles state that perception is based on innate grouping rules

104
Q

Gestalt principles

A
  • proximity (close things group together)
  • similarity (similar things group together)
  • closure (we fill in the missing information)
  • good continuation (we see smooth, continuous lines)
105
Q

define midlevel (or middle) vision

A

the stage between low-level (edges, contrast) and high-level (objects, faces) processing. It organizes elements into coherent shapes

106
Q

challenges in object recognition for the visual system

A
  • viewpoint changes (objects look different from different angles)
  • occlusion (objects can be partially hidden)
  • lighting variations (shadows can distort appearance)
107
Q

feed-forward processing

A

information flows one way (retina –> V1 –> IT)

108
Q

reverse-hierarchy theory

A

higher areas can send feedback signals to refine early processing

109
Q

visual agnosia

A

a condition where a person cannot recognize objects despite normal vision (caused by damage to the ventral stream, inferior temporal cortex)

110
Q

dorsal pathway

A
  • “where/how”
  • location and action
  • parietal lobe
  • speed is fast
  • damage causes optic ataxia (impaired grasping)
111
Q

ventral pathway

A
  • “what”
  • object recognition
  • temporal lobe
  • speed is slower
  • damage causes visual agnosia
112
Q

concept of border ownership

A

cells in V2 assign edges to specific objects rather than seeing them as just standalone lines

113
Q

differences between extrastriate cortex and striate cortex

A
  • striate cortex = V1, processing of basic features (edges, orientation)
  • extrastriate cortex = V2-V5, processes higher-level aspects (shapes, motion, object identity)
114
Q

how could a computer recognize objects?

A
  • feature detection
  • template matching
  • deep learning