Lecture 2 Flashcards
agnosia
- A deficit in recognition despite normal vision
Observations from patients with agnosia give us an indication of the processing that occurs during vision
Result from damage to the “what” pathway
apperceptive agnosia
- Patients are unable to name, match or discriminate visually presented objects
Patients can’t combine basic visual information into a complete percept, so they also show deficits in copying
associative agnosia
- Patients cannot associate a visual pattern with meaning - they can’t recognize what they see
Patients are able to combine visual features into a whole, so they are able to copy well
steps to visual perception
- Patient data tell us there are separate steps to visual perception
a. Input/sensation
b. Basic visual components assembled
c. Meaning is linked to visual input
the experience error
- What you see isn’t what you get
- The false assumption that the structure of the world is directly given from our senses
Visual illusions illustrate that we don’t always perceive an accurate representation of a visual stimulus
fixation-saccade cycles
We have the impression of seeing a continuous image of the world; in reality, our eyes move through a series of fixation-saccade cycles
fixation
When the gaze is directed to a specific object for a brief period of time
saccade
When the gaze moves quickly between objects
main difference between approaches to study perception
whether the goal of perception is recognition or action
computational approach
Concerned with discovering how the brain represents and interprets the distal stimulus (the external object or event being perceived, located at a distance from the observer)
bottom-up processing
○ Data driven
We recognize patterns by analyzing sensory input step-by-step
top-down processing
○ Conceptually driven
○ Perception is influenced by our prior knowledge, memories and experiences
○ We use what we know about the physical structure of the world to perceive 3D objects from 2D images
arguments against the computational approach
§ Analyzing each feature one at a time takes a long time
§ Theories that rely on features cannot explain within-category discrimination
Pattern recognition can depend on top-down/conceptually driven effects
template matching
According to template theory, we have a mental “stencil” for an array of different patterns
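The stencil idea can be sketched in a few lines of Python. The 3×3 binary "templates" below are invented for illustration; the point is only that recognition reduces to comparing the input against every stored stencil and taking the best overlap:

```python
# Toy template matching: hypothetical 3x3 binary "stencils" for two shapes.
# The input pattern is compared cell-by-cell against each stored template,
# and the label of the best-overlapping template wins.

TEMPLATES = {
    "T": ((1, 1, 1),
          (0, 1, 0),
          (0, 1, 0)),
    "L": ((1, 0, 0),
          (1, 0, 0),
          (1, 1, 1)),
}

def overlap(pattern, template):
    """Count cells where the input pattern and the template agree."""
    return sum(p == t
               for prow, trow in zip(pattern, template)
               for p, t in zip(prow, trow))

def recognize(pattern):
    """Return the label of the template with the greatest overlap."""
    return max(TEMPLATES, key=lambda label: overlap(pattern, TEMPLATES[label]))

letter = ((1, 1, 1),
          (0, 1, 0),
          (0, 1, 0))
print(recognize(letter))  # -> T
```

A classic objection to this scheme (and one reason feature theories were proposed) is that it needs a separate stencil for every size, position, and orientation of a pattern.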
feature matching
○ We have a system for analyzing each distinct feature of a visual item
○ Ex. Pandemonium (Selfridge 1959)
Physiological support for feature matching comes from the discovery of feature-detector neurons in primary visual cortex
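Pandemonium's division of labor can be sketched as a toy in Python. The feature sets below are assumptions for illustration, not Selfridge's actual demons: "cognitive demons" shout in proportion to how many of their letter's features appear in the input, and a "decision demon" picks the loudest:

```python
# Toy Pandemonium: each letter is defined by a set of features (assumed here
# for illustration). Cognitive demons score the detected features against
# their letter; the decision demon picks the letter with the loudest shout.

LETTER_FEATURES = {
    "A": {"oblique_line", "horizontal_bar"},
    "H": {"vertical_line", "horizontal_bar"},
    "L": {"vertical_line", "horizontal_base"},
}

def decision_demon(detected_features):
    """Return the letter whose feature set best matches the input features."""
    shouts = {letter: len(features & detected_features)
              for letter, features in LETTER_FEATURES.items()}
    return max(shouts, key=shouts.get)

print(decision_demon({"vertical_line", "horizontal_base"}))  # -> L
```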
biederman’s recognition by components
○ Geons are viewpoint-invariant (objects are recognized equally well whether they are viewed in the same or a different orientation in depth than at initial encoding) because they have nonaccidental properties (image properties that remain invariant across changes in viewpoint)
But recognition is impaired when we view objects from non-canonical viewpoints (unusual or atypical views, rather than the typical view of an object)
view-based recognition
○ Evidence from psychology and physiology does not support a viewpoint invariant approach to object identification
○ Humans appear to have a viewer centered bias
§ Object recognition is faster from familiar viewpoints
Cortical neurons demonstrate viewpoint specificity
gestalt approach
- Uses organizational principles to create meaningful perception of the environment
- Concerned with how perception gets organized into meaningful units
- “The whole is different from the sum of its parts”
gestalt grouping principles
○ Identify characteristics of perception which help determine which components of a stimulus group together
○ We can use these rules to predict what will be perceived based on one law at a time - it is hard to predict the outcome of combining laws
§ Law of proximity
§ Law of similarity
Law of common region
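The law of proximity is the easiest of these to state computationally. A minimal one-dimensional sketch (the grouping threshold is a free parameter chosen for illustration, not something Gestalt theory specifies):

```python
# Law of proximity on a 1D display: dots are grouped together whenever the
# gap to the previous dot is at most a chosen threshold.

def group_by_proximity(positions, threshold):
    """Split sorted positions into groups wherever a gap exceeds threshold."""
    groups = [[positions[0]]]
    for prev, cur in zip(positions, positions[1:]):
        if cur - prev <= threshold:
            groups[-1].append(cur)   # close enough: same perceptual group
        else:
            groups.append([cur])     # large gap: start a new group
    return groups

# Dots at 0, 1, 2 are perceived apart from dots at 10, 11.
print(group_by_proximity([0, 1, 2, 10, 11], threshold=3))
# -> [[0, 1, 2], [10, 11]]
```

As the note above says, a single law like this is predictable on its own; combining laws (e.g. proximity against similarity) has no such simple rule.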
role of experience
If things have been associated in prior viewings, they will be grouped together in the future
perception/action approach
- Assumes the goals of action help determine perception
- The goal of perception is to provide a perceiver with information about objects’ affordances
- The environment contains all the information we need for perception
- The goal of perception is an action
- Lab experiments using 2D images only study indirect perception
Unlike Gibson’s view, most modern researchers believe both action and representations are involved in perception, but that action influences how we perceive the world
- comes from gibson’s direct perception approach
gibson’s direct perception approach
- perception/action approach comes from this
○ An extreme approach in which affordances (the characteristics or properties of an object that suggest how it can be used) directly connect perception and action without the need for intervening cognitive processes
§ No internal perceptual representation (perception is not reconstructed from the proximal stimulus)
§ No role of memory (no top-down processing)
ambient optic array
The structure imposed on light by the environment; it contains all the information we need for perception
motion
○ Necessary to “pick up” the required information from the optic array
○ Described by optic flow (pattern of apparent motion on the retina caused by the relative motion between an observer and the scene) in the ambient optic array
○ If there is flow in the optic array, the observer is in motion
The direction of flow indicates the direction the observer is moving
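The heading claim can be made concrete: for pure forward translation (an assumption of this sketch), every flow vector points away from a single focus of expansion (FOE), which marks where the observer is heading. Each vector constrains the FOE to lie on a line, so two non-parallel vectors pin it down; the function name below is hypothetical:

```python
# Estimating the focus of expansion (heading point) from two optic-flow
# vectors, assuming pure forward translation so each vector v at point p
# satisfies (p - foe) x v = 0, i.e. vy*fx - vx*fy = px*vy - py*vx.

def focus_of_expansion(p1, v1, p2, v2):
    """Solve the 2x2 linear system given by two flow-vector constraints."""
    a1, b1, c1 = v1[1], -v1[0], p1[0] * v1[1] - p1[1] * v1[0]
    a2, b2, c2 = v2[1], -v2[0], p2[0] * v2[1] - p2[1] * v2[0]
    det = a1 * b2 - a2 * b1  # nonzero when the two vectors are not parallel
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Synthetic flow radiating from (2, 1): v = p - foe at each sample point.
print(focus_of_expansion((4, 1), (2, 0), (2, 4), (0, 3)))  # -> (2.0, 1.0)
```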
localization of perception
- Evidence from lesion studies suggests there are 2 anatomical pathways for object recognition
- Imaging studies have revealed different areas of activation for perception of faces, places, and objects
Deficits in motion perception and action have been dissociated from object identification
ideomotor apraxia
Results from damage to the “where” pathway
blindsight
Results from cortical damage to visual areas and produces a dissociation between visual recognition and vision for action
face processing
- Requires more within-category differentiation
○ Is this a face -> is this face familiar -> who is this face -> is this face happy or sad -> is this person attractive -> is this person friendly
Evidence suggests that we process objects and faces differently
tanaka and farah 1993
Found that parts of houses are much easier to recognize in isolation than parts of faces, suggesting faces are processed holistically
face inversion effect
We are faster and more accurate recognizing upright faces compared to inverted faces
is face processing special
- There is evidence that we have an innate preference for processing faces and that specific neural areas process faces
Some believe it is special (domain specific), others argue there is nothing special about faces other than we are experts at identifying them (general expertise)
diamond and carey 1986
Suggested dog experts identify dogs in the same way that the rest of us identify faces
robbins and mckone 2007
Attempted to replicate and extend Diamond & Carey’s (1986) study but failed to reproduce their findings
campbell and tanaka
Demonstrated equivalent inversion effects for faces and budgies among budgie experts