Lec 6/ TB Ch 10 Flashcards
1
Q
A
Features of object concepts
Theoretical background
- Amodal symbolic model
- concepts (including word meanings) consist of abstract symbols that are represented and processed in an autonomous semantic system that is completely separate from the modality specific systems for perception and action
- Ex. the concept encoded by the word banana consists of amodal features (ex. fruit, long, curved, yellow, peel)
- To understand the word, we need to access these features; we do not need to retrieve memories of how bananas are sensed and used
- Grounded cog model/embodied model/simulation model
- This theory suggests that concepts are anchored in modality specific systems, such that understanding word meanings involves high-level perceptual and motor representations
- Semantic k does not reside in an abstract realm segregated from perception and action; it overlaps with these capacities
- Ex. understanding object nouns (banana) involves activating modality-specific records in LT memory that capture generalisations on how bananas look, taste, feel, and how they are manipulated
- In some cases, conceptual processing is so deep -> vivid mental imagery
- Ex. read a good novel
- Here, detailed imagery is not needed for comprehension; it happens afterwards as embellishment/elaboration of what we already understand
- What matters in the Grounded Cog model is that modality-specific activations don’t need to be full-fledged conscious sensory and motor images to support conceptual processing
- During ordinary language comprehension, these activations are implicit
- Ex. when you read the word banana, you won’t experience the flavor
- The lack of conscious taste doesn’t mean that gustatory representations are not involved in the comprehension process (they are)
- Attentive readers will notice the layout of the visual and auditory elements
- This reflects the anatomical locations of the corresponding modality-specific systems in the brain
- The Grounded Cog model maintains that the neural correlates of conceptual knowledge include high-level components of those systems
- This implies that the meaning of object nouns (ex. banana) does not reside in a single place; rather, different fragments of the concept (banana) are scattered across diff regions
- Ex. visual-semantic info on how bananas typically look may be stored in the same ventral temporal areas that are engaged when bananas are visually recognized
- Ex. gustatory-semantic info on how bananas typically taste may be stored in the same OBF and insula areas that are engaged when banana tastes are recognized
- Ex. spatiomotor and action-semantic info about how bananas are usually handled may be stored in the same parietal/frontal areas that are engaged when bananas are grasped/manipulated
- This scheme was first introduced by Wernicke and explored further by Broadbent, Lissauer, and Freud
- Warrington used a similar framework to interpret the performance of brain-damaged patients who showed selective impairments of particular semantic domains
- Later studies used different techniques to test predictions about whether conceptual processing recruits modality-specific systems for perception and action
- Visual features: color, shape, motion
- Nonvisual: motor, auditory, gustatory/olfactory
2
Q
A
Box 10.1: what is a violin
- Damasio – used Grounded Cog model to describe how the concept of a violin is implemented in the brain
- Stimuli – drawing/word/object = violin
- Activates many representations
- Ex. man-made objects, string instruments
- Auditory
- Co-activation depends on the convergence zones
- Convergence zones: ensembles of neurons that “know about” simultaneous occurrence of patterns of activity during the perceived or recalled experience of entities and events
3
Q
A
Color features
- Many objects have typical colors
- social conventions (ex. yellow taxi)
- animals (ex. white swans)
- plants (ex. orange carrots)
- Object-color associations are a key part of semantic k of the relevant nouns
- 2 main cortical regions for color perception
- Passive color sensation
- Ex. one gazes at a garden of flowers
- activates area V4, a patch of the lingual gyrus in the occipital lobe
- more activated by colors than grayscale stimuli
- damage here -> achromatopsia: can’t see color
- active color perception
- deliberately, attentively compares shades of diff flowers
- uses the mid-fusiform gyrus (ventral BA20, downstream from V4)
- fusiform gyrus = ventral temporal cortex
- part of the “what” pathway – deals w/ shape, color, texture
- V4-alpha: region in the fusiform gyrus, responsive during color discrimination
- sensitive to Farnsworth-Munsell 100 Hue Test:
- determine if 5 circularly arrayed wedges form a clockwise sequence of incrementally changing hues
- (baseline): subjects must make similar judgements for grayscale
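The wedge-ordering judgement can be sketched as a tiny check (an illustrative toy, assuming each wedge is encoded as a hue angle in degrees; `is_clockwise_hue_sequence` is a made-up helper, not part of the actual test materials):

```python
# Toy sketch of the hue-ordering judgement (assumption: each wedge is a
# hue angle in degrees; these are not the actual test stimuli)
def is_clockwise_hue_sequence(hues):
    """Return True if consecutive hues step forward by small increments."""
    steps = [(b - a) % 360 for a, b in zip(hues, hues[1:])]
    return all(0 < s < 180 for s in steps)  # small forward (clockwise) steps only

print(is_clockwise_hue_sequence([10, 30, 50, 70, 90]))   # orderly sequence
print(is_clockwise_hue_sequence([10, 70, 30, 90, 50]))   # scrambled sequence
```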
- Are V4 and/or V4-alpha engaged when a person retrieves semantic k about the color features of objects (ex. taxi, swan)?
- Simmons et al 2007
- fMRI study
- 2 parts
- Part 1: localized the subject’s color perception areas
- Administered the Farnsworth-Munsell 100 Hue Test and subtracted the activation evoked by grayscale wheels from that evoked by color wheels
- Part 2: asked ppl to do a conceptual property verification task, 3 conditions
- In each trial of the color property condition
- Subjects were shown an object noun (ex. eggplant) -> color adj (ex. purple)
- Then they indicate if the color usually applies to the object
- In each trial of the motor property condition
- Shown an object noun (ex. football) -> action verb (ex. throw)
- Then indicate if the action usually applies to the object
- In each trial of the concept-only condition
- Shown an object noun (ex. lightbulb)
- NOT followed by property word; no response
- Purpose: allow rs to separate the BOLD signals elicited by object words from those elicited by prop words in the 1st 2 conditions
- Trials from the 3 conditions were mixed and presented randomly
- Analyzed the data to see if any voxels were activated more for color than grayscale wheels in the 1st part of the study AND activated more for color property judgements than motor property judgements in part 2
- Results: a large cluster of voxels in the left mid-fusiform gyrus, overlapping V4-alpha, met these criteria
- These findings are consistent w/ the Grounded Cog model as it supports the view that semantic k is anchored in the brain’s modality-specific systems
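The two-part conjunction logic of this analysis (keep only voxels passing the perceptual localizer contrast AND the semantic contrast) can be sketched roughly as follows; the random numbers stand in for real t-maps, and the variable names and the 2.0 threshold are my own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_voxels = 5000  # toy brain

# Hypothetical voxel-wise t-values for the two contrasts (random stand-ins):
# part 1 localizer: color wheels > grayscale wheels
# part 2 semantics: color-property > motor-property judgements
t_perception = rng.normal(size=n_voxels)
t_semantics = rng.normal(size=n_voxels)

threshold = 2.0  # illustrative cutoff, not the study's actual statistics

# Conjunction: voxels that exceed threshold in BOTH contrasts
conjunction = (t_perception > threshold) & (t_semantics > threshold)
print(f"{conjunction.sum()} voxels pass both contrasts")
```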
- Amodal symbolic model supporters argue that the fusiform activity observed during color property judgements may not reflect unconscious, implicit retrieval of conceptual color features
- It may instead reflect conscious, explicit generation of color imagery, which is a process that may happen after the relevant color k has been accessed from an abstract semantic system located elsewhere in the brain
- Rebuttal – Simmons et al 2007
- The data are compatible w/ other explanations
- But they do not sit well w/ a core assumption of the amodal symbolic model – that abstract representations should be sufficient to perform all semantic tasks
- They stated that damage to the left fusiform gyrus can cause color agnosia
- Color agnosia: a disorder that impairs k of object-color associations (i.e. typical colors of objects) that their color property verification task probed
- Due to damage in ventral temporal cortex, esp fusiform gyrus
- This supports the idea that the fusiform activity shown in the fMRI study reflects retrieval of conceptual color features, not just color imagery
4
Q
A
Shape features
- Shape is a key component of object nouns
- Shape properties of visual objects are represented in the ventral occipitotemporal cortex
- Studies examined if the shape properties of diff categories of objects are evenly distributed or clustered together
- Most evidence suggests that certain areas are preferentially responsive to certain categories of objects (ex. faces, non-face body parts, animals, places, and printed words)
- Chao et al 1999
- Showed that there are separate cortical rep of the shapes of animals and tools
- The category of tools is restricted to man-made objects that serve specific fx
- Examined the perceptual processing of animals and tools via passive viewing tasks and match-to-sample tasks
- Also evaluated conceptual processing of animals and tools using silent-picture-naming task and property verification tasks
- property verification tasks – require ppl to answer y/n to qs (ex. forest animal? kitchen tool?) in response to printed words for animals and tools
- Results: across all tasks, sig more bilateral activation for animals (perceptual and conceptual) in the lateral part of mid-fusiform gyrus
- More bilateral activation for tools in the medial part of the mid-fusiform gyrus
- These adjacent but distinct regions of the fusiform gyrus were activated by picture and words
- These results fit the predictions of the Grounded cog model
- Some argue: activation evoked by words may reflect ppl’s efforts to conjure up explicit visual images of the shapes of the lexically encoded animals and tools
- Rebuttal: Wheatley et al 2005
- Showed that lexically driven category-related fusiform activations indicate semantic processing
- Used the phenomenon “repetition suppression”
- Repetition suppression: a population of neurons that codes for a specific type of info decreases its response when that info is repeated
- This reflects greater processing efficiency
- Methods: ppl rapidly read presented word pairs (shown for 150 ms -> then 100 ms “break”) that were either
- unrelated (ex. celery, giraffe)
- related (ex. horse, goat)
- identical (ex. camel, camel)
- Results: as the degree of semantic relatedness b/w the 2 words progressively increased for a particular category (i.e. animals in this study), the neural activation evoked by the 2nd word progressively decreased in the lateral part of the mid-fusiform gyrus
- The same area Chao et al 1999’s study linked w/ the animal category
- Given the processing time constraints in the task, it is unlikely that the repetition suppression effects are due to explicit conscious images the subjects generated after understanding the words
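One toy way to picture repetition suppression (my own illustrative sketch, not Wheatley et al's analysis; the feature vectors are invented): let the response to the 2nd word shrink as its feature overlap with the 1st word grows.

```python
import numpy as np

def response_to_second_word(feats_w1, feats_w2, baseline=1.0):
    """Population response to word 2 falls as cosine overlap with word 1 rises."""
    overlap = np.dot(feats_w1, feats_w2) / (
        np.linalg.norm(feats_w1) * np.linalg.norm(feats_w2))
    return baseline * (1.0 - overlap)

# Made-up feature vectors for the example word pairs
celery  = np.array([1.0, 0.0, 0.0, 0.0])
giraffe = np.array([0.0, 1.0, 0.0, 0.0])
horse   = np.array([0.0, 1.0, 0.8, 0.0])
goat    = np.array([0.0, 1.0, 0.6, 0.3])
camel   = np.array([0.0, 1.0, 0.2, 0.5])

print(response_to_second_word(celery, giraffe))  # unrelated pair: largest response
print(response_to_second_word(horse, goat))      # related pair: suppressed
print(response_to_second_word(camel, camel))     # identical pair: maximal suppression
```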
- The convergent results showed that the shape features of the meanings of object nouns are captured by neurons in the ventral temp cortex that partially overlap w/ those that subserve visual perception of the same features
- They are also segregated based on semantic category
- Other studies: damage to the mid-fusiform gyrus (esp LH) impairs understanding of concrete object nouns
- These lesions tend to affect semantic k about living things (ex. animals, fruits, veggies) more severely than semantic k about non-living things (ex. tools)
5
Q
A
Motion features
- Some objects have characteristic movements
- Ex. hopping for rabbits; cutting for scissors
- MT + (located in the vicinity of the anterior occipital and lateral occipital sulci) is involved in the passive perception of moving visual stimuli
- Damage to this area -> akinetopsia
- Impaired ability to consciously see motion due to damage to MT+
- MT+ does not distinguish systematically b/w diff types of object-associated motion
- BUT it projects to higher-lv posterolateral temporal areas that do
- Processing stream 1:
- Extend from MT+ into a sector of the posterior superior temporal sulcus (pSTS) that responds preferentially to the sight of biological (ex. animal) motion patterns
- Processing stream 2:
- Extends from MT+ into a sector of the posterior middle temporal gyrus (pMTG) that responds preferentially to the sight of nonbiological (ex. tool) motion patterns
- NOTE: pSTS and pMTG (esp in LH) are associated w/ speech perception and production
- Rs examined if these 2 parallel motion-processing pathways contribute not only to high-lv visual perception, but ALSO to LT semantic rep of category-specific object-associated motion patterns
- The Grounded Cog model predicts they should
- Chao et al 1999
- The pSTS (independently linked w/ the sight of biological motion patterns) was engaged when ppl performed perceptual tasks w/ animal pics but ALSO when they performed conceptual tasks w/ animal nouns
- The pMTG (independently linked w/ the sight of nonbio motion patterns) was engaged when ppl did perceptual tasks w/ tool pics but ALSO when they did conceptual tasks w/ tool nouns
- These results are consistent w/ the hypothesis that understanding words (ex. rabbits, scissors) implicitly reactivates visual generalisations about the typical motion patterns of the objects as part of the comprehension process
- NOTE: damage to pSTS/pMTG (esp in LH) is more likely to impair recognition and naming of tools than animals
- This is opposite of the one involving shape features in the mid-fusiform gyrus
- This suggests that the relevant brain regions are more important for semantic processing of tools than animals
6
Q
A
Motor features
- When we think about tools like hammer and knife, we also think about their visual representations and how they are handled
- These tools are operated in diff ways
- These motor representations are important to the meanings of words
- The Grounded Cog model predicts these meanings reside in some of the same high-lv components of the motor system that subserve the actual use of tools
- Tools recruit 2 main cortical regions that are left-lateralized in right-handed peeps
- # 1: anterior intraparietal sulcus (aIPS) and inferiorly adjacent supramarginal gyrus (SMG)
- # 2: ventral premotor cortex (vPMC)
- The cortical area of aIPS and SMG stores LT gestural rep that indicate, at a schematic and invariant lv of abstraction, how certain tools are grasped and manipulated to achieve certain goals
- Evidence
- Damage to the left aIPS/SMG -> ideational apraxia
- Ideational apraxia: peeps can’t understand the proper use of tools due to damage to the left aIPS/SMG
- Ex. use a comb to brush their teeth
- During normal tool use, after an appropriate gestural rep is selected in the aIPS/SMG, it is sent to the vPMC
- vPMC transforms the rough plan to more specific motor program for physical action
- Program includes setting parameters
- Ex. hand configuration, grip force, movement direction, movement speed
- Both regions (aIPS/SMG and vPMC) are engaged not only when we use the tool, but also when one imagine/pantomimes using it or sees/hears other use it
- Rs examined if the same regions also underlie the motor features of the meanings of tool nouns -> yup
- For aIPS/SMG and vPMC
- Evidence 1: naming tools activates both regions more than naming animals
- Naming manipulable tools (ex. hair brush/key) activates both regions more than naming non-manipulable non-tools (ex. balcony)
- Evidence 2: damage to these regions impairs naming of manipulable artifacts more than naming of non-manipulable artifacts
- Evidence 3: Both regions respond more to words for manipulable artifacts that must be handled in specific ways to fulfil their fx (ex cup) than to words for manipulable artifacts that do not have these requirements
- Evidence 4: for the time-course of activation
- Both regions are engaged w/in 150 ms when subjects perform semantic tasks (ex. verifying that certain tool nouns are linked w/ certain hand actions)
- This ignition speed is so fast that it supports the view that the regions are automatically activated as part of the comprehension process rather than being deliberately engaged
- Just aIPS/SMG
- Evidence 1: this region is activated more when peeps judge word pairs as denoting objects that are manipulated in similar ways (ex. piano and keyboard) than when they judge word pairs that denote objects w/ similar fx (ex. match and lighter)
- Evidence 2: ppl w/ lesions to the aIPS/SMG and normal ppl receiving rTMS to it struggle more w/ the former judgement (i.e. focusing on manipulation) than w/ the latter type of judgement (i.e. focusing on fx)
- Evidence 3: Hargreaves et al 2012
- Used the “body-object interaction” index
- Measures the ease w/ which a human body can interact w/ an object denoted by a noun
- Results: words w/ high ratings (ex. belt) engaged the aIPS/SMG more than words w/ low ratings (ex sun)
- Evidence 4: Pobric et al 2010
- Showed that applying rTMS to the same site delayed naming responses for high-manipulability objects relative to low-manipulability ones
- Applying rTMS to the occipital pole (control) site did not interfere w/ naming response for either class of objects
- Just vPMC
- The degree of activity when subjects name tools varies w/ the amount of motor experience those subjects have w/ those tools
- Patients w/ progressive nonfluent aphasia (a nrodegenerative disease that affects vPMC) are more impaired at naming tools than animals
- In sum, these findings support the hypothesis that motor-semantic aspects of tool nouns rely on the same motor-related cortical regions that subserve the actual use of the designated objects (i.e. aIPS/SMG and vPMC)
- Processing the meanings of words (ex. hammer, knife) involves covertly simulating actions that are usually performed w/ those tools
- This aligns w/ the Grounded cog model
- NOTE: some studies showed that apraxic patients cannot use tools correctly but can name the tools and their fx
- This suggests that even though tool nouns trigger motor simulations in parietal and frontal regions, those simulations are not needed to understand the words
7
Q
A
Auditory features
- Some nouns are characterized by how they typically sound
- Ex. dogs vs cats, hammers vs saws
- Auditory features are coded for object words
- Higher-order perception of non-linguistic env sounds share the cortical areas associated w/ higher-order perception of speech
- pSTG, pSTS, pMTG in both H
- But there are differences
- fMRI: perception of speech is more left-lateralized than perception of non-linguistic env sounds
- nropsych: auditory agnosia
- impaired ability to recognize non-linguistic env sounds but with intact speech perception
- Kiefer et al 2008: examined the neural correlates of auditory semantic features of object nouns
- fMRI and electrophysiology
- Both studies: peeps did the same task – make lexical decisions (i.e. Y/N whether the letter strings are real words) for 100 words and 100 pronounceable pseudowords
- The 100 words consisted of 2 subsets that differed in the relevance of auditory features; words were selected so they differed only in the semantic dimension of auditory content
- Some words were rated high in auditory content (ex. telephone)
- Some words were rated low (ex. cup)
- Other aspects
- # 1: the lexical decision task is assumed not to require effortful processing of the word meanings; it is implicit and automatic
- # 2: in the fMRI study, subjects not only performed the lexical decision task, but ALSO listened to sounds produced by animals and tools
- These stimuli were included to localize the cortical regions that subserve high-lv non-linguistic auditory perception
Results for fMRI
- “activation patterns elicited by words w/ auditory-semantic features” MINUS “activation patterns elicited by words w/o auditory-semantic features”
- # 1: Found that there’s a large cluster of voxels in the left pSTG, pSTS, and pMTG
- They compared this cluster w/ the larger one that was associated w/ hearing sounds produced by animals and tools
- -> Fig A: They found sig overlap
- # 2: As the ratings of auditory-semantic features of words gradually increased, so did the BOLD signals in this cortical region
- # 3: prev fMRI studies linked the same general territory w/ a variety of high level auditory processes
- Explicitly verifying the auditory semantic features of object nouns
- Voluntarily recalling certain sounds
- Imagine music
- Recog familiar env sounds
- Hearing human voices
ERP studies
- Rs overlaid the waveforms elicited by the 2 main types of words
- Fig A: Found that the traces diverged sig at the “150-200 ms” time window at all of the central (midline) electrode sites
- Fig B: Neural generators of these effects were pSTG, pSTS, and pMTG
- This supports the grounded cog model – the left pSTG/pMTG represents auditory conceptual features in a modality specific manner
- Other rs: damage to the left pSTG/pMTG induces greater processing deficits for words w/ auditory-semantic features than for words w/o them
- This confirms the causal involvement of the auditory association cortex in comprehending lexically encoded sound concepts
8
Q
A
Gustatory and olfactory features
- Taste and smell
- Esp important for foods
- Some studies support the grounded cog model
- Sensory capacities for taste and smell are grouped together as they both rely on chemical stimulation
- At higher lvs of processing, both depend on the OBF cortex bilaterally
- These regions contribute to recognition of flavors and odors, but ALSO to computing their reward value (i.e. diff degrees of pleasantness)
- These regions respond strongly to the sight of appetizing foods, and increase activity when words for foods are processed
- Evidence
- Goldberg et al 2006
- Participants were scanned while doing a task involving semantic similarity judgements among object nouns belonging to 4 categories: birds, body parts, clothing, and fruits
- On each trial,
- # 1: covertly generated the most similar item they could think of in relation to the target item (ex. what is the most similar item to peach?)
- # 2: chose one of 2 alternatives (ex. apricot or nectarine) as being more similar to the item they generated
- Results: relative to the categories of birds, body parts, and clothing, the category of fruits induced sig activity in the OBF cortex bilaterally
- Goldberg et al 2006b
- # 1: Ppl were scanned while doing a conceptual property verification task in which words for diff kinds of objects (including foods, non-foods) were presented
- # 2: after each one, a property term appeared that had to be judged as either T/F of the given type of object
- The property terms probed semantic K in 4 perceptual modalities: color, sound, touch, and taste
- Results: relative to the conditions involving color, sound, and touch properties, the condition involving taste properties induced sig activity in OBF cortex, esp in LH
- These results show that gustatory/olfactory features of food concepts depend on high-lv components of the gustatory/olfactory system in the brain
- Limitation: study 1 involved effortful thought -> so OBF activity may reflect voluntary explicit imagery instead of involuntary implicit semantic retrieval
- Summary
- The rs supports the Grounded Cog model: the meanings of object nouns are anchored in modality-specific brain systems
- Here, comprehension involves accessing high-lv perceptual and motor rep that capture generalization about what it’s like to sense and interact w/ the designated entities
- Here, object concepts are not compact representations that reside in an autonomous semantic module; they consist of multiple fragments of info that are widely distributed across the cerebral cortex based on their content
- IOW: color features may be stored in the same part of the ventral temporal cortex that underlies high-lv color perception
- Shape features may be stored in the same part of the ventral temporal cortex that underlies high-lv shape perception
- Motion features may be stored in the same part of lateral temporal cortex that underlie high-lv motion perception
- Motor features may be stored in the same parts of parietal and frontal cortices that underlie high-lv motor programming
- Auditory features may be stored in the same part of the superior/middle temporal cortex that underlies high lv auditory perception
- Olfactory/gustatory features may be stored in the same part of OBF cortex that underlies high-level olfactory/gustatory perception
- This account of conceptual k assumes that whenever an object noun w/ complex multimodal features is understood (ex. an animal word like squirrel, a tool word like spoon), a correspondingly complex network of multimodal cortical areas is rapidly and unconsciously engaged
- This evocation of perceptual and motor rep constitutes the bedrock of comprehension
9
Q
A
A semantic hub for object concepts
- More rs show that neural substrates of object concepts include high-lv components of modality-specific systems for perception and action AND certain sectors of the anterior temporal lobes (ATLs) bilaterally
- Hub and spoke model:
- A theory of semantic K
- Concepts are based not only on modality-specific brain systems for perception and action, but also on modality-invariant integrative mech in the ATLs
- ATLs are integrative regions that have bidirectional connections w/ each of the anatomically distributed modality-specific systems and systems that subserve the phonological and orthographic representations of words
- It combines aspects of the grounded cog model and amodal symbolic model
- The modality-invariant reps in this approach are similar to the undecomposed “lexical concept” nodes in the Lemma Model of speech production
- Computational reasons suggest there is some sort of integrative device that organizes the various semantic features of object nouns
- Point 1: Features that belong to diff modalities are not always experienced together
- So, a mechanism is needed to ensure cross-modal features are correlated w/ eo in the LTM
- Ex. “duck”
- A bird w/ visual and auditory properties
- But the sight of ducks is not always accompanied by the sound of quacking
- point 2: features vary greatly in their typicality for a given concept
- a mechanism is needed to distinguish b/w entities that are central members, peripheral members, and non-members of the category specified by the concept
- Ex. “chair” is associated w/ 4 legged, straight back, wooden object; but chairs can have any # of legs (ex. 0 for beanbag chairs), and do not need backs, can be made of various materials
- Point 3: some objects are perceptually similar to eo but belong to diff categories
- Need a mechanism to overcome the misleading modality-specific commonalities and register deeper conceptually discriminative features
- Ex. donkeys look similar to horses, but they are very diff species
- These considerations guided the construction of computer simulations of the development and breakdown of object concepts
- These simulations have an architecture in which info represented in distinct modality-specific systems is fed into a central modality-invariant system
- MP: these systems can mimic basic aspects of human semantic cog
- The modality-invariant hub can solve the problems described above
- It can bind features in ways that give rise to typicality effects (ex. the diversity of chairs) and extract subtle features that differentiate b/w similar concepts (ex. donkey vs horse)
- The hub does NOT represent the conceptual content
- Most of the content of object nouns resides in the modality-specific systems for perception and action
- The fx of this hub is to identify and organize combinatorial patterns of features w/in and across the systems
- Hub and spoke model maintains that the integrative system (semantic hub) is in the ATLs bilaterally
- These regions occupy the apex of complex processing hierarchies in both hemispheres
- They receive convergent input from and send divergent output back to a broad range of other brain areas that subserve perceptual and motor fx
- So they can serve the feature binding and systematizing fx
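The hub's binding/pattern-completion role can be caricatured in a few lines (a minimal sketch under invented feature codings, not the published simulations; `hub_complete` is a made-up name): given input from one spoke alone, the hub retrieves the best-matching stored cross-modal binding.

```python
import numpy as np

# Hypothetical per-concept features in two "spokes" (visual, auditory)
concepts = {
    "duck": {"visual": np.array([1.0, 0.0, 1.0]), "auditory": np.array([1.0, 0.0])},
    "swan": {"visual": np.array([1.0, 0.0, 0.2]), "auditory": np.array([0.0, 0.0])},
    "dog":  {"visual": np.array([0.0, 1.0, 0.0]), "auditory": np.array([0.0, 1.0])},
}

def hub_complete(visual_input):
    """Hub-style pattern completion: match the visual spoke's input against
    stored bindings, then return the whole concept, auditory features included."""
    best = min(concepts,
               key=lambda c: np.linalg.norm(concepts[c]["visual"] - visual_input))
    return best, concepts[best]["auditory"]

# Seeing a duck (no sound present) still activates its quacking features
name, sound = hub_complete(np.array([1.0, 0.0, 0.9]))
print(name, sound)
```

This mirrors the point above: the content stays in the spokes, while the hub only stores which spoke patterns go together.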
10
Q
A
Box 10.2: the concept of a nest in the brain of a mouse
- Nonhuman animals don’t talk but they have sophisticated object concepts
- Lin et al 2007
- Showed that certain neurons in the anterior temporal lobes (ATLs) of mice respond to the perception of nests regardless of location, shape, style, etc
- Fig 10B
- Shows a cell increased firing rate transiently but drastically when the animal encounters its home nest regardless of position and angle of approach
- Other exp: showed several other characteristics
- Discharged when the nest was moved to diff locations in the same env AND another env
- It responded to circular, triangular, square nests, and nests made of diff materials
- The cell did NOT fire sig when mouse approached a non-nest like object (ex. food items, toys, cotton balls)
- The cell discharged above its baseline frequency when the mouse encounters a nest that was 2x the normal diameter
- But the cell did not discharge when the nest was 4x the normal diameter
- Examined whether the cell was tuned to the fundamental fx features of nests (i.e. refuge for the animal)
- # 1: Rs compared its responses to a plastic cap that was oriented in the “open” nest-like position
- # 2: then compared it to the same object that is flipped over in the “closed” non-nest like position
- Results: cell fired sig in the 1st condition but not in the 2nd one
- IOW: it is sensitive to the defining fx properties of nests
- So, functionality-based conceptualization of nests is implemented at the lv of single cells in the ATLs of mice
- These responses were only observed in a tiny % of the cells studied
- This supports the view that the dev of such tuning characteristics is specialized and helps animals discriminate b/w objects that do/do not fit the criteria
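The reported tuning profile can be condensed into a tiny predicate (an illustrative simplification of the findings, not Lin et al's analysis; `nest_cell_fires` and its parameters are made up):

```python
# Toy "nest cell": fires for functional nests regardless of shape, material,
# or location, but not for closed (flipped) objects, non-nest objects, or
# nests far beyond normal size (a simplification of the reported tuning)
def nest_cell_fires(is_nest_like, is_open, diameter_ratio):
    """diameter_ratio = object diameter / normal nest diameter."""
    return is_nest_like and is_open and diameter_ratio <= 2.0

print(nest_cell_fires(True, True, 1.0))    # ordinary nest: fires
print(nest_cell_fires(True, True, 2.0))    # double-sized nest: still fires
print(nest_cell_fires(True, True, 4.0))    # quadruple-sized: does not fire
print(nest_cell_fires(True, False, 1.0))   # flipped "closed" cap: does not fire
print(nest_cell_fires(False, True, 1.0))   # toy/food item: does not fire
```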
11
Q
A
Evidence from semantic dementia
- Semantic dementia (SD): neurodegenerative disease, variant of primary progressive aphasia; conceptual k gradually deteriorates
- Ppl struggle w/ all verbal and nonverbal tasks that require them to retrieve and process object concepts
- They do poorly when asked to name pics, match words w/ pics, verify if words refer to pics, sort words and objects according to similarity, demonstrate the proper use of objects, and recognize objects based on visual, auditory, somatosensory, and gustatory/olfactory features
- Despite these impairments, patients do well on independent tests of basic perception, autobiographical memory, WM, problem solving, and attention until late in the course of the disease
- Atrophy in SD is striking
- It targets the ATLs bilaterally although w/ left bias
- As disease progresses, there is more tissue loss and hypometabolism in these structures (esp in ventral and lateral parts)
- Per the Hub and Spoke model, the amodal hub is disrupted 1st -> then the visual spoke malfx as atrophy spreads into the posterior parts of the inferior and middle temporal gyri
- SD case: Patient EK
- Cortical atrophy and conceptual disturbances were tracked for 3 yrs
- 60 yo right-handed woman, part-time cook and cleaner
- Worsening word finding problems over 5 yrs
- EK’s pattern and degree of tissue loss and her performances on a battery of standardized semantic tasks were assessed annually 3 times (T1, T2, T3)
- The neuroimaging results
- Distribution of tissue loss was similar in the L and R hemispheres, more severe in the left
- T1: atrophy is restricted to the ATLs, affecting temporal pole, ventral surface, anterior fusiform gyrus and anterior parahippocampal gyrus
- T2: further progression of the atrophy observed at T1
- Some extension posteriorly into the inferior and mid temporal gyri (LH bias)
- T3: EK’s tissue loss was most severe in the ATLs, and it spread further into other parts of the temporal lobes
- 4 tasks
- Task 1: object naming (control mean = 98%)
- Task 2: word-picture matching – hear a spoken word and match it w/ the correct picture in the 4-item array of the target (ex. horse), w/in-domain distractor (ex. lion), and two cross-domain distractors (ex. apple and a car)
- Control mean = 100%
- Task 3: category fluency – given a category label (ex. animals), produce the names of as many category members as possible w/in 1 minute
- Control mean = 17 items
- Task 4: property verification
- Give Y/N responses to qs about the features of common objects
- Some features are shared by many types of objects in the domain (ex. does a camel hv legs); others are distinctive (ex. does a camel have a hump)
- Control mean = 97%
- Results
- Performance in all 4 tasks declined over time as her cortical atrophy progressed
- T1: tissue loss was confined to ATLs
- Had semantic deficits
- Object naming task: 20%
- Superordinate errors = 17% (saying animal instead of horse)
- Coordinate errors = 19% (saying dog instead of cat)
- Most errors were “don’t know” responses
- Word picture matching task: 89%
- Below normal
- Category fluency task: 7 items only
- Property verification task: 72%
- Struggled more w/ judgements about the distinctive than the common features of objects
- Common among SD patients
- T2: tissue loss extended to the MTG -> performance in all 4 tasks was worse
- T3: atrophy spread even further
- Word-picture matching task = stable performance
- Category fluency task = worse
- Couldn’t complete the object naming and property verification tasks
- Ex. refused to answer the 1st q (does an apple hv a handle?) and stated that an apple is smth you put food into
- This supports the Hub and Spoke Model: ATL’s critical role in processing object concepts
- Lambon Ralph et al. 2010
- Examined whether, when the ATL hub is damaged, performance is dominated by modality-specific surface similarities and less reflective of higher-order semantic structure
- Control ppl and 6 SD patients were given matching-to-sample task
- On each trial, the subjects were presented w/ a word and an array of 9 pictures
- Their task was to indicate which pictures showed objects that belonged to the category specified by the word
- The subjects were told that there’s always more than 1 target in the array
- The experiment was set up so the # of targets varied b/w 2 or 3
- The study had targets and distractors to allow the rs to pit surface similarities against category membership
- Typical targets (ex. standard cat)
- Atypical targets (ex. hairless cat)
- Unrelated distractors (ex. train)
- Partially related distractor (ex. otter)
- Pseudo-typical distractors (ex. chihuahua) – similar to the targets but did not belong to the category
- Given this design, the rs expected the SD patients to commit 2 main types of errors
- Undergeneralization: fail to pick atypical targets
- Overgeneralization: incorrectly picked pseudo-typical targets
- Results support predictions
- It confirms the claim that the ATLs implement an integrative semantic system posited by the Hub and Spoke Model
- Follow up study
- Used similar methods but used words instead of pics in the choice arrays
- Ex. cat
- An assortment of animals is spatially organized by approximate visual similarity
- This may reflect the way they are represented in the shape-sensitive lateral portion of the mid-fusiform gyrus (ex. grounded cog model)
- To identify all the cats in this modality-specific representational space, we need to draw a boundary that includes the typical and atypical items, and excludes the unrelated items and superficially related items
- This is one of the main fx of the ATL hub
- When the hub is damaged (ex. in SD), the precise configuration of the boundary becomes blurry (Fig B)
- So it is possible to recognize typical members of the category
- Atypical members may be incorrectly excluded (undergeneralization) and superficially related items are incorrectly included (overgeneralization)
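The blurred-boundary idea above can be sketched as a toy simulation (not from the textbook; the item names, similarity scores, and 0.6 threshold are all invented for illustration): an intact hub classifies by true category membership, while a damaged hub falls back on raw visual similarity, producing exactly the two error types described.

```python
# Toy sketch of the Hub and Spoke boundary idea (invented numbers).
# Each item has a visual similarity to a prototypical cat (0-1)
# and a true category label (what an intact semantic hub "knows").
items = {
    "standard cat": {"visual_sim": 0.95, "is_cat": True},   # typical target
    "hairless cat": {"visual_sim": 0.40, "is_cat": True},   # atypical target
    "chihuahua":    {"visual_sim": 0.80, "is_cat": False},  # pseudo-typical distractor
    "otter":        {"visual_sim": 0.55, "is_cat": False},  # partially related distractor
    "train":        {"visual_sim": 0.05, "is_cat": False},  # unrelated distractor
}

def classify(item, hub_intact):
    """Intact hub: uses higher-order semantic knowledge (true category).
    Damaged hub: falls back on a crude surface-similarity threshold."""
    if hub_intact:
        return item["is_cat"]
    return item["visual_sim"] > 0.6  # blurry, similarity-based boundary

for name, item in items.items():
    intact, damaged = classify(item, True), classify(item, False)
    if intact and not damaged:
        error = "undergeneralization"  # atypical target incorrectly excluded
    elif damaged and not intact:
        error = "overgeneralization"   # pseudo-typical distractor incorrectly included
    else:
        error = "correct"
    print(f"{name}: {error}")
```

With these made-up numbers, the hairless cat is undergeneralized and the chihuahua is overgeneralized, mirroring the two error patterns SD patients showed in the matching-to-sample task.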
12
Q
A
Evidence from fMRI and TMS
- Many rs think findings from SD are strong evidence showing ATLs are essential nodes in the neural architecture of object concepts
- 2 limitations of these findings
- # 1: SD is a progressive nrodegenerative disease
- Even in early-stage patients (atrophy confined to the ATLs), the observed semantic deficits may be due to subthreshold damage (damage that cannot be detected by current tech) in areas outside the ATLs
- # 2: SD affects many diff sectors of the ATLs, so it is not feasible to infer from nropsych studies of SD patients whether certain sectors of the ATLs contribute more to conceptual k than others
- To address these limitations, rs turned to fMRI and rTMS studies
- Visser et al 2010
- fMRI weakness: BOLD signals are weak/distorted near air-filled areas (ex. the sinuses near the ATLs)
- Advances in fMRI -> can correct for this signal loss
- Showed sig semantically driven activity in the ATLs
- Methods:
- Semantic condition
- # 1: Ppl first read 3 words denoting objects in a particular domain
- # 2: then they decided if a 4th word (in upper case font) denoted an object in the same domain or in a different domain
- Yes response = taxi-boat-bicycle-AIRPLANE
- No response = taxi-boat-bicycle-SPOON
- Baseline condition
- # 1: ppl saw 3 strings of a particular letter
- # 2: decide if the 4th string (in upper case font) showed the same letter or not
- Yes response: rrrr-rrr-rrrrr-RRR
- No response: rrr-rrrr-rrrrr-DDD
- Rs contrasted the semantic against the baseline condition
- Results: sig activity in the ATLs (LH bias)
- Predominantly ventral, centered in the anterior fusiform gyrus, extended rostrally and medially
- It suggests that portions of the ATLs may be especially important for object concepts
- Binney et al 2010
- Corrected for signal loss in fMRI
- Examine the contribution of ATLs to semantic processing
- Semantic condition: make synonym judgements
- Ppl decide which of 3 choice words (ex. scoundrel, polka, gasket) was most similar in meaning to the probe (ex. rogue)
- All words were matched for imageability and freq
- Baseline task: involved #s
- On each trial, ppl decided which of 3 choice #s was closest in value to a probe #
- Contrasted the semantic w/ baseline condition -> found sig activity in some of the same sectors of ATLs
- Ex. anterior fusiform gyrus and anterior part of inferior temporal gyrus
- -> Fig 10.6 A,B
- Activity here is more LH lateralized, possibly b/c the semantic task relied heavily on lexical relations
- Jefferies et al 2009
- fMRI study w/ SD patients
- used the same semantic and baseline tasks
- SD patients: bad at synonym judgement task
- -> Fig 10.16D
- Binney et al 2010
- Examined whether any of the specific ATL regions activated in the fMRI study fell w/in the large ATL territory affected in SD
- Used “region of interest” analysis: used the map of tissue loss in SD
- Found that 2 cortical areas (L anterior fusiform gyrus and L anterior inferior temporal gyrus) showed the most sig activity when healthy ppl performed the synonym judgement task, and the most atrophy in SD patients
- These fMRI data converge w/ the SD data
- rTMS data
- we can’t stimulate the anterior fusiform gyrus b/c it is on the ventral surface of the temporal cortex
- too far from the scalp
- we can stimulate the inferolateral ATL region that comprises the anterior parts of the inferior and middle temporal gyri
- The region was targeted in 2 studies that were designed to determine if temporarily disrupting the region’s functionality in healthy ppl would delay their response on synonym judgement task
- Results: supported predictions
- Ppl’s RTs were slower on the synonym judgement task, not on the # judgement task, when rTMS was applied to the target region in the LH compared to no rTMS applied there
- rTMS results align w/ fMRI
- Homologous inferolateral ATL region in RH
- In fMRI, it tends to be dysfx in SD (though not sig); this may support the claim that it cooperates w/ its LH twin to implement the semantic hub
- Lambon Ralph et al 2009
- Applied rTMS to target region in LH and RH while subjects performed synonym judgement task and # judgement task
- Same outcome regardless of hemisphere
- Interfering w/ the operation of LH or RH sig increased RT on lexical task but not # task
- This support the key claims of Hub and Spoke Model: object concepts depend on ATLs bilaterally
- Limitation
- Pobric et al 2007 & Lambon Ralph et al 2009 stimulated the target sites continuously for 10 min prior to task performance
- Such stimulation produces behavioral effects that last for several min after the rTMS train has concluded
- Unknown if the behavioral effects are due to nrophysio changes that occur near the site of stimulation, remote from the site, or both
Summary
- Hub and Spoke Model
- Object concepts encoded by concrete nouns are subserved by modality-specific brain systems for perception and action (the spokes) and also by an amodal integrative system that resides in the ATLs bilaterally (the hub)
- The hub has several fx:
- Binds together the anatomically distributed modality-specific features that constitute the main content of object concepts
- Organizes those features so it is possible to distinguish b/w entities that fall w/in the scope of a given concept and entities that fall outside it
- Evidence the semantic hub is underpinned by ATLs bilaterally
- SD patients show progressive dissolution of object concepts that is linked w/ gradual atrophy of ATLs
- PET and distortion-corrected fMRI studies show that the ATLs are activated when healthy ppl process object concepts
- rTMS: temporarily disrupting the ATLs in healthy ppl reduces their capacity to process object concepts
- distortion-corrected fMRI and rTMS studies show the semantic hub may not depend equally on all ATL sectors; it relies esp on 2 specific sectors
- anterior fusiform gyrus, inferolateral cortex (incl anterior parts of inferior and middle temporal gyri)
13
Q
A
Domains of object concepts
- object concepts encoded by concrete nouns are usually grouped together to form hierarchies
- Ex. golden retrievers belong to dogs, dogs belong to animals, animals belong to living things, etc
- How are these categories implemented in the brain?
- Warrington et al
- Described patients w/ semantic disorders that affect certain categories of object concepts more than others
- Selective semantic disorders/ category specific deficits
- Common dissociation: impaired k of living things (esp animals, fruits/veggies) in the context of preserved k of non-living things (esp tools, artifacts)
- Opposite dissociation also reported
- 42 patients w/ category specific deficits on living things
- 34 … on non-living things
- The performance of some patients is influenced by other variables (ex. visual complexity of pics, familiarity of concepts, freq of words); in some studies these are controlled
- 3 major domains of selective semantic impairment
- Animal concepts, fruit/vegetable concepts, tool concepts
14
Q
A
Box 10.3 the influences of gender and culture on concepts for animals and fruits/vegetables
- Ppl differ in their familiarity w/ specific kinds of animals, fruits, veggies
- Need to know if these diff are large enough to sig modulate the patterns of category-specific semantic disorders that hv been documented
- 2 factors we need to look at – gender & culture
- Gender
- Gainotti 2010
- 80% of patients w/ prevalent impairment of animal concepts were women
- 95% of patients w/ a prevalent impairment fruit/vegetable concepts were men
- This striking gender diff cannot be explained by nroanatomical diffs in lesion sites alone
- One possibility is that differential gender-related vulnerabilities to category-specific deficits are due to gender-related social roles
- Men are more familiar w/ animals b/c they are more likely to hunt
- Women are more familiar w/ fruits/veggies b/c they are more likely to cook
- So, there may be a male advantage for animal k and female advantage for plant k, b/c of evolution
- Men contributed more to hunting; women to gathering
- Culture
- Ppl living in post-industrial societies have “nature-deficit syndrome”
- An impoverished understanding of the natural world
- The folk-biological k exhibited by modern vs ancestral agricultural societies differs greatly (on the order of 50 vs 500 known kinds) -> large discrepancy
- This suggests that most of the patients studied already had “nature-deficit syndrome”
- Category-specific deficits may thus have manifested differently in earlier societies
15
Q
A
3 major domains of selective semantic impairment
Animal concepts
- The category-specific deficit involving living things can be split into smaller domains
- Animate (animals) vs inanimate (plants)
- Some patients manifest semantic disorders that selectively or disproportionately affect 1 or the other of these 2 domains
- Examine patients w/ impairments that affect animal concepts, then plants
- Most patients w/ semantic disorders on the animal domain have lesions in mid to anterior ventral and medial temporal regions (LH bias)
- Causes:
- Stroke
- Most have herpes simplex encephalitis (HSE) infection
- Viral infection that rapidly destroys portion of the temporal lobes bilaterally, including medial sections of the ATLs
- Some patients have worse k of animals than other conceptual domains
- SD patients: impaired K in both conceptual domains (living and non-living)
- Reason
- Rapid necrosis in HSE distort conceptual representation -> category specific deficits
- Gradual atrophy in SD dims conceptual representations -> across the board deficits
- Blundo et al 2006 – case study
- KC woman, right-handed
- MRI showed damage to anterior ventral and medial temporal lobes bilaterally (LH bias)
- Diagnosis: HSE
- Struggled w/ animal items on standardized verbal/nonverbal tasks
- Picture-naming task: provide appropriate terms for 260 line drawings
- She successfully named 93% of fruits/veggies, 92% artifacts, 50% of animals
- Not due to familiarity effects, b/c she couldn’t name even highly familiar animals (cat/pig)
- Generated semantically related naming response for 4 items
- 1: called ant a fly
- 2: called the eagle a parrot
- 3: called the fox a wolf
- 4: called pig a hippo
- Oral definition task
- Rs probed KC’s conceptual k by asking “What is X? Please describe it, including info on size and structure”
- 102 objects (50% animals, 50% not)
- KC could indicate the superordinate category of all objects
- Good definition for non-animal objects
- Adequate definitions for only 17 animals
- Ex. of an inadequate definition: mouse – 4 legs, 1m tall and 1m long
- Drawing from memory task
- For the same 102 objects, she could draw all non-animal items, but only 17/51 animals
- Decision test for visual features
- 76 items (50/50 animal/nonanimals)
- Asked Y/N qs about if the object has a certain visual attribute
- Ex. does a fly have wings?
- Perfect answers for non-animals
- 50% correct for animals
- -> Impaired at retrieving conceptual k of shapes of animals
- Also impaired at retrieving conceptual k about their colors and sounds
- Impaired for association/functional features
- Task: rs gave definition consisting of associative/fx feature, asked her to provide the corresponding name
- Ex. it’s an animal, it’s a bug that stings, it sucks nectar from flowers and produces honey (Ans: bee)
- Result: 80% correct for nonanimals; 0% for animals
- Semantic judgements about animals
- KC was asked about the animals’ habitat, ferocity, edibility
- Performed poorly on all 3 features
- KC case study – shows how conceptual domain of animals can be selectively disrupted
- Her category-specific deficit was displayed for diff kinds of stimuli (verbal/nonverbal), diff kinds of semantic features (shape, color, sound, associative/fx)
- Impairment was due to lesions in the anterior ventral and medial temporal lobes bilaterally
Fruit/vegetables concepts
- Similarities vs diffs b/w semantic disorders affecting animal concepts vs fruit/vegetable concepts
- Chief similarity: both kinds of patients have damage to mid-to-anterior ventral and medial temporal regions
- Chief diff: laterality and intra-hemispheric localization
- Laterality:
- patients w/ impairment of animal concepts -> bilateral
- patients w/ impairment of non-animal concepts -> unilateral (LH bias)
- intra-hemispheric localization
- patients w/ impairment of animal concepts -> anterior temporal areas
- patients w/ impairment of non-animal concepts -> posterior areas (ex. mid fusiform gyrus)
- Samson and Pillon 2003
- Patient RS – impaired conceptual k of fruits/veggies
- Engineer who suffered a stroke in the territory of the left posterior cerebral artery
- Damage included ventral and medial temporal areas (fusiform, parahippocampal, hippocampal gyri)
- Medial occipital areas, part of thalamus
- had primary language deficits in reading and oral word retrieval
- RS’s scores on semantic tasks
- Good at naming pics of non-living things; poor at living things (worse for fruits/veggies than animals – even though animal pics are more visually complex)
- name objects in response to verbal descriptions -> same pattern
- Word-pic matching -> same pattern
- -> shows a category-specific deficit for fruit/veggie concepts can develop after a left posterior cerebral artery infarct
Tool concepts
- Patients w/ impairment of tool concepts don’t have lesions in the ventral and medial temporal lobes
- Instead they have them in the posterior lateral temporal region (pMTG), inferior parietal region (aIPS/SMG), and/or inferior frontal region (vPMC); LH bias
- Warrington and McCarthy 1987
- Patient YOT
- Stroke damaged the left temporoparietal region
- Couldn’t produce or comprehend propositional speech, couldn’t process written language
- Could partially understand single words (spoken/printed)
- 3 tasks
- Task 1: for each item, match the spoken word w/ the correct pic w/ 5 choices
- 3 categories of objects: animals, fruits/veggies, artifacts
- Each array of pics showed objects belonging to the same category
- The same task was administered in 2 diff sessions; the response-stimulus interval (RSI) was 2s in one and 5s in the other
- RSI: amount of time b/w patients’ response to 1 item and examiner’s presentation of the next item
- When RSI = 2s
- YOT did worse on artifacts (63%) than animals (85%)
- When RSI = 5s
- Artifacts = 90% (effect disappeared)
- Rs interpreted this as YOT struggling to access semantic info rather than having an absolute loss of semantic reps
- Task 2: spoken word–picture matching
- Categories: fruits/veggies
- 2 subclasses of artifacts
- Large non-manipulable ones (non-tools)
- Small manipulable ones (tools)
- Results: did well on fruits/veggies (85%)
- Non tools: 80%
- Tools: 60%
- So, YOT’s difficulty in retrieving conceptual k may not apply to the entire domain of artifacts, maybe just to tools
- Task 3: match spoken word to written word (6 choices)
- Semantic classes: animals, fruits/veggies, buildings, vehicles, kitchen utensils, office supplies, furniture, body parts
- Written words belong to the same class
- Done in 2 diff times
- Results: good at living things (animals and fruits)
- Declined for outdoor artifacts (buildings and vehicles)
- Worse for small indoor artifacts (kitchen utensils, office supplies, furniture)
- Worst on body parts
- -> retrieval deficit was worse for tool-like objects
- Summary: YOT has a disorder in which the semantic rep of tools (and tool-like objects like body parts) are harder to access than other concepts
- Lesion in left temporoparietal region