Lec 6/ TB Ch 10 Flashcards

1
Q
A

Features of object concepts

Theoretical background

  • Amodal symbolic model
    • concepts (including word meanings) consist of abstract symbols that are represented and processed in an autonomous semantic system, completely separate from the modality-specific systems for perception and action
    • Ex. the concept encoded by the word banana consists of amodal features (ex. fruit, long, curved, yellow, peel)
    • To understand the word, we only need to access these features; we do not need to retrieve memories of how bananas are sensed and used
  • Grounded cog model/embodied model/simulation model
    • This theory suggests that concepts are anchored in modality specific systems, such that understanding word meanings involves high-level perceptual and motor representations
      • Semantic K does not reside in an abstract realm segregated from perception and action; it overlaps with these capacities
    • Ex. understanding object nouns (banana) involves activating modality-specific records in LT memory that capture generalisations on how bananas look, taste, feel, and how they are manipulated
    • In some cases, conceptual processing is deep enough to produce vivid mental imagery
    • Ex. reading a good novel
    • Such detailed imagery is not needed for comprehension; it occurs afterward, as embellishment/elaboration that enriches understanding
    • What matters in the Grounded Cog model is that modality-specific activations don’t need to be full-fledged conscious sensory and motor images to support conceptual processing
    • During ordinary language comprehension, these activations are implicit
    • Ex. when you read the word banana, you won’t experience the flavor
      • The lack of conscious taste doesn’t mean that gustatory representations are not involved in the comprehension process (they are)
    • Attentive readers will notice that the layout of the visual, auditory, and other elements reflects the anatomical locations of the corresponding modality-specific systems in the brain
    • The Grounded Cog model maintains that the neural correlates of conceptual knowledge include high-level components of those systems
    • This implies that the meaning of an object noun (ex. banana) does not reside in a single place; rather, different fragments of the concept are scattered across diff regions
    • Ex. visual-semantic info about how bananas typically look may be stored in the same ventral temporal areas that are engaged when bananas are visually recognized
    • Ex. gustatory-semantic info about how bananas typically taste may be stored in the same OBF and insula areas that are engaged when the taste of bananas is recognized
    • Ex. spatiomotor and action-semantic info about how bananas are usually handled may be stored in the same parietal/frontal areas that are engaged when bananas are grasped/manipulated
  • This scheme was first introduced by Wernicke and later explored by Broadbent, Lissauer, and Freud
  • Warrington used a similar framework to interpret the performance of brain-damaged patients who showed selective impairments of particular semantic domains
  • More recent studies have used diff techniques to test the prediction that conceptual processing recruits modality-specific systems for perception and action
    • Visual features: color, shape, motion
    • Nonvisual: motor, auditory, gustatory/olfactory
2
Q
A

Box 10.1: What is a violin?

  • Damasio – used Grounded Cog model to describe how the concept of a violin is implemented in the brain
    • Stimuli – drawing/word/object = violin
    • Activates many representations
      • Ex. man-made objects, string instruments
      • Auditory
    • Co-activation depends on the convergence zones
    • Convergence zones: ensembles of neurons that “know about” simultaneous occurrence of patterns of activity during the perceived or recalled experience of entities and events
3
Q
A

Color features

  • Many objects have typical colors
    • social conventions (ex. yellow taxi)
    • animals (ex. white swans)
    • plants (ex. orange carrots)
  • Object-color associations are a key part of the semantic k of the relevant nouns
  • 2 main cortical regions for color perception
    • Passive color sensation
      • Ex. one gazes at a garden of flowers
      • activates area V4, a patch of the lingual gyrus in the occipital lobe
      • more activated by colors than by grayscale stimuli
      • damage here -> achromatopsia: inability to perceive color
    • active color perception
      • ex. one deliberately, attentively compares the shades of diff flowers
      • uses the mid-fusiform gyrus (ventral BA20, downstream from V4)
      • fusiform gyrus = ventral temporal cortex
        • “what” pathway – deals w/ shape, color, texture
      • V4-alpha: region in the fusiform gyrus, responsive during color discrimination
        • sensitive to the Farnsworth-Munsell 100 Hue Test:
          • determine if 5 circularly arrayed wedges form a clockwise sequence of incrementally changing hues
          • (baseline): subjects must make similar judgements for grayscale wedges
  • Are either V4 or V4-alpha engaged when a person retrieves semantic k about the color features of objects (ex. taxi, swan)?
  • Simmons et al 2007
    • fMRI study
    • 2 parts
    • Part 1: localized the subject’s color perception areas
      • Administered the Farnsworth-Munsell 100 Hue Test and subtracted the activation evoked by grayscale wheels from that evoked by color wheels
    • Part 2: asked ppl to do a conceptual property verification task, 3 conditions
      • In each trial of the color property condition
        • Subjects were shown an object noun (ex. eggplant) -> color adj (ex. purple)
        • Then they indicate if the color usually applies to the object
      • In each trial of the motor property condition
        • Shown an object noun (ex. football) -> action verb (ex. throw)
        • Then indicate if the action usually applies to the object
      • In each trial of the concept-only condition
        • Shown an object noun (ex. lightbulb)
        • NOT followed by property word; no response
        • Purpose: allow rs to separate the BOLD signals elicited by object words from those elicited by prop words in the 1st 2 conditions
      • Trials from the 3 conditions were mixed and presented randomly
    • Analyzed the data to see if any voxels were activated more for color than grayscale wheels in the 1st part of the study AND activated more for color property judgements than motor property judgements in part 2
    • Results: a large cluster of voxels in the left mid-fusiform gyrus, overlapping V4-alpha, met both criteria
    • These findings are consistent w/ the Grounded Cog model, supporting the view that semantic k is anchored in the brain’s modality-specific systems
  • Amodal symbolic model supporters argue that the fusiform activity observed during color property judgements may not reflect unconscious, implicit retrieval of conceptual color features
    • It may instead reflect conscious, explicit generation of color imagery, which is a process that may happen after the relevant color k has been accessed from an abstract semantic system located elsewhere in the brain
  • Rebuttal – simmons et al 2007
    • The data are compatible w/ other explanations
    • But they do not sit well w/ a core assumption of the amodal symbolic model – that abstract representations should be sufficient to perform all semantic tasks
    • They stated that damage to the left fusiform gyrus can cause color agnosia
      • Color agnosia: a disorder that impairs k of object-color associations (i.e. typical colors of objects) that their color property verification task probed
        • Due to damage in ventral temporal cortex, esp fusiform gyrus
      • This supports the idea that the fusiform activity shown in the fMRI study reflects retrieval of conceptual color features, not just color imagery
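The two-part conjunction logic of the Simmons et al 2007 analysis can be sketched in code. This is an illustrative toy, not the authors' pipeline: the voxel values, threshold, and random data are all made up; only the AND-ing of the two contrasts mirrors the study.

```python
import numpy as np

# Toy conjunction analysis: find voxels that respond more to color than
# grayscale wheels (perceptual localizer, part 1) AND more to color-property
# than motor-property judgements (conceptual task, part 2).
rng = np.random.default_rng(0)
n_voxels = 1000

# Hypothetical per-voxel contrast statistics for each comparison.
color_vs_gray = rng.normal(0, 1, n_voxels)    # part 1 contrast
color_vs_motor = rng.normal(0, 1, n_voxels)   # part 2 contrast

threshold = 1.65  # illustrative one-tailed cutoff, not the study's
conjunction = (color_vs_gray > threshold) & (color_vs_motor > threshold)

print(f"{conjunction.sum()} voxels survive both contrasts")
```

In the actual study, the voxels surviving both contrasts clustered in the left mid-fusiform gyrus, overlapping V4-alpha.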
4
Q
A

Shape features

  • Shape is a key component of object nouns
  • Shape properties of visual objects are represented in the ventral occipitotemporal cortex
  • Studies have examined whether the shape properties of diff categories of objects are evenly distributed or clustered together
  • Most evidence suggests that certain areas are preferentially responsive to certain categories of objects (ex. faces, non-face body parts, animals, places, and printed words)
  • Chao et al 1999
    • Showed that there are separate cortical rep of the shapes of animals and tools
    • The category of tools is restricted to man-made objects that serve specific fx
    • Examined the perceptual processing of animals and tools via passive viewing tasks and match-to-sample tasks
    • Also evaluated conceptual processing of animals and tools using silent-picture-naming task and property verification tasks
      • property verification tasks – require ppl to answer y/n to qs (ex. forest animal? kitchen tool?) in response to printed words for animals and tools
    • Results: across all tasks, sig more bilateral activation for animals (perceptual and conceptual) in the lateral part of mid-fusiform gyrus
    • More bilateral activation for tools in the medial part of the mid-fusiform gyrus
    • These adjacent but distinct regions of the fusiform gyrus were activated by picture and words
    • These results fit the predictions of the Grounded cog model
  • Some argue: activation evoked by words may reflect ppl’s efforts to conjure up explicit visual images of the shapes of the lexically encoded animals and tools
    • Rebuttal: Wheatley et al 2005
      • Showed that lexically driven category-related fusiform activations indicate semantic processing
      • Used the phenomenon “repetition suppression”
      • Repetition suppression: a pop of neurons that code for a specific type of info will decrease its response when the info is repeated
      • This reflects greater processing efficiency
      • Methods: ppl rapidly read presented word pairs (shown for 150 ms -> then 100 ms “break”) that were either
        • unrelated (ex. celery, giraffe)
        • related (ex. horse, goat)
        • identical (ex. camel, camel)
      • Results: as the degree of semantic relatedness b/w the 2 words progressively increased for a particular category (i.e. animals in this study), the neural activation evoked by the 2nd word progressively decreased in the lateral part of the mid-fusiform gyrus
        • The same area Chao et al 1999’s study linked w/ the animal category
        • Given the processing time constraints of the task, it is unlikely that the repetition suppression effect is due to explicit conscious images the subjects generated after understanding the words
  • These convergent results show that the shape features of the meanings of object nouns are captured by neurons in the ventral temp cortex that partially overlap with those that subserve visual perception of the same features
  • They are also segregated based on semantic category
  • Other studies: damage to the mid-fusiform gyrus (esp LH) impairs understanding of concrete object nouns
  • These lesions tend to affect semantic k about living things (ex. animals, fruits, veggies) more severely than semantic k about non-living things (ex. tools)
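The repetition suppression pattern from Wheatley et al 2005 – a weaker response to the 2nd word of a pair as semantic relatedness increases – can be mimicked with a toy linear model. The relatedness values and the suppression constant k below are hypothetical, chosen only to reproduce the qualitative ordering reported in the study.

```python
# Toy model of repetition suppression: the population response to the
# second word of a pair shrinks as semantic relatedness increases.
baseline_response = 1.0
relatedness = {"unrelated": 0.0, "related": 0.6, "identical": 1.0}

# Hypothetical linear suppression: response = baseline * (1 - k * relatedness)
k = 0.5
responses = {cond: baseline_response * (1 - k * r)
             for cond, r in relatedness.items()}

for cond, resp in responses.items():
    print(f"{cond}: {resp:.2f}")
# Predicted ordering: unrelated > related > identical
```

The monotonic decrease is the signature of greater processing efficiency for repeated (or semantically overlapping) information.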
5
Q
A

Motion features

  • Some objects have characteristic movements
  • Ex. hopping for rabbits; cutting for scissors
  • MT + (located in the vicinity of the anterior occipital and lateral occipital sulci) is involved in the passive perception of moving visual stimuli
  • Damage to this area -> akinetopsia
    • Impaired ability to consciously see motion
  • MT+ does not distinguish systematically b/w diff types of object-associated motion
  • BUT it projects to higher-lv posterolateral temporal areas that do
  • Processing stream 1:
    • Extend from MT+ into a sector of the posterior superior temporal sulcus (pSTS) that responds preferentially to the sight of biological (ex. animal) motion patterns
  • Processing stream 2:
    • Extends from MT+ into a sector of the posterior middle temporal gyrus (pMTG) that responds preferentially to the sight of nonbiological (ex. tool) motion patterns
    • NOTE: pSTS and pMTG (esp in LH) are associated w/ speech perception and production
  • Rs have examined whether these 2 parallel motion-processing pathways contribute not only to high-lv visual perception, but ALSO to LT semantic rep of category-specific object-associated motion patterns
  • The Grounded Cog model predicts they should
  • Chao et al 1999
    • The pSTS (independently linked w/ the sight of biological motion patterns) was engaged when ppl performed perceptual tasks w/ animal pics but ALSO when they performed conceptual tasks w/ animal nouns
    • The pMTG (independently linked w/ the sight of nonbio motion patterns) was engaged when ppl did perceptual tasks w/ tool pics but ALSO conceptual tasks w/ tool nouns
    • These results are consistent w/ the hypothesis that understanding words (ex. rabbit, scissors) implicitly reactivates visual generalisations about the typical motion patterns of the objects
  • NOTE: damage to pSTS/pMTG (esp in LH) is more likely to impair recognition and naming of tools than animals
  • This is the opposite of the pattern involving shape features in the mid-fusiform gyrus
    • This suggests that these regions are more important for semantic processing of tools than of animals
6
Q
A

Motor features

  • When we think about tools like hammers and knives, we think not only about how they look but also about how they are handled
  • Diff tools are operated in diff ways
  • These motor representations are important parts of the meanings of the corresponding words
  • The Grounded Cog model predicts that these meanings reside in some of the same high-lv components of the motor system that subserve the actual use of tools
  • Tool use recruits 2 main cortical regions that are left-lateralized in right-handed ppl
    • # 1: anterior intraparietal sulcus (aIPS) and inferiorly adjacent supramarginal gyrus (SMG)
    • # 2: ventral premotor cortex (vPMC)
  • The aIPS/SMG region stores LT gestural reps that indicate, at a fairly schematic and invariant lv of abstraction, how certain tools are grasped and manipulated to achieve certain goals
  • Evidence
    • Damage to left aIPS/SMG -> ideational apraxia
    • Ideational apraxia: ppl can’t understand the proper use of tools
    • Ex. use a comb to brush their teeth
  • During normal tool use, after an appropriate gestural rep is selected in the aIPS/SMG -> sent to the vPMC
  • vPMC transforms the rough plan to more specific motor program for physical action
    • Program includes setting parameters
      • Ex. hand configuration, grip force, movement direction, movement speed
    • Both regions (aIPS/SMG and vPMC) are engaged not only when we use the tool, but also when one imagine/pantomimes using it or sees/hears other use it
  • Rs have examined whether the same regions also underlie the motor features of the meanings of tool nouns -> yup
    • For aIPS/SMG and vPMC
    • Evidence 1: naming tools activates both regions more than naming animals
      • Naming manipulable tools (ex. hair brush/key) activates both regions more than naming non-manipulable non-tools (ex. balcony)
    • Evidence 2: damage to these regions impairs naming of manipulable artifacts more than naming of non-manipulable artifacts
    • Evidence 3: Both regions respond more to words for manipulable artifacts that must be handled in specific ways to fulfil their fx (ex cup) than to words for manipulable artifacts that do not have these requirements
    • Evidence 4: for the time-course of activation
      • Both regions are engaged w/in 150 ms when subjects perform semantic tasks (ex. verifying that certain tool nouns are linked w/ certain hand actions)
      • This ignition speed is so fast that it supports the view that the regions are automatically activated as part of the comprehension process rather than being deliberately engaged
    • Just aIPS/SMG
    • Evidence 1: this region is activated more when ppl judge word pairs as denoting objects that are manipulated in similar ways (ex. piano and keyboard) than when they judge word pairs that denote objects w/ similar fx (ex. match and lighter)
    • Evidence 2: ppl w/ lesions to the aIPS/SMG, and normal ppl receiving rTMS to it, struggle more w/ the former type of judgement (i.e. focusing on manipulation) than w/ the latter (i.e. focusing on fx)
    • Evidence 3: Hargreaves et al 2012
      • Used the “body-object interaction” index
        • Measures the ease w/ which a human body can interact w/ an object denoted by a noun
      • Results: words w/ high ratings (ex. belt) engaged the aIPS/SMG more than words w/ low ratings (ex sun)
    • Evidence 4: Pobric et al 2010
      • Showed that applying rTMS to the same site delayed naming responses for high- relative to low-manipulability objects
      • Applying rTMS to the occipital pole (control) site did not interfere w/ naming response for either class of objects
    • Just vPMC
    • The degree of activity when subjects name tools varies w/ the amount of motor experience those subjects have w/ those tools
    • Patients w/ progressive nonfluent aphasia (a neurodegenerative disease) that affects the vPMC are more impaired at naming tools than animals
  • In sum, these findings support the hypothesis that motor-semantic aspects of tool nouns rely on the same motor-related cortical regions that subserve the actual use of the designated objects (i.e. aIPS/SMG and vPMC)
  • Processing the meanings of words (ex. hammer, knife) involves covertly simulating actions that are usually performed w/ those tools
  • This aligns w/ the Grounded cog model
  • NOTE: some studies show that apraxic patients cannot use tools correctly but can still name the tools and describe their fx
  • This suggests that even though tool nouns trigger motor simulations in parietal and frontal regions, those simulations are not needed to understand the words
7
Q
A

Auditory features

  • Some objects are characterized by how they typically sound
    • Ex. dogs vs cats, hammers vs saws
    • Auditory features are thus coded in the meanings of the corresponding object words
  • Higher-order perception of non-linguistic env sounds share the cortical areas associated w/ higher-order perception of speech
    • pSTG, pSTS, pMTG in both H
    • But there are differences
  • fMRI: perception of speech is more left-lateralized than perception of non-linguistic env sounds
  • nropsych: auditory agnosia
    • impaired ability to recognize non-linguistic env sounds but with intact speech perception
  • Kiefer et al 2008: examined the neural correlates of the auditory semantic features of object nouns
    • fMRI and electrophysiology
    • Both studies: ppl did the same task – make lexical decisions (i.e. Y/N: are the letter strings real words?) for 100 words and 100 pronounceable pseudowords
      • The 100 words consisted of 2 subsets that differ on the relevance of auditory features; words were selected so they only differ in the semantic dimension of auditory content
        • Some words were rated high in auditory-feature relevance (ex. telephone)
        • Others were rated low (ex. cup)
      • Other aspects
      • # 1: lexical decision task is assumed to not require effortful processing of the word meanings; it is implicit and automatic
      • # 2: in the fMRI study, subjects not only performed the lexical decision task, but ALSO listened to sounds produced by animals and tools
        • These stimuli were included to localize the cortical regions that subserve high-lv non-linguistic auditory perception
    • Results for fMRI
      • “activation patterns elicited by words w/ auditory-semantic features” MINUS “activation patterns elicited by words w/o auditory-semantic features”
        • # 1: Found that there’s a large cluster of voxels in the left pSTG, pSTS, and pMTG
          • They compared this cluster w/ the larger one that was associated w/ hearing sounds produced by animals and tools
          • -> Fig A: They found sig overlap
        • # 2: As the ratings of auditory-semantic features of words gradually increased, so did the BOLD signals in this cortical region
        • # 3: prev fMRI studies linked the same general territory w/ a variety of high level auditory processes
          • Explicitly verifying the auditory semantic features of object nouns
          • Voluntarily recalling certain sounds
          • Imagining music
          • Recognizing familiar env sounds
          • Hearing human voices
  • ERP studies
    • Rs overlaid the waveforms elicited by the 2 main types of words
    • Fig A: Found that the traces diverged sig at the “150-200 ms” time window at all of the central (midline) electrode sites
    • Fig B: Neural generators of these effects were pSTG, pSTS, and pMTG
    • This supports the grounded cog model – the left pSTG/pMTG represents auditory conceptual features in a modality specific manner
  • Other rs: damage to the left pSTG/pMTG induces greater processing deficits for words w/ auditory-semantic features than for words w/o them
  • This confirms the causal involvement of the auditory association cortex in comprehending lexically encoded sound concepts
8
Q
A

Gustatory and olfactory features

  • Taste and smell
  • Esp important for foods
  • Some studies support the grounded cog model
  • The sensory capacities for taste and smell are grouped together b/c they both require chemical stimulation
  • At higher lvs of processing, both depend on the OBF cortex bilaterally
    • These regions contribute not only to the recognition of flavors and odors, but ALSO to computing their reward value (i.e. diff degrees of pleasantness)
    • The region responds strongly to the sight of appetizing foods, and increases its activity when words for foods are processed
  • Evidence
    • Goldberg et al 2006
      • Participants were scanned when doing a task on semantic similarity judgements among object nouns belonging to 4 categories: birds, body parts, clothing, and fruits
      • On each trial,
        • # 1: covertly generated the most similar item they could think of in relation to the target item (ex. what is the most similar item to a peach?)
        • # 2: chose one of 2 alternatives (ex. apricot or nectarine) as being more similar to the item they generated
      • Results: relative to the categories of birds, body parts, and clothing, the category of fruits induced sig activity in the OBF cortex bilaterally
    • Goldberg et al 2006b
      • # 1: Ppl were scanned while doing a conceptual property verification task in which words for diff kinds of objects (foods and non-foods) were presented
      • # 2: after each one, a property term appeared that had to be judged as either T/F of the given type of object
      • The property terms probed semantic K in 4 perceptual modalities: color, sound, touch, and taste
      • Results: relative to the conditions involving color, sound, and touch properties, the condition involving taste properties induced sig activity in OBF cortex, esp in LH
      • These results show that gustatory/olfactory features of food concepts depend on high-lv components of the gustatory/olfactory system in the brain
      • Limitation: the task in study 1 involved effortful thought -> so the OBF activity may reflect voluntary explicit imagery instead of involuntary implicit semantic retrieval
  • Summary
    • The rs supports Grounded Cog model, the meanings of object nouns are anchored in modality-specific brain systems
    • Here, comprehension involves accessing high-lv perceptual and motor rep that capture generalization about what it’s like to sense and interact w/ the designated entities
    • Here, object concepts are not compact representations that reside in an autonomous semantic module; they consist of multiple fragments of info that are widely distributed across the cerebral cortex based on their content
      • IOW: color features may be stored in the same part of the ventral temporal cortex that underlies high-lv color perception
      • Shape features may be stored in the same part of the ventral temporal cortex that underlies high-lv shape perception
      • Motion features may be stored in the same part of lateral temporal cortex that underlie high-lv motion perception
      • Motor features may be stored in the same parts of parietal and frontal cortices that underlie high-lv motor programming
      • Auditory features may be stored in the same part of the superior/middle temporal cortex that underlies high lv auditory perception
      • Olfactory/gustatory features may be stored in the same part of OBF cortex that underlies high-level olfactory/gustatory perception
  • This account of conceptual k assumes that whenever an object noun w/ complex multimodal features is understood (ex. an animal word like squirrel, a tool word like spoon), a correspondingly complex network of multimodal cortical areas is rapidly and unconsciously engaged
  • This evocation of perceptual and motor rep constitutes the bedrock of comprehension
9
Q
A

A semantic hub for object concepts

  • Growing rs shows that the neural substrates of object concepts include not only high-lv components of modality-specific systems for perception and action, but ALSO certain sectors of the anterior temporal lobes (ATLs) bilaterally
  • Hub and spoke model:
    • A theory of semantic K
    • Concepts are based not only on modality-specific brain systems for perception and action, but also modality-invariant integrative mech in the ATLs
      • ATLs are integrative regions that have bidirectional connections w/ each of the anatomically distributed modality-specific systems and systems that subserve the phonological and orthographic representations of words
    • It combines aspects of the grounded cog model and amodal symbolic model
    • The modality-invariant reps in this approach are similar to the undecomposed “lexical concept” nodes in the Lemma Model of speech production
  • There are computational reasons to suspect some sort of integrative device that organizes the various semantic features of object nouns
    • Point 1: Features that belong to diff modalities are not always experienced together
      • So, a mechanism is needed to ensure cross-modal features are correlated w/ eo in the LTM
      • Ex. “duck”
        • A bird w/ visual and auditory properties
        • But the sight of ducks is not always accompanied by the sound of quacking
    • point 2: features vary greatly in their typicality for a given concept
      • a mechanism is needed to distinguish b/w entities that are central members, peripheral members, and non-members of the category specified by the concept
      • Ex. “chair” is associated w/ a 4-legged, straight-backed, wooden object; but chairs can have any # of legs (ex. 0 for beanbag chairs), do not need backs, and can be made of various materials
    • Point 3: some objects are perceptually similar to eo but belong to diff categories
      • Need a mechanism to overcome the misleading modality-specific commonalities and register deeper conceptually discriminative features
        • Ex. donkeys look similar to horses, but they are diff species
  • These factors guided the construction of computer simulations of the development and breakdown of object concepts
  • These simulations have an architecture in which info represented in distinct modality-specific systems is fed into a central modality-invariant system
  • MP: these systems can mimic basic aspects of human semantic cog
  • The modality-invariant hub can solve the problems described above
  • It can bind features in ways that give rise to typicality effects (ex. the diversity of chairs) and extract subtle features that differentiate similar concepts (ex. donkey vs horse)
  • The hub does NOT represent the conceptual content
  • Most of the content of object nouns resides in the modality-specific systems for perception and action
  • The fx of this hub is to identify and organize combinatorial patterns of features w/in and across the systems
  • Hub and spoke model maintains that the integrative system (semantic hub) is in the ATLs bilaterally
  • These regions occupy the apex of complex processing hierarchies in both hemispheres
  • They receive convergent input from and send divergent output back to a broad range of other brain areas that subserve perceptual and motor fx
  • So they can serve the feature binding and systematizing fx
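A minimal numerical sketch of the hub-and-spoke architecture, assuming made-up feature vectors and random untrained weights (the actual simulations learn these weights through training):

```python
import numpy as np

# Toy hub-and-spoke network: modality-specific "spoke" feature vectors
# project into a shared hub whose pattern is modality-invariant.
rng = np.random.default_rng(1)

spokes = {
    "visual":   rng.normal(size=8),   # e.g. shape/color features
    "auditory": rng.normal(size=8),   # e.g. characteristic sounds
    "motor":    rng.normal(size=8),   # e.g. manipulation features
}

# Bidirectional connections b/w each spoke and the hub (random here;
# learned in real simulations).
hub_dim = 4
W_in = {m: rng.normal(size=(hub_dim, 8)) for m in spokes}   # spoke -> hub
W_out = {m: rng.normal(size=(8, hub_dim)) for m in spokes}  # hub -> spoke

# The hub state integrates input from all modalities at once.
hub = np.tanh(sum(W_in[m] @ spokes[m] for m in spokes))

# From the hub, any spoke's features can be (approximately) read back out,
# so activating one modality can reactivate the others via the hub.
reconstructed_motor = W_out["motor"] @ hub
print(hub.shape, reconstructed_motor.shape)
```

The key design point mirrored here is that the hub does not store the feature content itself; it only binds and redistributes patterns across the spokes, which is why hub damage (as in semantic dementia) degrades all modalities and categories together.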
10
Q
A

Box 10.2: the concept of a nest in the brain of a mouse

  • Nonhuman animals don’t talk but they have sophisticated object concepts
  • Lin et al 2007
    • Showed that certain neurons in the anterior temporal lobes (ATLs) of mice respond to the perception of nests regardless of location, shape, style, etc
  • Fig 10B
    • Shows a cell that increased its firing rate transiently but drastically when the animal encountered its home nest, regardless of position and angle of approach
    • Other exps showed several other characteristics
      • It discharged when the nest was moved to diff locations in the same env AND in another env
      • It responded to circular, triangular, and square nests, and to nests made of diff materials
  • The cell did NOT fire sig when the mouse approached non-nest objects (ex. food items, toys, cotton balls)
    • The cell discharged above its baseline frequency when the mouse encountered a nest 2x the normal diameter
    • But it did not discharge when the nest was 4x the normal diameter
  • Lin et al 2007
    • Examined whether the cell was tuned to the fundamental fx feature of nests (i.e. providing refuge for the animal)
    • # 1: Rs measured its responses to a plastic cap oriented in the “open,” nest-like position
    • # 2: then compared them to responses to the same object flipped over into the “closed,” non-nest-like position
    • Results: cell fired sig in the 1st condition but not in the 2nd one
    • IOW: it is sensitive to the defining fx properties of nests
  • So, a functionality-based conceptualization of nests is implemented at the lv of single cells in the ATLs of mice
    • These responses were only observed in a tiny % of the cells studied
    • This supports the view that the dev of such tuning characteristics is specialized and helps animals discriminate b/w objects that do/do not fit the criteria
11
Q
A

Evidence from semantic dementia

  • Semantic dementia (SD): neurodegenerative disease, variant of primary progressive aphasia; conceptual k gradually deteriorates
  • Ppl struggle w/ all verbal and nonverbal tasks that require them to retrieve and process object concepts
  • They do poorly when asked to name pics, match words w/ pics, verify if words refer to pics, sort words and objects according to similarity, demonstrate the proper use of objects, and recog objects based on visual, auditory, somatosensory, or gustatory/olfactory features
  • Despite these impairments, patients do well on independent tests of basic perception, autobiographical memory, WM, problem solving, and attention until late in the course of the disease
  • Atrophy in SD is striking
  • It targets the ATLs bilaterally, although w/ a left bias
  • As disease progresses, there is more tissue loss and hypometabolism in these structures (esp in ventral and lateral parts)
  • Per the Hub and Spoke model, the amodal hub is disrupted 1st -> then the visual spoke malfx as atrophy spreads into posterior parts of the inferior and middle temporal gyri
  • SD case: Patient EK
    • Cortical atrophy and conceptual disturbances were tracked for 3 yrs
    • 60 yo right-handed woman, part-time cook and cleaner
    • Worsening word finding problems over 5 yrs
    • EK’s pattern and degree of tissue loss and behavioral performance on a battery of standardized semantic tasks were assessed annually 3 times (t1, t2, t3)
  • The neuroimaging results
    • Distribution of tissue loss was similar in the LH and RH, more severe in the left
    • T1: atrophy is restricted to the ATLs, affecting temporal pole, ventral surface, anterior fusiform gyrus and anterior parahippocampal gyrus
    • T2: further progression of the atrophy observed at T1
      • Some extension posteriorly into the inferior and mid temporal gyri (LH bias)
    • T3: EK’s tissue loss was most severe in the ATLs, and it spread further into other parts of the temporal lobes
  • 4 tasks
    • Task 1: object naming (control mean = 98%)
    • Task 2: word-picture matching – hear a spoken word and match it w/ the correct picture in the 4-item array of the target (ex. horse), w/in-domain distractor (ex. lion), and two cross-domain distractors (ex. apple and a car)
      • Control mean = 100%
    • Task 3: category fluency – given a category label (ex. animals), produce the names of as many members of the category as possible w/in 1 minute
      • Control mean = 17 items
    • Task 4: property verification
      • Give Y/N responses to qs about the features of common objects
      • Some features are shared by many types of objects in the domain (ex. does a camel hv legs?); others are distinctive (ex. does a camel hv a hump?)
      • Control mean = 97%
  • Results
    • Performance in all 4 tasks declined over time as her cortical atrophy progressed
    • T1: tissue loss was confined to ATLs
      • Had semantic deficits
      • Object naming task: 20%
        • Superordinate errors = 17% (saying animal instead of horse)
        • Coordinate errors = 19% (say dog instead of cat)
        • Most errors = idk responses
      • Word picture matching task: 89%
        • Below normal
      • Category fluency task: 7 items only
      • Property verification task: 72%
        • Struggled more w/ judgements about distinctive features than common features of objects
        • This pattern is common among SD patients
    • T2: tissue loss extended to MTG -> performance in all 4 tasks were worse
    • T3: atrophy spread even further
      • Word-picture matching task = stable performance
      • Category fluency task = worse
      • can’t complete the object naming and property verification tasks
        • Ex. refused to answer the 1st qs: does an apple hv a handle; stated apple is smth you put food into
    • This supports the Hub and Spoke Model: ATL’s critical role in processing object concepts
  • Lambon Ralph et al. 2010
    • Examined whether, when the ATL hub is damaged, performance is dominated by modality-specific surface similarities and less reflective of higher-order semantic structure
    • Control ppl and 6 SD patients were given matching-to-sample task
    • On each trial, the subjects were presented w/ a word and an array of 9 pictures
    • Their task was to indicate which pictures showed objects that belonged to the category specified by the word
    • The subjects were told that there’s always more than 1 target in the array
    • The experiment was set up so the # of targets per array varied b/w 2 and 3
    • The study had targets and distractors to allow the rs to pit surface similarities against category membership
      • Typical targets (ex. standard cat)
      • Atypical targets (ex. hairless cat)
      • Unrelated distractors (ex. train)
      • Partially related distractor (ex. otter)
      • Pseudo-typical distractors (ex. chihuahua) – similar to the targets but did not belong to the category
    • Given this design, the rs expected the SD patients to commit 2 main types of errors
      • Undergeneralization: fail to pick atypical targets
      • Overgeneralization: incorrectly picked pseudo-typical targets
    • Results support predictions
    • It confirms the claim that the ATLs implement an integrative semantic system posited by the Hub and Spoke Model
  • Follow up study
    • Used similar methods but used words instead of pics in the choice arrays
    • Ex. cat
      • An assortment of animals is spatially organized by approximate visual similarity
      • This may reflect the way they are represented in the shape-sensitive lateral portion of the mid-fusiform gyrus (ex. grounded cog model)
      • To identify all the cats in this modality-specific representational space, we need to draw a boundary that includes the typical and atypical items, and exclude the unrelated items and superficially related items
      • This is one of the main fx of the ATL hub
      • When the hub is damaged (ex. in SD), the precise configuration of the boundary is blurry (Fig B)
      • So it is possible to recognize typical members of the category
        • Atypical members may be incorrectly excluded (undergeneralization) and superficially related items are incorrectly included (overgeneralization)
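The under/overgeneralization idea can be sketched as a toy simulation (my own illustration, not the study's actual model): an intact hub classifies by semantic category membership, while a damaged hub falls back on surface visual similarity to the category prototype.

```python
# Toy sketch of the blurred-boundary account of SD errors. Item names come
# from the study's design; the similarity scores are invented.
ITEMS = [
    # (name, visual similarity to the "cat" prototype, truly a cat?)
    ("typical cat",  0.95, True),
    ("hairless cat", 0.40, True),   # atypical target
    ("chihuahua",    0.85, False),  # pseudo-typical distractor
    ("otter",        0.55, False),  # partially related distractor
    ("train",        0.05, False),  # unrelated distractor
]

def intact_hub(item):
    """Intact ATL hub: membership follows semantic structure."""
    _, _, is_cat = item
    return is_cat

def damaged_hub(item, threshold=0.7):
    """Damaged hub: only surface (visual) similarity guides the response."""
    _, visual_sim, _ = item
    return visual_sim >= threshold

for item in ITEMS:
    name = item[0]
    correct = intact_hub(item)      # ground truth via semantic knowledge
    picked = damaged_hub(item)      # SD-like response
    if correct and not picked:
        print(f"undergeneralization: missed {name}")
    elif not correct and picked:
        print(f"overgeneralization: picked {name}")
```

Running this prints an undergeneralization error for the hairless cat and an overgeneralization error for the chihuahua — the two error types the rs predicted.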
12
Q
A

Evidence from fMRI and TMS

  • Many rs think findings from SD are strong evidence showing ATLs are essential nodes in the neural architecture of object concepts
  • 2 limitations of these findings
    • # 1: SD is a progressive nrodegenerative disease
      • Even in early stage patients (atrophy is confined to ATLs), the observed semantic deficits may be due to subthreshold damage (damage cannot be detected by current tech) in areas outside of ATLs
    • # 2: SD affects many diff sectors of the ATLs, so it is not feasible to infer from nropsych studies of SD patients whether certain sectors of the ATLs contribute more to conceptual k than others
  • Examine fMRI, rTMS studies
  • Visser et al 2010
    • fMRI weakness: BOLD signals are weakened/distorted near air-filled sinuses (incl areas near the ATLs)
    • Advances in fMRI -> can correct for signal loss
    • Showed sig semantically driven activity in the ATLs
    • Methods:
      • Semantic condition
        • # 1: Ppl first read 3 words denoting objects in a particular domain
        • # 2: then they decided if a 4th word (in upper case font) denoted an object in the same domain or in a different domain
          • Yes response = taxi-boat-bicycle-AIRPLANE
          • No response = taxi-boat-bicycle-SPOON
      • Baseline condition
        • # 1: ppl saw 3 strings of a particular letter
        • # 2: decide if the 4th string (in upper case font) showed the same letter or not
          • Yes response: rrrr-rrr-rrrrr-RRR
          • No response: rrr-rrrr-rrrrr-DDD
    • Rs contrasted the semantic against the baseline condition
    • Results: sig activity in the ATLs (LH bias)
      • Predominantly ventral, centered in the anterior fusiform gyrus, extended rostrally and medially
      • It suggests that portions of the ATLs may be especially important for object concepts
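The semantic vs. baseline trial logic above can be sketched as follows (the domain labels and the tiny word list are my own illustration):

```python
# Hedged sketch of the two trial types in Visser et al's design.
DOMAIN = {"taxi": "vehicle", "boat": "vehicle", "bicycle": "vehicle",
          "airplane": "vehicle", "spoon": "utensil"}

def semantic_trial(triad, probe):
    """Yes iff the 4th (upper-case) word names an object in the triad's domain."""
    return DOMAIN[probe.lower()] == DOMAIN[triad[0]]

def baseline_trial(strings, probe):
    """Yes iff the 4th string repeats the same letter as the triad."""
    return probe.lower()[0] == strings[0][0]

print(semantic_trial(("taxi", "boat", "bicycle"), "AIRPLANE"))  # yes trial
print(baseline_trial(("rrr", "rrrr", "rrrrr"), "DDD"))          # no trial
```

The fMRI contrast then subtracts activity in baseline trials (letter matching) from activity in semantic trials (domain matching), isolating the semantic component.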
  • Binney et al 2010
    • Corrected for signal loss in fMRI
    • Examine the contribution of ATLs to semantic processing
    • Semantic condition: make synonym judgements
      • Ppl decide which of 3 choice words (ex. scoundrel, polka, gasket) was most similar in meaning to the probe (ex. rogue)
      • All words were matched for imageability and freq
    • Baseline task: involved #s
      • On each trial, ppl decided which of 3 choice #s was closest in value to a probe #
    • Contrasted the semantic w/ baseline condition -> found sig activity in some of the same sectors of ATLs
      • Ex. anterior fusiform gyrus and anterior part of inferior temporal gyrus
      • -> Fig 10.6 A,B
    • Activity here is more LH lateralized b/c the semantic task relied heavily on lexical relations
  • Jefferies et al 2009
    • fMRI study w/ SD patients
    • used the same semantic and baseline tasks
    • SD patients: bad at synonym judgement task
    • -> Fig 10.16D
  • Binney et al 2010
    • Examined if the specific ATL regions activated in the fMRI study fell w/in the large ATL territory that is affected in SD
    • Used “region of interest” analysis: used the map of tissue loss in SD
    • Found that 2 cortical areas (L anterior fusiform gyrus and L anterior inferior temporal gyrus) showed the most sig activity when healthy ppl performed the synonym judgement task, and the most atrophy in SD patients
    • This fMRI data support SD data
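The "region of interest" overlap logic can be sketched with invented voxel coordinates: represent each map as a set of voxels and intersect them, mirroring how the activation map was restricted to the territory atrophied in SD.

```python
# Hypothetical sketch (coordinates invented): intersect an fMRI activation
# map with an SD tissue-loss map to find the shared region of interest.
activation = {(0, 0), (0, 1), (1, 0), (1, 1)}            # synonym-task activity
atrophy    = {(1, 0), (1, 1), (2, 0), (2, 1), (2, 2)}    # SD tissue loss

roi_overlap = activation & atrophy   # voxels both active and atrophied
print(sorted(roi_overlap))
```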
  • rTMS data
    • we can’t stimulate anterior fusiform gyrus b/c it is on the ventral surface of the temporal cortex
    • too far from the scalp
    • we can stimulate the inferolateral ATL region that comprises the anterior parts of the inferior and middle temporal gyri
    • The region was targeted in 2 studies that were designed to determine if temporarily disrupting the region’s functionality in healthy ppl would delay their response on synonym judgement task
    • Results: supported predictions
      • Ppl’s RTs were slower on the synonym judgement task, not on the # judgement task, when rTMS was applied to the target region in the LH compared to no rTMS applied there
      • rTMS results align w/ fMRI
  • Homologous inferolateral ATL region in RH
    • In fMRI studies of SD, it tends to be dysfx too (not sig tho); this may support that it cooperates w/ its LH twin to implement the semantic hub
    • Lambon Ralph et al 2009
      • Applied rTMS to target region in LH and RH while subjects performed synonym judgement task and # judgement task
      • Same outcome regardless of hemisphere
      • Interfering w/ the operation of LH or RH sig increased RT on lexical task but not # task
      • This support the key claims of Hub and Spoke Model: object concepts depend on ATLs bilaterally
    • Limitation
      • Pobric et al 2007 & Lambon Ralph et al 2009 stimulate the target sites cont for 10 min prior to task performance
      • Such stimulation produces behavioral effects that last for several min after the rTMS train has concluded
      • It is unknown whether these behavioral effects are due to nrophysio changes near the site of stimulation, remote from it, or both

Summary

  • Hub and Spoke Model
    • Object concepts that are encoded by concrete nouns are subserved by modality-specific brain systems for perception and action (the spokes) but also by an amodal integrative system that resides in the ATLs bilaterally (the hub)
    • The hub has several fx:
      • Binds together the anatomically distributed modality-specific features that constitute the main content of object concepts
      • Organizes those features so it is possible to distinguish b/w entities that fall w/in the scope of a given concept and entities that fall outside it
    • Evidence the semantic hub is underpinned by ATLs bilaterally
      • SD patients show progressive dissolution of object concepts that is linked w/ gradual atrophy of ATLs
      • PET and distortion-corrected fMRI studies show that the ATLs are activated when healthy ppl process object concepts
      • rTMS: temp disrupting the ATLs in healthy ppl reduces their capacity to process object concepts
      • distortion-corrected fMRI and rTMS studies show the semantic hub may not depend equally on all parts of the ATLs; it relies esp on 2 sectors
        • anterior fusiform gyrus, inferolateral cortex (incl anterior parts of inferior and middle temporal gyri)
13
Q
A

Domains of object concepts

  • object concepts encoded by concrete nouns are usually grouped together to form hierarchies
  • Ex. golden retrievers belong to dogs, dogs belong to animals, animals belong to living things, etc
  • How are these categories implemented in the brain?
  • Warrington et al
    • Described patients w/ semantic disorders that affect certain categories of object concepts more than others
    • Selective semantic disorders/ category specific deficits
  • Common dissociation: impaired k of living things (esp animals, fruits/veggies) in the context of preserved k of non-living things (esp tools, artifacts)
  • Opposite dissociation also reported
    • 42 patients w/ category specific deficits on living things
    • 34 … on non-living things
  • The performance of some patients is influenced by other variables (ex. visual complexity of pics, familiarity of concepts, freq of words); in some studies these are controlled
  • 3 major domains of selective semantic impairment
    • Animal concepts, fruit/vegetable concepts, tool concepts
14
Q
A

Box 10.3 the influences of gender and culture on concepts for animals and fruits/vegetables

  • Ppl differ in their familiarity w/ specific kinds of animals, fruits, veggies
  • Need to know if these diff are large enough to sig modulate the patterns of category-specific semantic disorders that hv been documented
  • 2 factors we need to look at – gender & culture
  • Gender
  • Gainotti 2010
    • 80% of patients w/ prevalent impairment of animal concepts were women
    • 95% of patients w/ a prevalent impairment fruit/vegetable concepts were men
    • These striking gender diffs cannot explain all of the data, b/c nroanatomical diffs in lesion sites also influence the deficits
    • Possibility is that differential gender-related vulnerabilities to category-specific deficits are due to gender-related social roles
      • Men are more familiar w/ animals b/c they are more likely to hunt
      • Women are more familiar w/ fruits/veggies b/c they are more likely to cook
      • So, there may be a male advantage for animal k and female advantage for plant k, b/c of evolution
        • Men contributed more to hunting; women to gathering
  • Culture
    • Ppl living in post-industrial societies have “nature-deficit syndrome”
    • Impoverish understanding of the natural world
    • The folk-biological k exhibited by modern vs ancestral agricultural societies
      • 50 vs 500 -> large discrepancy
      • This shows that most of the patients observed already have “nature-deficit syndrome”
      • Category-specific deficits may hv manifested differently in earlier societies
15
Q
A

3 major domains of selective semantic impairment

Animal concepts

  • Category-specific deficit involves living things -> smaller domains
    • Animate (animals) vs inanimate (plants)
    • Some patients manifest semantic disorders that selectively or disproportionately affect 1 or the other of these 2 domains
  • Examine patients w/ impairments that affect animal concepts, then plants
  • Most patients w/ semantic disorders on the animal domain have lesions in mid to anterior ventral and medial temporal regions (LH bias)
  • Causes:
    • Stroke
    • Most have herpes simplex encephalitis (HSE) infection
      • Viral infection that rapidly destroys portion of the temporal lobes bilaterally, including medial sections of the ATLs
      • Some patients have worse k of animals than other conceptual domains
    • SD patients: impaired K in both conceptual domains (living and non-living)
      • Reason
        • Rapid necrosis in HSE distorts conceptual representations -> category-specific deficits
        • Gradual atrophy in SD dims conceptual representations -> across the board deficits
  • Blundo et al 2006 – case study
    • KC woman, right-handed
    • MRI showed damage to anterior ventral and medial temporal lobes bilaterally (LH bias)
    • Diagnosis: HSE
    • Struggled w/ animal items on standardized verbal/nonverbal tasks
    • Picture-naming task: provide appropriate terms for 260 line drawings
      • She successfully named 93% of fruits/veggies, 92% artifacts, 50% of animals
        • Not due to familiarity effects b/c she couldn’t even name highly familiar animals (cat/pig)
      • Generated semantically related naming response for 4 items
        • 1: called ant a fly
        • 2: called the eagle a parrot
        • 3: called the fox a wolf
        • 4: called pig a hippo
    • Oral definition task
      • Rs probed KC’s conceptual k by asking “What is X? Please describe it, including info on size and structure”
        • 102 objects (50% animals, 50% not)
        • KC could indicate the superordinate category of all objects
        • Good definition for non-animal objects
        • Adequate definitions for only 17 animals
          • Ex. mouse – 4 legs, 1m tall and 1m long
    • Drawing from memory task
      • For the same 102 objects, she could draw all non-animal items, but can only draw 17/51 animals
    • Decision test for visual features
      • 76 items (50/50 animal/nonanimals)
      • Asked Y/N qs about if the object has a certain visual attribute
      • Ex. does a fly have wings?
      • Perfect answers for non-animals
      • 50% correct for animals
    • -> Impaired at retrieving conceptual k of shapes of animals
    • Also impaired at retrieving conceptual k about their colors and sounds
    • Impaired for association/functional features
      • Task: rs gave definition consisting of associative/fx feature, asked her to provide the corresponding name
        • Ex. it’s an animal, it’s a bug that stings, it sucks nectar from flowers and produces honey (Ans: bee)
      • Result: 80% correct for nonanimals; 0% for animals
    • Semantic judgements about animals
      • KC was asked about the animals’ habitat, ferocity, edibility
        • Performed poorly on all 3 features
  • KC case study – shows how conceptual domain of animals can be selectively disrupted
    • Her category-specific deficit was displayed for diff kinds of stimuli (verbal/nonverbal), diff kinds of semantic features (shape, color, sound, associative/fx)
    • Impairment was due to lesions to the anterior ventral and medial temporal lobes bilaterally

Fruit/vegetables concepts

  • Similarities vs diff b/w semantic disorders on living things vs non-living things (ex. fruits, veggies)
  • Chief similarity: both kinds of patients have damage to mid-to-anterior ventral and medial temporal regions
  • Chief diff: laterality and intra-hemispheric localization
    • Laterality:
      • patients w/ impairment of animal concepts -> bilateral
      • patients w/ impairment of non-animal concepts -> unilateral (LH bias)
    • intra-hemispheric localization
      • patients w/ impairment of animal concepts -> anterior temporal areas
      • patients w/ impairment of non-animal concepts -> posterior areas (ex. mid fusiform gyrus)
  • Samson and Pillon 2003
    • Patient RS – impaired conceptual k of fruits/veggies
    • Engineer -> stroke in left posterior cerebral artery
      • Include ventral and medial areas (fusiform, parahippocampal, hippocampal gyri)
      • Medial occipital areas, part of thalamus
    • had primary language deficits in reading and oral word retrieval
    • RS’s scores on semantic tasks
      • Good at naming pics of non-living things; poor at living things (worse for fruits/veggies than animals, even though the animal pics are visually more complex)
      • name objects in response to verbal descriptions -> same pattern
      • Word-pic matching -> same pattern
    • -> shows a category-specific deficit for fruit/veggie concepts can develop due to a left posterior cerebral artery infarct

Tool concepts

  • Patients w/ impairment of tool concepts don’t have lesions in the ventral and medial temporal lobes
  • They have them in the posterior lateral temporal region (pMTG), inferior parietal region (aIPS/SMG) and or inferior frontal region (vPMC); LH bias
  • Warrington and McCarthy 1987
    • Patient YOT
    • Stroke damaged the left temporoparietal region
    • Can’t produce and comprehend propositional speech, can’t process written language
    • Can partially understand single words (spoken/printed)
    • 3 tasks
    • Task 1: for each item, match the spoken word w/ the correct pic w/ 5 choices
      • 3 categories of objects: animals, fruits/veggies, artifacts
      • Each array of pics showed objects belonging to the same category
      • Same task was administered in 2 diff sessions; the response-stimulus interval (RSI) was 2s in one and 5s in the other
        • RSI: amount of time b/w the patient’s response to 1 item and the examiner’s presentation of the next item
      • When RSI = 2s
        • YOT did worse on artifacts (63%) than animals (85%)
      • When RSI = 5s
        • Artifacts = 90% (effect disappeared)
      • Rs interpret this as YOT struggling to access semantic info rather than having an absolute loss of semantic reps
    • Task 2:
      • Spoken-word/pic matching
      • Categories: fruits/veggies
      • 2 subclasses of artifacts
        • Large non-manipulable ones (non-tools)
        • Small manipulable ones (tools)
      • Results: did well on fruits/veggies (85%)
        • Non tools: 80%
        • Tools: 60%
      • So, YOT’s difficulty in retrieving conceptual k may not apply to the entire domain of artifacts, maybe just to tools
    • Task 3: match spoken word to written word (6 choices)
      • Semantic classes: animals, fruits/veggies, buildings, vehicles, kitchen utensils, office supplies, furniture, body parts
      • Written words belong to the same class
      • Done in 2 diff times
      • Results: good at living things (animals and fruits)
      • Declined for outdoor artifacts (buildings and vehicles)
      • Worse for small indoor artifacts (kitchen utensils, office supplies, furniture)
      • Worst on body parts
      • -> retrieval deficit was worse for tool-like objects
  • Summary: YOT has a disorder in which the semantic rep of tools (and tool-like objects like body parts) are harder to access than other concepts
  • Lesion in left temporoparietal region
16
Q
A

Explanatory approaches

  • How can category-specific deficits be explained via the grounded cog model and the Hub and Spoke Model?
  • Both theories assume that most of the features of object concepts are neurally implemented in widely distributed modality-specific systems for perception and action
  • Diff b/w 2 model: Hub and Spoke model also assumes that these features are bound together and organized by an integrative system in the ventral and inferolateral parts of ATLs bilaterally
  • The semantic hub is believed to be domain-general
    • Disrupting this would not be expected to -> category specific deficits
    • Evidence 1: SD patients always show pervasive semantic impairments that affect concepts for living and non-living things equally
    • Evidence 2: when rTMS is applied to inferolateral part of ATLs in healthy ppl, this induces milder across the board semantic impairments
  • So, rs suspect that the explanation for category-specific deficits lies in the spokes that radiate out from the hub
    • -> modality specific systems for perception and action
  • Differential weighting hypothesis
    • Warrington
    • Suggests that different domains of object concepts are characterized by different mixtures and “weightings” of modality specific features, and this causes them to gravitate (over the course of cog dev) to diff networks of brain regions
    • So, disrupting a particular region that is functionally more important for one conceptual domain than for others may generate a category-specific deficit
    • Ex: different domains of object concepts (ex. animals, fruits/veggies, tools) vary on how much they depend on certain sensory and motor channels of info
      • Evidence: studies in which normal ppl rated the relative importance of certain types of features for certain types of concepts
    • Animal concepts
      • Visual properties enter into most LT object representations; shape features are esp critical for animal concepts
      • This is b/c many animals have similar forms and require fine-grained shape analysis to be identified
        • Detailed shape analysis is known to rely on mid-to-anterior portions of the ventral and medial temporal lobes
        • Damage here -> category-specific deficits for animal concepts
    • Fruit/veggie concepts
      • Shape, color, gustatory/olfactory features are important here
      • Most patients w/ selectively/disproportionately impaired k of fruit/veg have ventral temporal lesions
    • Tool concepts
      • Visual motion patterns are major features (Ex. oscillations of hammer)
      • Motor programs that specify how such objects should be manipulated to carry out their fx are also important
      • So semantic disorders on tools are often associated w/ damage to the left pMTG, aIPS/SMG, and or vPMC
    • Limitations of this hypothesis
      • # 1: If this were true, we predict patients w/ a deficit for a particular conceptual domain will have worse K of features that are weighted heavily for that domain than of features that are not
        • Many case studies -> this is not true
        • Ex. Blundo et al 2006
          • patient KC have a category-specific deficit for animal concepts
          • Based on this H, since animal concepts depend more on visual shape features than on associative/fx features, KC’s k of animals’ visual features should be more disrupted than her k of their associative/fx features
          • Not true
          • Both were equally impaired
          • Also found in other patients
        • So, semantic problems cannot be reduced completely to visual disturbances
      • # 2: if a sensory or motor channel is key for a particular conceptual category, it should be impaired in patients w/ a deficit for that category
        • -> not true
        • Samson and Pillon 2003
          • Patient RS – category-specific deficit for fruit/veggie concepts
          • Since color info is crucial for these concepts, based on the hypothesis, you would anticipate RS’s color k to be impaired; but it was spared
      • # 3: If sensory/motor channel is important for a conceptual category, damage to it -> deficit for that category
        • -> not true
        • Ex. some apraxic patients who cannot use tools correctly can still name them correctly
        • Even though manipulation k is important in tool concepts it does not always need to be retrieved to process those concepts
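Setting those limitations aside, the core logic of the differential weighting hypothesis can be sketched numerically (the feature channels and weights below are invented for illustration, not empirical ratings):

```python
# Toy sketch of the differential weighting hypothesis: each conceptual domain
# depends on modality-specific feature channels to different degrees, so
# "lesioning" a channel hurts most the domain that weights it most heavily.
WEIGHTS = {
    "animals":    {"shape": 0.6, "color": 0.2, "taste": 0.1, "motion": 0.1, "motor": 0.0},
    "fruits/veg": {"shape": 0.3, "color": 0.3, "taste": 0.3, "motion": 0.0, "motor": 0.1},
    "tools":      {"shape": 0.2, "color": 0.0, "taste": 0.0, "motion": 0.4, "motor": 0.4},
}

def impairment(domain, lesioned_channels):
    """Fraction of a domain's feature support lost when channels are lesioned."""
    return sum(WEIGHTS[domain][ch] for ch in lesioned_channels)

# A lesion to the motion + motor channels (roughly the pMTG / aIPS / vPMC
# territory) should hit tool concepts hardest:
for domain in WEIGHTS:
    print(domain, impairment(domain, {"motion", "motor"}))
```

This prints the highest impairment score for tools, a moderate one for fruits/veg, and the lowest for animals — a category-specific deficit emerging purely from the weightings.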
  • Distributed domain-specific hypothesis
    • Caramazza and Shelton
    • 2 claims
    • Claim 1: major factor driving the neural architecture of object concepts is domain
      • Domain-based constraints are innately programmed to apply to the 3 domains w/ the most evolutionarily relevant histories for humans
      • (i.e. animals – predators/prey; fruits/ veggies – food/medicine; tools used to transform env)
    • Claim 2: the factor of domain shapes the neural architecture of object concepts at 2 separate lv of representation
      • At level of widely distributed modality-specific systems for perception and action
      • And at an abstract lv that is for conceptual k
      • Unlike the semantic hub in the Hub and Spoke Model
        • This is partitioned according to category
        • This allows for the possibility of category-specific deficits
    • Evidence supports the claim that the neural networks for acquiring certain domains of object concepts hv a genetic foundation derived from natural selection
      • Farah & Rabinowitz 2003
        • Patient Adam
        • Had bilateral occipital and occipitotemporal lesions
        • K of object concepts: showed dissociation w/ sig impaired understanding of living things (animals, fruits/vegetables) but normal in non-living things (esp tools)
        • This deficit was manifested for perceptual and nonperceptual features of living things
        • This is not just a visual problem
          • Distinct Neural substrates for living and non-living things
  • Distributed domain specific H is formulated so it can explain why many patients w/ category specific deficits (KC, Adam, etc) have impaired K of perceptual and non-perceptual features of concepts in the affected domains
  • The disorders are assumed to happen at a semantic lv of rep that is more abstract than the modality-specific systems and is partitioned in category-specific ways
  • This account is internally coherent, and takes category-specific deficits at face value (as reflections of genuinely category-specific assortments of object concepts in the brain)
  • Limitation
    • There is not much evidence for a lv of representation that has the proposed properties (abstractness and category-based divisions)
    • If this level exists, it would be a third rep system

Summary

  • SD patients show comprehensive impairments for all kinds of object concepts
  • Other patients show selective/disproportionate impairments that affect certain domains of object concepts
  • Ability to produce and comprehend words is severely disrupted for concrete nouns that happen to encode the affected concepts
  • Category specific deficits are reported for 3 major conceptual domains: animals, veggies/fruits, and tools
  • Patients w/ impairment to animal concepts hv bilateral damage to ventral and medial sectors of the mid-to-anterior temporal lobes
  • Patients w/ a prevalent impairment to fruit/veggies have unilateral LH damage to relatively more posterior areas (ex mid fusiform gyrus)
  • Patients w/ impairment to tool concepts hv unilateral LH damage to posterior lateral temporal region (pMTG), inferior parietal region (aIPS/SMG) and inferior frontal region (vPMC)
  • 2 attempts to explain these findings
    • # 1: differential weighting H: diff conceptual domains have diff patterns of regionalization in the brain b/c they depend on diff combinations and rankings of modality specific sensory and motor features
      • A particular type of category-specific deficit can result from a lesion that primarily affects the type of modality-specific info that is weighted most heavily for a given category
    • # 2: Distributed domain-specific H: diff conceptual domains are innately programmed to have segregated neural implementations at the modality-specific system for perception and action; and at a more abstract lv of pure semantic structure
      • A particular type of category-specific deficit will result from damage to the corresponding category-specific components of the system that is restricted to representing conceptual knowledge
17
Q

Lec

A

Science fiction to cog nro (rewatch the video, think about the themes)

  • fMRI
  • Shown ppl pictures/words (knife -> hammer -> window -> house)
  • Ppl think about this stimuli
  • Use computer algorithm to guess word/concepts was being evoked
  • 100% accuracy – thought identification/ mind reading
    • Based on the brain activation patterns, the algorithm got all 10 items right
  • Intentions/decisions (add/subtract)
    • No Lie MRI
    • Navigation in a VR game
    • Looking at brain activity, rs can identify who has been to an area
  • Use it for criminals
  • fMRI -> guess what you want to buy
18
Q
A

How does algorithm work?

Mitchell et al 2008 Science

  • Model was run in 2 stages
  • Stage 1:
    • 20 ppl completed the experiment
    • # 1: Examined the neural activity evoked after seeing a set of words & indiv diff in brain activation for objects
      • Ex. we have diff experiences w/ hammers
      • Ex. but we have similar understanding on what it looks like, how heavy it is, etc
    • # 2: Used these results to create templates (i.e. averaged activation patterns across ppl) that instruct the algorithm
      • NOTE: you can match templates based on indiv similarities (ex. demographics)
    • # 3: Person 21 comes in -> scan their brain activation and see how accurately it fits the patterns on the template
    • Main issue: you need to scan every word in the dictionary
  • Stage 2:
    • Rs try to predict neural activity using verbs (Ex. eat, taste, fill)
    • # 1: Rs came up w/ 60 verbs that describe what you can do w/ the object
    • # 2: Derive reps of what the neural activity for each of the 60 verbs looks like
    • # 3: Show P1 a noun
      • ex. celery
        • more weighting on the verb “eat”
        • less weighting w/ turn (you can’t turn celery)
      • Ex. keys
        • More weighting w/ turn, less w/ eat
        • IOW: Since you can turn keys, there’s a heavier weighting on the verb “turn” for keys
    • # 4: distill/boil down the concept (noun, ex. celery) to a pattern of weights across these 60 verbs
    • # 5: Combine the neural activity from 60 different verbs to predict the neural activity for a concept (ex. celery)
  • Future studies: Analyze a body of text and see how often “eat” co-occurs w/ “celery”
  • IOW: you can train your model ahead of time using the words on Wikipedia -> make neural activity predictions for all of the words relative to the verbs they co-occur w/
  • Generative model
    • Trained the model using 58 words
    • Present 2 new words to the model
    • See what happens
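The Stage 2 prediction step can be sketched in a few lines. Everything here is hypothetical (the verb set, the learned signatures, and the feature values are made up); it only illustrates the idea that a noun's neural activity is predicted as a weighted combination of verb signatures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 sensorimotor verbs, 10 voxels.
verbs = ["eat", "taste", "turn", "hit", "see"]
n_voxels = 10

# Learned in training: each verb's contribution to each voxel's activity
# (in the real study, weights like these are fit from scanned nouns).
verb_signatures = rng.normal(size=(len(verbs), n_voxels))

# Corpus-derived feature vector for a new noun: how strongly it
# co-occurs w/ each verb ("celery" heavy on "eat", zero on "turn").
celery_features = np.array([0.9, 0.7, 0.0, 0.0, 0.4])  # made-up values

# Generative step: predicted activity = weighted sum of verb signatures.
predicted_activity = celery_features @ verb_signatures
print(predicted_activity.shape)  # (10,) — one predicted value per voxel
```

Because the features come from text co-occurrence, the model can predict activity for nouns it was never trained on, which is what makes it generative.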
19
Q
A

Algorithm is generative

  • Can predict new concepts it has never seen before (although the initial versions are most focused on concrete concepts).
  • Why focus on concrete words specifically?

Embodied cognition

  • Explains why they started w/ concrete objects
  • Ex. When you think about a screwdriver, many brain areas are activated as it is associated w/ diff sensations
    • Ex. what it looks like, how heavy it is, etc
  • Embodied cognition: representations of concepts you are familiar w/ are tied to the sensorimotor experiences you have w/ that object
    • Ex. celery & “eat”
      • high weighting and relevance
      • More gustatory cortex activity
    • Ex. hammer & eat
      • low weighting and relevance
      • Less gustatory cortex activity
    • Ex. hammer & hit-> more motor cortex activity
    • Ex. celery & see; hammer & see
      • -> both have visual cortex activity
  • IOW: when you represent a concept, there is brain activity across the brain
    • This represents the various sensorimotor info that is activated
  • Ex. “banana”
    • Activates color, tactile, smell, manipulation
    • No activation in auditory
  • There is strong activation for particular categories of knowledge
    • Prev YT vid: rs picked categories that are salient, so they can identify/discriminate in their MRI study
    • Ex. show igloos, hammers, houses
  • There are some regions important to animals, tools, etc
  • Why tools, animals, fruits/veggies?
    • Reason 1: strong visual (identify animals)/action (use tools) representation
    • Reason 2: evolution
      • To survive, hunt animals, spot ripe fruits/veggies, use tools
20
Q
A

An example from one modality (similar findings in other modalities) -> see TB

  • Ppl used neural network simulations to understand how and why the brain organizes info in a specific fashion
  • Hub and spoke model – a neural network simulation
    • We have spokes = representations for sensory motor cortex that code for particular concepts, and correspond to diff brain regions
    • These brain regions send connections to the anterior temporal lobe
      • Anterior temporal lobe: important for coding/representing semantic info
        • Located at the front (pole) of the temporal lobe
        • Receives connections from the modality-specific regions
        • It’s like a hub that connects all the spokes
        • IOW: if you want to translate/connect info from 1 modality to another modality, you need to pass through the anterior temporal lobe most of the time
        • Ex. If I see a banana (visual), I want to peel a banana (action)
        • Ex. If I hear barking (auditory), I think about what a dog looks like (visual)
        • Ex. If I hear the word dog (auditory), I think about dog in other modalities
        • Input from a stimulus -> spreading activation of the entire concept
    • YT video:
      • You learn how this concept is represented in diff modalities
      • if we present the concept via 1 modality -> spreading activation -> full activation of the entire concept (i.e. stable pattern of spreading neural activity)
    • If we see a picture of an igloo/hear the word “igloo”, it activates other representations like the tactile/temperature modality (i.e. igloo is cold)
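A minimal sketch of the hub-and-spoke idea (toy network, made-up weights): the spokes never connect to each other directly, so activation presented to one modality can only reach the others through the hub:

```python
import numpy as np

# Toy hub-and-spoke network: 3 modality "spokes" (visual, action, tactile)
# connect only through a shared hub unit, never directly to each other.
n = 4                             # index 0 = hub, 1-3 = spokes
W = np.zeros((n, n))
W[0, 1:] = W[1:, 0] = 0.8         # bidirectional hub<->spoke connections

def spread(activation, steps=10, decay=0.5):
    """Iteratively spread activation until a stable pattern emerges."""
    a = activation.astype(float)
    for _ in range(steps):
        a = np.clip(decay * a + W @ a, 0.0, 1.0)
    return a

# Present "igloo" visually: only the visual spoke starts active.
start = np.array([0.0, 1.0, 0.0, 0.0])   # [hub, visual, action, tactile]
final = spread(start)
# After spreading, the action and tactile spokes are active too,
# even though the input arrived via vision only.
```

This mirrors the video's point: input from one modality produces spreading activation that settles into a stable pattern covering the entire concept.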

Relationship of brainprints

  • We can detect the same word in different brains. Can we detect different brains from the same words?
  • IOW: Can I give a group of undergrads the same set of words -> find unique signatures from their brains that let me uniquely identify those ppl?
    • Will this persist over time?
21
Q
A

“Brainprints:” Identifying Individuals Based on their Brain’s Electrical Activity while Reading

  • Armstrong et al., 2015, Neurocomputing; Subsequently selected for 2015 Special Issue Celebrating the Breadth of Biometrics Research
  • Possible application: brainprint biometric
    • IOW: opposite of prev study – instead of looking at similarities, we looked at differences in neural activity when they see the same concept
  • Why develop a brain-based biometric?
    • Unlike keys, a brain is hard to lose or forget
    • Safe for the user, and for the system
      • Ex. In movies you see ppl cut off someone’s finger or eye to unlock systems
      • For brains, you can’t remove one from the skull and still use it; a removed brain is dead
  • W/ a brain-based biometric, can you be forced to reveal info?
    • To do a brain scan these days, you need to consent, lie down in the scanner
    • But there are also neural activity markers that show up when someone is super stressed (covert)
      • Ex. application – bank robbery, robbers captured the manager, manager is super stressed -> this sends out a neuromarker to the police w/o the robbers knowing -> deny access to bank vault for robbers
  • Prof’s study
    • Individuals share some general knowledge but also uniquely differ in knowledge of some types of information such as which words they know.
    • Research Questions: Does this actually lead to detectable markers of uniqueness in the brain?
    • Why might this matter?
  • Basic idea
    • “The idea of common knowledge vs. unique knowledge is present in the context of words like acronyms”
      • There’s purple, green, and blue person
      • You want to test their knowledge of some words
        • Ex. acronyms – not really words, but have meanings
        • Dehaene study???
    • Individuals may differ in terms of whether they know an acronym or not…
      • Ex. “DVR”
        • Purple knows this acronym; green and blue ppl do not
    • Or in terms of the knowledge, they associate with the acronym
      • Ex. FBI may stand for different things for a police officer vs politician vs student
    • Study: presented 75 acronyms to ppl
      • No two ppl knew the exact same set of acronyms
        • Ex. person A: knows acronym 1,2,4,5
        • Ex. person B: knows acronym 1,3,4,5
  • Does the brain react differently to these type of items?
  • What do we know already?
    • N400 component = index of meaning
    • N400 reacts differently based on familiarity of the word
    • Ex. you are given 75 acronyms
      • Check off those you know
      • Don’t check off those you don’t know
      • Neural activity for acronyms you know is different from neural activity for acronyms you don’t know
    • “Averaged across individuals, the electricity generated by the brain during reading differs depending on whether a word is known or is unknown”
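A toy simulation of this averaging logic (all signals are simulated; the size and direction of the known/unknown difference here are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical single-trial EEG epochs: (trials, time samples).
# Simulate a negative deflection peaking ~400 ms (an N400-like bump);
# known acronyms get a larger deflection than unknown ones (illustrative).
n_trials, n_samples = 40, 100
t = np.linspace(0, 800, n_samples)                 # time in ms
n400 = -np.exp(-((t - 400) ** 2) / (2 * 50**2))    # negative bump at 400 ms

known = 2.0 * n400 + rng.normal(0, 1, (n_trials, n_samples))
unknown = 0.5 * n400 + rng.normal(0, 1, (n_trials, n_samples))

# Averaging across trials suppresses random noise and reveals the ERP.
erp_known = known.mean(axis=0)
erp_unknown = unknown.mean(axis=0)
diff = erp_known - erp_unknown   # difference wave, largest near 400 ms
```

Single trials are too noisy to separate conditions; the averaged difference wave is what "the brain reacts differently to known vs unknown items" cashes out as.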

Combining the basic idea and what we already know, can we apply this principle to discriminate amongst individuals?

  • Method:
    • 32 ppl read a list of 75 acronyms two times.
    • The neural activity associated with reading each word was recorded.
    • Length of recorded data: ~ 2 minutes
  • The data for each participant were classified using several different classification methods.
  • Several of these methods were based on the computational principles used by neural networks in the brain itself.
    • Input = neural activity
    • Input this to a classifier -> classifier determines which of the 32 ppl produced it
    • Rs used different classifiers
    • The classifier that did well is a “neural network classifier”
      • It simulated pools of artificial neurons, and mapped the input signals to the individual participant
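A sketch of the identification idea, using a simple nearest-template classifier instead of the actual neural network classifier from the study (all data simulated, sizes chosen to match the 32 ppl / 75 acronyms setup):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical: 32 participants, each with a stable "brainprint"
# (a feature vector summarizing ERP responses to the 75 acronyms).
n_people, n_features = 32, 75
true_prints = rng.normal(size=(n_people, n_features))

# Two sessions: the same underlying prints plus session-specific noise.
session1 = true_prints + rng.normal(0, 0.3, true_prints.shape)  # enrollment
session2 = true_prints + rng.normal(0, 0.3, true_prints.shape)  # test

def identify(sample, templates):
    """Return the index of the enrolled brainprint closest to the sample."""
    dists = np.linalg.norm(templates - sample, axis=1)
    return int(np.argmin(dists))

predictions = [identify(s, session1) for s in session2]
accuracy = float(np.mean(np.array(predictions) == np.arange(n_people)))
```

As long as between-person differences are larger than session-to-session noise, session 1 data identify whose brain produced the session 2 data, which is the logic behind the near-perfect classification result.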

Results

  • The classifiers were almost perfect at uniquely identifying individuals based on the neural correlates of reading (!)
    • Method:
      • T1: P1 saw the 75 acronyms
      • Waited for some time
      • T2: P1 saw the list the second time
      • We can use the brain activity data at T1 to identify whose brain produced the activity at T2
    • IOW: indiv diff are generating unique patterns of neural activity when they see the same words

POSSIBLE APPLICATION: BRAINPRINT BIOMETRIC

  • A critical property of a biometric is that it has to be relatively permanent — it must work in a week, in a month, in a year…
    • Biometric: permanent signal
      • Ex. fingerprint is stable around 20 yo

RESULTS OF PERMANENCE TEST

  • Participants were brought back to the lab for a second session (1 mo later) and a third session (6 mo later) after the initial session.
  • Accuracy remained very high
  • This suggests the neuro biomarker is somewhat permanent
  • Ex. we know what a platypus is
    • For most ppl, we thought about it probably more than a year ago, but we still know what it is
  • Characteristic of semantic knowledge/memory: it is fairly stable
  • It makes sense the semantic system can be a biomarker
  • T1 vs T2
    • Neural activity may differ b/c you learnt this knowledge
    • You may need to update the individual’s neural activity model as the person learns/grows
22
Q
A

Ethical issues

  • Ethical issues are minimal
    • Minimal risk: see random letters/words -> press button
    • Higher risk: study patients, children, disabled ppl
  • “Can we detect thoughts? Can we suppress thoughts?” Potentially a two-way street
    • If we are part of a society that doesn’t like certain ideologies and wants to suppress them -> can we train ppl to suppress those thoughts?
  • e.g., can you detect coercion and prevent a login to a bank account? Can you detect a lie?
  • Video - Ethical uses of experimental technologies (much like the first fingerprint cases in court)
    • What is fact? What is personal thought?
    • We can’t force someone to testify against themselves
    • But if we brain scan you, the data tells that you are lying, is this now a fact (like a fingerprint) or a personal thought?
  • Covert/Remote monitoring: technologies such as NIRS
    • NIRS: near-infrared spectroscopy
      • Non-invasive technique, commonly used to study kids’ brains
      • Ex. you can shine near-infrared light at ppl’s foreheads to detect neural activity in a specific cortical area w/o them knowing
        • -> this could be spying, should we use this technology?
  • Applications of technologies to Neuromarketing
    • Ex. FB
      • Your data helps them target ads
    • Ex. I scan your brain -> target ads to engage your brain maximally
      • Should there be limits?
    • At one point, we may know what people want exactly based on their neural activity
    • As we can use cog nro to tell us what words mean and what our thoughts are representing in language -> we need guidelines
23
Q
A

Semantic Deficits

  • Hub and Spokes model:
    • Semantic deficit studies informed us which parts of the brain are specialized for certain categories (ex. tools, fruits/veggies, animals)
  • Category specific semantic deficits
    • Only affect certain categories of words
    • Patient KC vs matched/comparison subjects
      • Examined their ability to respond to
        • Picture naming - animals
          • KC: 50%; normal 90%
        • Picture naming – fruits/veggies
          • KC: 90%; normal 90%
        • This shows KC has a specific impairment for animals only
    • The study didn’t just stop here, b/c picture naming results alone could be interpreted as a modality-specific issue (i.e. KC scores lower on animals b/c the visual -> speech mapping is not working)
    • So rs used other tasks as well
    • Task 2: object definition
      • Hear the word -> say the definition
      • Animals: KC has impairment
      • Non-animals: KC has no impairment
    • MP: no matter what test is run, input/output modality -> there’s an impairment for living things, showing a category specific deficit
    • There are also selective impairments for tools, or fruits/veggies

Evidence for selective impairment across modalities

  • Patient RS: selective impairment for fruits/vegetables
  • Theory: taking out specific tissue would produce category-specific deficits
  • -> but we can’t really test this (can’t deliberately lesion human brains)
24
Q
A

Category Structure (Cree & McRae, 2003)

  • Created semantic feature norms
  • # 1: Give ppl a word (like moose)
  • # 2: write down properties related to moose
  • # 3: tallied the frequency of those features, then labeled whether those features relate to certain modalities/brain regions
    • Ex visual, tactile features
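The tallying procedure can be sketched as follows (the features and modality labels below are made up for illustration, not Cree & McRae's actual norms or coding scheme):

```python
from collections import Counter

# Hypothetical feature listings from 4 ppl for the word "moose".
listings = [
    ["has antlers", "is large", "is brown", "lives in forests"],
    ["has antlers", "is large", "has fur"],
    ["has antlers", "lives in forests", "is brown"],
    ["is large", "has fur", "has antlers"],
]

# Tally how many ppl produced each feature (the semantic feature norm).
norms = Counter(f for person in listings for f in person)

# Each feature can then be labeled by modality (labels are illustrative).
modality = {"has antlers": "visual", "is large": "visual",
            "is brown": "visual", "has fur": "tactile",
            "lives in forests": "encyclopedic"}
print(norms["has antlers"])  # 4 — produced by all participants
```

Aggregating these tallies per modality is what lets you ask whether, say, visual features dominate for creatures while functional/tactile features dominate for tools.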

Results

  • The distribution of the # of features varied across categories
  • Density plot
    • Taller cone: more ppl produced that feature
    • Computed the relevance of each feature for each domain
      • i.e. how well those features predict/explain performance in that domain
    • Results: there’s diff distributions for features across categories
    • Ex. Creatures line
      • There are almost no cones on the left side and barely any on the right side
      • This shows that visual motion is more important for the creature category than for other categories (ex. top-most bar = nonliving things; for a subgp of living things, motion is irrelevant)
    • If you go through all the modalities, some concepts in some categories rely more on certain modalities
      • Ex. animals rely more on motion
      • Ex. tools rely more on tactile info
    • Thus, in the Hub and Spokes model
      • Some categories’ info shapes the statistical learning in the hub more than other categories’
  • Dimensionality reduction: look at how knowledge is organized based on how features tend to co-occur across different concepts w/in a category
    • This diagram shows that there are clusters of concepts
    • Top = living things
    • Bottom = non-living things
    • Middle = fruits/vegetables
    • -> this shows that living things, non-living things, fruits/veggies are 3 separate categories
    • These co-occurrence statistics can drive the category effects
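A toy version of this dimensionality reduction (made-up concept x feature matrix; PCA via SVD stands in for whatever reduction method the authors actually used):

```python
import numpy as np

# Hypothetical concept x feature matrix (1 = feature listed for concept).
concepts = ["dog", "moose", "hammer", "drill", "apple", "celery"]
features = ["moves", "has fur", "is a tool", "metal", "edible", "grows"]
X = np.array([
    [1, 1, 0, 0, 0, 0],   # dog
    [1, 1, 0, 0, 0, 0],   # moose
    [0, 0, 1, 1, 0, 0],   # hammer
    [0, 0, 1, 1, 0, 0],   # drill
    [0, 0, 0, 0, 1, 1],   # apple
    [0, 0, 0, 0, 1, 1],   # celery
], dtype=float)

# PCA via SVD: project concepts into 2D based on feature co-occurrence.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
coords = U[:, :2] * S[:2]   # one 2D coordinate per concept

# Concepts sharing features land close together, so animals, tools, and
# fruits/veggies fall out as three separate clusters in the plot.
```

Nothing category-specific was built in: the clusters emerge purely from the statistics of which features co-occur, which is the point the diagram makes.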