Lec 7/ TB Ch 12 Flashcards

1
Q
A

Cog and neural distinctions b/w concrete and abstract concepts

Theoretical background

  • Dual coding model
    • Paivio
    • Word meanings are based on modality specific rep (nonverbal codes) and lexical associations (verbal codes)
      • Imagens = nonverbal codes
      • Logogens = verbal codes
    • Concrete concepts draw equally on both systems; abstract concepts rely mainly on verbal system
    • Ex. concept “telephone”
        • Grounded cog model: nonverbal system implements modality specific features of concepts
    • Ex. how telephones usually look, sound, feel
    • Verbal system: linguistic storehouse of word forms
    • Ex. phonological form of the word telephone and auditory/motor instantiation of that form during overt and covert (subvocal) speech processing
      • Rather than representing word forms in complete isolation from each other, the verbal system captures complex networks of frequency-based associations among them
      • Telephone is linked w/ many other lexical items that co-occur w/ it – ring, #, directory, call, conversation
  • Associative nature of verbal system is important
    • It supports the Dual Coding Model’s account of the major diff b/w the meanings of concrete and abstract words
    • Based on the theory, conceptual k is not limited to nonverbal info; instead, it embraces verbal word association
    • Concrete concepts are thought to have more or less equal amounts of nonverbal and verbal content
    • Ex. meaning of telephone includes not only the various modality-specific semantic features mentioned abv, but also the web of associative links w/ other lexical items
    • Abstract concepts depend more immediately and more substantially on verbal than nonverbal content
    • Ex. Religion may activate church first as a verbal associate and then as an image of a church
    • Ex. due to the stat tendency to co-occur in the same discourse contexts, abstract words like money, stock, profit etc have strong associative links w/ eo in the verbal system
    • Far from being semantically irrelevant, these reciprocal links are assumed to actually constitute the meanings of the words
      • Imagery and verbal processes together help w/ comprehension of concrete language
      • Verbal processes predominate in the case of abstract language
    • Since the Dual Coding Model suggests that concrete words engage both systems to equal degrees while abstract words rely primarily on the verbal system, it predicts that concrete words should have distinct processing advantages over abstract words
      • Concreteness effects: concrete words have certain processing advantages over abstract words, like being recognized faster and remembered better
        • Ex. when ppl perform a lexical decision task that requires them to distinguish b/w real words and pseudowords, they respond faster to concrete words (ex table) than abstract words (ex special)
      • When ppl are asked to remember certain words, they are better w/ concrete than abstract items
    • Findings are based on large databases
  • Vigliocco et al
    • Extended the Dual Coding Model
    • There’s 2 main systems
    • # 1: experiential system – stores long-term modality-specific representations
    • # 2: distributional system – registers statistical co-occurrence patterns of words across discourses
    • This approach assumes concrete and abstract concepts incorporate diff proportions of experiential and distributional info
    • Concrete concepts have more experientially based modality specific content than distributional based content
    • Abstract: opp
    • Salient feature of this model:
      • Brings together under a single rubric a substantial amount of psycholinguistic and computational data (see the sketch below)
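A minimal sketch of the idea above (my own illustration, not Vigliocco et al.'s actual model): a concept is a blend of experiential (modality-specific) and distributional (co-occurrence) information, with the mixing weight differing for concrete vs. abstract words. The vectors, dimensionality, and weights below are invented placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def concept(w_exp, dim=50):
    """Blend a modality-specific (experiential) vector with a
    co-occurrence-based (distributional) vector."""
    experiential = rng.normal(size=dim)     # stand-in for sensory/motor features
    distributional = rng.normal(size=dim)   # stand-in for corpus co-occurrence statistics
    return w_exp * experiential + (1 - w_exp) * distributional

telephone = concept(w_exp=0.7)   # concrete: weighted toward experiential content
justice = concept(w_exp=0.2)     # abstract: weighted toward distributional content
```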
2
Q
A
  • Context availability model
    • Another alternative to the Dual Coding Model
    • Suggests all word meanings are amodal in format, but differ in how hard they are to pin down
    • Concrete concepts tend to be stable and insensitive to context, whereas abstract concepts tend to be more variable and sensitive to context
  • Saffran and Sholl
    • Concrete = rose
    • Abstract = phase
      • Varies w/ context: phase of moon vs phase of bb dev
  • Hoffman et al
    • Concrete = spinach
    • Abstract = chance
      • -> situation based on luck (ex. it’s down to chance)
      • -> an opportunity may arise (ex. I'll do it when I get a chance)
      • -> risky option (ex. take a chance)
    • Chance is harder to understand w/o context
  • Evidence
    • Abstract words tend to appear in a wider range of linguistic contexts and have a larger # of distinct senses (see the toy sketch at the end of this card)
    • Several studies have shown that when conceptually constraining context is provided in the form of 1+ prior sentences to scaffold the interpretation, abstract words are understood as efficiently as concrete words
  • Paivio et al
    • Documented sig concreteness effects for linguistic stimuli beyond the single word lv
    • Ex. measured ppl’s capacity to comprehend and recall entire texts
      • Their performance was sig better for concrete than abstract material
  • -> evidence – next section
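A toy illustration (my own, with an invented mini-corpus) of how the contextual-diversity idea above can be operationalized: count how many distinct contexts a word occurs in; an abstract word like chance should span more contexts than a concrete word like spinach.

```python
from collections import defaultdict

documents = [
    "it is down to chance whether we win",
    "i will do it when i get a chance",
    "he decided to take a chance on the startup",
    "the spinach in the garden needs watering",
]

contexts = defaultdict(set)
for doc_id, doc in enumerate(documents):
    for word in doc.split():
        contexts[word].add(doc_id)      # record each document the word appears in

for word in ("chance", "spinach"):
    print(word, "occurs in", len(contexts[word]), "distinct contexts")
```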
3
Q
A

Box 12.1: Do abstract concepts have metaphorical foundations?

  • Lakoff and Johnson: Metaphors We Live By
    • Abstract concepts derive their structure and content from concrete concepts via metaphors
    • Ex: love is a journey
    • Ex: happy is up, sad is down
      • "this boosted my spirits", "I'm feeling down"
    • Ex. argument is war
      • He attacked every weak point in my argument
    • Ex. linguistic expressions are containers, communication is sending
      • Your ideas seem hollow
      • I gave you that idea
    • Ex. Time is space
      • The appointment is on Monday
      • She worked through the evening
  • Some studies show that metaphors help us think
  • But there are limits to metaphors
    • Ex. Love is a journey
    • Love has a destination; but you don’t think about going to the next station
    • Ex. Time is space
      • some brain-damaged patients can understand the temporal meanings of English prepositions (like on Monday) even though they cannot understand the corresponding spatial meanings of the very same prepositions (ex. in the room)
      • IOW: even though the time is space metaphor influenced the development of English, it is not necessary in linguistic processing of contemporary adults
4
Q
A

Evidence from PET and fMRI

  • Examine the concrete/abstract distinction
  • Wang et al 2010
    • 300 ppl
    • 20 PET/fMRI studies that explored the concrete/abstract distinction
    • Concrete > Abstract contrast
      • Sig effects in 3 main regions
      • # 1: left ventral temporal cortex, fusiform gyrus
        • Shape and color features of object concepts
        • Greater response to concrete than abstract words, reflecting retrieval of visual-semantic features
        • This aligns more w/ Dual Coding Model (posits modality-specific conceptual k) than the contextual availability model (posits amodal conceptual k)
        • The degree to which the fusiform gyrus is engaged by concrete vs abstract words depends to some extent on how deeply the meanings of those words are processed
      • # 2: bilateral posterior cingulate gyrus
        • Posterior area/ retrosplenial cortex is associated w/ many fx (ex. visual imagery, spatial attention, navigation, and episodic mem)
        • it may facilitate the situational placement of particular types of objects in particular types of environments
          • Ex. when a person reacts to the word toaster by imagining one in a typical kitchen setting
        • If so, more response of posterior cingulate to concrete than abstract words aligns more w/ the Dual Coding Model than the Context Availability Model b/c it reflects activating perceptual simulations rather than amodal semantic structures
      • # 3: left inferior parietal lobe, angular gyrus
        • Conceptual processing of concrete words
        • Binder et al 2009 - The most dense concentration of activation foci was in the left angular gyrus
        • This is more consistent w/ Dual Coding model
    • Abstract > concrete contrast
      • 2 regions
      • # 1: mid/superior sector of the left anterior temporal lobe (ATL)
        • How it contributes to the conceptual processing of abstract words is unclear
        • Region is involved in high-lv speech perception and sentence comprehension
          • It may play a role in verbal word associations
          • This fits w/ Dual Coding Model as it assumes the verbal word associations are critical to abstract words
        • The anterior area of the middle temporal gyrus is the semantic hub postulated by the Hub and Spoke Model
        • Alternative possibility: it implements amodal semantic structures
          • This fits the Context Availability Model
          • It admits amodal rep and assumes they are engaged more by abstract than concrete words
      • # 2: left IFG
        • Includes Broca's area, and is linked w/ many linguistic fx
        • 2 of those fx are relevant to processing of abstract vs concrete words
          • Each interpretative possibility reflects a diff theoretical perspective
          • 1: left IFG subserves the articulatory component of auditory-verbal STM
            • This region may maintain in an activated state the verbal word associations that are more integral to the meanings of abstract than concrete words
          • 2: left IFG – strategic control of semantic processing
            • Context availability model
              • Help regulate selection of specific word senses
              • This is more important for abstract than concrete words, esp when stimuli are presented alone
5
Q
A

Evidence from neuropsychology and rTMS

  • Question: does the Dual Coding Model or the Context Availability Model provide the best characterization of the role that the left IFG plays in processing abstract vs concrete words?
  • Hoffman et al 2010
    • Eval ppl’s ability to understand these 2 types of words in 2 separate conditions (no context vs w/ context)
    • Dual Coding Model – does not predict this experimental manipulation should differentially affect the comprehension of the 2 types of words
    • Context availability model – predicts that abstract words should be easier to understand when relevant contextual info is given that facilitates the selection of certain meanings
    • 2 experimental conditions
    • Condition 1: semantic similarity judgement task
      • Do a semantic similarity judgement task
      • Each trial: present a probe word w/ 3 choice words
        • 1 is closely related in meaning to the probe
        • 2 not
        • Decide which choice word was most semantically similar to the probe
        • Some trials included concrete words and others abstract
    • Condition 2: subjects performed the very same task again
      • But it was preceded by 2 sentences that jointly composed a cue which was either relevant or irrelevant to the probe word
      • Both experimental conditions were employed in 2 separate studies
        • A gp of brain-damaged patients
        • Other gp: healthy subjects w/ rTMS
    • Study 1: 6 brain-damaged patients w/ stroke-induced LH lesions that varied in their focus and extent but overlapped in the IFG
      • Given the greater contribution of the left IFG to abstract than concrete words, we expect the patients to show worse comprehension of abstract than concrete words
      • But context availability model makes additional predictions
      • 1: predicts the patients’ comprehension of abstract words should sig improve when relevant contextual cues are provided
        • b/c these cues reduce the need for the kind of regulatory semantic processing that the left IFG subserves for semantically variable abstract words, helping to select the most appropriate interpretation for the task
      • 2: this benefit will be minimal for concrete words b/c their meanings tend to be more stable and tangible than those of abstract words
    • Results: align w/ predictions
      • Overall, patients’ accuracy was much worse for abstract than concrete words
      • Their accuracy for abstract words was sig boosted when relevant contextual cues were provided compared to none (see the sketch at the end of this card)
      • A similar pattern appeared for concrete words, but it was not sig
      • Irrelevant cues
        • Irrelevant cues had a minor -ve impact on understanding abstract words, but a major -ve impact on understanding concrete words
        • Reason: concrete words hv more rigid meanings and are less able to accommodate competing info
  • Study 2
    • H: left BA45 (site of most lesions among patients in study 1) is essential for strategic semantic processing
      • Based on context availability model, it is required more by abstract than concrete words
      • 13 adults performed modified versions of the tasks that had been given to the patients
      • Important change
        • Only relevant contextual material was provided as cue
        • Subjects also performed a control task that consisted of making similarity judgements about #s
        • -> controls for general task difficulty
          • Some # trials were classified as easy (comparable to concrete words in the no-cue condition)
          • Some were hard (comparable to abstract words in the no-cue condition)
        • Main goal here: determine whether applying rTMS to left BA45 would sig affect the comprehension of abstract vs concrete words
      • Method: deliver a train of TMS pulses to target site for 10 min prior to administration of tasks
      • The behavioral effects of rTMS last for several min
  • Expectation: RT is slower for abstract than concrete words due to greater processing difficulty
  • Theoretical perspective: context availability model predicts that in no-cue condition, rTMS should lengthen RT for abstract words even more, but not affect RT for concrete words
  • Repetitive stimulation of left BA45 disrupts the capacity to guide the selection of the most appropriate word meanings (required more by abstract words, esp when relevant cues are absent)
  • If cues are present, they may be sufficient to overcome the rTMS effect
    • RT for abstract words would not increase sig
  • Dual Coding Model
    • Diff expectations
    • Predicts rTMS will lengthen RT for abstract but not concrete words in both the no-cue and cue conditions, b/c in both conditions abstract but not concrete words need verbal word associations to be kept in an activated state
  • Results favor the context availability model over dual coding model
    • Subjects’ RT were slower for abstract than concrete words
    • For abstract but not concrete words, there was an interaction b/w rTMS and contextual cues
    • RTs for abstract words were sig longer after than b4 rTMS, but only when those words had to be understood w/o the benefit of semantically relevant contextual cues
    • RT for concrete words were not influenced in these ways
    • X
    • The fact that abstract but not concrete words were affected by rTMS in the no-cue condition cannot be attributed to the inherently greater processing difficulty of abstract words, b/c the hard # judgement task was unaffected by rTMS
  • Supports the H that left BA45 contributes to the comprehension of abstract words by helping to resolve competition b/w the various diff interpretations that are possible for these words
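A hedged sketch of the cue-by-word-type pattern described above, using made-up accuracies (not the patients' actual data): the Context Availability Model predicts a large cue benefit for abstract words and little benefit for concrete words, i.e., a positive interaction.

```python
accuracy = {                     # hypothetical % correct, NOT the patients' data
    ("abstract", "no cue"): 55, ("abstract", "relevant cue"): 75,
    ("concrete", "no cue"): 80, ("concrete", "relevant cue"): 83,
}
cue_benefit = {w: accuracy[(w, "relevant cue")] - accuracy[(w, "no cue")]
               for w in ("abstract", "concrete")}
interaction = cue_benefit["abstract"] - cue_benefit["concrete"]
print(cue_benefit, "interaction =", interaction)   # large positive value = the predicted pattern
```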

Summary

  • Concrete words are processed more efficiently than abstract words
  • Dual Coding Model: concrete words are easier to understand b/c their meanings draw on 2 representational systems
    • Nonverbal codes consist of modality specific perceptual and motor features
    • Verbal codes consist of frequency-based word associations
    • Abstract words are harder to understand as their meanings depend mainly on verbal codes
  • Context availability model
    • Representational format of all word meanings = amodal
    • Concrete words have stable meanings that are constant across contexts -> easier to process
  • Wang et al’s meta analysis
    • Concrete words tend to activate 3 main brain areas:
      • Left fusiform gyrus (associated w/ visual shape and color rep)
      • Bilateral posterior cingulate gyrus (associated w/ visual imagery, spatial attention, navigation, and episodic mem)
      • Left angular gyrus (associated w/ integrative fx)
    • Abstract words activate 2 areas
      • Left middle/superior ATL (high lv spoken language comprehension and amodal semantic structures)
      • Left IFG (auditory-verbal STM and strategic control of semantic processing)
    • Data for concrete words are better explained by the Dual Coding Model
    • Data for abstract words: both theories
  • Hoffman et al 2010
    • Showed that the left IFG helps comprehension of abstract words by helping to identify the best interpretation for the task at hand
      • 1: most valuable when abstract words are encountered w/o disambiguating contexts
      • 2: a specific manifestation of the strategic control of semantic processing
      • 3: more in keeping w/ the context availability model
6
Q
A

A semantic hub for abstract concepts

  • Hub and Spoke model
    • ATLs in both hemispheres house a modality-invariant integrative device (semantic hub)
      • Bind and organize various conceptual features for word meanings
  • Look at studies that show the semantic fx of ATLs include abstract concepts (nouns & verbs)

Evidence from semantic dementia

  • SD: a primary progressive aphasia in which conceptual k gradually deteriorates due to worsening atrophy in the ATLs and nearby temporal regions
  • Can affect abstract concepts
    • SD patients show “reverse concreteness effects”
    • Prev studies suggest that the L ATL is recruited more by abstract words -> damage here should cause more impairment for abstract words
    • Some studies show reverse concreteness effects are found in only a few patients
  • Case studies
    • Ppl w/ herpes simplex encephalitis (HSE)
      • Virus invades ATLs bilaterally
      • Better K of abstract than concrete words
  • Macoir’s 2009
    • Case study
    • Patient SC, w/ SD
    • Psych prof
    • Struggled to retrieve words, understand ppl's speech, recognize objects, and use them appropriately
    • Ex. went to a car mechanic to fix a cigarette lighter
    • Tests showed he had a conceptual k impairment, but intact perception, motor control, and EF
    • Atrophy in polar and inferolateral sectors of ATLs (LH bias)
    • X
    • SC could talk about abstract concepts better than concrete ones
    • Doc administered the same set of tasks at 3 diff time points longitudinally
    • “odd one out” task
      • Show probe & 2 word choices
      • Determine which choice word was more semantically different from the probe word
        • 40 abstract words
        • 20 concrete living things
        • 20 concrete non-living things
      • Results
        • Abstract condition (sig)
          • T1 & T2 = mildly impaired (70%)
          • T3 = severely impaired (40%)
        • Concrete condition (not sig)
          • T1 & T2 = already severely impaired (40%)
          • T3 = worse (13%)
    • Matching task
      • Present word w/ 4 pics
      • Identify the pic that best correspond to the word
        • 40 abstract
        • 20 concrete living
        • 20 concrete non living
      • Abstract condition
        • Ex. the pictures associated w/ the word chance
          • Correct (ex. a person finds money)
          • Related distractor (a person bangs his leg)
          • Unrelated distractor (a person puts a flashlight in a bag)
      • Concrete condition
        • Pic associated w/ snake
          • Correct pic – showed a snake
          • Semantic related distractor – turtle
          • Semantic and visually related distractor – alligator
          • visually related distractor – belt
      • Results
        • Abstract condition: T1&2 > T3
        • Concrete condition: not sig
    • Word definition task
      • Show word -> generate the most complete definition possible
      • 47 abstract words
      • 15 concrete living thing words
      • 28 concrete nonliving thing words
      • Rs rated SC's definitions
      • SC produced more correct definitions for abstract items, esp for high-frequency words
    • Results of the 3 tasks show reverse concreteness effects (see the sketch at the end of this card)
    • This violates the normal processing advantage of concrete over abstract words
  • Some studies show that reverse concreteness effects do not happen frequently enough to be considered a typical feature of SD
  • Hoffman and Lambon Ralph 2011
    • 7 tasks to SD patients
    • Probed concrete and abstract concepts
    • 1: synonym judgment – which of 3 choice words is most similar in meaning to the probe
    • 2: description-to-noun matching – decide which noun best matches a description
    • 3: description-to-verb matching – decide which verb best matches a description
    • 4: verb similarity – which verb choice best matches the probe
    • 5: word-pic matching – which pic best match the word
    • 6: mischievous monkey test w/ pics (MMT): similar to #5
      • More difficult
    • 7: mischievous monkey test w/ words (MMT)
  • Results: ppl do better on concrete than abstract
  • Reverse concreteness effects do not appear to occur frequently enough to be a typical SD symptom
  • Most patients: semantic k deteriorates, abstract concepts are affected more
  • Aligns w/ Wang et al’s data
    • SD affects mid/superior sector of left ATL
    • This region supports abstract word processing
  • X
  • Hoffman and Ralph’s data, Wang et al’s data
    • Accommodated by 3 frameworks
    • Dual Coding Model: the left middle/superior ATL contributes more to abstract than concrete concepts b/c it supports high-lv spoken language comprehension -> verbal word associations
    • Context availability model: the left middle/superior ATL contributes more to abstract than concrete concepts b/c it may store amodal semantic reps
      • These reps may be engaged more by abstract words as they have more interpretations
    • Hub and Spokes model
      • Similar to context availability model
  • Why do minority of SD patients show reverse concreteness effects
    • # 1: functional anatomical parcellation of ATLs and distribution of atrophy in SD
      • Abstract words depend more on the middle/superior sector of the left ATL
        • If atrophy affects that area less than the ventral sector (which supports concrete object concepts) -> reverse concreteness effects
    • # 2: individual diffs in abstract conceptual K prior to disease onset
      • If a patient has an unusually large abstract vocab (ex. a professor, like SC) -> reverse concreteness effects
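A small sketch using the approximate odd-one-out accuracies reported above for patient SC: computing a simple concreteness-effect index (concrete minus abstract accuracy) makes the "reverse concreteness effect" explicit, since the index comes out negative at every time point.

```python
accuracy = {                     # approximate values from the odd-one-out task above
    "T1/T2": {"abstract": 0.70, "concrete": 0.40},
    "T3":    {"abstract": 0.40, "concrete": 0.13},
}
for time, acc in accuracy.items():
    effect = acc["concrete"] - acc["abstract"]     # positive = normal, negative = reversed
    print(time, "concreteness effect =", round(effect, 2))
```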
7
Q
A

Evidence from rTMS

  • Examined if left mid/superior ATL plays a greater role in understanding abstract than concrete words
  • Pobric et al
    • 12 healthy ppl
    • Did semantic similarity judgement task
      • Decide which of 3 choice words were most similar to the probe
      • 3 sets of trials
        • 48 trials: words w/ high imageability
        • 48 trials: med
        • 48 trials: low
    • Control task – judge #s instead of words
    • Did the task b4 and after the delivery of 10 min of rTMS
    • rTMS was applied to the left ATL in one session and to the right ATL in another session
    • Results: RT & accuracies
      • RT was longer after rTMS for lexical, not # task
      • Effects in both LH and RH
      • As imageability of items decreased, impact of rTMS on RT increased esp on LH
      • Accuracies: more errors in lexical task w/ low imageability items
  • Thus, the rTMS findings support the idea that the left lateral ATL (esp the anterior middle temporal gyrus) is critical for processing abstract concepts
  • RH has a weaker role
  • Supports all theories, esp Hub and Spoke Model (ATLs bilaterally = conceptual K)

Summary

  • Nropsych, SD, rTMS studies align w/ Wang et al’s meta analysis
    • Middle/superior area of left ATL contribute to abstract concept processing
  • SD: progressive tissue loss in ATL -> degrades abstract and concrete concepts
    • Early stage: worse on abstract than concrete items normally
    • Some patients = reverse
      • Reason 1: Less atrophy in left middle/superior ATL than left ventral ATL
      • Reason 2: higher capacity for abstract thought prior to symptoms
  • rTMS: temp disrupting left anterior middle temporal gyrus impairs semantic processing of abstract words (RH = weaker effect)
  • Thus, the SD and rTMS data are explained by the Dual Coding Model, the Context Availability Model, and esp the Hub and Spoke Model (ATLs in both hemispheres -> semantic structures of words)
8
Q
A

Domains of abstract concepts

  • Prev studies treat abstract concepts as a homogeneous class
  • But there are specific domains of abstract concepts (ex. emotions and #s)

Emotions

  • Emo words (ex. fear, anger, happiness, sadness) are considered abstract
  • When these words are processed deeply, they are complex concepts that activate neural activity across the brain
    • Ex. brain areas for verbal associations, amodal reps, perception, action, and introspection
  • Abstract words have more affective connotations than concrete words
  • Know neural underpinnings
  • Researchers organize words along 2 dimensions (2D), based on ratings
    • Concreteness (extent the words refer to concrete objects)
    • Imageability: ease & speed the words elicit mental images in diff modalities
  • Kousta et al 2011
    • Showed that these 2D are distinct
    • Analyzed ratings for 4000 words -> words cluster on concreteness (abstract vs concrete) and imageability
    • E1:
      • Healthy ppl do lexical decision task
      • 80 real words
        • 40: high concreteness
        • 40: low
      • Matched on imageability, context availability, familiarity, frequency, etc.
      • Results: abstract words were recog faster than concrete words
    • E2:
      • Analyzed 500 words
      • Results: abstract words are more emotionally loaded than concrete words
      • Emotional loading was the main factor that influenced subjects' RT in the initial lexical decision experiment (see the regression sketch at the end of this card)
    • E3:
      • New set of healthy ppl
      • Lexical decision task & fMRI scanned
      • Showed 60 concrete nouns; 60 abstract
        • They were matched
      • Results: abstract words were more valenced and arousing
        • They showed sig activation in the rostral (pregenual) part of the anterior cingulate cortex, a key region for emo processing
  • Dual Coding Model and Context Availability Model can’t explain it
  • Need alt H:
    • Concrete concepts -> activate sensory and motor features
    • Abstract -> affective
  • Wilson Mendenhall et al 2011
    • Examine neural substrates of words w/ emo
    • Based on the Grounded Cog model
      • H: when ppl think deeply about the meanings of emo words, they simulate high-lv aspects of affective processing (and recruit those brain areas)
      • H2: emo concepts are not always activated in the same way
      • Ex. Fear
        • Approach fear: swat a bug, overprepare
        • Avoid fear: tell white lie
        • You may hv higher or lower HR
      • 2 proposals:
        • # 1: comprehending emo words rely on neural networks for emo
        • # 2: comprehension relies on other neural networks that correspond to the context (ex. sensory, motor, social)
    • Method:
      • fMRI study
      • examined 4 abstract concepts
        • 2 emo = fear and anger
        • 2 none-emo = observe and plan
      • Critical trial
        • 1: heard a situation description -> imagine being there
        • 2: heard 1 of the 4 concept words and judged how easy it would be to experience that concept in the situation
          • 2 types of situations:
            • Protagonist was careless -> Physical danger (ex. got lost in woods)
            • Social eval in unfair situations (ex. unprepared for work presentation as others did not help)
      • Rs mixed in "catch trials" (situation presented -> NO concept word) to tell apart neural responses to situations vs concept words
        • Predictions:
          • 1: the 2 emo concept words (fear and anger) will engage neural circuitry for emo
          • 2: these words will also engage other areas for perception, action, language, and social cog, contingent on the preceding situations
    • Results: Confirmed predictions
      • Deep semantic processing of emo terms activates areas for affective construal and regulation
        • Anterior cingulate cortex
        • Lateral and medial orbitofrontal (OFC) cortex
      • These areas were recruited by both words regardless of the preceding situation
      • Activated non emo related regions that were situation specific
        • FEAR
          • Physical danger situation (ex bodily harm)
            • Engaged visceral sensation (insular cortex)
            • Place recognition (parahippocampal cortex)
            • Auditory perception (superior temporal cortex)
            • Motor programming (inferior parietal cortex)
          • Social eval situations (judged -vely by others)
            • Social K areas (temporal poles)
            • Moral judgement (ventromedial PFC)
            • Cog control (dorsolateral PFC)
      • Non-emo abstract words
        • Emo-related regions were less activated
        • Engaged regions for visual, motor, and exec processes
    • Conclusion
      • Support that emo concepts are abstract but also activate areas for perception, action and affect
      • Can be explained by Grounded Cog model
      • NOTE: in the critical trials, ppl were immersed in the situation and imagined whether the given concept could be experienced in that situation
        • Issue: can't tell apart the neural circuitry for comprehending the words (fear and anger) vs the circuitry for deliberately evoking imagery
        • IOW: emotion-related brain areas may not have been activated during the initial comprehension of those words, but only later, during deliberate generation of affective imagery
        • Future studies -> tease this apart
      • Not all scholars agree that these processes can be clearly teased apart
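A hedged sketch of the logic behind Kousta et al.'s analyses, using simulated data (not their stimuli or results): if affect drives the abstract-word advantage, then a concreteness effect that appears when affect is ignored should shrink once valence/arousal are entered as predictors of lexical decision RT. The toy world below is constructed so that RT depends only on affect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
concreteness = rng.uniform(1, 7, n)
# Assumption for the toy world: abstract (low-concreteness) words carry more affect
affect = np.abs(rng.normal(0, 1, n)) * (7 - concreteness) / 6
rt = 650 - 30 * affect + rng.normal(0, 20, n)      # RT driven by affect, not concreteness

X1 = np.column_stack([np.ones(n), concreteness])
b1, *_ = np.linalg.lstsq(X1, rt, rcond=None)
X2 = np.column_stack([np.ones(n), concreteness, affect])
b2, *_ = np.linalg.lstsq(X2, rt, rcond=None)
print("concreteness slope, affect ignored:   ", round(float(b1[1]), 2))
print("concreteness slope, affect controlled:", round(float(b2[1]), 2))
```

In this toy world the first slope is positive (concrete words look slower) while the second is near zero, mirroring the claim that affect, not concreteness per se, carries the effect.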
9
Q
A

Numbers

  • Numbers are magnitudes that can be represented by various symbols (two, 2, II, etc)
  • They are abstract but grounded in bodily experience
  • NOTE
    • # s do not depend on ATLs
      • Pobric et al 2009
        • Applying rTMS to the ATLs affects abstract words but not #s
      • SD patients: spared # k (ex. can compare # magnitudes, order them, read and write #s)
        • Some are obsessed w/ #s (ex. compulsive counting, clock watching)
      • Reason – SD does not affect the brain's central magnitude representation system (intraparietal sulcus – IPS – and inferior parietal lobule – IPL)
      • These regions respond when ppl think about #s and when they eval other quantitative stimuli (ex. size and time)
      • The IPS/IPL uses a supramodal coding scheme (responds across diff notations: four vs 4)
        • It may also contain notation-sensitive coding schemes
  • How are # concepts anchored in sensorimotor systems?
    • We use fingers to learn to count
    • # representation is linked w/ body part rep (esp finger rep) in IPS/IPL
    • Damage here (esp LH)
      • Acalculia: impaired # cog (ex. counting, compare magnitudes, calculations)
      • Finger agnosia: impaired recognition, differentiation, and naming of the fingers of self and others
    • Both can be induced by applying current to the left IPS/IPL via intracranial electrodes or rTMS
  • Rusconi et al 2009
    • Analyzed activation patterns w/ fMRI
    • Analyzed fiber pathways w/ diffusion tensor imaging (DTI)
    • fMRI - # K
      • experimental task
      • 1: probe # K – ask ppl to add and subtract #s
      • 2: indicate if the final # in red is the correct answer
      • Control/baseline
      • 1: viewed a sequence of letters
      • 2: indicate if a final letter in red appeared b4
    • fMRI – finger K
      • Experimental task
      • 1: probe finger K – present a sequence of hand postures
      • 2: ask them to identify those in which the ring finger was extended
      • Baseline task
      • 1: view a sequence of hand postures
      • 2: report those in which the palm was visible
    • Subtracted the 2 baseline conditions from the 2 experimental conditions (the subtraction logic is sketched at the end of this list)
      • Activation patterns for # and finger rep in the left IPS/IPL were very close together (Fig A-C)
    • X
    • DTI part
      • Parietal areas associated w/ # and finger rep were directly connected by white matter pathways (Fig D)
  • Thus, # concepts are abstract but are anchored in bodily experience
  • Finger counting strategies emerge in childhood, and are still accessed by adults
  • Can’t represent infinity w/ our hands tho

Summary

  • 2 main abstract domains: emo and #s
    • They engage distinct brain areas
  • Emotion domain: 2 takeaways
    • 1: abstract words have affective component, and engage brain areas like anterior cingulate cortex
    • 2: if you carefully process emo words -> activate other affect-related areas: orbitofrontal (OFC) cortex
  • # s: word meanings (eight) and numerals (8) are represented in left IPL/IPS
    • Nearby brain areas are related to finger representation
  • Thus, emo and #s depend on modality-specific systems for perception, action, and introspection
    • This falls within the purview of the Grounded Cog model and the "spokes" of the Hub and Spoke Model
10
Q
A

Prev lec: The focus has been on concrete concepts in much of our discussion in class

  • Reasons
    • Easiest stimuli to define
    • bound to sensorimotor features
    • often represent high-frequency words that most people will know…
  • This really is the tip of the iceberg, however….
  • X
  • Top fig: concreteness ratings (concrete vs abstract)
  • Bottom fig: imageability (how easy it is to imagine this object in your mind)
  • Both dimensions are very correlated
  • 650+ = very rare words
11
Q
A

Abstract concept theories

Classic Account:

  • Dual Coding Model (Paivio, 1971; Western Ontario)
    • Dual coding = 2 systems of how semantic reps are coded
    • Pic
      • There’s verbal stimuli (ex. word “phone”)
      • Nonverbal stimuli (ex. picture “phone”)
      • Both enter into our sensorimotor system -> 2 types of representations
        • RHS: non-verbal system (ex. images/multimodal ideas)
          • Image = what sensorimotor modality it evokes
          • Ex. when you see a pic of a phone -> you recall other pics of phones and its sound -> generate nonverbal responses to stimuli
        • LHS: verbal system
          • Focuses on verbal structures; access words related to the concept
          • Ex. name of object "phone"; there's a link b/w the word "phone" and "ring", and b/w "ring" and "gossip"
        • Referential connections (lines in b/w)
          • Allows you to map info b/w the systems -> verbal/nonverbal responses
    • Phone is concrete
    • Abstract words (ex. love/justice) don't have a specific image associated w/ them
      • They may have verbal associations, but lack nonverbal/sensorimotor associations
      • IOW: abstract words rely more on verbal reps and less on nonverbal reps/context (see the sketch below)
  • Abstract words must therefore be determined more by verbal representations and verbal context
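A minimal data-structure sketch (my illustration, not Paivio's implementation) of the architecture described on this card: a verbal system of associated word units, a nonverbal system of modality-specific imagens, and referential connections between them. Concrete words like "phone" are reachable through both systems; abstract words like "love" mostly through the verbal one.

```python
verbal_associations = {            # logogen-to-logogen links inside the verbal system
    "phone": ["ring", "call", "number"],
    "ring": ["phone", "gossip"],
    "love": ["justice", "family", "romance"],    # abstract: verbal links only
}
imagens = {                        # nonverbal, modality-specific codes
    "phone": {"visual": "handset shape", "auditory": "ringtone"},
    # no entry for "love": little direct sensorimotor content
}
referential = {"phone": "phone"}   # referential connections between the two systems

def has_dual_code(word):
    """Concrete words are reachable through both systems; abstract words mostly
    through verbal associations alone."""
    return word in verbal_associations and referential.get(word) in imagens

print(has_dual_code("phone"), has_dual_code("love"))    # True False
```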
12
Q
A

How to operationalize abstractness?

  • A range of techniques, many based on explicit ratings of:
    • Imageability – how easily you can imagine smth
    • Concreteness – the extent to which a word refers to a concrete object
    • And new measures like "ease of verbalizability" (EoV)
      • Used by the prof
      • How easy it is to describe in words what an object refers to
      • Sorta the opposite of imageability
    • Can be specific to senses (ex. tactile, auditory)

Distinct Neural activation for concrete vs. abstract knowledge

  • More concrete words relate to sensorimotor areas (ex. vision)
    • Ex -24 panel
  • Abstract: conflict resolution
    • Ex. bottom right
13
Q
A

Complicating matters… different effects in different modalities

Armstrong, Barreiro Abad, & Samuel (2014)

  • Examined overall accuracy on Visual vs. auditory lexical decision
  • Prediction based on imageability and “ease of verbalizability”
  • Take away
    • LHS - Imageability scores
      • visual task (dashed line): imageability does not predict performance in visual lexical decision (flat line)
      • auditory task: the opposite – imageability is a stronger predictor of performance in the auditory lexical decision task
    • RHS: ease of verbalizability (EoV) – the opposite pattern
      • visual task (dashed line): EoV predicts performance in visual lexical decision (steeper line)
      • auditory task: EoV does not really predict performance in the auditory lexical decision task
  • Why are there opp effects?

Why would we get different effects of imageability in different modalities?

  • Slower responses overall?
    • Slower response in auditory task b/c you need to hear some or all the words b4 you can generate the response
    • No – the data were the same; only the predictors were swapped
  • Stronger top-down interactions?
    • Most likely
    • Regressed out variance in behavioral responses based on known factors that influence performance
      • Noticed there are unexplainably fast and slow trials
      • Rs then predicted performance on those fast and slow trials based on imageability and EoV
      • The same pattern of effects seen in the prev graphs was maintained
    • Visual task:
      • use eyes and occipital lobe -> less top-down influence from semantic system on what the word visually looks like
      • auditory system is less loaded -> effectively access semantic rep via spreading activation -> stronger effect from EoV
    • Auditory task
      • Involves hearing words -> the auditory system is loaded -> harder to use EoV
      • Rely more on imageability to respond in this task
  • Related to Noun verb distinctions (Watson et al)
    • There’s diff reps that are more/less relevant depending on the input vs output of the situation
    • Here -> similar effect
    • “”Consistent with the data, and supported by analyses of the “unexplainably slow” trials:
      • sort trials based on residual accuracy/latency, not factoring in concreteness
      • Analyze concreteness effects as a function of residual latency””
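A hedged sketch of the residual-based procedure quoted above, with simulated data and simplified nuisance predictors (not the authors' pipeline): first regress RT on known factors while leaving imageability/concreteness out, then split trials by residual latency and estimate the imageability effect within each split.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
frequency = rng.normal(0, 1, n)                  # known nuisance predictor (assumption)
length = rng.normal(0, 1, n)                     # known nuisance predictor (assumption)
imageability = rng.normal(0, 1, n)
rt = 600 - 25 * frequency + 15 * length - 10 * imageability + rng.normal(0, 40, n)

# Step 1: regress RT on the known factors only (imageability/concreteness left out)
X = np.column_stack([np.ones(n), frequency, length])
beta, *_ = np.linalg.lstsq(X, rt, rcond=None)
residual = rt - X @ beta

# Step 2: split trials by residual latency, then estimate the imageability effect in each half
slow = residual > np.median(residual)
for label, mask in (("fast-residual trials", ~slow), ("slow-residual trials", slow)):
    Xi = np.column_stack([np.ones(mask.sum()), imageability[mask]])
    b, *_ = np.linalg.lstsq(Xi, rt[mask], rcond=None)
    print(label, "imageability slope:", round(float(b[1]), 1))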
14
Q
A

Alternative Accounts / An even more complex picture

Context Availability

  • Word representations are amodal (i.e. not restricted to 1 sensory feature/modality) and differ in how fixed and specific a word's meaning is as a function of context
    • E.g., abstract words may occur in more diverse contexts than concrete words
      • Ex. dogs -> we talk about them in narrower, more concrete settings
      • Ex. Love -> happen in more diverse settings (friends, family, romantic)
      • Ex. Justice -> judicial, police justice
    • May also relate to the noun/verb distinction we discussed earlier in the course
      • Maybe verbs occur in more diverse contexts
      • -> Watson’s theory may be incomplete
  • Imageability may relate to many specific modalities, so representations may not be purely amodal
  • Findings differ based on task type (auditory vs visual task) -> emphasize vs deemphasize certain aspects of the word
15
Q
A

Tolentino & Tokowicz (2009)

  • Presented participants with abstract words (e.g., heaven) and concrete words (e.g., pumpkin) in a visual lexical decision task while also recording EEG.
  • Manipulation
    • A: present all abstract words first, then concrete words
    • B: the opposite – present all concrete words first, then abstract words
    • C: mixed
  • Only accuracy data (no RT data)
    • A nonspeeded task was used
    • Don't want to contaminate the lexical-semantic activation of interest w/ the motor response

Behavior

  • Looked at the accuracy only
  • Abstract first: Performance is more accurate for abstract and nonwords
  • Concrete first & mixed: similar
  • Across all conditions, accuracy for concrete words was constant
  • IOW: there may be differences in performance when the abstract words come 1st, esp for abstract words and nonwords

ERP—125-175 ms (visual/orthographic)

  • Here, we will focus on specific time windows
  • Assume the researchers are using electrodes that allow for optimal detection
  • Looked at changes in amplitude (i.e. how much the electrical activity changed in these time windows; see the sketch after this list)
  • X
  • For early time window (125-175 ms)
    • It represents visual/orthographic processing, esp visual word form area
    • Error bars are huge, and overlap across conditions
    • Point 1: nothing going on differently in the visual/orthographic processing across abstract words, concrete, and nonwords
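A generic sketch (random stand-in waveforms, not the study's ERPs) of the time-window analysis used here: average the amplitude within each window for each condition and compare the resulting means.

```python
import numpy as np

rng = np.random.default_rng(4)
sampling_rate = 500                                  # Hz (assumption)
times = np.arange(-0.1, 0.8, 1 / sampling_rate)      # seconds relative to word onset
erp = {cond: rng.normal(0, 1, times.size)            # stand-in channel-averaged waveforms
       for cond in ("abstract", "concrete", "nonword")}

def mean_amplitude(wave, start, end):
    window = (times >= start) & (times < end)
    return wave[window].mean()

for cond, wave in erp.items():
    print(cond,
          "125-175 ms:", round(float(mean_amplitude(wave, 0.125, 0.175)), 2),
          "300-500 ms:", round(float(mean_amplitude(wave, 0.300, 0.500)), 2))
```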

ERP-200-275 ms (orth/phon processing)

  • Related to orthographic and phonological processing
  • i.e. mapping graphemes onto phonemes
  • Here, all error bars overlap
    • Point 1: nothing going on differently in the orth/phon processing across abstract words, concrete, and nonwords
  • It’s a good thing there are no effects b/c this means the stimuli are not confounded (i.e. they are well matched on low-lv properties)

ERP—300-500 ms (N400/semantics)

  • Focus on abstract vs concrete words
  • The abstract-first & mixed conditions show an activity pattern that differs from the concrete-first condition
  • Why would the order matter (i.e. concrete first vs abstract first)?
  • Why does the mixed condition look more like the abstract first condition?
  • Thus, Order of presentation provides a context for words that interacts with a word’s meaning.
    • Order of presentation (i.e. abstract first vs concrete first vs mixed) provides a context that shapes how ppl access the word’s meaning

Implication

  • When more abstract words are presented first, they provide more abstract context -> activate broader sets of contextual words (i.e. each abstract word connects to more words)
  • IOW: abstract words provide less specific context, and this can shape/modulate the concreteness effect
    • If you have an abstract word -> activate a broad set of semantic K -> more neural activity
  • Present concrete words first, then abstract words: the abstract words behave more like concrete words
    • IOW: when we put ppl in the mindset of concrete words, they use this context to process things; this way of processing carries over when they are exposed to abstract words
16
Q
A

Static or dynamic?

  • Much work in the literature focuses on single words or concrete sentences that denote specific actions in the world.
    • -> This makes life much simpler for the experimenter.
  • Abstract words stress the importance of context effects (coming up later in class!), as well as of word-word (or word meaning – word meaning) interactions in shaping the meaning that is evoked.
    • Depending on how abstract/concrete the word is, this influences your ability to image the word/ EoV
    • Abstract words are more influenced by context
  • To paraphrase
    • David Rumelhart (A father figure to modern cognitive science; Cog Sci “Nobel” is named in his honor):
    • Quote: “Words don’t have meaning so much as they provide clues to meaning in a particular context.”
      • If you have a word in a sentence, the word does not provide a fixed amount of contribution to this sentence or other sentences;
      • Rather, that word provides a clue as to what it should mean in this context
      • IOW: EoV may be more important in certain contexts, while imageability matters more in others
  • Abstract words show how flexible the lexical and semantic system is
17
Q
A

Semantic ambiguity

  • Interpretation of a word varies based on context
    • e.g.,
      • can refer to river bank, or bank of Canada

Why is a Theory of Semantic Ambiguity Resolution Important?

  • 1: understanding ambiguous words is critical for theories of word and discourse comprehension
    • Context can affect processing of abstract words
    • A similar mechanism can affect how we process ambiguous words
  • 2: semantic ambiguity is pervasive across and w/in languages
    • across languages: English, Spanish, French, Hebrew, Japanese, etc.
      • all of these languages have ambiguous words
      • within languages: ~85% of high-frequency content words in English are ambiguous
    • Thus, you need theories of word and discourse comprehension that go hand in hand w/ a theory of semantic ambiguity

Why don’t we have a good theory so far?

Challenge 1: Relatedness of Interpretations

  • Ambiguous words are NOT a monolithic category
  • IOW: ambiguous words differ in how related their various interpretations are
  • Continuum: from unambiguous -> polysemous -> homonymous
    • Unambiguous words with a single interpretation
      • e.g., CHALK
    • Polysemous words with related senses
      • e.g., PAPER
        • physical property vs content of paper
    • Homonymous words with unrelated meanings
      • e.g., BANK
  • This is a continuum
    • Polysemous subclasses
      • Metonymic polysemes (polysemous words that are closer to unambiguous words)
        • Ex. Chicken
          • Can be animal or the meat we eat
      • Metaphoric polysemes: polysemous words that are closer to homonymous words
        • Ex. Star = Celestial body vs movie actor
        • The senses are related via metaphor
    • Distribution of Ambiguity in Language
      • Looked up how many meanings there are for different words in the dictionary
      • Red = unambiguous words
        • Dense: b/c many low f words hv only 1 meaning
      • Blue = polysemous
        • Quite a lot of words
      • Green = homonymous
        • Most studied type of ambiguous words
        • But are rare in language
        • 500-1000 homonyms that are suitable for experiments
      • Yellow – hybrid ambiguous (polysemous and homonymous)
        • Have multiple unrelated meanings and multiple related senses
      • Main point: a lot of words are not unambiguous; we need a theory that captures that

Challenge 2: Complex, Contradictory Effects

  • There are many complex and contradictory effects in the lit
  • Ex. Lexical decision task
    • Polyseme advantage: Polysemous words are responded to more quickly than unambiguous words and homonyms
    • This diff is related to diff activity in the N400 window
      • -> N400 effects are lexical-semantic in nature, generated by temporal-lobe regions where semantic K is coded
    • Ex. Semantic Categorization Task
      • The effects are different
      • RTs for homonyms -> sig slowdown
      • Ex. present word “bank” -> does this refer to a living thing or not -> Ans = No -> slower RT relative to other 2 types of words
  • X
  • Complex, contradictory effects also arise depending on the timing of the task
    • Swinney 1979
      • Presented homonyms to ppl, and provided sentences that bias you on how to interpret the word
      • Ex.
        • # 1: ppl listen to a sentence “the building was filthy, and it was no surprise to find spiders and other bugs in the room”
        • # 2: 250 OR 1000 ms after the ambiguous word “bugs”, a target word is presented visually
          • Ppl do a visual lexical decision task on the target word, which is consistent, inconsistent, or unrelated to the biased meaning
          • Ex. consistent: prev = bug; now = ant
          • Ex. inconsistent: prev = bug; now = spy
            • (i.e. spy planted a bug)
          • Ex. unrelated control word (ex. sew)
      • @250 ms: even though context biases how “bugs” should be interpreted, there was facilitation for both the consistent and the inconsistent target words
        • No facilitation for the unrelated word
      • @1000 ms
        • There is only a facilitation effect for the consistent condition
      • As such, there is a relatively weak early effect of context (see the sketch below)
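A toy computation (hypothetical RTs, not Swinney's numbers) showing how the priming pattern above is quantified: priming is the unrelated-minus-target RT difference at each delay, and it should appear for both meanings early but only for the consistent meaning late.

```python
rt = {    # hypothetical lexical decision RTs in ms, NOT Swinney's data
    250:  {"consistent": 560, "inconsistent": 565, "unrelated": 600},
    1000: {"consistent": 560, "inconsistent": 598, "unrelated": 600},
}
for delay, conds in rt.items():
    priming = {c: conds["unrelated"] - conds[c] for c in ("consistent", "inconsistent")}
    print(delay, "ms priming:", priming)   # both primed early; only consistent primed late
```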

Challenge 3: Narrow Scope of Previous Accounts and Unsubstantiated, Limiting Processing Assumptions about How the Accounts Work

  • Orthographic Account of lexical decision (e.g., Hino et al., 1996; 2011; Kawamoto, 1993)
  • Semantic Account of lexical decision (e.g., Rodd et al., 2004)
  • Simultaneous Access Account of context effects (e.g., van Petten & Kutas, 1987)
    • Focus on how ppl integrate contexts
  • X
  • -> Those accounts each focus on providing a theoretical explanation of how ppl perform a specific task
    • Each is good at explaining the effects for that one task, but cannot explain the contradictory effects in other tasks
  • Some efforts –
  • Decision System Account of lexical decision and semantic categorization (e.g., Hino et al., 2006; Hargreaves et al., 2011)
    • -> Tries to explain why performance varies across different tasks (ex. lexical decision, semantic categorization)
    • There are issues with their account
18
Q
A

Decision System Account (Hino et al., 2006)

  • The semantic coding process is the same across tasks
  • If the coding process is the same, any differences you see must come from sources outside of the semantic coding system
  • Specifically, task differences must arise from the post-lexical decision system
    • “[Task differences] are likely not due to the semantic-coding process as that process is conceptualized within parallel distributed processing [PDP / Connectionist] models” - Hino, Pexman, and Lupker (2006, p. 266)

Limitations

  • 1: Unparsimonious
    • They don’t provide a specific a priori theory of how the decision system should differ across tasks
    • IOW: you don’t have 1 simple theory that explains lots of data
  • 2: Does not generate specific predictions
    • When a new task generates different effects, you have to work out post hoc how the decision system is configured to generate such effects
  • 3: Underspecified
    • Don’t have any simulations that show how it actually works
  • 4: ** most key (e.g., Kawamoto et al., 1994, Piercey & Joordens, 2000)
    • NOTE: they assumed that semantic coding process is the same across tasks
    • The prof thinks this underestimates the temporal processing dynamics in the brain
    • And it can’t explain semantic ambiguity effects

Settling Dynamics Account

  • 1: Temporal processing dynamics will change semantic activation at different points in time
  • 2: These Dynamics interact with representations of ambiguous and unambiguous words to produce different ambiguity effects at different points in time.
  • 3: diff tasks require diff amounts of evidence to generate a response
    • -> Different tasks interact with different pts of the settling dynamics to produce a range of effects
    • This holds notwithstanding (despite) any qualitative differences between tasks
  • IOW:
    • Not everything in semantic coding process is the same (ex. across lexical decision and semantic categorization)
    • Prof argues: the systems are more similar than different
    • -> we will look at a neutral semantic task
  • X
  • Processing dynamics:
    • Co-operation between consistent features vs competition between inconsistent features of a word
      • The diff b/w cooperation and competition aligns w/ biologically plausible networks
      • Ex. there is fast co-operation (excitation) and slower competition (inhibition)
    • There are Weak (slow) contextual effects
      • IOW: it takes some time for context to bias interpretation of a word
    • The processing dynamics will interact w/ the Unambiguous, homonymous, and polysemous reps of the words
      • Representations
      • Unambiguous, homonymous, and polysemous words differ in terms of:
        • Consistency of features across interpretations
          • Ex. Chicken as meat vs animal
            • Both have wings (more consistency/shared features in a polysemous word than in homonymous words, which barely have any shared features)
            • Ex. money bank vs river bank
        • Contribution of context in selecting an interpretation
  • Postulated semantic activation vs Time
    • Polysemous words (appropriate vs inappropriate)
    • Unambiguous
    • Homonymous words (appropriate vs inappropriate)
    • Onset of semantic processing: (some low lv processing already happened)
    • x
    • Unambiguous (baseline) = 1 meaning + 1 context
    • Other black line (BLEND state)
      • Gradual transition where context effects start to build up
        • there is less context-free partial activation, and more context-sensitive processing
      • This does not mean that after a certain period of time, you go from pure context-free processing to pure context-sensitive processing
      • Rather, the contextual constraint on processing builds up gradually over time
  • X
19
Q
A
  • Unambiguous
    • LS: Cooperation among consistent features: It has multiple semantic features; these features cooperate to activate one another -> mod amount of cooperation
    • Mid: There’s no competition b/c the words are unambiguous
    • RS: context is less important (relative to other items)
  • Homonyms
    • LS: Cooperation among consistent features = mod
      • Ex. bank has 2 clusters of features (river bank vs money bank)
      • W/in each cluster of features, you have cooperation
    • M: competition b/w inconsistent features = high
      • Ex. features of the $$ bank are inconsistent w/ those of the river bank
    • RS: importance of context - high
      • Since competition b/w inconsistent features is high -> context is v important
    • -> this explains why the green line is below the red line
      • The green line ramps up more slowly b/c the features of the homonym are competing w/ one another -> this slows down semantic activity
      • Once context kicks in, you can fully activate the contextually appropriate interpretation of the word and fully suppress the contextually inappropriate interpretation
  • Polysemous (sorta like Goldilocks)
    • LS: Cooperation among consistent features = high
      • Unlike homonyms (no shared features), there are overlapping/shared features
      • Ex. there’s wings for Chicken as meat vs animal
    • M: competition b/w inconsistent features = mod
      • Compared to homonymous; some features won’t compete
      • Ex. wings = activated in both interpretations
      • Only the idiosyncratic features of the polyseme will compete w/ eo
    • RS: importance of context – mod
      • You need context to rule out the features that are not present in both interpretations of the word
      • But it’s not critical for the features that are shared across both interpretations
    • -> in the line graph (a toy simulation of these curves follows this card):
    • Early on – cooperation is strong (more activity for the polyseme than the unambig word)
      • Once context becomes more important -> fully activate the contextually appropriate interpretation of the polyseme
      • -> also largely but NOT fully suppress the contextually INappropriate interpretation of the polyseme
      • NOTE: it is not fully suppressed b/c some features of the inconsistent interpretation are also present in the consistent interpretation
        • Ex. “wings” is activated for both meanings of chicken
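A toy, hand-rolled simulation of the settling-dynamics story in these two cards (my sketch, not the published model): consistent features excite each other, inconsistent features inhibit each other, and a weak contextual constraint arrives late. Feature counts, rate constants, and the context-onset step are arbitrary assumptions, so only the qualitative shape matters: an early polyseme advantage, slowly settling homonyms, and late contextual resolution of the appropriate sense.

```python
import numpy as np

def settle(n_shared, n_distinct, n_senses, steps=100, excite=0.03,
           inhibit=0.02, context=0.08, context_onset=60):
    """Activation of the contextually appropriate interpretation over time."""
    n_feat = n_shared + n_senses * n_distinct
    sense_of = np.full(n_feat, -1)                 # -1 marks features shared by all senses
    for s in range(n_senses):
        start = n_shared + s * n_distinct
        sense_of[start:start + n_distinct] = s
    a = np.full(n_feat, 0.05)                      # small initial activation
    appropriate = (sense_of == -1) | (sense_of == 0)   # sense 0 is the cued one
    history = []
    for t in range(steps):
        consistent = np.zeros(n_feat)
        inconsistent = np.zeros(n_feat)
        for i in range(n_feat):
            same_sense = (sense_of == -1) | (sense_of[i] == -1) | (sense_of == sense_of[i])
            others = np.ones(n_feat, dtype=bool)
            others[i] = False
            consistent[i] = a[others & same_sense].sum()      # co-operation (excitation)
            inconsistent[i] = a[others & ~same_sense].sum()   # competition (inhibition)
        drive = excite * consistent - inhibit * inconsistent
        if t >= context_onset:                     # weak, late-arriving contextual constraint
            drive += np.where(appropriate, context, -context)
        a = np.clip(a + drive * (1 - a), 0.0, 1.0)
        history.append(a[appropriate].mean())
    return history

curves = {
    "unambiguous": settle(n_shared=6, n_distinct=0, n_senses=1),
    "polysemous":  settle(n_shared=4, n_distinct=2, n_senses=2),
    "homonymous":  settle(n_shared=0, n_distinct=6, n_senses=2),
}
for t in (30, 55, 95):     # early (LDT-like), later (SCT-like), after context kicks in
    print(f"t={t}:", {name: round(float(c[t]), 2) for name, c in curves.items()})
```

Reading the curves off at an earlier vs. later time point is what the next card uses to link the same dynamics to lexical decision, semantic categorization, and Swinney's priming results.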
20
Q
A

How does this explain the different data for diff tasks?

  • NOTE: neither task requires contextual integration
    • Responses have to happen b4 the BLEND state
  • Lexical decision task: the lexical decision is made pretty quickly (RT ≈ 570 ms)
    • In the line graph: since the LDT read-out is earlier (purple), the blue line/polysemes has more activity than the unambiguous words and homonyms
    • -> this reflects a polyseme advantage
  • Semantic categorization task: slower (RT ≈ 740 ms)
    • In the line graph, the SCT read-out happens at a later time point (yellow)
      • There is still more activity for blue line/polysemes
      • The unambiguous word activity is much closer to the polysemes
      • But the homonym activity is left behind here (lower activity)
      • As such, this explains the effects in the SCT (aka bar graph)
  • Thus, we can explain the data in both LDT and SCT based on how semantic activity dynamically changes over time
  • X
  • Swinney data
    • @ early time (250 ms)/context free processing (blue arrow)
      • Both green/homonyms lines (appropriate & inappropriate) overlap eo
      • Thus, you will see facilitation effects for the consistent and inconsistent words
    • @ late time (1000ms)
      • The inappropriate interpretation of the homonym has been suppressed, and the appropriate one fully activated
      • Thus, we see facilitation effect for the consistent interpretation, but no diff b/w inconsistent and unrelated ones
  • -> we can explain Swinney’s data
  • X
  • Many results from diff studies can be aligned to diff points on this continuum -> thus explain a very broad set of data
  • This suggests we can explain a broad set of data w/ a single mechanism (i.e. semantic activity as a fx of a word’s meaning, the # of meanings it has, and context)