lecture 4 - multisensory perception & the sense of body ownership Flashcards
perception is..
multisensory in natural interactions with the environment
information from different senses can either be..
complementary or redundant/overlapping
what does multisensory information increase?
increases reliability of the percept and provides a more complete representation of the world (+ increases resistance to interference)
note: vast amount of research on all senses and their possible interactions – here the focus is on vision & touch and audition & touch
multisensory:
more than one modality is used in perception
cross-modal:
interactions between different modalities=> one sense affects perceptions provided by a different sense
integration:
merging information from different modalities into a unified percept
multisensory perception
- Different modalities can provide convergent information about the same external event/properties
- The CNS has to disentangle cases where stimulation of the different senses is related from cases where it is unrelated
simple heuristics for integration:
- temporal correlation
- spatial congruency
- inverse effectiveness
temporal correlation:
Stimulation of different modalities occurs at
(roughly) the same time
spatial congruency:
Stimuli in the different senses come from
approximately the same location
inverse effectiveness:
The benefit of multisensory integration decreases as the strength of the unimodal signals of a cross-modal cue increases → the multisensory response gain is greatest when one stimulus by itself is quite weak
inverse effectiveness: single neuron II:
- superadditivity
- additivity
- subadditivity
inverse effectiveness: single neuron I- multi-modal neurons in superior colliculus (SC) (relevant for rapid orienting of attention):
- The number of spikes produced by the combination of a visual and an auditory event (5) is larger than the individual spike counts in response to the visual (1) and auditory (2) stimuli alone
- Superadditivity of spike counts: the multisensory response is greater than the sum of the uni-sensory responses
- The sum is usually only exceeded for weak inputs (near threshold) → aids detection of weak stimuli → speeds up behavioural responses
Additivity:
As cues become stronger, the unisensory responses become stronger → the integrated response is not different from the sum of the responses to each component
Superadditivity:
Both cues are weak – the response exceeds the sum of the separate inputs
Subadditivity:
The combined input is smaller than the sum of the two uni-sensory inputs (but still exceeds the largest single-input response)
Definition: Inverse Effectiveness
The degree to which a multisensory response exceeds the response to the most effective modality-specific stimulus component declines as the effectiveness of that modality-specific stimulus component increases
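A common way to quantify this at the single-neuron level (not spelled out on the slide, but standard in the SC literature) is an enhancement index comparing the multisensory response with the best unisensory response. A minimal sketch in Python, reusing the spike counts from the example above (visual = 1, auditory = 2, combined = 5); the stronger-input numbers are made up for illustration:

def enhancement_index(combined, visual, auditory):
    # Percentage by which the multisensory response exceeds the
    # most effective unisensory response (higher = stronger integration)
    best_unisensory = max(visual, auditory)
    return 100 * (combined - best_unisensory) / best_unisensory

# Weak inputs (slide example): large enhancement, and 5 > 1 + 2 -> superadditive
print(enhancement_index(combined=5, visual=1, auditory=2))    # 150.0
# Stronger inputs (hypothetical): enhancement shrinks -> inverse effectiveness
print(enhancement_index(combined=22, visual=10, auditory=20)) # 10.0 (and 22 < 30 -> subadditive)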
neural mechanisms- subcortical areas:
Superior colliculus
- Located in the midbrain – important for orienting behaviour and fast motor reactions
- High(est) proportion of multisensory neurons (extensively studied)
- Neurons show overlapping spatial maps for the visual, auditory and somatosensory modalities
neural mechanisms- cortical areas
- Multisensory neurons are found in most areas – often in combination with unimodal neurons
- Even in areas previously considered modality specific (e.g., neurons in visual cortex respond to tactile cues, and neurons in primary auditory cortex are activated by visual lip movements)
- Studies in primates primarily focussed on posterior parietal cortex (converging information from the visual, vestibular, tactile and auditory systems)
cross-modal integration
- How is input from two senses combined perceptually?
- Different modalities are combined to yield the best estimate of the external properties
- The modality that provides more reliable information is given more weight (greater reduction in uncertainty)
→ e.g., vision strongly influences auditory localisation (ventriloquist effect) – vision is spatially more accurate
→ audition can dominate vision in temporal properties, e.g., auditory flutter drives perception of visual flicker
→ Modality appropriateness hypothesis
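The slide does not give the maths behind this weighting, but a standard formalisation is reliability-weighted (inverse-variance) cue combination. A minimal sketch, with made-up noise values chosen so that vision is spatially more precise than audition (ventriloquist-style case):

def combine_cues(estimates, variances):
    # Reliability-weighted (inverse-variance) combination of unisensory estimates.
    # The less noisy modality gets the larger weight, and the combined variance
    # is smaller than either unisensory variance (a more reliable percept).
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    combined_estimate = sum(w * e for w, e in zip(weights, estimates))
    combined_variance = 1.0 / total
    return combined_estimate, combined_variance

# Hypothetical numbers: vision is spatially precise, audition is noisy,
# so the combined location estimate is pulled strongly towards the visual location.
visual_deg, auditory_deg = 0.0, 10.0   # unisensory location estimates (degrees azimuth)
visual_var, auditory_var = 1.0, 16.0   # assumed spatial noise (variance) per modality
print(combine_cues([visual_deg, auditory_deg], [visual_var, auditory_var]))
# -> (~0.59 deg, ~0.94): percept sits near the visual location, with reduced uncertainty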
interim summary I
- 3 simple heuristics of multi-sensory integration
- Inverse Effectiveness in multisensory neurons of the SC
- Weighting of different stimuli depends on their accuracy and reliability
- Role of semantic congruency in strengthening multisensory integration
semantic congruency
- Semantic congruency (consistent meaning of two stimuli) strengthens multisensory stimulus integration and the corresponding behavioural performance
- Semantic congruency of visual and auditory stimuli affects the speed of participants' responses → faster target detection when a visual stimulus is accompanied by a semantically congruent sound
crossmodal illusions
REMINDER
- Complementary information improves the reliability of our perception
- Incongruent information can result in unexpected percepts due to sensory interactions (e.g., the audio-visual illusion of the McGurk effect)
- Less research on multisensory illusions including the tactile domain – e.g., audio-tactile interactions (noise bursts can affect the perception of roughness)
parchment skin illusion:
(Jousmäki & Hari, 1998)
Sound modifies tactile sensations
→ Enhanced high-frequency feedback makes the skin feel drier
→ temporal coincidence required
- Deprivation of one modality can modify the development and integration of the remaining modalities
parchment-skin illusion in blind people
- Less susceptible to the illusion – ability to ignore irrelevant auditory input in the tactile task
Modality appropriateness account:
Interference by a task-irrelevant modality is reduced when the processing accuracy of the task-relevant modality (i.e., touch) is high (i.e., perception is dominated by the modality that provides the most reliable information)
parchment skin illusion: (vision, audition and touch)
- Robust illusion in sighted humans
- 7 of the early blind participants were not or only minimally susceptible to the illusion
- Only 3 of the early blind participants showed small effects in the expected direction
- Multisensory perception might not be innate but is – at least to some extent – based on experiences during early development → following visual deprivation, extensive cross-modal changes occur (re-organisation of the perceptual system)
note: the study tested both early and late blind subjects – only results for the early blind are shown/discussed here
Cross-modal Plasticity in the Cerebral Cortex I
Study by Hamilton et al., 2000 (NeuroReport)
- Case of a blind woman who lost the ability to read Braille following bilateral occipital lesions (areas usually processing vision) caused by a stroke
- Occipital cortex is involved in decoding spatial and tactile information for Braille reading
- Suggests that there may be a critical period of susceptibility for the recruitment of the occipital cortex for haptic information processing (in the congenitally blind)
Cross-modal Plasticity after Sensory Deprivation: Summary: Reorganisation in areas associated with the deprived modality
- Primary sensory areas are able to process information from the remaining modalities
- Sensory inputs shape the functional architecture of the brain
- Reorganisation is likely to be limited to early-onset deprivation (sensitive period)
- Caution: it is difficult to clearly distinguish between primary brain areas and neighbouring areas (usually multi-modal) – limited spatial resolution of TMS, PET and MRI
Cross-modal Plasticity in the Cerebral Cortex II
TMS study
Cohen et al., (1997), Nature, 389
- TMS briefly disrupts the electrical activation patterns of the neurons in the cortical area it is applied to
- Blind Braille readers and sighted participants who had to identify embossed Roman letters
- Occipital stimulation (visual area) disrupted Braille reading in blind participants but not tactile discrimination in sighted participants (though it disrupted their visual performance)
- Visual cortex is recruited for somatosensory processing in the early blind
Cross-modal Plasticity after Sensory Deprivation: Summary: Reorganisation of multi-modal areas in the cortex:
- Behavioural compensation for a missing modality is mediated by enhanced recruitment of multi-modal areas
- Reorganisation in multi-modal areas also occurs for late-onset deprivations
- Example: enhanced recruitment of posterior STS (an area of multi-modal integration) in deaf individuals when attending to moving visual displays
Interim Summary II
- Cross-modal illusions, such as the parchment skin illusion, can be explained by the modality appropriateness account
- Cross-modal perception is likely shaped by our experiences (in early development)
- Cross-modal plasticity: Sensory deprivation in one modality affects the development/processing of the remaining modalities → reorganisation of cortical functions
what is critical for our survival and part of human conscious experience ?
Perceptual distinction between one’s own body and the environment is critical for our survival and part of human conscious experience
what does body ownership consist of?
Body ownership includes feeling the skin stretching around joints and digits, feeling the coolness/warmth on the skin, feeling the tension from muscles and tendons etc. (all combined)
state- Sense of body ownership is multisensory in nature (cannot be reduced to a single modality) – not really a "sense" but a complex/multisensory perception
research-
Research interest in experimental studies of body ownership started about 25 years ago with the description of the famous “Rubber Hand Illusion” (Botvinick & Cohen, 1998, Nature) → 3-way interaction between vision, touch & proprioception
rubber hand illusion: how (objectively) quantified?
- Questionnaires and rating scales
- Proprioceptive drift (reported hand location changes) – see the sketch below
- Physiological responses (skin conductance) in response to perceived threat to the rubber hand
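As an illustration of the proprioceptive-drift measure (the numbers and variable names below are hypothetical; the slide only names the measure), drift is typically the change in reported hand position from before to after stimulation, compared between synchronous and asynchronous stroking:

from statistics import mean

def proprioceptive_drift(pre_judgements_cm, post_judgements_cm):
    # Mean reported hand position after stimulation minus mean reported position
    # before stimulation; positive = perceived hand shifted towards the rubber hand
    return mean(post_judgements_cm) - mean(pre_judgements_cm)

# Hypothetical pointing judgements (cm towards the rubber hand)
synchronous = proprioceptive_drift([0.5, 0.2, 0.4], [2.1, 2.6, 2.3])    # ~1.97 cm
asynchronous = proprioceptive_drift([0.3, 0.4, 0.2], [0.5, 0.6, 0.4])   # ~0.20 cm
print(synchronous, asynchronous)  # larger drift in the synchronous condition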
perceptual rules of body ownership:
1. Temporal synchrony: if visual and tactile stimulation are mismatched, the illusion disappears (asynchrony of ~300-500 ms)
2. Spatial rules: e.g., distance between the real hand and the rubber hand (peri-personal space as a constraint), identical direction of visual and tactile strokes, matching orientation & postures of the hands
what are perceptual rules similar to?
Perceptual Rules are similar to principles of multisensory integration
what is sense of body ownership governed by?
Sense of body ownership is governed by the same principles as multisensory perception
→ multisensory integration is the key mechanism in perceiving body ownership
Definition: Peripersonal Space
Space immediately surrounding our bodies in which objects can be grasped and manipulated [Space beyond grasping distance = extrapersonal space]
- illusion strength decreases at distances > ~30 cm (peripersonal space)
rubber hand illusion - more rules
- tactile congruence rule
- humanoid shape rule
tactile congruence rule
The tools that touch the real and the rubber hand must be similar in texture and geometric features (subtle incongruences are possible)
humanoid shape rule
The rubber hand must resemble a human hand in shape and structure (colour and material are less critical) → e.g., ownership has been observed for realistic prosthetic hands or even images of human hands
state- A congruent pattern of multi-sensory signals drives the perceptual phenomenon!
Multisensory integration of body signals
- Cortical areas are specifically dedicated to multisensory integration of body-related signals in peripersonal space
- Meta-analysis: areas in ventral premotor cortex and intraparietal sulcus respond with greater activation to combined/congruent visuotactile stimulation than to unimodal visual or haptic stimulation or to incongruent stimulation
- Generally, body ownership is associated with activation in multisensory areas in the frontal and parietal lobes
Recent studies on limb ownership:
Consistent activation in area EBA of the ventral stream
Full-Body Ownership
- The illusion adheres to the same perceptual rules as the RH-illusion
- The illusion seems to involve similar patterns of brain activation as the RH-illusion (increased activity in ventral premotor cortex, intraparietal cortex and LOC)
- Activation in ventral premotor cortex correlates with the strength of the illusion
- Note: the entire body is perceived as one's own (not just the stimulated parts) → requires multisensory perceptual binding
Multisensory Integration: Neuroimaging
- Manipulation of temporal congruency (synchrony) and spatial congruency (hand orientation)
- Largest activation when stimulation was both spatially and temporally congruent
- Activation in intraparietal cortex is additive
- Response in the pre-motor cortex is superadditive
Individual Differences-
Something to consider
- The degree of illusory experience varies (about 30% of the population are immune to the induction of the RH-illusion)
- Factors influencing individual differences are largely unknown
- The multi-sensory account predicts that immunity should relate to how visual, tactile and proprioceptive information is weighted by the brain
- People who rely more on proprioceptive information (e.g., dancers, gymnasts etc.) may be more resistant to the illusion
potential clinical applications- projection of ownership to advanced hand prostheses:
Indication that synchronised brushing of the participant's stump and the fingers of a prosthetic hand produces the RH-illusion in some (~30%) amputees
potential clinical applications- Projection of ownership to simulated bodies in VR:
People can maintain ownership of a virtual hand as long as its movements are temporally and spatially congruent with the movements of the real hand (this could allow paralysed people to "own" a virtual limb in virtual and mixed reality applications)
final summary
- The rubber hand illusion and full-body ownership illusions obey temporal, spatial and other congruency rules related to the properties of the stimulus – mirroring the congruency principles of multisensory integration
- Illusory changes in body ownership don't depend on a single modality but reflect a (flexible) integration process across different modalities
- Sense of body ownership is associated with increases in activity in multisensory cortical areas (e.g., premotor cortex and PPC)
- Sense of body ownership critically depends on multisensory integration