Representations & Consciousness: Chapters 4 & 5 Flashcards
What are examples of conscious modes?
perception, imagery, dreaming
What is the problem with direct definitions of consciousness?
Direct definitions of C usually resort to circularity (“seeing”, “knowing”, “realising”)
What is a better approach than defining consciousness head on?
Instead of defining C head-on, it is better to start by asking:
What can be peeled away from our lives before C is lost?
Several modalities have been investigated as necessary components of consciousness. For which of these does Pennartz posit empirical support that conscious experience can occur in their absence?
Empirical support for conscious experience
in the absence of:
-Motor activity
-Language (incl. verbal beliefs, judgment)
-Emotion
-Memory
How can conscious vision be broken down?
Conscious vision can be broken down into
various components, e.g.:
- Color vision (involving V4 & inferotemporal patches)
- Motion vision (MT/V5)
- Form and Face vision (FFA, IT etc.)
- Vision in an entire hemifield (parietal cortex - hemineglect)
Why is breaking down vision like this relevant?
Important to look at what can’t be peeled away:
Loss of MT/V5 –> akinetopsia (motion blindness)
Loss of V4 –> achromatopsia (loss of color vision)
what can’t be peeled away?
:: pieces of sensory cortex
:: parietal cortex
:: thalamocortical systems
Consciousness in other modalities also depends on specific cortical systems (e.g. auditory, somatosensory cortex,
olfaction, taste)
What are Pennartz’ hallmarks of consciousness?
Hallmarks:
* Qualitative (multimodal) richness
* Situatedness & immersiveness: you’re right in the middle of the situation (immersed into it)
* Integration, unity
* Dynamics and stability: establishing of objects
* Interpretation, inference, intentionality
How does Pennartz construct his definition of consciousness?
Modes of (healthy) consciousness: perception, imagery, dreaming
Hallmarks:
* Qualitative (multimodal) richness
* Situatedness & immersiveness: you’re right in the middle of the situation (immersed into it)
* Integration, unity
* Dynamics and stability: establishing of objects
* Interpretation, inference, intentionality
Definition of conscious experience: Inferential representation that is situational (spatially encompassing) and multimodally rich
Describe the hard problem of consciousness
Past decades: much progress on “easy” problems of
consciousness – memory, attention, decision-making, sensory
discrimination (etc.)
* But: we refrain from asking deeper questions, e.g. how is
phenomenal content associated with neural activity (”hard”
problem; D. Chalmers; “Explanatory Gap” - Levine)
* What is phenomenal content?
– having qualitatively rich experiences
– What is it like to be…. (e.g. you)?
Give two examples of the difference between easy and hard problems
- First example: a painting by Van Gogh –> correlate pictorial elements (shape, color, etc.) with neural activity in different brain areas; but what is their exact relationship?
- Second example: physical description of light vs. color experience
What is a key problem of consciousness?
“whole-pattern perception” (Hallmark: integration, unity)
What group of psychologists tackled this problem of whole pattern perception?
This is a classic problem in Gestalt Psychology: e.g. Kurt Koffka, Max Wertheimer and Wolfgang Köhler
=> “Holistic” vs. analytic approach to perception
How do we distinguish whole objects against a background according to Gestalt psychologists?
Gestalt psychologists: whole-figure recognition explained from common features present in parts of the figure
Gestalt “laws” of perception:
* Law of common fate (common motion)
* Law of good continuation
* Law of similarity, proximity, closure
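One of these grouping laws can be sketched computationally. Below is a minimal, hypothetical illustration of the law of proximity (the threshold value and 1-D positions are arbitrary assumptions, not from the text): elements separated by small gaps are grouped into one "object".

```python
# Toy illustration of the Gestalt law of proximity (a hypothetical sketch):
# nearby elements are grouped together; a large gap starts a new group.

def group_by_proximity(positions, threshold=2.0):
    """Group sorted 1-D positions: a gap larger than `threshold` starts a new group."""
    groups = [[positions[0]]]
    for prev, cur in zip(positions, positions[1:]):
        if cur - prev > threshold:
            groups.append([cur])   # large gap: perceived as a new object
        else:
            groups[-1].append(cur)  # small gap: grouped with the current object
    return groups

# Two clusters of dots separated by a large gap are perceived as two objects
print(group_by_proximity([0, 1, 1.5, 8, 9, 9.5]))
# → [[0, 1, 1.5], [8, 9, 9.5]]
```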
What neural network is compared to these Gestalt psychologists and why?
Gestalt Psychology and whole-pattern recognition by recurrent nets: Gestalt laws suggest how bottom-up grouping of features into an object may be achieved. Attractor properties of recurrent nets may help explain the bistable (flip/flop, 2 basins) nature of perception
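The two-basin attractor idea can be sketched with a minimal Hopfield-style recurrent net (the patterns and network size are illustrative assumptions, not a model from the text): a noisy input relaxes into whichever stored pattern's basin it starts in.

```python
import numpy as np

# Minimal Hopfield-style recurrent net with two attractors ("basins"),
# illustrating how a bistable percept could settle into one of two states.
patterns = np.array([
    [1, 1, 1, 1, -1, -1, -1, -1],   # "interpretation A"
    [-1, -1, -1, -1, 1, 1, 1, 1],   # "interpretation B"
])

# Hebbian weights: sum of outer products, zero self-connections
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def settle(state, steps=10):
    """Synchronously update units until the net relaxes into an attractor."""
    s = state.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

# A noisy version of pattern A (one flipped unit) falls into A's basin;
# the sign-flipped input falls into B's basin.
noisy = patterns[0].copy()
noisy[0] = -1
print(settle(noisy))
print(settle(-noisy))
```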
Do Gestalt laws therefore explain holistic perception?
Gestalt laws explain feature grouping, not the holistic (all-or-none) aspect of perception
What can neural network models explain about cognition? (2)
Stability of percepts (and/or imagery) is a hallmark of
(conscious) representation – also achieved in recurrent nets
Emergence: networks show how low-level phenomena can
give rise to more complex, high-level phenomena. (Imagine you are a neuron connected to a large array of neurons: you have no clue what you and the others are representing, but the representation is there, “supra-neural”.)
What did the success of neural network theory lead to?
Success of neural network theory led to neurocomputational
account of consciousness
Give two examples of these neurocomputational accounts of consciousness
*e.g. Paul and Patricia Churchland’s eliminative materialism
attempts to explain away all mental phenomena by brain’s
physical processes (eliminate “folk psychology”)
*e.g. Explain memory, attention, multistability (etc.) from
recurrent properties in corticothalamic systems
How is this recurrent processing of consciousness described in P. M. Churchland (1995)?
The thalamocortical loop provides the recurrent processing posited to be responsible for, or involved in, consciousness; the intralaminar nuclei in particular have far-reaching ascending and descending pathways throughout the cortex.
What caveats are pointed out about this recurrent processing theory?
*Intralaminar nuclei project less specifically than depicted
*Intralaminar nuclei are important for arousal, not modality-specific recurrent processing
*Recurrent pathways found elsewhere in the brain, also in “nonconscious” structures such as cerebellum.
How do Patricia Churchland and Terry Sejnowski approach the question of how neurons represent anything?
“…. in view of the opportunity to correlate neuronal
responses with controlled stimuli, the sensory systems are
a more fruitful starting point for addressing this question
than, say, the more centrally located structures such as the
cerebellum or the hippocampus or prefrontal cortex. […]
Constrained by transducer output, the brain builds a model
of the world it inhabits.
That is, brains are world-modelers, and the verifying
threads —the minute feed-in points for the brain’s
voracious intake of world-information — are the neuronal
transducers in the various sensory systems.”
What omission does Pennartz point out in this account?
But: How could sensory receptors act to verify that a model of the world (~ representation) is correct? (What is the independent evidence?)
In other words: inputs & outputs of neural nets are not specified / identified
(except by external observer)
=> the network itself has “no clue” about what it is processing (it processes numbers)
Why is the input to our perceptual systems not specified? There is no shortage of feature detectors within each modality (submodalities), so that is not the problem. Name two classic hypotheses on this issue
First classic hypothesis: pattern coding
Second hypothesis: labeled-lines coding
Describe pattern coding
Suppose the brain needs to be informed about two taste inputs, ‘bitter’ and ‘sweet’: Different chemical applied to same taste bud could result in different sequences of activations encoding the two different inputs
Different types of information sent across 1 common ‘line’
(=receptor/nerve fiber)