Representations & Consciousness: Chapters 4 & 5 Flashcards

1
Q

What are examples of conscious modes?

A

perception, imagery, dreaming

2
Q

What is the problem with direct definitions of consciousness?

A

Direct definitions of C usually resort to circularity (“seeing”, “knowing”, “realising”)

3
Q

What is a better approach than defining consciousness head on?

A

Instead of defining C head-on, it is better to start by asking:
What can be peeled away from our lives before C is lost?

4
Q

Several faculties have been investigated as necessary components of consciousness. For which of these does Pennartz posit that empirical support has been found for consciousness in their absence?

A

Empirical support for conscious experience in the absence of:
- Motor activity
- Language (incl. verbal beliefs, judgment)
- Emotion
- Memory

5
Q

How can conscious vision be broken down?

A

Conscious vision can be broken down into
various components, e.g.:
- Color vision (involving V4 & inferotemporal patches)
- Motion vision (MT/V5)
- Form and Face vision (FFA, IT etc.)
- Vision in an entire hemifield (parietal cortex - hemineglect)

6
Q

Why is breaking down vision like this relevant?

A

Important to look at what can’t be peeled away:
loss of MT/V5 –> akinetopsia
loss of V4 –> achromatopsia

What can’t be peeled away?
:: pieces of sensory cortex
:: parietal cortex
:: thalamocortical systems

Consciousness in other modalities also depends on specific cortical systems (e.g. auditory cortex, somatosensory cortex, olfaction, taste)

7
Q

What are Pennartz’ hallmarks of consciousness?

A

Hallmarks:
* Qualitative (multimodal) richness
* Situatedness & immersiveness: you’re right in the middle of the situation (immersed into it)
* Integration, unity
* Dynamics and stability: establishing of objects
* Interpretation, inference, intentionality

8
Q

How does Pennartz construct his definition of consciousness?

A

Modes of (healthy) consciousness: perception, imagery, dreaming
Hallmarks:
* Qualitative (multimodal) richness
* Situatedness & immersiveness: you’re right in the middle of the situation (immersed into it)
* Integration, unity
* Dynamics and stability: establishing of objects
* Interpretation, inference, intentionality

Definition of conscious experience: Inferential representation that is situational (spatially encompassing) and multimodally rich

9
Q

Describe the hard problem of consciousness

A

Past decades: much progress on “easy” problems of consciousness – memory, attention, decision-making, sensory discrimination (etc.)
* But: we refrain from asking deeper questions, e.g. how is phenomenal content associated with neural activity? (“hard” problem – D. Chalmers; “Explanatory Gap” – Levine)
* What is phenomenal content?
– having qualitatively rich experiences
– What is it like to be… (e.g. you)?

10
Q

Give two examples of the difference between easy and hard problems

A
• First example: a painting by Van Gogh –> correlate pictorial elements (shape, color, etc.) with neural activity in different brain areas; but what is their exact relationship?
• Second example: physical description of light vs. color experience

11
Q

What is a key problem of consciousness?

A

“whole-pattern perception” (Hallmark: integration, unity)

12
Q

What group of psychologists tackled this problem of whole pattern perception?

A

This is a classic problem in Gestalt Psychology: e.g. Kurt Koffka, Max Wertheimer and Wolfgang Köhler
=> “Holistic” vs. analytic approach to perception

13
Q

How do we distinguish whole objects against a background according to Gestalt psychologists?

A

Gestalt psychologists: whole-figure recognition explained from common features present in parts of the figure

Gestalt “laws” of perception:
* Law of common fate (common motion)
* Law of good continuation
* Law of similarity, proximity, closure

14
Q

What neural network is compared to these Gestalt psychologists and why?

A

Gestalt Psychology and whole-pattern recognition by recurrent nets: Gestalt laws suggest how bottom-up grouping of features into an object may be achieved. Attractor properties of recurrent nets may help explain the bistable (flip/flop, two basins) nature of perception
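
A minimal sketch (not from the slides) of how attractor dynamics in a recurrent, Hopfield-style network yield two basins of attraction, loosely illustrating this flip/flop bistability; the stored patterns, Hebbian weights and update schedule are illustrative assumptions.

```python
import numpy as np

# Two stored "interpretations" of an ambiguous input, as +/-1 patterns (orthogonal, illustrative)
A = np.array([1,  1,  1,  1, -1, -1, -1, -1])
B = np.array([1, -1,  1, -1,  1, -1,  1, -1])

# Hebbian weights: each stored pattern becomes an attractor (one basin per "percept")
W = (np.outer(A, A) + np.outer(B, B)) / len(A)
np.fill_diagonal(W, 0)

def settle(state, sweeps=10):
    """Asynchronous updates; the state falls into one of the two basins."""
    s = state.copy()
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# A noisy, ambiguous version of A typically settles onto the nearest stored pattern
rng = np.random.default_rng(0)
noisy = np.where(rng.random(len(A)) < 0.25, -A, A)
final = settle(noisy)
print("settled to A:", np.array_equal(final, A), "| settled to B:", np.array_equal(final, B))
```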

15
Q

Do Gestalt laws therefore explain holistic perception?

A

Gestalt laws explain feature grouping, not the holistic (all-or-none) aspect of perception

16
Q

What can neural network models explain about cognition? (2)

A

Stability of percepts (and/or imagery) is a hallmark of (conscious) representation – also achieved in recurrent nets

Emergence: networks show how low-level phenomena can give rise to more complex, high-level phenomena. (Imagine you are a neuron connected to a large array of neurons: you have no clue what you and the others are representing, yet the representation is there, “supra-neural”.)

17
Q

What did the success of neural network theory lead to?

A

Success of neural network theory led to neurocomputational accounts of consciousness

18
Q

Give two examples of these neurocomputational accounts of consciousness

A

* e.g. Paul and Patricia Churchland’s eliminative materialism attempts to explain away all mental phenomena by the brain’s physical processes (eliminate “folk psychology”)

* e.g. explain memory, attention, multistability (etc.) from recurrent properties in corticothalamic systems

19
Q

How is this recurrent-processing account of consciousness described in P. M. Churchland (1995)?

A

The thalamocortical loop provides the recurrent circuitry posited to be responsible for (or involved in) consciousness; the intralaminar nuclei in particular have far-reaching ascending and descending pathways throughout the cortex.

20
Q

What caveats are pointed out about this recurrent processing theory?

A

* Intralaminar nuclei project less specifically than depicted

* Intralaminar nuclei are important for arousal, not for modality-specific recurrent processing

* Recurrent pathways are found elsewhere in the brain, also in “non-conscious” structures such as the cerebellum

21
Q

How do Patricia Churchland and Terry Sejnowski approach the question of how neurons represent anything?

A

“…. in view of the opportunity to correlate neuronal responses with controlled stimuli, the sensory systems are a more fruitful starting point for addressing this question than, say, the more centrally located structures such as the cerebellum or the hippocampus or prefrontal cortex. […] Constrained by transducer output, the brain builds a model of the world it inhabits.

That is, brains are world-modelers, and the verifying threads — the minute feed-in points for the brain’s voracious intake of world-information — are the neuronal transducers in the various sensory systems.”

23
Q

What omission does Pennartz point out in this account?

A

But: How could sensory receptors act to verify that a model of the world (~ representation) is correct? (What is the independent evidence?)

In other words: inputs & outputs of neural nets are not specified / identified
(except by external observer)

=>the network itself has “no clue” about what it is processing (it processes numbers)

24
Q

Why is the input to our perceptual systems not specified? There is no shortage of feature detectors within each modality (submodalities), so that is not the problem. Name two classic hypotheses on this issue.

A

First classic hypothesis: pattern coding
Second hypothesis: labeled-lines coding

25
Q

Describe pattern coding

A

Suppose the brain needs to be informed about two taste inputs, ‘bitter’ and ‘sweet’: different chemicals applied to the same taste bud could result in different temporal sequences of activation, encoding the two different inputs.

Different types of information are sent across one common ‘line’ (= receptor / nerve fiber)

26
Q

What was the fate of this pattern coding hypothesis?

A

This hypothesis was refuted in the 19th century

27
Q

Describe labelled lines coding

A

A specific receptor type / nerve fiber conveys the modality; the kind of signal propagated (action potentials) is the same across modalities

28
Q

How was this labelled-lines coding received?

A

This hypothesis received widespread confirmation

29
Q

What was included in Müller’s doctrine of Specific Nerve Energies (1838)?

A

Ten laws, including:
» Law 3: “The same external cause [..] gives rise to different sensations in each sense, according to the special endowments of its nerve“ (it is not important how the nerve is activated, so long as it is)

» Law 5: “We are directly aware, not of objects, but of activity of our nerves themselves“ (we are not aware of the action potentials themselves, but of their results/consequences)

Law 3 example: a smack on the eye –> a tactile sensation, but also a visual percept

30
Q

How could these laws be interpreted/ evaluated now?

A

What are the “special endowments of nerves”?
=> We now know: different nerves conduct action potentials in a very similar way

How can we be aware of the activity of our nerves themselves?
=> The “activity of our nerves” would have to be: spike trains
Put differently: how does the brain convert spike trains into percepts?

31
Q

Critically evaluate this labelled lines hypothesis

A

From the brain’s viewpoint, the labeled-lines hypothesis is cheating: the anatomical origin of afferent input is unknown to it. Brain area X receives spike trains / EPSPs, but these do not disclose the identity of their source.

All types of nerve fibers operate in the same way: propagation of action potentials to the brain
=> the problem of modality ‘labelling’ (or: identification) has been relegated to the brain

=> This becomes the brain’s problem of Modality Identification: how does the brain know what kind of information it is processing?

32
Q

Describe a counterargument to these criticisms of the labelled-lines hypothesis

A

Argument: visual input is structured differently from auditory input because of the way the eyes and retina are constructed (spatial field; disparity) and the way photon patterns impinge on them

33
Q

What is a criticism of this structural explanation of the labelled lines problem?

A

But: the ‘visual’ face pattern in an array of neurons could also represent an activity pattern on a tonotopic or somatosensory map; it is not inherently visual

And: dynamic properties may refer to visual movement, changes in pitch, stroke across the body

So: Answer is No, local input statistics do not offer a full explanation

34
Q

What is the argument for feature detectors playing a role in this?

A

Argument: each sensory modality is specific (identifiable) by virtue of detectors that are uniquely present only in that modality (e.g. hearing – consonance, harmony)

35
Q

Give Pennartz’ view on this feature detector argument

A

– Imagine unique feature detectors in the brain for vision and taste, but unconnected. Formally (topologically) it does not matter whether these are separated physically, even across two brains; so how would a brain tell that a sensory input is unique?

An isolated feature detector responds to a specific stimulus, but so does a smoke detector (non-conscious). Why would an unconnected feature detector code any particular modality / sensation? So: the answer is No, unique feature detectors alone do not help us out

36
Q

What does Searle, based on his Chinese room, say that brains must have that computers do not?

A

The brain must have causal powers (lacking in computers) that endow it with consciousness

37
Q

The Chinese room poses a problem for computers having consciousness, but what does it mean for the brain?

A

The brain does have a ‘mind’ and also lives in a kind of Chinese room: the Cuneiform room. The meaning of inputs and outputs is also unknown to the brain – they are unidentified spike patterns.

The Chinese room is a legitimate problem, but it applies equally to the brain (not just to computers). Using a diversity of (unidentified) sensory inputs, the brain must construct a world model that identifies (or infers) modalities by itself.

38
Q

What could Searle’s ‘causal powers’ be, or not be?

A

Searle’s “causal powers” of the brain: unlikely to be
new physicochemical properties (~ vitalism)

We must rather search for unique organisational /computational principles

39
Q

How are neural network models limited in cognitive neuroscience?

A

Neural network models: suffice to explain some mental properties, but do not solve (e.g.) Modality Identification problem (=> problem of meaning)

40
Q

If we compare neural nets to non-living things in nature, what are the possible outcomes?

A

–neural networks could be unique: would confirm validity of neurocomputational approach

–if inanimates show similar functions: are neural nets too limited, or underconstrained?

–inanimates showing similar functions: this could also mean that ‘mind’ is present in many inanimate systems => panpsychism

41
Q

Give an example of what an inanimate neural network could look like. What does Pennartz conclude from this?

A

A “Rocky” neural network:
* Sunlight provides “sensory input”
* Neural connections: infrared radiation from rock to rock
* Activation state: a rock’s temperature
* Output of the neural net: classify sensory input as “cool … lukewarm … hot”
* The task may be too simple, but the direction of sunlight can also be decoded from network outputs
* Conclusion: the basic properties of neural networks are fulfilled by a collection of rocks
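
A deliberately toy sketch of the analogy (all numbers, couplings and thresholds below are made-up assumptions, not from Pennartz): sunlight acts as the input, rock temperature as the activation state, infrared exchange between rocks as the “connections”, and a thresholded read-out of temperature as the output classification.

```python
import numpy as np

rng = np.random.default_rng(1)
n_rocks = 10
coupling = rng.uniform(0.0, 0.05, size=(n_rocks, n_rocks))  # IR transfer between rock pairs ("weights")
np.fill_diagonal(coupling, 0.0)
exposure = rng.uniform(0.5, 1.0, size=n_rocks)              # how much sun each rock catches

def settle_temperatures(sun_intensity, steps=100):
    """Iterate until each rock's temperature balances sunlight plus IR received from other rocks."""
    temp = np.zeros(n_rocks)
    for _ in range(steps):
        temp = 0.9 * temp + 0.1 * (sun_intensity * exposure + coupling @ temp)
    return temp

def read_out(temp):
    """'Output': classify the network state as cool / lukewarm / hot (arbitrary thresholds)."""
    mean = temp.mean()
    return "cool" if mean < 0.35 else ("lukewarm" if mean < 0.8 else "hot")

for sun in (0.2, 0.6, 1.0):
    print(f"sunlight {sun} -> {read_out(settle_temperatures(sun))}")
```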

42
Q

Is recurrent feedback perhaps lacking? Does it lack learning and memory capacity? How does Pennartz respond to these objections?

A

Possible objection: is recurrent feedback lacking? (cf. P. Churchland)
=> can be fixed by a particular arrangement of rocks

* Does the rock formation lack learning and memory capacity?
=> storage of heat (kinetic energy) counts as memory?
=> also: the pattern of rock-to-rock connectivity can be modified when the light input changes (e.g. clouds can induce “synaptic plasticity”)

* Does a sandy beach store a memory of your foot?
=> network of cohesive (connected) sand particles
* If the beach had pattern-completing abilities, would you consider it conscious?

43
Q

What standpoint is the Chinese room thought experiment directed against?

A

Against the hard functionalist standpoint that ‘computers are a good metaphor for the mind/brain’

44
Q

Should we thus accept panpsychism? What are Pennartz’ objections to this? (3)

A

:: Overt behaviour is lacking => not decisive
:: Rocky networks lack representations (dreaming, perceiving, imagining, thinking-in-language)
:: The beach may have a “passive” representation (= mold), but this contrasts with memory
– as active re-creation and reconstruction
– as something intended to be used as memory

45
Q

What therefore is Pennartz’ take on panpsychism?

A

Hard to reject panpsychism completely, but: alternative is more likely, viz. to classify (classic) network models as insufficient / under-constrained when one aims to explain consciousness

46
Q

In what sense are neural networks underconstrained?

A

Models do not solve the MI problem (=> meaning)

There exist brain systems that are structured as neural nets, yet are not linked to consciousness (e.g. the cerebellum)

Brain systems that are associated with consciousness can become ‘unconscious’ (e.g. under anesthesia), yet still behave like neural nets

47
Q

What would panpsychism do to terms like consciousness?

A

Panpsychism would lead to an overgeneralised use of terms like consciousness => the term becomes meaningless; why not just say ‘complexity’?

48
Q

How does Pennartz relate consciousness to physics?

A

Consciousness refers to certain objects showing manifest properties (of being conscious)
=> compare it to magnetism or frozenness (in physics)

49
Q

Describe global workspace theory

A

The global workspace is a hub or informational ‘marketplace’ from which information is broadcast, mainly via corticocortical fibers. The transition from pre-conscious to conscious processing involves crossing a threshold => “ignition” process

50
Q

What are critiques of GNWT? (4)

A

–model addresses logistics of information exchange and “access” aspects
–does not address phenomenal content
–implementations are classic neural network models, facing the same problems
–there is evidence for “ignition” (-> PFC), but only if perception is coupled to motor response (reporting)

51
Q

What is meant by phenomenal vs. access consciousness?

A

Phenomenal vs. Access Consciousness: experienced content vs cognitive processes operating on this content (e.g. reporting, working memory, attention)

52
Q

What theory of consciousness was proposed by Tononi?

A

Tononi: the substrate for consciousness is a large cluster of neuronal groups (the “dynamic core”, coding color, motion, shape, …)

A conscious state involves a high degree of differentiation of the information coded, and a high degree of integration (both defined using Information Theory)

53
Q

What would this ‘dynamic core’ consist of, in terms of neural structures?

A

Membership of Core is flexible and changes rapidly (members: cells in MT, V4, etc.)

54
Q

How would a camera hold up in IIT?

A

Digitised camera picture contains differentiated information in terms of pixel activation, but there is no integration

55
Q

How is information defined within information theory?

A

Information is reduction of uncertainty, e.g. in Guess Who: as more yes/no questions are asked, uncertainty about which person is the target decreases. A bit (binary digit) is the amount of information needed to choose between two equally likely alternatives:

Start of game: maximal uncertainty
First binary decision: 0 or 1 (2 alternatives)
Add a binary decision: 00, 01, 10, 11 (4 alternatives)
Add a binary decision: 000, 001, … (8 alternatives)

In total 8 alternatives (states) are encoded by making 3 binary decision steps:
8 = 2^3
Or: I = log2(K) = − Σ (1/K) · log2(1/K), summed over the K alternatives

With I: Information (also: H, Entropy; the log of the number of alternatives or states; degrees of freedom in a system) (here H = I = 3)

K: number of states or alternatives to be encoded (here K = 8)
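
A quick worked check of the card above: with K = 8 equally likely alternatives, log2 K and the uniform Shannon sum both give 3 bits, i.e. three yes/no questions.

```python
import math

K = 8                      # number of equally likely alternatives (e.g. remaining Guess Who faces)
I = math.log2(K)           # information needed to single one out: log2(8) = 3 bits
# Equivalent Shannon form for a uniform distribution: minus the sum of K terms (1/K) * log2(1/K)
I_shannon = -sum((1 / K) * math.log2(1 / K) for _ in range(K))
print(I, I_shannon)        # both print 3.0
```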

56
Q

Give a generalised form of this equation for discrete variables

A

H = − Σ_i p_i · log2(p_i)
i: indexes the alternatives (1 … K)
p_i: probability of alternative i
(the negative sum, over all outcomes, of each outcome’s probability times the log of that probability; this reduces to I = log2 K when all alternatives are equally likely)
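
A minimal sketch of this general entropy formula (the example distributions are my own):

```python
import math

def entropy(p):
    """Shannon entropy H = -sum_i p_i * log2(p_i), in bits (terms with p_i = 0 contribute 0)."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

print(entropy([0.25, 0.25, 0.25, 0.25]))  # uniform over 4 alternatives -> 2.0 bits (= log2 K)
print(entropy([0.7, 0.1, 0.1, 0.1]))      # skewed distribution -> fewer bits (about 1.36)
```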

57
Q

What happens as p_i decreases?

A

Note: as p_i decreases, −log2(p_i) increases, and it does so ever more steeply
=> unlikely events convey more information than likely ones

y = −a · log(x): a logarithmic function, which shoots up along the y-axis as x approaches 0
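
A small numerical illustration of this surprisal term (my own example values):

```python
import math

# Surprisal -log2(p): the rarer the outcome, the more bits it carries.
for p in (0.5, 0.25, 0.1, 0.01, 0.001):
    print(f"p = {p:<6} -> -log2(p) = {-math.log2(p):.2f} bits")
```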

58
Q

What conclusion does Pennartz draw about the relevance of these equations?

A

NB, Shannon information is about statistics of relationships in a system (not: meaning)

59
Q

Describe the concept of mutual information in IIT. What does it indicate?

A

MI(A;B) = H(A) + H(B) − H(AB)
A and B are subsystems of the overall system, AB

H(A) is the statistical entropy of subsystem A (information entropy is the average amount of information conveyed by an event, considering all possible outcomes)

• MI is the surplus of the states A and B can assume on their own (summed) relative to the states of their combination
• If A and B are fully independent and each alone can assume 2 states => MI = 1 + 1 − 2 = 0
• If A and B are fully dependent and each alone can assume 2 states => MI = 1 + 1 − 1 = 1

Thus, mutual information indicates statistical dependence
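
A minimal sketch reproducing the two cases on this card from joint probability tables (the tables themselves are illustrative assumptions):

```python
import math

def entropy(p):
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def mutual_information(joint):
    """MI(A;B) = H(A) + H(B) - H(AB), computed from a joint probability table p(a, b)."""
    pa = [sum(row) for row in joint]            # marginal over A
    pb = [sum(col) for col in zip(*joint)]      # marginal over B
    pab = [p for row in joint for p in row]     # distribution of the combined system AB
    return entropy(pa) + entropy(pb) - entropy(pab)

independent = [[0.25, 0.25],   # p(a, b) = p(a) * p(b): knowing A says nothing about B
               [0.25, 0.25]]
dependent   = [[0.5, 0.0],     # A and B always take the same value
               [0.0, 0.5]]
print(mutual_information(independent))  # 1 + 1 - 2 = 0 bits
print(mutual_information(dependent))    # 1 + 1 - 1 = 1 bit
```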

60
Q

How do you measure integration in IIT?

A

Int(X) = Σ_i H(x_i) − H(X)
x_i: a member of the larger neuronal group X –> summation over all members of X

Int(X): integration within X
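
A minimal sketch of this integration measure on a toy system of three binary units; the joint distribution (with correlated units) is an illustrative assumption, not a model from the lectures:

```python
import math
from itertools import product

def entropy(p):
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# Toy group X of three binary units, with a joint distribution that makes them interdependent
joint = {s: 0.0 for s in product([0, 1], repeat=3)}
joint[(0, 0, 0)] = 0.4
joint[(1, 1, 1)] = 0.4
joint[(0, 1, 0)] = 0.1
joint[(1, 0, 1)] = 0.1

H_X = entropy(joint.values())                               # entropy of the whole group
H_units = []
for i in range(3):                                          # marginal entropy H(x_i) of each unit
    marg = [sum(p for s, p in joint.items() if s[i] == v) for v in (0, 1)]
    H_units.append(entropy(marg))

Int_X = sum(H_units) - H_X   # Int(X) > 0: the units share information (are integrated)
print(sum(H_units), H_X, Int_X)
```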

61
Q

How do you measure clustering in IIT? What does this mean?

A

Cl(X) = Int(X) / MI(X;Y)
Cluster index Cl(X): some subsystems are more integrated / statistically dependent than others

= the ratio between the statistical dependence within group X and the statistical dependence between X and the rest of the system, Y
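
A minimal sketch of the cluster index on a toy system of four binary units, split into X = (x1, x2) and Y = (y1, y2); the joint distribution (strong dependence within X and within Y, weak dependence between them) is an illustrative assumption, not taken from Tononi:

```python
import math
from itertools import product

def entropy(p):
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def p_state(x1, x2, y1, y2):
    p = 0.5                                   # x1 uniform
    p *= 0.9 if x2 == x1 else 0.1             # strong dependence within X
    p *= 0.6 if y1 == x1 else 0.4             # weak dependence between X and Y
    p *= 0.9 if y2 == y1 else 0.1             # strong dependence within Y
    return p

joint = {s: p_state(*s) for s in product([0, 1], repeat=4)}

def marginal(indices):
    """Marginal distribution over the units selected by `indices`."""
    out = {}
    for s, p in joint.items():
        key = tuple(s[i] for i in indices)
        out[key] = out.get(key, 0.0) + p
    return list(out.values())

H_x1, H_x2 = entropy(marginal([0])), entropy(marginal([1]))
H_X = entropy(marginal([0, 1]))
H_Y = entropy(marginal([2, 3]))
H_XY = entropy(marginal([0, 1, 2, 3]))

Int_X = H_x1 + H_x2 - H_X     # integration within X
MI_XY = H_X + H_Y - H_XY      # statistical dependence between X and the rest of the system
print(Int_X / MI_XY)          # Cl(X) well above 1: X is a tight, relatively isolated cluster
```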

62
Q

What is Tononi’s claim about the cluster index?

A

Tononi: a high Cluster Index would correspond to high degree of consciousness (1998; later on: use of Phi-measure)

63
Q

Critically evaluate Tononi’s Integrated Information Theory of consciousness

A

Captures a number of relevant consciousness phenomena, e.g. dynamism, combinations of “essential nodes” forming a changeable core. It is a serious attempt to produce an axiomatic, quantitative theory accounting for “integration” and “differentiation” in perception.

But: it boils down to a quantification of statistical dependence within and between subsets of neurons.
:: Integration as “scene construction” is not explained (though this was the original goal)
:: Phenomenal awareness is not explained (why qualia?)
:: No intentionality and no “matching” to the world

64
Q

Where does Pennartz claim this theory brings us back to?

A

Panpsychism: e.g. weather systems on earth have patches of high / low statistical dependence