What is consciousness Flashcards

1
Q

According to Chalmers, what is consciousness?

A
  • Consciousness, as Chalmers and other philosophers use the term, is subjective experience - as in phenomenology
    Nagel's question: what is it like to be a bat? Bats get around through echolocation
  • We understand how bats function, and we know what it's like to see the world as a human being. But even once we know how bats echolocate and everything else about them, what does it feel like to be a bat?
  • The question of what it's like to be a bat is a question about the character of the bat's conscious experiences
2
Q

Consciousness = subjective experience

A

Consciousness = subjective experience

  • Chalmers is not asking whether consciousness requires understanding
    Bats don't have understanding, but they still have conscious experience
  • Chalmers also isn't asking about intelligence, or about self-consciousness/self-awareness - does a bat have some notion of itself as an existing being? It's not obvious that this is necessary for being conscious
  • Does a self-driving car know what the colour red looks like to a human being? We have no good reason to think the car has a conscious experience of the colour red like the conscious experience we have
3
Q

Argument for LLM (Large Language Model) consciousness

A

Argument for LLM consciousness
1. LLMs have X
2. If a system has X, it is probably conscious
3. Therefore, LLMs are probably conscious

Candidates for X:
X = seems conscious
X = self-report
X = conversational ability
X = general intelligence

4
Q

X as seems conscious

A

A lot of AI systems seem conscious.
Chalmers points out that giving people the impression of being conscious is something AI has been doing for a long time, going back to systems that were clearly not conscious.

5
Q

X as self-report

A

X = self-report
- Blake Lemoine went public claiming that Google had a conscious computer system (LaMDA)
- You ask if the computer has emotions, and it tells you it has feelings and emotions, and it makes claims about what kinds of conscious experiences it has - is this good evidence?
- Chalmers doesn't think this is good evidence
- It's regurgitating what humans have said about conscious experience

  • What we used to do in computer programming: ask what we want the computer to do, what the steps for that are, and what the substeps are
    That's not how these recent AIs have been made - they are trained by a process called gradient descent (a minimal sketch follows this list)
  • Huxley called it the doctrine of human continuity: at every single point in the process that created the human eye, there were tiny steps, each leading to a slight reproductive advantage, culminating in the human eye
  • It is the same with the language models: gradient descent builds them up through many tiny adjustments, each slightly improving performance
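
A minimal sketch of the gradient-descent idea, in Python (a toy example with made-up values, not anything from the lecture): nobody writes the steps of the task into the program; instead a parameter is nudged repeatedly in whatever direction reduces the error, and a useful value emerges from many tiny improvements.

```python
# Toy gradient descent: find a weight w so that w * x approximates y.
# LLM training applies the same nudge-downhill idea to billions of weights.

def grad(w, x, y):
    """Derivative of the squared error (w*x - y)**2 with respect to w."""
    return 2 * x * (w * x - y)

w = 0.0        # arbitrary starting guess
lr = 0.01      # learning rate: how big each tiny step is
for _ in range(1000):
    w -= lr * grad(w, x=3.0, y=6.0)   # each step slightly reduces the error

print(round(w, 3))  # ~2.0, since 2.0 * 3.0 == 6.0 - no one programmed "2.0" in
```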
6
Q

X as conversational ability or general intelligence

A

X = conversational ability, or general intelligence
Rather than asking whether they merely seem conscious, point to what these systems can actually do.
Unlike pre-scripted chatbots, they can create their own conversations.
General intelligence: generality in what they can do, and the ability to put different skills together in apparently novel ways.
John Searle would say this is not enough for computers to be conscious (the Chinese Room).
Systems reply: the whole system understands Chinese.
Searle: I could "internalise" the whole system, and still not understand.

7
Q

Argument against LLM consciousness

A
  1. LLMs lack X
  2. If a system lacks X, it probably isn't conscious
  3. Therefore, LLMs are probably not conscious

Candidates for X:
- biology
- senses and embodiment
- world-models and self-models
- recurrent processing
- global workspace
- unified agency

8
Q

X as biology

A

Strong AI: running the right program is sufficient for understanding. "Strong" in the sense of logical strength: the claim goes a long way, holding that following the program and simulating the behaviour of someone with understanding is itself sufficient for understanding.
Compare: (i) simulating human behaviour is sufficient for thinking.
If two computers made of different hardware are running the same program, they are producing the same behaviour.

Searle's counterexample: the Chinese Room is a counterexample both to the claim that running the right program is sufficient for understanding and to claim (i) that simulating the behaviour of someone with understanding is sufficient for thinking.
Searle: the physical hardware is also necessary for cognitive states.
Brains are also machines; our mental characteristics depend not just on the program we are running, but on the actual physical hardware inside our heads.

9
Q

X as world-model

A

Two senses of "world model":
- World model as statistical patterns among words: the LLM merely tracks which words tend to follow which ("stochastic parrot")
- World model as representations of the real objects the words refer to
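
A minimal sketch of the "stochastic parrot" reading, in Python (the miniature corpus below is made up for illustration): the model only records which word follows which, and nothing in it represents the things the words refer to.

```python
import random
from collections import defaultdict

# Toy bigram "stochastic parrot": predicts the next word purely from
# word co-occurrence statistics, with no representation of the objects
# the words refer to. (Made-up miniature corpus, for illustration.)
corpus = "the bat hunts the moth and the bat eats the moth".split()

follows = defaultdict(list)               # word -> words seen right after it
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def next_word(word):
    # Sample a continuation in proportion to how often it followed `word`.
    return random.choice(follows[word])

print(next_word("the"))  # "bat" or "moth": word statistics, not a world-model
```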

10
Q

Behaviourism

A

Behaviourism - Mental states are behavioural dispositions

The behaviourist says that mental states such as beliefs and emotions are behavioural dispositions: they are tendencies to engage in the behaviour that is characteristic of those mental states.
E.g. if I'm in an irritable mood, I'm disposed to act like someone who is irritable.

If two wine glasses are equally disposed to shatter, they are both equally fragile - fragility just is the disposition to shatter.

11
Q

Physicalism

A

Physicalism
Mental states, specifically sensations (conscious experiences), are brain states.

12
Q

Functionalism

A

Functionalism - mental states are functional states
Armstrong gives an example: poisons cause sickness or death when ingested.
It's just the kind of effects they have that makes them poisons.
To know whether something is a poison we need scientific examination - but what unifies them as poisons is a priori: they cause sickness or death when ingested.
