Critique of Computationalism Flashcards

1
Q

Who is John Searle?

A
  • Born 1932; American philosopher.
  • Presented the Chinese Room Argument in his paper “Minds, Brains, and Programs”, published in Behavioral and Brain Sciences in 1980.
  • Purpose: To show that a computer program cannot be intelligent. Strong AI is impossible in the sense that no computer program can have a mind; a program can only simulate a mind and act as if it were intelligent even though it actually isn’t.
2
Q

What is the Chinese Room?

A
  • Searle, who does not speak Chinese, sits in a room with boxes of Chinese symbols and a book of instructions.
  • A Chinese speaker outside the room passes messages in Chinese characters through the door. The input consists of questions such as “Do you understand Chinese?” or “Have you been to China?”.
  • Inside the room, Searle looks up the characters in the instruction book, which tells him the appropriate answer to each question in Chinese.
  • Conclusion: To the outside observer, the room appears to understand Chinese because the answers are meaningful and in Chinese. In fact, nobody and nothing in the room understands Chinese (see the sketch below).
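To make the mechanics concrete, here is a minimal sketch of what the room does (the function name and the rulebook entries are invented stand-ins, not Searle's actual examples): match the shape of the incoming symbols against a table and emit the paired output, with meaning playing no role anywhere.

```python
# Toy Chinese Room: a "rulebook" pairing input symbol strings with output
# symbol strings. The entries below are made-up stand-ins for Searle's book.
RULEBOOK = {
    "你懂中文吗？": "懂，我懂中文。",      # "Do you understand Chinese?" -> "Yes, I do."
    "你去过中国吗？": "去过，去年去的。",  # "Have you been to China?" -> "Yes, last year."
}

def chinese_room(message: str) -> str:
    """Match the input purely by shape; no meaning is consulted anywhere."""
    return RULEBOOK.get(message, "对不起，请再说一遍。")  # fallback: "Sorry, say that again."

print(chinese_room("你懂中文吗？"))  # fluent-looking output, no understanding inside
```

The function would behave identically if every Chinese string were swapped for an arbitrary token; that substitutability is exactly what “manipulating formal symbols” means here.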
3
Q

The Chinese Room according to a computationalist.

A
  • Same functional components as a computer (inputs/outputs, memory, program).
  • Chinese Room does computation.
    • Implementation-independent (could be a computer program; see the sketch after this list).
    • Systematically interpretable (follows Chinese grammar).
    • Symbol manipulation (shape of symbols, not content).
  • Passes the Turing test: the Chinese speaker outside the room is not able to distinguish a native Chinese speaker from the Chinese room.
  • Conclusion: The Chinese Room is intelligent and there is understanding.
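A minimal sketch of the implementation-independence bullet (all names are hypothetical): the same formal rule table realized on two different substrates computes the same input/output function, which on the computationalist view is all that matters.

```python
# One formal program, two different realizations.
RULES = {"Q1": "A1", "Q2": "A2"}  # arbitrary symbol shapes

def silicon_room(msg: str) -> str:      # realization 1: a plain function
    return RULES[msg]

class ManWithRulebook:                  # realization 2: a person with a book
    def __init__(self, book: dict):
        self.book = book
    def answer(self, msg: str) -> str:
        return self.book[msg]

# Both realizations compute the same function, so computationalism
# treats them as the same computation.
assert silicon_room("Q1") == ManWithRulebook(RULES).answer("Q1")
```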
4
Q

The Chinese Room according to Searle.

A
  • Searle does not understand Chinese simply by running a “computer program” for understanding Chinese. He is just manipulating formal symbols. In the same manner, computational models function by manipulating formal symbols and following instructions.
  • Ergo, if there is no understanding in the Chinese Room, then no computer program is able to understand or be intelligent. Just running a computer program and manipulating symbols is not enough to guarantee cognition, perception, understanding, thinking, etc.
  • Axiom 1: Computer programs are formal (syntactic).
  • Axiom 2: Human minds have mental content (semantics).
  • Axiom 3: Syntax by itself is neither constitutive of nor sufficient for semantics.
  • Conclusion: Programs are neither constitutive of nor sufficient for minds (formalized below).
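The deductive skeleton of the argument can be written out explicitly (one way to notate it; the predicate names are illustrative, not Searle's):

```latex
\begin{align*}
\text{A1:}\quad & \mathrm{Program}(x) \Rightarrow \mathrm{Syntax}(x)
  && \text{(programs are purely formal)}\\
\text{A2:}\quad & \mathrm{Mind}(x) \Rightarrow \mathrm{Semantics}(x)
  && \text{(minds have mental content)}\\
\text{A3:}\quad & \mathrm{Syntax}(x) \not\Rightarrow \mathrm{Semantics}(x)
  && \text{(syntax does not suffice for semantics)}\\
\text{C:}\quad & \mathrm{Program}(x) \not\Rightarrow \mathrm{Mind}(x)
  && \text{(by A1, A3, and A2)}
\end{align*}
```

Running a program gives you at most syntax (A1); syntax never yields semantics (A3); but a mind requires semantics (A2); so running a program never yields a mind.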
5
Q

What is the system reply to the Chinese Room argument?

A
  • The reply: The room as a whole understands Chinese. The room has memory, a script, inputs and outputs, and that is all you need for understanding.
  • Searle’s reply to the system reply: Take all the contents of the room and put them in my head (i.e. memorize all the symbols and all the rules). Even then, I still do not understand Chinese. Searle also argues that strong AI leads to panpsychism (i.e. everything has a mind: my stomach, heart, liver, etc.).
  • Passes the T2 Turing test: The system performs the task functionally and is able to communicate in Chinese. However, according to Searle, passing a T2 test does not establish semantics.
6
Q

What is the brain simulator reply to the Chinese Room Argument?

A
  • The reply: What if we build a system that mimics a Chinese speaker’s brain at the level of neurons, i.e. simulates every single neuron of the speaker’s brain? Wouldn’t that system understand?
  • Searle’s reply: The brain simulator reply gives up computationalism, because cognition is no longer implementation-independent. And the simulation still does not understand Chinese (see the sketch below).
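For a feel of what “mimicking a neuron” amounts to, here is a minimal leaky integrate-and-fire neuron, a standard textbook model (the parameter values here are arbitrary). Searle’s point is that this updates numbers that describe a neuron without causing any of its biochemistry.

```python
# Minimal leaky integrate-and-fire neuron (Euler integration).
# It reproduces spike timing on paper; no ion ever crosses a membrane.
def lif_neuron(inputs, tau=10.0, threshold=1.0, dt=1.0):
    v, spike_times = 0.0, []
    for t, current in enumerate(inputs):
        v += dt * (-v / tau + current)   # leak toward rest, integrate input
        if v >= threshold:               # "fire" and reset
            spike_times.append(t)
            v = 0.0
    return spike_times

print(lif_neuron([0.15] * 40))  # spike times under constant drive
```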
7
Q

What is the robot reply to the Chinese Room argument?

A
  • The reply: A robot is constructed that receives Chinese inputs, produces Chinese outputs, walks around, and does all the other stuff we do, because of the program running it. A camera, a microphone, and motors controlling its limbs make it possible for the robot to interact with the world.
  • Searle’s reply: No. Imagine Searle sitting inside the robot and controlling it as he would the Chinese Room. Sensory signals (via the camera and microphone) are received and movements (via the motors) are produced, but there is no more reason to think that this system understands than there was for the room as a whole. It is simply the Chinese Room inside a robot (see the sketch below).
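A sketch of Searle’s point (the names and rules are invented): adding sensors and motors only widens the symbol set; the control loop is still shape-matching all the way down.

```python
# The robot's inner loop: camera/microphone readings arrive as more
# uninterpreted symbols, motor commands leave as more uninterpreted symbols.
ROBOT_RULES = {
    ("camera:person", "mic:你好"): "motor:wave; speaker:你好",
    ("camera:stairs", "mic:silence"): "motor:stop",
}

def robot_step(percepts: tuple) -> str:
    """Same trick as the room: match input shapes, emit output shapes."""
    return ROBOT_RULES.get(percepts, "motor:idle")

print(robot_step(("camera:person", "mic:你好")))  # looks purposeful, still lookup
```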
8
Q

Two important points that Searle makes.

A

Point 1: Searle does not claim that a computer cannot think. He acknowledges that brains are also computers in some sense, but a computer program by itself cannot think, because the program is simply formal symbol manipulation.

Point 2: Simulation is not duplication. A computer can be programmed to print out statements involving humor, love, or fear. However, the computer will never actually fall in love, have a sense of humor, or feel afraid.

9
Q

What is the take-home message about the Chinese Room and the Turing test levels?

A

The Chinese Room passes the T2 test, but not T3 or T4. (T2 = indistinguishable from a human in verbal, pen-pal performance; T3 = indistinguishable in sensorimotor, robotic performance; T4 = indistinguishable in internal neural structure and function.)

To keep a system implementation-independent, it can pass at most the T2 Turing test.

However, a system that can only pass a T2 Turing test will not have intrinsic semantics.

To get semantics, the system must pass at least a T3 Turing test. But then the system is no longer implementation-independent.

So if computation is implementation-independent, systematically interpretable, formal symbol manipulation, no computational model will ever be truly cognitive.

10
Q

What is implied by the lack of biology?

A

Point 1: Computational processes are implementation-independent.

Point 2: Mental states are dependent upon brain anatomy and physiology. Brains are biological organs, and all mental phenomena are caused by neurophysiological processes in the brain. Mental states (e.g. thirst, pain, vision) are caused by specific neurons firing in specific neural networks.

Point 3: A program simulating digestion will not actually digest a pizza. Similarly, a program simulating the neurons that fire when you are thirsty will not actually create the feeling of thirst, and a program simulating cognition will not actually create cognition. People fail to recognize that the mind is just as much a biological phenomenon as digestion.
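In the same spirit, a made-up one-function “digestion simulator”: it produces plausible numbers about digestion while no pizza anywhere is digested, which is exactly the gap between simulation and duplication.

```python
# A "digestion simulator" (toy model, made-up rate constant): correct-looking
# numbers about digestion, zero actual digestion.
def simulate_digestion(mass_g: float, rate_per_hour: float = 0.2, hours: int = 6) -> float:
    remaining = mass_g
    for _ in range(hours):
        remaining *= (1 - rate_per_hour)  # exponential breakdown, on paper only
    return remaining

print(simulate_digestion(300.0))  # ~78.6 g "left" -- purely symbolic
```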

11
Q

What is the symbol grounding problem?

A
  • Symbols are arbitrary. So how do we ground them? How do we attach meaning to them? If all you do is formal symbol manipulation, how do you ever create meaning?
  • Symbols are not just empty shapes; they are grounded in the real world by humans. But computation itself can’t ground symbols: the user has to assign the semantics by interpretation (see the sketch below).
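A small sketch of the arbitrariness point (all tokens invented): one and the same formal structure supports incompatible interpretations, so the semantics has to come from an outside interpreter, not from the computation.

```python
# One symbol table, two incompatible human interpretations.
# Nothing inside the program favors either reading.
table = {"blip": "blop", "blop": "blip"}

reading_a = {"blip": "day", "blop": "night"}
reading_b = {"blip": "on",  "blop": "off"}

symbol = "blip"
print(table[symbol])             # the program itself only shuffles shapes
print(reading_a[table[symbol]])  # "night" -- meaning supplied by interpreter A
print(reading_b[table[symbol]])  # "off"   -- meaning supplied by interpreter B
```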
12
Q

What is the computationalist reply to the symbol grounding problem?

A

Cognition is computation (i.e. implementation-independent, systematically interpretable symbol manipulation), and meaning is a product of the computations themselves. On this view the symbol grounding problem is not important: since cognition just is formal symbol manipulation, no further grounding is needed.

13
Q

What is the embodiment reply to the symbol grounding problem?

A

Meaning arises from engaging with the world. For example, we understand the concept “two” because we have encountered it through our senses.

14
Q

What is the connectionist reply to the symbol grounding problem?

A

A more biological approach: simulate cognitive functions by taking the architecture of the brain (i.e. neurons, synapses, neural networks) as the starting point. On this view, meaning arises from the connections (i.e. the weights) in the network. An objection remains, however: connectionist models are created and trained according to the programmer’s intentions, so the resulting meanings are still not intrinsic to the system (see the sketch below).
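A toy version of the connectionist idea (illustrative only): a single threshold unit learns to distinguish “two things seen” from “one thing seen” on a tiny binary retina, so the category ends up stored in the weights rather than in any symbol.

```python
# Perceptron grounding a "two" detector in simulated sensory input.
from itertools import combinations

N = 5  # retina size

def pattern(on_pixels: set) -> list:
    return [1 if i in on_pixels else 0 for i in range(N)]

# Training data: one active pixel -> label 0, two active pixels -> label 1.
data  = [(pattern({i}), 0) for i in range(N)]
data += [(pattern({i, j}), 1) for i, j in combinations(range(N), 2)]

w, b, lr = [0.0] * N, 0.0, 0.1

def predict(x: list) -> int:
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

for _ in range(50):                     # perceptron learning rule
    for x, y in data:
        err = y - predict(x)
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
        b += lr * err

print(all(predict(x) == y for x, y in data))  # True once training converges
```

Even here, the labels 0 and 1 mean “one” and “two” only because the programmer says so, which is exactly the intrinsicness worry noted above.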
