Searle Ch2 Flashcards
10 marker
-define Strong AI: “the mind is to the brain, as the program is to the computer hardware”
-liberalism: the objection that a theory of mind is too permissive, counting things as minds that clearly aren’t
-“minds are semantical […] they have a content”
-S’ argument:
1. Brains cause minds
2. Syntax is not sufficient for semantics
3. Computer programs are entirely defined by their formal or syntactical structure
Therefore, no program is sufficient to give a system a mind, and the way that brain functions cause minds cannot be solely in virtue of running a program
CR TE: the man manipulates the symbols perfectly (has the syntax) yet will never understand Chinese (have the semantics), showing that syntax is not sufficient for semantics
15 marker structure
Intro: reiterate 10 marker points if needed; Searle ultimately fails because his arguments don’t fit with BN (biological naturalism)
P1:
-Systems Reply: the whole system could understand Chinese
Searle: the person could internalise the whole system by memorising the rules, but still wouldn’t understand Chinese; since there is nothing in the system that isn’t inside the person, the system won’t understand either
-Robot Reply: symbols need to be grounded in experience of the physical world; the CR fails because it is not connected to the external world - if the computer were inside a robot, it would gain understanding
Searle: even inside a robot, the program is still only manipulating symbols, so it has no understanding; moreover, strong AI defeats itself if it asserts the need for a set of causal relations to the outside world, since cognition would then no longer be purely formal symbol manipulation
-Brain Simulator Reply: modify the program to replicate the neuron-firing sequence of a native Chinese speaker, and you might get semantics from a replication of the brain’s structure
Searle: water pipes - the man manipulating the valves “certainly doesn’t understand Chinese, and neither do the water pipes”
-Combination Reply: combine the systems, robot, and brain simulator replies into one
Searle’s response to all of these: “zero times three is naught” - three replies that each fail to produce understanding cannot succeed together
P2: Untutored intuitions
- Pinker and the Churchlands
- the Churchlands’ luminous room example: a man waving a magnet produces no visible light, yet light really is electromagnetic radiation - intuition misleads us about unfamiliar cases, just as it may about the CR
- Dennett’s philosopher’s syndrome: “mistaking a failure of the imagination for an insight into necessity”
P3: Many mansions
- programming may not be enough for a computer to have intentionality at the moment, but future technology might produce artefacts with the right causal powers
- S: this abandons AI’s original point - the claim was that mental processes = computational processes
- program v implementation: Chalmers says a program is syntactical, but an implementation of a program is a concrete causal system - so semantics could be found in the implementation
P4: Fallacy of Composition and BN problems
- to embrace the CR conclusion is to commit the fallacy of composition - attributing a property of a part to the whole; components like the person might lack semantics, but the system as a whole doesn’t necessarily lack semantics
- if individual neurons lack understanding yet can come together and create consciousness, there is no reason why the CR as a whole cannot have understanding - semantics could be a macro feature, which fits Searle’s own BN