PHIL4-6 Flashcards

1
Q

What is syntax vs semantics?

A

Symbols, states, and rules are the syntax of a physical symbol system (PSS): its form or structure, what the TM “cares” about. The interpretation we give to these is its semantics: its meaning, what it is about.

2
Q

What does Brentano (1874) say about aboutness?

A

Aboutness is considered the mark of the mental: every mental phenomenon includes something as an object within itself. Our mental states are almost always about things.

3
Q

What does the PSSH say about the “aboutness” of the human mind? What about computers?

A

Mental states can be about things because they are symbolic states. The same goes for computational states.

4
Q

What does the PSSH say about computers solving interesting problems?

A

They need to be set up such that the syntax “tracks” the semantics: a PSS that follows the syntactic rules can solve interesting problems because those rules reflect the relevant normative principles.

5
Q

What is the quote at the heart of PSSH and the “Proof that AI is possible?”

A

If you take care of the syntax, the semantics take care of themselves

6
Q

What is the core element challenged by the Chinese Room Argument?

A

The PSSH’s claim that syntax suffices for semantics, i.e. its explanation of how following syntactic rules enables a PSS to solve interesting problems.

7
Q

What is the Chinese Room Argument?

A

P1. The man in the Chinese Room is a PSS that passes the Turing Test.
P2. The man in the Chinese Room does not understand.
C1. Being a PSS that passes the Turing Test is not sufficient for understanding.

8
Q

What is the notion of understanding according to Searle?

A

Knowledge of semantics: knowing what the symbols are about

9
Q

The Chinese Room is one of the strongest arguments against …, challenging … criteria like …

A

Symbolic AI; behavioral criteria; the Turing Test. Behaving “as if” there is understanding is insufficient for understanding.

10
Q

What are the replies in Searle’s Chinese Room experiment?

A

System Reply, Robot Reply, Brain Simulator Reply

11
Q

What is the System’s Reply and Searle’s response?

A

Although the man in the room does not understand Chinese, the system composed of the man, the rule book, and the symbols understands Chinese. Focusing only on the man is like focusing only on the brain’s frontal lobe or on a digital computer’s CPU.

Response: The rule book and symbols do not provide knowledge of what the symbols are actually about. But that is exactly what (according to Searle) is required for understanding!

12
Q

What is the Robot reply and Searle’s response?

A

Although inputs and outputs consisting of symbols are insufficient, inputs and outputs consisting of perceptual stimuli and behavioral responses are sufficient.

Response: The man does not actually receive the things being perceived or acted upon; he only receives Chinese transcriptions of them. Still only symbols.

13
Q

What is the brain simulator reply and Searle’s response to it?

A

Although rules for transforming Chinese symbols into other Chinese symbols are insufficient for understanding, rules that simulate the brain activity of a native Chinese speaker are sufficient.

Response: It simulates the wrong things about the brain: its formal structure rather than its causal properties, i.e. its ability to produce intentional states. Compare simulating a rainstorm: only an actual rainstorm has the ability to make things wet.

14
Q

What is the Fairness principle?

A

Any criterion for attributing understanding should not be so stringent that humans fail to satisfy it. (Do humans know what their mental states are about?)

15
Q

How can Searle’s argument be challenged besides the Fairness principle?

A

By asking whether it applies to aspects of intelligence other than understanding, e.g. playing chess or decision making: we would not doubt that a system is really playing chess or really making decisions.

16
Q

What is weak vs strong AI

A

Weak: computer programs that simulate intelligence. Strong: computer programs that are intelligent. (Is there a difference?)

17
Q

What are philosophical challenges to Symbolic AI?

A

Chinese room argument, Lady Lovelace Objection, Argument from consciousness

18
Q

What are practical challenges to Symbolic AI?

A

Frame problem

19
Q

What is a practical challenge of Symbolic AI, and a possible solution?

A

It is difficult to articulate rules that are effective and tractable for many problems (like recognizing chairs). A possible solution is delegating this task to the computer through machine learning (ML).

20
Q

What kind of questions raised by Symbolic AI can be solved using ML?

A

Which symbols should be used to represent the task environment?
Which rules should be deployed to arrive at a particular goal?
How should those symbols and rules be implemented?

21
Q

What are NNs?

A

NNs (neural networks) are universal function approximators: they can replicate any mapping between input and output symbols. NNs can solve “interesting problems” insofar as those problems can be viewed as mappings of this kind.
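
A minimal sketch of this idea (illustrative, not from the course material): a hand-wired feed-forward net with step activations that exactly replicates the symbolic XOR mapping, showing how a network can implement a mapping between input and output symbols. All weights here are hand-chosen for clarity, not learned.

```python
# A hand-wired feed-forward network (no learning) replicating the
# symbolic XOR mapping: an NN viewed as an input-output mapping.

def step(x):
    # Threshold activation: fires (1) when input exceeds 0.
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)    # hidden unit acting like OR
    h2 = step(x1 + x2 - 1.5)    # hidden unit acting like AND
    return step(h1 - h2 - 0.5)  # output: OR and not AND = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```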

22
Q

What are the strengths of NNs?

A

They work with subsymbolic representations: these are generally not meaningful to us at the level of single subsymbols.
They use distributed representations: many subsymbols together represent some “concept” and might still be meaningful.

23
Q

What are the challenges to NNs?

A

Catastrophic Forgetting and Systematicity

24
Q

What is Catastrophic Forgetting?

A

New facts are memorized by changing weights throughout the network, and this may lead to catastrophic forgetting of previously learned facts. This does not happen with Symbolic AI or in human learning.
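
A toy illustration of the mechanism (a hypothetical two-weight linear model, far simpler than a real network): learning “fact B” updates the same shared weights that encode “fact A”, so the earlier fact is overwritten.

```python
# Toy demo of catastrophic forgetting: one weight vector is shared
# between two "facts", so learning the second overwrites the first.

def predict(w, x):
    return w[0] * x[0] + w[1] * x[1]

def train(w, x, y, steps=300, lr=0.1):
    # Plain gradient descent on squared error for a single example.
    for _ in range(steps):
        err = predict(w, x) - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

w = [0.0, 0.0]
fact_a = ([1.0, 1.0], 1.0)   # fact A uses both features
fact_b = ([1.0, 0.0], 0.0)   # fact B shares the first feature

w = train(w, *fact_a)
print("A after learning A:", round(predict(w, fact_a[0]), 2))  # ≈ 1.0

w = train(w, *fact_b)        # now train only on fact B ...
print("A after learning B:", round(predict(w, fact_a[0]), 2))  # ≈ 0.5: forgotten
```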

25
Q

What is Systematicity?

A

Some problems require representations that are systematic: the same concept occurs in many different “thoughts”, as in “Mary loves Ben” and “Ben loves Mary”. A PSS that can process the first can process the second, but a neural network can learn to recognize one sentence without recognizing the other.
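
The systematicity of symbolic rules can be sketched as follows (a hypothetical toy parser, not from the course material): because the rule is defined over roles rather than specific words, whatever handles one filler order handles the other for free.

```python
# A single syntactic rule "<subject> loves <object>". Defined over
# roles, not particular words, so it processes "Mary loves Ben" and
# "Ben loves Mary" with exactly the same machinery.

def parse_loves(sentence):
    subject, verb, obj = sentence.split()
    if verb != "loves":
        raise ValueError("rule only covers '<subject> loves <object>'")
    return {"subject": subject, "object": obj}

print(parse_loves("Mary loves Ben"))  # {'subject': 'Mary', 'object': 'Ben'}
print(parse_loves("Ben loves Mary"))  # {'subject': 'Ben', 'object': 'Mary'}
```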

26
Q

What challenges facing Symbolic AI are addressed by ML and NN?

A

Distinguishing relevant from irrelevant information (the frame problem), and articulating rules that are effective, tractable, and more flexible.

27
Q

What is an LLM?

A

A type of neural network, but their design seems to take into account the specifics of the task: language and next-word prediction. They take into account context and positional encoding.

28
Q

What is emergence?

A

Quantitative differences in the system leading to qualitative differences in performance

29
Q

How do LLMs deal with the challenges for NNs?

A

LLMs seem to suffer from catastrophic forgetting to a lesser extent: it seems possible to modify facts without forgetting other facts.

LLMs seem able to respond appropriately to completely novel inputs that are unlikely to have been in the training data. This would imply that these systems learn symbolic-like representations of concepts or even rules.

30
Q

How does Searle’s sense of understanding apply to LLMs?

A

LLMs seem to learn meaningful correlations from statistical patterns in the data. Embeddings allow them to recognize similar and dissimilar words and to use this information to accurately predict the next words. (Is this enough for understanding of language? Language might require additional abilities, e.g. social context and reference to real objects in the world.)
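
How embeddings support similarity judgments can be sketched with cosine similarity over toy vectors (the vectors below are invented for illustration, not real learned embeddings):

```python
import math

def cosine(u, v):
    # Cosine similarity: near 1.0 for same direction, near 0 for unrelated.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Invented 3-d "embeddings"; real LLM embeddings have hundreds of dimensions.
emb = {
    "cat": [0.9, 0.8, 0.1],
    "dog": [0.85, 0.75, 0.2],
    "car": [0.1, 0.2, 0.9],
}

print(cosine(emb["cat"], emb["dog"]))  # high: similar words
print(cosine(emb["cat"], emb["car"]))  # low: dissimilar words
```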

31
Q

What is the relationship between ML, NN and Symbolic AI?

A

ML and NNs are not incompatible with Symbolic AI, nor can they fully replace it. NNs might implement some symbolic rules that allow them to generate language, but because these rules are learned via ML, it is hard to know which rules are used.

32
Q

What is the argument from consciousness?

A

Rule-based symbol manipulation (or any kind of computation) is not sufficient for consciousness, and for this reason not sufficient for intelligence. (An objection Turing discusses against the Turing Test.)

33
Q

What is consciousness?

A

Thinking, awareness (knowledge), doing things for a reason, to satisfy goals, awareness of ourselves, our bodies and place in society

34
Q

What kind of consciousness is the most scientifically challenging natural phenomenon?

A

Phenomenal consciousness: how you experience something

35
Q

What is today’s challenge with conscious vs unconscious?

A

Explaining what makes us consciously experiencing anything at all, and how this experience emerges from our brain and body.

Chalmers: Why are some mental states and processes conscious while others are not?

36
Q

What elements do conscious states and processes consist of?

A

Cognitive aspects and phenomenal aspects, e.g. pain:

Cognitive: saying “ouch”, pain avoidance, neural activity
Phenomenal: what it is like to experience pain

37
Q

What is the difference between cognitive and phenomenal aspects?

A

Cognitive aspects are interesting but no more (or less) philosophically challenging than any other mental states or processes. (Intelligence, understanding).

The phenomenal aspects are so challenging that we may question whether the phenomenal aspects of consciousness can be studied scientifically at all.

38
Q

What is the explanatory gap?

A

Explaining phenomenal aspects of consciousness - the “what is it like”-ness (what is it like to be a bat?) is sometimes called the ‘hard problem’.

Hard because there appears to be an unbridgeable explanatory gap between the cognitive and the phenomenal aspects of consciousness.

39
Q

How come we’re unable to acquire knowledge about the phenomenal aspects?

A

The only method of acquiring knowledge about phenomenal aspects is introspection, a first-person (subjective) method. Science is grounded on observation and does not rely on introspection, thus does not yield knowledge of the phenomenal aspect of consciousness.

40
Q

What are two questions in AI about consciousness?

A

Can we produce machines that are conscious?

If yes, could we know that they are in fact conscious? (How?)

41
Q

Can we build conscious robots?

A

Two general strategies:

  1. Advance ‘a priori’ arguments for or against the presence of phenomenal consciousness in machines.
  2. Engineer conscious machines (possibly drawing inspiration from conscious organisms).
42
Q

What would be a priori arguments to deny we can build machines that possess phenomenal consciousness?

A

Machines don’t have a soul, and souls are required; machines are not born from conscious beings; machines are too simple; machines are not made of organic matter, and this is required; etc.

43
Q

What are a priori reasons to claim that robots can possess phenomenal consciousness?

A

Panpsychism: Phenomenal consciousness may be an aspect of all material beings!

If meat can be phenomenally conscious, why not silicon chips?

44
Q

What is the problem with a priori arguments for the possibility of conscious robots?

A

They are affected by the explanatory gap: As we don’t know how the phenomenal aspects of conscious experience are connected to the physical world, we can’t make inferences about possibility or impossibility of conscious machines.

45
Q

Can we engineer sufficient mechanisms for consciousness in robots?

A

We might have reason to suspect they possess all of the cognitive aspects, but again no more reason to attribute phenomenal aspects than to deny them.

46
Q

What is the Artificial Consciousness Test?

A

An imitation game focusing on questions that elicit reflection and intuitions about consciousness, e.g. “Is there life after death?”, “Can you imagine what it is like to be a bat?”. Only machines with phenomenal states would answer these questions in ways that would fool the interrogator into thinking that they actually have them.

47
Q

What is the chip test?

A

A neural prosthesis: silicon chips perfectly replicating input-output behavior replace the subject’s brain bit by bit. After each replacement the subject engages in introspective reflection and reporting. Machines can be conscious if, after full replacement, the subject issues no abnormal introspective reports.

48
Q

What is the problem with criteria for phenomenal consciousness?

A

They are too limited by the explanatory gap: No amount of information about how machines behave or how they are constructed can tell us whether or not they are phenomenally conscious.