Cognitive Wheels Flashcards

1
Q

What is the frame problem? What does it imply for Dennett?

A

In general, the frame problem describes the challenge of representing and structuring information. It first became apparent in AI systems that used frame axioms to represent all necessary world elements and possible actions.
Dennett says this is not purely a problem for AI but also one for understanding human consciousness. "Intelligence is a matter of using well what you know." So how are we able to decide which information is important, and when?
This divides into a semantic problem (What information?) and a syntactic problem (How to structure it?).
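To make "frame axioms" concrete, here is a minimal sketch in a situation-calculus style; the blocks-world vocabulary (Holds, Paint, On, Colour) is an illustrative assumption, not taken from Dennett's paper:

```latex
% Effect axiom: painting x colour c makes that hold in the resulting situation.
Holds(Colour(x,c),\; Result(Paint(x,c), s))

% Frame axioms: every property the action does NOT affect must be carried over explicitly.
Holds(On(y,z), s) \rightarrow Holds(On(y,z),\; Result(Paint(x,c), s))
Holds(Colour(y,c'), s) \wedge y \neq x \rightarrow Holds(Colour(y,c'),\; Result(Paint(x,c), s))

% Roughly one such axiom is needed for every property/action pair, which is what
% makes the naive representation explode -- the original, narrow frame problem.
```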

2
Q

What is the installation problem?

A

Choosing a usable format for the information to be "installed".

3
Q

What is the language of thought theory of mental representation? Why is it unlikely?

A

Every distinguishable proposition is separately inscribed in the system. This would be far too exhaustive to be practical.

4
Q

What is the Spinozistic solution to mental representation? Why is it unlikely?

A

Using a minimal set of axioms and deducing all needed information from them.
This seems unlikely because of time and space limitations.

5
Q

What is the minimum goal for Dennett in tackling the frame problem for AI agents?

A

To be surprised by an unexpected outcome, as this requires predictions and general world knowledge.

6
Q

What is the problem of induction? How does it relate to the frame problem?

A

Having good expectations about any future event (one's own actions, others' actions, natural events, etc.).
Solving the problem of induction would only solve the semantic part of the frame problem, not the syntactic part: implementing and structuring the right predictions.

7
Q

Why can’t we just tell AI to ignore unchanged states?

A

There is no direct way to do this while reducing the number of axioms used. We still produce all the unwanted theorems, plus statements about their irrelevance.
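Continuing the toy notation from the sketch under card 1 (again an illustrative assumption, not Dennett's own formalism): even a blanket "everything else stays the same" axiom does not help, because the prover must still derive each unchanged fact as an explicit theorem before it can use it.

```latex
% Hypothetical blanket axiom for ignoring unchanged states:
\neg Affects(a, p) \wedge Holds(p, s) \rightarrow Holds(p,\; Result(a, s))

% To reason about the new situation the prover still has to grind out, one by one,
% theorems such as
Holds(On(B,C),\; Result(Paint(A,\mathit{red}), s)), \quad Holds(Colour(C,\mathit{green}),\; Result(Paint(A,\mathit{red}), s)), \ \ldots
% i.e. all the unwanted theorems, plus the Affects bookkeeping that licenses them.
```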

8
Q

What is the qualification problem?

A

We need the system to ignore most of what it knows and use only a small, well-chosen portion of its knowledge (non-monotonic inference in AI).
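A minimal Python sketch of non-monotonic (default) reasoning, using the standard textbook bird example rather than anything from Dennett's text: a conclusion drawn by default is withdrawn when new information arrives, which ordinary monotonic deduction can never do.

```python
def flies(bird: str, known_exceptions: set[str]) -> bool:
    """Default rule: assume a bird flies unless it is a known exception."""
    return bird not in known_exceptions

exceptions: set[str] = set()
print(flies("tweety", exceptions))   # True  -- concluded by default

exceptions.add("tweety")             # new information: Tweety is a penguin
print(flies("tweety", exceptions))   # False -- the earlier conclusion is retracted
```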

9
Q

What mechanisms does the human brain use? Why can't we just implement these in AI and call it a day?

A

Humans exhibit habits, tendencies to leap to conclusions, stereotypes, etc.
The problem for AI is that when it relies solely on these, it cannot recover from a misanalysis, because these "shortcuts" are all there is. Humans can use them (semi-)successfully only because they can fall back on the full implementation.

10
Q

What does Dennett mean by cognitive wheels?

A

Wheels are a great solution, but they do not resemble biological solutions to the same problem of getting around.
Dennett says it is a misunderstanding to claim that AI is necessarily just like a cognitive wheel, since the higher levels of an AI system can still resemble human cognition even if the micro level is nothing like its biological counterpart.
Still, even if strong AI were made up of "only cognitive wheels", it would tell us something about cognition.
