Cognitive Wheels Flashcards
What is the frame problem? What does it imply for Dennett?
In general, the frame problem describes the challenge of representing and structuring information. It first became apparent in AI systems that used frame axioms to represent all necessary world elements and possible actions.
Dennett says that this isn't purely a problem for AI, but also one for understanding human consciousness. "Intelligence is a matter of using well what you know" - so how are we able to decide which information is important when?
This divides into a semantic problem (what information?) and a syntactic problem (how to structure it?).
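A minimal sketch (my own toy illustration, not from Dennett's paper) of why frame axioms become unwieldy: every action needs explicit statements about everything it leaves unchanged, so the number of axioms grows with the number of actions times the number of facts. All names here are made up.

```python
# Toy domain (hypothetical): a few facts ("fluents") and one action.
fluents = ["door_open", "light_on", "robot_location", "battery_charged"]

def effect_axioms(action):
    """What an action actually changes in this toy domain."""
    return {"open_door": {"door_open": True}}.get(action, {})

def frame_axioms(action):
    """Explicit 'this stays the same' statements - one per untouched fact."""
    changed = effect_axioms(action)
    return [f"{action} does not change {f}" for f in fluents if f not in changed]

print(frame_axioms("open_door"))
# ['open_door does not change light_on',
#  'open_door does not change robot_location',
#  'open_door does not change battery_charged']
# With N actions and M facts, roughly N*M such axioms are needed.
```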
What is the installation problem?
Choosing a usable format in which the information can be "installed" in the system.
What is the language of thought theory of mental representation? Why is it unlikely?
Each distinguishable proposition is separately inscribed in the system. This would be far too exhaustive to be practical.
What is the Spinozistic solution to mental representation? Why is it unlikely?
Using a minimal set of axioms and deducing all needed information from it.
This seems unlikely because of time and space limitations.
What is the minimum goal for Dennett in tackling the frame problem for AI agents?
To be surprised by an unexpected outcome, as this requires predictions and general world knowledge.
What is the problem of induction? How does it relate to the frame problem?
Having good expectations about any future event (one's own actions, others' actions, natural events, etc.).
Solving the problem of induction would only solve the semantic part of the frame problem, not the syntactic part: implementing and structuring the right predictions.
Why can’t we just tell AI to ignore unchanged states?
There is no direct way to do this that reduces the number of axioms: we would still produce all the unwanted theorems, plus statements about their irrelevance.
What is the qualification problem?
We need the system to ignore most of what it knows and use only a small, well-chosen portion of its knowledge (non-monotonic inference in AI).
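A hedged sketch of what non-monotonic inference means in practice (a generic default-reasoning toy, not anything Dennett specifies): a conclusion drawn by default is retracted when new, overriding information arrives, which is exactly what a monotonic deductive system never does.

```python
# Default reasoning: conclude by default, retract when an exception is learned.
def can_fly(bird, known_exceptions):
    """Default rule: birds fly, unless a known exception applies."""
    return bird not in known_exceptions

exceptions = set()
print(can_fly("tweety", exceptions))   # True  - concluded by default
exceptions.add("tweety")               # learn that Tweety is a penguin
print(can_fly("tweety", exceptions))   # False - the earlier conclusion is retracted
```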
What mechanisms does the human brain use? Why can't we just implement these in AI and call it a day?
Humans exhibit habits, tendencies to leap to conclusions, stereotypes, etc.
The problem for AI is that if it relies solely on these, it cannot recover from a misanalysis, because these "shortcuts" are all there is. Humans can use them (semi-)successfully only because they can fall back on the full implementation when the shortcuts fail.
What does Dennett mean by cognitive wheels?
Wheels are a great solution, but do not resemble biological solutions to the same problem of getting around.
Dennett says it is a misunderstanding to claim that AI is necessarily just a cognitive wheel: its higher levels can still resemble human cognition even if the micro level is nothing like its biological counterpart.
Still, even if strong AI were made up of "only cognitive wheels," it would still tell us something about cognition.