Chapter 15 Flashcards
Douglas Lenat’s Cyc project
The most famous and longest-lasting attempt to manually encode commonsense knowledge for machines
Lenat concluded that true progress in AI would require machines to have common sense
he decided to create a huge collection of facts about the world, along with the logical rules by which programs could use this collection to deduce the facts they needed.
Lenat’s goal was for Cyc to contain all the unwritten knowledge that humans have
However, humanlike abstraction and analogy making are not skills that can be captured by Cyc’s massive set of facts or, I believe, by logical inference in general.
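To make the fact-plus-rule idea concrete, here is a minimal sketch of that style of deduction. This is not Cyc, CycL, or its inference engine; the `is_a` facts and the single transitivity rule are made up for illustration.

```python
# Toy illustration of fact-plus-rule deduction in the spirit of Cyc
# (not Cyc's actual representation language or inference engine).

facts = {
    ("is_a", "Fido", "Dog"),
    ("is_a", "Dog", "Mammal"),
    ("is_a", "Mammal", "Animal"),
}

def forward_chain(facts):
    """Apply the rule is_a(X, Y) and is_a(Y, Z) => is_a(X, Z) until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (_, x, y) in list(derived):
            for (_, y2, z) in list(derived):
                if y == y2 and ("is_a", x, z) not in derived:
                    derived.add(("is_a", x, z))
                    changed = True
    return derived

# The program can now answer a question whose answer was never stated directly.
print(("is_a", "Fido", "Animal") in forward_chain(facts))  # True
```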
Bongard problems
Each problem features twelve boxes: six on the left and six on the right. The six left-hand boxes all exemplify the same concept, the six right-hand boxes exemplify a related concept, and the two concepts perfectly distinguish the two sets. The challenge is to identify the two concepts.
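As a rough illustration of the setup (not of how the problems are actually solved), the sketch below reduces one hypothetical problem to pre-extracted attributes and checks whether a candidate concept holds for every left-hand box and for no right-hand box. In real Bongard problems the boxes are line drawings, and perceiving the right attributes in the first place is the hard part.

```python
# A hypothetical, highly simplified stand-in for one Bongard problem:
# each box is reduced to a few made-up attributes.

left  = [{"sides": 3, "filled": True}, {"sides": 3, "filled": False},
         {"sides": 3, "filled": True}, {"sides": 3, "filled": False},
         {"sides": 3, "filled": True}, {"sides": 3, "filled": False}]
right = [{"sides": 4, "filled": True}, {"sides": 4, "filled": False},
         {"sides": 4, "filled": True}, {"sides": 4, "filled": False},
         {"sides": 4, "filled": True}, {"sides": 4, "filled": False}]

def separates(predicate, left, right):
    """A candidate concept solves the problem if it holds for every
    left-hand box and for none of the right-hand boxes."""
    return all(predicate(b) for b in left) and not any(predicate(b) for b in right)

print(separates(lambda b: b["sides"] == 3, left, right))  # True: "triangles" vs "quadrilaterals"
print(separates(lambda b: b["filled"], left, right))      # False: fill doesn't distinguish the sides
```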
challenges of Bongard problems
- their solution requires abstraction and analogy-making abilities: abstraction and analogy are all about perceiving “the subtlety of sameness,” and to discover this subtle sameness you need to determine which attributes of the situation are relevant and which you can ignore
- their solution requires the ability to perceive new concepts on the fly
ConvNets and Bongard problems
An immediate obstacle is that a set of twelve training examples is laughably inadequate for training a ConvNet; even twelve hundred might not be sufficient.
conceptual slippage
an idea at the heart of analogy making
When you attempt to perceive the essential “sameness” of two different situations, some concepts from the first situation need to “slip”—that is, to be replaced by related concepts in the second situation
Copycat
a program envisioned by Hofstadter
would solve letter-string analogy problems by using very general algorithms, similar to those he believed humans used when making analogies in any domain
Copycat solved analogy problems via a continual interaction between the program’s perceptual processes (that is, noticing features in a particular letter-string analogy problem) and its prior concepts (for example, letter, letter group, successor, predecessor, same, and opposite).
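A drastically simplified sketch of the letter-string domain, assuming the canonical problem “abc changes to abd; what does ijk change to?”. The real Copycat discovers rules through many parallel, stochastic codelets interacting with its concept network; here one plausible rule is simply hard-coded.

```python
# Simplified sketch of Copycat's letter-string domain (not its architecture).

def successor(ch):
    """The prior concept 'successor': the next letter of the alphabet."""
    if ch == "z":
        # 'z' has no successor -- exactly the kind of impasse where Copycat
        # needs a conceptual slippage (e.g., successor -> predecessor).
        raise ValueError("'z' has no successor; a conceptual slippage is needed")
    return chr(ord(ch) + 1)

def apply_rule(target):
    """Rule abstracted from 'abc -> abd': replace the rightmost letter with its successor."""
    return target[:-1] + successor(target[-1])

print(apply_rule("ijk"))  # 'ijl'
```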
metacognition
the ability to perceive and reflect on one’s own thinking
Copycat, like all of the other AI programs I’ve discussed in this book, had no mechanisms for self-perception, and this hurt its performance
The program would sometimes get stuck, trying again and again to solve a problem in the wrong way, and could never perceive that it had previously been down a similar, unsuccessful path.
Metacat
not only solved analogy problems in Copycat’s letter-string domain but also tried to perceive patterns in its own actions.
When the program ran, it produced a running commentary about what concepts it recognized in its own problem-solving process.
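The toy sketch below only illustrates the flavor of that self-monitoring, not Metacat’s actual architecture: a solver records a running commentary on its own attempts and refuses to retry an approach it has already perceived to fail. The problem, the two candidate approaches, and all function names are hypothetical.

```python
import random

def solve_with_self_monitoring(problem, approaches, max_attempts=10):
    """Try approaches on a problem while tracing the program's own actions
    and avoiding any approach it has already seen fail."""
    trace = []       # running commentary on what the program is doing
    failed = set()   # approaches already perceived as dead ends
    for attempt in range(1, max_attempts + 1):
        candidates = [a for a in approaches if a.__name__ not in failed]
        if not candidates:
            trace.append("every known approach has failed; giving up")
            break
        approach = random.choice(candidates)
        trace.append(f"attempt {attempt}: trying '{approach.__name__}'")
        answer = approach(problem)
        if answer is not None:
            trace.append(f"'{approach.__name__}' produced {answer!r}")
            return answer, trace
        trace.append(f"noticed that '{approach.__name__}' leads nowhere; will not retry it")
        failed.add(approach.__name__)
    return None, trace

# Hypothetical approaches to the letter-string problem "abc -> abd; xyz -> ?"
def replace_last_with_successor(s):
    return None if s.endswith("z") else s[:-1] + chr(ord(s[-1]) + 1)

def replace_first_with_predecessor(s):  # one possible "slipped" rule
    return chr(ord(s[0]) - 1) + s[1:]

answer, trace = solve_with_self_monitoring(
    "xyz", [replace_last_with_successor, replace_first_with_predecessor])
print(answer)              # 'wyz'
print("\n".join(trace))
```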
Situate
combines the object-recognition abilities of deep neural networks with Copycat’s active-symbol architecture, in order to recognize instances of particular situations by making analogies.
We would like our program to be able to recognize not only straightforward examples, but also unorthodox examples that require conceptual slippages.
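A hypothetical sketch of that idea (not the real Situate architecture): labels from an object-recognition network are matched against the slots of a situation schema, and a slot may be filled by a related concept via a conceptual slippage at a reduced score. The “walking a dog” schema, the slippage table, and the scoring are invented for illustration.

```python
# Hypothetical schema for the situation "walking a dog" (slot -> expected concept)
SCHEMA = {"walker": "person", "animal": "dog", "tether": "leash"}

# Concepts considered close enough to slip to, with an associated penalty
SLIPPAGES = {"dog": {"cat": 0.6}, "leash": {"rope": 0.8, "chain": 0.7}}

def match_situation(detections, schema=SCHEMA):
    """Score how well a set of detected object labels instantiates the schema."""
    score, filled = 0.0, {}
    for slot, expected in schema.items():
        if expected in detections:
            filled[slot], score = expected, score + 1.0
        else:
            # try a conceptual slippage for this slot
            for concept, weight in SLIPPAGES.get(expected, {}).items():
                if concept in detections:
                    filled[slot], score = concept, score + weight
                    break
    return score / len(schema), filled

# Detections as they might come from an object-recognition network
print(match_situation({"person", "dog", "leash"}))  # score 1.0: a straightforward instance
print(match_situation({"person", "cat", "rope"}))   # lower score: an unorthodox instance via slippages
```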
Copycat, Metacat, and Situate
three of the several analogy-making programs based on Hofstadter’s active-symbol architecture
active-symbol architecture
one of many approaches in the AI community to creating programs that can make analogies
Karpathy
Karpathy’s example beautifully captures the complexity of human understanding and renders with crystal clarity the magnitude of the challenge for AI.
embodiment hypothesis
the hypothesis that a machine cannot attain human-level intelligence without having some kind of body that interacts with the world; in short, we need embodiment