Chapter 1 + 3 Flashcards

1
Q

Where did the ideas that led to the first programmable computers come from?

A

Mathematicians’ attempts to understand human thought—particularly logic—as a mechanical process of “symbol manipulation.” Digital computers are essentially symbol manipulators, pushing around combinations of the symbols 0 and 1.

2
Q

How is intelligence a ‘suitcase word’?

A

Because its central notion—intelligence—remains so ill-defined. It is packed like a suitcase with a jumble of different meanings. Artificial intelligence inherits this packing problem, sporting different meanings in different contexts.

Intelligence can be binary (something is or is not intelligent), on a continuum (one thing is more intelligent than another thing), or multidimensional (someone can have high verbal intelligence but low emotional intelligence).

3
Q

For better or worse, the field of AI has largely ignored these various distinctions. What has it focused on instead?

A

Instead, it has focused on two efforts: one scientific and one practical. On the scientific side, AI researchers are investigating the mechanisms of “natural” (that is, biological) intelligence by trying to embed it in computers. On the practical side, AI proponents simply want to create computer programs that perform tasks as well as or better than humans, without worrying about whether these programs are actually thinking in the way humans think.

4
Q

At the 1956 Dartmouth workshop, different participants espoused divergent opinions about the correct approach to take to develop AI.

Describe three of these opinions.

A

Some people—generally mathematicians—promoted mathematical logic and deductive reasoning as the language of rational thought.

Others championed inductive methods in which programs extract statistics from data and use probabilities to deal with uncertainty.

Still others believed firmly in taking inspiration from biology and psychology to create brain-like programs.

5
Q

How was this disagreement about the correct approach for AI resolved?

A

Arguments among proponents of these various approaches persist to this day. And each approach has generated its own panoply of principles and techniques, fortified by specialty conferences and journals, with little communication among the subspecialties.

6
Q

Which family of AI methods has ‘risen above the anarchy to become the dominant AI paradigm’?

A

One family of AI methods—collectively called deep learning (or deep neural networks).

8
Q

How are AI and deep learning not the same thing?

A

AI is a field that includes a broad set of approaches, with the goal of creating machines with intelligence. Deep learning is only one such approach.

Deep learning is itself one method among many in the field of machine learning, a subfield of AI in which machines “learn” from data or from their own “experiences.”

9
Q

What philosophical split occurred early in the AI research community?

A

The split between symbolic and subsymbolic AI.

A symbolic AI program’s knowledge consists of words or phrases (the “symbols”), typically understandable to a human, along with rules by which the program can combine and process these symbols in order to perform its assigned task. The General Problem Solver’s representation of the missionaries-and-cannibals puzzle (e.g., CURRENT STATE: LEFT-BANK = [3 MISSIONARIES, 3 CANNIBALS, 1 BOAT], RIGHT-BANK = [EMPTY]) is one example, similar to how we write code.

Subsymbolic AI programs do not contain this kind of human-understandable language. Instead, a subsymbolic program is essentially a stack of equations—a thicket of often hard-to-interpret operations on numbers.

10
Q

How did these two approaches differ in their view of AI?

A

Advocates of the symbolic approach to AI argued that to attain intelligence in computers, it would not be necessary to build programs that mimic the brain. Instead, the argument goes, general intelligence can be captured entirely by the right kind of symbol-processing program. Agreed, the workings of such a program would be vastly more complex than the Missionaries and Cannibals example, but it would still consist of symbols, combinations of symbols, and rules and operations on symbols.

Symbolic AI was originally inspired by mathematical logic as well as by the way people described their conscious thought processes. In contrast, subsymbolic approaches to AI took inspiration from neuroscience and sought to capture the sometimes-unconscious thought processes underlying what some have called fast perception, such as recognizing faces or identifying spoken words.

17
Q

Describe an example of symbolic AI

A

The General Problem Solver (GPS), which worked similarly to how we write code. Its creators had recorded several students “thinking out loud” while solving the missionaries-and-cannibals conundrum and other logic puzzles, and they programmed GPS to mimic this reasoning. The program represented states symbolically:

CURRENT STATE:
LEFT-BANK = [3 MISSIONARIES, 3 CANNIBALS, 1 BOAT]
RIGHT-BANK = [EMPTY]

DESIRED STATE:
LEFT-BANK = [EMPTY]
RIGHT-BANK = [3 MISSIONARIES, 3 CANNIBALS, 1 BOAT]

At each step in its procedure, GPS attempts to change its current state to make it more similar to the desired state. In its code, the program has “operators” (in the form of subprograms) that can transform the current state into a new state and “rules” that encode the constraints of the task.
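This style of symbolic state search can be sketched in a few lines of Python. This is an illustrative sketch, not Newell and Simon's actual program: states are symbolic tuples, the "operators" are boat trips, and the "rules" reject unsafe states. GPS itself used means-ends analysis (reducing the difference between the current and desired states); plain breadth-first search is used here only to keep the sketch short.

```python
from collections import deque

# State: (missionaries on left bank, cannibals on left bank, boat on left?).

def safe(m, c):
    # Rule: cannibals may never outnumber missionaries on either bank.
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def solve():
    start, goal = (3, 3, True), (0, 0, False)
    operators = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]  # who rides the boat
    frontier, seen = deque([(start, [start])]), {start}
    while frontier:
        (m, c, boat), path = frontier.popleft()
        if (m, c, boat) == goal:
            return path
        for dm, dc in operators:
            # The boat carries people away from whichever bank it is on.
            nm, nc = (m - dm, c - dc) if boat else (m + dm, c + dc)
            nxt = (nm, nc, not boat)
            if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc) and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

solution = solve()  # shortest sequence of states from CURRENT to DESIRED
```

The shortest solution takes eleven crossings, each one a symbolic, human-readable state transition.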

18
Q

Describe an example of subsymbolic AI

A

The perceptron, an important milestone in AI and the influential great-grandparent of modern AI’s most successful tool, deep neural networks. Rosenblatt’s invention of perceptrons was inspired by the way in which neurons process information.

Analogous to the neuron, the perceptron adds up its inputs, and if the resulting sum is equal to or greater than the perceptron’s threshold, the perceptron outputs the value 1 (it “fires”); otherwise it outputs the value 0 (it “does not fire”). For example, a perceptron can be set up to recognize the figure ‘8’ from 324 pixel inputs.

19
Q

Unlike the symbolic General Problem Solver system that I described earlier, a perceptron doesn’t have any explicit rules for performing its task; all of its “knowledge” is encoded in the numbers making up its weights and threshold.

But how, exactly, can we determine the correct weights and threshold for a given task?

A

Rosenblatt proposed a brain-inspired answer: the perceptron should learn these values on its own. Echoing the behavioral psychology theories popular at the time, his idea was that the perceptron should be trained on examples via conditioning: it should be rewarded when it fires correctly and punished when it errs.

20
Q

What name is given to this form of conditioning, and how does it work?

A

This form of conditioning is now known in AI as supervised learning. During training, the learning system is given an example, it produces an output, and it is then given a “supervision signal,” which tells how much the system’s output differs from the correct output. The system then uses this signal to adjust its weights and threshold.

Supervised learning typically requires a large set of positive examples (for instance, a collection of 8s written by different people) and negative examples (for instance, a collection of other handwritten digits, not including 8s). Some of the positive and negative examples are used to train the system; these are called the training set. The remainder—the test set—is used to evaluate the system’s performance after it has been trained.
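The train/test protocol above can be sketched as follows. The toy data and the 25 percent hold-out fraction are illustrative assumptions, not from the source:

```python
import random

def split_train_test(positives, negatives, test_fraction=0.25, seed=0):
    # Label positive examples 1 and negative examples 0, shuffle, and
    # hold out a fraction as the test set, used only after training.
    data = [(x, 1) for x in positives] + [(x, 0) for x in negatives]
    rng = random.Random(seed)
    rng.shuffle(data)
    n_test = int(len(data) * test_fraction)
    return data[n_test:], data[:n_test]  # (training set, test set)

# e.g. 80 positive examples and 40 negative ones -> 90 train, 30 test
train_set, test_set = split_train_test(range(80), range(80, 120))
```

The key point is that the test set is kept out of training entirely, so accuracy on it estimates how the system handles examples it has never seen.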

21
Q

Name and describe the algorithm used to carry out this supervised learning

A

The perceptron-learning algorithm: Initially, the weights and threshold are set to random values between −1 and 1. The first training example is given to the perceptron; at this point, the perceptron doesn’t see the correct category label. The perceptron multiplies each input by its weight, sums up all the results, compares the sum with the threshold, and outputs either 1 or 0. If the perceptron is correct, the weights and threshold don’t change. But if the perceptron is wrong, the weights and threshold are changed a little bit, making the perceptron’s sum on this training example closer to producing the right answer.

Moreover, the amount each weight is changed depends on its associated input value; that is, the blame for the error is meted out depending on which inputs had the most impact. For example, in the 8 of figure 3A, the higher-intensity (here, black) pixels would have the most impact, and the pixels with 0 intensity (here, white) would have no impact. This process then repeats, with gradual changes (similar to Skinnerian conditioning), until the perceptron settles on a set of weights and a threshold value that work for the whole training set.
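The perceptron-learning algorithm described above can be sketched as follows. The learning rate, epoch count, and the tiny AND task are illustrative choices, not from the original:

```python
import random

def train_perceptron(examples, epochs=100, rate=0.1, seed=0):
    rng = random.Random(seed)
    n_inputs = len(examples[0][0])
    # Start with random weights and threshold between -1 and 1, as described.
    weights = [rng.uniform(-1, 1) for _ in range(n_inputs)]
    threshold = rng.uniform(-1, 1)
    for _ in range(epochs):
        for inputs, label in examples:
            output = 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0
            error = label - output  # 0 if correct, +1 or -1 if wrong
            if error:
                # Each weight's change is proportional to its input value,
                # so the inputs with the most impact take the most blame.
                weights = [w + rate * error * x for w, x in zip(weights, inputs)]
                threshold -= rate * error  # lowering the threshold makes firing easier
    return weights, threshold

# Learn the AND function, a task a single perceptron can represent:
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, threshold = train_perceptron(examples)
```

For linearly separable tasks like AND, this procedure is guaranteed to converge (the perceptron convergence theorem); the limitation Minsky and Papert later highlighted is that many tasks are not linearly separable.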

22
Q

Did symbolic or subsymbolic AI ‘dominate’ the field for the first three decades?

A

Symbolic AI of the kind illustrated by GPS ended up dominating the field for its first three decades, most notably in the form of expert systems, in which human experts devised rules for computer programs to use in tasks such as medical diagnosis and legal decision-making.

23
Q

How does the perceptron highlight a major difference between symbolic and subsymbolic AI?

A

The fact that a perceptron’s “knowledge” consists of a set of numbers—namely, the weights and threshold it has learned—means that it is hard to uncover the rules the perceptron is using in performing its recognition task. The perceptron’s rules are not symbolic; unlike the General Problem Solver’s symbols, such as LEFT-BANK, #MISSIONARIES, and MOVE, a perceptron’s weights and threshold don’t stand for particular concepts. It’s not easy to translate these numbers into rules that are understandable by humans. The situation gets much worse with modern neural networks that have millions of weights.

24
Q

Describe a rough analogy here in these forms of AI and neurons

A

If you watch neurons firing, you would likely not get any insight into the thinking or the “rules” you used to make a particular decision. However, the human brain has given rise to language, which allows you to use symbols (words and phrases) to tell me—often imperfectly—what your thoughts are about or why you did a certain thing. In this sense, our neural firings can be considered subsymbolic.

25
Q

What limitations did the perceptron have, and what consequence did this have?

A

Minsky and his MIT colleague Seymour Papert published a book, Perceptrons, in which they gave a mathematical proof showing that the types of problems a perceptron could solve perfectly were very limited and that the perceptron-learning algorithm would not do well in scaling up to tasks requiring a large number of weights and thresholds. Minsky and Papert pointed out that if a perceptron is augmented by adding a “layer” of simulated neurons, the types of problems that the device can solve is, in principle, much broader. A perceptron with such an added layer is called a multilayer neural network. Such negative speculations were at least part of the reason that funding for neural network research dried up in the late 1960s, at the same time that symbolic AI was flush with government dollars.

26
Q

What further consequences followed this dispatching of subsymbolic AI?

A

Proponents of symbolic AI were writing grant proposals promising impending breakthroughs in areas such as speech and language understanding, commonsense reasoning, robot navigation, and autonomous vehicles. By the mid-1970s, while some very narrowly focused expert systems were successfully deployed, the more general AI breakthroughs that had been promised had not materialized. Funding agencies and the government noticed, and funding was drastically reduced.

27
Q

Describe the cycle that this demonstrates, which tends to occur in AI research

A

Phase 1 (AI spring): New ideas create a lot of optimism in the research community. Imminent AI breakthroughs are promised, and often hyped in the news media. Money pours in from government funders and venture capitalists for both academic research and commercial start-ups.

Phase 2 (AI winter): The promised breakthroughs don’t occur, or are much less impressive than promised. Government funding and venture capital dry up. Start-up companies fold, and AI research slows.

28
Q

What does it mean to say that ‘easy things are hard’?

A

The original goals of AI—computers that could converse with us in natural language, describe what they saw through their camera eyes, learn new concepts after seeing only a few examples—are things that young children can easily do, but, surprisingly, these “easy things” have turned out to be harder for AI to achieve than diagnosing complex diseases, beating human champions at chess and Go, and solving complex algebraic problems.

29
Q

Like every AI spring before it, our current one features experts predicting that “general AI”—AI that equals or surpasses humans in most ways—will be here soon.

Why is this unlikely based on current AI?

A

While much of this optimism is based on the recent successes of deep learning, these programs—like all instances of AI to date—are still examples of what is called “narrow” or “weak” AI. That is, they are systems that can perform only one narrowly defined task (or a small set of related tasks), e.g., Google Translate.

30
Q

When and how was the Turing test passed?

A

In 2014, a chatbot named “Eugene Goostman,” created by a group of Russian and Ukrainian programmers, won the competition by fooling ten (33.3 percent) of the thirty judges. The chatbot’s programmers had given it linguistic rules that allow it to pinpoint key information in its input and to store that information for later use. In addition, the chatbot stores a database of “commonsense knowledge,” encoded by human programmers, along with some logic rules. If none of the chatbot’s rules apply to an input, it just changes the subject. The system’s rules also encode its “personality”—in this case, a thirteen-year-old Ukrainian boy whose English is good but (conveniently) not perfect.

31
Q

Ray Kurzweil has long been AI’s leading optimist. A former student of Marvin Minsky’s at MIT, Kurzweil has had a distinguished career as an inventor: he invented the first text-to-speech machine as well as one of the world’s best music synthesizers.

But what is he known best for?

A

His futurist prognostications, most notably the idea of the Singularity: “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.” Kurzweil uses the term singularity in the sense of “an event capable of rupturing the fabric of human history.” For Kurzweil, this singular event is the point in time when AI exceeds human intelligence.

32
Q

Kurzweil’s ideas were spurred by the mathematician I. J. Good’s speculations on the potential of an intelligence explosion. What is meant by this ‘intelligence explosion’?

A

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.”

33
Q

What does Kurzweil base his ideas on?

A

Kurzweil bases all of his predictions on the idea of “exponential progress” in many areas of science and technology, especially computers.

34
Q

What is meant by Moore’s law?

A

Gordon Moore, cofounder of Intel Corporation, identified a trend that has come to be known as Moore’s law: the number of components on a computer chip doubles approximately every one to two years. In other words, the components are getting exponentially smaller (and cheaper), and computer speed and memory are increasing at an exponential rate.
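As a back-of-the-envelope illustration of this exponential trend (the eighteen-month doubling time is an assumed value in the middle of the “one to two years” range):

```python
def components(start_count, years, doubling_time=1.5):
    # Exponential growth: one doubling every `doubling_time` years.
    return start_count * 2 ** (years / doubling_time)

# Fifteen years is ten doublings, a 1,024-fold increase:
components(1_000_000, 15)  # ~1.024 billion components
```

This compounding is why Kurzweil treats exponential progress, rather than linear progress, as the basis of his predictions.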

35
Q

Why is a ‘futurist’ a nice career to have?

A

You write books making predictions that can’t be evaluated for decades and whose ultimate validity won’t affect your reputation—or your book sales—in the here and now.

36
Q

What was created to counter this pattern?

A

In 2002, a website called Long Bets was created to help keep futurists honest. Long Bets is “an arena for competitive, accountable predictions,” allowing a predictor to make a long-term prediction that specifies a date and a challenger to challenge the prediction, both putting money on a wager that will be paid off after the prediction’s date has passed.