Chapter 6 Flashcards

1
Q

deep neural networks learning strategy

A

learning from data

2
Q

GOFAI learning strategy

A

human programmers construct explicit rules for intelligent behavior, rather than the system learning them from data (GOFAI = "good old-fashioned AI")

3
Q

convnet learning strategy

A

supervised learning

4
Q

supervised learning

A

convnets gradually adjust their weights as they process the labeled examples in the training set again and again, over many epochs, learning to classify each input as one of a fixed set of possible output categories
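
A minimal sketch (not from the book) of what that looks like in code: a tiny classifier whose weights are nudged by gradient descent on a labeled training set, epoch after epoch. The data, sizes, and learning rate below are made up for illustration.

```python
import numpy as np

# Toy labeled training set (made up): 6 inputs with 4 features each,
# each labeled as one of 3 fixed output categories.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))          # training inputs
y = np.array([0, 1, 2, 0, 1, 2])     # labels from a fixed set of categories

W = np.zeros((4, 3))                 # weights, adjusted gradually during learning
b = np.zeros(3)
learning_rate = 0.1                  # set by a human (a hyperparameter)

for epoch in range(100):             # process the training set again and again
    logits = X @ W + b
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)

    # Gradient of the cross-entropy loss; the labels are the supervision signal.
    grad = probs.copy()
    grad[np.arange(len(y)), y] -= 1.0
    W -= learning_rate * (X.T @ grad) / len(y)
    b -= learning_rate * grad.mean(axis=0)

print("predicted categories:", (X @ W + b).argmax(axis=1))
```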

5
Q

difference between human learning and supervised learning / deep learning

A

even the youngest children learn an open-ended set of categories and can recognize instances of most categories after seeing only a few examples, whereas convnets need huge numbers of examples (big data)

Moreover, unlike convnets, children don’t learn passively: they ask questions etc.

hence the reliance of deep-learning networks on extensive collections of labeled training data

6
Q

hyperparameters

A

an umbrella term that refers to all the aspects of the network that need to be set up by humans to allow learning to even begin

While convnets use back-propagation to learn their “parameters” (that is, weights) from training examples, this learning is enabled by a collection of hyperparameters,

so it is inaccurate to say that today’s successful convnets learn “on their own.”

7
Q

examples of hyperparameters

A

the number of layers in the network, the size of the units’ “receptive fields” at each layer, and how large the change in each weight should be during learning (the learning rate)
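
A sketch of how such settings might be written down before training begins, for a hypothetical convnet; the names and values are illustrative assumptions, not from the book.

```python
# Hypothetical hyperparameters, chosen by a human before learning can even begin.
# The network then learns its parameters (weights) via back-propagation.
hyperparameters = {
    "num_layers": 8,                                     # number of layers in the network
    "receptive_field_sizes": [3, 3, 5, 5, 5, 7, 7, 7],   # per-layer receptive-field size
    "learning_rate": 0.01,                               # how large each weight change is
    "epochs": 50,                                        # passes over the training set
    "batch_size": 128,                                   # examples per weight update
}
```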

8
Q

long-tail problem

A

the vast range of possible unexpected situations an AI system could be faced with.

This is a problem if we rely solely on supervised learning to provide our AI system with its knowledge of the world; the situations in the tail don’t show up in the training data often enough, if at all, so the system is more likely to make errors when faced with such unexpected cases.

for this reason, supervised learning is not a viable path to general-purpose AI.

9
Q

solution to long tail problem

A

to use supervised learning on small amounts of labeled data and learn everything else via unsupervised learning.

10
Q

UNsupervised learning

A

a broad group of methods for learning categories or actions WITHOUT LABELED DATA

Examples include methods for clustering examples based on their similarity or learning a new category via analogy to known categories

no one has yet come up with the kinds of algorithms needed to perform successful unsupervised learning
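
A minimal sketch of one such method, clustering examples by similarity with a bare-bones k-means; note that no labels appear anywhere. The toy data and the choice of two clusters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Unlabeled examples: two loose groups of 2-D points (made-up data).
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])

k = 2                                   # number of clusters, chosen by a human
centers = X[rng.choice(len(X), k, replace=False)]

for _ in range(10):
    # Assign each example to its most similar (nearest) cluster center.
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    assignments = dists.argmin(axis=1)
    # Move each center to the mean of the examples assigned to it.
    centers = np.array([X[assignments == j].mean(axis=0) for j in range(k)])

print("cluster sizes:", np.bincount(assignments, minlength=k))
```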

11
Q

human competence that AI lacks

A

common sense

Many people believe that until AI systems have common sense as humans do, we won’t be able to trust them to be fully autonomous in complex real-world situations

12
Q

overfitting

A

The machine learns what it observes in the data rather than what you (the human) might observe.

If there are statistical associations in the training data, even if irrelevant to the task at hand, the machine will happily learn those instead of what you wanted it to learn.
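
A small demonstration of that failure mode, assuming scikit-learn is available: a flexible model fit to pure noise memorizes the accidental patterns in its training data and learns nothing that carries over to new data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Features are pure noise and labels are random: there is nothing real to learn.
X_train, y_train = rng.normal(size=(200, 20)), rng.integers(0, 2, 200)
X_test,  y_test  = rng.normal(size=(200, 20)), rng.integers(0, 2, 200)

model = DecisionTreeClassifier()        # flexible enough to memorize its input
model.fit(X_train, y_train)

# The model happily picks up spurious associations in the training noise ...
print("train accuracy:", model.score(X_train, y_train))   # close to 1.0
# ... which tell it nothing about data it has not seen.
print("test accuracy: ", model.score(X_test, y_test))     # around 0.5 (chance)
```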

13
Q

bias in AI

A

data sets for training face-recognition systems contain disproportionately many faces that are male and white, because the images were downloaded from online image searches, and photos of faces that appear online are skewed toward famous or powerful people, who are predominantly white and male.

AI systems trained on biased data can magnify these biases and do real damage

14
Q

explainable AI, transparent AI, or interpretable machine learning

A

these terms refer to research on getting AI systems, particularly deep networks, to explain their decisions in a way that humans can understand

examples include ways to visualize the features that a given convolutional neural network has learned and, in some cases, to determine which parts of the input are most responsible for the output decision.
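
A minimal sketch of one such technique, a gradient-based saliency map, assuming PyTorch is available; a tiny untrained network stands in for a real trained convnet, since the point here is only the mechanics of the computation.

```python
import torch
import torch.nn as nn

# Tiny stand-in convnet (untrained; a real system would use a trained model).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)

image = torch.rand(1, 3, 32, 32, requires_grad=True)    # made-up input image
scores = model(image)
top_class = scores.argmax(dim=1).item()

# Gradient of the winning class score with respect to the input pixels:
# large magnitudes mark the pixels most responsible for the decision.
scores[0, top_class].backward()
saliency = image.grad.abs().max(dim=1).values            # one map per image

print("saliency map shape:", tuple(saliency.shape))      # (1, 32, 32)
```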

15
Q

adversarial example

A

making specific, tiny changes to the pixels of an image so that it looks completely unchanged to humans but gets classified by the system, with very high confidence, as something completely different

the opposite is also possible: to computationally “evolve” images that look like random noise to humans but to which the system assigns specific object categories with greater than 99 percent confidence.
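
A minimal sketch of one well-known way to craft such pixel changes, a fast-gradient-sign style perturbation, assuming PyTorch; an untrained toy model stands in for a real classifier, and the image, label, and epsilon values are made up.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in classifier (a real attack would target a trained convnet).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

image = torch.rand(1, 3, 32, 32, requires_grad=True)   # made-up input image
true_label = torch.tensor([3])
epsilon = 0.01                                          # tiny, invisible perturbation

loss = F.cross_entropy(model(image), true_label)
loss.backward()

# Nudge every pixel slightly in the direction that increases the loss;
# against a real trained network this is often enough to flip the prediction
# with high confidence, even though the image looks unchanged to humans.
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```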

16
Q

adversarial learning

A

developing strategies that defend against potential (human) adversaries who could attack machine-learning systems

Adversarial-learning researchers often start their work by demonstrating possible ways in which existing systems can be attacked

17
Q

Clever Hans

A

a metaphor for any individual (or program!) that gives the appearance of understanding but is actually responding to unintentional cues given by a trainer.