1. Fundamentals of Deep Learning Flashcards
How do artificial intelligence, machine learning, and deep learning relate to each other?
AI
The effort to automate intellectual tasks normally performed by humans.
Symbolic AI
For a fairly long time, many experts believed that human-level artificial intelligence could be achieved by having programmers handcraft a sufficiently large set of explicit rules for manipulating knowledge.
Limitations of Symbolic AI
Although symbolic AI proved suitable to solve well-defined, logical problems, such as playing chess, it turned out to be intractable to figure out explicit rules for solving more complex, fuzzy problems, such as image classification, speech recognition, and language translation.
Machine Learning vs Classical Programming
In classical programming, humans input rules (a program) and data to be processed according to those rules, and out come answers. With machine learning, humans input data and the answers expected from the data, and out come the rules; those rules can then be applied to new data to produce original answers.
3 things required for ML
Input data points; examples of the expected output; and a way to measure whether the algorithm is doing a good job, so that the distance between its output and the expected output can be used as a feedback signal to adjust the way it works (this adjustment step is what we call learning).
Central problem in ML
The central problem in machine learning and deep learning is to meaningfully transform data: in other words, to learn useful representations of the input data at hand, representations that get us closer to the expected output.
What's a representation?
At its core, it's a different way to look at data: a way to represent or encode it. For instance, a color image can be encoded in the RGB format (red-green-blue) or in the HSV format (hue-saturation-value); these are two different representations of the same data. Some tasks that may be difficult with one representation can become easy with another. For example, the task "select all red pixels in the image" is simpler in the RGB format, whereas "make the image less saturated" is simpler in the HSV format. Machine-learning models are all about finding appropriate representations for their input data: transformations of the data that make it more amenable to the task at hand, such as a classification task.
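A minimal sketch of that RGB/HSV point, using Python's standard-library colorsys module; the pixel values, threshold, and factor below are illustrative only:

```python
import colorsys

# One pixel, encoded two ways: RGB and HSV are different representations
# of the same data (all channel values here are in [0, 1]).
rgb_pixel = (0.9, 0.1, 0.1)                  # a mostly-red pixel
hsv_pixel = colorsys.rgb_to_hsv(*rgb_pixel)  # (hue, saturation, value)

# "Select all red pixels" is easy in RGB: just compare the channels.
def is_red(r, g, b, margin=0.3):
    return r > g + margin and r > b + margin

# "Make the image less saturated" is easy in HSV: scale a single number.
def desaturate(h, s, v, factor=0.5):
    return (h, s * factor, v)

print(is_red(*rgb_pixel))                             # True
print(colorsys.hsv_to_rgb(*desaturate(*hsv_pixel)))   # a washed-out red
```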
Deep Learning
Deep learning is a specific subfield of machine learning: a new take on learning representations from data that puts an emphasis on learning successive layers of increasingly meaningful representations. The deep in deep learning isn't a reference to any kind of deeper understanding achieved by the approach; rather, it stands for this idea of successive layers of representations. How many layers contribute to a model of the data is called the depth of the model. Other appropriate names for the field could have been layered representations learning and hierarchical representations learning. Modern deep learning often involves tens or even hundreds of successive layers of representations, and they're all learned automatically from exposure to training data. Meanwhile, other approaches to machine learning tend to focus on learning only one or two layers of representations of the data; hence, they're sometimes called shallow learning.
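As a rough sketch of what "successive layers of representations" looks like in code, assuming a Keras-style API; the layer sizes and the 784-dimensional input are placeholders, not from the text:

```python
from tensorflow import keras
from tensorflow.keras import layers

# A "deep" model is literally a stack of successive layers; each layer
# learns a new representation of the previous layer's output.
model = keras.Sequential([
    keras.Input(shape=(784,)),               # raw input (e.g. a flattened image)
    layers.Dense(64, activation="relu"),     # representation 1
    layers.Dense(64, activation="relu"),     # representation 2
    layers.Dense(10, activation="softmax"),  # final, task-oriented representation
])
model.summary()  # the number of stacked layers is the model's depth
```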
What do the representations learned by a deep-learning algorithm look like?
Deep Learning Process Diagram
Kernel Methods
Kernel methods are a group of classification algorithms, the best known of which is the support vector machine (SVM).
Decision Boundary
SVMs aim at solving classification problems by finding good decision boundaries (see figure 1.10) between two sets of points belonging to two different categories. A decision boundary can be thought of as a line or surface separating your training data into two spaces corresponding to two categories. To classify new data points, you just need to check which side of the decision boundary they fall on.
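A minimal sketch of that idea with scikit-learn; the toy points and labels below are made up for illustration:

```python
import numpy as np
from sklearn.svm import SVC

# Two tiny clusters of 2-D points, one per category (toy data).
X = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],    # category 0
              [1.0, 1.0], [0.9, 1.2], [1.1, 0.8]])   # category 1
y = np.array([0, 0, 0, 1, 1, 1])

# A linear SVM fits a decision boundary (here, a straight line) between the classes.
clf = SVC(kernel="linear").fit(X, y)

# Classifying a new point just means checking which side of the boundary it falls on.
print(clf.predict([[0.15, 0.2], [0.95, 1.0]]))   # -> [0 1]
print(clf.decision_function([[0.15, 0.2]]))      # signed distance from the boundary
```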
SVMs proceed to find these boundaries in two steps: