Machine Learning - Supervised Flashcards
Classification
We are trying to predict results in a discrete output. In other words, we are trying to map input variables into discrete categories.
Classification: Linear Binary
Binary classification problems arise when we seek to separate two sets of data points in R^n, each corresponding to a given class. We seek to separate the two data sets using simple "boundaries", typically hyperplanes. Once the boundary is found, we can use it to predict the class a new point belongs to, by simply checking on which side of the boundary it falls.
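As a minimal sketch of the prediction step (assuming a hyperplane with weight vector w and offset b has already been found by some training procedure), classifying a new point reduces to checking the sign of w·x + b:

```python
import numpy as np

# Hypothetical hyperplane parameters produced by training:
# points with w.x + b >= 0 are labeled +1, the rest -1.
w = np.array([2.0, -1.0])   # normal vector of the separating hyperplane
b = 0.5                     # offset

def classify(x):
    """Predict a class by checking which side of the boundary x falls on."""
    return 1 if np.dot(w, x) + b >= 0 else -1

print(classify(np.array([1.0, 0.0])))   # -> 1
print(classify(np.array([-1.0, 3.0])))  # -> -1
```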
Decision Trees
A tree of questions to guide an end user to a conclusion based on values from a single vector of data. The classic example is a medical diagnosis based on a set of symptoms for a particular patient. A common problem in data science is to automatically or semi-automatically generate decision trees based on large sets of data coupled to known conclusions. Example algorithms are CART and ID3. (Submitted by Michael Malak)
Decision Trees: Best Uses
…
Decision Trees: Cons
1: Complex trees are hard to interpret; 2: Duplication within the same sub-tree is possible
Decision Trees: Dealing with missing values
1: Have a specific edge for missing values; 2: Track the number of instances that follow each path and assign the instance to the most popular edge; 3: Give the instance a weight and split it evenly down each possible edge; once the fragments reach the leaf nodes, recombine them using the weights.
Decision Trees: Definition
Each node in the tree tests a single attribute, decisions are represented by the edges leading away from that node with leaf nodes representing the final decision.
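As a quick illustration, scikit-learn's DecisionTreeClassifier (a CART implementation) builds such a tree of attribute tests; the toy features and labels below are invented for the sketch:

```python
from sklearn.tree import DecisionTreeClassifier

# Each row: [age, resting heart rate]; label 1 = "at risk", 0 = "healthy"
X = [[25, 60], [42, 85], [61, 95], [35, 70], [70, 100], [28, 65]]
y = [0, 1, 1, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2)  # a shallow tree stays interpretable
tree.fit(X, y)
print(tree.predict([[50, 90]]))  # follows the learned attribute tests to a leaf
```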
Decision Trees: Example Applications
1: Star classification; 2: Medical diagnosis; 3: Credit risk analysis
Decision Trees: Flavors
CART, ID3
Decision Trees: Pros
1: Fast; 2: Robust to noise and missing values; 3: Accurate
Decision Trees: Pruning
Since decision trees are often created by continuously splitting the instances until there is no more information gain, it is possible that some splits were made with too little information gain and overfit the model to the training data. The basic idea is to check child nodes to see whether combining them with their parent would result in an increase in entropy below some threshold; if so, collapse them.
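A minimal bottom-up sketch, assuming a hypothetical dict-based tree whose construction recorded each split's information gain and the majority class of the instances reaching each node:

```python
def prune(node, threshold=0.01):
    """Collapse subtrees whose split gained less than `threshold`."""
    if 'children' not in node:                 # leaf: nothing to prune
        return node
    node['children'] = {edge: prune(child, threshold)
                        for edge, child in node['children'].items()}
    if node['gain'] < threshold:               # split was barely informative
        return {'label': node['majority_class']}  # replace with a leaf
    return node
```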
Decision Trees: Replicated subtree problem
Because only a single attribute is tested at a node, two copies of the same subtree may need to be placed in the tree if the attributes from that subtree were not tested at the root node.
Decision Trees: Restriction Bias
…
Decision Trees: Testing nominal values
If a node has an edge for each possible value of a nominal attribute, then that attribute will not be tested again further down the tree. If a node groups the values into subsets with an edge per subset, then that attribute may be tested again.
Decision Trees: Testing numeric values
Numeric values can be tested for less than, greater than, equal to, or within some range. A test can result in just two edges or multiple edges, and can also have an edge that represents a missing value. A numeric attribute may be tested multiple times in a single path.
Deep Learning
Refers to a class of methods that includes neural networks and deep belief nets. Useful for finding a hierarchy of the most significant features, characteristics, and explanatory variables in complex data sets. Particularly useful in unsupervised machine learning of large unlabeled datasets. The goal is to learn multiple layers of abstraction for some data. For example, to recognize images, we might want to first examine the pixels and recognize edges; then examine the edges and recognize contours; examine the contours to find shapes; the shapes to find objects; and so on.
Ensemble Learning
…
Hidden Layer
The second (middle) layer of a three-layer network: it receives the input layer's signals and performs intermediary processing before passing results to the output layer.
K-Nearest Neighbors: Cons
1: Performs poorly on high-dimensional datasets; 2: Expensive and slow to predict new instances; 3: Must define a meaningful distance function
K-Nearest Neighbors: Definition
K-NN is an algorithm that can be used when you have objects that have been classified or labeled and other similar objects that haven't been classified or labeled yet, and you want a way to automatically label them.
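A minimal hand-rolled sketch, assuming Euclidean distance is a meaningful distance function for the features:

```python
import numpy as np

def knn_predict(X_train, y_train, x_new, k=3):
    """Label x_new with the majority class among its k nearest neighbors."""
    dists = np.linalg.norm(X_train - x_new, axis=1)  # distance to every labeled point
    nearest = np.argsort(dists)[:k]                  # indices of the k closest
    return np.bincount(y_train[nearest]).argmax()    # most common label wins

X_train = np.array([[1.0, 1.0], [1.2, 0.8], [8.0, 9.0], [7.5, 8.5]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([1.1, 0.9])))  # -> 0
```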
K-Nearest Neighbors: Example Applications
1: Computer security: intrusion detection; 2: Fault detection in semiconductor manufacturing; 3: Video content retrieval; 4: Gene expression
K-Nearest Neighbors: Preference Bias
Good for distance-based approximations; good for outlier detection
K-Nearest Neighbors: Pros
1: Simple; 2: Powerful; 3: Lazy, no training involved; 4: Naturally handles multiclass classification and regression
K-Nearest Neighbors: Restriction Bias
Low-dimensional datasets
K-Nearest Neighbors: Type
Supervised learning, instance based
Linear Regression: Cons
1: Unable to model complex relationships; 2: Unable to capture nonlinear relationships without first transforming the inputs
Linear Regression: Definition
Trying to fit a linear continuous function to the data to predict results. Can be univariate or multivariate.
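A minimal univariate sketch using ordinary least squares (the toy data roughly follows y = 2x):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

A = np.column_stack([x, np.ones_like(x)])        # design matrix [x, 1]
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares fit of y ~ w*x + b
print(w, b)                                      # slope ~2, intercept ~0
print(w * 6.0 + b)                               # predict at x = 6
```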
Linear Regression: Example Applications
1: Fitting a line
Linear Regression: Preference Bias
1: Prefers continuous variables; 2: A first look at a dataset; 3: Numerical data with lots of features
Linear Regression: Pros
1: Very fast (prediction runs in constant time); 2: Easy to understand the model; 3: Less prone to overfitting
Linear Regression: Restriction Bias
Low restriction on problems it can solve
Linear Regression: Type
Supervised learning, regression class
Logistic Regression
A kind of regression analysis often used when the dependent variable is dichotomous and scored 0 or 1. It is usually used for predicting whether something will happen or not, such as graduation, business failure, or heart attack; anything that can be expressed as event/non-event. Independent variables may be categorical or continuous in logistic regression analysis.
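A minimal prediction sketch; the coefficients below are hypothetical stand-ins for values that would come from maximum-likelihood fitting (e.g., hours studied vs. pass/fail):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = 1.5, -4.0  # hypothetical fitted coefficient and intercept

def predict_proba(x):
    """Probability that the event (scored 1) occurs."""
    return sigmoid(w * x + b)

print(predict_proba(2.0))        # ~0.27 -> predict non-event
print(predict_proba(4.0) > 0.5)  # True  -> predict the event
```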
Multiclass-Classification: One-vs-all
Multiclass classification. Reducing a classification problem with multiple classes to a set of binary classification problems by training one classifier per class, each separating that class from all the others. To determine the final prediction we take the class with the maximum predicted value.
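A minimal sketch of that final step; the lambdas below are toy stand-ins for three trained binary classifiers, each scoring "its" class against all the others:

```python
import numpy as np

def one_vs_all_predict(classifiers, x):
    """Predict the class whose one-vs-all classifier is most confident."""
    return int(np.argmax([clf(x) for clf in classifiers]))

classifiers = [lambda x: -x, lambda x: x - 1.0, lambda x: 0.2 * x]
print(one_vs_all_predict(classifiers, 3.0))  # scores [-3, 2, 0.6] -> class 1
```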
Naive Bayes
…
Naive Bayes: Cons
…
Naive Bayes: Definition
Given its simplicity and the assumption that the independent variables are statistically independent, Naive Bayes models are effective classification tools that are easy to use and interpret. Naive Bayes is particularly appropriate when the dimensionality of the independent space is high. For these reasons, Naive Bayes can often outperform other, more sophisticated classification methods.
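A minimal Gaussian flavor of the idea, written out by hand: model each input as an independent normal distribution per class, then pick the class with the highest posterior. The toy data is invented for the sketch:

```python
import numpy as np

def fit(X, y):
    """Per class: feature means, feature variances, and the class prior."""
    return {c: (X[y == c].mean(axis=0),
                X[y == c].var(axis=0) + 1e-9,
                np.mean(y == c))
            for c in np.unique(y)}

def predict(params, x):
    def log_posterior(c):
        mean, var, prior = params[c]
        # Independence assumption: per-feature log-likelihoods just add up.
        log_lik = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)
        return np.log(prior) + log_lik
    return max(params, key=log_posterior)

X = np.array([[1.0, 2.0], [1.2, 1.8], [4.0, 5.0], [4.2, 5.2]])
y = np.array([0, 0, 1, 1])
print(predict(fit(X, y), np.array([1.1, 2.1])))  # -> 0
```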
Naive Bayes: Example Applications
…
Naive Bayes: Flavors
A variety of methods exist for modeling the conditional distributions of the inputs including normal, lognormal, gamma, and Poisson.
Naive Bayes: Preference Bias
Works on problems where the inputs are independent of each other
Naive Bayes: Pros
1: Easy to use and interpret; 2: Works well with high dimensional problems
Naive Bayes: Restriction Bias
Prefers problems where the probability will always be greater than zero for each class
Naive Bayes: Type
Supervised learning; used for classification; probabilistic approach
Neural Networks
In neural networks, the process of calculating the successive layers of the network (forward propagation): each layer depends on the calculations done on the layer before it.
Neural Networks
Interconnected neural cells. With experience, networks can learn, as feedback strengthens or inhibits connections that produce certain results. Computer simulations of neural networks show analogous learning.
Neural Networks: Cons
1: Prone to overfitting; 2: Long training time; 3: Requires significant computing power for large datasets; 4: Model is essentially unreadable; 5: Work best with “homogeneous” data where features all have similar meanings
Neural Networks: Definition
With experience, networks can learn, as feedback strengthens or inhibits connections that produce certain results. Each layer depends on the calculations done on the layer before it.
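A minimal forward-pass sketch for one hidden layer; the random weights are placeholders for values that training (e.g., backpropagation) would produce:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)  # input (2) -> hidden (3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)  # hidden (3) -> output (1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(W1 @ x + b1)     # hidden layer depends on the input layer
    return sigmoid(W2 @ h + b2)  # output layer depends on the hidden layer

print(forward(np.array([0.5, -0.2])))
```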
Neural Networks: Example Applications
1: Images; 2: Video; 3: “Human-intelligence” type tasks like driving or flying; 4: Robotics
Neural Networks: Flavors
Deep learning
Neural Networks: Preference Bias
Prefers binary inputs
Neural Networks: Pros
1: Extremely powerful, can model even very complex relationships; 2: No need to understand the underlying data; 3: Almost works by “magic”
Neural Networks: Random Initialization
Symmetry breaking for neural networks is achieved by initializing the weights to small random values rather than zeros, so that different hidden units start out computing different functions.
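A common sketch of that initialization:

```python
import numpy as np

# If every weight started at zero, all hidden units would compute the same
# function and receive identical gradient updates; small random values break
# the symmetry. Biases may safely start at zero.
n_hidden, n_inputs = 4, 3
W = np.random.randn(n_hidden, n_inputs) * 0.01
b = np.zeros(n_hidden)
```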
Neural Networks: Restriction Bias
Little restriction bias
Neural Networks: Type
Supervised learning; nonlinear functional approximation
Overview of algorithms
…
Probabilistic Graphical Model (a.k.a. Graphical Model)
Ways of encoding the structure (independencies) of a probability distribution into a picture. The two main types of graphical models are directed graphical models and undirected graphical models, probability distributions represented by directed and undirected graphs respectively. Each node in the graph represents a random variable, and a connection between two nodes indicates a possible dependence between the random variables. So, for example, a fully disconnected graph would represent a fully independent set of random variables, meaning the distribution could be fully factored as P(x,y,z,…)=P(x)P(y)P(z)… Note that the graphs represent structures, not probabilities themselves.
Random Forest
…
Random Forests
A decision tree classifier that produces a “forest of trees”, yielding highly accurate models, essentially by iteratively randomizing one input variable at a time in order to learn if this randomization process actually produces a less accurate classifier. If it doesn’t, then that variable is ousted from the model.
Recommendation Systems: Collaborative filtering
Based on past user behavior. Each user’s history of behaviors (ratings, purchases, or viewing history) is used to make associations between users with similar behavior and between items of interest to the same users. Example: Netflix. Methods: 1. Neighborhood-based methods, based on user-user or item-item distances; 2. Latent factor or reduced-dimension models, which automatically discover a small number of descriptive factors for users and items; 3. Low-rank matrix factorization is the best-known example of reduced-dimension models and is among the most flexible and successful methods underlying recommendation systems. There are many variants of matrix factorization, including probabilistic and Bayesian versions. Restricted Boltzmann machines, a type of deep learning neural network, are another state-of-the-art approach.
Recommendation Systems: Collaborative filtering: Matrix Factorization
Low-rank matrix factorization is among the most flexible and successful methods underlying recommendation systems. There are many variants, including probabilistic and Bayesian versions. Restricted Boltzmann machines, a type of deep learning neural network, are another state-of-the-art approach.
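A minimal low-rank factorization sketch: approximate the ratings matrix R as U @ V.T and learn the factors by gradient descent on the observed entries only (zeros below stand for "unrated"; all numbers are toy values):

```python
import numpy as np

R = np.array([[5.0, 3.0, 0.0],
              [4.0, 0.0, 1.0],
              [0.0, 1.0, 5.0]])
mask = R > 0                            # which entries were actually observed
k, lr, reg = 2, 0.01, 0.1               # latent factors, learning rate, regularization
rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(3, k))  # user factors
V = rng.normal(scale=0.1, size=(3, k))  # item factors

for _ in range(2000):
    E = mask * (R - U @ V.T)            # error on observed entries only
    U += lr * (E @ V - reg * U)         # gradient step on user factors
    V += lr * (E.T @ U - reg * V)       # gradient step on item factors

print(np.round(U @ V.T, 1))             # reconstruction fills in the blanks
```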
Recommendation Systems: Companies using them
Retailers: Amazon, Target; Movies + Music sites: Netflix, last.fm, Pandora; Social networks: Facebook, Twitter; Grocery stores: Tesco; Content publishers; Ad networks: Yahoo!, Google; CRM: Next-best offer in marketing decision making
Recommendation Systems: Content-based filtering
Gathers information (e.g., demographics, genre, keywords, preferences, survey responses) to generate a profile for each user or item. Users are matched to items based on their profiles. Example: Pandora’s Music Genome Project.
Regression Analysis
We are trying to predict results within a continuous output, meaning that we are trying to map input variables to some continuous function.
Regression Trees
A single regression equation is much smaller and less complex than a regression tree, but tends to also be much less accurate.
Regression Trees
Decision trees which predict numeric quantities. The leaf nodes of these trees have a numeric quantity instead of a class. This numeric quantity is often decided by taking the average of all training set values to which the leaf node applies.
Sigmoid Function
An S-shaped mathematical curve, often used to describe the activation function of a neuron over time.
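In symbols, the standard logistic sigmoid is:

```latex
\sigma(x) = \frac{1}{1 + e^{-x}}
```

It squashes any real input into the interval (0, 1), which is also why it is used to turn a linear score into a probability in logistic regression.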
Stepwise Regression
Variable selection process for multivariate regression. In forward stepwise selection, a seed variable is selected and each additional variable is added to the model, but only kept if it significantly improves goodness of fit (as measured by an increase in R^2). Backwards selection starts with all variables and removes them one by one until removing an additional one decreases R^2 by a non-trivial amount. Two deficiencies of this method are that the chosen seed disproportionately impacts which variables are kept, and that the decision is made using R^2, not adjusted R^2. (submitted by Santiago Perez)
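A minimal forward-selection sketch using scikit-learn, with the same R^2 criterion (and therefore the same deficiency) noted above; `min_gain` is an arbitrary illustrative threshold:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def forward_stepwise(X, y, min_gain=0.01):
    """Greedily add the variable that most improves R^2, stopping when
    no remaining candidate adds at least `min_gain`."""
    selected, best_r2 = [], 0.0
    while True:
        gains = {}
        for j in range(X.shape[1]):
            if j in selected:
                continue
            cols = selected + [j]
            r2 = LinearRegression().fit(X[:, cols], y).score(X[:, cols], y)
            gains[j] = r2 - best_r2
        if not gains or max(gains.values()) < min_gain:
            return selected
        j = max(gains, key=gains.get)
        selected.append(j)
        best_r2 += gains[j]
```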
Supervised Learning
We are given a data set and already know what our correct output should look like, having the idea that there is a relationship between the input and the output. Categorized into “regression” and “classification” problems.
Support Vector Machine
Can extrapolate information from one-dimensional data (the input space), together with some information about weights and correlative relationships, into another dimension (the feature space).
Support Vector Machine
Powerful Jedi machine learning classifier. Among classification algorithms used in supervised machine learning, SVM usually produces the most accurate classifications. Read more about SVM in this article “The Importance of Location in Real Estate, Weather, and Machine Learning.”
Support Vector Machines: Cons
1: Need to select a good kernel function; 2: Model parameters are difficult to interpret; 3: Sometimes numerical stability problems; 4: Requires significant memory and processing power
Support Vector Machines: Definition
Divides an instance space by finding the line that is as far as possible from both classes. This line is called the “maximum-margin hyperplane”. Only the points near the hyperplane are important. These points near the boundary are called the support vectors.
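A quick sketch with scikit-learn's SVC on invented, well-separated toy data:

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[1, 1], [2, 1], [1, 2], [7, 8], [8, 7], [8, 8]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel='linear')            # maximum-margin hyperplane, linear case
clf.fit(X, y)
print(clf.support_vectors_)           # only the boundary points define the model
print(clf.predict([[2, 2], [7, 7]]))  # which side of the hyperplane?
```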
Support Vector Machines: Example Applications
1: Text classification; 2: Image classification; 3: Handwriting recognition
Support Vector Machine: Kernels
Since support vector machines use dot products (just like linear classifiers) when determining the hyperplane, they can be turned into a nonlinear classifier by replacing the dot product with a kernel such as the radial basis function.
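For reference, the radial basis function kernel itself is a one-liner, k(x, z) = exp(-gamma * ||x - z||^2); nearby points score near 1 and distant points near 0:

```python
import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    """RBF kernel: a drop-in replacement for the dot product."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

print(rbf_kernel(np.array([1.0, 0.0]), np.array([1.0, 0.1])))  # ~0.99
print(rbf_kernel(np.array([1.0, 0.0]), np.array([5.0, 4.0])))  # ~0.0
```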
Support Vector Machine: libsvm
Open-source library for SVMs written in C++ (with a Java version as well). Trains an SVM model, makes predictions, and tests predictions within a dataset, with support for kernel methods such as the radial basis function.
Support Vector Machines: Preference Bias
Works where there is a definite distinction between two classifications
Support Vector Machines: Pros
1: Can model complex, nonlinear relationships; 2: Robust to noise (because they maximize margins)
Support Vector Machines: Restriction Bias
Prefers binary classification problems
Support Vector Machines: Type
Supervised learning for defining a decision boundary