Chapter 9: Unsupervised Learning Techniques Flashcards

1
Q

How would you define clustering? Can you name a few clustering algorithms?

A

In Machine Learning, clustering is the unsupervised task of grouping similar instances together. The notion of similarity depends on the task at hand: for example, in some cases two nearby instances will be considered similar, while in others similar instances may be far apart as long as they belong to the same densely packed group. Popular clustering algorithms include K-Means, DBSCAN, agglomerative clustering, BIRCH, Mean-Shift, affinity propagation, and spectral clustering.

2
Q

What are some of the main applications of clustering algorithms?

A

The main applications of clustering algorithms include data analysis, customer segmentation, recommender systems, search engines, image segmentation, semi-supervised learning, dimensionality reduction, anomaly detection, and novelty detection.

3
Q

Describe two techniques to select the right number of clusters when using k-means.

A

The elbow rule is a simple technique to select the number of clusters when using K-Means: just plot the inertia (the sum of the squared distances from each instance to its closest centroid) as a function of the number of clusters, and find the point on the curve where the inertia stops dropping fast (the “elbow”). This is generally close to the optimal number of clusters. Another approach is to plot the silhouette score as a function of the number of clusters. There will often be a peak, and the optimal number of clusters is generally nearby. The silhouette score is the mean silhouette coefficient over all instances. This coefficient varies from +1 for instances that are well inside their cluster and far from other clusters, to –1 for instances that are very close to another cluster. You may also plot the silhouette diagrams and perform a more thorough analysis.
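
A minimal sketch of both techniques with scikit-learn (the make_blobs data and the range of candidate k values are purely illustrative):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Synthetic data, purely for illustration
X, _ = make_blobs(n_samples=1000, centers=5, random_state=42)

ks = range(2, 10)
inertias, silhouettes = [], []
for k in ks:
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    inertias.append(km.inertia_)                         # plot vs. k and look for the elbow
    silhouettes.append(silhouette_score(X, km.labels_))  # plot vs. k and look for the peak

print(list(ks)[int(np.argmax(silhouettes))])  # k with the highest silhouette score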

4
Q

What is label propagation? Why would you implement it, and how?

A

Labeling a dataset is costly and time-consuming. Therefore, it is common to have plenty of unlabeled instances, but few labeled instances. Label propagation is a technique that consists in copying some (or all) of the labels from the labeled instances to similar unlabeled instances. This can greatly extend the number of labeled instances, and thereby allow a supervised algorithm to reach better performance (this is a form of semi-supervised learning). One approach is to use a clustering algorithm such as K-Means on all the instances, then for each cluster find the most common label or the label of the most representative instance (i.e., the one closest to the centroid) and propagate it to the unlabeled instances in the same cluster.
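
A minimal sketch of the K-Means-based approach described above, using the digits dataset purely for illustration (in practice a human would label only the k representative images; here the true labels are looked up just to show the mechanics):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits

X, y = load_digits(return_X_y=True)

k = 50
kmeans = KMeans(n_clusters=k, n_init=10, random_state=42)
X_dist = kmeans.fit_transform(X)  # distance from each instance to each of the k centroids

# Representative instance of each cluster = the one closest to its centroid
representative_idx = np.argmin(X_dist, axis=0)

# A human would label just these k images; here we reveal the true labels instead
y_representative = y[representative_idx]

# Propagate each representative's label to every instance in its cluster
y_propagated = y_representative[kmeans.labels_]
print((y_propagated == y).mean())  # fraction of instances that received the correct label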

5
Q

Can you name two clustering algorithms that can scale to large datasets? And two that look for regions of high density?

A

K-Means and BIRCH scale well to large datasets. DBSCAN and Mean-Shift look for regions of high density.

6
Q

Can you think of a use case where active learning would be useful? How would you implement it?

A

Active learning is useful whenever you have plenty of unlabeled instances but labeling is costly. In this case (which is very common), rather than randomly selecting instances to label, it is often preferable to perform active learning, where human experts interact with the learning algorithm, providing labels for specific instances when the algorithm requests them. A common approach is uncertainty sampling: train the model on the instances labeled so far, use it to make predictions on the unlabeled instances, and send the instances the model is least certain about (i.e., those with the lowest estimated top-class probability) to a human expert for labeling; repeat until the performance gain is no longer worth the labeling effort (see the Active Learning section in Chapter 9).
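
A rough sketch of uncertainty sampling (the dataset, classifier, initial split, and batch size are arbitrary illustrative choices; the expert's labels are simulated by revealing the true ones):

import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_pool, y_train, y_pool = train_test_split(
    X, y, train_size=100, stratify=y, random_state=42)

batch_size = 20
for _ in range(5):  # a few active-learning rounds
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(clf.score(X_pool, y_pool))  # rough progress check on the unlabeled pool
    proba = clf.predict_proba(X_pool)
    uncertainty = 1.0 - proba.max(axis=1)               # low top probability = uncertain
    query_idx = np.argsort(uncertainty)[-batch_size:]   # most uncertain instances
    # The expert would label these; we simulate by revealing the true labels
    X_train = np.vstack([X_train, X_pool[query_idx]])
    y_train = np.concatenate([y_train, y_pool[query_idx]])
    X_pool = np.delete(X_pool, query_idx, axis=0)
    y_pool = np.delete(y_pool, query_idx)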

7
Q

What is the difference between anomaly detection and novelty detection?

A

Many people use the terms anomaly detection and novelty detection interchangeably, but they are not exactly the same. In anomaly detection, the algorithm is trained on a dataset that may contain outliers, and the goal is typically to identify these outliers (within the training set), as well as outliers among new instances. In novelty detection, the algorithm is trained on a dataset that is presumed to be “clean,” and the objective is to detect novelties strictly among new instances. Some algorithms work best for anomaly detection (e.g., Isolation Forest), while others are better suited for novelty detection (e.g., one-class SVM).
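
A small sketch contrasting the two settings in scikit-learn (the synthetic data and hyperparameters are illustrative only):

from sklearn.datasets import make_blobs
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

# "Normal" training data plus a few far-away new instances
X_train, _ = make_blobs(n_samples=500, centers=1, random_state=42)
X_new, _ = make_blobs(n_samples=10, centers=1, center_box=(10.0, 12.0), random_state=0)

# Anomaly detection: the training set itself may contain outliers
iso = IsolationForest(contamination=0.02, random_state=42).fit(X_train)
print(iso.predict(X_new))    # -1 = anomaly, +1 = normal

# Novelty detection: training data assumed clean, only new instances are scored
ocsvm = OneClassSVM(nu=0.05).fit(X_train)
print(ocsvm.predict(X_new))  # -1 = novelty, +1 = normal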

8
Q

What is a Gaussian mixture? What tasks can you use it for?

A

A Gaussian mixture model (GMM) is a probabilistic model that assumes that the instances were generated from a mixture of several Gaussian distributions whose parameters are unknown. In other words, the assumption is that the data is grouped into a finite number of clusters, each with an ellipsoidal shape (but the clusters may have different ellipsoidal shapes, sizes, orientations, and densities), and we don’t know which cluster each instance belongs to. This model is useful for density estimation, clustering, and anomaly detection.
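
A minimal sketch showing all three uses with scikit-learn's GaussianMixture (synthetic data; the 2% anomaly threshold is an illustrative choice):

import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=1000, centers=3, random_state=42)

gm = GaussianMixture(n_components=3, n_init=10, random_state=42).fit(X)

labels = gm.predict(X)               # clustering: hard assignment to a Gaussian
log_densities = gm.score_samples(X)  # density estimation: log density at each instance

# Anomaly detection: flag instances lying in the lowest-density regions
threshold = np.percentile(log_densities, 2)
anomalies = X[log_densities < threshold]
print(len(anomalies))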

9
Q

Can you name two techniques to find the right number of clusters when using a Gaussian mixture model?

A

One way to find the right number of clusters when using a Gaussian mixture model is to plot the Bayesian information criterion (BIC) or the Akaike information criterion (AIC) as a function of the number of clusters, then choose the number of clusters that minimizes the BIC or AIC. Another technique is to use a Bayesian Gaussian mixture model, which automatically selects the number of clusters.
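
Both techniques in a short scikit-learn sketch (synthetic data; the candidate numbers of components are arbitrary):

import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import BayesianGaussianMixture, GaussianMixture

X, _ = make_blobs(n_samples=1000, centers=4, random_state=42)

# Technique 1: fit GMMs for several values of k, keep the one with the lowest BIC (or AIC)
bics = [GaussianMixture(n_components=k, n_init=10, random_state=42).fit(X).bic(X)
        for k in range(1, 9)]
print("best k by BIC:", int(np.argmin(bics)) + 1)

# Technique 2: a Bayesian GMM drives the weights of unneeded clusters toward zero
bgm = BayesianGaussianMixture(n_components=10, n_init=10, random_state=42).fit(X)
print("cluster weights:", np.round(bgm.weights_, 2))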

10
Q

The classic Olivetti faces dataset contains 400 grayscale 64 × 64–pixel images of faces. Each image is flattened to a 1D vector of size 4,096. Forty different people were photographed (10 times each), and the usual task is to train a model that can predict which person is represented in each picture. Load the dataset using the sklearn.datasets.fetch_olivetti_faces() function, then split it into a training set, a validation set, and a test set (note that the dataset is already scaled between 0 and 1). Since the dataset is quite small, you will probably want to use stratified sampling to ensure that there are the same number of images per person in each set. Next, cluster the images using k-means, and ensure that you have a good number of clusters (using one of the techniques discussed in this chapter). Visualize the clusters: do you see similar faces in each cluster?

A

https://colab.research.google.com/github/ageron/handson-ml3/blob/main/09_unsupervised_learning.ipynb#scrollTo=pqovJP7i4VkU
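
The linked notebook contains the full solution; below is a hedged sketch of the first steps (the split sizes and the candidate values of k are illustrative choices, not the notebook's):

from sklearn.cluster import KMeans
from sklearn.datasets import fetch_olivetti_faces
from sklearn.metrics import silhouette_score
from sklearn.model_selection import train_test_split

X, y = fetch_olivetti_faces(return_X_y=True)

# Stratified split: 1 test image and 2 validation images per person
X_train_full, X_test, y_train_full, y_test = train_test_split(
    X, y, test_size=40, stratify=y, random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(
    X_train_full, y_train_full, test_size=80, stratify=y_train_full, random_state=42)

# Pick the number of clusters using the silhouette score
best_k, best_score = None, -1.0
for k in range(10, 150, 10):
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X_train)
    score = silhouette_score(X_train, km.labels_)
    if score > best_score:
        best_k, best_score = k, score
print(best_k, best_score)
# To visualize, refit KMeans(n_clusters=best_k) and plot each cluster's images as 64x64 arrays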

11
Q

Continuing with the Olivetti faces dataset, train a classifier to predict which person is represented in each picture, and evaluate it on the validation set. Next, use k-means as a dimensionality reduction tool, and train a classifier on the reduced set. Search for the number of clusters that allows the classifier to get the best performance: what performance can you reach? What if you append the features from the reduced set to the original features (again, searching for the best number of clusters)?

A

https://colab.research.google.com/github/ageron/handson-ml3/blob/main/09_unsupervised_learning.ipynb#scrollTo=pqovJP7i4VkU
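
The full solution is in the linked notebook; a rough sketch of the idea (the classifier, split, and candidate values of k are illustrative choices):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import fetch_olivetti_faces
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = fetch_olivetti_faces(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=80, stratify=y, random_state=42)

# Baseline: classifier trained directly on the raw pixels
clf = RandomForestClassifier(n_estimators=150, random_state=42).fit(X_train, y_train)
print("raw pixels:", clf.score(X_valid, y_valid))

# K-Means as dimensionality reduction: each image becomes its vector of distances
# to the k centroids; search over k for the best validation score
for k in (50, 100, 150):
    pipe = make_pipeline(
        KMeans(n_clusters=k, n_init=10, random_state=42),
        RandomForestClassifier(n_estimators=150, random_state=42),
    ).fit(X_train, y_train)
    print(k, "clusters:", pipe.score(X_valid, y_valid))

# Appending the cluster distances to the original features instead of replacing them
km = KMeans(n_clusters=100, n_init=10, random_state=42).fit(X_train)
X_train_ext = np.c_[X_train, km.transform(X_train)]
X_valid_ext = np.c_[X_valid, km.transform(X_valid)]
clf_ext = RandomForestClassifier(n_estimators=150, random_state=42).fit(X_train_ext, y_train)
print("extended features:", clf_ext.score(X_valid_ext, y_valid))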

12
Q

Train a Gaussian mixture model on the Olivetti faces dataset. To speed up the algorithm, you should probably reduce the dataset’s dimensionality (e.g., use PCA, preserving 99% of the variance). Use the model to generate some new faces (using the sample() method), and visualize them (if you used PCA, you will need to use its inverse_transform() method). Try to modify some images (e.g., rotate, flip, darken) and see if the model can detect the anomalies (i.e., compare the output of the score_samples() method for normal images and for anomalies).

A

https://colab.research.google.com/github/ageron/handson-ml3/blob/main/09_unsupervised_learning.ipynb#scrollTo=pqovJP7i4VkU
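
Again, the linked notebook has the full solution; a sketch of the main steps (the number of mixture components and the flip-based modification are illustrative choices):

import numpy as np
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

X, y = fetch_olivetti_faces(return_X_y=True)

# Reduce dimensionality first (keep 99% of the variance) to make the GMM tractable
pca = PCA(n_components=0.99, random_state=42)
X_reduced = pca.fit_transform(X)

gm = GaussianMixture(n_components=40, random_state=42).fit(X_reduced)

# Generate new faces in the reduced space and map them back to pixel space
new_faces_reduced, _ = gm.sample(n_samples=6)
new_faces = pca.inverse_transform(new_faces_reduced)  # each row is a 4,096-pixel face

# Anomaly detection: modified images should receive much lower log-densities
X_flipped = X[:5].reshape(-1, 64, 64)[:, :, ::-1].reshape(-1, 64 * 64)
print(gm.score_samples(pca.transform(X[:5])))      # normal faces
print(gm.score_samples(pca.transform(X_flipped)))  # flipped faces (anomalies)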

13
Q

Some dimensionality reduction techniques can also be used for anomaly detection. For example, take the Olivetti faces dataset and reduce it with PCA, preserving 99% of the variance. Then compute the reconstruction error for each image. Next, take some of the modified images you built in the previous exercise and look at their reconstruction error: notice how much larger it is. If you plot a reconstructed image, you will see why: it tries to reconstruct a normal face.

A

https://colab.research.google.com/github/ageron/handson-ml3/blob/main/09_unsupervised_learning.ipynb#scrollTo=pqovJP7i4VkU
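
The linked notebook has the complete solution; a short sketch of the idea (the darkening factor is an arbitrary illustrative modification):

import numpy as np
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA

X, _ = fetch_olivetti_faces(return_X_y=True)

pca = PCA(n_components=0.99, random_state=42)
X_reconstructed = pca.inverse_transform(pca.fit_transform(X))

# Per-image reconstruction error: mean squared error over the 4,096 pixels
reconstruction_errors = np.mean((X - X_reconstructed) ** 2, axis=1)
print("normal images:", reconstruction_errors[:5])

# Modified (here: darkened) images reconstruct much worse, since PCA tries to
# rebuild a normal-looking face
X_dark = X[:5] * 0.3
X_dark_rec = pca.inverse_transform(pca.transform(X_dark))
print("darkened images:", np.mean((X_dark - X_dark_rec) ** 2, axis=1))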
