Chapter 1: The Machine Learning Landscape Flashcards

1
Q

What is precision in ML classification?

A

Precision in ML classification measures the proportion of a model's positive predictions that are actually positive. It answers the question: of all the things I predicted as positive, how many were actually positive?

Precision = TP / (TP + FP)
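
A minimal sketch of computing precision with scikit-learn (the labels below are made-up toy values):

  from sklearn.metrics import precision_score

  y_true = [1, 0, 1, 1, 0, 1]   # actual labels (toy example)
  y_pred = [1, 0, 0, 1, 1, 1]   # model predictions
  # TP = 3, FP = 1, so precision = 3 / (3 + 1) = 0.75
  print(precision_score(y_true, y_pred))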

2
Q

What is recall in ML classification?

A

Recall in machine learning classification quantifies the true positive rate of predictions. It answers the question: of all the actual positives in the data, how many did the model correctly identify?

Recall = TP / (TP + FN)
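
A minimal sketch with scikit-learn's recall_score, using the same toy labels as in the precision example:

  from sklearn.metrics import recall_score

  y_true = [1, 0, 1, 1, 0, 1]   # actual labels (toy example)
  y_pred = [1, 0, 0, 1, 1, 1]   # model predictions
  # TP = 3, FN = 1, so recall = 3 / (3 + 1) = 0.75
  print(recall_score(y_true, y_pred))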

3
Q

What is the first step in the ML project checklist?

A

Frame the problem and look at the big picture.

4
Q

When framing the problem and looking at the bigger picture, why is it important to define the objective in business terms?

A

Knowing the objective is important as it will determine:
- How you frame the problem.
- Which technical solution architecture you'll choose.
- How you will determine success.
- How much time you will spend optimising the model.

5
Q

When framing the problem and looking at the bigger picture, should you ask what current solutions are in place?

A

Yes, current solutions will often give a reference for performance as well as how the problem has been approached previously.

This information is vital as you develop a solution and gives a sense of how much incremental value a data science solution could bring. This can be expressed as extra money made or money saved by the DS solution compared to the previous method.

6
Q

What are the eight steps in the data science checklist?

A
  1. Frame the problem and look at the big picture.
  2. Get the data.
  3. Explore the data to gain insights.
  4. Prepare the data to better expose the underlying data patterns to machine learning algorithms.
  5. Explore many different models and shortlist the best ones.
  6. Fine-tune your models and combine them into a great solution.
  7. Present your solution.
  8. Launch, monitor, and maintain your system.
7
Q

In the first item on the data science checklist, Frame the Problem and Look at the Big Picture, what are some of the steps involved?

A

Define the objective in business terms.

How will your solution be used?

What are the current solutions/workarounds (if any)?

How should you frame this problem (supervised/unsupervised, online/offline, etc.)?

How should performance be measured?

Is the performance measure aligned with the business objective?

What would be the minimum performance needed to reach the business objective?

What are comparable problems? Can you reuse experience or tools?

Is human expertise available?

How would you solve the problem manually?

List the assumptions you (or others) have made so far.

Verify assumptions if possible.

8
Q

In the second item on the data science checklist, Get the Data, what are the steps involved?

A

List the data you need and how much you need.

Find and document where you can get that data.

Check how much space it will take.

Check legal obligations, and get authorization if necessary.

Get access authorizations.

Create a workspace (with enough storage space).

Get the data.

Convert the data to a format you can easily manipulate (without changing the data itself).

Ensure sensitive information is deleted or protected (e.g., anonymized).

Check the size and type of data (time series, sample, geographical, etc.).

Sample a test set, put it aside, and never look at it (no data snooping!).

Note: automate as much as possible so you can easily get fresh data.
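
A minimal sketch of the last step (sampling a test set); the DataFrame here is a toy stand-in for whatever data you actually obtained:

  import pandas as pd
  from sklearn.model_selection import train_test_split

  # Toy stand-in for the real dataset.
  data = pd.DataFrame({"feature": range(100), "target": [i % 2 for i in range(100)]})

  # Sample a test set once, put it aside, and never look at it (no data snooping).
  train_set, test_set = train_test_split(data, test_size=0.2, random_state=42)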

9
Q

In the third item of the data science checklist, Explore the Data, what are some of the steps involved?

A

Create a copy of the data for exploration (sampling it down to a manageable size if necessary).

Create a Jupyter notebook to keep a record of your data exploration.

Study each attribute and its characteristics:

  • Name
  • Type (categorical, int/float, bounded/unbounded, text, structured, etc.)
  • % of missing values
  • Noisiness and type of noise (stochastic, outliers, rounding errors, etc.)
  • Usefulness for the task
  • Type of distribution (Gaussian, uniform, logarithmic, etc.)

For supervised learning tasks, identify the target attribute(s).

Visualize the data.

Study the correlations between attributes.

Study how you would solve the problem manually.

Identify the promising transformations you may want to apply.

Identify extra data that would be useful (go back to “Get the Data”).

Document what you have learned.
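
A minimal exploration sketch, assuming a pandas DataFrame train_set with numeric attributes and a column named "target" (both names are illustrative):

  explore = train_set.copy()              # work on a copy of the data

  explore.info()                          # attribute names, types, missing values
  explore.describe()                      # distribution summaries of numeric attributes
  explore.hist(bins=50, figsize=(12, 8))  # visualise distributions (requires matplotlib)

  # Correlations between the numeric attributes and the target.
  print(explore.corr()["target"].sort_values(ascending=False))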

10
Q

In the fourth item of the data science checklist, Prepare the data, what are some of the steps involved?

A

Clean the data:

  • Fix or remove outliers (optional).
  • Fill in missing values (e.g., with zero, mean, median…​) or drop their rows (or columns).

Perform feature selection (optional):

  • Drop the attributes that provide no useful information for the task.

Perform feature engineering, where appropriate:

  • Discretize continuous features.
  • Decompose features (e.g., categorical, date/time, etc.).
  • Add promising transformations of features (e.g., log(x), sqrt(x), x², etc.).
  • Aggregate features into promising new features.

Perform feature scaling:

  • Standardize or normalize features.

Notes:
Work on copies of the data (keep the original dataset intact).

Write functions for all data transformations you apply, for five reasons:

So you can easily prepare the data the next time you get a fresh dataset

So you can apply these transformations in future projects

To clean and prepare the test set

To clean and prepare new data instances once your solution is live

To make it easy to treat your preparation choices as hyperparameters
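
A minimal sketch of a reusable preparation pipeline with scikit-learn (X_train is assumed to be a numeric feature matrix from earlier steps):

  from sklearn.pipeline import Pipeline
  from sklearn.impute import SimpleImputer
  from sklearn.preprocessing import StandardScaler

  # Writing transformations as a pipeline makes them easy to reapply to fresh
  # data, the test set, and live instances, and turns choices such as
  # median-vs-mean imputation into hyperparameters.
  num_pipeline = Pipeline([
      ("impute", SimpleImputer(strategy="median")),  # fill in missing values
      ("scale", StandardScaler()),                   # standardise features
  ])

  X_train_prepared = num_pipeline.fit_transform(X_train)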

11
Q

In the fifth item of the data science checklist, Shortlisting Promising models, what are some of the steps involved?

A

Train many quick-and-dirty models from different categories (e.g., linear, naive Bayes, SVM, random forest, neural net, etc.) using standard parameters.

Measure and compare their performance:

For each model, use N-fold cross-validation and compute the mean and standard deviation of the performance measure on the N folds.

Analyze the most significant variables for each algorithm.

Analyze the types of errors the models make:

What data would a human have used to avoid these errors?

Perform a quick round of feature selection and engineering.

Perform one or two more quick iterations of the five previous steps.

Shortlist the top three to five most promising models, preferring models that make different types of errors.

Notes:

If the data is huge, you may want to sample smaller training sets so you can train many different models in a reasonable time (be aware that this penalizes complex models such as large neural nets or random forests).

Once again, try to automate these steps as much as possible.
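
A minimal comparison sketch with scikit-learn (X_train and y_train are assumed to be prepared features and labels for a classification task; the model choices are illustrative):

  from sklearn.linear_model import LogisticRegression
  from sklearn.svm import SVC
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.model_selection import cross_val_score

  # Quick-and-dirty models from different categories, with standard parameters.
  models = {
      "linear": LogisticRegression(max_iter=1000),
      "svm": SVC(),
      "random_forest": RandomForestClassifier(random_state=42),
  }
  for name, model in models.items():
      scores = cross_val_score(model, X_train, y_train, cv=5)  # 5-fold CV
      print(name, scores.mean(), scores.std())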

12
Q

In the sixth item on the data science checklist, Fine Tune the System, what are some of the steps involved?

A

Fine-tune the hyperparameters using cross-validation:

Treat your data transformation choices as hyperparameters, especially when you are not sure about them (e.g., if you’re not sure whether to replace missing values with zeros or with the median value, or to just drop the rows).

Unless there are very few hyperparameter values to explore, prefer random search over grid search. If training is very long, you may prefer a Bayesian optimization approach (e.g., using Gaussian process priors, as described by Jasper Snoek et al.).

Try ensemble methods. Combining your best models will often produce better performance than running them individually.

Once you are confident about your final model, measure its performance on the test set to estimate the generalization error.

WARNING

Don’t tweak your model after measuring the generalization error: you would just start overfitting the test set.
Notes:

You will want to use as much data as possible for this step, especially as you move toward the end of fine-tuning.

As always, automate what you can.
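
A minimal random-search sketch with scikit-learn (the model, parameter names, and ranges are illustrative assumptions, not prescribed choices):

  from scipy.stats import randint
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.model_selection import RandomizedSearchCV

  param_dist = {"n_estimators": randint(50, 500), "max_depth": randint(3, 20)}
  search = RandomizedSearchCV(
      RandomForestClassifier(random_state=42),
      param_distributions=param_dist,
      n_iter=20, cv=5, random_state=42,
  )
  search.fit(X_train, y_train)   # X_train / y_train assumed from earlier steps
  print(search.best_params_, search.best_score_)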

13
Q

In the seventh item on the data science checklist, Present Your Solution, what are some of the steps involved?

A

Document what you have done.

Create a nice presentation:

  • Make sure you highlight the big picture first.
  • Explain why your solution achieves the business objective.

Don’t forget to present interesting points you noticed along the way:

  • Describe what worked and what did not.
  • List your assumptions and your system’s limitations.

Ensure your key findings are communicated through beautiful visualizations or easy-to-remember statements (e.g., “the median income is the number-one predictor of housing prices”).

14
Q

In the final item on the data science checklist, Launch, Monitor, and Maintain Your System, what are the steps involved?

A

Get your solution ready for production (plug into production data inputs, write unit tests, etc.).

Write monitoring code to check your system’s live performance at regular intervals and trigger alerts when it drops:

  • Beware of slow degradation: models tend to “rot” as data evolves.
  • Measuring performance may require a human pipeline (e.g., via a crowdsourcing service).
  • Also monitor your inputs’ quality (e.g., a malfunctioning sensor sending random values, or another team’s output becoming stale). This is particularly important for online learning systems.

Retrain your models on a regular basis on fresh data (automate as much as possible).
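
A minimal monitoring sketch; the metric, threshold, and alerting mechanism are illustrative assumptions:

  from sklearn.metrics import accuracy_score

  def check_live_performance(y_true_recent, y_pred_recent, alert_threshold=0.85):
      """Compare recent live performance against a threshold and flag a drop."""
      score = accuracy_score(y_true_recent, y_pred_recent)
      if score < alert_threshold:
          print(f"ALERT: live performance dropped to {score:.2f}")  # hook up real alerting here
      return score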

15
Q

What does overfitting to the training data refer to in machine learning?

A

Overfitting refers to when a machine learning model performs well on the training data but does not generalise well to data outside the training set.

16
Q

What can cause a model to overfit?

A

Using a model that is too complex relative to the amount of training data and how noisy the dataset is.

17
Q

How can one reduce overfitting?

A

Model:
- Choose a simpler model with fewer parameters (e.g. a linear model rather than a high-degree polynomial).
- Reduce the number of attributes in the training data.
- Constrain the model using regularisation.

Data:
- Gather more data.
- Reduce noise in data by fixing errors and removing outliers.
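
A minimal sketch of constraining a model with regularisation, using scikit-learn's Ridge on made-up toy data:

  import numpy as np
  from sklearn.linear_model import LinearRegression, Ridge

  # Toy noisy linear data.
  rng = np.random.default_rng(42)
  X = rng.uniform(-3, 3, size=(30, 1))
  y = 0.5 * X.ravel() + rng.normal(scale=0.5, size=30)

  plain = LinearRegression().fit(X, y)       # unconstrained model
  regularised = Ridge(alpha=10.0).fit(X, y)  # larger alpha = stronger constraint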

18
Q

What is underfitting the training data?

A

When the model is too simple to learn the underlying structure of the data, so its predictions are inaccurate even on the training examples.

19
Q

How can you fix underfitting?

A

Model:
- Select a more powerful model with more parameters.
- Reduce constraints on the model.

Data:
- Feature engineering.

20
Q

What are some reasons for a ML system not working well?

A

Training Data:
- Too small.
- Not representative.
- Noisy.
- Polluted with irrelevant features (garbage in, garbage out).

Model:
- Too simple (underfitting).
- Too complex (overfitting).

21
Q

How can you estimate the generalisation error (out-of-sample error) of a trained machine learning model?

A

Make predictions on an unseen test set and evaluate the error rate. This tells you how well the model performs on instances it has never seen before.
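
A minimal sketch (model, X_test, and y_test are assumed from earlier steps; the metric depends on the task):

  from sklearn.metrics import accuracy_score

  # The test set must not have been used for training or hyperparameter tuning.
  y_pred = model.predict(X_test)
  generalisation_estimate = accuracy_score(y_test, y_pred)
  print(generalisation_estimate)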

22
Q

What is a sign of overfitting?

A

Low training error but high generalisation error.

23
Q

What is the difference between training and tuning of a model?

A

Training refers to estimating a model's parameters (e.g. the coefficients of a linear regression) from the training data for optimal performance. Tuning (hyperparameter tuning) refers to finding the optimal hyperparameters (e.g. the regularisation rate for regularised regression).

24
Q

Why is it not advisable to tune hyperparameters on your test set?

A

The model and hyperparameters will become overfit to the test set, so the measured error will be optimistic and the model may not generalise well to unseen data.

25
Q

What is a validation set?

A

A portion of the training data (typically around 15% of the overall data available) that is held out from model training and used for hyperparameter tuning and model selection.

26
Q

How is a validation set used?

A
  • Split the data into train/validation/test sets (70%/15%/15%).
  • Train multiple models, with various hyperparameters, on the training set.
  • Select the model that performs best on the validation set.
  • Retrain the best-performing model on the training set + validation set.
  • Evaluate the fully trained model on the test set to estimate the generalisation error.
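
A minimal sketch of this workflow (X and y are assumed raw features and labels; the candidate models are illustrative):

  import numpy as np
  from sklearn.model_selection import train_test_split
  from sklearn.linear_model import LogisticRegression

  # 70% / 15% / 15% split into train / validation / test.
  X_temp, X_test, y_temp, y_test = train_test_split(X, y, test_size=0.15, random_state=42)
  X_train, X_val, y_train, y_val = train_test_split(
      X_temp, y_temp, test_size=0.15 / 0.85, random_state=42)

  # Train candidates on the training set, pick the best on the validation set.
  candidates = [LogisticRegression(C=c, max_iter=1000) for c in (0.01, 0.1, 1.0, 10.0)]
  best = max(candidates, key=lambda m: m.fit(X_train, y_train).score(X_val, y_val))

  # Retrain the winner on train + validation, then estimate generalisation on the test set.
  best.fit(np.concatenate([X_train, X_val]), np.concatenate([y_train, y_val]))
  print(best.score(X_test, y_test))
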
27
Q

How does k-fold cross validation work?

A
  • Split the full training data into k equal folds.
  • Run k separate training experiments, each time holding out one fold for evaluation and training on the remaining k-1 folds.
  • Average the evaluation results across the k experiments.
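
A minimal sketch with scikit-learn (X, y, and the model are assumptions for illustration):

  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import KFold, cross_val_score

  # 5-fold CV: each fold takes a turn as the held-out set, the rest train the model.
  kf = KFold(n_splits=5, shuffle=True, random_state=42)
  scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=kf)
  print(scores.mean(), scores.std())
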
28
Q

What is a drawback of k-fold cross-validation?

A

As k increases, so do the compute cost and run time (the model is trained k times).

29
Q

What are some practical uses of k-fold cross-validation?

A
  • Model selection: when deciding which model to choose for a dataset, run cross-validation with each candidate model and pick the best-performing one.
  • Hyperparameter tuning: when selecting the optimal value of a hyperparameter, vary the value across multiple runs and judge performance on the cross-validation results.
30
Q

Why do we do k-fold cross validation?

A

A single train/test split can introduce bias depending on where the split happens to fall. Cross-validation automates the splitting and uses all of the data for both training and evaluation, giving a more reliable performance estimate.

31
Q

What can cause a drop in performance between the training and validation sets? What can you do to improve it?

A

Problem: A drop in performance between the training and validation sets is usually caused by overfitting to the training set.

Possible fixes:
- Simplify the model.
- Regularise the model.
- Get more training data.
- Reduce noise in the data (fix errors, remove outliers).

32
Q

When a model performs well on the training and validation sets but performance drops on the test set, what is the cause? What can you do to improve it?

A

Problem: The data the model was trained and validated on is not representative of the test data; this is a data mismatch.

Possible fixes:
- Pre-process the data so that it is as representative as possible of the data the model will be evaluated and used on.