Path3.Mod1.g - Automated Machine Learning - Metric Effects and Meanings Flashcards

1
Q

Describe what the Macro, Micro and Weighted Average versions of a Classification Metric are

A
  • Macro - Calculate the metric for each class and take the unweighted average, so every class counts equally
  • Micro - Calculate the metric globally by pooling true positives, false negatives and false positives across all classes (true negatives are excluded)
  • Weighted - Calculate the metric for each class and take the average weighted by each class's support, i.e. each class's score contributes in proportion to its number of true samples
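
A minimal sketch of the three averaging modes (hypothetical labels; scikit-learn and NumPy assumed), reproducing the weighted average by hand:

```python
import numpy as np
from sklearn.metrics import precision_score

# Hypothetical 3-class problem with unequal support per class.
y_true = [0, 0, 0, 0, 1, 1, 2, 2, 2, 2]
y_pred = [0, 0, 1, 2, 1, 0, 2, 2, 2, 1]

macro    = precision_score(y_true, y_pred, average="macro")     # unweighted mean of per-class precision
micro    = precision_score(y_true, y_pred, average="micro")     # pooled TP / (TP + FP) over all classes
weighted = precision_score(y_true, y_pred, average="weighted")  # per-class precision weighted by support

# Weighted average by hand: support-weighted mean of the per-class scores.
per_class = precision_score(y_true, y_pred, average=None)
support = np.bincount(y_true)
assert np.isclose(weighted, np.average(per_class, weights=support))
print(macro, micro, weighted)
```
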
2
Q

US Senate vs US House

Describe how Class Imbalance affects the choice of Metric with respect to weighting.

A

With Class Imbalance, it may be more informative to use the Macro Average, where minority classes are given the same weight as majority classes.

This is different from the Weighted Average, where each class contributes to the average in proportion to its number of samples, so the majority classes dominate the score.
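
A small illustration of how the two averages diverge under a 90/10 imbalance (hypothetical data; scikit-learn assumed):

```python
from sklearn.metrics import recall_score

# Majority class predicted perfectly, minority class mostly missed.
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 90 + [1] * 2 + [0] * 8

print(recall_score(y_true, y_pred, average="macro"))     # (1.0 + 0.2) / 2 = 0.60
print(recall_score(y_true, y_pred, average="weighted"))  # 0.9 * 1.0 + 0.1 * 0.2 = 0.92
```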

3
Q

Selecting an Evaluation Metric for AutoML influences how the model is trained. Selecting Precision for a Classification task results in…

A

…a Model that minimizes False Positives. Remember, Precision (TP / (TP + FP)) is the Model’s ability to avoid incorrectly labelling Negative samples as Positive…
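
A quick check (hypothetical labels; scikit-learn assumed) that Precision is TP / (TP + FP), so it only improves as false positives go down:

```python
from sklearn.metrics import confusion_matrix, precision_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 1, 1, 0, 0, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tp / (tp + fp))                   # 3 / (3 + 2) = 0.6
print(precision_score(y_true, y_pred))  # same value
```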

4
Q

Selecting an Evaluation Metric for AutoML influences how the model is trained. Selecting Recall for a Classification task results in…

A

…a Model that minimizes False Negatives. Remember, Recall (TP / (TP + FN)) is the Model’s ability to find all of the actual Positive samples…
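
The companion check for Recall, on the same hypothetical labels as above:

```python
from sklearn.metrics import confusion_matrix, recall_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 1, 1, 0, 0, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tp / (tp + fn))                # 3 / (3 + 1) = 0.75
print(recall_score(y_true, y_pred))  # same value
```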

5
Q

Selecting an Evaluation Metric for AutoML influences how the model is trained. Selecting Accuracy for a Classification task results in…

A

…a Model with the highest Accuracy, i.e. the largest proportion of predictions that exactly match the true labels
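
Accuracy is simply the fraction of correct predictions; a minimal sketch on the same hypothetical labels (scikit-learn assumed):

```python
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 1, 1, 0, 0, 1, 1, 0]

print(accuracy_score(y_true, y_pred))                             # 5 correct out of 8 = 0.625
print(sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true))  # same by hand
```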

6
Q

Selecting an Evaluation Metric for AutoML influences how the model is trained. Selecting AUC for a Classification task results in…

A

…a Model with the highest possible AUC. Remember, AUC (Area Under the ROC Curve) indicates how well the Model can distinguish the Positive class from the Negative class across decision thresholds
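
Unlike the metrics above, AUC is computed from predicted scores or probabilities rather than hard labels. A minimal sketch (hypothetical scores; scikit-learn assumed):

```python
from sklearn.metrics import roc_auc_score

y_true  = [0, 0, 1, 1, 0, 1]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]  # higher score = more confident "positive"

# 1.0 means perfect separation of the classes, 0.5 means random guessing.
print(roc_auc_score(y_true, y_score))      # ≈ 0.89 here
```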

7
Q

Calibration…

Selecting an Evaluation Metric for AutoML influences how the model is trained. Selecting Mean Absolute Error for a Regression task results in…

A

…a Model that’s a good choice when you are equally concerned about overestimating and underestimating and want predictions that are as close as possible to the true values (i.e. a well-Calibrated Model)
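
MAE averages the absolute errors, so over- and under-estimates of the same size count equally. A minimal sketch (hypothetical values; scikit-learn and NumPy assumed):

```python
import numpy as np
from sklearn.metrics import mean_absolute_error

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5,  0.0, 2.0, 8.0])

print(mean_absolute_error(y_true, y_pred))  # 0.5
print(np.mean(np.abs(y_true - y_pred)))     # same by hand
```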

8
Q

IV => DV

Selecting an Evaluation Metric for AutoML influences how the model is trained. Selecting R2 for a Regression task results in…

A

…a Model that’s useful when you want to understand how well the independent variables explain the variability of the dependent variable (ELI5: how well the data fit the regression model, i.e. the goodness of fit)
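
R² compares the model’s squared error to that of a baseline that always predicts the mean of the target. A minimal sketch on the same hypothetical values as above:

```python
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5,  0.0, 2.0, 8.0])

ss_res = np.sum((y_true - y_pred) ** 2)         # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares around the mean
print(1 - ss_res / ss_tot)                      # ≈ 0.949
print(r2_score(y_true, y_pred))                 # same value
```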

9
Q

min(|E|)

Selecting an Evaluation Metric for AutoML influences how the model is trained. Selecting Root Mean Squared Error for a Regression task results in…

A

…a Model that focuses on minimizing the square root of the average squared error. Because errors are squared before they are averaged, large errors are penalized more heavily, which is often useful when large errors can be costly
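
RMSE squares each error before averaging, so a few large misses dominate the score. A minimal NumPy sketch on the same hypothetical values, computed by hand:

```python
import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5,  0.0, 2.0, 8.0])

rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
print(rmse)  # ≈ 0.612, versus an MAE of 0.5 on the same data
```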
