Path3.Mod1.g - Automated Machine Learning - Metric Effects and Meanings Flashcards
Describe what Macro, Micro and Weighted Average versions are for Classification Metrics
- Macro - Calculate the metric for each class and take the unweighted average
- Micro - Calculate the metric globally by pooling true positives, false negatives and false positives across all classes (true negatives are excluded)
- Weighted - Calculate the metric for each class and take the average weighted by each class's support (its number of true samples), i.e. Σ(per-class metric × class support) / total samples
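A minimal pure-Python sketch of the three averages, using precision and made-up per-class counts (the numbers are illustrative only, not from any real dataset):

```python
# Toy per-class counts: "A" is the majority class, "B" the minority.
tp = {"A": 80, "B": 6}        # true positives per class
fp = {"A": 4,  "B": 10}       # false positives per class
support = {"A": 90, "B": 10}  # number of true samples per class

per_class = {c: tp[c] / (tp[c] + fp[c]) for c in tp}  # precision per class

# Macro: unweighted mean of the per-class precisions.
macro = sum(per_class.values()) / len(per_class)

# Micro: pool TP and FP across classes before dividing (TNs never enter).
micro = sum(tp.values()) / (sum(tp.values()) + sum(fp.values()))

# Weighted: per-class precision weighted by class support.
n = sum(support.values())
weighted = sum(per_class[c] * support[c] / n for c in per_class)

print(round(macro, 3), round(micro, 3), round(weighted, 3))  # 0.664 0.86 0.895
```

Note how the minority class "B" drags Macro down far more than Micro or Weighted.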
Mnemonic: US Senate vs US House - Macro is like the Senate (every class gets equal representation); Weighted is like the House (representation proportional to class size)
Describe how Class Imbalance affects choice of Metric w.r.t. Weight.
With Class Imbalance, it may be more informative to use Macro Average, where minority classes are given equal weighting to majority classes.
This is different from Weighted Average where the total number of samples per class determines its ultimate contribution to the average.
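A small sketch of why this matters, with hypothetical recalls on a 98/2 imbalanced dataset where the model misses the minority class entirely:

```python
# Hypothetical per-class recalls: the model nails the majority class
# and never predicts the minority class correctly.
recall = {"majority": 0.98, "minority": 0.0}
support = {"majority": 98, "minority": 2}

macro = sum(recall.values()) / len(recall)                  # equal class weight
n = sum(support.values())
weighted = sum(recall[c] * support[c] / n for c in recall)  # support-weighted

print(round(macro, 2), round(weighted, 4))  # 0.49 0.9604
```

The Weighted Average (0.96) makes the model look excellent; the Macro Average (0.49) exposes that it has completely failed on the minority class.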
Selecting an Evaluation Metric for AutoML influences how the model is trained. Selecting Precision for a Classification task results in…
…a Model that minimizes False Positives. Remember, Precision = TP / (TP + FP) is the Model’s ability to avoid incorrectly predicting a Positive result…
Selecting an Evaluation Metric for AutoML influences how the model is trained. Selecting Recall for a Classification task results in…
…a Model that minimizes False Negatives. Remember, Recall = TP / (TP + FN) is the Model’s ability to find all the samples that are actually Positive
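Both formulas in one sketch, with made-up confusion-matrix counts:

```python
# Toy confusion-matrix counts (illustrative numbers only).
tp, fp, fn = 40, 10, 20

# Precision guards against false positives:
# of everything predicted Positive, how much was actually Positive?
precision = tp / (tp + fp)

# Recall guards against false negatives:
# of everything actually Positive, how much did the model find?
recall = tp / (tp + fn)

print(round(precision, 3), round(recall, 3))  # 0.8 0.667
```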
Selecting an Evaluation Metric for AutoML influences how the model is trained. Selecting Accuracy for a Classification task results in…
…a Model that maximizes Accuracy: the proportion of all predictions (Positive and Negative) that are correct
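For completeness, the accuracy formula on the same kind of toy counts (numbers are illustrative only):

```python
# Toy confusion-matrix counts.
tp, tn, fp, fn = 40, 30, 10, 20

# Accuracy: correct predictions (TP + TN) over all predictions.
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(accuracy)  # 0.7
```

Note that unlike Micro-averaged metrics, accuracy counts true negatives, which is why it can be misleading under class imbalance.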
Selecting an Evaluation Metric for AutoML influences how the model is trained. Selecting AUC for a Classification task results in…
…a Model with the Highest AUC possible. Remember AUC indicates how well the Model can distinguish between a Positive class and a Negative class
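One way to build intuition for "distinguishing between the classes": AUC equals the probability that a randomly chosen Positive sample is scored higher than a randomly chosen Negative one. A pure-Python sketch on four made-up scores:

```python
# Labels and model scores (illustrative values only).
y_true  = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]

pos = [s for y, s in zip(y_true, y_score) if y == 1]
neg = [s for y, s in zip(y_true, y_score) if y == 0]

# Fraction of (positive, negative) pairs ranked correctly; ties count half.
pairs = [(p, q) for p in pos for q in neg]
auc = sum(1.0 if p > q else 0.5 if p == q else 0.0 for p, q in pairs) / len(pairs)

print(auc)  # 0.75 — 3 of the 4 pairs are ranked correctly
```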
Calibration…
Selecting an Evaluation Metric for AutoML influences how the model is trained. Selecting Mean Absolute Error for a Regression task results in…
…a Model that’s good when you are equally concerned about overestimating and underestimating, while getting as close as possible to true values (i.e. a Model that’s well Calibrated)
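A quick sketch of MAE on made-up values, showing the symmetry between over- and under-estimates:

```python
# Illustrative true values and predictions.
y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5,  0.0, 2.0, 8.0]

# MAE: mean of |error| — an overestimate of 1 and an underestimate of 1
# contribute identically.
mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
print(mae)  # 0.5
```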
IV => DV
Selecting an Evaluation Metric for AutoML influences how the model is trained. Selecting R2 for a Regression task results in…
…a Model that’s useful when you want to understand how well the independent variables explain the variability of the dependent variable (ELI5: how well the data fit the regression model (the goodness of fit))
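The goodness-of-fit idea in a short sketch (toy values): R² compares the model's residual error against the error of simply predicting the mean of the dependent variable.

```python
# Illustrative true values and predictions.
y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5,  0.0, 2.0, 8.0]

mean_y = sum(y_true) / len(y_true)
ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # unexplained variation
ss_tot = sum((t - mean_y) ** 2 for t in y_true)             # total variation
r2 = 1 - ss_res / ss_tot

print(round(r2, 3))  # 0.949 — the model explains ~95% of the variance
```

R² = 1 is a perfect fit; R² = 0 means the model does no better than predicting the mean.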
min(|E|)
Selecting an Evaluation Metric for AutoML influences how the model is trained. Selecting Root Mean Squared Error for a Regression task results in…
…a Model that minimizes the average magnitude of errors regardless of their direction, with errors squared before averaging so that large errors are penalized more heavily, which is useful when large errors are especially costly
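A sketch contrasting RMSE with MAE on made-up values containing one large error, to show why RMSE suits the "large errors are costly" case:

```python
import math

# Illustrative values: three perfect predictions and one large miss.
y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.0, 2.0, 3.0, 14.0]

mae  = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Squaring amplifies the single large error, so RMSE is double MAE here.
print(mae, rmse)  # 2.5 5.0
```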