Path3.Mod1.f - Automated Machine Learning - Evaluate and Compare Models In ML Studio Flashcards
DA BM, EM VE
- ML Studio > AutoML Experiment > Overview Page - Two things this page shows you
- ML Studio > AutoML Experiment > Models Page - What you can Explore and what you can View…
- Review input Data Asset and summary of the Best Model
- Explore the models that were trained (listed by algorithm name), view an Explanation for the best one, and open the Responsible AI dashboard
CBD MFVI HCFD
Data Guardrails:
- Where they are located
- Three data guardrails auto-applied to classification models
Job Details > Data Guardrails tab. A training job must complete before you can view which guardrails were applied.
- Class Balancing Detection - Imbalanced or underrepresented classes
- Missing Feature Value Imputation - Provide values when missing (average, most common value, etc.)
- High Cardinality Feature Detection - Remove fields that appear to have high cardinality (values that are nearly or always unique, behaving like random IDs)
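The three guardrail checks above can be sketched in plain Python. This is an illustrative approximation only, not AutoML's actual implementation; the function names and thresholds (`detect_imbalance`, `impute_missing`, `detect_high_cardinality`, 0.2, 0.9) are assumptions made for this sketch.

```python
from collections import Counter
from statistics import mean

def detect_imbalance(labels, threshold=0.2):
    """Class Balancing Detection: flag the label column if the rarest
    class falls below a chosen fraction of the most common class.
    (The 0.2 threshold is illustrative, not AutoML's.)"""
    counts = Counter(labels)
    return min(counts.values()) / max(counts.values()) < threshold

def impute_missing(values):
    """Missing Feature Value Imputation: fill None with the column
    average for numeric data, or the most common value otherwise."""
    present = [v for v in values if v is not None]
    if all(isinstance(v, (int, float)) for v in present):
        fill = mean(present)                      # average
    else:
        fill = Counter(present).most_common(1)[0][0]  # mode
    return [fill if v is None else v for v in values]

def detect_high_cardinality(values, ratio=0.9):
    """High Cardinality Feature Detection: flag a feature whose values
    are nearly all unique (e.g. IDs or free text)."""
    return len(set(values)) / len(values) > ratio
```

For example, `impute_missing([1.0, None, 3.0])` fills the gap with the average, and a column of 100 distinct IDs trips `detect_high_cardinality`.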
Data Guardrails are applied when…
…Featurization is enabled for your experiment.
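A minimal sketch of enabling featurization on an AutoML classification job, assuming the v2 `azure-ai-ml` SDK; the compute name, experiment name, data asset path, and target column below are placeholders, not values from this module.

```python
from azure.ai.ml import automl, Input

# Placeholder names: swap in your own compute, experiment, and data asset.
classification_job = automl.classification(
    compute="cpu-cluster",
    experiment_name="automl-demo",
    training_data=Input(type="mltable", path="azureml:training-data:1"),
    target_column_name="label",
    primary_metric="accuracy",
)

# Data Guardrails run only when featurization is enabled;
# "auto" is the default featurization mode.
classification_job.set_featurization(mode="auto")
```

Setting `mode="off"` would skip featurization, and with it the guardrails.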
P D A
Data Guardrails have three possible states
- Passed: No problems, no action required
- Done: Changes applied to your data, review changes made by AutoML
- Alerted: An issue was detected but couldn’t be fixed automatically; review your data to fix it
Where AutoML displays the Scaling and Normalization techniques applied during training and in what format
Models Tab > Algorithm Name column. If a technique is applied, it is listed in this format: <applied scaling and/or normalization techniques,...>, <algorithm name>
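That naming convention can be split programmatically: the last comma-separated token is the algorithm and everything before it is the applied techniques. The example display names below (`MaxAbsScaler, LightGBM`, etc.) are assumed for illustration.

```python
def parse_display_name(name):
    """Split an AutoML model display name of the form
    '<scaling and/or normalization techniques,...>, <algorithm name>'
    into (techniques, algorithm)."""
    parts = [p.strip() for p in name.split(",")]
    return parts[:-1], parts[-1]
```

For example, `parse_display_name("MaxAbsScaler, LightGBM")` yields `(["MaxAbsScaler"], "LightGBM")`; a name with no techniques listed yields an empty techniques list.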
You can View Explanation for any trained model by selecting it in the Models tab and then selecting the Explain Model subtab (T/F)
True. The explanation is an approximation of how the model weighs its features (model interpretability). This isn’t limited to the Best Model/Top Pick: you can request an explanation, and even a Responsible AI dashboard, for any trained model.