Lecture 14 - Explainable ML: Model Agnostic Methods Flashcards
What does “model agnostic” mean in explainability methods?
Model agnostic methods can explain any type of machine learning model, regardless of its internal structure. They focus on understanding the relationships between input features and outputs.
Why are model agnostic methods important in explainable ML?
They are versatile, applicable to any model type, and provide insights without requiring access to the model’s internal mechanics, making them useful for black-box models.
What is a Partial Dependence Plot?
A graph that shows the relationship between one or more features and the predicted outcome, averaging out the effects of other features.
What are the advantages and limitations of PDP?
Pros:
Gives a causal interpretation of how a feature affects the model’s prediction (not necessarily of the real-world relationship).
Easy to interpret for continuous and categorical features.
Cons:
Unrealistic when features are correlated.
Cannot capture complex feature interactions.
Mnemonic: “PDP = Plot (averaged) Data Points” — a PDP plots predictions averaged over the data (though the acronym itself stands for Partial Dependence Plot).
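The averaging idea behind a PDP can be sketched directly: for each grid value of the feature of interest, force that value on every row and average the model’s predictions. This is a minimal sketch on hypothetical toy data (the quadratic target and the `partial_dependence` helper are illustrative, not from the lecture):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical toy data: y depends on feature 0 quadratically
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 3))
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + rng.normal(0, 0.1, 200)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

def partial_dependence(model, X, feature, grid):
    """For each grid value, set the feature to that value for ALL rows
    and average the predictions (marginalizing the other features)."""
    pd_values = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature] = v          # overwrite the feature everywhere
        pd_values.append(model.predict(X_mod).mean())
    return np.array(pd_values)

grid = np.linspace(-2, 2, 9)
pdp = partial_dependence(model, X, feature=0, grid=grid)
# Averaged predictions should be roughly U-shaped in feature 0
```

Note the limitation from the card above: forcing a value on every row creates unrealistic combinations when features are correlated.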
How does Permutation Feature Importance work?
It shuffles a feature’s values (breaking its link to the target) and measures the resulting drop in model performance; the larger the drop, the more important the feature.
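The shuffle-and-measure loop is short enough to write out. A minimal sketch on hypothetical data where only feature 0 matters (the `permutation_importance` helper here is illustrative; scikit-learn also ships its own version):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Hypothetical data: only feature 0 drives the target
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = 3.0 * X[:, 0] + rng.normal(0, 0.1, 300)

model = LinearRegression().fit(X, y)

def permutation_importance(model, X, y, feature, n_repeats=10, rng=None):
    """Average drop in R^2 when one feature's column is shuffled."""
    rng = rng or np.random.default_rng()
    baseline = r2_score(y, model.predict(X))
    drops = []
    for _ in range(n_repeats):
        X_perm = X.copy()
        rng.shuffle(X_perm[:, feature])  # break the feature-target link
        drops.append(baseline - r2_score(y, model.predict(X_perm)))
    return float(np.mean(drops))

imps = [permutation_importance(model, X, y, f, rng=rng) for f in range(3)]
# Feature 0 should show a large drop; features 1 and 2 near zero
```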
What are global surrogate models?
Interpretable models (e.g., linear regression) used to approximate the predictions of black-box models.
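The key trick is that the surrogate is trained on the black box’s *predictions*, not on the true labels. A minimal sketch with a hypothetical gradient-boosted black box and a linear surrogate:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Hypothetical data with a near-linear relationship
rng = np.random.default_rng(2)
X = rng.normal(size=(400, 2))
y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(0, 0.1, 400)

black_box = GradientBoostingRegressor(random_state=0).fit(X, y)

# Global surrogate: fit the interpretable model to the BLACK BOX's
# predictions, not to the true labels
y_hat = black_box.predict(X)
surrogate = LinearRegression().fit(X, y_hat)

# Fidelity: how well the surrogate reproduces the black box
fidelity = r2_score(y_hat, surrogate.predict(X))
```

The fidelity score (R² of the surrogate against the black box) tells you how far the surrogate’s easy-to-read coefficients can be trusted as a description of the black box.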
What distinguishes local surrogate models?
They explain the model’s prediction for a specific instance, typically by fitting a simple model (e.g., weighted linear regression) to perturbed samples around the instance of interest.
What is LIME?
A technique to explain individual predictions by creating a local interpretable model around the prediction.
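A LIME-style local surrogate can be sketched in a few lines: sample perturbations around the instance, weight them by proximity, and fit a weighted linear model to the black box’s outputs. The data, kernel width, and `local_surrogate` helper below are illustrative assumptions, not the actual LIME library:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

# Hypothetical black box on a nonlinear function
rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(500, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2

black_box = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

def local_surrogate(model, x, n_samples=500, scale=0.5, rng=None):
    """LIME-style sketch: perturb around x, weight samples by proximity,
    and fit a weighted linear model to the black-box predictions."""
    rng = rng or np.random.default_rng()
    Z = x + rng.normal(0, scale, size=(n_samples, len(x)))
    preds = model.predict(Z)
    dists = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dists ** 2) / (2 * scale ** 2))  # RBF proximity kernel
    lin = Ridge(alpha=1e-3).fit(Z, preds, sample_weight=weights)
    return lin.coef_  # local slopes = the explanation

x = np.array([0.0, 1.0])
coefs = local_surrogate(black_box, x, rng=rng)
# Near x, the true local slopes are cos(0) = 1 and 2 * 1 = 2,
# so the second coefficient should dominate
```

The coefficients of the local linear model are the explanation: they approximate how each feature moves the prediction in the neighborhood of the chosen instance.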
What are the four desired characteristics of LIME?
Interpretability: Understandable by humans.
Local Fidelity: Faithful to the model’s behavior locally.
Model Agnosticism: Works with any model.
Global Perspective: Explaining a representative set of instances gives insight into the model as a whole.