SHAP Paper Flashcards
What is SHAP (SHapley Additive exPlanations)?
SHAP is a unified framework for interpreting model predictions: it assigns each feature an importance value for a particular prediction, using Shapley values from cooperative game theory.
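The classical Shapley value that SHAP builds on averages a feature's marginal contribution over all subsets S of the full feature set F, as given in the paper:

```latex
\phi_i = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,(|F|-|S|-1)!}{|F|!}
         \left[ f_{S \cup \{i\}}\big(x_{S \cup \{i\}}\big) - f_S(x_S) \right]
```

Here f_S denotes the model evaluated using only the features in S.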
What are additive feature attribution methods?
These are methods whose explanation model is a linear function of binary variables, so a prediction is explained as a sum of per-feature attributions; LIME, DeepLIFT, and SHAP itself all belong to this class.
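Concretely, the explanation model g is linear over simplified binary inputs z' that indicate feature presence:

```latex
g(z') = \phi_0 + \sum_{i=1}^{M} \phi_i z'_i, \qquad z' \in \{0,1\}^M
```

where M is the number of simplified features, φ₀ is the base value, and φᵢ is the attribution of feature i.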
What is the significance of SHAP’s unique solution?
Within the class of additive feature attribution methods, SHAP values are the unique solution that satisfies three desirable properties: local accuracy, missingness, and consistency.
What is local accuracy in SHAP?
Local accuracy requires that the explanation model exactly matches the original model’s output for the instance being explained.
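Formally, when the simplified input x' corresponds to the original input x, the base value plus the attributions sum to the model output:

```latex
f(x) = g(x') = \phi_0 + \sum_{i=1}^{M} \phi_i x'_i
```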
What is the missingness property in SHAP?
Missingness requires that features absent from the simplified input receive an attribution of zero.
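In the paper’s notation, missingness is simply:

```latex
x'_i = 0 \;\Longrightarrow\; \phi_i = 0
```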
What is the consistency property in SHAP?
Consistency requires that if a model changes so that a feature’s marginal contribution increases or stays the same (regardless of the other features), that feature’s attribution does not decrease.
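Formally, writing f_x(z') = f(h_x(z')) and letting z' \ i denote z' with z'_i set to zero, consistency states that for any two models f and f':

```latex
f'_x(z') - f'_x(z' \setminus i) \;\ge\; f_x(z') - f_x(z' \setminus i)
\;\;\text{for all } z' \in \{0,1\}^M
\;\Longrightarrow\; \phi_i(f', x) \ge \phi_i(f, x)
```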
How does SHAP unify other methods like LIME and DeepLIFT?
The paper shows that LIME, DeepLIFT, and related methods all take the additive feature attribution form; SHAP unifies them by identifying the Shapley values as the only attributions in that form that satisfy local accuracy, missingness, and consistency.
What is Kernel SHAP?
Kernel SHAP is a model-agnostic method that estimates SHAP values with a weighted linear regression over sampled feature coalitions, using the Shapley kernel for the weights; this is more sample-efficient than classic Shapley value sampling.
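A minimal usage sketch with the shap Python package (the model, data, and sample sizes below are illustrative choices, not prescribed by the paper):

```python
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Illustrative model and data; Kernel SHAP only needs a prediction
# function and a background dataset used to simulate missing features.
X, y = make_regression(n_samples=200, n_features=5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.KernelExplainer(model.predict, shap.sample(X, 50))
shap_values = explainer.shap_values(X[:1])  # attributions for one instance
```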
How does SHAP handle complex models like neural networks?
Deep SHAP approximates SHAP values for deep networks by adapting DeepLIFT’s backpropagation rules: it computes SHAP values for small components of the network and composes them layer by layer into values for the whole model.
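A minimal Deep SHAP sketch, assuming the shap package’s DeepExplainer and an illustrative PyTorch network (none of these names come from the paper itself):

```python
import torch
import torch.nn as nn
import shap

# Illustrative network; Deep SHAP propagates attributions layer by layer.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

background = torch.randn(100, 10)  # background sample approximates E[f(x)]
test_batch = torch.randn(5, 10)

explainer = shap.DeepExplainer(model, background)
shap_values = explainer.shap_values(test_batch)  # per-feature attributions
```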
What are the advantages of SHAP over LIME?
SHAP offers better fidelity to the original model and guarantees consistency, because its attributions are derived from a principled weighting over all feature coalitions rather than LIME’s heuristically chosen kernel and regularization.
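The technical core of the comparison: LIME picks its local weighting kernel heuristically, while Kernel SHAP uses the unique Shapley kernel that makes the weighted regression recover Shapley values:

```latex
\pi_{x'}(z') = \frac{M-1}{\binom{M}{|z'|}\,|z'|\,(M-|z'|)}
```

where |z'| is the number of non-zero entries in z'.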