Lecture 13 - Explainable ML: Introduction & Interpretable Models Flashcards
What are the basic requirements for interpretability in machine learning?
Intelligibility: Humanly understandable input features.
Transparency: Simple, easy-to-follow model structure.
Compact Features: Use a minimal set of predictive features.
Mnemonic: ITC stands for Intelligibility, Transparency, Compact features.
What are interpretable models?
Models inherently easy to understand, such as:
Linear models: Linear regression, logistic regression.
Decision trees: Hierarchical, rule-based structures.
Generalized Additive Models (GAMs): Allow nonlinear components while maintaining interpretability.
What is the difference between interpretable and black-box models?
Interpretable Models:
Transparent and self-explanatory.
High fidelity in explaining decisions.
Black-Box Models:
Complex, require post-hoc explanations.
Often lack transparency.
Define a linear model and its equation.
A model assuming a linear relationship between input and output.
Equation: g(E[y]) = w_0 + w_1 x_1 + … + w_d x_d, where g is a link function (identity for linear regression, logit for logistic regression).
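A minimal sketch, assuming scikit-learn and synthetic data, of fitting a linear regression and reading its weights as the explanation (feature names x1–x3 are placeholders):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data: y depends linearly on the first two features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=200)

model = LinearRegression().fit(X, y)
for name, w in zip(["x1", "x2", "x3"], model.coef_):
    # Each weight is the expected change in y per unit increase of that feature.
    print(f"{name}: weight {w:+.2f}")
print(f"intercept w_0 = {model.intercept_:+.2f}")
```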
What are Generalized Additive Models (GAMs)?
GAMs extend linear models by allowing nonlinear per-feature transformations while keeping the additive structure: g(E[y]) = w_0 + f_1(x_1) + f_2(x_2) + … + f_d(x_d), where each f_j is a shape function of a single feature.
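A minimal GAM-style sketch, assuming scikit-learn (>= 1.0) and synthetic data: each feature gets its own spline basis expansion, so the fitted linear layer yields an additive model w_0 + f_1(x_1) + f_2(x_2):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(300, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=300)

# SplineTransformer expands each feature separately, so the fitted model stays
# additive: each learned f_j depends on one feature only.
gam_like = make_pipeline(SplineTransformer(n_knots=8, degree=3), Ridge(alpha=1.0))
gam_like.fit(X, y)
print("R^2 on training data:", gam_like.score(X, y))
```

Dedicated GAM libraries (e.g. pygam) fit the shape functions with smoothness penalties; the pipeline above only approximates the idea.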
What are assumptions in linear modeling?
Linearity: Relationships are linear.
Normality: The target outcome given the features follows a normal distribution.
Homoscedasticity: Constant variance of errors.
Independence: Instances are independent.
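A minimal sketch, assuming scikit-learn and SciPy, of checking two of these assumptions on the residuals of a fitted linear model (synthetic data):

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
y = X @ np.array([1.5, -0.5]) + 0.2 * rng.normal(size=200)

model = LinearRegression().fit(X, y)
residuals = y - model.predict(X)

# Normality: Shapiro-Wilk test on residuals (large p -> no evidence against normality).
stat, p = stats.shapiro(residuals)
print(f"Shapiro-Wilk p-value: {p:.3f}")

# Homoscedasticity (crude check): residual spread should be similar across fitted values.
fitted = model.predict(X)
low = residuals[fitted < np.median(fitted)]
high = residuals[fitted >= np.median(fitted)]
print(f"residual std, low fitted: {low.std():.3f}; high fitted: {high.std():.3f}")
```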
What makes decision trees interpretable?
Visual Interpretation: The tree diagram.
Rule-Based Explanation: “IF-THEN” rules derived from root-to-leaf paths.
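A minimal sketch, assuming scikit-learn and its bundled iris dataset, of reading a shallow tree as IF-THEN rules:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# export_text prints each root-to-leaf path as nested threshold tests,
# which read directly as IF-THEN rules.
print(export_text(tree, feature_names=list(iris.feature_names)))
```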
What are challenges of decision tree interpretability?
Deep Trees: Harder to interpret with many nodes.
Continuous Features: May be split on repeatedly at different thresholds, increasing depth.
Solutions: Feature discretization, pruning.
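A minimal sketch of both mitigations, assuming scikit-learn: discretize continuous features before training, or prune the tree via a depth cap or cost-complexity pruning:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Discretization: binning continuous features limits repeated splits on the same feature.
discretized_tree = make_pipeline(
    KBinsDiscretizer(n_bins=4, encode="ordinal", strategy="quantile"),
    DecisionTreeClassifier(max_depth=3, random_state=0),
).fit(X, y)

# Pruning: cost-complexity pruning removes branches that add little impurity reduction.
pruned_tree = DecisionTreeClassifier(ccp_alpha=0.01, random_state=0).fit(X, y)
print("pruned tree depth:", pruned_tree.get_depth())
```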
How is feature importance measured in decision trees?
Using split metrics like information gain. The importance of a feature is proportional to how much it reduces uncertainty.
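A minimal sketch, assuming scikit-learn and the iris dataset: with criterion="entropy" the impurity reduction is information gain, and feature_importances_ reports each feature's normalized total reduction:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
tree = DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=0)
tree.fit(iris.data, iris.target)

# Importances sum to 1; larger means the feature reduced more uncertainty across its splits.
for name, imp in sorted(zip(iris.feature_names, tree.feature_importances_),
                        key=lambda pair: -pair[1]):
    print(f"{name}: {imp:.3f}")
```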
How is feature importance calculated in linear models?
By t-statistics: t_i = w_i / SE(w_i), where SE(w_i) is the standard error of the weight estimate; the larger |t_i|, the more important feature i.
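A minimal sketch, assuming the statsmodels library and synthetic data, of reading weight t-statistics from an ordinary least squares fit:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)

# add_constant appends the intercept column; OLS reports t = w_i / SE(w_i) per weight.
results = sm.OLS(y, sm.add_constant(X)).fit()
print(results.tvalues)   # larger |t| -> more important feature
print(results.pvalues)   # corresponding significance levels
```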
Why are GAMs preferred for interpretability?
Nonlinear relationships with individual features can be modeled.
Partial dependence plots enhance interpretability.
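A minimal sketch, assuming scikit-learn (>= 1.0) and matplotlib, of partial dependence plots for an additive spline model; for an additive model each plot is the learned shape function f_j up to a constant:

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.inspection import PartialDependenceDisplay
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer

rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=(300, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=300)

gam_like = make_pipeline(SplineTransformer(n_knots=8, degree=3), Ridge(alpha=1.0)).fit(X, y)

# One panel per feature: average prediction as that feature varies.
PartialDependenceDisplay.from_estimator(gam_like, X, features=[0, 1])
plt.show()
```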
What are the properties of explanation methods in XAI?
Expressive Power: How explanations are communicated (e.g., rules, graphs).
Translucency: Dependency on model internals.
Portability: Applicability to various ML models.
Algorithmic Complexity: Computational effort required.
What metrics are used to evaluate explanations?
Accuracy: How well the explanation predicts unseen data.
Fidelity: Matches predictions of the original model.
Certainty: Reflects model confidence.
Comprehensibility: Human interpretability of explanations.
What is fidelity in explainability?
Fidelity measures how closely the explanation matches the original model’s predictions.
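A minimal sketch, assuming scikit-learn, of measuring fidelity as the agreement between a black-box model and a simple surrogate trained to mimic its predictions:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

black_box = RandomForestClassifier(random_state=0).fit(X, y)
bb_predictions = black_box.predict(X)

# The surrogate is trained on the black box's outputs, not on the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, bb_predictions)

fidelity = accuracy_score(bb_predictions, surrogate.predict(X))
print(f"fidelity (agreement with the black box): {fidelity:.2%}")
```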
What are the characteristics of a good explanation?
Contrastive: Compares outcomes (e.g., why A and not B).
Selective: Focuses on the most important factors.
Truthful: Matches real-world evidence.
Consistent: Aligns with prior beliefs.