Lecture 13 - Explainable ML: Introduction & Interpretable Models Flashcards

1
Q

What are the basic requirements for interpretability in machine learning?

A

Intelligibility: Humanly understandable input features.

Transparency: Simple, easy-to-follow model structure.

Compact Features: Use a minimal set of predictive features.

Mnemonic: Interpretable models are Tidy and Clear (ITC).

2
Q

What are interpretable models?

A

Models inherently easy to understand, such as:

Linear models: Linear regression, logistic regression.

Decision trees: Hierarchical, rule-based structures.

Generalized Additive Models (GAMs): Allow nonlinear components while maintaining interpretability.

3
Q

What is the difference between interpretable and black-box models?

A

Interpretable Models:
Transparent and self-explanatory.
High fidelity in explaining decisions.

Black-Box Models:
Complex, require post-hoc explanations.
Often lack transparency.

4
Q

Define a linear model and its equation.

A

A model assuming a linear relationship between input and output.
Equation: E[g(y)] = w_0 + w_1 x_1 + … + w_d x_d.
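A minimal sketch with scikit-learn (synthetic data, identity link, so this reduces to ordinary linear regression; the feature values and coefficients are purely illustrative): the learned intercept w_0 and weights w_1, …, w_d can be read off directly.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data: two illustrative features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=100)

model = LinearRegression().fit(X, y)
# Each weight w_i is the change in E[y] per unit change of feature x_i.
print(model.intercept_, model.coef_)
```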

5
Q

What are Generalized Additive Models (GAMs)?

A

GAMs extend linear models by allowing nonlinear transformations of individual features: E[h(y)] = w_0 + f_1(x_1) + f_2(x_2) + … + f_d(x_d).
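A minimal sketch using the pyGAM library (assuming it is installed; the data is synthetic and the term choices are illustrative). Each spline term s(i) plays the role of one shape function f_i.

```python
import numpy as np
from pygam import LinearGAM, s  # pip install pygam

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=200)

# One smooth term f_i per feature, combined additively.
gam = LinearGAM(s(0) + s(1)).fit(X, y)
gam.summary()
```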

6
Q

What are assumptions in linear modeling?

A

Linearity: Relationships are linear.
Normality: The target, given the features, follows a normal distribution.
Homoscedasticity: Constant variance of errors.
Independence: Instances are independent.
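A small sketch of how these assumptions are commonly checked on residuals (synthetic data; the tests chosen here, Shapiro-Wilk for normality and Breusch-Pagan for homoscedasticity, are one illustrative option among several).

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 2.0 * X[:, 0] + X[:, 1] + rng.normal(size=200)

ols = sm.OLS(y, sm.add_constant(X)).fit()

# Normality of residuals (Shapiro-Wilk) and constant error variance (Breusch-Pagan).
print(stats.shapiro(ols.resid))
print(het_breuschpagan(ols.resid, ols.model.exog))
```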

7
Q

What makes decision trees interpretable?

A

Visual Interpretation: The tree diagram.

Rule-Based Explanation: “IF-THEN” rules derived from root-to-leaf paths.
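A minimal sketch: scikit-learn's export_text prints each root-to-leaf path of a fitted tree as an IF-THEN rule (Iris data and depth limit chosen purely for illustration).

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

# Each printed root-to-leaf path reads as an IF-THEN rule.
print(export_text(tree, feature_names=list(iris.feature_names)))
```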

8
Q

What are challenges of decision tree interpretability?

A

Deep Trees: Hard to interpret once they contain many nodes.
Continuous Features: Can be split on repeatedly, which increases tree depth.
Solutions: Feature discretization, pruning.
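A small sketch of both mitigations with scikit-learn (dataset and parameter values are illustrative): limiting depth and applying cost-complexity pruning, and binning continuous features before training.

```python
from sklearn.datasets import load_iris
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Pruning: limit depth and apply cost-complexity pruning (ccp_alpha).
pruned = DecisionTreeClassifier(max_depth=3, ccp_alpha=0.01, random_state=0).fit(X, y)

# Discretization: bin continuous features to reduce the number of
# distinct thresholds the tree can keep reusing.
discretized = make_pipeline(
    KBinsDiscretizer(n_bins=4, encode="ordinal", strategy="quantile"),
    DecisionTreeClassifier(max_depth=3, random_state=0),
).fit(X, y)
```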

9
Q

How is feature importance measured in decision trees?

A

Using split metrics like information gain. The importance of a feature is proportional to how much it reduces uncertainty.
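A minimal sketch: scikit-learn exposes exactly this impurity-reduction-based importance via feature_importances_ (Iris data used only for illustration).

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(iris.data, iris.target)

# Importance of a feature = total (normalized) reduction in impurity
# contributed by all splits that use that feature.
for name, imp in zip(iris.feature_names, tree.feature_importances_):
    print(f"{name}: {imp:.3f}")
```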

10
Q

How is feature importance calculated in linear models?

A

By t-statistics: t_i = w_i / SE(w_i), where SE(w_i) is the standard error of the weight w_i.
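A minimal sketch with statsmodels, which reports the weights, their standard errors, and the resulting t-values directly (the data here is synthetic and purely illustrative).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = 2.0 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(size=100)

ols = sm.OLS(y, sm.add_constant(X)).fit()
# t_i = w_i / SE(w_i); a larger |t_i| indicates a more important feature.
print(ols.params)   # weights w_i
print(ols.bse)      # standard errors SE(w_i)
print(ols.tvalues)  # t-statistics
```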

11
Q

Why are GAMs preferred for interpretability?

A

Nonlinear relationships with individual features can be modeled.

Partial dependence plots enhance interpretability.
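A minimal sketch of partial dependence curves, shown here with scikit-learn's model-agnostic PartialDependenceDisplay on an arbitrary fitted model (synthetic data, estimator choice illustrative); with a GAM library such as pyGAM the fitted shape functions f_i can be plotted directly instead.

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=300)

model = HistGradientBoostingRegressor(random_state=0).fit(X, y)

# One curve per feature, analogous to a GAM's shape functions f_i(x_i).
PartialDependenceDisplay.from_estimator(model, X, features=[0, 1])
plt.show()
```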

12
Q

What are the properties of explanation methods in XAI?

A

Expressive Power: How explanations are communicated (e.g., rules, graphs).

Translucency: Dependency on model internals.

Portability: Applicability to various ML models.

Algorithmic Complexity: Computational effort required.

13
Q

What metrics are used to evaluate explanations?

A

Accuracy: How well the explanation predicts unseen data.

Fidelity: How well the explanation matches the predictions of the original model.

Certainty: Whether the explanation reflects the model's confidence in its prediction.

Comprehensibility: How well humans can understand the explanation.

14
Q

What is fidelity in explainability?

A

Fidelity measures how closely the explanation matches the original model’s predictions.
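A small sketch of how fidelity is often quantified in practice: train an interpretable global surrogate on the black box's predictions and measure their agreement (the dataset and model choices here are illustrative).

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

black_box = RandomForestClassifier(random_state=0).fit(X, y)
bb_preds = black_box.predict(X)

# Global surrogate: an interpretable tree trained to mimic the black box.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, bb_preds)

# Fidelity = agreement between surrogate and black-box predictions.
fidelity = accuracy_score(bb_preds, surrogate.predict(X))
print(f"Fidelity: {fidelity:.3f}")
```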

15
Q

What are the characteristics of a good explanation?

A

Contrastive: Compares outcomes (e.g., why A and not B).

Selective: Focuses on the most important factors.

Truthful: Matches real-world evidence.

Consistent: Aligns with prior beliefs.

16
Q

Why is contrast important in explanations?

A

Contrastive explanations answer “Why this instead of that?” making them intuitive for users.

17
Q

What is the magical number 7 ± 2 in XAI?

A

Humans can process 5-9 cognitive entities at a time, so explanations should include only the most critical factors.

18
Q

Differentiate between global and local explanations.

A

Global Explanations: Provide a holistic view of model behavior.

Local Explanations: Focus on individual predictions.

19
Q

What are intrinsic and post-hoc explanation methods?

A

Intrinsic: Simplicity of the model itself provides interpretability (e.g., decision trees).

Post-hoc: Requires an additional method to explain a black-box model (e.g., SHAP).
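A minimal post-hoc sketch with the shap library (assuming it is installed; the dataset and model are chosen only for illustration). TreeExplainer computes per-feature Shapley-value attributions for a tree-based black box.

```python
import shap  # pip install shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Post-hoc explanation: one Shapley value per feature and per prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:50])
shap.summary_plot(shap_values, X.iloc[:50])
```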

20
Q

What are the results of interpretation methods?

A

Feature Summary Statistics: Feature importance.

Model Internals: Weights, decision rules.

Data Points: Counterfactuals or exemplar instances.

21
Q

What is simulatability in explainability?

A

Measures how well a human can predict model outcomes given its explanation.

22
Q

How is plausibility evaluated in explanations?

A

Requires user studies to determine if humans find the explanation convincing and understandable.

23
Q

What is the role of generality in a good explanation?

A

Explanations should apply to a wide range of instances and not just specific cases.