Lecture 14 - Explainable ML: Model Agnostic Methods Flashcards

1
Q

What does “model agnostic” mean in explainability methods?

A

Model agnostic methods can explain any type of machine learning model, regardless of its internal structure. They focus on understanding the relationships between input features and outputs.

2
Q

Why are model agnostic methods important in explainable ML?

A

They are versatile, applicable to any model type, and provide insights without requiring access to the model’s internal mechanics, making them useful for black-box models.

3
Q

What is a Partial Dependence Plot?

A

A graph that shows the relationship between one or more features and the predicted outcome, averaging out the effects of other features.
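
As a concrete illustration (not from the lecture), a 1-D partial dependence curve can be computed by hand: sweep one feature over a grid of values, force every row to that value, and average the model's predictions. `model` and `X` below are placeholders for any fitted regressor with a `.predict()` method and its NumPy feature matrix; recent scikit-learn versions offer the same plot ready-made via `sklearn.inspection.PartialDependenceDisplay`.

```python
import numpy as np

def partial_dependence_1d(model, X, feature_idx, grid_points=20):
    """Average prediction as feature `feature_idx` sweeps over a value grid."""
    grid = np.linspace(X[:, feature_idx].min(), X[:, feature_idx].max(), grid_points)
    pd_values = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature_idx] = value                   # fix the feature for every row
        pd_values.append(model.predict(X_mod).mean())   # average over the other features
    return grid, np.array(pd_values)
```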

4
Q

What are the advantages and limitations of PDP?

A

Pros:
Gives a causal interpretation of a feature's effect on the prediction (causal for the model, not necessarily for the real world).
Easy to interpret, and works for both continuous and categorical features.

Cons:
Assumes feature independence; when features are correlated, the averaging uses unrealistic data points.
Shows only the average effect, so it hides heterogeneous effects and feature interactions.

Mnemonic: “PDP = Plot Data Points” (a memory aid for the fact that it plots predictions averaged over the other features; the acronym itself stands for Partial Dependence Plot).

5
Q

How does Permutation Feature Importance work?

A

It randomly shuffles one feature's values, breaking its relationship with the target, and measures how much model performance drops; a large drop means the model relied heavily on that feature.
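
A minimal sketch of the idea, assuming a fitted regressor `model` with `.predict()`, NumPy arrays `X` and `y`, and R² as the performance metric (all placeholders; scikit-learn also ships `sklearn.inspection.permutation_importance`):

```python
import numpy as np
from sklearn.metrics import r2_score

def permutation_importance_manual(model, X, y, n_repeats=5, random_state=0):
    rng = np.random.default_rng(random_state)
    baseline = r2_score(y, model.predict(X))        # performance on intact data
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])   # break the feature-target link
            drops.append(baseline - r2_score(y, model.predict(X_perm)))
        importances[j] = np.mean(drops)             # average performance drop = importance
    return importances
```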

6
Q

What are global surrogate models?

A

Interpretable models (e.g., linear regression or a shallow decision tree) trained to approximate the predictions of a black-box model across the whole input space.
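
A minimal sketch of fitting a global surrogate, assuming a fitted black-box model `black_box` and its training features `X` (placeholder names). The key point is that the surrogate is trained on the black box's predictions, not on the true labels, and its fidelity to the black box should be checked before trusting its explanation.

```python
from sklearn.metrics import r2_score
from sklearn.tree import DecisionTreeRegressor

def fit_global_surrogate(black_box, X, max_depth=3):
    y_hat = black_box.predict(X)                        # targets = black-box outputs, not true labels
    surrogate = DecisionTreeRegressor(max_depth=max_depth).fit(X, y_hat)
    fidelity = r2_score(y_hat, surrogate.predict(X))    # how faithfully it mimics the black box
    return surrogate, fidelity
```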

7
Q

What distinguishes local surrogate models?

A

They explain model predictions for specific instances, typically by fitting a simple model (e.g., weighted linear regression) to the black-box model's predictions on perturbed samples around the instance of interest.
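
A hand-rolled sketch of a local surrogate, assuming a fitted model `black_box` with `.predict()` and a single instance `x0` as a 1-D NumPy array (placeholder names): perturb around the instance, weight the samples by proximity, and fit a weighted linear model whose coefficients give the local feature effects.

```python
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(black_box, x0, n_samples=500, scale=0.1, random_state=0):
    rng = np.random.default_rng(random_state)
    Z = x0 + rng.normal(0.0, scale, size=(n_samples, x0.shape[0]))   # perturb around x0
    preds = black_box.predict(Z)                     # black-box answers on the perturbations
    dist = np.linalg.norm(Z - x0, axis=1)
    weights = np.exp(-(dist ** 2) / (2 * scale ** 2))   # nearer samples count more
    local_model = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
    return local_model.coef_                         # local feature effects around x0
```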

8
Q

What is LIME?

A

LIME (Local Interpretable Model-agnostic Explanations) explains individual predictions by fitting an interpretable local surrogate model around the instance being explained.
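
For tabular data, the `lime` Python package implements this directly. A sketch, assuming a fitted classifier `clf` with `.predict_proba`, training features `X_train`, the corresponding `feature_names`, and one instance `x0` (all placeholders):

```python
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    X_train,                          # used to learn feature statistics for perturbation
    feature_names=feature_names,
    mode="classification",
)
explanation = explainer.explain_instance(x0, clf.predict_proba, num_features=5)
print(explanation.as_list())          # top local feature contributions for this prediction
```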

9
Q

What are the four desired characteristics of LIME?

A

Interpretability: Understandable by humans.

Local Fidelity: Faithful to the model’s behavior locally.

Model-Agnosticity: Works with any model.

Global Insight: Explanations for a representative set of instances can be combined to give a picture of the model's overall behavior.
