Week 3 Flashcards

1
Q

What is XAI (Explainable AI)?

A

XAI focuses on making AI models transparent and understandable, aiding debugging, improvement, and building trust.

2
Q

What are the two main approaches to achieving model understanding?

A

1. Build inherently explainable models (e.g., decision trees, linear regression).
2. Explain pre-built models in a post-hoc manner (e.g., LIME, SHAP).

3
Q

What are intrinsic methods in XAI?

A

Intrinsic methods are explanations built into the model itself, such as decision trees or linear regression models.
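
To make this concrete, the sketch below (assuming scikit-learn; any inherently interpretable model would do) shows a decision tree acting as its own explanation:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(iris.data, iris.target)

# The fitted tree is directly readable as if/else decision rules,
# so no separate post-hoc explanation step is needed.
print(export_text(tree, feature_names=iris.feature_names))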

4
Q

What are post-hoc methods in XAI?

A

Post-hoc methods are explanations applied after the model is built, like LIME or SHAP, and can be model-agnostic.

5
Q

What is the difference between model-specific and model-agnostic methods?

A

Model-specific methods are tailored to particular model types, while model-agnostic methods can be applied to any model.

6
Q

What are some examples of model-specific techniques?

A

Techniques include ANOVA for statistical analysis, variable importance in random forests, and partial dependence plots.
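
As a rough illustration (a scikit-learn sketch, not the course's exact workflow), two of these can be read straight off a fitted random forest:

from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Impurity-based variable importance is built into the forest itself.
print(model.feature_importances_)

# Partial dependence: the model's average prediction as feature 2 varies.
PartialDependenceDisplay.from_estimator(model, X, features=[2])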

7
Q

How does LIME explain model predictions?

A

LIME creates a simple, interpretable model around the specific instance being explained by perturbing feature values and fitting a local surrogate model.
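
A from-scratch sketch of that procedure (simplified: real LIME also discretizes features and selects only a few of them):

import numpy as np
from sklearn.linear_model import Ridge

def lime_sketch(model_predict, x, n_samples=1000, kernel_width=0.75):
    rng = np.random.default_rng(0)
    # 1. Perturb feature values in a neighborhood around the instance x.
    X_pert = x + rng.normal(0.0, 1.0, size=(n_samples, x.shape[0]))
    # 2. Query the black-box model on the perturbed samples.
    y_pert = model_predict(X_pert)
    # 3. Weight samples by proximity to x (exponential kernel).
    dists = np.linalg.norm(X_pert - x, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)
    # 4. Fit a simple, interpretable surrogate on the weighted samples.
    surrogate = Ridge(alpha=1.0).fit(X_pert, y_pert, sample_weight=weights)
    return surrogate.coef_  # local, per-feature importance around x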

8
Q

What is the main goal of LIME’s objective function?

A

To create a surrogate model that is both faithful to the original complex model and simple enough to be interpretable in the local neighborhood of the instance.
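
In the notation of the original LIME paper (Ribeiro et al., 2016), that trade-off is:

\xi(x) = \arg\min_{g \in G} \mathcal{L}(f, g, \pi_x) + \Omega(g)

where f is the complex model, g is a surrogate from an interpretable family G, \pi_x weights the neighborhood around the instance x, \mathcal{L} measures how unfaithful g is to f in that neighborhood, and \Omega(g) penalizes the surrogate's complexity.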

9
Q

What is SHAP and how does it work?

A

SHAP (SHapley Additive exPlanations) attributes a model's prediction to its features using Shapley values from cooperative game theory, giving each feature's importance and direction of influence.
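
A minimal sketch (assuming the shap package is installed) of attributing random-forest predictions to features:

import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])  # shape: (5, n_features)
# Each entry is one feature's signed contribution to one prediction:
# positive values push the prediction up, negative values pull it down,
# and the contributions sum to (prediction - expected value).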

10
Q

What are local vs. global explanations?

A

Local explanations clarify why the model made a particular prediction for a single instance, while global explanations describe the model's behavior as a whole.
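
A tiny sketch of the distinction, assuming shap_values is an (n_samples, n_features) attribution matrix like the one produced above:

import numpy as np

def local_explanation(shap_values, i):
    # Why did THIS instance get its prediction? One row of attributions.
    return shap_values[i]

def global_importance(shap_values):
    # How does the model behave OVERALL? Average attribution magnitude.
    return np.abs(shap_values).mean(axis=0)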

11
Q

Why is it important to evaluate explanations in XAI?

A

To ensure explanations accurately reflect model reasoning, meet stakeholder needs, and identify potential biases.

12
Q

What are key evaluation criteria for XAI explanations?

A

Fidelity, comprehensibility, sufficiency, and trustworthiness are critical criteria for evaluating explanations.

13
Q

What are the levels of evaluating explanations?

A

Evaluation can occur at the application level (real-world testing), human level (simplified tasks for laypersons), or function level (proxy tasks without humans).

14
Q

What properties make explanations effective?

A

Expressive power, translucency, portability, and algorithmic complexity are properties that affect explanation quality.

15
Q

What characteristics define good human-friendly explanations?

A

Good explanations are contrastive, causal, counterfactual, and tailored to the audience’s context.

16
Q

What is the importance of benchmarks in XAI?

A

Benchmarks enable systematic comparison of explanation methods and provide standardized datasets and evaluation metrics.

17
Q

What are some challenges in XAI evaluation?

A

Challenges include difficulty in assessing explanation quality and lack of principled guidelines for practitioners.

18
Q

What are the properties of individual explanations in XAI?

A

The properties include accuracy, fidelity, consistency, and stability, which ensure that explanations are reliable and useful.

19
Q

What does fidelity mean in the context of XAI explanations?

A

Fidelity refers to how closely the explanation approximates the model’s predictions, ensuring that the explanation accurately reflects the model’s behavior.
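
One common way to operationalize this (a sketch, not a standard API) is agreement between a surrogate's predictions and the model's:

import numpy as np

def fidelity(model_predict, surrogate_predict, X):
    # Fraction of instances on which the surrogate reproduces the
    # black-box model's predicted label; 1.0 means perfect fidelity.
    return np.mean(model_predict(X) == surrogate_predict(X))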

20
Q

What is the significance of consistency in individual explanations?

A

Consistency means that different models trained on the same task give similar explanations for the same input, promoting reliability across models.

21
Q

Why is stability important in individual explanations?

A

Stability means that similar instances should receive similar explanations, enhancing trust and reliability in model predictions.
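
A sketch of a simple stability check (a hypothetical helper, not a library function): perturb the instance slightly and compare explanations:

import numpy as np

def stability_gap(explain, x, eps=1e-2, trials=10, seed=0):
    rng = np.random.default_rng(seed)
    base = explain(x)
    # Explanations of near-identical inputs should be near-identical too.
    gaps = [np.linalg.norm(base - explain(x + eps * rng.normal(size=x.shape)))
            for _ in range(trials)]
    return max(gaps)  # a small worst-case gap suggests a stable explainer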

22
Q

What does comprehensibility mean in evaluating XAI explanations?

A

Comprehensibility measures how easy it is for humans to understand the explanation, taking into account the target audience’s knowledge level.

23
Q

What is the purpose of evaluating explanations at the application level?

A

Evaluating at the application level involves real-world testing by end users to ensure that explanations are practical, useful, and understandable in actual settings.

24
Q

How are explanations evaluated at the human level?

A

At the human level, explanations are tested with simplified tasks for laypersons to assess their understandability and usability without requiring deep technical knowledge.

25
Q

What is the function level of explanation evaluation?

A

The function level uses proxy tasks, such as measuring an explanation's fidelity to the model's predictions on a test dataset, without involving human subjects.

26
Q

What is expressive power in XAI explanations?

A

Expressive power refers to the ability of an explanation to succinctly capture the structure of the model’s behavior, often through decision rules or visual representations.

27
Q

What does translucency mean in the context of explanations?

A

Translucency describes the reliance of an explanation on the internal workings of the model, such as using feature weights in a linear model to clarify decision-making.

28
Q

How does portability apply to XAI explanations?

A

Portability is the ability of an explanation technique to be applied across different model types, such as SHAP values being used with both tree-based models and neural networks.

29
Q

What is algorithmic complexity in XAI evaluations?

A

Algorithmic complexity addresses the computational demands of generating explanations, which can impact the feasibility of using certain methods with large datasets or complex models.

30
Q

Why is it important to assess the trustworthiness of explanations?

A

Trustworthiness ensures that explanations are reliable, unbiased, and accurately reflect the model’s decision-making, fostering confidence in the model’s predictions.

31
Q

How does SHAP handle feature interaction?

A

SHAP captures interaction effects between features by attributing contributions to the prediction based on combinations of feature values, providing a more nuanced interpretation.
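
For tree models, the shap package exposes this directly via SHAP interaction values (a sketch, assuming shap is installed):

import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# inter[i, j, k] is the joint contribution of features j and k to
# instance i's prediction; the diagonal holds each feature's main effect.
inter = shap.TreeExplainer(model).shap_interaction_values(X[:5])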

32
Q

What are contrastive explanations?

A

Contrastive explanations highlight the differences between outcomes, explaining why one scenario occurs over another by focusing on the differing features.

33
Q

What are counterfactual explanations?

A

Counterfactual explanations provide insights into what changes would be necessary to alter the outcome, helping to understand the conditions under which a different result would occur.
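
A toy sketch of the idea (real counterfactual methods optimize for closeness and plausibility rather than searching randomly):

import numpy as np

def counterfactual_sketch(predict, x, step=0.05, max_iters=500, seed=0):
    rng = np.random.default_rng(seed)
    original = predict(x.reshape(1, -1))[0]
    cf = x.copy()
    for _ in range(max_iters):
        j = rng.integers(len(cf))                # pick a feature
        cf[j] += step * rng.choice([-1.0, 1.0])  # nudge it slightly
        if predict(cf.reshape(1, -1))[0] != original:
            # The features that changed answer "what would have to
            # differ for a different outcome?"
            return cf
    return None  # no counterfactual found within the search budget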

34
Q

What does it mean for an explanation to be causal?

A

Causal explanations focus on the key causes that directly impact the prediction, highlighting the most influential features driving the model’s decision.